Sunday, January 11, 2026

[Figure: Autonomy levels for AI-assisted MBSE, inspired by autonomy levels defined in other contexts, such as SAE J3016 for cars. From Zhang, "MBSE Co-Pilot: A Research Roadmap," Systems Engineering, Wiley Online Library.]
AI Co-Pilots Could Democratize Complex Engineering Tools

BLUF (Bottom Line Up Front): The steep learning curve and high cost of Model-Based Systems Engineering (MBSE) software have created a significant barrier to adoption, limiting the technology primarily to large aerospace and defense contractors. AI co-pilots that can interpret natural language commands and automate complex modeling tasks may finally make MBSE accessible to smaller organizations, potentially transforming the competitive landscape of systems engineering.


The promise of Model-Based Systems Engineering has been clear for over two decades: replace error-prone document shuffling with integrated digital models that serve as a single source of truth. Yet despite this compelling vision, MBSE adoption has remained frustratingly slow, concentrated primarily among large corporations with deep pockets and dedicated training budgets. The culprit? Enterprise MBSE tools that cost hundreds of thousands of dollars and require months—even years—of training to use effectively.

Now, AI co-pilots may finally break this logjam, potentially democratizing access to sophisticated engineering capabilities that have been the exclusive domain of aerospace giants and defense primes.

The Cost-Complexity Barrier

"The steep learning curve associated with learning fast-evolving modeling languages and specialized modeling software makes it difficult to adopt MBSE at the organizational level," notes Wenheng Zhang in a comprehensive research roadmap published in INCOSE Insight. His team's research, supported by Innovate UK, identifies organizational barriers as "among the most predominant" challenges facing MBSE adoption.

The numbers tell a stark story. Enterprise licenses for leading MBSE platforms (tools like IBM Rhapsody, Dassault Systèmes' CATIA Magic and Cameo Systems Modeler, both formerly No Magic products, and PTC's Windchill Modeler) can run from $5,000 to over $15,000 per seat annually. For a mid-sized engineering firm with 50 systems engineers, the first year of software licensing alone could exceed $750,000, before accounting for training costs, infrastructure, and the productivity loss during the transition period.

But the financial barrier pales compared to the learning curve. Systems Modeling Language (SysML), a profile of UML (Unified Modeling Language) adapted for systems engineering, requires engineers to master not just a graphical notation but entire methodologies for decomposing complex systems into interconnected model elements. Papke and colleagues found that "the adoption of MBSE is more than just training and tooling, but requires an enterprise-level approach with proper enterprise architecture."

The result: MBSE has remained largely confined to organizations that can afford both the tools and the multi-year organizational transformation required to use them effectively—primarily major defense contractors like Raytheon, Lockheed Martin, Northrop Grumman, and Boeing, along with aerospace manufacturers like Airbus and automotive giants pursuing electrification and autonomy.

The Small Firm Disadvantage

For smaller engineering consultancies, startups developing complex hardware, or mid-sized manufacturers, the MBSE barrier has created a significant competitive disadvantage. These organizations often possess deep domain expertise but lack the resources for expensive toolchains and extended training programs.

"Organizations mistakenly treat MBSE as a 'one size fits all' solution, overloading models with excessive detail, which increases complexity rather than reducing it," Zhang's research team found. This problem intensifies for smaller firms that cannot afford dedicated MBSE specialists to guide appropriate abstraction levels and modeling practices.

The challenge extends beyond initial adoption. Fast-evolving modeling languages mean that expensive training investments can depreciate rapidly as standards change. The Object Management Group (OMG), which maintains SysML, continues to release new versions with enhanced capabilities—but each evolution requires additional learning and tool updates.

How AI Co-Pilots Change the Game

AI co-pilots could fundamentally alter this equation by shifting the interface from specialized modeling languages to natural language. Instead of mastering SysML's complex notation system, engineers could describe what they want in plain English—or any other language—and let the AI handle the technical implementation.

"Drawing on insights from the use of AI in education, an AI capable of understanding modeling languages could greatly simplify the process of learning and using these tools," Zhang argues. "This could not only accelerate the learning process but also shift the focus away from the technical complexities of the language itself."

The implications are profound. Consider a mechanical engineer at a small robotics startup who needs to create a block definition diagram showing the hierarchical decomposition of a robotic arm subsystem. Currently, this requires:

  1. Understanding SysML block diagram syntax and semantics
  2. Navigating complex MBSE software interfaces
  3. Manually creating and connecting model elements
  4. Ensuring consistency with other model views
  5. Maintaining traceability to requirements

With an AI co-pilot, the same engineer might simply describe: "Create a block diagram for our robotic arm showing the shoulder assembly, elbow assembly, and wrist assembly, with motors, encoders, and structural components for each." The AI would generate the properly formatted SysML model, suggest appropriate relationships, and automatically establish traceability links.
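To make this concrete, here is a minimal sketch in Python of the kind of intermediate structure such a co-pilot might propose from that prompt before anything is committed to the modeling tool. The Block class, the robotic-arm decomposition, and the text preview are illustrative assumptions, not the output of any particular product.

    # Hypothetical sketch: what an AI co-pilot might emit from the prompt above,
    # expressed as plain data before it is committed to a SysML tool.
    from dataclasses import dataclass, field

    @dataclass
    class Block:
        name: str
        parts: list["Block"] = field(default_factory=list)

        def add_part(self, name: str) -> "Block":
            child = Block(name)
            self.parts.append(child)
            return child

    def build_robotic_arm() -> Block:
        """Hierarchical decomposition the co-pilot might propose for review."""
        arm = Block("Robotic Arm")
        for assembly in ("Shoulder Assembly", "Elbow Assembly", "Wrist Assembly"):
            asm = arm.add_part(assembly)
            for component in ("Motor", "Encoder", "Structural Frame"):
                asm.add_part(f"{assembly} {component}")
        return arm

    def print_bdd(block: Block, indent: int = 0) -> None:
        """Text preview of the block definition diagram for human sign-off."""
        print(" " * indent + block.name)
        for part in block.parts:
            print_bdd(part, indent + 2)

    if __name__ == "__main__":
        print_bdd(build_robotic_arm())

An engineer could review this plain-data preview, correct the decomposition, and only then let the tool translate it into formal SysML elements.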

Evidence from Early Implementations

Recent practical demonstrations support this optimistic vision. Tony Sukhwani's 2025 work integrating large language models into Cameo Systems Modeler shows that even current AI technology can meaningfully reduce MBSE complexity. His implementation uses GPT-4o for model creation and modification tasks, allowing engineers to work through natural language prompts rather than mastering every aspect of the modeling environment.

The system employs a human-in-the-loop approach where complex modeling tasks are "segmented into a set of smaller tasks, each producing a homogeneous type of model elements," Zhang explains. This allows engineers to verify outputs at each step without needing deep expertise in SysML minutiae.
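A minimal sketch of that pattern, in Python with stubbed components, might look like the following. The subtask list, the generate_elements stub, and the console approval gate are assumptions for illustration; they are not the actual Cameo plugin or GPT-4o interface.

    # Minimal sketch of the human-in-the-loop pattern described above: one large
    # modeling task is split into smaller subtasks that each produce a single,
    # homogeneous kind of model element, and nothing is committed without review.
    # generate_elements is a stand-in for an LLM call, not any product's API.

    SUBTASKS = [
        "create blocks for each subsystem",
        "create part associations between blocks",
        "allocate requirements to blocks",
    ]

    def generate_elements(subtask: str) -> list[str]:
        """Placeholder for an LLM-backed generator returning proposed elements."""
        return [f"<proposed element for: {subtask}>"]

    def run_with_review(subtasks: list[str]) -> list[str]:
        committed: list[str] = []
        for subtask in subtasks:
            proposal = generate_elements(subtask)
            print(f"Subtask: {subtask}")
            for element in proposal:
                print(f"  proposed: {element}")
            if input("Commit these elements? [y/N] ").strip().lower() == "y":
                committed.extend(proposal)   # only human-approved output persists
        return committed

    if __name__ == "__main__":
        run_with_review(SUBTASKS)

The important property is that nothing enters the model without an explicit approval step, which keeps verification effort proportional to each small, homogeneous batch of elements.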

More importantly, this approach enables experimentation with MBSE methodologies that would be prohibitively expensive under traditional adoption models. "Assuming AI achieves reliable execution of MBSE tasks, organizations will have the leisure to experiment with different MBSE methodologies without committing significant resources and time," Zhang notes. "This would enable them to derive a fit-for-purpose approach."

The Standardization Dividend

Another critical advantage for smaller organizations: AI co-pilots could provide standardization across the fragmented MBSE landscape. Currently, different tool vendors implement SysML with subtle variations, and different organizations develop incompatible modeling conventions. As Vaneman and Carlson pointed out, "this lack of standardization can prevent models from being interoperable, leading to fragmentation and inefficiency."

An AI co-pilot trained on best practices and industry standards could automatically enforce consistency, helping small firms produce models that meet the same quality standards as those from larger competitors—without requiring years of accumulated organizational knowledge.
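As a rough illustration of what such enforcement could look like, the Python sketch below runs a lint-style pass over a couple of hypothetical model elements and flags simple convention violations. The rules, element records, and field names are invented for this example; a real organizational style guide would be far richer.

    # Minimal sketch of automated convention enforcement: a co-pilot-style lint
    # pass that flags model elements violating simple naming and traceability
    # rules. All rules and records here are illustrative assumptions.
    import re

    ELEMENTS = [
        {"name": "Radar Processor", "kind": "block", "traces_to": ["SYS-3.2.1"]},
        {"name": "pwr_dist", "kind": "block", "traces_to": []},   # two violations
    ]

    def lint(elements: list[dict]) -> list[str]:
        findings = []
        for e in elements:
            if not re.fullmatch(r"[A-Z][A-Za-z0-9 ]*", e["name"]):
                findings.append(f"{e['name']}: block names should be capitalized words")
            if e["kind"] == "block" and not e["traces_to"]:
                findings.append(f"{e['name']}: block has no requirement trace")
        return findings

    if __name__ == "__main__":
        for finding in lint(ELEMENTS):
            print("WARN:", finding)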

"Usage of AI is also anticipated to offer standardization of practices across organizations, addressing the issue of fragmented frameworks and enhancing model interoperability, ultimately streamlining development processes," Zhang's roadmap emphasizes.

Remaining Challenges

The vision isn't without caveats. Zhang's research identifies three critical challenges that must be addressed before AI co-pilots can fully democratize MBSE:

Data Readiness: MBSE integrates diverse data sources with unique formats and terminologies. AI systems must accurately classify and structure this heterogeneous information—a particular challenge for smaller organizations with less formalized data management practices.

Dynamic Traceability: Proper traceability requires clear rationale behind AI recommendations, not just automated link creation. Small firms relying on AI without deep MBSE expertise need especially transparent decision-making to maintain model integrity.

Trustworthiness: Current large language models can produce "hallucinations"—fabricated outputs that appear plausible but are factually incorrect. For safety-critical applications common in aerospace and medical devices (areas where many smaller specialized firms operate), this poses unacceptable risks.

Zhang's roadmap addresses these through phased development, but full realization of a Level 3 "trusted co-pilot" remains a work in progress. The research team emphasizes four key trust measures: accuracy, consistency, reliability, and explainability—all essential for smaller organizations that cannot afford extensive verification resources.

The Competitive Landscape Shift

If AI co-pilots successfully reduce MBSE barriers, the competitive implications could be significant. Industries that have seen consolidation partly driven by the need for sophisticated engineering infrastructure—aerospace, defense, automotive—might experience renewed entrepreneurial activity.

Smaller firms could bid on complex systems integration projects previously dominated by large primes. Startups developing autonomous systems, advanced robotics, or space hardware could adopt the same model-based practices as established players, potentially accelerating innovation cycles.

The shift could also affect geographic distribution of engineering capability. Currently, MBSE expertise concentrates in regions with major aerospace and defense employers—Southern California, Seattle, the Washington D.C. corridor, certain European aerospace hubs. AI co-pilots could enable distributed teams anywhere to work at comparable sophistication levels.

Investment and Development Timeline

Zhang's research, supported by Innovate UK Grant 1334-AKT, proposes a realistic development timeline that acknowledges both technical challenges and market realities. Rather than fixed deadlines, "each phase advances based on the maturity of the corresponding research and the readiness of technologies for integration into existing MBSE workflows."

Early commercial implementations are already emerging. Major MBSE tool vendors are experimenting with AI integration, though most current offerings remain at what Zhang categorizes as Level 1 (task automation) or Level 2 (intelligent feedback) rather than Level 3 (trusted collaboration).

The roadmap suggests that meaningful democratization effects could begin appearing within 3-5 years as AI capabilities mature and cloud-based MBSE platforms reduce infrastructure costs. Full realization of Level 3 co-pilots across all capability groups (Model Development, Model Management, and Model Comprehension) likely requires a longer horizon—potentially 7-10 years.

A Transformed Engineering Landscape

The ultimate promise extends beyond cost reduction. By removing the learning curve barrier, AI co-pilots could fundamentally change who participates in complex systems engineering.

"Developing such a Co-Pilot is not meant to replace engineers but to work alongside them, helping them focus on more valuable tasks," Zhang emphasizes. For smaller organizations, those "more valuable tasks" might be applying deep domain expertise—in specialized sensors, novel materials, advanced algorithms—without being blocked by MBSE methodology mastery.

The research team envisions "a fully integrated, data-driven approach to engineering across the entire system lifecycle" becoming accessible not just to large corporations but to any organization tackling genuinely complex technical problems.

Whether this vision fully materializes depends on solving the trustworthiness challenges that have plagued AI applications in safety-critical domains. But the potential is clear: the same AI revolution that has democratized access to sophisticated writing, coding, and analysis tools could finally make industrial-strength systems engineering accessible beyond the Fortune 500.

For the mechanical engineer at that robotics startup, the future might mean focusing on innovative joint mechanisms and control algorithms—while the AI handles the SysML syntax. That shift could unlock a new era of innovation from organizations that combine deep technical expertise with entrepreneurial agility, previously locked out by tool complexity and cost.


Sources

  1. Zhang, W. (2025). "MBSE Co-Pilot: A Research Roadmap." Systems Engineering, Wiley Online Library. https://incose.onlinelibrary.wiley.com

  2. Papke, T., et al. (Referenced in Zhang, 2025). Enterprise-level MBSE adoption and architecture requirements.

  3. Vaneman, W.K., & Carlson, D. (Referenced in Zhang, 2025). Standardization challenges in MBSE frameworks and interoperability.

  4. Chami, M., et al. (Referenced in Zhang, 2025). Balancing detail and abstraction in MBSE model management.

  5. Sukhwani, T. (2025). "AI Plugin for Cameo." Presentation at MBSE Cyber Systems Symposium, January 2025.

  6. Konrad, S., et al. (Referenced in Zhang, 2025). Risk of clarity loss as MBSE models increase in detail.

  7. Madni, A.M., & Purohit, S. (Referenced in Zhang, 2025). Digital infrastructure investment requirements for MBSE transition.

  8. Amorim, T., et al. (Referenced in Zhang, 2025). Process maturity considerations in MBSE implementation.

  9. Object Management Group (OMG). Systems Modeling Language (SysML) specifications and standards development.

  10. Innovate UK. Grant Number 1334-AKT supporting MBSE Co-Pilot research.

Note: Pricing estimates for enterprise MBSE tools are based on industry knowledge and may vary significantly based on licensing agreements, deployment scale, and specific vendor offerings. Organizations should consult directly with vendors for accurate pricing information.

 

AI Co-Pilots Face the "Expert's Dilemma" in Systems Engineering

BLUF (Bottom Line Up Front): The steepest MBSE adoption barrier may not be inexperienced engineers but veteran systems engineers deeply invested in document-centric practices, who face career disruption learning new paradigms while simultaneously managing government customers requiring traditional Contract Data Requirements Lists (CDRLs). AI co-pilots could bridge this gap by automatically translating between model-based and document-based artifacts, but only if they solve the bidirectional synchronization problem that has plagued MBSE for two decades.


When the U.S. Department of Defense issues a contract for a new radar system or satellite, the deliverables list typically runs dozens of pages: System Specification documents, Interface Control Documents, Test Plans, Requirements Traceability Matrices—all in traditional formats specified in Contract Data Requirements Lists (CDRLs). Meanwhile, modern systems engineering theory insists these same contractors should abandon documents entirely in favor of integrated digital models.

The collision between these realities creates a particularly acute problem for the engineers caught in the middle: experienced systems professionals with 20-30 years of document-based practice, who now face learning complex modeling tools while their customers still demand Word documents and PDFs.

"The rapid growth of technology and the increasing complexity of engineered systems have introduced significant challenges in the field of systems engineering," notes Wenheng Zhang in his MBSE Co-Pilot research roadmap. But he identifies a critical human factor: "Traditionally, systems engineering relied heavily on document-centric approaches, which focus on managing system artifacts such as requirements, designs, and test cases."

For senior engineers who built entire careers mastering those document-centric approaches, the transition to Model-Based Systems Engineering isn't just a technical challenge—it's a professional identity crisis.

The Veteran Engineer's Dilemma

Consider a senior systems engineer at a defense contractor—call her Maria—who has led successful programs for two decades. She knows how to write a tight system specification, maintain a requirements database, conduct a critical design review, and deliver exactly what CDRLs demand. Her expertise commands premium billing rates and customer trust.

Now her organization mandates MBSE adoption. Maria must learn Systems Modeling Language (SysML), master expensive tools like Cameo or Rhapsody, and reconceptualize her entire workflow around block diagrams, activity diagrams, and parametric models. The "unlearning" proves harder than the learning.

"The steep learning curve associated with learning fast-evolving modeling languages and specialized modeling software makes it difficult to adopt MBSE at the organizational level," Zhang's research emphasizes. But the challenge intensifies for experienced engineers because they're not learning from scratch—they're rewiring decades of ingrained practice.

Research by Papke and colleagues found that "the adoption of MBSE is more than just training and tooling, but requires an enterprise-level approach with proper enterprise architecture." For veteran engineers, this enterprise transformation often feels like being told their expertise has become obsolete overnight.

The psychological barrier compounds the technical one. Younger engineers entering the field can learn MBSE as their baseline. Experienced engineers face what organizational psychologists call "competency disruption"—the uncomfortable realization that hard-won skills may no longer suffice.

The CDRL Problem: Government Customers Still Want Documents

Even organizations that successfully transition to MBSE internally hit a brick wall when interfacing with customers, particularly government agencies. The problem crystallizes in Contract Data Requirements Lists.

CDRLs specify exactly what contractors must deliver, in exactly what format, tied to specific contract milestones and payment schedules. A typical DoD contract might require:

  • DI-IPSC-81431: System/Subsystem Specification (traditional document format)
  • DI-TMSS-80527: Test Plan (Word/PDF with specific section structure)
  • DI-MGMT-80004: Interface Control Document (traditional formatted document)
  • DI-SESS-80255: Requirements Traceability Matrix (Excel spreadsheet or database export)

None of these CDRLs say "deliver a SysML model." They demand documents that look like documents, formatted according to DoD standards that predate MBSE by decades.

"A key limitation of document-centric approaches lies in their reliance on human cognition to establish and maintain the relationships between artifacts, making the process prone to errors and inefficiencies," Zhang notes. Yet government acquisition processes remain fundamentally document-driven, creating what industry insiders call the "MBSE-CDRL gap."

Contractors adopting MBSE face an uncomfortable reality: they must maintain models internally while still producing traditional document deliverables for customers. This often means:

  1. Building and maintaining a SysML model
  2. Extracting information from the model
  3. Reformatting it into traditional documents
  4. Manually ensuring consistency between model and documents
  5. Managing version control across both representations

Far from reducing workload, this dual-track approach initially increases it—creating resistance from veteran engineers who correctly observe they're now doing more work, not less.

The Synchronization Challenge

The MBSE community has long promised automated document generation from models. Tools like Cameo and Rhapsody offer document generation capabilities, and the Capella tool leverages M2Doc and the Acceleo Query Language (AQL) to automate model-to-document transformations, as Zhang's research notes.

But automation proves deceptively difficult. A model captures relationships and constraints that don't translate cleanly into linear document narratives. A single requirement in a model might trace to multiple design elements, test cases, and interface specifications—relationships easily navigated graphically but awkward to express in prose.

More problematic: customers often provide feedback on documents, not models. When a government program manager redlines a System Specification document with changes, someone must manually propagate those changes back into the model. As Li and colleagues discuss, maintaining consistency across interdisciplinary models within MBSE frameworks is especially challenging because "when a change is made in one part of the model, it must be accurately propagated across all related elements without introducing errors or inconsistencies."

This bidirectional synchronization—model to document and document to model—remains largely unsolved at industrial scale. The result: experienced engineers spend significant time as "translators" between representations, using neither the full power of models nor the simplicity of pure document-based workflows.

How AI Co-Pilots Could Bridge the Gap

Zhang's research roadmap specifically addresses these challenges through what he calls the "Model Management" capability group, with goals "to provide seamless information and change management for large-scale system models."

AI co-pilots could potentially solve both the learning curve and CDRL problems through several mechanisms:

Natural Language Interfaces: Instead of mastering SysML syntax, veteran engineers could describe what they need in familiar terms. "Create a requirements allocation view showing how system requirements 3.2.1 through 3.2.5 flow down to the radar processing subsystem"—language any experienced systems engineer understands.

The AI handles the SysML implementation details, leveraging what Zhang calls "Foundational Data Classification" where "the MBSE co-pilot employs rule-based methods, such as identifying subjects, verbs, or specific linguistic patterns, to classify and structure data according to predefined rules."
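A toy Python version of that rule-based idea is sketched below: a handful of invented linguistic patterns route each sentence to a model-element category. The patterns and categories are assumptions for illustration only.

    # Toy illustration of rule-based classification: simple linguistic patterns
    # route a sentence to a model-element category. A real co-pilot would use
    # far richer rules or learned models.
    import re

    RULES = [
        (re.compile(r"\bshall\b", re.IGNORECASE), "requirement"),
        (re.compile(r"\binterface(s)? with\b", re.IGNORECASE), "interface definition"),
        (re.compile(r"\bconsists of|comprises\b", re.IGNORECASE), "structural decomposition"),
        (re.compile(r"\bverify|test\b", re.IGNORECASE), "verification item"),
    ]

    def classify(sentence: str) -> str:
        for pattern, category in RULES:
            if pattern.search(sentence):
                return category
        return "unclassified"

    if __name__ == "__main__":
        samples = [
            "The radar processing subsystem shall detect targets at 200 km.",
            "The antenna assembly interfaces with the power distribution unit.",
            "The payload consists of an optical bench and a cryocooler.",
        ]
        for s in samples:
            print(f"{classify(s):26} <- {s}")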

Automated Document Generation: More ambitiously, AI co-pilots could generate CDRL-compliant documents from models, understanding not just what information to extract but how to structure narratives that satisfy specific Data Item Description (DID) requirements.

Zhang's roadmap emphasizes "Context-Aware Data Adaptation" where "the MBSE Co-pilot tailors data transformations to the specific context of the task." Generating a DI-IPSC-81431 compliant specification requires a different narrative structure than generating a DI-SESS-80255 traceability matrix, even though both draw from the same underlying model.
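The sketch below illustrates the idea in Python: the same small slice of model data is rendered once as a specification-style narrative and once as a traceability matrix. The model records, section numbering, and layouts are invented; actual DID templates are far more prescriptive.

    # Sketch of context-aware rendering: one model slice feeds two deliverable
    # formats with different structures. All data and layouts are illustrative.

    MODEL = [
        {"req": "SYS-3.2.1", "text": "Detect targets at 200 km.", "allocated_to": "Radar Processor", "verified_by": "TP-014"},
        {"req": "SYS-3.2.2", "text": "Operate from -40 C to +55 C.", "allocated_to": "Chassis", "verified_by": "TP-021"},
    ]

    def render_specification(model: list[dict]) -> str:
        """Narrative, specification-style rendering of the requirements."""
        lines = ["3.2 System Requirements", ""]
        for item in model:
            lines.append(f"{item['req']}. The system shall {item['text'][0].lower()}{item['text'][1:]}")
        return "\n".join(lines)

    def render_traceability_matrix(model: list[dict]) -> str:
        """Tabular, matrix-style rendering of the same data."""
        rows = ["Requirement | Allocated To | Verified By"]
        rows += [f"{i['req']} | {i['allocated_to']} | {i['verified_by']}" for i in model]
        return "\n".join(rows)

    if __name__ == "__main__":
        print(render_specification(MODEL))
        print()
        print(render_traceability_matrix(MODEL))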

Bidirectional Synchronization: Perhaps most critically, AI could manage the synchronization problem that has plagued MBSE adoption. When customers mark up documents with changes, AI co-pilots could interpret those changes and propagate them back into the model, maintaining consistency.

This capability relates to what Zhang calls "Impact Analysis and Change Propagation," where "the MBSE Co-pilot performs a comprehensive impact analysis based on the view, identifying how specific changes in any part of the model will propagate through other components."
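A toy Python sketch of that kind of impact analysis follows: starting from one changed element, a traversal of assumed traceability links lists everything downstream that may need review. The graph and element names are invented for illustration.

    # Toy impact analysis over traceability links: given one changed element,
    # walk the dependency graph to list everything that may need review.
    from collections import deque

    TRACE_LINKS = {   # element -> elements that depend on it
        "REQ-3.2.1": ["Radar Processor Block", "TP-014"],
        "Radar Processor Block": ["ICD-RF-01", "SSDD section 4.3"],
        "TP-014": [],
        "ICD-RF-01": [],
        "SSDD section 4.3": [],
    }

    def impacted_by(changed: str) -> list[str]:
        """Breadth-first traversal collecting every downstream dependent."""
        seen, queue, result = {changed}, deque([changed]), []
        while queue:
            for dependent in TRACE_LINKS.get(queue.popleft(), []):
                if dependent not in seen:
                    seen.add(dependent)
                    result.append(dependent)
                    queue.append(dependent)
        return result

    if __name__ == "__main__":
        print("Changing REQ-3.2.1 may affect:", impacted_by("REQ-3.2.1"))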

Evidence from Current Implementations

Tony Sukhwani's 2025 work integrating GPT-4o into Cameo demonstrates practical progress. His implementation "encourages a human-in-the-loop process where a complex modeling task is segmented into a set of smaller tasks, each producing a homogeneous type of model elements."

This approach particularly suits experienced engineers transitioning to MBSE. Rather than learning the entire SysML methodology upfront, they can work incrementally, using familiar systems engineering thinking while the AI handles modeling mechanics.

"Based on the success demonstrated in this presentation, it can be argued that an LLM-based MBSE Co-Pilot likely has already fulfilled most of what is needed for the Data Readiness," Zhang observes, referring to the challenge of understanding different types of engineering data.

However, document generation remains imperfect. Current large language models can produce plausible-looking text but struggle with the precise consistency and traceability that CDRLs demand. Zhang identifies "Trustworthiness" as a critical challenge: "LLMs, while capable of generating useful insights, frequently produce outputs that are difficult to explain or validate, raising concerns about their reliability in high-stakes engineering applications."

The Government Modernization Gap

The CDRL problem reflects a broader challenge: government acquisition processes evolve slowly compared to engineering practice. The Defense Acquisition University still teaches traditional systems engineering V-models. Program management tools across DoD assume document-based deliverables. Contracting officers and program managers trained over decades expect information in familiar formats.

Some progressive programs are beginning to accept model-based deliverables. The DoD Digital Engineering Strategy, released in 2018 and updated in subsequent years, explicitly calls for transitioning to model-centric approaches. But strategy documents move faster than contract templates and acquisition regulations.

A 2023 Government Accountability Office report on DoD acquisition noted persistent challenges in modernizing engineering data management, particularly the gap between contractor internal practices and government delivery expectations.

For experienced systems engineers, this creates a paradox: their organizations push MBSE adoption while their customers still require traditional outputs. They become caught between competing imperatives, with AI co-pilots offering potential relief if—and only if—the technology matures sufficiently.

The Training Investment Problem

Organizations face difficult decisions about training investments. Sending a senior engineer to week-long SysML training costs not just tuition but opportunity cost—billable hours lost. If that engineer then struggles to apply new skills while maintaining document-based deliverables, ROI becomes questionable.

"A failure to appreciate the long-term benefits of MBSE leads to inadequate resource allocation and slow adoption," Zhang's research notes. But "long-term benefits" offer cold comfort to engineers experiencing short-term productivity decline.

AI co-pilots could compress this learning curve dramatically. Instead of months mastering modeling tools, engineers might become productive within weeks, using natural language interaction while gradually absorbing SysML concepts through AI-guided practice.

The roadmap envisions "flexible" AI systems that "adapt to both advancing AI technologies and evolving MBSE methodologies." For veteran engineers, this means the tools could adapt to their existing mental models rather than demanding complete cognitive restructuring.

The Cultural Resistance Factor

Beyond individual learning challenges, organizations face cultural resistance from senior technical staff—often the very people whose buy-in determines adoption success. When respected chief engineers express skepticism about MBSE, younger engineers notice.

"Organizations mistakenly treat MBSE as a 'one size fits all' solution, overloading models with excessive detail, which increases complexity rather than reducing it," Zhang warns. This mistake often occurs when organizations mandate MBSE without adequately consulting experienced engineers about appropriate abstraction levels.

Konrad and colleagues "emphasized the risk of losing clarity as MBSE models grow more detailed, ultimately undermining their effectiveness. Instead of streamlining development, these overly complex models can hinder engineers, making it harder to navigate and manage essential information."

AI co-pilots could help by recommending appropriate detail levels based on project context, drawing on what Zhang calls "Hierarchical Model Simplification" to "transform complex, higher-order model representations into simpler, lower-order forms using specific rules and code."
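The Python sketch below illustrates the flavor of such a rule: a deep element tree is collapsed to a chosen depth, with anything deeper summarized as a count. The tree and the depth rule are assumptions for illustration only.

    # Toy hierarchical simplification: a deep element tree is collapsed to a
    # chosen depth so reviewers see an overview rather than every leaf.
    MODEL_TREE = {
        "Satellite": {
            "Payload": {"Optical Bench": {}, "Cryocooler": {}},
            "Bus": {"Power": {"Battery": {}, "Solar Array": {}}, "ADCS": {}},
        }
    }

    def count_nodes(tree: dict) -> int:
        return sum(1 + count_nodes(children) for children in tree.values())

    def simplify(tree: dict, max_depth: int, depth: int = 0) -> dict:
        """Keep structure above max_depth; summarize anything deeper as a count."""
        if depth >= max_depth:
            return {f"... {count_nodes(tree)} hidden element(s)": {}} if tree else {}
        return {name: simplify(children, max_depth, depth + 1) for name, children in tree.items()}

    if __name__ == "__main__":
        import json
        print(json.dumps(simplify(MODEL_TREE, max_depth=2), indent=2))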

A Path Forward: Hybrid Workflows

The most realistic near-term solution may be hybrid workflows where AI co-pilots serve as intelligent translators between document and model paradigms. Experienced engineers could continue working in familiar ways while AI maintains underlying models and generates required deliverables.

Zhang's vision of a Level 3 "trusted co-pilot" specifically maintains "humans in the loop" where "creativity and accountability remain firmly with human engineers." For veterans, this means leveraging their domain expertise and systems thinking while offloading modeling mechanics to AI.

The roadmap's emphasis on "Transparent Decision-Making" becomes crucial for gaining veteran engineers' trust: "Every recommendation or decision made by the Co-Pilot should be accompanied by an explanation that outlines its reasoning, the data sources used, and the expected impact of its suggestion."

Experienced engineers, particularly those who've seen previous automation initiatives fail, will scrutinize AI outputs carefully. Explainability isn't optional—it's essential for adoption.

Timeline and Realism

Zhang's research avoids promising quick fixes: "Each phase advances based on the maturity of the corresponding research and the readiness of technologies for integration into existing MBSE workflows."

For the document-model synchronization challenge, meaningful solutions likely require 3-5 years of development, particularly solving trustworthiness issues around CDRL-compliant document generation. Government acquisition reform operates on even longer timescales—potentially a decade or more before model-based deliverables become standard.

In the interim, experienced engineers will continue managing dual-track workflows. AI co-pilots may ease but won't eliminate this burden in the near term.

The Stakes

The experienced engineer problem has broader implications than individual career disruption. These senior professionals hold decades of hard-won knowledge about complex systems, failure modes, integration challenges, and customer needs. If MBSE adoption alienates them, organizations lose critical expertise during the transition.

"Developing such a Co-Pilot is not meant to replace engineers but to work alongside them, helping them focus on more valuable tasks," Zhang emphasizes. For veterans, those "more valuable tasks" mean applying deep systems thinking rather than struggling with tool mechanics.

Successfully bridging the document-model gap could unlock the full promise of MBSE: leveraging experienced engineers' expertise within modern frameworks while maintaining customer interface requirements. Failure means continued friction, slow adoption, and potential loss of senior talent to frustration or retirement.

The AI co-pilot vision offers hope, but realization requires solving hard problems in natural language understanding, document generation, bidirectional synchronization, and trustworthiness. For the veteran systems engineer caught between old and new paradigms, the technology can't arrive soon enough—provided it actually works when it does.


Sources

  1. Zhang, W. (2025). "MBSE Co-Pilot: A Research Roadmap." Systems Engineering, Wiley Online Library. https://incose.onlinelibrary.wiley.com

  2. Papke, T., et al. (Referenced in Zhang, 2025). Enterprise architecture requirements for MBSE adoption.

  3. Li, Y., et al. (Referenced in Zhang, 2025). Consistency challenges in interdisciplinary MBSE model management.

  4. Konrad, S., et al. (Referenced in Zhang, 2025). Clarity loss risks in detailed MBSE models.

  5. Sukhwani, T. (2025). "AI Plugin for Cameo." Presentation at MBSE Cyber Systems Symposium, January 2025.

  6. Duhil, A., et al. (Referenced in Zhang, 2025). M2Doc and AQL for model-to-document transformation in Capella.

  7. U.S. Department of Defense. (2018, updated). "Digital Engineering Strategy." Office of the Deputy Assistant Secretary of Defense for Systems Engineering.

  8. U.S. Government Accountability Office. (2023). "Defense Acquisition: Observations on Data Management Challenges." GAO Report.

  9. Defense Acquisition University. Systems Engineering fundamentals and acquisition documentation standards. https://www.dau.edu

  10. Meissner, M., et al. (Referenced in Zhang, 2025). Rapid engineering change management in MBSE environments.

  11. Bordeleau, F., et al. (Referenced in Zhang, 2025). Bi-directional synchronization challenges in digital twin environments.

  12. Innovate UK. Grant Number 1334-AKT supporting MBSE Co-Pilot research.

Note: Contract Data Requirements Lists (CDRLs) and Data Item Descriptions (DIDs) are standardized through DoD documentation systems. Specific DID numbers referenced (DI-IPSC-81431, DI-TMSS-80527, etc.) are examples of commonly used formats in defense acquisition programs. Organizations should consult current DoD acquisition guidance and contract-specific requirements.
