The SEI recently announced the release of white papers outlining the challenges and opportunities of three initial pillars of artificial intelligence (AI) engineering: human-centered, scalable, and robust and secure.
To mature AI practices and help national defense and security agencies adopt AI, the SEI has begun formalizing the field of AI engineering, much as it did for software engineering in the 1980s. AI engineering is an emerging field of research and practice that combines the principles of systems engineering, software engineering, computer science, and human-centered design to create AI systems in accordance with human needs for mission outcomes.
In October 2020, the Office of the Director of National Intelligence sponsored the SEI to lead an initiative to advance the discipline of AI engineering for defense and national security. The SEI had already released 11 foundational practices of AI engineering and held a 2019 workshop with thought leaders to identify areas of focus for AI initiatives. The workshop’s findings and subsequent collaborations with government, the armed services, industry, and academia led to the newly released pillars of AI engineering.
The government sphere has many barriers that can prevent successful implementation of AI, such as increased scrutiny, limited data resources, rigorous acquisition processes, and high-stakes application areas. The SEI’s government partners cited scalability challenges in the private sector, amplified by these government-sector barriers, as particularly worrisome. “It’s been reported that most AI projects fail to capture the intended business value,” said Rachel Dzombak, digital transformation lead at the SEI’s Emerging Technology Center and a leader of the SEI’s work in AI engineering. “A lot of that comes from the inability to transition prototypes into systems that achieve the right outcomes over time and at scale.”
After consultation with its partners, the SEI developed its scalability pillar of AI engineering, which includes three areas of focus:
- Scalable management of data and models
- Enterprise scalability of AI development and deployment
- Scalable algorithms and infrastructure
Even highly scalable systems will not fulfill mission outcomes if they are not robust and secure. AI systems must be robust against real-world variations—those that the systems can reason about and those that they cannot. The SEI’s white paper on robust and secure AI calls out three focus areas:
- Improving the robustness of AI components and systems, including going beyond measuring accuracy to measuring the achievement of mission outcomes
- Development of processes and tools for testing, evaluating, and analyzing AI systems
- Designing for security challenges in modern AI systems
While security is a must for AI implementations in the DoD, so is keeping humans at the center. “If your smart device at home recommends the wrong song, it doesn’t necessarily have long-term effects,” said Dzombak. “But for the applications and problems in the national security space, AI output has consequences for human lives.”
The human-centered pillar of AI engineering is intended to ensure that AI systems are built in alignment with the ethical principles of the DoD and other government agencies. “We’re challenging ourselves to ask what transparent and responsible systems really are,” said Dzombak, “and how to measure and ensure system integrity over time.”
The white paper on human-centered AI engineering highlights these areas:
- The need for designers and systems to understand the context of use and sense changes over time
- Development of tools, processes, and practices to scope and facilitate human-machine teaming
- Methods, mechanisms, and mindsets to engage in critical oversight
The SEI envisions AI engineering as a discipline founded on all three pillars—scalable, robust and secure, and human-centered. Such a discipline would produce AI systems that not only have those qualities, but deliver on their intended purpose. “By putting these pillars in place as AI system design and development starts,” said Dzombak, “you’re more likely to build systems that achieve mission outcomes.”
Dzombak sees the three white papers as the beginning of a conversation. “These papers state the open questions we see in the field and identify gaps where work is needed. If we want to drive progress in the field, we need to start taking steps towards defining and answering these hard questions.”
Bolstered by the recent establishment of an AI Division within the SEI, the team is exploring those questions with new and ongoing AI projects, by examining project portfolios for AI engineering insights, and by preparing a roadmap for the discipline based on AI use cases. It is also inviting the AI community to join the effort. “The SEI doesn’t have all the answers,” said Dzombak. “A big part of our role is to convene the perspectives on best practices.”