Pentagon Plans New AI Initiative: What’s Set for 2024?


The Pentagon has announced plans to establish an Artificial Intelligence-Enabled Weapon Systems Center of Excellence in 2024, a move with broad implications for how the military develops, fields, and counters AI-enabled weapons.

At a Glance

  • An amendment to the Senate’s fiscal year 2024 defense authorization bill proposes the creation of this new center.
  • The Center will aggregate, analyze, and disseminate information on AI-enabled weapon systems.
  • The initiative aims to enhance the Department of Defense’s capabilities in AI technology and countermeasures.
  • Rapid decision-making and situational awareness are key goals of this strategy.

Framework and Mandate

The Pentagon’s new initiative stems from an amendment to the Senate’s fiscal year 2024 defense authorization bill, which calls for an “Artificial Intelligence-Enabled Weapon Systems Center of Excellence” to be established within the Department of Defense (DOD) by 2024. The Center will focus on capturing, analyzing, assessing, and sharing lessons learned about advancements in AI-enabled weapon systems, countermeasures, tactics, techniques, procedures, and training methodologies.

The Center’s responsibilities are multi-faceted. It will serve as the department-wide repository for these lessons learned, covering not only the weapon systems themselves but also the countermeasures, tactics, techniques, procedures, and training methodologies that evolve alongside them.

Strategic Vision

The vision for the AI Center aligns with the Department of Defense’s broader strategy to scale digital data analytics and artificial intelligence across the department. Craig Martell, the Defense Department’s Chief Digital and Artificial Intelligence Officer, outlined this vision at the Advantage DOD 2024: Defense Data and AI Symposium, describing a future in which combatant commanders achieve rapid situational awareness, with turnaround times cut from days to minutes.

“Imagine a world where combatant commanders can see everything they need to see to make strategic decisions,” he explained. “Imagine a world where those combatant commanders aren’t getting that information via PowerPoint or via emails from across the organization—the turnaround time for situational awareness shrinks from a day or two to 10 minutes.”

USAF’s Role in AI Advancement

The United States Air Force (USAF) is already investing heavily in AI research and development to bolster aerial dominance and national security. Its AI initiatives emphasize autonomy, operational efficiency, and strategic advantage. For instance, the X-62 Variable In-Flight Simulator Test Aircraft (VISTA) serves as a key platform for testing artificial intelligence in aerial systems, and the service is developing next-generation unmanned aerial vehicles (UAVs) with onboard AI capabilities, such as the XQ-58A Valkyrie.

The VENOM-AFT program at Eglin Air Force Base accelerates the testing of autonomy software on both crewed and uncrewed aircraft. The USAF also collaborates with commercial and academic partners, such as the Massachusetts Institute of Technology (MIT) AI Accelerator program, to keep ethical considerations and responsible AI development central to its initiatives.

Ethical Implications and Risks

The Pentagon’s push towards AI-enabled weapons is not without controversy. Many experts warn of the potential risks associated with rapid deployment. A significant internal conflict at OpenAI, for example, highlighted the divide between those advocating for unrestricted AI research and those urging caution due to potential dangers.

“Humanity is about to cross a major threshold of profound importance when the decision over life and death is no longer taken by humans but made on the basis of pre-programmed algorithms. This raises fundamental ethical issues,” stated Amb. Alexander Kmentt, Austria’s chief negotiator for disarmament, arms control, and nonproliferation.

Global AI Arms Race

The AI arms race is intensifying globally, driven by concerns over maintaining technological superiority. The United States, China, and Russia are at the forefront of this race, primarily out of fear of losing the upper hand. Future warfare may rely more on AI capabilities like data collection, connectivity, and algorithms rather than traditional military factors.

The Department of Defense supports the responsible development of AI for military use, emphasizing policies and technical capacities for safe deployment. Recently, the U.S. unveiled a “Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy,” which advocates voluntary restraints rather than legally binding obligations. Nonetheless, the ethical concerns and potential for catastrophic risk will need to be addressed continually.