More Battlefield AI Will Make the Fog of War More Deadly


The United States military is not the unrivaled force it once was, but Alexandr Wang, CEO of startup Scale AI, told a congressional committee last week that it could establish a new advantage by harnessing artificial intelligence.

“We have the largest fleet of military hardware in the world,” Wang told the House Armed Services Subcommittee on Cyber, Information Technology and Innovation. “If we can properly set up and instrument this data that’s being generated … then we can create a pretty insurmountable data advantage when it comes to military use of artificial intelligence.”

Wang’s company has a vested interest in that vision, since it regularly works with the Pentagon, processing large quantities of training data for AI projects. But there is a conviction within US military circles that increased use of AI and machine learning is virtually inevitable—and essential. I recently wrote about that growing movement and how one Pentagon unit is using off-the-shelf robotics and AI software to surveil large swaths of the ocean in the Middle East more efficiently.

Besides the country’s unparalleled military data, Wang told the congressional hearing that the US has the advantage of being home to the world’s most advanced AI chipmakers, like Nvidia, and the world’s best AI expertise. “America is the place of choice for the world’s most talented AI scientists,” he said.

Wang’s interest in military AI is also worth paying attention to because Scale AI is at the forefront of another AI revolution: the development of powerful large language models and advanced chatbots like ChatGPT.

No one is thinking of conscripting ChatGPT into military service just yet, although there have been a few experiments involving the use of large language models in military war games. But observers see US companies’ recent leaps in AI performance as another key advantage that the Pentagon might exploit. Given how quickly the technology is developing—and how problematic it still is—this raises new questions about what safeguards might be needed around military AI.

This jump in AI capabilities comes as some people’s attitudes toward the military use of AI are changing. Google faced a backlash for helping the US Air Force use AI to interpret aerial imagery through the Pentagon’s Project Maven, a project it joined in 2017. But Russia’s invasion of Ukraine has softened public and political attitudes toward military collaboration with tech companies and demonstrated the potential of cheap autonomous drones and of commercial AI for data analysis. Ukrainian forces are using deep learning algorithms to analyze aerial imagery and footage. The US company Palantir has said that it is providing targeting software to Ukraine. And Russia is increasingly focusing on AI for autonomous systems.