#AI #Drone Hunter: Anduril and OpenAI Team Up to Develop Autonomous Aerial Defense System
Defense technology company Anduril has unveiled its latest innovation in military defense: the Roadrunner and Roadrunner-M, AI-guided autonomous jets designed to hunt and neutralize enemy drones. The company has simultaneously announced a strategic partnership with OpenAI to enhance these systems' capabilities.
The Roadrunner platform, spearheaded by 31-year-old Palmer Luckey, aims to provide a mobile and cost-effective solution for protecting military personnel from aerial threats. The system integrates with Anduril's existing Lattice software platform, which allows a single operator to manage the entire drone detection and response operation from a computer interface.
In a demonstration at Anduril's test site near San Clemente, California, the system showed how it could autonomously detect potential threats using the Sentry tower's sensors, deploy surveillance drones to track targets, and launch interceptor drones to neutralize hostile aircraft. The entire operation required minimal human intervention, with the operator primarily confirming automated suggestions from the AI system.
The new partnership with OpenAI will focus on improving the system's counter-unmanned aircraft systems (CUAS) capabilities, particularly in detecting, assessing, and responding to aerial threats in real time. OpenAI CEO Sam Altman emphasized that the collaboration aims to "protect U.S. military personnel and help the national security community understand and responsibly use this technology."
"Anduril builds defense solutions that meet urgent operational needs for the U.S. and allied militaries," said Brian Schimpf, Anduril's co-founder and CEO. "Our partnership with OpenAI will allow us to utilize their world-class expertise in artificial intelligence to address urgent Air Defense capability gaps across the world."
The collaboration comes at a critical time as the Pentagon seeks solutions to counter the growing threat of drone warfare, particularly in light of recent conflicts that have highlighted the vulnerability of military installations to drone attacks. Both companies have emphasized their commitment to AI safety and ethics, promising robust oversight in the development and deployment of these advanced defense systems.
About the Video
One afternoon in late November, I visited a weapons test site in the foothills east of San Clemente, California, operated by Anduril, a maker of AI-powered drones and missiles that recently announced a partnership with OpenAI. I went there to witness a new system it’s expanding today, which allows external parties to tap into its software and share data in order to speed up decision-making on the battlefield. If it works as planned over the course of a new three-year contract with the Pentagon, it could embed AI more deeply into the theater of war than ever before.
Near the site’s command center, which looked out over desert scrub and sage, sat pieces of Anduril’s hardware suite that have helped the company earn its $14 billion valuation. There was Sentry, a security tower of cameras and sensors currently deployed at both US military bases and the US-Mexico border, alongside advanced radars. Multiple drones, including an eerily quiet model called Ghost, sat ready to be deployed. What I was there to watch, though, was a different kind of weapon, displayed on two large television screens positioned at the test site’s command station.
I was here to examine the pitch being made by Anduril, other companies in defense tech, and growing numbers of people within the Pentagon itself: A future “great power” conflict—military jargon for a global war involving competition between multiple countries—will not be won by the entity with the most advanced drones or firepower, or even the cheapest firepower. It will be won by whoever can sort through and share information the fastest. And that will have to be done “at the edge” where threats arise, not necessarily at a command post in Washington.
A desert drone test
“You’re going to need to really empower lower levels to make decisions, to understand what’s going on, and to fight,” Anduril CEO Brian Schimpf says. “That is a different paradigm than today.” Currently, information flows poorly among people on the battlefield and decision-makers higher up the chain.
To show how the new tech will fix that, Anduril walked me through an exercise demonstrating how its system would take down an incoming drone threatening a base of the US military or its allies (the scenario at the center of Anduril’s new partnership with OpenAI). It began with a truck in the distance, driving toward the base. The AI-powered Sentry tower automatically recognized the object as a possible threat, highlighting it as a dot on one of the screens. Anduril’s software, called Lattice, sent a notification asking the human operator if he would like to send a Ghost drone to monitor. After a click of his mouse, the drone piloted itself autonomously toward the truck, as information on its location gathered by the Sentry was sent to the drone by the software.
The truck disappeared behind some hills, so the Sentry tower camera that was initially trained on it lost contact. But the surveillance drone had already identified it, so its location stayed visible on the screen. We watched as someone in the truck got out and launched a drone, which Lattice again labeled as a threat. It asked the operator if he’d like to send a second attack drone, which then piloted autonomously and locked onto the threatening drone. With one click, it could be instructed to fly into it fast enough to take it down. (We stopped short here, since Anduril isn’t allowed to actually take down drones at this test site.) The entire operation could have been managed by one person with a mouse and computer.
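The human-in-the-loop flow described above (automated detection, a suggested action, one-click operator confirmation) can be sketched in a few lines of Python. This is an illustrative sketch only, not Anduril's Lattice API; the track fields and action names are invented for the example.

```python
def propose_action(track: dict) -> str:
    """Map an automated classification to a suggested response.
    Classification values and action names are hypothetical."""
    if track["classification"] == "possible_threat":
        return "deploy_surveillance_drone"
    if track["classification"] == "confirmed_threat":
        return "deploy_interceptor"
    return "monitor"

def handle_track(track: dict, operator_confirms) -> str:
    """Nothing beyond passive monitoring happens unless the
    human operator approves the system's suggestion."""
    action = propose_action(track)
    if action != "monitor" and not operator_confirms(action):
        return "monitor"
    return action

# The operator approves the suggestion with one click.
result = handle_track({"classification": "confirmed_threat"}, lambda a: True)
print(result)
```

The key design point, as in the demo, is that the software only proposes; the human click is the gate between detection and action.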
Anduril is building on these capabilities further by expanding Lattice Mesh, a software suite that allows other companies to tap into Anduril’s software and share data, the company announced today. More than 10 companies are now building their hardware into the system—everything from autonomous submarines to self-driving trucks—and Anduril has released a software development kit to help them do so. Military personnel operating hardware can then “publish” their own data to the network and “subscribe” to receive data feeds from other sensors in a secure environment. On December 3, the Pentagon’s Chief Digital and AI Office awarded a three-year contract to Anduril for Mesh.
Anduril’s offering will also join forces with Maven, a program operated by the defense data giant Palantir that fuses information from different sources, like satellites and geolocation data. It’s the project that led Google employees in 2018 to protest against working in warfare. Anduril and Palantir announced on December 6 that the military will be able to use the Maven and Lattice systems together.
The military’s AI ambitions
The aim is to make Anduril’s software indispensable to decision-makers. It also represents a massive expansion of how the military is currently using AI. You might think the US Department of Defense, advanced as it is, would already have this level of hardware connectivity. We have some semblance of it in our daily lives, where phones, smart TVs, laptops, and other devices can talk to each other and share information. But for the most part, the Pentagon is behind.
“There’s so much information in this battle space, particularly with the growth of drones, cameras, and other types of remote sensors, where folks are just sopping up tons of information,” says Zak Kallenborn, a warfare analyst who works with the Center for Strategic and International Studies. Sorting through it to find the most important information is a challenge. “There might be something in there, but there’s so much of it that we can’t just set a human down to deal with it,” he says.
Right now, humans also have to translate between systems made by different manufacturers. One soldier might have to manually rotate a camera to look around a base and see if there’s a drone threat, and then manually send information about that drone to another soldier operating the weapon to take it down. Those instructions might be shared via a low-tech messenger app—one on par with AOL Instant Messenger. That takes time. It’s a problem the Pentagon is attempting to solve through its Joint All-Domain Command and Control plan, among other initiatives.
“For a long time, we’ve known that our military systems don’t interoperate,” says Chris Brose, former staff director of the Senate Armed Services Committee and principal advisor to Senator John McCain, who now works as Anduril’s chief strategy officer. Much of his work has been convincing Congress and the Pentagon that a software problem is just as worthy of a slice of the defense budget as jets and aircraft carriers. (Anduril spent nearly $1.6 million on lobbying last year, according to data from Open Secrets, and has numerous ties with the incoming Trump administration: Anduril founder Palmer Luckey has been a longtime donor and supporter of Trump, and JD Vance spearheaded an investment in Anduril in 2017 when he worked at venture capital firm Revolution.)
Defense hardware also suffers from a connectivity problem. Tom Keane, a senior vice president in Anduril’s connected warfare division, walked me through a simple example from the civilian world. If you receive a text message while your phone is off, you’ll see the message when you turn the phone back on. It’s preserved. “But this functionality, which we don’t even think about,” Keane says, “doesn’t really exist” in the design of many defense hardware systems. Data and communications can be easily lost in challenging military networks. Anduril says its system instead stores data locally.
An AI data treasure trove
The push to build more AI-connected hardware systems in the military could spark one of the largest data collection projects the Pentagon has ever undertaken, and companies like Anduril and Palantir have big plans.
“Exabytes of defense data, indispensable for AI training and inferencing, are currently evaporating,” Anduril said on December 6, when it announced it would be working with Palantir to compile data collected in Lattice, including highly sensitive classified information, to train AI models. Training on a broader collection of data collected by all these sensors will also hugely boost the model-building efforts that Anduril is now doing in a partnership with OpenAI, announced on December 4. Earlier this year, Palantir also offered its AI tools to help the Pentagon reimagine how it categorizes and manages classified data. When Anduril founder Palmer Luckey told me in an interview in October that “it’s not like there’s some wealth of information on classified topics and understanding of weapons systems” to train AI models on, he may have been foreshadowing what Anduril is now building.
Even if some of this data from the military is already being collected, AI will suddenly make it much more useful. “What is new is that the Defense Department now has the capability to use the data in new ways,” Emelia Probasco, a senior fellow at the Center for Security and Emerging Technology at Georgetown University, wrote in an email. “More data and ability to process it could support great accuracy and precision as well as faster information processing.”
The sum of these developments might be that AI models are brought more directly into military decision-making. That idea has brought scrutiny, as when Israel was found last year to have been using advanced AI models to process intelligence data and generate lists of targets. Human Rights Watch wrote in a report that the tools “rely on faulty data and inexact approximations.”
“I think we are already on a path to integrating AI, including generative AI, into the realm of decision-making,” says Probasco, who authored a recent analysis of one such case. She examined a system built within the military in 2023 called Maven Smart System, which allows users to “access sensor data from diverse sources [and] apply computer vision algorithms to help soldiers identify and choose military targets.”
Probasco said that building an AI system to control an entire decision pipeline, possibly without human intervention, “isn’t happening” and that “there are explicit US policies that would prevent it.”
A spokesperson for Anduril said that the purpose of Mesh is not to make decisions. “The Mesh itself is not prescribing actions or making recommendations for battlefield decisions,” the spokesperson said. “Instead, the Mesh is surfacing time-sensitive information”—information that operators will consider as they make those decisions.
Anduril Partners with OpenAI to Advance U.S. Artificial Intelligence Leadership and Protect U.S. and Allied Forces
Anduril Industries, a defense technology company, and OpenAI, the maker of ChatGPT and frontier AI models such as GPT-4o and OpenAI o1, are proud to announce a strategic partnership to develop and responsibly deploy advanced artificial intelligence (AI) solutions for national security missions. By bringing together OpenAI’s advanced models with Anduril’s high-performance defense systems and Lattice software platform, the partnership aims to improve the nation’s defense systems that protect U.S. and allied military personnel from attacks by unmanned drones and other aerial devices.
U.S. and allied forces face a rapidly evolving set of aerial threats from both emerging unmanned systems and legacy manned platforms that can wreak havoc, damage infrastructure and take lives. The Anduril and OpenAI strategic partnership will focus on improving the nation’s counter-unmanned aircraft systems (CUAS) and their ability to detect, assess and respond to potentially lethal aerial threats in real time. As part of the new initiative, Anduril and OpenAI will explore how leading-edge AI models can be leveraged to rapidly synthesize time-sensitive data, reduce the burden on human operators, and improve situational awareness. These models, which will be trained on Anduril’s industry-leading library of data on CUAS threats and operations, will help protect U.S. and allied military personnel and ensure mission success.
The accelerating race between the United States and China to lead the world in advancing AI makes this a pivotal moment. If the United States cedes ground, we risk losing the technological edge that has underpinned our national security for decades. The decisions made now will determine whether the United States remains a leader in the 21st century or risks being outpaced by adversaries who don’t share our commitment to freedom and democracy and would use AI to threaten other countries. Bringing together world-class talent in their respective fields, this effort aims to ensure that the U.S. Department of Defense and Intelligence Community have access to the most advanced, effective, and safe AI-driven technologies available in the world.
“Anduril builds defense solutions that meet urgent operational needs for the U.S. and allied militaries,” said Brian Schimpf, co-founder & CEO of Anduril Industries. “Our partnership with OpenAI will allow us to utilize their world-class expertise in artificial intelligence to address urgent Air Defense capability gaps across the world. Together, we are committed to developing responsible solutions that enable military and intelligence operators to make faster, more accurate decisions in high-pressure situations.”
“OpenAI builds AI to benefit as many people as possible, and supports U.S.-led efforts to ensure the technology upholds democratic values," said Sam Altman, OpenAI's CEO. "Our partnership with Anduril will help ensure OpenAI technology protects U.S. military personnel, and will help the national security community understand and responsibly use this technology to keep our citizens safe and free."
Anduril and OpenAI’s shared commitment to AI safety and ethics is a cornerstone of this new strategic partnership. Subject to robust oversight, this collaboration will be guided by technically informed protocols emphasizing trust and accountability in the development and employment of advanced AI for national security missions.
Anduril, OpenAI enter ‘strategic partnership’ to use AI against drones
WASHINGTON — Defense startup Anduril is teaming up with ChatGPT-maker OpenAI in a “strategic partnership” that Anduril says will “develop and responsibly deploy advanced artificial intelligence (AI) solutions for national security missions” — particularly in countering drones.
“By bringing together OpenAI’s advanced models with Anduril’s high-performance defense systems and Lattice software platform, the partnership aims to improve the nation’s defense systems that protect U.S. and allied military personnel from attacks by unmanned drones and other aerial devices,” Anduril said in a press release Wednesday. “The Anduril and OpenAI strategic partnership will focus on improving the nation’s counter-unmanned aircraft systems (CUAS) and their ability to detect, assess and respond to potentially lethal aerial threats in real-time.”
The collaboration comes as the Pentagon is racing to find ways to defend its troops and facilities, both abroad and at home, from the threat of drones of all sizes, a threat that’s come to the fore rapidly since Russia’s invasion of Ukraine, especially.
RELATED: Hundreds of drone incursions reported at military installations over past few years, NORTHCOM says
Breaking Defense recently observed a military exercise in the Colorado mountains during which different companies demonstrated their own counter-drone solutions for the homeland, from cyberattacks to nets. In July, the Pentagon conducted a similar experiment, this time attempting to defend against drone swarms.
“No one capability, whether kinetic or non-kinetic, in itself could really just beat this kind of [attack] profile,” Col. Michael Parent, chief of acquisitions & resources at the Army-led Joint Counter-small Unmanned Aircraft System Office, said at the time. “What we saw was they really do need a full system of systems approach, a layered approach.”
Officials and experts have held up AI as a potential key aid in defeating drone swarms, allowing much faster identification of several threats that would otherwise overwhelm current systems and their human operators. In October, defense industry giant Northrop Grumman announced it was adding AI to an Army command system to better defend against the drone threat.
However, the Pentagon is also grappling with the policy and ethical considerations of integrating AI into its operations, especially any missions involving kinetic fires. In other applications, like chat programs, the DoD has shown it’s especially wary of potential mistakes current popular AI systems can make.
Anduril appeared to acknowledge that concern, and in its release Wednesday said the two firms’ “shared commitment to AI safety and ethics is a cornerstone of this new strategic partnership.” The collaboration, Anduril said, will be “subject to robust oversight.”