How AI Will Transform Military Command and Control - Paul Scharre

Paul Scharre, author of Four Battlegrounds: Power in the Age of Artificial Intelligence, joins us to talk about

  • how AI’s superhuman command and control abilities will change the battlefield

  • why offense/defense balance isn’t a well-defined concept

  • “race to the bottom” dynamics for autonomous weapons

  • how a US/Taiwan conflict in the age of drones might play out

  • and more…

Links mentioned:

Gradual Disempowerment

Swarms over the Strait

Transcript

Rai Sur 00:00:36

Today we’re speaking with Paul Scharre. Paul is the Executive Vice President at the Center for a New American Security and the author of Four Battlegrounds: Power in the Age of Artificial Intelligence. He also formerly worked at the Pentagon on emerging technologies. Welcome, Paul.

Paul Scharre 00:00:52

Thanks for having me.

Rai Sur 00:00:53

We’re also joined by Sentinel forecaster and co-founder Nuño Sempere, Sentinel superforecaster Lisa, and superforecaster Scott Eastman, whose specialties are geopolitics, epidemiology, and AI. Welcome, Scott.

Paul, what is command and control, and how do you see AI significantly altering it?

Paul Scharre 00:01:18

The term command and control is used by militaries to describe the internal organizational and informational processes used to organize military forces and coordinate their behavior. Militaries work in a very hierarchical format, with teams, squads, platoons, companies, and battalions. The way they communicate has evolved from signal flags on the battlefield to radio communications, to today’s use of computers.

Improvements in command and control can yield dramatic improvements on the battlefield, even if the military forces themselves are the same. We often think about the physical hardware of drones, missiles, or robots, and that’s absolutely valuable. But there are also potentially transformative effects in command and control.

If a military can gather more information about the battlefield, make sense of it faster than the adversary, make better decisions, and execute those decisions in a coordinated fashion, it can have dramatic effects. This is particularly true if they can change the battlespace faster than the adversary can react, leaving the adversary constantly trying to respond to what happened six hours ago.

Nuño Sempere 00:03:04

What’s a concrete example of this?

Paul Scharre 00:03:11

During the US advance into Iraq in 2003, US military forces were advancing very rapidly toward Baghdad. Through a precision bombing campaign, the US disrupted the command and control of Saddam Hussein’s forces by taking out headquarters and radio communications.

This created a situation where Iraqi commanders were reacting to events after they had already happened. They would identify the location of US troops, but by the time they reacted, the US forces had already advanced. That’s a potential advantage of having better situational awareness and command and control. Artificial intelligence has a lot of potential value in this space.

Rai Sur 00:04:11

In Four Battlegrounds, you wrote about how AI might change decision-making and strategic planning at higher levels. What are the potential impacts there, and how could it change the look of warfare?

Paul Scharre 00:04:25

Let’s imagine what this evolution might look like 15 to 30 years down the road as militaries integrate more AI, autonomy, drones, and robotics. Military advantage can come from better technology—better drones, larger production capacity, more robots on the battlefield. We tend to think about the physical object, which is important. But major advantages can also come from improvements in command and control, and we can see examples of this in gaming.

Look at how AlphaZero plays chess. The pieces are the same for both the human and the AI, so there’s no material advantage. In many cases, the situational awareness is the same—they’re looking at the same board. Yet we’ve seen that AI is able to process that information better and more holistically than people. In games like StarCraft or Dota 2, the AI can see the big picture and comprehend it all.

Across a number of games, AI agents can engage in coordinated, multi-axis attacks and balance their resources more effectively than people. This isn’t just in real-time strategy games; chess grandmasters have noted that AlphaZero can conduct attacks across the whole board more effectively. We’ve also seen AI systems engage in superhuman levels of calibrated risk-taking. In chess, this can look like ferocious attacks. In StarCraft and Dota 2, human players have talked about feeling constantly pressured by the AI, never having a moment to rest.

In poker, AI systems bet differently than humans, engaging in wild betting swings that are hard for even cold-blooded poker players to manage because of human emotions. There’s an information processing element, but also a psychological one. In combat, you get lulls in the action because people need to rest and reset. AI systems don’t. In the command and control aspect, AI has the potential to not just be better than humans, but to transform the very strategy and psychology of war.

Rai Sur 00:07:53

What does this look like in practice, at the level of grand strategy, where generals use game theory and deception? What does adoption look like there, and what is the minimum capability required? Naively, it sounds like an AGI-complete problem.

Paul Scharre 00:08:27

There are many beneficial things militaries could do on the command and control front that are not AGI-complete problems. If you hand over all your military forces to an AI, you need a lot of trust in its intelligence to handle that complexity.

But you can imagine very narrow algorithms that optimize logistics or communication on the battlefield, which would have significant advantages. You could also have more advanced algorithms that assist with planning military operations while commanders remain in the loop. The AI could generate plans for offensive or defensive operations, and commanders would still review and decide what to execute. Over time, they might see that the AI’s plans are better than what humans come up with and decide to trust them more.

Rai Sur 00:09:29

Who is best positioned to gain these advantages sooner rather than later?

Paul Scharre 00:09:35

In general, more technologically advanced militaries have some advantages, but we’ve seen AI technologies diffusing very rapidly. The biggest limiting factor for militaries isn’t getting access to AI technology, but finding the best ways to use it and transform how they operate.

The history of military-technical revolutions shows that what matters is not getting the technology first or even having the best technology, but finding the best ways of using it. An interesting historical example is British innovation in carrier aviation during the interwar period. The British military was ahead in inventing carrier aviation but fell behind Japan and the United States due to bureaucratic and cultural squabbles. It was primarily a technology adoption problem, not a technology creation problem.

This suggests that militaries in active conflicts, like we’re seeing in Gaza and Ukraine, are much more incentivized to overcome these barriers to adoption. They’re also getting real-world feedback on how effective these technologies are, whereas getting those feedback loops in peacetime is much more challenging.

Nuño Sempere 00:11:22

So you would greatly discount which countries have top AI labs as a factor, because you think adoption feedback loops are more important?

Paul Scharre 00:11:32

Yes, absolutely. The top US AI labs are maybe a few months ahead of the top Chinese labs—perhaps 12 months, but not three years. The US military is, charitably, five years behind leading AI labs in adopting the technology, and realistically more like 10 years behind in actually using it.

It’s not enough to have an image classifier on your drone video feed; you also have to use it to transform your intelligence operations and make your analysts more effective. That’s a much longer process, particularly in peacetime without competitive pressures. Having leading AI labs in your country is probably not the most important factor for battlefield operations. For example, what Ukraine is doing with drones is far more innovative than what the US military is doing. That pressure is more important.

The exception might be in niche areas like offensive cyber operations. There, you might imagine that access to the most advanced AI systems could provide large, step-function increases in capability. If the NSA has a 12-month head start on Chinese competitors and can actually employ that advantage, that’s a place where a 12-month lead could matter a lot.

Rai Sur 00:13:21

And if the difference-maker is having a place to apply it and get feedback quickly, cyber operations don’t require a hot conflict. You are constantly searching for intelligence and testing capabilities against adversaries, so those feedback loops are always active.

Paul Scharre 00:13:44

You have more feedback in cyberspace during peacetime because of the constant engagement between intelligence services. That engagement may not be as robust as it would be in wartime—many constraints would come off in a wartime environment, leading to more cyber operations and tighter feedback loops. But you probably have more feedback in cyber operations now than you do in ground warfare for the US military.

Scott Eastman 00:14:26

My understanding is that tens of thousands of drones might be used in the Ukraine war in a 24-hour period. Is a primary purpose of AI to assimilate all of that information in a meaningful way? An individual soldier might have access to their own drone feed, but how do you integrate that across the entire battlefield? Is AI critical for filtering that much information into something usable?

Paul Scharre 00:15:02

This is probably the area where AI is most useful, getting back to command and control. AI could do a couple of things here. First, simply improving sense-making and situational awareness over the battlefield would be incredibly transformative. Instead of one pilot looking through the “soda straw” of a drone camera, AI could process all that information collectively and say, “Here are the positions of all enemy fighters and vehicles along the front.”

At a higher level of abstraction, it could even analyze current positions relative to historical data to suggest anticipated behavior, like, “It looks like they’re massing for a potential attack.” The physical presence of drones is already enabling a persistent network of aerial surveillance that makes the battlefield more transparent and harder for forces on either side to mass for an assault. You need to concentrate your forces to be effective, and drones make that harder to do without being detected.

Having AI to process that information would provide a lot of advantages. Taking it a step further, having AI help make decisions in a coordinated fashion would be extremely effective. Right now, drones have made the battlefield not just more transparent, but also more lethal for ground forces. But those are individual drones. Getting to the point where you have a coordinated swarm of 10, 50, 100, or even 1,000 drones coordinating their behavior could be much more lethal and effective.

Scott Eastman 00:17:34

I’ve heard that about 70% of Russian casualties are now from drones. I don’t know if that’s corroborated, but that’s recent information I have.

Paul Scharre 00:17:46

I have not heard that figure, but it’s certainly possible and would say a lot about the effectiveness of drones. However, part of that could be because the Ukrainians have suffered from a lack of artillery ammunition. They have been desperately asking the US and Europeans for more 155mm artillery shells. In a world where they had more artillery, you might see that number balance out. They’re likely compensating for the lack of artillery at the moment.

Rai Sur 00:18:15

The fact that it’s harder to amass a ground force for an offensive seems to make the offense-defense balance of drones lean more toward defense. Is that the net effect, or are there other factors that make drones more offense-leaning? What is the net effect of drones on the offense-defense balance in warfare?

Paul Scharre 00:18:41

Unbeknownst to you, perhaps, you have opened a massive can of worms. I do not think the concept of offense-defense balance is analytically tractable, but let’s dive into why.

You have drones overhead that make offensive maneuver operations harder. At a tactical level, they seem offense-dominant; if a drone finds an infantry troop in the open, that troop is toast. But operationally, the second-order effect is that it seems to favor the defense. So are drones offense-dominant or defense-dominant?

I want to disaggregate the idea of offense-defense balance into three separate, more meaningful concepts. The first is the cost-exchange ratio of a technology. For example, what is the cost of shooting down an incoming drone versus the marginal cost of buying more drones? We’ve seen examples in Ukraine where Russian drones are so cheap that even though Ukrainian air defense systems can shoot them down, they don’t because their missiles are too expensive. If you shoot down a $1,000 drone with a $1 million missile, you’re losing every time.
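[Editor's note: a back-of-the-envelope illustration of this cost-exchange arithmetic, using the round numbers from the conversation; the figures are illustrative, not real procurement data.]

```python
# Toy cost-exchange calculation using the round numbers from the conversation.
drone_cost = 1_000            # attacker's marginal cost per cheap drone, USD
interceptor_cost = 1_000_000  # defender's cost per air-defense missile, USD

exchange_ratio = interceptor_cost / drone_cost
print(f"Defender spends {exchange_ratio:.0f}x what the attacker spends per engagement")
# Every successful intercept still costs the defender 1,000 times more than it
# costs the attacker to replace the drone, so the defender loses the economics.
```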

A second concept is the relative cost of power projection. What does it cost for militaries to engage in offensive maneuver operations on the ground or at sea? The cost to launch an amphibious invasion is already very high. Better anti-ship cruise missiles or sea mines increase that cost further. The Chinese military has made a series of investments in long-range radars and anti-ship missiles that make it more difficult for the US military to project power into the Western Pacific, thereby increasing the cost of power projection.

The third concept is first-mover advantage. In the era of precision-guided missiles, there’s a significant first-mover advantage in naval surface warfare. One effective missile strike can take out a ship, so striking first can be a major advantage. Contrast that with nuclear deterrence, where militaries have invested heavily to take away that first-mover advantage. A submarine is designed to do just that. Even if you launch a surprise nuclear attack, you won’t get all the submarines at sea, which can then launch a second strike.

The answer to your question depends on what we’re talking about specifically. It’s really hard to connect tactical effects to strategic outcomes. A stealthy submarine is, in some ways, very defense-dominant, but that enables it to carry out a nuclear strike that could take out a whole country. That single action is very offensive, but the net effect is stabilizing for nuclear deterrence.

Nuño Sempere 00:25:32

I understand your framework is better, but couldn’t you still simplify it? If your country is being invaded, does having drones make things better or worse for you as the defender? It seems like there should be a straightforward answer.

Paul Scharre 00:25:58

If you’re worried about being invaded, what you ultimately care about is whether this technology makes it easier or harder for an aggressor to gain territory. But it’s very hard to look at a technology and extrapolate to that level. Decades ago, defense analysts tried to map which technologies were offense-dominant versus defense-dominant, and it’s difficult to do.

Drones are a good example. At a tactical level, you might say drones are offense-dominant. You can imagine drone swarms overwhelming defenses. But what we’re seeing in practice in Ukraine is that drones seem to be contributing to the stasis on the front lines, making it harder to amass ground forces. They’ve made the battlefield more transparent and more lethal.

The role of drones today is analogous to the role the machine gun played in World War I. The machine gun dramatically increased lethality on the battlefield, enabling a hundredfold increase in the number of bullets a defender could put downrange. This made traditional tactics of advancing forward effectively impossible. It took militaries a long time to adapt. They eventually developed a more integrated system of suppressing fire with artillery and machine guns, and then maneuvering to flank the enemy.

I think something similar will unfold over the next 10 to 15 years in ground warfare because of drones. Drones are making ground warfare three-dimensional. While ground troops have worried about aerial attacks since World War II, those threats were transient. Drones create a persistent threat from the air. Over time, militaries will have to adapt by finding new ways to conceal themselves and carry out offensive operations. I don’t think they’ve figured that out yet.

Lisa 00:30:19

With the emergence of low-cost drones, people initially thought they could level the playing field for the underdog and favor asymmetric warfare. But it’s clear from Ukraine that Russia has realized the power of the drone and is vastly ramping up production. The future holds drone swarms conducting both offensive and defensive operations. These low-cost drones can be used by either side to have a large-scale effect on the battlefield.

A Russian commander recently said that the winner in Ukraine will be the side that develops the best drone and anti-drone technologies. Where do you think that is going? As we see larger drone swarms and increasingly autonomous drones, what is the future of anti-drone technology? It won’t be a guy shooting a drone down with his pistol. What do you see?

Paul Scharre 00:32:40

That’s a meaty question. To answer the initial part about who benefits more, I think that while sophisticated nation-states can put more resources into the technology, on net, drones favor less capable actors. They get more of a relative uplift in power.

Drones are essentially lowering the cost and technological barrier to air power. Before drones, to have air power for surveillance or attack, you needed to build an airplane big enough to put a person in it. Those military aircraft are incredibly expensive to build, maintain, and train pilots for. Drones completely change the economics of that. For a couple hundred bucks, you can put a drone in the air that gives you the ability to see the enemy and call in an artillery strike, which is incredibly valuable. Both Ukraine and Russia are able to field air power in the form of drones in a way that would be cost-prohibitive with crewed aircraft.

You’re right that there are already counter-drone defenses. A funny anecdote: a buddy of mine shot down a drone in Iraq back in 2004. He was all excited. I had to tell him, “You know that was one of ours, right?” He said it didn’t matter; he got a drone kill.

We’re seeing more investment in counter-drone technologies. There’s a lot of jamming on the front lines in Ukraine because the communications link to a remote pilot is a point of vulnerability. There are lots of ways to go after a drone. You can shoot it down with expensive missiles for large drones, or with bullets for smaller ones, but you still need a way to find it and aim effectively. People are developing systems for that, as well as drones designed to intercept other drones.

Lisa 00:37:51

If I may interrupt—one thing we’re seeing in Ukraine is the emergence of increasingly autonomous drones that will no longer be susceptible to jamming. They can navigate by landscape features and don’t need to communicate with a base. What do you do with that?

Between that and first-person view (FPV) drones using long fiber-optic cables to communicate, it seems like a game of cat and mouse where the options for countering drones keep narrowing. It seems that drones that attack other drones will have to be an increasingly important strategy.

Paul Scharre 00:39:14

You’re exactly right. These are the innovations we’re seeing, and you can see the direction the technology is evolving. Autonomy is a good example. The communications link is vulnerable, so you jam it. In response, you integrate autonomy into the drone.

When I was in Ukraine last year, I saw a demo of what they called an “autonomous last mile” solution, which is basically autonomous terminal guidance. The human pilot locks onto a target, and then, as the pilot demonstrated, literally takes his hands off the controls. The drone follows the target, even a moving one, for the final attack. That helps if there’s jamming in the last kilometer or so.

You can see the next evolution of that: a drone that can just hunt over a wider area and find targets on its own. That will lead to other countermeasures. If the drones are fully autonomous and you can’t jam the link, you have to find a way to shoot them down, fry their electronics with high-powered microwave weapons, or blind their optics with lasers.

Nuño Sempere 00:40:46

If you make the drones fully autonomous locally, doesn’t that substantially raise the quality of the chips they’ll need?

Paul Scharre 00:41:01

You don’t need a fancy Nvidia GPU to do this. The “fancy” version would be to use a machine learning-based image classifier to identify objects like a Russian tank. The “unfancy” version is to just do a pixel lock. You lock onto a group of pixels against a background and keep them centered until you collide with the target. That doesn’t require a lot of sophistication. I don’t need to know what the object is; I just need an algorithm that can identify the boundaries of the object I’ve locked onto.
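[Editor's note: for readers curious how simple the "unfancy" version can be, here is a minimal sketch of a pixel lock using OpenCV template matching. The class, names, and parameters are illustrative assumptions, not a description of any fielded guidance system.]

```python
# Minimal sketch of a "pixel lock": grab the patch of pixels the operator locked
# onto, find where that patch has moved in each new frame, and report how far
# off-center it is so a controller can steer to keep it centered.
import cv2

class PixelLock:
    def __init__(self, frame, cx, cy, half_size=24):
        # Template: the block of pixels around the operator's chosen point.
        self.template = frame[cy - half_size:cy + half_size,
                              cx - half_size:cx + half_size].copy()
        self.half_size = half_size

    def update(self, frame):
        # Slide the template over the new frame; the best match is the target.
        scores = cv2.matchTemplate(frame, self.template, cv2.TM_CCOEFF_NORMED)
        _, best_score, _, top_left = cv2.minMaxLoc(scores)
        tx = top_left[0] + self.half_size  # matched patch center, x
        ty = top_left[1] + self.half_size  # matched patch center, y
        h, w = frame.shape[:2]
        # Offset from the image center is the steering error; driving it toward
        # zero keeps the locked pixels centered until impact.
        return best_score, (tx - w // 2, ty - h // 2)
```

Nothing in this loop knows what the object is, which is the point being made: the lock needs almost no onboard compute.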

Rai Sur 00:41:49

Then who is deciding on that lock? At some point, it has a link back to a human who is saying, “Here’s the pixel blob. Track it and attack it.” And from that point on, it’s autonomous.

Paul Scharre 00:42:03

It depends on how autonomous you want it to be. If you’re talking about autonomous terminal guidance where the human picks the target, you don’t need something very complicated. The vulnerability there is that if the target hides, you might break the lock. If you want something more autonomous where you just launch it and say, “Go get them,” with no human interaction, then you need an image classifier to identify, for example, a Russian tank. But you don’t need a lot of sophistication there either. The question becomes how much compute you need to run an image classifier with a few key object classes. My guess is probably not that much.

Scott Eastman 00:43:02

If you’re dealing with heat signatures, probably even less.

Paul Scharre 00:43:07

Right. If you’re going after tanks, they’re big, hot things out in the open. As long as you box the drone into an area where the enemy is and not your own troops, there probably aren’t a lot of false targets.

Scott Eastman 00:43:27

I was thinking of a problem that is complicated for humans and would also be complicated for AI. In the background of the Ukraine war, there’s always the threat that if things go too horribly for Russia, they might use tactical nuclear weapons. If it’s just a chess game of winning territory, an AI could probably do better than some humans. But at what point would a human need to get in the way and say, “We don’t want to go past a certain point because it may create a massive political backlash that goes beyond a tactical victory”?

Paul Scharre 00:44:17

I agree. This is an area where it’s hard for people—security analysts and political experts disagree about how much the US should be willing to push Putin, for example. We should expect AI to do quite poorly here for a couple of reasons.

One is that war games using large language models have shown them escalating needlessly, sometimes just playing out a narrative script. But setting aside the weird neuroses of LLMs, the more fundamental reason AI would do poorly is that there’s no good training data. It’s not like we have 10,000 instances of military conflict and can see which ones led to nuclear war. Human experts look at the history of nuclear brinksmanship and come to wildly different conclusions. Some think nuclear deterrence is solid, while others think we’ve just been really lucky. These are areas where we should expect AI to do very poorly.

Rai Sur 00:46:06

This ties into the race-to-the-bottom dynamics. There are many advantages here, but there’s a tension for any actor adopting this technology between getting the benefits and understanding that the more power they give it, the more likely they are to encounter a bad, out-of-distribution outcome. How likely is it that we see some coordination or intentional pullback? Is the equilibrium just to go as fast as possible?

Nuño Sempere 00:46:59

That question assumes that as you give AI more power, the error rate increases. But humans also have an error rate, and it’s not clear that you do get a race to the bottom, because the system might just become more accurate than humans.

Rai Sur 00:47:22

We do see examples of accidents when using machine learning in military applications, like adversarial examples that can fool image classifiers. These could be deployed to take advantage of an enemy’s model. I’m extrapolating from the existence of these errors in past systems to assume they will remain at higher levels of capability.
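[Editor's note: one concrete instance of the failure mode mentioned here is the textbook fast gradient sign method (FGSM), which perturbs an image just enough to flip a classifier's prediction while looking unchanged to a human. This is a generic sketch in PyTorch with placeholder model and inputs; the speakers do not discuss FGSM specifically.]

```python
# Minimal sketch of the fast gradient sign method (FGSM): nudge every pixel a
# small step in the direction that most increases the classifier's loss.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, true_label, epsilon=0.03):
    # image: (1, C, H, W) tensor in [0, 1]; true_label: (1,) class index tensor.
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    # Step each pixel by epsilon in the sign of the loss gradient.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()
```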

Nuño Sempere 00:47:55

But the US accidentally dropped a nuclear bomb off the coast of Spain. And Waymos are far safer than human drivers.

Rai Sur 00:48:13

I think the difference there is design versus growth. The US dropped the bomb because a mechanism they designed failed. They had a good story for why that mechanism should work because the system is decomposed into smaller pieces that humans designed and understand. The error is in execution, not in understanding the system’s design. With machine learning systems, you are “growing” something based on feedback from a training distribution, so there’s more uncertainty in the method itself.

Nuño Sempere 00:49:02

The case of a self-driving car that’s safer than humans sort of sidesteps all that. Even if a battlefield is far more chaotic than the streets of San Francisco, humans still make mistakes. I’m curious for Paul’s perspective: where do you think it makes sense to hand over more decision-making now and in the next few years?

Paul Scharre 00:50:13

Let’s consider two archetypes: the self-driving car where AI gets better than humans, and the war-bot that goes berserk. The conditions under which each of those worlds becomes true are different, and you can find military applications that match both.

For self-driving cars, several factors favor improving their performance to be better than humans. First, the baseline for human driving is terrible. Second, we can get real-world feedback by putting cars on the road in their actual operating environment. We can collect that data, run it in simulations with varied conditions, and build up a very large training data set. The environment is semi-adversarial—people might cut you off, but they aren’t generally trying to hit you.

Some military applications fit this model, like automated takeoff and landing for aircraft on a carrier. We’ve demonstrated we can do that, and there’s no reason to have humans doing it anymore. In any area where we can get good training data and have clear metrics for performance, we are likely to be able to train AI systems to do better than humans.

The places where we’ll see AI struggle are where we don’t have good training data, where the metric of performance is context-dependent, or where it requires human judgment. For example, a drone looking at an object in a person’s hands—is it a rifle or a rake? That’s a question with a factually correct answer. I can build a training data set for that and test it.

But a decision like, “Is this person a valid enemy combatant?” is super hard. It’s very context-dependent. Are they holding a rifle? That doesn’t necessarily mean they’re a combatant. Are they not holding a rifle? They could still be a combatant. Context, human judgment, and what was happening in the area a few minutes ago all matter. It’s hard to build a training data set for that. The same goes for things like a surprise attack. That’s an area where we would expect AI systems to fail, and where I would want militaries to be much more cautious.

Nuño Sempere 00:55:40

What’s a concrete example of this “race to the bottom” dynamic?

Paul Scharre 00:55:59

One example could be lethal autonomous weapons in Ukraine. Let’s fast-forward a few years. The jamming environment becomes so heavy that both sides decide it’s better to deploy unreliable autonomous weapons than no drones at all. You might see accidents where they strike the wrong target, but it’s deemed a necessary risk. Then that technology gets used in other conflicts where there are more civilians, leading to casualties.

Another concern could be around safety. The US builds an autonomous drone boat and sends it to the South China Sea. China races to get its drone boat out there. Then the US decides it needs to deploy a swarm of boats. Both sides might cut corners on safety because they are in a race to get ahead of the other. The institutional incentives for militaries are all about staying ahead of their competitors. For a proof of concept, look at what leading AI labs are doing today: rushing to push out products that are not reliable and safe just to get ahead of competitors.

Lisa 00:58:13

We’ve seen the importance of drones in Ukraine. Looking ahead to a potential invasion of Taiwan, what do you see as the potential for drone usage on both sides? Do you have a sense of where their capabilities are? China is learning lessons from Russia and is effectively training a large number of drone operators through civilian industries.

Also, what potential threats could China pose to the United States in that situation? We know Chinese vessels could position themselves at our ports—ostensibly civilian cargo ships—with containers full of anything from missiles to drones. Where do you see that going?

Paul Scharre 00:59:40

A colleague of mine, Dr. Stacie Pettyjohn, wrote a report on exactly this issue called Swarms Over the Strait. For anyone interested, I’d say go check out that report.

The fundamental challenge in any US-China fight over Taiwan is the asymmetry of geography. Taiwan is an island 100 miles off the coast of the Chinese mainland. The US has to project power across the entire Pacific Ocean. That gives China a huge advantage in projecting air and naval power. They have layered air defense systems right across the strait and aircraft well within range. The US has to rely on a limited number of aircraft carriers, which are vulnerable targets, or long-range bombers.

This tyranny of geography has huge implications for drones. In Ukraine, you’re right on the front lines, so you don’t need long-range drones. That won’t work for the US in a Taiwan conflict. Any drone the US uses has to be brought to the fight, either on ships or in submarines, which means they’ll need to invest in more expensive drones. Some could be pre-positioned on Taiwan, but that geographic asymmetry is very hard to overcome.

For Taiwan, small drones could be very beneficial for targeting Chinese troops if they land on the beaches. For the US, there’s also the problem of resupply. In Ukraine, the US is arming Ukrainians across a land border with Poland. None of that exists for Taiwan. Drones can certainly be helpful, especially if you narrow the operational problem to finding and sinking Chinese invasion ships in a limited time and area. But the geography is a huge factor.

Lisa 01:04:43

I was also wondering about the importance of pre-positioning drones, and whether China might pre-position drones at US ports as part of a threat. If they were to invade Taiwan, they might line up a number of threats to the US—cyber threats, threats at our ports—and say, “Look, you can let us have Taiwan, or all of these things will happen.” I wonder about the possibility of drones being pre-positioned at US ports in that kind of scenario.

Scott Eastman 01:05:29

That seems like the situation we’re already in, where China, Russia, and America can all hit each other with nuclear weapons. It’s a political decision about whether they are willing to risk escalation. Certain things, like attacking the leaders of other countries, are usually kept off the table, but there’s nothing that says they have to be. A country could decide Taiwan is important enough to take that risk.

Paul Scharre 01:06:03

One of the interesting things about drones that changes this political calculus is that you can put them into a contested area without risking a person. We see this with recent Russian drone incursions into Poland. It creates a rung on the escalation ladder that didn’t exist before. Flying a piloted aircraft into Poland would be seen as more escalatory.

Drones allow for a sort of sneaky escalation, similar to behavior in cyberspace. Hacking and causing monetary damages is not seen as being as escalatory as lobbing missiles that cause the same amount of damage. It’s a psychological issue, but we’re starting to see that drones create this ambiguous space that countries are sometimes exploiting to escalate.

Rai Sur 01:07:29

Paul, what are your big areas of uncertainty here? This might be something our forecasters can weigh in on.

Paul Scharre 01:07:41

My biggest uncertainties are probably in the direction of frontier AI progress overall. We’ve heard arguments that deep learning is about to hit a wall, but I don’t know if that’s true. I also wonder to what extent large amounts of compute are necessary to continue advancing the frontier, how rapidly things will diffuse, and how much the time gap matters. A lot of the big labs are betting on a first-mover advantage, and maybe that will be true, or maybe it won’t. As a security analyst, I’m mostly worried about the diffusion of dangerous AI capabilities.

What does this mean for offensive cyber operations or the development of biological weapons? How hard is it to make biological weapons? There’s contention among experts. Some say it’s very hard, noting the Soviets struggled despite massive investment. Others are worried that AI might enable more sophisticated agents or empower less capable actors to make nasty things relatively easily. How much of the required knowledge is tacit and not written down? And how effective would automated cloud labs be at automating that kind of physical knowledge? I just don’t know.

Perhaps the most important question I have is to what extent we will get warning shots that people believe ahead of really bad outcomes. Some of these effects are super non-linear. We all lived through a global pandemic, which shows how you can get these non-linear effects where a pathogen fizzles out, until one is transmissible and lethal enough that millions of people are dead. Do you get warning shots ahead of a really, really bad catastrophe with AI systems?

Nuño Sempere 01:11:55

That’s a really interesting focusing question. Conditional on a big catastrophe, have we seen one that’s at least 10 times smaller? I’m not sure how I’d answer it right now, but I’m happy to think about it.

Paul Scharre 01:12:17

For example, take an AI takeover scenario. If it comes as a bolt from the blue that nobody ever saw coming, that’s one type of problem. It’s a very different problem if it comes after 10 or 15 years of dealing with AI agents running corporations, doing nefarious things, and attempting hostile takeovers, allowing us to build up societal defenses to keep these agents in a box.

Nuño Sempere 01:12:55

This comes back to what you said before about AI agents displaying much greater aggression in a poker game. You could delegate more capability, and then the correct move is to just magically escalate. I’m not sure how much I believe in that.

Rai Sur 01:13:16

This question of diffusion and adoption is interesting. You wrote in your book about how the gulf between the private sector and the Department of Defense was really wide. It became clear to tech companies that if they cooperated too closely with the defense establishment, they could face a mutiny from within. Has that gap narrowed since then?

Paul Scharre 01:13:50

There was a moment of panic in the national security community several years ago when Google discontinued its work on the Defense Department’s Project Maven, which used AI to process drone video feeds. There was an uproar from a vocal subset of Google employees, which caused Google’s leadership to pull out of the project. At the time, there was a lot of anxiety in the Pentagon about being locked out of this game-changing technology.

I think what we’ve seen since then is that this cultural barrier is perhaps not as severe as was feared. All of these companies have continued to work with the US military, including Google. The bigger barriers to military adoption are more mundane: the acquisition and procurement system, the “color of money” that codes funds for specific purposes, and the standard practice of filing a protest when you lose a big contract, which freezes everything for years.

I remember talking to one company that provided computing services to the Defense Department. They saw the need to get in the queue to buy state-of-the-art Nvidia GPUs, but they couldn’t because they didn’t have a formal demand signal and money from the DoD yet. By the time that came through, it was too late. I think those are the much more significant barriers.

An anecdote that blew my mind was when I was talking to people at the DoD’s Joint AI Center. They were trying to build an application using drone video feeds from California wildfires to map the fire boundaries in real time for firefighters on the ground. They got the data, but then found out there are two different unclassified Defense Department computer networks, and DoD IT policy did not permit them to transfer the data from one to the other. They ended up just downloading the data onto hard drives and mailing it across the United States. They found a workaround, but is that the best way for us to be doing business? Probably not.

Nuño Sempere 01:17:51

A related question has been on my mind for a while. Is this a good time to do a drone startup? I was thinking I could convince some friends to do a drone startup in Spain, but then I might get an arson attack from Russia, which doesn’t seem great.

Rai Sur 01:18:21

What kind of drones are you trying to build, Nuño?

Paul Scharre 01:18:23

I’m not going to give business advice, but let me share what I heard from Ukrainians when I was there last year. First, if you want to break into the commercial drone market, you’ve got to compete against DJI, which is a tough challenge.

In the military space, it’s a little different. Ukrainians are building a lot of do-it-yourself drones. I heard a lot of complaints about US drones being over-engineered and having development cycles that were too slow. In Ukraine, I met with drone developers who were in real-time chats with troops on the front lines about the performance of their drones. That feedback loop is incredibly valuable.

One of the problems they’re running into on the ground is sourcing components. They’re building the drones themselves but are sourcing a lot of the componentry, like high-precision electric motors, from China. They’re nervous about being reliant on Chinese components that they might get cut off from. So there are opportunities in the drone space, or in other areas like componentry, optics, electronic warfare, and drone-finding technology.

Lisa 01:20:39

You’ve talked about a whole range of possible future developments. Is there anything else you see as a potential development or risk that we aren’t even thinking about today?

Paul Scharre 01:20:57

One thing I wonder and worry about is the slow ceding of human authority to AI. We’ve talked about the military dimensions and the near-term dangers of the transition to superintelligent AI. But there’s this longer-term issue that if we make it through the next 50 years, we could end up in a world where humans are no longer really in control of human destiny.

You can already see vast, impersonal forces in the world that drive the economy and markets, which are out of the control of any one person. But at least there are humans at all the nodes. We try to grapple with what the economy is doing and how to avoid a depression. I don’t even know how to begin to think about that in a world of AI.

Nuño Sempere 01:22:41

I have a perspective for you. I’m currently living in Paraguay, and Paraguay doesn’t control the world—America does. Paraguay is subject to global trends like COVID or the depreciation of the dollar, but it does fine. Maybe that’s a possible future, where the US moves down the ladder and instead of China being the superpower, it’s the cumulative AI agents that are deployed. But if those systems still have a human-centric bias, maybe we’ll be okay.

Rai Sur 01:23:38

For our listeners, I would refer you to the “Gradual Disempowerment by AI” paper. I think the disanalogy with Paraguay is what Nuño said about things remaining human-centric. As long as everything is human-centric, you get these ripple effects where Paraguay is useful to its neighbors, who are eventually useful to the US, and it all grounds out in human needs. As you gradually cede control to AI, that may no longer be the case.

Nuño Sempere 01:24:37

A less positive response is that at one point, the US installed a tyrannical dictatorship in Paraguay, so the analogy isn’t only positive.

One area we didn’t get into was the political battlefield and AI’s impact on hearts and minds. There is clearly hybrid warfare going on, from Russia throughout Europe and America, and other countries are engaged in it as well. AI is increasingly good at influencing large numbers of people—how they vote, how they perceive their government, or the threats around them. But at some point, people may also become increasingly resistant and start to not trust anything. I’m curious how much you view that as a major part of AI in warfare or power structures.

Paul Scharre 01:26:21

I feel like my intuitions in this space are garbage. When I look back at social media, I feel like I got it totally wrong. I imagined that a world with radically democratized communication would be beneficial for society. While there are some positive aspects, in general, the social media landscape seems to have had a destabilizing effect on politics.

At the micro level, you can envision a world where people get a lot of their information from AI, and that could happen very fast. We could end up there in just a couple of years. That can go south in many ways. If social media can trap people in a bubble, language models seem to have the ability to trap people in hyper-miniaturized and hyper-weird bubbles and encourage disturbing behavior.

At the macro level, just like the algorithms for social media can have systemic effects and biases, AI systems could do that in ways that are really hard for us to detect or are disruptive to society. I don’t know what that nets out to, but it seems like a real possibility that AI could upend information flows in a way that’s very disruptive to politics, culture, and everything else.

Rai Sur 01:28:45

Well, I think that’s a great place to wrap it. Paul, thank you for coming on. What is your X handle and where can people find your book?

Paul Scharre 01:28:55

I’m @Paul_Scharre on X, and you can find my book, Four Battlegrounds: Power in the Age of Artificial Intelligence, anywhere books are sold.

Rai Sur 01:29:18

There’s a bunch of stuff in the book we didn’t talk about, for example, authoritarianism, strict surveillance, and AI’s impact on that. I recommend the read. I enjoyed it. Thank you, Paul, for coming on.

Paul Scharre 01:29:34

Thank you all. It’s been a great discussion. I’ve really enjoyed it.

Rai Sur 01:29:37

If you enjoyed this episode, please consider sharing it with someone. Most of the benefit of forecasting comes from influencing decisions, and to do that, we need to grow our reach. If you want to speak to us about anything, send us an email at podcast@sentinel-team.org.
