Artificial intelligence is collapsing the cost of covert action while multiplying its strategic rewards. For states willing to operate in the shadows, the calculus has never been more favorable — and the world should be paying attention.
In the early hours of May 2, 2011, two Black Hawk helicopters carrying two dozen Navy SEALs crossed from Afghanistan into Pakistani airspace without authorization, descended on a compound in Abbottabad, and killed Osama bin Laden. The operation was a masterpiece of precision: months of painstaking intelligence work, a narrow window of action, and a team of elite operators who had rehearsed every contingency. It was also extraordinarily expensive — not in blood or treasure alone, but in the vast institutional machinery required to make it possible. The CIA’s decade-long hunt, the satellite coverage, the signals intelligence, the human networks: all of it represented an investment that only the world’s most powerful intelligence apparatus could sustain.
That era is ending. The machinery that once made such operations the exclusive province of a handful of great powers is being commoditized by artificial intelligence. The cost of conducting a sophisticated special military operation is falling sharply, while the precision, speed, and strategic leverage such operations can deliver are rising in tandem. The result is a structural shift in the logic of conflict — one that will make Special Military Operations (SMOs) not merely more common, but effectively irresistible as instruments of state and non-state power.
The Intelligence Bottleneck, Dissolved
For most of modern history, the limiting factor in special operations was not the quality of the soldiers but the quality of the intelligence. Identifying a high-value target, mapping their pattern of life, understanding the physical environment, and timing an operation to exploit a narrow window of vulnerability — these tasks required enormous human and technical resources. Signals intelligence analysts, imagery interpreters, human intelligence networks, and fusion centers staffed around the clock: the overhead was staggering, and the timeline from identification to action was measured in months or years.
AI is dismantling this bottleneck with remarkable speed. Machine learning systems can now process satellite imagery, intercept and parse communications, cross-reference biometric databases, and synthesize open-source intelligence at a scale and velocity that no human team could match. The US Special Operations Command (SOCOM) has made this transformation a formal priority, issuing requests for industry capabilities in facial recognition, speaker identification, and DNA profiling — all aimed at enabling operators to process intelligence gathered during raids in near real time, generating follow-on targeting packages within hours rather than days. The ambition is explicit: to compress the sensor-to-shooter timeline from a process measured in weeks to one measured in minutes.
The Israeli military’s deployment of its “Lavender” and “Gospel” AI systems during operations in Gaza offered the world its first extended look at what this transformation means in practice. Lavender, a machine-learning system trained on the behavioral signatures of known militants, was used to generate target lists at a scale and speed that would have been inconceivable using traditional analytical methods. Gospel, its companion system, automated the identification of physical infrastructure targets. Whatever one’s view of the ethical and legal questions these systems raise — and those questions are serious — they demonstrated beyond doubt that AI can perform in hours the analytical work that once took months. The intelligence bottleneck, the great limiting factor of special operations, is dissolving.
This is not merely an American or Israeli phenomenon. Global military spending on AI is estimated to have doubled from $4.6 billion to $9.2 billion between 2022 and 2023, and is projected to reach $38.8 billion by 2028. China has made “intelligentization” — the integration of AI into military operations — a formal pillar of its military modernization strategy under Xi Jinping. Russia, despite its technological limitations, has pursued AI-enabled targeting in Ukraine. The democratization of foundation models, the large general-purpose AI systems developed by companies like Anthropic, Google, and OpenAI, has further lowered the barriers to entry: smaller states can now license proprietary models for military applications rather than building their own from scratch.
The Cost Equation Rewritten
The economic logic of special operations has always been compelling in theory. A small team of elite operators, acting on precise intelligence, can achieve effects that would otherwise require a conventional force many times larger. The problem has been that the intelligence infrastructure required to enable such precision was itself enormously expensive, effectively negating the cost advantage. AI changes this equation in two ways.
First, it dramatically reduces the human capital required for intelligence processing. A task that once required a team of fifty analysts working for three months can now be accomplished by a handful of operators working with AI tools in a fraction of the time. The Belfer Center’s 2025 study on military AI noted that foundation models have “lowered the entry barrier for military integration of AI for states with smaller economies,” creating opportunities for actors that previously lacked the resources to sustain sophisticated intelligence operations of this kind.
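To make the scale of that labor reduction concrete, here is a rough back-of-envelope sketch. The traditional figures (fifty analysts for three months) come from the text above; the AI-augmented figures (five operators for roughly two weeks) and the working-hours assumptions are illustrative estimates, not reported data.

```python
# Back-of-envelope comparison of intelligence-processing labor.
# Traditional figures are from the text; AI-augmented figures are
# assumed for illustration only.

HOURS_PER_DAY = 8        # assumed standard working day
WORK_DAYS_PER_MONTH = 21 # assumed working days per month

def analyst_hours(people: int, months: float) -> float:
    """Total labor, in analyst-hours, for a team of a given size and duration."""
    return people * months * WORK_DAYS_PER_MONTH * HOURS_PER_DAY

traditional = analyst_hours(50, 3)     # 50 analysts for 3 months
ai_augmented = analyst_hours(5, 0.5)   # assumption: 5 operators for ~2 weeks

print(f"Traditional:  {traditional:,.0f} analyst-hours")   # 25,200
print(f"AI-augmented: {ai_augmented:,.0f} analyst-hours")  # 420
print(f"Reduction:    {traditional / ai_augmented:.0f}x")  # 60x
```

Even under conservative assumptions about the AI-augmented team, the labor requirement falls by well over an order of magnitude, which is the structural point the cost argument turns on.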
Second, AI-enabled autonomous systems — drones, in particular — are reducing the physical risk to operators themselves. Drone swarms can now be deployed for surveillance, target confirmation, and in some cases direct action, with limited human oversight. When the human cost of an operation falls, so does the political cost. Governments that might hesitate to risk the lives of their soldiers in a covert mission face a different calculation when the primary risk is to a machine. The moral and political threshold for authorizing action drops accordingly.
The Belfer Center’s analysis was direct on this point: “the deployment of autonomous AI systems may reduce political and moral thresholds for engaging in military conflict as the human costs of warfare, at least with the exclusion of civilian casualties, diminish.” This is not a theoretical concern. It is a structural change in the incentive landscape facing every government that possesses or can acquire these capabilities.
The Rewards, Amplified
If AI is lowering the costs of SMOs, it is simultaneously amplifying their potential rewards. The strategic value of a well-executed special operation has always derived from its ability to achieve effects disproportionate to its scale — eliminating a key adversary, disrupting an enemy’s command structure, gathering intelligence that reshapes the strategic picture. AI makes each of these outcomes more likely by improving the precision and timeliness of the operations themselves.
Consider the targeting cycle. In the Global War on Terror, the most effective use of special operations forces was not the dramatic direct-action raid but the grinding, iterative process of sensitive site exploitation: the intelligence gathered during one raid would generate the targeting package for the next, which would generate the next, in a continuous cycle that dismantled terrorist networks node by node. AI accelerates this cycle dramatically. What once required a full night’s work by a team of analysts can now be accomplished in minutes, enabling multiple follow-on operations within a single night. The compounding effect on network disruption is profound.
Beyond targeting, AI enhances the strategic value of SMOs through its impact on information operations. The ability to rapidly analyze and exploit captured communications, documents, and digital devices gives operators access to intelligence that can reshape entire campaigns. SOCOM’s investment in AI-powered sensitive site exploitation — including real-time DNA profiling to match captured individuals against existing databases — reflects a recognition that the intelligence value of a single successful operation can far exceed its immediate tactical effect.
The Atlantic Council’s 2024 assessment of US Special Operations Forces in strategic competition identified a further dimension of this reward amplification: the ability of SOF to “shape the strategic environment” through operations that fall below the threshold of armed conflict. In the language of modern strategic competition, this means the ability to influence the political, informational, and military landscape of a rival’s sphere of influence without triggering a conventional military response. AI makes such operations more precise, more deniable, and more effective — a combination that is, from the perspective of any state seeking strategic advantage, extraordinarily attractive.
The Grey Zone Expands
The rise of AI-enabled SMOs is unfolding within a broader geopolitical context that makes them even more appealing. The concept of “grey zone” warfare — conflict that falls below the threshold of traditional armed conflict, characterized by ambiguity, deniability, and the use of non-military and paramilitary means — has become the dominant framework through which great powers and regional actors alike are pursuing their interests.
In the grey zone, the rules of engagement are undefined and the mechanisms of deterrence are weak. States can conduct coercive activities against rivals below the threshold likely to trigger a costly military response, exploiting the ambiguity of attribution to avoid accountability. Russia’s use of the Wagner Group as a proxy force across Africa and the Middle East, China’s deployment of maritime militia in the South China Sea, and Iran’s cultivation of proxy networks across the Levant are all expressions of this logic. AI does not create the grey zone, but it makes operations within it more effective and more difficult to attribute — two qualities that are essential to the grey zone’s appeal.
The Arms Control Association has noted that autonomous AI systems “reduce the risks to a state’s own soldiers” and “may reduce the political threshold for deploying or using force.” When the risk of exposure and the human cost of failure both decline, the grey zone expands. More actors will operate within it, more frequently, and with greater ambition.
The Democratization of Lethality
Perhaps the most unsettling dimension of this transformation is that it is not limited to states. The same technological forces that are empowering governments to conduct more effective SMOs are also empowering non-state actors. The Hamas attack of October 7, 2023, was a watershed moment in this regard. Researchers writing in War on the Rocks documented how Hamas had developed what they termed a “non-state special operation” — a carefully planned, multi-domain assault that achieved strategic effects far beyond its tactical scale, exploiting the growing democratization of technology to generate military capabilities previously reserved for states.
The Belfer Center’s analysis was unsparing: “the democratization of AI through the distribution of sophisticated open-source models allows non-state actors, including terrorist groups and armed militias, to acquire greater capability to inflict damage.” As foundation models become more widely available and as the cost of autonomous drone systems continues to fall, the barriers to entry for conducting sophisticated military operations will continue to erode. The future may well be one in which a wide range of actors — state and non-state, large and small, with resources ranging from the lavish to the modest — are capable of conducting operations that were once the exclusive province of the world’s most powerful militaries.
A World of Permanent Shadow War
The convergence of these trends points toward a future in which SMOs become a near-permanent feature of international relations — not episodic crises but a continuous, low-level struggle conducted in the shadows, punctuated by moments of acute violence and strategic disruption. The table below summarizes the structural shift underway.
| Factor | Pre-AI Era | AI-Enabled Era |
| --- | --- | --- |
| Intelligence processing time | Weeks to months | Hours to minutes |
| Analyst requirements | Large dedicated teams | Small AI-augmented teams |
| Operator physical risk | High | Reduced by autonomous systems |
| Political cost of action | High (human casualties) | Lower (machine-mediated risk) |
| Attribution difficulty | Moderate | High (AI obscures signatures) |
| Barrier to entry (state actors) | Very high | Moderate |
| Barrier to entry (non-state actors) | Very high | Declining rapidly |
| Strategic reward per operation | Limited by intelligence lag | Amplified by real-time exploitation |
This is not a future that policymakers have adequately prepared for. The international regulatory frameworks governing military AI remain fragmented and non-binding. The Belfer Center’s study found that 129 nations advocate for a legally binding agreement on lethal autonomous weapons systems, but a handful of the most powerful states — including the United States, Russia, and the United Kingdom — actively oppose one. The gap between the pace of technological change and the pace of governance is widening, and it is in that gap that the next generation of shadow wars will be fought.
The old deterrence model, built on the threat of massive retaliation and the clarity of state-on-state conflict, is poorly suited to a world of AI-enabled SMOs. When operations are deniable, when attribution is contested, and when the human cost to the aggressor is minimal, the calculus of deterrence breaks down. The world is entering an era in which the cheapening of covert action is making conflict, in its many forms, structurally more likely — and the institutions designed to prevent it have not yet caught up.