AI at the Frontlines
How conflict, capital, and power dynamics are being redefined in real time
If you feel like the world’s broken, you’re not wrong. We have seen systems collapse with predictable regularity: financial crashes, pandemics, wars, climate disasters. Each failure teaches the same lesson. Traditional institutions move too slowly for modern problems.
In 2025 alone, three geopolitical fault lines cracked open simultaneously. Ukraine deployed 10,000 AI-guided drones that destroyed $7 billion worth of Russian aircraft. Israel's Lavender system scored 2.3 million Gaza residents for assassination risk in seconds. Pakistan neutralized India's numerical advantage using Chinese drone swarms and smart missiles that think faster than human commanders can react.
These events signal the new normal, where wars aren't declared but deployed. Built on code, guided by data, and amplified by autonomy. AI shapes modern conflicts, and the results are disturbing.
This transformation appears sudden, but decades of development preceded the explosion (see graph below). While we dismissed the dangers as science fiction, algorithms grew in the background until they burst into view this year. As AI became mainstream, it rewrote military doctrine in real time. The question now becomes: are you prepared for what comes next?
Current landscape: defense tech as the new frontier
The last decade saw software (including SaaS) eat the world. This decade watches software arm it.
Four conflict zones reveal how artificial intelligence platforms became the new force multiplier:
Case #1: Cyber-signal warfare precedes kinetic strikes
The current Iran-Israel conflict depicts a hybrid warfare domain where cyber-signal probing precedes physical strikes. Following Israel's Operation Rising Lion on June 13, 2025, which targeted Iranian nuclear and military infrastructure, Iran escalated cyber operations. The attacks focused on espionage, DDoS strikes, ransomware, and wiper malware against Israeli critical infrastructure.
A notable counter-cyberattack disrupted Sepah Bank, Iran's largest financial institution. ATM access and gas station operations failed nationwide. Iranian hackers infiltrated Israeli missile defense networks, triggering false air raid sirens and spreading panic without launching physical projectiles.
The consequence: psychological warfare that costs pennies per person affected, compared to millions per missile. Traditional defense budgets become irrelevant when panic spreads faster than projectiles.
Case #2: AI-guided drone warfare is deployed at scale
Earlier this month, a Ukrainian drone offensive destroyed 41 Russian aircraft, including strategic bombers and rare A-50 reconnaissance planes. Damages reached $7 billion. The drones included the AI-powered "mother drone" from the Brave1 defense tech cluster, which delivers smaller strike drones up to 300 kilometers behind enemy lines using visual-inertial navigation and LiDAR.
The AI identifies, selects, and destroys high-value targets with 70-80% first-strike accuracy. Human oversight becomes optional when machines operate at this speed and precision.
The implication transforms military procurement forever. Why spend $90 million on a fighter jet when $50,000 drones achieve similar destruction rates? Every defense contractor watching these results now faces existential questions about their product lines.
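The procurement math behind that question can be sketched in a few lines. This is a back-of-envelope comparison using only the illustrative figures cited above ($90 million per fighter jet, $50,000 per drone), not actual procurement data:

```python
# Back-of-envelope cost-exchange comparison using the figures cited above.
# Both prices are illustrative numbers from this article, not procurement data.

fighter_jet_cost = 90_000_000  # USD, one fighter jet
drone_cost = 50_000            # USD, one strike drone

# How many drones can be fielded for the price of a single jet?
drones_per_jet = fighter_jet_cost // drone_cost
print(drones_per_jet)  # 1800
```

At that exchange rate, even modest per-drone success probabilities overwhelm the economics of manned aviation, which is exactly the existential question contractors now face.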
Case #3: Machine learning targets civilian populations
In Gaza, the Israel Defense Forces (IDF) employed the 'Lavender' machine-learning system to identify targets within seconds. Lavender processes surveillance data on 2.3 million residents to assign each a 1-100 score for suspected militant affiliation, flagging individuals for potential assassination with minimal human oversight.
The speed differential makes human judgment optional, not required. Algorithms decide who lives and dies faster than humans can intervene.
Case #4: AI-enabled hardware neutralizes conventional advantages
Pakistan's victory in May's confrontation with India reveals something profound about modern warfare. The country deployed drone swarms, smart missiles, satellite intelligence, and autonomous logistics to defeat a larger, wealthier adversary.
On May 9, Pakistan launched 400 to 500 drones that mapped India's air defense systems. Smart missiles followed, guided by Chinese Wing Loong II surveillance aircraft. Pakistan's PL-15E missiles used artificial intelligence to track targets and resist jamming.
India's military advantage vanished when decision-making accelerated from hours to seconds.
What this shift signifies
For decades, defense meant tanks, jets, missiles and troop formations. Today, the premium falls on compute, inference and autonomy. These systems cause destruction equal to conventional military engagements, sometimes exceeding it.
This shift has lowered barriers for tech firms previously excluded from defense contracts. Consequently, markets responded by creating an intelligence-industrial complex: a web of state actors, dual-use startups, and institutional investors betting on AI as the new doctrine of war and deterrence.
Companies like Anduril, Shield AI, Palantir, Rafael and CETC build defense-focused cognitive systems that perceive, decide, and act with minimal oversight. Many operate without even waiting for human input.
The market shifts reflect operational realities. RAND and Future Today Institute (FTI) forecasts estimate global defense-AI spending will grow from $11 billion in 2024 to $30-40 billion by 2035. In Q1 2025 alone, dual-use defense startups raised over $5.8 billion in equity. Sovereign wealth and VC funds now treat AI payloads as geopolitical assets.
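The growth those forecasts imply is steep but not exotic. A quick calculation of the implied compound annual growth rate (CAGR), using only the article's cited figures ($11B in 2024, $30-40B by 2035):

```python
# Implied compound annual growth rate for the defense-AI spending forecast
# cited above: $11B in 2024 growing to $30-40B by 2035 (RAND/FTI estimates).

def cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate as a fraction."""
    return (end / start) ** (1 / years) - 1

years = 2035 - 2024  # 11-year horizon
low = cagr(11e9, 30e9, years)   # low end of the forecast band
high = cagr(11e9, 40e9, years)  # high end of the forecast band
print(f"Implied CAGR: {low:.1%} to {high:.1%}")  # roughly 10% to 12% per year
```

A sustained 10-12% annual growth rate is what turns a niche procurement line into a core allocation for sovereign and venture capital.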
Capital now follows those operational realities. AI represents the present of conflict, not the future. Military dominance and capital allocation rules are being restructured in real time.
This intelligence-industrial complex operates under different rules than traditional defense contractors. Speed matters more than scale. Code matters more than steel. The firms that understand this transition are capturing disproportionate returns. Those that don't will find themselves building yesterday's weapons for tomorrow's wars.
But the transformation extends beyond market dynamics. The same technologies reshaping defense budgets are reshaping the nature of conflict itself. What emerges looks nothing like the wars your strategic planning assumes.
Seven signals: what the next decade holds
The patterns visible in 2025's conflicts preview the next decade's strategic landscape. These are not guesses but projections based on current trajectories. The signals below represent inevitable consequences of decisions already made by technologists, investors, and military planners.
If you're positioning for the next cycle of geopolitical competition, these seven developments will determine which strategies succeed and which become obsolete. Each signal represents both opportunity and threat, depending on how quickly you adapt to the new reality.
AI systems will export ideology, not just capability. As stealth carriers of geopolitical ideology, AI systems will shape how client states define threats, assess escalation, and wage war. Language models aligned to one state's doctrine will teach different escalation thresholds than another's. Code becomes foreign policy by other means.
Compute autonomy replaces energy independence. Nations that control chips, models, and inference capacity will control military outcomes. GPU-sharing agreements will matter more than oil treaties.
The feedback loop between war and markets will close. Every successful AI deployment in conflict creates a new speculative floor for investment. However unfortunate that may be, firms that deliver visible battlefield impact are more likely to win capital, turning military success into investor sentiment loops.
The next generation of proxy conflicts will center on compute access. Regional alliances will form not just around defense treaties, but around GPU-sharing agreements and model-hosting protocols. We have already seen this with US controls on chip exports.
AI war crimes will force multilateral regulatory action. Geneva conventions assume human decision-makers. AI systems that predict, prioritize, and execute kill chains operate in a legal vacuum. The first high-profile algorithmic war crime will trigger international intervention, but only after the damage is done.
Disinformation becomes tactical firepower. We have seen AI-powered campaigns in Sub-Saharan Africa swaying votes, delaying peace deals, triggering protests, and escalating internal unrest. Future conflicts will deploy disinformation campaigns weeks before kinetic action to paralyze responses and create divisions inside adversary regimes.
Domestic repression will inherit military AI. In a few years, the same systems optimized for external adversaries could be used to predict, preempt, or suppress domestic unrest, especially in fragile democracies.
Investment meets sovereignty
These seven developments converge on a single point: AI-enabled warfare creates new forms of wealth and new categories of risk. The same technologies that generate defense returns also generate dependencies that threaten the sovereignty those defenses protect.
Early investment creates high returns as new markets emerge in autonomy, surveillance, and cybersecurity. The firms that prove battlefield effectiveness first capture disproportionate market share, turning military success into financial advantage.
This creates a contradiction unaddressed in Sam Altman's reflections in "The Gentle Singularity" and in broader discussions of the future of AI, including Artificial General Intelligence. First, emerging economies access low-cost surveillance systems that strengthen short-term security. The same systems threaten medium-term political autonomy when foreign vendors control the algorithms that define threats and responses. It's like inviting foreign intelligence services to design your national security protocols.
Second, capital flows toward proven battlefield impact. Firms that demonstrate visible military effectiveness attract more investment, creating cycles where successful violence generates financial returns. A bullish portfolio now includes companies whose stock prices rise when their weapons kill effectively. Market dynamics now reinforce conflict escalation in ways that should disturb any rational investor.
This contradiction between profit and sovereignty is an operational reality for every decision maker allocating capital or designing policy in this space. The question becomes whether market forces can self-correct before they destabilize the systems they're meant to protect.
They won't. Market incentives reward speed and effectiveness, not stability or safety. When profits depend on battlefield performance, traditional safeguards are sacrificed to competitive advantage. This is why guardrails matter more than market mechanisms.
Why guardrails matter
Speed advocates argue that restraint slows innovation. Others assume markets will self-correct. But both positions miss the central problem: when systems operate at machine speed and with minimal human involvement, traditional diplomatic and legal frameworks become obsolete.
Machine speed breaks human oversight. AI targeting systems process surveillance data and recommend targets in milliseconds. Human operators need minutes to verify intelligence. Military personnel exhibit documented automation bias, accepting algorithmic recommendations under time pressure without thorough verification. This speed gap makes meaningful human control impossible.
Algorithmic failures amplify at scale. AI systems experience brittleness, misclassifying civilians as combatants after observing edge cases like militants misusing ambulances. Gender bias flags all military-age males as legitimate targets. Hallucinations trigger responses to nonexistent threats. When systems process thousands of potential targets hourly, small error rates become mass casualty events.
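The scaling argument above is worth making concrete. The numbers below are purely hypothetical (the article gives throughput as "thousands of potential targets hourly" without a specific error rate), but they show how even a seemingly small misclassification rate compounds:

```python
# Hypothetical illustration of how small error rates compound at scale.
# Neither number is sourced: 2,000/hour stands in for the article's
# "thousands of potential targets hourly"; 1% is an assumed error rate.

targets_per_hour = 2000  # hypothetical screening throughput
error_rate = 0.01        # hypothetical 1% misclassification rate

# Misclassifications accumulated over a single 24-hour period
errors_per_day = targets_per_hour * 24 * error_rate
print(errors_per_day)  # 480.0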
Accountability disappears into black boxes. Current AI military systems cannot explain their targeting decisions. When civilian casualties occur, no human can reconstruct why the algorithm selected specific targets. This opacity eliminates deterrence and legal responsibility that traditionally constrained military behavior.
Treaties cannot adapt to dual-use technology. Unlike nuclear weapons requiring rare materials and visible infrastructure, AI runs on commercial hardware. The same chips training chatbots power military targeting systems. Verification becomes impossible when AI capabilities emerge from civilian research labs and deploy through private contractors operating across multiple jurisdictions.
Nuclear command acceleration creates existential risk. AI systems already recommend responses to perceived nuclear threats. Within five years, these systems may control launch sequences. When machines interpret ambiguous signals as existential threats in microseconds, human decision-making becomes physically impossible. This creates first-strike instability beyond Cold War precedents.
New solutions match technological speed. Export restrictions on advanced semiconductors can slow dangerous capability proliferation while updating rapidly as technology evolves. Disclosure mandates create real-time monitoring of military AI deployments. Transparency requirements force explainable system design. These measures operate at technological timescales rather than diplomatic ones, making them enforceable when traditional treaties fail.
What we need to do
Regional stability today requires new system design, not additional treaties. Nation-states have demonstrated their complete disregard for treaties in recent conflicts. The world needs to act fast and adopt structural protocols to prevent catastrophe. These include, but certainly aren't limited to:
Export restrictions on advanced semiconductors. The chips that power military AI systems come from three companies. Taiwan Semiconductor manufactures 90% of advanced processors. ASML provides the only machines capable of producing cutting-edge chips. NVIDIA controls the software stack that trains large AI models. Export controls at these chokepoints can slow dangerous capability proliferation while adapting rapidly as technology changes. Unlike treaty negotiations that take years, export restrictions update within months.
Disclosure mandates for military AI deployments. When defense contractors deploy AI systems affecting civilian populations, governments need real-time visibility. Mandatory reporting creates accountability trails that operate at technological speed. Companies must document algorithmic decision processes, training data sources, and error rates. This transparency forces developers to build explainable systems rather than optimizing purely for performance.
Civic observatories for disinformation attribution. AI-generated propaganda campaigns now launch within hours of geopolitical events. Traditional fact-checking cannot match this speed. Automated detection systems can identify synthetic content, trace distribution networks, and attribute sources in real-time. These observatories provide decision makers with situational awareness when information warfare accelerates.
Shared compute infrastructure for regional resilience. Single-vendor dependency creates strategic vulnerability. When one company controls the computing power that trains military AI models, it controls military capability. Regional compute networks distributed across multiple providers prevent technological colonization. Smaller nations gain access to AI capabilities without surrendering sovereignty to foreign corporations.
These measures create scaffolding for stability when AI becomes the next deterrent. The world needs something closer to a Tallinn Manual 2.0, an updated playbook that aligns platforms and technologies, militaries, and markets when conflict is coded.
Final Thought
We stand at an inflection point. AI is both a growth lever and a global flashpoint. The question is whether we treat it as commerce, conflict, or shared infrastructure.
Three paths remain visible: ubiquitous autonomous systems that eliminate human delay, international transparency protocols that stabilize escalation, or a tech oligopoly that creates digital colonization. Which path prevails depends on institutional design choices and fiscal alignment decisions happening now.
Watch how sovereign funds adjust expectations for dual-use companies. Watch how conflict zones classify AI-powered weapons under international law. Watch whether emerging economies build domestic AI capacity or accept foreign dependency. These decisions will form the next market cycles, diplomatic positions, and accountability frameworks.
The shift from infantry to automation, from steel to software, asks whether our next generation of decision tools will support distributed strength or concentrate ephemeral control.
Think in systems, not silos. The code we write today becomes the conflict we inherit tomorrow.
Subscribe for future notes on systems thinking, strategic execution, and the invisible structures shaping technology, capital, and institutional change.



![All About AI: The Origins, Evolution & Future of AI](https://substackcdn.com/image/fetch/$s_!kgKB!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F97fadb6f-d81e-4903-ac16-5e96f30a9448_1000x563.png)


