Kargu-2: The First Autonomous Killer Drone?

Introduction: The Drone That Crossed a Line

In March 2020, on the outskirts of Tripoli, something quietly historic may have happened.

As Libyan Government of National Accord (GNA) forces pushed back against retreating units of the Haftar Affiliated Forces (HAF), a Turkish-made STM Kargu-2 drone reportedly engaged fleeing targets—without any known human operator in the loop. According to a United Nations Panel of Experts report on the conflict, the loitering munition was “programmed to attack targets without requiring data connectivity between the operator and the munition: in effect, a true ‘fire, forget and find’ capability.”

It was a single paragraph buried in a 500+ page UN document. But for military observers, AI ethicists, and international lawyers, it sounded like a historic threshold had been crossed: a lethal autonomous weapon system (LAWS) may have made the first real-world kill decision—on its own.

There was no confirmation. No investigation. No international outcry.

Just a line of code, executed in the fog of war.

This is the story of the STM Kargu-2—how it works, what it did in Libya, and why it may have just opened the gates to an era of algorithmic warfare where no human is required to pull the trigger.

What Is the STM Kargu-2? Specs, Capabilities, and Claims

The Kargu-2 is a small, quadrotor loitering munition designed and manufactured by STM (Savunma Teknolojileri Mühendislik), a Turkish defense company closely aligned with the country’s Ministry of Defense. On the surface, it looks like a consumer drone—but it’s built for war.

At roughly 7 kg total weight, the Kargu-2 carries a 1.3 kg explosive payload, capable of striking personnel and light vehicles. It has a 10 km operational range, 30 minutes of flight time, and can operate in manual, semi-autonomous, or fully autonomous modes.

What sets it apart is its onboard AI system, which STM claims enables “real-time processing” of video feeds using machine learning algorithms. According to the company’s brochure, it can autonomously track and engage targets based on a “target library,” identifying humans or vehicles through visual and infrared signatures.
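STM has not published the details of that onboard pipeline, so the exact logic is unknown. As a purely illustrative sketch, though, the idea of a “target library” reduces to a filtering step: an object detector proposes candidate detections, and only those whose predicted class appears in a pre-loaded library, above some confidence threshold, are treated as engageable. Every class name, threshold, and data structure below is a hypothetical stand-in, not STM’s software.

```python
# Minimal sketch of a "target library" matching step, assuming a generic
# object detector that returns (label, confidence, bounding box) per object.
# Class names and the threshold are illustrative placeholders only.
from dataclasses import dataclass

@dataclass
class Detection:
    label: str          # class predicted by the vision model, e.g. "person"
    confidence: float   # model confidence in [0, 1]
    bbox: tuple         # (x, y, width, height) in image coordinates

# Hypothetical "target library": classes the munition may engage.
TARGET_LIBRARY = {"person", "light_vehicle"}
ENGAGE_THRESHOLD = 0.9  # arbitrary cut-off for illustration

def select_engageable(detections: list[Detection]) -> list[Detection]:
    """Return detections that match the target library above the threshold."""
    return [
        d for d in detections
        if d.label in TARGET_LIBRARY and d.confidence >= ENGAGE_THRESHOLD
    ]

# Example: two detections from one video frame; only the "person" passes.
frame = [
    Detection("person", 0.93, (120, 80, 40, 90)),
    Detection("civilian_car", 0.88, (300, 150, 120, 60)),
]
print(select_engageable(frame))
```

The point of the sketch is how little stands between detection and engagement: a set-membership test and a numeric threshold.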

The Kargu-2 is also designed for swarm operation. Up to 20 drones can coordinate autonomously as a networked strike group, using mesh communication to adjust flight paths, avoid redundancy, and dynamically assign targets.
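STM’s swarm coordination protocol is proprietary, but the advertised behaviour (“avoid redundancy, dynamically assign targets”) can be illustrated with a toy assignment routine in which each drone claims the nearest target no other drone has taken. This is a deliberately simplified sketch under that assumption, not the Kargu-2’s actual mesh logic.

```python
# Toy de-confliction sketch: pair each drone with the nearest unclaimed target.
# Real swarm logic (mesh networking, consensus, re-planning) is far more complex.
import math

def assign_targets(drones: dict, targets: dict) -> dict:
    """Greedily pair each drone with the nearest target not yet claimed."""
    assignments, claimed = {}, set()
    for drone_id, d_pos in drones.items():
        best, best_dist = None, math.inf
        for target_id, t_pos in targets.items():
            if target_id in claimed:
                continue  # another drone already claimed this target
            dist = math.dist(d_pos, t_pos)
            if dist < best_dist:
                best, best_dist = target_id, dist
        if best is not None:
            assignments[drone_id] = best
            claimed.add(best)
    return assignments

drones = {"kargu_1": (0, 0), "kargu_2": (5, 5), "kargu_3": (10, 0)}
targets = {"t1": (1, 1), "t2": (9, 1)}
print(assign_targets(drones, targets))  # {'kargu_1': 't1', 'kargu_2': 't2'}
```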

Its deployment doesn’t require satellite or network connectivity, making it usable in GPS-denied environments or areas under heavy electronic jamming.

While STM maintains that a human is always “in the loop,” the platform’s design clearly supports “fire-and-forget” autonomous strikes. And that’s precisely what the United Nations report described in Libya.

With exports reported to at least three countries and growing interest in the Middle East, Asia, and Africa, the Kargu-2 is positioned as the world’s first widely deployed autonomous loitering munition: a budget-friendly alternative to more complex systems like Israel’s Harpy or the U.S.-built Switchblade.

This is more than a weapons system. It’s a glimpse of how autonomous killing is being packaged, marketed, and sold as the future of war.

 

A photograph of the Kargu-2 autonomous drone against a white background by Armyinform.com.ua - armyinform.com, CC BY 4.0, https://commons.wikimedia.org/w/index.php?curid=106153206

The Libya Incident: “Fire, Forget, and Find”

The most consequential mention of the STM Kargu-2 didn’t come from a defense expo or company press release. It came from the United Nations Security Council’s Panel of Experts on Libya, buried deep within their March 2021 final report (S/2021/229).

Describing an engagement near Tripoli in March 2020, the report, embedded at the end of this article, stated that retreating forces of Khalifa Haftar’s Libyan National Army—referred to as the Haftar Affiliated Forces (HAF)—were:

“…hunted down and remotely engaged by the unmanned combat aerial vehicles or the lethal autonomous weapons systems such as the STM Kargu-2… The lethal autonomous weapons systems were programmed to attack targets without requiring data connectivity between the operator and the munition: in effect, a true ‘fire, forget and find’ capability.”

Technical specifications of the STM Kargu-2 loitering munition drone, showing its key parameters: 7kg total weight, 1.3kg explosive payload, 10km operational range, and 30-minute flight time. The infographic includes a silhouette of the quadrotor drone and highlights its autonomous target identification capability, ability to operate in swarms of up to 20 units, and functionality in GPS-denied environments. Copyright - Prime Rogue Inc - 2025.

The engagement took place during Operation PEACE STORM, as Turkish-backed GNA forces pushed HAF units away from the capital. A range of high-tech systems were deployed: loitering munitions, UCAVs, electronic jammers—and, critically, the Kargu-2.

While the UN report does not explicitly confirm that the Kargu-2 carried out a fully autonomous strike on human targets, its language heavily implies that at least some units were deployed without human control once airborne. The use of the phrase “fire, forget and find” suggests the drone may have autonomously selected and engaged individuals—marking an unprecedented battlefield scenario.

Adding to the concern: these targets were reportedly retreating, not actively engaging in combat. That raises questions about compliance with international humanitarian law (IHL), especially the principles of distinction and proportionality.

Despite the gravity of this claim, no official investigation was launched. The Turkish government made no public statement. STM did not respond to the implications. The story faded—overshadowed by geopolitical churn and pandemic headlines.

But for experts in autonomous weapons and AI ethics, this was the moment the line may have been crossed: a weapon with artificial intelligence, deployed in a warzone, making its own decisions about who to kill.

And the world barely noticed.

Legal and Ethical Red Flags

If the Kargu-2 was used in fully autonomous mode in Libya, the implications are profound—not just for warfare, but for the international legal order and the ethical boundaries of lethal force.

At the heart of the issue is accountability. Under international humanitarian law, combatants are required to uphold principles such as distinction, proportionality, and military necessity. These rules exist to protect civilians and combatants hors de combat: those who are wounded, incapacitated, or have surrendered.

Autonomous systems like the Kargu-2 introduce a core challenge: Who is responsible when a machine kills unlawfully?

The drone’s manufacturer, STM, claims a human operator remains in the loop. But the UN report strongly implies the Kargu-2 was operating without real-time connectivity, using a pre-programmed target profile. If a drone autonomously selects a person—without oversight—and that person is later found to be a civilian, who is liable? The operator? The commander? The algorithm’s author?

This is what experts call the “accountability gap.”

The International Committee of the Red Cross (ICRC) has called for prohibiting autonomous systems that apply force directly against human beings. Their 2021 position statement warns:

“The process of functioning risks effectively substituting human decisions about life and death with sensor, software and machine processes.”

This erodes the foundational principle of human dignity in armed conflict. Autonomous systems are not capable of understanding context, assessing surrender, or evaluating combatant intent. They follow code, not conscience.

Even beyond legality, the Libya incident forces an ethical question: Should a machine ever decide who lives or dies?

With the Kargu-2, we may have already answered that question—not through debate or treaty, but through deployment.

Comparison diagram showing the decision-making process in human-in-the-loop versus fully autonomous weapons systems. The left side depicts the traditional OODA loop (Observe, Orient, Decide, Act) with human decision-making, while the right shows an autonomous system's Sense, Process, Decide, Execute cycle. The diagram highlights key differences including ethical judgment, contextual understanding, and accountability, with a special note about the Kargu-2's 'fire, forget and find' capability. Copyright - Prime Rogue Inc - 2025.

Technical Risk Factors: When AI Kills Incorrectly

The potential use of the Kargu-2 in a fully autonomous strike raises not only legal and ethical dilemmas but also significant technical vulnerabilities that undermine its reliability on the battlefield.

At the core is a problem familiar to AI researchers: brittleness. Computer vision models used in drones like the Kargu-2 rely on pattern recognition from sensor inputs—often trained on limited, curated datasets. In the unpredictable chaos of war, these systems may misidentify civilians as combatants, especially in low-visibility or crowded environments.

Studies have shown that slight changes in lighting, angle, or even a single pixel can throw off machine learning models, causing false positives. In practical terms, that could mean the difference between hitting a retreating fighter and hitting a fleeing child.
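A toy model makes that brittleness concrete. In the sketch below, a hand-rolled two-feature classifier (standing in for a far more complex vision model) sits close to its decision boundary, and a change of about two percent in a single input flips the output from “combatant” to “civilian”. The weights, features, and labels are invented purely for illustration.

```python
# Toy illustration of brittleness: an input near the decision boundary has its
# predicted class flipped by a tiny perturbation. Hand-rolled logistic model,
# not the Kargu-2's vision stack.
import math

WEIGHTS = [2.0, -1.5]  # illustrative, pre-set weights
BIAS = 0.05

def predict(features):
    """Return (predicted class, probability) from two toy feature values."""
    z = sum(w * x for w, x in zip(WEIGHTS, features)) + BIAS
    p = 1 / (1 + math.exp(-z))
    return ("combatant" if p >= 0.5 else "civilian"), round(p, 3)

original = [0.40, 0.55]    # near the boundary
perturbed = [0.40, 0.57]   # ~2% change in one feature

print(predict(original))   # ('combatant', 0.506)
print(predict(perturbed))  # ('civilian', 0.499)
```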

The Kargu-2 also embodies the AI black box problem. If it autonomously selects a target and detonates, there’s no built-in mechanism to explain why that target was chosen. No audit trail. No accountability. Commanders and investigators are left guessing—or worse, accepting the outcome at face value.
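By way of contrast, the sort of audit trail described as missing here is not technically exotic. The sketch below shows one hypothetical way an engagement decision could be appended to a machine-readable log, preserving the model version, detected class, confidence, and final decision for later review. The field names and file format are assumptions for illustration, not anything STM has described.

```python
# Hypothetical engagement audit log: one JSON record per decision, appended to
# a file so investigators could later reconstruct why a target was selected.
import json
import time

def log_engagement_decision(log_path, detection, decision, model_version):
    """Append one machine-readable decision record to the audit log."""
    record = {
        "timestamp_utc": time.time(),
        "model_version": model_version,
        "detected_class": detection["label"],
        "confidence": detection["confidence"],
        "bounding_box": detection["bbox"],
        "decision": decision,  # e.g. "engage" or "abort"
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

log_engagement_decision(
    "engagements.jsonl",
    {"label": "person", "confidence": 0.93, "bbox": [120, 80, 40, 90]},
    decision="engage",
    model_version="demo-0.1",
)
```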

Even if a human approves deployment, over-trust in automation is a well-documented phenomenon. Operators may assume the drone “knows better,” leading to rubber-stamping behavior that defeats any meaningful human oversight. Compounding the problem, these systems optimize for the objectives they are given, not for ethical judgment, so automation bias makes it more likely that a flawed or unlawful strike goes unquestioned.

Finally, the Kargu-2’s swarm capability raises alarm about emergent behavior—unpredictable interactions between drones operating in coordination without centralized control. In a battlefield scenario, that unpredictability could become lethal not just for enemies, but for allies and civilians.
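Even a toy simulation hints at why. In the sketch below, three simulated drones follow the same simple local rule, “chase the nearest target”, with no central coordination; two of them pile onto the same target, a redundancy that no individual drone’s rule anticipated. Real swarm failure modes are far more complex, and the positions and rule here are invented for illustration, but the pattern is the same: collective behaviour nobody explicitly programmed.

```python
# Toy swarm simulation: identical local rules, no central coordination,
# and an unintended collective outcome (two drones pile onto one target).
import math

drones = {"a": [0.0, 0.0], "b": [1.0, 0.0], "c": [9.0, 9.0]}
targets = {"t1": (2.0, 0.0), "t2": (10.0, 10.0)}

def nearest_target(pos):
    return min(targets, key=lambda t: math.dist(pos, targets[t]))

for step in range(3):
    choices = {d: nearest_target(p) for d, p in drones.items()}
    print(f"step {step}: {choices}")  # "a" and "b" both pick t1 every step
    for d, t in choices.items():
        tx, ty = targets[t]
        dx, dy = drones[d]
        # each drone closes half the distance to its chosen target per step
        drones[d] = [dx + 0.5 * (tx - dx), dy + 0.5 * (ty - dy)]
```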

These risks aren’t theoretical. They are baked into the architecture of autonomous weapons—and they’re being field-tested in live conflicts.

Strategic Implications: Turkey, Export Risks, and Global Uptake

The Kargu-2 isn’t just a one-off prototype—it’s part of Turkey’s broader push to dominate the next generation of warfare technologies. Alongside the globally recognized Bayraktar TB2, the Kargu-2 signals Ankara’s ambition to become a major exporter of low-cost, AI-enabled lethal drones.

STM, the state-aligned manufacturer behind the Kargu series, has actively promoted the system as a scalable, export-ready solution for counterinsurgency and urban warfare. As of 2024, it has been delivered to the Turkish Armed Forces and reportedly exported to at least three foreign buyers, though the full customer list has not been made public.

This raises a critical concern: who’s buying autonomy—and for what purpose?

Unlike high-end U.S. or Israeli platforms, the Kargu-2 is relatively affordable, portable, and modular, making it attractive to nations with limited oversight mechanisms or volatile domestic conflicts. Once delivered, there are few barriers to activating its autonomous modes, especially in combat zones where transparency is non-existent.

There’s also the doctrinal signaling effect. Turkey’s alleged use of the Kargu-2 in Libya shows other military powers—and non-state actors—that autonomy can be deployed, denied, and normalized, all without triggering global backlash.

This “plausible deniability by design” is a strategic feature, not a bug.

With Turkey positioning itself as a go-to vendor for AI-enhanced combat tech, the Kargu-2 sets a precedent: not just for what can be done on the battlefield—but for what the world is willing to tolerate.

A stylized world map showing the global proliferation of autonomous weapons systems. Major developer countries like the USA, Russia, China, Israel, UK, Australia, and South Korea are marked with blue circles. Turkey, the origin of the Kargu-2 drone, is highlighted in red. Libya, the site of the reported autonomous engagement, is marked in orange. Smaller gray dots indicate potential export markets. The visualization includes statistics noting that over 12 countries are actively developing lethal autonomous weapons systems, with at least 3 confirmed Kargu-2 export markets, and emphasizes the lack of binding international regulations. Copyright - Prime Rogue Inc - 2025

Conclusion: The Beginning of AI-Driven Warfare?

The Kargu-2 may be remembered not for what it did—but for what it revealed.

Whether or not it made a truly autonomous kill in Libya, the capability is now proven, deployed, and spreading. The threshold has already been crossed. We now live in a world where a drone, guided by machine vision and trained on opaque data, can seek out and strike human targets—with no one watching in real time.

There was no global debate. No treaty. No line drawn in the sand.

Instead, the first autonomous weapon may have taken a life in a forgotten corner of a civil war, flagged quietly in a UN report, and buried under the noise of everything else.

That’s the nature of the threat: it doesn’t announce itself.

It executes.

And unless the world confronts the implications now, the next Kargu-2 won’t be the first. It will just be the latest.

Embedded below: the original PDF of the United Nations Security Council Panel of Experts on Libya final report, March 2021 (S/2021/229), preserved by AI Weapons Watch.