
The Algorithm of Complicity

Big Tech and the Automation of War

GSF Editorial Team · 5 min read

Photo Credit: Ali Jadallah

 

As we prepare for our Spring 2026 action, mobilizing more than 70 boats and over 1,000 participants from across the globe, our mission remains clear: we are sailing to uphold the human dignity and international law that governments have categorically abandoned.

 

But as we organize on the water, a different kind of mobilization is happening in Silicon Valley and military headquarters. It is a quiet, highly lucrative and utterly terrifying shift in how war is waged. We are entering the era of automated warfare, where the lives of Palestinians and marginalized people worldwide are being reduced to data points in an algorithmic kill chain.

Here is what the everyday organizer, humanitarian and seafarer needs to understand about the alarming integration of Artificial Intelligence into modern warfare, and why our human-led resistance is more vital than ever.

 

The Digital "Fog Procedure"

 

To understand where we are, we have to look at how military logic has evolved. During the second intifada, the Israeli military utilized an unofficial strategy called the "fog procedure." Soldiers guarding posts in low visibility were encouraged to shoot bursts of gunfire into the darkness on the mere theory that a threat might be lurking.

 

It was violence licensed by chosen blindness: shoot into the darkness and call it deterrence.

 

Today, that same logic has been refined, systematized and handed off to a machine. The darkness is no longer a condition of the terrain; it is a condition of algorithmic design. By processing billions of data points, phone records, movement patterns and social connections, AI systems generate lists of purported militants. The system infers identities statistically across an entire population, generating targets that no human has individually assessed. The fog did not lift; it was given a probability score and called intelligence.

 

According to the ground-breaking April 2024 investigative reports spearheaded by journalist Yuval Abraham for +972 Magazine and Local Call (published in coordination with The Guardian), named fighters accounted for roughly 17% of the deaths in Gaza; even that framing treats resistance fighters as legitimate targets rather than as civilians who have taken up arms to defend themselves and their families. The investigations detailed testimonies from israeli intelligence officers and exposed the military's use of AI-driven targeting systems, most notably a program called "Lavender," as well as the internal databases tracking the high civilian casualty rates associated with those automated targets.

 

These are not the statistics of precision warfare; this is a genocide in which imprecision has been automated.

 

Gaza was the laboratory. Minab is the market.

 

The systems tested and refined in Gaza are now being exported and deployed in new theatres of war.

 

In the recent US-israeli campaign in Iran, the strike on the Shajareh Tayyebeh elementary school in Minab killed at least 168 people, the vast majority of them girls aged 7 to 12. The weapons used were incredibly precise, striking individual buildings exactly as programmed. The fatal flaw was the intelligence: the school had been repurposed for civilian use nearly a decade ago, but the algorithmic targeting system failed to flag that the data was out of date.

 

To strike 1,000 targets in the first 24 hours of the campaign in Iran, the israeli military relied on AI to generate and prioritize lists at a speed no human team could match. The result is a world where the most consequential decisions in modern warfare are made by systems that cannot explain themselves, operating on outdated data, in conflicts that generate no accountability.

 

The Illusion of the "Human in the Loop"

 

The engineers and executives profiting from these systems often defend them by claiming there is always a "human in the loop." They assure the public that an algorithm doesn't drop a bomb… a person does.

 

But when an AI system produces more than 37,000 targets in the first weeks of a war, or generates 100 potential bombing sites per day, human verification becomes a myth. Reports indicate that human operators review these AI-generated names for an average of about 20 seconds, just long enough to confirm the target is male before signing off.

 

The humans in the loop are not exercising moral judgment or upholding international humanitarian law: they are managing a pipeline. The moral weight of taking a human life is scrubbed clean by sleek software interfaces.

 

They Aren't AI Firms. They Are Defense Contractors.

 

As a movement unaffiliated with any government, we must name the institutions actively profiting from this violence. We must stop calling these companies "tech start-ups" and recognize them for what they are: defense contractors.

The companies building the architecture that determines who gets killed sit entirely outside the legal frameworks of the Geneva Conventions.

 

  • Palantir, whose early funding included investment from In-Q-Tel, the CIA's venture capital arm, spent nearly $6 million lobbying Washington in 2024 to shape wartime policy, and supplies systems that rely on large language models like Anthropic's Claude.
  • Google and Amazon share the $1 billion Project Nimbus contract, providing cloud-computing and AI infrastructure to the Israeli government and military, despite massive internal employee protests.
  • OpenAI, which historically prohibited military use in its terms of service, quietly erased that restriction in 2024 to pursue Pentagon contracts.

These companies operate with impunity. The EU AI Act explicitly exempts military applications, and current US defense strategies mandate moving at "wartime speed" to adopt more AI, deliberately bypassing regulatory caution.

 

Our Answer is Sumud

 

They are building systems designed to detach, distance and destroy. Accountability dissolves across a supply chain of software engineers, cloud providers and military commanders.

 

In response, the Global Sumud Flotilla is doing the exact opposite. We are moving closer. We are putting our bodies, our boats and our voices on the line. Against the cold, calculating efficiency of AI-driven warfare, we bring the collective care and unyielding power of global, grassroots solidarity.

 

  • We demand accountability: Liability does not stop at the soldier who clicks a button; it must extend up the supply chain to the corporations that knowingly build and sell opaque systems for use in war.
  • We confront complicity: We refuse to look away, and we will continue to name and pressure the tech companies profiting from civilian death.
  • We take direct action: Because when institutions optimize for destruction, everyday people must optimize for justice.

Sovereignty over Palestinian land and water belongs solely to the Palestinian people, not to foreign militaries, and certainly not to the Silicon Valley executives automating their displacement.

 

Join us on the water. Subscribe to our blog and find other ways to be involved in the global uprising.