
In a scene reminiscent of a computer war game, three battle-weary soldiers in snow camouflage emerge from a war-torn corridor with their arms raised above their heads.
They kneel, following the instructions being broadcast to them, fear and dread etched on their faces as they stare down the barrel of the machinegun mounted on an unmanned ground vehicle.
The footage, released in January by Ukrainian security firm DevDroid, is said to show the moment Russian soldiers were captured by a Ukrainian robot using artificial intelligence.
In April, Ukrainian President Volodymyr Zelenskyy said that, “for the first time in the history of the war, enemy positions were taken only by unmanned platforms – ground systems and drones”.
“Robotic machines have already made more than 22,000 forward journeys in just three months,” he wrote in a post on X, alongside photos of the green machines with weapons and equipment mounted on top.
But for experts who study the intersection of artificial intelligence (AI) and war, the video shows an expected evolution – one whose consequences will extend far beyond Ukraine’s front lines as the world grapples with increasingly autonomous warfare.
For years, military forces have used ground robots primarily for bomb disposal and reconnaissance.
But in Ukraine, their role has grown rapidly, with some units reporting that up to 70 percent of front-line logistics operations are now handled by robots rather than soldiers.
These machines carry weapons, food and medicine, and evacuate wounded soldiers from dangerous areas.
Yet the prospect of robotic systems moving across the battlefield is part of a major shift in warfare – one that has been building for decades.
The current debate about AI in the military was largely driven by the rise of US unmanned aerial vehicle (UAV) operations in the early 2000s.
In 2002, the MQ-1 Predator was used by the US to conduct one of the first airstrikes in Afghanistan, which changed the way warfare could be fought remotely.
Its use grew rapidly throughout the 2000s and peaked in the late 2000s to mid-2010s, particularly in Pakistan, Yemen and Somalia.
As AI has advanced, the debate has gone beyond remote tasks.
The focus is now on systems that can help identify targets, prioritise strikes and shape military decisions, raising deep questions about how much autonomy should be given to machines.
Analysts say the question of autonomy should remain at the centre of the debate, rather than be overshadowed by technological progress, however compelling the increasingly sophisticated machines on the battlefield may be.
“These technologies are here to stay,” Toby Walsh, an AI expert at the University of New South Wales, told Al Jazeera. He also described AI-driven warfare as “the third revolution in warfare”.
The change is also spreading beyond land targets.
Naval drones loaded with explosives have already reshaped the war in the Black Sea, while autonomous underwater systems are being developed for surveillance, mine clearance and attack missions by militaries around the world.
Robot dogs, meanwhile, are already being tested for surveillance and for the detection and disposal of bombs, while experimental versions carry weapons.
In recent years, the emergence of autonomous drones, or so-called “killer robots”, has sparked a fierce debate after a United Nations report said that a Turkish-made Kargu-2 drone, operating autonomously, detected and attacked fighters in Libya in 2020.
The incident triggered a wide-ranging discussion among scholars, activists and diplomats around the world as they grapple with the moral and ethical consequences of automating the decision to take a human life.
However, the debate needs to extend beyond fully autonomous weapons to systems “where humans are still in the loop”, Anna Nadibaidze, a researcher in international politics at the Center for War Studies, University of Southern Denmark, told Al Jazeera.
The main concern, she said, is whether operators are given “enough time and space” for the human judgement that war requires.
The degree of human involvement is often something observers must take on trust from military forces; a difficult task as their systems become less transparent, said Toby Walsh.
In the case of ground robots in Ukraine, a human operator has so far remained in control, steering machines that can still be stopped by obstacles such as uneven terrain.
However, when AI is involved in decision-making, as has been the case in Israel’s war on Gaza and the wider region, strikes that cause massive destruction and civilian casualties for limited military gain contradict international humanitarian law and, in particular, the principle of proportionality, said Walsh.
The issue, Nadibaidze said, is that it is difficult to establish rules for the use of AI in the military because it is largely left to each state to decide what it considers meaningful human control, “and there is not enough international consensus on that”.
An April report by the Stockholm International Peace Research Institute warned that AI supply chains are fragmented, global and heavily dependent on civilian technologies, making efforts to control or monitor military AI development difficult.
The United States Department of Defense, for instance, regularly incorporates commercial software products into its arsenal.
In the middle of last year, the Department of Defense awarded OpenAI a $200m contract to develop AI for the US military, alongside $200m contracts for xAI and Anthropic.
“If we are not careful, war will become so dangerous, so deadly and so fast that humans will no longer be able to keep up, because they will not have the speed, the accuracy or the ability to respond,” Walsh warned.
Technology and AI aren’t inherently dangerous, experts say — it’s how they’re used that matters.
In Ukraine, ground robots have also been used to rescue civilians and to support demining operations.
However, what is happening on the front lines is, in many ways, a testing ground, and countries need to look ahead to how these technologies could be used, and governed, in future conflicts.
There is also room for cautious optimism. Despite the “moral failure” of Israel’s actions in Gaza, Walsh said, there is a recognition among countries that these issues must be resolved, including a series of UN meetings focused on the control of Lethal Autonomous Weapons Systems.
The United Nations Institute for Disarmament Research (UNIDIR), an autonomous body within the UN that conducts independent research on disarmament and international security, is due to meet in June to assess the impact of AI on global peace and security.
This isn’t the first time that new weapons technologies have threatened to upend the rules, Walsh said, pointing to chemical weapons as an example. Although imperfect, international agreements were eventually reached establishing that a certain threshold should not be crossed.
“There are many actors in the Global South who want to see progress, so there could be regional efforts,” said Nadibaidze, adding that although such efforts may not involve the major powers or leading technology developers, they can still help shape emerging norms.