Robots are already killing people

It’s the robot revolution. It started a long time ago, and so did the killing. One day in 1979, a robot at Ford Motor Company’s casting plant malfunctioned—human workers decided it wasn’t going fast enough. And so 25-year-old Robert Williams was asked to climb into a storage rack to help move things along. The one-ton robot continued to work silently, striking Williams in the head and killing him instantly. It was the first recorded instance of a robot killing a human; many more would follow.

In 1981, Kenji Urada died in a similar incident at Kawasaki Heavy Industries. According to Gabriel Halevy in his 2013 book, When Robots Kill: Artificial Intelligence in Criminal Law, Urada was killed by a malfunctioning robot after he got in its way. As Halevy explains, the robot simply determined that “the most efficient way to eliminate the hazard was to push the worker into the nearest machine.” From 1992 to 2017, robots were responsible for 41 recorded workplace deaths in the United States—and that is likely an undercount, especially when you consider the knock-on effects of automation, such as lost jobs. A robotic anti-aircraft gun killed nine South African soldiers in 2007, when a possible software glitch caused the machine to swing itself around and fire dozens of lethal rounds in less than a second. And in a 2018 trial, a medical robot was implicated in the death of Stephen Pettit during a routine operation that had taken place a few years earlier.

You get the picture. Robots – “intelligent” and not – have been killing people for decades. And the development of more advanced artificial intelligence only increases the potential for machines to cause harm. Self-driving cars are already on American roads, and robotic “dogs” are being used by law enforcement. Computerized systems are being given the ability to use tools, allowing them to act directly on the physical world. Why worry about the theoretical emergence of an all-powerful, superintelligent program when more immediate problems are at our doorstep? Regulation that forces companies to innovate in safety and security could address them. We’re not there yet.

Historically, regulation has required major disasters to occur first—catastrophes we would ideally anticipate and avoid in today’s AI paradigm. The 1905 Grover Shoe Factory disaster led to regulations governing the safe operation of steam boilers. At the time, companies claimed that large steam-automation machines were too complex for safety regulation. This, of course, led to overlooked safety flaws and more disasters. It wasn’t until the American Society of Mechanical Engineers demanded risk analysis and transparency that the dangers of these huge tanks of boiling water, once considered a mystery, became more readily understood. The 1911 fire at the Triangle Shirtwaist Factory led to regulations requiring sprinkler systems and emergency exits. And the preventable 1912 sinking of the Titanic led to new regulations on lifeboats, safety audits, and on-ship radios.

Perhaps the best analogy is the evolution of the Federal Aviation Administration. Deaths in aviation’s first decades forced regulation, which in turn required new developments in both law and technology. Beginning with the Air Commerce Act of 1926, Congress recognized that the integration of aerospace technology into people’s lives and our economy demanded the highest scrutiny. Today, every plane crash is scrutinized, prompting new technologies and procedures.

Any regulation of industrial robots stems from existing industrial regulation, which has been evolving over decades. The Occupational Safety and Health Act of 1970 established safety standards for machinery, and the Robotic Industries Association, now merged with the Association for Advancing Automation, has been instrumental in developing and updating specific robot-safety standards since its founding in 1974. Those standards, with obscure names like R15.06 and ISO 10218, emphasize inherently safe design, protective measures, and rigorous risk assessments for industrial robots.

But as technology continues to change, governments need to more clearly regulate how and when robots can be used in society. The law needs to clarify who is liable and what the legal consequences are when a robot’s actions cause harm. Yes, accidents happen. But the lessons of aviation and workplace safety show that accidents are preventable when they are discussed openly and scrutinized by the right experts.

AI and robotics companies don’t want this to happen. OpenAI, for example, has fought to “water down” safety regulations and reduce AI-quality requirements. According to an article in Time, it lobbied European Union officials against classifying models like ChatGPT as “high risk,” which would have introduced “stricter legal requirements including transparency, traceability and human oversight.” The reasoning was that OpenAI did not intend to put its products to high-risk use – a logical twist akin to the Titanic owners lobbying that the ship should not be inspected for lifeboats on the grounds that it was a “general purpose” vessel that could also sail in warm waters where there were no icebergs and people could float for days. (OpenAI did not comment when asked about its position on regulation; it has previously said that “achieving our goals requires us to work to mitigate both current and long-term risks,” and that it is working toward that goal by “collaborating with policy makers, researchers and users.”)

Large corporations developing computer technology tend to argue that the burden of their own deficiencies should be shifted onto society at large, or that safety regulations protecting society impose unfair costs on the corporations themselves, or that safety baselines stifle innovation. We’ve heard it all before, and we should be highly skeptical of such claims. Today’s AI-related robot deaths are no different from the robot accidents of the past. Those industrial robots malfunctioned, and human operators trying to help were killed in unexpected ways. Since the first known fatality attributed to the feature in January 2016, Tesla’s Autopilot has been implicated in more than 40 deaths, according to official report estimates. Malfunctioning Teslas on Autopilot have deviated from their advertised capabilities by misreading road markings, suddenly veering into other cars or trees, crashing into well-marked service vehicles, or ignoring red lights, stop signs, and crosswalks. We worry that AI-controlled robots are already moving beyond accidental killing in the name of efficiency and “deciding” to kill someone in order to achieve opaque and remotely controlled objectives.

As we move toward a future in which robots are an integral part of our lives, we cannot forget that safety is a crucial part of innovation. True technological progress comes from applying comprehensive safety standards across all technologies, even in the realm of the most futuristic and captivating robotic visions. By learning from past deaths, we can improve safety protocols, correct design flaws, and prevent further unnecessary loss of life.

For example, the UK government has already set out statements on the importance of safety. Lawmakers must go further, using the lessons of history to focus on what we should be demanding right now: modeling threats, calculating possible scenarios, enabling technical blueprints, and ensuring responsible engineering that builds within parameters that protect society at large. Decades of experience have given us the empirical evidence to guide our actions toward a safer future with robots. What is needed now is the political will to regulate.

