When Robots Kill: Artificial Intelligence Under the Criminal Law.

Author: Charney, Rachel
Position: Book review

CAN ANDROIDS PLEAD AUTOMATISM? A REVIEW OF WHEN ROBOTS KILL: ARTIFICIAL INTELLIGENCE UNDER THE CRIMINAL LAW BY GABRIEL HALLEVY

Humans have long feared the darker side of artificial intelligence (AI). Mutiny, sabotage, and xenocide are themes that science fiction has frequently explored, kindling what Isaac Asimov called the Frankenstein Complex, or the fear of mechanical people. (1) Over time, machines have become both increasingly pervasive and intelligent, allowing us to rely on them to perform a broad range of functions. But does our comfort in allowing machines to clean our homes or beat us in a game of chess mean that we are willing to go so far as to treat humans and machines as similar entities under the criminal law? In When Robots Kill: Artificial Intelligence Under Criminal Law, Gabriel Hallevy argues in favour of applying criminal law to artificial intelligence, contending that it would not require any major theoretical revisions to the current legal system.

The issues that Hallevy deals with were raised many years ago, (2) but by expanding on articles that he has previously written, Hallevy is the first to set out how the current criminal law framework could be applied to AI. (3) Hallevy begins his book by examining the elusive quest for machina sapiens. He explains that although we may never create AI that fully imitates the human mind, criminal liability can still be imposed on machines that act under their own agency rather than merely being used as tools. (4) Hallevy does not provide a clear summary of the minimum capabilities required for AI to achieve agency. Instead, he avoids the issue by categorizing AI that have the agency to be found criminally liable as 'strong AI', leaving the reader to piece together the definition of this term. (5)

Hallevy describes in great detail how strong AI can incur criminal liability by meeting both the physical and mental requirements of subjective mens rea offences, negligence-based offences, and strict liability offences. He states that while AI of varying competencies can commit the actus reus, only more advanced AI will have the processing capacity necessary for the awareness and volition elements that comprise subjective mens rea. (6) Negligence and strict liability offences, which do not require subjective mens rea, still require the offender to be capable of forming awareness. (7)

Hallevy discusses his framework as though it can be applied to current technology, but modern AI is simply not sufficiently advanced to meet the legal standard of awareness and volition that Hallevy describes. (8) Hallevy draws upon examples of modern-day robots but attributes qualities to these...
