IOCTA 2015

An outlook into criminal offences related to artificial intelligence

It is currently difficult to say whether the ongoing process of automation and the increasingly rapid advances in, and application of, artificial intelligence (AI) present more of a challenge or more of an opportunity for law enforcement - the underlying problem was already briefly addressed, from a crime development perspective, in the 2014 IOCTA.

Artificial intelligence and Big Data analysis have the potential to provide significant input to the work of law enforcement1 - provided that legitimate concerns related to data protection and fundamental human rights can be addressed. However, as with all new developments, there is potential for abuse, as is evident, for instance, in the increasing number of targeted attacks against automated systems such as modern, computer-controlled factories2. Stuxnet was certainly only the first widely discussed example of the capability of such attacks3. Since an AI system is ultimately a complex automated system, these threats apply to AI as well. The current situation can therefore aptly be described as a combination of both opportunity and challenge.

In addition to addressing the recent challenges of automation and AI, it would be worthwhile to look a few years ahead and try to predict the impact of realistic, more mainstream AI applications on the work of law enforcement4. Artificial intelligence is an area that offers immense potential for new services and innovative products. The success of AI-based systems in beating humans at video games by applying deep learning and deep reinforcement learning illustrates the progress of this field very vividly5. The fact that the AI system was able to pick up the rules of the game quickly, without being taught them in advance, attracted a lot of attention even outside the scientific community6. Other visible signs of the integration of AI are, for example, Google's successful tests with self-driving cars7 or the passed Turing test in 2014, which was seen as a major breakthrough in computer history8. What may sound like a nightmare vision to some is, to others, the hope of major progress in road safety. Much like airbags and 'Collision Prevention Assist', self-driving vehicles could lead to a decrease in traffic accident-related injuries and fatalities. Google's monthly report for May 2015 indicates that, in six years of the project and over more than a million miles driven, its self-driving cars had been involved in 12 minor accidents - none of them caused by the self-driving car9.
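For the technically inclined reader, the core idea behind such systems - learning behaviour purely from a reward signal, without the rules of the game being programmed in advance - can be sketched in a few lines. The toy environment, parameter values and names below are invented purely for illustration and bear no relation to the actual systems referenced above; deep reinforcement learning replaces the simple lookup table used here with a neural network, but the learning principle is the same.

```python
import random

# Illustrative tabular Q-learning on a toy "game": an agent on a line of
# five cells must reach the rightmost cell. The agent is never told this
# rule; it only receives a reward signal and learns from experience.
N_STATES = 5          # cells 0..4; reaching cell 4 ends an episode
ACTIONS = [-1, +1]    # move left or right
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.2

q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Environment dynamics: reward 1.0 only on reaching the goal cell."""
    nxt = max(0, min(N_STATES - 1, state + action))
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward, nxt == N_STATES - 1

def train(episodes=500, seed=0):
    rng = random.Random(seed)
    for _ in range(episodes):
        state, done = 0, False
        while not done:
            # epsilon-greedy: mostly exploit the current estimate, sometimes explore
            if rng.random() < EPS:
                action = rng.choice(ACTIONS)
            else:
                action = max(ACTIONS, key=lambda a: q[(state, a)])
            nxt, reward, done = step(state, action)
            best_next = 0.0 if done else max(q[(nxt, a)] for a in ACTIONS)
            # the Q-learning update rule
            q[(state, action)] += ALPHA * (reward + GAMMA * best_next - q[(state, action)])
            state = nxt

train()
# After training, the greedy policy prefers moving right in every non-terminal state.
greedy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES - 1)}
print(greedy)
```

The point of the sketch is that nothing in the code encodes "move right to win"; that behaviour emerges solely from the reward signal, which is precisely what made the systems described above noteworthy.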

It would be naive to believe that these developments will not have an impact on society and introduce a number of potential challenges. For example, some statistics indicate that 'truck driver' remains the most common job in 29 of the 50 US states10. Recent reports predict that self-driving trucks are only two years away11 - a development that could have a truly disruptive impact on this labour market.

When thinking about law enforcement implications, the discussion about 'hacked' cars might be one of the obvious responses. However, this topic is far from visionary, as the integration of computer and network technology in cars continues at high speed. As early as 2002, Forbes brought this issue to the attention of a wider public12. In 2013, Volkswagen tried to stop the publication of research on how to hack anti-theft systems13. And Wired reported on potential and real attacks in 2014 and 201514. This is of course not limited to smart cars but applies to smart devices in general.

The practical relevance of these developments for law enforcement lies primarily in the ability to prevent such crimes and in the forensic capability to investigate them. The advantage is that such attacks are already covered by up-to-date legal systems. With regard to the potential impact, there are certainly differences between hacking a desktop computer and hacking a computer system in a car - from a legal point of view, however, both are quite similar.

It might therefore be worth looking ahead to the developments we could expect in the coming years. One issue that could become a true challenge for law enforcement is the involvement of AI-based machines in the commission of crime. Machines are already widely used to automate production processes15. This has also led to automation-related accidents and incidents, a recent example being the case of a worker killed by a robot at a car manufacturing plant in Germany, which stimulated a public debate16. Unfortunately, this is not the first time that somebody has lost their life due to the malfunction of a robot - the first incidents were reported more than 30 years ago17. The debate about the legal and ethical implications is just as old.

But the relevance of the debate might change quickly. While the malfunction of a machine can fairly easily be handled as an accident that does not require intensive criminal investigation, the increasing use of AI could be a game changer. While a comprehensive discussion would go well beyond the scope of this Appendix, four main issues of relevance to the debate should be briefly mentioned. Before doing so, however, the fact that this is already of practical relevance today can easily be demonstrated by the following example18: an AI-based self-driving car is driving along a narrow road with concrete bollards on both sides. All of a sudden, a child jumps onto the street. In response, the AI system may identify several different options. Without any action, or even with an emergency stop, the car would hit the child and might seriously injure or even kill him/her. To avoid the collision, the car's AI system may instead decide to turn right or left. The crash into the concrete bollards could seriously injure or even kill the passenger. The same or similar situations have been discussed in criminal law for decades - with the difference that, in those conflict situations, it is a human being who takes the decision.
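The dilemma in this example can be pictured, in a deliberately crude way, as a decision module comparing the predicted harm of each option. The option names, harm estimates and selection rules below are invented for this sketch and do not represent any real system's logic; the point is that different, equally programmable decision rules can select different victims - and which rule is even legally permissible is exactly the open question.

```python
from dataclasses import dataclass

# Purely illustrative model of the scenario described above. The numbers
# are arbitrary harm estimates on a 0.0 (none) to 1.0 (fatal) scale.
@dataclass
class Option:
    name: str
    harm_child: float
    harm_passenger: float

options = [
    Option("emergency stop", harm_child=0.8, harm_passenger=0.0),
    Option("swerve into bollards", harm_child=0.3, harm_passenger=0.6),
]

def choose(options, rule):
    """Select the option minimising a given cost rule (itself a design choice)."""
    return min(options, key=rule)

# Two of many conceivable rules - each encodes a different value judgement:
minimise_total = lambda o: o.harm_child + o.harm_passenger   # utilitarian sum
minimise_worst = lambda o: max(o.harm_child, o.harm_passenger)  # minimax

print(choose(options, minimise_total).name)  # prints "emergency stop"
print(choose(options, minimise_worst).name)  # prints "swerve into bollards"
```

With these invented numbers the two rules disagree: minimising total harm keeps the car in its lane, while minimising the worst single outcome swerves it into the bollards. Whoever fixes the rule in software has, in effect, pre-decided the conflict situation that criminal law has so far left to a human in the moment.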

  • The first question that arises is the general question of liability. Who will be held responsible? The hardware manufacturer? The AI software company? The implementer? This question, which has been discussed in the literature to some extent19, will require further attention, especially with regard to the capacities required to analyse the underlying reasoning process - which can be challenging given the complexity of the systems and algorithms involved.
  • But the challenge for law enforcement goes beyond this. The story of AI beating humans at video games by learning the rules of the game without them being pre-programmed shows that one essential characteristic of AI is that the system goes beyond what was explicitly programmed. The differentiation between action and omission will therefore become even more relevant in the future. Not having implemented measures to restrict the possible actions of AI-based systems could in future become the focus of law enforcement investigations against the manufacturers of such systems. And it might even be necessary to customise their 'ethical and legal value system' to differing national ethical and legal frameworks.
  • The third element that will need to be discussed further is mens rea, or the 'guilty mind'. Just like general liability and the differentiation between action and omission, mens rea is a fundamental element of criminal law20. It is ultimately the concurrence of intelligence and volition21. The question is whether this includes artificial intelligence - it is certainly not the traditional understanding of mens rea. The application of traditional criminal law provisions to crimes involving AI could therefore come with unique challenges, and raises the question of whether we need a specific legal regime for AI, or whether it is preferable or even essential to apply one legal framework to both AI and non-AI-based criminal activities.
  • Finally, what consequences and penalties will be applied? Imprisonment will most likely not be a suitable option. The challenge is not new; similar issues were discussed in the debate about the criminal liability of legal persons. But in this context, even the application of fines and financial penalties comes with unique challenges22.
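The measures to restrict the possible actions of AI-based systems mentioned in the second issue above can be pictured as a guard layer between a learning component and the actuators: whatever the system proposes is checked against hard, human-defined rules before it is executed. Everything in the sketch below - the rule set, action names and fallback - is a hypothetical illustration, not a description of any real system; but such a layer is both where differing national legal requirements could be configured and where the audit trail needed to analyse the reasoning process in an investigation could be produced.

```python
# Hypothetical "action restriction" layer. Proposals from a learning
# component are filtered against a jurisdiction-specific rule set, and
# every decision is logged so it can later be reconstructed.
FORBIDDEN = {"exceed_speed_limit", "ignore_red_light"}  # invented example rules

audit_log = []  # record of (proposed action, outcome) for later investigation

def restrict(proposed_action, fallback="safe_stop"):
    """Replace any forbidden proposal with a safe fallback, logging the event."""
    if proposed_action in FORBIDDEN:
        audit_log.append((proposed_action, "blocked"))
        return fallback
    audit_log.append((proposed_action, "allowed"))
    return proposed_action

print(restrict("change_lane"))         # prints "change_lane"
print(restrict("exceed_speed_limit"))  # prints "safe_stop"
```

From a liability perspective, the absence of such a layer - an omission - could be exactly what an investigation against a manufacturer would focus on, while the audit log addresses the evidential problem of reconstructing why the system acted as it did.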

This brief overview highlights some of the challenges for law enforcement that might be worth observing even at this early stage. It certainly includes rather philosophical questions, such as: do we expect AI to act better than humans? But ultimately it also includes questions related to the core work of law enforcement: the application and enforcement of the law.

  1. Alzou’bi/Alshibly/Ma’aitah, Artificial Intelligence in Law Enforcement, A Review, International Journal of Advanced Information Technology, Vol. 4, No. 4
  2. Cardenas/Amin/Lin/Huang/Huang/Sastry, Attacks Against Process Control Systems: Risk Assessment, Detection, and Response
  3. Albright/Brannan/Walrond, Did Stuxnet Take Out 1,000 Centrifuges at the Natanz Enrichment Plant?, Institute for Science and International Security, 22.12.2010; Broad/Markoff/Sanger, Israeli Test on Worm Called Crucial in Iran Nuclear Delay, The New York Times, 15.01.2011; Kerr/Rollins/Theohary, The Stuxnet Computer Worm: Harbinger of an Emerging Warfare Capability, 2010; Timmerman, Computer Worm Shuts Down Iranian Centrifuge Plant, Newsmax, 29.11.2010
  4. For a discussion on the application of AI in the context of the objectives and purposes of the Geneva Convention, specifically in relation to lethal autonomous weapons systems, http://www.unog.ch/80256EE600585943/
  5. Mnih/Kavukcuoglu/Silver/Rusu/Veness/Bellemare/Graves/Riedmiller/Fidjeland/Ostrovski/Petersen/Beattie/Sadik/Antonoglou/King/Kumaran/Wierstra/Legg/Hassabis, Human-level control through deep reinforcement learning, Nature, 2015
  6. McMillan, Google’s AI is now smart enough to play Atari like the Pros, Wired Magazine, 2015
  7. KPMG, Self-driving Cars: The Next Revolution, https://www.kpmg.com/US/en/IssuesAndInsights/ArticlesPublications/Documents/self-driving-cars-next-revolution.pdf, 2012
  8. University of Reading, Turing Test Success Marks Milestone in Computing History, http://www.reading.ac.uk/news-and-events/releases/PR583836.aspx, 2014
  9. Google Self-Driving Car Project, Monthly Report, May 2015
  10. Balance Sheet Solutions, Weekly Relative Value, http://www.balancesheetsolutions.org/stored/pdf/WRV062915.pdf, 2015
  11. Prigg, Self-Driving trucks are just two years away says Daimler as it is set to get go-ahead for trials on German roads within months, Daily Mail, 27.07.2015
  12. Fahey, How to Hack Your Car, Forbes, 7.8.2002
  13. Volkswagen sues UK university after it hacked sports cars, The Telegraph, 30.7.2013
  14. Greenberg, Hackers could take control of your car. This device can stop them, Wired 22.7.2014; Greenberg, Hackers remotely kill a jeep on the highway - with me in it, Wired, 21.7.2015
  15. Singh/Sellappan/Kumaradhas, Evolution of Industrial Robots and their Applications, International Journal of Emerging Technology and Advanced Engineering, Vol. 3, Issue 5, 2013
  16. Robot kills worker at Volkswagen plant in Germany, The Guardian, 2.7.2015
  17. Dennett, When HAL Kills, Who’s to Blame? Computer Ethics, in Stork, HAL’s Legacy: 2001’s Computer as Dream and Reality, 1997
  18. Bloomberg, Should a Driverless Car Decide Who Lives or Dies?, http://www.bloomberg.com/news/articles/2015-06-25/should-a-driverless-car-decide-who-lives-or-dies-in-an-accident-, 2015
  19. Hallevy, The Criminal Liability of Artificial Intelligence Entities – From Science Fiction to Legal Social Control, Akron Intellectual Property Journal, 2010
  20. Llewelyn/Edwards, Mens rea in statutory offences, 1955
  21. Hall, General Principles of Criminal Law, 2005
  22. A recent report by the RAND Corporation provides an interesting overview of how emerging and future Internet technologies can strengthen the work of law enforcement and the judiciary. In relation to smart or driverless cars, the report suggests developing policies, procedures and technical interfaces that take into account law enforcement requirements. http://www.rand.org/content/dam/rand/pubs/research_reports/RR900/RR928/RAND_RR928.pdf