It is currently difficult to say whether the ongoing process of automation and the increasingly rapid advances in, and application of, artificial intelligence (AI) present more of a challenge or an opportunity for law enforcement - the underlying problem was already briefly addressed, from a crime development perspective, in the 2014 IOCTA.
Artificial intelligence and Big Data analysis have the potential to provide significant input to the work of law enforcement1 - if the legitimate concerns related to data protection and fundamental human rights can be overcome. However, as with all new developments, there is potential for abuse, as is evident, for instance, in the increasing number of targeted attacks against automated systems such as modern, computer-controlled factories2. Stuxnet was certainly only the first widely discussed example of the capability of such attacks3. Given that AI is ultimately a complex automated system, these threats apply to AI systems as well. The current situation can therefore aptly be described as a combination of both opportunity and challenge.
In addition to addressing the recent challenges of automation and AI, it is worthwhile to look a few years ahead and try to predict the impact of realistic, more mainstream AI applications on the work of law enforcement4. Artificial intelligence is an area that offers immense potential for new services and innovative products. The success of AI-based systems in beating humans at video games by applying deep learning and deep reinforcement learning illustrates the progress of this field very clearly5. The fact that the AI system was able to pick up the rules of the game quickly, without being taught them in advance, attracted a lot of attention even outside the scientific community6. Other visible signs of the integration of AI are, for example, Google’s successful tests with self-driving cars7 or the Turing test passed in 2014, which was seen as a major breakthrough in computer history8. What may sound like a nightmare vision to some is, to others, the hope of major progress in road safety. Similar to airbags and ‘Collision Prevention Assist’, self-driving vehicles could lead to a decrease in traffic accident-related injuries and fatalities. Google’s monthly report for May 2015 indicates that, in six years of the project and more than a million miles of autonomous driving, its self-driving cars had been involved in only 12 minor accidents - none of them caused by the self-driving car9.
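The ‘learning without being taught the rules’ behaviour described above can be illustrated with a minimal sketch of tabular Q-learning, the simpler ancestor of the deep reinforcement learning used in those game-playing systems. The toy ‘game’ below - a hypothetical five-cell corridor the agent must cross - and all parameters are illustrative assumptions, not the actual benchmarks cited:

```python
import random

# Toy "game": the agent starts in cell 0 of a five-cell corridor and is
# rewarded only for reaching cell 4. It is never told these rules; it must
# infer them purely from observed states and rewards.
N_STATES = 5
ACTIONS = [-1, +1]                    # step left / step right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def choose(state):
    """Epsilon-greedy: mostly exploit learned values, sometimes explore."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

random.seed(0)
for _ in range(1000):                 # training episodes
    state = 0
    while state != N_STATES - 1:
        action = choose(state)
        nxt = min(max(state + action, 0), N_STATES - 1)
        reward = 1.0 if nxt == N_STATES - 1 else 0.0
        # Q-learning update: move the estimate towards
        # reward + discounted best value of the next state.
        best_next = max(Q[(nxt, a)] for a in ACTIONS)
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
        state = nxt

# The learned greedy policy for the four non-terminal cells.
policy = [max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)]
print(policy)
```

With these settings the agent converges on stepping right (+1) in every cell, despite never being told the corridor’s layout or goal - the same principle, scaled up with neural networks, underlies the game-playing results described above.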
It would be naive to believe that these developments will not affect society and introduce a number of potential challenges. For example, some statistics indicate that ‘truck driver’ remains the most common job in 29 of the 50 United States10. Recent reports predict that self-driving trucks are only two years away - a development that could have a truly disruptive impact on this labour market11.
When thinking about law enforcement implications, the discussion about ‘hacked’ cars might be one of the obvious responses. However, this topic is far from visionary, as the integration of computer and network technology in cars continues at high speed. As far back as 2002, Forbes brought this issue to the attention of the wider public12. In 2013 Volkswagen tried to stop the publication of research on how to hack anti-theft systems13. And Wired reported on potential and actual attacks in both 2014 and 201514. This is of course not limited to smart cars but applies to smart devices in general.
The practical relevance of these developments for law enforcement is primarily related to the ability to prevent such crimes and to have the forensic capabilities to investigate them. The advantage is that these attacks are covered by up-to-date legal systems. With regard to the potential impact, there are certainly differences between hacking a desktop computer and hacking the computer system of a car - from a legal point of view, however, the two are quite similar.
It might therefore be worth looking ahead to the developments that we can expect in the coming years. One issue that could become a true challenge for law enforcement is the involvement of AI-based machines in the commission of crime. Machines are already widely used to automate production processes15. This has also led to automation-related accidents and incidents, a recent example being the case of a worker ‘killed’ by a robot at a car manufacturing company in Germany, which stimulated a public debate16. Unfortunately, this is not the first time that somebody has lost their life due to the malfunction of a robot - the first incidents were reported more than 30 years ago17. The debate about the legal and ethical implications is just as old.
But the relevance of the debate might quickly change. While the malfunction of a machine can be handled relatively easily as an accident that does not require an intensive criminal investigation, the increasing use of AI could be a game changer. While a conclusive discussion would go well beyond the scope of this Appendix, four main issues of relevance to the debate should be briefly mentioned. Before doing so, however, the fact that this is already of practical relevance today can easily be demonstrated by the following example18: an AI-based self-driving car is driving along a narrow road with concrete bollards on both sides. All of a sudden a child jumps out onto the street. In response, the AI system may identify several different options. Without any action, or even after performing an emergency stop, the car would hit the child and might seriously injure or even kill him/her. To avoid the collision, the car’s AI system may instead decide to turn right or left, and the crash into the concrete bollards could seriously injure or even kill the passenger. The same or similar situations have been discussed in criminal law for decades - the difference being that, in those conflict situations, it is a human being who makes the decision.
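The decision problem in the example above can be caricatured in a few lines of code. Everything in this sketch is a hypothetical assumption - the option set, the harm probabilities, and the weights - which is precisely the point: whoever chooses those numbers is making the value judgement that criminal law has, until now, reserved for the human behind the wheel.

```python
from dataclasses import dataclass

@dataclass
class Option:
    name: str
    p_harm_child: float      # estimated probability of serious harm to the child
    p_harm_passenger: float  # estimated probability of serious harm to the passenger

# Hypothetical options and harm estimates for the narrow-road scenario.
OPTIONS = [
    Option("emergency_stop", p_harm_child=0.7, p_harm_passenger=0.0),
    Option("swerve_into_bollards", p_harm_child=0.0, p_harm_passenger=0.6),
]

def expected_harm(opt: Option, child_weight: float = 1.0,
                  passenger_weight: float = 1.0) -> float:
    # Any choice of weights encodes a value judgement about whose safety
    # counts more - the ethical and legal core of the dilemma.
    return child_weight * opt.p_harm_child + passenger_weight * opt.p_harm_passenger

choice = min(OPTIONS, key=expected_harm)
print(choice.name)  # with equal weights, prints "swerve_into_bollards"
```

Note that changing a single weight flips the outcome: weighting harm to the child at half the value of harm to the passenger makes the emergency stop the ‘rational’ choice, illustrating how the conflict situation discussed in criminal law is relocated into a parameter file.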
This brief overview underlines some of the challenges for law enforcement that might be worth monitoring even at this early stage. These certainly include rather philosophical questions, such as: do we expect AI to act better than humans? But ultimately they also include questions related to the core work of law enforcement: the application and enforcement of the law.