Robots in public spaces are becoming increasingly commonplace. These public robots offer benefits such as cleaning, delivery, security, and public information (Mintrom et al., 2021, p. 1). The list of benefits will undoubtedly grow with time, and it is fascinating to consider how robots will work for humanity in the future. However, despite these exciting new benefits, society faces challenges as we come to rely more on these new cyber companions.
The first challenge is ethics. Many people are familiar with the idea of a Terminator-style robot revolution, so great care must be taken when designing these machines. Luckily, this issue isn't a new one. Science fiction writer Isaac Asimov introduced the first ethical code for artificial intelligence (AI) systems in 1942, presenting his Three Laws of Robotics in the short story Runaround. Later, in 1985, Asimov amended the list with a fourth law, the Zeroth Law, in Robots and Empire.
A robot may not injure a human being or, through inaction, allow a human being to come to harm.
A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
A robot may not harm humanity or, by inaction, allow humanity to come to harm.
The second challenge is legal. Does society need to enact new laws and regulations regarding robots and AI? Elon Musk thinks so: "I am not normally an advocate of regulation and oversight…I think one should generally err on the side of minimizing those things…but this is a case where you have a very serious danger to the public" (Thomas, 2020). It is only a matter of time before Congress enacts laws regarding robots and AI in public spaces.
Cyber Risks
The third (but not necessarily final) challenge is cybersecurity risk. Many, if not all, risks related to computer networks and Internet of Things (IoT) devices also apply to robotics. Three threats to focus on are physical security, misconfigurations, and spoofing.
Physical Security
The most straightforward way for a robot to be compromised is through physical interaction with the public. Anyone can get in the way of a robot, knock it over, or damage it. A robot that stops unexpectedly is also a tangible indication that it may have been overwhelmed by a Denial of Service (DoS) attack. Under attack, a robot may stop repeatedly and for varying lengths of time, move erratically, and change velocity, and it delays reacting to direction commands while transitioning from low to high speeds (Priyadarshini, 2017, p. 7). Beyond that, any threat actor could interfere with the wireless signals around the robot or connect to it directly via USB and upload whatever malicious payload the attacker sees fit.
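These symptoms can also be monitored in software. The sketch below is a minimal, hypothetical Python example (the class, thresholds, and method names are assumptions, not taken from any cited platform): it flags command acknowledgments that arrive too slowly and counts unexplained stalls over a rolling minute, both of which Priyadarshini associates with a robot under DoS attack.

```python
import time
from collections import deque

# Hypothetical thresholds -- real values would come from the robot platform's baseline behavior.
MAX_ACK_LATENCY_S = 0.5   # command acknowledgments slower than this are suspicious
MAX_STALLS_PER_MIN = 5    # this many unexplained stops per minute suggests the controller is overwhelmed


class DosWatchdog:
    """Watches for the DoS symptoms described above: slow command responses and repeated stalls."""

    def __init__(self):
        self.stall_times = deque()

    def slow_ack(self, sent_at, acked_at):
        """Return True if a command acknowledgment took suspiciously long."""
        return (acked_at - sent_at) > MAX_ACK_LATENCY_S

    def record_stall(self, now=None):
        """Log an unexplained stop; return True if stalls exceed the per-minute limit."""
        now = time.monotonic() if now is None else now
        self.stall_times.append(now)
        # Keep only stalls from the last 60 seconds.
        while self.stall_times and now - self.stall_times[0] > 60:
            self.stall_times.popleft()
        return len(self.stall_times) > MAX_STALLS_PER_MIN


# Example: six stalls within a minute trips the watchdog.
watchdog = DosWatchdog()
print(any(watchdog.record_stall(now=t) for t in range(6)))  # True
```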
Misconfigurations
Misconfiguration and poor programming may leave the robot and its operating system unable to complete their intended duties with the needed degree of precision, posing a hazard to people in the vicinity and negatively impacting the software's navigation orders (Yaacoub et al., 2021). The consequences can be fatal: the first human killed by a robot was Robert Williams, who died in 1979 while working inside a Ford factory, and in 2018 Elaine Herzberg was struck by a self-driving car while crossing the road (Summer, 2021).
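One practical mitigation is to validate a robot's configuration before it is allowed to move. The following sketch is a minimal, hypothetical example (the parameter names and limits are assumptions, not drawn from the cited incidents): it rejects configurations whose speed, stopping distance, or emergency-stop settings fall outside conservative bounds for public spaces.

```python
# Hypothetical safety bounds -- real limits would come from the robot's safety requirements.
SAFE_LIMITS = {
    "max_speed_mps": 1.5,              # public pedestrian areas generally demand walking pace or slower
    "min_obstacle_stop_range_m": 0.5,  # obstacle detection must trigger well before contact
}


def validate_config(config):
    """Return a list of problems; an empty list means the configuration passes these basic checks."""
    problems = []
    if config.get("max_speed_mps", 0.0) > SAFE_LIMITS["max_speed_mps"]:
        problems.append("max_speed_mps exceeds the limit for public spaces")
    if config.get("obstacle_stop_range_m", 0.0) < SAFE_LIMITS["min_obstacle_stop_range_m"]:
        problems.append("obstacle_stop_range_m is too short to stop before contact")
    if not config.get("emergency_stop_enabled", False):
        problems.append("emergency stop is disabled")
    return problems


# Example: a misconfigured robot is caught before it is allowed to move.
issues = validate_config({"max_speed_mps": 3.0, "obstacle_stop_range_m": 0.2})
print(issues)  # all three checks fail for this configuration
```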
Spoofing Spoofing is a network attack in which an attacker impersonates another device or user. This approach is used to steal data, disseminate malware, or circumvent access constraints. In robotics, a spoofing attack may induce erroneous behavior in a robot. For example, an attacker performs GPS spoofing by transmitting false GPS coordinates to a drone's control system to alter its trajectory (Lacava et al., 2021).
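A simple defense against GPS spoofing is a plausibility check: if two consecutive position fixes imply a speed the vehicle cannot physically reach, the newer fix is treated as suspect. The sketch below is a hypothetical illustration of that idea only (the maximum speed and the haversine helper are assumptions, not taken from Lacava et al.); a fielded system would combine it with other cross-checks, such as inertial sensing.

```python
import math

MAX_PLAUSIBLE_SPEED_MPS = 25.0  # assumed top speed of the drone; tune per platform
EARTH_RADIUS_M = 6_371_000


def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two latitude/longitude points."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))


def fix_is_plausible(prev_fix, new_fix):
    """prev_fix / new_fix are (lat, lon, timestamp_s); reject fixes implying impossible speed."""
    (lat1, lon1, t1), (lat2, lon2, t2) = prev_fix, new_fix
    dt = t2 - t1
    if dt <= 0:
        return False  # out-of-order or duplicated timestamps are themselves suspicious
    implied_speed = haversine_m(lat1, lon1, lat2, lon2) / dt
    return implied_speed <= MAX_PLAUSIBLE_SPEED_MPS


# Example: a fix that jumps more than 2 km in one second is rejected as likely spoofed.
print(fix_is_plausible((48.8566, 2.3522, 0.0), (48.8700, 2.3800, 1.0)))  # False
```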
Summary
Robots in public settings are becoming more ubiquitous over time. These public robots provide several advantages, including cleaning, delivery, security, and information dissemination. However, increased robotic use in public brings ethical, legal, and cybersecurity challenges. Three main cyber risks for robots are physical security, misconfigurations, and spoofing. Each of these risks centers on taking control of the robot and making it move or perform tasks in unintended ways, which poses public safety concerns for anyone around these new public-facing companions.
References
Lacava, G., Marotta, A., Martinelli, F., Saracino, A., La Marra, A., Gil-Uriarte, E., & Mayoral-Vilche, V. (2021, September 30). Cybersecurity issues in robotics. Innovative Information Science & Technology Research Group. https://isyou.info/jowua/papers/jowua-v12n3-1.pdf
Mintrom, M., Sumartojo, S., Kulić, D., Tian, L., Carreno-Medrano, P., & Allen, A. (2021). Robots in public spaces: Implications for policy design. Policy Design and Practice, 5(2), 123-139. https://doi.org/10.1080/25741292.2021.1905342
Priyadarshini, I. (2017). Cyber security risks in robotics. Detecting and Mitigating Robotic Cyber Security Risks, 333-348. https://doi.org/10.4018/978-1-5225-2154-9.ch022
Summer. (2021, July 14). Four crazy real cases of humans killed by robots. History of Yesterday. https://historyofyesterday.com/four-crazy-real-cases-of-humans-killed-by-robots-7ab9bc0a9e38
Thomas, A. (2020, February 29). Top ten best quotes by Elon Musk on AI. Analytics India Magazine. https://analyticsindiamag.com/top-ten-best-quotes-by-elon-musk-on-artificial-intelligence/
Yaacoub, J. A., Noura, H. N., Salman, O., & Chehab, A. (2021). Robotics cyber security: Vulnerabilities, attacks, countermeasures, and recommendations. International Journal of Information Security, 21(1), 115-158. https://doi.org/10.1007/s10207-021-00545-8