Teaching “Consumer Robots” Respect for Informational Privacy: A Legal Stance on HRI
PAGALLO, Ugo
2015-01-01
Abstract
The chapter examines how a class of robots for personal or domestic use, i.e. “consumer robots,” may impact current legal frameworks of informational privacy and data protection. Since most of these robots are not simply “out of the box” machines, their behaviour will crucially depend on the ways individuals train, treat, or manage them. From the viewpoint of robot-centred HRI (human-robot interaction), different types of contact with humans, robot functionalities and roles, as well as requirements of social skills, will affect the flow of information that individuals deem appropriate to reveal, share, or transfer in a given context. From the stance of human-centred HRI, people’s interaction with consumer robots will increasingly concern the aim of embedding legal safeguards into technology and, more particularly, what scholars dub the principle of “privacy by design.” In light of different design policies and the challenges of humans as caretakers of their artificial agents, the overall aim of the chapter is to offer a hopefully comprehensive view of these issues and to address the old question of the Roman poet Juvenal in the Satires, “Who will guard the guards themselves?”