Exploring the social and ethical implications of using autonomous robots in the classroom

Author: Selena Nemorin

Artificially intelligent (AI) robotic technologies have opened up new possibilities across a range of public and private sectors, including aged care, health care, and defence. With respect to robots as tools for learning, the GLAM sector (galleries, libraries, archives, and museums) has successfully experimented with ways in which machine learning and deep neural networks can be applied to innovate its educational environments. The use of AI in these institutions has included both static machine applications and untethered interactive / social robots. For example, Sage is a robot that has been installed at the Carnegie Museum of Natural History as a full-time autonomous staff member. The goal of Sage is to provide educational content to visitors in order to augment their museum experience.

Cicerobot is another autonomous robot, used for interactive tours of the Archaeological Museum of Agrigento. Based on a cognitive architecture for robot vision and action, Cicerobot is programmed to integrate visual perception and actions with knowledge representation so that it may generate a deep understanding of its environment. These robots, some would argue, provide innovative means of human-robot interaction with crowds in public spaces, along with interactive capabilities that appeal to people's intuition: they can engage crowds by managing affective responses and, in doing so, allegedly raise attendance levels.

AI is also being introduced into the formal educational setting. As such, educators have started to use social robots to support teaching and learning in K-12 contexts. In most cases, these robots have been deployed in schools in the role of teacher, although there are instances where the robot has taken on the role of learner. Educational researchers have claimed that robots provide innovative means for teaching and learning with large groups, they possess interactive capabilities that appeal to students’ emotional responses, and they have the capacity to engage a range of student populations.

Studies have also demonstrated that robots can help students acquire science, technology, engineering, and mathematics (STEM) skills; develop problem-solving abilities; build communication skills; and assist in the rehabilitation of students with social or cognitive disorders. Robots have been particularly successful as storytellers in primary school literacy classes, and as aids in secondary school settings to support the development of English as a second language.

Indeed, the use of autonomous interactive robots in learning contexts certainly offers many benefits. But as autonomous robots are increasingly developed to assist human beings with day-to-day tasks, questions regarding the implications of human-robot interactions must inevitably be addressed. Although some scholars have explored the ethics of robot-child interactions, sustained research on the implications of robots in K-12 environments has been lacking. Important questions that must be asked centre on the social and ethical impacts of using autonomous robots in K-12 school settings. In particular, attention must be paid to at least three dimensions of human-robot interactions: affective, educational, and economic. Key areas that ought to be investigated are as follows:

Affective

  • Examine how students make sense of, trust, and engage with robots in knowledge spaces;
  • Examine the social and ethical implications of student-robot interactions.

Educational

  • Gain new insights into the effectiveness of autonomous robots as mediators for adaptive learning in educational institutions;

Economic

  • Identify how robots are being used in educational institutions – what work is being done by robots?
  • Examine the added economic value of having robots in these settings;
  • Examine the implications for teachers (e.g. deskilling and deprofessionalization).

Navigating a world that increasingly embeds robots into various dimensions of socio-economic life has profound consequences for ethics. Scholarly and public accounts of the implications of robotics have placed the human-AI relation at the centre of ethical deliberation. Social scientists have maintained that the impact of artificially intelligent robots should be examined critically, and philosophers have sought to construct ethical frameworks for the human designers of intelligent robotic platforms.

As educational researchers, we ought to take seriously growing concerns about the deployment of autonomous robots into public spaces, focusing on their effects on the school as a social institution. Understanding issues of moral responsibility, accountability, and privacy is of utmost importance to the use of robots in schools, as is an examination of the regulations governing their use. Our attention, then, ought to be directed at making sense of the implications of robot-student interactions, with a view to developing rigorous ethical standards and behaviours for individuals developing, programming, producing, and operating social robots in educational settings.