AI: how human should humanoid robots look?

How does a robot’s appearance affect our perception and our understanding of what it really is? In 2010, the United Kingdom published a set of five ethical rules for robotics – the first national-level document on AI ethics – to advise those who design, sell and use robots about their responsibilities. One of these 'Principles of Robotics' states that "robots are manufactured artefacts. They should not be designed in a deceptive way to exploit vulnerable users: instead their machine nature should be transparent." A delegate of the workshop that produced this set of rules, Dr. Joanna Bryson of the Department of Computer Science at the University of Bath, is putting this particular ethical rule to the test with a series of experiments involving humanoid robots. By investigating how humans behave around humanoid robots and examining whether making them look human actually does negatively affect people’s relationship with artificial intelligence, the project aims to provide solutions for making a robot’s machine nature explicit without hindering its potential therapeutic uses.
"People tend both to be afraid of AI and to expect too much of it. In particular, they expect it to be like a human," says Dr. Joanna Bryson. "People have in mind what movies show, but that’s science fiction. In reality, AI is very different – it’s a tool, it’s a way of programming." "For people to be safe, it’s important for them to understand that AI is a machine," the researcher explains. "For instance, it can record things you say and give away data. If people think of robots as humans, a lot of things could go wrong. It could open them up to economic exploitation. They might think they need to protect the robot, or feel bad about turning it off." At the same time, many people argue that, for therapeutic reasons, humanoid robots should also be perceived as companions. "A good compromise would be to have explicit ways to know that it’s a robot, but at the same time, implicitly feel that there’s someone in the room. That’s how it works with TVs, for instance, so why not with robots? The same goes for movies. You know it’s fiction, but you can still feel emotions."

Experimenting with humanoid Pepper robots

Dr. Joanna Bryson and her team have experimented with non-humanoid robots in the past. The AXA Award on Responsible Artificial Intelligence allows them to experiment with humanoid robots this time and to compare the results with their previous findings. Dr. Bryson’s group will use advanced humanoid Pepper robots in a variety of scenarios. Their first experiment will test whether being able to see the robot’s goals in real time – via screens, for instance – helps people understand how AI works. "A window into the robot’s brain," as Dr. Joanna Bryson puts it. "Exposing users to the robot’s priorities and reasoning in a graphical user interface could be a solution, even for extremely humanoid robots." The other studies will be ordinary psychology experiments observing how people behave with a robot in the room.

Robots will play an increasingly important role in the future – at home, at work, in institutions, and elsewhere. While some argue that humanity will benefit greatly from the rise of the robots, others advocate caution. Human/robot relations might prove tricky, especially when it comes to increasingly human-like robots. By investigating how a robot’s appearance affects human/machine interactions and by testing ways to make their machine nature explicit, Dr. Bryson’s experiments will contribute greatly to our understanding of what humanoid robots should look like to allow safe use in the future. In addition to educating people about robots, she and her team aim to influence policy; in particular, their objective is to help answer some of the European Union’s concerns about robot ethics.

Joanna BRYSON

Institution

University of Bath

Country

United Kingdom

Nationality

American
