Rebellious robots: the latest in robotic research
Fri, 25th Oct 2019

The latest in human-robot research is a handheld robot designed to deliberately frustrate its human counterpart in a bid to develop better robotic cooperation. Computer scientists at the University of Bristol are working on a handheld robot that first predicts the user's plans and then rebels against them.

According to the researchers, cooperation between humans and machines is an essential aspect of automation, and this new research shows that frustrating people on purpose is part of the process of developing robots that cooperate better with users.

The intelligent handheld robots are programmed to complete tasks in collaboration with the user. In contrast to conventional power tools, which know nothing about the tasks they perform and are fully under the user's control, the handheld robot holds knowledge about the task and can help through guidance, fine-tuned motion and decisions about sequencing.

The latest research in this space, by PhD candidate Janis Stolzenwald and Professor Walterio Mayol-Cuevas from the University of Bristol's Department of Computer Science, explores intelligent tools that can bias their decisions in response to the user's intentions.

So while the robots can help fulfil tasks more quickly and with higher accuracy, their decisions are not always in line with the user's plans, resulting in frustration.

According to Stolzenwald and Mayol-Cuevas, this is a new and interesting twist on human-robot studies as it aims to first predict what users want and then go against these plans.

Mayol-Cuevas says, “If you are frustrated with a machine that is meant to help you, this is easier to identify and measure than the often elusive signals of human-robot cooperation. If the user is frustrated when we instruct the robot to rebel against their plans, we know the robot understood what they wanted to do.

“Just as short-term predictions of each other's actions are essential to successful human teamwork, our research shows integrating this capability in cooperative robotic systems is essential to successful human-machine cooperation.”

For the study, the researchers used a prototype that can track the user's eye gaze and derive short-term predictions of intended actions through machine learning. This knowledge is then used as a basis for the robot's decisions, such as where to move next. The Bristol team trained the robot using a set of over 900 training examples from a pick-and-place task carried out by participants.
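To illustrate how gaze-driven intention prediction of this kind can work in principle, the short Python sketch below trains a small classifier on gaze features and scores candidate targets. It is not the Bristol team's code; the feature names, values and model choice are assumptions made for the example.

# Illustrative sketch only: a minimal gaze-based intention classifier.
# Features and values are invented; this is not the researchers' model.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Each training example describes recent gaze behaviour relative to one
# candidate target: [dwell_time_s, gaze_distance_m, time_since_fixation_s]
X_train = np.array([
    [0.80, 0.02, 0.1],   # long dwell, gaze close to target -> intended
    [0.10, 0.40, 1.2],   # brief glance, far away           -> not intended
    [0.60, 0.05, 0.3],
    [0.05, 0.55, 2.0],
])
y_train = np.array([1, 0, 1, 0])  # 1 = user intends to act on this target

model = RandomForestClassifier(n_estimators=50, random_state=0)
model.fit(X_train, y_train)

# At run time, score every reachable target and pick the most likely intent.
candidates = np.array([
    [0.70, 0.03, 0.2],   # target A
    [0.20, 0.30, 0.9],   # target B
])
probs = model.predict_proba(candidates)[:, 1]
predicted_target = int(np.argmax(probs))
print(f"Predicted intended target: {predicted_target} (p={probs[predicted_target]:.2f})")

In a real system the prediction would be refreshed continuously as new gaze samples arrive, giving the robot a rolling short-term estimate of what the user wants to do next.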

Core to this research is the assessment of the intention-prediction model. The researchers tested the robot under two conditions, obedience and rebellion, in which the robot was programmed to either follow or disobey the predicted intention of the user. Knowing the user's aims gave the robot the power to rebel against their decisions.
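In outline, the two conditions amount to the robot either selecting the predicted target or deliberately choosing a different one. The Python sketch below shows that logic in a simplified form; the function and target names are invented for illustration and are not taken from the study.

# Illustrative sketch only: how the obedience and rebellion conditions
# could be wired up around an intention prediction.
import random

def choose_robot_goal(predicted_target, all_targets, condition):
    """Pick where the robot moves next under each study condition."""
    if condition == "obedience":
        # Follow the user's predicted intention.
        return predicted_target
    if condition == "rebellion":
        # Deliberately pick any target other than the predicted one.
        alternatives = [t for t in all_targets if t != predicted_target]
        return random.choice(alternatives)
    raise ValueError(f"unknown condition: {condition}")

targets = ["slot_A", "slot_B", "slot_C"]
print(choose_robot_goal("slot_A", targets, "obedience"))   # -> slot_A
print(choose_robot_goal("slot_A", targets, "rebellion"))   # -> slot_B or slot_C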

The difference in frustration responses between the two conditions served as evidence for the accuracy of the robot's predictions, thus validating the intention-prediction model.

Stolzenwald, a PhD student sponsored by the German Academic Scholarship Foundation and the UK's EPSRC, conducted the user experiments and identified new challenges for the future.

He says, “We found that the intention model is more effective when the gaze data is combined with task knowledge. This raises a new research question: how can the robot retrieve this knowledge? We can imagine learning from demonstration or involving another human in the task.”
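One simple way to picture that combination is to weight the gaze-based intent scores by a prior over which steps plausibly come next in the task. The Python sketch below shows the idea; the fusion rule and all numbers are assumptions for illustration, not the researchers' method.

# Illustrative sketch only: fusing gaze evidence with task knowledge.
import numpy as np

def fuse_intent(gaze_probs, task_prior):
    """Combine gaze-based intent scores with a prior over plausible next steps."""
    combined = np.asarray(gaze_probs) * np.asarray(task_prior)
    return combined / combined.sum()  # renormalise to a probability distribution

# Gaze alone slightly favours target 0, but the task sequence says target 2
# is the only sensible next step (e.g. a screw that must be removed first).
gaze_probs = [0.40, 0.35, 0.25]
task_prior = [0.10, 0.10, 0.80]
print(fuse_intent(gaze_probs, task_prior))  # target 2 now dominates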

In preparation for this new challenge, the researchers are currently exploring shared control, interaction and new applications in their studies of remote collaboration through the handheld robot.

A maintenance task serves as the user experiment, in which a handheld robot user receives assistance from an expert who remotely controls the robot. The research builds on the handheld robot designed and built by former PhD student Austin Gregg-Smith, which is available as an open-source design via the researchers' site.