Sarvapali Gopal Ramchurn
|Nationality||British, Mauritian|
|Year of selection||2017|
|Institution||University of Southampton|
|Risk||Data & Tech Risks|
|Type of support||250 000 €|
Artificial intelligence (AI) is emerging as the defining technology of our age, allowing us to do things we never thought possible. For its benefits to be truly and safely accessible, though, key challenges still need to be resolved, foremost among them the development of responsible AI. Indeed, the creation of intelligent machines that mimic human capacities is radically transforming human-computer interaction, calling for algorithms that can account for human needs, perceptions, behaviours and expectations. "So, how do we design these interactions in a way that's responsible?" That is the question at the heart of Dr. Gopal Ramchurn's scientific programme. More specifically, his research aims to address key technical and practical challenges in the development of responsible intelligent agents and machine learning (ML) systems. The overall objective is twofold: to establish some of the underpinning methodologies and algorithms, and to contribute insights to the legal and insurance communities by establishing the range of risks that AI systems expose organisations and end-users to.
"The last few years have seen a significant rise in the development of machine learning approaches, with great success in specific areas such as game playing and time-series prediction for traffic monitoring, epidemiology, and disaster response applications," the researcher reports. "In fact, the pace of change in the field over the last decade has been too fast for those who use, operate, and regulate systems that end up relying on such AI-based solutions." To illustrate this gap between theory and application, Dr. Ramchurn cites the example of autonomous UAVs (unmanned aerial vehicles), one of the key areas his investigation will focus on. "The Civil Aviation Authority in the UK has struggled to design the rules for flights involving purely autonomous UAVs, let alone fleets of UAVs! Such systems are therefore liable to major failures that may negatively impact the organisations running them and, more importantly, their end-users." "Take Uber, for instance. When they designed the system, they failed to account for the impact on the drivers. If drivers want to make any decent money, they need to work long hours without any job security. Technology can completely change an industry and impact lives negatively," he presses. "Improving systems afterwards is not efficient enough. These issues need to be thought through beforehand." This is what the research programme is about: building a methodology to ensure future algorithms are more responsible.
Allowing AI and humans to work hand in hand
His programme will focus on two key application areas: the use of drones for disaster response and the use of IoT systems for energy conservation in smart homes. "Building on a number of existing results we already have on these two kinds of AI systems, we're going to try and develop methods that allow these AI systems to make sense of what is going on around them and to take decisions we can trust," Dr. Ramchurn explains. Among the questions the project investigates are the design of algorithms that account for the risks they expose others to, design principles ensuring that interactions are understandable to end-users as well as fair and sustainable, the preservation of privacy, and the development of "responsibility" within the reasoning of ML systems. To answer these questions, the research programme collaborates with experts from many different domains: social psychologists, ethnographers, human-computer interaction specialists, and legal experts.
Scientists are constantly trying to find new ways to bridge the gap between humans and machines. In the past, this effort led to the invention of keyboards, mice and touch screens. Now, with AI, interactions have become far more complex. "New issues have arisen, such as humans and machines acting as equal members of a team." With new AI technologies about to emerge, it is urgent that we find answers on how to ensure harmonious human-computer interactions. By aiming to develop a methodology for responsible AI, Dr. Ramchurn's research addresses questions that are about to become of the utmost importance. In this sense, the project aligns closely with existing AXA-funded projects, including those of Prof. Christophe Marsala, Prof. Maurizio Filippone and Dr. Joanna Bryson.