Artificial Intelligence: fostering trust through research

2018.09.24

How can we ensure data security and privacy for all at a time when Artificial Intelligence (AI) is being democratized? How can we adapt a relevant ethical and regulatory framework vis-à-vis this technological revolution? How can we increase algorithm transparency? Around the world, researchers are working on these issues and, more broadly, raising the question of trust. In order to share the latest thinking on this fundamental subject, the AXA Research Fund has dedicated its latest Research Guide to AI and Trust.

AI provides recommendations, guides us through the online world and helps us manage our personal finances. It translates our ideas into multiple languages simultaneously, automatically improves our photos and answers the practical questions we may ask... “Even though we may not always be aware of it, Artificial Intelligence (AI) is already part of our daily lives”, explained Jad Ariss, AXA Group Head of Public Affairs and Corporate Responsibility. “And its future potential is improving every day: for example, in the field of health we are finding that AI is capable of detecting certain cancers with a success rate comparable to, or surpassing, that of radiologists. In the insurance sector, AI also offers many opportunities to create new services and to better meet the needs of our clients.”

Download the guide

Moving beyond the polarized debate

Security, privacy, access to data, algorithm transparency... AI also raises concerns:

“We realize that the public debate around AI has become polarized. On one side are the advocates of a ‘solutionist’ approach, who expect AI to solve all of society’s problems; on the other side are those with a more ‘apocalyptic’ vision, in which AI threatens human autonomy”, according to Cécile Wendling, Group Head of Foresight at AXA and Member of the European Commission’s High-Level Expert Group on AI. “However, we must avoid shortcuts: the subject is complex, multifaceted, and must be addressed in a truly holistic manner.”

With potential impacts on so many aspects of our lives, it is essential that we have a societal and collective debate on AI, both politically and ethically, according to Lawrence Lessig, Professor of Law and Leadership at Harvard Law School and Member of the Scientific Advisory Board of the AXA Research Fund.

Informed discussion to ensure responsible Artificial Intelligence

“This is where science comes in,” said Jad Ariss. “It gives us the tools for an informed discussion, and helps us better prepare for the AI revolution in a responsible way.” It is with this in mind that the AXA Research Fund has recently published a Research Guide on the subject, giving voice to leading figures such as Lawrence Lessig and Raja Chatila, as well as to several AXA-supported researchers around the world (Antonio Acin, Alexandre d'Aspremont, Dominique Boullier, Joanna Bryson, Robert Deng, Maurizio Filippone, Phillip Hacker, Christophe Marsala and Paul Ohm), and to AXA experts.

These viewpoints provide a snapshot of the latest research on the subject and allow us to examine a concept central to the reflection on AI: trust. “Supporting research helps us to better understand the world around us and the key issues we face, now and in the future”, said Marie Bogataj, Head of the AXA Research Fund. “Artificial Intelligence is one of these issues. It is paramount that we give researchers a platform: their work will help us build an informed path towards responsible AI.”

Discover the Research Guide on Artificial Intelligence:

Ensuring data security and privacy

How can we ensure data security and privacy in a world where the use of data is increasing?

Tailoring the ethical and regulatory framework

How can we adapt ethical rules and legislation to AI developments as innovation in the field continues to accelerate?

Strengthening accountability through transparency

How can we ensure the auditability and transparency of code, without which the legal accountability of Artificial Intelligence cannot be guaranteed?