
AI Regulation and the Limits of Transparency
Dr. Joanna Bryson


2021.07.04


Is there such a thing as trustworthy AI? Does anthropomorphism interfere with transparency? How can we improve systems’ transparency? During AXA’s Security Days – an internal event that brought together AXA Group security teams to discuss the future of security – Joanna Bryson, Professor of Ethics and Technology at the Hertie School in Berlin and an AXA Award recipient for AI Ethics, discussed the issues of AI regulation and the limits of transparency. Read below an abridged version of her remarks.

One of the original definitions of intelligence comes from the 19th century, when we were trying to decide which animals were intelligent: it is the capacity to do the right thing at the right time, which is a form of computation. It entails transforming information about context into action. This very general definition includes plants and thermostats. With this definition, it's easy to define AI as anything that behaves intelligently and that someone deliberately built. As a result, the thermostat is in, and the plant is out.
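A minimal sketch of this definition in code, assuming a simple on/off heater: the deliberately built "intelligent" artifact transforms information about context (a temperature reading) into action (switching the heater).

```python
# A minimal sketch of a thermostat as a deliberately built "intelligent" system:
# it maps information about context (current temperature) to an action.
def thermostat(current_temp: float, target_temp: float, tolerance: float = 0.5) -> str:
    """Decide the 'right thing at the right time' for a simple heater."""
    if current_temp < target_temp - tolerance:
        return "heater_on"
    if current_temp > target_temp + tolerance:
        return "heater_off"
    return "hold"  # within tolerance: do nothing

print(thermostat(18.0, 21.0))  # -> "heater_on"
```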

The important thing for transparency is that this deliberation implies responsibility, at least in human adults. We're only talking about human adults because only they can really be held legally accountable. Ethics is the way a society defines and secures itself, and the fundamental components of an ethical system are the moral agents. They are the ones our society considers to be responsible. Moral patients are the ones that they are responsible for, and that can include things like the ecosystem. Different societies define moral agents and moral patients differently. Some don’t recognize the moral agency capacities of women or minorities, for example. We construct our society out of the way we define these agents and these patients.

These definitions only work to the extent that moral agents are roughly peers. There will be leaders – kings or presidents – but there is still more or less equality. Even an autocratic leader can't completely determine what people do. This is the key when we're thinking about how ICT is changing society. How are we going to handle enforcement when we enact laws?

I don’t talk about trustworthy AI because AI isn't the kind of thing you trust. You can only trust peers. We can exploit the psychological sensation of trust and talk about trust in governments or robots, but it's not coherent. Trust is a peer-wise relationship where you say, "I'm not going to try to micromanage you." When we think about enforcement, which is important for understanding transparency, then we must think about peers.

It's not that we should trust corporations and governments. We should hold them accountable. That's why we want transparency. And that's what the new Digital Services Act is about: how can we make sure that we know what's going on with digital artifacts?

AI is not a peer. The ways we enforce law and justice have to do with dissuasion much more than recompense. If the robot itself or an AI company does something wrong, it can simply be ordered to pay a fine, but discovering and proving the problem is unfortunately very unlikely. So, we must also dissuade, and dissuasion is based on what humans do or do not like. We really care about not going to jail or not losing our money, but we can't build that into AI. We can't guarantee that a system we build will feel the kind of systemic aversion that animals feel towards isolation. Safe AI is modular. That's how we can make sure that we know how it works; we construct systems that allow us to trace and assign accountability.

If we let AI itself be a legal agent, it would be the ultimate shell company. But what about robots? This comes back to the work that AXA has been funding: not even your robots are your peers. And I find it astounding. Here’s one of the robots that AXA has funded for us. This robot cannot do anything to help this gentleman, for example, get up off the couch. It's too fragile. And it's amazing to me that we can even consider whether a robot could be left to take care of the elderly.

Robots are designed and owned, which means we can't even think about consensual relationships. We're not talking about trust. A robot is basically an extension of a corporation with cameras and microphones in your home. Is that a good idea? Anthropomorphism means we see a thing we use and start thinking it's a person like us. We accommodate it because we find it convenient and don't worry too much about security. We start thinking of it as a member of our household. This is not necessarily a conscious decision. Some natural language processing researchers believe we cannot ethically put natural language into someone's house, because all these language-speaking toys affect how families interact with each other. They affect the language we use, because humans naturally anthropomorphize. We naturally try to accommodate. The flip side is dehumanization, which unfortunately we also do. When we feel threatened, we can decide we don't want to deal with something that's too different from us. Instead, we exclude. I got into AI ethics because I was astounded by this phenomenon and didn’t understand, as someone who built robots, why people felt that they owed ethical obligations to robots. It has to do with this inclusion/exclusion process.

Does anthropomorphism interfere with transparency? Can we help humans understand that a robot is an artifact, that it's an extension of a corporation, and how to ensure it is safe to have in their homes? Or is the bias too ingrained? We've played with this idea. Just putting a bee costume on a robot alters how people understand it. We have a system for showing people how robots work, and we know this increases human understanding. Getting people to understand the goals of the robot helps them reason about its behavior, but not perfectly, unfortunately.

Digital systems can easily be made transparent. This doesn't mean every digital system is transparent; it's also easy to make them non-transparent. The point is that, since an AI system is an artifact, we can design it and we can keep track of how we design it. What we're trying to audit is not the micro details of how AI works but rather how humans behave when they build, train, test, deploy, and monitor AI. What's essential is showing that humans did the right things when they built, tested, and deployed the software.
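A rough sketch of what such an audit trail might record; the stage names and fields are illustrative assumptions rather than an established schema.

```python
# Illustrative sketch: an audit record of human actions across the AI lifecycle.
# Field names are assumptions for illustration, not an established standard.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class AuditRecord:
    stage: str      # "build", "train", "test", "deploy", or "monitor"
    actor: str      # who carried out the step
    artifact: str   # what was produced or changed (commit, model, report)
    timestamp: str  # when it happened

def log_step(stage: str, actor: str, artifact: str) -> AuditRecord:
    """Create one entry in the audit trail."""
    return AuditRecord(stage, actor, artifact,
                       datetime.now(timezone.utc).isoformat())

record = log_step("test", "qa-team", "model-v3 evaluation report")
print(record)
```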

We're trying to maintain order within our own society. A good, maintainable system that deserves to be a legitimate product includes things like an architecture: we have an idea of what modules are in there and where they came from. As the SolarWinds example shows, you need to know the provenance of your software, but you also need to know its components.
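A minimal sketch of taking stock of components in a Python system, using the standard library's importlib.metadata to enumerate installed packages; comparing the result against a known-good inventory is left to the auditor.

```python
# A minimal sketch: enumerate the components of a deployed Python system so
# their provenance can be recorded and checked against a known-good inventory.
from importlib.metadata import distributions

def component_inventory():
    """Return sorted (package, version) pairs for every installed distribution."""
    return sorted((dist.metadata["Name"], dist.version) for dist in distributions())

if __name__ == "__main__":
    for name, version in component_inventory():
        print(f"{name}=={version}")
```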

If you're planning a building or any other kind of process, you design and document its components and the processes for its development, use, and maintenance. For a digital system, including one with AI, you also have to secure it: this includes the development and operation logs, and the provenance of software and data libraries. If you're using machine learning, you need to be sure about the provenance of the data. All these things must be cyber-secure, and keeping this all straight is called development and operations (DevOps). It helps us write our software better. Good software companies have been doing this for decades.
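A minimal sketch of recording data and configuration provenance, assuming local CSV training data; the file and parameter names are illustrative assumptions.

```python
# Sketch: pin down the provenance of training data and configuration so a run
# can be audited later. File names and parameter names are illustrative.
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Hash a data file so the exact version used for training can be identified."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def provenance_manifest(data_files: list[Path], params: dict) -> dict:
    """Bundle data hashes, parameters, and a timestamp into one auditable record."""
    return {
        "created": datetime.now(timezone.utc).isoformat(),
        "data": {str(p): sha256_of(p) for p in data_files},
        "parameters": params,
    }

# Illustrative data file; in practice this would be the real training set.
data_file = Path("training_data.csv")
data_file.write_text("age,claims\n30,1\n45,3\n")

manifest = provenance_manifest([data_file], {"learning_rate": 0.01, "epochs": 20})
Path("provenance.json").write_text(json.dumps(manifest, indent=2))
print(json.dumps(manifest, indent=2))
```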

But AI companies are often not doing it, and it isn’t clear why. You can document, with secure revision control, every change to the code base. It’s helpful for programmers to be able to see who changed what and why. For machine learning, you also need to keep track of your data libraries and the model parameters. Unbelievably, people in machine learning often cannot go back and replicate their own results. You also need to keep logs of testing. For the last several decades, we in software have been “programming to test”: you think beforehand about how you want the system to work, and then you document whether you achieved those goals. You write the tests before you write the code.
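A minimal sketch of "programming to test", built around a hypothetical risk_score function (an assumption for illustration, not a real system): the tests state how the code should behave and are written before the implementation.

```python
# Sketch of writing the tests first. The function `risk_score` and its expected
# behaviour are illustrative assumptions.
import unittest

def risk_score(age: int, claims: int) -> float:
    """Implementation written *after* the tests below were agreed."""
    return min(1.0, 0.01 * claims + 0.001 * max(age - 18, 0))

class TestRiskScore(unittest.TestCase):
    def test_score_is_bounded(self):
        self.assertGreaterEqual(risk_score(30, 0), 0.0)
        self.assertLessEqual(risk_score(90, 500), 1.0)

    def test_more_claims_never_lowers_score(self):
        self.assertGreaterEqual(risk_score(30, 5), risk_score(30, 1))

if __name__ == "__main__":
    unittest.main()
```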

If you have a system that is not changing, neither through machine learning nor through corrosion, then you may only test up front and then release. Otherwise, testing should be frequent, even continuous. Companies like Facebook have enormous numbers of processes running to check in real time that nothing looks like it is going wrong. Normally, testing is done in advance, and monitoring and testing are continuous during deployment. Again, records should be kept for the benefit of developers as well as subsequent auditors.
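A minimal sketch of continuous monitoring during deployment, assuming a deployed classifier whose share of positive predictions is compared against a reference rate observed during testing; the rates and threshold are illustrative assumptions.

```python
# Sketch of a continuous monitoring check during deployment. The reference
# rate and alert margin are illustrative assumptions.
import logging

logging.basicConfig(level=logging.INFO)

REFERENCE_POSITIVE_RATE = 0.12   # positive-prediction rate observed during testing
ALERT_MARGIN = 0.05              # how far the live rate may drift before alerting

def monitor_batch(predictions: list[int]) -> None:
    """Log an alert if the live positive-prediction rate drifts from the reference."""
    rate = sum(predictions) / len(predictions)
    if abs(rate - REFERENCE_POSITIVE_RATE) > ALERT_MARGIN:
        logging.warning("Drift detected: live rate %.3f vs reference %.3f",
                        rate, REFERENCE_POSITIVE_RATE)
    else:
        logging.info("Batch OK: live rate %.3f", rate)

monitor_batch([1, 0, 0, 0, 1, 0, 0, 0, 0, 0])  # rate 0.2 -> triggers a drift warning
```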

If you're good with digital technology, transparency should be easy, and the best AI companies should have simple, transparent processes. Yet at Google, people who were brought in specifically to do ethics got fired for writing a paper about the ethics of natural language processing. The bigger issue about transparency for me, of course, was ATEAC, the Advanced Technology External Advisory Council of outside experts that Google put together: they couldn't even communicate internally about what they were doing and why, and they also had problems with external relations. It left me wondering why one of the world’s leading communication companies cannot do transparency!

I have thought of three reasons:

1. The sheer combinatorial complexity of these systems.
2. Polarization, which makes people feel vulnerable.
3. Multiple conflicting goals pulling in different directions.

What can we do? On complexity, we are doing amazing things with combinatorics. It will never be perfectly solved, but the work we are doing in quantum computing, and in bringing people together to be able to communicate, is changing the world. With polarization, we need to reduce vulnerability. If people feel like they are going to go bankrupt, lose their home, or lose their children, it is plausible that reducing their risk profile matters more to them than having a riskier opportunity to do better, which is what comes from working with more diverse groups. This problem can be solved through infrastructure and investment.

As for multiple conflicting goals, the best way to solve that is iteratively, through iterative design. This is what governance and politics are all about. People tend to think that we've done something wrong because we're in a broken situation, but it’s natural that innovations lead to new problems to solve. When we talk about regulation in biology, it is about keeping things going, and it often involves oscillations. We aren't necessarily looking for a solution that’s going to last forever. We are looking for a solution that we can apply regularly. If it is every five years, or every 10 or 30 years, and we can keep ourselves in a more or less sustainable balance, then it is fine. Ultimately, perpetuating anything forever is also intractable, but let's just keep it going for another billion years.

About Prof. Joanna Bryson

Joanna Bryson is Professor of Ethics and Technology at the Hertie School in Berlin. Her research focuses on the impact of technology on human cooperation and on AI/ICT governance. From 2002 to 2019 she was on the Computer Science faculty at the University of Bath. She has also been affiliated with the Department of Psychology at Harvard University, the Department of Anthropology at the University of Oxford, the School of Social Sciences at the University of Mannheim, and the Princeton Center for Information Technology Policy. During her PhD work, she observed the confusion generated by anthropomorphized AI, leading to her first AI ethics publication, “Just Another Artifact,” in 1998. In 2010, she co-authored the first national-level AI ethics policy, the UK's Principles of Robotics. She holds degrees in psychology and artificial intelligence from the University of Chicago (BA), the University of Edinburgh (MSc and MPhil), and the Massachusetts Institute of Technology (PhD). Since July 2020, Prof. Bryson has been one of nine experts nominated by Germany to the Global Partnership on Artificial Intelligence. Professor Bryson has received an AXA Award on Responsible Artificial Intelligence for her project on dealing with humanoid robots.