Socio-economy & New Tech

Artificial Intelligence

Post-Doctoral Fellowships

Germany

Fairness in Machine Learning: Algorithmic Discrimination and Exploitation as a Challenge for EU Law

Artificial Intelligence (AI) is advancing at an exponential pace. Machine learning (ML) algorithms are now able to sift through and interpret massive amounts of data, promising considerable benefits for businesses and economies. However, the rapid adoption of these powerful tools carries significant risks, including unfair differentiation between individuals. “The research topic has received increasing attention in computer science and economics, but less attention in the legal literature”, explains Dr. Philipp Hacker of Humboldt-Universität zu Berlin. As examples of discrimination and exploitation in machine learning have already been observed, tackling the regulatory challenges resulting from this gap has become essential. The project aims to analyze how EU law currently responds to such risks and to explore the possibility of novel regulatory tools.
Machine learning algorithms enable the swift extrapolation of patterns from large datasets. Companies now capture enormous amounts of customer data in order to target each individual with more tailored products. “However, the project starts from the observation that not all differentiations driven by ML are benign; rather, some must arguably be considered illegitimate”, as Dr. Hacker explains. To stress his point, he mentions the recent example of a beauty contest judged by an AI agent that turned out to be biased against women of color. “Imagine the proportions this problem would take if similar biases affected decisions about credit applications, for instance”.

The other risk the project aims to address is the exploitation of cognitive vulnerability. As algorithms increasingly adapt contractual offers to individual characteristics, there is a significant risk that contractual content is tailored not only to the preferences, but also to the cognitive weaknesses that become apparent in the datasets. This may pay off for companies, but it “violates basic norms of fairness”. Hence the need for legal safeguards. “Trust is another reason why we need to address these risks”. Such abuse damages trust in machine learning systems, which are already viewed with suspicion. “Trust in the systems we interact with on a daily basis, be they political institutions or technological systems, is one of the most important resources of our societies. This trust is currently eroding at a dangerous pace. With respect to AI, it needs to be fostered not only for instrumental reasons, but also because it positively impacts on the well-being of those affected by AI.”
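
To make the risk concrete: computer scientists often quantify such differentiation as a demographic-parity gap, i.e. the difference in favorable-outcome rates between groups. The following minimal Python sketch is a hypothetical illustration with invented group labels and data, not part of the project described here; it computes that gap for toy credit-approval decisions.

    # Hypothetical illustration: measuring "unfair differentiation" as a
    # demographic-parity gap. Groups, data, and names are invented.
    from collections import Counter

    def approval_rates(decisions):
        """decisions: list of (group, approved) pairs -> approval rate per group."""
        totals, approved = Counter(), Counter()
        for group, ok in decisions:
            totals[group] += 1
            approved[group] += ok
        return {g: approved[g] / totals[g] for g in totals}

    def parity_gap(decisions):
        """Largest difference in approval rates between any two groups."""
        rates = approval_rates(decisions)
        return max(rates.values()) - min(rates.values())

    # Toy data: algorithmic credit decisions for two groups, A and B.
    sample = [("A", 1)] * 80 + [("A", 0)] * 20 + [("B", 1)] * 55 + [("B", 0)] * 45
    print(approval_rates(sample))  # {'A': 0.8, 'B': 0.55}
    print(parity_gap(sample))      # 0.25

On this toy data, group A is approved 80% of the time and group B only 55% of the time, precisely the kind of disparity in credit decisions that raises the legal questions the project examines.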

How to adapt the EU legal framework to the challenges of the future

The question Dr. Philipp Hacker asks is: how do we assure people, at the legal level, that ML algorithms make ‘fair’ and trustworthy decisions? To provide an informed answer, the researcher intends to proceed in consecutive steps, starting with questions about the current doctrinal structure offered by EU law in this context: Are current regulatory strategies efficient? What are their limits? He will then be able to tackle the issue of new regulatory tools to address the shortcomings of the existing ones. This step of the project will draw primarily on technologically informed regulation. Two main strategies will be investigated: personalized law, and what may be called ‘principled coding rules’. The first uses ML to detect the potential vulnerability of a data subject in order to then provide specific legal protection; the second infuses regulation directly into algorithms by intervening during the coding process, as sketched below. “Personalized law brings ML technology to regulation; conversely, in principled coding, regulation is infused into coding”, summarizes Dr. Hacker. Both strategies have their upsides and downsides: personalized law, notably, allows for more flexibility but creates its own problems of privacy protection, as regulators, too, gain access to citizens’ data.
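
The ‘principled coding’ idea can be sketched in the same vein. The fragment below is again hypothetical: the legal bound, the exception name, and the gate logic are assumptions made for illustration, not the project’s actual design; it reuses parity_gap() and sample from the earlier sketch.

    # Hypothetical "principled coding" gate: a regulatory bound is enforced
    # inside the decision pipeline itself. The bound is assumed for
    # illustration and is not an actual EU rule.
    LEGAL_PARITY_BOUND = 0.10

    class FairnessViolation(Exception):
        """Raised when decisions breach the coded fairness rule."""

    def release_decisions(decisions, bound=LEGAL_PARITY_BOUND):
        """Release algorithmic decisions only if the coded rule is satisfied."""
        gap = parity_gap(decisions)
        if gap > bound:
            # The regulatory principle acts ex ante, in code, not ex post in court.
            raise FairnessViolation(f"parity gap {gap:.2f} exceeds bound {bound}")
        return decisions

    # release_decisions(sample)  # would raise: gap 0.25 > 0.10

The design point the sketch highlights is that the rule binds at release time: decisions breaching the coded bound never reach the affected individuals, rather than being challenged after the fact.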

Regulation of the digital economy is one of the most pressing and cutting-edge topics of legal research. Dr. Hacker’s approach innovatively moves beyond the current fixation on privacy in the area of big data and machine learning to address topics that have so far received too little attention, namely discrimination and exploitation. Treating ML both as an object of regulation and as a regulatory means is also novel. Finally, the simultaneous treatment of discrimination and exploitation risks holds great promise of cross-fertilization in the design of novel, state-of-the-art regulatory solutions.

Philipp HACKER

Institution

Humboldt-Universität zu Berlin

Country

Germany

Nationality

German
