The title is linked to my memory of the early internet days. President Bill Clinton, referring to the Chinese government’s intention to control the internet, said: “That’s sort of like trying to nail Jello to the wall”. http://www.techlawjournal.com/cong106/pntr/20000308sp.htm China proved him wrong. The similarity between the concerns about the internet then and the worries about AI today is striking.
An EU regulation of AI is around the corner. The EU AI White Paper published on 19 February 2020 is the first step. We can expect a proposal by the Commission in early 2021. What it will say, we do not know yet, but the White Paper presents an opportunity to contribute to its shape and scope. The guiding principle is to establish trust in AI and to foster a human-centric application of AI, as elaborated by the High-Level Group (HLG) set up by the Commission.
Would a generic AI regulation based on principles be adequate? This seems to have been the first idea, using the recommendations of the HLG as input and translating them into legal provisions. This is way too simple an approach to capture the complex issues related to AI. In the White Paper the Commission favours a risk-based approach concentrating on specific AI applications and sectors. So, how will this play out?
If a regulator targets a specific technology, it must define this technology precisely. This is a complication with AI, which escapes a precise definition and is still developing. In addition, AI does not work stand-alone but enhances products and services. Thus, singling out the algorithms that add features to a product or service, for instance a more agile distance warning or automatic parking capabilities in a car, looks exaggerated and impractical. A ‘generic AI regulator’ would need to understand all sector-specific issues thoroughly, e.g. in the automotive industry or in the health area, and be able to separate AI-related issues from the other parts of these products.
Policy makers should treat AI as part of specific regulation as far as it raises risks. For instance, cars increasingly contain electronic components powered by AI algorithms, which must comply with existing safety, health, and environment legislation. As we move towards automated driving, those AI components will be subject to particular conformity tests, based on evidence and the broader transport policy. Generic AI laws do not fit here. So the question is whether applicable provisions, for instance regarding electrical appliances or medical equipment, need revising because of new capabilities. In the EU, the product safety, liability and machinery directives are candidates for a review.
What about services? To what extent will AI have harmful or unethical effects? Regulators will pay particular attention to services which affect human life, such as the justice system, public safety, insurance, banking, personnel management, education or media. All these sectors are regulated or governed by public rules. In addition, data protection rules (GDPR) apply horizontally to private data used in these services, and implicitly to AI. Thus regulators will need a specific gap analysis for each service field to answer the question of what AI introduces to these services that would call for additional rules. Some examples:
Justice system: Decision making about parole for prisoners. AI predicts the likelihood of reoffending.
Education: Granting access to a university course, for instance to medical studies. Offering grants and scholarships to promising candidates.
Access to training schemes for unemployed people: Based on the person’s profile and career history, predicting which re-training scheme promises success.
Insurance: Determining a risk premium or offering discounts taking into account personal lifestyle.
In the above examples, concerns could be raised about discrimination, privacy, and lack of explainability. However, we need to analyse each case individually. General provisions, for instance to introduce a non-discrimination clause, are not effective. Whom do they address? What do they mean?
Let’s go further and simulate such a gap analysis using two examples. One is public face recognition, the other hiring procedures.
Face Recognition
AI is good at face recognition, but not flawless. Several reports mention biases in the algorithms and unreliable results, which mainly go back to non-representative data sets. Privacy advocates warn about the usage of this technology as it can lead to discrimination and can negatively affect people’s lives. The discussion around face recognition is gaining speed and itself becoming biased, with people taking default positions, either negative or positive. So, let’s unpack it.
First, let us distinguish face recognition as an authentication tool from identification. Authentication is the confirmation, a proof, of someone’s identity, which the application or device already knows: I claim to be the owner of that smartphone and my face acts like a password. Face recognition as an identification tool, by contrast, finds something out about people, in particular their identity and related information, for instance their social media activities. It is this ‘who is’ function of face recognition that dominates the discussion.
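To make the distinction concrete, here is a minimal sketch in Python: authentication is a 1:1 comparison against a single stored template, identification a 1:N search over a database. The embed function is a hypothetical placeholder, not a real model; actual systems use trained face-embedding networks and carefully calibrated thresholds.

```python
# Illustrative sketch only: 1:1 verification (authentication) vs 1:N search (identification).
import numpy as np

def embed(face_image: np.ndarray) -> np.ndarray:
    """Hypothetical placeholder for a trained face-embedding model."""
    v = face_image.flatten().astype(float)
    return v / (np.linalg.norm(v) + 1e-9)

def similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b))

def authenticate(probe, enrolled_template, threshold=0.8) -> bool:
    # 1:1 check: does the probe match the single template the device already stores?
    return similarity(embed(probe), enrolled_template) >= threshold

def identify(probe, database: dict, threshold=0.8):
    # 1:N search: compare the probe against every identity in a database
    # and return the best match above the threshold, if any.
    scores = {name: similarity(embed(probe), tmpl) for name, tmpl in database.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] >= threshold else None
```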
Second, besides authentication and identification, something one could call ‘face analysis or interpretation’ is increasingly being developed and deployed. By analysing facial expressions, an algorithm makes predictions about what people feel, their emotional state, health situation, and even intentions. Such applications can be deployed in a variety of situations, such as police interrogations, job interviews, measuring students’ attention, or identifying suspicious people at the airport. In this article I will not discuss this further, but here is a useful reference, as we will hear more about these applications soon. http://proceedings.mlr.press/v81/buolamwini18a/buolamwini18a.pdf
Third, we should separate private from public actors, as regulation applies to them differently. In Europe, for instance, a shop that replaces customer cards with face recognition needs explicit consent under detailed privacy and consumer protection provisions. The GDPR also applies to public administrations, but with exemptions. Take, for instance, Article 6 GDPR, which allows processing of private data ‘for performing a task carried out in the public interest’; such processing must be proportionate, i.e. necessary, effective, and the least intrusive choice. In democracies, public law grants authorities specific powers, but also sets limits and imposes processes, for instance to ask for a warrant before wire-tapping someone.
Face recognition as an identification tool has become widespread in social media, which allow users to find photos of people, their friends or others. This appears to be uni-directional, from identity (e.g., a name) to an image. Yet it also works the other way round, as, for example, the Russian/US company Find Face or New York-based Clearview AI show. Or look at the Chinese face-swapping app ‘Zao’. These companies or apps exploit loose privacy settings of users on platforms and also lure them into uploading their photos by offering fancy services (like showing how you will look in 20 years). How these companies use the data is hard to verify, and believing their statements is a matter of trust. They are often not in the jurisdiction of the GDPR. Unless these companies have a legal establishment in a Member State, European regulation will struggle to be effective. Yet, using photos harvested from the Web for face recognition purposes, without users even being aware of it, is problematic. Ideally, international cooperation could help, but the prospect of agreeing on common rules for the internet is close to zero.
What about face recognition by public authorities? Public safety comes to mind first, i.e., face recognition embedded in video surveillance. Authorities can use it to scan people passing by in order to find someone in a database, a wanted person or a missing child. Many people will find this intrusive. One of the problems here is false positives; critics point to algorithms that incorrectly classify minorities because of biases in the data sets. In other use cases, face recognition gathers ex-post evidence in crime investigations or supports alert systems when an incident, a car accident, a robbery or a terrorist attack, occurs.
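The false-positive concern is not only about biased data sets; it is also simple arithmetic. When the people sought are a tiny fraction of those scanned, even a very accurate system produces far more false alerts than correct ones. The figures in the sketch below are invented purely for illustration and do not describe any real deployment.

```python
# Back-of-the-envelope illustration of the base-rate problem.
# All numbers below are assumptions chosen for illustration only.
scanned_faces = 100_000       # people passing the cameras in a day
watchlist_present = 10        # how many of them are actually on the watchlist
true_positive_rate = 0.99     # chance a wanted person is correctly flagged
false_positive_rate = 0.01    # chance an uninvolved person is wrongly flagged

true_alarms = watchlist_present * true_positive_rate
false_alarms = (scanned_faces - watchlist_present) * false_positive_rate

print(f"Correct alerts: {true_alarms:.0f}")    # ~10
print(f"False alerts:   {false_alarms:.0f}")   # ~1000
print(f"Share of alerts that are wrong: {false_alarms / (true_alarms + false_alarms):.0%}")
```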
The usage of face recognition by the authorities in China and the police in the US seems to dominate the debate in Europe. Why should the decision of US companies, such as IBM and Microsoft, to stop selling face recognition systems to the US police lead to a ban of this technology in Europe? We may come to this conclusion, but, please, based on our own analysis and our legal system. This is also an expression of ‘digital sovereignty’.
In Europe, we should be able to deploy public face recognition with appropriate safeguards. Deployment should be based on explicit laws, which stipulate the purpose, design and supervision of such systems. Authorities should be obliged to introduce technical provisions such as encryption, access control and verification of accuracy. Assuming such safeguards, it is a question of political or democratic choice where and how to deploy these systems.
Recruitment of employees
AI-driven tools can lower the workload of hiring, for example by scanning a large number of applications or making predictions about promising candidates. These tools also improve the accuracy of online matching of vacancies and job seekers.
There are concerns that AI tools could discriminate against people based on their gender, race, religion, political affiliation or age. The root cause of this mainly lies with the data sets and methods used to train the models. Even if the data did not contain sensitive information, the algorithm could still discriminate, for instance by correlating other data points such as place of living, work or education (indirect discrimination).
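Here is a minimal synthetic sketch of that mechanism, assuming scikit-learn and entirely made-up data: the protected attribute is never given to the model, yet its predictions still differ across groups because a correlated stand-in feature (labelled ‘postcode’ here) carries the same information.

```python
# Minimal synthetic illustration of indirect (proxy) discrimination.
# All data are invented; 'postcode' acts as a proxy for the protected group.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)                          # protected attribute (never shown to the model)
postcode = (group + rng.random(n) < 1.3).astype(int)   # proxy feature correlated with the group
skill = rng.normal(0, 1, n)                            # legitimate feature

# Historical hiring decisions were biased against group 1.
hired = ((skill - 0.8 * group + rng.normal(0, 0.5, n)) > 0).astype(int)

# Train WITHOUT the protected attribute, only on skill and postcode.
X = np.column_stack([skill, postcode])
model = LogisticRegression().fit(X, hired)
pred = model.predict(X)

for g in (0, 1):
    print(f"group {g}: predicted hiring rate {pred[group == g].mean():.2f}")
# The rates differ even though 'group' was never a model input:
# the model picked up the historical bias through the correlated postcode.
```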
But are we fair to the algorithms? After all, there is a lot of bias among human recruiters selecting candidates, even coming down to unconscious value judgements and personal preferences. For instance, tests have shown that when CVs do not contain names, which reveal gender or nationality, recruiters’ choices become fairer. Recruiters regularly use expert systems and matching software that human experts have programmed and filled with data. Organisations conduct aptitude tests and send candidates to assessment centres. According to a 2018 CareerBuilder survey, 70% of employers use social media to screen candidates during the recruitment process. While none of these tools and methods are specifically regulated, in many countries general and employment-specific non-discrimination rules apply.
In the EU two non-discrimination directives refer to ‘access to employment’: Directive 2006/54/EC on implementing the principle of equal opportunities and equal treatment of men and women in matters of employment and occupation (recast) stipulates in Article 14 that there shall be no direct or indirect discrimination on grounds of sex in relation to conditions for access to employment, including selection criteria. Directive 2000/78/EC establishing a general framework for equal treatment in employment and occupation contains the same provision (Article 3).
Member States have not yet approved a broader, more general Directive, proposed by the Commission in 2008 (COM(2008)426), for which a unanimous decision in the Council is necessary. Views on how to define and tackle discrimination differ, even within a group of countries supposedly sharing the same values.
The various national provisions for the hiring process, transposing these directives, focus on blatant and open discrimination. However, whether to employ a person or not is by nature a matter of choice, including personal preferences. Employers are, for instance, not obliged to explain the reasons for their decision. It would be impractical to ask for non-discrimination proofs for the millions of annual recruitments across the economy. Yet, tools and methods should not contain obviously discriminatory criteria and questions. For instance, under current rules employers should not ask for discriminatory information unrelated to the qualification for a job. A candidate may decline to answer and can lie without consequences, for instance dismissal after being hired.
Hiring methods are changing, becoming more digital, more based on social media, and relying less on CVs, application letters, and first-phase interviews. This does not automatically mean more AI, but more computer help. Unilever, for instance, publishes job vacancies on social media, and candidates who apply are directed to a registration site. The company adds LinkedIn information and lets candidates play AI-controlled games. Managers carry out face-to-face interviews to take a final decision among the few remaining candidates. https://www.inc.com/wanda-thibodeaux/unilever-is-ditching-resumes-in-favor-of-algorithm-based-sortingunilever-is-di.html?cid=search
Take another example. DeepSense, a US-based company, offers AI tools that analyse social media activity to create a personality profile by looking at linguistic patterns. It does not look at obviously discriminatory information and ignores factors such as age, race and gender. The focus is on role fit and personality. Obviously, such a tool only helps with candidates who have a significant online presence. There are limitations to language analysis, for instance double meanings, slang or sarcasm. Yet, these are issues of efficiency, not of discrimination.
The Financial Times (“Will recruitment ‘gamification’ drive diversity or replicate biases?”) quotes Keith McNulty at McKinsey: “We are trying to open the aperture of who we recruit and the places we recruit from. There’s a danger in selection that you tend to default to the easiest decisions—those at the top universities. We want to give opportunities to people from a wide variety of backgrounds.” The same article quotes Cathy O’Neil, a computer scientist and sceptic of AI: “We have a long history of discrimination in hiring. We cannot allow recruitment platforms to simply propagate the past with naïve AI, which is what happens by default.”
Where does all of this leave us with AI regulation? Unfortunately, with more questions. When does a computer programme to aid in selection qualify as AI? Does AI increase discrimination, as critics fear, or does it have the opposite effect? Can we rely on the self-interest of employers to recruit the right people and dismiss biased algorithms or techniques? These are empirical questions. Their evaluation depends on the legal context in each country. Europe may regulate more than the US, but the scope to take legal action against an employer for discrimination is broader in the US.
Regulation could stipulate a transparency obligation. Bigger companies such as Unilever do this already, as it helps to attract the right people. This might be more of a problem for SMEs, for which even light rules quickly add up to become a burden. Law makers should also consider the interests of recruitment service providers, which are afraid of opening up their selection techniques. These algorithms are their intellectual property and fundamental to their business model.
What about requesting proof that an AI tool does not discriminate? I refer to my article of 1 July, ‘Demystifying AI’, which highlights the difficulties of eliminating biases and of defining ‘non-discrimination’. Policy makers would have to set up a mechanism of testing and certification. Do we believe that a public authority can make consistent judgements about the fairness of AI used in a variety of applications?
Regulation could require that machine learning models go through a best-practice screening test, applying the latest available methods. This is similar to the GDPR provision on privacy impact assessments, for which no approved methods or standards exist either. Such a regulation could inject burden and uncertainty into the market without much gain.
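To illustrate what such a screening test might look like in practice, here is a minimal sketch of a disparate-impact check on a hiring model’s outputs. The 0.8 threshold echoes the informal ‘four-fifths rule’ from US employment practice; whether any such threshold is an appropriate legal or technical standard is exactly the open question, and the data here are purely illustrative.

```python
# Illustrative sketch of a simple disparate-impact screen on a model's decisions.
# The 0.8 cut-off mirrors the informal 'four-fifths rule'; it is not an endorsed standard.
from collections import Counter

def selection_rates(decisions, groups):
    """Share of positive decisions per group."""
    totals, positives = Counter(), Counter()
    for d, g in zip(decisions, groups):
        totals[g] += 1
        positives[g] += d
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact_ratio(decisions, groups):
    rates = selection_rates(decisions, groups)
    return min(rates.values()) / max(rates.values())

# Hypothetical screening outcome for two groups A and B:
decisions = [1, 0, 1, 1, 0, 1, 0, 0, 0, 0]
groups    = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]
ratio = disparate_impact_ratio(decisions, groups)
print(f"Disparate impact ratio: {ratio:.2f}")   # 0.33 in this toy example
print("Flag for review" if ratio < 0.8 else "Passes this particular screen")
```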
My hunch: promotion of self-regulation coupled with transparency obligations, in combination with support for standards and research on explainable AI (X-AI), can increase confidence in the fairness of such applications.
One more point: beyond recruitment, AI-driven workforce management is becoming increasingly popular. It is even the fastest-growing AI application in Europe. Interestingly, European start-ups are successful in bringing such tools to the market (see MIT Technology Review (2020), The Global AI Agenda: Europe). These tools can help individual employees improve their work, for instance by analysing how they process emails or write documents, without being monitored. Companies can also implement AI as a rigorous optimisation method and enforce productivity targets. The unfamiliar element is automation through algorithms setting targets or conducting performance assessments. This generates an uncomfortable feeling: machines control people. Companies are well advised to introduce such measures with care. Regulating algorithmic workforce management, however, would encounter a number of pitfalls. When is a method too intrusive, when does it create biases, and when does it infringe on workers’ rights? Probably the best way forward is to let the social partners, following a well-established practice in Europe, deal with the complex questions that will arise.
Conclusions
The above reflections aim to show that an attempt to regulate AI as a technology, without specific context, will lead to problems and increase costs for business without solving much.
If law makers use general wording to avoid falling into a definition trap, they will create uncertainty. Everyone in the AI ecosystem will have to assess the implications for themselves: Am I addressed? Am I compliant? What is the interpretation for my business? The EU may end up regulating all digital technologies as they become indistinguishable from what is called AI.
Therefore, regulation should tackle well-defined problems, be case-specific, and build on existing laws. This article analyses two scenarios where AI is playing an increasing role. The idea behind this assessment is to weigh the different arguments.
My recommendation to the European legislators is to focus on our legal and political context, ignore the hype around AI, and tackle problems where they show up. As I argue in ‘Demystifying AI’ those problems will predominantly occur with social media and dominant platforms. Reject the idea of a horizontal AI regulation.