James Astrachan

Stephen Hawking warned that artificial intelligence could spell the end of the human race. He reasoned that thinking machines would take off on their own and independently build ever more capable systems, and that humans, bound by the slow pace of their biological evolution, would be outwitted by the machines. Hawking’s warning was so dramatic that few have bothered to pay it heed.

In response to Elon Musk’s push to use artificial intelligence to run the federal government, Professor Cary Coglianese observed that there is a very real potential for AI to be biased. And given the complex web of laws and regulations a business must abide by, bias in certain decisions, even when AI-driven and unintentional, can lead to serious legal problems. This is a warning that should not be ignored, even if the risk it describes falls short of the destruction of mankind.

There is no better example of the legal problems caused by AI in the employment arena than Derek Mobley’s suit against the software developer Workday, which created an AI-driven program used by employers to screen applicants for advertised positions. Mobley, who is Black and over the age of 40, applied for 100 positions and was rejected each time, after which he sued the software company for age, race and disability discrimination, all of which are prohibited under federal law.

Although Workday claimed it did not make any hiring decisions and filed a motion to dismiss Mobley’s suit, the court denied the motion on the basis that Workday could potentially be liable as an agent of the employer-licensees of its software. Implicit in the court’s holding, at least at this early stage, is that the employers using the software could also be liable for discrimination. The case has since been certified to proceed on a nationwide basis.

With a claimed 87% of companies using some form of AI in their recruitment efforts, how do they protect themselves from claims that bias built into the AI tools has produced unlawful results, such as discrimination, especially when researchers, including those at the University of Washington, have shown in studies that AI tools used to screen résumés contain racial and gender bias? That bias can arise, for example, when the data used to train the AI overrepresents white people and underrepresents non-white people. The result may be the rejection of qualified candidates by software trained on this skewed data.
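For readers who want the mechanics made concrete, consider a deliberately simplified Python sketch. Everything in it (the hire counts, the weights, the threshold) is invented for illustration and has nothing to do with Workday’s actual product; the point is only that a screening rule “learned” from skewed historical hires will reproduce that skew when scoring new applicants.

# Hypothetical illustration: a screening rule "trained" on skewed history.
# All data, weights and thresholds here are invented for demonstration.

# Historical hires: 90 from group A, 10 from group B, even though
# qualified applicants were split roughly evenly between the groups.
historical_hires = {"A": 90, "B": 10}
total_hires = sum(historical_hires.values())

# A naive model learns "how much does this group resemble past hires?"
# and folds that prior into each applicant's score.
group_prior = {g: n / total_hires for g, n in historical_hires.items()}

def screen(applicant_skill, group, threshold=0.5):
    """Score = skill blended with the learned group prior."""
    score = 0.7 * applicant_skill + 0.3 * group_prior[group]
    return score >= threshold

# Two applicants with identical skill get different outcomes.
print(screen(0.55, "A"))  # True  -- passes the screen
print(screen(0.55, "B"))  # False -- rejected despite equal skill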

It is a complex topic. According to Forbes, AI that searches résumés for words and terms such as “president,” “debate team” and “captain” could reject qualified candidates from less privileged backgrounds, or could underrepresent groups whose leadership experience takes less traditional forms.
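The keyword problem can be sketched the same way. In the hypothetical Python snippet below, the word list and weights are invented, not taken from any real product; the point is that a scorer rewarding terms like “president” ranks down a candidate whose leadership shows up in other vocabulary.

# Hypothetical keyword scorer of the kind Forbes warns about.
# The keyword list and weights are invented for illustration.
PRIVILEGED_TERMS = {"president": 3, "debate team": 3, "captain": 2}

def keyword_score(resume_text):
    text = resume_text.lower()
    return sum(w for term, w in PRIVILEGED_TERMS.items() if term in text)

# Two candidates with comparable leadership experience:
resume_a = "Debate team captain; president of the student senate."
resume_b = "Organized a 40-person volunteer shift schedule at a food bank."

print(keyword_score(resume_a))  # 8 -- advances
print(keyword_score(resume_b))  # 0 -- filtered out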

There are steps recruiters can take to reduce the risk of discrimination claims if they are going to use AI in their efforts, though some will be much easier for larger employers with resources. For example, users of these recruitment tools need to be told, and to understand, the nature of the data on which the AI was trained and what steps, if any, its developers took to avoid gender, age, racial and disability bias. AI use in recruitment is likely to spawn a new breed of ethics consultants who will review the AI and how it was trained and certify to the employer and the developer that it is free of bias. Legal counsel and HR should oversee these efforts, and no doubt the methodology of these consultants will be examined if claims are filed.
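One concrete audit that such consultants, counsel and HR can run is the federal Equal Employment Opportunity Commission’s “four-fifths” rule of thumb for adverse impact: if any group’s selection rate falls below 80% of the most-selected group’s rate, the tool deserves a hard look. The Python sketch below uses invented applicant counts to show the arithmetic.

# Four-fifths (80%) rule check for adverse impact, per EEOC guidance.
# The applicant and pass counts below are hypothetical.

def adverse_impact(groups):
    """groups: {name: (applicants, passed_screen)} -> flagged groups."""
    rates = {g: passed / applied for g, (applied, passed) in groups.items()}
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items() if rate / best < 0.8}

screen_results = {
    "white": (500, 200),  # 40% selection rate
    "Black": (300,  75),  # 25% selection rate
    "Asian": (200,  70),  # 35% selection rate
}

flagged = adverse_impact(screen_results)
print(flagged)  # {'Black': 0.625} -- below the 0.8 line, warranting review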

Mobley, a graduate of Morehouse College who suffers from anxiety and depression, was qualified for the jobs he applied for. He was eventually hired by Allstate and has twice been promoted. His case is not an easy one: the judge has ruled that Workday did not intentionally discriminate, but has allowed Mobley’s claim that the AI-driven screening penalized him because he is over the age of 40 to proceed. Mobley also claims that the software profiled him and then flagged his résumé at each submission, in essence blackballing him.

Whether or not he succeeds in proving bias, the users of these AI systems will likely be the next generation of defendants, and before deploying this sort of software they must establish that it is free of the kind of data bias that can lead to claims of illegal discrimination.

Jim Astrachan is counsel to Corey Tepe LLC and has taught intellectual property law at the two Maryland law schools since 1999.