As employers grapple with a widespread labor shortage, more are turning to artificial intelligence tools in their search for qualified candidates.
Hiring managers are using increasingly sophisticated AI solutions to streamline large parts of the hiring process. The tools scrape online job boards and evaluate applications to identify the best fits. They can even stage entire online interviews and scan everything from word choice to facial expressions before recommending the most qualified prospects.
But as the use of AI in hiring grows, so do the legal issues surrounding it. Critics are raising alarms that these platforms could lead to discriminatory hiring practices. State and federal lawmakers are passing or debating new laws to regulate them. And that means organizations that implement these AI solutions must not only stay abreast of new laws, but also look at their hiring practices to ensure they don’t run into legal trouble when they deploy them.
Proceed with caution
According to HireRight’s 2019 Employment Screening Benchmark Report, human resources professionals agree that game-changing technologies drive efficiencies. And as AI becomes more prevalent in our daily lives — from smart speakers to smart cars and homes — 25% of respondents said they see promise in using AI for recruiting too.
Organizations, however, must proceed with caution.
Technology companies say their AI solutions not only make the hiring process more efficient, but they also eliminate long-running biases. Studies have shown that discriminatory practices, including conscious and unconscious prejudices, have held back women, minorities, and older workers in the workplace for years.
But those promises to end hiring biases may not always play out and could set up employers for potential discrimination claims. AI software is often trained on the resumes and backgrounds of job seekers who were successfully hired. Algorithms informed by previous hiring decisions may only perpetuate the obstacles many have faced during their working lives.
In 2018, for example, Amazon stopped using its AI recruiting tool because it wasn’t gender neutral — it tossed out more women than men from consideration. And in November, the Electronic Privacy Information Center, a technology industry watchdog, filed a complaint with the Federal Trade Commission about HireVue, maker of a popular AI hiring tool. The complaint calls HireVue’s use of face-scanning technology in its review of pre-employment interviews “deceptive.” “Hiring algorithms are more likely to be biased by default,” the complaint reads.
Tech companies are working to eliminate bias in AI platforms. But as organizations incorporate these tools into hiring, they must be mindful of both the risks and federal anti-discrimination laws that protect workers.
Title VII of the Civil Rights Act of 1964 makes it illegal for employers to discriminate against job seekers because of their race, color, sex, national origin, or religion. The federal Age Discrimination in Employment Act also prohibits employers from discriminating against job candidates who are age 40 and up.
Laws target AI
While federal laws cover discrimination in hiring, new laws are specifically targeting the use of AI. Illinois is a trendsetter. This year, state legislators passed the Artificial Intelligence Video Interview Act. Starting January 1, employers who record video interviews and use AI analysis of applicant-submitted videos must do the following before the interview takes place:
- Notify applicants that AI may be used to analyze the video.
- Provide them with information about how the AI works and which general characteristics it evaluates.
- Obtain consent from the applicant to be evaluated by the AI platform. If the job seeker refuses, the employer can’t use AI to evaluate the interview.
The law also says that once an applicant has asked for the video to be deleted, employers have 30 days to comply.
Illinois’ Biometric Information Privacy Act, while focused on the collection of biometric data such as fingerprints and scans of hands, faces, irises, and retinas, also presents potential implications for the use of AI. Some AI solutions regularly gather and store biometric data as part of their analysis. In addition to Illinois, Texas and Washington have similar laws regulating the use of an individual’s biometrics, and it’s likely more state leaders will consider them in 2020.
Lawmakers elsewhere are also tackling AI in hiring, though in different ways. At the federal level, Senate and House Democrats introduced the Algorithmic Accountability Act of 2019 in April. The act would regulate the use of AI and similar platforms and require users to audit them for their impacts on accuracy, fairness, bias, discrimination, privacy, and security, and to correct any issues.
While the proposed federal law tackles biases in algorithms, California lawmakers hope to encourage AI’s use. Introduced in August, the California Assembly Concurrent Resolution No. 125, titled “Bias and discrimination in hiring reduction through new technology,” urges policymakers to promote the use of new technologies, including AI, to eliminate bias and discrimination.
Implications for employers
The legal landscape for employers will only continue to evolve as the technology grows. Indeed, non-compliance with laws that protect a job candidate’s rights concerning AI and biometrics is poised to be the next big wave of litigation and enforcement actions. Employers must be prepared to adapt their policies and programs to account for this new class of laws. That includes vetting the use of AI and its potential to introduce bias into the hiring process.
As they wade into this new era in hiring, here are practical tips for employers considering AI.
Ensure that AI does not present discriminatory barriers to hiring. Consider questions like these: Would a job seeker with a disability be able to interact with it? Does the AI expose an employer to information about a person’s age, race, gender, or other protected status that could potentially be discriminatory?
Provide accommodations to candidates who are unwilling or unable to use AI. Don’t ask why they’d prefer not to use it. Instead, simply have an alternate method free of any AI at the ready, so they can continue through the pre-employment process.
Be transparent. Disclose the use of AI to candidates. Let them know exactly how AI will be used. And require their express authorization to use AI before they ever interact with it.
Work with vendors to understand what data will be gathered as part of an AI solution. If an organization relies on a third party to conduct its recruiting and hiring, the organization must understand how the data will be used and retained to ensure that it meets the letter of the law.
Audit the AI solution. Develop internal processes to assess and remediate any biases that may develop over the course of implementing the tool.
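One widely used first-pass screen for such an audit is the EEOC’s “four-fifths rule,” which flags potential adverse impact when any group’s selection rate falls below 80% of the highest group’s rate. A minimal sketch of that check follows; the function names and the sample numbers are illustrative, not drawn from any particular vendor’s tooling, and a real audit would pair this with legal review and more rigorous statistical testing.

```python
def selection_rates(outcomes):
    """Selection rate (hired / applicants) for each group.

    `outcomes` maps a group label to a (hired, applicants) pair.
    """
    return {group: hired / applied for group, (hired, applied) in outcomes.items()}


def four_fifths_check(outcomes, threshold=0.8):
    """Apply the EEOC four-fifths rule.

    Returns, per group, its selection rate and whether that rate is at
    least `threshold` (4/5) of the highest group's rate. A False flag is
    an indicator of potential adverse impact, not a legal conclusion.
    """
    rates = selection_rates(outcomes)
    top = max(rates.values())
    return {group: (rate, rate / top >= threshold) for group, rate in rates.items()}


# Illustrative numbers: (hired, applicants) per group.
audit = four_fifths_check({"group_a": (48, 100), "group_b": (30, 100)})
# group_b's rate (0.30) is below 0.8 * 0.48 = 0.384, so it is flagged.
```

Running this check periodically, rather than only at deployment, matters because a model retrained on its own hiring outcomes can drift toward the very biases it was meant to remove.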
Stay abreast of new and developing laws. In 2020, it is likely that federal, state, and local lawmakers will address AI and, more specifically, its use in hiring.
Do not rely solely on AI. Traditional methods of vetting new hires, including face-to-face interviews and other evaluations, shouldn’t be scrapped. After all, there is no substitute for our own good judgment.