Balancing Innovation and Compliance: Navigating the Legal Landscape of AI in Employment Decisions
Emerging technologies are changing the workplace and how we make hiring decisions. Here, we discuss automated employment decision tools and the legal landscape around AI and hiring.
This article was first published on Forbes on October 31, 2023.
Emerging technologies have ushered in a new era of legal complexity for employers. While artificial intelligence and, more recently, generative AI have garnered significant attention, the initial wave of regulation is focused on automated employment decision tools (AEDTs). This article provides an overview of AEDTs, the guidance issued by the U.S. Equal Employment Opportunity Commission (EEOC), and the distinct legal landscapes of jurisdictions including New York City, Illinois, California, and Maryland. Our goal is to help employers navigate this evolving landscape while emphasizing that processes can be automated without sacrificing fairness in employment decisions.
The Rise of Automated Employment Decision Tools (AEDTs)
The use of artificial intelligence (AI) in the employment lifecycle has become pervasive. As AI becomes integrated into employment decision-making, concerns regarding bias (or perceived bias) imparted by algorithms have taken center stage. Employment decisions, ranging from filtering job applications to making hiring, retention, promotion, and termination choices, have become reliant on these automated tools. The promise of using technology to streamline decision-making is alluring, but it comes with significant risks and, in some cases, imposes compliance requirements on employers who choose to incorporate AI into their processes.
Algorithmic Bias and Regulatory Attention
One of the most significant concerns in the AI-driven employment landscape is algorithmic bias. Like humans, algorithms are not immune to bias, and the issue has drawn increasing attention from lawmakers and regulators in recent years. For example, President Biden recently signed an Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. The order establishes new standards for AI safety and security, protects Americans’ privacy, advances equity and civil rights, stands up for consumers and workers, promotes innovation and competition, and advances American leadership in AI around the world.
The focus on AI within the employment context has gained momentum, as demonstrated by a joint statement from the Consumer Financial Protection Bureau (CFPB), Department of Justice (DOJ), EEOC, and Federal Trade Commission (FTC) concerning discrimination and bias. The joint statement underscores three crucial points:
Application of Existing Legal Authorities: The statement affirms that existing laws and regulations are equally applicable to the use of automated systems and new technologies, signaling that agencies will apply existing legal frameworks to AI.
Addressing Harmful Outcomes: It highlights the ways AI can “perpetuate unlawful bias, automate unlawful discrimination, and produce other harmful outcomes,” emphasizing the need for vigilance in AI employment practices.
Protection of Individual Rights: The statement concludes with a commitment from these agencies to vigorously protect individual rights against discriminatory AI practices.
Charlotte Burrows, Chair of the EEOC, emphasizes the need to ensure that new technologies do not become a “high-tech pathway to discrimination.” The EEOC has established an initiative on artificial intelligence and algorithmic fairness to address those concerns.
EEOC Guidance on AI in Employment
The EEOC has taken proactive steps to guide employers when using AI in employment decisions. Two essential pieces of guidance have been released:
1. Select Issues: Assessing Adverse Impact in Software, Algorithms, and Artificial Intelligence Used in Employment Selection Procedures Under Title VII of the Civil Rights Act of 1964: This document addresses the use of AI in employment decisions and the concept of disparate impact. Disparate impact occurs when an employer uses a process that appears neutral but disproportionately affects individuals based on protected characteristics, such as race, color, religion, sex, or national origin. The guidance outlines several critical points:
Employers must audit AI tools that significantly contribute to employment decisions to ensure they are not discriminatory.
If an AI tool may cause a disparate impact, employers must demonstrate that use of the tool is “job-related and consistent with business necessity” and that no equally effective, less discriminatory alternative exists.
Employers who use third-party AI solutions, software, or online tools for employment decisions are not shielded from liability if these tools disproportionately affect individuals in protected classes.
The “four-fifths rule” can serve as a benchmark for assessing disparate impact: if one group’s selection rate is less than four-fifths (80%) of the rate for the most-selected group, the difference may indicate adverse impact. (The calculation is illustrated in the sketch following this guidance summary.)
2. The Americans with Disabilities Act and the Use of Software, Algorithms, and Artificial Intelligence to Assess Job Applicants and Employees: This guidance addresses the Americans with Disabilities Act (ADA) in the context of AI. It outlines three ways in which an employer’s AI tools may violate the ADA:
Failure to provide a “reasonable accommodation” that would allow a candidate to be assessed fairly, or an alternative to the AI assessment altogether. Employers should ensure that accommodations are available, such as an oral exam for an individual whose limited manual dexterity prevents completing a typed assessment.
Screening out a candidate with a disability who can perform essential job functions with a reasonable accommodation. For example, if an AI tool screens out someone with a speech impediment who can perform the job, it likely raises discrimination concerns.
Conducting disability-related inquiries or medical examinations that violate the ADA’s restrictions.
The guidance provides employers with practical steps to comply with the ADA when using AI decision-making tools, including allowing for opt-outs, providing notice before AI use, focusing on essential job qualifications, and reviewing vendor compliance.
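To make the four-fifths rule concrete, here is a minimal sketch in Python using hypothetical applicant counts. It computes each group’s selection rate, divides by the highest group’s rate, and flags ratios below 0.8. It illustrates the arithmetic only and is not a substitute for a formal adverse impact analysis.

# Minimal sketch of a four-fifths (80%) rule check. The applicant counts
# below are hypothetical; a formal adverse impact analysis should involve
# counsel and appropriate statistical testing.

def four_fifths_check(groups: dict[str, tuple[int, int]]) -> dict[str, float]:
    """Given each group's (applicants, selected) counts, return the group's
    impact ratio: its selection rate divided by the highest group's rate.
    Ratios below 0.8 may indicate adverse impact under the four-fifths rule."""
    rates = {g: selected / applicants for g, (applicants, selected) in groups.items()}
    highest = max(rates.values())
    return {g: rate / highest for g, rate in rates.items()}

# Hypothetical applicant pool: (applicants, selected)
ratios = four_fifths_check({
    "Group A": (100, 60),  # 60% selection rate
    "Group B": (100, 45),  # 45% selection rate
})
for group, ratio in ratios.items():
    flag = "potential adverse impact" if ratio < 0.8 else "within benchmark"
    print(f"{group}: impact ratio {ratio:.2f} ({flag})")
# Group B's ratio is 0.45 / 0.60 = 0.75, below the 0.8 benchmark.

As the EEOC guidance itself cautions, the four-fifths rule is a rule of thumb rather than a safe harbor; smaller disparities can still support a disparate impact claim, so results near the threshold warrant closer statistical review.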
State and Local Measures: New York City, Illinois, California, and Maryland
Now, let’s turn to state and local measures that directly affect employers using AI in employment decisions. These unique legal landscapes add another layer of complexity to the use of AI and AEDTs.
New York City: The city’s artificial intelligence and hiring ordinance, enforced since July 5, 2023, applies to employers who use AEDTs that substantially assist employment decisions. The law prohibits employers from using an AEDT to screen candidates or employees unless the tool has undergone a bias audit by an independent auditor within the past year, and a summary of the audit results must be published on the employer’s website. Employers must also give candidates at least ten business days’ notice before using an AEDT in a hiring decision.
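For a sense of what such an audit measures, the rules implementing the city’s ordinance describe an “impact ratio” for score-based AEDTs: the rate at which each demographic category scores above the full sample’s median, divided by the highest category’s rate. The sketch below, using hypothetical scores, illustrates only that arithmetic; an actual bias audit must be performed by an independent auditor.

# Hypothetical sketch of the scoring-rate impact ratio described in the
# rules implementing NYC's AEDT ordinance. Illustrative only; a compliant
# bias audit must be conducted by an independent auditor.
from statistics import median

def scoring_impact_ratios(scores_by_group: dict[str, list[float]]) -> dict[str, float]:
    """For each group, compute the rate of scoring above the pooled median
    (the "scoring rate"), then divide by the highest group's rate."""
    pooled = [s for scores in scores_by_group.values() for s in scores]
    cutoff = median(pooled)
    rates = {
        g: sum(s > cutoff for s in scores) / len(scores)
        for g, scores in scores_by_group.items()
    }
    highest = max(rates.values())
    return {g: rate / highest for g, rate in rates.items()}

# Hypothetical AEDT scores for two demographic categories.
ratios = scoring_impact_ratios({
    "Category A": [72, 85, 90, 66, 78, 88],
    "Category B": [61, 70, 82, 58, 64, 75],
})
for group, ratio in ratios.items():
    print(f"{group}: impact ratio {ratio:.2f}")

Under the implementing rules, the published summary must break these rates and ratios out by sex and race/ethnicity categories, including intersectional categories.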
Illinois: The state’s Artificial Intelligence Video Interview Act took effect in 2020. This landmark legislation places specific obligations on employers:
Employers must provide prior notice to applicants that AI may be used to analyze video interviews.
Employers must explain how AI works and the types of characteristics it uses to evaluate applicants.
Applicants must give their consent to be evaluated by AI software.
Employers are prohibited from sharing an applicant’s video with others, except those necessary for evaluating fitness for the position.
Applicants can request deletion of their videos, and employers must comply within 30 days of the request.
Recent amendments to the act aim to address racial bias in AI-assisted hiring. Employers that rely solely on AI to determine whether an applicant receives an in-person interview must collect and report demographic data about the applicants who are not offered in-person interviews and those who are hired.
Maryland: Maryland’s Facial Recognition Technology law defines “facial recognition service” and “facial template.” Employers must obtain signed waivers that provide an applicant’s consent if facial recognition technology is used during interviews. The waiver must include the applicant’s name, interview date, consent to facial recognition use, and whether the applicant read the consent waiver.
California: While California’s Fair Chance Act is not AI-specific, it shapes the use of AI in employment decisions involving candidates with a criminal history. Recent revisions expand the definition of an employer to include entities that evaluate an applicant’s criminal history on behalf of an employer, meaning AI solutions that assess criminal history may themselves be subject to the Fair Chance Act’s compliance requirements. Recent state case law takes a similarly broad view, holding that third-party agents performing screening functions on an employer’s behalf can be directly liable under California’s anti-discrimination law.
Legislation and Beyond
Lawmakers introduced dozens of AI-related bills in the 2023 legislative session, although few advanced beyond discussion. For example, a California bill that would have required employers to evaluate the impact of automated employment decision tools did not pass, but it may be reintroduced in a future session, and employers should monitor developments in this area. Similarly, a bill in Washington, D.C. has reappeared and, although less restrictive than previous versions, still focuses on prohibiting discriminatory actions by AI tools in employment decisions. Expect similar bills to emerge in various statehouses in 2024.
As these state and local measures continue to evolve, it’s imperative for employers to stay vigilant and proactive in their approach to AI in employment decisions. While we’ve covered the guidance, regulations, and laws that are in place or emerging, it’s important to understand the practical implications and best practices for employers in this AI-driven landscape.
Balancing Innovation with Compliance
Employers understand that AI can significantly enhance the efficiency and precision of many aspects of the employment process, offering substantial gains in speed and accuracy. It is equally important, however, to balance leveraging those capabilities with ensuring that the technology complies with legal requirements and preserves fairness and non-discrimination in employment decisions.
Scoring and predictive analysis are areas where AI can significantly benefit employment decisions. Many employers are inundated with factual data about their workers but struggle to synthesize it effectively. AI tools can help: they can assess an individual’s criminal history and estimate the likelihood of reoffending, analyze credit history to evaluate risk based on unsecured debt, mortgages, student loans, and medical bills, and examine public records to explain gaps in an individual’s history.
While the benefits of scoring and predictive analysis are evident, there are potential pitfalls. Predictive analysis, especially in the context of criminal recidivism, has faced scrutiny for its accuracy and potential bias. AI models assessing recidivism have been shown to err disproportionately against Black offenders, highlighting the “bias in, bias out” nature of AI data models. Furthermore, the belief that scoring is impartial can detach decision-makers from the human implications of machine-made decisions. From a compliance perspective, the EEOC requires assessing whether predictive analysis or scoring causes a disparate impact, and from an FTC or CFPB perspective, mechanisms must exist for individuals to challenge and dispute adverse information produced by AI.
Aside from scoring and predictive analysis, the background check industry is considering various ways to use AI to enhance the employment screening process:
Background Check Accuracy: AI could significantly enhance the accuracy of background checks. It may be used to verify identities by matching personal identifiers, to help verify credentials, and to conduct geographic analysis that helps establish where an individual was located at the time of an alleged crime.
Adjudication Analysis: AI can create continuous learning models for adjudication decisions. By analyzing past adjudication decisions and feedback from human adjudicators, AI can spot patterns and trends that influence decisions.
Quality Improvement: One of the most valuable applications of AI in background checks is the analysis of information disputed by candidates. AI can help identify trends and areas of improvement, leading to higher-quality background reports.
Transparency and Compliance
Transparency and compliance with evolving regulations are paramount. To navigate this terrain successfully, employers should focus on the following five factors:
Transparency: Employers must ensure transparency when implementing AI in their processes. Providing clear and understandable communication to candidates about the use of AI in decision-making is crucial. Candidates should be informed about how their data will be used and how decisions will be made. Transparency builds trust and enables individuals to make informed choices.
Accountability: Employers should establish accountability for AI-driven decisions. This means taking responsibility for the outcomes and actions of automated systems. Understanding the factors that influence AI decisions and being ready to explain or rectify any discrepancies is essential.
Auditing: Regular audits of AI systems can help identify and rectify bias or unintended discrimination. Employers should consider third-party audits to ensure impartiality.
Bias Mitigation: Employers must actively work on minimizing bias in AI models. This includes continuous monitoring, refining algorithms, and ensuring that the data used to train AI systems is representative and diverse.
Fairness and Equal Treatment: Employers should strive for fairness and equal treatment throughout the hiring and employment lifecycle. This includes conducting fair background checks, providing equal opportunities, and ensuring that AI does not lead to unlawful discrimination.
The use of AI in employment decisions is rapidly evolving, and employers must adapt while ensuring compliance with legal requirements. As we look to the future, technology will continue to transform how we make employment decisions. However, the responsibility lies with employers to embrace these changes while upholding the principles of fairness, transparency, and non-discrimination. In a world where technology and human values intersect, the key to success is to harness the power of AI to enhance processes while keeping human ethics at the forefront of decision-making. By achieving this balance, employers can navigate the complex legal landscape and make better, more informed employment decisions.
Release Date: November 9, 2023
Alonzo Martinez
Alonzo Martinez is Associate General Counsel at HireRight, where he supports the company’s compliance, legal research, and thought leadership initiatives in the background screening industry. As a senior contributor at Forbes, Alonzo writes on employment legislation, criminal history reform, pay equity, AI discrimination laws, and the impact of legalized cannabis on employers. Recognized as an industry influencer, he shares insights through his weekly video updates, media appearances, podcasts, and HireRight's compliance webinar series. Alonzo's commitment to advancing industry knowledge ensures HireRight remains at the forefront of creating actionable compliance content.