AI Discrimination in Hiring, and What We Can Do About It

Thanks for your application! Our algorithm will be in touch.
Blog Post
Sept. 27, 2022

Late last year, the U.S. Equal Employment Opportunity Commission launched an initiative to monitor the use of AI in employment decisions and ensure compliance with federal civil rights laws. It was about time. By 2020, 55% of hiring managers in the U.S. were using algorithmic software and AI tools in their recruitment process.

The pandemic profoundly shook the labor market. But the biggest transformation occurred in April 2021, when four million people willingly quit their jobs, marking the beginning of the “Great Resignation.” Fast forward to the present, and the future of work remains uncertain. The Bureau of Labor Statistics projects the U.S. economy will add 8.3 million jobs from 2021 to 2031, making a streamlined hiring process all the more important. But the discriminatory hiring that can result from the use of AI inevitably hurts people of color, women, people with disabilities, and society as a whole: improved employment outcomes for members of these groups raise overall wages and boost economic growth.

Job seekers now often use digital spaces to find jobs and connect with recruiters, especially on platforms like Indeed, LinkedIn, and Monster. Companies like Workable, Taleo, and Recruitee offer applicant tracking systems (ATS) to help hiring managers streamline the recruiting process, while candidates use tools like JobScan and VMock to improve how their resumes appear to standard screening algorithms.

Alongside efforts to close the gender gap via pay equity and investments in DEI initiatives, hiring managers are turning to digital recruitment to expand the pool of potential candidates. One commonly used tool is LinkedIn Recruiter, a product that connects recruiters with possible candidates. Like other major job platforms (ZipRecruiter and Glassdoor, to name two), LinkedIn collects explicit, implicit, and behavioral data to connect recruiters to candidates and present opportunities to job seekers. Explicit data comprises everything a candidate states on their profile. Any inferences that can be drawn from a profile are categorized as implicit data; for example, a data analyst’s profile could convey programming or data scraping skills even if the analyst doesn’t directly mention them. Behavioral data covers a user’s actions on the platform, from the positions they search for to the types of posts they engage with.
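To make those categories concrete, here is a minimal, hypothetical sketch of how a matching algorithm might fold the three kinds of data into a single candidate score. It is not LinkedIn’s actual system; the field names and weights are illustrative assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class CandidateProfile:
    explicit_skills: set = field(default_factory=set)   # explicit: stated on the profile
    inferred_skills: set = field(default_factory=set)   # implicit: inferred from titles, posts, etc.
    engagement: float = 0.0                              # behavioral: searches, clicks, engagement (0-1)

def match_score(profile: CandidateProfile, required_skills: set) -> float:
    """Toy ranking rule: explicit matches count most, inferred matches less,
    and behavioral engagement nudges the final score."""
    explicit_hits = len(profile.explicit_skills & required_skills)
    inferred_hits = len(profile.inferred_skills & required_skills)
    return 1.0 * explicit_hits + 0.5 * inferred_hits + 0.25 * profile.engagement

candidate = CandidateProfile({"sql", "python"}, {"data scraping"}, engagement=0.8)
print(match_score(candidate, {"sql", "python", "data scraping"}))  # 2.7
```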

So where does the bias come into play?

Bias in Algorithmic Screening

Research suggests that women often downplay their skills on resumes, while men often exaggerate and include phrases tailored to the position, making their resumes stand out to an algorithm. Applicants may also unconsciously use gendered language by including words associated with gender stereotypes. For example, men are more likely to use assertive words like “leader,” “competitive,” and “dominant,” whereas women may use words like “support,” “understand,” and “interpersonal.” This can put female applicants at a disadvantage by replicating the gendered ways in which hiring managers judge applicants: when the algorithm scans their resumes alongside those of male counterparts, it may read the men as more qualified based on the more assertive language they use. Gendered language isn’t exclusive to applicants, though. Job descriptions are flooded with gendered language, enough to warrant the creation of a tool to check whether a job description is biased.
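The mechanism is easy to see in a deliberately naive sketch. The keyword weights below are invented for illustration, but they show how a screener that learned from historically male-dominated “successful” resumes can end up rewarding assertive language directly:

```python
import re

# Hypothetical weights a naive screener might learn if past "successful"
# hires skewed male and used assertive language on their resumes.
KEYWORD_WEIGHTS = {
    "leader": 2.0, "competitive": 1.5, "dominant": 1.5,         # masculine-coded
    "support": 0.5, "understand": 0.5, "interpersonal": 0.5,    # feminine-coded
}

def screen_resume(resume_text: str) -> float:
    """Toy screener: sum the weights of recognized keywords in the resume."""
    words = re.findall(r"[a-z]+", resume_text.lower())
    return sum(KEYWORD_WEIGHTS.get(word, 0.0) for word in words)

# Two equally qualified candidates, described with differently gendered language:
print(screen_resume("Team leader with a competitive sales record"))           # 3.5
print(screen_resume("Built interpersonal relationships to support clients"))  # 1.0
```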

Nearly six decades ago, Title VII of the Civil Rights Act of 1964 made it illegal for firms to discriminate on the basis of race, sex, religion, and national origin. However, unregulated algorithmic screening tools don’t always comply with this mandate. A 2021 economic study found evidence of racial discrimination in present-day hiring processes. The researchers submitted around 84,000 fake applications to entry-level positions at companies across the U.S. and found that applications with distinctively Black names, like Antwan, Darnell, Kenya, and Tamika, were on average less likely to receive a response than applications with distinctively white names like Brad, Joshua, Erin, and Rebecca.

But racial and gender discrimination are only the beginning of the bias that AI perpetuates. Individuals with disabilities who are covered by the Americans with Disabilities Act (ADA) have the right to request accommodations during the hiring process, a right enforced by the Equal Employment Opportunity Commission (EEOC). Earlier this year, the EEOC and the Department of Justice (DOJ) Civil Rights Division released guidance warning employers that the use of algorithmic screening tools could violate the ADA. Certain hiring practices, like personality tests, AI-scored video interviews, and gamified assessments, fail to consider individuals who may need accommodations. If an individual with anxiety speaks at a rapid pace during a video interview, an algorithm that links a comfortable speaking pace with successful career outcomes would assign that candidate a low score.
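A simplified sketch of that failure mode, with invented numbers: a model that treats any deviation from the speaking pace it saw in its training data as a negative signal will mark down a fast-talking candidate regardless of their actual qualifications.

```python
# Hypothetical feature from an AI-scored video interview: speaking pace.
# Assume the vendor's training data clustered around 150 words per minute,
# so the model penalizes distance from that "typical" pace.
TYPICAL_PACE_WPM = 150.0
PENALTY_PER_WPM = 0.01

def pace_score(words_per_minute: float) -> float:
    """Toy score in [0, 1]: 1.0 at the typical pace, lower the further away."""
    return max(0.0, 1.0 - PENALTY_PER_WPM * abs(words_per_minute - TYPICAL_PACE_WPM))

print(pace_score(150))  # 1.0 -> candidate who matches the training-data norm
print(pace_score(210))  # 0.4 -> candidate with anxiety who speaks quickly
```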

Regulating AI

Discriminatory screening algorithms across all sectors are drawing more attention from legislators and policymakers. The strongest effort comes from D.C. Attorney General Karl Racine, who announced an OTI-endorsed bill banning algorithmic discrimination at the end of 2021. At the federal level, U.S. Senators Ron Wyden (D-Ore.) and Cory Booker (D-N.J.) joined Representative Yvette Clarke (D-N.Y.) to introduce the Algorithmic Accountability Act of 2022. The bill would require companies to conduct “impact assessments” that examine their systems for bias, effectiveness, and other factors when using AI to make key decisions related to employment, loans, and even housing applications. It also proposes the creation of a public repository at the Federal Trade Commission to track and monitor these systems.

Beyond legislative efforts, OTI recommends a number of steps that digital platforms should take to make their algorithms fair and transparent, such as publishing detailed policies that allow consumers to understand how a company uses algorithmic systems and for what purposes. Companies should also describe how they use personal data to train and inform those systems, and give users the opportunity to opt out of algorithmic systems altogether.

What Can Job Seekers Do?

It begins with your resume. Companies use ATS to scan resumes for keywords that match the language of the job description, so list your skills, use action-oriented words pulled from the job description, and include quantitative details. If you’re applying for positions that don’t require design or visual skills, consider a simple, text-based layout instead of a heavily designed one; most career coaches recommend submitting a Microsoft Word or PDF file. You may be able to tell whether a company uses an ATS by looking for branding on its website or checking the web domain of an online application for the name of an ATS program.
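In rough terms, this is what keyword-matching tools such as JobScan check for you. The sketch below is an illustrative approximation rather than any specific vendor’s algorithm: it measures how much of a job description’s vocabulary also appears in a resume and flags what is missing.

```python
import re

def keyword_overlap(resume: str, job_description: str):
    """Toy ATS-style check: share of the job description's distinct terms
    that also appear in the resume, plus the terms that are missing."""
    resume_terms = set(re.findall(r"[a-z]+", resume.lower()))
    job_terms = set(re.findall(r"[a-z]+", job_description.lower()))
    missing = job_terms - resume_terms
    coverage = 1.0 - len(missing) / len(job_terms) if job_terms else 0.0
    return coverage, missing

coverage, missing = keyword_overlap(
    "Built dashboards in SQL and Python; led weekly stakeholder reviews",
    "Analyst with SQL, Python, and Tableau experience",
)
print(round(coverage, 2), missing)  # ~0.43; "tableau" and "analyst" never appear in the resume
```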

Though screening algorithms streamline the hiring process for companies, the use of AI discourages job seekers and risks subjecting them to biased outcomes. Every party has a role to play in fixing the problem. Tech companies that design the algorithms need to make the datasets that inform those decisions publicly available, explain to candidates how they’re being assessed, and inform them of how their data will be used. Organizations using these tools need to recognize their limitations and introduce alternative hiring efforts to recruit more women, people of color, and people with disabilities. Applicants can, and should, report companies that use discriminatory hiring practices, and our elected officials must step in to prioritize workers’ rights and hold companies accountable. Achieving equitable employment outcomes and driving social and economic mobility starts with deconstructing technological barriers that deny job seekers a fair shot at success.

Related Topics
Algorithmic Decision-Making, Platform Accountability