Could more government AI lead to more discrimination?

On Behalf of | Dec 9, 2022 | Employee Discrimination

Federal agencies, like many private businesses, have made increasing use of artificial intelligence (AI) in recent years. The technology has led to all manner of improved efficiencies, but there may be a hidden cost.

As the Guardian recently reported, a survey of recruitment executives found that nearly all of them used AI in their hiring processes. More concerning, however, is the fact that 88% of them knew the AI could reject qualified applicants. So, how do these AI programs interfere with the hiring process?

4 ways recruitment AI programs can fail

By now, there is no shortage of stories about qualified job seekers who found themselves inexplicably turned down. To be fair, the Guardian notes that recruitment AI has some advantages: it can review everything in an applicant’s resume rather than skimming it the way a human HR professional might. Still, businesses and federal agencies need to understand the possible shortcomings of recruitment AI:

  • When employers list too many skill requirements on their job postings, the AI may not distinguish between the most important skills and less-important skills.
  • AI reviews frequently lead to automatic rejections for issues that human reviewers might weigh against the rest of a resume. For example, many AI recruitment programs will automatically reject any applicant with a 6-month gap in their employment history, even when the gap reflects a military deployment or another legitimate reason.
  • AI screeners often base decisions on questionable “facts” they’ve learned. The Guardian noted that one program associated the words “Thomas” and “church” with better job performance, while another favored resumes that featured the words “Jared” and “lacrosse.”
  • Employers who use pattern-finding and personality tests to measure compatibility may be turning down the most qualified candidates.

Of course, these problems are all exacerbated when there’s limited or no transparency.

Disabled job seekers may suffer the most

While recruitment AI may present concerns in any situation, it’s even more concerning for job seekers with disabilities. In fact, the Equal Employment Opportunity Commission (EEOC) and the Department of Justice (DOJ) have issued joint guidance on exactly this concern.

Importantly, the EEOC notes that the risk of algorithmic bias goes beyond resume screening. AI can also influence other parts of the employment process:

  • Virtual interviews
  • “Job fit” testing software
  • Performance monitoring

Such programs may discriminate against people with disabilities and violate the ADA if they fail to allow reasonable accommodations, screen out qualified people because of their disabilities, or exceed the ADA’s limits on disability-related inquiries.

The EEOC also makes clear that candidates who face discrimination can hold the prospective employer responsible, even when the employer relied on tools created and managed by outside vendors.

AI is not an excuse to discriminate

There have been some striking cases of algorithmic discrimination. Amazon, for example, discovered that its recruiting AI was biased against female candidates. The AI had “learned” what a good candidate looked like by reviewing the resumes of existing employees, and because those employees skewed heavily male, it internalized that gender bias.

That’s a fairly extreme case of AI bias, but it’s not the only one, and AI bias doesn’t need to be so obvious to be illegal. An AI program cannot turn someone down because of their membership in a protected class, and it must give applicants a chance to seek reasonable accommodations. When it doesn’t, that’s likely discrimination.
