Selection History Repeats Itself – So Can Hiring Bias in AI

When it comes to Talent Acquisition, automation and artificial intelligence are at the forefront of the industry’s technology and innovation roadmaps. Over the last decade, Talent Acquisition has become markedly de-personalized, driven by the rapid growth in the number of channels through which applicants can access job listings.

On the other side of the process, hiring managers, overwhelmed by the volume of applications, have turned to technologies focused on gathering, structuring, and analyzing these growing talent pools. But with an expanding number of startups offering solutions intended to reduce human capital costs and automate candidate review, it becomes increasingly important to recognize where and when these products belong in the hiring process.

Fact: The resumé has become an inadequate source of the information needed to identify the underlying qualities that make up a strong candidate.

Machine learning and natural language processing (NLP) are the dominant forms of artificial intelligence being commercialized to manage growing applicant pools, most visibly in the expanding range of parsing and sentiment analysis products entering the market. As these solutions become more capable, however, it is important to recognize what data they actually need in order to run an unbiased hiring process. Traditional automation mechanisms do not always gather this information effectively, and the results can be troubling.
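
To illustrate how coarse that data can be, here is a minimal, hypothetical sketch of the keyword matching that many parsing tools start from; the keyword list and resumé snippets are invented purely for the example.

```python
# Hypothetical sketch of keyword-based resume screening: the screen scores
# surface terms, not the underlying qualities behind them.
import re

REQUIRED_KEYWORDS = {"python", "sql", "leadership", "stakeholder"}

def keyword_score(resume_text: str) -> float:
    """Fraction of required keywords that literally appear in the resume."""
    tokens = set(re.findall(r"[a-z]+", resume_text.lower()))
    return len(REQUIRED_KEYWORDS & tokens) / len(REQUIRED_KEYWORDS)

# A strong candidate who describes the same skills in different words scores
# poorly, while a keyword-stuffed resume sails through.
strong = "Led a cross-functional analytics team; built data pipelines in pandas."
stuffed = "Python SQL leadership stakeholder Python SQL leadership stakeholder."

print(keyword_score(strong))   # 0.0
print(keyword_score(stuffed))  # 1.0
```

More sophisticated parsers and sentiment analysis layers add nuance, but they face the same constraint: they can only evaluate what the input data actually captures.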

Fact: In a high-volume setting, hiring managers cannot gather, structure, and analyze large amounts of data on their own. Attempting to do so invites a multitude of biases into the process, including confirmation bias, interview fatigue, and stereotyping.

For instance, consider video recognition and analysis systems. If the AI is calibrated to previous accept and reject decisions that were themselves subjective, it will perpetuate the implicit hiring biases that existed in the first place. To achieve a high level of algorithmic integrity, we must first establish an appropriate learning environment for artificial intelligence, one focused on new data inputs rather than existing ones. History does repeat itself, and without this awareness the biases and errors of previous hiring cycles are bound to re-emerge in an AI system.
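
To make the calibration problem concrete, here is a minimal sketch using synthetic data and scikit-learn; the features, groups, and numbers are assumptions chosen for illustration, not drawn from any real hiring system.

```python
# Illustrative only: a screening model trained on past accept/reject decisions
# learns whatever preference shaped those decisions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

skill = rng.normal(size=n)               # true candidate quality
group = rng.integers(0, 2, size=n)       # an irrelevant demographic proxy

# Hypothetical history: past reviewers favored group 1 regardless of skill.
past_accept = (skill + 1.0 * group + rng.normal(scale=0.5, size=n)) > 0.5

# Calibrating a model to those past decisions...
X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, past_accept)

# ...reproduces the same preference for equally skilled candidates.
avg_skill = np.zeros(1)
p0 = model.predict_proba(np.column_stack([avg_skill, [0]]))[0, 1]
p1 = model.predict_proba(np.column_stack([avg_skill, [1]]))[0, 1]
print(f"Predicted acceptance at identical skill: group 0 = {p0:.2f}, group 1 = {p1:.2f}")
```

At identical skill, the model recommends the two groups at very different rates, because that is exactly what the historical labels taught it; the bias was inherited from the training data, not introduced by the algorithm itself.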

Merging AI & the Personal Touch

The goal of AI in Talent Acquisition is to bring objectivity to a previously subjective process while increasing the speed of candidate screening, minimizing human bias, and helping assess soft skills more precisely. AI and automation serve a powerful role in the era of mass job applications: they can be extremely effective at distinguishing prospective high performers, improving your time to hire, and freeing up your time for better use. By reallocating that time and objectively identifying the strongest potential candidates, you can focus your human capital on reviewing a distinguished group of candidates and assessing cultural fit, arguably one of the most critical and complex elements of success.

The candidate experience is a crucial part of this process. Candidates expect a personal touch and will not take interviews knowing they will be judged solely by a machine. Further, implementing AI in the hiring process should never create unnecessary friction or add to candidate application requirements in the initial stages. As candidates increasingly weigh the quality of the employer brand, a personalized level of engagement is required. AI will continue to reverberate through HR channels as the purported magic bullet for Talent Acquisition issues. It is important, however, to ensure that AI does not perpetuate existing biases and instead lets companies focus a personal touch on the best-qualified candidates.

Download the full research report 
