Published on 10/04/2019 | Written by Jonathan Cotton
Does AI have a role to play in recruiters’ hardest job?…
From multiple-round interview processes to those baffling personality tests, talent recruitment is as much an art as a science. And when it comes to IT recruitment, things get trickier still. How can recruiters make confident decisions about IT talent when the job requirements call for complex technical capabilities that are difficult to measure?
Recruiters are increasingly turning to technology for help. According to Capterra, 75 percent of hiring and talent managers now use applicant tracking software, or some other kind of recruiting software, to improve their hiring process.
With unemployment low and recruitment expensive (by one reckoning it costs, on average, about US$4,500 and takes 36 days to hire each new employee), the relatively small recruitment tech market is on a growth path. According to a Crunchbase estimate, the US recruitment tech market crossed US$600 million in 2018 alone.
So what are the challenges facing recruiters? More than half say the hardest part of the job is screening candidates from large applicant pools.
Enter artificial intelligence. Automated resume screening is where AI delivers the quickest and easiest efficiency gains. A first pass of applicants can be screened rapidly, skill-sets tabulated and warning signs flagged. After the initial cull, AI can find and scan applicants’ online data, such as social media accounts, and curate the findings into a streamlined shortlist.
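At its simplest, that first pass reduces to matching a resume against a required skill list and scoring the overlap. The sketch below is a minimal, hypothetical illustration, not any vendor’s actual product: the skill list, regex tokeniser and threshold are all assumptions, and real screening systems use far more sophisticated parsing and NLP.

```python
import re

# Hypothetical required skills for an IT role (illustrative assumption)
REQUIRED_SKILLS = {"python", "sql", "aws", "docker", "kubernetes"}

def screen_resume(resume_text, threshold=0.6):
    """First-pass screen: score a resume by the share of required skills it mentions."""
    tokens = set(re.findall(r"[a-z0-9+#]+", resume_text.lower()))
    matched = REQUIRED_SKILLS & tokens
    score = len(matched) / len(REQUIRED_SKILLS)
    return score >= threshold, matched

passed, skills = screen_resume(
    "5 years of Python and SQL; deployed services on AWS with Docker."
)
print(passed, skills)  # True, with four of the five required skills matched
```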
Businesses can also now collect and analyse years of employment data, measuring the quality of historical hires better than ever before. AI can analyse past decisions made by the company itself, assessing the relative achievements of current hires to build a picture of what an ideal candidate looks like for that business specifically.
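In practice, “learning what an ideal candidate looks like” usually means fitting a model to features of past hires labelled with some outcome measure. A hedged sketch of the idea, using scikit-learn and entirely invented features and data, might look like this:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical features for past hires: [years_experience, certifications, interview_score]
# with an outcome label (1 = rated a strong performer after a year). All values invented.
X = np.array([
    [2, 1, 6.5], [7, 3, 8.0], [4, 0, 5.0], [10, 2, 9.0],
    [1, 0, 4.0], [6, 2, 7.5], [3, 1, 6.0], [8, 4, 8.5],
])
y = np.array([0, 1, 0, 1, 0, 1, 0, 1])

# Fit a simple model to the company's own historical hiring outcomes
model = LogisticRegression().fit(X, y)

# Score a new applicant against the learned profile of successful past hires
candidate = np.array([[5, 2, 7.0]])
print(model.predict_proba(candidate)[0, 1])  # estimated probability of a strong hire
```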
IBM has been using its AI platform, Watson, to improve its own employment processes. One internal app can predict – with a claimed 95 percent accuracy – which employees are about to leave the company.
But what about bias? Gone are the days of hiring based on ‘gut feeling’. It’s now accepted that gender, ethnicity and even things like voice and height can have a discriminatory effect on the recruitment process.
Solution: Hand it off to a robot. While Tengai, the creepy AI-powered interview robot, might not be the exact direction the tech takes in future, leveraging technology to ensure a fairer interview process is here to stay.
Nothing’s perfect, of course: unconscious biases can be learned by software, and that’s a real danger. Amazon scrapped its new recruiting engine late last year after it uncovered just such a bias against women in the system.
“AI algorithms are trained to observe patterns in large data sets to help predict outcomes,” says Maude Lavanchy, a research associate at IMD Business School.
“In Amazon’s case, its algorithm used all CVs submitted to the company over a 10-year period to learn how to spot the best candidates. Given the low proportion of women working in the company, as in most technology companies, the algorithm quickly spotted male dominance and thought it was a factor in success.”
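The mechanism Lavanchy describes is easy to reproduce in miniature. The toy example below (synthetic data, not Amazon’s system or its actual code) trains a classifier on historical outcomes that happen to skew male; the model ends up assigning negative weight to a gendered token, the same failure mode reported in Amazon’s case.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Synthetic CV snippets: the 'hired' labels skew male by accident of history,
# so a gendered token ends up correlating with the outcome.
cvs = [
    "captain of chess club, python developer",        # hired
    "java developer, hackathon winner",               # hired
    "python developer, women's chess club captain",   # rejected
    "women's coding society lead, java developer",    # rejected
    "c++ developer, robotics team",                   # hired
    "women's robotics team lead, c++ developer",      # rejected
]
hired = [1, 1, 0, 0, 1, 0]

vec = CountVectorizer()
X = vec.fit_transform(cvs)
model = LogisticRegression().fit(X, hired)

# Inspect the weight the model learned for the gendered token
idx = vec.vocabulary_["women"]
print("weight for 'women':", model.coef_[0, idx])  # negative: bias learned from the data
```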
Amazon says that the recommendations generated by the tool were supplied to recruiters, but the results were “never used by Amazon recruiters to evaluate candidates”.
This potential for bias is something the European Commission is looking to address with its new guidelines for creating trustworthy and ethical AI.
“The ethical dimension of AI is not a luxury feature or an add-on – it is only with trust that our society can fully benefit from technologies,” says Andrus Ansip, VP of the European Commission.
The guidelines insist on AI that supports diversity, non-discrimination and fairness: “AI systems should consider the whole range of human abilities, skills and requirements, and ensure accessibility.”
Currently, that’s easier said than done, however. Bias can, after all, surface in unpredictable ways.
One recent study into how algorithms deliver ads promoting STEM jobs showed that men were more likely to see the ads, not because they were more likely to click on them, but because women are simply more expensive to advertise to.
“Since companies price ads targeting women at a higher rate (women drive 70 percent to 80 percent of all consumer purchases), the algorithm chose to deliver ads more to men than to women because it was designed to optimise ad delivery while keeping costs low,” says Lavanchy.
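Under assumed impression prices (the figures below are illustrative, not the study’s), the mechanism reduces to simple budget arithmetic: an optimiser that maximises impressions per dollar will skew delivery toward the cheaper audience.

```python
# Illustrative-only prices: assume impressions shown to women cost more
# because advertisers compete harder for that audience.
CPM_MEN = 2.00    # hypothetical cost per 1,000 impressions (men)
CPM_WOMEN = 3.50  # hypothetical cost per 1,000 impressions (women)
BUDGET = 1000.0   # fixed campaign budget in dollars

def impressions(budget, cpm):
    """Number of impressions a budget buys at a given cost per thousand."""
    return int(budget / cpm * 1000)

# An optimiser minimising cost-per-impression spends where it's cheapest,
# so the same budget buys far more male impressions of the STEM ad.
print("men:  ", impressions(BUDGET, CPM_MEN))    # 500000
print("women:", impressions(BUDGET, CPM_WOMEN))  # 285714
```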
“If an algorithm only reflects patterns in the data we give it, what its users like, and the economic behaviours that occur in its market, isn’t it unfair to blame it for perpetuating our worst attributes?”
It’s a fair call, and an appropriate one for the early days of AI recruitment tech. AI is a fast learner, and getting faster all the time – but in 2019, it’s still only as good as the data it’s fed.