AI May Not Steal Your Job, but It Could Stop You Getting Hired


If you’ve worried that candidate-screening algorithms could be standing between you and your dream job, reading Hilke Schellmann’s The Algorithm won’t ease your mind. The investigative reporter and NYU journalism professor’s new book demystifies how HR departments use automation software that not only propagates bias but also fails at the very thing it claims to do: find the best candidate for the job.

Schellmann posed as a prospective job hunter to test some of this software, which ranges from résumé screeners and video-game-based tests to personality assessments that analyze facial expressions, vocal intonations, and social media behavior. One tool rated her as a high match for a job even though she spoke nonsense to it in German. A personality assessment algorithm gave her high marks for “steadiness” based on her Twitter use and a low rating for the same trait based on her LinkedIn profile.

It’s enough to make you want to delete your LinkedIn account and embrace homesteading, but Schellmann has uplifting insights too. In an interview that has been edited for length and clarity, she suggested how society could rein in biased HR technology and offered practical tips for job seekers on how to beat the bots.

Caitlin Harrington: You’ve reported on the use of AI in hiring for The Wall Street Journal, MIT Technology Review, and The Guardian over the past several years. At what point did you think, I’ve got a book here?

Hilke Schellmann: One moment was in 2018, when I went to one of the first HR tech conferences and encountered AI tools entering the market. There were like 10,000 people, hundreds of vendors, a lot of buyers and big companies. I realized this was a gigantic market, and it was taking over HR.

Harrington: Software companies often present their products as a way to remove human bias from hiring. But of course AI can absorb and reproduce the bias of the training data it ingests. You discovered one résumé screener that adjusted a candidate’s scores when it detected the phrase “African American” on their résumé.

Schellmann: Of course companies will say their tools don’t have bias, but how have they been tested? Has anyone who doesn’t work at the company looked into them? One company’s manual stated that its hiring AI was trained on data from 18- to 25-year-old college students. The tool might have picked up on something very specific to 18- to 25-year-olds that isn’t applicable to the other workers it was used on.

There’s only so much damage a human hiring manager can do, and obviously we should try to prevent that. But a faulty algorithm that’s used to score hundreds of thousands of workers can damage far more people than any one human ever could.

Now obviously, the vendors don’t want people to look into their black boxes. But I think employers also shy away from looking, because not looking gives them plausible deniability. If they find any problems, there might be 500,000 people who have applied for a job and might have a claim. That’s why we need to mandate more transparency and testing.