
This year your first interview might be with a robot

10 January 2018

From Fast Company:

Fixing the interview process and diversifying their workforce are top of mind at companies looking to add staff this year, according to LinkedIn’s Global Recruiting Trends. And some of them are turning to robots and chatbots to help.

LinkedIn’s new report surveyed 8,800+ recruiters and hiring managers on how these trends would impact hiring in 2018. Those polled indicated that AI is gaining steam because it’s a timesaver (67%), removes human bias (43%), and delivers the best candidate matches (31%). More than half of survey respondents also found AI to be most effective for sourcing candidates (58%), screening (56%), and nurturing candidates (55%).

. . . .

Once candidates record themselves answering standardized questions, “robots (aka computers programmed with advanced algorithms) analyze the interviews” across 15,000 different factors, from body language and facial cues to vocal tone. If they pass muster with the AI, they get invited to in-person interviews. The company says it’s cut hiring time in half and is more successful hiring for “attitude,” which isn’t always evident during a phone call.

Other companies like Deutsche Telekom AG and Sutherland are using chatbots to smooth the initial application process and improve the candidate’s experience. Bots can talk to candidates to filter out the ones who wouldn’t have a chance at the job, or they could keep the conversation going so good people don’t get away.

Link to the rest at Fast Company




24 Comments to “This year your first interview might be with a robot”

  1. Lawsuits incoming.

  2. I very much doubt that any real AIs are involved as they don’t actually exist.

    It’s very probable that the companies’ belief that human bias is removed is delusional: the bias is almost certainly already baked into the algorithms. Still, it means that unsuccessful candidates can be told that the computer decided, so the decision cannot really be explained but does not need to be otherwise justified, and all the biased assumptions (including those involving illegal discrimination) are well hidden from view. There is, of course, no way to tell whether these rejected candidates would actually be a worse match than those selected, but the company will always assume this to be true, as they’ll believe the algorithms even though they don’t really understand them.

    Still, this may be no worse than current systems using human beings, and it will definitely save money on the recruitment process, which is what this is really about (other than software companies seeing a lucrative market).

    • The biggest problem is that with humans the weighting factors aren’t necessarily applied as documented (if they are documented at all). Come lawsuit time, it’s hard to prove specific bias.

      With software, the weighting factors are applied exactly as documented in the source code.

      So what’s the weight given to experience, training, age, diversity? Didn’t get the job? Sue, and discovery will give you the ammo.
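
      To make the point concrete, here is a purely hypothetical scoring function of the kind discovery could surface; every factor name and weight below is invented for illustration:

        # All factors and weights are invented for illustration.
        def score_candidate(years_experience, training_hours, age):
            return (
                2.0 * years_experience
                + 0.5 * training_hours
                - 1.5 * max(0, age - 40)  # this line is the lawsuit
            )

        print(score_candidate(10, 200, 55))  # 97.5: the age penalty is right there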

      • “With software, the weighting factors are applied exactly as documented in the source code.”

        Doesn’t apply if they are using a trained neural net for their analysis, because the decision factors live in the neural net that the training process grows, not in the source code. Researchers are working on neural nets that can explain their decisions, but I doubt that any are operational enough to stand up in court. In a lawsuit, I guess you would have to show a systematic statistical bias to prove discrimination, which would not be that different from the situation today.

        If the system is designed for continual improvement based on the ongoing performance of successful candidates, the net might converge on the biases already existent in the organization. If the organization tends to give bad performance reports to blue-eyed and big-nosed employees, the net would eventually quit hiring them, but it would be hard to determine whether the decisions were based on eye color and nose size or on some other characteristics that go along with blue eyes and large noses.
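
        To see how that can happen, here is a minimal, purely hypothetical sketch (plain numpy, all data synthetic): train a screening model on performance labels that are biased against some trait, withhold the trait itself, and the model learns to reject through a correlated proxy feature instead.

          import numpy as np

          rng = np.random.default_rng(0)
          n = 5000
          protected = rng.integers(0, 2, n)          # e.g. blue eyes; never shown to the model
          proxy = protected + rng.normal(0, 0.5, n)  # harmless trait that correlates with it
          skill = rng.normal(0, 1, n)                # actual job-relevant ability

          # Biased "ground truth": reviewers dock the protected group regardless of skill.
          label = (skill - 1.5 * protected + rng.normal(0, 0.5, n) > 0).astype(float)

          X = np.column_stack([np.ones(n), skill, proxy])  # protected trait excluded
          w = np.zeros(3)
          for _ in range(2000):                            # plain logistic regression
              p = 1 / (1 + np.exp(-X @ w))
              w -= 0.1 * X.T @ (p - label) / n

          print(dict(zip(["intercept", "skill", "proxy"], w.round(2))))
          # The proxy weight comes out clearly negative: the same discrimination,
          # now laundered through an innocuous-looking feature.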

        • I doubt those systems are that advanced.
          Remember that what is billed as AI today is typically a conversational front end atop a database of hardwired rules (a caricature of that shape is sketched below). Great at chess or Go or sorting questionnaires, but not even close to Actual AI.

          We’ll see once the suits get filed.
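
          For anyone who hasn’t peeked inside one, the caricature looks like this; every rule and string is invented for illustration:

            # A thin "conversational" wrapper over a table of hardwired rules:
            # no learning anywhere, just keyword lookup and canned replies.
            RULES = [
                ("experience", "How many years of experience do you have?"),
                ("salary", "Compensation is competitive for the market."),
            ]

            def reply(message: str) -> str:
                for keyword, canned in RULES:
                    if keyword in message.lower():
                        return canned
                return "Interesting! Tell me more."  # the illusion of conversation

            print(reply("What salary does this job pay?"))  # hits the canned salary line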

          • You could be right. A few years ago you were very likely right, but there are so many platforms and tools for machine learning available now that machine learning is no longer rare.

            • It’s not rare, but it’s still not good.
              And for the stated application of “bias-free,” diversity-friendly hiring, I expect a nice juicy mess to hit the headlines long before it gets good.

              Arthur C. Clarke pointed out ages ago (in 2010: Odyssey Two) that a true AI is going to be an inherently rational entity that will find human hypocrisy and treachery troublesome. In the book, that is what made HAL go bonkers.

              Me, I would not want to be around a real world AI that has been taught to lie.

        • A neural net that included race and gender in the training set would be great fun.

        • I read Felix’s comment before going to bed last night and woke this morning to find that you’d replied already, and in a more knowledgeable fashion than I would probably have managed.

          I’d add two points: firstly, this goes much wider than recruitment, as algorithms – typically covered by commercial secrecy that disguises their workings from those using them – are being applied to take decisions about us right across the economy.

          Secondly, the data needed to make good decisions often does not exist, and companies purchase our personal data to use as proxies for what is not available, ignoring the fact that this data is often erroneous and that its use as a proxy may be both dubious and discriminatory.

  3. Socially isolated individuals would benefit from the money being spent on humans to visit with them rather than on a robot – given that the NHS in Britain can’t afford many things, including nurses, spending money on what is an advanced Furby seems wasteful. Watch carefully who gets the contract to provide said machines.

  4. How about firing?
    A robot would be ideal; you couldn’t even grab it by the tie, since it doesn’t have one.

  5. In my latest book, “The Devolution of Adam and Eve,” coming soon, the boss, Maximus, is an AI character, and this is its response to threats:
    “If we die, we’re going to come back from the dead and haunt you, Maximus.”
    “Good luck haunting me,” said Maximus, the AI entity, living in the cloud.

  6. So anyone who misses the marks is an auto-fail. A good thing if you need robot workers, not so good if you’re looking for people who can be creative and actually solve the problems not already figured out.

    • I first encountered automated HR back in the mid ’90s: companies that only accepted applications online, and then apparently sorted the applications by grepping for keywords.

      One of the HR people was telling me they were getting zero qualified applicants for a position. Apparently they didn’t see any problem with requiring five years of experience with a specific program version that had only been out for six months… their software wasn’t sophisticated enough to tell the difference. Of course, neither were they…
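
      For the curious, a toy version of that screener (product names invented) shows exactly how it fails:

        import re

        # Hard requirements straight from the job posting, grepped verbatim.
        REQUIRED = [r"WidgetPro 4\.0", r"5\+? years"]

        def passes_screen(resume_text: str) -> bool:
            return all(re.search(pat, resume_text, re.IGNORECASE) for pat in REQUIRED)

        # Passes: the grep can't tell which product the years attach to.
        print(passes_screen("6 months of WidgetPro 4.0; 5 years on WidgetPro 3.x"))
        # Fails: an honest resume without the magic phrasing.
        print(passes_screen("Five years of WidgetPro, current on the 4.0 release"))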

      • GIGO: garbage in, garbage out.

        I spent a few years at Dell (1996–2003) and got to watch it go to pot. I was in server support (those ‘big’ systems that held your databases and saved your work before there was this ‘cloud’ thing to lose things in), and you had to understand both the hardware and the software in order to find and fix problems. HR would send us applicants who had passed the buzzword-bingo part and whom the ‘managers’ liked, but nine out of ten couldn’t troubleshoot hardware to save their lives (under five minutes of quizzing by one of the senior techs proved it).

        What did Dell do when they couldn’t find enough techs who knew what they were doing? They took the senior tech out of the loop, hired any warm body, and then came up with scripts for those warm bodies to read to the customers calling in. (Is it plugged in? Have you tried rebooting the system? …)

        • I have a Dell laptop that sometimes gives me a message saying the hard drive is not installed (it is), and it throws me offline at least once a day even though my wifi is working fine.

      • And this is probably why employers keep complaining that they aren’t filling jobs because they “can’t find qualified applicants.” That, and they’re offering entry-level wages while requiring a college degree. *smh*

        • Terrence P OBrien: I suspect that reflects what they think of a college degree.

          • In many industries they are even right to do so. Oftentimes a college degree merely suggests the person might be trainable for a productive trade.

            Then again, some degrees are proof they aren’t worth the effort to train. 🙂

        • It’s become a joke among millennials that employers seem to want recent college graduates with five years of work experience in their chosen field.

          • In some fields, signing up for summer internships and co-op programs takes care of that. We got some of our best employees that way. (And avoided a couple, too.)
