3 things businesses need to know as NYC begins enforcing its AI hiring law

From Venture Beat:

In July, New York City officially began cracking down on companies that run afoul of its first-in-the-nation law (NYC Law 144) governing the use of artificial intelligence in employment decisions.

Even companies that are not based in New York City but have operations and employees there — particularly global enterprises — must comply with this new regulation. The law doesn’t explicitly prohibit AI, but it provides guidelines for how the technology should be used when making hiring decisions.

That’s an important distinction. Organizations across industries (healthcare, manufacturing, retail and countless others) already use intelligent technology in a multitude of ways. Examples include oncologists using AI to help diagnose cancer with a high degree of precision, manufacturers and retailers predicting buying patterns to improve logistics and the consumer experience, and audio engineers using auto-tune to correct or enhance a singer’s pitch on nearly all music recorded today.

When it comes to personnel matters, companies currently use AI to match relevant candidates with the right jobs — and this is Law 144’s focus. After multiple delays, the new law has left many companies a bit jittery at a time when job openings remain elevated and unemployment is near historic lows.

. . . .

Boldface tech names such as Microsoft’s president, Brad Smith, and Google’s CEO, Sundar Pichai, have endorsed a regulatory framework. Transparency is always a good thing. “I still believe A.I. is too important not to regulate and too important not to regulate well,” Pichai wrote in the Financial Times.

Conversely, if not done well, regulations could negatively impact job seekers and hiring managers by restricting the insightful information and tailored experiences that form the crux of a positive employment process. 

Thirty years ago, recruiters sifted through stacks of resumes sitting on their desks. Candidates were often selected based on inconsistent criteria — an Ivy League education, or a bit of luck in how high in the pile their resume happened to sit, something over which they had no control. Humans’ unconscious biases add another untraceable filter when technology isn’t involved.

AI delivered scalability and accuracy to help level the playing field by matching individuals with the required skills and experience to the right roles, regardless of where they sit within the proverbial pile of resumes. AI also helps recruiters see the whole person and skills that the individual may not have thought to highlight within their resume. AI can’t prevent a recruiter or hiring manager from taking shortcuts. But it can make them less necessary by surfacing relevant resumes that might otherwise be lost in the pile.

The combination of human control and AI support is a good counter against bias in two ways. First, one cause of bias in human decision-making is that people often look for shortcuts to solving problems, like focusing only on candidates from Ivy League schools rather than investing time and effort to source and evaluate candidates from non-traditional backgrounds.

Second, bias detection with adverse-impact reporting can expose such bias in real time, allowing the organization to take action to stop such biased decisions.
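To make the idea concrete, adverse-impact reporting typically compares selection rates across demographic groups — the EEOC’s four-fifths rule of thumb flags a group whose selection rate falls below 80% of the highest group’s rate. A minimal sketch of that calculation (the group names and numbers here are invented for illustration, not drawn from the article):

```python
def adverse_impact_ratios(groups):
    """Compare each group's selection rate to the highest-rate group.

    groups maps a group name to (selected, applicants). Under the
    EEOC's four-fifths (80%) rule of thumb, a ratio below 0.8 is a
    common flag for potential adverse impact.
    """
    rates = {g: selected / applicants
             for g, (selected, applicants) in groups.items()}
    top_rate = max(rates.values())
    return {g: rate / top_rate for g, rate in rates.items()}


# Hypothetical hiring numbers: (selected, applicants) per group
groups = {"group_a": (40, 100), "group_b": (24, 100)}

ratios = adverse_impact_ratios(groups)
flagged = [g for g, r in ratios.items() if r < 0.8]
print(ratios)   # group_a: 1.0, group_b: 0.6
print(flagged)  # group_b falls below the four-fifths threshold
```

A real bias-audit system would of course run this continuously over live hiring data and with legally meaningful categories; the point of the sketch is only that the underlying check is a simple, auditable ratio.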

Laws now being debated in Europe could restrict the use of any personalization in the talent acquisition lifecycle. That could hamper employment prospects not only for external candidates, but also for employees already in the company who are looking to move into a new role.

Pulling back hard on the reins of these technologies could actually lead to more bias because an imperfect human would then be solely in charge of the decision-making process. That could lead to a fine under the New York law and additional federal penalties since the Equal Employment Opportunity Commission has warned companies that they are on the hook for any discrimination in hiring, firing or promotions — even if it was unintentional and regardless of whether it is AI-assisted.

No law is perfect and NYC’s new legislation is no different. One requirement is to notify candidates that AI is being used — like cookie notifications on websites or end-user license agreements (EULAs) that most people click on without reading or truly understanding them.

Words matter. When reading AI-use notifications, individuals could easily conjure doomsday images portrayed in movies of technology overtaking humanity. There are countless examples of new technology evoking fear. Electricity was thought to be unsafe in the 1800s, and when bicycles were first introduced, they were perceived as reckless, unsightly and unsafe.

Explainability is a key requirement of this regulation, as well as simply good practice. There are ways to minimize fear and improve notifications: Make them clear and succinct, and keep legal jargon to a minimum so the intended audience can actually understand the AI that’s in use.

Link to the rest at Venture Beat

One more reason not to have an office in New York City.

3 thoughts on “3 things businesses need to know as NYC begins enforcing its AI hiring law”

  1. Pulling back hard on the reins of these technologies could actually lead to more bias because an imperfect human would then be solely in charge of the decision-making process

    So, um, who was it who programmed the AIs? God Himself? Or imperfect humans who, I will guarantee you, had their own set of blind spots and unconscious biases that almost certainly made their way into the programming?

    The answer, of course, is the latter, and that the author fell into this cliche is not a good sign for their reasoning skills.

  2. how do you define AI so that you can forbid its use?

    if you rename AI to ‘expert system’ does that change things?
    at what point does a series of if…then rules become an “AI”?

Comments are closed.