Our Software Is Biased like We Are. Can New Laws Change That?

From The Wall Street Journal:

Lawyers for Eric Loomis stood before the Supreme Court of Wisconsin in April 2016 and argued that their client had experienced a uniquely 21st-century abridgment of his rights: Mr. Loomis had been discriminated against by a computer algorithm.

Three years prior, Mr. Loomis had been found guilty of attempting to flee police and operating a vehicle without the owner’s consent. During sentencing, the judge consulted COMPAS (short for Correctional Offender Management Profiling for Alternative Sanctions), a popular software system from a company called Equivant. It weighs factors including indications that a person abuses drugs, whether they have family support, and age at first arrest to estimate how likely someone is to commit another crime.

The sentencing guidelines didn’t require the judge to impose a prison sentence. But COMPAS said Mr. Loomis was likely to be a repeat offender, and the judge gave him six years.
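
COMPAS’s actual model is proprietary, so the following is only a minimal sketch of how a factor-based risk tool of this general kind could work. The factor names, weights and cutoff below are invented for illustration; they are not drawn from COMPAS.

    # Illustrative sketch only: factors, weights and cutoff are invented,
    # not taken from COMPAS's proprietary model.
    def recidivism_risk_score(abuses_drugs: bool,
                              has_family_support: bool,
                              age_at_first_arrest: int) -> float:
        score = 0.0
        if abuses_drugs:
            score += 2.0   # hypothetical weight
        if not has_family_support:
            score += 1.5   # hypothetical weight
        if age_at_first_arrest < 18:
            score += 2.5   # hypothetical weight
        return score

    # A defendant is flagged as a likely repeat offender above an arbitrary cutoff.
    likely_to_reoffend = recidivism_risk_score(True, False, 17) >= 3.0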

An algorithm is just a set of instructions for accomplishing a task. Algorithms range from simple computer programs, defined and implemented by humans, to far more complex artificial-intelligence systems trained on terabytes of data. Either way, human bias is part of their programming. Facial recognition systems, for instance, are trained on millions of faces, but if those training databases aren’t sufficiently diverse, they are less accurate at identifying faces with skin colors they’ve seen less frequently. Experts fear that could lead to police forces disproportionately targeting innocent people who are already under suspicion solely by virtue of their appearance.
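
One way such a gap surfaces is when accuracy is measured separately for each demographic group instead of averaged over everyone. A minimal sketch of that kind of per-group breakdown, using invented audit data:

    from collections import defaultdict

    # Hypothetical audit records: (demographic group, true identity, predicted identity)
    results = [
        ("group_a", "alice", "alice"), ("group_a", "bob", "bob"),
        ("group_b", "carol", "dave"), ("group_b", "erin", "erin"),
    ]

    correct = defaultdict(int)
    total = defaultdict(int)
    for group, truth, prediction in results:
        total[group] += 1
        correct[group] += (truth == prediction)

    # Per-group accuracy exposes a skew that a single overall number would hide.
    for group in total:
        print(group, correct[group] / total[group])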

. . . .

No matter how much we know about the algorithms that control our lives, making them “fair” may be difficult or even impossible. Yet as biased as algorithms can be, at least they can be consistent. With humans, biases can vary widely from one person to the next.

As governments and businesses look to algorithms to increase consistency, save money or just manage complicated processes, our reliance on them is starting to worry politicians, activists and technology researchers. The decisions computers are increasingly asked to help make have a long history of abuse and bias: who gets the job, who benefits from government services, who is offered the best interest rates and, of course, who goes to jail.

“Some people talk about getting rid of bias from algorithms, but that’s not what we’d be doing even in an ideal state,” says Cathy O’Neil, a former Wall Street quant turned self-described algorithm auditor, who wrote the book “Weapons of Math Destruction.”

“There’s no such thing as a non-biased discriminating tool, determining who deserves this job, who deserves this treatment. The algorithm is inherently discriminating, so the question is what bias do you want it to have?” she adds.

. . . .

An increasingly common type of algorithm predicts whether parents will harm their children, basing its score on whatever data is at hand. If a parent is low-income and has used government mental-health services, that parent’s risk score goes up. But for another parent who can afford private health insurance, the equivalent data is simply unavailable, so nothing comparable raises that parent’s score. This creates an inherent (if unintended) bias against low-income parents, says Rashida Richardson, director of policy research at the nonprofit AI Now Institute, which provides feedback and relevant research to governments working on algorithmic transparency.
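
A minimal sketch of how that skew can arise, with invented field names and weights: when a risk factor is visible only in public-agency records, a parent who received the same care through private insurance generates no data at all, so the factor never raises their score.

    # Illustrative only: field names and weights are invented, not the actual system's.
    def child_welfare_risk_score(record: dict) -> int:
        score = 0
        # The agency only sees *public* mental-health usage; a parent treated
        # through private insurance has no record here, so the factor silently
        # reads as False and contributes nothing.
        if record.get("used_public_mental_health_services", False):
            score += 2
        if record.get("received_income_assistance", False):
            score += 1
        return score

    low_income_parent = {"used_public_mental_health_services": True,
                         "received_income_assistance": True}
    insured_parent = {}  # same therapy in reality, but invisible in the agency's data

    print(child_welfare_risk_score(low_income_parent))  # 3
    print(child_welfare_risk_score(insured_parent))     # 0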

The irony is that, in adopting these modernized systems, communities are resurfacing debates from the past, when the biases and motivations of human decision makers were called into question. Ms. Richardson says panels that assess bias in these systems should include not only data scientists and technologists but also legal experts familiar with the rich history of laws and cases dealing with identifying and remedying bias, as in employment and housing law.

Link to the rest at The Wall Street Journal

7 thoughts on “Our Software Is Biased like We Are. Can New Laws Change That?”

  1. The title is a question, so ‘no’ is the correct answer.

    You cannot ‘not’ have bias; it’s built into the system. You cannot make a program or a law that doesn’t contain some type of bias.

    “Facial recognition systems, for instance, are trained on millions of faces, but if those training databases aren’t sufficiently diverse, they are less accurate at identifying faces with skin colors they’ve seen less frequently.”

    Hmm, someone remind me what the black/white/yellow/red ratios are for them to use in the first place? So to un-bias one thing they have to bias another – which then needs to somehow be un-biased in turn – without biasing yet something else.

    “There’s no such thing as a non-biased discriminating tool, determining who deserves this job, who deserves this treatment. The algorithm is inherently discriminating, so the question is what bias do you want it to have?”

    The biggest problem you will have with any such idea/program is that everyone will want it biased ‘their’ way.

    • Their bias is based on Zero-sum thinking.
      They can’t even conceive of the concept of a growth-driven algorithm that provides more for everybody participating in the system.

      One of the reasons the publishing establishment, for one, rails against ebooks in general and Indie ebooks in particular. The idea that a path exists that delivers more to authors and more to readers simultaneously is beyond their world view.

      • “They can’t even conceive of the concept of a growth-driven algorithm that provides more for everybody participating in the system.”

        Or that they won’t have a say in who can say/do/think what.

        “One of the reasons the publishing establishment, for one, rails against ebooks in general and Indie ebooks in particular. The idea that a path exists that delivers more to authors and more to readers simultaneously is beyond their world view.”

        They only rail against the loss of ‘control’ over those writers and readers – and of course all the money slipping through their hands … 😉

      • They don’t give a rip about the paths and authors and readers. They follow the money, and see the trail getting faint. They know exactly why authors choose KDP over agent submissions.

        They don’t want to say, “We oppose independent publishing on KDP because it diverts money from us.” So, they come up with some mush that most appeals to their base. (Maybe that “Book Community” is the base?) Listen to what they say, but don’t believe it without very good reasons.

        • Of course it’s all about the money.
          Once you buy into Zero-sum thinking, *everything* devolves into money.

          • Zero sum? That has little to do with fights over market share. Some of the best fights are over shares of an expanding market. The mush we hear about expanding markets benefiting everyone falls flat when some folks decide they want as much as they can get. Count both publishers and independent authors among them.

  2. algorithm:

    if (arrests > 10
            and convictions > 5
            and previous_convictions_for_same_crime > 2):
        likely_to_be_a_repeat_offender = True

    BIAS! Woo-woo-woot! Thoughtcrime! Lawsuit!

    Even if it’s the exact same checklist the judge would have looked at, as prepared by his clerk and typed out on a piece of paper.

Comments are closed.