Forced Robot Arbitration

From Cornell Law Review, Vol. 109, forthcoming 2023:

Abstract

Recently, advances in artificial intelligence (AI) have sparked interest in a topic that sounds like science fiction: robot judges. Researchers have harnessed AI to build programs that can predict the outcome of legal disputes. Some countries have even begun allowing AI systems to resolve small claims. These developments are fueling a fascinating debate over whether AI courts will increase access to justice or undermine the rule of law.

However, this Article argues that AI adjudication is more likely to set down roots in one of the most controversial areas of the American civil justice system: forced arbitration. For decades, corporations and arbitration providers have capitalized on the U.S. Supreme Court’s muscular interpretation of the Federal Arbitration Act (FAA) to create their own alternative procedural universes. These entities may soon take the next step and eliminate human decision-makers in some contexts. First, most objections to AI judges do not apply to AI arbitrators. For example, because AI systems suffer from the “black box problem”—they cannot explain the reasoning behind their conclusions—deploying them in the judicial system might violate procedural due process principles. But opacity is already the norm in arbitration, which is private, confidential, and often features awards that are unwritten. Second, although AI legal prediction tools are still embryonic, they work well in the simple debt collection and employment misclassification disputes that businesses routinely funnel into arbitration. Third, AI programs require little overhead and operate at lightning speed. The ability to streamline the process has become especially important in the last few years, as plaintiffs’ lawyers have begun filing “mass arbitrations”—overloading the system with scores of individual claims in an effort to saddle defendants with millions of dollars in fees. For these reasons, companies and arbitration providers have powerful financial incentives to experiment with automating decision-making in certain cases.

The Article then offers an insight that will have a profound impact on this futuristic form of dispute resolution. Drawing on the FAA’s text, structure, and legislative history, the Article contends that the statute applies only to adjudication conducted by a “person.” Thus, there is no federal mandate that courts enforce agreements to resolve disputes by AI. In turn, because state law fills gaps in the FAA, individual jurisdictions will be able to decide for themselves whether to permit or prohibit robot arbitration. Finally, the Article explains why this incremental approach is better than either barring AI dispute resolution or finding that it triggers the gale force of the FAA.

Link to the rest at Cornell Law Review, Vol. 109, forthcoming 2023 via SSRN and thanks to C. for the tip.

PG is reminded of a Hemingway quote:

“How did you go bankrupt?”
“Two ways. Gradually, then suddenly.”

Ernest Hemingway, The Sun Also Rises

He had suspected that AI would take a similar path, gradually, then suddenly, but the “gradually” phase is turning out to be a lot shorter than PG would have anticipated.

3 thoughts on “Forced Robot Arbitration”

  1. AI is a lot like using a 16-year-old who got an A in spelling and grammar as the final arbiter. They have zero understanding of the subject, but it sure looks pretty.

    • …which would be better than the default outcome in IP-related arbitrations. I’ve yet to see one that looks pretty (and I say that having been on the winning side more often than not). I can find no reason to believe that an AI will be more effective.
