Software crashes all the time, and the law does little about it. But as Bryan H. Choi notes in Crashworthy Code, “anticipation has been building that the rules for cyber-physical liability will be different.” (P. 43.) It is one thing for your laptop to eat the latest version of your article, and another for your self-driving lawn mower to run over your foot. The former might not trigger losses of the kind tort law cares about, but the latter seems pretty indistinguishable from physical accidents of yore. Whatever one may think of CDA 230 now, the bargain struck in this country to protect innovation and expression on the internet is by no means the right one for addressing physical harms. Robots may be special, but so are people’s limbs.
In this article, Choi joins the fray of scholars debating what comes next for tort law in the age of embodied software: robots, the internet of things, and self-driving cars. Meticulously researched, legally sharp, and truly interdisciplinary, Crashworthy Code offers a thoughtful way out of the impasse tort law currently faces. While arguing that software is exceptional not in the harms that it causes but in the way that it crashes, Choi refuses to revert to the tropes of libertarianism or protectionism. We can have risk mitigation without killing off innovation, he argues. Tort, it turns out, has done this sort of thing before.
Choi dedicates Part I of the article to the Goldilocksean voices in the current debate. One camp, which Choi labels consumer protectionism, argues that with human drivers out of the loop, companies should pay the cost of accidents caused by autonomous software. Companies are the “least cost avoiders” and the “best risk spreaders.” This argument tends to result in calls for strict liability or no-fault insurance, neither of which Choi believes to be practicable.
Swinging from too hot to too cold, what Choi calls technology protectionism “starts from the opposite premise that it is cyber-physical manufacturers who need safeguarding.” (P. 58.) This camp argues that burdensome liability will prevent valuable innovation. This article is worth reading for the literature review here alone. Choi briskly summarizes numerous calls for immunity from liability, often paired with some version of administrative oversight.
. . . .
The puzzle, then, isn’t that software now produces physical injuries, thus threatening the existing policy balance between protecting innovation and remediating harm. It’s that these newly physical injuries make visible a characteristic of software that makes it particularly hard to regulate ex post, through lawsuits. In other words, “[s]oftware liability is stuck on crash prevention,” when it should be focused instead on making programmers mitigate risk. (P. 87.)
In Part III, Choi turns to a line of cases in which courts found a way to get industry to increase its efforts at prevention and risk mitigation, without crushing innovation or otherwise shutting companies down. In a series of crashworthiness cases from the 1960s, courts found that car manufacturers were responsible for mitigating injuries in a car crash, even if (a) such crashes were statistically inevitable, and (b) the chain of causation was extremely hard to determine. While an automaker might not be responsible for the crash itself, it could be held liable for failing to make crashing safer.
The similarities and differences between injuries caused by what an author writes in a book and injuries caused by how an author writes computer code are interesting to PG.