Lessons From Literal Crashes For Code


From Jotwell:

Software crashes all the time, and the law does little about it. But as Bryan H. Choi notes in Crashworthy Code, “anticipation has been building that the rules for cyber-physical liability will be different.” (P. 43.) It is one thing for your laptop to eat the latest version of your article, and another for your self-driving lawn mower to run over your foot. The former might not trigger losses of the kind tort law cares about, but the latter seems pretty indistinguishable from physical accidents of yore. Whatever one may think of CDA 230 now, the bargain struck in this country to protect innovation and expression on the internet is by no means the right one for addressing physical harms. Robots may be special, but so are people’s limbs.

In this article, Choi joins the fray of scholars debating what comes next for tort law in the age of embodied software: robots, the internet of things, and self-driving cars. Meticulously researched, legally sharp, and truly interdisciplinary, Crashworthy Code offers a thoughtful way out of the impasse tort law currently faces. While arguing that software is exceptional not in the harms that it causes but in the way that it crashes, Choi refuses to revert to the tropes of libertarianism or protectionism. We can have risk mitigation without killing off innovation, he argues. Tort, it turns out, has done this sort of thing before.

Choi dedicates Part I of the article to the Goldilocksean voices in the current debate. One camp, which Choi labels consumer protectionism, argues that with human drivers out of the loop, companies should pay the cost of accidents caused by autonomous software. Companies are the “least cost avoiders” and the “best risk spreaders.” This argument tends to result in calls for strict liability or no-fault insurance, neither of which Choi believes to be practicable.

Swinging from too hot to too cold, what Choi calls technology protectionism “starts from the opposite premise that it is cyber-physical manufacturers who need safeguarding.” (P. 58.) This camp argues that burdensome liability will prevent valuable innovation. This article is worth reading for the literature review here alone. Choi briskly summarizes numerous calls for immunity from liability, often paired with some version of administrative oversight.

. . . .

The puzzle, then, isn’t that software now produces physical injuries, thus threatening the existing policy balance between protecting innovation and remediating harm. It’s that these newly physical injuries make visible a characteristic of software that makes it particularly hard to regulate ex post, through lawsuits. In other words, “[s]oftware liability is stuck on crash prevention,” when it should be focused instead on making programmers mitigate risk. (P. 87.)

In Part III, Choi turns to a line of cases in which courts found a way to get industry to increase its efforts at prevention and risk mitigation, without crushing innovation or otherwise shutting companies down. In a series of crashworthiness cases from the 1960s, courts found that car manufacturers were responsible for mitigating injuries in a car crash, even if (a) such crashes were statistically inevitable, and (b) the chain of causation was extremely hard to determine. While an automaker might not be responsible for the crash itself, it could be held liable for failing to make crashing safer.

Link to the rest at Jotwell, and here’s a link to the Washington Law Review article that Jotwell discusses.

The similarities and differences between injuries caused by what an author writes in a book and injuries caused by how an author writes computer code are interesting to PG.

9 thoughts on “Lessons From Literal Crashes For Code”

  1. The key part seems to be ‘without crushing innovation or otherwise shutting companies down,’ a noble goal that is sometimes lost in making money.

    What the companies seem to forget is that the PR necessary for damage control is far more expensive to their reputation than attempting to mitigate – and failing – would be.

    Company after company ignores the warnings of its scientists and engineers – à la Dilbert’s pointy-haired boss – and then gets taken down big time. Possibly it still makes economic sense, but that’s not a given: jury awards can be huge.

  2. Here’s an example from the past that you might find amusing and that makes some of these issues concrete. A friend was involved at the management level, so these (non-public) details came my way.

    Back in the last century, a large bank in NYC was unable to reconcile all of its trades at the end of the day because of a computer failure. When trades cannot be reconciled and offset against each other (the buys & sells), which is the goal, the buyer cannot simply pay the net — instead it must pay the total potential liability, which means a sudden and unexpected need for a loan, costing the bank interest on that amount at least overnight (or until the situation is resolved). That was millions of dollars in this case.

    The failure was identified and corrected the next day. The trading software, which was well tested and had run for years, had a field for the number of daily trades. Back in the bad old primitive software days, some long-forgotten programmer had made this a half-byte field to save computer memory (which was then at a premium). On the day in question, this number of trades finally exceeded what could be represented in that half-byte field.

    In my business software career, the most lurid software failures are usually triggered not by an error in constructing the software from the spec, but by a failure of imagination at design time about what could possibly go wrong in the real world. Hard-core software tries to test all of its limits in advance, but this was a subtle one, based on the definition of the field rather than directly on its numeric limit (a small sketch follows at the end of this comment).

    I can’t begin to imagine the hubris implied in deciding that it’s possible to test for some of these things in advance from a liability perspective. (“What if the number of trades should become unimaginably large?” “What if aliens should visit and begin trading with us?” “What if the sky should fall?”…) In this case, the bank certainly paid a large penalty (interest on the overnight loan, which probably exceeded the cost of building the software), and that seems fair to me. Liability, too?
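
    A minimal sketch of that failure mode, in Python purely for illustration (the 4-bit width, the names, and the wrap-around behavior are assumptions standing in for whatever the bank’s system actually did, not details from the story):

    ```python
    # Illustrative only: emulate a fixed-width unsigned counter like the
    # trade-count field described above. Width and names are assumptions.

    FIELD_BITS = 4                      # hypothetical width of the counter field
    FIELD_MAX = (1 << FIELD_BITS) - 1   # largest value the field can hold (15)

    def record_trade(stored_count: int) -> int:
        """Increment the stored count, wrapping silently the way a fixed-width
        field does. The silence is the failure mode: no error, just a wrong number."""
        return (stored_count + 1) & FIELD_MAX

    stored = 0
    for true_total in range(1, 20):
        stored = record_trade(stored)
        # Once the true total passes FIELD_MAX, the stored count no longer
        # matches reality, and downstream reconciliation quietly breaks.
        if stored != true_total:
            print(f"trade {true_total}: stored count is {stored} -- overflow")
            break
    ```

    Nothing fails loudly at the moment of overflow; the wrong number only surfaces later, at reconciliation time, exactly as in the anecdote.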

      • Oh, sure, and it was no doubt reasonable for the period. That coder did nothing wrong in that case.

        Who knew that the potential maximum number of trades would ever grow to exceed that value? (…cough…real world events…cough)

          • A half-byte is only 4 bits. The largest number it can contain is 15. IMHO that seems excessively small for any period since the stock market began operation.

          • This half-byte nonsense has become an itch for me.

            Question: How the hell do you save a half-byte of storage?
            Answer: You don’t. Computers cannot address half-bytes. Not now, not ever.

            I think it far, far more likely that the reporter screwed up. The programmer saved a half-word — 2 bytes — and the ignorant reporter wrote it as a half-byte.

            Makes a huge difference.

            The largest number a half-word can contain is 65,535 (see the quick check just below). I can understand that a programmer shaving bytes in the 60s might have thought that such a number would be sufficient for the foreseeable future.

            That’s my story, and I’m sticking with it.
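
            For what it’s worth, here is a quick check of the capacity figures being debated in this thread (a tiny Python sketch using the usual conventions; nothing is known here about the bank’s actual machine): an n-bit unsigned field holds 0 through 2^n - 1, and a signed field tops out at 2^(n-1) - 1.

            ```python
            # Capacity reference: n-bit unsigned max is 2**n - 1, signed max is
            # 2**(n - 1) - 1. The widths below are the usual conventions only.
            for label, bits in [("half-byte (nibble)", 4), ("half-word (2 bytes)", 16)]:
                print(f"{label:20s} unsigned max {2**bits - 1:>7,}   signed max {2**(bits - 1) - 1:>7,}")
            ```

            Run as-is, it prints 15 and 7 for the nibble, and 65,535 and 32,767 for the two-byte half-word.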

    • I suspect that the “half-byte” attribute of this story has been misstated. A half-byte would be just 4 bits (3 on some much older IBM machines) and so could only count up to 15 (7 on the older machines). It’s hard to imagine a limit that small could have gone unnoticed for years of real-world operation.

      ——————

      Minimizing liability is a concern in risk management, a standard part of the process of designing these kinds of systems. A good risk analysis procedure might discount the chances of aliens trading with us, but might well consider the question of “what if the number of trades exceeds the documented limit?” Increasing that limit might reduce the probability of that happening, but would not eliminate the risk entirely. A more appropriate mitigation of that risk would probably be to make sure that the system failed in a “safe” manner – for example, it might cease trading, but it should not lose the record of trades performed and trades still pending.
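
      As a purely hypothetical sketch of that “fail safe” behavior (the names, the limit, and the structure are invented for illustration, not taken from any real trading system): when the documented limit is reached, intake stops with an explicit error and nothing already recorded is lost.

      ```python
      # Hypothetical sketch of failing "safe" at a documented limit: refuse new
      # work loudly instead of overflowing silently, and preserve all records.

      TRADE_COUNT_LIMIT = 65_535  # assumed documented capacity of the count field

      class TradeLimitReached(Exception):
          """Raised when the day's trade count hits the documented limit."""

      class TradeLog:
          def __init__(self) -> None:
              self.completed: list[str] = []   # trades already reconciled
              self.pending: list[str] = []     # trades awaiting reconciliation

          def record(self, trade_id: str) -> None:
              if len(self.completed) + len(self.pending) >= TRADE_COUNT_LIMIT:
                  # Fail safe: halt intake with an explicit error. Nothing already
                  # recorded is discarded, so reconciliation can resume later.
                  raise TradeLimitReached(
                      f"daily limit of {TRADE_COUNT_LIMIT} trades reached; "
                      "halting intake, records preserved")
              self.pending.append(trade_id)
      ```

      Raising the limit only lowers the odds of hitting this branch; the point of the design is that hitting it is a controlled stop rather than a silent corruption.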

      • Re: half-byte… I was a sw pro, but not in COBOL. Might have been byte vs double-byte.

        Re: risk mitigation, just think of Y2K, where everyone knew it was coming decades in advance, but no one could imagine their software would still be running by then.

        Obviously real risk analysis would not be swayed by that, but it’s a very human failing… 🙂
