The Danger of Intimate Algorithms

PG thought this might be an interesting writing prompt for sci-fi authors.

From Public Books:

After a sleepless night—during which I was kept awake by the constant alerts from my new automatic insulin pump and sensor system—I updated my Facebook status to read: “Idea for a new theory of media/technology: ‘Abusive Technology.’ No matter how badly it behaves one day, we wake up the following day thinking it will be better, only to have our hopes/dreams crushed by disappointment.” I was frustrated by the interactions that took place between, essentially, my body and an algorithm. But perhaps what took place could best be explained through a joke:

What did the algorithm say to the body at 4:24 a.m.?

“Calibrate now.”

What did the algorithm say to the body at 5:34 a.m.?

“Calibrate now.”

What did the algorithm say to the body at 6:39 a.m.?

“Calibrate now.”

And what did the body say to the algorithm?

“I’m tired of this s***. Go back to sleep unless I’m having a medical emergency.”

Although framed humorously, this scenario is a realistic depiction of the life of a person with type 1 diabetes, using one of the newest insulin pumps and continuous glucose monitor (CGM) systems. The system, Medtronic’s MiniMed 670G, is marketed as “the world’s first hybrid closed loop system,” meaning it is able to automatically and dynamically adjust insulin delivery based on real-time sensor data about blood sugar. It features three modes of use: (1) manual mode (preset insulin delivery); (2) hybrid mode with a feature called “suspend on low” (preset insulin delivery, but the system shuts off delivery if sensor data indicates that blood sugar is too low or going down too quickly); and (3) auto mode (dynamically adjusted insulin delivery based on sensor data).

In this context, the auto mode is another way of saying the “algorithmic mode”: the machine, using an algorithm, would automatically add insulin if blood sugar is too high and suspend the delivery of insulin if blood sugar is too low. And this could be done, the advertising promised, in one’s sleep, or while one is in meetings or is otherwise too consumed in human activity to monitor a device. Thanks to this new machine, apparently, the algorithm would work with my body. What could go wrong?
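The mode logic described above can be pictured as a simple decision function. What follows is a minimal, hypothetical sketch: the thresholds, rates, and the `insulin_action` helper are invented for illustration, and it is not Medtronic’s actual control algorithm, only a way of showing what “suspend on low” and dynamic adjustment mean in code.

```python
# Hypothetical illustration of the three modes described above.
# All thresholds, units, and rates are invented; this is NOT the 670G's algorithm.

def insulin_action(mode, glucose_mg_dl, glucose_trend, preset_rate):
    """Return an insulin delivery rate (units/hour) for one control step."""
    if mode == "manual":
        # Preset delivery, regardless of sensor data.
        return preset_rate

    if mode == "suspend_on_low":
        # Preset delivery, but shut off if the sensor shows a low reading
        # or a rapid drop (illustrative thresholds).
        if glucose_mg_dl < 70 or glucose_trend < -2.0:
            return 0.0
        return preset_rate

    if mode == "auto":
        # Dynamically adjust delivery from real-time sensor data:
        # suspend when low, increase when high, otherwise hold the preset.
        if glucose_mg_dl < 70:
            return 0.0
        if glucose_mg_dl > 180:
            return preset_rate * 1.5
        return preset_rate

    raise ValueError(f"unknown mode: {mode}")


# Example: auto mode at 4:24 a.m., sensor reading 65 mg/dL and falling.
print(insulin_action("auto", 65, glucose_trend=-1.5, preset_rate=0.8))  # -> 0.0
```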

Unlike drug makers, companies that make medical devices are not required to conduct clinical trials in order to evaluate the side effects of these devices prior to marketing and selling them. While the US Food and Drug Administration usually assesses the benefit-risk profile of medical devices before they are approved, often risks become known only after the devices are in use (the same way bugs are identified after an iPhone’s release and fixed in subsequent software upgrades). The FDA refers to this information as medical device “emerging signals” and offers guidance as to when a company is required to notify the public.

As such, patients are, in effect, exploited as experimental subjects, who live with devices that are permanently in beta. And unlike those who own the latest iPhone, a person who is dependent on a medical device—due to four-year product warranties, near monopolies in the health care and medical device industry, and health insurance guidelines—cannot easily downgrade, change devices, or switch to another provider when problems do occur.

It’s easy to critique technological systems. But it’s much harder to live intimately with them. With automated systems—and, in particular, with networked medical devices—the technical, medical, and legal entanglements get in the way of more generous relations between humans and things.

. . . .

In short, automation takes work. Specifically, the system requires human labor in order to function properly (and this can happen at any time of the day or night). Many of the pump’s alerts and alarms signal that “I need you to do something for me,” without regard for the context. When the pump needs to calibrate, it requires that I prick my finger and test my blood glucose with a meter in order to input more accurate data. It is necessary to do this about three or four times per day to make sure that the sensor data is accurate and the system is functioning correctly. People with disabilities such as type 1 diabetes are already burdened with additional work in order to go about their day-to-day lives—for example, tracking blood sugar, monitoring diet, keeping snacks handy, ordering supplies, going to the doctor. A system that unnecessarily adds to that burden while also diminishing one’s quality of life due to sleep deprivation is poorly designed, as well as unjust and, ultimately, dehumanizing.

. . . .

The next day was when I posted about “abusive technologies.” This post prompted an exchange about theorist Lauren Berlant’s “cruel optimism,” described as a relation or attachment in which “something you desire is actually an obstacle to your flourishing.” 

. . . .

There are many possible explanations for the frequent calibrations, but even the company does not have a clear understanding of why I am experiencing them. For example, with algorithmic systems, it has been widely demonstrated that even the engineers of these systems do not understand exactly how they make decisions. One possible explanation is that my blood sugar data may not fit with the patterns in the algorithm’s training data. In other words, I am an outlier. 
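One way to picture what “being an outlier” means in data terms is a distance check against a training distribution. The sketch below is hypothetical: the readings, the threshold, and the `is_outlier` helper are invented for illustration and say nothing about how the device actually decides to demand calibration.

```python
# Hypothetical sketch: a reading is an "outlier" when it sits far from the
# distribution a model was trained on. All numbers are invented.

from statistics import mean, stdev

training_glucose = [95, 110, 102, 130, 88, 115, 105, 122, 98, 140]  # made-up training data
mu, sigma = mean(training_glucose), stdev(training_glucose)

def is_outlier(reading, threshold=3.0):
    """Flag readings more than `threshold` standard deviations from the training mean."""
    z = abs(reading - mu) / sigma
    return z > threshold

print(is_outlier(250))  # True: a pattern unlike the training data
print(is_outlier(105))  # False: well within the training distribution
```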

. . . .

In the medical field, the term “alert fatigue” is used to describe how “busy workers (in the case of health care, clinicians) become desensitized to safety alerts, and consequently ignore, or fail to respond appropriately to, such warnings.”

. . . .

And doctors and nurses are not the only professionals to be constantly bombarded and overwhelmed with alerts; as part of our so-called “digital transformation,” nearly every industry will be dominated by such systems in the not-so-distant future. The most oppressed, contingent, and vulnerable workers are likely to have even less agency in resisting these systems, which will be used to monitor, manage, and control everything from their schedules to their rates of compensation. As such, alerts and alarms are the lingua franca of human-machine communication.

. . . .

Sensors and humans make strange bedfellows indeed. I’ve learned to dismiss the alerts while I’m sleeping (without paying attention to whether they indicate a life-threatening scenario, such as extreme low blood sugar). I’ve also started to turn off the sensors before going to bed (around day four of use) or in the middle of the night (as soon as I realize that the device is misbehaving).

. . . .

Ultimately, I’ve come to believe that I am even “sleeping like a sensor” (that is, in shorter stretches that seem to mimic the device’s calibration patterns). Thanks to this new device, and its new algorithm, I have begun to feel a genuine fear of sleeping.

Link to the rest at Public Books