The Ethics of Hiding Your Data From the Machines

From Wired:

I don’t know about you, but every time I figure out a way of sharing less information online, it’s like a personal victory. After all, who have I hurt, advertisers? Oh, boo hoo.

But sharing your information, either willingly or not, is soon going to become a much more difficult moral choice. Companies may have started out hoovering up your personal data so they could deliver that now-iconic shoe ad to you over and over, everywhere you go. And, frankly, you did passively assent to the digital ad ecosystem.

Granted, you did not assent to constant data breaches and identity theft. The bargain is perhaps not worth the use of your personal information to, say, exclude you from a job or housing listing based on your race or gender.

Come to think of it, you also did not assent to companies slicing and dicing your information into a perfectly layered hoagie just waiting to be devoured by propagandists from Russia and China, who then burped out a potentially history-altering counterintelligence operation the likes of which humanity has never seen.

That’s all easy to hate—and maybe even to ban, from a PR or legislative perspective. But that data is now being used to train artificial intelligence, and the insights those future algorithms create could quite literally save lives.

. . . .

I recently met with a company that wants to do a sincerely good thing. They’ve created a sensor that pregnant women can wear, and it measures their contractions. It can reliably predict when women are going into labor, which can help reduce preterm births and C-sections. It can get women into care sooner, which can reduce both maternal and infant mortality.

All of this is an unquestionable good.

And this little device is also collecting a treasure trove of information about pregnancy and labor that is feeding into clinical research that could upend maternal care as we know it. Did you know that the way most obstetricians learn to track a woman’s progress through labor is based on a single study from the 1950s, involving 500 women, all of whom were white?

It’s called the Friedman Curve, and it’s been challenged and refined since then—the American College of Obstetricians and Gynecologists officially replaced it in 2016—but it’s still the basis for a lot of treatment. Worse, it has been and continues to be the basis for a huge number of C-sections, because doctors making decisions based on these outdated numbers believe a woman’s labor has stalled.

So that’s bad.

But updated data on pregnancy and labor is a tricky thing to gather, because no doctor wants to increase the possibility of a baby dying or suffering dangerous or damaging stress in the womb while they wait and see if labor will continue.

Enter our little wearable, which can gather this data efficiently, safely, and way more quickly than existing years-long research efforts. It’s already being used by underserved women who are black or brown, or poor, or both—and that is a good thing. Black women in this country are three times more likely to die in childbirth than white women, and research into women’s health, pregnancy, and labor is woefully inadequate and only slowly increasing.

To save the lives of pregnant women and their babies, researchers and doctors, and yes, startup CEOs and even artificial intelligence algorithms, need data. To cure cancer, or at least offer personalized treatments that have a much higher possibility of saving lives, those same entities will need data.

The artificial intelligence necessary to make predictions, draw conclusions, or unlock the gene responsible for ALS (that last was accomplished by IBM’s AI engine, Watson) requires thousands, maybe tens of thousands of times more data than a targeted ad exchange. In fact, the same is true for self-driving cars, or predicting the effects of climate change, or even for businesses to accurately measure productivity and toilet paper costs.

The need for our data is never again going to diminish. If anything, it’s going to rapidly expand, like an unquenchable maw, a Sarlacc pit of appetite, searching for more information to examine and consume.

In the case of the company I met with, the data collection they’re doing is all good. They want every participant in their longitudinal labor study to opt in, and to be fully informed about what’s going to happen with the data about this most precious and scary and personal time in their lives.

But when I ask what’s going to happen if their company is ever sold, they go a little quiet.

Link to the rest at Wired

From Foreign Policy magazine:

One morning, Ming Ming wakes up for school to the soothing, mechanical voice of his artificially intelligent robot housekeeper. “Ming Ming! It’s March 29, 2028, and a new day has begun!”

This is not a work of science fiction. It is the opening chapter of a new Chinese high school textbook, put out by SenseTime, the world’s largest artificial intelligence start-up, with a valuation of over $4.5 billion, in partnership with a research center at East China Normal University and with middle and high school teachers in Shanghai. Published in April 2018, the textbook is part of the government’s recent push to prepare Chinese youth to help the nation become an AI superpower. According to SenseTime, it is currently being taught in pilot programs in more than 100 schools across the country, in Shanghai, Beijing, Shanxi, Shandong, Guangdong, Jiangsu, and Heilongjiang. SenseTime is training over 900 teachers to implement its curricula.

In July 2017, the government issued an ambitious master plan to lead the world in AI research and deployment by 2030. The road map outlined the steps by which AI will be deployed in areas such as military readiness and city planning, and the government announced that AI courses would be included in all primary and secondary schools. In response, the Chinese Ministry of Education has drafted its own “AI Innovation Action Plan for Colleges and Universities,” calling for 50 world-class AI textbooks, 50 national-level online AI courses, and 50 AI research centers to be established by 2020.

. . . .

One of the most glaring challenges driving China’s education push is job displacement. The venture capitalist and AI expert Kai-Fu Lee believes that 40 percent of jobs globally will be lost to automation in the next 15 to 25 years. In China, according to a report by PwC, the largest net job losses will most likely be in agriculture—although it’s difficult to distinguish those disappearing jobs from the shift away from agriculture that a developing economy hopes for. On the other hand, the demand in China for AI professionals, such as machine learning engineers and data scientists, may surge to 5 million in the next few years, according to a 2017 report from the Tencent Research Institute. China may not be able to solve its talent shortage in time. “Just as it was impossible for us to predict that an Uber driver or a Didi driver would be a job 10 years ago, to predict what jobs exist in eight to 10 years is nearly impossible,” said Lee.

Faced with the uncertainty of the job market, the question that educators must consider, said Yong Zhao, a professor specializing in Chinese education at the University of Kansas, is how to educate young people for jobs that don’t exist. Lee said that schools must prioritize teaching 21st-century learning skills commonly known among global educators as the “Three C’s”: creativity, collaboration, and critical thinking.

Link to the rest at Foreign Policy

 

4 thoughts on “The Ethics of Hiding Your Data From the Machines”

  1. The ethics here are the same as they’ve always been: trust must be earned before it is given, and tech has not earned my trust.

  2. > involving 500 women, all of whom were white?

    Call out the posse! Ten Minutes’ Hate! Because non-white women have completely different lady bits… er, what?

    No use reading past that, the author is barking from the madhouse.

  3. > Did you know that the way most obstetricians learn to track a woman’s progress through labor is based on a single study from the 1950s, involving 500 women, all of whom were white?

    Does this stuff work differently for Chinese? How about Lapps?

Comments are closed.