6 Predictions for the Future of Artificial Intelligence in 2020

Since the online profile of the publishing business tends to sink around the holidays, PG tends to go a bit farther afield for TPV during this time. That said, Artificial Intelligence is, at least for PG, a fascinating topic.

From Adweek:

The business world’s enthusiasm for artificial intelligence has been building towards a fever pitch in the past few years, but those feelings could get a bit more complicated in 2020.

. . . .

[C]ompanies and organizations are increasingly pushing tools that commoditize existing predictive and image recognition machine learning, making the tech easier to explain and use for non-coders. Emerging breakthroughs, like the ability to create synthetic data and open-source language processors that require less training than ever, are aiding these efforts.

At the same time, the use of AI for nefarious ends like deepfakes and the mass production of spam is still in its earliest theoretical stages, and troubling reports indicate such dystopia may become more real in 2020.

. . . .

1. Machines will get better at understanding—and generating their own—speech and writing

A high-profile research org called OpenAI grabbed headlines in early 2019 when it proclaimed its latest news-copy generating machine learning software, GPT-2, was too dangerous to publicly release in full. Researchers worried the passably realistic-sounding text generated by GPT-2 would be used for the mass-generation of fake news.

GPT-2 is the most sophisticated example of a new type of language-generation system. It starts with a base program trained on a massive dataset; in GPT-2’s case, more than 8 million web pages, which teaches the system the general mechanics of how language works. That foundational system can then be trained on a relatively smaller, more specific dataset to mimic a certain style for uses like predictive text, chatbots or even creative writing aids.
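
For readers who want a concrete picture of that “pretrain, then specialize” workflow, here is a minimal sketch assuming the openly available Hugging Face transformers library and the publicly released GPT-2 weights; the model name, prompt and sampling settings are illustrative only and are not taken from the article.

```python
# Minimal sketch: load the released GPT-2 base model and sample a continuation.
# Fine-tuning the same model on a smaller, domain-specific corpus is what
# steers its style toward chatbots, predictive text or creative writing aids.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")   # general-purpose base model
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = "In 2020, artificial intelligence will"
input_ids = tokenizer.encode(prompt, return_tensors="pt")

output = model.generate(
    input_ids,
    max_length=60,                        # prompt plus continuation, in tokens
    do_sample=True,                       # sample instead of always taking the top word
    top_k=50,                             # restrict sampling to the 50 likeliest tokens
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```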

OpenAI ended up publishing the full version of the model in November. It called attention to the exciting—if sometimes unsettling—potential of a growing trend in a subfield of AI called natural language processing, the ability to parse and produce natural-sounding human language.

The breakthrough in resources and accessibility is analogous to a milestone in the subfield of computer vision around 2012, one widely credited with spawning the surge in image and facial recognition AI of the last few years. Some researchers think natural language tech is poised for a similar boom in the next year or so.

. . . .

3. AI will get more creative

Along those lines, GANs (generative adversarial networks) have also begun to fuel a burgeoning AI-generated art scene, which has inspired agencies and brands to explore more creative uses for generative machine learning. While this trend remains in its infancy (at least commercially), 2019 saw ad campaigns centered on GAN-generated imagery, the first commercial product designed by generative AI and breakthroughs in the ability of GANs to produce photo-realistic faces and landscapes.
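
To make the mechanism behind that trend a bit more concrete, here is a toy sketch of the two-network setup a GAN uses, assuming PyTorch; the dimensions, architectures and random stand-in data are invented for illustration and are not drawn from any system mentioned above.

```python
# Toy GAN sketch: a generator learns to produce samples that a discriminator
# cannot distinguish from real data. One training step of each is shown.
import torch
import torch.nn as nn

latent_dim, data_dim, batch = 16, 64, 32

# Generator: maps random noise to a fake sample (a real model would emit pixels).
G = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, data_dim))
# Discriminator: scores how "real" a sample looks.
D = nn.Sequential(nn.Linear(data_dim, 128), nn.ReLU(), nn.Linear(128, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
loss_fn = nn.BCEWithLogitsLoss()

real = torch.randn(batch, data_dim)       # stand-in for a batch of real training data
noise = torch.randn(batch, latent_dim)

# Discriminator step: learn to tell real samples from generated ones.
fake = G(noise).detach()
d_loss = loss_fn(D(real), torch.ones(batch, 1)) + loss_fn(D(fake), torch.zeros(batch, 1))
opt_d.zero_grad()
d_loss.backward()
opt_d.step()

# Generator step: learn to produce samples the discriminator accepts as real.
g_loss = loss_fn(D(G(noise)), torch.ones(batch, 1))
opt_g.zero_grad()
g_loss.backward()
opt_g.step()
```

Repeated over millions of images, this adversarial back-and-forth is what yields the photo-realistic faces and landscapes described above.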

. . . .

Experts expect GANs to become more integral as creative aids as companies like Adobe incorporate them into design software and other tools make them easier to use for creatives without a technical background.

. . . .

5. Algorithmic ethics concerns will grow

Ethical questions about machine learning and algorithms were on everyone’s minds in 2019 as a growing movement of cross-disciplinary scholars sought to shift more academic focus to the technology’s social impact. Meanwhile, worrying headlines about prejudices baked into machine learning systems and AI-powered state repression demonstrated the necessity of such work.

This concern has also led to a phenomenon called “ethics washing,” wherein companies make a show of taking ethical issues seriously without making any concrete changes. The most widely reported example was Google’s creation this year of a largely powerless ethics board that was dissolved within a week after a fierce backlash.

Meanwhile, media took note of policymakers taking tangible action towards more ethical AI, proposing and passing new laws to govern the use of tech like facial recognition, and workers at big tech companies collectively pushing back against aspects of their employers’ business they deemed immoral.

That gap between talk and action is on track to widen in 2020 as empty talk fails to satisfy a growing movement looking for real change.

Link to the rest at Adweek

4 thoughts on “6 Predictions for the Future of Artificial Intelligence in 2020”

  1. My biggest concern about AI is the same as my concerns about software in general. I’ve been a software engineer, architect, and development manager most of my adult life. I’ve seen the industry evolve from way back in the day when our dev team got an award for implementing one of the first distributed applications based on TCP/IP. (I have a coffee mug to prove it.) I am not surprised that we are in a full-fledged software security crisis. Losses from ransomware alone are on the 10 billion dollar scale. A similar crisis is looming in AI.

    I am concerned about privacy and surveillance, but these are secondary to a general lack of accountability in the software industry. A similar crisis from boiler explosions and structural collapses led to codes of ethics and certification for civil engineers. These were largely self-regulation efforts that led to eventual incorporation into legal codes of liability. Basically, folks realized that shoddy boilers and poorly constructed buildings, bridges, and dams could damage large numbers of people in ways that far exceeded the putative economic value of the project.

    The result is that there are registered engineers who are held accountable for the consequences of civil engineering projects. There is an entire mechanism for determining if procedures were followed to avoid endangering the larger community. The mechanism is not perfect, but individuals are fined and do jail time if they flagrantly disregard best practices determined by their peers. The result is that bridges generally don’t collapse, elevators don’t fall, boilers don’t explode, and airplanes land safely. When a catastrophe occurs, the incident is examined, blame assigned when appropriate, and corrections are made. Like all human institutions, it’s not perfect, but it works most of the time.

    In software, my experience was that security was quietly ignored whenever it might delay a critical release date. A half-baked “Yeah, but that would never happen,” was enough to roll out a project with open sev-one security issues. Management was dying to hear those words, and there were seldom consequences if you were dead wrong.

    In 2018, there were roughly 16,400 new CVEs (known computer security threats), close to 2 per hour, recorded by the Department of Homeland Security. More are expected when the numbers are in for 2019. There are no codes of ethics or best practice guidelines for evaluating failures, and when a catastrophe occurs, it is usually referred to the marketing department, not engineering. The result: technology that is falling in on itself from internal weaknesses and inadequate consideration for security.

    I’ve worked on a few AI projects. I’m retired now, but I try to keep up. I see the same lack of rigor and accountability in addressing issues and failures in AI software, which I believe is likely to become as critical in the future as distributed applications are today. I firmly believe that the industry needs registered software engineers who are held accountable to a code of ethics and safe practices.

    I see some stirring in that direction in the IEEE and ACM, but more will be needed before things get better.

    • A registered software engineer would be similar to a professional civil or electrical engineer. (Professional has a very narrow meaning here.) These people pass a state exam and are registered as Professional Engineers (PE). They have to stamp the drawings before construction can begin.

      The PE ensures the public is safe from threats the bridge or building poses under the laws of nature: load, stress, voltage, etc.

      But an SE would have to ensure the public is safe from the software, and also ensure the software is safe from the public. The task is further complicated by the reliance on more and more compiled and proven routines. Where does the SE get the source code?

      The PE model is a good place to start, but we’ll have to add quite a bit. It ain’t an easy task.

  2. “Ethical” application of technology is in the eye of the beholder.

    My town makes extensive use of automated license plate reader technology. Police cars are outfitted with cameras that feed input into a computer that is constantly scanning images for plate numbers and looking them up to see if they have been reported in some way. Reading the police reports, you can see that an officer who was doing some task was ‘alerted’ to an automated match on a plate, followed up and apprehended someone who had broken the law, often seriously.

    Is this sort of system “Ethical”?

    What might be the difference between matching on a license plate and just the image of a car, or your face, or your entire body? When do you pass ‘finding the bad guys’ and reach ‘keeping track of everyone to control enemies of the state’?

  3. “Meanwhile, media took note of policymakers taking tangible action towards more ethical AI, proposing and passing new laws to govern the use of tech like facial recognition, and workers at big tech companies collectively pushing back against aspects of their employers’ business they deemed immoral.”

    Some policymakers probably really believe they have control.
