Help! My Political Beliefs Were Altered by a Chatbot!


From The Wall Street Journal:

When we ask ChatGPT or another bot to draft a memo, email, or presentation, we think these artificial-intelligence assistants are doing our bidding. A growing body of research shows that they also can change our thinking—without our knowing.

One of the latest studies in this vein, from researchers spread across the globe, found that when subjects were asked to use an AI to help them write an essay, the AI could nudge them to argue either for or against a particular view, depending on the bias of the algorithm. The exercise also measurably shifted the subjects’ opinions on the topic afterward.

“You may not even know that you are being influenced,” says Mor Naaman, a professor in the information science department at Cornell University, and the senior author of the paper. He calls this phenomenon “latent persuasion.”

These studies raise an alarming prospect: As AI makes us more productive, it may also alter our opinions in subtle and unanticipated ways. This influence may be more akin to the way humans sway one another through collaboration and social norms than to the kind of mass-media and social-media influence we’re familiar with.

Researchers who have uncovered this phenomenon believe that the best defense against this new form of psychological influence—indeed, the only one, for now—is making more people aware of it. In the long run, other defenses, such as regulators mandating transparency about how AI algorithms work, and what human biases they mimic, may be helpful.

All of this could lead to a future in which people choose which AIs they use—at work and at home, in the office and in the education of their children—based on which human values are expressed in the responses that AI gives.

And some AIs may have different “personalities”—including political persuasions. If you’re composing an email to your colleagues at the environmental not-for-profit where you work, you might use something called, hypothetically, ProgressiveGPT. Someone else, drafting a missive for their conservative PAC on social media, might use, say, GOPGPT. Still others might mix and match traits and viewpoints in their chosen AIs, which could someday be personalized to convincingly mimic their writing style.

By extension, in the future, companies and other organizations might offer AIs that are purpose-built, from the ground up, for different tasks. Someone in sales might use an AI assistant tuned to be more persuasive—call it SalesGPT. Someone in customer service might use one trained to be extra polite—SupportGPT.

How AIs can change our minds

Looking at previous research adds nuance to the story of latent persuasion. One study from 2021 showed that the AI-powered automatic responses that Google’s Gmail suggests—called “smart replies”—which tend to be quite positive, influence people to communicate more positively in general. A second study found that smart replies, which are used billions of times a day, can influence those who receive such replies to feel the sender is warmer and more cooperative.

Building tools that will allow users to engage with AI to craft emails, marketing material, advertising, presentations, spreadsheets and the like is the express goal of Microsoft and Google, not to mention dozens if not hundreds of startups. On Wednesday, Google announced that its latest large language model, PaLM 2, will be used in 25 products across the company.

. . . .

OpenAI, Google and Microsoft, which partners with OpenAI, have all been eager to highlight their work on responsible AI, which includes examining possible harms of AI and addressing them. Sarah Bird, a leader on Microsoft’s responsible-AI team, recently told me that experimenting in public and rapidly responding to any issues that arise in its AIs is a key strategy for the company.

The team at OpenAI has written that the company is “committed to robustly addressing this issue [bias] and being transparent about both our intentions and our progress.” OpenAI has also published a portion of its guidelines for how its systems should handle political and cultural topics. They include the mandate that its algorithms should not affiliate with one side or another when generating text on a “culture war” topic or judge either side as good or bad.

Jigsaw, a unit within Google, advises and builds tools for people inside the company who work on large language models—the technology that powers today’s chat-based AIs—says Lucy Vasserman, head of engineering and product at Jigsaw. When I asked her about the possibility of latent persuasion, she said that such research shows how important it is for Jigsaw to study and understand how interacting with AI affects people.

“It’s not obvious when we create something new how people will interact with it, and how it will affect them,” she adds.

“Compared to research about recommendation systems and filter bubbles and rabbit holes on social media, whether due to AI or not, what is interesting here is the subtlety,” says Dr. Naaman, one of the researchers who uncovered latent persuasion.

In his research, the topic on which subjects were nudged to change their minds was whether social media is good for society.

Link to the rest at The Wall Street Journal

2 thoughts on “Help! My Political Beliefs Were Altered by a Chatbot!”

  1. In other words, if you outsource your writing, the result may not be what you would have written. This is unsurprising. If you outsource your writing and don’t carefully review the product before sending it out in your name, you deserve whatever you get. I eagerly await the news stories about companies doing this with text that turns out, much to their surprise, to be legally binding. These stories will inevitably happen, and will be hilarious.

    • “Never underestimate the power of human stupidity.”

      If you can conceive of a failure mode, no matter how convoluted, somebody somewhere will trigger it. They’ll trigger failure modes you can’t even conceive of. Fools are infinitely creative that way. 🙁
