A Bot Might Have Written This

This content has been archived. It may no longer be accurate or relevant.

From JSTOR Daily:

When a colleague told me that ChatGPT (Generative Pre-trained Transformer) could write a one-page response to a complex work of literature in under fifteen seconds, I rolled my eyes. Impossible! I thought. Even if it can, the paper probably isn’t very good. But my curiosity was piqued. A few days later, I created a ChatGPT account of my own, typed in a prompt that I had recently assigned to my ninth-grade students, and watched, with chagrin, as ChatGPT effortlessly produced a very good essay. In that one moment, I knew that everything about the secondary English classroom, and society in general, was about to change in ways both exciting and terrifying.

. . . .

According to Robert F. Murphy, a senior researcher at the RAND Corporation, “Research on AI [artificial intelligence] got its start in the 1950s with funding primarily from the U.S. Department of Defense. One of the early products of this work was the development of rule-based expert systems (that is, systems that mimic the decisionmaking ability of human experts) to support military decision making and planning.” The goal of AI development from the outset was to create programming that could enhance how human beings go about problem solving. And while forms of AI, such as smartphones, self-driving cars, and chatbots, have become intrinsic to the fabric of our society, most people don’t recognize these now-everyday devices as AI because they do exactly what they were designed to do: seamlessly assist us with daily tasks.

ChatGPT, however, is different from, say, Google Maps helping you navigate your morning commute. There is not much room for that route-oriented AI to think independently and creatively through its task: the user types in a destination, and the AI plots a course to get there. But ChatGPT can do more because its parameters are elastic. It can write songs and poems of great complexity (I asked it to write a villanelle about orange juice, and it created a complex and hilarious one); it can offer insight on existential questions like the meaning of life; it can revise a business letter or offer feedback on a résumé. In many ways, it feels like a personal assistant that is always there to help.

And while that’s revolutionary, it is also problematic. ChatGPT can’t “think” on its own or offer opinions; it can only respond to specific directions. Once the user gives it the go-ahead along with some other details, ChatGPT engages in complex problem solving and executes tough tasks, like writing an essay, in seconds. There is no settled sense of how ChatGPT should be used, since it can be used in any way for almost anything, creating a dangerous situation in which there are no directions or “how-tos.” Creators and users alike are putting the proverbial plane together as they’re flying it.

In “Should Artificial Intelligence Be Regulated?” Amitai Etzioni and Oren Etzioni contend that the more advanced the AI, the more parameters it requires. Monitoring ChatGPT is immensely complicated since the coding that drives its very human-like thinking is, ironically, too massive and intricate for real human thinking to monitor. “The algorithms and datasets behind them will become black boxes that offer us no accountability, traceability, or confidence,” [. . .] “render[ing] an algorithm opaque even to the programmers. Hence, humans will need new, yet-to-be-developed AI oversight programs to understand and keep operational AI systems in line.”

Is it even possible for ChatGPT’s creators to regulate it, or is the AI simply being maintained so that others can use it and indulge in the novelty of its thinking?

If the answer leans away from boundaries, then the implication is that ChatGPT’s overseers may not understand what they’ve unleashed. In an interview with ABC News, OpenAI CEO Sam Altman claims that “any engineer” has the ability to say, “we’re going to disable [ChatGPT] for now.” While that may reassure some people, history has shown what happens when regulation is placed in the hands of tech CEOs instead of in those of a more objective and independent regulatory body. Consider the BP Deepwater Horizon oil rig disaster of 2010. A number of investigations asserted that management routinely placed profits over safety. The rig eventually exploded, eleven workers died, and millions of gallons of crude contaminated the Gulf of Mexico. Once ChatGPT becomes profitable for investors and companies, will administrators and engineers have both the will and the authority to shut it down if the program inflicts harm? What, exactly, would justify such an action? What are the parameters? Who is guarding the proverbial guardians? The answer, as the Etzionis argue, is opaque and ambiguous at best.

Link to the rest at JSTOR Daily

1 thought on “A Bot Might Have Written This”

  1. “While that may reassure some people, history has shown what happens when regulation is placed in the hands of tech CEOs instead of in those of a more objective and independent regulatory body.”

    Consider what happened in the 2015 Gold King Mine waste water spill (https://en.wikipedia.org/wiki/2015_Gold_King_Mine_waste_water_spill). I’m not impressed with “objective” and “independent” regulatory bodies: the EPA didn’t give notice in a timely manner, nobody went to jail, and the EPA has been refusing to pay for damages, citing sovereign immunity.

Comments are closed.