Artificial intelligence challenges what it means to be creative


From Science News:

When British artist Harold Cohen met his first computer in 1968, he wondered if the machine might help solve a mystery that had long puzzled him: How can we look at a drawing, a few little scribbles, and see a face? Five years later, he devised a robotic artist called AARON to explore this idea. He equipped it with basic rules for painting and for how body parts are represented in portraiture — and then set it loose making art.

Not far behind was the composer David Cope, who coined the phrase “musical intelligence” to describe his experiments with artificial intelligence–powered composition. Cope once told me that as early as the 1960s, it seemed to him “perfectly logical to do creative things with algorithms” rather than to painstakingly draw by hand every word of a story, note of a musical composition or brush stroke of a painting. He initially tinkered with algorithms on paper, then in 1981 moved to computers to help solve a case of composer’s block.

Cohen and Cope were among a handful of eccentrics pushing computers to go against their nature as cold, calculating things. The still-nascent field of AI had its focus set squarely on solid concepts like reasoning and planning, or on tasks like playing chess and checkers or solving mathematical problems. Most AI researchers balked at the notion of creative machines.

Slowly, however, as Cohen and Cope cranked out a stream of academic papers and books about their work, a field emerged around them: computational creativity. It included the study and development of autonomous creative systems, interactive tools that support human creativity and mathematical approaches to modeling human creativity. In the late 1990s, computational creativity became a formalized area of study with a growing cohort of researchers and eventually its own journal and annual event.

. . . .

Soon enough — thanks to new techniques rooted in machine learning and artificial neural networks, in which connected computing nodes attempt to mirror the workings of the brain — creative AIs could absorb and internalize real-world data and identify patterns and rules that they could apply to their creations.

Computer scientist Simon Colton, then at Imperial College London and now at Queen Mary University of London and Monash University in Melbourne, Australia, spent much of the 2000s building the Painting Fool. The computer program analyzed the text of news articles and other written works to determine the sentiment and extract keywords. It then combined that analysis with an automated search of the photography website Flickr to help it generate painterly collages in the mood of the original article. Later the Painting Fool learned to paint portraits in real time of people it met through an attached camera, again applying its “mood” to the style of the portrait (or in some cases refusing to paint anything because it was in a bad mood).
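To make the shape of that pipeline concrete, here is a minimal, purely illustrative sketch in Python of a mood-and-keyword workflow like the one described above. It is not the Painting Fool’s actual code: the tiny sentiment lexicon, the frequency-based keyword picker, and the compose_collage() placeholder (standing in for the real Flickr search and collage generation) are all assumptions made for the example.

```python
# Illustrative sketch only (not the Painting Fool): estimate a "mood" from text,
# pull keywords, and hand both to a placeholder collage-planning step.
import re
from collections import Counter

# Toy sentiment lexicon -- a real system would use a proper sentiment model.
POSITIVE = {"joy", "hope", "peace", "celebrate", "triumph"}
NEGATIVE = {"war", "loss", "fear", "crisis", "death"}
STOPWORDS = {"the", "a", "an", "and", "of", "to", "in", "is", "it", "that", "for"}


def estimate_mood(text: str) -> float:
    """Crude lexicon-based sentiment score, clamped to [-1, 1]."""
    words = re.findall(r"[a-z']+", text.lower())
    raw = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return max(-1.0, min(1.0, raw / max(len(words), 1) * 10))


def extract_keywords(text: str, k: int = 5) -> list:
    """Most frequent non-stopwords stand in for real keyword extraction."""
    words = [w for w in re.findall(r"[a-z']+", text.lower())
             if w not in STOPWORDS and len(w) > 3]
    return [w for w, _ in Counter(words).most_common(k)]


def compose_collage(article_text: str) -> dict:
    """Map mood and keywords to an image query and a painting 'style'.

    A real system would query an image source (e.g., Flickr's API) with the
    keywords and assemble a collage; here we only return the plan.
    """
    mood = estimate_mood(article_text)
    keywords = extract_keywords(article_text)
    style = "bright, loose brushwork" if mood >= 0 else "muted, heavy strokes"
    return {"image_query": keywords, "style": style, "mood": mood}


if __name__ == "__main__":
    print(compose_collage("A crisis of fear and war dominates the news today."))
```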

. . . .

During this era, Colton says, AIs began to look like creative artists in their own right — incorporating elements of creativity such as intentionality, skill, appreciation and imagination. But what followed was a focus on mimicry, along with controversy over what it means to be creative.

New techniques that excelled at classifying data to high degrees of precision through repeated analysis helped AI master existing creative styles. AI could now create works like those of classical composers, famous painters, novelists and more.

One AI-authored painting modeled on thousands of portraits painted between the 14th and 20th centuries sold for $432,500 at auction. In another case, study participants struggled to differentiate the musical phrases of Johann Sebastian Bach from those created by a computer program called Kulitta that had been trained on Bach’s compositions. Even IBM got in on the fun, tasking its Watson AI system with analyzing 9,000 recipes to devise its own cuisine ideas.

But many in the field, as well as onlookers, wondered if these AIs really showed creativity. Though sophisticated in their mimicry, these creative AIs seemed incapable of true innovation because they lacked the capacity to incorporate new influences from their environment. Colton and a colleague described them as requiring “much human intervention, supervision, and highly technical knowledge” in producing creative results. Overall, as composer and computer music researcher Palle Dahlstedt puts it, these AIs converged toward the mean, creating something typical of what is already out there, whereas creativity is supposed to diverge away from the typical.

. . . .

True creativity is a quest for originality. It is a recombination of disparate ideas in new ways. It is unexpected solutions. It might be music or painting or dance, but also the flash of inspiration that helps lead to advances on the order of light bulbs and airplanes and the periodic table. In the view of many in the computational creativity field, it is not yet attainable by machines.

In just the past few years, creative AIs have expanded into style invention — into authorship that is individualized rather than imitative and that projects meaning and intentionality, even if none exists. For Colton, this element of intentionality — a focus on the process, more so than the final output — is key to achieving creativity. But he wonders whether meaning and authenticity are also essential, as the same poem could lead to vastly different interpretations if the reader knows it was written by a man versus a woman versus a machine.

Link to the rest at Science News and thanks to F. for the tip.

PG suggests that it may be difficult for many individuals to distinguish between “true creativity” and derivative works based on prior creative projects.

14 thoughts on “Artificial intelligence challenges what it means to be creative”

  1. There is a very good documentary on either YouTube or the Curiosity Network on AI beating the world champions in Go. I think it documents 2018 events.

    After the champion was defeated four games to one, he commented on the truly creative moves the AI used that he had not seen before from humans. He said he had a lot to think about, and a lot to learn from the AI.

    Color commentators told us the AI had just made a huge mistake with a particular move in one game. Real-time comments from the AI team said they didn’t understand it. That mistake turned out to be the foundation of the winning series of moves.

    I know next to nothing about Go, but found it a very interesting show.

      • Great fighters don’t fear the worst fighters; they fear overconfidence. Because that’s the thing that does in the great fighters most often…

        • Depends on their traits. Brute force has its uses, as Deep Thought proved.
          Hopefully we won’t see AI fighters going against humans too soon.

          • Based on politics at present (specific example: overconfidence did in the loser of each of the last two US presidential elections), that can’t possibly be worse than humans against humans.

            And notice that one’s particular policy viewpoint doesn’t matter to the validity or soundness of that statement…

            • Things are never so bad they can’t get worse, even without overconfidence.

              Overconfidence does lead to strategic overreach, which blinds the true believers to the likelihood that the “other guys” won’t roll over and play dead.

              For example, as of last week people of some relevance are seriously considering the form an actual new US Civil War might take. And not writers worldbuilding:

              https://www.msn.com/en-us/news/world/targeted-assassinations-coming-if-civil-war-breaks-out-adam-kinzinger/ar-AATKbqK

              Not unavoidable but a shadow war as described in the link is certainly possible. Not Napoleonic-style armies meeting in the field but more along the lines of the Irish “Na Trioblóidí”, two camps playing whack-a-mole with each other, say with snipers taking out each other’s evangelizers.

              Of course, other things might happen before it comes to that.

              Even properly evaluating the local opposition isn’t enough. There are still external factors and Black Swans that crop up regularly: a Carrington Event, an asteroid impact, multiple simultaneous regional wars (say, Russia-Ukraine, China-Taiwan, Venezuela-Colombia, India-Pakistan, etc.), or another, deadlier pandemic.

              It is a sign of the times that for every positive development (mRNA vaccines that might cure some cancers, a sign of an emerging HIV cure, the Polaris Dawn Project, safe modular fission reactors, etc.) there are two or more near-term negative developments. Too many to expect all can be avoided until the asteroid hits, especially if tunnel vision blinds the players to the universe outside DC and NYC.

              • Many people have firm beliefs in their linear extension of the present and past into the future. But they neglect the fact that they have huge gaps in their grasp of both past and future.

                A classic example is the school board controversy. It was propelled in part by the kids staying at home, the parents working from home, and the parents watching what the kids were being told by the teachers.

                The carefully contrived Seldon Plan undone by a lab accident. All the firm predictions from before February 2020 were blown away.

                How many times over the last fifty years have we heard that the Democrats are dead, the Republicans are dead, history is over, capitalism is supreme, socialism is supreme, and the younger generation really is different? We now regularly hear how AI will soon rule over all.

                Kinzinger and Cheney are running out the clock, and we can find some professor somewhere to say just about anything.

                Some folks have certainly been right, but most get fifteen minutes of coverage for their latest pronouncement, and then fade into the background.

  2. In the biggest corporations, the top execs have long been on a quest to find a way to dispose of the pesky meatbags that keep insisting on getting paid in proportion to the market value of their output. The quest for automated creativity isn’t limited to publishing or the arts.

    And of course, ivory-tower academics keep trying to achieve the same in a quest to prove there is no such thing as human genius.

    So far the Roman view remains more likely than the process-obsessed theories. 😀

  3. In snarkily symbolic terms:

    f(creative process) ≢ g(creative product)

    (the input and output spaces of the function of “creative process” are not congruent with the input and output spaces of the function of “creative product,” and the latter does not operate in a sufficiently comparable fashion to the former to just add consideration of additional constants and variables, but is instead a distinct function)

    And remember that f is considered only as an excuse when g(i) ≡ g(j) — that is, there is a sufficient similarity between product i and product j that the later one might infringe the earlier one.

    tl;dr IP law concerns itself with the “creative thing” and screws up the “creative process” every time it considers it (because lawyers).
