This artist is dominating AI-generated art. And he’s not happy about it.


From MIT Technology Review:

Those cool AI-generated images you’ve seen across the internet? There’s a good chance they are based on the works of Greg Rutkowski.

Rutkowski is a Polish digital artist who uses classical painting styles to create dreamy fantasy landscapes. He has made illustrations for games such as Sony’s Horizon Forbidden West, Ubisoft’s Anno, Dungeons & Dragons, and Magic: The Gathering. And he’s become a sudden hit in the new world of text-to-image AI generation.

His distinctive style is now one of the most commonly used prompts in the new open-source AI art generator Stable Diffusion, which was launched late last month. The tool, along with other popular image-generation AI models, allows anyone to create impressive images based on text prompts.

For example, type in “Wizard with sword and a glowing orb of magic fire fights a fierce dragon Greg Rutkowski,” and the system will produce something that looks not a million miles away from works in Rutkowski’s style.

But these open-source programs are built by scraping images from the internet, often without permission and proper attribution to artists. As a result, they are raising tricky questions about ethics and copyright. And artists like Rutkowski have had enough.

According to the website Lexica, which tracks over 10 million images and prompts generated by Stable Diffusion, Rutkowski’s name has been used as a prompt around 93,000 times. Some of the world’s most famous artists, such as Michelangelo, Pablo Picasso, and Leonardo da Vinci, brought up around 2,000 prompts each or less. Rutkowski’s name also features as a prompt thousands of times in the Discord of another text-to-image generator, Midjourney.

Rutkowski was initially surprised but thought it might be a good way to reach new audiences. Then he tried searching for his name to see if a piece he had worked on had been published. The online search brought back work that had his name attached to it but wasn’t his.

“It’s been just a month. What about in a year? I probably won’t be able to find my work out there because [the internet] will be flooded with AI art,” Rutkowski says. “That’s concerning.”

Stability.AI, the company that built Stable Diffusion, trained the model on the LAION-5B data set, which was compiled by the German nonprofit LAION. LAION put the data set together and narrowed it down by filtering out watermarked images and those that were not aesthetic, such as images of logos, says Andy Baio, a technologist and writer who downloaded and analyzed some of Stable Diffusion’s data. Baio analyzed 12 million of the 600 million images used to train the model and found that a large chunk of them come from third-party websites such as Pinterest and art shopping sites such as Fine Art America.
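As a rough illustration of that kind of filtering, here is a hedged sketch of dropping watermarked and low-aesthetic rows from a metadata shard; the file name and the column names (pwatermark, aesthetic) are assumptions for illustration, not a confirmed schema.

```python
# Hedged sketch of dataset filtering like that described above.
# File name, column names, and thresholds are assumptions, not a real schema.
import pandas as pd

meta = pd.read_parquet("laion_shard_0000.parquet")   # hypothetical local shard

filtered = meta[
    (meta["pwatermark"] < 0.8)      # drop rows likely to contain a watermark
    & (meta["aesthetic"] >= 5.0)    # keep rows scored as reasonably "aesthetic"
]

print(f"kept {len(filtered):,} of {len(meta):,} image-text pairs")
```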

Many of Rutkowski’s artworks have been scraped from ArtStation, a website where lots of artists upload their online portfolios. His popularity as an AI prompt stems from a number of reasons.

First, his fantastical and ethereal style looks very cool. He is also prolific, and many of his illustrations are available online in high enough quality, so there are plenty of examples to choose from. An early text-to-image generator called Disco Diffusion offered Rutkowski as an example prompt.

Rutkowski has also added alt text in English when uploading his work online. These descriptions of the images are useful for people with visual impairments who use screen reader software, and they help search engines rank the images as well. This also makes them easy to scrape, and the AI model knows which images are relevant to prompts.
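To see why alt text matters here, this is an illustrative sketch of how a crawler can turn a gallery page into (image URL, caption) pairs; the URL is hypothetical, and this is not the pipeline any particular dataset actually used.

```python
# Illustration only: alt text becomes the caption half of an
# (image URL, caption) pair. The portfolio URL is hypothetical.
import requests
from bs4 import BeautifulSoup

html = requests.get("https://example-portfolio.test/gallery").text
soup = BeautifulSoup(html, "html.parser")

pairs = [
    (img["src"], img["alt"])
    for img in soup.find_all("img")
    if img.get("alt") and img.get("src")   # no alt text, no caption
]

for url, caption in pairs[:5]:
    print(caption, "->", url)
```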

. . . .

Other artists besides Rutkowski have been surprised by the apparent popularity of their work in text-to-image generators—and some are now fighting back. Karla Ortiz, an illustrator based in San Francisco who found her work in Stable Diffusion’s data set, has been raising awareness about the issues around AI art and copyright.

Artists say they risk losing income as people start using AI-generated images based on copyrighted material for commercial purposes. But it’s also a lot more personal, Ortiz says, arguing that because art is so closely linked to a person, it could raise data protection and privacy problems.

“There is a coalition growing within artist industries to figure out how to tackle or mitigate this,” says Ortiz. The group is in its early days of mobilization, which could involve pushing for new policies or regulation.

Link to the rest at MIT Technology Review and thanks to R. for the tip in the comments.

PG predicts there will be more than one copyright infringement suit filed against various individuals, institutions and companies providing AI services in which an artist’s copyrighted work was used to seed the AI.

In the United States, such suits will almost certainly be filed in the federal court system since copyright is governed by federal law. Some states have laws that would seem to give the state, or those to whom the state has given permission, the exclusive right to make and sell copies of state documents, but trying to protect a creative work from republication under anything other than federal copyright law and the decisions interpreting it is generally regarded as a fool’s errand.

One thing that judges do when faced with a novel question is to draw from similar situations that have occurred previously.

As one crude example, if an individual uses a computer and a word processing program created by third parties to make an exact copy of the text of a copyright-protected book, neither the manufacturer of the computer nor the company that created and sold the word processing program will be liable for copyright infringement. They only provided tools, and the individual used those tools in the manner the individual chose.

AI art programs require a prompt to create images. The OP mentions the use of an artist’s name in an AI prompt as one way of generating an image.

However, that decision is not made by the creators/owners of the AI program, but rather by the user. The creators of the AI program ran a huge number of images by an enormous number of artists through the program’s processor. Is it a violation of copyright law to link an artist’s name to a painting the artist created? PG doesn’t think so.

As a matter of fact, using Mr. Rutkowski’s work without attribution would also be an offense against the creator.

PG doesn’t see the creation of works “inspired by” an artist constituting copyright infringement when they aren’t copies of, and don’t closely resemble, what the artist created. PG doesn’t believe that an artistic style is protected by copyright.

PG’s understanding of the way AI art programs work is that they deconstruct the original copy of each image into its component parts and assign some sort of marker to those parts, so that a prompt for a large building in the style of the British Museum won’t generate an image dominated by a dragon.
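As a hedged illustration of what running a prompt through an AI art generator involves, here is a minimal sketch that assumes the open-source diffusers library and a publicly hosted Stable Diffusion checkpoint; the model name is an assumption, and this is not the web-based generator PG used for the experiments below.

```python
# Minimal sketch, assuming the open-source `diffusers` library and a hosted
# Stable Diffusion checkpoint; the model id below is an assumption.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",   # assumed checkpoint name
    torch_dtype=torch.float16,
).to("cuda")                           # requires a CUDA GPU as written

prompt = "Windsor Castle sailing on the ocean, Greg Rutkowski"
image = pipe(prompt).images[0]         # returns a PIL.Image
image.save("windsor_castle_rutkowski.png")
```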

PG just created a prompt, “Windsor Castle sailing on the ocean” and ran it through an AI art generator. Here’s what he got.

Next, PG modified his prompt to read “Windsor Castle sailing on the ocean Greg Rutkowski” and this is what he got:

For one last experiment PG created another prompt with a different artist, “Windsor Castle sailing on the ocean Andy Warhol” and here’s what showed up.

PG is not an art expert, but he doesn’t think any of his AI illustrations will put either Mr. Rutkowski or Mr. Warhol out of business.

40 thoughts on “This artist is dominating AI-generated art. And he’s not happy about it.”

  1. Traditional copyright and trademark laws don’t recognise style.

    That’s why laws are added for things like Bordeaux wine.

    But I assume this has happened before, where something like fine china is made by a single person and a factory starts manufacturing a similar style?

    While we can be sympathetic, this is in no way an endorsement of making style or taste copyrightable.

  2. “It’s a naive domestic Burgundy, but I think you’ll be amused by its presumption”

    You have something backwards: you may imitate the style of a drink as you wish, but not pretend by name or getup that its origin or contents are other than they are.

    https://www.thedrinksbusiness.com/2022/02/scotch-whisky-association-wins-nine-year-battle-over-use-of-word-glen/
    The nine-year battle to stop German distillery Waldhorn from calling its single malt whisky Glen Buchenbach has finally been concluded.

  3. I dunno, PG. That middle image – “Windsor Castle sailing on the ocean Greg Rutkowski” – is pretty darn good.

    I can see it as the foundation for a fantasy book cover. Especially if you can keep tweaking it and adding bits.

    There is still the selection process – an AI may make billions of images, and someone has to go through them to find the special ones, the ones you wish you could have painted/drawn/created.

    Does that mean the human curator has some value in the process? Because a human can look at the third image and say ‘no’ instantaneously if familiar with Warhol’s work? Is the human contribution to create and tweak the prompts, and then select the ‘best’?

    • I prefer the third. Perfect cover for a fantasy titled THE DROWNED CASTLE. 😉

      (There are several such in the ELDER SCROLLS IV: OBLIVION video game.)

      • I prefer the castle in the third – given the prompt – but the setting in the second (the third’s sailing vessels are not convincing, even for fantasy). But what is weird is the response to the “Windsor Castle” part of the prompt which, particularly in the first two images, seems to have been interpreted as “generic fantasy castle” with no connection to the actual structure.

    • The human contribution from individuals using AI art generators is indeed the creation of the prompts, A.

      I ran a Google search for “ai art prompts” and found all sorts of results.

      That said, using the same prompt on two different ai art generating sites will almost certainly result in images that are much different from each other.

      However, I don’t think even the most detailed and skillfully constructed prompt will be capable of causing the ai generator to create an exact or even close copy of any of the millions of images poured into the ai generator building process.

  4. Have we not learned anything from the past? It doesn’t have to be as good as a human to replace a human if it’s convenient and more affordable. VHS did not beat Betamax because it was a better format.

    It’s easy to use computers, robots, and AI to replace tasks that humans don’t want to do, because they are repetitive, dangerous, or unpleasant. We are approaching the phase where we can use computers to replace tasks that humans *want* to do, enjoy doing, or feel defined by doing. I don’t think it’s unreasonable to pause and ask ourselves ‘how do we want to use this tool?’ Because for every person who says ‘hooray, I can use AI to create the movie I’ve always dreamed of in my mind!’ there are five people who are thinking ‘how can I use AI to create pornography using real people’s bodies and faces and voices’ or ‘how can I use AI to impersonate some grandma’s grandkid on the phone, asking for money.’

    There are licensing laws. I think it’s reasonable to treat images donated to AI to train as a licensing opportunity, for instance. Or maybe it’s time to start embedding data in image formats that make it clear whether the image can or can’t be used for AI training (or meme generation, or any of the other ways people remix art and photos posted on the internet). I’m sure there are other ideas.

    We are a community of creatives. Surely we’ll be the first to say ‘let’s make sure artists get paid’?

    • Sure artists should get paid for what they create.
      But we need to be realistic about how the world actually works.
      You are right that a combination of “AI” art, video, and deep fakes will be used for pornography. And some of it will be perfectly legal; all new techs so far have been co-opted into that industry, because it’s cheaper, not because of morality or legality. (In fact, the traditional corporate exploiters of young women have left because they can’t compete with the online cottage industry of young women monetizing themselves. Cf. OnlyFans, which has recently been in the news.)
      https://en.m.wikipedia.org/wiki/OnlyFans

      That right there is the perfect vehicle for monetizing such tech.
      (In fact, one of the CBS procedurals recently had an episode where a blackmailer ran an online honeypot scheme with a fake AI video. Don’t recall which one, probably one of the FBIs; they’re interchangeable, but good white noise.)

      What we all need to bear in mind is that if it can be done it will be done and somebody will figure out how to make money off it. Legally or not. In fact when it comes to disruptive tech and the business world, the law is of little help for years and decades.

      “AI” hinges on too many subtleties for the law to be of any use.
      My own recommendation for artists is to look into trade dress law and blockchain, paywall everything, put nothing you don’t want monetized without you online, and still expect to be knocked off.

      That too is the 21st century.
      The world has never run as it should and it won’t start now.

      • I don’t accept “people are going to do it anyway so we shouldn’t bother trying to prevent it.” Human beings will do many things that are destructive, and it’s worth saying ‘yes, it can be done, and yes, people will do it, but as a society we are drawing a line in the sand and saying ‘no’ to things that we don’t believe promote the kind of society we want to live in.’ So while I agree that people are going to use tools to do terrible things, I think discussing whether we want to introduce friction into the process is worthwhile. I have yet to see a real discussion on this topic about the behaviors we want to promote. It’s all ‘get used to it’ or ‘this is great for me and heck with anyone it doesn’t work for’ discourse.

        Being realistic is important. But expecting people to care about principles is also within the grasp of human ability, and if we don’t start grappling with those principles now and what they mean for society ethically and legally, when AI is nascent, we’re not going to like what’s going to happen when we blindly blunder into the future.

        • I return to my core position: how do we get there from here?
          Wishes and intentions are great.
          But they do not exist in a vacuum.
          One should aim for the best, yes. Better than to merely acquiesce and go along meekly. But one cannot ignore reality nor go unprepared for the worst.

          With “AI” there simply is no legal framework in place, nor accepted legal theory to fall back on. I see no way to get from here to there any time soon. There is a goal, yes. But beyond that…?

          The djinn is out of the bottle. Humans let it loose and it is no more going away than ebooks or the internet. All that remains is for humans to adjust and learn to live with the new world.

          • My initial suggestion is to create an image file format that embeds licensing info, or even simple permission to be scraped by AI or search engines; a sketch of the idea follows below. We already embed a lot of metadata in existing image file formats, so there’s no reason not to use that metadata for licensing. Bonus: it would give people a way to make it clear they’re not all right with their photographs being manipulated/used as memes/etc.

            The tools to give creators control over their IP are just as possible with modern technology as the tools to use their IP without permission. It would be trivial to tweak interfaces so that this metadata is more prominent. The fact that people haven’t bothered tells you about our priorities as a society, particularly when it comes to artists’ rights.
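            As a rough sketch of that suggestion, the text chunks PNG files already support could carry a license field and a training-permission flag (via the Pillow library); the field names and values below are made up for illustration.

```python
# Sketch of the commenter's suggestion above: carry licensing/permission
# flags inside the image file itself. Field names and values are illustrative.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

img = Image.open("artwork.png")                 # hypothetical original file

meta = PngInfo()
meta.add_text("License", "All rights reserved")
meta.add_text("AITrainingPermitted", "no")

img.save("artwork_tagged.png", pnginfo=meta)

# A scraper that chose to honor the flag could read it back:
tagged = Image.open("artwork_tagged.png")
print(tagged.text.get("AITrainingPermitted"))   # -> "no"
```

            Nothing forces a scraper to read or honor those fields, of course, which is the point about priorities.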

            • Agreed.

              It’s like the crusades against DRM. It is a nuisance that treats paying customers like would-be thieves and does nothing to stop the actual thieves. But the vast majority of users don’t care. For them it is a minor nuisance.

              As to the online images being “viewed” by the “AI”, there already is a way to keep the web crawlers that feed the “AI” from using an online image: the robots.txt file of the website.

              https://pureseo.com/blog/what-is-a-robots-txt-file

              As you said, the tools exist.
              But if the site doesn’t have one…
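              A minimal sketch of that robots.txt check, using Python’s standard library; the site URL is hypothetical, and CCBot is the user agent Common Crawl’s crawler identifies itself with.

```python
# Sketch of how a well-behaved crawler consults robots.txt before fetching.
# The site URL is hypothetical; "CCBot" is Common Crawl's user agent string.
from urllib.robotparser import RobotFileParser

rp = RobotFileParser()
rp.set_url("https://example-artist-site.test/robots.txt")
rp.read()

ok = rp.can_fetch("CCBot", "https://example-artist-site.test/portfolio/dragon.jpg")
print("crawler may fetch:", ok)

# The matching robots.txt entry would look something like:
#   User-agent: CCBot
#   Disallow: /portfolio/
```

              And, of course, it only restrains crawlers that choose to honor the file.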

              • You don’t control the robots.txt on social media sites, though, which is where a large number of artists are sharing (for the excellent reason that they are going where the eyeballs are). Ditto big art archives. You can’t tell people ‘if you don’t like it, don’t use the sites that 99% of the people in the world are using.’ “Don’t participate in society” isn’t a useful response, which is why I’m making my suggestion. If the data’s embedded in the image file, then it’ll travel with the image unless some site manually strips it.

                These things can be done. And they would be, if those of us with a stake in the matter would fight for ourselves and our peers. I’m not willing to sit back and shrug as technology rolls over other artists, even if it’s a windmill I’m tilting at.

    • M, the purpose of feeding images through the ai electronic meat grinder isn’t to create a copy of any one of the original images. It’s to provide raw material for the ai analysis function to start the process of chopping and grinding various elements of the original and running the result into an ai creation engine.

      I claim no great expertise in the nuts and bolts of how ai actually works, but I believe the original images are simply the raw materials used by the ai generator to create something no human artist has ever created or possibly that no human artist is ever likely to prepare.

      A writing analog that popped into my head: a description of a telephone ringing and a woman named Jane answering it by saying, “Hello. Yes, this is Jane,” which could then be used as the basis for creating any number of written versions of a telephone ringing and being answered by someone.

  5. Speaking practically, as a used-as-style-feed artist, I might want instead to insist on a “credit” instead of an (unlikely to be practical) “fee”.

    Look at all of us, here, talking about a Rutkowski, who I hadn’t heard about before and might be (theoretically) interested in commissioning for paid work. I think it would be more important to think of it as advertising, spreading one’s reach, rather than significantly impacting (negatively) potential paid business.

    • This is feasible.
      Just require “AI” art to include as metadata the prompt that created it.
      It won’t cost the software creator or user a thing, and it is trivial to implement (see the sketch below). Steganography is old stuff.

      We actually see this in comic book covers that are based on earlier, famous covers: they list the original artist and add “after so-and-so” as a sign of respect. The most common homage/parody/pastiche target is Action Comics #1, with Detective Comics #27 and Amazing Fantasy #15 right behind. It costs nothing and earns goodwill.
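      As a sketch of the metadata idea, the generating prompt could be stamped into a PNG text chunk with a few lines of Pillow; the field name here is an assumption, not an existing standard.

```python
# Sketch of the suggestion above: stamp the generating prompt into the
# output file's metadata so provenance travels with the image.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

prompt = "Windsor Castle sailing on the ocean, Greg Rutkowski"   # example prompt
img = Image.open("generated.png")              # hypothetical AI-generated output

meta = PngInfo()
meta.add_text("GenerationPrompt", prompt)      # field name is an assumption
img.save("generated_credited.png", pnginfo=meta)

print(Image.open("generated_credited.png").text["GenerationPrompt"])
```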

    • This sounds perilously like the “you don’t need to be paid, because you’re getting exposure” argument that we all (correctly) tell artists of every stripe is bunk, to me.

      • M. – Good point, but all the ai art that I have seen generated looked much, much different than the original accurate image that comes to mind when someone says, “That’s the spitting image of my photo of Windsor Castle.”

        • It doesn’t look like it *now*, PG. AI can’t make believable copies of DaVinci *now*. We’re just at the beginning of this roller coaster. I can see the ‘it now creates results that are indistinguishable from master artists’ from here.

          Don’t think writers are exempt either. AI writing is already here and it’s going to get better, too. Do we really need to get to the point where someone else can feed all a living author’s works into an AI generator and create “a Valdemar novel but with the ending I wanted that Lackey didn’t give me” before we start making plans for how to corral this technology? One author I said this to quipped that he’d be fine with an AI generating HisBrand books, as long as he was the one who controlled it and got the royalties. But how is that going to happen if the existing feeling is ‘whatever, you can’t own your style, lol’?

          It’s short-sighted. I am standing here waving the red flag. Can we at least look toward the future and say ‘we need to prevent *this* type of thing from happening?’

        • See, the people I’m seeing going gung-ho for the AI trend (not the artists who take a blobby, somewhat out-of-whack initial prompt and then build on it, but the folks who are using it as a replacement for commissioned art) want art to spec, of their own what-have-you idea, and they want it in so-and-so’s style so they don’t have to pay for that person’s work. And they don’t just make one image and call it good. They tweak the inputs over and over and over, regenerating an image until it’s as close as they can get it to what they want. They’re being art directors, but instead of getting one or multiple artists (such as in game or movie development) to submit designs, they’re having a computer do it. How long till the studios all let their concept artists go and start using AI instead?

          You’re stuck on this idea of copyrighting style. Styles are uncopyrightable and that’s something that probably shouldn’t be changed, given how many people practice for a certain style (anime! Disney! Frazetta!), but the issue is so much more than that. It’s the people waving AI art in the faces of those who do art for a living and saying ‘Why should I pay for your decades of hard work when I can do this instead?’ It’s the people who spit on the idea of learning to pick up a pencil themselves and call artists all sorts of names for having done so. And a hundred other arguments and insults I’ve seen.

          So no, the answer isn’t easy. Maybe it would be restricting the scrapers to using only art that is outside legal copyright. Block them from living artists altogether. Maybe it’s something else. But just shrugging and saying it’ll happen anyways isn’t right. Expecting it to stay in visual arts and not come for the rest of us is incredibly short-sighted. My inclination to share my hobby art is diminishing rapidly, and I’m watching AI come for my day job (also pretty badly, but it’s on its way). Hiding behind ‘well the law doesn’t say, and it’s so diffuse, so let’s just let it go’ is no way to keep this sort of thing from reaching everywhere in the long run.

          • I don’t disagree with you in principle.

            But…

            One: you can’t legislate human ethics or mores. And even legality is iffy. And as the old BETAMAX case established, a tech or product can’t be banned as long as it has at least one legal use.

            Two: a commercial artist can (easily, in fact) keep their portfolio away from the “AI” engines and the web crawlers. But once they sell it, the end user might want it visible online. First sale doctrine takes over. Their exposure will be proportional to the level of success, and neither law nor tech will prevent it.

            Yet again, I see no way there from here.
            Humans as a whole are neither rational nor ethical. Self-serving, though, is a lot more common than either.

            • They might be able to keep their portfolio away from the AI, but only by not posting at all, which, in the current commercial art market, also means they’re much less likely to find work at all. Visibility online is how many of them come to the attention of everyone from small time commissioners to art directors for studio projects. I’ve lost count of the number of tweet streams I’ve seen from ADs who tell you how they found their employees by scrolling artist’s posts on social media. Sure, the art they do for those companies once hired is often under NDA and therefore not posted, but it does eventually get posted in a place unprotected from scrapers. So they’re screwed anyways. Not get work because they can’t be visible, or have their work cut out from under them without permission anyway.

    • K. – My question is, how are you going to be able to demonstrate that your copyrighted image of a tree is the basis for an ai art creation that includes a tree?

      Even using relatively simple commands results in images that don’t look anything like what is specifically described. In my demo images, I intentionally used the term “Windsor Castle,” but the castles the ai art generators included bear no similarity at all to the actual Windsor Castle or a realistic painting thereof.

  6. It seems to me that if the AI companies are serious about not infringing, they should ban prompts that use the name of a living artist (or one who died within the past few years) as well as any trademarked product. This doesn’t completely compensate for having scraped that artist’s work, but it would make it more difficult for users to “steal” the artist’s style. At least they’d have to come up with their own creative prompts.
    When somebody starts using Disney as a prompt–if they haven’t already, which is likely–and those millions of Disney lawyers find out, the rafters will rattle.
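    A toy sketch of such a prompt filter appears below; the blocklist entries are illustrative, and, as the replies note, simple name matching would be easy to route around.

```python
# Toy sketch of the suggestion above: refuse prompts that name living artists.
# The blocklist and the substring matching are deliberately simplistic.
BLOCKED_NAMES = {"greg rutkowski", "karla ortiz"}   # illustrative entries only

def prompt_allowed(prompt: str) -> bool:
    """Return False if the prompt mentions a blocklisted artist name."""
    lowered = prompt.lower()
    return not any(name in lowered for name in BLOCKED_NAMES)

print(prompt_allowed("Windsor Castle sailing on the ocean"))                  # True
print(prompt_allowed("Windsor Castle sailing on the ocean, Greg Rutkowski"))  # False
```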

    • Oh, I was also thinking this matter would get resolved quickly if Disney were brought in.

      But in a previous thread Felix brought up the practice of “clean room reverse engineering.” In this scenario, if I describe brush strokes, colors, techniques, and subject matter, e.g., “muscular dude with busty babe clinging to his leg,” I may end up with something akin to Frazetta and Vallejo without actually feeding their art “into the machine” as it were.

      In that case, what then? If you’re a creator, what is your adaptation in that scenario? Quality in execution, but what if the end user doesn’t care? Or can’t discern it; there are people who would never appreciate the difference between truffle pasta and Spaghetti-O’s. I’ve seen other creatives suggest they’d use the AI to augment their own skills. Like they may not need to trace images, or can use images that are hard to draw. I don’t know that this has to be doom and gloom, even though it looks like — for the moment — that we’ve entered Vonnegut’s “Player Piano” scenario, where machines produce art.

      Just thinking more and more about the reverse engineering process, I don’t know how readily an artist could protect themselves from AI appropriation if they wanted to opt out.

      Which brings me back to adaptation. Is it too optimistic to consider that once upon a time Mozart and other “instrumental music” artists performed at concert halls, but the new instrumentals appear in movie soundtracks? Two Steps from Hell and Hans Zimmer, for instance. Where once real art appeared on canvas and frescoes, now we see it in other media, e.g., video games and special effects. If AI produces certain types of art, must it necessarily displace actual artists? Auto-Tune didn’t crowd out actual singers.

      In the short term I can see reason for pessimism, but in the long term I tend to bet on humans innovating and adapting.

    • It’s worth noting that one of the engines, Stable Diffusion, actually has “A small cabin on top of a snowy mountain in the style of Disney, artstation” as one of their sample prompts. Apparently they aren’t afraid of the mouse.

      • I’d love to see the results from several different engines of:

        Hat, in this style, 10d/6

        to see how many of them try to grab Tenniel and how many of them try to grab Disney as their inspiration. (Alice’s Adventures in Wonderland, and the Disney version was… was… frankly, a crime against literary adaptations that should have resulted in lifetime suspension of literary license for driveling under the influence.)

  7. They could do that. Possibly even willingly.

    Not sure it would achieve much though. The way the tech works, it encompasses all the aspects of the imagery it learned so it can link style to a dozen or more prompts other than the artist name.

    Sticking with my previous example, one could replace “Frazetta” with “ACE 60’S BURROUGHS” or “MOON MAID cover”.

    Also, style emulation may very well end up being an irritant more than a catastrophe. Even if style should be deemed protected tomorrow, the OP artist would have to prove the scope of damage and identify a direct culprit.

    That last is why it is doubtful even Disney lawyers can block every possible combination of prompts for cartoon characters that might end up with a similar style as the mouse. They don’t even try to stamp out every knock off on the planet with a Mickey image on it. Most are technically copyright violations but the impact on their business is too small to document, much less fight.

    Some things are to be endured for lack of alternative other than dwelling on things that won’t change any time soon. Most of us have more immediate concerns.

    • But what about Mortimer?

      That actually points out the real problem with the Disney position: The Mouse has evolved over time.† That actually reduces the scope of whatever can be uniquely protected to the innovations and not the core, very similar to the various Sherlock Holmes lawsuits.

      † Thereby refuting “intelligent design.”

      • WB/DC relies on trademark, but to be safe they change the character costumes every decade or so. Likewise, they publish a story or more featuring even their obscurest characters to protect the mark.

        Technically their oldest stories might lose copyright protection in a few years, but the trademark will remain, and while the stories will interest connoisseurs, that won’t be of much commercial value. No need for hijinks.

        Not holding my breath for Disney to change their ways, but “maybe the horse will sing.” ;)

  8. I suppose one might ask how any given image can be traced to Rutkowski. If all we see is the image without the input parameters, who knows how it was generated?

  9. An added point:

    Machine Learning (to use its alternate and somewhat more accurate name) is not going away (current generation Microsoft consoles and next generation Intel CPUs and GPUs come with dedicated hardware to facilitate faux AI), and it won’t stop at 2D or even 3D static images. The biggest disruptions to come will be in video. And they will come soon. Short films made with the UNREAL 5 game development engine are all over YouTube. There is even a Matrix video that seamlessly mixes footage of Keanu Reeves with an Unreal digital actor who looks like him.

    The most recent demo to drop is even more impressive because it is generated in real time on PCs or current generation consoles starting at $249, via a competing GAME ENGINE called UNITY.

    https://m.youtube.com/watch?v=eXYUNrgqWUU

    It’s called ENEMIES and the digital actress is about as realistic as it gets.

    Again, on YouTube there is an entire series of short STAR WARS movies produced solely on computer. No humans involved.

    We’ve already seen glimpses of where the tech can go in the various de-aged and posthumous performance scenes in movies and TV but those are million dollar projects by professional CGI houses with motion capture of real people. Unreal and Unity achieve the same or better on consumer hardware costing well under $1000 without human motion capture. Any film school student can now make a 4K CGI movie on barista wages. 😉

    And that is just the beginning. These are today’s off-the-shelf products that folks are just starting to learn how to use.
    Text is no longer the only cottage-industry storytelling medium.

    Things only get wilder from here on.

    • Any film school student can now make a 4K CGI movie on barista wages.

      That’s a nice equalizer. I don’t anticipate anyone having a problem with what UNITY and UNREAL** are doing, because those tools are facilitating art without necessarily appropriating someone else’s work.

      The attention to detail on the character model’s blouse in the Enemies clip is impressive. Texture, patterns, and materials put together seamlessly. Plus there’s the architecture in the setting, a mix of classical and sci-fi. I like watching Grace Kelly / Doris Day-era movies because I love their clothes and the architectural details of their settings. So, the Enemies clip hits all my buttons, even leaving aside the mystery and sense of wonder. The artistic details are the sort of thing I was thinking of when I speculated that AI art would be formidable in the hands of artists / art historians / art buffs who use it to make their own thing. Creating art requires a soul, so AI art will only be a tool of a creator, not a usurper.

      The sticking point with AI is that it’s sold as grabbing other people’s work as part of the workflow. It’s one thing to want to recreate the look of the lost techniques of the “old masters” to create something new. But right now AI-art is presented as a threat to living artists, whose style can be ripped off to make knock-offs. Hopefully that matter will be resolved soon, so that real artists can feel free to create beautiful and wondrous art.

      **I remember this video because I was impressed when the player character at the 10:10 mark set the creature’s fur on fire, and the creature shook it off. Realistic looking. The actress model in Enemies was almost convincing. Her face and hair and skin almost passed for human, but not quite. You can see it’s only a matter of time, though, especially since competition is around to spur progress.

      • Agree on all counts.
        However…
        To date almost all the digital actors in the Unreal and Unity videos and games are not based on actual living or once living humans. However, there’s no reason why this must be so.
        Imagine a time travel or historical drama where the historical characters are digital clones of the real people. And in many cases we do know what they looked like.

        Say, Cleopatra:
        https://www.youtube.com/results?search_query=how+cleopatra+looked+in+real+life+
        (The 7-9 min range.)
        Not much trouble there, but how about a digital RomCom featuring James Dean and Audrey Hepburn?
        A comedy with Chaplin, Harold Lloyd, Fatty Arbuckle, Jerry Lewis?
        A western with John Wayne and Clint Eastwood?
        And what if the project comes from Bollywood? What’s their legal framework on right of publicity?
        Given a good story there’d be big bucks to be made.

        And what about too close near clones of the living?
        There was the whole 2013 affair with THE LAST OF US, where the girl character was clearly based on young Ellen Page, who was not amused and made enough noise that SONY had to change the character. Not all countries recognize right of publicity.

        So there are legal issues to be worked out in regards to digital actors, especially for full length productions. Like if Kennedy is finally replaced and LUCASFILM decides to do Luke and Mara Jade. Do they recast Luke with Sebastian Stan or a digital clone? Licensing young Hamill would likely be cheaper.

        As with all tech there are pluses and minuses to be sorted out.
        And lots of billable hours to be charged.

        • The redheaded Cleopatra in particular was stunning. And the possibility of historical dramas starring digital versions of historical figures is intriguing. I just finished reading a series of historical novels featuring Augustus and Livia and Cleopatra, so it’s jarring to see how they “actually” looked compared to what I imagined based on cold marble.

          But the other options I think are already barred legally in certain jurisdictions, at least when it comes to commercials. It’s easier to use a deceased celebrity’s likeness in a commercial if the product is closely associated with the deceased. One example that strikes me is the Dior commercial with Charlize Theron air kissing Grace Kelly and sharing a dressing room with Marlene Dietrich and Marilyn Monroe. Monroe actually speaks in that one.

          But new movies starring dead actors in a Hollywood production seem less likely. I don’t remember the name of the movie, but apparently John Wayne made a Western as a response to “High Noon,” because he disapproved of it. So it’s known he wouldn’t be in just any Western. Lawyers of living celebrities will be quick to close any loopholes there, I think. But Bollywood? China? The Bruce Lee commercial for Johnnie Walker is a strong indication of how this issue may play out outside of Hollywood.

          I don’t remember the Ellen Page situation, but I am surprised it happened. It seems like they would have just hired her to do the character’s voice. Like how Joker in Mass Effect is deliberately made to look like his voice actor Seth Green. Usually when I hear of a character being modeled on an actor, it’s designed to woo the actor to play the part, like modeling the genie’s personality on Robin Williams for “Aladdin.”

          *Thinks wistfully of Kathleen Kennedy getting the boot.*

          On the digital arts front, the future is shaping up to be “interesting times.”

          • Re: Ellen Page:

            https://www.businessinsider.com/ellen-page-the-last-of-us-2013-6


            “I guess I should be flattered that they ripped off my likeness,” said Page. “But I am actually acting in a video game called Beyond Two Souls, so it was not appreciated.”

            The game Page is voicing is also for Sony. With the famous actress on board, “Beyond: Two Souls” deliberately made its title character look like Page.
            —-

            This was Sony. Ethics don’t concern them any more than their fans’ interests. And at the time they ripped her off, she was working for them at another studio.

            They have been involved in one mess after another for years. The most recent was raising PS5 prices for Christmas, but only in markets where they are strongest and Xbox weakest (while XBOX cut their Series S new generation model $50 to $250). They charge developers that want their games to allow cross-platform play, they demand developers *not* support GAME PASS to be allowed to put their games on PlayStation, and they charge extra for patches to PS5 from PS4. They’ve also been caught feeding lies (and cash?) to Euro regulators to try to block the Activision merger.

            They treat the hired help the same as their customers. And not just Page while in demand.

            Hopefully what goes around will come around.
            XBOX is better, cheaper, and with Cloud gaming a lot of games and all the new ones only need a browser and a controller. This may settle their hash.
