FTC calls out consumer protection and competition intersections in Copyright Office AI proceeding

From JD Supra:

The U.S. Federal Trade Commission (FTC) staked out its role in policing the potential competition and consumer protection implications of generative AI technologies’ use of copyrighted materials in comments submitted in the U.S. Copyright Office’s proceeding on AI and copyright. The proceeding seeks information on the use of copyrighted works to train AI models, levels of transparency and disclosure needed regarding copyrighted works, and the legal status of AI-generated outputs, among other things. In its comments, the FTC reiterated its expertise addressing competition and consumer protection issues involving AI, identified copyright issues related to generative AI that implicate competition and consumer protection policy, and introduced testimony from an October 2023 FTC roundtable with creative professionals. The FTC also promised vigorous efforts to protect consumers in the rapidly evolving AI marketplace.

The FTC’s AI Enforcement

The FTC’s mandate – under the FTC Act and other statutes – to promote competition and protect consumers gives it the power to address unfair or deceptive acts or practices and unfair methods of competition. The FTC has characterized AI as the latest in a series of new technologies that pose novel and important challenges for consumers, workers, and businesses, which falls within the purview of the FTC’s “economy-wide mission.” According to the FTC, as companies deploy AI-enabled systems across a wide range of industries and people incorporate AI-powered products and services into their daily lives, the potential for harm to consumers increases.

The FTC has experience applying its existing legal authority to address alleged unlawful practices and unfair competition involving AI. Prior actions and guidance have addressed a variety of consumer protection allegations, such as algorithms that rely on consumers’ personal information, algorithm-driven recommendations communicating deceptive claims, and algorithms making biased decisions. The FTC has also called attention to potential advertising concerns relating to AI, and has highlighted the risk of fraud stemming from the use of chatbots, deepfakes, and voice clones.

Consumer Protection, Competition, and Copyright

In its comments to the Copyright Office, the FTC highlights important intersections across consumer protection, competition law and policy, and copyright law and policy. The Commission’s comments note three areas of interest where the FTC might seek to address unfair or deceptive practices to ensure a competitive marketplace.

First, the FTC notes that decisions about where to draw the line between human creation and AI-generated content may cause harm to both creators and consumers. Creators may be harmed by their work being used to train an AI tool without their consent. Consumers, on the other hand, may be harmed if there is a lack of transparency about AI authorship.

Second, the FTC argues that questions about how to address liability for AI-generated content can implicate consumer protection and competition policy. These questions include what liability principles should apply, assigning and apportioning liability, treatment of pirated content, and the role of disclosures, among other issues. As policymakers seek answers to these questions, there may be overlap and potentially some conflict with consumer protection or competition interests. For example, as the FTC notes, “the use of pirated or misuse of copyrighted materials could be an unfair practice or unfair method of competition under Section 5 of the FTC Act.”

Third, the FTC argues that consumer protection and competition principles may call for policy protections that “go beyond the scope of rights and the extent of liability under the copyright laws.” There may be times when consumer protection or competition policy and copyright policy are aligned, as in the example above where misuse of copyrighted materials would likely violate both the FTC Act and copyright law. The FTC raises the possibility, however, that there may also be circumstances where companies run afoul of Section 5 with their AI-generated content even if that content would not violate copyright law.

Impact of Generative AI on Creative Economy

The FTC also seeks to highlight impacts on content creators. Along with its written remarks, the FTC submitted the transcript of its October 4, 2023, “Creative Economy and Generative AI” roundtable, which featured musicians, authors, actors, artists, software developers, and other creative professionals discussing the impact of generative AI on their work. The FTC transcript noted views shared by participants on topics such as: (1) works being used to train generative AI without participants’ consent; (2) insufficient or ineffective mechanisms for obtaining consent, including reliance on opt-out approaches; (3) lack of transparency and disclosure about training data and authorship of AI-generated content; (4) the ample use cases of AI for creative professionals but the need for better guardrails to protect creators; and (5) the putatively great power imbalance between the creators and the generative AI companies.

Link to the rest at JD Supra

PG is anything but an expert on the FTC’s jurisdiction, but color him skeptical about whether the agency has jurisdiction over Generative AI programs.

PG did some quick and dirty research and found the FTC has about 1000 people on its payroll, about two-thirds of whom are attorneys. As PG has mentioned on multiple occasions, he’s a retired attorney and has high regard for the general intelligence of his fellow attorneys.

However, with the exception of patent attorneys, who must have an undergraduate degree in science or engineering, a large portion of the bar members anywhere PG knows about hold undergraduate degrees outside science, engineering, and math. Indeed, law school is one of the few options for a humanities major who would like to be able to support a family in a reasonably comfortable manner.

In short, the people working at the FTC undoubtedly have a lot of opinions about artificial intelligence, but little ability to understand how it works or what dangers are real and what dangers are imaginary. The fact that judges are all attorneys gives PG another reason to worry about involving AI research in the US legal system.

Here’s a statement from the FTC website about the agency’s mission:

The FTC enforces federal consumer protection laws that prevent fraud, deception and unfair business practices. The Commission also enforces federal antitrust laws that prohibit anticompetitive mergers and other business practices that could lead to higher prices, fewer choices, or less innovation.

Whether combating telemarketing fraud, Internet scams or price-fixing schemes, the FTC’s mission is to protect consumers and promote competition.

Perhaps PG missed something, but telemarketing fraud, internet scams or price-fixing schemes don’t sound like the main features of large language models.

8 thoughts on “FTC calls out consumer protection and competition intersections in Copyright Office AI proceeding”

  1. Smacks of desperation.
    Pure Grandstanding.

    The FTC is in chaos, with commissioners and lawyers quitting by the dozen and Congress cutting the 2024 budget by 13%. From THE HILL:

    Khan has been trying to score a big win against a big tech company, any big tech company, since 2021. Whiffed against FACEBOOK, struck out against Microsoft (three misses), and was going after Google, but DOJ called dibs.

    So, with all the “AI” angst in the media, announcing an attack on that tech was predictable.
    Just as predictable as the two years of depositions and 1M documents that resulted in the *handpicked*, Biden-appointed judge rejecting every argument floated in a week. “Perhaps bad for Sony. But good for Call of Duty gamers and future gamers.”

    And that was a straight all-cash buyout of a single company involving nothing but cash flows.

    And now they intend to do what? Preempt Congress? Depose and discover every tech company and university, big and small? With a reduced staff (granted, of true believers, because everybody else quit) and a reduced budget? In one year? (Come 2025 she’ll be gone regardless of which party wins, seven-year appointment notwithstanding.)

    As PG said, they won’t even know what questions to ask, how to judge the answers, or what to do after they get them.

    Anybody remember “internet time” from 25 years ago?
    11 months ago, OpenAI let loose its chatbot for the masses to play with.
    Not even one year later, and everybody who is anybody (and many who aren’t) is *making money* off LLMs.
    Adobe, Corel, Google, Facebook, Grammarly, Stable Diffusion, and a hundred startups have been playing with the tech.
    Playtime’s over.
    From the Verge:

    “Last week OpenAI announced its new GPT platform to let anyone create their own version of ChatGPT, and now Microsoft is following with Copilot Studio: a new no-code solution that lets businesses create a custom copilot or integrate a custom ChatGPT AI chatbot.

    “Microsoft Copilot Studio is designed primarily to extend Microsoft 365 Copilot, the paid service that Microsoft launched earlier this month. Businesses can now customize the Copilot in Microsoft 365 to include datasets, automation flows, and even custom copilots that aren’t part of the Microsoft Graph that powers Microsoft 365 Copilot.”

    (Link below)


    Every bank, every accounting office, every hospital, doctor’s office, multinational, every *small* business running Microsoft Dynamics will be cooking up their own custom Automated Data Analysis app running off *their* in-house data. Even if all they do is bookkeeping and customer/patient data spreadsheets “on steroids,” that alone will generate hundreds of billions for thousands of companies.

    Remember when word processors and spreadsheets got macros? Now those macros will have LLM tech inside. And the FTC (or Congress, for that matter) thinks it can regulate a million custom apps?

    Truly idiotic.

    • Last one:


      “This year’s Microsoft Ignite developer conference might as well be called AIgnite, with over half of the almost 600 sessions featuring artificial intelligence in some shape or form.

      Generative AI, in particular, is at the heart of many of the new product announcements Microsoft is making at the event, including new AI capabilities for wrangling large language models (LLMs) in Azure, new additions to its Copilot range of generative AI assistants, new hardware, and a new tool to help developers deploy small language models (SLMs) too.

      Here’s some of the top AI news CIOs will want to take away from Microsoft Ignite 2023.”

      Note: some. From *one* company.

      “Of course, Microsoft product naming could never be so simple, and there won’t simply be one Copilot. There’s also Copilot in Dynamics 365, Copilot for Microsoft 365, Copilot in GitHub, Copilot in Viva, and now: Copilot for Service and Copilot for Sales.

      Copilot for Service is intended to help agents in contact centers, ingesting customer information and knowledgebase articles and integrating with Teams, Outlook, and third-party systems, including Salesforce, ServiceNow, and Zendesk.

      Confusingly, Microsoft already offers a Sales Copilot; Copilot for Sales is a different product that includes a license for Copilot for Microsoft 365, and helps sales staff prepare for customer meetings by creating custom briefing documents.

      It’s not just Microsoft 365 users that get a copilot: Admins will have one too. A forthcoming update will see the addition of Copilot to the Edge for Business management interface, helping admins with recommended policies and extensions for the workplace browser. Other Microsoft 365 apps are already covered, including SharePoint and Teams. There’s also a new adoption dashboard for Microsoft Viva to help track how the introduction of Copilot features in Microsoft 365 applications is changing the way users work.

      Starting next year Teams, for example, will be able to take live meeting transcripts, summarize them as notes, and organize those notes on a whiteboard, suggesting more ideas to add to the whiteboard as the meeting progresses. Meeting notes will also become interactive documents, enabling participants to ask for more detailed information on a particular point after the meeting has ended. Organizations concerned about the risks of maintaining such written records will be able to turn the feature off by default or per meeting.

      The additions to Copilot for Microsoft 365 won’t stop there, though, as Microsoft is opening it up to plugins and connectors from third-party vendors, enabling it to source and cite data from Jira, Trello, Confluence, Freshworks, and others.”

      One company.
      One year.

      Yup, change is coming.
      This isn’t the METAVERSE (not sorry, Zuckerberg), a solution in search of a problem, but a solution to many existing problems. And one just starting to bite: demographic downsizing, the drop from a peak of 79M boomers to 71M millennials.

      Automation of both blue and white collar work has to make up that 10% gap.
      Trying to rein in or stop either will be resisted fiercely.

  2. It’s beginning to look like authors are demanding veto rights over who can buy and read their books.
    They are looking very weak. I suspect they are right about losing consumer dollars to AI.

    • Most are tradpub-bound.
      Weak is a fair assessment, as they don’t have many options if the corporate publishers find a good model to crank out pastiche books by the dozen every time the spaghetti sticks to the wall.
      Their arguments are even weaker.

    • I probably should point out why I think the Author’s Guild (sic) is so weak: the Betamax decision.


      Universal sued Sony for selling a device that “could” be used to violate copyright. And they were (technically) right. But the devices also had valuable non-infringing uses that were actually the primary reason consumers bought the video recorders, and Sony did not market based on the “infringing” uses. In addition, to rub salt in the wound, the courts sanctioned copying for time shifting and personal collection. The underlying principle, that tech companies are not liable for the uses consumers put their products to so long as there are legal uses to justify the product, is enshrined in the safe harbor provisions of the DMCA, specifically:

      “The Online Copyright Infringement Liability Limitation Act (OCILLA) is United States federal law that creates a conditional ‘safe harbor’ for online service providers (OSP), a group which includes Internet service providers (ISP) and other Internet intermediaries, by shielding them for their own acts of direct copyright infringement (when they make unauthorized copies) as well as shielding them from potential secondary liability for the infringing acts of others. OCILLA was passed as a part of the 1998 Digital Millennium Copyright Act (DMCA) and is sometimes referred to as the ‘Safe Harbor’ provision or as ‘DMCA 512’ because it added Section 512 to Title 17 of the United States Code. By exempting Internet intermediaries from copyright infringement liability provided they follow certain rules, OCILLA attempts to strike a balance between the competing interests of copyright owners and digital users.”

      Add it all up:

      1- LLM software has perfectly safe and legal uses that constitute nearly 100% of the revenues it produces. It is not going to be sanctioned over speculative harms or even minimal impacts to a handful of authors.

      2- The software is so resource intensive it runs almost exclusively as an online service off massive and expensive datacenters. (DMCA!)

      3- The expense is so high that all the major players (Amazon, Google, Microsoft, IBM, Tesla, and even Intel) have resorted to crafting their own proprietary CPUs and “AI” accelerator chips. They are not investing sums that *start* in the billions just to destroy tradpub authors. Any harm, if any, would be incidental, but the authors take it personally. And overestimate their importance.

      4- Now, as the recent court case pointed out, any author, publisher, or guild going to court has to show direct personal harm to start with. Then the fair use tests kick in. And finally, as Universal found out, any proven violation has to be weighed against the Constitution’s IP clause:

      [The United States Congress shall have power] To promote the Progress of Science and useful Arts, by securing for limited Times to Authors and Inventors the exclusive Right to their respective Writings and Discoveries.

      Key words being “progress of science and useful Arts”. Nowhere does it say anything about protecting fragile egos or revenues.

      The tech is evolving so fast that by the time any LLM case reaches judgment the tech will be so changed and entrenched as “useful art” it won’t matter how it was developed.

      “The needs of the many outweigh the self-interest of the few.”

  3. The FTC does have the appropriate jurisdiction. It’s just that that jurisdiction is nowhere apparent in its presentation to the Copyright Office.

    The FTC’s jurisdiction extends to “unfair and deceptive trade practices,” which has two possible hooks to the Copyright Office’s mission (although not necessarily to copyright per se). The first — and this is Bruce’s dorsal fin sticking up above the water (cue the cellos!) — is the issue of “deceptive misidentification of the source or nature of goods” not amounting to “mere puffery.” Failure to disclose that a work is LLM/Enhanced Eliza generated falls into this description… what’s that? So do novels labelled as “by X and James Patterson” or “by V.C. Andrews” or, well, any ghostwritten work, or nominally collaborative work with an undisclosed collaborator. I think NYC-based publishing is going to need a bigger boat; the key point is that this is far from unique to large language models or anything based upon them.

    The second — probably closer to not the Creature, but the Black Lagoon itself — is closer to the core of antitrust (and the FTC has jurisdiction over antitrust matters that are in interstate commerce): Whether the achieved-through-imbalanced-economic-power-and-failure-to-disclose default “permissions” regimes embedded in commercial publishing contracts since the 1930s† giving publishers the power to grant permission for never-contemplated-in-nature uses impair authors’ copyrights and constitute antitrust violations.

    Neither of these inquiries is going to take place. Ever.

    † One wonders whether commercial-publishing-wide failure to adapt this, and many other, aspects of its contracting practices to changes in law like, say, the Copyright Act (1976), the Bankruptcy Code (1978), the SPEECH Act (2010), either Kirtsaeng decision, etc., etc., etc. — especially during periods in which one or more of its major members have been subject to either consent orders relating to antitrust violations or actual adverse judgments from antitrust lawsuits — might be a better target here. (That’s obviously “one wonders” meaning “of course it is, you morons, and if you didn’t have more conflicts of interest than a Chicago/Kansas City/New Jersey politician you’d understand that.”)

    • The issue with the FTC announcement isn’t theoretical jurisdiction but approach (ideological and political) and capability (minimal to non-existent) in the tech sphere.

      The “AI” market is evolving rapidly, and yesterday’s LLM tech demos and generalized chatbots are giving way to SLMs and focused, *customer-created* private tools built on company-specific databases. So what commerce are they going to regulate? Are they going to monitor what Microsoft charges Amazon (yes, Amazon uses Microsoft services) or Siemens (they have a partnership to develop and sell automation “AI” control systems for factory tools)?

      Is *this* a priority matter for FTC interference?


      Are hundreds if not thousands of similar deals going to be monitored by FTC lawyers?
      Can they identify the product and “customer” and figure out “fair pricing”?

      The big tech companies they are targeting aren’t content peddlers; their customers are. They’re not interested in extorting pennies from Joe Sixpack or from scammers trying to peddle “AI”-crafted fanfic pastiches. They want billions from corporate deals with industry, Wall Street, and health care. Google Bard is all about protecting the Madison Avenue cash cow; MS Copilot is about in-house enterprise software, as are Amazon’s AWS tools. Even OpenAI (which is tiny) makes its money peddling its APIs for others to build software. (Think Khan even knows what an API is?) The entire “AI” field is in a chaos that won’t sort out for years (dotcom era) or decades (industrial automation). The big money lies upstream of the maybes the media and idiot politicians angst over. This is a time to sit down, pop the corn, and watch.

      They are grandstanding, chasing a ghost over issues that don’t exist and may never exist. “Hey, we’re over here! We matter! We’ll protect you from the big bad AI wolf! Seriously!” There is no there there, just handwaving over hypotheticals that nobody can even point to as real.

      “It is a tale told by an idiot, full of sound and fury, signifying nothing.”

      That is today’s FTC.
      Deeds, not words.

Comments are closed.