
Can AI-Generated Works Be Copyrighted?

· Updated May 8, 2026 · 16 min read

Can AI-generated works be copyrighted? Purely AI-generated works cannot. The U.S. Copyright Office and the D.C. Circuit have held that copyright protection requires human authorship, and the Supreme Court declined to revisit that rule in 2026. AI-assisted works can be copyrighted—but only when a human exercises meaningful creative control over the expressive elements of the final work. Entering prompts, no matter how detailed, is not enough. A separate fair-use question about AI training on copyrighted material is unsettled and may take an appellate split to resolve.

Quick Reference: What’s Copyrightable in the AI Era

| Work type | Copyrightable? | Why |
| --- | --- | --- |
| Purely AI-generated (prompt → output, no human modification) | No | No human author of the expressive elements (USCO Part 2; Thaler v. Perlmutter) |
| AI-assisted with substantial human editing/arrangement | Yes (the human-authored portion) | Human exercises creative control over expressive choices |
| Human-authored work that uses AI as a tool (e.g., spell-check, layout suggestions) | Yes (entire work) | AI is treated like Photoshop or a word processor |
| Compilation of AI-generated elements with original creative selection/arrangement | Yes (the selection/arrangement only, not the underlying AI output) | Compilation copyright recognized in 17 U.S.C. § 103 |
| Image generated from a single prompt | No | Insufficient human control over expressive choices (USCO position) |
| Image generated from iterative prompting + extensive post-processing | Disputed | The pending Allen v. Perlmutter case will test where the line falls |

The Human Authorship Requirement

Copyright has always been a human institution. The Copyright Act of 1976 protects “original works of authorship” fixed in a tangible medium of expression, and courts have consistently interpreted “authorship” to require a human creator. The doctrine has deep roots: the Supreme Court’s nineteenth-century Burrow-Giles Lithographic Co. v. Sarony (1884) decision, which extended copyright to photography, did so by emphasizing the photographer’s human creative choices—pose, lighting, costume, accessories. The U.S. Copyright Office’s longstanding Compendium of U.S. Copyright Office Practices states the same rule: works produced by “nature, animals, or plants” cannot be registered, and neither can output from a “mechanical process or random selection without any contribution by a human author.”

This principle was tested directly when Dr. Stephen Thaler attempted to register a work created entirely by his AI system, the “Creativity Machine,” listing the AI as the sole author.

Thaler v. Perlmutter: The Definitive Ruling

The case that settled the question—at least for now—is Thaler v. Perlmutter. Dr. Thaler sought to register an AI-generated image titled “A Recent Entrance to Paradise” with the Copyright Office, naming the Creativity Machine as author and himself as the owner by virtue of the work-for-hire doctrine.

The Copyright Office refused registration. The D.C. district court upheld that refusal in 2023, and the D.C. Circuit Court of Appeals unanimously affirmed in March 2025 in an opinion by Judge Patricia Millett.

The Copyright Act of 1976 requires all eligible work to be authored in the first instance by a human being.

— Judge Patricia Millett, Thaler v. Perlmutter, D.C. Circuit (March 18, 2025)

Millett’s opinion grounded the human-authorship rule in the statute’s own text. The Act’s use of “person,” “widow,” “children,” and “heirs” in the duration and termination provisions—all categories that presuppose a human being with a finite lifespan—left no room to read “authorship” as encompassing a machine. The court also rejected Thaler’s work-for-hire argument: the work-for-hire doctrine assigns ownership of a copyrightable work to an employer or commissioning party, but it presupposes that the work was authored by a human in the first place. A work that has no human author has nothing to assign.

The full D.C. Circuit denied en banc rehearing on May 12, 2025, and on March 2, 2026, the Supreme Court denied certiorari, leaving the ruling intact. As a practical matter, the human authorship requirement is now firmly established—because all challenges to Copyright Office registration decisions are heard in the D.C. Circuit, no other appellate court is positioned to reach the question.


The Thaler ruling addressed a narrow question: whether an AI system can be the author of a copyrighted work. It did not hold that works involving AI are categorically uncopyrightable. This distinction matters enormously.

The U.S. Copyright Office addressed the broader question in its Part 2 report on copyrightability, released January 29, 2025. The Office drew a clear line between two categories.

The AI-Generated vs. AI-Assisted Distinction

AI-generated works (purely autonomous output) receive no copyright protection. If the AI determined the expressive elements without sufficient human control, the resulting work is uncopyrightable—regardless of how creative the prompt was.

Where AI is used as a tool to assist human authorship, the resulting work may be eligible for copyright protection…. But where AI determines the expressive elements of a work, the resulting output is not the product of human authorship and is not protected by copyright.

U.S. Copyright Office, Copyright and Artificial Intelligence, Part 2: Copyrightability (January 29, 2025)

AI-assisted works (human creativity enhanced by AI tools) can receive copyright protection, provided the human author exercised meaningful creative control over the expressive elements of the final work. The Copyright Office has stated that using AI as a tool—much like using a camera, Photoshop, or a word processor—does not disqualify a work from protection.

What Qualifies as Sufficient Human Authorship?

The Copyright Office has provided guidance on what does and does not qualify.

Works that can qualify for copyright protection include those where the human author edits or substantially modifies AI-generated output, makes creative selections and arrangements of AI-produced elements, or combines human-authored and AI-assisted sections into a unified work with original creative expression.

Works that cannot qualify include output generated solely by prompts (no matter how detailed or creative the prompts themselves may be), works where the human merely selected parameters without creative control over expression, and works where AI automated the execution of an idea without human direction over the expressive result.

USCO Guidance Has Evolved Quickly

The Copyright Office’s position has tightened in a series of discrete steps:

| Date | Action | What changed |
| --- | --- | --- |
| February 21, 2023 | Zarya of the Dawn registration partially canceled | First public refusal: Kris Kashtanova’s graphic novel kept text + arrangement protection but lost individual Midjourney images |
| March 16, 2023 | Initial registration guidance (88 FR 16190) | Required disclosure of any AI-generated content; treated AI output as raising authorship questions case by case |
| January 29, 2025 | Part 2 Report: Copyrightability | Codified the AI-generated vs. AI-assisted distinction; rejected “prompts as authorship” |
| May 2025 | Part 3 Report: AI Training | Concluded AI training is not categorically fair use; pirated training data weighs against fair use |
| (Pending) | Part 4 Report | Expected to address liability and remedies |

Disclosure Requirements

Since March 2023, the Copyright Office has required applicants to disclose the use of AI-generated content in works submitted for registration. Applicants must explain which portions of the work were AI-generated and describe the human author’s creative contributions. Failure to disclose AI involvement can jeopardize the validity of a registration—the Office has the statutory authority to cancel registrations procured through omission of material facts (17 U.S.C. § 411(b)).

In the 2023 Zarya of the Dawn decision, the Copyright Office partially canceled the registration of Kris Kashtanova’s graphic novel after determining that the individual images had been generated by Midjourney. The Office let the text and the “selection, coordination, and arrangement” stand as protectable, but the AI-generated images themselves received no protection. The decision became a template for how the Office handles mixed AI/human works: registration is granted, but only the human-authored elements receive protection.

Congress Responds: The 2026 Legislative Landscape

While the courts have addressed authorship, Congress is focused on a related but distinct question: whether AI companies can use copyrighted works to train their models without permission.

Several bipartisan bills were introduced in early 2026:

The CLEAR Act (Copyright Labeling and Ethical AI Reporting Act), introduced by Senators Adam Schiff and John Curtis, would require companies to file a notice with the Register of Copyrights detailing copyrighted works used in AI training datasets.

The TRAIN Act (Transparency and Responsibility for Artificial Intelligence Networks), introduced by Representatives Madeleine Dean and Nathaniel Moran, would give copyright holders the ability to determine whether their works were used without permission to train AI models.

The White House also weighed in on March 20, 2026, releasing a National Policy Framework for Artificial Intelligence. The Administration expressed the view that training AI models on copyrighted material does not violate copyright law, while acknowledging that “reasonable arguments to the contrary exist” and that courts should ultimately resolve the question. The framework recommended that Congress not legislate on fair use in a way that would influence judicial outcomes, but consider enabling collective licensing frameworks to allow copyright holders to negotiate compensation from AI providers.

No final legislation has been enacted, but the momentum toward greater transparency requirements is clear.

Fair Use and AI Training: The First Rulings

The authorship question is settled, but a separate and equally consequential line of cases is testing whether AI companies can use copyrighted works to train their models without permission. Three district courts issued the first substantive rulings in 2025, and they disagreed.

The 2025 District Court Split: Side-by-Side

| Case | Court / Date | Training data | Holding | Reasoning |
| --- | --- | --- | --- | --- |
| Thomson Reuters v. Ross Intelligence | D. Del., February 11, 2025 (Bibas, J.) | Westlaw headnotes | Not fair use | The AI tool was built to compete with the source; a market substitute weighs decisively against fair use under the fourth statutory factor |
| Bartz v. Anthropic | N.D. Cal., June 23, 2025 (Alsup, J.) | Mix of lawfully purchased books + pirated copies (Library Genesis) | Mixed: lawful copies = fair use; pirated copies = not fair use | Training is “quintessentially transformative,” but pirating to assemble the corpus is a separate, non-transformative act |
| Kadrey v. Meta | N.D. Cal., June 25, 2025 (Chhabria, J.) | LibGen pirated copies | Fair use (on the existing record) | Plaintiffs failed to develop the market-harm record; transformative use plus thin market evidence outweighed the piracy |

In Thomson Reuters v. Ross Intelligence (D. Del., February 2025), Judge Stephanos Bibas found that training an AI legal research tool on copyrighted headnotes to build a competing product was not fair use. The market-substitution factor was decisive: Ross was attempting to make a market replacement for Westlaw using Westlaw’s own protected expression as the training material.

In Bartz v. Anthropic (N.D. Cal., June 2025), Judge William Alsup ruled that training on lawfully acquired books was fair use, but that downloading pirated copies was not. The opinion drew a sharp distinction between the training itself—which Alsup called “exceedingly transformative” because it produces a model that does not reproduce the underlying works—and the acquisition method. Anthropic’s use of the LibGen shadow library as a training source was a separate, non-transformative act of mass infringement. The case settled for $1.5 billion shortly after, the largest copyright settlement in U.S. history.

The technology at issue was among the most transformative many of us will see in our lifetimes. Yet that transformative purpose attaches to the model and its outputs, not to every step taken to assemble the training corpus. Mass downloading from pirate sites is not transformed by what comes later.

— paraphrase of Judge William Alsup’s reasoning in Bartz v. Anthropic, N.D. Cal. (June 23, 2025)

In Kadrey v. Meta (N.D. Cal., June 2025), Judge Vince Chhabria reached the opposite result on similar facts, finding training to be fair use even where pirated copies were involved. Chhabria’s decision rested heavily on the plaintiffs’ procedural failure to develop a market-harm record: they offered little concrete evidence that Llama’s release had measurably reduced demand for their books, and on the existing record Chhabria concluded that the transformative-use analysis controlled. Chhabria notably warned that future plaintiffs would likely succeed where Sarah Silverman and her co-plaintiffs had not, by building a stronger evidentiary case on the fourth factor.

Why the Cases Disagree

The three rulings can be reconciled by reading them as turning on different statutory factors:

  • Thomson Reuters turned on factor four (market harm): Ross was building a Westlaw substitute.
  • Bartz turned on factor one (purpose), with the piracy issue treated separately as an independent act of infringement.
  • Kadrey turned on factor four, but with the burden falling against the plaintiffs because the record was thin.

This pattern suggests that the next wave of AI-training cases will be won or lost on plaintiffs’ ability to develop market-harm evidence—sales decline, license-market displacement, output that approximates the originals.

The Copyright Office weighed in with its Part 3 report on AI training, released in May 2025, concluding that AI training is not categorically fair use and that using pirated datasets weighs against a fair use finding.

These rulings and the Office’s analysis make clear that the fair use question will likely require resolution by the appellate courts or Congress.

The Next Test: AI-Assisted Copyrightability

One case to watch is Allen v. Perlmutter (D. Colo.), in which artist Jason Allen is challenging the Copyright Office’s refusal to register Théâtre D’opéra Spatial, an image he created using Midjourney through hundreds of iterative prompts and extensive post-processing. Unlike Thaler—which asked whether AI can be the author—Allen asks whether iterative, creative prompting and editing constitute sufficient human authorship.

Allen’s factual record is unusually well-developed for a copyrightability case. He has documented hundreds of generations across multiple Midjourney sessions, manual upscaling, photo-editing in Adobe Photoshop, and curatorial selection from large galleries of intermediate outputs. The Copyright Office’s position is that none of those steps individually amount to creative control over the expressive elements of the final image; Allen’s position is that, taken together, they constitute the kind of expressive judgment the Burrow-Giles line of cases has always protected.

Summary judgment briefing was completed by early 2026, making this potentially the next landmark ruling on AI copyrightability. If Allen prevails, the line between “AI as tool” and “AI as author” will move significantly toward the human side; if the Office prevails, the rule against prompt-only authorship will harden into a near-categorical bar on text-to-image works.

What This Means for Creators

If you use AI tools in your creative process, the practical takeaways are straightforward:

You can use AI as a tool in your creative workflow without losing copyright protection, but you must be the one making the creative decisions. Edit, arrange, select, and modify. The more creative control you exercise over the final product, the stronger your copyright claim.

A Practical Workflow for Registering AI-Assisted Work

If you intend to register an AI-assisted work with the Copyright Office, consider building the following into your workflow from the start:

  1. Save every iteration. Keep raw AI outputs, intermediate edits, and the final work as separate files. The Copyright Office wants to see the human-authored layer; your edit history is the proof.
  2. Document the prompts. Save the prompt text and any negative prompts. These help establish the scope of AI involvement and the boundary between AI output and your edits.
  3. Identify your contribution in writing. Before submitting, write a one-paragraph “authorship statement” describing exactly what you did—which elements you arranged, which you edited, which you replaced, which you composed yourself.
  4. Disclose AI involvement on the application. Use the “Limitation of Claim” section to disclose AI-generated material and identify the unprotectable elements. Describe the human-authored portion you are claiming.
  5. Be conservative about what you claim. Claiming a copyright on the entire work when only a portion is human-authored is the surest way to lose the registration if challenged. Claim the human-authored elements only.
  6. Keep the version stack. If your registration is ever challenged, the file-history record showing how the work evolved from AI output to final form is your strongest evidence.
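As an illustration of steps 1, 2, and 6 above, here is a minimal sketch of a machine-readable provenance log. The function name, directory layout, and JSON fields are my own invention for illustration — the Copyright Office prescribes no particular format — but any record that ties each saved file to its prompt and a timestamp serves the same evidentiary purpose:

```python
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

def log_iteration(archive_dir, prompt, output_path, note=""):
    """Append one iteration (prompt + output file hash) to a provenance log.

    The SHA-256 digest ties the log entry to the exact bytes of the saved
    file, so a later challenge can verify the version stack is unaltered.
    """
    archive = Path(archive_dir)
    archive.mkdir(parents=True, exist_ok=True)
    digest = hashlib.sha256(Path(output_path).read_bytes()).hexdigest()
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "prompt": prompt,                 # full prompt text, per step 2
        "output_file": str(output_path),  # which saved file this describes
        "sha256": digest,
        "note": note,  # e.g., "manual color correction in Photoshop"
    }
    # One JSON object per line (JSON Lines), so the log is append-only.
    with (archive / "provenance.jsonl").open("a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")
    return entry
```

Calling `log_iteration` once per generation and once per manual edit yields a chronological record that maps directly onto the authorship statement in step 3.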

Document your creative process. If a registration is ever challenged, being able to show how you directed, selected, and refined the AI’s output will strengthen your position.

Disclose AI involvement when registering your copyright. The Copyright Office requires it, and failing to do so puts your registration at risk.

And if you are a creator whose works may have been used to train AI models, watch the legislative landscape closely. The CLEAR Act and TRAIN Act, if enacted, would give you new tools to discover and potentially seek compensation for unauthorized use.

What to Watch Through 2026 and 2027

The AI copyright landscape is moving on three tracks at once:

  • Authorship doctrine: Allen v. Perlmutter will be the next major test of where the human-authorship line falls for iteratively prompted, post-processed work. Expect a ruling in mid-to-late 2026.
  • Fair-use doctrine: The first appellate decisions in the AI-training cases are likely in 2026–2027. The Ninth Circuit is positioned to review the Bartz settlement structure and the Kadrey fair-use ruling on largely overlapping records. A circuit split between the Ninth Circuit and (eventually) the Second Circuit’s NYT v. OpenAI line would tee the question up for the Supreme Court.
  • Statutory framework: The CLEAR Act, TRAIN Act, and any successor legislation will determine whether disclosure becomes a hard requirement and whether collective licensing emerges as a workable structure for AI training.

International developments will also matter. The European Union’s AI Act took effect in August 2024 and includes a transparency obligation for general-purpose AI providers—they must publish a “sufficiently detailed summary” of training data. The United Kingdom is consulting on a text-and-data-mining exception with an opt-out mechanism. Either approach, if widely adopted, would influence the U.S. debate.


This post is for informational purposes only and does not constitute legal advice. If you have questions about the copyrightability of AI-assisted works or about the use of your works in AI training, consult a qualified attorney.

Frequently Asked Questions

Can AI be listed as the author of a copyrighted work?

No. The D.C. Circuit ruled unanimously in Thaler v. Perlmutter (2025) that the Copyright Act requires human authorship, and the Supreme Court declined to review that ruling in March 2026. AI systems cannot be named as the author of a copyrighted work.

Can I copyright something I made with AI assistance?

Yes, if your human creative contribution is sufficiently substantial. The U.S. Copyright Office distinguishes between AI-generated works (no protection) and AI-assisted works (protectable if the human exercised meaningful creative control over the expressive elements). Simply entering prompts is not enough—you must edit, arrange, select, or substantially modify the output.

What does the U.S. Copyright Office say about AI-generated works?

The U.S. Copyright Office’s Part 2 report on copyrightability, released January 29, 2025, draws a clear line. Purely AI-generated works (where the AI determined the expressive elements without sufficient human control) receive no copyright protection. AI-assisted works (where a human exercised meaningful creative control over the expressive elements of the final work) can receive protection. The Office has also stated since March 2023 that applicants must disclose any AI-generated content when registering a work.

Do I have to tell the Copyright Office I used AI?

Yes. Since 2023, the Copyright Office has required applicants to disclose AI-generated content in registration applications and to describe the human author’s creative contributions. Failure to disclose can jeopardize the validity of your registration under 17 U.S.C. § 411(b).

Is it legal for AI companies to train models on copyrighted works?

It depends, and the answer is unsettled. Three district courts issued the first rulings in 2025 and reached different conclusions. Thomson Reuters v. Ross Intelligence found AI training was not fair use when it produced a competing product. Bartz v. Anthropic found training on lawfully acquired books was fair use but training on pirated copies was not (and produced a $1.5 billion settlement). Kadrey v. Meta found training was fair use even where the training data was pirated, largely because the plaintiffs’ market-harm evidence was weak. The Copyright Office’s Part 3 report concluded AI training is not categorically fair use. Appellate courts will likely need to resolve the split.

What is the Allen v. Perlmutter case about, and why does it matter?

Allen v. Perlmutter tests the line between “AI as a tool” (copyrightable) and “AI as the author” (not copyrightable) for iteratively prompted, post-processed images. Jason Allen used Midjourney through hundreds of prompt iterations plus Photoshop editing to produce Théâtre D’opéra Spatial, and is challenging the Copyright Office’s refusal to register it. A ruling for Allen would expand the scope of AI-assisted works that qualify; a ruling against would harden the rule that prompts alone (no matter how many) cannot constitute authorship.

How long does copyright last on an AI-assisted work?

The same rules apply as for any copyrighted work. If the human author is an individual, copyright lasts for the life of the author plus 70 years. For works made for hire, it lasts 95 years from publication or 120 years from creation, whichever expires first. See How Long Does a Copyright Last? for more detail.

What about AI-assisted code and software?

The same authorship rules apply. The Copyright Office treats AI-generated code the same way it treats AI-generated images: code authored entirely by an AI system is not copyrightable, while code where a human developer exercised meaningful creative control—designing the architecture, writing core functions, substantially editing AI suggestions—remains protectable. As of 2026 there is no published Copyright Office decision specifically addressing GitHub Copilot output.


Sources:

Thaler v. Perlmutter, D.C. Circuit Opinion (March 18, 2025)

Copyright and Artificial Intelligence, Part 2: Copyrightability, U.S. Copyright Office (January 29, 2025)

Copyright and Artificial Intelligence, Part 3: Generative AI Training, U.S. Copyright Office (May 2025)

Copyright Registration Guidance: Works Containing Material Generated by Artificial Intelligence, 88 FR 16190, U.S. Copyright Office (March 16, 2023)

National Policy Framework for Artificial Intelligence, The White House (March 20, 2026)

Burrow-Giles Lithographic Co. v. Sarony, 111 U.S. 53 (1884)

Compendium of U.S. Copyright Office Practices, Third Edition (2021)

Bartz v. Anthropic settlement notice, $1.5 billion (Copyright Alliance)


Garrett Ham

Garrett Ham is an attorney, military veteran, and holds a Master of Divinity from Yale Divinity School. He writes from Northwest Arkansas on theology, law, and service.
