Generative Artificial Intelligence (AI) is rapidly transforming various industries, introducing new possibilities for innovation and efficiency, but also presenting complex challenges for intellectual property (IP) law[5][6]. Traditional copyright frameworks, conceived long before AI could compose music or paint pictures, are now struggling to address fundamental questions regarding authorship, infringement, and creative rights[2]. This report examines how generative AI is reshaping IP law, focusing on authorship debates, derivative works, and licensing models, and suggests policy reforms and corporate IP strategies to navigate this evolving landscape.
A foundational premise of U.S. copyright law is that authors must be human, a principle consistently interpreted to exclude non-human creators from copyright protection[2][8]. The U.S. Copyright Office maintains that works must "owe their origin to a human agent" to qualify for registration[2]. This stance was reinforced in Thaler v. Perlmutter (2023), where a federal court upheld the Copyright Office's rejection of an AI-created image for lacking human authorship[2][6][8]. The Copyright Office clarified in January 2025 that copyright protection extends only to works with meaningful human involvement, requiring a human creator to significantly shape or contribute to the final expressive content, rather than merely providing a simple prompt[2][9]. For instance, in the case of Kristina Kashtanova's graphic novel "Zarya of the Dawn," the Copyright Office granted registration for the human-authored text and arrangement, but excluded the individual AI-generated images from protection[2]. This highlights the ongoing debate about the extent of human intervention needed for AI-assisted content to be copyrightable[10].

In contrast, the United Kingdom explicitly recognizes "computer-generated" works as copyrightable, assigning authorship to "the person by whom the arrangements necessary for the creation of the work are undertaken," albeit with a shorter protection term[2][8]. China's approach also shows a willingness to recognize copyright for AI-generated images if significant human intellectual effort is involved in prompting and guiding the output[6][8]. The concept of "aesthetic neutrality," which cautions against legal judgments on artistic merit, is challenged by the need to evaluate the "modicum of creativity" in AI-generated works to determine copyrightability[10].
Generative AI systems learn by analyzing vast datasets, many of which contain copyrighted materials, leading to heated legal debates over whether this training constitutes fair use or infringement[2]. AI developers often argue that training is a transformative analytical use, akin to a reader learning from books, and draw analogies to search engines, which courts have previously deemed fair use[2][10]. Creators and rights holders counter that AI training exceeds fair use, especially when the output competes with or replicates elements of original works[2]. Authors such as George R.R. Martin have sued AI developers, and visual artists have brought suits against image generators such as Stable Diffusion and Midjourney, alleging unauthorized use of their copyrighted works for training[2][6]. A key precedent emerged in Thomson Reuters v. Ross Intelligence, where in 2025 the court rejected the AI company's fair use defense, finding its use commercial, insufficiently transformative, and directly harmful to the market for the original product[2]. The U.S. Copyright Office's May 2025 guidance states that while training AI systems on copyrighted works for research or analytic purposes may be fair use, unauthorized mass commercial use, particularly through illicit scraping, falls outside its scope[2]. The legal definition of "copy" in copyright law is also contested: during training, AI models extract mathematical representations of works rather than storing direct reproductions[10]. Moreover, copyright does not protect ideas or styles, so when AI mimics an artist's style by extracting stylistic features, those features generally fall outside copyright protection[10].
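The claim that models extract statistical representations rather than storing copies can be illustrated with a toy sketch. The `featurize` function below is purely hypothetical and vastly simpler than real model training, but it shows the basic character of a lossy numerical summary: it records statistics about a text without retaining the text itself.

```python
from collections import Counter

def featurize(text: str) -> dict[str, float]:
    """Reduce a text to normalized token frequencies: a lossy,
    statistical summary from which the original wording cannot
    be reconstructed."""
    tokens = text.lower().split()
    counts = Counter(tokens)
    return {tok: n / len(tokens) for tok, n in counts.items()}

features = featurize("the quick brown fox jumps over the lazy dog")
# 'features' records only word statistics (e.g. 'the' occurs 2 of 9
# times); the sentence itself is not recoverable from the mapping.
```

Real models compress statistics from billions of such observations into shared weights, which is one reason litigants dispute whether any legal "copy" exists inside a trained model at all.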
The question of whether AI-generated outputs are derivative works is also central to the debate. Some arguments suggest that AI outputs are derivative because they are generated from copyrighted training data[10]. However, this relies on a causation fallacy, as latent representations in AI models are not considered "copies" in the legal sense, and outputs are designed not to be verbatim representations of training elements[10]. AI companies implement mitigation strategies, such as preventing "image regurgitation" and overfitting, to avoid direct replication[10]. The music industry has seen challenges with AI-generated "deepfake" tracks that mimic artists' vocal styles without directly copying existing compositions, raising questions about publicity rights or trademark rather than traditional copyright[2].
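One common flavor of the mitigation strategies mentioned above is screening generated outputs for near-verbatim overlap with known training text before release. The sketch below is a hypothetical illustration, not how any particular vendor implements it; real systems rely on far more sophisticated deduplication and memorization testing. Here an output is flagged when a large share of its word n-grams appears verbatim in a single training snippet.

```python
def ngrams(text: str, n: int = 5) -> set[tuple[str, ...]]:
    """All contiguous n-word sequences in the text."""
    toks = text.lower().split()
    return {tuple(toks[i:i + n]) for i in range(len(toks) - n + 1)}

def looks_regurgitated(output: str, training_snippets: list[str],
                       n: int = 5, threshold: float = 0.5) -> bool:
    """Flag the output if at least `threshold` of its n-grams appear
    verbatim in any single training snippet."""
    out_grams = ngrams(output, n)
    if not out_grams:
        return False
    return any(
        len(out_grams & ngrams(snippet, n)) / len(out_grams) >= threshold
        for snippet in training_snippets
    )

corpus = ["the quick brown fox jumps over the lazy dog every single day"]
looks_regurgitated("the quick brown fox jumps over the lazy dog", corpus)  # True
looks_regurgitated("an entirely original line about something else here", corpus)  # False
```

The threshold trades false positives (blocking legitimate short quotations) against false negatives (missing lightly paraphrased copies), which mirrors the legal line-drawing problem itself.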
Several solutions and policy reforms are being explored to address the challenges posed by generative AI. Legislative efforts include Tennessee's ELVIS Act (Ensuring Likeness, Voice, and Image Security Act of 2024), which recognizes a musician's voice as a protected attribute under publicity rights, and the proposed federal NO FAKES Act, aiming for nationwide protection against unauthorized digital replicas[2]. The Generative AI Copyright Disclosure Act, introduced in Congress in 2024, seeks to mandate disclosure of copyrighted works used in AI training datasets, promoting transparency and enabling creators to seek compensation[2][6]. Internationally, the European Union's AI Act includes transparency obligations for generative AI models, requiring providers to disclose summaries of training data, particularly copyrighted material[6][7][8]. The EU's 2019 Directive on Copyright also introduced exceptions for text and data mining for AI research, allowing rightsholders to opt out of commercial AI mining[2].

Licensing models are emerging as a market-based solution, with some AI developers voluntarily licensing data from content owners, such as Shutterstock's arrangement with OpenAI[2]. Scholars like Frank Pasquale and Haochen Sun propose a "Consent and Compensation" framework, giving creators the right to opt out or receive fair payment when their works are used in commercial AI systems[2]. The U.S. Copyright Office has rejected proposals for new sui generis rights for AI-generated works, believing sufficient incentives already exist for AI development and expressing concerns about the impact of AI-generated content on human authors[9]. A balanced approach to reform should maintain human authorship as the cornerstone of copyright, establish clear standards for AI training, implement supplementary protections like the ELVIS Act, and develop equitable revenue-sharing models[2].
Organizations must adopt robust IP strategies to navigate the complexities of generative AI. Key practices include clarifying IP ownership by documenting human contributions and establishing clear agreements with AI tool developers and model providers[5]. Businesses should minimize infringement risks by creating processes for reviewing and clearing AI outputs and implementing policies to prevent unauthorized use of third-party content[5]. Safeguarding sensitive data is crucial, as AI platforms often require user input, risking inadvertent disclosure of proprietary information[5]. Opting for enterprise-grade AI licenses can provide clearer terms regarding IP ownership, enhanced security, and specific provisions for warranties and indemnification[5]. Conducting thorough AI vendor due diligence is essential to assess security measures and IP terms[5]. Establishing a robust internal AI policy that mandates rigorous testing, forms a cross-functional oversight team, and provides ongoing employee training is vital for effective AI governance and IP protection[5]. Finally, businesses must continuously monitor and adapt their policies to evolving legal and regulatory frameworks for generative AI[5].
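The practice of documenting human contributions can be made concrete with a simple provenance record kept alongside each AI-assisted asset. The schema below is a hypothetical sketch, not a legal or industry standard; every field name is illustrative, and real programs would tailor the fields to counsel's guidance.

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class ProvenanceRecord:
    """One audit-trail entry documenting the human contribution to an
    AI-assisted work (hypothetical schema for illustration only)."""
    work_id: str
    ai_tool: str
    prompt_summary: str
    human_contributions: list[str] = field(default_factory=list)
    created_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

record = ProvenanceRecord(
    work_id="marketing-banner-042",      # placeholder asset identifier
    ai_tool="image-model-v2",            # placeholder tool name
    prompt_summary="abstract cityscape in brand colors",
    human_contributions=[
        "composited three generated drafts",
        "manually repainted foreground figures",
    ],
)
audit_log = [asdict(record)]  # persist this alongside the asset
```

A record like this supports two of the practices above at once: it evidences the human authorship needed for registration and gives reviewers a paper trail when clearing outputs.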
The rapid advancement of AI technology has exposed significant gaps in existing copyright systems, challenging traditional doctrines with technologies capable of learning from and mimicking human expression[2]. The ongoing legal battles and evolving regulatory landscape underscore the need for a balanced approach that protects human creators while fostering AI innovation[2]. The future will likely see further refinement of fair use standards, clearer guidelines for human-AI collaborations, and potential legislative efforts to address disclosure requirements and compensation mechanisms[6].