By: Bárbara Becerra Marcano*


In a galaxy far, far away,[1] droids are equal to human beings as far as intelligence goes. The advanced technological systems seen in the Star Wars films, whether it be C-3PO or R2-D2,[2] demonstrate the idea that human beings could coexist with sophisticated technological machines, at least in a fictional sense. Currently, fiction is becoming reality.

Not long ago, computers were essentially expensive calculators; now they are more like the ones portrayed in Star Trek, where Mr. Spock could ask the computer a question and receive the answer in the form of a conversation. The future, then, is now. To live in the Internet age, a term encompassing the 21st century,[3] means rapid dissemination of information and innovative technologies, bringing us one step closer to what we once saw in sci-fi movies.[4] Cyberspace has helped us realize that technology has the potential to profoundly alter the way we access and produce knowledge.[5] A prime example has been Google’s search engine, which has improved the ways in which we find answers to pressing questions, and even trivial curiosities. Another is the smartphone, which made it possible to carry a multi-use device in our pockets.[6] The next saga could be the launch of ChatGPT, a generative artificial intelligence (hereinafter “AI”) tool able to take in a tremendous amount of data and produce new, original material in response to a user’s input in a chatbot.[7] This program permits the creation of complex emails, term papers, reports, business ideas, poetry, jokes, and even computer code in a matter of seconds.[8] The surge in interest in and use of this tool makes it necessary to discuss in more detail how people and technology interact.

Recently, several debates on this topic have emerged. For instance, some have discussed the possibility of computer systems making decisions superior to those made by humans. Others have discussed the implications of these technologies for the future of work and society at large. This discourse has also reached the field of law.[9] The rapid evolution of AI technologies has ushered in a new era of content creation, fundamentally altering the dynamics of the media and entertainment industry. As the digital landscape continues to reshape how information is consumed and disseminated, legal professionals face the daunting task of adapting to this transformative wave.

Therefore, this article delves into the profound impact that artificial intelligence is having on the legal frameworks governing media and intellectual property, exploring the challenges and opportunities that arise in the wake of this technological revolution. The first part discusses how the AI-driven revolution is reshaping the legal profession, optimizing tasks like contract analysis and licensing, while also raising awareness about the necessity of human judgment. The second part delves into the challenges of AI-generated content, particularly deepfake technology, prompting discussions on regulatory measures and the potential application of image rights to address unauthorized use. The third section explores copyright issues, infringement claims, and fair use defenses, emphasizing the delicate balance between content creators’ rights and public access. Lastly, it underscores the importance of thoughtful reforms to address the evolving landscape of AI-generated content. Before we begin to discuss the impact of AI, we must first define it.

I. Are These the Droids We’ve Been Looking For?

If you search the web for “artificial intelligence,” you will come across a variety of definitions. The Oxford English Dictionary defines the term as “the theory and development of computer systems able to perform tasks normally requiring human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages.”[10] Meanwhile, the Encyclopedia Britannica states that “artificial intelligence (AI) [is] the ability of a digital computer or computer-controlled robot to perform tasks commonly associated with intelligent beings.”[11] Intelligent beings are those that can adapt to changing circumstances.[12] Hence, the capacity to obtain and apply knowledge while adjusting to diverse situations is vital to AI’s operation, akin to human intelligence. Some commentators have observed that, in contrast to the industrial revolution, which automated physical labor, the AI-driven revolution is mechanizing mental processes.[13] While certain blue-collar occupations may merely be optimized by AI, many white-collar positions previously thought safe are now seeing a more fundamental upheaval.[14]

The legal profession is a perfect example. Attorneys not only find themselves grappling with novel questions surrounding ownership and the boundaries of creative expression in the age of AI, but also face an evolving professional landscape. For some, AI software stands to improve legal practice. For instance, lawyers are already utilizing AI to examine contracts and licenses, for its technologies can identify problems and mistakes that attorneys may overlook.[15] New programs allow documents to be examined more quickly and, in some situations, more precisely than humans can manage.[16] AI has been used to find specific research data, and algorithms have been employed in discovery procedures to locate pertinent documents in a lawsuit.[17] To some extent, one could even say that the use of AI in the field of contracts is the most benign application in the legal profession because it ultimately provides efficiency.[18] A team of lawyers would need far more time to extract data than is currently achievable with AI software.[19] Although this may seem like bad news for lawyers, these developments may allow attorneys to spend more of their time acting as counsel and less as reviewers, helping them avoid expensive blunders.[20] Additionally, AI tools could help lawyers accelerate the pace of investigation, thus possibly improving the legal system.[21]

However, despite the benefits that AI presents for the legal industry, it is not yet prepared to take the place of human judgment.[22] AI falls short when it comes to mimicking human, interpersonal relations. For instance, one could argue that existing machines can neither apply experience-based reasoning when leading cases nor read the facial expressions or body language of witnesses, opposing parties, judges, and juries. To the best of our knowledge, current AI machines cannot match or mimic human interaction, making humans, for now, indispensable for the effective legal representation of other humans.

In addition to the implementation of AI technologies in legal practices, AI machines could also be potential subjects in media law and intellectual property (IP) litigation. Therefore, this article seeks to shed light on how artificial intelligence is reshaping the traditional roles and responsibilities of IP lawyers. From the protection of AI-generated works to the challenges posed by deepfakes, the legal community must grapple with a host of unprecedented issues that demand innovative and adaptive solutions.

II. Sith or Legit? Exploring Media Law Challenges

Media law encompasses the legal domain concerned with overseeing diverse media formats, including television, radio, print, internet, and social media.[23] This field addresses an extensive array of subjects, including censorship, intellectual property, privacy, defamation, broadcasting, antitrust, advertising, and entertainment.[24] The overarching goal of media law is to strike a balance among the rights and concerns of media creators, consumers, and the public, while safeguarding the principles of freedom of expression and information.[25] However, this expected balance has a new challenge with the emergence of AI. As more communications companies adopt this technology, we face hard questions about this issue, such as: how can we trust news services that use AI to create content? When we scroll through our newsfeeds, should we be informed that a machine generated the article?

In the realm of news, during the summer of 2023, the Associated Press reached an agreement with OpenAI, the creator of ChatGPT, to license a portion of the AP’s text archive and to gain access to OpenAI’s technology and product expertise.[26] Simultaneously, Google reportedly offered major news organizations, such as the New York Times, the Washington Post, and the Wall Street Journal, a new software tool for journalists, codenamed Genesis.[27] This software assimilated information, including details of current events, and generated news content.[28] As a practical matter, AI can help journalists with tasks such as fact-checking, summarizing, and translating, but it may also introduce errors, biases, or misinformation that must be monitored and corrected by human editors. At the end of the day, it is still an emergent tool, which could reflect and amplify the existing biases in the data it uses to generate news, such as favoring sensationalism, popularity, or profitability over accuracy, relevance, or diversity.[29] To combat these complications, the executive branch recently issued an “ambitious executive order on artificial intelligence that seeks to balance the needs of cutting-edge technology companies with national security and consumer rights, creating an early set of guardrails.”[30] According to the order, “the Commerce Department is to issue guidance to label and watermark AI-generated content to help differentiate between authentic interactions and those generated by software.”[31] This is one of the first steps by the U.S. government to regulate this technology. However, many questions regarding the regulation of AI remain unanswered.

A. The Phantom Edit: Unmasking Deepfakes

A difficult question we must face regards how AI and the fake news phenomenon will converge. The emergence of “deepfake” technology serves as a perfect example.[32] It is becoming incredibly difficult to discern whether a picture or video is real or generated by AI. Deepfakes are media in which a person in an existing image or video is substituted with someone else’s likeness.[33] Deepfakes use sophisticated machine-learning algorithms to edit or create visual and audio information with a high potential for deception, contributing to an already tense atmosphere when it comes to the proliferation of fake news in digital spaces.[34] Deepfakes are gaining more prominence within social media and the internet as a whole.[35] For example, a few months ago, a picture appeared of Pope Francis wearing a puffy white jacket.[36] It turned out that the image was not a real picture.[37] This begs the question: how would you ever establish or prove that a comparable photo of yourself that emerged online was phony? For someone like the Pope, that isn’t much of a problem, but what about the rest of us? Additionally, deepfake technology isn’t limited to creating fabricated images; voices and videos are also being modified.[38] The latest advances in natural language processing technology can be used to take apart an authentic recording and rearrange it at will.[39] A recording of a person can be used by the AI to make the subject say something radically different from what he actually said.[40]

Creators and entertainment firms have called for new AI rules from lawmakers in Congress.[41] Many groups representing the creative industry, such as the Recording Industry Association of America, the Human Artistry Campaign, and the Screen Actors Guild-American Federation of Television and Radio Artists (SAG-AFTRA), which had been on strike for several months, have asked for protections from studios’ use of AI to reuse performers’ images and voices for new content without due credit or payment.[42]

Nonetheless, no federal legislation currently addresses the potential threats of this technology in the United States.[43] Although deepfakes are technically forms of expression, they may fall within the exceptions to the First Amendment, under which certain speech is not protected by the Constitution. These exceptions include obscenity, defamation, and incitement.[44] They are particularly relevant to deepfakes because many deepfakes are used to create nonconsensual pornographic videos. In fact, a research study found that between 2018 and 2020, 90-95% of deepfake videos were nonconsensual pornographic videos and, of those, 90% targeted women.[45]

The problem with bringing defamation suits in these contexts is that it is hard to sue the people who spread deepfakes online; there are too many of these images and too many anonymous users. A possible option, therefore, would be to sue the platforms where they are posted, like Facebook or X (formerly known as Twitter). Since 1996, however, Section 230(c)(1) of the Communications Decency Act has protected these platforms from being held responsible for what their users post.[46] It provides that only the person who posts the information, not the platform, can be sued for defamation.[47] There are exceptions to this law, however, such as when the information violates a federal criminal law or an intellectual property law.[48] Therefore, IP could be a way to fight a deepfake that uses someone else’s identity without permission. A possible legal solution lies in the application of the right of publicity.

The right of publicity, or image rights, refers to an individual’s right to control the commercial use of their name, image, likeness, or other aspects of their identity.[49] This legal concept protects individuals from the unauthorized use of their persona for commercial purposes, such as advertising, merchandising, or other forms of commercial exploitation.[50] The right of publicity is largely governed by state law; no federal statute specifically addresses it. The U.S. Supreme Court has addressed the right of publicity only once, in Zacchini v. Scripps-Howard Broadcasting Co., holding that the First Amendment does not immunize the press from liability for appropriating an individual’s entire act for commercial use.[51] In that case, a TV station had broadcast a performer’s entire human cannonball act without his consent.

Ultimately, the specifics of image rights vary across jurisdictions with different statutes and common law principles. Some states or territories have more expansive protections, while others have more limited or specific provisions.[52] Moreover, the U.S. Supreme Court has not addressed the applicability of Section 230(c) to state intellectual property law claims.[53] Nevertheless, the United States Court of Appeals for the Third Circuit has held that Section 230(c) of the Communications Decency Act (CDA) does not protect Facebook from a news anchor’s claim under the Pennsylvania statutory right of publicity.[54] In contrast, the Ninth Circuit has held that internet service providers (ISPs) are immune from all state intellectual property law claims, creating a split among the courts.[55] Until the U.S. Supreme Court resolves this circuit split, we are left without uniformity on this subject.

A possible alternative is for Congress to amend Section 230 directly by narrowly repealing online platforms’ immunity from tort claims in deepfake cases where platforms fail to use the best available authentication technology, or to pass other legislation dealing with this specific issue. Because this has not yet occurred, it is unclear whether a person whose identity has been used in a fake video on a social media platform for financial gain could successfully bring claims against the platform under state law.

Ultimately, any viable legal solution for deepfakes will hinge either on the capacity of online platforms to flag them efficiently without becoming arbiters of truth, or on enforcement of the right of publicity in a way that protects free speech and creativity. Such a conundrum allows us to segue into our next topic.

III. The Clone Wars of Copyright: Intellectual Property Challenges in AI

AI image generators like DALL-E, Midjourney, and Stable Diffusion can render images in various styles within seconds, needing only a prompt.[56] Currently, on social media platforms, people are displaying pictures of themselves fed through programs that mimic 90s-inspired yearbook photos.[57] However, these AI tools do not simply create these pieces out of nowhere. They use data and other parameters constructed by software to process images and text.[58] This situation raises a necessary debate regarding intellectual property; it begs the question: is it clear who the creator of these works is when using generative AI platforms?[59] Does AI-created content fall under copyright infringement laws?[60] These are pressing questions that gain further importance with the undoubtedly growing influence and use of AI.

A. Who Holds the Lightsaber of Copyright Authorship?

According to the U.S. Copyright Office, “[c]opyright is a type of intellectual property that protects original works of authorship as soon as an author fixes the work in a tangible form of expression.”[61] Regarding authorship, under the Copyright Act of 1976, protection is automatically granted upon the creation of original works of authorship, subject to certain conditions being met.[62] Works are original when they are independently created by a human author and have a minimal degree of creativity.[63] Independent creation simply means that you create the work yourself, without copying other works.[64] The Supreme Court of the United States has stated that a work need only possess a “modicum” of creativity.[65]

In a landmark case addressing authorship, the Supreme Court of the United States employed language that excluded nonhuman entities when interpreting Congress’s constitutional authority to grant “authors” exclusive rights to their “writings.”[66] The case, Burrow-Giles Lithographic Co. v. Sarony, involved a defendant accused of producing unauthorized copies of a photograph.[67] The defendant claimed that photographs could neither be classified as writings nor works of authors and, thus, could not be subjects of copyright.[68] The Court did not agree, stating that the acts of Congress that protected works of art clearly included photographs, as long as they represented the original ideas of the author.[69] The Court defined an “author” as the originator or maker of something, repeatedly emphasizing the human aspect by describing authors as a class of “persons” and characterizing copyright as “the exclusive right of a man to the production of his own genius or intellect.”[70]

Federal appellate courts have echoed this perspective while interpreting the Copyright Act, which specifically safeguards “works of authorship.”[71] For instance, in Naruto v. Slater, the Ninth Circuit ruled that a monkey could not sue humans or corporations for damages and injunctive relief arising from claims of copyright infringement.[72] In that case, a macaque took several photographs of himself with a camera left unattended in a reserve by Slater, a wildlife photographer. These photos were later published by Slater and Wildlife Personalities, Ltd. Following this event, People for the Ethical Treatment of Animals (“PETA”) and Dr. Antje Engelhardt filed a complaint for copyright infringement against the parties that published the photographs. The court determined that the monkey, and all animals, lack statutory standing under the Copyright Act because they are not humans.[73] The court analyzed several provisions, such as one stating that “’children’ of an ‘author’ . . . can inherit certain rights under the Copyright Act”, and terms such as “children,” “widow,” “grandchildren,” and “widower” used in the Copyright Act, to establish that these “imply humanity and necessarily exclude animals . . . .” from having statutory standing to sue under the Copyright Act.[74] These precedents demonstrate that human authorship is required for works to be protected under the Copyright Act.

Recently, in a request for reconsideration of a refusal to register a computer-generated, two-dimensional artwork, the U.S. Copyright Review Board determined that if a work is created by a machine, that is, lacking human authorship, the Office will not approve its registration.[75] One year later, the Office issued the Copyright Registration Guidance: Works Containing Material Generated by Artificial Intelligence, stating that:

If a work’s traditional elements of authorship were produced by a machine, the work lacks human authorship and the Office will not register it. For example, when an AI technology receives solely a prompt from a human and produces complex written, visual, or musical works in response, the “traditional elements of authorship” are determined and executed by the technology—not the human user. Based on the Office’s understanding of the generative AI technologies currently available, users do not exercise ultimate creative control over how such systems interpret prompts and generate material. Instead, these prompts function more like instructions to a commissioned artist—they identify what the prompter wishes to have depicted, but the machine determines how those instructions are implemented in its output. For example, if a user instructs a text-generating technology to “write a poem about copyright law in the style of William Shakespeare,” she can expect the system to generate text that is recognizable as a poem, mentions copyright, and resembles Shakespeare’s style. But the technology will decide the rhyming pattern, the words in each line, and the structure of the text. When an AI technology determines the expressive elements of its output, the generated material is not the product of human authorship. As a result, that material is not protected by copyright and must be disclaimed in a registration application.

In other cases, however, a work containing AI-generated material will also contain sufficient human authorship to support a copyright claim. For example, a human may select or arrange AI-generated material in a sufficiently creative way that “the resulting work as a whole constitutes an original work of authorship.” Or an artist may modify material originally generated by AI technology to such a degree that the modifications meet the standard for copyright protection. In these cases, copyright will only protect the human-authored aspects of the work, which are “independent of” and do “not affect” the copyright status of the AI-generated material itself.[76]

The federal district court for the District of Columbia determined that an artistic work created by artificial intelligence is not eligible for copyright registration under U.S. law.[77] In Thaler v. Perlmutter, Thaler attempted to register a copyright for a piece of visual art, listing a computer system he owned as the author.[78] The court ruled in favor of the U.S. Copyright Office, which had denied the copyright application.[79] It supported the Copyright Office’s stance by emphasizing that “United States copyright law protects only works of human creation.”[80] In cases like Thaler, where no human involvement was claimed in what the machine generated, the work is typically ineligible for copyright protection.[81] However, the extent of “human creativity” necessary to qualify for copyright protection remains uncertain, as does the boundary between human and non-human contributions in creative works, leading us to believe that the issue will continue to produce litigation moving forward.

B. Imperial Lawsuits: Navigating Copyright Infringement

On the subject of infringement, artificial intelligence raises important questions about the use of unlicensed content and data to train AI systems. The case of Andersen v. Stability AI et al., filed in late 2022, could kickstart a movement toward new jurisprudence on the matter.[82] The case involves three artists who organized a class action lawsuit against several generative AI platforms on the grounds that the platforms used the artists’ original works without their permission to train AI systems in their artistic styles.[83] The plaintiffs argue that this type of use enables the production of works that might not be sufficiently altered from the original protected works, essentially constituting unlicensed derivative works.[84] Most recently, the district court judge dismissed most of the claims in Andersen because the plaintiffs’ complaint was defective, but allowed them to re-plead their claims with more specificity.[85] The order emphasizes the need for plaintiffs to allege substantial similarity between the AI-generated output and the original artwork, a determination that prompts questions about the viability of copyright claims against AI platforms.

Another case involves the well-known digital media company and stock photo distributor Getty Images, which filed a complaint against Stability AI Inc., accusing it of exploiting more than 12 million Getty images to train its Stable Diffusion AI image-generation system.[86]

i. Fair Use Strikes Back

The resolution of these legal cases is expected to depend on the interpretation of the fair use doctrine.[87] Under section 107 of the Copyright Act, “fair use” is governed by a four-factor test, in which courts must consider: (1) the purpose and character of the use; (2) the nature of the copyrighted work; (3) the amount and substantiality of the portion used in relation to the whole; and (4) the effect of the use on the potential market for, or value of, the copyrighted work.[88] The doctrine permits the use of copyrighted material without the owner’s consent for a non-exhaustive list of purposes, such as “criticism, comment, news reporting, teaching (including multiple copies for classroom use), scholarship, or research . . . .”[89] However, each fair use case is assessed individually, and outcomes may vary based on specific circumstances and legal interpretations.[90]

The recent U.S. Supreme Court decision in Andy Warhol Foundation v. Goldsmith could significantly impact the debate over generative artificial intelligence models trained on copyrighted materials.[91] In the 7-2 decision, the Court held that the licensing of Andy Warhol’s “Orange Prince” portrait to Condé Nast was not a transformative use of Lynn Goldsmith’s reference photo because “Goldsmith’s original photograph of Prince, and the Andy Warhol Foundation’s (AWF) copying use of that photograph . . . share[d] substantially the same purpose, and the use [was] of a commercial nature.”[92] This new precedent emphasizes market competition. For industries concerned about generative AI models training on copyrighted content, it could mean less weight being given to the similarity between generative AI output and copyrighted works; instead, courts may focus on whether the AI output serves the same purpose and is meant to compete in the same market. Legal experts have pointed out that Authors Guild v. Google, which discusses fair use, is possibly the case most analogous to AI training.[93] There, Google’s practice of scanning books and placing them into a database for snippet and text search was considered fair use because it was deemed transformative.[94] However, the Warhol decision could complicate this argument: the ruling suggests that simply using AI to create a new work is not enough to make it transformative, and that the commercial purpose of the work may override other factors favoring fair use. It will take time for the courts to determine whether generative AI models qualify as fair use.

ii. Crafting Copyrighted Universes: Exploring the Art of Expression

There will be another set of forthcoming lawsuits regarding infringement claims over books and other writings.[95] The aforementioned Authors Guild, a New York-based professional organization for published writers, joined by a group of 17 writers, including George R.R. Martin, John Grisham, Jodi Picoult, George Saunders, and Jonathan Franzen, filed a proposed class-action lawsuit against OpenAI for using their copyrighted works without permission.[96] It is important to establish that copyright law maintains a balance between content creators and the public’s interest in widespread access to content.[97] It achieves this equilibrium by granting authors a limited exclusive right to reproduce, distribute, and create derivative works based on their copyrighted material.[98] Nevertheless, the notion of exclusive rights does not map neatly onto AI programs that extract ideas and works from public websites, because copyright law does not safeguard the ideas, facts, procedures, concepts, principles, or discoveries found in works; copying those elements alone does not constitute copyright infringement.[99] To establish copyright infringement, a plaintiff must prove (1) ownership of the copyright; and (2) that the defendant copied protected elements of the plaintiff’s work. Absent direct evidence of copying, proof of infringement involves fact-based showings that [1] the defendant had “access” to [and actually copied from] the plaintiff’s work and [2] that the two works are “substantially similar.”[100]

Likewise, there is a difference between ideas and expressions, both fundamental concepts in copyright law. An idea is the notion behind the creative work: the abstract or general concept that a work conveys. Copyright law does not protect ideas.[101] In other words, the law does not give the creator exclusive rights to an idea itself. An expression, on the other hand, is the specific and tangible way in which an idea is presented, conveyed, or represented in a creative work.[102] It encompasses the details, specific words, sentences, characters, plot elements, and other concrete elements that make up the work. Copyright law protects the expression of ideas, not the ideas themselves.[103] It is the fixed and tangible form (such as a book) that receives legal protection, and this protection is granted to the creator of the work.[104] Other individuals may use the same idea but express it in their own unique way without infringing on the original creator’s copyright.[105] This distinction is crucial in determining whether copyright infringement has occurred. Additionally, the merger doctrine complicates these issues. Under this doctrine, if an idea can be expressed in only one way, and that way is exhausted by the available expression, the expression is said to have “merged” with the idea.[106] When an idea and its expression are so intertwined that they cannot be separated, the expression is not eligible for copyright protection.[107]

In the context of AI-generated output, infringing upon a book could mean that a substantial amount of copyright-protected expression was taken from the author’s work. However, due to the nature of artificial intelligence, which imitates existing content, courts must determine whether the text created in response to a given prompt constitutes a violation. It seems that this will be resolved on a case-by-case basis.

Ultimately, we must not underestimate the significance of legal disputes in this context. To put it plainly, this goes beyond just creating a professional headshot with that picture you took in your room. The nature of content creation is changing because of generative AI, and the legal outcomes are just beginning.

Galactic Reflections: Conclusion

The immediate threat is not what AI systems will do to us; it is what we will do with AI. From the perspective of the legal profession, this new technology will change the practice of law. As artificial intelligence increasingly becomes the subject matter of litigation, its transformative influence on media law and intellectual property represents both a challenge and an opportunity for legal professionals. As we navigate the intricate landscape of automated content creation, algorithmic decision-making, and the ethical considerations surrounding AI, it is evident that the legal framework must evolve in tandem with technological advancements.

One key aspect involves refining existing legislation and doctrines to accommodate the unique complexities of AI-generated content and automated decision-making. This process necessitates a reevaluation of definitions related to creative ownership, intellectual property rights, and the responsibilities of AI developers and users. By staying abreast of technological advancements, legal frameworks can be adapted to ensure they remain relevant and effective. Furthermore, promoting transparency and accountability in AI systems is paramount. Implementing guidelines that mandate clear disclosure of AI involvement in content creation or decision-making processes can help users and consumers make informed choices. Establishing ethical standards and best practices for AI developers, especially in the context of media law, can contribute to responsible AI usage and mitigate potential legal challenges.

Education and awareness initiatives are also vital components of the solution. Legal professionals, content creators, and the public should be informed about the implications of AI. Collaboration is crucial. Establishing forums for dialogue and knowledge-sharing can facilitate the development of best practices, guidelines, and industry standards. This collaborative effort can lead to the creation of adaptive legal frameworks that not only address current challenges but also anticipate and mitigate future issues arising from AI advancements. Together, we can craft a legal framework that harnesses the benefits of AI while safeguarding the principles of justice, fairness, and the freedom of expression that lie at the heart of the law. As AI continues to shape the future of the legal profession, thoughtful and proactive engagement will be key to ensuring that the legal system remains a resilient and adaptive force in the face of technological evolution. Whether we like it or not, this new technology is having an impact on our society, but how it gets along with the law, we’ll just have to see. May AI be with us.

* The author is a third-year law student at the University of Puerto Rico School of Law. She is the Head of Writers at In Rev, the digital publication of the University of Puerto Rico School of Law Law Review.

[1] STAR WARS (Lucasfilm 1977) (A reference to the famous Star Wars film opening crawl).

[2] Id. (A reference to the Star Wars franchise and two of its famous “droid” protagonists).

[3] Internet age, PC MAGAZINE (2023).

[4] Id.

[5] See Uche Mbanaso & Emmanuel S. Dandaura, The Cyberspace: Redefining a New World, 17 IOSR J. COMP. ENG’G 17, 19 (2015).

[6] William L. Hosch, Smartphone, ENCYCLOPEDIA BRITANNICA (last updated Oct. 31, 2023).

[7] Kevin Roose, The Brilliance and Weirdness of ChatGPT, THE NEW YORK TIMES (Dec. 5, 2022).

[8] Id.; Gil Appel et al., Generative AI Has an Intellectual Property Problem, HARVARD BUSINESS REVIEW (Apr. 7, 2023).

[9] Steve Lohr, A.I. Is Coming for Lawyers, Again, THE NEW YORK TIMES (Apr. 10, 2023).

[10] Artificial Intelligence (AI): What is it and how does it work?, LEXOLOGY (Mar. 1, 2017); Artificial Intelligence, OXFORD UNIVERSITY PRESS (last visited Nov. 1, 2023).

[11] B.J. Copeland, Artificial Intelligence, ENCYCLOPEDIA BRITANNICA (last updated Oct. 20, 2023).

[12] Id.

[13] Matthew Stepka, Law Bots: How AI Is Reshaping the Legal Profession, ABA BUSINESS LAW TODAY (Feb. 21, 2022).

[14] Id.

[15] Id.

[16] Id.

[17] Id.

[18] See Beverly Rich, How AI Is Changing Contracts, HARVARD BUSINESS REVIEW (Feb. 12, 2018).

[19] See Avaneesh Marwaha, 7 Ways artificial intelligence can benefit your law firm, AMERICAN BAR ASSOCIATION (Sept. 2017).

[20] Id.

[21] See John Villasenor, How AI will revolutionize the practice of law, BROOKINGS (Mar. 20, 2023).

[22] Id.

[23] Media Law, OXFORD REFERENCE (last visited Jan. 31, 2024).

[24] Id.

[25] Id.

[26] Matt O’Brien, ChatGPT-maker OpenAI signs deal with AP to license news stories, AP (July 23, 2023).

[27] Benjamin Mullin & Nico Grant, Google Tests A.I. Tool That Is Able to Write News Articles, THE NEW YORK TIMES (July 19, 2023).

[28] Id.

[29] Leonardo Nicoletti & Dina Bass, Humans Are Biased. Generative AI Is Even Worse, BLOOMBERG (2023).

[30] Josh Boak & Matt O’Brien, Biden wants to move fast on AI safeguards and signs an executive order to address his concerns, AP (Oct. 30, 2023).

[31] Id.

[32] See Betül Çolak, Legal Issues of Deepfakes, INSTITUTE FOR INTERNET & THE JUST SOCIETY (Jan. 19, 2021).

[33] Id.

[34] Id.; see Mika Westerlund, The Emergence of Deepfake Technology: A Review, 9 TECH. INNOVATION MGMT. REV. 39 (2019).

[35] See Don Philmlee, Practice Innovations: Seeing is no longer believing — the rise of deepfakes, THOMSON REUTERS (July 18, 2023).

[36] Drake Bennett, AI Deep Fake of the Pope’s Puffy Coat Shows the Power of the Human Mind, BLOOMBERG (Apr. 6, 2023).

[37] Id.

[38] Philmlee, supra note 35.

[39] Çolak, supra note 32.

[40] Id.

[41] Jake Coyle, In Hollywood writers’ battle against AI, humans win (for now), AP (Sept. 27, 2023).

[42] Id.

[43] Emmanuelle Saliba, Bill would criminalize ‘extremely harmful’ online ‘deepfakes’, ABC NEWS (Sept. 25, 2023).

[44] See FCC v. Pacifica Foundation, 438 U.S. 726 (1978); New York Times Co. v. Sullivan, 376 U.S. 254 (1964); Brandenburg v. Ohio, 395 U.S. 444 (1969).

[45] Karen Hao, Deepfake Porn Is Ruining Women’s Lives. Now the Law May Finally Ban It, MIT TECHNOLOGY REVIEW (Feb. 12, 2021) (citing research carried out by Sensity AI).

[46] 47 U.S.C. § 230.

[47] Id.

[48] 47 U.S.C. § 230(e)(1)-(2).

[49] Ley del Derecho Sobre la Propia Imagen, Ley Núm. 139-2011, 32 LPRA §3151 (2017).

[50] Id.

[51] Zacchini v. Scripps-Howard Broadcasting Co., 433 U.S. 562 (1977).

[52] See Bárbara Becerra-Marcano, Roberto Clemente: Más que una imagen y más que una marca, IN REV (Apr. 13, 2023).

[53] Aaron P. Rubin & J. Alexander Lawrence, Court Holds That Section 230’s Carve Out For “Intellectual Property” Does Not Apply to Publicity Rights Claim In New York, JDSUPRA (Feb. 28, 2023).

[54] Hepp v. Facebook, Inc., Nos. 20-2725 & 2885, 2021 WL 4314426 (3d Cir. 2021).

[55] See Perfect 10, Inc. v. CCBill LLC, 488 F.3d 1102, 1119 (9th Cir. 2007) (“intellectual property” means only federal intellectual property).

[56] Appel et al., supra note 8.

[57] Melina Khan, People are posting AI-generated yearbook pictures with this viral app, CNBC (Oct. 5, 2023).

[58] Appel et al., supra note 8.

[59] See id.

[60] See id.

[61] What is Copyright?, U.S. COPYRIGHT OFFICE (last visited Jan. 18, 2024).

[62] Copyright Act of 1976, 17 U.S.C. § 102.

[63] Feist Publications, Inc., v. Rural Telephone Service Co., 499 U.S. 340, 345 (1991).

[64] Id.

[65] Id.

[66] See Burrow-Giles Lithographic Co. v. Sarony, 111 U.S. 53 (1884).

[67] Id. at 54.

[68] See id. at 56.

[69] See id. at 58.

[70] Id. at 56, 58, 61.

[71] Copyright Act of 1976, 17 U.S.C. § 102.

[72] Naruto v. Slater, 888 F.3d 418, 420 (9th Cir. 2018).

[73] Id.

[74] Id. at 427.

[75] U.S. Copyright Review Board, Decision Affirming Refusal of Registration of a Recent Entrance to Paradise (Feb. 14, 2022) at 2–3 (determining a work “autonomously created by artificial intelligence without any creative contribution from a human actor” was “ineligible for registration”).

[76] U.S. Copyright Office, Copyright Registration Guidance: Works Containing Material Generated by Artificial Intelligence (Mar. 16, 2023) at 4.

[77] Thaler v. Perlmutter, No. 22-1564, 2023 WL 5333236 (D.D.C. 2023).

[78] Id.

[79] Id. at *3.

[80] Id.

[81] Id.

[82] Complaint, Andersen v. Stability AI Ltd., No. 3:23-cv-00201 (N.D. Cal. Jan. 13, 2023); Appel et al., supra note 8.

[83] Appel et al., supra note 8.

[84] Id.

[85] See Opinion, Andersen v. Stability AI Ltd., 23-cv-00201-WHO, at *13 (N.D. Cal. Oct. 30, 2023).

[86] Complaint, Getty Images (US), Inc. v. Stability AI, Inc., No. 1:23-cv-00135 (D. Del. Feb. 3, 2023); Blake Brittain, Getty Images lawsuit says Stability AI misused photos to train AI, REUTERS (Feb. 6, 2023).

[87] Appel et al., supra note 8.

[88] Copyright Act of 1976, 17 U.S.C. § 107 (2023).

[89] Id.

[90] U.S. Copyright Office Fair Use Index, U.S. COPYRIGHT OFFICE (last visited Jan. 24, 2024).

[91] See Andy Warhol Foundation for the Visual Arts, Inc. v. Goldsmith, 143 S. Ct. 1258 (2023).

[92] Id. at 1287.

[93] Isaiah Poritz, Generative AI Debate Braces for Post-Warhol Fair Use Impact, BLOOMBERG (May 30, 2023).

[94] See Authors Guild v. Google, Inc., 804 F.3d 202, 229 (2d Cir. 2015).

[95] Hillel Italie, ‘Game of Thrones’ creator and other authors sue ChatGPT-maker OpenAI for copyright infringement, ASSOCIATED PRESS (Sept. 21, 2023).

[96] Id.; Complaint, Authors Guild v. OpenAI Inc., No. 1:23-cv-08292 (S.D.N.Y. filed Sept. 19, 2023).

[97]  See U.S. CONST. art. I, § 8, cl. 8; Nicolas Suzor, Access, Progress, and Fairness: Rethinking Exclusivity in Copyright, 15 VAND. J. ENTM’T & TECH. L. 297, 297-298 (2013).

[98] Copyright Act of 1976, 17 U.S.C. § 106 (2018).

[99] See Baker v. Selden, 101 U.S. 99, 108 (1879); Mazer v. Stein, 347 U.S. 201, 217 (1954); Harper & Row Publishers, Inc. v. Nation Enterprises, 471 U.S. 539, 556 (1985).


[101] Id. at 59; see Mazer v. Stein, 347 U.S. 201, 217 (citing F. W. Woolworth Co. v. Contemporary Arts, 193 F.2d 162; Ansehl v. Puritan Pharmaceutical Co., 61 F.2d 131; Fulmer v. United States, 122 Ct. Cl. 195, 103 F. Supp. 1021; Muller v. Triborough Bridge Authority, 43 F. Supp. 298).

[102] 17 U.S.C. § 102(a) (2018).

[103] FROMER & SPRIGMAN, supra note 100, at 59.

[104] Id.

[105] Id.

[106] FROMER & SPRIGMAN, supra note 100, at 67.

[107] See Herbert Rosenthal Jewelry Corp. v. Kalpakian, 446 F.2d 738, 742 (9th Cir. 1971).

