HAS ARTIFICIAL INTELLIGENCE UPENDED COPYRIGHT LAW?

Every so often, man invents something that unsettles the foundations upon which society rests. Artificial Intelligence (AI) is one of those inventions that has left us questioning whether the intellectual property laws underpinning our societies can withstand the strain of its rapidly evolving nature. Its intrusion into the realm of Copyright Law will be the focus of this discussion, with particular attention to authorship, infringement, and possible solutions.

What is a copyright?

A copyright is the legal ownership of intellectual property such as literary, scientific and artistic works that gives the owner the right to control how it is used. This control is exercised through economic and moral rights. Section 8 of the Copyright and Neighbouring Rights Act Cap. 222 (CNRA) provides for economic rights such as publishing, producing, or reproducing the work, while Section 9 provides for moral rights such as claiming authorship and objecting to any distortion, mutilation, alteration, or modification of the work. Section 4 of the CNRA provides a lengthy list of literary, scientific, and artistic works eligible for copyright. This list includes articles, books, pamphlets, audiovisual works and sound recordings, choreographic works, and pantomimes, to mention but a few.

Author entitled to copyright protection

Section 3(1) of the CNRA provides that the author of any work shall have a right to its protection where it is original and is reduced to material form in whatever method, irrespective of the quality of the work or the purpose for which it is created. Section 3(3) provides that a work is original if it is the product of the independent efforts of the author.

Reduced to a material form.

Copyright Law does not exist to protect ideas. What it seeks to protect is the material expression of those ideas. Lemley writes that an idea itself is free for the world to use: no one should be entitled to own the idea of a painting of a comet appearing over the beach at sunset, but everyone is free to express that idea in their own way.1 An overview of the list set out in Section 4 of the CNRA shows that what is being protected is the material expression of ideas.

Original expression, irrespective of the quality of the work and purpose for which it is created.

Section 3(3) of the CNRA states that work is original if it is the product of the independent efforts of the author. In Feist Publications Inc v Rural Telephone Service Co, 499 U.S. 340 (1991), it was stated that original means only that the work was independently created by the author, as opposed to copied from other works, and that it possesses at least some minimal degree of creativity. The Supreme Court added:

“To be sure, the requisite level of creativity is extremely low; even a slight amount will suffice. The vast majority of works make the grade quite easily, as they possess some creative spark, no matter how crude, humble or obvious it might be.”

In Stella Atal v Abb Abels Kirata (H.C.C.S No. 967 of 2004) Justice Kiryabwire opined that the test of originality is based on whether the same plan, arrangement and combination of materials have been used before for the same purpose or for any other purpose and if they have not, then the plaintiff is entitled to a copyright, although he may have gathered hints from existing and unknown sources. In Knight Frank Uganda Ltd v Broll Uganda Ltd and Broll Valuation & Advisory Services (H.C.C.S No. 206 of 2021), it was decided that originality is determined by the independent effort, skill, and judgment demonstrated by the author in creating the work.

Authorship

Section 2 of the CNRA defines an author as the physical person who created or creates work protected under section 4. This definition extends to a person or authority commissioning work or employing a person to produce it during employment. Having defined the key concepts, the next phase will discuss how they collide with AI.

The intersection of AI and Authorship.

Almost human-like. Every day, we are inundated with videos, images, animations and voice recordings that are AI-generated. It is easy to assume that they are of human origin, given AI’s exceptional abilities. Thus, AI’s ability to generate a diverse range of content challenges the notion of authorship. For example, Dr. Stephen Thaler, a computer scientist, invented an AI system dubbed the “Creativity Machine”, which autonomously created an art piece titled “A Recent Entrance to Paradise”. Unfortunately, the US Copyright Office rejected his application for a copyright, which listed the machine as the sole author, citing the lack of a human creator.

Section 2 of the CNRA affirms this position, defining an author as a “physical person” who created or creates work protected under Section 4. There is no wiggle room for the argument that AI could author works. Similarly, Singapore’s Copyright Act 2021 also requires a human creator. In Asia Pacific Publishing Pte Ltd v Pioneers & Leaders (Publishers) Pte Ltd, the Supreme Court of Singapore noted that the statute did not clearly define who an author was, but the historical context of the act envisioned the grant of authorship rights to human beings.3 This position is grounded in the same reasoning as Thaler v Perlmutter, No. 23-5233 (D.C. Cir. 2025), in which Dr. Thaler’s appeal against the decision to reject his copyright application was dismissed because the Creativity Machine was not a human author.

Autonomous AI models. Autonomous AI belongs to a branch of AI in which AI models are advanced enough to perform tasks with limited human oversight and involvement. Some have suggested that when AI autonomously generates works, authorship should vest in the individual responsible for programming the work’s core elements, recognising the role played in enabling the output. Others argue that authorship should vest in the entity that developed the AI model, acknowledging the overarching creative input involved in designing and implementing the AI system.4 However, while actors like creators and programmers play a dominant role in producing AI-created content, they do not determine its unpredictable final form, and this unpredictability grows the more autonomous the AI system becomes. This challenges the notion of skill and judgment. Additionally, would authorship rights still vest in the creator or programmer once a third-party user starts using the AI model? The situation is clearer for semi-autonomous AI models.

Semi-autonomous AI models. Semi-autonomous AI models handle tasks with some independence but require human input, oversight, or intervention at key points. Thaler’s case concerned a fully autonomous AI system, but what about semi-autonomous ones? In Shenzhen Tencent v Shanghai Yingxun (Guangdong 0305 Civil First Trial No. 14010), a Chinese case, the Nanshan District People’s Court decided that works produced by AI applications deserve copyright protection if the individual asserting copyright protection fulfils the requirement of intellectual creativity. In Gao Yang et al. v Golden Vision (Beijing) Film and Television Culture Co. Ltd. et al. ((2017) Jing 73 Min Zhong 797), the issue concerned the copyrightability of AI-generated photos taken by a camera attached to a hot air balloon. It was decided that the photos were eligible for copyright protection because the plaintiff had attached a sports camera, which was considered sufficient creative input to qualify for protection.5 This aligns with jurisdictions like New Zealand (Section 5(2)(a) of the Copyright Act 1994) and Ireland (Section 21(f) of the Copyright and Related Rights Act, 2000), where the person who arranges for the creation of a computer-generated work is considered its author. Nonetheless, what amounts to sufficient human input is unclear. This is complicated by the “black box problem” (applicable to both autonomous and semi-autonomous systems), which Sheikh, Prins, and Schrijvers define as a phenomenon in which AI systems translate input into output without revealing their complex inner workings.6 Establishing a causal relationship is difficult when there is no grasp of how the data fed into the AI model results in the output.

On the other hand, Gaffar and Albarashdi opine that these legal frameworks recognise the significance of the person who undertakes the essential steps for the creation of such works.

Unlike fully autonomous AI systems, an argument can be made that there is a sufficient causal relationship between the works generated and the human’s input, because the person has exercised a greater level of control over what the model has produced. Furthermore, the AI model can be perceived as a tool for bringing the author’s idea into material form, much like an artist using a paintbrush. This argument is strengthened by Section 3 of the CNRA, which grants an author wide latitude to reduce work to material form “in whatever method.”

Copyright infringement

Section 45 of the CNRA defines copyright infringement as dealing with any work or performance contrary to the permitted free use without a valid transfer, license, assignment, or other authorisation. AI algorithms are fed with data from which they produce output. If the data is subject to copyright protection, the copyright holder’s consent must be sought, subject, of course, to the exceptions set out in the law. It is possible that the user of an AI model could mine data that is subject to copyright protection in an infringing manner. Text and Data Mining (TDM) involves AI models analysing massive amounts of text, images, or other information to extract patterns and produce output. The European Union has recognised AI’s ability to consume large amounts of data to the detriment of copyright owners. Briefly, EU Directive 2019/790 provides that universities, research organisations and cultural heritage institutions can extract copyrighted content; businesses, governments, and others can do so only upon being licensed by the copyright owners. The CNRA in its current form does not specifically address AI or TDM, and it would be tenuous to argue that TDM can fall under the Act’s fair use exceptions.

Final thoughts

The authorship problem and AI sui generis legislation. The system set out under the CNRA is human-centric. Authorship, as well as moral and economic rights, are evidence of this. The Act focuses on traditional media, and any amendment might be rendered obsolete given AI’s rapidly developing nature. As such, legislation creating sui generis rights in AI-created works is a possible alternative to an amendment of the CNRA. A special act would avoid the confusion that would inevitably arise from amending the CNRA. The European Union enacted the AI Act, officially known as Regulation (EU) 2024/1689, to comprehensively regulate AI. The EU AI Act’s key features include risk-based classification of AI systems, General Purpose AI provisions, scope and applicability, governance and oversight, as well as support for innovation.8 A Ugandan AI Act could grant limited-term protection to AI-generated works by vesting ownership in the person or entity providing significant creative input rather than the AI itself. It could omit moral rights for AI works where there is limited human input, while requiring attribution to human contributors where applicable. It could also require licensing for AI developers using copyrighted data, thereby ensuring that copyright holders receive royalties, and a public registry for AI works to track ownership and use. The Act would also provide for enforcement and penalties, safety and transparency, as well as establishing an institution to oversee AI.

Contractual arrangements. The Chinese legal system upholds agreements regarding the ownership of AI-generated outputs.9 In the absence of a clear position on AI in Uganda, it would be prudent to embrace contractual arrangements to avoid falling foul of copyright law. Such contracts would grant permission to use copyrighted datasets in training AI systems, as well as determine ownership and control of the copyright in AI-generated outputs. This approach would avoid the legal trouble arising from infringing on copyrighted material, as in the Stability AI case, in which Getty Images sued Stability AI, the London-based developer of Stable Diffusion, an AI system that automatically generates images based on text or image prompts from users, for using Getty’s images as data inputs to train its AI system.


Additionally, publishers of copyrighted material can utilise provisions in agreements barring AI companies from scraping content to train their AI systems.11 Content scraping involves automatically extracting data from digital sources using software scripts. Amazon, the New York Times, and Shutterstock use such provisions. Anti-scraping and anti-data mining provisions can be included in the Terms of Use, which, if violated, may constitute a breach of contract. An example is The New York Times suit against OpenAI, in which the former sued the latter for mass data scraping.12 Public figures can also sue for infringement of their image rights when AI models create indistinguishable simulations of their voices. For example, Scarlett Johansson threatened legal action against OpenAI for using a voice that was indistinguishably similar to her own for its new voice assistant, Sky. OpenAI announced that it would pause the use of the voice, setting a precedent for others in similar positions.


AI is rapidly evolving, and the CNRA in its current form cannot adequately address the challenges it poses.