Almost human-like. Every day, we are inundated with videos, images, animations and
voice recordings that are AI-generated. It is easy to assume that they are of human origin,
given AI’s exceptional abilities. Thus, AI’s ability to generate a diverse range of content
challenges the notion of authorship. For example, Dr. Stephen Thaler, a computer scientist,
invented an AI system dubbed the “Creativity Machine”, which autonomously created an art
piece titled “A Recent Entrance to Paradise”. Unfortunately, the US Copyright Office
rejected his copyright application, which listed the machine as the sole author, citing the lack
of a human creator.
Section 2 of the CNRA affirms this position, defining an author as a “physical person” who
created or creates work protected under Section 4. There is no wiggle room for the argument
that AI could author works. Singapore’s Copyright Act 2021 likewise requires a human
creator. In Asia Pacific Publishing Pte Ltd v Pioneers & Leaders (Publishers) Pte Ltd, the
Supreme Court of Singapore noted that the statute did not clearly define who an author was,
but that the historical context of the Act envisioned the grant of authorship rights to human
beings.3 This position is grounded in the same reasoning as Thaler v Perlmutter, No. 23-5233
(D.C. Cir. 2025), in which Dr. Thaler’s appeal against the rejection of his copyright
application was rejected because the Creativity Machine was not a human author.
Autonomous AI models. Autonomous AI is a branch of AI in which models are advanced
enough to perform tasks with limited human oversight and involvement. Some
have suggested that when AI autonomously generates works, authorship should vest in the
individual responsible for programming the work’s core elements, recognising the role
played in enabling the output. Others argue that authorship should vest in the entity that
developed the AI model, acknowledging the overarching creative input involved in
designing and implementing the AI system.4 However, while actors such as creators and
programmers play a dominant role in producing AI-created content, they do not determine its
unpredictable final form, and this problem grows more acute the more autonomous the AI
system is.
This challenges the notion of skill and judgment. Additionally, would authorship rights
still vest in a creator or programmer once a third-party user starts using the AI model? The
situation is clearer for semi-autonomous AI models.
Semi-autonomous AI models. Semi-autonomous AI models handle tasks with some
independence but require human input, oversight, or intervention at key points. Thaler’s
case concerned a fully autonomous AI system, but what about semi-autonomous ones? In
Shenzhen Tencent v Shanghai Yingxun, Guangdong 0305 Civil First Trial No 14010, a
Chinese case, the Nanshan District People’s Court held that works produced by AI
applications deserve copyright protection if the individual asserting copyright fulfils the
requirement of intellectual creativity. In Gao Yang et al. v. Golden Vision (Beijing) Film and
Television Culture Co. Ltd. et al., (2017) Jing 73 Min Zhong 797, the issue concerned the
copyrightability of AI-generated photos taken by a camera attached to a hot air balloon. The
court held that the photos were eligible for copyright protection because the plaintiff had
attached a sports camera, which was considered sufficient creative input to
qualify for protection.5 This aligns with jurisdictions like New Zealand (Section 5(2)(a) of the
Copyright Act 1994) and Ireland (Section 21(f) of the Copyright and Related Rights Act,
2000) where the person who arranges for the creation of a computer-generated work is
considered its author. Nonetheless, what amounts to sufficient human input remains unclear.
This is complicated by the “black box problem” (applicable to both autonomous and
semi-autonomous systems), which Sheikh, Prins, and Schrijvers define as a phenomenon in
which AI systems translate input into output without revealing their complex inner
workings.6 Establishing a causal relationship is difficult when one has no grasp of how the
data fed into the AI model results in the output.
On the other hand, Gaffar and Albarashdi opine that these legal frameworks recognise the
significance of the person who undertakes the essential steps for the creation of such works.
Unlike with fully autonomous AI systems, an argument can be made that there is a sufficient
causal relationship between the generated works and the human’s input, because the person
has exercised a greater level of control over what the model has produced. Furthermore, the
AI model can be perceived as a tool for bringing the author’s idea into material form,
much like an artist using a paintbrush. This argument is strengthened by Section 3 of the
CNRA, which grants an author wide latitude to reduce a work to material form “in whatever
method.”