Copyright Challenges in the Age of AI – Part 2

Meet The Authors

Olli Pitkänen

CLO

Dr. Olli Pitkänen is an expert with extensive experience in ICT and law, leading multidisciplinary projects and providing expertise in the legal aspects of ICT, IPRs, privacy, and data as a founder of an IT law firm and an advisor to companies and the Finnish government.

Sami Jokela

CTO

Dr. Sami Jokela is a seasoned leader with 20+ years of experience in data, technology, and strategy, including roles at Nokia, co-founding startups, and leading Accenture’s technology and information insight practices.

Waltter Roslin

Lawyer

Waltter is a lawyer focusing on questions concerning data sharing, governance, privacy and technology. He is also a PhD researcher at the University of Helsinki where his research focuses on the Finnish pharmaceutical reimbursement scheme.

Copyright challenges in the age of AI - Part 2: Is the output of a generative AI system copyrightable and who is the author?

Introduction

Artificial Intelligence (AI) presents new challenges to the copyright system.

In the previous part, we analysed these challenges and showed how they appear from the perspectives of developing and using AI systems. In particular, we noted that, depending on the algorithm, a machine learning process can be copyright-relevant. If the training involves copying the creative choices made by the original author of a copyrighted work in the training data, it could violate the author’s exclusive rights. On the other hand, if the machine learning process can be considered data mining, it may fall within the limitation or exception defined in the DSM directive and therefore be lawful within the EU. Yet, if the output of a generative AI system includes copies of the works in the training data, that cannot be justified by that limitation or exception.

In this second part of our three-part posting on these challenges, we discuss the authorship of AI-generated output.

Authorship

According to copyright law, the author is the person who created the work and who originally holds the copyright to it. Copyright is often transferred directly to the employer by law, or it may be transferred to a publisher, for example, but the original creator is still regarded as the author. As the role of AI grows, the question arises: who is the author of a work that has been influenced by AI?

As discussed in the previous part, to qualify for copyright protection, a work must be original, i.e. the author’s own intellectual creation. The author must have made creative choices in creating the work.

If AI is used in a process that produces something that is perceived as creative, there are several possibilities as to where the creativity comes from.  

First, the AI user can use the system in a creative way. This is often the case today, for example, when a software product such as Adobe Photoshop contains several tools incorporating AI technology. When editing images in Photoshop, these AI tools can enable complex editing, but in most cases the choices are still made by the human user of the software.

Second, the software developer may have made creative choices that affect the output of the system in such a way that it seems creative no matter how it is used. 

Third, an AI system can be based on a general model trained using data that includes creative works such as newspaper articles, novels, compositions, pictures or paintings. Likewise, data used to refine a model for a specific application may contain similar works. As discussed in the first part, it is possible that the creative choices made for these works are also reflected in the output produced by the system. Thus, the creativity of the output can be traced back to the original authors of the training material. 

So, an AI output that appears creative may in fact be the result of creative choices made by different actors. It is often a combination of several individuals’ creativity, which copyright law calls joint authorship. Unless the authors have assigned their rights or agreed otherwise, the consent of all of them is required for commercial exploitation of the joint work, which can be quite complex.

Currently, most definitions of originality require a human author. Thus, AI cannot at the moment be considered an author. Yet a good question is whether an automatically generated work should be copyrightable in the first place. While exclusive rights such as copyright can motivate people and companies to research, create and invent new, useful goods, they can also set up monopolies that are harmful to society and the economy. Therefore, careful consideration should be given to whether granting exclusive rights to automatically created works is societally desirable. If it is, who should get the copyright?

Some believe that one day it will be necessary to grant rights to AI itself. This would probably require some AI systems to achieve legal personhood. However, at the moment it is very difficult to see what problems this would solve.

Observable originality

One of the key principles behind the copyright system is that, unlike patents, copyright does not require filing an application or an official examination process. Instead, everybody should be able to evaluate a work and, just by observation, assess whether it is original and thus copyrightable. In practice, that can be difficult even for an expert, but in principle anyone can read a text or listen to a piece of music and tell whether it is copyrightable or not.


Ernest Hemingway: Across the River and Into the Trees

Technological development has already challenged that. From a photograph, it can be impossible to tell whether the photographer made creative choices while producing the image or whether the picture is merely a random snapshot. The application of AI makes it ever more difficult to evaluate the copyrightability of a work only by observing the end result. For example, if an image were a painting by a human artist, it would be protected by copyright, but if it were automatically generated by an AI system, it would not. The problem is therefore that in the future we will need more information about how works are created to assess their copyrightability, which seriously challenges a basic principle of the copyright regime. Will we need a new intellectual property right between copyright and patent that would protect originality and creativity like copyright, but would require an application-based prior prosecution before an authority officially grants the right?


Reijo Keskikiikonen: Tommi, Copyright Council 2016:4, https://okm.fi/lausunnot-2016

Is this a copyrightable work – or merely a random snapshot? Did the photographer make creative choices? It is impossible to tell if we don’t know how the picture was produced. In this case, the Copyright Council considered that the key element of the photograph is the successful timing of its taking, but that this alone is not sufficient to make the picture independent and original. Therefore, it is not copyrightable. However, the photograph still enjoys more limited protection under the photographer’s neighbouring right in Finnish copyright law.

Neighbouring rights

Another problem area of the copyright system in relation to technological developments is neighbouring rights. They are similar to copyright but are to some extent weaker rights and do not require a threshold of originality to be exceeded. In Europe, for example, the Copyright Term Directive (Directive 2006/116/EC) allows Member States to provide for the protection of photographs other than those that are sufficiently original to qualify for copyright protection. Therefore, in many European countries, non-original photographs are protected by a neighbouring right, often called the photographer’s right. Similarly, producers of sound and image recordings – music, television and film – are granted protection for their recordings, but producers of games or events, for example, are not. Anyone who takes a photograph obtains a right in it, but a drawing must be original before it is protected. It often seems quite arbitrary when something is protected by a neighbouring right and when it is not. If anything, a common factor behind the complexity of neighbouring rights might be a tendency to protect investments in intellectual property. [1]

The main difference between copyright and related rights is that the latter do not require creative choices. The development of artificial intelligence makes these limited rights even more ambiguous, as digital convergence blurs the boundaries between artificial classifications: is there any difference between a photograph edited by AI and a non-photograph produced by AI? Why should some be protected and others not? 

The Anglo-American copyright tradition tends to emphasise the economic aspects of copyright, such as the right of authors to benefit financially from their creativity and the relatively wide scope for employers to obtain rights to works created by their employees (“contract for services” in the UK or “work for hire” in the US). In the continental European tradition, particularly in France, more emphasis is placed on the droit d’auteur, i.e. the moral rights of the original author to his creation. Most jurisdictions fall somewhere between these two extremes and seek to balance the interests of creative individuals and paying commissioners.  

As noted above, at least some neighbouring rights seem to protect, in particular, investments in intellectual property. The economic aspects of copyright may have a somewhat similar rationale: an author who has invested time, skill and creativity, or an employer who has paid a salary to an employee, should benefit from the work. It could therefore be desirable to move the copyright system in the direction of giving the party who has invested in the AI system the rights to works created with its help.

Conclusions

To conclude this second part, the results produced by an AI system may well fall outside the scope of copyright protection, because the author must be human. However, the originality of the results of a generative AI system can be traced back to the creative choices made by the user, the software developers, the authors of the works in the training material – or any combination of these. Some neighbouring rights, such as photographer’s or producer’s rights, may also apply to works produced by an AI system. 

In the third part, we’ll present our ideas on copyright and other rights in AI models. 

1001 Lakes’ experts are happy to discuss these topics with you if you have concerns about AI and copyright or about how to develop and use AI in compliance with copyright law.

[1] Pitkänen, O.: Mitä lähioikeus suojaa? [What is protected by neighbouring rights?] Lakimies 5/2017, p. 580–602.



Copyright challenges in the age of AI - Part 1: Can a copyright holder’s exclusive right to make copies prevent AI developers from using copyrighted works in training data?

Introduction

Artificial Intelligence (AI) presents new challenges in many legal areas. One of those areas is the copyright system, which was developed for a quite different world and era. Companies and other actors developing or applying AI systems face difficulties when trying to comply with copyright law. These difficulties arise especially in three areas:

  • Can a copyright holder’s exclusive right to make copies prevent AI developers from using copyrighted works in training data,  
  • Is the output of a generative AI system copyrightable and who is the author if AI is employed in the areas that have traditionally required human creativity, and 
  • Are AI models copyrightable?  

In this first part of the three-part posting, we analyse the right of copyright holders to prevent AI developers from using copyrighted works in training data, in particular from the perspective of EU law. 

Image: A scale balancing copyright symbols against AI symbols, with a question mark in the center. Caption: "Balancing Copyright Protection and AI Development: A Legal Dilemma."

Exclusive rights to prevent training AI 

Creative works are protected by copyright, which is governed by national laws, EU directives, and international treaties. Anything that is original and expressed in some form is protected by copyright. The work does not need to be registered or marked with a copyright notice (e.g. the © symbol), nor does it need to be artistic. The original subject matter must be the author’s intellectual creation, and only the elements that are the expression of such creation are copyrighted. The author must have made creative choices while making the work.1

For example, writing a longer text like a novel typically involves creative choices, as the author chooses which words to use and in which order to put them. On the other hand, no single word of that work is copyrightable on its own. Therefore, larger texts and even longer extracts often include enough originality to be copyrighted, but a single word or a few words taken out of the text are not.

What does this mean from the AI perspective? In machine learning, statistical models are trained using large amounts of data, e.g. text or images. The model then includes information on the probabilities of collocations of different words or elements of an image. To be more exact, in relation especially to Large Language Models (LLMs), the original training text is replaced with tokens (unique numeric representations of words or word pieces), after which the model is trained to predict the most likely next token. When the model is used, a prompt text is given as an initial context, which is then used similarly to predict the following sequence of tokens. Finally, those tokens are converted back into words and sentences. Using such a model, a generative AI system can, for example, produce texts or images that resemble those created by human authors.
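As a rough illustration of the token-level view described above (not any particular vendor’s implementation), the sketch below shows how text can be mapped to numeric token IDs and how generation is simply repeated prediction of the next token. The toy vocabulary and the stand-in "model" are invented for this example.

```python
# A toy illustration of tokenisation and next-token prediction.
# The vocabulary and the "model" below are made up for illustration;
# real LLMs use learned sub-word vocabularies and neural networks.

TOY_VOCAB = {"the": 0, "cat": 1, "sat": 2, "on": 3, "mat": 4, ".": 5}
ID_TO_WORD = {i: w for w, i in TOY_VOCAB.items()}

def tokenize(text: str) -> list[int]:
    """Replace each known word with its numeric token ID."""
    return [TOY_VOCAB[w] for w in text.lower().split() if w in TOY_VOCAB]

def toy_next_token(context: list[int]) -> int:
    """Stand-in for a trained model: pick the 'most likely' next token.

    Here we simply follow a fixed phrase; a real model would compute a
    probability distribution over the whole vocabulary from the context.
    """
    phrase = [TOY_VOCAB[w] for w in ["the", "cat", "sat", "on", "the", "mat", "."]]
    return phrase[len(context) % len(phrase)]

def generate(prompt: str, max_new_tokens: int = 5) -> str:
    """Greedy generation: repeatedly append the predicted next token."""
    context = tokenize(prompt)
    for _ in range(max_new_tokens):
        context.append(toy_next_token(context))
    return " ".join(ID_TO_WORD[i] for i in context)

print(generate("the cat"))  # prints "the cat sat on the mat ."
```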

 

 

From the point of view of copyright, the first question is whether anything copyright-relevant happens in this process. Merely reading text or looking at pictures does not infringe copyright. Similarly, copying individual words or their tokens does not infringe copyright because, as noted above, individual words are not copyrightable. Copying larger passages of text or a whole image, however, may infringe copyright. Thus, training a model may or may not infringe copyright, depending on the training algorithm: whether the training involves copying the author’s creative choices or merely analysing the distances between individual words.

A typical, slightly simplified machine learning process consists of reading the text, stripping out unimportant characters, and converting the result into a series of tokens. The results are then typically stored as token vectors for the learning process, which is repeated multiple times over. Alternatively, the material is first stored as is and converted on the fly during learning, but this is a much less efficient approach. It is likely that the token vectors also include the results of the creative choices that the original author has made. Therefore, still depending on the algorithm, it is plausible that a machine learning process makes copies of original works and is therefore relevant from the copyright perspective.
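Purely as an illustrative sketch of the simplified pipeline just described (the cleaning rule and the tokeniser are made up, not taken from any real training system), the snippet below shows where the training text ends up stored as token vectors:

```python
import re

def clean(text: str) -> str:
    """Strip characters that are typically unimportant for training."""
    return re.sub(r"[^a-z0-9\s]", " ", text.lower())

def to_token_ids(text: str, vocab: dict[str, int]) -> list[int]:
    """Convert cleaned text into a series of numeric token IDs."""
    return [vocab.setdefault(w, len(vocab)) for w in text.split()]

def build_training_vectors(documents: list[str]) -> tuple[list[list[int]], dict[str, int]]:
    """Read each document, clean it, and store it as a token vector.

    Note: the token vectors preserve the order of the author's words,
    which is why this stored representation can be copyright-relevant.
    """
    vocab: dict[str, int] = {}
    vectors = [to_token_ids(clean(doc), vocab) for doc in documents]
    return vectors, vocab

vectors, vocab = build_training_vectors(["The cat sat on the mat.", "A dog barked."])
print(vectors)  # [[0, 1, 2, 3, 0, 4], [5, 6, 7]]
```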

At the time of writing this, The New York Times has just sued OpenAI and Microsoft for copyright infringement. In one example of how AI systems use The Times’s material, the media house claimed that ChatGPT reproduced almost verbatim results from Wirecutter, The Times’s product review site.2 OpenAI, on the other hand, denies this. The company says it has measures in place to limit inadvertent memorization and prevent regurgitation in model outputs.3 We don’t yet know how the dispute will end, but if The Times is right, OpenAI’s ChatGPT likely infringes copyright. It would be difficult to understand how the software output could contain “almost verbatim” copies of the training data if they were not copied into the model first. On the other hand, if OpenAI is right, it is much harder to tell whether anything copyright-relevant is happening in the process.
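We do not know what tests the parties themselves use; purely to illustrate what “almost verbatim” reproduction means technically, the sketch below measures how much of a model output is covered by long word sequences that also appear in a source text. The 8-word window and the threshold in the usage comment are arbitrary assumptions.

```python
def word_ngrams(text: str, n: int = 8) -> set[tuple[str, ...]]:
    """All n-word sequences in a text (case-insensitive)."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def verbatim_overlap(output: str, source: str, n: int = 8) -> float:
    """Fraction of the output's n-grams that also appear in the source.

    A value near 1.0 suggests near-verbatim reproduction; a value near 0.0
    suggests the output does not copy long passages from this source.
    """
    out_ngrams = word_ngrams(output, n)
    if not out_ngrams:
        return 0.0
    src_ngrams = word_ngrams(source, n)
    return len(out_ngrams & src_ngrams) / len(out_ngrams)

# Hypothetical usage: compare a model output against an article it may have memorised.
# score = verbatim_overlap(model_output, original_article)
# if score > 0.5:  # arbitrary threshold
#     print("Large portions appear to be copied nearly verbatim.")
```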

Image: A snippet of binary code forming a copyrighted work (like a book or image). Caption: "Decoding Copyright: Can AI Translate Protected Works into Innovation?"

Exceptions to allow training 

The second question is: if training a model is copyright-relevant, is there an exception or limitation in copyright law that would still allow the training?

Copyright law grants authors strong exclusive rights, e.g. the rights to copy, modify, sell, and display the work. Attempts have been made to balance these rights with exceptions and limitations, which vary from country to country. Often they are enumerated in a copyright statute, but in the USA, for example, they are part of the fair use doctrine, an open-ended limitation on copyright. Typical exceptions cover acts of reproduction by libraries, educational establishments, museums or archives; ephemeral recordings made by broadcasting organizations; illustration for teaching or research purposes; uses for the benefit of handicapped persons; making current events available to the public; and citation or caricature. In particular, in many countries it is legal to make copies of copyrighted works for private use. Recently, in Art. 4 of the DSM directive4, the EU has required that member states provide for an exception or limitation to copyright for reproductions and extractions of lawfully accessible works for the purposes of text and data mining, unless the use of the works has been expressly reserved by their rightsholders in an appropriate manner. Text and data mining by research organisations and cultural heritage institutions cannot be restricted by such a reservation (Art. 3).
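The Directive leaves open what counts as an “appropriate manner” of reserving rights in practice. Purely as a hedged sketch, one machine-readable signal that crawlers commonly respect is robots.txt; the snippet below uses Python’s standard urllib.robotparser to check it before fetching a page for text and data mining. This is only one possible signal, not a statement of what Art. 4 legally requires, and the crawler name is invented.

```python
from urllib.robotparser import RobotFileParser

def may_crawl(page_url: str, robots_url: str, user_agent: str = "ExampleTDMBot") -> bool:
    """Check a site's robots.txt before fetching a page for text and data mining.

    robots.txt is only one possible machine-readable signal; rightsholders may
    reserve TDM rights in other ways (e.g. terms of service or dedicated
    reservation protocols), so a positive result here is not the whole answer.
    """
    parser = RobotFileParser()
    parser.set_url(robots_url)
    parser.read()  # fetches and parses robots.txt
    return parser.can_fetch(user_agent, page_url)

# Hypothetical usage:
# allowed = may_crawl("https://example.com/article", "https://example.com/robots.txt")
# if not allowed:
#     print("Access reserved - skip this page for training data.")
```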

It should be noted that the copyright holder’s exclusive right is the main rule and that exceptions and limitations should be interpreted narrowly. Therefore, the exception or limitation on text and data mining should not be interpreted more broadly than it is explicitly expressed in the Directive. An interesting question is whether text and data mining in this context includes machine learning training processes. Art. 2 defines ‘text and data mining’ as any automated analytical technique aimed at analysing text and data in digital form in order to generate information which includes but is not limited to patterns, trends and correlations. Most experts seem to agree that this definition also covers machine learning. At the time of writing this, we do not yet have the final wording of the AI Act, but based on current drafts, it appears that the AI Act will include a clarification that the data mining exemption in the DSM Directive applies to the training of AI. Therefore, although we cannot be sure until the European Court of Justice (ECJ) takes a stand on the issue, we presume that using copyrighted works to train artificial intelligence is allowed in accordance with DSM directive Art. 3 and 4.

From that perspective, training a model with data that includes copyrighted works would be lawful unless the use of the works has been expressly reserved by the rightsholders. However, that does not make it legal to develop a generative AI system that generates copies of copyrighted works. Making unauthorized copies does not become legal just by claiming that the copy machine includes AI software!

Conclusions

To conclude our findings in this first part: depending on the algorithm, a machine learning process can be relevant from the copyright perspective. If the training involves copying the creative choices made by the original author of a copyrighted work in the training data, it could violate the author’s exclusive rights. On the other hand, if the machine learning process can be considered data mining, it may fall within the limitation or exception defined in the DSM directive and therefore be lawful within the EU. Yet, if the output of a generative AI system includes copies of the works in the training data, that cannot be justified by that limitation or exception.

In the following parts, we’ll first discuss the authorship of AI-generated content and then complete this three-part posting with ideas on copyright in AI models.

1001 Lakes’ experts are happy to discuss these topics with you if you have concerns about AI and copyright or about how to develop and use AI in compliance with copyright law.
