[Guest post] Transparency requirements in the EU Provisional Agreement on the Artificial Intelligence Act influence the copyright and wider tech debate
The IPKat has received and is pleased to host the following guest opinion by Katfriend Roya Ghafele (OxFirst), addressing the copyright implications of the upcoming EU Artificial Intelligence Act. Here’s what Roya writes:
Transparency requirements in the EU Provisional Agreement on the Artificial Intelligence Act influence the Copyright and wider Tech debate
by Roya Ghafele
With the European Parliament gearing up for elections in spring 2024, Parliamentarians were eager to get the Artificial Intelligence (AI) Act passed while they still could. Following a marathon round of negotiations, a provisional agreement on the AI Act has been reached. While the agreed text still has to be formally adopted, Europe has set the baseline for what is very likely going to be the first ever international agreement on AI. The Act bears testimony to the fact that being first is difficult.
At its core, the AI Act seeks to ensure that AI which the EU considers high-risk is strongly regulated, if not outright prohibited. Should the Act be adopted in its present form, the following applications of AI would be banned as posing unacceptable risk:
- biometric categorisation systems that use sensitive characteristics (e.g. political, religious, philosophical beliefs, sexual orientation, race);
- untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases;
- emotion recognition in the workplace and educational institutions;
- social scoring based on social behaviour or personal characteristics;
- AI systems that manipulate human behaviour to circumvent their free will;
- AI used to exploit the vulnerabilities of people (due to their age, disability, social or economic situation).
There are, however, some narrowly defined exceptions for biometric identification systems used by law enforcement, which may only be deployed where individuals are suspected of certain specific crimes named in the regulation.
While not at its core, the AI Act has also underlined Europe’s position on the role of copyright in next-generation technologies. Rightsholders requested further transparency and adequate disclosure as to what type of information AI models are fed. ‘The data AI uses often stems from European writers, musicians and other artists’, I heard from various rightsholder organisations. ‘Using such data without permission is not a matter of negotiation over licensing rates, but a matter of illegality,’ a key advocate for the publishing industry told me, for example.
AI generated Kat
The Act foresees the introduction of transparency requirements for general-purpose AI (GPAI) systems. These requirements include technical documentation, the obligation to comply with EU copyright law, and the disclosure of what data the models are ‘trained’ on.
The requirement to disseminate “detailed summaries about the content used for training” may work to mitigate the use of copyright-protected material without consent. Rightsholders may welcome this requirement, but questions remain as to whether it goes far enough to effectively prevent infringement. After all, AI cannot unlearn, and solutions would still need to be found as to how the rightsholder could be put in the same position as if the infringement had never happened, that is, how the ‘status quo ante’ could even be established for data already used.
On the other side of the argument, technology companies have underlined that implementing such disclosure requirements will be overly complicated and lengthy. Whether it is even feasible, so the technology companies argue, remains to be seen. Technology companies have also warned that transparency requirements will further hamper Europe’s own ability to leverage AI as an engine of growth and potentially harm the take-off of forward-looking technologies.
Without a doubt, further transparency obligations will constitute an increased burden on technology companies. From a transatlantic dialogue perspective, it remains to be seen how this will affect Europe’s relations with the United States. Many technology companies are US-headquartered, and it will need to be determined to what extent the disclosure requirements Europe foresees will align with the approach of the USA, its key partner in trade and international affairs in general.
At present, breaches of the regulation are foreseen to be punished by significant fines:
- €35m or 7% of global annual turnover for violations of banned AI applications,
- €15m or 3% of global annual turnover for violations of the Act’s obligations,
- €7.5m or 1.5% of global annual turnover for the supply of incorrect information.
More proportionate caps are proposed for SMEs and startups in the event of regulatory breaches.
The Act further reinforces the position of the European Commission and concentrates decision-making in Brussels. An AI Office and an AI Board are to be established, both anchored within the European Commission. The Office is supposed not only to oversee advanced AI models, but also to develop standards and enforce the regulation. Advice on foundation models will be provided by a scientific panel of independent experts. The Board is to be composed of Member States’ representatives and will act as a coordination platform and as an advisory body to the Commission. Technical expertise is to be provided by an advisory forum of stakeholders. It is planned that citizens will be able to lodge complaints about AI systems with the market surveillance authority.
The European Commission believes this Act will be “innovation friendly”.
France begs to disagree. President Macron remains concerned about France’s emerging national AI sector and fears for the EU’s competitiveness: the UK and France are currently “neck and neck” on AI development, the UK will not be subject to regulation of foundation models, and “we are all very far behind the Chinese and the Americans.”
The global value of the AI market in 2022 was just over $450Bn. By 2030 it is projected to be worth over $1,800Bn – that is simply too large a market for the EU to lose out on. It remains to be seen whether the draft Act will be subject to further modifications. From a copyright perspective, I would not be surprised if the Copyright in the Digital Single Market Directive were to be re-opened once Europe has a new European Parliament in place.
Reviewed by Nedim Malovic on Friday, December 15, 2023