New French draft law on AI: Generated or not generated, that is the question

The subject of AI is pervasive to such an extent that it is impossible to avoid its influence [latest developments here, here, here or here]. Readers may recall that this Kat reported on the existence of a lacklustre draft law aimed at establishing a French copyright framework for AI [IPKat here]. It met with an unfortunate fate when the National Assembly was dissolved in June 2024. However, a new draft law on AI was published earlier this month.
An AI-generated Kat


Analysis

The recently proposed legislation is not a revival of the previous AI draft law, nor does it cover the same subject matter. This new draft law is entitled ‘to identify images generated by artificial intelligence published on social networks’. It consists of an explanatory memorandum and a single article.

The explanatory memorandum highlights the dangers posed by AI, particularly misinformation resulting from the publication of deepfakes on social networks. The draft law also seeks to impose a transparency obligation on content produced by AI. According to its drafters, it endeavours to create a clear framework in which innovation and responsibility coexist, both protecting users and strengthening trust in digital content.

Protecting users involves easily distinguishing an artificial creation from an authentic document. The underlying idea is the fight against disinformation and manipulation.

This proposed law is also accompanied by a legal obligation for platforms to systematically implement tools capable of making such a distinction.

To give concrete expression to this objective, the sole article of this draft law requires that ‘Anyone publishing on a social network an image generated or modified by an artificial intelligence system must explicitly mention its origin’, ‘that includes a clear and visible warning specifying the use of an artificial intelligence model to create or modify the image.’ This is supplemented by an obligation on online platform operators ‘to put in place technical means to detect content generated by artificial intelligence and to check that it is labelled correctly. They must also inform their users of the obligations in force and provide a reporting tool for suspicious content’.

Thoughts

A review of the explanatory memorandum reveals that, as was the case with the previous French AI draft law, the critical nature of AI is presented in an almost caricatural way, with the technology depicted as a source of serious threats that must be quickly remedied.

Furthermore, this draft law appears to wholly disregard the existence and implications of the AI Act (Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024), which merits recognition for its establishment of a tangible framework for AI [IPKat here]. In comparison, this brief draft law may be of limited relevance given the EU regulation. It is therefore open to criticism for its lack of ambition. Nevertheless, it would be futile to attempt to replicate the Regulation.

For this Kat, it would be prudent to consider the full range of challenges posed by AI, rather than focusing on a single, visible aspect. A more comprehensive approach would make it possible to establish a robust framework, in line with the EU regulation. For example, it is evident that AI gives rise to questions concerning its relationship with copyright. This is particularly evident in relation to concepts such as AI training [IPKat here], as well as the liability of AI model providers [IPKat on this point]. This Kat would therefore appreciate a more developed French draft law that takes account of European advances. Such an approach would reduce the risk of legislative proposals becoming fragmented and sporadic, and lacking sufficient coherence when considered in isolation.



Reviewed by Kevin Bercimuelle-Chamot on Friday, December 13, 2024

2 comments:

  1. Surely this is a matter for the Digital Services Act and not the AI Act? Nothing in the AIA requires the publisher of synthetic audio, image, video or text content to retain the marking inserted by the provider of its generating system?

    Under Article 34 DSA, VLOP providers must carry out risk assessments against risks such as deepfakes or disinformation. Detecting AI markings and flagging content is a trivial outcome of that exercise.

    ReplyDelete
  2. All these comments are perfectly valid, but one important background element was not mentioned.

    In France, there are actually two types of draft laws.

    The first is a "projet de loi", which is tabled by the government (when there is one, which hasn't been very often the case lately) after quite a lot of preparatory work. The resulting law may be watered down, or may be ill-advised because the preparatory work is in fact flawed in a substantial way, but at least the draft law underwent some legal review by a number of people, including some from the Conseil d'État. Usually this legal review identifies potential problems with EU law in advance.

    The second is a "proposition de loi", which is tabled by one or more members of Parliament on their own initiative. They may have done as much preparatory work as the government... or very little or even not at all. In fact, it is not unusual that a "proposition de loi" is merely a communication device for a member of Parliament: "problem X exists, look at how much I care about problem X: I tabled draft law Y!"

    This draft law is a "proposition de loi", and very much looks like a communication device of this kind. I would not bet anything on it ever becoming law.

    ReplyDelete

All comments must be moderated by a member of the IPKat team before they appear on the blog. Comments will not be allowed if they contravene the IPKat policy that readers' comments should not be obscene or defamatory; they should not consist of ad hominem attacks on members of the blog team or other comment-posters and they should make a constructive contribution to the discussion of the post on which they purport to comment.

It is also the IPKat policy that comments should not be made completely anonymously, and users should use a consistent name or pseudonym (which should not itself be defamatory or obscene, or that of another real person), either in the "identity" field or at the beginning of the comment. Current practice is, however, to allow a limited number of comments that contravene this policy, provided that the comment has a high degree of relevance and the comment chain does not become too difficult to follow.

Learn more here: http://ipkitten.blogspot.com/p/want-to-complain.html
