An AI generated Kat
Analysis
It is evident that the recently proposed legislation does not make up for the absence of the previous AI draft law. Nor is the subject matter identical. This new draft law is entitled ‘to identify images generated by artificial intelligence published on social networks’. It consists of an explanatory memorandum and a single article. The proposed law would also impose a legal obligation on platforms to systematically implement tools capable of making such a distinction.
Thoughts
A review of the explanatory memorandum reveals that, as was the case with the previous French AI draft law, the critical nature of AI is presented in an almost caricatural way, with the technology being depicted as a source of serious threats that need to be quickly remedied. Furthermore, this draft law appears to wholly disregard the existence and implications of the AI Act (Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024), which merits recognition for its establishment of a tangible framework for AI [IPKat here]. In comparison, this brief draft law may be of limited relevance given the EU regulation. It is therefore open to criticism for its lack of ambition. Nevertheless, it would be futile to attempt to replicate the Regulation.
For this Kat, it would be prudent to consider the full range of challenges posed by AI, rather than focusing on a single, visible aspect. Adopting a more comprehensive approach would make it possible to establish a robust framework, in line with the EU regulation. For example, it is evident that AI gives rise to questions concerning its relationship with copyright. This is particularly evident in relation to concepts such as AI training [IPKat here], as well as the liability of AI model providers [IPKat on this point]. This Kat would therefore welcome a more fully developed French draft law that takes account of European advances. Such an approach would reduce the risk of legislative proposals becoming fragmented and sporadic, and lacking sufficient coherence when considered in isolation.
Surely this is a matter for the Digital Services Act and not the AI Act? Nothing in the AIA requires the publisher of synthetic audio, image, video or text content to retain the marking inserted by the provider of its generating system?
Under Article 34 DSA, VLOP providers must conduct risk assessments covering risks such as deepfakes or disinformation. Detecting AI markings and flagging content is a trivial outcome of that.
All these comments are perfectly valid, but one important background element was not mentioned.
In France, there are actually two types of draft laws.
The first is a "projet de loi", which is tabled by the government (when there is one, which hasn't often been the case lately) after quite a lot of preparatory work. The resulting law may be watered down, or may be ill-advised because the preparatory work is flawed in some substantial way, but at least the draft law underwent legal review by a number of people, including some from the Conseil d'État. Usually this legal review identifies potential conflicts with EU law in advance.
The second is a "proposition de loi", which is tabled by one or more members of Parliament on their own initiative. They may have done as much preparatory work as the government... or very little or even not at all. In fact, it is not unusual that a "proposition de loi" is merely a communication device for a member of Parliament: "problem X exists, look at how much I care about problem X: I tabled draft law Y!"
This draft law is a "proposition de loi", and very much looks like a communication device of this kind. I would not bet anything on it ever becoming law.