The EU AI Act cannot save us from deepfakes

News broke a few weeks ago that the businesswoman and singer Taylor Swift was reportedly consulting her lawyers to protect herself against a user who had created explicit images based on authentic shots of the pop star. The accuracy of these deepfakes keeps improving as generative AI systems are refined, but the real problem is the failure of some of these systems to enforce limits that would at least protect the people portrayed as far as possible. For a well-known figure, the damage ranges from being placed in compromising locations or situations to lasting harm to their dignity (it is still very difficult to completely erase images from every medium once they have been shared online); for vulnerable subjects, or those entitled to even stronger protections, the issue becomes far more serious. Petapixel revealed that it had exclusively obtained AI-generated images, available for sale (though no longer) on the Shutterstock platform, depicting minors (children in particular) in moments of extreme intimacy and in highly inappropriate poses and attitudes.

Shutterstock promptly removed the images and indefinitely banned the user who had uploaded them, but the fundamental problem is that the platform's filters, which are supposed to block inappropriate and harmful content, did not work. And if we thought that, in Europe at least, we could curb this kind of problem with the implementation of the so-called AI Act (the world's first regulation attempting to govern every aspect of artificial intelligence), we were sorely mistaken. The AI Act is indeed expected to receive final approval (barring unforeseen circumstances) around next April, although we will have to wait up to three years to see every chapter of the regulation fully implemented. The regulatory approach adopted by the EU classifies AI systems by the level of risk they present, introducing categories for unacceptable-risk, high-risk, limited-risk, and low-risk systems. Systems posing an unacceptable risk are banned outright, while the strictest requirements apply to high-risk systems, which must satisfy specific obligations, including conformity assessments.

The AI Act does not explicitly ban the creation of deepfakes or other AI-generated content that can distort reality, but it imposes rules that could limit the use and distribution of such content, especially where it poses a high risk to fundamental rights or could cause significant harm. This might include, for example, the use of deepfakes in contexts that influence public opinion, elections, or other critical aspects of society. In short, the regulation establishes a framework that could restrict such content according to the level of risk it presents, imposing specific obligations on providers of these technologies to ensure transparency, safety, and respect for fundamental rights. It remains, however, a framework that is too uncertain and generic, and inadequate to protect the images of ourselves that we regularly share on social media without too many (perhaps well-founded) fears.


