Labels and AI: How Platforms Are Trying to Distinguish Between Authors and Machines
Technology companies, including Meta and Google, are taking steps to label AI-generated video and photo content to ensure greater transparency and help users distinguish between authentic and artificially created content.
Meta has introduced labels such as "Made with AI" to flag content generated or modified by AI. Its approach combines visible markers, invisible watermarks, and metadata embedded in image files to improve the identification of artificially generated content. Meta's policies align with the technical standards of the C2PA (Coalition for Content Provenance and Authenticity) and were developed in collaboration with other tech companies and the Partnership on AI (PAI). The guidelines are published on Facebook, as are Meta's rules on manipulated content, which address the risk of misleading public opinion and the possibility of stricter review, up to removal, in the most serious cases.
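To make the metadata layer of this approach concrete, the sketch below shows how EXIF fields embedded in an image file can be read with Pillow. This is a minimal, hypothetical illustration, not Meta's actual pipeline: the file name is invented, and full C2PA provenance manifests live in dedicated containers that require purpose-built tooling (such as the open-source c2patool) rather than a generic EXIF reader.

```python
# Minimal sketch (not Meta's actual pipeline): reading the EXIF metadata
# embedded in an image file, one of the signal layers that provenance
# standards such as C2PA build on. Requires Pillow (pip install Pillow).
# "photo.jpg" is a hypothetical file name.
from PIL import Image
from PIL.ExifTags import TAGS

with Image.open("photo.jpg") as img:
    exif = img.getexif()
    if not exif:
        print("No EXIF metadata found.")
    for tag_id, value in exif.items():
        # Map numeric tag IDs to readable names, e.g. "Software"
        name = TAGS.get(tag_id, tag_id)
        print(f"{name}: {value}")
```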
Google is also addressing AI content labeling, but with a slightly different approach. It currently does not require creators to label AI-generated text, though it advises doing so when the label would be useful to users. Google emphasizes content quality regardless of how the content was produced and recommends that AI-generated material always be reviewed by human editors for accuracy and reliability. In practice, this seems to limit labeling, and with it the willingness to identify non-human-produced content, especially in writing; given the steep rise in AI-generated copy, that is a central issue for authors (SEO.ai).
The effectiveness of these labels is still up for debate. On one hand, they can help reduce misinformation. On the other, significant technical challenges remain, such as users' ability to strip metadata or manipulate images to bypass labels, and current tools do not always correctly identify all AI-generated content. The threshold is also unclear, although labeling images that look distinctly photorealistic appears to be the priority. But if we create an artwork with AI that we could not have produced on our own and fail to declare it, are we still subject to sanction simply because we raised inflated expectations about our artistic abilities?
There is also debate over platforms' technical ability to detect AI content when a file is manipulated (for example, via a screenshot) and reposted without the original's embedded markers. PetaPixel also reported a malfunction in which an image by the photographer Paré was flagged as AI-generated when it was not. In intellectual-property terms, this reverses the burden of proof, handing photographers the additional responsibility of defending their work and rights against those who wrongly classify them.
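The fragility is easy to demonstrate. The sketch below, with hypothetical file names, shows how simply re-encoding an image, which is roughly what a screenshot does, discards the embedded metadata that labeling systems rely on: Pillow, like most image libraries, does not carry EXIF over on save unless explicitly told to.

```python
# Minimal sketch of the weakness discussed above: re-encoding an image
# (roughly what a screenshot does) silently discards embedded metadata.
# File names are hypothetical. Requires Pillow.
from PIL import Image

with Image.open("labeled_photo.jpg") as original:
    print("EXIF entries in original:", len(original.getexif()))
    # Pillow does not carry EXIF/XMP over on save unless passed explicitly,
    # so the copy below loses any provenance markers stored as metadata.
    original.save("reposted_copy.jpg")

with Image.open("reposted_copy.jpg") as copy:
    print("EXIF entries in copy:", len(copy.getexif()))  # typically 0
```

Watermarks baked into the pixels survive this round trip, which is why provenance standards layer visible markers and invisible watermarks on top of metadata rather than relying on metadata alone.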
The guidelines followed by Meta and Google are influenced by various sources, including the Partnership on AI (PAI) and the C2PA. Additionally, both companies have developed their policies through public consultations and feedback from industry experts, academics, and civil society organizations.
To summarize: when is a work original, and when is it considered AI-derived (with correspondingly different treatment under intellectual property law)? Content is considered AI-generated if the entire creative process is carried out by an AI model without significant human intervention. Conversely, if an artist uses AI tools as support but contributes substantially to the final result through their own creativity and expertise, the work can be considered the artist's intellectual property.
Labeling AI-generated content is an important step toward greater transparency in the digital world, but many challenges remain. Companies and legislators will need to collaborate closely to develop standards that protect both content creators and consumers while ensuring the integrity and trustworthiness of digital content.