Nick Clegg, Meta’s President of Global Affairs, elaborated on the decision in a statement. He noted that the initiative is particularly timely, as many countries around the globe are holding elections this year. He stated, “The line separating human-generated and synthetic content is increasingly becoming blurred. We are eager to understand more about how people are creating and disseminating content that is AI-generated and the ways in which these technologies are evolving. The insights we gain will contribute to the establishment of best practices in our industry.”
Meta says it is collaborating with industry partners to develop technology capable of recognizing AI-generated content. The labels applied to such content will follow industry standards and will be available in all languages.
Recent estimates suggest that approximately 20 billion AI-generated images have been uploaded to the Internet since the beginning of 2022. These include fake images of both public figures and private individuals, uploaded without their consent, as well as misinformation with political undertones intended to distort the truth.
Meta and other social media giants have long understood that they must act on this issue. Last year, the UK introduced the Online Safety Act, which makes it illegal to share fake images of an individual without their consent.
US lawmakers have previously criticized social media platforms for failing to adequately protect Internet users, arguing that legislation is required to compel these platforms to take action against the spread of fake news. It is expected that Meta’s initiative will prompt other companies to establish standards for trust and control over published information.
Meta has acknowledged that it is currently impossible to identify all AI-generated content. There will inevitably be attempts to bypass the labeling technology, but Meta has stated its intention to keep seeking methods to scrutinize the content that is uploaded. The company will also encourage users to flag AI-generated content so that appropriate labels can be added.
AI-generated images have recently become so sophisticated that their artificial nature can be difficult to discern. In January, for instance, fake images of pop star Taylor Swift, believed to have been created using AI, were uploaded to social media.
In the UK, a set of eight AI-generated images depicting Prince William and Prince Harry at King Charles’ coronation circulated on Facebook, garnering over 78,000 likes. One portrayed a seemingly emotional embrace between the brothers, following reports of a rift between them. None of the eight images was real.
Another AI-generated image depicted former US President Donald Trump after he was charged with 13 counts related to alleged election fraud.