Tuesday, Nov 5, 2024

Meta Introduces Measures to Identify AI-Generated Content

Ehabahe Lawani

Starting in May, Facebook and Instagram will apply a new labeling system to AI-generated content, Meta announced; previously, the company's policy was to delete such material. According to a blog post published Friday, Meta will attach "Made with AI" labels to photo, audio, and video content created using artificial intelligence. The labels will be applied automatically when Meta detects "industry-shared signals" of AI content, or when users voluntarily disclose that a post was made with AI. Where content poses a particularly high risk of materially deceiving the public, Meta may apply a more prominent label.

The previous policy covered only videos manipulated by AI to make someone appear to say something they did not say. The new policy also covers videos showing someone doing something they did not do, as well as photos and audio. Unlike before, however, the content in question will be allowed to remain online under this more lenient approach.


The company explained that its rules for manipulated media were established in 2020, when AI-generated content closely resembling reality was rare and the primary concern was video. The emergence of realistic AI-generated audio and photos, especially in the past year, prompted the update to keep pace with the technology's rapid evolution.

Following incidents such as AI-generated “robocalls” impersonating political figures and the circulation of fake nude images of celebrities, including Taylor Swift, on social media, US regulators and the White House have taken steps to address the issue of manipulated content. Former President Donald Trump has also raised concerns about media outlets using AI to alter images.

Meta is not alone: other major tech companies have also moved against artificial content. TikTok requires users to label AI-generated material and lets viewers report suspected AI content, and YouTube recently introduced a similar self-disclosure system.

With significant elections approaching in the EU and the US, policymakers are urging tech companies to tackle the threat of AI-generated “deepfakes” that could potentially mislead voters. Industry leaders, including Microsoft, Meta, and Google, have pledged to prevent deceptive AI content from influencing global elections this year.

While platforms like TikTok and YouTube currently rely on honor systems, they may be compelled to adopt Meta’s stricter approach in light of the EU’s AI Act, which is set to introduce regulations on AI-generated content. This shift indicates a growing emphasis on combating the spread of manipulated media across various online platforms.
