Google Sets the Bar for Transparency with AI-Generated Image Labeling

Google Search to start labeling AI-generated images on search engine results pages
  • Google will start labeling AI-generated and AI-edited images in search results in the coming months.
  • Google will use metadata from the Coalition for Content Provenance and Authenticity (C2PA) to flag AI-generated content.

AI image generation has surged in popularity because the tools are easy to use and produce high-quality results. As more tools emerge and output quality improves, the number of AI-generated images on the internet has grown sharply, and search results have filled with them, making it harder for users to find what they're actually looking for.

To address this, Google announced on Tuesday, September 17, 2024, that it will soon start labeling AI-generated and AI-edited images in its search results. The change will apply to Search, Google Lens, and Android's Circle to Search feature, and users will be able to see these labels through the "About this image" tool.

Google also plans to use the same technology in its ad services and may add a similar feature for YouTube videos, though it will share more details on that later in the year.

To identify AI-generated images, Google will rely on metadata from the Coalition for Content Provenance and Authenticity (C2PA), a standards group it joined earlier this year. C2PA metadata records when, where, and how an image was created. Major companies like Amazon, Microsoft, OpenAI, and Adobe have already joined the coalition, though it hasn't gained much support from camera makers, with only a few models from Sony and Leica supporting the technology. Some AI tool developers, like Black Forest Labs, have also chosen not to adopt the standard.
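For readers curious how this metadata actually sits inside an image file, the sketch below is a minimal Python heuristic, not Google's implementation and not a validating check. C2PA embeds a signed manifest in a JPEG's APP11 segments as a JUMBF box labeled "c2pa", so the sketch simply walks the file's header segments and reports whether such a segment appears to be present. The file-path argument is a placeholder; real verification, as done by the official c2pa SDKs, also parses the full manifest and checks its cryptographic signatures.

```python
# Heuristic check for an embedded C2PA manifest in a JPEG file.
# Illustrative only: it detects the carrier segment, it does NOT
# verify the manifest's signatures or contents.
import struct
import sys

def has_c2pa_manifest(path: str) -> bool:
    """Scan a JPEG's header segments for a C2PA/JUMBF marker."""
    with open(path, "rb") as f:
        data = f.read()
    if data[:2] != b"\xff\xd8":  # missing SOI marker: not a JPEG
        return False
    i = 2
    while i + 4 <= len(data):
        if data[i] != 0xFF:
            break  # lost sync with the segment structure; bail out
        marker = data[i + 1]
        if marker == 0xDA:
            break  # start of scan: metadata segments all precede this
        if marker in (0x01, 0xD8, 0xD9) or 0xD0 <= marker <= 0xD7:
            i += 2  # standalone markers carry no length field
            continue
        (length,) = struct.unpack(">H", data[i + 2 : i + 4])
        segment = data[i + 4 : i + 2 + length]
        # C2PA manifests ride in APP11 (0xFFEB) segments as JUMBF
        # boxes; the manifest store's JUMBF label is "c2pa".
        if marker == 0xEB and b"c2pa" in segment:
            return True
        i += 2 + length
    return False

if __name__ == "__main__":
    print(has_c2pa_manifest(sys.argv[1]))
```

Presence of the marker alone says nothing about authenticity or tampering; that is what the signed manifest and its certificate chain are for, and it is why provenance labeling depends on the metadata surviving edits and re-uploads intact.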

There’s been a sharp rise in online scams using AI-generated deepfakes. For example, in February, scammers used a deepfake during a video call to trick a Hong Kong financier into transferring $25 million. A study in May found that deepfake-related scams grew by 245% globally between 2023 and 2024, with the U.S. seeing a 303% increase. Experts warn that the widespread availability of AI tools has made it easier for criminals to carry out these scams. David Fairman, a security expert at Netskope, explained that criminals no longer need advanced technical skills to use these tools effectively.