Recently, many users have noticed that Google Image search results include hyper-realistic AI-generated images alongside genuine photos. This is raising concerns about misleading information. For instance, when searching for pictures of baby peacocks, some users found AI images that looked nothing like real chicks.
A Reddit user also reported seeing similar inaccurate results for various baby animals. In contrast, searches for public figures like Elon Musk and Donald Trump returned accurate photos, as did searches for place names like Gaza and Tel Aviv.
Google is increasingly using AI to enhance its search results, despite past problems with accuracy and reliability. While major AI companies are working on better ways to mark AI-generated images, these markers may go unnoticed by users who are quickly scrolling through search results.
Meanwhile, Google plans to start labeling AI-generated images in search results. This means you’ll soon see a notice indicating whether an image was created or modified by AI, making it easier to differentiate between the two. However, this feature will take some time to roll out, so the current mix of AI-generated and real content will continue for now.
Some users have suggested that one way to avoid this confusion is to customize Google image searches to show only results from before 2022, when generative AI became popular.
However, this workaround would also filter out recent news images and media that users may need. The growing presence of AI-generated images in search results highlights how difficult it has become to distinguish what's real from what's not.