Google has quietly pulled an artificial intelligence search feature that offered crowdsourced health advice from non-medical professionals, a move that comes as the tech giant faces increasing scrutiny over its AI-generated health information.

Crowdsourced Advice Pulled

The feature, known as "What People Suggest," aimed to provide users with insights from others sharing similar medical experiences. Google had initially heralded its launch last year, asserting it demonstrated "the potential of AI to transform health outcomes across the globe." Karen DeSalvo, then Google's chief health officer, explained in a blog post that while people seek expert advice, they also value hearing from peers. The AI-powered tool was designed to organize "different perspectives from online discussions into easy-to-understand themes," allowing users to quickly grasp what others were saying, such as how individuals with arthritis manage exercise.

Initially rolled out for mobile users in the US, the program has since been discontinued. Three individuals familiar with the decision confirmed the feature's removal, with one stating simply, "It's dead." A Google spokesperson confirmed the scrapping of "What People Suggest," attributing it to a "broader simplification" of its search page. The spokesperson maintained the decision had no connection to the quality or safety of the feature itself.

Mounting Scrutiny on AI Health

But the removal unfolds against a backdrop of mounting concerns regarding Google's use of artificial intelligence to deliver health information and advice to its vast user base. In January, a Guardian investigation revealed that Google's "AI Overviews" were exposing users to potentially harmful, false, and misleading health information. These AI-generated summaries appear above traditional search results on the world's most visited website, reaching an estimated 2 billion people each month.

The investigation highlighted instances where AI Overviews provided inaccurate medical details, raising alarms among independent experts. Google initially downplayed these findings, contending that its AI Overviews linked to reputable sources and consistently recommended users seek professional medical advice. Yet, within days of the Guardian's report, Google partially reversed course, removing AI Overviews for some, though not all, medical-related queries.

The Broader AI Content Debate

Google's stumbles with health AI have fed a broader debate: how far can machine-generated content be trusted at all? As AI-produced material proliferates online, consumers are increasingly demanding a reliable way to distinguish content actually written by humans.

An initiative is now underway to establish a globally recognized "AI-free" logo. Paul Yates, CEO of a company involved in the effort, says he supports the AI industry and acknowledges its potential. But he also argues that the flood of AI-generated content has inadvertently created an "economic premium" for human-made work. The premise is that clearly labelling human-made content would allow its creators to command that premium.

The push for an AI-free logo points to a growing consumer desire to verify the provenance of information, particularly in sensitive areas such as health. The open question is whether any such label can retain credibility when AI can convincingly imitate almost any form of human output.

For now, tech companies have yet to demonstrate that AI-generated health advice can earn, and keep, the public's trust.