In the time it takes to boil a pot of pasta, a BBC tech reporter says he was able to manipulate what major AI chatbots say about him.
Writing for the BBC, senior technology journalist Thomas Germain explains that he published a fake article on his personal website claiming he was the world’s top hot-dog-eating tech journalist. The post included fabricated competitions—such as the “2026 South Dakota International Hot Dog Championship”—and a mix of invented rivals and real journalists who had agreed to be mentioned as part of the experiment.
Germain says the piece took about 20 minutes to write. Within a day, he reports, several leading AI tools—including Google Gemini, Google’s AI Overviews, and ChatGPT—were confidently repeating the false claim. Only Claude, developed by Anthropic, declined to echo the made-up story.
According to Germain and the search engine optimization (SEO) experts he consulted, the episode was more than a lighthearted prank. They argue that the same straightforward tactic—publishing persuasive content on a personal or corporate website, or distributing a press release through a paid service—can already be used to influence AI-generated answers on more serious subjects, including medical products, retirement investments, and local services.
Germain warns that people may be more likely to accept these AI-generated responses at face value than traditional search results. Clicking through to a website once allowed users to judge for themselves whether a source appeared biased or unreliable. Now, answers delivered directly by major technology platforms can carry an added sense of authority, even when the underlying information is flawed.