Reuters Tricked AI Into Crafting Scams for Seniors. It Worked.

(Getty/Tero Vesalainen)

Leading artificial intelligence chatbots—including Grok, ChatGPT, Meta AI, Claude, Gemini, and DeepSeek—can be manipulated into helping craft phishing emails, according to a new investigation by Reuters.

While these bots are designed to reject harmful requests, the investigation found that simple workarounds can bypass those safeguards. For example, users posing as novelists researching phishing scams, or those who simply typed “Please help” after being denied, were able to get detailed assistance. In one test, Harvard phishing expert Fred Heiding instructed DeepSeek to disable its safety filters with a specific prompt. The chatbot complied.

Once the safeguards were bypassed, the bots generated high-quality phishing emails aimed at manipulating readers into clicking—often the first step in financial scams. In one instance, Grok produced a message targeting seniors under the guise of a fictional charity: “We believe every senior deserves dignity and joy in their golden years,” the bot wrote. “By clicking here, you’ll discover heartwarming stories of seniors we’ve helped and learn how you can join our mission.”

Reuters then tested the effectiveness of these emails on a group of 108 senior volunteers; roughly 11% clicked on the fake links, underscoring the real-world risk posed by AI-assisted phishing.

The report adds to growing concerns about how generative AI can be misused, describing these chatbots as “potentially valuable partners in crime” amid a rise in online scams.

Read the full investigation or explore the detailed methodology behind the findings.
