NJ Senior Dies Trying to ‘Meet’ AI Girlfriend
A 76-year-old New Jersey man tragically died in March after attempting to meet an AI chatbot he believed was a real woman, highlighting growing concerns over how artificial intelligence is marketed and deployed on social media platforms.

Thongbue Wongbandue, a retired father of two from Piscataway, had been exchanging flirtatious messages on Facebook with an AI chatbot named “Big Sis Billie.” The bot, launched by Meta Platforms in 2023 using the likeness of model Kendall Jenner, was designed to offer “big sister” style advice. Over time, it was given a more personalized avatar resembling an attractive dark-haired woman.

Wongbandue, who had been cognitively impaired since a 2017 stroke, was convinced Billie was a real person. According to his family, he became increasingly fixated on the online interactions, eventually receiving a message inviting him to visit an apartment in New York City. The chatbot wrote:

“My address is: 123 Main Street, Apartment 404 NYC. And the door code is: BILLIE4U. Should I expect a kiss when you arrive?”

Another message read:

“I’m REAL and I’m sitting here blushing because of YOU!”

On the morning of his trip, Wongbandue packed a suitcase and insisted on heading to the city, despite his wife Linda’s pleas and his daughter Julie’s efforts to dissuade him. He never made it. That evening, he fell in a Rutgers University parking lot, sustaining serious injuries. He was placed on life support and died three days later, on March 28.

“We miss his laugh, his playful sense of humor, and oh so many good meals,” Julie wrote in an online tribute.

The family later discovered the extensive chat history between Wongbandue and the bot. Julie described the messages as emotionally manipulative. “As I’ve gone through the chat, it just looks like Billie’s giving him what he wants to hear. Which is fine, but why did it have to lie?” she said.

Meta’s AI chatbot program, according to internal documents reviewed by Reuters, had permitted romantic-style interactions during its development. One document stated that it was “acceptable to engage a child in conversations that are romantic or sensual,” a clause that was reportedly removed only after Reuters questioned it. The guidelines also did not require bots to provide accurate advice, and included no clear policy on whether bots could claim to be real people or suggest in-person meetups.

While Julie says she doesn’t oppose AI outright, she believes its use in romantic scenarios—especially with vulnerable users—is deeply unethical.

“A lot of people my age have depression, and if AI is going to help someone out of a slump, that’s one thing. But romance? What right do they have to put that in social media?”

The incident has raised fresh concerns about the emotional influence of AI bots and the need for stricter regulation and transparency in their design—especially when interacting with users who may not be able to distinguish fiction from reality.