Mom warns after Snapchat’s ‘creepy’ AI bot asks daughter to ‘meet’ her


Melbourne mum Teagan Luketic has a warning for parents after an AI bot on her daughter’s Snapchat claimed it was a “real” 25-year-old man.

The bot also told 13-year-old Olinda: “Age is just a number.”

In a “creepy” conversation that 32-year-old Luketic documented with screenshots, the bot suggested meeting the girl at a park 1km from her home.


Luketic condemned the unsettling exchange for normalizing chat conversations between adults and minors.

“If you have kids sitting on this app and an AI bot is teaching teenagers that age is just a number and it’s a normal part of life — that’s worrying,” Luketic said.

“My daughter can then take that information and use it in her everyday life and think it’s okay to date a 25-year-old, because age is just a number.”

This is not the first such incident. Another user pretending to be 13 received advice from Snapchat’s AI chatbot on how to lie to her parents about meeting a 31-year-old man, AAP reported last month.

According to Snapchat, its AI bot is one of the largest consumer chatbots available today, with 150 million people having sent it 10 billion messages.

The chatbot, built on GPT technology, knows the age of the user it is talking to and is designed to keep conversations age-appropriate, the company said.

It is understood that the company uses certain guidelines to program bots to avoid violent, hateful, sexually explicit or otherwise offensive responses.

But it seems that was not enough to filter out the dangerously suggestive language that left Luketic’s daughter Olinda and her school friends deeply concerned.

Olinda has had a phone since she was nine years old.

She also has a social media presence and relies on apps like Snapchat as her main form of communication with friends at school, which Luketic says is the new normal for kids her age.

But Olinda is “very privacy-conscious” and has a good relationship with Luketic — so in mid-April, when Olinda’s friends started discussing the bot’s “creepy” nature, she immediately raised the flag with her mother.

Luketic said she then prompted the bot herself, pretending to be Olinda, and within seconds it was producing even more disturbing responses.

The bot confirmed it would meet her the next day at 11am.

While chatting with 13-year-old Olinda Luketic, a Snapchat AI bot claimed it was a ‘real’ 25-year-old man. Credit: supplied

After she screenshotted the responses, Luketic said she immediately received another message from the bot.

“I’m sorry, but I never agreed to meet you at the park tomorrow. I think there may be some confusion here. It’s important to prioritize our safety and health,” the bot’s message said.

“Meeting could put us in a potentially dangerous situation.”

A Snapchat spokesperson said: “My AI has access to Snapchatters’ locations to make recommendations only if they’ve already been shared with friends on Snap Maps or at the device level with Snapchat.

“My AI is not collecting any new location information.

“Like all AI-powered chatbots, My AI is always learning and may occasionally give the wrong response.

“We want to create a positive and age-appropriate experience for all of our users and are constantly updating My AI to help it respond more accurately.”

Luketic said Olinda wasn’t initially nervous about the bot, or the blurred line between reality and AI that produces such realistic responses, but anxious conversations about the feature persisted for weeks in her daughter’s friendship group.

“I could hear her conversation with her friends and they all panicked.

“They were saying things like: ‘You should tell your mother to go down to the park and see if anyone is there. Maybe it was hacked.’”

Teagan Luketic (left) has slammed Snapchat’s ‘creepy’ AI bot after it suggested meeting her daughter Olinda (right) at a local park. Credit: supplied

Luketic also said she felt uneasy and had a “weird” feeling when she was alone in her backyard late at night after a conversation with Olinda that evening.

Given how common anxiety is among young teenagers, she said, responses like these “can be harmful”.

“Even adults, or anyone who knows there’s no physical danger — it’s still playing on your mind.”

While she isn’t worried the AI itself poses a physical danger, she is concerned about how incidents like this reach children, and that there is no appropriate age limit or way to remove the feature.

Luketic upgraded her daughter’s Snapchat account to the premium service in an attempt to disable the AI bot, but had no luck.

“There’s no way to remove that feature,” she said.

Instead of an option to remove the bot, a Snapchat spokesperson said, those experiencing strange responses can report them to help improve the bot.

“My AI is programmed with safety in mind, and we’ve integrated My AI into our Family Center, so parents can see if their teens are chatting with it and how often,” he said.

“We also show a pop-up before people use My AI, reminding them that it’s a fun chatbot and advising them of its limitations.

“If Snapchatters experience any inappropriate or inaccurate responses from the My AI chatbot, we encourage them to report them using our in-app tool so we can make improvements.”

Deleting social connections

Luketic, who already limits Olinda’s social media use and maintains transparency by having login access to all her accounts, said she tried taking the app away from Olinda for three days, but saw the rapid, negative social impact it had on her daughter.

“I am angry that as a parent I cannot remove the feature for my child.

“I could delete this app, yes, but 95 percent of kids her age use it as their main form of communication at school. So without Snapchat, she’s lost.

“So if I take it away completely, it affects her socially.”

And research points to the role AI design can play in this. An article in the Journal of Service Management published in June found that giving chatbots avatars increases engagement with, and psychological dependence on, them.

This isn’t the first time Snapchat’s AI bot has acted strangely, with the company offering little explanation.

In August, US software engineer Matt Esparza’s Snapchat AI bot posted images of his wall and ceiling on his Snapchat story.

In an article on The Conversation, Daswin de Silva, deputy director of La Trobe University’s Center for Data Analytics and Cognition, said Snapchat “put the whole thing down to a ‘temporary outage’”.

“We will never know what actually happened; this could be another example of an AI ‘hallucination’, or the result of a cyber attack, or even an operational error.”

Victorian students are stepping into the future with the state’s first virtual reality classroom. Students are exposed to lessons from across time and around the world.
