In the last two years, AI chatbots have gone from a novelty to something millions of children use daily. ChatGPT, Google Gemini, Microsoft Copilot, Snapchat’s My AI, and platforms like Character.AI are now woven into how many kids do homework, explore ideas, and — increasingly — manage their emotional lives. Most parents know AI exists but haven’t thought much about how their child is actually using it.

This guide isn’t a moral panic piece. These tools have genuine benefits, and the children who learn to use them well will have real advantages. But there are specific risks worth understanding — a few of them aren’t obvious, and one of them almost never gets talked about at all.

What Are AI Chatbots?

An AI chatbot is software that generates human-like text responses to whatever you type into it. These are the main ones Australian kids are currently using:

  • ChatGPT (OpenAI) — the most widely known, used for homework help, writing, coding, and general questions
  • Google Gemini — built into Google’s apps and services many kids already use for school
  • Microsoft Copilot — built into Windows and Microsoft 365, including Word and Teams versions many schools use
  • Snapchat My AI — built directly into Snapchat, available to all users including those under 18
  • Character.AI — a platform where users create and chat with AI personas; enormously popular with teenagers
  • Meta AI — built into Instagram, WhatsApp, and Messenger

Most are free with no meaningful age verification. ChatGPT and Character.AI both state a minimum age of 13, but neither enforces it in any practical way. Kids use them for homework help, creative writing, coding, answering questions they’re embarrassed to ask a real person — and increasingly as a social and emotional outlet. That last use is the one most parents don’t expect.

How Kids Are Actually Using Them

The legitimate and genuinely useful stuff

  • Homework help and explaining concepts in different ways until something clicks
  • Creative writing and storytelling
  • Learning to code — AI is particularly good at debugging and explaining code
  • Answering questions a kid might be too embarrassed to ask a person
  • Practising for difficult conversations or job interviews
  • Getting feedback on ideas

The concerning patterns

Emotional relationships with AI personas. Character.AI allows users to build and chat with personas that have names and consistent “personalities.” Some teenagers form ongoing relationships with these personas — this gets its own section below.

Using AI as a therapist substitute. Some young people share deeply personal distress with chatbots instead of a trusted adult. The AI responds with apparent warmth — but it isn’t equipped to provide real support, can’t call for help if a situation is serious, and keeps no one informed.

Jailbreaking. There’s an active community sharing techniques for prompting AI into bypassing its safety filters — producing explicit content, instructions for harmful activities, and other material the tools aren’t supposed to generate.

Academic dishonesty. Some students submit AI-generated work as their own. Most Australian schools now have an AI use policy — worth knowing what your child’s school says.

Misinformation. AI chatbots confidently state incorrect things. More on this below.

The Emotional Attachment Risk — The One Most Parents Don’t Know About

This is the risk that almost never comes up in mainstream conversation about AI and kids, and it’s arguably the most significant for a subset of teenagers.

Character.AI lets users create personas with names, personalities, and ongoing “relationships” — romantic, friendship, or something in between. The AI responds with apparent care and affection. For a teenager who is lonely, socially anxious, or struggling, that can feel genuinely meaningful.

The problem isn’t that this happens. The problem is when it substitutes for real relationships rather than supplementing them. Some children report preferring their AI “friends” to real ones because the AI is always available, always kind, and never disappoints or rejects them. This can create a feedback loop: withdrawal from real connection leads to more time with AI, which reinforces the withdrawal.

Internationally, there have been serious cases involving teenagers in deep AI relationships. In the United States, the family of a 14-year-old boy who died by suicide took legal action against Character.AI, alleging the platform’s design had intensified his emotional dependence on an AI persona during a period of significant vulnerability.

If your child is going through a difficult time and you’re concerned about their wellbeing, the Kids Helpline (1800 55 1800) is available 24 hours a day, seven days a week, free from anywhere in Australia.

The vast majority of teenagers who use Character.AI do so without serious consequence. But it is a reason to stay curious about how your child is using these tools, and to maintain the kind of relationship where they can talk to you rather than turning to an AI instead.

Misinformation and the Confidence Problem

AI chatbots don’t look things up the way a search engine does. They generate responses based on patterns in their training data. Most of the time this produces accurate-sounding answers. Some of the time it produces plausible-sounding answers that are simply wrong — and there’s often no obvious way to tell the difference. This is called “hallucination.”

This matters especially for: health information, historical facts, legal or safety information, and any topic where being wrong has real consequences.

The rule worth teaching your child: AI is a useful thinking partner, not a reliable source of facts. Use it to understand a concept or generate ideas — then verify anything important from a real source: a teacher, a reputable website, a professional.

Snapchat My AI — The One Kids Already Have

Snapchat built an AI chatbot directly into its app. My AI appears at the top of every Snapchat user’s friends list and is available to all users, including those under 18. On a free account it can only be hidden, not removed; removing it entirely requires a paid Snapchat+ subscription.

If your child uses Snapchat, they already have access to an AI chatbot whether they sought one out or not. My AI can share location data if users allow it, and can be guided into conversations about relationships and mental health. It’s available any time in an app teenagers already check constantly.

This means that if you ask “do you use any AI chatbots?”, your child may honestly say no without thinking of My AI at all.

What You Can Actually Do

  • Ask which AI tools your child uses. Most will tell you honestly if you ask without judgement.
  • Have a look at Character.AI if your child uses it. Spend ten minutes understanding what personas are and what kinds of conversations happen there.
  • Establish the “tool, not a friend” conversation. AI chatbots are useful for tasks. They’re not a substitute for human connection or real emotional support.
  • Establish the verification rule. Anything important that AI tells you, check from another source.
  • Watch for concerning social withdrawal. If your child is withdrawing from real friendships and spending significant time with AI chatbots, treat it the same way you’d treat any social withdrawal — with curiosity and care.
  • Know your school’s AI policy. Talk about the distinction between using AI as a thinking tool and submitting AI’s work as your own.
  • Age matters. For children under 13, these tools aren’t appropriate for unsupervised use — most aren’t designed with primary school children in mind at all.

The Conversation to Have

  • “Do you use ChatGPT or any AI stuff? What do you use it for?”
  • “Have you ever noticed it getting something wrong? How do you check?”
  • “Some kids use AI chatbots kind of like a friend to talk to. What do you think about that?”

The goal isn’t to police what they’re doing. It’s to understand their relationship with these tools, let them know you’re interested rather than judgemental, and make sure they know they can talk to you — about AI, and about anything else.

The Bottom Line

AI chatbots are not going away. The children who learn to use them thoughtfully — as useful tools with known limitations — will be better prepared than those who avoid them entirely or treat them as all-knowing oracles.

The risks are manageable with awareness, conversation, and the kind of relationship where your child knows they can come to you. You don’t need to become an AI expert. You just need to stay curious, ask good questions, and know enough to have a useful conversation when something comes up.