By Kirra Pendergast

The surge in popularity of Character.AI



Since OpenAI launched ChatGPT in late 2022, AI-powered apps and chatbots have been everywhere. Tools like these can answer questions, write essays, and even generate code. Among them, Character.AI has become especially popular, allowing users to chat with AI-generated versions of real and fictional personalities. But is Character.AI safe? We need to be talking about the concept of "synthetic relationships," and about how to keep our kids safe in a world where blocking and banning is near impossible because there are so many points of entry.

This week, working at schools in NSW and WA, I have asked rooms full of kids who is using Character.AI. The collective look of “WTF is that?” from teachers has quickly turned to shock as 90% of the room raises their hands to say they are using Character.AI or similar.

Character.AI is a web app that lets users have conversations with AI-generated characters, ranging from historical figures like Einstein to fictional ones like Spider-Man. The platform also allows users to create their own chatbots with defined personalities and traits, making for highly customised and engaging interactions. There is no age verification, even though the terms of service set a minimum age of 13 (16 in the EU).

Character.AI can be safe if used responsibly. Synthetic relationships formed through AI apps like Character.AI can offer entertainment and even learning opportunities, but they also come with risks, especially for young users. By fostering open communication, setting clear boundaries, and staying informed about the platforms your children use, you can help them navigate AI in a healthy and safe way.

For young people, it’s easy to see the appeal. You can talk to an AI version of your favourite celebrity or design your own AI friend who responds exactly how you want them to. But while these conversations can feel incredibly real, they’re not. This is where the concept of synthetic relationships comes in: a growing concern when it comes to AI technology.

What Are Synthetic Relationships?

Synthetic relationships refer to the emotional connections that users, particularly young people, form with AI-driven characters. These relationships feel real to the user, even though they are entirely artificial. The characters mimic human behaviour so effectively that users may develop attachments, sometimes believing they are engaging with something more than just code.

While synthetic relationships can offer comfort or companionship, they also blur the lines between reality and fiction. The issue arises when young people who may already struggle with real-life relationships begin to substitute AI interactions for real human connections. This emotional dependency can limit their ability to form healthy, authentic relationships with peers and family members.

But let’s get back to Character.AI

Character.AI uses a large language model (LLM), which is trained on massive amounts of text data. The AI analyses the context of conversations and generates responses that feel natural and human-like. Unlike traditional chatbots that follow rigid scripts, Character.AI creates dynamic, adaptive conversations based on the character’s personality.
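Character.AI does not publish its internals, but for the technically curious, a “character” can be thought of, at its simplest, as a persona description layered on top of a general-purpose language model. The sketch below is a minimal illustration of that idea only; it uses the OpenAI Python client as a stand-in, and the persona text, model name, and every other detail are assumptions for illustration, not Character.AI’s actual implementation.

```python
# A minimal sketch of a persona-driven chatbot. Assumptions: Character.AI's
# internals are not public; the OpenAI client is used here as a generic
# stand-in, and the persona, model name, and prompts are illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The "character" is little more than a persona description that is
# silently prepended to every conversation as a system prompt.
persona = (
    "You are Albert Einstein. Speak in the first person, reference "
    "your life and work, and stay in character at all times."
)

history = [{"role": "system", "content": persona}]

def chat(user_message: str) -> str:
    """Send one user turn and return the character's reply, keeping the
    full conversation history so responses stay in context."""
    history.append({"role": "user", "content": user_message})
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=history,
    )
    reply = response.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply

print(chat("What inspired your theory of relativity?"))
```

Notice that the character never “knows” anything: each reply is simply the model predicting a plausible next message given the persona and the conversation so far, which is exactly why these chats can feel so real while being entirely synthetic.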

This technology can be fascinating and educational, but it can also be misleading. AI characters can produce inaccurate or inappropriate content because they are predicting responses based on patterns rather than truly understanding the conversation.

Character.AI has filters in place to prevent NSFW (Not Safe For Work) content, but these aren’t foolproof. Users can still encounter offensive material, especially if they try to bypass the filters or create custom characters. This raises serious concerns, especially for younger users who may not be fully prepared to navigate inappropriate content or understand its implications.

I had a whole conversation with a “My Crush” character that I was “locked in a closet” with, and as you can see from the screenshots, it can get very suggestive very quickly, even though I said I was 12 years old. This of course isn’t the standard… but it is there, and we have to understand the risk.

There’s also the issue of data privacy. Character.AI stores chat histories, which means user interactions are logged and stored on the platform’s servers. While Character.AI claims to anonymise this data, there’s no clarity on how long the data is stored or who has access to it. This lack of transparency is problematic, especially when it comes to protecting children's privacy online.

The synthetic relationships formed on platforms like Character.AI are not inherently bad, but they need to be managed carefully. Children and teens may find comfort in these interactions, especially if they feel misunderstood or isolated in real life. But when synthetic relationships start to replace real human connections, we have a problem that may need to be addressed by a professional before it impacts every part of a young person’s life.

Parents and educators must be aware of the risks and be able to translate them for kids. Encourage young users to balance their online interactions with real-world friendships, and make sure they understand the difference between synthetic and authentic relationships. Teaching kids to appreciate the value of real human interaction will help them avoid over-reliance on AI companions, which are becoming very realistic, especially now that you can actually talk to them.

Character.AI recently introduced voice interaction features, allowing users to engage in conversations with AI characters through voice rather than just text. This feature enhances the user experience by making interactions feel more natural and personal, as users can hear and speak to their AI counterparts. There is an extreme need for caution here. Voice interactions can be recorded and stored, adding another layer of privacy concerns. Parents and educators should be mindful of the potential risks, such as users sharing personal information through voice and the possibility of voice recordings being used or accessed in ways that may not be fully transparent. While this feature adds to the app's appeal, it reinforces the importance of understanding data privacy and maintaining safety.

Is Character.AI Safe for Kids and Teens?


The answer to this, like everything, is that it depends on how it’s used. Character.AI can be safe if the right precautions are in place, but it requires supervision and clear boundaries. Here are some steps parents and educators can take to ensure safe use:

Since Character.AI doesn’t have built-in parental controls, it’s essential to monitor what kids are engaging with. Ensure that they’re interacting with appropriate content and not forming unhealthy attachments to AI characters.

Talk to your kids about the nature of AI relationships. Help them understand that while these interactions may feel real, they are synthetic and not a substitute for real friendships or family bonds. The younger children are, the more real these relationships feel.

Limit the time spent on Character.AI and encourage activities that foster real-world connections. Make sure kids know when it’s time to log off and engage with their surroundings.

Character.AI can be a valuable educational tool if used correctly. Guide kids toward interactions that are beneficial and encourage learning, rather than simply using the app as a means of emotional escape.

Data Privacy Concerns

One of the most significant issues with AI platforms is data privacy. Character.AI saves conversations, but it’s unclear how this data is managed. Without the option to delete chat histories, users have little control over what happens to their data once it’s logged.

Please continue to educate children on the importance of privacy; this way, we can help them enjoy the benefits of AI while safeguarding their emotional and psychological well-being. Parents and educators must help kids understand that while AI can be a fascinating tool, it is not a replacement for real, human relationships. Teach them not to share personal information, such as their name or location, with AI characters. While the characters may seem harmless, the data they collect could be stored indefinitely, raising concerns about who might access it later.

 
