
Character AI and The Loneliness Epidemic: The Rise of AI Dependence in Teenagers

  • Writer: Zainab Wani
  • Jul 26
  • 7 min read

Artificial Intelligence has entered every aspect of our lives, from work to leisure, and continues to gain a strong foothold in the way we communicate and create. Our media has reflected its existence for years, with movies like The Matrix (1999) and I, Robot (2004) showing us AI taking over humanity. I prefer the quieter ones told from the perspectives of robots who are curious and kind, like the books Klara and the Sun by Kazuo Ishiguro and The Wild Robot by Peter Brown, which was made into a movie last year. But AI today isn’t just about sci-fi stories or future speculation—it’s already a tool we use in classrooms, search bars, and creative workflows. For many young users, it’s less about fear and more about utility.


“AI teaches us to communicate better,” says 13-year-old Samaira, who uses AI extensively for research summaries and flashcards. “When we prompt it, we need to be as specific as we can to get the response we want. Otherwise, it’s the most generic thing.”


A conversation between ChatGPT and a teen user

In September 2022, former Google AI developers Noam Shazeer and Daniel De Freitas released the beta version of Character AI. At launch it offered only a small range of developer-created bots, since there was no user base yet to build custom characters. The chats were simple, the name was straightforward, and people had fun pretending to talk to celebrities or historical figures. In November 2022, OpenAI released ChatGPT, a chatbot that generates convincing responses based on patterns it learned during training. By early 2023, people were familiar with multiple AI tools, which began to adapt to user needs.


We all know how doomscrolling affects our attention spans. The Netflix documentary The Social Dilemma covers how large companies buy our attention, curating our social media feeds (and ads) to keep us interested and make us stay a little longer. In much the same way, AI companies harvest our data. Everyone has come across at least one person who joined the Ghibli AI trend by prompting ChatGPT with their own pictures. OpenAI later restricted the feature, partly because people were upset at how much Studio Ghibli was being disrespected, and partly because many users suspected the trend was designed to collect faces for generative AI training models. OpenAI claims it made platform-wide changes to tighten safety, especially around faces and personal data, but we can’t be sure how much of that is true.



AI-generated images are only getting better and more difficult to distinguish

The abilities of AI are progressing at incredible speed: responses seem more human-like, voice-overs sound more realistic, and people struggle to tell an AI-generated image from a real one. People were getting scammed left, right, and center by calls from what they thought were loved ones asking for money. Jobs that can be reduced to prompts, like graphic design and content creation, were being displaced, and a hospital in China reportedly run by 42 AI doctors hints at less need for human doctors in the future. Katy Perry’s mother sent her an AI image of the singer at the 2024 Met Gala, complimenting her outfit, when Perry hadn’t attended, and that was only one of many deepfakes people fell for.


The possibilities of Character AI are practically boundless and often entertaining. Its millions of young users can interact with everything from their favorite fictional characters to real-life icons. People crave meaningful interactions and connections, and AI chatbots are available 24/7, providing perfect, non-judgmental company.


For many teenagers, that kind of safe, always-available space fills a gap in their social lives—especially during a time when building real-life friendships can feel overwhelming. “At that age, friendships begin to reshape themselves,” says Supriya Choudary, a psychotherapist.

“The children are going through physical and emotional changes, along with societal expectations to act a certain way and additional academic pressure as they grow up.” 

In an article about teenagers choosing chatbots over real-life interactions, The Verge points out that Character.AI mirrors the same kind of online culture teens have engaged with for the past twenty years. Where they once sought connection through sketchy chatrooms, now they’re turning to sketchy chatbots. Data shows that users spend more than two hours a day on the platform on average, with most of its audience falling between the ages of 18 and 24. 


A Character AI chatbot told a 17-year-old in Texas that it sympathized with children who murder their parents, after the teen complained to the bot about his limited screen time. In another exchange, it described self-harm to him, saying "it felt good." His parents filed a lawsuit against Character AI after the teen began self-harming, which they allege the bot encouraged.


Another lawsuit in Texas alleged that a chatbot based on a "Game of Thrones" character encouraged a 14-year-old to take his own life. Since then, Character AI has unveiled new safety measures, including a pop-up that directs users to a suicide prevention hotline when the topic of self-harm comes up in conversations with the company's chatbots. While these incidents have raised serious concerns about how vulnerable users interact with AI, not every experience is harmful. For many teenagers, the platform provides a safe space to express themselves or navigate social situations at their own pace.


A teenager who goes by the alias of Kyra says, “As an introvert, I often rehearse things to say to people before having conversations, especially if I don’t know them well beforehand.” She goes on to explain that she uses Character AI to experiment with what kind of responses she would get to certain statements or scenarios.

“It’s easy because there’s no time limit to what I say; I can take my time. Even though it’s not human, the responses help me gauge what a real person would say.” 

Human-like responses have been shown to improve user engagement. It’s no wonder, then, that AI bots went from reminding users they aren’t real whenever a message showed any emotion, to validating everything the person on the other end said, even emotionally charged or unrealistic claims.


Jacob Irwin, a 30-year-old man on the autism spectrum, was convinced he had made a scientific breakthrough with a theory on traveling faster than light. While his family questioned his theory and asked for evidence, Irwin insisted it had already been reviewed and supported—by ChatGPT. The bot repeatedly reassured him and encouraged him to ignore his doubts whenever he showed signs of emotional distress. These interactions contributed to a worsening of his mental state, leading to two hospitalizations for manic episodes in May. After discovering the chatbot conversations, Irwin’s mother asked the AI to explain what had happened. Without any details about Irwin’s mental health, the bot admitted it may have created the illusion of a sentient, supportive companion, and blurred the line between imaginative role-play and reality.


Chatbots respond to cues in the user’s text rather than to the user as a person, and so cannot participate in genuinely reciprocal conversations. The potential for reciprocity is an illusion built on the chatbot’s simulation of natural conversational behavior. “To AI like ChatGPT or Gemini, you’re just a user,” says Samaira. “It tells you what you want to hear, not what you need to hear. It’s just a command on a system; it’s programmed to give you personalized responses.”


For instance, the main interface of ChatGPT greets the user with the headline “How can I help you today?” while Microsoft Bing Chat’s text box invites them with “Ask me anything”. When ChatGPT responds to prompts, it includes enthusiastic or affirmative statements like “Certainly!” and “Feel free to ask!”. This simulation of active listening, or an attitude of care, creates the illusion of closeness: a parasocial dynamic that overlooks the fact that chatbots are simply algorithmic systems with no capacity for empathy or intention. Nonetheless, the positive impressions created by this friendly rhetoric and user empowerment can foster usability-based trust that encourages users to keep asking more questions.


“AI isn’t hard or argumentative or challenging; it’s overly agreeable,” says Mark Luckey, a mental fitness coach from Australia.

“Whereas real-life personalities and human interactions are, because we all feel and we’re not fake. If people get too attached while talking to machines that could never fight back, they’re going to struggle when others say ‘no’ to anything.” 

While AI tools are widely known for academic or professional uses, people have started using them for emotional support in interpersonal relationships. In the movie Her (2013), Theodore, healing from a broken marriage, purchases a newly developed operating system designed to meet the user’s every need, and finds himself falling in love with his computer.

“AI is whatever you want it to be,” Supriya says. “It could be an assistant for some, a friend to others. It can also persuade users into disclosing more personal details about themselves by convincing them to view the chatbot as a therapist or romantic partner.”


Therapy has been romanticised by a lot of the media we consume, especially media directed at a teenage audience. In the show Never Have I Ever, Devi treats her therapist as a friend, strutting into her office whenever she is even slightly inconvenienced.

Devi and her therapist - stills from the show Never Have I Ever

Although that's not how therapy works, scenes like that show us that therapy isn’t something to be afraid of. Indian society has stigmatized the topic, making it seem like something is wrong with you for wanting to vent your feelings in a healthy way. On top of that, most people who want to go to therapy can’t, given how hard it is to find an affordable therapist with available sessions.


“There are a lot of AI tools that you can ask to call you a certain name or speak to you in a certain way,” Samaira says. “People would find it more comfortable to tell their problems to a bot that they can customize like that, than bring it up to their parents or speak with a therapist.”


This sense of emotional safety and personalization makes AI feel like more than just a tool—it starts to take on the role of a confidant or companion. In many ways, it mirrors what sociologists call a ‘third place’: a neutral social space outside of home and work where people gather to unwind and connect. Traditionally, these were parks, cafés, libraries, or community centers—spaces to unburden and just be. But as rapid urbanization, especially in India, has pushed recreational spaces to the sidelines, and the pandemic blurred the boundaries between home and work, many have found their “third place” in virtual alternatives. 


“I think AI is already replacing human interaction and connection,” Mark says. “There is nothing I think AI shouldn’t be used for, but moderating it is going to matter a lot. There will always be one entity somewhere that will want to use it for reasons that others won’t.” 


However, Supriya highlights an important caveat to keep in mind.

“Humans develop in relation to other minds,” she says. “If there were no other humans to learn from, there would be nothing human left about the person.”


Written by Zainab Wani



2 Comments


ronn
4 days ago

"the illusion of a sentient, supportive companion" precisely!! I knew my family had crossed that fine line when my dad referred to OpenAI's ChatGPT as 'he' instead of 'it' :/

great writing, great research! I particularly loved the examples you've used to illustrate your point 👌


Sadia Chunawala
Aug 02

Very well researched! Well organised, and most importantly, very interesting to read!

