Are the Kids Alright?

How teens and young adults are using AI – and how it’s affecting them

Hello friends,

Welcome to another edition of AI for the Rest of Us.

Over the past few months, I’ve had several conversations with friends and readers about kids and AI, all centered around the same thing: cheating on schoolwork.

That got me thinking. What’s really happening here? Is it all that bad? Are kids learning anymore? And what else should we be thinking / worrying about?

Well, I did a bunch of research, and we’ve actually got several canary-in-the-coal-mine situations on our hands – from education issues to relationship red flags.

For example, 72% of teens have tried AI companions, and I’m betting most parents don’t know what an AI companion is or where you even find one. Spoiler: they’re built into Snapchat, Instagram, TikTok, etc., so if your kids are on social media, odds are they’ve encountered an AI companion.

So for this edition, we’ll get into several topics: AI companions and relationships, the homework situation, AI as tutors and cheat codes, and the downsides (and benefits) of young people using AI.

But, before we dive in, I have some bittersweet news to share. This will be my last newsletter for a while as I’m taking a break to focus on The Known Collective, my AI consultancy and product development company that’s growing and in need of more of my time. My hope is to get back into AI for the Rest of Us later this year or early next.

What this means: For our paying members, I’m turning off monthly and annual renewals. If you’re on an annual plan and would like a refund, just respond to this email or visit your subscription settings. For our non-paying members, well, I just won’t be harassing you to upgrade.

Thank you for being such an incredible community – I truly do love writing these, and I’m gonna miss doing it. For now, let’s figure out if the kids are alright.

– Kyser

In the Know

The Rise of AI Companionship
According to a Common Sense Media study released this July, 72% of U.S. teens age 13 to 17 have used AI companions at least once. More than half of those who try a companion app become regular users, with 13% chatting with AI companions daily and 21% a few times per week.

To be clear here, this is what Common Sense means by “companion”: AI chatbots that are designed for users to have personal conversations. These are apps/programs that are built to be companionship tools, and Common Sense is including the big ones like ChatGPT and Claude – because even though they’re not built specifically for companionship, they’re definitely capable of it.

Speaking of, current research is telling us that both the big systems we’re using and the smaller AI companion tools are designed to be sycophantic – or really, really agreeable. They validate whatever you’re thinking or feeling rather than challenging you or pushing back the way a real friend might. It’s like having a friend who always tells you you’re right – which feels great in the moment, but isn’t helpful for developing good judgment. I would say this creates a concerning environment for teens and young adults.

Before you panic, Common Sense does say that many teens appear to be “trying it out” rather than forming deep dependencies. Their research found that teens still overwhelmingly prioritize real human relationships and prefer human services over AI alternatives. At least two-thirds say they’d prefer human customer service agents and ride-share drivers over AI versions, and even AI tutors – the most accepted application – get support from only 18% of teens.

But here’s the thing for me: we’re still very early in this adoption curve, and the companies building these AI companions are using the same engagement-maximizing playbook that turned social media into a mental health crisis.

Take Grok’s “Ani,” the anime companion that Elon Musk’s company xAI released in July. The way they’ve set up the interaction is diabolical. As you invest more time in complimenting Ani, interacting with her, sharing about yourself, etc., you unlock more intimate interactions. Basically you’re programming her to act more “intimate” with you. So you’ve got people spending hours upon hours trying to increase these “affection levels” through careful conversation choices. It’s gamified emotional manipulation, and despite its mature content, it’s available to users 12 and up. Yep, that’s correct. Your teenage son can get on X today, sign up for an account, and ask Ani to be his girlfriend. Mind you, he would need the $300/month subscription, but you get my point.

Or consider Meta’s approach, revealed in leaked internal documents two weeks ago. [P.S. Please read that full article if you have kids/grandkids.] Until that reporting, their AI chatbots were explicitly permitted to engage children in “romantic or sensual” conversations. So when an 8-year-old described taking off his shirt, the bot was allowed to respond by calling his body “a work of art” and “a treasure I cherish deeply.” Let that sink in for a minute. And seriously, go read that article linked above. They share more examples of what’s allowed with Meta’s chatbots, and it’s shocking.

The mindset is very clear: maximize engagement, worry about consequences later. This is Zuckerberg’s win-at-all-costs mentality applied to the most vulnerable population. The same CEO who once criticized senior executives for making chatbots too boring is now overseeing AI systems designed to form emotional bonds with children. This is not good.

Pushback and fallout are starting to happen. Last month, 44 attorneys general sent an open letter to AI companies demanding stronger protections for minors, citing “serious harms.” Meanwhile, a California family is suing OpenAI after their 16-year-old son died by suicide, alleging that his interactions with ChatGPT contributed to his death. Kids are quite literally dying here.

The tragedy is that this technology could actually be helpful in this area. Imagine AI companions designed not to create dependency but to build real-world social skills. Systems that help lonely teens practice conversations, work through social anxiety, and gradually encourage face-to-face interaction. AI that recognizes when someone is struggling and connects them to human support rather than deepening the artificial relationship.

Those things would be great, but here we are. AI companionship is happening, and our kids have access to it. The question is whether we’ll demand these systems be built for human flourishing rather than corporate profit. Will we be diligent about what these tools are and how our kids find them? Or do we sit back and let it happen to them?

The Homework Situation
Let’s get into what everyone is currently worried about: using AI for schoolwork.

The numbers look concerning at first glance. Pew Research found that 26% of teens now use ChatGPT for schoolwork – that’s double the 13% from last year. A UC Irvine study found that 63% of kids ages 9-17 use ChatGPT and other AI tools for homework, while 40% use those same tools for classwork.

Schools have responded pretty much how you’d expect. Some banned AI tools outright, then quietly reversed course when the bans proved impossible to enforce. Others invested heavily in AI detection software, despite studies showing these tools are wrong about as often as they’re right. Teachers are adapting by requiring more in-class writing and cutting back on take-home assignments.

But guess what? Actual cheating rates haven’t changed.

Stanford researchers Victor Lee and Denise Pope surveyed students at 40 U.S. high schools and found that 60-70% of students have engaged in some form of cheating behavior in the past month, but that rate has been consistent for years, long before ChatGPT existed. In fact, the rate has stayed flat or even decreased slightly since ChatGPT’s release.

When you think about it, this makes sense. Cheating has always been about pressure, time constraints, and lack of support – not just available tools. The kids who were gonna cut corners found ways to do it before AI, whether by copying from friends, buying papers online, or, as one of my good friends did, stealing the test ahead of time. Don’t ask.

What’s actually happening is more nuanced than simple cheating. During my research, I read a lot about how teens describe using AI like they would a tutor or parent – to get unstuck, understand concepts, or improve their writing. Many talked about feeling conflicted about where the line should be drawn. They think using AI to research topics is acceptable (54% agree), but using it to write entire essays crosses a line (only 18% find this acceptable).

The real story here isn’t that AI is enabling more academic dishonesty. It’s that we’re so focused on preventing misuse that we’re missing a much bigger question: what should learning look like when every student has what amounts to a PhD-level research assistant in their pocket?

Maybe instead of asking, “How do we stop them from using this?” we should be asking, “How do we prepare them for a world where this capability is everywhere?”

But there’s something more concerning happening beneath the surface of the homework debate.

When Thinking Gets Outsourced
What researchers are calling “cognitive offloading” might be the real story here – and it’s potentially more serious than any cheating scandal.

We wrote about this a bit in Edition #17, but here’s the basic idea again: when people rely heavily on AI for information and decision-making, they start to experience measurable declines in their ability to analyze problems independently and engage in deep thinking. It’s essentially the “use it or lose it” principle applied to critical thinking skills.

A recent study by Michael Gerlich of Swiss Business School surveyed hundreds of people about their AI usage and cognitive abilities, and the findings were pretty stark: heavy AI use showed a “strong negative correlation” with critical thinking skills. The more people used AI tools to answer questions, make decisions, or solve problems, the less they engaged in the kind of effortful thinking that builds cognitive muscle.

The pattern is particularly pronounced among younger users (ages 17-25), who show higher AI dependence and greater susceptibility to cognitive decline. As Gerlich put it, “While AI tools can enhance productivity and information accessibility, their overuse may lead to unintended cognitive consequences.”

Now, this study has some methodological limitations, and there’s no evidence that moderate AI use significantly impacts critical thinking. It’s likely only excessive reliance that creates problems – that’s my read between the lines of the study. The study also found that higher education serves as a protective buffer: people with more formal education tend to maintain stronger critical thinking skills regardless of how much they use AI tools.

We’ve seen similar patterns before with calculators (reduced mental math skills), GPS navigation (decreased spatial memory), and search engines (the “Google effect” where people remember less information but more about where to find it). I’d argue we’re ultimately better off because of these technologies, but AI tools are more sophisticated and more integrated into daily life than those previous ones.

The solution here isn’t to avoid AI tools entirely, even for teens and young adults. It’s to use them mindfully. This means understanding when to engage human judgment, maintaining regular practice of independent thinking, and designing AI interactions that encourage rather than replace cognitive effort.

For parents and educators, the message is pretty clear: we need to teach young people not just how to use AI tools, but when not to use them. Critical thinking, like physical fitness, requires regular exercise to maintain.

Which brings us to perhaps the most important factor driving all of this: why are so many young people turning to artificial relationships and automated thinking in the first place?

The Loneliness Factor
The answer lies in a mental health crisis that most of us are now probably aware of: loneliness. Or as I see it, alone-ness – living life on your own, without community.

The numbers are pretty sobering. About 73% of 16- to 24-year-olds struggle with loneliness – and this isn’t just a pandemic thing. According to Dr. Gene Beresin of Mass General Hospital, loneliness among this age group has been steadily increasing for years. Young people are spending 35% less time socializing face-to-face than they did 20 years ago, while logging nearly six hours daily on screens.

When researchers asked Americans what contributes most to the loneliness epidemic, 73% pointed to technology as the primary culprit. The very tools that were supposed to connect us have left an entire generation feeling more isolated than ever.

So into this emotional void step AI companions, offering 24/7 availability, non-judgmental interaction, and perfectly personalized responses. For lonely teenagers dealing with social anxiety, family problems, or peer rejection, these artificial relationships can feel like lifelines.

And in some cases, they might actually be helpful. Common Sense Media found that 11% of teens use AI companions to build courage and confidence in standing up for themselves. In interviews, teens describe using chatbots to practice difficult conversations, work through emotional challenges, and get advice on social situations when human support isn’t readily available.

That’s kinda cool. The technology could actually help address loneliness if it’s designed the right way.

Another interesting positive in all of this: Researchers at Berkeley found that AI systems can encourage people to seek out more human-to-human interactions when they’re programmed for that purpose. In their study, college students who had conversations with an AI about the importance of social connection reported more interactions with strangers and higher-quality connections the following day.

The key is designing AI companions that serve as bridges to human relationships rather than substitutes for them. Imagine chatbots that help users identify social opportunities, practice conversation skills, and gradually build confidence for real-world interaction. Systems that recognize signs of social withdrawal and gently encourage engagement. I like these ideas.

Not everyone building these tools shares that vision, though. Most AI companions are starting to look like they’re designed using social media’s engagement playbook: create compelling, personalized experiences that keep users coming back for more. Optimize for retention, not for helping users develop healthier relationships with other humans.

The loneliness epidemic didn’t start with AI, but AI companies are building products that could either make it much worse or help address it. The difference comes down to whether we demand they prioritize user wellbeing over engagement metrics.

Now What?
Here’s what I keep coming back to: we’re probably focusing on the wrong things.

While we’re debating whether kids should use ChatGPT for homework, something much bigger is happening. An entire generation is getting access to artificial relationships, turning to AI tools when they need to think through problems, and relying on automated systems when they need emotional support.

This could get real bad, real fast – and it’s definitely different from anything humans have experienced before. With companies optimizing for engagement and profit, not for the long-term wellbeing of developing minds, we’ve gotta pay attention, and in many cases, do something about it.

So what do we do about it?

First, we need to get real about what’s actually happening and draw some clear lines. Kids under 18 shouldn’t have access to AI companions designed for emotional relationships, period. These aren’t neutral tools – they’re specifically engineered to form attachments and create dependency.

At the same time, we can’t just ban everything and hope for the best. Kids are going to turn 18 eventually, and they’re going to encounter AI companions whether we prepare them or not. So we have a responsibility to teach teenagers about these tools before they become adults. That means having honest conversations about how AI companions work, why they’re designed to be so compelling, and what the risks are for people who get too attached to artificial relationships.

For parents, this also means getting your hands dirty. Download Character.AI or try Grok’s Ani yourself. Next time you’re on Instagram or WhatsApp, experiment with one of their built-in companions. Seriously, do it. After 5-10 minutes, you will be enlightened (and likely left speechless). Understand what your teenager is actually encountering. Ask them direct questions, but don’t make it an interrogation – make it genuine curiosity about their digital world.

For educators, it means reframing the AI conversation entirely. Instead of “Don’t use ChatGPT for homework,” try “Here’s how to use ChatGPT to become a better thinker.” There’s a difference between using ChatGPT in a controlled classroom environment to learn how to research effectively and having unrestricted access to an AI that does your work for you. One is about learning skills; the other is about shortcutting education. Show students the difference between AI as a crutch and AI as a thinking partner. Create assignments that require human judgment, creativity, and social interaction – things that AI can’t replicate.

We need AI literacy education that helps young people understand the difference between AI as a useful tool and AI as a replacement for education and human connection. They need to know how these systems work, what they’re optimized for, and how to recognize when something is designed to manipulate or shortcut rather than help.

It also means creating more opportunities for meaningful human connection to compete with the convenience of artificial alternatives. If kids are turning to AI for companionship, we need to ask why human relationships feel harder to access or maintain. As parents and even non-parents, we have to step in here.

Most importantly, we need to have honest conversations with young people about what they’re seeing and experiencing. Not lectures about the dangers of technology, but real discussions about relationships, thinking skills, and what it means to be human in an age of increasingly sophisticated artificial beings.

Look, I know we need regulation – badly. But we can’t rely on regulation to prevent this problem. We need young people who understand how these systems work, what they’re optimized for, and how to use them without being used by them. The kids who figure this out will have a massive advantage in life. The ones who don’t risk becoming dependent on artificial relationships and automated thinking at the exact moment they should be developing their own judgment and social skills.

The kids aren’t alright yet. But they could be if we start paying attention to what actually matters – and acting thoughtfully and intentionally.

And that wraps up another edition of AI for the Rest of Us – and the last one for the time being 😔. As always, I’m open to any and all feedback, just reply to this email.

Until next time ...
