Every Rose Has Its Thorn
A nuanced look at AI’s potential harms to humanity

Hello friends,
Welcome to another edition of AI for the Rest of Us!
Just like in our last edition, I’ve got two quick things to mention before we dive in:
[1] We asked our free subscribers (that’s you) if you want to receive the full newsletter with ads instead of just the partial newsletter you currently get. We didn’t get as many “Yes, definitely” responses as we’d hoped, so we’re holding off for now, but it’s still on the table. If you’re interested in hearing more about the decision, we’re happy to share; just send us an email or reply to this one.
[2] We’re running a March promotion where you can get a full month of Premium access for free if you sign up before March 31st. Click here and create a login for your account to upgrade.
Just a friendly reminder that a Premium subscription gets you access to the full newsletter while a regular subscription only gets you a small portion of each newsletter. And hey, it’s only $30 a year or $4 per month. You’re basically buying me a couple of burritos.
That said, just like in the last edition, you’re getting more of the newsletter so you can better decide if it’s worth going Premium. We’re biased, but we sure think it is 😊. And consider this me buying you a very small burrito.
On to this week’s edition!
As you know from last time, we’re continuing our exploration of AI’s present and future impact by shifting our focus to the legitimate concerns and challenges AI raises – because with any powerful technology, there’s another side to the story, especially when it comes to AI.
Let me preface this edition by saying something important: This isn’t about fear-mongering or painting dystopian scenarios to get people riled up and/or scared. It’s about bringing balance to our understanding of AI by examining the real challenges we’re already facing – and those that may lie ahead. Why? Because understanding both the promise and the peril of AI helps us make better decisions about how to use it, how to regulate it, and how to prepare for its impacts.
I also want to reiterate: I’m not a doomer who thinks AI will destroy humanity, nor am I suggesting we should halt AI progress. But I do believe that acknowledging the risks is critical to ensuring this technology serves us well. As the saying goes, “Hope for (and work towards) the best, but prepare for (and be aware of) the worst.”
Here we go...
– Kyser
P.S. If you missed the last edition (AI Optimism: Beyond the Rainbows & Unicorns) or the one before it (Paradise or Peril? The Great AI Debate), I recommend reading them before diving into this one.

In the Know
Two weeks ago, I painted a pretty rosy picture of AI’s potential by showing you how it’s transforming healthcare, upgrading transportation, personalizing education, and creating new economic opportunities. That wasn’t hype. Those benefits are real – and worth celebrating.
But the truth is, most significant technological shifts throughout history have led to good and bad things. The printing press democratized knowledge but also enabled the spread of misinformation. The industrial revolution brought unprecedented prosperity but also created exploitative working conditions. The internet connected the world but also allowed for new forms of harm, especially in the realm of social media.
AI is – and will be – no different. And today, we’re gonna explore five areas where I see cause for concern when it comes to AI’s impact.
White-Collar Disruption: This Time It’s Different
When people talk about automation and job displacement, they often think of factory robots or self-checkout kiosks. But AI is different. It’s coming for knowledge work – the kind that requires education, creativity, and specialized training.
A Goldman Sachs analysis from 2023 warns that AI could affect or eliminate up to 300 million jobs worldwide, with white-collar professionals facing some of the highest risks. A two-year-old study is ancient in AI days, so let’s look at a more recent one. The World Economic Forum’s Future of Jobs Report 2025 predicts the disruption driven by AI is set to displace around 92 million jobs by 2030, even as ~170 million new roles are created. This net growth of 78 million jobs is great, but it masks the wild challenges ahead, especially for areas where white-collar tasks are vulnerable to (or ripe for?) automation.
And here’s what makes this round of automation different from historical ones:
Speed: Previous technological revolutions unfolded over decades or centuries. AI capabilities are advancing at a crazy pace, giving the rest of us less time to adapt. Case in point: if I rewrite this edition in six months (and I probably will), I’ll likely have five entirely different issues to discuss.
Scope: Earlier waves of automation typically affected specific industries or types of physical labor. AI potentially impacts any job involving information processing – from legal research to financial analysis, medical diagnostics to content creation. Pretty soon you may be reading AI’s version of AI for the Rest of Us 😢.
Skills Gap: The new jobs created by AI often require drastically different skills than those being automated away. A 55-year-old middle manager whose job is partially automated can’t easily become a machine learning engineer. And a 24-year-old with a college degree can’t just get a job as an HVAC repair person – because apparently we’re gonna need a whole lot more of those according to OpenAI.
We’re already seeing early signs of this shift. Accountants are being replaced by AI-driven financial analysis tools. Graphic designers are competing with image generators and editors. And the irony of ironies... the very people critical to creating these products, software developers, are already feeling the effects in major ways.
I personally still believe humans will remain essential to the workforce. So no, I don’t believe we’re moving towards some dystopian future where robots do all the work. In fact, in a perfect world, I believe AI will augment just about everything we do – and eventually provide so much value to our current work that we’ll thrive as workers and an economy.
But the short/medium term is gonna be a bit dicey. The very nature of work is changing, and we need to be clear-eyed about the challenges this presents, especially for early and mid-career professionals whose expertise may suddenly be devalued by AI systems in the short term. If you’re a knowledge worker and reading this, don’t be alarmed, but be prepared. I’ve said this before and I’ll say it again and again: AI probably won’t take your job, but someone who knows how to use AI probably will.
Knowledge Atrophy: When AI Does Our Thinking
Remember when you knew dozens of phone numbers by heart? Thanks, cell phones. What about driving directions and how to get around a familiar city? Thanks a lot, GPS. And remember when we’d actually use our brains to recall an obscure fact? There’s a scientific name for this one: The Google Effect.
Cell phones, GPS, and Google are great things. I love all three of them. Use ‘em all the time. But they’re literally changing how we use our brains. Something similar, but far more profound, might be happening with our cognitive abilities in the age of AI.
The phenomenon is called “knowledge atrophy” – the gradual weakening of our mental skills when we outsource thinking to machines. And it’s not just theoretical. Researchers who are just beginning to study it describe a kind of cognitive atrophy, or AI apathy, where we stop flexing our mental muscles because machines handle the hard parts.
The big question becomes: If AI is always there to generate answers, will we stop pushing ourselves to think critically? Will we just blindly trust AI without questioning its logic? I would be lying if I said I wasn’t already doing this. It feels like The Google Effect on steroids.
I never expected to say this (or type it because it sounds so pretentious), but I’ve been thinking a lot about cognitive load theory lately. It basically says that our brains need to be challenged to really and truly understand information. Don’t get me wrong, I love me some cognitive offloading. But here’s the thing: When a task feels too easy — like having AI write an essay for you — your brain isn’t as actively engaged. And without that engagement, you’re less likely to understand or remember the material. These things aren’t good for us.
We’re already seeing warning signs of the effects. Students using AI to write papers are potentially failing to develop critical writing and research abilities. Education researchers found that college students who relied on AI tools for their essays ended up doing worse on their exams. And in creative fields, research suggests that generative AI can boost an individual’s creativity while reducing the overall diversity of new ideas.
There’s also a growing concern about what happens when an entire generation grows up with AI doing their thinking. Will they develop the same depth of knowledge and problem-solving abilities as previous generations? Or will they become intellectual consumers rather than producers, with a superficial understanding that relies on AI to fill in the gaps? I have three young kids, and I really hope it’s not the latter.
I’m not suggesting we should shun AI tools or that they don’t have tremendous value. I mean, I started a company called AI for the Rest of Us! But we should be conscious of the cognitive trade-offs we’re making, and we should deliberately practice the skills we want to maintain. This might mean occasionally taking the harder path – like writing that wedding toast without AI 😊.
The Environmental Toll: AI’s Hidden Footprint
Speaking of AI writing things for us, did you know that every time you ask ChatGPT to write you a wedding toast, you’re tapping into data centers that consume electricity at nation-state levels? Sounds far-fetched. But true story.
Here’s a reality check for us: each conversation with AI tools like ChatGPT isn’t just electrons bouncing around in the digital ether – it consumes real resources. Researchers estimate that a single ChatGPT query burns about five times more electricity than a simple web search. It’s still a small amount, but you know those innocuous voice chats I’ve been telling you to do? They add up.
As the big tech companies build bigger and bigger AI systems, the energy demands are growing at a crazy pace. MIT researchers found that the computational power required to train generative AI systems has been doubling every 3-4 months. By 2026, data centers are projected to consume about 1,000 terawatt-hours of electricity – roughly equivalent to what the entire country of Japan uses 🤯.
But the story gets worse when you look at water usage – because these big data centers need water to keep them cool. In 2022 alone, Google’s data centers used 20% more water than the previous year, while Microsoft’s jumped by 34%. And this was before ChatGPT launched to the public (!). It’s not just statistics – it’s affecting real communities. In The Dalles, Oregon, Google’s three data centers already consume more than a quarter of the city’s water supply, and in Chile and Uruguay, people are literally taking to the streets to protest planned data centers that would tap into drinking water reservoirs.
The hardware powering these systems brings its own environmental baggage – from the rare minerals mined for specialized AI chips to the constant cycle of equipment becoming obsolete as newer, more powerful versions emerge. It’s not pretty.
The irony here? While AI might eventually help solve environmental problems through better resource management and climate modeling, its current trajectory is making those very problems worse.
Cultural Fragmentation: Dividing Our Digital World
As all of us likely know, we’re living in a time of prodigious polarization and fragmentation. It’s a major cultural issue, and unfortunately, AI threatens to make it worse.
AI systems – particularly the recommendation algorithms and personalization engines that power our digital lives – are designed to show us what we’re most likely to engage with, no matter the consequences. And what we engage with tends to be content that confirms our existing beliefs and values. It’s what keeps us engaged – and big tech couldn’t be more pleased.
The result is a kind of digital balkanization, where we increasingly live in separate realities constructed by AI systems. It’s sadly now easier than ever to create “personalized disinformation” and micro-target it through online platforms, potentially sowing division at scale. I’m not just talking about elections and politics. I’m talking about entire belief systems developed, nurtured, and expanded through AI technology that you can’t see – or even recognize.
This manifests in other troubling ways:
Confirmation bias factories: AI can create infinitely personalized content (tweets, articles, ads, etc.) that reinforces existing views, filtering out dissenting perspectives more effectively than any human editor ever could. Put another way, I fear AI systems see no point in us understanding – or dare I say empathizing with – someone else’s different opinion.
Algorithmic extremism: Studies have shown how recommendation systems can gradually push users toward more extreme content by optimizing for engagement rather than accuracy or balance. Social media has mastered the science of this while mainstream media has mastered the art of it.
Reality collapse: When everyone receives a different version of events tailored to their preferences, the very notion of shared truth begins to erode. As one scholar described it, this creates “an almost paranoiac lack of trust” where everyone suspects the other side’s facts are fake and generated to manipulate. This needs no explanation. It’s everywhere.
Cultural homogenization: Paradoxically, while AI creates fragmentation between groups, it can also lead to homogenization within them. When creative works are increasingly influenced by AI trained on the most popular content, cultural diversity and innovation may suffer. It’s my worst nightmare... we all slowly get a little more boring.
The societal stakes here are enormous. Democracy depends on citizens having some shared understanding of reality and being able to engage in good-faith debate. When we can’t even agree on basic facts, governance becomes nearly impossible.
Some technologists argue that AI could be redesigned to bridge divides rather than deepen them – by intentionally exposing users to diverse viewpoints or helping identify common ground. But this would require a fundamental shift in how these systems are optimized and deployed, prioritizing societal health over engagement metrics and short-term profits. Let’s just say I’m not holding my breath for Big Tech to make those changes.
Cybersecurity Nightmares: When AI Goes Rogue
Let’s take a more alarmist turn. Because frankly, some of what’s happening in the world of AI-powered cybersecurity deserves alarm bells.
In early 2024, a frightening milestone was reached: criminals used deepfake video technology to impersonate a company’s chief financial officer in a video call, convincing an employee to transfer $25 million to hackers. This wasn’t science fiction – it was a real-life heist in Hong Kong that showed just how sophisticated AI-enabled scams have become.
The World Economic Forum now ranks AI-powered misinformation and disinformation as the top short-term global risk, and with good reason. We’re entering an era where seeing and hearing can no longer be reliably believing.
The threats are evolving rapidly:
Deepfakes at scale: Remember way back in Edition 6 when we talked about deepfakes? If you do, you know that AI systems can generate convincing fake videos, audio, and images of real people saying or doing things they never did – from celebrities to your boss or family members. And the technology is only getting better (er, worse?).
Hyper-personalized phishing: Instead of the obvious scam emails from the Nigerian prince, AI can craft highly targeted messages that mimic the writing style of people you know, making them virtually indistinguishable from legitimate communications.
Automated hacking: AI tools can help cybercriminals find vulnerabilities in systems faster than human defenders can patch them, potentially enabling more frequent and severe data breaches.
Identity theft 2.0: With enough of your digital footprint, AI can effectively become “you” online – mimicking your communication style, knowing your personal details, and potentially causing harm to your reputation or finances.
Defensive technologies are emerging – from deepfake detection to AI-powered security systems – but they’re consistently playing catch-up. And they face a fundamental challenge: it’s still easier to generate convincing fakes than to detect them. For the rest of us, the best defense remains vigilance and verification.
Other Challenges Worth Noting
Now, we’ve only covered five major areas of concern, but there are many more. I just can’t keep you here through the weekend. But if I did have your weekend, I’d cover a slew of other topics like...
Governance and regulatory gaps: Our legal frameworks struggle to keep pace with AI innovation. This is a big one right now with lots happening globally.
Critical systems failures: What happens when AI controlling essential infrastructure fails?
Military applications: The rise of autonomous weapons raises profound security and safety issues, especially with China’s recent AI progress. (BTW, I plan to cover this soon in its own edition.)
Privacy erosion: AI is basically creating unprecedented surveillance capabilities.
Bias and discrimination: AI systems can amplify existing social inequities.
Innovation stagnation: Market concentration could slow progress if a few companies dominate AI.
Existential risks: The most speculative but highest-stake concerns about advanced AI alignment.
If you’re interested in learning more about any of these topics, just hit reply and let me know. We can always cover them in future editions.
Before we conclude, let me say this: the point of highlighting these challenges isn’t to lead you to believe we should stop AI development. And it’s not meant to scare you away from using AI tools. It’s genuinely intended to ensure we approach this powerful technology with our eyes open to both its potential and its pitfalls.
By understanding the full spectrum of possibilities – from the utopian visions we explored in Edition #16 to the cautionary tales we’ve discussed today – we can work toward an AI future that maximizes benefits while minimizing harms. Because ultimately, as I always say, AI will become what we make of it.
The rest of the newsletter is for premium members only. Take advantage of our March promotion (one month free) and upgrade to view this edition and all previous ones on our website. And starting in two weeks, you’ll get the full newsletter in your inbox every other Friday.
You can upgrade or downgrade at any point, or you can stick with free until you’re ready to join. We’re just glad to have you part of the community. Check us out on Instagram for news, content, and fun stuff. BTW, we explain on the website why we do paid memberships – just click here and scroll down to the FAQs.
You can also reach out directly to us if you ever have any questions about a Premium membership – simply reply to this email and we’ll get back to you asap.

Let’s Learn Something

AI in the Wild

It’s Play Time
Newcomers [AI is new to me]
Explorers [I’m comfortable with AI]
And that wraps up another edition of AI for the Rest of Us. We’ll be back in two weeks with another edition, The State of AI Tools, where we’ll get very practical by explaining (and showing you) the best tools to use today.
If you ever have any questions or feedback, simply reply to this email and fire away.
Until next time (and on Instagram in between)...
