What is AI, anyway? [Rewind, Part II of II]

Let's define this thing – again.

Hello friends,

Welcome to Edition #22 of AI for the Rest of Us!

When we launched this newsletter back in August 2024, our first few editions attempted to answer the big question, “What the heck is AI, anyway?”

Because we’ve added 500+ subscribers over the past few months, and since so much has changed in the world of AI in those ten months, we thought it would be helpful to update two of the early editions and share them again. So that’s what we’re doing.

Two weeks ago, we shared Part I of “What is AI, anyway?” and today we’re doing Part II where we’ll go a bit deeper as we explore the types of AI (including the big bad AGI), the rate of improvement we’re experiencing, and several of the big players involved. As always, we’ve also got a few great examples of “AI in the Wild” and two activities to do in “It’s Play Time.”

Here we go!

– Kyser

P.S. If you missed Part I from two weeks ago, you should read it before digging into this one.

In the Know

Now that we have a sense of the AI Ecosystem from last edition, I want to step back and talk about the different types of AI and the stages of AI development.

A quick note before we dig in: When I say AI, I’m referring to the full ecosystem and the big, broad term AI, the one that encompasses things like machine learning, large language models, etc.


SPECIAL NOTE:
Since we have so many new subscribers, and since this topic is so important, we’re sharing the full newsletter with all our non-members to give you a taste of what you’ll get as a member. Enjoy!

Types of AI
Narrow AI 
This is the AI we interact with today, and it’s the AI that’s out there in the market right now. It's designed for specific tasks – and it excels at them. Think Netflix recommending your next binge-watch or ChatGPT writing you a song. Narrow AI is indeed impressive, but it’s more like a Swiss Army knife – very useful for opening tin cans and whittling sticks, but not capable of building you a house. Don’t hear me say it can’t do amazing things. My belief is that even if all progress stopped today, Narrow AI would still be a game changer on many levels.

General AI (aka AGI)
This is considered by many to be the holy grail of AI. It’s also what keeps many people up at night. AGI is a term that’s tough to define and hotly debated, but here’s how to think about it: Imagine an AI that doesn’t just diagnose diseases, but redesigns the human immune system to prevent all known and potential future illnesses, effectively redefining the concept of healthcare. Crazy, right? It’s an intelligence that doesn’t just excel at human tasks, but reimagines them entirely. And the kicker... It would do all this not because it was programmed to, but because it has a level of curiosity, creativity, and understanding that rivals – and potentially surpasses – human cognition itself.

Super AI (aka ASI)
If AGI is the holy grail, ASI is the stuff of dreams – and nightmares. This isn’t just AI that’s smarter than us, it’s an intellect so vast it transcends our ability to comprehend. Imagine an intelligence that redefines the nature of existence, manipulates the fabric of the universe, and engineers new dimensions. It’s a concept so profound and potentially terrifying that it stretches the very limits of our imagination. Here’s a side note I’ll leave you with: an oft-quoted remark in the tech industry comes from Arthur C. Clarke, who said, “It may be that our role on this planet is not to worship God, but to create him.” I’d say that belief scares me more than anything having to do with AI.

Ok Ok, I know. That’s a lotta sci-fi to throw at you. Let me bring it back down to earth and answer a question you might be asking:

Where are we on this AI evolutionary scale?
Here’s where things get, let’s just say, a bit heated in the AI community and beyond. The rate of improvement in AI systems has been nothing short of astonishing – even the pessimists agree on that. We’ve gone from AI that could barely play tic-tac-toe to systems that can generate human-like text, create art, and even engage in complex problem-solving. In a matter of years.

But how close are we to AGI – the kind of AI that can perform virtually any human task as well as or better than we can? Some folks in the AI world think we might be closer than we realize, while others vehemently disagree.

AGI is already here?
There’s even a camp that believes AGI might have already happened behind the scenes. Their argument goes something like this: “Look at how fast we’ve progressed. Who’s to say some big tech company hasn’t cracked the code and is keeping it under wraps?” It’s not not plausible, especially since this technology is essentially being developed in private rather than with government funding and/or oversight – which, by the way, is how many of the foundational technologies of recent decades came to be.

AGI is around the corner
Then there’s the camp that says, “We’re so close I can feel it.” This group points to the advancements we’ve seen in LLMs, the continued increase in computing power, and the ongoing releases of multi-modal AI (systems that can create images/videos and even carry on a conversation with you). They argue that if we’ve come this far, this fast, AGI could be just around the corner – or already here, silently judging our Netflix choices.

Dario Amodei, CEO of Anthropic (the company behind Claude, my favorite LLM to use), recently shocked us by predicting that AI could eliminate up to half of all entry-level white-collar jobs and spike unemployment to 10-20% within the next one to five years. Yes, that could be AS SOON AS ONE YEAR from now. In an interview with Axios (which is worth reading), Amodei painted a picture of what he thinks could be a very real scenario in the coming years: “Cancer is cured, the economy grows at 10% a year, the budget is balanced – and 20% of people don’t have jobs.” Uhhhhh.

He’s particularly concerned about the speed of change. “It’s going to happen in a small amount of time – as little as a couple of years or less,” Amodei warns. And here’s what’s driving this urgency: agents (which we talked all about here and here). These AI agents are “AI that can do the work of humans – instantly, indefinitely and exponentially cheaper.” We’re not talking about AI helping you write emails anymore. We’re talking about AI actually doing your job.

The AGI Skeptics
On the flip side, we’ve got the skeptics who think we’re having the wrong conversation entirely. This camp isn’t just debating whether AGI is five years or fifty years away – they’re questioning whether AGI as a concept even matters. They look at the predictions and warnings and see a fundamental misunderstanding of how technology actually changes the world. In other words, sure, today’s AI is impressive, but we’re so focused on some hypothetical future milestone that we’re missing the real story of how AI will (or won’t) transform society.

Arvind Narayanan, a Princeton computer science professor and director of the Center for Information Technology Policy, and Sayash Kapoor, a Ph.D. candidate at the same center, take perhaps the most grounded view of all. They argue that AGI “does not represent a discontinuity in the properties or impacts of AI systems. If a company declares that it has built AGI, based on whatever definition, it is not an actionable event.”

Their big thought? We’re conflating capability with actual impact. “Even if general-purpose AI systems reach some agreed-upon capability threshold, we will need many complementary innovations that allow AI to diffuse across industries to realize its productive impact. Diffusion occurs at human (and societal) timescales, not at the speed of tech development.”

I really appreciate that they look to history as a guide: “For past general-purpose technologies, such as electricity, computing, and the internet, it took decades for the underlying technical advances to diffuse across society.”

In other words, people have to use and implement the technology (the diffusion part). And even if someone declares AGI achieved tomorrow, they “expect the economic impacts of AI to be realized over decades, as this process of diffusion unfolds.”

This perspective completely reframes the debate. While others argue about whether AGI is months or years away, Narayanan and Kapoor are saying we’re asking the wrong question. “AI’s impact on the world will be realized not through a sprint toward a magic-bullet technology but through millions of boring little business process adaptations and policy tweaks.”

I like this perspective because it’s a cold splash of water on the whole AGI debate. While tech leaders and doomers fixate on the moment AI becomes “generally intelligent,” Narayanan and Kapoor remind us that it’s not a switch to flip. The internet didn’t reshape commerce the moment the first webpage went live. And funny enough, did you know that after 25+ years of the internet, e-commerce in the U.S. still only makes up less than 25% of all commerce? Y’all, stuff takes a while. Real change happens through what they call “millions of boring little business process adaptations” – and that takes decades, not demos. Whether AGI arrives tomorrow or never, they argue, won’t change this fundamental reality of how technology diffuses through society.

Where does that leave us? 
I land somewhere in the middle. We’ve got crazy fast progress in AI, with advancements coming at an insane pace. However, AGI isn’t something we’re likely to achieve by simply extrapolating our current technologies. It will require a concerted diffusion effort and possibly a few more significant scientific breakthroughs.

That said, the nature of breakthroughs is that they’re often unexpected. While we might be gearing up for a long journey, there’s always the possibility that a revolutionary discovery could dramatically accelerate our progress. What would that look like? I’m not sure, but I imagine it has to do with agents – the type of AI that does work in the background, without our assistance. Imagine if it became easy for CEOs of large corporations to “hire” agents to do the work of entry-level employees and middle managers. Again, it’s not not plausible.

One thing is certain: the development of AGI will remain a hot topic within the world of AI for months and years to come, and people outside of tech will be talking about it for decades. And I hope more and more people take it seriously – like you, one of the rest of us. Good on you for being educated 👊.

Let’s Learn Something

Key Players in the Race to AGI
I’ve talked a lot about AI companies and LLMs, especially ChatGPT, but I haven’t touched on the different LLMs out there and the companies behind them. I created a little visual to help with this:

It’s important to know about the products and companies for a few reasons:

  1. Each one has its own style, so if you’re an Explorer who uses only one of them, try the others to get more exposure to their capabilities and vibes.

  2. Every big tech company has a player in the race, which is one of the reasons why there’s so much hype around AI. When the biggest companies in the world are investing billions of dollars in something, people tend to notice – and jump on the hype train. This includes China – which we just so happen to be competing with as we race to AGI. I wrote about that extensively here.

  3. Every single one of these companies has stated publicly, either directly or indirectly, that their goal is to achieve AGI. That’s something important to be aware of as they push this technology forward.


AI in the Wild

From call centers to coding floors to the cereal aisle, AI is doing all the things – getting fired, hiring itself, and haggling on our behalf. It’s proof that the AGI question is anything but settled.

Klarna’s AI Chatbot Gets Benched; Humans Return to Customer Support [Free to read]
After bragging in 2024 that its AI assistant could replace 700 service reps, Swedish fintech Klarna (you probably know them as the Buy Now, Pay Later button when shopping online) reversed course this May. CEO Sebastian Siemiatkowski admitted the bot produced “lower‑quality” outcomes and has started rehiring live agents (aka humans) to keep customers happy. It seems the future is human after all?

SignalFire Data Shows Entry‑Level Tech Jobs Already Shrinking [Free to read]
A new TechCrunch deep‑dive into SignalFire’s LinkedIn scraping finds Big Tech cut graduate hiring by 25% in 2024 and startups by 11%. Researchers tie the drop to AI tools that now handle routine coding, reporting, and research – the exact grunt work junior staff once cut their teeth on. If AI eats the first rung of the ladder, where do tomorrow’s engineers start climbing? [ <- you can thank Claude for that line lol ]

Walmart Preps for the Rise of AI Shopping Agents [Free to read]
In a May 29 blog post, CTO Hari Vasudev detailed Walmart’s “agentic AI” playbook: personal bots that plan parties, auto‑restock pantries, and bargain‑hunt on your behalf. The retailer is training in‑house models on its product and logistics data and building protocols so third‑party agents (think OpenAI’s Operator I talked about in this edition) can work smoothly on Walmart.com. Marketing, pricing, and site design will soon cater to algorithms instead of eyeballs. Are we ready for bots haggling with bots over cereal prices? I’m kinda ok with that tbh.

It’s Play Time

Newcomers [AI is new to me]

Special note to our new members: if you have never used ChatGPT and/or do not have access to it, then I highly recommend you read the Play Time section in Edition #1 and follow the steps laid out there before moving forward with this activity.

Remember back in high school when you took three years of Spanish/French/German and now can barely order a taco? Yeah, me too. This week, we’re gonna use ChatGPT’s voice feature to practice a language – and it’s way less stressful than those language apps with the pushy owl 😉.

Here’s what to do:

  1. Open ChatGPT on your phone and tap the microphone icon (bottom right)

  2. Copy and paste this prompt first, then start talking:

    I want to practice [LANGUAGE]. You're a friendly local barista at a café, and I’m a tourist trying to order coffee. Speak to me only in [LANGUAGE] but correct my mistakes gently. When I get stuck, whisper the word I need in parentheses so I can try again. Keep it simple and fun.

  3. Just start talking. Don’t worry about being perfect – that’s the whole point.

Pro tip: If you freeze up (I certainly did my first time), just say “pause” and tell it you’re frozen. Talk to it like you would a tutor. One of the nice tutors. Or if you’re into this sorta thing, tell it to be a mean tutor or an angry French barista. Fun stuff.

Explorers [I’m comfortable with AI]

Who’s ready to see what happens when you feed ChatGPT a massive paper like the one from Narayanan & Kapoor we talked about above? OK, just me? Well, we’re doing it anyway because it’s a good learning exercise for using LLMs to analyze documents – and you can learn more about AGI in the process. 

First, grab the paper by clicking here (it’s free). Download the PDF and upload it to ChatGPT, or just paste the link into your chat.

Part 1: Make sense of an intimidating PDF

Upload that beast to ChatGPT and use this prompt:

I just uploaded a 100-page academic paper. Please do this:
1. Give me the 5 main arguments in plain English (no jargon)
2. Pull one killer quote for each point (under 30 words)
3. Tell me what the authors admit they might be wrong about

Then create a 150-word executive brief that includes: one risk my boss should worry about, one opportunity we could chase now, and one uncomfortable truth about AI hype.

Part 2: The fun part

After it’s done with the serious stuff, try this:

Now pretend you're a tech startup called "ZetaMind" announcing you've achieved AGI. Write a ridiculously over-the-top 150-word press release claiming your AI is conscious. Then immediately write a 150-word response from a cranky professor debunking every single claim. Make both funny but weirdly plausible.

What you’re really doing here: Testing ChatGPT’s ability to digest complex material and flip between different writing styles. It’s like watching someone read War and Peace then immediately do standup comedy about it. Sorta.

If you get something particularly hilarious from the AGI press release, screenshot it and send it my way. I collect these things.

That’s all for Edition #22 and our series “What the heck is AI, anyway?”

See you back here in two weeks for a new topic. Until then, let us know if you have any questions or feedback for us – just reply to this email, we’d love to hear from you.
