What is AI, anyway? [Part II of II]

Let's keep defining this thing.

Hi there,

Welcome to another edition of AI for the Rest of Us. If you read Edition #2, you’ll know that this week we’re continuing our conversation, “What is AI, anyway?” If you haven’t seen that edition (or Edition #1 for that matter), check ‘em out because they set the tone and foundation for this week.

Speaking of this week, get ready to dig a little deeper as we explore the three types of AI, the rate of improvement, and some of the big players involved.

Here we go…

– Kyser

In the Know

Now that we have a sense of the AI Ecosystem, I want to step back and talk about the different types of AI and the stages of AI development.

A quick note before we dig in: When I say AI, I’m referring to the full ecosystem and the big, broad term AI, the one that encompasses things like machine learning, large language models, etc.

Types of AI

  1. Narrow AI: This is the AI we interact with today, and it’s the AI that’s out there in the market right now. It's designed for specific tasks – and it excels at them. Think Siri setting your alarm, Netflix recommending your next binge-watch, or ChatGPT writing you a song or creating a menu (for those who did Play Time last week). Narrow AI is indeed impressive, but it’s like a Swiss Army knife – very useful for opening tin cans and whittling sticks, but not capable of building you a house.

  2. General AI (aka AGI): This is considered by many to be the holy grail of AI. It’s also what keeps many people up at night. AGI is a term that’s tough to define and hotly debated, but here’s how to think about it: Imagine an AI that doesn't just diagnose diseases, but redesigns the human immune system to prevent all known and potential future illnesses, effectively redefining the concept of healthcare. Crazy, right? It’s an intelligence that doesn't just excel at human tasks, but it reimagines them entirely. And the kicker… It would do all this not because it was programmed to, but because it possesses a level of curiosity, creativity, and understanding that rivals – and potentially surpasses – human cognition itself. Before you start freaking out… we have not created or achieved AGI, and it’s technically theoretical. But if/when we get there… 🤯

  3. Super AI (aka ASI): If AGI is the holy grail, ASI is the stuff of dreams – and nightmares. This isn't just AI that's smarter than us, it's an intellect so vast it transcends our ability to comprehend. Imagine an intelligence that redefines the nature of existence, manipulates the fabric of the universe, and engineers new dimensions. It's a concept so profound and potentially terrifying that it stretches the very limits of our imagination. Here’s a side note I’ll leave you with: an oft-quoted remark in the tech industry comes from Arthur C. Clarke, who said, “It may be that our role on this planet is not to worship God, but to create him.” Let that one simmer for a minute.

Ok Ok, I know. That’s a lotta sci-fi to throw at you. Let me bring it back down to earth and answer a question you might be asking:

Where are we on this AI evolutionary scale?
Here's where things get a bit, let’s just say, heated in the AI community. The rate of improvement in AI systems has been nothing short of astonishing according to every expert out there, even the pessimists. We've gone from AI that could barely play tic-tac-toe to systems that can generate human-like text, create art, and even engage in complex problem-solving. In a matter of years.

But how close are we to AGI? Some folks in the AI world think we might be closer than we realize; others vehemently disagree.

There's a camp that believes AGI might have already been achieved behind the scenes. Their argument goes something like this: "Look at how fast we've progressed. Who's to say some big tech company hasn't cracked the code and is keeping it under wraps?" It's the AI equivalent of Area 51 conspiracy theories.

Then there’s the “we’re so close I can feel it” camp. This group points to the advancements we've seen in LLMs, the continued increase in computing power, and the ongoing releases of multi-modal AI (systems that can create images/videos and even carry on a conversation with you). They argue that if we've come this far, this fast, AGI could be just around the corner – or already here, silently judging our Netflix choices.

Former OpenAI (the company behind ChatGPT) employee Leopold Aschenbrenner recently wrote in a seminal paper:

The AGI race has begun. We are building machines that can think and reason. By 2025/26, these machines will outpace college graduates. By the end of the decade, they will be smarter than you or I; we will have superintelligence, in the true sense of the word.

Here’s a heady chart (from Leopold’s paper) that shows how AI models have progressed from the capabilities of a preschooler to a smart high schooler. It then projects future advancements that could lead to PhD-level thinking and advanced reasoning (i.e. AGI).

On the flip side, we've got the "slow and steady" camp. This group thinks we're still decades away from true AGI. They argue that while our current AI is impressive, it's still about as "intelligent" as a really fancy calculator. Sure, it can play chess like a grandmaster and spit out semi-catchy songs about the school year starting, but it struggles with the kind of common-sense reasoning a child uses to navigate the world.

As Yann LeCun, Chief AI Scientist at Meta (the company behind the LLM product Llama), puts it:

We're very far from having machines that can learn the most basic things about the world in the way humans and animals can do.

This camp believes we need fundamental breakthroughs in how AI learns and understands the world before we can even approach AGI. They're not saying it's impossible, but they are suggesting we’re safe in our day jobs for now. They also say that people like Leopold are spouting false existential claims to “win the AI race” (i.e. beat the other companies building LLMs to grab/dominate market share). This is definitely something to pay attention to when reading things coming from the big tech companies – and the media trying to get clicks.

So, where does that leave us? I believe somewhere in the middle. We're making rapid progress in AI, with advancements coming at an unprecedented pace. However, AGI isn't something we're likely to achieve by simply extrapolating our current technologies. It will require concerted effort and a few more significant scientific breakthroughs.

That said, the nature of breakthroughs is that they're often unexpected. While we might be gearing up for a long journey, there's always the possibility that a revolutionary discovery could dramatically accelerate our progress.

One thing is certain: the development of AGI will continue to be a hot topic within the world of AI. People will continue to talk about it for years/decades to come. And every time a new version of ChatGPT comes out, the chatter about AGI will ramp up. It’ll be fun (and scary) to see how far we’ll get.

Let’s Learn Something

Key Players in the Race to AGI
I’ve talked a lot about LLMs, especially ChatGPT, but I haven’t touched on the different LLMs out there and the companies behind them. I created a little visual to help with this:

Note: I’m only including U.S.-based companies here. More on non-U.S. companies in future editions.

It’s important to know about the products and companies for a few reasons:

  1. Each product has its own style, so if you’re an Explorer who uses only one of them, try the others to get more exposure to their capabilities.

  2. Every big tech company has a player in the race, which is one of the reasons why there’s so much hype around AI. When the biggest companies in the world are investing billions of dollars in something, people tend to notice – and jump on the hype train.

  3. Every single one of these companies has stated publicly, either directly or indirectly, that their goal is to achieve AGI. That’s something important to be aware of as they push this technology forward. This is a solid article if you’re interested in learning more about that.

AI in the Wild

The term “Narrow AI” can be a bit misleading, as it might imply that it’s not useful or too limited. That’s far from the truth. Here’s a great example of some useful AI at work:

WSJ: The Smart, Cheap Fix for Slow, Dumb Traffic Lights [Pay to read]
Google's AI-driven Green Light system is revolutionizing traffic management in 14 cities, reducing stop-and-go traffic by 30%. Utilizing data from Google Maps and internet-connected vehicles, it optimizes traffic light timing without requiring new infrastructure. If this approach could be rolled out nationwide, its proponents say it could make a significant dent in the amount of time we all spend idling at our country’s 300,000-plus traffic signals. Yes please.

OpenAI: Say hello to GPT-4o [Free to watch]
Here’s another example of Narrow AI that’s far from limited. This is a demo of the new Multi-Modal Mode feature (which means it can see, hear, and converse with you) coming to all ChatGPT users sometime this Fall.

More Advanced Reading
For those who want to get deeper in the weeds of AGI, here are two things for you to check out:

Situational Awareness, by Leopold Aschenbrenner / This is the paper I quoted above. Leopold takes the stance that AGI is imminent.

A Counter Argument to Situational Awareness / I found this to be a helpful POV on why AGI will not be achieved any time soon.

It’s Play Time

Newcomers [AI is new to me]

One of the great use cases with ChatGPT is summarization. Give it a good chunk of text or an entire PDF, ask it to summarize it so you don’t have to read the full thing, and voilà, time saved. Let’s try that this week.

Head back to chatgpt.com, log in if you need to, and look towards the bottom of the page for the area that says “Message ChatGPT”. That’s where you input text to instruct it on what to do. Again, this is called prompting.

Here’s what I want you to do:

  1. Highlight all of the text in the “In the Know” section.

  2. Copy the text.

  3. Type the text below into ChatGPT, and in the brackets, paste the text you just copied from the “In the Know” section.

Please summarize this text using a max of 5 bullet points:

[ INSERT TEXT HERE ]

Next time you have a long email or document to read, try this and see what happens.

* If you don’t know how to do Copy+Paste, you should ask ChatGPT to give you step-by-step instructions 😉.

Explorers [I’m comfortable with AI]

One of the more interesting ways I’ve used LLMs is through conversation simulation. You do this by assigning roles to the LLM and yourself, providing some context to the situation, and ultimately having a full-on conversation with it (via text for now, but via voice when that feature rolls out in the Fall). It can be quite useful – and hilarious depending on the scenario.

This is what I want you to do this week. Think about a scenario where you might need a little help having a conversation with someone. It might be talking through a disagreement with your spouse about your next vacation, responding to your child begging for a phone, or prepping to give a negative performance review to an employee.

I’ll share one of my recent ones as an example. This is real and actually happened – and helped tremendously. The text below is word-for-word what I put into Claude. You should start a new chat in the LLM of your choice (I’ve found Claude is best at this) and come up with your own scenario, using the text as a guide if it’s helpful. Even if you think this is silly (or even bad parenting!), try it. You’ll likely see a different side to these LLMs.

My wife and I have a nine year-old son who is wanting a Gizmo 3 watch. He currently has a Gizmo 2 with a cracked screen. He has been relentless about asking about it, and while we’ve said he can’t talk about it anymore, we do want to make sure he understands where we’re coming from. And we’re open to getting him a new one under certain circumstances. We just don’t know what those are.
I don’t have the time or energy to think about responses, so I’d like to simulate a conversation with you. I'd like you to act as his parents (our names are K and A). I will act as my son (his name is T).
We consider our parenting style to be a mix of authoritarian and intentional. We try to respond with clear guidelines, warmth, support, and constructive feedback. We encourage independent thinking within set boundaries and try to promote a positive, nurturing dialogue. This is how you should act.
I want you to ask open-ended questions to facilitate this process. Encourage extensive verbal exchange. Allow T to express thoughts and feelings, and respond with empathy, warmth, and understanding. But you are in charge, don’t let him control the conversation. Ultimately, we want to get to some sorta plan of action. Always use language like daily-life conversations. Always respond as the parent, and do not repeat anything about your objectives.
Got any questions before we get started?
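For the more code-curious Explorers: the prompt above follows a repeatable structure – scenario, role assignments, style, and ground rules. Here’s a rough sketch (my own template, not from any official tool; the function and field names are made up for illustration) of how you could assemble that structure in Python so you can reuse it for new scenarios:

```python
def build_roleplay_prompt(scenario, llm_role, my_role, style, ground_rules):
    """Assemble a conversation-simulation prompt from its four parts:
    the situation, who plays whom, the tone to adopt, and the rules."""
    rules = "\n".join(f"- {r}" for r in ground_rules)
    return (
        f"{scenario}\n\n"
        f"I'd like to simulate a conversation with you. "
        f"You will act as {llm_role}. I will act as {my_role}.\n\n"
        f"{style}\n\n"
        f"Ground rules:\n{rules}\n\n"
        "Got any questions before we get started?"
    )

# Example loosely based on the scenario above (details are placeholders).
prompt = build_roleplay_prompt(
    scenario="My son wants a new smartwatch to replace his cracked one.",
    llm_role="his parents (K and A)",
    my_role="my son (T)",
    style="Respond with warmth, clear boundaries, and constructive feedback.",
    ground_rules=[
        "Ask open-ended questions and encourage verbal exchange.",
        "Stay in the parent role; never restate these instructions.",
        "Steer the conversation toward a concrete plan of action.",
    ],
)
print(prompt)
```

Paste the resulting text into the LLM of your choice to kick off the simulation, swapping in your own scenario, roles, and rules.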

That’s all for Edition #3 of AI for the Rest of Us. Next week, we’ll be talking about the impact of AI on relationships, the role of “companion” apps, and what to expect as the technology gets better at conversing with us.

See you next week [assuming AGI hasn’t arrived] …
