AI News: ChatGPT - Separating Fact from Fiction on AGI Capabilities
Remember that time you were super impressed by your friend who could recite the entire periodic table backwards, but when asked to plan a road trip, couldn't get past packing snacks? That's kind of how the artificial intelligence world is feeling about GPT-4. It's like an overachieving quiz show contestant that's memorized the encyclopedia but can't tell you how to navigate to the nearest gas station.
Our story today is all about the leaps and bounds of progress made by the brainy folks at OpenAI in the form of GPT-4, the newest kid on the block in text generation. Like an upgrade on a steroid-infused muscle car, GPT-4 has outshone its predecessor, GPT-3.5 (the model behind the original ChatGPT), with its next-level capabilities. It's like that kid who got an A+ on their science fair project and went on to invent a new type of renewable energy, all before high school graduation.
Impressed? So was Microsoft. Its researchers watched GPT-4 in action and felt like they were looking at tiny little sparks of artificial general intelligence (AGI) – machine intelligence as broad and flexible as a human's, if you will. But like all things that seem too good to be true, a bunch of doubters in the AI research community are calling BS on the term AGI, labeling it as vague and basically a sparkly carrot dangled in front of the idea of super-intelligent machines.
Now, GPT-4 might be a bit of a whizz kid in many ways, but it does share one thing in common with the rest of us – it's not perfect. It might be able to translate Swahili to German or write a haiku about quantum physics, but it's a bit of a goldfish when it comes to memory and planning. You're more likely to get a jumbled mess than a well-thought-out plan from our AI friend here.
Despite this, GPT-4 is still, in its own way, pretty spectacular. It's not going to lead a human revolution against our robot overlords, but it sure does make for a great conversation starter at cocktail parties. And really, isn't that all we can ask for?
Now let's move on to our main event, our AI superhero – GPT-4. This titan of text generation has been knocking socks off with its awesome abilities. After being force-fed a smorgasbord of text and code, it's taken its training wheels off and started making some pretty impressive text predictions. And I'm not just talking about accurately finishing off a limerick – this is stuff that's left even the Microsoft researchers feeling a little starstruck.
Take our friend Sébastien Bubeck, a Microsoft researcher, for example. One late-night curiosity about whether GPT-4 could draw a unicorn using TikZ, a nerdy programming language for drawing diagrams, led to an eye-opening revelation. The code it spat out was like Frankenstein's monster – a weird, jumbled mess of shapes that somehow, against all odds, formed a recognizably unicorn-like image. To Bubeck, this was a sign of something bigger. Maybe, just maybe, we were looking at something that could be considered intelligent. And cue the dramatic music.
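For readers who've never met TikZ: it builds pictures out of geometric primitives (ellipses, circles, lines), which is exactly why code for a "unicorn" tends to read as a pile of shapes. Here's a minimal, hypothetical sketch of that kind of primitive-assembly – this is not Bubeck's prompt or GPT-4's actual output, just an illustration of what drawing an animal in TikZ involves:

```latex
\documentclass[tikz]{standalone}
\begin{document}
\begin{tikzpicture}
  % Body: a horizontal ellipse
  \draw (0,0) ellipse (1 and 0.6);
  % Head: a small circle offset up and to the right
  \draw (1.2,0.7) circle (0.35);
  % Horn: a thin triangle on top of the head
  \draw (1.3,1.0) -- (1.5,1.6) -- (1.45,1.0) -- cycle;
  % Legs: four straight lines hanging from the body
  \draw (-0.6,-0.5) -- (-0.6,-1.2);
  \draw (-0.2,-0.6) -- (-0.2,-1.2);
  \draw (0.3,-0.6)  -- (0.3,-1.2);
  \draw (0.7,-0.5)  -- (0.7,-1.2);
\end{tikzpicture}
\end{document}
```

Squint and it's unicorn-adjacent. The point of Bubeck's test was that producing even a crude drawing this way requires mapping the concept "unicorn" onto spatial relationships between shapes – something raw text prediction wasn't obviously supposed to be able to do.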
This sparks a debate that's making scientists and tech geeks alike break into a cold sweat – how intelligent is AI getting, really? And should we trust this seemingly sentient software? Since ChatGPT hit the scene, powered by GPT-3.5, the world has been in awe of its ability to wax lyrical about almost anything. But this mix of awe and concern has raised important questions about the risks and potential of AI's new capabilities.
The fact that AI is not like us, and yet seems so intelligent, is still something to marvel at. “We’re getting this tremendous amount of raw intelligence without it necessarily coming with an ego-viewpoint, goals, or a sense of coherent self,” says Noah Goodman, a Stanford researcher quoted in the piece. “That, to me, is just fascinating.”
Read the Wired article: Some Glimpse AGI in ChatGPT. Others Call it a Mirage