On this week’s episode of Yahoo Finance’s The Crypto Mile, our host Brian McGleenan sits down with cognitive scientist Gary Marcus to discuss the realities and risks of artificial intelligence (AI). While the term “p(doom)”, referring to the probability of an AI-induced human extinction event, is causing a stir online, Marcus brings a more nuanced perspective to the conversation. Rejecting apocalyptic predictions, he explains that AI tools like ChatGPT are unlikely to cause human extinction but could still pose catastrophic risks. He prefers to frame the question as one of catastrophic rather than existential risk, highlighting the possibility of events that could significantly harm the human population without ending it.
Video transcript
Brian McGleenan: In this week’s episode of Yahoo Finance’s The Crypto Mile, we’re joined by Gary Marcus, cognitive scientist, best-selling author, and influential voice on artificial intelligence. Today, we’ll discuss how useful generative AI really is and whether the rapid advancement of this technology is groundbreaking or just hype.
Finally, we’ll ask Gary about the existential risks posed by rapid AI development, risks that some commentators have framed in terms of p(doom), the probability of an extinction-level event caused by this technology.
(audio logo)
Gary, welcome to this week’s episode of Yahoo Finance’s The Crypto Mile.
Gary Marcus: Thanks for having me.
Brian McGleenan: Now, is generative AI living up to the hype?
Gary Marcus: Yes and no. So there is a lot of hype. It does some really useful things. Does it live up to all that hype? No. I think the best thing it does right now is help computer programmers type faster. It fills things in for them, it structures the code a bit. It’s not perfect, it makes mistakes, but coders are used to correcting errors. They call it debugging. If you can’t do that, you’re not a coder. And so that’s a good use case.
If you ask it for medical advice, it’s going to tell you things, and some of what it tells you it will simply make up. If you ask it for travel advice, it might send you to a place that doesn’t exist. It will be perfectly grammatical, fun to play with, but not necessarily true.
Brian McGleenan: Well, you know, I was looking at some data online, and since May, ChatGPT has seen a 29% drop in visits to its website. Do you think this gives any indication of the performance of generative AI in general?
Gary Marcus: Well, some of it is kind of a novelty, kind of a fad thing. Like, you know, you may not be old enough to remember Pet Rocks; some of your audience will be and some won’t. But Pet Rocks were very popular for a while, and Furbies and Tamagotchis, and a million other things became popular. People play with them, they’re exciting, and then they’re like, yeah, okay, I get it.
I think in the early days, everyone wanted to try ChatGPT and everyone wanted to have fun. Hey, look, I wrote this thing on ChatGPT. But that’s not really funny anymore. Now you want to know if it really works for you. And a lot of banks, and Apple, and others have tried it internally, and they’re finding there are data leakage issues, where customer data comes out, and there are the issues we call hallucinations, where it just makes stuff up. Maybe we should have called them confabulations.
And so, you know, it’s really a case-by-case basis: does it really work for me? If you’re writing fiction and have writer’s block and need some weird idea, it might just give it to you. It can be used for brainstorming. But not everyone needs that. The other thing is that college students and high school students went away for the summer, and when they come back, some of them may switch to one of the open-source options.
And so it’s not entirely clear what the long-term picture is. OpenAI is projected to make a billion dollars this year, but we don’t know the dates: was that projection made before the peak or after the peak? And a lot of businesses thought they had to give it a try. Now some of those businesses are saying, well, this looks like a five-year project before it actually works well enough that we can trust it.
Brian McGleenan: So do you think the initial excitement was really built on hype?
Gary Marcus: Well, I think there was a level of craziness in the beginning, and some people like me said, wait, slow down, there are problems. These things make things up. They are not reliable. Data leaks out. You can’t really trust them. But we were maybe 2% of the voices; the other 98% were like, oh my god, I can’t believe this, it’s so amazing.
And the reality is somewhere in between. Depending on what you do with it, it can be amazing, and depending on what you do, it can be pretty bad and unreliable. We have had many instances where media outlets have tried to use it to write their stories, and they end up publishing stories that contain falsehoods. And that’s not really a good thing if you’re the press.
Brian McGleenan: If generative AI is relatively ineffective for many tasks, why is there a need to pause AI development? Why do we think we are creating misaligned, potentially deceptive AIs that could harm humanity?
Gary Marcus: When some people asked for a moratorium, and I signed a letter that called for one, it called for a moratorium on one thing: GPT-5, which we know is going to be unreliable and problematic and so on. No one said we should stop researching AI altogether. Those of us who signed the letter said, let’s try to make AI more reliable and trustworthy, so it won’t cause problems. We didn’t say don’t build AI at all. No serious person says that.
And certainly the letter signed by thousands of people was not calling for an outright moratorium. So understand that first. We need more research so we can make AI we can trust, not just AI that generates content. Now, the big problem you’re asking about, some people joke about it, they call it p(doom): how likely is it to kill us all? I don’t think these machines are going to literally wipe out the human species. There are people who have argued that, even arguing that it’s inevitable: because the machines will be smarter than us, they will want to kill us.
I think this is not a good argument. Machines may eventually be smarter than us, and indeed, intelligence has many dimensions. But that does not mean they are going to be motivated to kill us. Still, there are many dangerous applications. Bad actors are already using this stuff. They are already making deepfakes. They will try to manipulate markets. This could lead to accidental war, even accidental nuclear war. So there are many things to worry about. Literal extinction of the species, though, is highly unlikely.
It’s really hard. We are a very persistent species. We’re not going anywhere. So rather than existential risk, I would talk about catastrophic risk. Existential means we just disappear; I don’t think that will happen, but disasters can come from AI. And the fundamental problem is that the tools we currently use are black boxes, which means we don’t understand what’s going on inside them. They are difficult to debug. And we can’t build guarantees around them.
With airplanes, there are formal guarantees, formal verification that under certain conditions the software will do a certain thing. With the currently popular black boxes, these large language models, nothing is formally guaranteed. In fact, one week you might ask, is 7 a prime number? And it gives you the right answer. And the next week it gives you the wrong answer.
Or maybe 7 is fine because there’s enough data for that, but for, say, 56,317, maybe it’ll get it right and maybe it won’t. And it might get it right on Tuesday and not on Thursday, and so on. So these systems have an inherent instability, an unreliability I should say, that makes them really hard to engineer around. And safety is something you have to be able to engineer around. We don’t know how to do that yet.
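As a minimal sketch of the contrast Marcus is drawing (an illustration, not something shown in the interview), consider a trial-division primality test in Python: a conventional program like this returns the same, checkable answer every time it runs, which is exactly the guarantee black-box language models lack.

```python
# A deterministic primality check: unlike a black-box language model,
# this always gives the same answer for the same input, and the answer
# can be verified against the definition of primality.

def is_prime(n: int) -> bool:
    """Trial-division primality test."""
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    d = 3
    while d * d <= n:   # divisors only need checking up to sqrt(n)
        if n % d == 0:
            return False
        d += 2
    return True

print(is_prime(7))       # True, every single run
print(is_prime(56317))   # False, every single run: 56317 = 199 * 283
```

Incidentally, the deterministic check settles the example number from the interview once and for all: 56,317 is composite, and it is composite on Tuesday and on Thursday alike.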
Brian McGleenan: So this statistical approach, word generation, predictive text at massive scale, is the typical approach to creating artificial intelligence right now. But is there another approach, one where you don’t use the random predictive-word stuff and you go back to rational modeling and things like that?
Gary Marcus: In fact, there is another long tradition in AI called symbol manipulation. It goes back to the early 1950s, and many of us think it still has value. Actually, one of its greatest proponents, Doug Lenat, died a few days ago. He spent over 40 years on a project that tried to build common sense into AI in the most literal sense. I don’t think his project was completely successful, but I think it points in the right direction.
Ultimately, we may need to reconcile the statistical, predictive-text approach that is currently so popular but deeply flawed with the symbolic approach, which has its own problems: it can be unwieldy, and nobody has built it at scale. We’re missing some basic ideas about how to bring those two worlds together. Eventually, we’ll get there. And I think, eventually, AI will pay off on its promise. Medical problems like Alzheimer’s and depression that we humans couldn’t figure out on our own will be solved. But right now, the AI we have isn’t ready to do that.
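As a rough sketch of what symbol manipulation means in practice (illustrative only, and in no way Lenat’s actual system), here is a tiny forward-chaining inference engine: knowledge is stored as explicit facts and rules, and conclusions follow by logical deduction rather than by statistical word prediction.

```python
# Facts are (predicate, subject) pairs; rules say "every X is a Y".
facts = {("bird", "tweety"), ("penguin", "opus")}
rules = [
    ("penguin", "bird"),   # every penguin is a bird
    ("bird", "animal"),    # every bird is an animal
]

def forward_chain(facts, rules):
    """Apply rules repeatedly until no new facts can be derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in rules:
            for pred, subject in list(derived):
                if pred == premise and (conclusion, subject) not in derived:
                    derived.add((conclusion, subject))
                    changed = True
    return derived

print(forward_chain(facts, rules))
# derived facts include ("bird", "opus"), ("animal", "tweety"),
# and ("animal", "opus"), each justified by an explicit rule
```

Every conclusion here can be traced back to a rule and a fact, which is the transparency the black-box statistical approach lacks, and the scaling difficulty Marcus mentions is why neither approach suffices on its own.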
Brian McGleenan: When people say an AI wrote this song, or wrote this comedy script and it made me laugh, is there a little bit of you that feels sad, that feels it’s a bit of a rip-off of what it means to be human?
Gary Marcus: I mean, I could see that. Right now, I don’t think you’ll get great comedy from any of these systems. You can get little poems, kind of doggerel, that are interesting and things like that, but I’ve yet to see a great work of art from one of these machines. And I don’t think that will happen anytime soon. It may eventually happen.
Brian McGleenan: Well, Gary Marcus, thanks so much for coming on this week’s episode of Yahoo Finance’s The Crypto Mile.
Gary Marcus: Thank you so much for having me.
(theme music)