Crypto Mile: Disaster Could Be Caused by AI


Artificial intelligence (AI) tools like ChatGPT may not cause human extinction, but could lead to international catastrophes, a leading scientist has warned.

A term called p(doom) is trending online; ‘p’ is the probability of a human extinction-level event resulting from the rapid advancement of AI.

However, cognitive scientist Gary Marcus told Yahoo Finance UK that these doomsday predictions are unlikely to come true. Instead, he predicts that AI tools could lead to a series of disasters driven by deepfakes and market manipulation.

Read further: ChatGPT and Stock Picking: Hedge Fund Manager Shares AI Trading Strategy

Speaking on Yahoo Finance’s The Crypto Mile, Marcus said, “p(doom) refers to the possibility that AI will kill us all. But I don’t think these machines are going to literally wipe out the human species. AI could cause a catastrophe; the danger is catastrophic rather than existential.”

In a recent Substack post, Marcus said that the term p(catastrophe) is more appropriate. “A catastrophe is a chance event that kills one percent or more of the population,” he said.

Read further: Sovereign Agent: Your Own Personal AI Assistant? | Crypto Miles

Marcus describes AI-related disasters as human-caused. “There are many dangerous applications, and bad actors are already using this stuff. They are already making deepfakes and can try to manipulate the market. Such incidents can lead to accidental warfare,” he added.

The cognitive scientist has warned against the spread of AI-generated deepfakes. He said this AI-enhanced disinformation is already being used to deceive voters and to scam people by imitating other people’s voices. Speaking on the Aventine podcast, Marcus described AI-generated disinformation as “a very real threat to our democracy, and the way we’re dealing with it now doesn’t completely diminish it.”

It’s a threat recognized even by Microsoft co-founder Bill Gates, who believes deepfakes could disrupt political processes around the world.

“Deepfakes and misinformation generated by AI can undermine elections and democracy,” Gates said in a July post on his blog. “On a larger scale, AI-generated deepfakes can be used to try to tilt elections. Of course, it doesn’t take sophisticated technology to cast doubt on the legitimate winner of an election, but AI will make it easier.”

An AI-generated video posted in April of this year hinted at deepfakes’ disruptive potential. In the clip, former US Secretary of State Hillary Clinton appeared to endorse Florida Governor Ron DeSantis for the presidency. According to Reuters, the video is an AI-generated deepfake, and there is no evidence that Clinton ever made such an endorsement.

Is AI increasing the risk of human extinction?

In contrast to Marcus’s view, decision theorist Eliezer Yudkowsky has said that the rapid development of AI technology could lead to an end-game scenario for the human species. Yudkowsky sees human extinction not as a worst-case scenario but as the default outcome.

In an opinion piece for Time magazine, Yudkowsky warned that AI could achieve levels of intelligence beyond human comprehension. The theorist takes a pessimistic view of humanity’s plight, cautioning that such an AI could use the atoms we are made of for something else.

Read further: Spot bitcoin ETF approval unlikely this year, analysts say

Marcus, by contrast, maintains that humanity is an enduring species and that the predictions of AI doomsayers are too extreme. Still, the cognitive scientist was a signatory to last year’s open letter calling for a moratorium on the development of further generative AI models.

“When some people called for a moratorium, I signed a letter calling for one. It was only about GPT-5; we knew GPT-5 was going to be unreliable and problematic.

“Nobody said let’s stop AI altogether. Those of us who signed the letter said, let’s make AI more reliable and trustworthy so it won’t cause problems. We didn’t say don’t do AI at all,” he added.

Marcus also identified fundamental flaws in the design of current AI systems. “The tools we currently use are called black boxes, which means we don’t understand what’s going on inside them, so they’re hard to debug, and we can’t guarantee their safety,” he said.


