- DeepMind co-founder Mustafa Suleyman recently spoke with MIT Technology Review about setting boundaries on AI.
- He said we should deny AI the ability to update its own code.
- The issue of how to regulate AI is a growing concern among researchers, legislators, and the tech press.
The rapid development of AI has raised questions about whether we are programming our own demise. As AI systems become more powerful, they could pose a growing threat to humanity if their goals stop aligning with ours.
To avoid such an apocalypse, Mustafa Suleyman, co-founder of Google's AI division DeepMind, said that we should rule out certain capabilities when it comes to artificial intelligence.
In a recent interview with MIT Technology Review, Suleyman suggested that we should rule out "recursive self-improvement," the ability of an AI to make itself better over time.
“You don’t want to let your little AI go off and update its own code without your supervision,” he told MIT Technology Review. “Maybe it should even be a licensed activity – you know, like handling anthrax or nuclear material.”
And while there is considerable focus on AI regulation at the institutional level — just last week, tech execs including Sam Altman, Elon Musk, and Mark Zuckerberg gathered in Washington for a closed-door forum on AI — Suleyman added that what matters most to individuals is setting limits on how their personal data is used.
"Essentially, it's about setting boundaries, limits that an AI can't cross," he told MIT Technology Review, "and ensuring that those boundaries create safety all the way from the actual code to the way it interacts with other AIs, or with humans, to the motivations and incentives of the companies creating the technology."
Last year, Suleyman co-founded AI startup Inflection AI, whose chatbot Pi is designed to be a neutral listener and provide emotional support. Suleyman told MIT Technology Review that while Pi is not as "spicy" as other chatbots, it is "reliably controllable."
And Suleyman told MIT Technology Review that while he's "optimistic" that AI can be effectively regulated, he's not worried about a single doomsday event. He told the publication that "there are 101 more practical issues" we should focus on, from privacy to bias to facial recognition to online moderation.
Suleyman is one of many experts in the field raising their voices about AI regulation. Demis Hassabis, another co-founder of DeepMind, has said that artificial general intelligence must be developed "carefully using the scientific method," with rigorous experimentation and testing.
And Microsoft CEO Satya Nadella has said that the way to avoid "runaway AI" is to make sure we start using it in categories where humans are "unequivocally, undeniably, in charge."
Since March, nearly 34,000 people, including "godfathers of AI" Geoffrey Hinton and Yoshua Bengio, have also signed an open letter from the nonprofit Future of Life Institute calling on AI labs to pause the training of any systems more powerful than OpenAI's GPT-4.