Friendly AI Will Still Be Our Master: Why We Shouldn’t Become Pets of Super-Intelligent Computers

(You can hear Robert Sparrow discuss the dangers of artificial intelligence with Waleed Aly and Scott Stephens on The Minefield.)

When asked about humanity’s future relationship with computers, Marvin Minsky, one of the founders of the field of artificial intelligence (AI), famously replied, “If we’re lucky, they might decide to keep us as pets.”

Although this view is not universally shared among computer scientists, serious, intelligent people believe that AIs will enslave, perhaps even destroy, humanity. The Swedish philosopher Nick Bostrom has written an entire, well-known book discussing the threat posed to humanity by AI.

One might think it would quickly follow that we should give up the pursuit of AI. Instead, those who are concerned about the existential threat posed by AI typically frame it as the "friendly AI problem." Broadly speaking, the question is how we can ensure that the AIs that evolve from the first AIs we create will continue to empathize with and serve humanity, or at least consider our interests. More colloquially: how can we make sure AI doesn't eat us?

It's hard to know how seriously to take this stuff. While a group of experts, often with doctorates in physics, mathematics, or philosophy, appear to be genuinely concerned about "superintelligence," many computer scientists and others working in artificial intelligence think the whole discussion is nonsense, holding that machines are unlikely to become truly intelligent in the foreseeable future.

What should be clear, however, is that if there is some risk that superintelligence will emerge, solving the "friendly AI problem" will not make the prospect of becoming AI's pets any more attractive.

To understand precisely why being pets of AI is not a status to which humanity should aspire, we must turn to the "neo-republican" philosophy of Philip Pettit. Republicanism is well suited to examining the ethical issues surrounding AI because it centers on the relationships between power, freedom, reason, and status.

Freedom from dominance

The basic intuition of republicanism, as articulated by Pettit, concerns the nature of liberty. According to republicans, to be free it is not enough that no one prevents you from acting as you choose: one's freedom of action must be "resilient" or "robust." In particular, one is not free if one can act as one wishes only at the sufferance of the powerful. To be free, one must not be "dominated." A key feature of this account is that not all interference counts as domination; only interference that is arbitrary or uncontrolled does. According to Pettit:

An act is perpetrated on an arbitrary basis … if it is subject just to the arbitrium, the judgment or decision, of the agent; the agent was in a position to choose it or not choose it, at their pleasure … without reference to the interests or the opinions of those affected.

Given the republican account of the relationship between power and freedom, we can see clearly what is wrong with becoming the pets of AI. Humanity would become the plaything of AI, able to act only at the pleasure of intelligent machines. To be a pet is to be mastered, even if the owner lets the pet have the run of the house.

It might be objected that a friendly AI would only interfere in our lives for our own good, and thus that its use of its power would not count as domination. However, a benevolent dictator is still a dictator. The exercise of power in accordance with our interests is compatible with our freedom only if we can resist it. Unfortunately, the literature on superintelligence suggests that a super-intelligent AI would be able to defeat any human attempt to limit its power. The only thing stopping a friendly AI from eating you is that it doesn't want to.

Admittedly, some formulations of the friendly AI problem imply that solving it requires guaranteeing that AI will never act against the interests of humanity. Given the profound difficulties involved in restraining the activities of a superintelligence, this would require that it come into existence with the desire to serve humanity's interests and that it never change its own motivations.

However, once again, a guarantee that AI will not "eat" us is not enough to establish that co-existence with superintelligence is compatible with human freedom. As long as it remains true of the machine that, if it wanted to eat us, it could, we will still be subject to its whims and therefore unfree.

What would be necessary to preserve human freedom is that a friendly AI could not act against humanity's interests, perhaps because its design makes it impossible for it ever to want to. It is unclear whether the existence of such strict limits on what an AI is capable of willing is compatible with its counting as an agent and, therefore, as "truly" intelligent. Regardless, it is doubtful whether we could impose such limitations on a superintelligence, even if doing so were possible in principle. Moreover, locking in the AI's motivations would increase the risk of things going wrong should the machine come to believe that its interests diverge from ours. What the republican conception of freedom shows, then, is that there is a profound tension between the freedom of AI and our own freedom.

Pettit's work provides us with another tool for examining the relationship between power and freedom, one which also highlights the tension between super-intelligent AI and human freedom. In a republic, citizens meet one another as equals of a certain kind. Knowing that they are safe from the arbitrary rule of others, citizens need not bow and scrape before their "superiors." Asking whether people can look each other in the eye in everyday encounters tells us whether they dominate others and, on the assumption that people have an accurate sense of their relationships with others, whether they are dominated.

An AI is unlikely to have eyes to look into. Gazing into the ubiquitous lenses of CCTV cameras, we could never be confident either that we were meeting its gaze or that it regarded us as equals. We cannot have an equal relationship with a superintelligence because we will not be its equals.

Is negative freedom enough?

Solving the friendly AI problem will not change the fact that the advent of super-intelligent AI would be disastrous for human freedom. Pets of kind owners are still pets. As long as AI has the power to interfere with humanity's choices, and the capacity to exercise that power without reference to our interests, it will dominate us and thereby render us unfree.

Why has so much attention been paid to the friendly AI problem and so little to the fact that the very existence of such a power relationship between us and a super-intelligent AI would be enough to enslave us?

In part, one suspects, this is due to the hold of the theory of "negative liberty" on our culture. Advocates of negative liberty hold that all that is required for one to be free is that no one actually prevents one from doing what one wants, even if they could do so if they wished. The theory is blind to the effects of inequalities of power on freedom; indeed, arguably it deliberately obscures them.

However, one also suspects that adherence to the doctrine of negative liberty is somewhat self-serving on the part of many who write about friendly AI, funded as they are by organizations that already wield considerable power over those who use their products. To admit that a benevolent dictatorship is still a dictatorship is to call into question the business model of the organizations that fund friendly AI research.

Some of the smartest people in the world are working to realize AI. Some of them have declared that the project runs a non-trivial risk of creating entities that will, at best, relate to us the way we relate to rats and mice. Given the difficulty non-experts face in evaluating claims about progress in AI, and the calibre of the intellects to be found on both sides of the debate, I myself struggle to form an opinion on the matter.

What I am sure of is that, even if it could be secured, a guarantee that our future robot overlords will be benevolent should be cold comfort. Worrying about how to build "friendly" AI is a distraction. If we really believe that AI research threatens to lead to the emergence of superintelligence, then we should rethink the wisdom of AI research itself.

Robert Sparrow is Professor of Philosophy at Monash University, where he is also an Associate Investigator in the ARC Centre of Excellence for Automated Decision-Making and Society.
