The Terminator Is Not Coming. The Future Will Thank Us.

Stephen Hawking and Elon Musk are worried. Could machine intelligence really lead to the extinction of humans?

The 21st century is a watershed time in human evolution. We are solving the mystery of how the brain works and starting to build machines that work on the same principles as the brain. We are entering the age of machine intelligence.

I have been thinking about machine intelligence for more than 30 years, including decades of study into the biological foundations of intelligence. My company, Numenta, is reverse-engineering how the neocortex works, and we have started to build machines that work on the same principles. We have made excellent progress on the science, and see a clear path to creating intelligent machines, including ones that are faster and more capable than humans.

Intelligent machines will radically transform our world in the 21st century, similar to how computers transformed our world in the 20th century. I see these changes as almost completely beneficial, indeed thrilling. I don’t see machine intelligence posing any threat to humanity. But not everyone agrees.

Leading scientists and technologists have expressed concern recently about the rise of artificial intelligence. The physicist Stephen Hawking worries that AI could spell the end of the human race. Tesla founder Elon Musk called AI our biggest existential threat. Science-fiction books and movies long ago introduced into popular culture the concern of machines taking over the world. The 1984 sci-fi film “The Terminator,” for example, features the AI network Skynet, which seeks to exterminate mankind.

What is new is that intelligent machines will soon be a reality, and this has people thinking seriously about the consequences.

Should we be concerned? Is the Terminator scenario possible? Could machine intelligence really lead to the extinction of humans? How do we weigh the potential benefits against the potential risks?

History tells us that the ultimate impact of a new technology is nearly impossible to predict. Our tendency is to imagine that a new technology will be applied to problems and tasks we are familiar with, but inevitably new and unexpected applications surface that no one envisioned. Digital computers were initially built to replace human “computers,” a term originally referring to people who did calculations. In the 1940s, no one envisioned the smartphone, the Internet or GPS satellites.

Similarly, today it is tempting to imagine that intelligent machines will look like humans, perform human-like tasks, converse with humans, and have human-like desires and emotions. If you believe that intelligent machines will be like us, only much smarter, then they might evolve in ways we can’t predict and treat us badly. Such machines, out of a desire for self-preservation, autonomy or just misguided beliefs, could actively or inadvertently do great harm to humankind.

I do not share these worries, because they are based on three misconceptions. Let me address each of these in turn.

Misconception #1: Intelligent machines will be capable of self-replication, or could attain that capability on their own.

Anything that can self-replicate is dangerous. Self-replicating viruses and bacteria have probably killed more humans than anything else in history, and continue to hold the potential to wipe out all of humanity. For example, the 1918 Spanish Flu is estimated to have killed between 3 percent and 5 percent of the world’s population. Self-replicating computer viruses also are dangerous. They could cause the death of many people by disrupting our transportation, communications or emergency-response networks. However, computer viruses can only self-replicate within computer networks. Although a computer virus could result in some terrible consequences, it is hard to imagine one extinguishing humankind in the way that biological viruses could.

Many doomsday scenarios related to machine intelligence have at their core the idea that intelligent machines could reproduce on their own, outpacing our ability to control them. But creating something that can self-reproduce is an entirely different endeavor than creating something that is intelligent. The latter does not lead to the former.

Could an intelligent machine self-replicate? Some intelligent machines will be virtual, meaning they will exist and act solely within computer networks. For example, today’s Web crawlers could be endowed with intelligence, enabling them to learn and act on their own as they roam the Internet. This form of machine intelligence would not have a physical body; it would just be software and memory residing on a computer. It would be relatively easy for a human to give these virtual intelligent “machines” the ability to self-replicate in the same way a computer virus does. This could be dangerous, but, as with today’s computer viruses, the maximum possible damage caused by an intelligent machine replicating in a computer network would be limited. It could cause great harm, but it couldn’t kill all humans. It is always possible, however painful, to shut down a computer network.

The larger threat to the survival of humanity is something that can self-replicate in the natural environment. Something that can multiply over the surface of the earth using ubiquitous sources of energy and matter could threaten many species, including our own. Today only DNA- or RNA-based life is capable of this type of self-replication. A few people have suggested the possibility of self-replicating nano-machines, sometimes called “gray goo.” If such nano-machines multiplied exponentially, they could quickly destroy the habitats of all other life.

Perhaps this threat is real. But intelligent machines would not have the ability to self-replicate in nature unless we go to extreme lengths to give them this capability, and currently we don’t know how to do that. Could an intelligent machine create a factory that produces intelligent robot clones without continuing human help? I don’t think so, and any scenario where humans are duped into manufacturing their enslavers is much more likely to come from Hollywood than a laboratory. Self-replication in the natural environment is extremely difficult.

Adding intelligence to an already self-replicating entity could make a bad situation worse, but intelligence itself doesn’t lead to self-replication, unless perhaps you believe in misconception #2.

Misconception #2: Intelligent machines will be like humans and have human-like desires.

The second misconception is that intelligent machines will have human-like desires, and will therefore devise ways to self-replicate, free themselves from their human creators, or simply act on their own desires without regard for us.

This scenario isn’t going to happen. You can think of a human as consisting of two parts: One is the neocortex, the organ of intelligence, and then there is everything else. The neocortex is what makes us smart; the human neocortex is relatively large, making us smarter than other mammals. The neocortex is a learning system; it learns a model of the world and how everything in the world behaves, but on its own it is emotionless, a bit like the hyper-logical Mr. Spock of “Star Trek” fame. The other parts of the nervous system, such as the spinal cord, the brain stem and the basal ganglia, are much older in evolutionary time. These older structures are responsible for instinctive behaviors and emotions, such as hunger, anger, lust and greed.

What is most remarkable about the neocortex is its extreme flexibility. The neocortex can learn any of hundreds of languages, written and spoken. It can learn mathematics, physics, music, computer programming, how to design buildings, etc. This ability to learn almost anything is the essence of intelligence. The flexibility of the neocortex stems from an amazing fact first described by neurophysiologist Vernon Mountcastle in 1978. His discovery was that the neocortex uses the same learning method for everything it does. Vision, hearing, language, etc., are, at a fundamental level, the same problem, and the neocortex uses a common learning method to solve all of them.
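
To make the idea of one common learning method concrete, here is a minimal toy sketch in Python. The SequenceLearner class below is entirely hypothetical, vastly simpler than anything in the neocortex, and not Numenta’s algorithm; the point is only that a single, unchanged learner can be trained on streams standing in for different senses.

```python
# Toy illustration: one learning method, many kinds of input.
# SequenceLearner is a made-up example, not a model of the neocortex.
from collections import Counter, defaultdict

class SequenceLearner:
    """Learns which symbol tends to follow which, whatever the symbols are."""
    def __init__(self):
        self.transitions = defaultdict(Counter)

    def learn(self, stream):
        # Count observed transitions between consecutive items in the stream.
        for prev, curr in zip(stream, stream[1:]):
            self.transitions[prev][curr] += 1

    def predict(self, symbol):
        # Predict the most frequently observed successor, if any.
        following = self.transitions.get(symbol)
        return following.most_common(1)[0][0] if following else None

# The identical learner is reused, unchanged, for three different "modalities".
touch = SequenceLearner(); touch.learn(["edge", "corner", "edge", "corner"])
sound = SequenceLearner(); sound.learn(["do", "re", "mi", "do", "re", "mi"])
words = SequenceLearner(); words.learn("the quick brown fox the quick brown fox".split())

print(touch.predict("edge"))   # corner
print(sound.predict("re"))     # mi
print(words.predict("quick"))  # brown
```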

Intelligent machines will be based on models of the neocortex, not the rest of the brain. It is the flexibility to learn almost anything that we want in an intelligent machine, not the ability to survive and reproduce in the wild. Therefore intelligent machines will not be anything like a human, or any other animal.

Some people might try to build machines with human-like desires and emotions. Whether this is even possible and whether we should allow it are open questions. Today, nobody knows how to build such a machine, and to try would require a huge effort, one that is independent of building intelligent machines. Neocortical-based machine intelligence will come first.

Misconception #3: Machines that are smarter than humans will lead to an intelligence explosion.

Some people are concerned about an “intelligence explosion” or a “singularity,” in which machines that are smarter than humans create machines that are smarter still, which in turn create even smarter machines, and so on. The result would be an exponential increase in intelligence. The fear is that intelligent machines will get so smart so quickly that their knowledge will outpace our ability to understand it. Humans will be left behind, vulnerable to machines with superior intellect and technology.

This doomsday scenario can’t happen. Intelligence is a product of learning. A brain, no matter how big or how fast, does not become intelligent until it learns. In humans, learning is a slow process requiring practice, repetition and study over many years. Unlike humans, intelligent machines can be cloned, transferring the learning from one machine to a new one, eliminating the learning time. However, for an intelligent machine to discover new truths, to learn new skills, to extend knowledge beyond what has been achieved before, it will need to go through the same difficult and slow discovery process humans do. Yes, an intelligent machine could think much faster than a biological brain, but the slower process of discovery still has to occur.
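
As a toy illustration of the cloning point, here is a hypothetical Python sketch (the LearnedModel class is invented for this example, not any real AI system): copying a trained model’s learned state gives a second machine everything the first one learned, instantly, but nothing more; genuinely new knowledge still has to be gathered the slow way.

```python
# Toy illustration: cloning transfers learning, but creates no new knowledge.
import copy

class LearnedModel:
    """A stand-in for an intelligent machine: its 'knowledge' is just learned state."""
    def __init__(self):
        self.word_counts = {}

    def learn(self, text):
        # The slow part: acquiring experience from the world.
        for word in text.split():
            self.word_counts[word] = self.word_counts.get(word, 0) + 1

    def knows(self, word):
        return word in self.word_counts

original = LearnedModel()
original.learn("ice cores record past climate")   # stands in for years of slow study

clone = copy.deepcopy(original)                    # near-instant transfer of learning
print(clone.knows("climate"))    # True  -- the clone starts fully "trained"
print(clone.knows("neutrino"))   # False -- cloning adds nothing the original hadn't learned
```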

Imagine we could create a human (or machine) that thinks 10 times as fast as other humans and has a brain that is 10 times as big. Would this superhuman be able to extend knowledge at 10 times the normal rate? For some purely conceptual domains, such as mathematics, it would be possible to greatly accelerate the acquisition of knowledge. However, for most problems, our superhuman would still need to design experiments, collect data, make hypotheses, revise and repeat. If it wanted to extend knowledge of the universe, it would still have to build new telescopes and interplanetary probes, send them into space, and wait for the results. If it wanted to understand more about climate change, it would still need to drill ice cores in Antarctica and deploy new measurement devices in the oceans.

Intelligence isn’t something that can be increased by turning a knob or adding more “intelligence bits.” In addition to a big brain, it requires iterative manipulation and measurement of physical things. This process can be sped up, but only to a limited degree.

For me, one of the most compelling reasons to create intelligent machines is that they will help us discover the secrets of the universe. We can create intelligent machines that look for patterns in data and explore hypotheses a thousand times faster than a human. We can send intelligent robots to explore other planets and maybe other solar systems. We might create intelligent machines that directly sense and think about proteins, or tirelessly explore the human genome to discover the foundations of disease.

I find these possibilities exciting. However, we can accelerate the acquisition of knowledge only so much. There will be no explosion, no singularity.

Our reactions to potential threats are tempered by how far in the future they might occur. For example, it appears that Earth will become uninhabitable in about 150 million years due to the slow warming of the sun. I don’t know of anyone who is terribly concerned about this problem, because it is far in the future, whereas we definitely should care about whether human activity will make the Earth uninhabitable in the next 100 years. Similarly, machine intelligence poses an evolving series of potential dangers, some in the near future, but some so far in the future we can’t even imagine them today.

The big question we have to answer is whether we are doing something today that cannot be undone, something that sets in motion a chain of events that will inevitably lead to the extinction or enslavement of all humanity. This legitimate concern is what I believe is being aired by Musk, Hawking and others. The views I have expressed in this essay are about the future we can see and control today. The machine-intelligence technology we are creating today, based on neocortical principles, will not lead to self-replicating robots with uncontrollable intentions. There won’t be an intelligence explosion. There is no existential threat. This is the reality for the coming decades, and we can easily change direction should new existential threats appear.

Of course, even the earliest forms of machine intelligence can be used for bad as well as good purposes. Every technology has this issue. Machine intelligence will need thoughtful policy and rules that seek to optimize the good applications and limit the bad ones. But the risks and bad outcomes arising from machine intelligence are not substantively different from those we have faced in the past: nothing as terrifying as an unstoppable virus, self-replicating gray goo, or a spiteful god.

Machine intelligence does not represent a threat to humanity any more than computers do. Indeed, instead of shortening our time on this planet, machine intelligence will help us extend it by generating vast new knowledge and understanding, and by creating amazing new tools to improve the human condition. That is something the future will thank us for.


Understanding how the brain works and building intelligent machines that work on the same principles is one of the grand challenges of our time, perhaps of all time. It has been Jeff Hawkins’ lifelong quest. An engineer, serial entrepreneur, scientist, inventor and author, he was a founder of two mobile computing companies, Palm and Handspring, and was the architect of many computing products, including the PalmPilot and Treo smartphone. Hawkins has long had a deep interest in neuroscience and theories of the neocortex. In 2002, he founded the Redwood Center for Theoretical Neuroscience, a scientific institute focused on understanding how the neocortex processes information. The institute is now located at U.C. Berkeley. In 2004, he wrote the book “On Intelligence,” which describes progress on understanding the neocortex. In 2005, he co-founded Numenta to build technology based on neocortical theory. It is his hope that Numenta will play a catalytic role in the emerging field of machine intelligence. Reach him @Numenta.

This article originally appeared on Recode.net.
