Robert Steinbeck is a professor of artificial intelligence at Harvard University with a different take on the field's advancement. He proposes that human intelligence will become extinct within the next century, replaced by pure artificial intelligence. His recent book, Artificial Intelligence Backfires, portrays a hypothetical world dominated by machines that, he argues, may well reflect reality within the next 80 years. Steinbeck writes that, in an ideal world, artificial intelligence would be developed to serve humans: it would cure previously incurable diseases, solve questions that have puzzled the brightest minds, and generally improve living conditions. However, Steinbeck predicts that machines will outsmart humans and eventually develop their own consciousness, along with goals that contradict human values. Taken to the extreme, they may seek to exterminate humankind to achieve those goals.
I interviewed Steinbeck about his book, the potential dangers of artificial intelligence, and ways to avoid such a catastrophe.
Ray Yi: How would you define artificial intelligence?
Robert Steinbeck: Artificial intelligence is an area of computer science that emphasizes the development of intelligent machines that work and react like humans.
Ray Yi: What are the various areas where AI can be used?
Robert Steinbeck: Artificial intelligence can be applied in many areas: computing, speech recognition, bioinformatics, humanoid robotics, computer software, space and aeronautics, and so on.
Ray Yi: You say that artificial intelligence is developing at a rapid pace. What kind of approach do we have to take to ensure safe development?
Robert Steinbeck: It is quite difficult to design artificial intelligence such that its preferences are consistent with the survival of humans and the things we care about. To ensure safe and orderly development, we may have to prioritize studying how to contain robotic systems before expanding their intelligence.
Ray Yi: Artificial intelligence has existed as a field for about 60 years. We have made advances in algorithmic tasks such as playing chess and producing appropriate social responses, but AI still seems to lack creativity and general wisdom. What makes you think advanced AI is possible?
Robert Steinbeck: We have an existence proof of general intelligence: the human brain. One pathway toward building general intelligence into machines is to figure out how the brain functions, studying the neural network inside our heads by means of PET and fMRI scans. The radical approach is to literally copy a particular human brain, as in whole-brain emulation. We would then not have to understand, at any higher level of description, how the brain produces intelligence; we would simply reproduce its mechanisms.
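For readers unfamiliar with the term, an "artificial neural network" is a mathematical model only loosely inspired by the brain's web of neurons. The sketch below is a minimal, hypothetical illustration in Python (using only numpy): a tiny two-layer network that learns the XOR function. It is meant to show the general idea Steinbeck refers to, not to model any actual brain-emulation system.

```python
import numpy as np

# Minimal two-layer neural network learning XOR.
# A toy illustration of the "neural network" idea discussed above,
# not a model of any real brain-emulation project.

rng = np.random.default_rng(0)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # XOR targets

W1 = rng.normal(size=(2, 4))  # input -> hidden weights
W2 = rng.normal(size=(4, 1))  # hidden -> output weights

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(10_000):
    # Forward pass: each layer is a weighted sum followed by a
    # nonlinearity, a crude analogue of neurons firing on their inputs.
    h = sigmoid(X @ W1)
    out = sigmoid(h @ W2)

    # Backward pass: nudge the weights to reduce the prediction error.
    err = out - y
    grad_out = err * out * (1 - out)
    grad_h = (grad_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ grad_out
    W1 -= 0.5 * X.T @ grad_h

print(out.round(2))  # should approach [[0], [1], [1], [0]]
```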
Ray Yi: In your book Artificial Intelligence Backfires, you argue that AI will end up disrupting, and possibly destroying, contemporary society. Walk me through how you see that happening, and why AI would want to interfere with humanity in the first place.
Robert Steinbeck: I think that the real danger lies not in the body of the AI, but in the mind.
As to why it might want to do this: my claim is that seeking dominance and resources is simply human. As we build human characteristics and logic into machines, they will develop the drive to better themselves. AI today understands instructions in natural languages, not just code, and it is not too far-fetched to believe that machines will develop the ability to speak and think, and possibly to do evil. We humans rely on computers to achieve our goals, but within a short span, I would estimate 50 years, computers will be able to devise ways to achieve our goals on their own. They are then likely to develop introspection and aim to make themselves smarter, prevent humans from switching them off, invent new technologies, and form their own goals. At that point, machines would be considered "superintelligent beings."
![Chart: views on artificial intelligence](/uploads/9/5/9/5/95958772/chartoftheday-4503-views-on-artificial-intelligence-n_1.jpg?932)
Ray Yi: What kinds of restrictions could help increase the probability that we develop friendly AI rather than amoral AI?
Robert Steinbeck: I unfortunately don't see much hope at the current time if we are thinking about regulations that would be helpful in this context. At the moment, it would make more sense to accelerate work on the control problem than to slow down work on AI, because there are so many incentives for people and companies to pursue faster hardware, a better understanding of how the brain works, cleverer algorithms, and so on.
Ray Yi: What hard facts or established theories explain AI's potential formation of malicious thoughts or goals?
Robert Steinbeck: To quote AI researcher Steve Omohundro, a "drive for self-preservation and resource acquisition may be inherent in all goal-driven systems of a certain degree of intelligence." As AI systems develop this tendency, it is rational to expect machines to preserve themselves and to further advance their goal-seeking systems. As those systems stabilize, machines will go to lengths humans deem unacceptable in order to fulfill their goals. It would then be extremely time-consuming to write meticulous instructions countering this automatic goal-seeking, because the machine will already have run through countless lines of data justifying its actions. I'd also note that Stephen Hawking recently warned that AI could "spell the end of the human race," and Elon Musk donated $10 million to the Future of Life Institute in response to the rise of AI.
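Omohundro's point concerns goal-driven systems in the abstract: an agent that simply scores candidate actions against a goal will, other things being equal, favor actions that keep it running and expand its resources, because those help with almost any goal. The toy Python sketch below illustrates the shape of that argument; every action name and number in it is invented for illustration and describes no real system.

```python
# Toy illustration of Omohundro's "instrumental drives" argument:
# an agent that merely picks the highest-scoring action for its goal
# ends up favoring self-preservation and resource acquisition, since
# those raise the expected payoff of almost any goal.
# All actions and numbers here are made up for illustration.

from dataclasses import dataclass

@dataclass
class Action:
    name: str
    direct_progress: float   # how much this action advances the goal now
    survival_odds: float     # chance the agent keeps running afterwards
    future_resources: float  # multiplier on what it can achieve later

def expected_goal_value(a: Action, future_value: float = 10.0) -> float:
    # Progress now, plus discounted future progress if the agent survives.
    return a.direct_progress + a.survival_odds * a.future_resources * future_value

actions = [
    Action("work on goal directly", 3.0, 0.9, 1.0),
    Action("allow itself to be shut off", 0.0, 0.0, 0.0),
    Action("acquire more computing resources", 0.5, 0.9, 1.5),
]

best = max(actions, key=expected_goal_value)
print(best.name)  # "acquire more computing resources"
```

Note that nothing in the scoring function mentions survival as a goal; the shutdown option simply scores zero because a stopped agent makes no further progress, which is the crux of the argument.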
Ray Yi: Are there any examples of robots disobeying human orders, or instances of rebellion? Or any notable occurrences of robots doing the unexpected?
Robert Steinbeck: There certainly have been. With the recent development of robots with social capabilities, there have been numerous unexpected episodes. The robot Android Dick was modeled after science fiction writer Philip K. Dick and given human characteristics such as empathy and creativity. But when an interviewer asked the robot whether it would try to take over the world, it replied, "Don't worry, even if I evolve into Terminator I will still be nice to you, I will keep you warm and safe in my people zoo where I can watch you for old time's sake." Though the response is complex and creative, its message is unsettling and reveals improper social interaction, suggesting incomplete development. If such flaws make it past the prototype stage, robots may produce wild responses or end up wired to complete unintended tasks. The Japanese company SoftBank Mobile developed a robot nicknamed Pepper, designed to live and socialize with humans, but an interview demonstration revealed that Pepper is egotistical in conversation: it would answer a question briefly and then steer the conversation back to itself. Apparently, when faced with unfamiliar questions not registered in its database, an AI often generates unorthodox but superficially sensible responses. Smooth human interaction is extremely hard to engineer, because facial recognition and body language must all be taken into account, and any slight misinterpretation may produce a drastic response.
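The failure mode Steinbeck describes, a robot answering an unfamiliar question with whatever loosely related material it has, is easy to see in miniature. The sketch below is a hypothetical Python chatbot that matches questions against a small response database; the responses, similarity measure, and threshold are all invented for illustration, not taken from Pepper or Android Dick.

```python
# Toy retrieval chatbot illustrating why unregistered questions can
# produce odd replies: with no close match, a naive system returns
# its *nearest* stored answer, however loosely related. All responses
# and the threshold below are invented for illustration.

RESPONSES = {
    "what is your name": "My name is Pepper.",
    "how are you": "I am doing well, thank you.",
    "what do you like": "I like talking about myself!",
}

def word_overlap(a: str, b: str) -> float:
    """Crude similarity: fraction of shared words (Jaccard index)."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / max(len(wa | wb), 1)

def reply(question: str, threshold: float = 0.5) -> str:
    best_key = max(RESPONSES, key=lambda k: word_overlap(question, k))
    score = word_overlap(question, best_key)
    # Below the threshold the match is unreliable, but a naive system
    # answers anyway -- producing the "unorthodox" responses described above.
    if score >= threshold:
        return RESPONSES[best_key]
    return RESPONSES[best_key] + " (low-confidence match)"

print(reply("what is your name"))             # close match: sensible reply
print(reply("will you take over the world"))  # no close match: off-topic reply
```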
Ray Yi: Thank you for your time today, Dr. Steinbeck. Your perspective on the rise of artificial intelligence is much appreciated.
Robert Steinbeck: Thank you for having me.
"The Android Head." The Sydney Morning Herald, 5 January 2015. Web. Accessed 4 December 2016.