(Contains ‘2001: A Space Odyssey’ spoilers)
“I’m sorry Dave, I’m afraid I can’t do that.”
This memorable line from the Stanley Kubrick film 2001: A Space Odyssey comes from a
super-fast sentient computer called HAL 9000 (Heuristically programmed ALgorithmic
computer) on board a spaceship travelling to Jupiter. HAL was programmed to look
after the crew and ensure the success of the mission. However, after discovering
that the crew intend to shut it down, it decides to silently kill one of the astronauts.
When Dave attempts a rescue, HAL calmly tells him that it cannot open
the pod bay doors to let him back aboard, effectively trying to kill him too. A single,
unwavering red eye stares Dave down while shutting off the life support of
the remaining crew members.
It’s partly thanks to films like this that the possibility
of artificial intelligence turning against us has become solidified in the public
consciousness. It’s not alone; numerous other high-profile works portray
this dystopian future: the famous Skynet system from the Terminator series, the Machines from the
Matrix trilogy, and GLaDOS from the Portal games, to name a few.
However, the idea of a creation intended for good turning on
its creator can be traced back further than big Hollywood blockbusters.
Literature is littered with stories of accidental monsters. In Christian
tradition, Satan himself is an angel gone wrong. Frankenstein’s
monster, from Mary Shelley’s 1818 novel, one of the first works of science fiction,
is another classic example. The thought of malevolent, uncontrollable
artificial intelligence is a terrifying and common one, but how likely is it?
Ethics of AI
Perhaps it would be wise to start with the question of how
artificial intelligence should be programmed. One of the first tentative ideas
comes from the Three Laws of Robotics, written by the sci-fi author Isaac
Asimov over seven decades ago, in 1942. These are the rules that govern the robots
in his novels. They state that no robot shall harm a
human being, directly or through inaction; that a robot must obey human orders
unless they conflict with the first law; and finally that a robot should protect
its own existence as long as doing so does not conflict with the previous two
laws. Though from an outsider’s perspective the laws seem practical, even
implementable (a toy sketch of their strict precedence is given below), they were
only ever intended as a literary tool for creating dynamic sci-fi novels, and
were not drawn up by a scientist with knowledge of AI.
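Purely as a thought experiment, here is how that precedence might look in code. This is a minimal Python sketch; every name in it (Action, permitted, the boolean flags) is invented for illustration, and deciding whether a real action actually ‘harms a human’ is of course the genuinely hard, unsolved part.

```python
# Toy illustration (not a real safety system): Asimov's Three Laws
# expressed as a strict precedence check over a proposed robot action.
# All names here are invented for this sketch.

from dataclasses import dataclass

@dataclass
class Action:
    harms_human: bool        # would this injure a human, directly or by inaction?
    ordered_by_human: bool   # was this action ordered by a human?
    endangers_robot: bool    # would this action destroy the robot itself?

def permitted(action: Action) -> bool:
    # First Law: never harm a human; this overrides everything else.
    if action.harms_human:
        return False
    # Second Law: obey human orders (already filtered by the First Law).
    if action.ordered_by_human:
        return True
    # Third Law: otherwise, avoid self-destruction.
    return not action.endangers_robot

# Orders outrank self-preservation -> True
print(permitted(Action(harms_human=False, ordered_by_human=True, endangers_robot=True)))
# The First Law vetoes a harmful order -> False
print(permitted(Action(harms_human=True, ordered_by_human=True, endangers_robot=False)))
```

Note how the whole scheme hinges on those input flags being judged correctly; Asimov’s own stories are largely about the ambiguities such flags hide.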
It’s worth noting that when the ethics of AI are discussed, it
is the ethics of robots, particularly humanoid robots, that people tend to think of immediately.
Although there is huge investment in humanoid robotics, AI is far more
prevalent in fields you might not expect, such as targeted Facebook
ads and the data mining of metadata collected by governments. Ethical frameworks
concerned purely with physical harm or death are useless in situations like
these, and when talking about other types of harm, such as breaches of privacy, it is
difficult to decide what constitutes ‘harm’ and what does not.
AI of the future
The next generation of AI, however, lies in a process called
machine learning. This is where an AI is able to look at a set of data, learn from
it, and use that knowledge to change its future actions without being explicitly
programmed to do so. The AI teaches itself. This allows the program to become
more sophisticated over time as it experiences more input data. There are many
examples of this already in the tech world, including
speech recognition software, self-driving cars, the use of data mining to serve
personalized ads, personal assistants like Cortana
(Microsoft) and Siri (Apple), and even
Netflix movie recommendations. It’s on the rise too; deep learning, a branch of
machine learning, has attracted huge interest in recent years. Deep
learning systems learn from vast numbers of examples and make
accurate predictions in new situations, using layered artificial networks
loosely modelled on how a biological neural network functions.
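To make the idea concrete, here is a minimal sketch in Python using scikit-learn (an assumption on my part; any mainstream machine learning library would do). The toy ‘film taste’ data is invented for illustration. The point is that the programmer never writes the rule; the program infers one from labelled examples and applies it to data it has never seen.

```python
# A minimal machine-learning sketch with scikit-learn (assumed installed):
# learn a rule from labelled examples, then predict for unseen data.

from sklearn.tree import DecisionTreeClassifier

# Toy training data: [weekly viewing hours, sci-fi films watched] -> liked "2001"?
X_train = [[1, 0], [2, 1], [8, 6], [10, 9], [3, 0], [9, 7]]
y_train = [0, 0, 1, 1, 0, 1]  # 0 = disliked, 1 = liked

model = DecisionTreeClassifier()
model.fit(X_train, y_train)       # "learning": fitting rules to the data

print(model.predict([[7, 5]]))    # a brand-new viewer -> [1], i.e. predicted to like it
```

Feed it more data and the learned rules become more refined, which is exactly the ‘develops as it experiences more input’ behaviour described above.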
This boom in the development of machine learning brings the debate back to an
ethical framework for AI. Should we be scared of AI that is capable of teaching
itself? It’s possible that, over generations, an AI program could
improve far more efficiently than natural evolution would allow. Not only
could it improve from generation to generation, it could also
specifically design what the next generation of the program should look like,
speeding up the rate of change as it goes. This would be a process not limited
by biological factors. Given enough computing power, such a runaway process
could eventually produce a program many times more intelligent than a human.
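Here is a deliberately crude toy model (every number in it is my own assumption, not a measurement) of why this compounding worries people: if each generation designs a successor that is even modestly better, capability grows geometrically, not linearly.

```python
# Toy model of recursive self-improvement: capability compounds like
# interest rather than growing linearly. All numbers are invented.

capability = 1.0          # arbitrary units; generation zero
improvement_rate = 0.10   # assume each generation is 10% better than the last

for generation in range(1, 101):
    capability *= 1 + improvement_rate
    if generation % 25 == 0:
        print(f"generation {generation:3d}: capability {capability:,.0f}x")

# generation  25: capability 11x
# generation  50: capability 117x
# generation  75: capability 1,272x
# generation 100: capability 13,781x
```

A fixed 10% gain per generation is pure invention; the point is only that self-improvement compounds, so modest per-step gains can still run away.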
It’s for reasons like this that some of the greatest minds alive
today have warned against machine learning AI. Professor Stephen Hawking has
said: “The development of full artificial intelligence could spell the end of
the human race.” Elon Musk, CEO of
Tesla and SpaceX, has echoed similar sentiments: “I think we should be
very careful about artificial intelligence. If I had to guess at what our
biggest existential threat is, it’s probably that.” He has even said that rogue AI is
more dangerous than nuclear weapons. It’s clearly a pretty big issue, and one that
demands action.
The ethics of future AI
With the rapid recent development of machine learning, it
seems certain safeguards need to be put in place. Along
with other tech giants, Elon Musk has taken it upon himself to start this
process. On December 11th 2015, he and a group of fellow backers pledged $1 billion to start a
non-profit organisation called OpenAI. Their stated goal is to “advance
digital intelligence in the way that is most likely to benefit humanity as a
whole, unconstrained by a need to generate financial return.” They have also
pledged to share their data with other AI firms, and to open-source it to make sure
the whole sector is on the same page.
It seems OpenAI is the first step towards a universal
ethical code for future AI projects. By providing its data for all to use, it
will help ensure that no one company becomes too powerful or dominates the market.
In a blog post they said: “We believe AI should be an extension of individual
human wills and, in the spirit of liberty, as broadly and evenly distributed as
possible.” Their aim is to unite AI into a ‘common intelligence’. Up to now,
companies have kept the data from their individual projects to
themselves, but OpenAI wants to encourage the sharing of that intelligence,
spreading the huge value locked up in these data sets across companies.
OpenAI’s emphasis on independence from financial
return ensures that whoever gets to ‘prioritize the outcome’ of AI is not doing
so for selfish financial gain.
Future AI and open-source data
The question one might then ask is: what incentive is there for
companies to share any data at all? Google recently announced that it is
open-sourcing TensorFlow, its machine learning library, in the hope
that knowledgeable outsiders will access its algorithms and improve them: an
‘I’ll scratch your back if you scratch mine’ situation. It’s not alone;
Facebook and Microsoft have also announced the open-sourcing of AI hardware
and software.
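That open-sourcing is concrete: anyone can now install TensorFlow and build on Google’s machine learning engine. Below is a minimal sketch using the library’s Keras interface; note that the API has changed considerably since the 2015 release, and this tiny regression task is my own invented example, not anything from Google.

```python
# Minimal TensorFlow sketch: a one-neuron network learning y = 2x from
# five examples. The task is invented purely for illustration.

import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(1,))])
model.compile(optimizer="sgd", loss="mse")  # gradient descent on squared error

xs = [[1.0], [2.0], [3.0], [4.0], [5.0]]
ys = [[2.0], [4.0], [6.0], [8.0], [10.0]]
model.fit(xs, ys, epochs=500, verbose=0)    # learn the mapping from the data

print(model.predict([[6.0]]))               # should print something close to 12
```

Improvements contributed back by outside researchers flow into the same library everyone uses, which is exactly the back-scratching incentive described above.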
It seems the future of AI lies in cooperation and the continual
open-sourcing of data from the likes of Google, Facebook, Microsoft and OpenAI.
It’s in this way that dystopian futures of rogue AI can be avoided. Perhaps instead,
in the near future, we’ll see extremely advanced but sarcastic robots, like
TARS from Interstellar,
floating around the International Space Station.
Links to more information about the ideas discussed can
be found below:
Example of machine learning: http://www.sciencemag.org/content/350/6266/1332.abstract
OpenAI blog: https://openai.com/blog/introducing-openai/
Ethics of robotic intelligence and lethal autonomous weapons: http://www.nature.com/news/robotics-ethics-of-artificial-intelligence-1.17611
Benefits of open-sourcing machine learning algorithms: http://www.wired.com/2015/11/google-open-sources-its-artificial-intelligence-engine/