Every day, artificial intelligence (AI) and robots affect more of our day-to-day lives and become further ingrained as integral parts of society. Robots and AI are often deployed as a cost-saving measure across many different fields. Does that mean companies consider the ethical ramifications of their actions, or do they play fast and loose, cutting costs as quickly as possible?

When it comes to the dilemma of ethics and robotics, one name quickly comes to mind: Isaac Asimov. Asimov is best known for his science fiction, in which he lays out a framework meant to philosophically and morally constrain robots so they cannot, even inadvertently, destroy humanity. He came up with what we all know today as Asimov’s Three Laws of Robotics.

Asimov’s Three Laws of Robotics in a Nutshell:

  • A robot may not harm a human, or, through inaction, allow a human to come to harm.
  • A robot must obey orders given by humans, as long as those orders don’t conflict with the First Law.
  • A robot must protect its own existence, as long as doing so doesn’t conflict with the First or Second Law.

In Asimov’s defense, he wrote these over 70 years ago, back in 1942. Today, we have actual experience with robots and artificial intelligence. Some of us even let them drive us around, at least when conditions allow. Clearly, we already permit autonomous machine intelligence to make life-or-death decisions for us. Do Asimov’s Three Laws suffice to guide the use of AI in our day-to-day lives, or do they need an update?

Even Asimov Knew That His Three Laws Were Flawed

In I, Robot, his collection of science fiction stories, Asimov explores the possible consequences, and outright failures, of the Three Laws. Asimov wrote the laws as a device for his fiction, and fiction needs conflict to drive the plot. Had he been writing laws to actually govern artificial intelligence, with humanity hanging in the balance, he would undoubtedly have put more thought into them and made them as airtight as possible. Aside from being a science fiction writer, Asimov was an actual scientist with a Ph.D. in biochemistry.

Within the canon of I, Robot, the flaws of the Three Laws are addressed directly. In “Escape!”, the First Law is deliberately weakened, allowing a superintelligent AI to design a craft capable of interstellar travel faster than the speed of light. To accomplish this, however, the human pilots would have to be killed, if only temporarily. Asimov later introduced the “Zeroth Law,” which supersedes the other three and allows a robot to harm individual humans for the greater good of humanity as a whole.

Asimov’s Fourth Law of Robotics: the Zeroth Law

A robot may not harm humanity, or, through inaction, allow humanity to come to harm.
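
Read as software, the four laws form an ordered veto chain, checked from the Zeroth Law down. The sketch below is purely hypothetical (nothing like it appears in Asimov’s fiction), and each consequence flag stands in for a judgment, such as “does this harm humanity?”, that no one actually knows how to compute:

```python
# A hypothetical toy sketch of Asimov's four laws as an ordered veto
# chain over an action's *predicted consequences*. The flags in the
# consequence dict are placeholders for judgments a real machine could
# not easily make.

LAWS = [
    ("Zeroth", lambda c: c.get("harms_humanity", False)),
    ("First",  lambda c: c.get("harms_human", False)),
    ("Second", lambda c: c.get("disobeys_human", False)),
    ("Third",  lambda c: c.get("endangers_self", False)),
]

def evaluate(consequences):
    """Return (permitted, violated_law), checking highest priority first."""
    for name, violated in LAWS:
        if violated(consequences):
            return False, name
    return True, None
```

Because the laws are checked in priority order, an action that both harms humanity and merely endangers the robot is rejected under the Zeroth Law, not the Third; the ordering, not any single rule, is what carries the ethics.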

What Happens When AI and Robots Don’t Have Ethics

It isn’t hard to believe that failing to define ethical constraints for artificial intelligence could cause incalculable damage, even when the machines are simply following instructions given by humans.

A well-known example of this appears in the famous movie “Terminator 2: Judgment Day”. In Terminator 2’s scenario, Skynet, the AI system, starts a nuclear war and destroys the human race. According to the movie, deploying Skynet was a rational decision because the system had a perfect operational record. Unfortunately, Skynet was also programmed to defend itself (perhaps the entire Terminator problem could have been avoided by simply commenting out a line of code with a “#”).

The artificial intelligence took its self-defense directive to the extreme, came to the “logical” conclusion that humans are a threat, and started wiping everyone out. Skynet, as a machine, is incapable of understanding common sense or morality, so its actions are entirely unconstrained by them.

We don’t need to rely only on science fiction for an example of artificial intelligence run amok. Automated trading systems in the stock market, designed for high speed, have created positive feedback loops resulting in what economists now call a “flash crash”. Many large investment firms use a form of AI to trawl the internet for information on market trends, combing through headlines and content related to stocks, which may well be how AMD shares climbed from $1.83 to $14.46 in a single year.
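
The feedback loop behind a flash crash is easy to sketch in code. The toy simulation below is purely illustrative and models no real trading system: the stop-loss bots, their thresholds, and the price-impact numbers are all invented. It shows the mechanism, though: one small dip trips the most sensitive sellers, their sales push the price lower, and the new low trips still more sellers.

```python
# A toy positive-feedback-loop sketch, NOT a model of any real market.
# Each hypothetical bot sells (once) when the price has fallen by its
# own stop-loss threshold; every sale pushes the price lower still.

def simulate_flash_crash(start_price, n_bots=50, impact=0.004,
                         shock=0.02, max_steps=30):
    """Return the price history after an initial downward `shock`."""
    # Bot i dumps its shares once the decline from start reaches its threshold.
    thresholds = [0.01 + 0.002 * i for i in range(n_bots)]
    sold = [False] * n_bots
    price = start_price * (1 - shock)          # the triggering dip
    history = [start_price, price]
    for _ in range(max_steps):
        decline = 1 - price / start_price
        sellers = [i for i, t in enumerate(thresholds)
                   if not sold[i] and decline >= t]
        if not sellers:                        # the cascade has died out
            break
        for i in sellers:
            sold[i] = True
        price *= (1 - impact) ** len(sellers)  # each sale knocks the price down
        history.append(price)
    return history
```

Running `simulate_flash_crash(100.0)` shows the price cascading far below the initial 2% dip before the selling exhausts itself, even though no single bot did anything but follow its programmed instruction.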

How to Go About Defining Robotic and AI Ethics

It is quite tricky to create set laws for robotics and artificial intelligence. Concepts such as “human,” “harm,” and “the greater good” are difficult to define in a way that a machine would understand, because AI and robots don’t think like humans. A good comparison: a bird and a jet plane can be considered similar, insofar as both have wings and both fly, but the way they use their wings to achieve flight is completely different.

It’s not a simple issue to fix, but it is one we will inevitably need to confront as technology progresses closer to true artificial intelligence. The Engineering and Physical Sciences Research Council (EPSRC) is one group approaching the issue, using a multi-disciplinary approach to define what it calls the Principles of Robotics.

The EPSRC’s Principles of Robotics are like Asimov’s laws but more comprehensive, incorporating morality and ethics as understood across various fields, including social psychology, developmental psychology, neuroscience, and philosophy. The EPSRC’s position is that robots are tools that humans must use responsibly. In the likely event that we create powerful sentient robots, they should be integrated into society and taught its norms, much the way children are.

To some, the idea of robots taking over the world is absurd, the mere suggestion dismissed as the errant musings of science fiction. To others, such as Elon Musk, Bill Gates, and Stephen Hawking, the concept of rogue artificial intelligence destroying all life as we know it is a distinct possibility, one that requires humanity to look deeply before we leap.
