The Problematic Black Box Nature of the AI Robot

The opinions expressed belong to the author alone, unless there was a case of nonconsensual hypnosis involved, and do not reflect the beliefs of Onyx Tech Solutions.

“What is vital is to make anything about AI explainable, fair, secure and with lineage.”

Uncertainty: Understanding the Problem

The Black Box mechanism is one in which a set of probabilities is available, but the machine or robot, driven by deep learning (a subfield of machine learning), picks one outcome without revealing why.
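To make that concrete, here is a minimal, hypothetical sketch in Python (the network, its weights, and the input are all invented for illustration, not taken from any real system): a tiny neural network turns an input into class probabilities and picks the highest one, and nothing in the output explains the choice.

```python
import numpy as np

# A hypothetical "black box": a tiny two-layer network with opaque
# (here, randomly initialized) weights. In a real system these weights
# would come from training on large amounts of data.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)   # hidden layer
W2, b2 = rng.normal(size=(8, 3)), np.zeros(3)   # output layer

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def decide(x):
    hidden = np.tanh(x @ W1 + b1)       # internal state with no human meaning
    probs = softmax(hidden @ W2 + b2)   # "a bunch of probabilities"
    return int(probs.argmax()), probs   # the machine simply picks one

x = np.array([0.2, -1.3, 0.7, 0.05])    # an invented sensor reading
choice, probs = decide(x)
print("decision:", choice, "probabilities:", np.round(probs, 3))
# The output says WHAT was chosen and how confident the model is,
# but the raw weights offer no human-readable WHY.
```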

This seems fine on paper, right? Not if you think one step further. Why did it do that? The algorithm taught it to make a decision; which decision it makes is now entirely up to it.

With Black Box AI, even the developer isn’t sure of the outcome and can’t explain why it happened the way it did.

That is a problem when you start applying this technology in high-stakes industries, which is exactly where AI developers intend it to be used in the first place.

Why is it a problem at all? Two examples

Scenario 1

It becomes a problem when AI starts being used on the battlefield, in policing, or in any other matter where lives are at stake. If our robots are not following our instructions and are making decisions on their own on the battlefield, we are no longer in control.

Robots do not have empathy, so we cannot pin a glitch or deviation in the system on that, and empathy is really the only reason a deviation should be made on the battlefield. An accidental shot fired or an arbitrary killing, when done by a robot, cannot be explained. “The algorithm took a strange turn” is not good enough.

On the other hand, are human actions really that explainable? Molly Kovite writes in a piece titled “I, Black Box: Explainable Artificial Intelligence and the Limits of Human Deliberative Processes” that “we make decisions first and then justify them.” And our decisions, especially important or big ones, are often unjustifiable even to ourselves. So is the human cognition and decision-making process necessarily a logical one? Most definitely not.

Scenario 2

Another example of a situation in which Black Box mechanisms can cause anxiety, and rightly so, is when AI is used in cars.

Imagine a car that does not follow set instructions. It learns from experience and then adapts accordingly. Having this car, or a bunch of such cars, on the road with you is very likely to cause anxiety and stress.

However, once again we can consider human error and how often it costs people their lives or the lives of others: an estimated 94% of all road accidents are caused by human error. It makes one wonder how bad machines could really be. The answer is irrelevant.

That is because, given the chance that humans will cause errors and the chance that machines in their place will also cause errors, the average person is most likely to take the first gamble. The difference lies in the presence of empathy and humanity: we naturally trust ourselves more than we trust a machine.

Is there a solution?

XAI or White Box AI

This is where Explainable Artificial Intelligence comes in. With technology that can actually show the people at the top why a decision was made by a robot, accountability and clarity might even be better than if things were controlled entirely by humans. With XAI, the fact that a robot’s decision-making isn’t colored by emotion can become a good thing, because we can at least verify that it did the logical thing.

White Box AI is interpretable: it shows the reasoning behind an output and the logic behind a decision.
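As a contrast to the sketch above, here is a hedged illustration of a White Box model using scikit-learn’s decision tree (my example, not one from any particular XAI system): the entire decision logic can be printed and audited as explicit rules.

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

# A White Box model: every decision can be traced through explicit rules.
data = load_iris()
tree = DecisionTreeClassifier(max_depth=2, random_state=0)
tree.fit(data.data, data.target)

# Unlike a deep network, the full decision logic is printable:
print(export_text(tree, feature_names=list(data.feature_names)))

# Any single prediction can be justified by reading the matching path
# through the printed rules.
sample = data.data[:1]
print("prediction:", data.target_names[tree.predict(sample)[0]])
```

The trade-off, of course, is that such simple models rarely match deep learning’s accuracy on hard problems, which is part of why XAI research exists at all.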

What does the future look like?

The future looks AI-heavy, which makes the interpretability problem all the more pressing. Automation and machine learning are making their way into industries such as agriculture, education, city planning, art and entertainment, and the military, to name a few.

Before our everyday lives become completely automated and all the on-ground decision-making power is handed to these machines, we need to establish who is in charge and whether enough accountability and reasoning will be possible.

The field of science and technology is a very expensive one: by 2025, the AI market is projected to be worth around $60 billion, and we are on that path already, with investment in AI growing each year. So, if so much money is going to be spent on research, development, and implementation, then we need to get it right.

What practical steps are being taken for XAI?

  • The American Defense Advanced Research Projects Agency (DARPA) is investing $70 million in developing ways to interpret the deep learning mechanisms used in intelligence mining and drones.
  • The European Union has also put forth a set of guidelines stating that “AI systems should be accountable, explainable, and unbiased”.

At the end of the day, it comes down to one question: what is more acceptable to us, a mistake made by a human or one made by a robot? It seems an investigation into the human conscience is in order.