At AI’s Heart Lies A Dark Secret That Everyone Needs to Know About


AI has some secrets that even its creators find hard to understand. This cannot continue if AI is to become a part of our everyday life.

Machine learning technologies are great.

They are great for researchers, for doctors, and for businessmen.

However, the inner workings of any given machine-learning system are inherently far more opaque than those of a hand-coded system.

And they aren’t opaque just to normal readers on the internet.

Even computer scientists who work in this field find it impossible to completely understand the inner workings of machine learning technologies.

This, of course, does not mean that all future artificial intelligence techniques will be equally unknowable.

However, the case for deep learning is a bit different.

Deep learning, by its very nature, is an especially dark black box.

Even a computer scientist would find it extremely hard to simply look inside a given deep neural network and understand how it actually works.

Any given deep neural network’s reasoning ability is embedded in the behavior of the thousands of simulated neurons it contains.

These simulated neurons are arranged into dozens, and sometimes hundreds, of intricately interconnected layers.

Each of the simulated neurons in the very first layer, for example, receives a given input.

This input can be anything.

For example, one input could be the intensity of a specific pixel in a specific image.

After the simulated neuron has received its input, it then has to perform a calculation.

Only after performing that calculation does the simulated neuron output a new signal.

After that, the deep neural network system feeds the outputs from the first layer to the simulated neurons of the second layer.

It is really a very complex web of simulated neuron layers.

The process continues from one layer to the next layer until there are no more layers left.

And when there are no more layers left, that is the time when the deep neural network actually produces an overall output.
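
To make that layer-by-layer process a little more concrete, here is a minimal sketch in Python (using NumPy) of a tiny feed-forward network. The layer sizes and random weights are made up purely for illustration; it shows the mechanics described above, not any particular production system.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

# A toy network: 4 input values (e.g. pixel intensities), two hidden layers, one output.
# The layer sizes and random weights are placeholders; a real network learns its weights.
layer_sizes = [4, 8, 8, 1]
weights = [rng.standard_normal((n_in, n_out)) * 0.5
           for n_in, n_out in zip(layer_sizes[:-1], layer_sizes[1:])]
biases = [np.zeros(n_out) for n_out in layer_sizes[1:]]

def forward(inputs):
    """Feed the input through each layer in turn until no layers are left."""
    activation = inputs
    for w, b in zip(weights, biases):
        # Each simulated neuron computes a weighted sum of its inputs,
        # then applies a simple non-linearity before passing its signal on.
        activation = relu(activation @ w + b)
    return activation  # the network's overall output

print(forward(np.array([0.1, 0.7, 0.3, 0.9])))
```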

In addition to that, there is also a process that computer scientists call backpropagation.

This process is able to tweak the calculations of various individual neurons.

And it does that in a way which enables the entire deep neural network to learn how to produce the desired output.
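
Here, likewise, is a hedged sketch of what backpropagation does: it measures how far the network’s output is from the desired one and nudges every weight in the direction that shrinks that error. The toy data, layer sizes and learning rate below are invented for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: learn to predict the mean of four inputs (an invented task).
x = rng.random((32, 4))
y = x.mean(axis=1, keepdims=True)

w1, b1 = rng.standard_normal((4, 8)) * 0.5, np.zeros(8)
w2, b2 = rng.standard_normal((8, 1)) * 0.5, np.zeros(1)
lr = 0.1

for step in range(200):
    # Forward pass.
    h_pre = x @ w1 + b1
    h = np.maximum(0.0, h_pre)          # hidden activations (ReLU)
    pred = h @ w2 + b2                  # the network's output
    loss = np.mean((pred - y) ** 2)     # how far the output is from the desired one

    # Backward pass: push the error back through the layers (backpropagation)
    # and nudge each weight in the direction that reduces the loss.
    d_pred = 2 * (pred - y) / len(x)
    d_w2, d_b2 = h.T @ d_pred, d_pred.sum(axis=0)
    d_h = d_pred @ w2.T
    d_h_pre = d_h * (h_pre > 0)
    d_w1, d_b1 = x.T @ d_h_pre, d_h_pre.sum(axis=0)

    w2 -= lr * d_w2; b2 -= lr * d_b2
    w1 -= lr * d_w1; b1 -= lr * d_b1

print("final loss:", loss)
```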

Because a deep neural network has many layers of simulated neurons, it can recognize different things and objects at different levels of abstraction.

To take an example, if a given deep neural network system is designed to recognize cats, then the lower layers of the network would recognize simpler features such as colors and outlines.

Then, the higher layers present in the deep neural network would recognize the more complex features.

These would include stuff like eyes and/or fur.

After that, the topmost layer of simulated neurons would identify all of that information as a cat.

Roughly speaking, computer scientists and engineers can take this approach and apply it to many other kinds of inputs.

As mentioned before, these are the same kinds of inputs that lead a given machine to learn and, in effect, teach itself.

Using such techniques machines can learn,

  • All the sounds that make up the words in a given piece of speech.
  • The words and the letters that create the sentences in a given text.
  • The movements of a steering wheel required for proper driving.

Computer scientists and researchers have used various ingenious strategies to try to capture information they can use to explain, in much more detail, what is happening inside such deep neural network systems.


Back in the year 2015, researchers working at the search engine giant Google modified one of their deep-learning-based image recognition algorithms.

When they did, the algorithm changed the way it worked.

So instead of trying to spot objects in various photos, the algorithm would modify or generate them.

Researchers working at Google effectively ran the image recognition deep learning algorithm in reverse.

This enabled them to discover all the features that the program used in order to recognize, for example, a building or a bird.

Researchers called the project that produced the new images Deep Dream.
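
A rough sketch of that “run it in reverse” idea is gradient ascent on the input itself: keep the weights fixed and adjust the pixels so that whatever features the network responds to get amplified. The tiny, untrained single-layer network below is purely illustrative; it is not the actual Google model or Deep Dream objective.

```python
import numpy as np

rng = np.random.default_rng(0)

# A made-up, untrained "recognizer": 16 pixels in, 8 feature detectors out.
# The real Deep Dream work used a large trained image model; this only sketches the idea.
w = rng.standard_normal((16, 8)) * 0.5

def features(pixels):
    return np.maximum(0.0, pixels @ w)  # the visual features this layer "looks for"

image = rng.random(16)  # start from an ordinary input image
lr = 0.05

for _ in range(100):
    h = features(image)
    # Gradient of 0.5 * sum(h ** 2) with respect to the pixels: it points in the
    # direction that amplifies whatever features the layer already detects.
    grad = h @ w.T
    image = np.clip(image + lr * grad, 0.0, 1.0)

print(np.round(image, 2))
```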


The resulting photos from the modified algorithm showed grotesque, alien-like animals emerging from plants and clouds.

The modified images also showed hallucinatory pagodas which bloomed across mountain ranges and forests.

In other words, the resulting photos/images provided good evidence that various deep learning techniques were not, in fact, entirely inscrutable.

The modified images also revealed that the computer algorithm had little trouble homing in on familiar visual features such as a bird’s beak or feathers.

However, those same modified images also hinted at other important things.

Other important things, such as how different human perception is from deep learning: a deep learning algorithm could make something out of an artifact that humans would know to simply ignore.

Researchers working at Google also noted that when their image recognition deep learning algorithm generated photos of a given dumbbell, the algorithm also took the opportunity to generate a human arm that held the generated dumbbell.

In other words, the machine automatically came to the conclusion that a human arm was actually part of the whole dumbbell thing.

Computer scientists have made further progress by using ideas borrowed from cognitive science and neuroscience.

Jeff Clune, who works at the University of Wyoming as an assistant professor, recently led a team which employed the artificial intelligence equivalent of various optical illusions in order to test various deep neural networks.

Back in the year 2015, Jeff Clune’s research group also demonstrated how specific images could fool such a deep neural network into perceiving objects that simply are not there.

Why did those images succeed in doing that?

Well, because the optical illusion images exploit the various low-level patterns that deep neural network systems search for when trying to identify images.

Jason Yosinski, one of Jeff Clune’s collaborators, also managed to build a tool which actually acts like one of those probes that scientists stick into brains.

Jason’s tool essentially targets any simulated neuron which is present in the middle of the network and then searches for that specific photo which activates that particular neuron, in the network, the most.

Now, the images which eventually turn up are more or less abstract.

Think of images that resemble an impressionistic take on a school bus or a flamingo.
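
A hedged sketch of what such a probe does, in the spirit of the tool described (not Yosinski’s actual code): fix one hidden unit in a small made-up network, then repeatedly adjust a noise image in the direction that increases that unit’s response.

```python
import numpy as np

rng = np.random.default_rng(1)

# An invented, untrained layer standing in for one layer of a large trained vision network.
w1 = rng.standard_normal((16, 32)) * 0.5

target_unit = 7          # the one simulated neuron we want to "probe"
image = rng.random(16)   # start from random noise rather than a real photo
lr = 0.1

for _ in range(200):
    # Gradient of this neuron's weighted-sum input with respect to the pixels;
    # for this simple linear sketch it is just the neuron's weight vector.
    grad = w1[:, target_unit]
    # Climb toward whatever input excites this particular neuron the most,
    # while keeping the pixels in a valid range.
    image = np.clip(image + lr * grad, 0.0, 1.0)

print("final activation:", float(np.maximum(0.0, image @ w1)[target_unit]))
```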

But overall, these images highlight the problem with the perceptual abilities of the machine.

Mainly, their mysterious nature.

However, there is little doubt that the community needs a bit more than just a glimpse of artificial intelligence’s thinking.

The problem here is that computer scientists and researchers still have not managed to come up with an easy solution.

Of course, the problem is not really the deep neural network itself.

The problem is the interplay of all the calculations that take place inside any given deep neural network.

This interplay of calculations is pretty much crucial to the complex decision-making and higher-level pattern recognition abilities of deep neural networks.

However, those calculations are pretty much a quagmire of variables and mathematical functions.

According to Tommi Jaakkola, a professor at MIT, if someone had an extremely tiny neural network, then he or she might be able to understand it.


However, once the neural network grows very large, with thousands of units in each layer and hundreds of layers in total, it becomes pretty much impossible to understand.

Regina Barzilay, whose office is right next to Jaakkola’s, is a professor at MIT who has made determined efforts to apply machine learning techniques to medicine.

Doctors diagnosed Regina Barzilay with breast cancer just a couple of years ago when she was 43.

Barzilay found the diagnosis a shock in and of itself.

However, Barzilay also felt dismayed that doctors and other professionals in the medical community did not make use of cutting-edge machine learning and statistical methods to help guide patient treatment and oncological research.

Recently, she told reporters that artificial intelligence had a huge potential.

It could revolutionize medicine.

However, she also realized that taking advantage of artificial intelligence’s potential would mean the medical industry going beyond simply looking at medical records.

Regina envisions the medical community making use of more raw data.

According to her, the medical community is severely underutilizing data such as,

  • Pathology data
  • Imaging data

And all other such types of information.

After Regina completed her cancer treatment last year, she and her research students began working with doctors at Massachusetts General Hospital to develop a new system capable of mining pathology reports in order to identify patients with specific clinical characteristics that researchers might need to study further.

With that said, Barzilay was clear that the new system would have to provide explanations for its reasoning.

Hence, together with a student and Jaakkola, Barzilay managed to add an additional step.

The modified deep neural network now extracted and highlighted little snippets of text which represented the pattern that the neural network had discovered.
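
The published technique is more involved, but the spirit of “highlight the snippets the prediction leans on” can be sketched with a simple occlusion test: remove each snippet in turn and see how much the score drops. Everything below (the keyword-counting stand-in classifier, the example report, the snippet granularity) is invented for illustration and is not Barzilay and Jaakkola’s actual method.

```python
import numpy as np

# A stand-in "classifier" that scores a pathology report; in reality this would be
# the trained neural network. Here it just counts a few made-up keywords.
KEYWORDS = {"atypical", "calcification", "density"}

def score(snippets):
    text = " ".join(snippets).lower()
    return sum(text.count(k) for k in KEYWORDS) / 10.0

def highlight(snippets, top_k=2):
    """Rank snippets by how much the prediction drops when each one is removed."""
    base = score(snippets)
    drops = []
    for i, s in enumerate(snippets):
        without = snippets[:i] + snippets[i + 1:]
        drops.append((base - score(without), s))
    drops.sort(reverse=True)
    return [s for _, s in drops[:top_k]]

report = [
    "The patient reports no pain.",
    "Scattered calcifications noted in the upper quadrant.",
    "Atypical cells observed, density increased.",
]
print(highlight(report))
```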

Barzilay along with her research students have now busied themselves in developing another deep-learning computer algorithm.

This algorithm would have the capability to find very early signs of diseases such as breast cancer in various different mammogram images.

The research team is also giving this new deep neural network system the ability not only to make a diagnosis but also to explain the reasoning behind it.


Barzilay recently told reporters that one really needed a new kind of loop in which the human and the machine are able to collaborate.

On the other hand, there is the United States military.

It is pouring billions of dollars into new projects that will enable it to use machine learning to pilot aircraft and vehicles.

The United States military will also use machine learning techniques to identify targets as well as to assist analysts in sifting through huge piles of intelligence data.

Now, in applications such as these, there is very little room for any kind of mystery on part of the algorithm.

Perhaps even less so than in medicine.

In simpler terms, if there is one industry where one simply cannot tolerate any mystery in the way algorithms work then it is the military industry.

And rightly so, the Department of Defense has identified the explainability of deep neural networks (more precisely, their mysterious nature) as a key stumbling block.

The program manager at DARPA (the Defense Advanced Research Projects Agency), David Gunning, is now overseeing the aptly named XAI (Explainable Artificial Intelligence) program.

Gunning, a silver-haired veteran of DARPA who, in the past, oversaw the initial DARPA project that eventually led a group of researchers to create Siri, recently mentioned that automation had crept into countless areas of the United States military.

Recent reports have also revealed that intelligence analysts have already started to test machine learning techniques as a method of identifying patterns in the vast amounts of collected surveillance data.

The military is also developing and testing many other kinds of autonomous aircraft and ground vehicles.

However, there is little doubt that soldiers would take some time to feel comfortable inside a robotic tank.

Especially if that tank did not explain its decisions to them.

It is also true that intelligence analysts would show some reluctance in acting on information that does not come with proper reasoning.

Gunning also said that it was often in the inherent nature of existing machine learning systems to produce a good number of false alarms.

Because of that, an intelligence analyst would really require a good bit of help in order to understand why the new machine learning system made the recommendation that it did.

Under Gunning’s program, DARPA selected a total of 13 projects, from both industry and academia, for further funding this past March.

Some of the projects could actually build on the previous work done under the leadership of a professor at the University of Washington, Carlos Guestrin.

Carlos Guestrin, along with his colleagues, had developed a method to make machine learning computer systems provide the humans operating them with a rationale for their outputs.

Fundamentally, under such a method the machine learning computer system would automatically find a reasonable number of examples from the given data set and then serve those examples up with short explanations.

A computer system that relies on machine learning may use many millions of text messages during its training and decision-making phases.

However, by making use of the approach that the team at the University of Washington has come up with, the machine learning computer system could highlight specific keywords which it managed to find in a given text message.

Additionally, Guestrin’s research group has also managed to devise ways for various image recognition computer systems so that they too can give hints at their reasoning.

The new approach would have such image recognition machine learning systems highlight those parts of a given image that they found most significant.
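
As a hedged illustration of that image-highlighting idea (not Guestrin’s published method), one simple approach is an occlusion test: grey out each small patch of the image in turn and record how much the classifier’s score falls; the patches that matter most form a significance map. The toy “classifier” below just responds to bright pixels near the centre, purely so the example is self-contained.

```python
import numpy as np

rng = np.random.default_rng(0)

# A stand-in image "classifier": in reality this would be a trained recognition network.
def classify(image):
    return float(image[2:6, 2:6].mean())

def significance_map(image, patch=2):
    """Grey out each patch in turn and record how much the score falls."""
    base = classify(image)
    heat = np.zeros_like(image)
    for r in range(0, image.shape[0], patch):
        for c in range(0, image.shape[1], patch):
            occluded = image.copy()
            occluded[r:r + patch, c:c + patch] = 0.0
            heat[r:r + patch, c:c + patch] = base - classify(occluded)
    return heat

image = rng.random((8, 8))
print(np.round(significance_map(image), 2))
```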

However, there is one mention-worthy drawback that this approach and perhaps all such approaches (such as Barzilay’s) have.

And the drawback is that these machine learning systems would still provide a simplified explanation.

That means these machine learning computer systems may produce explanations which may have lost some vital information along the way.

According to Guestrin, the artificial intelligence community had still not achieved the complete dream.

What is that complete dream?

That dream is one in which an artificial intelligence system is able to hold a conversation with a human and to provide an explanation for its actions and decisions.

He also said that the artificial intelligence community still had a long way to go if it truly wanted to have interpretable artificial intelligence.

Of course, it does not always have to involve a high-stakes situation such as military maneuvers and/or cancer diagnosis for the mysterious nature of artificial intelligence to become a major issue.

Knowing the reasoning behind artificial intelligence’s decisions is going to become crucial if this type of technology is to evolve into a common and genuinely useful part of people’s daily lives.

Tom Gruber, who is a computer scientist and also leads the official Siri team at the technology giant, Apple, recently said that the issue of explainability had become a key consideration for him and his research team as the team tried to make Siri not only more capable but also a smarter virtual assistant.

With that said, Gruber hasn’t actually discussed the company’s specific plans for the future of its Siri offering.

However, it should not be hard for anyone to imagine that if Siri recommended a user a specific restaurant, then the user would want to know the reasoning behind Siri’s final decision.

The director of artificial intelligence research at Apple, Ruslan Salakhutdinov (who is also an associate professor at Carnegie Mellon University), told reporters that he saw artificial intelligence explainability as the core component of the evolving relationship between intelligent machines and humans.

He also said that solving the issue of explainability would introduce trust into the whole equation.

 
