Everyone Needs To Know This Dark AI Secret


Some want more transparency in how artificial intelligence systems work.

At the very core of artificial intelligence lies a dark secret. And that secret is that no one actually knows, in the truest sense of the word, how most advanced algorithms go about doing their work.

And that can cause a lot of problems.

 

About a year ago, a strange self-driving car made its way onto the quiet roads of Monmouth County, New Jersey.

That experimental self-driving car was developed by a group of researchers working at Nvidia, the popular chip maker.

Now, that car did not look any different from other self-driving vehicles.

But even with that, it did something that other self-driving cars from the likes of General Motors, Tesla or Google could not demonstrate.

Moreover, the car from Nvidia successfully managed to show the rising power of machine learning and artificial intelligence.

The unique thing about the self-driving car from Nvidia was its independence.

In other words, no programmer or engineer provided the car with a single instruction.

Instead of that, the self-driving autonomous car relied completely on a specific class of algorithm.

The algorithm had managed to teach itself how to drive by only watching how humans did it.

There is little doubt that getting a self-driving car to drive on its own that way is nothing short of an impressive feat.

However, it raises a lot of other unsettling questions.

The reason for those questions is simple enough as well:

No one has complete clarity on how the self-driving autonomous car goes about making its own decisions.

The self-driving vehicle’s sensors collect a ton of information from the car’s surroundings.

That information goes straight into the giant artificial neural network.

The neural network processes that data.

After that, it delivers the commands required to operate the vehicle's steering wheel appropriately.

The self-driving car also makes use of that data to operate the brakes as well as all the other important systems.

In the majority of cases, the results of this process pretty much match the driving responses one would expect if a human were driving the car along the same route.
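To make that pipeline concrete, here is a minimal sketch in Python of the general idea described above, not Nvidia's actual system: camera pixels flow into a convolutional neural network, which directly outputs a steering command. The network layout, layer sizes and input dimensions are illustrative assumptions.

```python
# Minimal sketch (illustrative, not Nvidia's code): map one camera frame
# directly to a steering command with a small convolutional neural network.
import torch
import torch.nn as nn

class SteeringNet(nn.Module):
    def __init__(self):
        super().__init__()
        # Convolutional layers extract visual features from the camera frame.
        self.features = nn.Sequential(
            nn.Conv2d(3, 24, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(24, 36, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(36, 48, kernel_size=3, stride=2), nn.ReLU(),
            nn.Flatten(),
        )
        # Fully connected layers turn those features into a single number:
        # the steering angle sent to the car's controls.
        self.head = nn.Sequential(
            nn.LazyLinear(100), nn.ReLU(),
            nn.Linear(100, 1),
        )

    def forward(self, camera_frame):
        return self.head(self.features(camera_frame))

model = SteeringNet()
frame = torch.rand(1, 3, 66, 200)   # one toy RGB camera frame
steering_angle = model(frame)       # the "command" for the steering system
print(steering_angle.item())
```

Notice that nothing in this sketch reveals why a particular frame produces a particular steering angle; the answer is buried in the network's learned weights, which is exactly the problem described above.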

However, who can guarantee that the self-driving car won't one day do something completely unexpected?

There are so many things that can go wrong with a self-driving autonomous car.

It could simply crash into a tree.

And who is to tell if it won’t just stop at a green light?

At the time of writing this report, it might prove impossibly difficult for researchers to find out the 'why' behind the self-driving car's decisions.

The entire system is very complicated.

Perhaps too complicated.

It is so complicated that, some think, even the engineers who actually designed the self-driving car's algorithms might struggle to isolate the real reason for any single one of its actions.

Now comes the other problem.

Researchers can’t really ask the car for a reason.

In simpler terms, there is currently no obvious way for researchers to design and develop a system through which the self-driving car could always explain why it ended up doing the thing that it did.

Self-driving cars such as the one from Nvidia have mysterious minds.

And that, according to some, points toward a looming issue with emerging technologies related to artificial intelligence.


The underlying artificial intelligence technology that the car makes use of is known in the community as deep learning.

This technique has proved fairly powerful when it comes to solving real-world problems for the last five years or so.

Moreover, researchers and engineers have managed to deploy artificial intelligence on a wide scale for difficult tasks such as:

  • Language translation
  • Voice recognition
  • Image captioning

Now, the community hopes that the same artificial intelligence techniques will have the capability to help diagnose deadly diseases and will make million-dollar (and perhaps even larger) trading decisions.

Researchers and engineers also hope that artificial intelligence would help them accomplish countless other tasks in order to transform many industries from top to bottom.

However, that probably won’t happen.

Or rather, that should not happen.

Not unless and until the community of researchers and engineers finds ways of making artificial intelligence techniques such as deep learning more understandable, at least to their creators.

The community also has to ensure that artificial intelligence techniques and the tasks that they perform are held more accountable to their eventual users.

Not doing so would mean that it becomes very hard for anyone to predict if and when failures may occur.

And of course, failures are always inevitable.

This is one of the reasons why the self-driving autonomous car from Nvidia is still in its experimental phase.

As far as the United States of America is concerned, law enforcement agencies and others are already making use of mathematical models to help determine whether a person makes parole or not.

Not only that, businesses and governments are using these mathematical models, along with computer algorithms, to determine which applicants get approval for a loan and which ones get a job.

It is true that if someone could get hold of all these mathematical models and had full access to them, it would be possible to understand their reasoning.

However, the problem is that employers, the military and other entities have started to turn their attention to much more advanced and complex machine learning techniques, which could make automated, AI-enabled decision-making pretty much inscrutable.

As mentioned before, deep learning is currently the most widely used of all the existing artificial intelligence approaches.

This artificial intelligence technique represents not just a new but a fundamentally different approach to programming computers.

According to Tommi Jaakkola, a professor at the Massachusetts Institute of Technology who works on applications of machine learning, deep learning algorithms represent a problem that is already relevant.

He also said that the problem is going to become much more relevant in the very near future.

Jaakkola further added that whether deep learning algorithms are helping entities make an investment decision, a military decision or a medical decision, one would not want to rely solely on the current black box approach.

Following on from that, some have already started to argue that the ability to interrogate an artificial intelligence system about the information it uses to reach its conclusions should become a fundamental legal right.

Already, governing bodies such as the European Union have shown intentions to require that technology companies be able to provide users with a good explanation for the decisions that their automated systems eventually reach.

Some believe that may well be impossible, not just for complex computer systems but also for systems which seem relatively simple to the common user, at least on the surface.

These “simple” computer systems could come in the form of websites and apps which make use of deep learning in order to recommend content and/or serve advertisements.

The other thing that readers should know is that the computers that actually run such online services have essentially programmed themselves.


Moreover, they have managed to do so in ways that engineers and researchers cannot even understand.

As alluded to before, even the engineers who, in fact, built all such apps and services are unable to fully explain their creation’s behavior.

So it should not come as a surprise that such black box computer systems have raised plenty of mind-boggling questions.

Technology is going to advance at a rapid pace.

That is a given.

But society might, in reality, cross that specific threshold beyond which utilizing artificial intelligence would require a certain amount of the good old leap of faith.

There is little doubt that even humans are unable to truly explain their own thought processes at all times.

However, humans have always found ways of intuitively trusting and gauging other people.

The obvious question that arises from all of this is: will humans be able to do the same with machines that think and then make decisions on their own, and do so in an entirely different way from a human?

It should be obvious to anyone that humans have never before built machines that operate in ways their creators do not understand.

Moreover, how well should we humans expect to communicate, and get along, with advanced and highly intelligent machines that could be inscrutable as well as unpredictable?

Such are the questions that take curious people on their own journeys into the fields of artificial intelligence and machine learning.

Perhaps the best way to start one's search to understand artificial intelligence and the problems with its algorithms is where the bleeding edge of AI research takes place.

That place can be Google.

And it can be Apple as well.

Then there are tons and tons of other AI startups and research centers.

Of course, individual philosophers in the field of artificial intelligence can also help.

Back in 2015, a Mount Sinai Hospital research group in New York had an inspiration.

That group wanted to apply various deep learning techniques to the vast database of the hospital.

This database included records on all the patients admitted to the hospital.


The data set in question featured hundreds of variables on different patients, drawn from their test results, doctor visits and many other such records.

The algorithm was trained using the data from a total of 700,000 individuals.

Researchers involved with the project called the program Deep Patient.

After training the new program on all that old data, researchers finally decided to test the algorithm with new records.

The results showed the program predicting diseases with an incredible success rate, proving that something like it could one day be used on a wider scale.
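As a rough illustration of the general idea, and emphatically not the Mount Sinai code, the toy sketch below trains a small neural network on synthetic 'patient records' and asks it to flag at-risk patients. The data, number of variables and model settings are invented stand-ins.

```python
# Toy sketch of the Deep Patient idea: learn disease risk from patient records.
# All data here is synthetic; nothing below reflects the real Mount Sinai model.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n_patients, n_variables = 5000, 40               # stand-ins for 700,000 records and hundreds of variables
X = rng.normal(size=(n_patients, n_variables))   # lab results, visit counts, etc.
# Invented ground truth: disease depends on a handful of the variables.
y = (X[:, :5].sum(axis=1) + rng.normal(scale=0.5, size=n_patients) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=300, random_state=0)
model.fit(X_train, y_train)

# The model flags risk for new records, but nothing here explains *why*
# a particular patient was flagged -- the exact puzzle described below.
print("held-out accuracy:", model.score(X_test, y_test))
print("risk for first held-out patient:", model.predict_proba(X_test[:1])[0, 1])
```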

Without any kind of instructions from experts, Deep Patient managed to discover various disease patterns.

Patterns that remained hidden in the hospital data.

These patterns seemed to accurately indicate when various patients were pretty much on their way to a broad range of different ailments.

Deep Patient could even tell if a patient was at risk of liver cancer.

Now, with that said, according to Joel Dudley, the researcher who led the team at Mount Sinai, doctors and researchers already had a lot of methods to predict diseases.

Most of them offered reasonable performance just by looking at a patient's record.

However, he added that Deep Patient simply offered far better performance and accuracy.

At the same time, researchers still consider Deep Patient a puzzling piece of code.

Deep Patient appears to have a staggering success rate at anticipating the onset of various psychiatric disorders such as schizophrenia.

It can anticipate other disorders surprisingly well too.

However, since physicians have always found it notoriously difficult to predict schizophrenia, Joel Dudley could not help but wonder how Deep Patient managed to predict the disorder.

Of course, Joel still has no idea how it does that.

Moreover, nothing he has found so far offers any clue as to how the Deep Patient algorithm is able to do it.

It stands to reason that if an algorithm such as Deep Patient is really going to assist physicians in predicting diseases, then it would ideally give doctors the exact rationale for all its predictions as well.

It has to do that in order to reassure doctors that the algorithm is not only accurate but also knows what it is doing.

The other reason why it should provide a rationale for its decisions is to justify any and all changes in drugs that the doctor has prescribed to someone.
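What might such a rationale look like in practice? One common, though partial, technique is feature attribution: shuffle each input variable in turn and measure how much the model's predictions degrade. The hedged sketch below illustrates this on invented data; it is one possible approach, not the method used by Deep Patient.

```python
# Hedged illustration of a "rationale" via permutation importance.
# The data, variables and model are invented; this is not Deep Patient's method.
import numpy as np
from sklearn.inspection import permutation_importance
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(2000, 6))                            # six toy clinical variables
y = (2 * X[:, 0] - X[:, 3] + rng.normal(size=2000) > 0).astype(int)

clf = LogisticRegression().fit(X, y)

# Shuffle each variable and record how much accuracy drops: a rough ranking
# of which variables the model actually relied on.
scores = permutation_importance(clf, X, y, n_repeats=20, random_state=1)
for i in np.argsort(scores.importances_mean)[::-1]:
    print(f"variable {i}: importance {scores.importances_mean[i]:.3f}")
```

Rankings like these give doctors a starting point, but for deep networks with millions of parameters they remain a crude approximation rather than a true explanation.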

Dudley recently told reporters, somewhat ruefully, that the artificial intelligence community could build all these useful models but could not explain how the models worked.

As mentioned before, this isn’t the way artificial intelligence has always worked.

Right from the outset, the field had two schools of thought regarding the extent to which artificial intelligence ought to be explainable or understandable.

A lot of early researchers had the idea that building machines that could reason only according to logic and rules made the most sense.

This school also held the belief that by doing so, the machine’s inner workings would become transparent to anyone and everyone who had the will and the time to examine a bit of code.

The second school of thought felt that intelligence could, and probably would, emerge more easily if machines took their inspiration from biology.

This meant that the machine should learn by not only observing but also experiencing.

Some of our readers might have already guessed that the second approach meant that computer programming would have to be turned on its head.

In such an approach, Joel Dudley said, the programmer would not write various commands in order to solve a given problem.

Instead of doing that, the program would have the ability to generate its very own computer algorithm.

It would base that algorithm on example data as well as the desired output.
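The contrast between the two approaches can be shown in a few lines. In the hedged sketch below (my framing, not the article's), the first function encodes a rule written by hand, while the second is given only example data and the desired outputs and derives its own mapping from them.

```python
# Illustrative contrast between hand-written rules and learning from examples.
import numpy as np

# School one: the programmer writes the command/rule explicitly.
def fahrenheit_to_celsius_rule(f):
    return (f - 32) * 5.0 / 9.0

# School two: the program is handed example data plus the desired output
# and generates its own mapping.
examples_f = np.array([32.0, 50.0, 68.0, 86.0, 104.0, 212.0])
desired_c = np.array([0.0, 10.0, 20.0, 30.0, 40.0, 100.0])

# Fit c = w * f + b by least squares: the "algorithm" comes from the data.
A = np.column_stack([examples_f, np.ones_like(examples_f)])
(w, b), *_ = np.linalg.lstsq(A, desired_c, rcond=None)

print(fahrenheit_to_celsius_rule(98.6))   # rule written by hand
print(w * 98.6 + b)                       # rule learned from example data
```

In this tiny example the learned rule is easy to read off (it is just two numbers), but the deep networks the article discusses learn millions of such numbers, which is where the explainability problem begins.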

The majority of the machine learning techniques which later evolved into some of the most powerful and robust artificial intelligence systems actually followed the second path.

In other words, the machine had the ability to (and did) program itself.

However, in the beginning, this approach had very limited practical applications.

In fact, throughout the 1960s and 1970s, such machine learning techniques remained pretty much confined to the very fringes of the artificial intelligence field.

After that, as many industries began to computerize very rapidly and large data sets started to emerge, the same machine learning techniques managed to renew the community's interest once again.

That renewed interest is what actually inspired the recent developments which eventually led to most of the very best and most powerful machine learning techniques.

One specific version that deserves a mention here is now known as the ANN (artificial neural network) technique.

Then, by the time the 1990s came around, neural networks had gained the ability to automatically digitize characters written by hand.

However, real progress did not start to take place until the start of the current decade.

This is when, with the help of some really clever refinements and tweaks, very large (in other words, deep) neural networks managed to demonstrate rather dramatic enhancements in automated perception.
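For a sense of what that kind of task looks like in code, here is a small, hedged sketch: a basic artificial neural network trained to recognize handwritten digits, using scikit-learn's bundled 8x8 digits data purely as an illustration.

```python
# Small sketch: an artificial neural network learning handwritten digits.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

digits = load_digits()                      # 8x8 grayscale images of digits 0-9
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, random_state=0)

net = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
net.fit(X_train, y_train)

print("test accuracy:", net.score(X_test, y_test))
```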

Today, there is no doubt that deep learning is the thing responsible for the explosion of interest and money in artificial intelligence.

Deep neural networks managed to give normal-looking computers some extraordinary powers.

Some of the powers included the machine’s ability to not only recognize words coming out of a person’s mouth, but to recognize them as accurately as a real person could.

That, according to most researchers, is a skill which is far too complex for any programmer to code into a given machine by hand.

Not only that, deep learning has also managed to transform fields such as computer vision.

It has also dramatically improved tasks such as machine translation.

Now, researchers and engineers are using deep learning techniques to guide key decisions of all sorts in fields such as:

  • Manufacturing
  • Finance
  • Medicine

And, for sure, beyond as well.

There is much to discuss about the inner workings of algorithms that have given artificial neural networks and machine learning their current high standing.

Stay tuned for the next part of this post tomorrow.

 

Zohair

Zohair is currently a content crafter at Security Gladiators and has been involved in the technology industry for more than a decade. He is an engineer by training and, naturally, likes to help people solve their tech related problems. When he is not writing, he can usually be found practicing his free-kicks in the ground beside his house.
