Ian Goodfellow invented a very powerful artificial intelligence technique in which different neural networks are pitted against each other.
Now is the time for Ian, and of course the rest of us, to come to terms with the consequences of his discovery.
Back in 2014, Ian Goodfellow went out for a drink to celebrate the graduation of a fellow doctoral student.
As it turns out, at The Three Brewers (or Les 3 Brasseurs, one of Montreal's most popular watering holes), a few of Ian Goodfellow's friends came to him for help.
They wanted Ian to assist them with a somewhat thorny project they had previously decided to work on.
The project involved a computer that could create images entirely on its own.
At that time, researchers around the world had already managed to use neural networks (which are basically algorithms loosely modeled on a human brain's web of neurons) as a form of generative model.
Researchers used these generative models to create believable and new data on their own.
As some would expect, the results these researchers got often lacked the required quality.
In other words, a computer-generated image of a face would have a lot of errors.
Sometimes the image would not have any ears.
Other times it would appear very blurry.
To solve these problems Ian Goodfellow’s friends had a plan.
They proposed to run a complex statistical analysis of every element that made up a given image or photograph.
This would effectively help the machine generate decent images on its own.
Needless to say, such an endeavor would have consumed a truly gigantic amount of computer power for number-crunching tasks.
So what did Ian Goodfellow do?
He told his friends that their plan simply would never work.
But Ian didn’t just say that and called it a day.
He kept pondering the problem his friends had brought to him over his drink.
As he did, he hit upon an idea.
Ian Goodfellow thought about what would happen if someone pitted two different neural networks against one another.
When he talked about the idea with his friends, his friends showed the due skepticism.
Afterwards, when Ian went back home (and found out that his girlfriend had already gone to sleep) he made the decision of giving his idea a try.
As good a programmer as he was, Ian started coding that night and kept hammering away until the early hours.
Then he decided to test his code.
And his software worked the very first time he tried it.
So what did Ian invent that one fateful night?
Whatever it was, everybody ended up calling it a GAN.
Or, in other words, a Generative Adversarial Network.
This machine learning technique has now managed to spark a ton of excitement in the relatively new field of artificial intelligence and machine learning.
Moreover, it turned the creator of GAN, Ian Goodfellow, into an artificial intelligence rockstar.
The last few years have seen artificial intelligence researchers make an impressive amount of progress using a very popular technique known as deep learning.
Deep Learning, as a technique on paper, is pretty simple to explain.
If you supply a deep learning system with a ton of images (or sometimes just enough images), the machine will eventually learn to, for example, recognize a pedestrian who is about to cross a given road.
These types of deep learning applications have made it possible for researchers and engineers to realize things such as autonomous vehicles very quickly.
Deep learning techniques have also had a great impact on the conversational technology that currently powers almost all virtual assistants, including Siri and Alexa.
However, even though deep learning artificial intelligence systems have managed to learn how to recognize different things, these artificial intelligence systems have not achieved any competence when it comes to creating things.
And this is the goal of machine learning techniques such as GAN.
GAN gives machines the ability to imagine.
Or at least it gives them something akin to what humans know as imagination.
Such techniques don’t merely enable machines to have the ability to compose audio content or draw some weird but pretty pictures; these techniques also make machines just that little bit less reliant on us humans.
In other words, humans would not have to instruct machines about the physical world and how it works.
Currently, artificial intelligence programmers regularly have to tell a machine precisely what is in the training data it will feed on.
To put it another way, if a researcher wants to train an AI system, the researcher has to provide millions of pictures in which a pedestrian is crossing the road, and then more pictures without a pedestrian crossing.
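To make the labeled-data requirement concrete, here is a minimal sketch of supervised learning, the style of training described above. Everything here is a toy assumption: each "image" is reduced to a single synthetic feature, and every example must arrive with a human-supplied label before training can even start.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for "pedestrian vs. no pedestrian": one synthetic feature
# per example, and every example needs a human-supplied label.
n = 500
x_pos = rng.normal(2.0, 1.0, n)   # labeled "pedestrian crossing" (y = 1)
x_neg = rng.normal(-2.0, 1.0, n)  # labeled "no pedestrian"       (y = 0)
x = np.concatenate([x_pos, x_neg])
y = np.concatenate([np.ones(n), np.zeros(n)])

# Logistic regression trained by gradient descent on the labeled pairs.
w, b, lr = 0.0, 0.0, 0.1
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(w * x + b)))  # predicted probability
    w -= lr * np.mean((p - y) * x)          # gradient of the log-loss
    b -= lr * np.mean(p - y)

p = 1.0 / (1.0 + np.exp(-(w * x + b)))
acc = np.mean((p > 0.5) == y)
print(f"training accuracy: {acc:.2f}")
```

The point is not the classifier itself but the cost it implies: every one of those `y` labels is something a human had to provide, which is exactly the labor the article says unsupervised approaches hope to avoid.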
As some of our readers can probably imagine, this is labor-intensive.
It is also pretty expensive.
Moreover, such techniques limit the ability of AI systems to deal with even the slightest departures from what researchers trained them on.
Now researchers believe that in the coming future, machines will have no trouble in feasting on a huge amount of raw data.
They will also get a lot better at working out exactly what it is that they need to actually learn from the given data set.
And machines will do that without any human intervention.
Researchers also believe that point would, in reality, mark a huge leap forward for the area of AI currently known as unsupervised learning.
Once machines get better at unsupervised learning, the resulting possibilities would go beyond previously imagined limits.
Take for example a self-driving autonomous car.
With a good AI system, the car would have the ability to teach itself a lot of things about several different kinds of roads and road conditions without ever having the need to leave the garage.
A robot would have the ability not only to anticipate but also to act on the different kinds of obstacles it might encounter while moving about a busy warehouse.
And the robot would do that without something or someone else helping it around the obstacle.
Humans have an innate ability to imagine and reflect on several different scenarios.
This is what makes humans, humans.
This is also why some believe that when future historians of technology look back at how far machines have come, they will probably see Generative Adversarial Networks as a giant step toward machines that can imagine, and perhaps even toward something similar to human consciousness.
The chief AI scientist at Facebook, Yann LeCun, has as a matter of fact called Generative Adversarial Networks the coolest idea in the field of deep learning in the last 20 years.
Andrew Ng, another AI luminary (and Baidu's former chief scientist), has said that Generative Adversarial Networks represent a fundamental and significant advance in the field of AI and machine learning.
He also said that GAN has inspired a big chunk of the community of AI researchers.
And that community is still growing.
Artificial Intelligence Fight Club
The inventor of Generative Adversarial Network, Ian Goodfellow, now works as a research scientist at Google’s Google Brain division.
His offices are located in the company's headquarters in Mountain View, California.
When a reporter met him at his offices and told him about his superstar status in the AI community, he called it a bit surreal.
And perhaps it would not come as a surprise to anyone that after inventing Generative Adversarial Networks, Ian likes to spend the majority of his waking hours working hard against actors who have this desire to use his invention for evil purposes.
Generative Adversarial Networks are magical.
Their magic lies in the fact that there are two neural networks and there is a rivalry between them.
In essence, Generative Adversarial Networks mimic the back-and-forth between an art detective and an image forger.
Both try to outwit each other at all times.
In the context of Generative Adversarial Networks, this means there are two neural networks that researchers have trained on identical data sets.
The first of these two neural networks is, what researchers call, the generator.
What is its responsibility?
Researchers charge the generator with the task of producing various artificial outputs.
These could come in the form of handwriting and/or images.
The images and handwriting samples are pretty much as realistic as is practically possible.
Then there is the second neural network.
Researchers call this second neural network the discriminator.
The job of the discriminator is to compare the artificial outputs with the original data set that contains genuine images.
After comparing them, the discriminator has to determine which of the artificial outputs, such as images and handwriting samples, are fakes and which are real.
The generator takes a look at the discriminator’s results and then, based on that, adjusts its related parameters in order to create new photographs and/or images.
And this process of the generator producing fake outputs and the discriminator spotting the fakes continues until the discriminator can no longer tell the bogus apart from the genuine.
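The generator/discriminator duel described above can be sketched in a few dozen lines of Python. This is a deliberately tiny toy, not how real GANs are built: the "real data" is a 1-D Gaussian, the generator is a simple linear map of noise, the discriminator is logistic regression on a scalar, and the gradients are derived by hand rather than by a deep learning framework. It does, however, follow the same alternating update scheme: the discriminator learns to score real samples high and fakes low, then the generator adjusts itself to fool the discriminator.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Real" data the GAN should learn to mimic: samples from N(3, 1).
def real_batch(n):
    return rng.normal(3.0, 1.0, n)

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-np.clip(t, -60, 60)))

# Generator: g(z) = a*z + b on noise z ~ N(0, 1).
# Discriminator: D(x) = sigmoid(w*x + c), a score for "looks real".
a, b = 1.0, 0.0   # generator parameters
w, c = 0.1, 0.0   # discriminator parameters

lr, batch = 0.02, 64
for _ in range(4000):
    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    x_real = real_batch(batch)
    z = rng.normal(0.0, 1.0, batch)
    x_fake = a * z + b
    d_real = sigmoid(w * x_real + c)
    d_fake = sigmoid(w * x_fake + c)
    w += lr * (np.mean((1 - d_real) * x_real) - np.mean(d_fake * x_fake))
    c += lr * (np.mean(1 - d_real) - np.mean(d_fake))

    # Generator step: push D(fake) toward 1, i.e. fool the discriminator.
    z = rng.normal(0.0, 1.0, batch)
    x_fake = a * z + b
    d_fake = sigmoid(w * x_fake + c)
    a += lr * np.mean((1 - d_fake) * w * z)
    b += lr * np.mean((1 - d_fake) * w)

samples = a * rng.normal(0.0, 1.0, 10000) + b
print(f"generated mean ~ {samples.mean():.2f} (real mean is 3.0)")
```

After training, the generator's samples land much closer to the real distribution's mean than its untrained output did. The sketch also hints at the fragility discussed later in the article: with a discriminator this simple, the two players can oscillate or the generator can collapse its variance, which is the toy version of GANs being "temperamental."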
Let’s take another example which gained wide publicity last year.
A chip company that we know as Nvidia (which has also invested heavily in AI) had its researchers train a GAN.
The Nvidia researchers trained it in order to generate photographs of imaginary celebrities.
To do that, they had the GAN study some real ones.
The results were not entirely perfect.
In other words, the GAN the researchers trained did not manage to produce convincing images of fake stars one hundred percent of the time.
With that said, the GAN did manage to produce some which looked impressively genuine and realistic.
Now, if researchers had tried to do that with other machine learning techniques, they would have had to feed the AI system with tens of thousands of celebrity images as a part of its training.
On the other hand, GANs have the ability to become proficient at the given task with only a few hundred.
GANs do give machines the power of imagination.
But this power still comes with lots of limits.
Once researchers have fed and trained a GAN on lots of dog images, it has little problem generating a fake image of, for example, a dog with a unique pattern of dots.
However, even with all the training, the GAN can’t really conceive of an animal that is entirely new.
Moreover, researchers have also found that the actual quality of the images or the original training data matters a lot in the sense that it can have a big influence on the GAN-generated results.
In another example (which is also a telling one in fact), researchers observed a GAN that had begun to produce images of cats.
At first glance, there doesn't seem to be anything that could go wrong with cats.
But the images the GAN produced showed cats with various letters that it had integrated directly into the cat images.
Why did the GAN do so?
Because of the faulty data.
The data researchers provided to train the GAN contained the cat memes that most of us can find littered all over the internet.
In effect, the machine had managed to teach itself that it was supposed to consider words as part of the cat itself.
According to a machine learning researcher working at the University of Washington, Pedro Domingos, Generative Adversarial Networks are also pretty temperamental.
What does that mean?
It means that if the neural network that is acting as the discriminator is a bit too easy for the neural network that is acting as the generator to fool, then the generator would produce an output which would not look anywhere near realistic.
Researchers have found that sometimes it is difficult to calibrate the two dueling and different neural networks.
This also conveniently explains why Generative Adversarial Networks can sometimes train on a data set and then spit out weird, bizarre results.
We're talking about bizarre results like two-headed animals.
Still, all of these problems are just challenges, and they have not deterred AI researchers.
Since 2014, the year when Ian Goodfellow along with a handful of other researchers published the first-ever study on GANs, the community has seen plenty of AI researchers publishing hundreds of papers on GANs.
The technology has also gained a lot of fans.
One fan has gone so far as to create a web page.
It is called the GAN Zoo.
The fan has dedicated the page to keeping track of all the versions of the GAN technique that researchers have developed so far.
The most straightforward and obvious applications of GANs will no doubt involve areas that utilize a lot of image assets.
These areas are mostly present in the fashion and video games industry.
For instance, GANs could show game developers what a game character would look like running in the rain.
However, looking into the future, Ian Goodfellow believes that Generative Adversarial Networks would drive other and probably more significant AI-based advances.
He recently said that researchers have a ton of areas related to engineering and science that need a lot of optimization.
Ian cited examples in the medical industry, where companies have to develop more effective medicines.
GANs could also help companies that make batteries squeeze more efficiency out of them.
According to Ian Goodfellow, GANs will enable that next big wave of innovation.
Another area where GAN could prove useful is high-energy physics.
Scientists who are working at the Large Hadron Collider in Switzerland (at CERN) have to use super powerful computers in order to simulate hundreds of subatomic particles and their resulting interactions in some pretty big machines.
Needless to say, these simulations are not only slow but they also require a massive amount of computing power.
Researchers working at Lawrence Berkeley National Laboratory and Yale University have managed to develop a Generative Adversarial Network that can learn how to generate reasonably accurate predictions of how any given subatomic particle would behave.
Of course, researchers first have to train the GAN on existing data sets related to various simulations.
But after training, the GAN can give researchers results much more quickly.
Another promising field that GANs can have a big impact on is medical research.
Researchers sometimes are unable to gather sufficient real patient data because of privacy concerns.
This makes it harder for them to analyze and find out why, for example, a particular drug did not successfully work.
Generative Adversarial Networks can help researchers solve such problems by generating loads of fake records.
These fake records are usually almost as accurate as real patient data, according to Casey Greene, who works at the University of Pennsylvania.
With GANs, researchers can share the data more widely.
This would, in turn, help researchers advance much more quickly.
Moreover, all the while, they can keep the real patient records under tight protection.