Artificial intelligence researchers have long modeled new machine learning techniques on how the visual system processes information. Now, to overcome the limitations of those techniques, some are drawing inspiration from a different sense: the brain's ability to smell.
Modern artificial intelligence systems, including artificial neural networks (ANNs, which take loose inspiration from the neurons and connections of the nervous system), perform remarkably well at tasks with well-defined constraints. But they also demand enormous computational power and vast quantities of quality training data. That trade-off makes them excellent at playing chess and Go, at detecting whether an image contains a car, and at distinguishing between depictions of cats and dogs.
However, according to Konrad Kording, a computational neuroscientist at the University of Pennsylvania, these systems flounder at tasks such as composing tunes or writing short stories. They also have great trouble reasoning about the real world in any meaningful way.
To overcome these limitations, several research groups in the field are turning back for ideas to the thing that started it all: the brain. And a few of those groups are choosing what may at first seem an unlikely starting point: the brain's ability to sense smell. In other words, olfaction.
Researchers trying to better understand how organisms process chemical information are beginning to uncover coding strategies that seem relevant to particular problems in AI. What's more, olfactory circuits bear striking similarities to some of the brain's more sophisticated and complex regions, which have long interested researchers hoping to build better machines. Computer scientists are now probing those findings in machine learning contexts.
Revolutions and flukes
Most of today's state-of-the-art machine learning techniques were built, at least in part, to mimic the structure of the visual system, which is based on hierarchical extraction of information. When the visual cortex receives sensory data, it first picks out small, well-defined features, a process that involves spatial mapping.
In the 1950s and 1960s, the neuroscientists David Hubel and Torsten Wiesel discovered that specific neurons in the visual system correspond to particular locations in the retina, much like pixel locations in an image. The finding was so important that it later earned them a Nobel Prize.
The brain contains layer upon layer of cortical neurons. As visual information passes through these layers, details about edges, textures and colors come together to form an increasingly abstract representation of the input: that the object is a human face, for example, and that the face belongs to Jane. Every layer in the network helps the organism achieve that goal.
Deep neural networks were built to work through a similarly hierarchical scheme, an approach that sparked a genuine revolution in machine learning research. To teach these networks to recognize objects such as faces, researchers feed them thousands of sample photos. The system strengthens or weakens the connections between its artificial neurons so that it more accurately determines that a given collection of pixels forms the abstract pattern of a face. Given enough samples, the network can recognize faces in new photos and in contexts it has never seen before.
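To make the hierarchical idea concrete, here is a minimal NumPy sketch (illustrative only, not any particular production architecture): the first layer of a convolutional network detects a small, well-defined feature, in this case a vertical edge, and deeper layers would combine such feature maps into increasingly abstract patterns.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2D convolution: slide the kernel over the image, no padding."""
    kh, kw = kernel.shape
    h = image.shape[0] - kh + 1
    w = image.shape[1] - kw + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    """Standard nonlinearity between layers."""
    return np.maximum(x, 0.0)

# Layer 1: a small, well-defined feature (a vertical-edge detector).
edge_kernel = np.array([[1., 0., -1.],
                        [1., 0., -1.],
                        [1., 0., -1.]])

# A toy 6x6 "image": bright left half, dark right half.
image = np.hstack([np.ones((6, 3)), np.zeros((6, 3))])

# The feature map responds strongly where the edge sits.
features = relu(conv2d(image, edge_kernel))
# Layer 2 would combine such feature maps into more abstract patterns,
# and deeper layers into whole objects such as faces.
```

Training adjusts the kernel values themselves; here the edge detector is hand-written purely to show what one learned feature looks like.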
Artificial intelligence researchers have found great success with deep neural networks in a variety of applications beyond image classification, including:
- Language translation
- Speech recognition
- Many other kinds of machine learning tasks
Still, Charles Delahunt, a researcher at the Computational Neuroscience Center at the University of Washington, thinks of deep neural networks as something like freight trains. There is little doubt they are powerful, he says, but only on reasonably flat ground, where you can lay down tracks and support everything with a huge infrastructure. Biological systems need none of that: they handle deeply difficult problems that current deep neural networks simply cannot.
Consider one of the hottest topics in the field: self-driving vehicles. As an autonomous vehicle navigates a new environment in real time, it has to cope with surroundings that are constantly changing and full of noise and ambiguity.
As mentioned before, most current deep learning techniques take their inspiration from the visual system, and some believe that approaches loosely based on vision may not lead researchers in the right direction. According to Adam Marblestone, a biophysicist at the Massachusetts Institute of Technology, the field's choice of vision as its dominant source of insight was partly incidental, a historical fluke: the visual system was the one scientists understood best, and it had clear, direct applications to image-based machine learning tasks.
Saket Navlakha, a computer scientist at the Salk Institute for Biological Studies in California, points out that deep neural networks cannot process every type of stimulus in the same way. Olfaction and vision, he says, represent two very different types of input signal, so different strategies may be needed for different kinds of data, and there may be many more lessons to learn beyond studying how visual systems work. Navlakha and other AI researchers are now starting to show how the olfactory circuits of insects may hold a few of those lessons.
Olfaction research did not really take off until the 1990s, when the biologists Richard Axel and Linda Buck, both then at Columbia University, discovered the genes for odor receptors. Since then, the olfactory system has become fairly well characterized, and it is easy to study in flies and other insects.
Some scientists argue that olfaction is tractable, for studying general computational challenges, in a way that vision is not. Researchers work on olfaction, Delahunt has said, because it is a finite system that can be characterized relatively completely, so researchers still have a fighting chance. And Michael Schmuker, a computational neuroscientist at the University of Hertfordshire in the United Kingdom, notes that while people can already do amazing things with vision, it should be possible to do the same with olfaction.
Sparse and Random Networks
The olfactory system differs from the visual system on many fronts. Smells are largely unstructured: they have no edges and are not objects that can be grouped in space. They are mixtures of varying composition and concentration, and they are difficult to categorize as similar to or different from one another. It is therefore not always clear which features of an odor deserve attention.
Odors are typically analyzed by a shallow, three-layer network, significantly less complex than the visual cortex. And rather than sampling particular regions in a hierarchy, neurons in olfactory areas randomly and comprehensively sample the entire receptor space.
Charles Stevens, a neurobiologist at the Salk Institute, says that the olfactory system employs what he calls an "anti-map." In a mapped system such as the visual cortex, a neuron's position reveals something about the type of information it carries. In the olfactory cortex, by contrast, information is distributed throughout the system, and reading it requires sampling from some minimum number of neurons. The olfactory cortex achieves this anti-map through what researchers call a sparse representation of the information in a higher-dimensional space.
Take the fruit fly as an example. In the fly's olfactory circuit, 50 projection neurons receive input from odor receptors, each of which is sensitive to different types of molecules. A single odor can excite several different neurons, and each neuron represents a variety of odors. The result is a mess of information, with overlapping representations, encoded at this point in a 50-dimensional space. The circuit then randomly projects that information onto roughly 2,000 so-called Kenyon cells, which encode particular scents. (In mammals, cells in the piriform cortex handle this step instead.) That amounts to a roughly 40-fold expansion in dimension, which makes it much easier to distinguish odors by the patterns of neural responses.
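The expansion step can be sketched in a few lines of NumPy. This is an illustrative toy, not a model of the real circuit: the activity values are random, and the assumption that each Kenyon cell samples about 6 of the 50 projection neurons is an approximation.

```python
import numpy as np

rng = np.random.default_rng(0)

# A hypothetical odor: the activity of the fly's 50 projection neurons
# (dense and overlapping, a "mess" in a 50-dimensional space).
odor = rng.random(50)

# Random connectivity: each of ~2,000 Kenyon cells samples a small random
# subset of the projection neurons (here, about 6 of the 50 on average).
n_kenyon = 2000
connections = rng.random((n_kenyon, 50)) < 6 / 50  # boolean synapse matrix

# Randomly project the 50-dimensional odor into 2,000 dimensions,
# a 40-fold expansion that spreads the representation out.
kenyon_activity = connections @ odor
```

The projection matrix is random and fixed; no learning is involved in this step, which is part of what makes the scheme so cheap.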
Navlakha has explained the benefit with an analogy. Suppose you stuffed a thousand people into a room and tried to organize them by hobby; even in that crowded space, you might find some way to group them. But if you spread the same people out across a football field, you would have all that extra space to play around in and structure your data.
Back to the fly: once its olfactory circuit has structured the data in this way, it has to find a method for identifying distinct odors with non-overlapping sets of neurons. It does so by sparsifying the data. Only about 100 of the 2,000 Kenyon cells, roughly 5 percent, respond strongly to any given smell, which gives each smell a unique, identifiable tag; the circuit silences the less active Kenyon cells.
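That winner-take-all sparsification is easy to sketch as well (again an illustrative toy, using made-up activity values rather than a model of real neurons):

```python
import numpy as np

def sparse_tag(activity, fraction=0.05):
    """Keep only the top ~5% most active Kenyon cells; silence the rest.
    The surviving cells form the odor's unique tag."""
    k = max(1, int(len(activity) * fraction))
    top = np.argsort(activity)[-k:]   # indices of the k most active cells
    tag = np.zeros_like(activity)
    tag[top] = activity[top]          # every other cell is silenced
    return tag

rng = np.random.default_rng(1)
activity = rng.random(2000)           # hypothetical Kenyon-cell activity
tag = sparse_tag(activity)            # ~100 of the 2,000 cells stay active
```

Because only a small, distinctive subset of cells survives, two different odors are unlikely to share a tag, even though their raw representations overlapped heavily.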
Put more simply: while traditional deep neural networks, taking their cues from the visual system, constantly modify the strength of their connections as they learn, the fly's olfactory system does not, generally speaking, seem to train itself by changing the strength of the connections between its projection neurons and Kenyon cells.
Stay tuned: in tomorrow's post, we'll discuss how olfactory systems differ from, and perhaps improve on, traditional neural network systems.