Researchers are combining the latest advances in machine learning and artificial intelligence with robotics to transform warehousing and manufacturing. Doing so could take artificial intelligence to a crucial next level.
More and more modern robots are competently performing all kinds of repetitive, Sisyphean tasks.
Researchers have designed a robot arm that can hover over a small pile of cooked chicken pieces. The arm doesn’t just hover: it dips down, retrieves a single piece from the pile, then swings around and gently places the picked chunk of chicken into a bento box. And the bento box isn’t stationary; it sits on a slowly moving conveyor belt.
Osaro, a company based in San Francisco, controls the robot arm with its proprietary software.
According to the company, the arm is smarter than any robot the world has seen before. Osaro’s software has taught it to pick up a piece of chicken and place it into a bento box within five seconds, and the company expects to ship its robots to food factories in Japan within the next 12 months.
So should people worry about the long-dreaded robot uprising? To answer that question, one need only step inside a modern factory and notice that today’s robots are in no position to start an uprising, let alone carry one out on a scale that could enslave mankind.
Most modern robots are powerful and precise. However, they can do nothing unless and until a programmer meticulously programs them for a given task.
In fact, an ordinary robotic arm lacks the sensors it would need to pick up an object that has been moved even a single inch. Moreover, robot arms are hopeless at grabbing objects they are not familiar with. In short, even a modern robot arm cannot tell the difference between a lead cube and a marshmallow.
In that context, it is truly remarkable that Osaro has built a robot arm that can not only pick up irregularly shaped pieces of chicken but also pick them from a haphazard pile. Some would even call it a genius robotic arm.
Even though artificial intelligence is proliferating, industrial robots have mostly stayed away from it. Most have yet to get their first touch of AI and the improvements it could bring to their basic functions.
Looking back at the progress of the last half decade or so, AI software has become adept at tasks such as winning board games and identifying images. It has improved most of all at responding to a person’s voice without any human intervention. AI software can now even teach itself new abilities, provided researchers give it enough time to practice.
But that’s software.
What about hardware?
While AI software has been stealing all the headlines, its hardware cousins, the robots, have had to content themselves with the daily struggle of picking up an apple or opening a door.
Osaro wants to change that.
As mentioned before, Osaro’s AI software controls the robot arm so that it can identify the different objects placed in front of it. The software lets the arm study how an object would behave if poked, grasped or pushed, and after weighing those questions the arm decides how to handle that particular object.
Like almost all other AI algorithms, Osaro’s software learns a great deal from experience.
The arm uses nothing but an off-the-shelf camera and machine-learning software, accompanied by a moderately powerful computer. Put those together and you have a robot arm that can figure out increasingly effective ways to grasp different things.
Given enough trial and error, the robot can learn to grasp nearly any object that is thrown at it.
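Osaro has not published its algorithms, so the following is only a toy sketch of the trial-and-error idea described above, not the company's actual software. Every name in it (the GraspLearner class, the candidate angles, the simulated grasp outcome) is a hypothetical illustration: the arm tries grasps, records which ones succeed, and gradually favors what works.

```python
import random

class GraspLearner:
    """Toy epsilon-greedy learner: try a grasp angle, record whether
    the grasp succeeded, and gradually favor the angles that work."""

    def __init__(self, angles, epsilon=0.1, seed=0):
        self.angles = list(angles)
        self.epsilon = epsilon
        self.rng = random.Random(seed)
        self.successes = {a: 0 for a in self.angles}
        self.attempts = {a: 0 for a in self.angles}

    def success_rate(self, angle):
        if self.attempts[angle] == 0:
            return 0.0
        return self.successes[angle] / self.attempts[angle]

    def choose_angle(self):
        # Mostly exploit the best-known angle, occasionally explore.
        if self.rng.random() < self.epsilon:
            return self.rng.choice(self.angles)
        return max(self.angles, key=self.success_rate)

    def record(self, angle, succeeded):
        self.attempts[angle] += 1
        if succeeded:
            self.successes[angle] += 1

# Hypothetical object that is easiest to grasp at a 45-degree approach.
def simulated_grasp(angle, rng):
    p_success = max(0.0, 1.0 - abs(angle - 45) / 90)
    return rng.random() < p_success

world = random.Random(1)
learner = GraspLearner(angles=[0, 45, 90, 135], epsilon=0.2)
for _ in range(500):
    angle = learner.choose_angle()
    learner.record(angle, simulated_grasp(angle, world))
best = max(learner.angles, key=learner.success_rate)  # settles on 45
```

Real systems learn from camera pixels rather than a handful of angles, but the loop is the same: act, observe success or failure, and shift future behavior toward what worked.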
There is no doubt that workplace robots equipped with AI will eventually let automation creep into even more areas of human work. In theory, these robot arms could replace workers toiling in any factory that needs to sort, pack or unpack different products. And once researchers give the arms the ability to navigate the chaos of a factory floor, the robots will displace humans from even more jobs, especially in fields such as manufacturing.
As alluded to before, none of this will start a robot uprising. But that doesn’t mean such robot arms won’t bring about a revolution.
According to Willy Shih, who studies manufacturing trends at Harvard Business School, the industry is seeing a good deal of experimentation. People are using these robots and AI software to do many different things, Shih said, and there is massive potential for automation wherever repetitive tasks are being done.
For some, such robot arms are a revolution not only in hardware but also in AI software. Whenever engineers put AI software in a physical body, they give it speech, navigation and visual recognition that it can exercise in the real, physical world.
Artificial intelligence gets smarter as it feeds on more and more data. For the robot arm, that means the more objects it grasps and places in position, the more adept its software becomes at sensing the physical world and learning how it works.
Pieter Abbeel, a professor at the University of California, Berkeley, and founder of the startup originally called Embodied Intelligence (now known as covariant.ai), recently said that such robot arms could lead to advances that would not have been possible without all the available data. For the uninitiated, covariant.ai applies virtual reality and machine learning to robotics used in manufacturing processes.
The Birth of Smart Robots
The scientific community had long anticipated the current era. As early as 1954, the inventor George C. Devol patented a design for a programmable mechanical arm. Seven years later, in 1961, the manufacturing entrepreneur Joseph Engelberger turned Devol’s design into a product known as Unimate.
Unimate was an awkward, unwieldy machine, first used by General Motors on its New Jersey assembly line.
Even then, researchers and engineers had a tendency to both miscalculate and romanticize these simple machines’ actual intelligence. Engelberger called Unimate a “robot” in honor of the androids dreamed up by the science-fiction writer Isaac Asimov. Of course, his machines were anything but androids. They were crude mechanical devices that engineers directed, via rudimentary software, to perform simple, specific tasks.
It may be hard to believe, but even the most advanced recent robots have not risen above the level of mechanical dunces. They still require a lot of data and a lot of programming for even the simplest of tasks.
The field of artificial intelligence, however, went down a different path. Back in the 1950s, AI researchers set out to use computing tools to mimic human-like reason and logic, and a few of them sought to give such computer systems physical bodies.
In the late 1940s, the neuroscientist William Grey Walter, working in Bristol, United Kingdom, had already built two small “autonomous” machines, which he dubbed Elmer and Elsie. The devices looked like turtles and were equipped with simple, neurologically inspired circuits that enabled them to follow a light source on their own, without any human input.
Why did Walter build such machines? He wanted to show the world how a few neurons and the connections between them could produce new and, relatively speaking, complex behavior.
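Walter’s tortoises were analog electronics, but the behavior he demonstrated, light-following emerging from a couple of neuron-like elements, can be sketched in a few lines of simulation. The sensor geometry, gain and light model below are invented for illustration and are not Walter’s actual circuit:

```python
import math

def light_follower_step(x, y, heading, light_x, light_y, speed=1.0, gain=2.0):
    """One control step of a toy Walter-style tortoise: two simulated
    photocells mounted 30 degrees either side of the heading, and a
    circuit-like rule that steers toward the brighter side."""
    def brightness(offset):
        # Brightness seen by a sensor mounted one unit ahead of the body.
        sx = x + math.cos(heading + offset)
        sy = y + math.sin(heading + offset)
        d2 = (light_x - sx) ** 2 + (light_y - sy) ** 2
        return 1.0 / (1.0 + d2)  # brighter when the sensor is closer

    left = brightness(math.radians(30))
    right = brightness(math.radians(-30))
    # Turn toward the brighter sensor; no planner, no map.
    heading += gain * (left - right) / (left + right)
    x += speed * math.cos(heading)
    y += speed * math.sin(heading)
    return x, y, heading

# Start 10 units from a light at (10, 0), pointing half a radian off-target.
x, y, heading = 0.0, 0.0, 0.5
start_dist = math.hypot(10.0 - x, 0.0 - y)
for _ in range(8):
    x, y, heading = light_follower_step(x, y, heading, 10.0, 0.0)
end_dist = math.hypot(10.0 - x, 0.0 - y)  # much closer than start_dist
```

The point, then as now, is that visibly purposeful behavior can fall out of a very small amount of machinery coupled to the physical world.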
Of course, the task of first understanding and then re-creating human intelligence proved quite a challenge. A byzantine one. As a result, the field of artificial intelligence endured a long period in which no breakthroughs were made.
Meanwhile, researchers found it intractably complex to program physical machines to carry out useful tasks in a real world that was, to say the least, messy. Perhaps that is why researchers have long treated artificial intelligence and robotics as tablemates in their research labs.
For decades, researchers have worked to apply machine-learning techniques to industrial robots so that they can perform more complex tasks without human intervention. But those efforts still have not taken off in the relevant industries.
With all of that said, about five years ago artificial intelligence researchers figured out how to use a very old AI trick, neural networks, to achieve some incredibly powerful results.
Neural networks are algorithms that approximate the way the brain’s neurons and synapses learn from input. As it turns out, they are direct descendants of the very components that gave Elmer and Elsie their light-seeking abilities.
Researchers also discovered that big neural networks, or deep neural networks, could do some remarkable things when fed mammoth quantities of labeled data. This led to the understanding that deep neural networks could recognize an object shown in an image with close to human accuracy.
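A deep network is far more than a few lines of code, but its basic ingredient, a unit that adjusts its weights to fit labeled examples, fits in a short sketch. Below is a single logistic neuron trained by gradient descent on a toy labeled dataset (logical OR); it is a hypothetical miniature of the image-recognition networks described above, which stack millions of such units into many layers:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(examples, epochs=2000, lr=0.5):
    """examples: list of ((x1, x2), label) pairs with labels 0 or 1.
    Repeatedly nudges the weights to reduce the prediction error."""
    w1 = w2 = b = 0.0
    for _ in range(epochs):
        for (x1, x2), label in examples:
            pred = sigmoid(w1 * x1 + w2 * x2 + b)
            err = pred - label  # gradient of the log-loss w.r.t. the input sum
            w1 -= lr * err * x1
            w2 -= lr * err * x2
            b -= lr * err
    return w1, w2, b

# Labeled data for logical OR: the "mammoth quantities" in miniature.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w1, w2, b = train(data)
predict = lambda x1, x2: round(sigmoid(w1 * x1 + w2 * x2 + b))
```

After training, `predict` reproduces the labels it was shown. Scale the same idea up to millions of weights, many layers and millions of labeled photos, and you get the image recognizers that upended the field.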
Such efforts turned the field of artificial intelligence upside down. The technique became commonly known as deep learning, and researchers all over the world now use it for tasks that involve perception.
Some of the tasks that have stolen the spotlight include:
- Speech transcription
- Face recognition
- Training self-driving vehicles not only to drive but also to identify signposts and pedestrians
In short, techniques such as deep learning have made it realistic to imagine a robot that can recognize a user’s face, speak intelligently to the user and then navigate safely to the kitchen to fetch a bottle of water from the fridge.
Some researchers believe that one of the first practical skills artificial intelligence will give machines is better dexterity.
Amazon, the technology giant, has run a robot-picking contest for the past couple of years. The contest challenges artificial intelligence researchers to design a robot that can pick up many different kinds of products as fast as currently possible. Needless to say, every team uses machine-learning techniques to teach its robot to meet the competition’s requirements.
The more time the teams’ robots spend on the task, the better they get at it. Amazon is clearly eyeing the automation of its own warehouse work, such as picking and packing the millions of items its fulfillment centers have to deal with.
Ken Goldberg, another professor at the University of California, Berkeley, and a colleague of Pieter Abbeel, recently said that he started working on robotic grasping about 35 years ago and made extremely little progress. Advances in artificial intelligence, he says, have changed that, and he now believes the robotics community is poised to make a giant leap forward.
When Artificial Intelligence Gets a Physical Body
In New York’s Noho neighborhood, Yann LeCun, one of the world’s foremost experts in artificial intelligence, is hard at work searching for AI’s next breakthrough. LeCun believes robots may be a significant piece of that elusive puzzle. Few have played a bigger role in deep learning than he has.
Back in the 1980s, LeCun persevered with neural-network techniques when other researchers dismissed them as impractical. He now serves as chief AI scientist at Facebook, after many years as the social media giant’s head of AI research, and he led the development of deep-learning algorithms that help Facebook identify users in almost any photo a person posts.
However, LeCun wants artificial intelligence to do more than hear and see. He wants it to reason and then take appropriate action, and for that, he argues, it will need a physical presence. Without one, AI won’t be able to do much more than it does today.
His point is that machine intelligence has not reached the level of human intelligence because human intelligence has an advantage: it can interact with the physical world in many different ways. Human babies learn about the physical world by playing with things, and grasping machines embedded with AI could do much the same.
LeCun recently said that much of the most exciting artificial intelligence research now involves robots in some way.
If researchers pull off a truly remarkable kind of machine evolution, it might mirror the process that gave rise to biological intelligence. Just as humans slowly improved their grasp of the world and built advanced tools, social organization and complex language, something similar could happen in artificial intelligence.
Until now, artificial intelligence has largely existed inside computers, interacting only with still images and video games, which are crude simulations of the physical realm. Some believe that once AI programs can perceive and interact with the physical world, they will learn much more about it, and eventually evolve to become better at communicating and reasoning.
According to Abbeel, anyone who solved manipulation in its fullest would probably have built a machine pretty close to complete human-level intelligence.