Dactyl learned complex physical tasks by combining reinforcement-learning algorithms with extensive practice in a virtual environment.
Researchers in the field of artificial intelligence have recently demonstrated that self-teaching algorithms can give a robot hand some truly remarkable new dexterity.
Their creation taught itself techniques for manipulating a cube with uncanny skill.
The AI-driven hand acquired that skill through an enormous amount of practice: according to the researchers, the equivalent of a full century inside their computer simulation.
Of course, in real time that century lasted only a few days.
The researchers caution, however, that the robotic hand is still nowhere near as agile as a human one.
It is also still on the clumsy side, which is why warehouses and factories can't deploy it just yet.
Even so, the research clearly shows that machine learning has real potential to unlock new and useful robotic capabilities in the near future.
It also suggests that robots may one day teach themselves ever more skills while training inside virtual worlds, which, some believe, could greatly speed up the process of training and programming them.
Researchers have dubbed the system Dactyl.
It was developed by OpenAI, a nonprofit research lab based in Silicon Valley.
The system combines an off-the-shelf robot hand made by Shadow, a company based in the United Kingdom, with a simple camera and a learning algorithm.
Moreover, the same algorithm had already done the hard work of mastering Dota 2, a sprawling multiplayer video game, using the same self-taught approach.
More precisely, the algorithm behind the robotic hand relies on a relatively recent machine-learning technique called reinforcement learning.
Researchers gave Dactyl a specific task: maneuver a cube so that a particular face ends up on top.
They then left the algorithm on its own to figure out, through trial and error, which movements would produce the desired result.
OpenAI has uploaded several videos of the hand in action, and they clearly show it rotating the cube with impressive agility.
OpenAI's hand automatically figured out many of the grips that human hands naturally resort to when carrying out different tasks.
With that said, the researchers have also shown how far AI still has to go before it can make a significant impact on the physical world.
Even after the equivalent of a hundred years of training in a virtual world, the hand managed to manipulate the cube in the required manner only 13 times out of 50 attempts.
Needless to say, a human child would do far better.
According to Rodney Brooks, a professor emeritus at the Massachusetts Institute of Technology and the founder of Rethink Robotics, a startup working on more intelligent industrial robots, such a robotic hand won't be able to slot smoothly into an industrial workflow any time soon.
Recently Brooks told reporters that the fact robotic arms can't yet hold down a job on a manufacturing line isn't a bad thing; in his view, it is fine.
This kind of research, he added, is exactly what is needed if we want to see robotic arms helping out in real-world processes in the future.
As mentioned before, these robotic systems take advantage of a machine learning technique called reinforcement learning.
But what is reinforcement learning?
In short, this type of learning takes its inspiration from animals, and more specifically from the way animals, in the wild and otherwise, naturally learn through positive feedback.
Reinforcement learning isn't completely new, though; researchers first proposed the technique a few decades ago.
However, it has only proven itself practical in the last few years, thanks to advances in artificial neural networks.
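The trial-and-error loop at the heart of reinforcement learning can be sketched in a few lines. The toy example below is purely illustrative, not OpenAI's actual setup: an agent on a six-cell track learns, from nothing but a reward signal, that stepping right leads to the goal.

```python
import random

# Toy reinforcement learning: an agent on a 1-D track of 6 cells learns,
# by trial and error, to walk right toward a reward at the last cell.
N_STATES = 6            # cells 0..5; the reward sits at cell 5
ACTIONS = [-1, +1]      # step left or step right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

# Q[state][action_index]: learned value of taking an action in a state
Q = [[0.0, 0.0] for _ in range(N_STATES)]

random.seed(0)
for episode in range(200):
    state = 0
    while state != N_STATES - 1:
        # Explore occasionally; otherwise exploit the best known action
        if random.random() < EPSILON:
            a = random.randrange(2)
        else:
            a = 0 if Q[state][0] > Q[state][1] else 1
        next_state = min(max(state + ACTIONS[a], 0), N_STATES - 1)
        reward = 1.0 if next_state == N_STATES - 1 else 0.0
        # Q-learning update: nudge the estimate toward
        # (reward now) + (discounted best value from the next state)
        Q[state][a] += ALPHA * (reward + GAMMA * max(Q[next_state]) - Q[state][a])
        state = next_state

# After training, the greedy policy in every cell is "step right"
policy = ["right" if Q[s][1] >= Q[s][0] else "left" for s in range(N_STATES - 1)]
print(policy)
```

Dactyl's task is vastly harder, but the principle is the same: no one tells the agent which moves are correct; it discovers them because rewarded behavior gets reinforced.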
In fact, back in 2017, MIT Technology Review named reinforcement learning one of its 10 Breakthrough Technologies for that year.
Alphabet, Google's parent company, has also put resources into its subsidiary DeepMind, which used reinforcement learning to create a program called AlphaGo.
As many of our readers will already know, AlphaGo is the program that taught itself to play the subtle and fiendishly complex board game Go, and did so with superhuman skill.
Many other robotics researchers have tested the reinforcement-learning approach for quite some time.
However, most have found themselves hamstrung by the difficulty of mimicking the unpredictability and complexity of the real world.
Researchers at OpenAI deployed some interesting methods to get around these problems.
Specifically, they introduced many random variations into the virtual world used to train the hand.
This gave the hand the chance to learn to account for nuisances such as noise in its own hardware, friction, and the moments when the cube was partly hidden from its view.
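This randomization idea, often called domain randomization, can be sketched concretely. The snippet below is a hedged illustration, not OpenAI's code: before each simulated episode it re-samples physical parameters, so anything the policy learns has to hold across the whole range rather than in one exact world. All parameter names and ranges are made up for the example.

```python
import random

# Toy sketch of domain randomization: before each simulated episode,
# re-sample the physical quantities the real world doesn't fix precisely.
def sample_sim_params(rng):
    return {
        "friction":     rng.uniform(0.5, 1.5),    # surface friction multiplier
        "cube_mass":    rng.uniform(0.03, 0.09),  # kg, varied around nominal
        "sensor_noise": rng.uniform(0.0, 0.02),   # noise added to observations
        "action_delay": rng.randint(0, 3),        # control latency in timesteps
    }

def run_episode(params):
    # Stand-in for a full physics rollout; here we just record the
    # parameters the episode was trained under.
    return params

rng = random.Random(42)
episodes = [run_episode(sample_sim_params(rng)) for _ in range(1000)]

# Across 1000 episodes the policy sees nearly the entire friction range,
# so it cannot overfit to any single simulated world.
frictions = [e["friction"] for e in episodes]
print(min(frictions), max(frictions))
```

A policy that survives training under all these variations is far more likely to keep working when it meets the one set of parameters the real robot actually has.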
Many engineers worked on the Dactyl system.
One of them, Alex Ray, says researchers could improve Dactyl simply by giving the algorithm more processing power and introducing more randomizations; he does not think AI researchers have hit the limit just yet.
Ray also pointed out that OpenAI currently has no plans to commercialize this robotic technology.
For now, the team is focused on developing the world's most powerful and efficient generalized learning approaches.
Such generalized learning approaches have so far proven very hard to develop, according to Dmitry Berenson, a roboticist at the University of Michigan who specializes in machine manipulation.
Berenson said that researchers in the field do not yet have a clear idea of how far the latest machine-learning techniques will take them.
He noted that coming up with the right network for a given task still involves a good amount of human effort.
However, he believes that techniques such as simulated learning could prove very useful: if researchers can reliably cross that significant reality gap, learning would become dramatically easier.
Having AI-driven robotic arms is one thing, but protecting them from hackers is quite another.
It is safe to assume that for robots to perform ordinary tasks well in the future, they will need some kind of internet connection.
And this, some believe, is where the problem lies.
As the number of internet-connected robots increases, they will surely become lucrative targets for all kinds of mischief, including cybercrime.
Hackers could compromise hordes of research robots for no other reason than fun, and the threat of sabotage will always be there.
Research laboratories all over the world host numerous experimental robots, and the possibility that these machines are wide open to all types of hackers is frightening.
What is ROS?
ROS is one of the most popular open-source software platforms, often described as an operating system for robots, and many practitioners in the field run it on their research machines.
Stefanie Tellex, a roboticist at Brown University, and her team discovered more than a hundred ROS machines with vulnerabilities serious enough to allow hackers not only to access these research robots but also to manipulate them over the internet.
According to Tellex, while a hundred or so isn't a huge number, the team's results should serve as a warning to the entire research community.
Tellex also noted that such robots could present hackers with some really juicy targets for online mischief, for a simple reason: taking complete control of a real, live research robot would strike many hackers as a fun and cool thing to do.
Nor did she rule out state-sponsored hackers and cybercriminals going after these machines.
By hacking research robots, attackers could steal loads of important data, cause various kinds of accidents, and disrupt scientific research.
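For the curious: a ROS 1 master listens, by default, on TCP port 11311 and answers over XML-RPC with no authentication. The sketch below shows roughly what the first step of an internet survey for exposed machines might look like, simply checking whether that port is reachable. This is an assumption about method, not a description of the Brown team's actual tooling, and a real audit would go on to speak the master's XML-RPC protocol to confirm what is listening.

```python
import socket

ROS_MASTER_PORT = 11311  # default port for the ROS 1 master's XML-RPC API

def ros_master_port_open(host, port=ROS_MASTER_PORT, timeout=1.0):
    """Return True if a TCP connection to host:port succeeds.

    An open port is only a hint: confirming it is really a ROS master
    would require an XML-RPC call such as getSystemState.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example: check a placeholder host (not a real scan target)
for host in ["127.0.0.1"]:
    print(host, ros_master_port_open(host, timeout=0.5))
```

The lesson for lab operators is the flip side of the same check: if this function returns True from outside your network, anyone on the internet can talk to your robot.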
Perhaps this is a good time to mention that the issues Tellex has raised are not really flaws in the design of platforms such as ROS, which was built for trusted lab networks rather than the open internet.
Tellex simply wants users to expect such problems and take all the security precautions they can to secure their systems.
According to Tellex, if users and researchers do not tread carefully, the security situation with these research robots could go from bad to worse in the very near future.
As robotic systems become more widespread and advanced around the globe, she added, the research community must make it a priority to field them in a safe and secure way.
To prove the point, researchers at Brown University tried to take control of one of these research robots, a machine at the University of Washington, after first getting permission from its owners.
After some work, they demonstrated that they could read the robot's sensors and even move it around.
In fact, the researchers found a vulnerable machine right in the middle of their own lab.
They had left it online so that a group of researchers at MIT could operate it remotely using nothing but virtual reality.
However, Tellex admitted, they should have taken the robot offline once the MIT group was finished with it.
It is common knowledge that university research labs have used robots in their work for decades now.
But these machines are becoming more and more complex and sophisticated.
Moreover, researchers keep coming up with new reasons to connect these robotic systems to the internet.
One is teleoperation.
Another is letting one robot share what it has learned from an experiment with another robot, an approach researchers call cloud robotics.
Using such techniques, robots effectively gain the ability to teach each other.
As mentioned at the top, the majority of these machines run ROS, which stands for Robot Operating System.
ROS has proven a boon for robotics researchers all over the world, especially over the last decade or so.
ROS gives researchers a standard platform on which they can program different hardware, and around it has grown a large array of packages.
What do these packages do?
They help robots gain new capabilities, providing algorithms and libraries for tasks such as navigation, mapping, and perception.
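Under the hood, ROS organizes all of this around a publish/subscribe model: nodes publish messages on named topics, and other nodes subscribe to the topics they care about. The toy Python class below mimics that idea in a single process; it illustrates the pattern only and is not the real rospy API.

```python
from collections import defaultdict

# Minimal in-process sketch of ROS-style publish/subscribe (not the real
# rospy API): nodes publish messages on named topics, and every
# subscriber to a topic gets a callback for each message.
class TopicBus:
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        self._subscribers[topic].append(callback)

    def publish(self, topic, message):
        for callback in self._subscribers[topic]:
            callback(message)

bus = TopicBus()
seen = []

# A "perception" node subscribes to cube poses...
bus.subscribe("/cube_pose", seen.append)

# ...and a "sensor" node publishes readings on the same topic.
bus.publish("/cube_pose", {"x": 0.1, "y": 0.20, "z": 0.3})
bus.publish("/cube_pose", {"x": 0.1, "y": 0.25, "z": 0.3})

print(len(seen))  # → 2
```

In real ROS the bus spans machines over the network, which is exactly why an exposed system is so dangerous: anyone who can reach it can publish commands or subscribe to sensor data.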
Many startups working on novel and useful robotic systems have also adopted ROS after seeing the possibilities it can unlock.
Some of the best-known categories of such systems include:
- Delivery bots
- Warehouse helpers
- Self-driving cars
While it is good that different industries have finally started to realize the impact robots can have on their business models, industrial engineers, for the most part, have never had to take the network security of their machines very seriously.
The fact that the coming generation of robots will be internet-connected makes this hard to ignore: connectivity inevitably creates new attack vectors for hackers and other cybercriminals.
Brian Gerkey, Chief Executive Officer of Open Robotics, the independent nonprofit foundation behind ROS, recently said that when his organization started work on the platform over a decade ago, it wanted the software to offer researchers great flexibility and, at the same time, ease of use.
He also said he agreed with the paper's authors that users and researchers operating robots with ROS should take all the necessary precautions to secure the software, and the machines running it, directly at the network level.
Gerkey also noted that his organization has started work on the next version of its software, ROS 2, which will offer users and researchers more security.
Recently, the foundation also announced a brand-new, security-focused edition of the older system, which it calls SROS, for Secure ROS.
Of course, the role of companies in ensuring that their machines and other technological products are secured against such attacks cannot be overstated.
Governments may come into play as well, making sure companies stick to certain laws and regulations.
For a sense of how that can work, consider Gary Reback, the lawyer who made his name in Silicon Valley by pushing the United States Department of Justice to pursue Microsoft.
The landmark case alleged that Microsoft had abused its dominant position in the operating-system market, using Windows to favor its own Internet Explorer web browser over its closest rival, the Netscape browser.
The case lasted for years and finally ended in 2001, effectively in a stalemate.
However, because of the fight it had to put up, Microsoft trod much more carefully after that.