Creating detailed, lifelike digital worlds usually requires a great deal of patience, creativity, and skill. Now, however, thanks to AI, much of that work can be handed off to artificial intelligence algorithms.
Building a virtual environment that looks convincingly realistic takes considerable skill and time. Graphics chips enter the equation when developers hand-craft the details, rendering the 3D shapes and the appropriate lighting.
To take just one example, Rockstar's blockbuster video game Red Dead Redemption 2 took a team of roughly 1,000 developers more than eight years to create, and those developers occasionally endured 100-hour weeks.
Some believe that kind of extreme workload may become a thing of the past in the not-so-distant future.
A powerful new artificial intelligence algorithm, developed by the leading graphics chipmaker Nvidia, can dream up entire scenes with photorealistic detail on the fly.
The software could not only ease developers' workloads but also help auto-generate virtual environments for teaching robots and self-driving cars about the physical world around them.
It could even have applications in virtual reality.
Bryan Catanzaro, vice president of applied deep learning at Nvidia, recently said that developers could create entirely new sketches that no one has ever seen before and render them quickly. He added that Nvidia teaches the model using nothing but real-life video, so that it learns how to draw such scenes.
Researchers at Nvidia first used a fairly standard machine-learning approach to identify the various objects in a given video scene.
The team then used what is known in the industry as a generative adversarial network, or GAN, to train a computer to fill in photorealistic 3D imagery.
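To make the adversarial idea concrete, here is a toy, hand-rolled sketch of GAN training in one dimension. It is not Nvidia's model: the logistic discriminator, the data mean of 4.0, and the learning rates are all illustrative assumptions. The generator learns an offset that shifts noise toward the real data, while the discriminator tries to tell real samples from generated ones.

```python
import math
import random

random.seed(0)

def sigmoid(v):
    return 1.0 / (1.0 + math.exp(-v))

theta = 0.0          # generator parameter: fake sample = theta + noise
a, b = 0.0, 0.0      # discriminator parameters: D(x) = sigmoid(a*x + b)
lr = 0.05
batch = 64

for step in range(3000):
    real = [4.0 + random.gauss(0, 1) for _ in range(batch)]
    fake = [theta + random.gauss(0, 1) for _ in range(batch)]

    # Discriminator ascent on log D(real) + log(1 - D(fake))
    grad_a = grad_b = 0.0
    for x in real:
        d = sigmoid(a * x + b)
        grad_a += (1 - d) * x
        grad_b += (1 - d)
    for x in fake:
        d = sigmoid(a * x + b)
        grad_a -= d * x
        grad_b -= d
    a += lr * grad_a / batch
    b += lr * grad_b / batch

    # Generator ascent on log D(fake) (the non-saturating GAN loss)
    grad_theta = 0.0
    for x in fake:
        d = sigmoid(a * x + b)
        grad_theta += (1 - d) * a   # d/dtheta of log D(theta + z)
    theta += lr * grad_theta / batch

print(round(theta, 2))  # learned offset; it drifts toward the data mean
```

The same two-player dynamic, scaled up to deep networks over video frames, is what lets a GAN learn to paint in photorealistic detail.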
From there, the team can feed the system an outline of a scene showing where the different objects sit. With that information, the algorithm fills in slightly shimmering but stunning details.
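A toy sketch can illustrate the layout-to-image idea, though it is nothing like Nvidia's trained generator: here a hard-coded palette stands in for the mapping the real system learns from video, and per-pixel noise stands in for generated texture. The class labels (sky, road, car) are made up for the example.

```python
import random

random.seed(1)

# Hypothetical class palette: label id -> base RGB color.
# The real system learns this mapping; here it is hard-coded.
PALETTE = {0: (135, 206, 235),   # sky
           1: (90, 90, 90),      # road
           2: (200, 30, 30)}     # car

def fill_in(layout):
    """Turn a 2-D grid of class labels into an RGB 'image' by painting each
    cell with its class color plus noise (a stand-in for the photorealistic
    texture a trained GAN generator would produce)."""
    image = []
    for row in layout:
        image.append([
            tuple(min(255, max(0, c + random.randint(-20, 20)))
                  for c in PALETTE[label])
            for label in row
        ])
    return image

# A tiny scene outline: sky on top, road below, a car on the road.
layout = [[0, 0, 0, 0],
          [0, 0, 0, 0],
          [1, 2, 2, 1],
          [1, 1, 1, 1]]

image = fill_in(layout)
print(len(image), len(image[0]))  # prints "4 4"
```

The input is exactly the kind of object-position outline described above; the output has the shape of an image, with the hard part (realistic texture) reduced here to random jitter.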
The effect is quite impressive, although some of the drawn objects occasionally look twisted or warped.
Catanzaro noted that classical computer graphics renders a scene by modeling how light interacts with the objects in it. That prompted the researchers to ask what artificial intelligence could do to change the rendering process.
Catanzaro believes such an approach could lower the high barrier to entry in game design. The technique also has uses well beyond rendering entire scenes. According to Catanzaro, developers could use it to add a real person to a video game after feeding the system just a couple of minutes of footage of that person.
Catanzaro also suggested that developers could use the same approach to help render photorealistic settings for VR applications.
They could also use the technique to generate better synthetic training data for robots and autonomous vehicles. As Catanzaro points out, it is not practical to collect real training data for every situation such a system might encounter, which is where synthetic data comes in.
The researchers announced the new algorithm a few days ago at NeurIPS, one of the biggest AI conferences in the world, held this year in Montreal.
Michiel van de Panne, a University of British Columbia professor who specializes in computer graphics and machine learning, recently called the Nvidia research team's work both interesting and impressive.
He noted that most previous work with generative adversarial networks synthesized simpler elements, such as individual images or character motions. The new Nvidia work, van de Panne said, points toward a quite different way of creating animated imagery, with a different set of capabilities: simply put, Nvidia's approach is both more interactive and less computationally intensive.
The new Nvidia algorithm is only the latest in a dizzying procession of advances involving generative adversarial networks. GANs came into existence just a few years ago, when a Google researcher hit upon the basic idea during an argument with colleagues at a party. Since then, GANs have emerged as a remarkable tool for researchers, developers, and engineers to synthesize eerily strange and, lately, strikingly realistic audio and imagery.
The trend holds plenty of promise to revolutionize special effects and computer graphics, and techniques such as the one Nvidia unveiled at NeurIPS might help artists and audio makers imagine new ideas. At the same time, such techniques could undermine public trust in audio and video evidence.
Indeed, Catanzaro has admitted that the technique holds plenty of potential for misuse, recently describing Nvidia's work as a technology with wide applications that people could use to do many things.