The Exascale Computing Race: How the US May Catch Up to China


Exascale computing or quantum? Which one will rule the future?

Both China and the United States are vying to become the first country in the world to build an elusive exascale computer.

Such a machine would give these countries (and any other country that ends up building one) a path to important, perhaps even critical, advances in several computing and scientific fields.

Last month, the United States had reason to celebrate when the country's Department of Energy finally unveiled the long-awaited Summit.

What is the Summit?

Summit, at the moment (and according to the Department of Energy), is the fastest supercomputer the world has ever seen.

That has sparked a race between the United States and China (among other countries) to pour their resources into the next critical milestone in computing power:

Computing at exascale.

Achieving exascale computing means marshaling the technical skill and resources to build, within the next few years, a machine capable of performing a billion billion calculations each second.

In other words, these machines (when built) would reach exaflop levels: a quintillion calculations per second.

Such a machine would surpass Summit's computing capacity roughly fivefold.

To put it in simpler terms: if every person on the planet performed one calculation every second of every day, it would take them close to four years to match what an exascale machine could do in a flash.
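For the curious, here is a quick back-of-the-envelope check of both comparisons in Python. The world population of roughly 7.6 billion and Summit's peak of roughly 200 petaflops are assumptions made for the sake of the arithmetic:

```python
exaflop = 1e18                 # one exaflop: a quintillion calculations per second
summit_peak = 2e17             # Summit's peak, roughly 200 petaflops (assumed)
print(exaflop / summit_peak)   # -> 5.0, the "roughly fivefold" figure

population = 7.6e9             # assumed world population
seconds_per_year = 60 * 60 * 24 * 365
years = exaflop / population / seconds_per_year
print(round(years, 1))         # -> ~4.2 years of everyone calculating once per second
```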

Needless to say, that amount of power would be phenomenal for a single machine.

And that is exactly the kind of power researchers need to run the complex, massive simulations behind advances in fields such as artificial intelligence, renewable energy, genomics and climate science.

Jack Dongarra, a supercomputing expert at the University of Tennessee, recently described exascale computers as powerful scientific instruments.

He compared them, in complexity and importance, to particle colliders and giant telescopes.

It is also important to note that these super powerful machines would have many applications in industry.

Companies and organizations would have every opportunity to take advantage of exascale machines to accomplish a variety of tasks.

Exascale machines could, for example, greatly speed up product design processes.


These machines could also help researchers and companies identify useful new materials far more quickly.

Of course, there is little doubt that intelligence agencies and the military would also want to get their hands on such machines.

They will be keeping a keen eye on how to use them for national security applications.

This is nothing new though.

As mentioned before, countries such as the United States and China are racing to hit the exascale milestone as quickly as possible.

This is just one part of a larger, burgeoning competition between the United States and China for leadership of the technological world.

Other countries and regions, notably Japan and Europe, are also making their own efforts to build super powerful computers.

At the moment, the Europeans hope to develop such a machine by 2023.

The Japanese, on the other hand, expect to get there a bit earlier, in 2021.

Back in 2015, China officially unveiled an elaborate plan to produce an exascale machine within the next five years.

That means China should hit the exascale milestone some time near the end of 2020.

And China, as several media reports over the past 12 months have revealed, is well on track to achieve that rather ambitious goal.

Still, Depei Qian, a professor at the prestigious Beihang University in Beijing who helps manage China's exascale effort, told MIT Technology Review in an interview that the country could fall behind schedule.

The reason, he said, is that no one yet knows whether the machine can actually be built by the end of 2020.


He added that China may face half a year or a full year of delay before hitting its target.

At the time of writing, the teams building exascale machines in China have mostly worked on three prototype systems.

Of the three prototypes, two use homegrown chips derived from the country's work on its existing supercomputers.

What about the third prototype?

That one makes use of licensed processor technology.

According to Qian, China is still weighing the pros and cons of all three approaches.

Moreover, he mentioned that the call for proposals to design and build a fully functional exascale machine has been pushed back slightly.

There is no doubt that any country would face deep challenges in building a computer as powerful as an exascale machine.

So it is understandable that even a country like China might slip on its timetable.

This, according to some, may open up an important opportunity for the United States.

When China first announced its exascale goal, it forced the United States government not only to take notice of China's technological rise but also to accelerate its own roadmap.

Eventually, the United States committed to delivering its own first exascale computer before the end of 2021.

Originally, the United States had planned to deliver one by the end of 2023.

So China's move effectively pulled the country's original target date forward by a full two years.

The United States has named its exascale machine Aurora.

The teams involved are building the machine for the United States Department of Energy.

More specifically, it is destined for the department's Argonne National Laboratory in Illinois.

The company behind the system meant for Argonne is Cray, a firm that, as should be obvious by now, specializes in building supercomputers.

Some reports have also revealed that Intel is producing exclusive chips for the exascale supercomputer.

 

To boost the performance of supercomputers exponentially, engineers working on exascale projects around the world are using techniques such as parallelism.

What is this technique?

This technique involves packing a machine with thousands of chips that together provide millions of processing units, which the community knows as cores.

There is no easy way to get all of these cores working in harmony to deliver maximum performance.

Researchers and engineers have to work very hard to find the best methods of making a supercomputer leverage all of its cores.

The actual process requires a healthy amount of time-consuming and complex experimentation.
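As a toy illustration of the concept, the sketch below splits one large calculation across all of a machine's cores and then recombines the partial results. Real exascale codes rely on technologies such as MPI and GPU kernels rather than Python, so treat this only as a scale model of the idea:

```python
from multiprocessing import Pool
import os

def partial_sum(chunk):
    # Each core works on its own slice of the problem, independently.
    start, stop = chunk
    return sum(i * i for i in range(start, stop))

if __name__ == "__main__":
    n = 10_000_000
    workers = os.cpu_count()
    step = n // workers
    chunks = [(i * step, (i + 1) * step) for i in range(workers)]
    chunks[-1] = (chunks[-1][0], n)  # make sure the final slice reaches n

    with Pool(workers) as pool:
        # Scatter the slices across the cores, then recombine the answers.
        total = sum(pool.map(partial_sum, chunks))
    print(total)
```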

Then there is the problem of moving massive amounts of data between different processors.

Engineers also need to come up with ways to move data into and out of storage.

Both these processes have the tendency to soak up a ton of energy.

That, in turn, means the cost of operating such a machine over its lifetime can, in reality, exceed the amount of money spent building it.

To keep these costs in check, the United States Department of Energy has capped the total power consumption of these exascale machines at around 40 megawatts.

Even so, the machine's electricity budget would be substantial: by some estimates, as much as $40 million per year just to keep it powered.
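The rough arithmetic behind that estimate, assuming an illustrative electricity price of about $0.10 per kilowatt-hour (actual utility rates vary):

```python
power_mw = 40                       # the Department of Energy's power cap
hours_per_year = 24 * 365
price_per_kwh = 0.10                # assumed electricity price, in dollars

kwh_per_year = power_mw * 1000 * hours_per_year
annual_cost = kwh_per_year * price_per_kwh
print(f"${annual_cost:,.0f} per year")  # -> $35,040,000, close to the $40 million estimate
```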

To lower the machine's total power consumption, engineers are using three-dimensional stacks of memory chips, placed as close as possible to the compute cores.

This, they reckon, reduces the physical distance the data has to travel.

At least, that is what Steve Scott, chief technology officer at Cray, recently explained to the media.

He also said that engineers on the project are increasingly making use of flash memory.

Flash memory has the added advantage of using less power when compared to an alternative system like disk storage.

The main objective is to cut the machine's power needs and make it far cheaper to store the massive amounts of intermediate data that pile up while the machine works through a calculation.

Engineers and researchers also hope that saving data in this way will help the machine recover quickly if a glitch occurs within the system.

According to Scott, design advances like these have greatly helped the team of engineers and researchers behind Aurora.

He also said that the team had confidence in its ability to deliver the world’s fastest supercomputer before the end of 2021.

Of course, it doesn't take a genius to figure out that the United States will follow Aurora with even faster supercomputers.

Back in April of this year, the Department of Energy officially requested proposals, worth up to $1.8 billion, for two or more exascale machines that would come online between 2021 and 2023.

Some media outlets have reported that these machines are expected to cost between $400 million and $600 million apiece.

The rest of the money (around a billion dollars) would go toward upgrading Aurora and/or building an even more powerful follow-on machine.

Both the United States and China are also funding the new software that will have to run on these exascale machines.

China reportedly has teams working around the clock on a total of 15 exascale application areas.

The United States, for its part, has teams hard at work building software in 25 application areas.

These application areas would include scientific fields such as materials science and astrophysics.


According to Katherine Yelick, associate director for computing sciences at the Lawrence Berkeley National Laboratory, the main goal of these teams is to deliver as many significant breakthroughs as possible.

Yelick is also part of the leadership team coordinating the United States' exascale initiative.

One can't deny that a healthy amount of national pride is wrapped up in this ferocious race to build the first exascale machine, but the work of researchers like Yelick is a timely reminder that raw exascale power, on its own, is not the true measure of success in this field.

What will really matter is how these teams harness the power of these machines to solve some (or all) of the major problems facing the world today.

What about quantum computing?

Talking about computing power, one can’t really ignore the role that quantum computing can play in the power dynamics of the technological world.

Google, a company with a major stake in the development of all these technologies, now wants to make it easier for developers to program quantum computers.

The company has recently launched open source software for exactly that purpose.

It hopes developers will use the open source software to experiment with different machines, including the company's own quantum processor.

Much like exascale machines, quantum computers are (hype aside) still in their infancy.

However, that has not deterred the makers of these exotic machines from encouraging software developers to experiment with them.

The real challenge, though, is how to program the circuits on these quantum machines.

Quantum computers are different from digital computers in that, instead of conventional digital bits, which can only represent either 0 or 1, they make use of qubits.

Or quantum bits.

These quantum bits have special properties that allow them to exist in both states simultaneously.

How does that happen?

Well, without going into too many details about exactly how that happens, qubits take advantage of a phenomenon that researchers have come to call superposition.
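For readers who want a slightly more concrete picture, here is a minimal sketch of superposition using plain NumPy: a qubit's state can be written as a two-component complex vector, and the Hadamard gate turns a definite 0 into an equal mix of 0 and 1.

```python
import numpy as np

# A qubit state is a vector of two complex amplitudes: alpha|0> + beta|1>.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # the Hadamard gate

state = np.array([1, 0], dtype=complex)  # the qubit starts firmly in |0>
state = H @ state                        # now (|0> + |1>) / sqrt(2): superposition

# Measurement probabilities are the squared magnitudes of the amplitudes.
print(np.abs(state) ** 2)  # -> [0.5 0.5]: an equal chance of reading 0 or 1
```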

The other weird thing about qubits is that they can influence nearby qubits even without any physical connection between them.

Additionally, qubits hold their special, delicate quantum state for no longer than the blink of an eye.

Needless to say, the special properties and states of qubits force software developers to exploit them using completely new methods.

In other words, the old software won’t work with quantum computers.

Moreover, the industry has only a small, exclusive band of developers with the highly specialized knowledge needed to write programs and applications for quantum computers.

This is the situation Google wants to change.

The company wants to help developers code programs for quantum computers.

More concretely, the company has released Cirq.
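To give a flavor of what programming with Cirq looks like, here is a minimal sketch (assuming Cirq is installed, for example via pip install cirq): one qubit is pushed into superposition with a Hadamard gate and then measured 100 times.

```python
import cirq

# Build a one-qubit circuit: a Hadamard gate, then a measurement.
qubit = cirq.GridQubit(0, 0)
circuit = cirq.Circuit(
    cirq.H(qubit),
    cirq.measure(qubit, key='m'),
)

# Run the circuit on Cirq's built-in simulator.
result = cirq.Simulator().run(circuit, repetitions=100)
print(result.histogram(key='m'))  # roughly half 0s and half 1s
```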

To read more about what Cirq is and what it can do for developers, click here.

 
