Predicting The Future: Can Machine Learning Help? Yes.


Machine learning can help humans predict the long-term future.

In a series of computer experiments, researchers have shown that artificial intelligence algorithms can help predict the future of many chaotic systems.

In other words, machine learning techniques have a remarkable ability to help researchers predict chaos.

More than half a century has passed since the pioneers of chaos theory realized that long-term predictions were nearly impossible because of what they called the butterfly effect.

What the butterfly effect says is fairly simple to understand.

The butterfly effect says that even the most minute perturbation to a complex system can touch off a large concatenation of events.

These events can then lead to dramatically divergent futures.

The butterfly effect can affect systems such as:

  • The economy of a country
  • Weather conditions
  • Almost everything else

Now, although researchers understood how the butterfly effect worked, they could not pin down the state of these systems precisely enough to predict how all the interacting factors would play out.

Because of that, it is safe to say that humans have had to live under a rather thick (but ever-thinning) veil of unpredictability.

That might change though.

Why?

Because robots have arrived.

And they will help humans predict the future more reliably.

Researchers have published a series of results in renowned scientific journals such as Chaos and Physical Review Letters, where they used machine learning techniques to predict the future evolution of chaotic systems out to stunningly distant horizons.

Readers should know that machine learning is one of the most important computational techniques researchers have used in recent years to drive advances in the field of artificial intelligence.

Outside experts have lauded the new approach as groundbreaking.

These experts also believe that using machine learning techniques for chaotic systems may have even wider applications.

Herbert Jaeger, a professor of computational science at Jacobs University in Bremen, Germany, recently said that he found the new technique amazing, given how far into the future it could help researchers predict.

The above-mentioned findings come from veteran chaos theorist Edward Ott and four collaborators at the University of Maryland.

Ott and his colleagues employed reservoir computing (a machine learning algorithm) to learn the dynamics of an archetypal chaotic system.

That system is known as the Kuramoto-Sivashinsky equation.

The evolving solution to this equation, researchers have found, behaves much like a flame front.

In other words, the solution flickers as it advances through a combustible medium.

The Kuramoto-Sivashinsky equation also describes the drift waves found in plasmas.

It can also help with other phenomena.

Most of all, the Kuramoto-Sivashinsky equation serves as an effective test bed for studying turbulence and spatiotemporal chaos.

This is according to Jaideep Pathak, one of Edward Ott's graduate students and the lead author of the new papers.
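For readers who want to experiment, a toy version of this test bed is easy to simulate. The following is a minimal sketch, assuming a small periodic domain and a simple semi-implicit spectral scheme (none of which reflect the papers' actual settings), of how one might generate flame-front-like training data:

```python
import numpy as np

# Hedged sketch: generate flame-front-like data by integrating the
# Kuramoto-Sivashinsky equation, u_t = -u*u_x - u_xx - u_xxxx, on a
# periodic domain with a pseudo-spectral method and a semi-implicit
# Euler step. Grid size, domain length and time step are illustrative
# choices, not the papers' settings.
N, L_dom, dt = 128, 22.0, 0.01
x = L_dom * np.arange(N) / N
k = 2 * np.pi * np.fft.fftfreq(N, d=L_dom / N)  # spectral wavenumbers
L_op = k**2 - k**4                              # linear part in Fourier space

u = 0.1 * np.cos(2 * np.pi * x / L_dom) * (1 + np.sin(2 * np.pi * x / L_dom))
v = np.fft.fft(u)

snapshots = []
for step in range(20000):
    nonlin = -0.5j * k * np.fft.fft(np.real(np.fft.ifft(v)) ** 2)
    v = (v + dt * nonlin) / (1.0 - dt * L_op)   # implicit in the stiff terms
    if step % 50 == 0:
        snapshots.append(np.real(np.fft.ifft(v)))

data = np.array(snapshots)  # a time series a reservoir computer could train on
```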

The researchers first trained their reservoir computer on past data from the evolving solution of the Kuramoto-Sivashinsky equation.

After that, the researchers found that the reservoir computer could closely predict how the flamelike system would continue to evolve out to around eight Lyapunov times into the future.


For comparison's sake, loosely speaking, that is eight times further into the future than any previous method allowed.

But what is Lyapunov time?

The Lyapunov time represents how long it takes for two almost-identical states of a chaotic system to diverge exponentially.

To put it in simpler terms, the Lyapunov time sets the horizon for predicting the future.
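To make the idea concrete, here is a small illustrative sketch, using the chaotic logistic map as a stand-in system (a toy example, not one from the papers), that estimates a Lyapunov exponent and hence a Lyapunov time:

```python
import numpy as np

# Watch two almost-identical states of the chaotic logistic map
# x_{n+1} = 4 x (1 - x) diverge, and read off the Lyapunov time as
# the inverse of the divergence rate.
x, y = 0.4, 0.4 + 1e-10       # two nearly identical initial states
seps = []
for _ in range(25):
    x, y = 4 * x * (1 - x), 4 * y * (1 - y)
    seps.append(abs(x - y))

# The slope of log(separation) vs. step count estimates the Lyapunov
# exponent; its reciprocal is the Lyapunov time in map iterations.
lyap = np.polyfit(np.arange(1, 26), np.log(seps), 1)[0]
print(f"Lyapunov exponent ~ {lyap:.2f} per step; "
      f"Lyapunov time ~ {1 / lyap:.1f} steps")
```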

Holger Kantz, a chaos theorist at the Max Planck Institute for the Physics of Complex Systems in Dresden, Germany, recently said that the new technique could prove very effective.

Kantz was referring to the technique's ability to predict eight Lyapunov times ahead.

He also said that the new machine-learning technique could be almost as good as knowing the actual truth.

So how does the new machine learning technique do it?

In other words, how does it leverage the Kuramoto-Sivashinsky equation to predict the future?

Well, it turns out that the machine learning algorithm knows nothing about the Kuramoto-Sivashinsky equation itself.

The algorithm only sees the recorded data that researchers have about the equation's evolving solution.

And this is exactly what makes machine learning techniques so powerful.

Readers should understand that in many cases, researchers do not know the equations that describe a given chaotic system.

This fact, most of the time, cripples the efforts of dynamics researchers (or dynamicists) to model such systems and predict their behavior.

The results from Edward Ott and his colleagues suggest that researchers may not need the unknown equations at all.

What they need is data.

According to Kantz, the new paper suggests that one day researchers might be able to predict the weather with machine learning algorithms rather than with sophisticated models of the Earth's atmosphere.

Experts also say that machine learning techniques could assist researchers in many other areas apart from weather forecasting.

Machine learning techniques could help researchers monitor a patient's cardiac arrhythmias for early signs of an impending heart attack.

These advanced algorithms could also help researchers monitor neuronal firing patterns in a patient's brain for signs of impending neuron spikes.

Moving into more speculative applications, machine learning techniques could help researchers to predict rogue waves.

Rogue waves regularly endanger ships all around the world.

If machine learning techniques could help predict them, then all the better.

Experts also think that one day these same machine learning techniques could help predict earthquakes.

Edward Ott, along with his colleagues, also has one other particular hope.

That hope is that these new machine learning tools can give researchers advance warning of solar storms.

What kind of solar storms?

The kind that erupted in 1859 from an area on the sun's surface spanning 35,000 miles.

The 1859 magnetic outburst created aurora borealis (or just auroras) visible all over the world.

More importantly though, it blew out several telegraph systems.

The outburst also generated so much voltage that some telegraph lines could switch off their own power and still operate.

So what does an 1859 solar storm have to do with today’s world?


Well, experts think that if a similar solar storm lashed the planet unexpectedly today, it could severely damage our entire electronic infrastructure.

Ott also mentioned that if machine learning techniques let the community know that a solar storm was coming, it could simply switch off the power to the electronic infrastructure.

And then switch it back on once the solar storm passed.

Ott, along with his colleagues Pathak, Zhixin Lu (now at the University of Pennsylvania), Michelle Girvan and Brian Hunt, achieved the new results by combining existing but different tools.

The group started reading up on machine learning techniques about seven years ago.

This was around the time when another powerful computer algorithm, known as deep learning, had started its march towards mastering artificial intelligence tasks such as speech and image recognition.

After learning more about machine learning, the group began to think of clever new ways to apply these techniques to chaos.

The group learned of a handful of promising results that predated the deep learning revolution.

Then in the early 2000s, a more important development took place.

Jaeger, along with Harald Haas, a fellow chaos theorist from Germany, utilized a network of randomly connected artificial neurons.

This network forms the "reservoir" in the term reservoir computing.

Using this technique, they learned the dynamics of three chaotically coevolving variables.

After training on the three series of numbers, they found that the network could predict the future values of the three variables out to an impressively distant horizon.

With that said, researchers then found that with more than a handful of interacting variables, the computations grew in number until they became impossibly unwieldy.

This is where Ott and his colleagues saw the need for a more efficient scheme to make techniques such as reservoir computing relevant for big chaotic systems.

What are big chaotic systems?

These are chaotic systems with a large number of interrelated variables.

For example, every position along the previously-mentioned advancing flame front has velocity components in three spatial directions.

All of those components have to be tracked.

It took the researchers years to strike upon what they considered a straightforward solution.

Pathak recently said that the team exploited the locality of the variable interactions in spatially extended chaotic systems.

What does the term locality mean here?


It means that variables in one place are influenced by variables at nearby places.

But they are not influenced by variables that exist far away.

Pathak explained that by using this information, the team could break up a big problem into smaller, more manageable chunks.

In other words, researchers could parallelize the problem by using one reservoir of neurons to learn about one patch of the given system.

After that, researchers could use another reservoir to learn about the system's other patches.

And this process continues until the whole system is covered.

Adjacent domains overlap slightly in order to account for the interactions between them.

Researchers also say that parallelization enables approaches such as reservoir computing to manage chaotic systems of almost any size.

But this is only true as long as researchers can dedicate proportionate computing resources to the task.
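The following is a hedged sketch of that divide-and-conquer idea, with arbitrary patch and overlap sizes: one snapshot of a spatially extended system is chopped into overlapping patches, each of which would feed its own hypothetical reservoir:

```python
import numpy as np

# Chop one snapshot of a spatially extended system into overlapping
# patches so that each (hypothetical) reservoir sees only its local
# neighborhood. Patch count and overlap width are illustrations only.
def make_patches(state, n_patches, overlap):
    size = len(state) // n_patches
    patches = []
    for i in range(n_patches):
        # Wrap indices around for a periodic domain.
        idx = np.arange(i * size - overlap, (i + 1) * size + overlap) % len(state)
        patches.append(state[idx])
    return patches

state = np.random.randn(128)                         # one snapshot of the system
patches = make_patches(state, n_patches=8, overlap=4)
# Each of the 8 patches (16 core points plus a 4-point halo on each
# side) would feed its own reservoir; the halos carry the neighbor
# interactions that the locality argument says are the only ones needed.
```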

Reservoir Computing As A Three-Step Procedure

From an overall perspective, according to Ott, techniques such as reservoir computing can be summed up as a three-step procedure.

Let’s assume for a moment that a researcher wants to use reservoir computing in order to predict a spreading fire’s evolution.

First Step

As a first step, the researcher measures the flame's height at five different points along the flame front.

The researcher keeps measuring the heights at those five points over a period of time as the flame flickers and advances.

Then the researcher feeds these data streams into randomly chosen artificial neurons in the reservoir.

When the researcher feeds in the data, it triggers the artificial neurons to fire.

This, in turn, triggers the neurons connected to them as well.

Consequently, a deluge of signals travels through the whole network.
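A minimal sketch of this first step might look as follows, assuming five measurement points and a 300-neuron reservoir (illustrative sizes, not the papers' settings):

```python
import numpy as np

# Sketch of step one: measured flame heights splash into a fixed,
# randomly connected network, triggering cascades of activity.
rng = np.random.default_rng(0)
n_inputs, n_reservoir = 5, 300
W_in = rng.uniform(-0.5, 0.5, (n_reservoir, n_inputs))   # random input links
A = rng.uniform(-1.0, 1.0, (n_reservoir, n_reservoir))   # random internal links
A *= 0.9 / np.max(np.abs(np.linalg.eigvals(A)))          # spectral radius < 1

def update(r, u):
    # One time-step: input u excites the neurons, whose activity also
    # echoes through the random internal connections.
    return np.tanh(A @ r + W_in @ u)

r = np.zeros(n_reservoir)
heights = rng.standard_normal(n_inputs)   # one set of five measurements
r = update(r, heights)                    # signals cascade through the network
```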

Second Step

Then comes the second step.

In this step, the researcher makes the neural network use the input data to learn the dynamics of the evolving flame front.

To do so, the researcher monitors the signal strengths of several randomly selected reservoir neurons while feeding in the data.

These signals are then weighted and combined in five distinct ways to produce five output numbers.

What is the goal here exactly?

It is to adjust the weights of the various signals used in calculating the outputs until those outputs consistently match the next set of inputs.

What is the next set of inputs?

These are the five new heights measured a moment later along the evolving flame front.

As Ott explained, what the researcher wants is for the output to be the input at a slightly later time.

To learn the correct weights, the algorithm compares each set of outputs to the next set of inputs.

Remember that the "set of outputs" means the predicted flame heights at the five different points.

And the "next input set" means the actual heights of the flame at those five points.

At each comparison, the algorithm increases or decreases the weights of the different signals it is tracking, in whichever way makes the signal combinations spew out the correct values for the five outputs.

The algorithm fine-tunes the weights from one time-step to the next, and the predictions gradually improve.

This continues until the algorithm can consistently predict the advancing flame's state a single time-step later.

Third Step

According to Ott, the third step is where the researcher actually makes the prediction.

Having learned the chaotic system's dynamics, the reservoir computing algorithm can now reveal how the flame front will evolve.

Essentially, the network asks itself what will happen next.

Then, the network feeds its outputs right back in as inputs.

It feeds the resulting new outputs back in again as inputs, and so on, projecting how the flame heights at the five distinct positions on the flickering flame front will evolve.
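Putting the second and third steps together, here is a self-contained sketch. All sizes, the toy stand-in data and the ridge parameter are assumptions, and while the article describes iterative weight tuning, fitting the readout weights in one shot with ridge regression is a common equivalent shortcut in reservoir computing:

```python
import numpy as np

# Hedged end-to-end sketch: train a reservoir's readout weights on
# toy "flame height" data, then run it in a closed loop to forecast.
rng = np.random.default_rng(42)
n_in, n_res, T = 5, 300, 2000

# Toy stand-in for the measured flame heights at five points;
# real training data would come from actual measurements.
t = np.arange(T + 1)[:, None]
data = np.sin(0.02 * t + np.arange(n_in)) + 0.01 * rng.standard_normal((T + 1, n_in))

# Fixed random reservoir (as in step one).
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
A = rng.uniform(-1.0, 1.0, (n_res, n_res))
A *= 0.9 / np.max(np.abs(np.linalg.eigvals(A)))   # keep spectral radius below 1

def update(r, u):
    return np.tanh(A @ r + W_in @ u)

# Step two: run the reservoir over the training data, then fit output
# weights so the combined neuron signals match the NEXT measurements.
states = np.zeros((T, n_res))
r = np.zeros(n_res)
for i in range(T):
    r = update(r, data[i])
    states[i] = r
ridge = 1e-6                                      # assumed regularization value
W_out = np.linalg.solve(states.T @ states + ridge * np.eye(n_res),
                        states.T @ data[1:]).T

# Step three: close the loop. Each predicted output becomes the next
# input, rolling the forecast forward with no further measurements.
u, forecast = data[T], []
for _ in range(100):
    r = update(r, u)
    u = W_out @ r                  # predicted heights at the five points
    forecast.append(u)
```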

Meanwhile, other reservoirs working in parallel predict how the flame's height will evolve at other sets of five positions along the advancing flame.

The point researchers want to make here is that techniques such as reservoir computing are very effective at learning the dynamics of chaotic systems.

However, they themselves don't know exactly why that is.

Researchers say that it is probably because the computer can tune the associated formulas in direct response to data.

It tunes them until the formulas replicate the chaotic system's dynamics.

 
