In early 2018, the security community saw the emergence of twin blockbuster exploits, dubbed Spectre and Meltdown.
Mozilla was the first vendor to respond to both, retroactively changing several behaviors of its flagship browser Firefox to keep the exploits from harming Firefox users.
So how exactly do Spectre and Meltdown work?
Both operate in a similar fashion: they rely on high-resolution timing measurements to extract private and/or sensitive information.
Somewhat counterintuitively, security researchers and engineers had to come up with patches that actually slowed down certain computations.
In the process, researchers also found that the vulnerabilities took advantage of seemingly mundane, unimportant computations.
The first thing researchers had to take care of was slowing down the performance API in the various web browsers.
As mentioned just now, Mozilla did it before anyone else.
The problem with the performance API was simple enough: it could measure the behavior of a page at very high precision, and that precision was fine enough for attackers to use the API to mount a timing attack.
But researchers also had to make another change: removing SharedArrayBuffer.
What is SharedArrayBuffer?
Think of it as a relatively new kind of data structure, one that lets multiple threads share the same block of memory.
Atop SharedArrayBuffer, attackers could rebuild a similar high-resolution timer with the most trivial of techniques.
Again, Mozilla took the lead in fixing this issue as well.
Microsoft implemented similar changes to its own browsers (Internet Explorer and of course, Edge) soon enough.
WebKit also got in on the act, because WebKit is one of the engines through which much of the world views the web.
Many web browsers are built on it: developers have used WebKit to build Safari, Mobile Safari, the old Android browser, and many other dedicated browsers embedded in mobile and other devices.
At the time of writing this report, SharedArrayBuffer is now gone.
In other words, all major web browsers have disabled SharedArrayBuffer by default.
One could say that all these browsers basically backpedaled on some established features of the web.
But they had to do it.
Security exploits such as Spectre and Meltdown had made it necessary to do so.
One can’t ignore the fact that it was unexpected and strange though.
The thing readers need to understand here is that the web is, for the most part, a decentralized specification: an agreement about how we should build things, and about how we should run the things we have built.
So it is not enough to simply come up with new features; they have to exist meaningfully across the internet itself.
That’s why standards bodies, browser vendors, and developers first have to come to a common understanding about how the web should work.
Once an entity (person, developer team) adds some new feature to the above-mentioned agreement, no one can remove it.
Why is that?
Because no one really has an idea what new problems the new addition may give rise to.
Moreover, it is even more difficult to know in which sections and/or far-flung corners the new problems may appear.
Contrast that with programming languages and other technology systems, which operate in much narrower contexts.
Most of the time they work within the confines of a specific server, to take a simplistic example, or inside a certain application.
This gives them a specific advantage: the ability to endure even dramatic modifications to their original behavior.
If developers ship an upgrade and it brings a couple of malfunctions with it, those malfunctions stay localized, and engineers can fix them relatively easily.
When it comes to something like a distributed web, such promises don’t really exist.
That said, web technologies have generally evolved in ways that maintain a degree of backwards compatibility.
This is why very old web pages have little trouble functioning in newer web browsers: they continue to work even after the old browsers go away or their developers abandon them.
So how does all that relate to the Spectre and Meltdown vulnerabilities?
Well, focusing on Spectre for the moment: this exploit forced web browsers to break the distributed web’s covenant of compatibility.
In practice, it is unlikely that many meaningful projects relying on the removed features even exist.
And even where such projects do exist, developers and engineers may still have safer and simpler workarounds.
Nonetheless, Spectre and Meltdown did represent a rather prominent episode: one that forced the web to break its own code.
When the web is forced to do that retroactively, there is a certain cost, and it is best described in ideological terms.
No one can (or should) quite trust the web as an unbreakable and infallible platform.
That doesn’t mean people don’t, but the extent of that trust diminishes with each passing year.
Changes That Break Things
Let’s talk about a common practice in the software engineering discipline.
People involved in this side of things call the practice semantic versioning.
Using this technique, developers and engineers give the official published releases of their software packages and tools slightly more complex version numbers.
Let’s explain that just a bit.
Ever wonder why an application rarely goes from version 1 to version 2 and then to version 3?
And why do the vast majority of applications and packages go from version 2.4.2 to 2.4.3 and then to 2.4.8 and (finally) then to version 3?
Well wonder no more because this is what software engineers call semantic versioning.
It is not about making things more complex.
Developers want both automated systems and human users to know when they have changed something important.
The way to do that is to shift the smaller numbers for smaller changes and the larger numbers for larger ones; the largest of these numbers signals the most important changes to a given software package and/or tool.
Once such a change rolls out, there is no guarantee that the system will work exactly as it did before.
The developer community refers to these changes as breaking changes.
And these ‘breaking changes’ serve an important purpose.
They act as safety checks.
At the very least, these ‘breaking changes’ act as warning flags.
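The convention described above is easy to express in code. This is a minimal sketch, and the function names (`parseSemver`, `isBreakingUpgrade`) are illustrative rather than taken from any particular library:

```javascript
// Sketch: what the three semver components signal.
function parseSemver(version) {
  const [major, minor, patch] = version.split('.').map(Number);
  return { major, minor, patch };
}

// A major-version bump is the "warning flag": it signals a breaking
// change, with no guarantee the system behaves as it did before.
function isBreakingUpgrade(from, to) {
  return parseSemver(to).major > parseSemver(from).major;
}

console.log(isBreakingUpgrade('2.4.2', '2.4.3')); // patch bump: not breaking
console.log(isBreakingUpgrade('2.4.8', '3.0.0')); // major bump: breaking
```

Minor and patch bumps promise compatibility; only the major number grants permission to break things.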
No one really knows precisely how much these changes modified the behavior of the internet or how widespread their impact was.
What we do know is that developers shipped patches that changed web browsers to help protect users against Spectre and Meltdown.
According to most reports on the matter, those changes met the technical definition of ‘breaking changes’.
And as mentioned before, such breaking changes can affect the entirety of the web.
Of course, at this point the web has become too old and too chaotic for anyone to subject it to any real planning or versioning whatsoever.
This is also precisely the reason why up until now, the distributed web had always tried to opt for safety options such as preserving backwards compatibility.
What Causes Spectre and Meltdown?
We touched upon the causes of Spectre and Meltdown before as well.
But just to recap: the cause is a technique known as speculative execution, which has become almost ubiquitous in modern computer processors.
What does this technique do?
It makes processors proactively and eagerly execute instructions.
Okay, so what’s wrong with that?
Nothing, except that the processor engages in this behavior even when no program has actually asked it to execute those instructions.
In other words, the processor executes instructions before a program needs them, which makes the speculatively computed results available faster.
The discovery of Spectre and Meltdown primarily showed that speculative execution was not sufficiently secured, and that its flaws provide a way to leak private and/or sensitive information.
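The shape of the leak is worth seeing. Below is a purely conceptual sketch of the access pattern Spectre variant 1 abuses; it leaks nothing on a patched system, a real exploit additionally needs mistrained branch prediction plus the cache-timing step that browsers now deliberately hinder, and the names (`guardedRead`, `probe`) are illustrative:

```javascript
// Conceptual sketch of the Spectre v1 bounds-check-bypass pattern.
const data = new Uint8Array(16);          // memory the program may read
const probe = new Uint8Array(256 * 4096); // one "cache line" per byte value

function guardedRead(index) {
  if (index < data.length) {
    // The CPU may run this load speculatively even when the bounds
    // check is about to fail. The cache footprint left by indexing
    // `probe` with the (possibly out-of-bounds) byte is what an
    // attacker would later measure with a high-resolution timer.
    return probe[data[index] * 4096];
  }
  return 0; // architecturally, the out-of-bounds read never happens
}

console.log(guardedRead(3));    // in-bounds read
console.log(guardedRead(1000)); // out-of-bounds: architecturally returns 0
```

Architecturally the code is perfectly safe; the problem is entirely in what the hardware does on the mispredicted path, which is why the fix could not live in software alone.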
Most notably, the Meltdown security vulnerability affects hardware that Intel sells.
Previously, everybody simply assumed that speculative execution had the necessary safety measures.
Moreover, any attempt to disable the feature at the software level came with the prospect of a marked performance cost.
The Spectre and Meltdown vulnerabilities and their solutions, aren’t just about laptops that get sluggish.
They affect many kinds of services as well.
One of those services is cloud computing.
Many cloud service providers charge their clients variable rates that reflect the computational burden of their workloads.
Hence, Spectre and Meltdown may affect the prices these cloud computing services charge.
In other words, the vulnerabilities may eventually force cloud customers to increase their technical budgets, paying more dollars (literal dollars) for services that, as a result of the new patches, now have to run a bit more slowly.
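The arithmetic behind that budget increase is simple. The sketch below uses illustrative, made-up numbers (the function name and the $10,000/8% figures are hypothetical, not measured slowdowns):

```javascript
// Back-of-envelope sketch: a compute-billed workload that runs slower
// after a security patch consumes proportionally more billable time.
function patchedMonthlyCost(baseCost, slowdownPercent) {
  // Integer-friendly form: cost scales with (100 + slowdown) / 100.
  return baseCost * (100 + slowdownPercent) / 100;
}

// A hypothetical $10,000/month compute bill with an 8% slowdown:
console.log(patchedMonthlyCost(10000, 8)); // → 10800
```

The same job, the same result, a larger bill — which is why mitigation slowdowns show up as real dollars on cloud invoices.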
So if the current “fixes” decrease performance and thereby increase cloud computing prices, what is the real fix?
The real fix, at least for Meltdown, is for manufacturers to physically replace all their current processor chips.
This, as one would suspect, would require a massive amount of money and willingness to change; by some estimates, the fix might take at least a full generation of hardware to propagate properly.
Now, that would keep things under control as far as the Meltdown security vulnerability is concerned.
As for the Spectre vulnerability, things get more complicated; so much so that some believe Spectre may not have a real fix at all.
Believe it or not, until very recently many had not realized a simple fact about computers and their speed: we have long been misinformed about the full power and speed of our machines.
The apparent potential of today’s computers was built on a false foundation, one that we must now undo if we want the majority of computer users around the world to remain safe.
Bits And Beyond
Researchers and engineers had to respond to Spectre and Meltdown immediately.
The stopgap solution required them to slow down both online timing measurements and processors.
In fixing the vulnerabilities, then, engineers and security researchers had to reverse advances everyone thought we had made, both in the sophistication of the web as a useful platform and in the hardware itself.
Likewise, we have seen how disruptive ideas can gain enough momentum to manifest themselves outside the silicon.
In New York, for example, Uber’s market dominance emerged rapidly enough to capitalize on the subway system’s then-ongoing financial problems.
Airbnb saw similar success as the barriers to home ownership kept rising for the country’s modern middle class.
Many in the industry have reason to believe Intel would rather not concede that Moore’s law is well and truly over.
Regardless, security vulnerabilities may substantially erase its remaining real-world performance benefits, especially once engineers finish taming vulnerabilities like Spectre and Meltdown.
On a side note, the long-standing computing trope that “it is all just zeroes and ones” should now cause even more concern.
Think about it: are we really talking about mere bits once we know those same bits drive our 3D printers, drones, and robots?
We live in a world where new technologies manifest themselves in the actual, physical world.
Why is that?
Because, despite all the progress the online world has made in the last decade or so, the simple truth is that the real world is where most of the “real” money is.
Perhaps this is also a good time to mention that, in its own way, Bitcoin (the cryptocurrency and its auxiliary industries, such as coin mining) is melting the earth’s polar ice caps as well.
Let’s expand the scale for a moment, all the way to the mind-boggling scale of the cosmos: security vulnerabilities such as Spectre and Meltdown may eventually affect even our emerging ability to edit (and sometimes create) organisms.
Then let’s jump back to a level that is more tangible.
The truth we need to come to grips with is that Spectre and Meltdown have decreased the operational speeds of both online timing measurements and processors.
In the process, they have reversed advances we, as people, thought we had made, both in the general complexity of the internet as a reliable platform and in the hardware we use today.
Some would argue that in both fields we had put ourselves, quite literally, in a race whose finish line led somewhere terrible.
The thing we need to realize right now is that we, as a community, have built technology a bit too quickly.
Too quickly, even for our own good.
The technology is more real than ever, quantifiable in real dollars and, of course, microseconds.
And we measured that progress with a wide range of metrics and tools, even as features such as SharedArrayBuffer went away and could no longer take those measurements for us.
Any technology that seeks to change and/or reshape our existing infrastructure deserves a few things.
That list should include:
- Aggressive scrutiny