The tech industry has been reeling in the last week after the discovery of two serious CPU bugs, nicknamed Meltdown and Spectre, that affect a wide variety of chips from Intel, AMD, ARM and other manufacturers. Both bugs potentially allow attackers to access protected information by exploiting a performance feature that has been central to chip design for twenty years.
Intel, certainly the worst-affected chip designer (and the holder of a majority of the CPU market), has sought to downplay the problem, as well as the potential performance impact of fixes. Meanwhile, major corporations such as Google, Amazon and Microsoft have begun to roll out patches addressing the issues, especially in their Cloud services, which represent the greatest security hazard.
Unfortunately, the sheer scale of the problem, as well as the nuanced nature of Spectre, means that a real solution is potentially a long way off; and in the meantime, many systems, especially those with Cloud architecture, will remain vulnerable. There are two known Spectre variants, but more will inevitably be discovered, and some may be solved only by redesigning hardware. The end result? Now may not be the time for high-security, business-critical applications, such as your Content Management System (CMS), to move to the Cloud.
Meltdown and Spectre attacks take advantage of speculative execution, a process whereby a chip “guesses” which instructions will be needed next (most often, the outcome of a conditional branch) and begins executing them before the current ones have finished. When the guess is right, a great deal of time is saved; when it is wrong, the speculative results are simply discarded. On balance, speculative execution accelerates chip performance considerably.
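To make the mechanism concrete, here is a minimal C sketch of the kind of branch a processor will speculate past. The function and variable names are illustrative, and the speculation itself is invisible to the program; the comments describe what the hardware does behind the scenes.

```c
#include <assert.h>

/* Illustrative sketch: if the CPU reaches the branch below before
 * `limit` is available (say, it is still being fetched from memory),
 * the branch predictor guesses which arm will run and executes it
 * speculatively. A correct guess saves the wait; a wrong guess means
 * the speculative results are discarded and the other arm runs.
 * Either way, the value the program observes is the same. */
int clamp(int value, int limit) {
    if (value < limit)   /* the branch the predictor guesses at */
        return value;    /* may run speculatively, ahead of the check */
    return limit;        /* or this arm, if predicted instead */
}
```

Architecturally, `clamp(7, 10)` always returns 7 whether the predictor guessed right or not; the trouble, as the next section explains, is that speculation leaves behind microarchitectural traces, such as what is in the cache.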
However, this design opens the door to two classes of security flaw. Meltdown is a specific problem: it allows an unprivileged process to read privileged kernel memory that should be off-limits to it. Spectre is much broader, more a class of threats than a single bug: by mistraining a feature of speculative execution called “branch prediction”, a Spectre attack can trick a victim process into speculatively accessing sensitive data anywhere in its memory space, and then divulging that data through cache-timing side channels.
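The best-known Spectre pattern, variant 1 (“bounds check bypass”), can be sketched in a few lines of C. This shows the shape of vulnerable victim code, not a working exploit; the array names and sizes are hypothetical.

```c
#include <stdint.h>
#include <stddef.h>

/* Illustrative victim code for Spectre variant 1 (bounds check
 * bypass). All names and sizes here are hypothetical. */
uint8_t array1[16];          /* legitimately accessible data */
size_t  array1_size = 16;
uint8_t array2[256 * 512];   /* probe array: one cache line per byte value */

uint8_t victim(size_t x) {
    if (x < array1_size) {
        /* An attacker first calls this with in-bounds values of x to
         * train the branch predictor, then supplies an out-of-range x
         * chosen so that array1[x] lands on a secret byte. The CPU may
         * speculatively read that byte and use it to index array2
         * before the bounds check resolves. The speculative result is
         * discarded, but the array2 cache line it touched stays
         * cached; timing accesses to array2 afterwards reveals which
         * line was loaded, and hence the secret byte. */
        return array2[array1[x] * 512];
    }
    return 0;
}
```

Run normally, this code behaves exactly as written and never reads out of bounds; the leak exists only in the transient, discarded execution, which is what makes Spectre so hard to patch in software.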
In the context of the Cloud, this means that an attacker could potentially read private data belonging to any other client whose processes were running on the same physical server.
Given the specificity of Meltdown, which primarily affects Intel chips, solutions have been quick in coming. However, they involve isolating kernel memory into a separate, protected address space, which entails a performance hit. This has proven small in practice for casual users, but for workloads that are heavy on system calls and I/O – say, managing millions of documents in a database, as with a CMS – slowdowns of up to 30% have been reported.
Amazon Web Services has acknowledged that certain businesses will be hit much harder than the average, and has pledged to work with those clients individually to formulate better solutions; but this, of course, will take money and time.
Protecting against Spectre attacks is an altogether different challenge. Almost every device in existence is vulnerable to the bug. While companies such as Google claim to have made headway in countering Spectre, they remain tight-lipped on the details, in marked contrast to their willingness to divulge information for counteracting Meltdown.
Many experts maintain that, even if the two known variants of Spectre can be covered by patches, neutralising the entire class of potential attacks will require wholesale redesign of hardware architecture. In the meantime, Cloud providers would have to commit to offering a separate physical server for every client to completely avoid the risk of data leakage. This is an inefficient and implausible solution.
The current Spectre variants are, to be fair, difficult to exploit, and require an attacker to be able to run code on the target machine. Intel’s microcode update for Skylake-era processors also protects against both variants, though it covers only a small subset of affected chips. But we don’t know what the next variant to be discovered will look like, or, worse, who will discover it.
After the trauma of VENOM in 2015, and last year’s revelations of security flaws in VMware and Hyper-V products, the last thing that Cloud users want to hear is that there is an unknown number of permutations of a powerful attack just waiting to be discovered, which could be exploited to steal sensitive information. Add to this the potential performance impact of fixes, and the long-term possibility of an expensive implementation of redesigned chips across Cloud servers, and the forecast for the short term looks grey for CMS in the Cloud.
So, what is the solution for high-risk instances hosting sensitive data? If you are investing in a new CMS, choose one that can operate in-house for now and be moved to the Cloud later.
In this context, solutions like CARA, which offer the possibility of hosting in secure, on-site servers, as well as the flexibility to host less critical infrastructure in the Cloud, will allow clients a broader range of options for protecting their information and maintaining high performance. In the end, a sealed network is inherently more secure; and while CARA certainly can run in the Cloud for those clients who want it, it also provides the peace of mind that comes with running a mature product in a controlled environment.