This week, Intel fixed a CPU security bug that didn't attract a funky name, although the bug itself is admittedly pretty funky.
Known as CVE-2020-0543, or Special Register Buffer Data Sampling by its full title, it serves as a further reminder that as processor manufacturers produce ever faster chips that can process more code and data in less time…
… we sometimes pay a cybersecurity price, at least in theory.
If you are a regular Naked Security reader, you are probably familiar with the term speculative execution, which refers to the fact that modern CPUs often race ahead, working through internal calculations, or partial calculations, that may later turn out to be unnecessary.
The idea is not as strange as it sounds, because modern chips typically split what looks to the programmer like a single machine code instruction into numerous sub-instructions, and can work on many of these so-called micro-architectural operations at the same time, across multiple CPU cores.
For example, if your program reads through a data array to perform a complex calculation based on all the values in it, the processor must ensure that you do not read past the end of your own memory buffer, because that could let someone else's private data leak into your calculation.
In theory, the CPU should freeze your program each time it is about to peek at the next byte in the array, perform a security check to verify that you are authorized to access that byte, and only then allow your program to continue.
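In program terms, that safety-first approach looks something like the following sketch (a minimal Python illustration of the idea; the function, the buffer and the check are ours, not Intel's actual microcode logic):

```python
def read_array_checked(buffer, length, index):
    """Return buffer[index], but only after an explicit bounds check.

    Conceptually, this is the check the CPU performs before letting
    your program peek at the next byte: if the index falls outside
    the memory you own, the access is refused outright.
    """
    if index < 0 or index >= length:
        # The equivalent of the CPU refusing the access: no byte
        # beyond your buffer is ever read, so no one else's data
        # can flow into your calculation.
        raise IndexError("access outside allocated buffer refused")
    return buffer[index]

secret_neighbour = [42, 42, 42]   # someone else's data, off limits
mine = [1, 2, 3, 4]               # your own four-entry buffer

print(read_array_checked(mine, len(mine), 2))   # allowed: prints 3
```

The cost of doing it this way is that every single read waits for its check, which is exactly the delay speculative execution tries to hide.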
But while each security check completes, all the micro-architectural computation units that your program could otherwise be using to keep the calculation moving sit idle – even though any results they produced would not be visible off-chip anyway.
The speculative approach says, in effect: "Let the internal calculations run ahead of the security checks, because if the checks ultimately pass, we'll be ahead in the race and can release the final results quickly."
The theory says that if a check fails, the chip can simply discard the internal data that it now knows to be tainted. That gives a potential performance boost with no security risk, because the security checks still prevent any secret data from being released.
The vast majority of code that loops through arrays never reads past the end of its allocated memory, so the typical performance boost is considerable, apparently with no downside…
…except for the awkward fact that the tainted data sometimes leaves ghostly echoes of its presence that are detectable off-chip, even though the data itself was never officially handed over to any machine code instruction.
In particular, recently accessed memory addresses are typically cached on-chip in case they are needed again soon, because that greatly improves performance. As a result, the speed at which memory locations can be accessed generally gives away how recently they were used – and therefore which memory address values were involved – even if that "peeking" was speculative and was internally canceled for security reasons.
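The timing trick above can be modeled with a deliberately simplified sketch (a toy model of our own devising, not real hardware behavior – real attacks measure access times in CPU cycles, not set membership):

```python
class ToyCache:
    """A toy model of an on-chip cache: recently used addresses are 'fast'.

    An illustration of the timing side channel only; the set stands in
    for cache lines, and 'fast'/'slow' stand in for measured latencies.
    """
    def __init__(self):
        self.lines = set()

    def access(self, addr):
        hit = addr in self.lines   # a hit would be measurably quicker
        self.lines.add(addr)       # every access warms the cache line
        return "fast" if hit else "slow"

cache = ToyCache()

# The victim speculatively touches address 0xCAFE; the speculative work
# is canceled, but the cache line it pulled in stays warm.
cache.access(0xCAFE)

# The attacker probes candidate addresses and times each access:
# the one that comes back 'fast' reveals what the victim touched.
timings = {addr: cache.access(addr) for addr in (0xBEEF, 0xCAFE, 0xF00D)}
print(timings)   # only 0xCAFE comes back 'fast'
```

Nothing here reads the victim's data directly; the leak is purely the difference in access speed.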
Unfortunately, secrets buried deep inside the chip can inadvertently leave detectable traces behind – traces that could later allow untrusted software to infer some of that secret data.
Even if all an attacker can work out is that the first and last bits of your secret decryption key must be zero, or that the very last cell in your table holds a value greater than 32,767 but less than 1,048,576, that can still be a serious security risk.
This risk is often amplified in cases like this, because attackers may be able to refine their guesses by making millions or billions of inferences, steadily improving their estimates over time.
For example, imagine that your decryption key gets rotated one bit to the left every so often, and that the attacker can infer the values of its first and last bits each time it rotates.
With enough time, and a sufficiently reliable set of inferences, the attackers can gradually learn more and more about your secret key until they are well enough placed to guess the rest successfully.
(If you recover 16 bits of a decryption key that is supposed to withstand 10 years of concerted cracking, you can probably break it 2^16, or 65,536, times faster than before, which means you now need only a few hours.)
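The arithmetic in that parenthetical works out as follows (plain Python, using 365-day years for round numbers):

```python
# A key rated to withstand 10 years of concerted cracking...
crack_time_hours = 10 * 365 * 24   # 87,600 hours

# ...loses a factor of 2**16 once 16 of its bits are known,
# because each recovered bit halves the remaining search space.
speedup = 2 ** 16                  # 65,536

remaining_hours = crack_time_hours / speedup
print(f"{remaining_hours:.2f} hours")   # about 1.34 hours
```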
What about CVE-2020-0543?
In the case of the Special Register Buffer Data Sampling bug, or CVE-2020-0543, the internal data that might accidentally leak out of the processor chip – or, more precisely, that might be lured out of it – includes recent output values from the following machine code instructions:
RDRAND. This instruction name is short for ReaD hardware RANDom number. Ironically, RDRAND was designed to produce high-quality hardware random numbers, based on the physics of electronic thermal noise, which is generally considered impossible to model realistically. That makes it a more trustworthy source of random data than software-derived sources such as keystroke and mouse timings (which don't exist on servers), network latency (which depends on software following pre-programmed patterns), and so on. If another program running on the same CPU as yours can figure out or guess some of the random numbers you've just fed into your cryptographic calculations, it may get a practical head start in cracking your keys.
RDSEED. This is short for ReaD random number SEED, an instruction that works more slowly than RDRAND and draws on more thermal noise. It was designed for cases where you want to use a software random number generator, but want to initialize it with a so-called seed value to kick-start its randomness, or entropy. An attacker who knew the seed value of your software random generator could reconstruct the entire sequence that follows, which could enable, or at least greatly assist, future cryptographic cracking.
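To see why a leaked seed matters, here is a minimal Python sketch using the standard library's deliberately non-cryptographic random module (the seed value and the 16-byte "key" are made up for illustration):

```python
import random

leaked_seed = 0x1234ABCD   # imagine this value leaked out via RDSEED sampling

# The victim seeds a software random number generator and derives a 'key'.
victim = random.Random(leaked_seed)
victim_key = [victim.randrange(256) for _ in range(16)]

# The attacker, knowing only the seed, replays the exact same sequence.
attacker = random.Random(leaked_seed)
attacker_key = [attacker.randrange(256) for _ in range(16)]

print(victim_key == attacker_key)   # True: the 'random' key is fully recovered
```

The generator's output is perfectly deterministic once the seed is known, which is why seed secrecy is the whole game.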
EGETKEY. This is short for Enclave GET encryption KEY. The Enclave part signifies that it belongs to Intel's much-vaunted SGX instruction set, which is designed to provide a sealed-off block of memory that not even the operating system kernel can peer into. An SGX enclave therefore acts as a sort of tamper-proof security module, like the special chips used in smart cards or cell phones to store lock codes and other secrets. In theory, only software already running inside the enclave can read data stored there, and it cannot write that data out of the enclave, so encryption keys generated inside the enclave can neither escape by accident nor be leaked on purpose. An attacker who can make inferences about cryptographic keys randomly generated inside one of your enclaves may get access to secret data that even you aren't supposed to be able to read!
How bad is that?
The good news is that guessing someone else's recent RDRAND values doesn't automatically and instantly give you the ability to decrypt all their files and network traffic.
The bad news, as Intel itself admits:
RDRAND and RDSEED may be used in methods that rely on the returned data being kept secret from potentially malicious actors on other physical cores. For example, random numbers from RDRAND or RDSEED may be used as the basis for a session encryption key. If these values leak, an adversary may be able to derive the encryption key.
And researchers from Vrije Universiteit Amsterdam and ETH Zurich have published a paper entitled CROSSTALK: Speculative Data Leaks Across Cores Are Real (they did find a funky name!), which explains how the CVE-2020-0543 bug can be exploited:
The cryptographically secure RDRAND and RDSEED instructions turn out to leak their output to attackers (…) on many Intel CPUs, and we have demonstrated that this is a realistic attack. We have also seen that (…) it is almost trivial to apply these attacks to break code running in Intel's secure SGX enclaves.
What should I do?
Intel has released a series of microcode updates for affected chips that trade off a little speed in favor of security, to mitigate these "CROSSTALK" attacks.
Simply put, the secret data generated inside the chip by the random generator circuitry is now aggressively wiped after use, so that no "ghostly echoes" remain that could be detected via speculative execution.
In addition, access to the random data generated for RDRAND and RDSEED (and consumed by EGETKEY) is regulated more strictly, so that random numbers generated for multiple programs running in parallel are only handed out in the order in which those programs made their requests.
This may reduce performance slightly – any program requesting RDRAND numbers has to wait its turn instead of operating in parallel – but it ensures that the internal "secret data" used to generate the random numbers for process X is wiped from the chip before process X+1 gets a look inside.
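The behavior of the fix can be modeled roughly like this – a toy sketch of our own, not Intel's microcode: a lock serializes the requests, and the shared staging buffer is wiped before the next requester gets in (os.urandom stands in for the hardware noise source):

```python
import os
import threading

class SerializedRNG:
    """Toy model of the mitigation: one requester at a time, with the
    internal staging buffer wiped before the next one gets a look in.
    A sketch of the idea only; the real fix lives in CPU microcode.
    """
    def __init__(self):
        self._lock = threading.Lock()
        self._staging = b""

    def rdrand(self):
        with self._lock:                   # requests are served strictly in turn
            self._staging = os.urandom(8)  # fill the shared internal buffer
            value = self._staging
            self._staging = b"\x00" * 8    # aggressively wipe before releasing
            return value

rng = SerializedRNG()
print(rng.rdrand().hex())          # 16 hex digits of fresh randomness
print(rng._staging == b"\x00" * 8) # True: nothing left behind to sample
```

The lock is what costs the parallelism, and the wipe is what removes the "ghostly echoes".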
Where you get your microcode updates depends on your computer and operating system.
Linux distributions usually bundle and distribute these fixes as part of a kernel update (mine arrived yesterday, for example). On other operating systems, you may need to download a BIOS update from the maker of your computer or its motherboard, so check with your vendor.
(Intel says that "generally Intel Core family (…) and Intel Xeon E3 processors (…) may be affected", and has published a list of affected processor chips, in case you happen to know which chip is in your computer.)