Intel is adding two new exploit-detection systems to its future processors.
The new technology has been in the works for at least four years, judging by the chip giant's recently updated specification document, which lists a version 1.0 release date of June 2016.
Intel's PR machine has been making waves about the system, known as CET for short, or Control-flow Enforcement Technology in full.
…and now you can officially read all about it. (Warning: the specification document runs to 358 pages.)
As far as we can see, the first Intel processors to include these new protections will be the not-yet-finished CPUs that go by the nickname "Tiger Lake" – so if you're a programmer, you can't actually start tinkering with the CET features just yet.
Nevertheless, CET is a reminder that computer security is a game of cat and mouse, in which each round of security improvements provokes a change in behavior from the cybercrooks, which in turn leads to a new wave of countermeasures, and so on.
Broadly speaking – and we are summarizing a 358-page document very loosely here – CET aims to make remote code execution exploits harder to pull off than they are now, by keeping programs' behavior under supervision.
Specifically, CET aims to watch out for programs that misbehave, to make it easier to tell when a program has gone off the rails, and to stop crooks from finding sneaky ways to crash buggy programs while still keeping control over them.
Exploiting memory errors
Memory misuse is one of the main causes of the software bugs that lead to security holes, known in the jargon as vulnerabilities.
For example, if I ask the operating system for 64 bytes of temporary storage, perhaps to generate and store a cryptographic key, and then accidentally write 128 bytes of data into it, I will trample all over whatever comes next in memory.
A block of memory set aside for your own use is colloquially known as a buffer, so writing outside your own buffer and into someone else's is known as a buffer overflow.
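Here is a minimal C sketch of the problem – the struct, field and function names are invented purely for illustration:

    #include <string.h>

    /* Hypothetical layout: a 64-byte key buffer with unrelated data right after it. */
    struct session {
        char key[64];       /* the 64 bytes we asked for                     */
        int  logged_in;     /* unrelated data that happens to live next door */
    };

    void store_key(struct session *s, const char *data, size_t len)
    {
        /* Bug: no check that len fits in the 64-byte buffer. If len is 128,
           the extra 64 bytes trample whatever follows 'key' in memory.      */
        memcpy(s->key, data, len);
    }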
Another way that data often gets trampled is via a use-after-free, where I accidentally write data into a block of memory that I have already told the operating system I no longer need, and which may therefore already have been reused for something else.
Even if I carefully stick to my 64-byte limit and avoid a buffer overflow, I am still writing where I shouldn't.
A use-after-free isn't technically an overflow, but you can think of it that way, because I am writing 64 bytes into a buffer that, right now, I am not supposed to be writing any bytes into at all.
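Sketched in C (again with invented names, and assuming the allocator happens to reuse the freed block):

    #include <stdlib.h>
    #include <string.h>

    int main(void)
    {
        char *key = malloc(64);     /* ask for 64 bytes of temporary storage    */
        free(key);                  /* ...then tell the allocator we're done    */

        char *other = malloc(64);   /* this allocation may reuse the same block */

        /* Bug: writing through the stale pointer. If the block was reused,
           this quietly tramples 'other', even though we stayed within our
           original 64-byte limit.                                            */
        memset(key, 0x41, 64);

        free(other);
        return 0;
    }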
Memory safety errors, as they are commonly known, pose an obvious cybersecurity risk, because they mean an attacker may be able to manipulate data that another part of the program assumes can be trusted, and will later rely on.
How risky a memory error of this sort turns out to be naturally depends on what got trampled.
If the overwritten bytes contained an error message that only gets printed in extremely unusual circumstances, the bug might go unnoticed for years, and even when it does strike, the only bad side effect may be that an error goes unreported (or gets reported unintelligibly).
But if the trampled memory contains data that the software later relies on to control the flow of execution in the program, then an attacker may be able to find a way of abusing the bug to implant malware.
Defense against memory errors
There are two ways that memory overwrite errors can be exploited to redirect execution.
One relies on modifying the so-called stack, a block of memory that the CPU uses (among other things) to track subroutine calls in the software.
When you call a subroutine in a program, for example getch(), which reads in the next input character (usually from the keyboard), the processor keeps track of where you called it from, so that the subroutine can simply execute a RETurn instruction to get back to the instruction right after the original CALL.
So, if you can mess with the stack, you can often mess with the next RET instruction, so that the program doesn't return to where it came from, but instead veers off into an unauthorized location of your choosing.
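A minimal C sketch of how that can happen (the function and buffer names are made up, and the exact stack layout varies by compiler and platform):

    #include <string.h>

    void greet(const char *attacker_controlled)
    {
        char name[16];                      /* small buffer on the stack       */
        strcpy(name, attacker_controlled);  /* no length check: a long input
                                               spills past 'name' towards the
                                               saved return address that the
                                               CALL pushed onto the stack      */
    }   /* the RET here then "returns" to whatever address the spill wrote in */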
The other trick is to modify the memory location that a JMP or CALL instruction uses to decide where to go next: instead of rerouting a program when it returns from a subroutine, you reroute it when it tries to call one, or to jump somewhere.
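For example, a function pointer stored next to an overflowable buffer gives an attacker an indirect CALL to subvert (again, a hypothetical sketch with invented names):

    #include <string.h>

    struct handler {
        char  label[16];
        void (*on_event)(void);    /* used later as the target of an indirect CALL */
    };

    void set_label(struct handler *h, const char *attacker_controlled)
    {
        strcpy(h->label, attacker_controlled);  /* can overflow into on_event */
    }

    void fire(struct handler *h)
    {
        h->on_event();   /* the indirect CALL now goes wherever the overflow pointed it */
    }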
There are already various protective measures against this type of trick, in particular DEP and ASLR.
DEP is short for Data Execution Prevention, and it relies on the fact that when attackers modify a RETurn address or a CALL or JMP target, they need to redirect execution into a chunk of code, known as shellcode, that they themselves supplied – usually as part of the data they sent to the buggy program in the first place.
But modern CPUs can mark data buffers as "not for execution", which stops attacker-supplied data, including their shellcode, from being run as code, even if they do manage to misdirect a RET, JMP or CALL into it.
Crooks responded to DEP with two-stage shellcode, where the first stage works by stringing together fragments of code that are already loaded into memory, for example as part of the running program or one of the DLLs it uses.
These "already executable" fragments, known in the jargon as gadgets, don't need to do very much – usually they just ask the operating system to flip the buffer containing the rest of the shellcode from "no execution allowed" to "this data may be executed as code".
The exploit is then completed by simply jumping into the second stage of the shellcode.
(Note that the gadgets were never meant to be used this way. Crooks typically scan through system DLLs looking for byte sequences that just happen to disassemble into useful code snippets such as ADD THIS or COMPARE THAT, even if those bytes are actually part of some other instruction sequence.)
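On a POSIX-style system, the "please make this data executable" step that the gadget chain ultimately performs boils down to a call like the one below (VirtualProtect plays the same role on Windows); this is an illustrative sketch that assumes 4 KB pages:

    #include <stdint.h>
    #include <sys/mman.h>

    int make_buffer_executable(void *buf, size_t len)
    {
        /* mprotect() works on whole pages, so round down to the start of the
           page that contains 'buf'.                                          */
        uintptr_t start = (uintptr_t)buf & ~(uintptr_t)0xFFF;
        size_t    span  = len + ((uintptr_t)buf - start);

        /* Flip the pages from "data only" to readable, writable and executable. */
        return mprotect((void *)start, span, PROT_READ | PROT_WRITE | PROT_EXEC);
    }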
Of course, to misdirect a running program so that it hands control to an "already executable" gadget, attackers need to know at which memory addresses those gadget bytes are loaded.
That was trivial about fifteen years ago, because each version of Windows loaded its standard system DLLs at the same memory addresses every time. So if the crooks could come up with an exploit that knew where to poke around in memory on their own test computer…
…it would work on your computer too, provided you were running the same version of Windows.
ASLR, short for Address Space Layout Randomization, has made that much harder, because Windows and all other mainstream operating systems now load programs at different memory locations every time they start up.
Crooks can easily guess which version of Windows you have, but they can't easily guess which gadgets are located at which memory addresses on your computer.
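You can see ASLR in action with a tiny C program like this one – run it two or three times and, on a system with ASLR enabled, the printed addresses will usually change from run to run:

    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        int   local = 0;
        void *heap  = malloc(1);

        printf("stack variable : %p\n", (void *)&local);
        printf("heap block     : %p\n", heap);
        printf("code (printf)  : %p\n", (void *)printf);  /* where the C library ended up */

        free(heap);
        return 0;
    }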
ASLR still not perfect
One problem with ASLR is that attackers who can somehow figure out which memory addresses are currently in use on your computer – even though those addresses were chosen at random – can automatically adapt their attack simply by adjusting all the gadget addresses in their exploit to match.
Unfortunately, information about the layout of system memory sometimes leaks out thanks to otherwise innocent-sounding bugs known as information disclosure errors.
For example, some programs write log files that are supposed to be helpful if you ever need support, but that inadvertently contain useful-but-should-be-secret data, such as "system version data found at address 0x7DEE…" or "KERNEL DLL loaded at 0x7EE3…".
In other words, the memory layout information that crooks aren't supposed to be able to figure out for program X may already have been given away by program Y.
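A hypothetical C sketch of how such a leak can look: a well-meaning debug log line that quietly undoes ASLR for anyone who can read the log file (all names invented):

    #include <stdio.h>

    static void write_debug_log(void *dll_base, void *session_buf)
    {
        FILE *log = fopen("debug.log", "a");
        if (log == NULL) return;

        /* Each %p below leaks a randomized address: anyone who can read this
           log can adjust the gadget addresses in their exploit to match.     */
        fprintf(log, "kernel DLL loaded at %p, session buffer at %p\n",
                dll_base, session_buf);

        fclose(log);
    }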
Intel's new hardware protection is designed to go beyond ASLR, and comes in two parts: a shadow stack, and indirect branch tracking (IBT).
The implementation is complex, but the concepts are simple:
The shadow stack works by keeping two copies of every memory address that a subroutine could return to. One copy is stored where it always was, on the regular stack, where it is still vulnerable to buffer overflows. The other copy is kept on the shadow stack, where a buffer overflow can't (or at least shouldn't be able to) reach it. Whenever a subroutine tries to return, the two copies are compared: if they differ, the return address on the regular stack must have been modified when it shouldn't have been. In theory, this detects and blocks both accidental crashes and deliberate exploit attempts.
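The real shadow stack lives in hardware and is invisible to the program, but the idea can be sketched in a few lines of C (a toy model, not how the feature is actually exposed):

    #include <stdio.h>
    #include <stdlib.h>

    #define MAX_DEPTH 1024

    static const void *shadow[MAX_DEPTH];   /* the protected second copies */
    static int depth;

    /* Conceptually performed on every CALL: remember where we must return to. */
    static void shadow_push(const void *return_address)
    {
        shadow[depth++] = return_address;
    }

    /* Conceptually performed on every RET: compare the two copies. */
    static void shadow_check(const void *return_address_on_regular_stack)
    {
        const void *expected = shadow[--depth];
        if (expected != return_address_on_regular_stack) {
            /* The copy on the regular stack was tampered with: refuse to return. */
            fprintf(stderr, "shadow stack mismatch: control flow violation\n");
            abort();
        }
    }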
The IBT system introduces a new machine code instruction called ENDBRANCH. Programs that want to use IBT can have these instructions compiled into their code at every point where a JMP or CALL is allowed to arrive, effectively building an approved list of legitimate branch targets. Any JMP or CALL that has been tampered with so that it ends up somewhere else, for example in a "code gadget" chosen by an attacker, can then be detected and blocked. Crooks should therefore find it somewhere between very difficult and impossible to find code gadgets that do what they want.
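Again, the real checks happen in silicon, but the concept can be modeled in a few lines of C, with the "places marked ENDBRANCH" represented as a list of approved function addresses (all names here are invented; in real programs the ENDBRANCH instructions would be inserted by the compiler rather than written by hand):

    #include <stdio.h>
    #include <stdlib.h>

    typedef void (*entry_fn)(void);

    static void legit_handler(void) { puts("reached an approved entry point"); }

    /* Stand-in for the set of locations marked with ENDBRANCH: the only
       addresses an indirect branch is allowed to land on.               */
    static const entry_fn approved_targets[] = { legit_handler };

    static void checked_indirect_call(entry_fn target)
    {
        for (size_t i = 0;
             i < sizeof approved_targets / sizeof approved_targets[0]; i++) {
            if (target == approved_targets[i]) {
                target();          /* landed on an "ENDBRANCH": allowed */
                return;
            }
        }
        /* The hardware would raise a control-protection fault here instead. */
        fprintf(stderr, "indirect branch to unapproved target\n");
        abort();
    }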
In case you are wondering how IBT manages to be backwards compatible, Intel has carefully chosen an instruction encoding for ENDBRANCH that executes as a NOP, short for "no operation" (i.e. an instruction that does nothing except consume a tiny amount of time and memory), on older CPUs.
As a result, software that was recompiled next year for CET-enabled processors will continue to function properly on older computers.
Is this the end of exploits?
The Intel press release notes that "no product or component can be absolutely secure. Your costs and results may vary."
However, we suspect that CET will, in general, make life harder for the crooks, and we look forward to it becoming widely available.