Shay Gueron - The PC platform has made tremendous strides in the past 20 years, thanks to great progress in architecture, micro-architecture, operating systems and software. However, from the security point of view, the PC platform has failed. The problems today seem to be worse than 20 years ago. Attackers have an advantage: they can develop 0-day exploits faster than patches can be widely installed. The solution to this problem cannot come from hardware alone, as most attacks are against software. The PC platform community needs to step back, look at how to solve the larger problem, and from there determine the role of hardware in that solution.
All players in the PC platform community need to continue working together to improve the platform’s quality. Processors need to keep improving “compute efficiency” and provide hardware assists only as building blocks for the software stack. It is the role of software, and mostly the responsibility of software writers, to protect their code from malicious attack. One contribution the processor community can offer is to research, publish and point out micro-architectural enhancements, and to increase software writers’ awareness of the consequences of writing applications that will run on throughput-optimized general-purpose processors.
At the same time, I think we need to look pragmatically at what Trusted Computing can and cannot accomplish. The current direction of Trusted Computing implementations is to leverage secure virtualization technology as the core technology for creating isolation between guest partitions. The problem is that the guest partitions are themselves large monolithic targets, and this approach does nothing to add intrinsic protection for the guests themselves.
Secure virtualization may allow other guests to continue to function when one is corrupted, but this does not help if critical data within the corrupted guest is lost. One promising extension of the current software-protection tool architecture is a tamper-resistant external health monitor that lives outside a guest and can provide higher-assurance warning of a penetration or failure inside that guest; this is research that should be pursued.
More importantly we need to look at how to provide a finer level of protection within a given guest. This may be accomplished by a decomposition of the monolithic guest into smaller components hosted by the virtualization layer or it may be accomplished by other techniques. I think that a critical area of combined hardware and software research is to develop workable solutions that allow for the “effective decomposition” of the monolithic OS kernels without incurring massive code rewrites that would make this logistically impractical.
1. Virtualization hardware support as a tool for undetectable malware, i.e., reversing the security promises of virtualization.
2. New (old?) side-channel attacks capitalizing upon caches, branch prediction units, keystroke tables, etc. which potentially circumvent the TCG trust boundaries.
3. Buffer-overflow attacks circumventing the “eXecute Disable” (XD) / “No eXecute” (NX) bit.
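The side-channel concern in item 2 can be made concrete with a small sketch. The secret value, function names, and the use of an operation counter (standing in for cache- or branch-predictor timing) are all illustrative assumptions, not anything from the text; the point is only that data-dependent execution does observably different amounts of work, which shared microarchitectural state can leak across the TCG trust boundary.

```python
# Illustrative sketch (hypothetical names and secret): why data-dependent
# execution leaks. A non-constant-time comparison stops at the first
# mismatching byte, so an attacker who can observe "work done" (here a simple
# counter, standing in for cache or branch-predictor timing) learns how many
# leading bytes of a guess were correct -- and can recover the secret byte
# by byte.

SECRET = b"hunter2!"  # hypothetical secret the attacker tries to recover

def leaky_compare(guess: bytes, secret: bytes = SECRET) -> tuple[bool, int]:
    """Return (match, ops); ops counts byte comparisons, the 'side channel'."""
    ops = 0
    for g, s in zip(guess, secret):
        ops += 1
        if g != s:          # early exit: work done depends on the data
            return False, ops
    return len(guess) == len(secret), ops

# The attacker observes more "work" the longer the correct prefix:
_, ops_wrong = leaky_compare(b"auncher!")  # first byte wrong: stops at once
_, ops_close = leaky_compare(b"huntera!")  # six correct bytes: more work
print(ops_wrong, ops_close)                # 1 vs 7 comparisons
```

A real cache attack replaces the counter with timing of shared caches or branch predictors, which is exactly why item 2 above argues these resources may need per-process tagging.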
I see these new problems arising from the following facts:
1. Security cannot be baked in afterwards - without dramatic architecture changes - into an architecture that was not intended as a security-aware platform:
“As security is not a cheap and simple after-the-fact add-on, the underlying architecture must be completely overhauled towards security.”
2. Due to the massive implicit/explicit parallelism deeply hidden in today’s very complex microarchitectures, the “isolated execution” requirement, i.e., the confinement guarantee, must be carefully evaluated.
“Towards fulfilling the confinement requirement of Lampson ’73 at the OS level, we must tag resources like caches, branch prediction units, TLBs, etc. at a per-process granularity to help the OS.”
3. The NX/XD bit is a right, but only a very tiny, step in the direction of a capability-based computer-architecture paradigm.
“Although the security advantages of a capability-based computer architecture are terrific, its practical realization is a tremendous research project. But we have tweaked and massaged the x86 architecture so much over the last 25+ years that another giant step, towards a capability-aware x86/x86-64 architecture, wouldn’t matter. Actually, on careful inspection, it seems not totally hopeless.”
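Why the NX/XD bit is only a tiny step can be sketched without any real exploit code. The simulation below is purely illustrative (the frame layout, function name, and both addresses are made up): a bounds-unchecked copy overruns a buffer and replaces the saved return address. NX prevents *executing* injected bytes on the stack, but a return-to-libc style attack simply aims the overwritten return address at code that is already legitimately executable, which is what motivates looking beyond NX toward capability-style protection.

```python
# Illustrative simulation, not real exploit code. The "stack frame" is a
# bytearray holding an 8-byte buffer followed by an 8-byte saved return
# address (both addresses below are invented for the illustration).

BUF_SIZE = 8

def vulnerable_write(stack: bytearray, data: bytes) -> None:
    """Mimics strcpy: copies data into the frame with no bounds check."""
    stack[:len(data)] = data  # nothing stops len(data) exceeding BUF_SIZE

# Frame: zeroed buffer, then a benign saved return address.
stack = bytearray(BUF_SIZE) + (0x00400B10).to_bytes(8, "little")
benign_ra = int.from_bytes(stack[BUF_SIZE:], "little")

# Attacker payload: filler for the buffer, then a fake return address
# pointing at code that already exists and is executable (so NX is moot).
payload = b"A" * BUF_SIZE + (0x00401234).to_bytes(8, "little")
vulnerable_write(stack, payload)

hijacked_ra = int.from_bytes(stack[BUF_SIZE:], "little")
print(hex(benign_ra), "->", hex(hijacked_ra))  # control flow redirected
```

Note that no instruction was ever executed from the "stack" itself; the NX bit is never triggered, which is exactly the sense in which it is only one tiny step.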