Do you know if there is any official recording or notes online?
Thanks in advance.
The Binary and Malware Analysis course that you mentioned builds on top of the book "Practical Binary Analysis" by Dennis Andriesse, so you could grab a copy of that if you are interested.
More info here: https://krebsonsecurity.com/2014/06/operation-tovar-targets-...
it was a while back :)
If I had known what I was getting into at the time, I'd still do it. I did pay extra, but in my case it was the low Dutch rate, so for me it was 400 euros to follow Hardware Security, since I had already graduated.
But I can give a rough outline of what they taught. It was years ago, but here you go.
Hardware security:
* Flush/Reload (see the sketch after this list)
* Cache eviction
* Spectre
* Rowhammer
* Implement research paper
* Read all kinds of research papers of our choosing (just use VUSEC as your seed and you'll be good to go)
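Here's the Flush+Reload sketch I referred to above, a minimal illustration of mine rather than course material: flush a cache line you share with a victim, let the victim run, then time a reload. A fast reload means the victim touched the line. The 100-cycle threshold and the simulated victim access are assumptions; on real hardware you'd calibrate the threshold.

    /* Flush+Reload sketch (x86, GCC intrinsics). Illustration only. */
    #include <stdint.h>
    #include <stdio.h>
    #include <x86intrin.h>

    static uint8_t shared[4096];          /* stand-in for memory shared with a victim */

    static uint64_t time_reload(volatile uint8_t *addr) {
        unsigned int aux;
        uint64_t start = __rdtscp(&aux);
        (void)*addr;                      /* reload: fetch the line and time it */
        uint64_t end = __rdtscp(&aux);
        return end - start;
    }

    int main(void) {
        const uint64_t threshold = 100;   /* assumed cache-hit cutoff, machine dependent */

        _mm_clflush(&shared[0]);          /* flush: evict the monitored line */
        shared[0] = 1;                    /* simulate the victim touching the line */

        uint64_t dt = time_reload(&shared[0]);
        printf("%s (%llu cycles)\n",
               dt < threshold ? "line was accessed (cache hit)" : "no access (cache miss)",
               (unsigned long long)dt);
        return 0;
    }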
Binary & Malware Analysis:
* Using IDA Pro to find the exact instruction where the unpacker we had to analyze finished unpacking its payload into memory. We also had to disable its GDB/debugger protections, something to do with ptrace and nopping some instructions out, if I recall correctly (see the sketch after this list). Look, I only did low-level programming in my security courses and it was years ago; I'm a bit flabbergasted I remember the rough course outlines this well.
* Being able to dump the unpacked binary from memory onto disk. Understanding page alignment was rough, because even once you got it there were a few gotchas. I've looked at so many hexdumps it was insane.
* Taint analysis: watching user input "taint" other variables
* Instrumenting a binary with Intel PIN
* Cracking some program with Triton. I think Triton instrumented the binary with the help of Intel PIN, turning certain operations (like XORs) into an SMT equation, and then you handed that to an SMT/Z3 solver and cracked it. I don't remember exactly; I got a 6 out of 10 for this assignment and had a hard time cracking the real thing.
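The ptrace trick I mentioned usually boils down to a check like the one below (my sketch, not the actual assignment code). A process can only be traced once, so if PTRACE_TRACEME fails, a debugger is already attached; patching (nopping) the conditional jump after this check in the packed binary is what defeats the protection.

    /* Classic anti-debugging check (Linux). Illustration only. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/ptrace.h>

    int main(void) {
        if (ptrace(PTRACE_TRACEME, 0, NULL, NULL) == -1) {
            fprintf(stderr, "debugger detected, bailing out\n");
            exit(1);
        }
        puts("no debugger, continuing to unpack...");
        return 0;
    }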
Computer & Network Security:
* Web security: think XSS, CSRF, SQLi and reflected SQLi
* Application security: see binary and malware analysis
* Network security: we had to create our own packet sniffer, and we enacted a Kevin Mitnick attack (an old school one) where we had to spoof our IP addresses and figure out the algorithm behind the TCP sequence numbers, all in the blind, without feedback. Mitnick, back in the mid-'90s, attacked the San Diego supercomputer (might be wrong about the details here). He noticed that the supercomputer S trusted a specific computer T, so the assignment was to spoof the address of T and pretend we were sending packets from that location. I think writing this packet sniffer was my first C program. My prof thought I was crazy that this was my first time writing C. Maybe I was, but I also had 80 hours of time and motivation per week. So that helped.
* Finding vulnerabilities in C programs. I remember: stack overflows, heap overflows and format string bugs.
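For the format string bugs in that last item, the classic pattern is tiny (my illustration, not from the course):

    #include <stdio.h>

    int main(int argc, char **argv) {
        if (argc < 2)
            return 1;
        printf(argv[1]);            /* BUG: user input used as the format string;
                                       "%x %x %x" leaks stack memory, "%n" writes to it */
        putchar('\n');
        /* The fix is trivial: printf("%s", argv[1]); */
        return 0;
    }

Stack and heap overflows follow the same theme: the program trusts the size or content of attacker-controlled input.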
-----
For binary & malware analysis + computer & network security I highly recommend hackthebox.eu
For hardware security, I haven't seen an alternative. To be fair, I'm not looking. I like to dive deep into security for a few months out of the year and then I can't stand it for a while.
BTW I can see you were very motivated back then. It must have gotten pretty steep, but you managed to break through. Congrats!
So not very up to date, but I suppose mitigations haven't changed significantly upstream since then.
The kernel has nothing to do with Ubuntu, its release schedule, and its LTSes. Distro LTS releases also often mean custom kernels, backports, hardware enablement, whatnot, which makes it a fork, so unless we're analyzing Ubuntu security rather than Linux security, mainline should be used.
Edit: "LTS" added due to popular demand
Distro LTS releases often mean custom kernels, backports, hardware enablement, whatnot, which makes it effectively a fork.
Unless we're interested in discovering kernel variation discrepancies, it's more interesting to analyze mainline.
LTS does not mean you get all updates, it only means you get to drag your feet for longer with random bugfixes. Only the latest release has updates.
And as security updates are backported to all supported versions, and 24.04 is an LTS release, it is as up to date as it gets.
If you're being pedantic, be the right kind of pedantic ;)
This differs from an actual later release which is closer to mainline and includes all newer fixes, including ones that are important but weren't flagged, and with less risk of having new downstream bugs.
If you're going to fight pedantry by being pedantic, better be the right kind of pedantic. ;)
> Does Branch Privilege Injection affect non-Intel CPUs?
> No. Our analysis has not found any issues on the evaluated AMD and ARM systems.
Source: https://comsec.ethz.ch/research/microarch/branch-privilege-i...
There are probably similar bugs in AMD and ARM. I mean, how long did these bugs sit undiscovered in Intel CPUs, right?
Unfortunately the only real fix is to recognize that you can’t isolate code running on a modern system, which would be devastating to some really rich companies’ business models.
Does pinning VMs to hardware cores (including any SMT'd multiples) fix this particular instance? My understanding was that doing that addressed many of the modern side channel exploits.
Of course that's not ideal, but it's not too bad in an era where the core count of high end CPUs continues to creep upwards.
You could say we only update the predictor at retirement to solve this. But that can get a little dicey too: the retirement queue would have to track this locally, and retirement frees up registers, so you'd better be sure it's not the one your jump needs to read. Doable, but slightly harder than you might think.
On top of that x86 seems to be pushed out more and more by ARM hardware and now increasingly RISC-V from China. But of course there's the US chip angle - will the US, especially after the problems during Covid, let a key manufacturer like Intel bite the dust?
It's not great but lol the sensationalism is hilarious.
Remember, gamers only make up a few percent of the users of what Intel makes. But that's what you hear about the most. One or two data center orders are larger than all the gaming CPUs Intel will sell in a year. And Intel is still doing fine in the data center market.
Add in that Intel still dominates the business laptop market which is, again, larger than the gamer market by a pretty wide margin.
The two areas you mention (data center, integrated OEM/mobile) are the two that are most supply chain and business-lead dependent. They center around reliable deliveries of capable products at scale, hardware certifications, IT department training, and organizational bureaucracy that Intel has had captured for a long time.
But!
Data center specifically is getting hit hard from AMD in the x86 world and ARM on the other side. AWS's move to Graviton alone represents a massive dip in Intel market share, and it's not the only game in town.
Apple is continuing to succeed in the professional workspace, and AMD's share of laptop and OEM contracts just keeps going up. Once an IT department or their chosen vendor has retooled to support non-Intel, that toothpaste is not going back into the tube - not fully, at least.
For both of these, AMD's improvement in reliability and delivery at scale will be bearing fruit for the next decade (at Intel's expense), and the mindshare, which gamers and tech sensationalism are indicators for, has already shifted the market away from an Intel-dominated world to a much more competitive one. Intel will have to truly compete in that market. Intel has stayed competitive in a price-to-performance sense by undermining their own bottom line, but that lever can only be pulled so far.
So I'm not super bullish on Intel, sensationalism aside. They have a ton of momentum, but will need to make use of it ASAP, and they haven't shown an ability to do that so far.
Product aside, from a shareholder/business point of view (I like to think of this separately these days as financial performance is becoming less and less reflective of the end product) I think they are too big to fail.
Doing what you want would essentially require a hardware architecture where every load/store has to go through some kind of "augmented address" that stores boundary information.
Which is to say, you're asking for 80286 segmentation. We had that, and it didn't do what you want. The reason is that those segment descriptors need to be loaded by software that doesn't mess things up, and software does mess up: a descriptor is "just a pointer" to software, amenable to the same mistakes.
Again, it's just not a software problem. In the real world we have hardware that exposes "memory" to running instructions as a linear array of numbers with sequential addresses. As long as that's how it works, you can demand an out of bounds address (because the "bounds" are a semantic thing and not a hardware thing).
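To make the "bounds are semantic, not hardware" point concrete, here's a tiny illustration of mine: nothing in the hardware stops a C program from forming and dereferencing an out-of-bounds address, because the bounds live only in the language semantics.

    #include <stdio.h>

    int main(void) {
        int a[4] = {1, 2, 3, 4};
        int b = 42;
        int *p = a + 7;             /* out of bounds, but just integer arithmetic
                                       as far as the hardware is concerned */
        printf("a=%p  &b=%p  p=%p\n", (void *)a, (void *)&b, (void *)p);
        printf("*p = %d\n", *p);    /* undefined behavior in C, yet the load still
                                       happens: memory is just a linear array here */
        return 0;
    }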
It is possible to change that basic design principle (again, x86 segmentation being a good example), but it's a whole lot more involved than just "Rust Will Fix All The Things".
(*) ... although I don't think I can abstain ...
(CHERI already exists on ARM and RISC-V though.)
A "far pointer" was, again, a *software* concept where you could tell the compiler that this particular pointer needed to use a different descriptor than the one the toolchain assumed (by convention!) was loaded in DS or SS.
In the case of speculative execution, you need an insane amount of prep to use that exploit to actually do something. The only real way this could ever be used is if you have direct access to the computer where you can run low level code. It's not like you can write JS code with this that runs on browsers that lets you leak arbitrary secrets.
And in the case of systems that are valuable enough to exploit with a risk of a dedicated private or state funded group doing the necessary research and targeting, there should be a system that doesn't allow unauthorized arbitrary code to run in the first place.
I personally disable all the mitigations because the performance boost is actually noticeable.
That's precisely what Spectre and Meltdown were though. It's unclear whether this attack would work in modern browsers; they did re-enable SharedArrayBuffer, and it's unclear if the existing mitigations for Spectre/Meltdown stymie this attack.
> I personally disable all the mitigations because the performance boost is actually noticeable.
Congratulations, you are probably susceptible to JS code reading crypto keys on your machine.
By the way, you have to be careful on your database server to not actually run arbitrary code as well. If your database supports stored procedures (think PL/SQL), that qualifies, if the clients that are able to create the stored procedures are not supposed to be able to access all data on that server anyway.
But I'd never state that definitively, as I don't know enough about what HTML without JS can do these days. For all I know there's a Turing tarpit in there somewhere...
With JS or WASM, it's much more straightforward.
OTOH if an adversary gets a low-privilege RCE on your box, exploiting something like Spectre or RowHammer could help elevate the privilege level, and more easily mount an attack on your other infrastructure.
But also note my caveat about database servers, for example. A database server shared between accounts of different trust levels will be affected, if the database supports stored procedures for example. Basically, as soon as there's anything on the box that not all users of it should be able to access anyway, you'll have to be very, very careful.
Do you understand the scope of the issue? Do you know that this couldn't personally affect you in a dragnet (so, not targeted, but spread out, think opportunistic ransomware) attack?
Because this statement of yours:
> It's not like you can write JS code with this that runs on browsers that lets you leak arbitrary secrets.
was not true for Spectre. The original spectre paper notoriously mentions JS as an attack vector.
If you truly disable all mitigations (assuming CPU and OS allow you to do so), you will reopen that hole.
So:
> The only real way this could ever be used is if you have direct access to the computer where you can run low level code.
I'm a low level kernel engineer, and I don't know this to be true in the general case. JITs, e.g. the JavaScript ones, also generate "low level code". How do you know this is not sufficient?
Looking forward to learning how this can be meaningfully mitigated.
[1] https://www.intel.com/content/www/us/en/security-center/advi...
AMD has had SEV support in QEMU for a long time, which some cloud hosting providers already use; that would mitigate any such issue if it occurred on AMD EPYC processors.
[1] See, e.g., https://www.amd.com/en/resources/product-security/bulletin/a... and https://www.intel.com/content/www/us/en/developer/articles/t...
Their new processors are quite inviting, but like with all CPUs I'd prefer to keep the entire thing to myself.
From that piece of text on the blog, I don't quite understand whether Kaby Lake CPUs are affected or not.
Why mention only Windows, what about Linux users?
Not expert enough to know what to look for to see if these particular mitigations are present yet.
https://comsec.ethz.ch/research/microarch/branch-privilege-i...
"Unfortunately for John, the branches made a pact with Satan and quantum mechanics [...] In exchange for their last remaining bits of entropy, the branches cast evil spells on future generations of processors. Those evil spells had names like “scaling-induced voltage leaks” and “increasing levels of waste heat” [...] the branches, those vanquished foes from long ago, would have the last laugh."
https://www.usenix.org/system/files/1401_08-12_mickens.pdf
> The Mossad is not intimidated by the fact that you employ https://. If the Mossad wants your data, they’re going to use a drone to replace your cellphone with a piece of uranium that’s shaped like a cellphone, and when you die of tumors filled with tumors, […] they’re going to buy all of your stuff at your estate sale so that they can directly look at the photos of your vacation instead of reading your insipid emails about them.
mitigations=off
Don't care.
Suppose you want to measure the distribution of the delay between recurring events (which is basically what's at the heart of those vulnerabilities). Suppose the delays are all sub-millisecond, and that your timer, to pick something ridiculous, only has a 2 second granularity.
You may at first think that you cannot measure the sub-millisecond distribution with such a coarse timer. But consider that events and timer ticks are not synchronized to each other, so with enough patience you will still catch some events barely on the left or on the right side of a 2 second timer tick. Do this over a long enough time, and you can reconstruct the original distribution. Even adding some randomness to the timer tick just means you need more samples to suss the statistics out.
Again, I am not an expert, and I don't know if this actually works, but that's what I came up with intuitively, and it matches with what I heard from some trustworthy people on the subject, namely that non-precision timers are not a panacea.
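To make the intuition concrete, here's a rough sketch of the idea (mine, not from any paper; the 1 ms "coarse" tick and the busy-loop "event" are stand-ins): with a coarse clock you just count how often a tick boundary happens to fall inside the event, and that fraction recovers the sub-tick duration.

    #include <stdio.h>
    #include <stdint.h>
    #include <time.h>

    #define COARSE_NS 1000000LL           /* pretend our only timer ticks every 1 ms */

    static int64_t coarse_now(void) {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        int64_t ns = (int64_t)ts.tv_sec * 1000000000LL + ts.tv_nsec;
        return ns / COARSE_NS;            /* deliberately truncated to coarse ticks */
    }

    static void recurring_event(void) {
        volatile int x = 0;               /* stand-in for the sub-millisecond event */
        for (int i = 0; i < 20000; i++)
            x += i;
    }

    int main(void) {
        const int trials = 200000;
        int crossings = 0;

        for (int i = 0; i < trials; i++) {
            int64_t before = coarse_now();
            recurring_event();
            if (coarse_now() != before)   /* a coarse tick boundary fell inside */
                crossings++;
        }

        /* P(crossing) is roughly duration / tick, so duration ~ fraction * tick. */
        double estimate_ns = (double)crossings / trials * (double)COARSE_NS;
        printf("estimated event duration: ~%.0f ns, despite 1 ms timer granularity\n",
               estimate_ns);
        return 0;
    }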
If each timer draws from the same random distribution then sure, you could work out the real tick with greater accuracy, but I don’t know if that is practical.
If the timers draw from different distributions then it is going to be much harder.
I imagine there is an upper limit on how much processing can be done per tick before any attack becomes implausible.
Again, I'm an amateur, but I think you just need to know that distribution, which I guess you usually do (open source vs. closed source barely matters there), law of large numbers and all.
Anyway, looking through the literature, this article presents some actual ways to circumvent timers being made coarse-grained: https://attacking.systems/web/files/timers.pdf
In that article, the "Clock interpolation" sounds vaguely related to what I was describing on a quick read, or maybe it's something else entirely... Later, the article mentions alternative timing sources altogether.
Either way, the conclusion of the article is that the mitigation approach as a whole is indeed ineffective: "[...] browser vendors decided to reduce the timer resolution. In this article, we showed that this attempt to close these vulnerabilities was merely a quick-fix and did not address the underlying issue. [...]"
The mitigating factor is actually that you don't go to malicious websites all the time, hopefully. But it happens, including with injected code in ads and the like, which may be enabled by secondary vulnerabilities.
[1] Not even including "potentially exploitable from JavaScript", which Spectre was. It's sufficient if you name one where an ordinary userspace program can do it.
Then people say "no that's not possible, we got security in place."
So then the researchers showcase a new demo where they use their existing knowledge with the same issue (i.e. scaling-induced voltage leaks).
I suspect this will go on and on for decades to come.
- Predictor updates may be deferred until sometime after a branch retires. Makes sense, otherwise I guess you'd expect that branches would take longer to retire!
- Dispatch-serializing instructions don't stall the pipeline for pending updates to predictor state. Also makes sense, considering you've already made a distinction between "committing the result of the branch instruction" and "committing the result of the prediction".
- Privilege-changing instructions don't stall the pipeline for pending updates either. Also makes sense, but only if you can guarantee that the privilege level is consistent between making/committing a prediction. Otherwise, you might be creating a situation where predictions generated by code in one privilege level may be committed to state used in a different one?
Maybe this is hard because "current privilege level" is not a single unambiguous thing in the pipeline?
Paper: https://comsec.ethz.ch/wp-content/files/bprc_sec25.pdf
> [...] the contents of the entire memory to be read over time, explains Rüegge. “We can trigger the error repeatedly and achieve a readout speed of over 5000 bytes per second.” In the event of an attack, therefore, it is only a matter of time before the information in the entire CPU memory falls into the wrong hands.
On the bright side, they will get to enjoy a much better music scene, because they’ll be visiting the 90’s.
The ARM Cortex-R5F and Cortex-M7, to name a few, have branch predictors as well, for what it’s worth ;)