I also find it kind of funny that the "blunder" mentioned in the title is, according to the article, ... installing Huntress's agent. Do they look at every customer's Google searches to see if they're suspicious too?
The problem, to me, is that this is the kind of thing you'd expect to see done by a state intelligence organization with explicitly defined authorities to carry out surveillance of foreign attackers, codified in law somewhere. For a private company to carry out a massive surveillance campaign against a target, based on its own determination of the target's identity, and to then publish all of it is much more legally questionable to me. It's often ethically and legally murky enough when the state does it; for a private company to do it seems like operating well beyond their legal authority. I'd imagine (or hope, I guess) that they have a lawyer they consulted before this campaign as well as before this publication.
Either way, not a great advertisement for your EDR service to show everyone that you're shoulder surfing your customers' employees and potentially posting all that to the internet if you decide they're doing something wrong.
The machine was already known to the company as belonging to a threat actor from previous activity.
This builds trust with their customers and breaks trust with ... threat actors?
No, their job is to provide EDR protection for their customers.
As far as unique identifiers go, advertisers use a unique fingerprint of your browser to target you individually. Cookies, JavaScript, screen size, etc., are all used.
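To make that concrete, here's a minimal sketch of browser fingerprinting (my own illustration, not any ad network's or EDR vendor's actual code): hash a handful of properties the browser exposes freely and you get a fairly stable per-device identifier, no cookie required.

    // Minimal fingerprint sketch: combine a few freely readable
    // browser properties and hash them into one stable identifier.
    // Real trackers use many more signals (canvas, fonts, WebGL...).
    async function fingerprint(): Promise<string> {
      const signals = [
        navigator.userAgent,
        navigator.language,
        `${screen.width}x${screen.height}x${screen.colorDepth}`,
        Intl.DateTimeFormat().resolvedOptions().timeZone,
        String(navigator.hardwareConcurrency),
      ].join("|");
      // SHA-256 via the Web Crypto API, hex-encoded.
      const digest = await crypto.subtle.digest(
        "SHA-256",
        new TextEncoder().encode(signals),
      );
      return Array.from(new Uint8Array(digest))
        .map((b) => b.toString(16).padStart(2, "0"))
        .join("");
    }

    // Same browser, same hash, visit after visit: no cookie required.
    fingerprint().then((id) => console.log("fingerprint:", id));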
I'm also slightly curious as to whether you might be associated with an EDR vendor. I notice that you only have three comments ever, and they all seem to be defending how EDR software and Huntress work, without engaging with this specific instance.
However, it's obvious that protection-ware like this is essentially spyware with alerts. My company uses a similar service, and it includes a remote desktop tool, which I immediately blocked from auto-starting. But whatever the scanner is, it sends things to some central service. All in the name of security.
Unless maybe you just want to develop a more personal relationship with your internal cybersecurity team, who knows.
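For what it's worth, the "sends things to some central service" part is mechanically mundane. A hypothetical sketch of the pattern (the endpoint URL and event shape here are invented for illustration, not any specific vendor's API):

    // Hypothetical sketch of the pattern described above: an agent
    // batches local events and ships them to the vendor's central
    // service. The endpoint URL and event shape are invented here.
    type AgentEvent = {
      hostname: string;
      timestamp: string; // ISO 8601
      kind: "process_start" | "file_write" | "network_connect";
      detail: string;
    };

    async function shipEvents(events: AgentEvent[]): Promise<void> {
      // In a real agent this runs continuously in the background,
      // invisible to the person using the machine.
      await fetch("https://telemetry.example.com/v1/events", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify(events),
      });
    }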
The startup script you blocked could have just been a decoy, and blocking it may have set off a red flag.
A lot of these EDRs operate in kernel space.
As an example, if you're at a FedRAMP High-certified service provider, the DoD wants to know that the devices your engineers use to maintain the service it pays for aren't running a rootkit, and that you can prove the employee using a given device isn't mishandling sensitive information.
EDIT: For additional context, I'd add that security/risk tradeoffs happen all the time. In practice, trusting Huntress isn't too different from trusting npm with an engineer who has root access to their machine, or any kind of centralized IT provisioning/patching setup.
One of the tools they make is an Endpoint Detection and Response (EDR) product.
The kind of thing that goes on every laptop, server, and workstation in certain controlled environments (banks, government, etc.).
It was put there by your security team.
As a corporate IT tool, I can see how Huntress ought to allow my IT department or my manager or my corporate counsel access to my browser history and everything I do, but I'm still foggy on why Huntress grants themselves that level of access automatically.
Sure, a peek into what the bad guys do is neat, and the actual person here doesn't deserve privacy for his crimes. But I'd love a much clearer explanation of why they were able to do this to him, and of how, if I were an IT manager choosing to deploy this software, someone who works at Huntress wouldn't be able to just pull up one of my employees' browser history or do any other investigating of their computers.
It's a relatively common model, with MDR and MSSP providers doing similar things. I don't see it as much with EDR providers though.
In general, if you're using a company-owned device (the target for this product and many others like it), you should always assume everything is logged.
In the EU, employees have an expectation of privacy even on their corporate laptops. It is common for, e.g., union workers to use corporate email to communicate, and the employer is not allowed to breach privacy there. Even chatter between workers is reasonably private by default.
I suspect, if the attacker is inside the EU, this article is technically a blatant breach of the GDPR. Not that the attacker will sue you for it, but customers might find this discomforting.
The key difference here is that pen testing, as well as IT testing, is very explicitly scoped out in a legal contract, and part of that is that users have to be told about, and consent to, monitoring for relevant business purposes.
What happened in this blog post is still outside of that scope, obviously. I doubt that Huntress could claim that their customer here was clearly told that their activity might be monitored in this way, the way a "Consent to Monitoring" popup at every login on corporate machines does it.
So if <bad actor> in this writeup read your pitch and decided to install your agent to secure their attack machine, it sounds like they "trusted you with this access". You used that access to surveil them, decide that you didn't approve of their illegal activity, and publish it to the internet.
Why should any company "trust you with this access"? If one of your customers is doing what looks to one of your analysts to be cooking their books, do you surveil all of that activity and then make a blog post about them? "Hey everyone here, it's Huntress showing how <company> made the blunder of giving us access to their systems, so we did a little surprise finance audit of them!"
If folks understood this better, there would be less reason for software like Huntress' EDR to exist.
I suspect this is deliberate.
But some of these, like BloodHound, are not really telling you much you didn't know. They are tools to make exploiting access, whether authorized or otherwise, easier and more automated. Hell, even in the case of Cobalt Strike, they do their best to limit who can obtain it and chase down rogue copies, because those get used for real attacks.
I'm not really saying anything should (or can) be done about this. Just ruminating about it; after many years in the industry, seeing a list of a mostly open-source stack used for every aspect of cybercrime still surprises me with just how good a job we've done of equipping malicious actors. For all the high-minded talk of making everyone more secure, a lot of this just seems to be done for a mixture of bragging rights, ego, and sharing things with each other to make our offensive-sec jobs a bit easier.
> We knew this was an adversary, rather than a legitimate user, based on several telling clues. The standout red flag was that the unique machine name used by the individual was the same as one that we had tracked in several incidents prior to them installing the agent.
So in any other context, they probably wouldn't do any digging into the machine or user history, but they did this time because they already had high confidence of malicious use from this endpoint.
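Mechanically, the match they describe is probably nothing fancier than this (a hypothetical sketch; the hostnames and function names are invented, not Huntress's actual code):

    // Hypothetical sketch of the indicator match described above:
    // compare an endpoint's reported machine name against hostnames
    // seen in prior incidents. All names and data here are invented.
    const knownBadHostnames = new Set<string>([
      "DESKTOP-EXAMPLE1", // placeholder indicators from past incidents
      "WIN-EXAMPLE2",
    ]);

    function isKnownThreatActor(reportedHostname: string): boolean {
      return knownBadHostnames.has(reportedHostname.toUpperCase());
    }

    console.log(isKnownThreatActor("desktop-example1")); // true

A hostname is a weak indicator on its own (collisions happen, as others point out downthread), which is presumably why they lean on "several telling clues" rather than this one match.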
Some random person downloaded Huntress to try it out. Not a company. Not through IT. Just clicked "start trial" like you might with any software. Were they trying to figure out how to get around it? We have no idea!
Huntress employees then decided - based on a hostname that matched something in their private database - to watch everything this person did for three months. Their browser history, their work patterns, what tools they used, when they took breaks.
Then they published it.
The "but EDR needs these permissions!" comments are completely missing the point. Yeah, we know EDR is basically spyware. The issue is that Huntress engineers personally have access to trial user data and apparently just... browse it when they feel like it? Based on hostname matches???
Think about what they're saying: they run every trial signup against their threat intel database. If you match their criteria - which could be as weak as a hostname collision - their engineers start watching you. No warrant. No customer requesting it. No notification. Just "this looks interesting, let's see what they're up to."
Their ToS probably says something vague about "security monitoring" but I doubt it says "we reserve the right to extensively surveil individual trial users for months and publish the results if we think you're suspicious." And even if it did, that doesn't make it right or legal.
They got lucky this time - caught an actual attacker. But what about next time? What about the security researcher whose hostname happens to match? The pentester evaluating their product? Hell, what about corporate users whose hostname accidentally matches something in their database?
The fact that they thought publishing this was a good idea tells you a lot. This isn't some one-off investigation. This is, apparently, how they operate.
A person like that obviously has extremely poor operational security and is therefore of low competence.
Competent actors likely use virtualization, or, in cases where the software is adversarial and may detect virtualization, dedicated physical machines (e.g. cheap mini PCs) on isolated, managed networks (e.g. connections routed through a commercial VPN or a residential proxy) not under the control of the machine itself.
Also, styxmmarket doesn't appear to be a dark-web marketplace/forum in any way. It doesn't even have an onion address? It has a .com domain, something that should be easy for the authorities to seize. It's probably a honeypot of some kind.
Anyone who knows anything about macOS knows that it is not possible to disable System Integrity Protection without rebooting into Recovery (an environment you can't actually collect events from). So their "detection" is just some random guy typing "csrutil disable" in their terminal and it doing absolutely nothing. I would not be surprised if there is some similarly dumb explanation here that they missed, which would make for a substantially less interesting story.
I work on a REM team in a SOC for a big finance company all you US people know. An employee can hardly fart in front of their corporate machine without us knowing about it. How do you all think managed cybersecurity works?
In fact, I have worked at several organizations in which this type of activity would be a terminable offense.