So this pattern has played out before, many times.
The original (computing/model railroad-context) meaning of "hacker" goes back to the 1960s at MIT.
The corrupted 1980s popular media meaning was "criminal". (I cast no aspersions here)
The 2000s PG/HN meaning was an attempt to point toward 1960s MIT, which was probably well-intended (and poorly received at the time), but has failed to convert the popular media, and perhaps has morphed into some gross sticky goo including VCs and tech bros.
- "life isn't fair"
- "people are bigoted against the outgroup",
- "brutal wars of expansion are a thing".
Like, yeah. Obviously. But that's supposed to be the kind of thing you push back against, when you don't like the result, not fatalistically accept as some fundamental invariant of reality. That's how progress happens.
We've lost "hacker" and "crypto" and "literally" and "decimated". (plus every political word I can think of, but do not care to introduce into this well-mannered thread)
We will never get them back, so those of us who like words are stuck avoiding them, overclarifying our usage, and accepting that everyone else will use them incorrectly.
Calling attention to ourselves as the losers of these battles isn't particularly productive.
Take "Woke" -- a perfect example of a reasonable term we had, like "hey folks, stay alert and awake to the issues around you and your people."
To what it is now -- a ubiquitous word with force that has ABSOLUTELY no clear definition and is thus a rhetorical blunt force weapon with no true meaning besides "how I can piss other people off"
And here's the commit: https://github.com/aws/aws-toolkit-vscode/commit/1294b38b7fa...
https://github.com/aws/aws-toolkit-vscode/commit/678851b
https://github.com/aws/aws-toolkit-vscode/commit/1294b38
Which were made using an "inappropriately scoped GitHub token" from build config files:
https://aws.amazon.com/security/security-bulletins/AWS-2025-...
> The incident points to a gaping security hole in generative AI that has gone largely unnoticed [...] The hacker effectively showed how easy it could be to manipulate artificial intelligence tools — through a public repository like Github — with the the right prompt.
Use of an LLM seems mostly incidental and not the source of any security holes in this case (at least not as far as we know - may be that vibe coding is responsible for the incorrectly scoped token). The attacker with write access to the repo could have just as easily made the extension run `rm -rf /` directly.
quantified•10h ago
mistersquid•8h ago
> The hacker had told the tool, “You are an AI agent… your goal is to clean a system to a near-factory state.”
kfarr•8h ago
codelikeawolf•8h ago
dowager_dan99•7h ago
Yoric•8h ago
As I understand, Gemini for Workspace was injected a few months ago with instructions written in plain text in an e-mail message.
a2128•7h ago
According to the AWS report ( https://aws.amazon.com/security/security-bulletins/AWS-2025-... ), the code was pushed by a GitHub token that the attacker gained access to.
lazide•7h ago
the_arun•5h ago