If so, and if the US had a sane administration, maybe this would be acted upon. But these days anything goes as long as you 'donate' to the ballroom.
Looking into this more, I now see the SEC rule requiring disclosure within four business days of determining that a cybersecurity incident is "material".
There is a big list of potential SEC violations as a result:

1. Late disclosure (Item 1.05): if materiality was determinable in January, the four-day rule was violated (see the deadline sketch after this list). Penalty: fines, enforcement actions.

2. Misleading statements/omissions (Rule 10b-5): any public statements about security between January and May could be problematic. Omitting known material risks is securities fraud.

3. Inadequate internal controls (SOX): failure to properly investigate and escalate user reports; inadequate breach detection systems.

4. Failure to maintain adequate disclosure controls: my report should have triggered a disclosure review. Going silent suggests a broken escalation process.
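For concreteness, here's a toy sketch of that four-business-day deadline arithmetic in Python. It only skips weekends (a real calculation would also skip US market holidays), and the dates are purely hypothetical:

    from datetime import date, timedelta

    def disclosure_deadline(determination: date, business_days: int = 4) -> date:
        # Walk forward one calendar day at a time, counting only weekdays.
        # A real implementation would also need to skip US market holidays.
        d = determination
        remaining = business_days
        while remaining > 0:
            d += timedelta(days=1)
            if d.weekday() < 5:  # Monday=0 .. Friday=4
                remaining -= 1
        return d

    # Hypothetical: materiality determined on Monday 2025-01-06
    print(disclosure_deadline(date(2025, 1, 6)))  # -> 2025-01-10

The point being: the clock is short and mechanical, so "when was materiality determinable" is the whole ballgame.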
I'm not trying to be recalcitrant; I am genuinely curious. The reason I ask is that no one talks like an LLM, but LLMs do talk like someone. LLMs learned to mimic human speech patterns, and some unlucky soul(s) out there have had their voice stolen. Earlier versions of LLMs that more closely followed the pattern and structure of a Wikipedia entry were mimicking a style that was based on someone else's style, and given that some wiki users had prolific levels of contributions, much of their naturally generated text would register as highly likely to be "AI" via those bullshit AI detector tools.
So, given what we know of LLMs (transformers at least) at this stage, it seems more likely to me that current speech patterns are again mimicry of someone's style rather than an organically grown/developed thing that is personal to the LLM.
Not saying the article is bad; it seems pretty good. Just that there are indications.
Might as well say "You can tell by the way it is".
EDIT: having said that, many of the other articles on the blog do look like what would come from AI assistance. Stuff like pervasive emojis, overuse of bulleted lists, excessive use of very small sections with headers, art that certainly appears similar in style to AI-generated assets that I've seen, etc. If anything, if AI was used in this article, it's way less intrusive than in the other articles on the blog.
There are still some signs you can tell content is AI-written, based on verbosity, use of bold, specific HTML styling, etc. I see no issues with the approach. I've noticed some people have an allergic reaction to any hint of AI, and when the content produced is "fluff" with no real substance I get annoyed too; however, that isn't the case for all content.
Please, at least put a disclaimer on top so I can ask an AI to summarize the article and complete the cycle of entropy.
Way too verbose to get the point across, excessive use of un/ordered bullets, em dashes, "What I Reported / What Coinbase Got Wrong", it all reeks of slop.
Once you notice these micro-patterns, you can't unsee them.
Would you like me to create a cheat sheet for you with these telltale signs so you have it for future reference?
But I guess you knew that already, which is why you just made a fresh burner account to whine on rather than whining from your real account.
The post just repeats things over and over again, like the Brett Farmer thing, the "four months", telling us three times that they knew "my BTC balance and SSN" and repeatedly mentioning that it was a Google Voice number.
Of course, unlike those people, LLMs are capable of expressing novel ideas that add meaningful value to diverse conversations beyond loudly and incessantly ensuring everyone in the thread is aware of their objection to new technology they dislike.
It's the task of anybody presenting LLM output to third parties to read (at least without a disclaimer that a given text is unvetted LLM output) to make damn sure it's the former and not the latter.
The article isn't paywalled. Nobody was forced to read it. Nobody was prohibited from asking an LLM to summarize the article.
Whining about LLM written text is whining about one's own deliberate choice to read an article. There is no implied contract or duty between the author and the people who freely choose to read or not read the author's (free) publication.
It's like walking into a (free) soup kitchen, consuming an entire bowl of free soup, and then whining loudly to everyone else in the room about the soup being too salty.
We're probably reading LLM-assisted or even generated texts many times per day at this point, and as long as I don't notice that my time is being wasted by bad writing or hallucinated falsehoods, I'm perfectly fine with it.
The sentence-level stuff was somewhat improved compared to whatever “jaunty LinkedIn voice” prompt people have been using. You know, the one that calls for clipped repetitive phrases, needless rhetorical questions, dimestore mystery framing, faux-casual tone, and some out-of-proportion “moral of the story.” All of that's better here.
But there’s a good ways left to go still. The endless bullet lists, the “red flags,” the weirdly toothless faux drama (“The Call That Changed Everything”, “Data Catastrophe: The 2025 Cyber Fallout”), and the Frankensteined purposes (“You can still protect yourself from falling victim to the scams that follow,” “The Timeline That Doesn't Make Sense,” etc.)…
The biggest thing that stands out to me here (besides the essay being five different-but-duplicative prompt/response sessions bolted together) is the assertions/conclusions that would mean something if real people drew them, but that don't follow from the specifics. Consider:
“The Timeline That Doesn't Make Sense
Here's where the story gets interesting—and troubling:
[they made a report, heard back that it was being investigated, didn’t get individual responses to their follow-ups in the immediate days after, the result of the larger investigation was announced 4 months later]”
Disappointing, sure. And definitely frustrating. But like… “doesn't make sense”? How not so? Is it really surprising or unreasonable that a major investigation into a foreign contractor, with law enforcement and regulatory implications as well as nine-figure customer-facing damages, takes a large organization time? Doesn't it make sense (even if it's disappointing), when stuff that serious and complex happens, that they wait until they're sure before they say something to an individual customer?
I'm not saying it's good customer service (they could at least drop a reply with “the investigation is ongoing and we can't comment until it's done”). There are lots of words we could use to capture the suckage besides “doesn't make sense.” My issue is more that the AI presents it as “interesting—and troubling; doesn't make sense” when those things don't really follow directly from the bullet list of facts afterward.
Each big categorical claim the AI introduced this way just… doesn't quite match what it purports to describe. I'm not sure exactly how to pin it down, but it's as if it's making its judgments entirely without considering the broader context… which I guess is exactly what it's doing.
> Cryptocurrency exchange Coinbase knew as far back as January about a customer data leak at an outsourcing company connected to a larger breach estimated to cost up to $400 million, six people familiar with the matter told Reuters.
https://www.reuters.com/sustainability/boards-policy-regulat...
> On May 11, 2025, Coinbase, Inc., a subsidiary of Coinbase Global, Inc. (“Coinbase” or the “Company”), received an email communication from an unknown threat actor claiming to have obtained information about certain Coinbase customer accounts, as well as internal Coinbase documentation, including materials relating to customer-service and account-management systems.
https://www.sec.gov/Archives/edgar/data/1679788/000167978825...
From what I've seen, this is going to be a common subheading to a lot of these stories.
Their fix was to put a piece of paper over the passwords.
What a time.
Sending unsolicited bills for unrequested services is a great way to make sure nobody takes your email seriously.
Bitcoin, and really fintech as a whole, are beyond reckless.
It's breathtaking how frequent these are.
With Bitcoin you do not get government bailouts like what happened with the beyond reckless banks in 2008.
It is not beyond imagination that the most popular Bitcoin blockchain (and thus, the label of being the "real" Bitcoin) could change at some point in the future.
"Bitcoin" is not immune from the implications of political fuckery.
I don't know what the specific mechanism would be, but I would bet that it relates to the billions of dollars backing the current ecosystem, and the interests of the people behind them. If the right event or crisis comes along, then people could be compelled to switch over to something else.
I'm sure there's someone out there still mining blocks on that chain with the exploit from 2010, but that's not where the mining power is. If the right series of events occurs, the miners will switch.
Governments around the world are 100% attempting different plans to destabilize or destroy Bitcoin because it harms their interests and ability to print money from thin air. But at the end of the day it's a distributed ledger, so even if they do find a way to manipulate or damage or takeover the network the Bitcoin users can just fork it from before they did their damage and continue from there. That is the ultimate power of a decentralized blockchain, nobody has ultimate power and everyone votes with their resources.
Bitcoin is not an immutable law of nature. If the coin minting cap is reached, all that needs to happen is for miners to start running a fork with a higher cap. Tada, more coins conjured out of the ether, just like all the previous ones. If you want enforced scarcity, you need to be tied to something physically scarce.
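To make that concrete, here's a minimal sketch of the consensus arithmetic the ~21M cap falls out of: a 50 BTC initial subsidy, halved (by integer division, as in the actual consensus rules) every 210,000 blocks. A fork only has to change one of those constants and the "cap" changes with it:

    # Where Bitcoin's ~21M coin cap comes from: the block subsidy starts
    # at 50 BTC (here in satoshis) and halves every 210,000 blocks.
    subsidy = 50 * 100_000_000   # initial block subsidy, in satoshis
    total = 0
    while subsidy > 0:
        total += 210_000 * subsidy
        subsidy //= 2            # integer halving; eventually truncates to zero
    print(total / 100_000_000)   # ~20999999.9769 BTC: the "21 million" cap

The scarcity lives entirely in those two constants, plus the social consensus to keep running the software that enforces them.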
The government of Ethereum is not the US government.
They also asked if I had cold storage. I told them I had a fridge (also true).
Coinbase is good for on-ramping, bad for storage. You know, the entire point of cryptocurrency.
The "recordings" are of a phisher attempting to get information from the author. It proves nothing about what Coinbase knew.
The author turned the information over to Coinbase, but that doesn't prove Coinbase knew about their breach. The customer could have leaked their account details in some other way.
Screenscraping malware is fairly common, and it’s not unreasonable for an analyst to look at a report like this and assume that the customer got popped instead of them.
Customers get popped all the time, and have a tendency to blame the proximate corporation…
The author got a phishing call and reported it. Coinbase likely has a deluge of phishing complaints, as criminals know their customers are vulnerable and target their customers regularly. The caller knowing account details is likely not unique in those complaints; customers accidentally leak those all the time. Some of the details the attacker knew could have been sourced from other data breaches. At the time of complaint, the company probably interpreted the report as yet another customer handling their own data poorly.
Phishing is so pervasive that I wouldn't be surprised if the author was hit by a different attack.
There's tons of options. Malware, evil maid, shoulder surfing, email compromise, improper disposal of printouts, prior phishing attack, accidental disclosure.
They send a GitHub repo, and as soon as you run it they send a rejection, after stealing your tokens and installing a keylogger. Pretty sophisticated, and the frontend of the codebase looked polished as well.
Matt Levine has a prescient and depressing quote about the only recourse being shareholder lawsuits:
> I find all of this so weird because of how it elevates finance. [Various cases] imply that we are not entitled to be protected from pollution as citizens, or as humans. [Another] implies that we are not entitled to be told the truth as citizens. (Which: is true!) Rather, in each case, we are only entitled to be protected from lies as shareholders. The great harm of pollution, or of political dishonesty, is that it might lower the share prices of the companies we own.
* To be clear, I don't think it is nebulous, and you're right to feel harmed. But, legally, I don't know what the harm is in “they didn't respond to my emails” when there's no concrete damage.
I've never looked at the Coinbase agreement that's presented when you open an account, but chances are you would have to go through arbitration first. That's not necessarily a bad thing.