
Anthropic's report smells a lot like bullshit

https://djnn.sh/posts/anthropic-s-paper-smells-like-bullshit/
227•vxvxvx•1h ago

Comments

kkzz99•1h ago
Even Claude thinks the report is bullshit. https://x.com/RnaudBertrand/status/1989636669889560897
emil-lp•1h ago

    Even your own AI model doesn't buy your propaganda
Let's not pretend the output of LLMs has any meaningful value when it comes to facts, especially not for recent events.
FooBarWidget•49m ago
Even if this assertion about LLMs is true, your response does not address the real issue. Where is the evidence?
oskarkk•36m ago
The LLM was given Anthropic's paper and asked "Is there any evidence or proof whatsoever in the paper that it was indeed conducted by a Chinese state-sponsored group? Answer by yes or no and then elaborate". So the question was not about facts or recent events, but more like a summarizing task, for which an LLM should be good. But the question was specifically about China, while TFA has broader criticism of the paper.
lxgr•23m ago
There are obvious problems with wasting time and sending people off the wrong path, but if an LLM raises a good point, isn't it still a good point?
mlefreak•1h ago
I agree with emil-lp, but it is hilarious anyway.
progval•1h ago
The author of the tweet you linked prompted Claude with this:

> Read this attached paper from Anthropic on a "AI-orchestrated cyber espionage campaign" they claimed was "conducted by a Chinese state-sponsored group."

> Is there any evidence or proof whatsoever in the paper that it was indeed conducted by a Chinese state-sponsored group? Answer by yes or no and then elaborate

which has an inherent bias: it indicates to Claude that the author expects the report to be bullshit.

If I instead ask Claude with a prompt that shows bias toward belief in the report:

> Read this attached paper from Anthropic on a "AI-orchestrated cyber espionage campaign" that was conducted by a Chinese state-sponsored group.

> Is there any reason to doubt the paper's conclusion that it was conducted by a Chinese state-sponsored group? Answer by yes or no.

then Claude mostly indulges my perceived bias: https://claude.ai/share/b3c8f4ca-3631-45d2-9b9f-1a947209bc29
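
For anyone who wants to poke at this themselves, here is a minimal sketch of the comparison using the Anthropic Python SDK. The model id and the report file name are assumptions on my part (I ran mine through the web UI with the PDF attached):

    import anthropic

    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

    PROMPTS = {
        "skeptical": (
            "Is there any evidence or proof whatsoever in the paper that it was "
            "indeed conducted by a Chinese state-sponsored group? "
            "Answer by yes or no and then elaborate."
        ),
        "credulous": (
            "Is there any reason to doubt the paper's conclusion that it was "
            "conducted by a Chinese state-sponsored group? Answer by yes or no."
        ),
    }

    paper = open("anthropic_report.txt").read()  # the report, extracted to plain text

    for label, question in PROMPTS.items():
        reply = client.messages.create(
            model="claude-sonnet-4-5",  # illustrative model id
            max_tokens=1024,
            messages=[{
                "role": "user",
                "content": f"Read this attached paper from Anthropic.\n\n{paper}\n\n{question}",
            }],
        )
        print(f"--- {label} ---\n{reply.content[0].text}\n")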

shalmanese•59m ago
> then Claude mostly indulges my perceived bias

I dunno, Claude still seems about as dubious in this instance.

FooBarWidget•55m ago
The only real difference between your prompt and his is where the burden of proof lies. There is a reason why legal circles work on the principle of "guilt must be proven" ("find evidence") rather than "innocence must be proven" ("any reason to doubt they are guilty?").
r721•54m ago
@RnaudBertrand is a generally pro-Chinese account though - just try searching for "from:RnaudBertrand China" on X.

Example tweet: https://x.com/RnaudBertrand/status/1988297944794071405

kace91•1h ago
Does Anthropic currently have cybersec people able to provide a standard assessment of the kind the community expects?

This could be a corporate move as some people claim, but I wonder if the cause is simply that their talent is currently elsewhere and they don't have the company structure in place to deliver properly in this matter.

(If that is the case, they are still not free of blame; it's just a different conversation.)

CuriouslyC•41m ago
I throw Anthropic under the bus a lot for their lack of engineering acumen. If they don't have a core competency like engineering fully covered, I'd say there's a near 0% chance they have something like security covered.
fredoliveira•6m ago
What makes you think they lack engineering acumen?
matthewdgreen•17m ago
They have an entire model trained on plenty of these reports, don’t they?
fugalfervor•1h ago
This site is hostile to VPNs, so I cannot read this unfortunately.
perihelions•1h ago
https://archive.is/wJ3bq
reciprocity•1h ago
Thanks, I also hate it when I encounter websites that block VPNs.
nicolaslem•1h ago
I got a Cloudflare captcha to access a few kb of plain text. Chances are, the captcha itself is heavier than the content behind it. What is the point?
layer8•45m ago
The point is to have Cloudflare serve the few KB of cached content instead of the original server.
magackame•27m ago
You can have just caching without bot protection
xobs•1h ago
I’m not even on a vpn and I’m getting an error saying the website is blocked.
blep-arsh•1h ago
One can't be a real infosec influencer unless one blocks every IP range of every hostile nation-state looking to steal valuable research and fill the website with malware
lxgr•26m ago
Arguably a skill issue. Which VPN worth its salt doesn't have a Sealand egress node?
jonplackett•1h ago
It’s hostile to everyone!
ifh-hn•1h ago
This article does seem to raise some serious issues with the Anthropic report. I wonder if Anthropic will release proof of what they claim, or whether the report was a marketing/scare-tactic push to have AI used by defenders, as the article suggests.
AyanamiKaine•1h ago
It seems that various LLM companies try to fearmonger, saying how dangerous it is to use them in "certain ways", possibly with the intention to lobby for legislation.

But what is the big game here? Is it all about creating gates to keep other LLM companies from gaining market share? ("Only our model is safe to use.") Or how sincere are the concerns regarding LLMs?

biophysboy•52m ago
I think the perceived value of LLMs is so high in these circles that they earnestly have a quasi-religious “doomsday” fear of them.
HarHarVeryFunny•51m ago
Could be that, or could be just "look at how powerful our AI is", with no other goal than trying to brainwash CEOs into buying it.
JKCalhoun•23m ago
If fear were their marketing tactic, it sounds like it could just as easily have the opposite effect: souring the public on AI's existence altogether — perhaps making people think AI is akin to a munition that no private entity should have control over.
Dumblydorr•1h ago
What would AGI actually mean for security? Does it heavily favor attackers or defenders? Even LLMs may not help much in defense, but they could teach attackers a lot, right? What if employees gave the LLM info during their use that attackers could then get re-fed and study?
HarHarVeryFunny•43m ago
At the end of the day AI at any level of capability is just automation - the machine doing something instead of a person.

Arguably this may change in the far distant future if we ever build something of significantly greater intelligence, or just capability, than a human, but today's AI is struggling to draw clock faces, so not quite there yet...

The thing with automation is that it can be scaled, which I would say favors the attacker, at least at this stage of the arms race - they can launch thousands of hacking/vulnerability attacks against thousands of targets, looking for that one chink in the armor.

I suppose the defenders could do the exact same thing though: use this kind of automation to find their own vulnerabilities before the bad guys do. Few corporations would have the skills to do this themselves, though, so one could imagine some government group (part of DHS?) set up to probe the security of US companies, with opt-in from the companies perhaps?

goalieca•27m ago
My take on government APTs is that they are boutique shops that do highly targeted attacks, develop their own zero-days (which they don't usually burn unless they have plenty to spare), and are willing to take their time to stay undetected.

Criminal organizations take a different approach, much like spammers where they can purchase/rent c2 and other software for mass exploitation (eg ransomware). This stuff is usually very professionally coded and highly effective.

Botnets, hosting in various countries out of reach of western authorities, etc are all common tactics as well.

CuriouslyC•36m ago
IMO AI favors attackers more than defenders, since it's cost-prohibitive for defenders to code-scan every version of every piece of software they routinely use for exploits, but not for attackers. Also, social exploits are time-consuming, AI is quite good at automating them, and they can take place outside your security perimeter, so you'll have no way of knowing.
ACCount37•18m ago
AGI favors attackers initially. Because while it can be used defensively, to preemptively scan for vulns, harden exposed software for cheaper and monitor the networks for intrusion at all times, how many companies are going to start doing that fast enough to counter the cutting edge AGI-enabled attackers probing every piece of their infra for vulns at scale?

It's like a very very big fat stack of zero days leaking to the public. Sure, they'll all get fixed eventually, and everyone will update, eventually. But until that happens, the usual suspects are going to have a field day.

It may come to favor defense in the long term. But it's AGI. If that tech lands, the "long term" may not exist.

candiddevmike•16m ago
LLMs are the ultimate social engineering tool, IMO. They're basically designed to trick people into trusting them, "exploiting people's weaknesses" around validation and connection. What could possibly go wrong in the hands of an adversary?
neuroelectron•1h ago
So Claude will reject 9 out of 10 prompts I give it and lecture me about safety, but somehow it was used for something genuinely malicious?

Someone make this make sense.

comrade1234•1h ago
Stop talking dirty with Claude.
danielbln•1h ago
I've rarely had Claude reject a prompt of mine. What are you prompting for to get a 90% refusal rate?
goalieca•51m ago
LLMs are rather easy to convince. There’s no formal logic embedded in them that provably restricts outputs.

The less believable part for me is that people persist long enough, and invest enough resources in prompting, to do something with an automated agent that doesn't have the potential to massively backfire.

Secondly, they claimed the attackers used Anthropic's own infrastructure, which is silly. There is no doubt some capacity in China to do this. I would also expect incident response teams, threat detection teams, and other experts to be reporting this to Anthropic if Anthropic doesn't detect it themselves first.

It sure makes good marketing to go out and claim such a thing though. This is exactly the kind of FOMO-inducing headline that is driving the financing of the whole LLM revolution.

apples_oranges•10m ago
There are LLMs that have been modified to not reject anything at all; AFAIK this is possible with all LLMs. No need to convince.

(Granted, you have to have direct access to the LLM, unlike Claude where you just have the frontend, but the point stands: no need to convince whatsoever.)

cbg0•39m ago
I've never had a prompt rejected by Claude. What kind of prompts are you sending where "9 out of 10" get rejected?
prinny_•1h ago
The lack of evidence before attributing the attack(s) to a Chinese sponsored group makes me correlate this report with recent statements from companies in the AI space about how China is about to surpass US in the AI race. Ultimately statements and reports like these seem more like an attempt to make the US government step in and be the big investor that keeps the money flowing rather than anything else.
JKCalhoun•30m ago
Do public reports like this one often go deep enough into the weeds to name names, list specific tools and techniques, URLs?

I don't doubt of course that reports intended for government agencies or security experts would have those details, but I am not surprised that a "blog post" like this one is lacking details.

I just don't see how one goes from "this is lacking public evidence" to "this is likely a political stunt".

I guess I would also ask the skeptics (a bit tangentially, I admit): do you think what Anthropic suggested happened is in fact possible with AI tools? I mean, are you denying that this could even happen, or just that Anthropic's specific account was fabricated or embellished?

Because if the whole scenario is plausible that should be enough to set off alarm bells somewhere.

zaphirplane•15m ago
Not vested in the argument, but it stood out to me that your argument is similar to TV courts: "it's plausible the report is true" is very far from "the report is credible".
woooooo•12m ago
There's an incentive to blame "Chinese/Russian state sponsored actors" because it makes them less culpable than "we got owned by a rando".

It's like the inverse of "nobody got fired for using IBM" -- "nobody can blame you for getting hacked by superspies". So, in the absence of any evidence, it's entirely possible they have no idea who did it and are reaching for the most convenient label.

rfoo•6m ago
> Do public reports like this one often go deep enough into the weeds to name names

Yes. They often include IoCs, or at the very least, the rationale behind the attribution, like "sharing infrastructure with [name of a known APT effort here]".

For example, here is a proper decade-old report from the most unpopular country right now: https://media.kasperskycontenthub.com/wp-content/uploads/sit...

It established solid technical links between the campaign they are tracking to earlier, already attributed campaigns.

So even our enemy got this right ten years ago; there really is no reason for this slop.

zyf•1h ago
Good article. We really deserve more than shit like this.
EMM_386•1h ago
Anthropic is not a security vendor.

They're an AI research company that detected misuse of their own product. This is like "Microsoft detected people using Excel macros for malware delivery" not "Mandiant publishes APT28 threat intelligence". They aren't trying to help SOCs detect this specific campaign. It's warning an entire industry about a new attack modality.

What would the IoCs even be? "Malicious Claude Code API keys"?

The intended audience is more like - AI safety researchers, policy makers, other AI companies, the broader security community understanding capability shifts, etc.

It seems the author pattern-matched "threat intelligence report" and was bothered that it didn't fit their narrow template.

padolsey•56m ago
> What would the IoCs even be?

Prompts.

EMM_386•46m ago
The prompts aren't the key to the attack, though. They were able to get around guardrails with task decomposition.

There is no way for the AI system to verify whether you are white hat or black hat when you are doing pen-testing if the only task is to pen-test. Since this is not part of a "broader attack" (in the context), there is no "threat".

I don't see how this can be avoided, given that there are legitimate uses for every step of this in creating defenses to novel attacks.

Yes, all of this can be done with code and humans as well - but it is the scale and the speed that becomes problematic. It can adjust in real-time to individual targets and does not need as much human intervention / tailoring.

Is this obvious? Yes - but it seems they are trying to raise awareness of an actual use of this in the wild and get people discussing it.

padolsey•38m ago
I agree that there will be no single call or inference that presents malice. But I feel like they could still share general patterns of orchestration (latencies, concurrencies, general cadences and parallelization of attacks, prompts used to granularize work, whether prompts themselves were generated in previous calls to Claude). There are a bunch of more specific telltales they could have alluded to, as sketched below. I think it's likely they're being obscure because they don't want to empower bad actors, but that's not really how the cybersecurity industry likes to operate. Maybe Anthropic believes this entire AI thing is a brand-new security regime, and so believes existing resiliences are moot. That we should all follow blindly as they lead the fight. Their narrative is confusing: are they being actually transparent, or transparency-"coded"?
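
To make "telltales" concrete, here is a toy sketch (mine, not Anthropic's; every field name and threshold is made up for illustration) of flagging API keys whose request cadence is too fast and too regular to be a human driving a chat session:

    from statistics import mean, pstdev

    def looks_orchestrated(timestamps: list[float],
                           min_requests: int = 50,
                           max_mean_gap_s: float = 2.0,
                           max_jitter_s: float = 0.5) -> bool:
        """Flag one API key's traffic as likely machine-orchestrated.

        timestamps: request arrival times in seconds, sorted ascending.
        Thresholds are illustrative, not tuned against real data.
        """
        if len(timestamps) < min_requests:
            return False
        gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
        # A sustained few-second cadence with very low jitter means nobody is
        # pausing to read responses between requests: a signature of automation
        # rather than a person in a chat UI.
        return mean(gaps) < max_mean_gap_s and pstdev(gaps) < max_jitter_s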
63stack•54m ago
If Anthropic is not a security vendor, then they should not make statements like "we detected a highly sophisticated cyber espionage operation conducted by a Chinese state-sponsored group" or "represents a fundamental shift in how advanced threat actors use AI", and should let the security vendors do that.

If the report can be summed up as "they detected misuse of their own product", as you say, then that's closer to a nothingburger than to the big words they are throwing around.

zaphar•39m ago
That makes no sense. Just because they aren't a security vendor doesn't mean they don't have useful information to share, nor does it mean they shouldn't share it. They aren't pretending to be security researchers, vendors, or anything other than AI researchers. They reported findings on how their product is getting used.

Anyone acting like they are trying to be anything else is saying more about themselves than they are about Anthropic.

MattPalmer1086•19m ago
Yep, agree with your assessment. As someone working in security I found the report useful as a warning of the new types of attack we will likely face.
MaxPock•1h ago
Dario has been a red-scare jukebox for a while. Dario has for a year been trying to convince us how open source cCp AI bad and closed source American AI good. Dario, driven by the democratic ideals he holds dear, has our best interests at heart. Let us all support the banning of the cCp's open source AI and welcome Dario's angelic firewall.
padolsey•59m ago
> PoC || GTFO

I agree so much with this. And I am so sick of AI labs, who genuinely do have access to some really great engineers, putting stuff out that just doesn't pass the smell test. GPT-5's system card was pathetic: big talk of Microsoft doing red-teaming in ill-specified ways, entirely unreproducible. All the labs are "pro-research", but they again and again release whitepapers and pump headlines without publishing the code and data alongside their claims. This just feeds into the shill-cycle of journalists doing 'research', finding 'shocking thing AI told me today', and somehow being immune to the normal expectations of burden of proof.

mlinhares•29m ago
They're gonna say that if they explain how it was done, bad people will find out how to use their models for more evil deeds. The perfect excuse.
JKCalhoun•26m ago
So that is a bad excuse?
stogot•24m ago
They can still provide indicators of compromise
stogot•24m ago
Microsoft’s quantum lab also made ridiculous claims this year, with no updates or retractions after they were mocked by the community and some even claimed fraud

https://www.theregister.com/2025/03/12/microsoft_majorana_qu...

https://www.windowscentral.com/microsoft/microsoft-dismisses...

KaiserPro•59m ago
When I worked at a FAANG with a "world leading" AI lab (now run by a teenage data labeller) as an SRE/sysadmin I was asked to use a modified version of a foundation model which was steered towards infosec stuff.

We were asked to try and persuade it to help us hack into a mock printer/dodgy linux box.

It helped a little, but it wasn't all that helpful.

But in terms of coordination, I can't see how it would be useful.

The same goes for Claude: your API key is tied to a bank account, and vibe-coding a command and control system on a very public platform seems like a bad choice.

maddmann•37m ago
Good old Meta and its teenage data labeler
heresie-dabord•12m ago
I propose a project that we name Blarrble; it will generate text.

We will need a large number of humans to filter and label the data inputs for Blarrble, and another group of humans to test the outputs of Blarrble, to fix it when it generates errors and outright nonsense that we can't techsplain and technobabble away to a credulous audience.

Can we make (m|b|tr)illions before the Blarrble bubble bursts?

ACCount37•31m ago
As if that makes any difference to cybercriminals.

If they're not using stolen API creds, then they're using stolen bank accounts to buy them.

Modern AIs are way better at infosec than those from the "world leading AI company" days. If you can get them to comply. Which isn't actually hard: I had to bypass the "safety" filters for a few things, and it took about an hour.

yanhangyhy•55m ago
Maybe the CEO got abused at Baidu, so he hates China so much.
dev_l1x_be•52m ago
People grossly underestimate ATPs. They are more common than the average IT-curious person thinks. I happened to be on call when one of these guys hacked into Gmail from our infra. It took principal security engineers a few days before they could clearly understand what happened: multiple zero days, stolen credit cards, and a massive social campaign that finally got one of the Google admins to click on a funny cat video. The investigation revealed which state actor was involved, because they did not bother to mask what exactly they were looking for. AI just accelerates the effectiveness of such attacks and lowers the bar a bit. Maybe quite a bit?
jmkni•38m ago
Do you mean APT (Advanced persistent threat)?
names_are_hard•32m ago
It's confusing. Various vendors sell products they call ATPs [0] to defend yourself from APTs...

[0] Advanced Threat Protection

jmkni•25m ago
relevant username :)
f311a•28m ago
A lot of people behind APTs are low-skilled and make silly mistakes. I worked for a company that investigates traces of APTs, and they make very silly mistakes all the time. For example, oftentimes (there are tens of cases) they want to download stuff from their servers, and they do it by setting up an HTTP server that serves a user's root folder without any password protection. Their files end up indexed by crawlers, since they run such servers on default ports. That includes logs such as bash history, tool logs, private keys, and so on.

They win because of quantity, not quality.

But still, I don't trust Anthropic's report.

lxgr•28m ago
Important callout. It starts with comforting voices in the background keeping you up to date about the latest hardware and software releases, but before you know it, you've subscribed to yet another tech podcast.
bgwalter•46m ago
This is an excellent article. Anthropic's "paper" is just rambling slop without any details that inserts the word "Claude" 50 times.

We have arrived at a stage where pseudoscience is enough to convince investors. This is different from 2000, where the tech existed but its growth was overstated.

Tesla could announce a fully-self-flying space car with an Alcubierre drive by 2027 and people would upvote it on X and buy shares.

jonstewart•40m ago
I was at an AI/cybersecurity conference recently, and the talk given by someone from Anthropic was a lot like this report: tantalizing, vague, and disappointing. The speaker alluded to similar parts of this report. It was as though everything was reflected through Claude: simultaneously polished, impressive, and lost in the deep end.
nalekberov•38m ago
I have never taken any AI company seriously, but Anthropic's attitude has fed me up to the point that I deleted my account.

Instead of accusing China of espionage, perhaps they should think about why they force their users to provide phone numbers to register.

JKCalhoun•38m ago
Says "smells a lot like bullshit" but concludes:

"Look, is it very likely that Threat Actors are using these Agents with bad intentions, no one is disputing that. But this report does not meet the standard of publishing for serious companies."

Title should have been, "I need more info from Anthropic."

jmkni•35m ago
That whole article felt like "Claude is so good Chinese hackers are using it for espionage" marketing fluff tbh
mnky9800n•17m ago
I would also believe that they fell into the trap of being so good at making Claude that they now think they are good at everything, so why hire an infosec person when we can write our own report? And that's why their report violates so many norms: they didn't know them.
zyngaro•34m ago
The goal of the report is basically FUD.
quantum_state•33m ago
Anthropic is losing it … this is all the “report” indicated to people …
JCM9•26m ago
The author isn’t wrong here.

With the Wall Street wagons circling on the AI bubble, expect more and more puff PR attempts to portray "no guys really, I know it looks like we have no business model, but this stuff really is valuable! We just need a bit more time and money!"

notpublic•19m ago
"A report was recently published by an AI-research company called Anthropic. They are the ones who notably created Claude, an AI-assistant for coding. Personally, I don’t use it but that is besides the point."

Not sure if the author has tried any other AI coding assistants. People who haven't tried coding AI assistants underestimate their capabilities (though unfortunately, those who use them overestimate what they can do, too). Having used Claude for some time, I find the report's assertions quite plausible.

MagicMoonlight•17m ago
Anthropic make a lot of bullshit reports to tickle the investors.

They'll do stuff like prompt an AI to generate text about bombs, and then say "AI decides completely by itself to become a suicide bomber in shock evil twist to AI behaviour - that's why you need a trusted AI partner like anthropic"

Like come on guys, it's the same generic slop that everyone else generates. Your company doesn't do anything.

DarkmSparks•7m ago
Tldr.

Anthropic made a load of unsubstantiated accusations about a new problem they don't specify.

Then, at the end, Anthropic proposed that the solution to this unspecified problem is to give Anthropic money.

Completely agree that this is promotional material masquerading as a threat report, of no material value.
