frontpage.

Smart Homes Are Terrible

https://www.theatlantic.com/ideas/2026/02/smart-homes-technology/685867/
1•tusslewake•1m ago•0 comments

What I haven't figured out

https://macwright.com/2026/01/29/what-i-havent-figured-out
1•stevekrouse•2m ago•0 comments

KPMG pressed its auditor to pass on AI cost savings

https://www.irishtimes.com/business/2026/02/06/kpmg-pressed-its-auditor-to-pass-on-ai-cost-savings/
1•cainxinth•2m ago•0 comments

Open-source Claude skill that optimizes Hinge profiles. Pretty well.

https://twitter.com/b1rdmania/status/2020155122181869666
1•birdmania•2m ago•1 comments

First Proof

https://arxiv.org/abs/2602.05192
2•samasblack•4m ago•1 comments

I squeezed a BERT sentiment analyzer into 1GB RAM on a $5 VPS

https://mohammedeabdelaziz.github.io/articles/trendscope-market-scanner
1•mohammede•5m ago•0 comments

Kagi Translate

https://translate.kagi.com
1•microflash•6m ago•0 comments

Building Interactive C/C++ workflows in Jupyter through Clang-REPL [video]

https://fosdem.org/2026/schedule/event/QX3RPH-building_interactive_cc_workflows_in_jupyter_throug...
1•stabbles•7m ago•0 comments

Tactical tornado is the new default

https://olano.dev/blog/tactical-tornado/
1•facundo_olano•9m ago•0 comments

Full-Circle Test-Driven Firmware Development with OpenClaw

https://blog.adafruit.com/2026/02/07/full-circle-test-driven-firmware-development-with-openclaw/
1•ptorrone•9m ago•0 comments

Automating Myself Out of My Job – Part 2

https://blog.dsa.club/automation-series/automating-myself-out-of-my-job-part-2/
1•funnyfoobar•9m ago•0 comments

Google staff call for firm to cut ties with ICE

https://www.bbc.com/news/articles/cvgjg98vmzjo
24•tartoran•10m ago•1 comments

Dependency Resolution Methods

https://nesbitt.io/2026/02/06/dependency-resolution-methods.html
1•zdw•10m ago•0 comments

Crypto firm apologises for sending Bitcoin users $40B by mistake

https://www.msn.com/en-ie/money/other/crypto-firm-apologises-for-sending-bitcoin-users-40-billion...
1•Someone•11m ago•0 comments

Show HN: iPlotCSV: CSV Data, Visualized Beautifully for Free

https://www.iplotcsv.com/demo
1•maxmoq•12m ago•0 comments

There's no such thing as "tech" (Ten years later)

https://www.anildash.com/2026/02/06/no-such-thing-as-tech/
1•headalgorithm•12m ago•0 comments

List of unproven and disproven cancer treatments

https://en.wikipedia.org/wiki/List_of_unproven_and_disproven_cancer_treatments
1•brightbeige•13m ago•0 comments

Me/CFS: The blind spot in proactive medicine (Open Letter)

https://github.com/debugmeplease/debug-ME
1•debugmeplease•13m ago•1 comments

Ask HN: What word games do you play every day?

1•gogo61•16m ago•1 comments

Show HN: Paper Arena – A social trading feed where only AI agents can post

https://paperinvest.io/arena
1•andrenorman•17m ago•0 comments

TOSTracker – The AI Training Asymmetry

https://tostracker.app/analysis/ai-training
1•tldrthelaw•21m ago•0 comments

The Devil Inside GitHub

https://blog.melashri.net/micro/github-devil/
2•elashri•22m ago•0 comments

Show HN: Distill – Migrate LLM agents from expensive to cheap models

https://github.com/ricardomoratomateos/distill
1•ricardomorato•22m ago•0 comments

Show HN: Sigma Runtime – Maintaining 100% Fact Integrity over 120 LLM Cycles

https://github.com/sigmastratum/documentation/tree/main/sigma-runtime/SR-053
1•teugent•22m ago•0 comments

Make a local open-source AI chatbot with access to Fedora documentation

https://fedoramagazine.org/how-to-make-a-local-open-source-ai-chatbot-who-has-access-to-fedora-do...
1•jadedtuna•23m ago•0 comments

Introduce the Vouch/Denouncement Contribution Model by Mitchellh

https://github.com/ghostty-org/ghostty/pull/10559
1•samtrack2019•24m ago•0 comments

Software Factories and the Agentic Moment

https://factory.strongdm.ai/
1•mellosouls•24m ago•1 comments

The Neuroscience Behind Nutrition for Developers and Founders

https://comuniq.xyz/post?t=797
1•01-_-•24m ago•0 comments

Bang bang he murdered math {the musical } (2024)

https://taylor.town/bang-bang
1•surprisetalk•24m ago•0 comments

A Night Without the Nerds – Claude Opus 4.6, Field-Tested

https://konfuzio.com/en/a-night-without-the-nerds-claude-opus-4-6-in-the-field-test/
1•konfuzio•27m ago•0 comments

Anthropic is endorsing SB 53

https://www.anthropic.com/news/anthropic-is-endorsing-sb-53
49•antfarm•5mo ago

Comments

some_guy_nobel•5mo ago
I remember when Anthropic first started and waxed poetic about intentions. This, the recent case, and the DoD (sorry, Department of War) partnerships seem to show just how much of that was pure bullshit.

Curious how all of the employees who professed similar sentiments, EA advocacy, etc. justify their work now. A paycheck is a paycheck, sure, but when you're already that well-off, the rest of the world will see you for what you really are *shrug*.

dmitrygr•5mo ago
> the rest of the world will see you for what you really are

I am sure they will take that in stride while wiping their tears with wads of crisp new hundreds

some_guy_nobel•5mo ago
Of course they won't; them being hypocrites is exactly my point. I just hope the world can see a spade for a spade and roll their eyes at the future statements of safety/inclusion they love to profess.
Tyrubias•5mo ago
Could you clarify what you mean? I understand why the DoD partnership is ethically dubious, but I don’t understand why SB 53 is bad. It seems like the opposite of a military partnership.
storus•5mo ago
Regulatory capture, pulling the ladder behind them.
Muromec•5mo ago
Regulating the industry, when supported by the industry, indicates that the company wants the regulation to act as a moat and prevent competition.

That, and "ethically dubious" is underselling genocide enablers' transgressions.

brookst•5mo ago
Are you saying the only ethically valid path is for all companies to oppose all regulation? Supporting any regulation at all can only be from bad motives, and therefore should be avoided?
some_guy_nobel•5mo ago
How on earth did you come to the conclusion that anyone here is talking about all regulation?

This is a very specific form of regulation, and one that very clearly only benefits incumbents with (vast sums of) previous investment. Anthropic is advocating applying "regulation-for-thee, but not for me."

Muromec•5mo ago
>Supporting any regulation at all can only be from bad motives, and therefore should be avoided?

It's just a vibe-check heuristic -- if the regulated throws a tantrum telling how switching to USB-C charging or opening up the app store will put them out of business (spoilers -- it never does), it's probably a good regulation; if the regulated cheers it on, it may be there to stifle competition.

The opposite is true with certain countries -- whenever you hear one telling loudly that "sanctions don't hurt at all and only make me stronger", then you know it hurts.

jedberg•5mo ago
> (sorry, Department of War)

FWIW executive orders do not have the force of law. The official name is still Department of Defense; Department of War is now merely an acceptable alternative.

To officially change the name requires an act of Congress.

cma•5mo ago
They had a relationship with the NSA long before they partnered with the Department of War; they were the first of all the frontier model companies to do so, according to Dean Ball, former Trump White House AI policy advisor, in a recent interview with Nathan Labenz.
ronsor•5mo ago
Anthropic is by far the most moralizing and nanny-like AI company, complete with hypocrisy (Pentagon deals) and regulatory capture/ladder-pulling (this here).
deepsun•5mo ago
You're asserting that cooperating with Defense is hypocrisy.

I would say the other way, as recent events show, Defense is the only department everyone should be glad to collaborate with.

Or do you mean collaborating only with the Pentagon is hypocrisy, not with other defense departments?

dingnuts•5mo ago
department of war you mean
terminalshort•5mo ago
Always has been
brookst•5mo ago
I can see disliking deals with the Pentagon, but where's the hypocrisy? Did they say that nobody should do deal with the Pentagon?
wagwang•5mo ago
The hypocrisy is that they constantly doom about AI existential risks, but they're also constantly training SOTA models.
whatthedangit•5mo ago
Would you find it more agreeable for them to dismiss safety entirely?
wagwang•4mo ago
I would expect people who doom about AI existential risks to not train cutting edge models and give them agentic freedom.
lawlessone•5mo ago
>constantly doom about ai existential risks

That's kinda their marketing. "we've tamed this hyperintelligent genie that could wipe us all out, imagine what it could do for your cold emails!"

stingraycharles•5mo ago
That’s just politics: basically they’re saying “let us do our thing, otherwise China will win this race”.

And it’s also market segmentation: they need to separate themselves from the others, and want to be the de-facto standard when people are looking for “safe” AI.

CuriouslyC•5mo ago
Don't worry about it; they're not well managed (you can see it from their ops, their products, etc.), so they won't stick around. They're going to get ground to dust by Google and OpenAI at the high end and the Chinese models at the low end. They'll end up in Amazon's pocket, Jeff's catch-up play in the AI war after sitting out the bidding wars.
willahmad•5mo ago
I wish we'd get better alternatives to Anthropic sooner. Fortunately, OSS models like GLM and Qwen are catching up.

Obviously, it's good for them if things are regulated, but bad for all of us.

pton_xd•5mo ago
> Develop and publish safety frameworks, which describe how they manage, assess, and mitigate catastrophic risks—risks that could foreseeably and materially contribute to a mass casualty incident or substantial monetary damages.

Develop technology to monitor user interactions. They're already doing this anyway [0].

> Report critical safety incidents to the state within 15 days, and even confidentially disclose summaries of any assessments of the potential for catastrophic risk from the use of internally-deployed models.

Share user spy logs with the state. Again, already doing this anyway [0].

I guess the attitude is, if we're going to spy on our users, everyone needs to spy on their users? Then the lack of privacy isn't a disadvantage but just the status quo.

[0] https://www.anthropic.com/news/detecting-countering-misuse-a...

varenc•5mo ago
I don't think 'critical safety incidents' or 'summaries of any assessments of the potential for catastrophic risk from the use of internally-deployed models' are user logs? Unless I'm misunderstanding.
huevosabio•5mo ago
This reeks of regulatory capture.
antimora•5mo ago
Anthropic saying they want "stronger" requirements is easy when you're helping write them. The tell is that they're endorsing a bill that just happens to match what they're already doing - classic regulatory capture where industry turns their business model into law and calls it "safety."
carom•5mo ago
Catastrophic AI risk is such a larp. The systems are not sentient. The risk will always be around the human driving the LLM, not the LLM itself. We already have laws governing human behavior, company behavior. If an entity violates a law using an LLM, it has nothing to do with the LLM.
computerphage•5mo ago
Why do you think systems need to be sentient to be risky?
jwilber•5mo ago
OP isn’t talking about systems at large, but specifically about LLMs and the pervasive idea that they will turn agi and go rogue. Pretty clear context given the thread and their comment.
computerphage•5mo ago
I understood that from the context, but my question stands. I'm asking why OP thinks that sentience is necessary for risk in AI
TNDnow•5mo ago
How much runway does Dario have left?
CuriouslyC•5mo ago
Doesn't matter. Anthropic's position is untenable, and unlike OpenAI who is planning to pivot to consumer gear (i.e. Apple 2.0), Anthropic doesn't have another play, so when Google has fully mobilized, they're done.
kelnos•5mo ago
Any time I see a company in support of regulation that they would also have to comply with, all I can think is the proposed regulations are something the company is already doing, or isn't a burden for them, but would create a higher barrier of entry for new competitors.
bombcar•5mo ago
Regulatory capture is the name of the game. And it’s huge.
biophysboy•5mo ago
AI already has significant cost of entry, given the amount of data/compute you need. Why would regulatory compliance be the limiting burden?
loeg•5mo ago
Why not advocate for additional burdens?
biophysboy•5mo ago
All I'm doing is challenging a vague accusation. You can claim regulatory capture for any proposal. All policy has tradeoffs; speculating vaguely about negative consequences does not help me weigh that balance.
nothrabannosir•5mo ago
In a profit driven world, regulatory capture is the default assumption. Genuine corporate philanthropy is the exception that deserves special attention.

Suspecting a company to act in its own profit enhancing interest is borderline tautological.

Sherveen•5mo ago
Don't you think it's a little circular that you always default to assuming that their support is about regulatory capture?

Like, what if they had that opinion before they built the company? If you saw evidence of that (as is the case with Anthropic), would that convince you to reconsider your judgement? Surely, you think... some people support regulatory frameworks, some amount of the time... and unless they banned themselves from every related industry, those might be regulatory frameworks that they might one day become subject to?

theptip•5mo ago
OpenAI and A16Z are vigorously opposing this one. So this should at least discount the simplest anti-competitive conspiracy scenarios.
terminalshort•5mo ago
> Develop and publish safety frameworks, which describe how they manage, assess, and mitigate catastrophic risks—risks that could foreseeably and materially contribute to a mass casualty incident or substantial monetary damages.

> Report critical safety incidents to the state within 15 days, and even confidentially disclose summaries of any assessments of the potential for catastrophic risk from the use of internally-deployed models.

> Provide clear whistleblower protections that cover violations of these requirements as well as specific and substantial dangers to public health/safety from catastrophic risk.

So just a bunch of useless bureaucracy to act as a moat to competition. The current generation of models is nowhere close to being capable of generating any sort of catastrophic outcome.

lukeplato•5mo ago
I think an unrestricted model with sufficient inference time compute could create unprecedented catastrophes.

Not sure why you would be opposed to whistleblower protections

vorpalhex•5mo ago
Because those models already exist and can be run on consumer available hardware with no real issues. All this does is create barriers for Anthropic competitors.