frontpage.

A2CDVI – HDMI output from the Apple IIc's digital video output connector

https://github.com/MrTechGadget/A2C_DVI_SMD
1•mmoogle•36s ago•0 comments

CLI for Common Playwright Actions

https://github.com/microsoft/playwright-cli
1•saikatsg•1m ago•0 comments

Would you use an e-commerce platform that shares transaction fees with users?

https://moondala.one/
1•HamoodBahzar•3m ago•1 comment

Show HN: SafeClaw – a way to manage multiple Claude Code instances in containers

https://github.com/ykdojo/safeclaw
2•ykdojo•6m ago•0 comments

The Future of the Global Open-Source AI Ecosystem: From DeepSeek to AI+

https://huggingface.co/blog/huggingface/one-year-since-the-deepseek-moment-blog-3
3•gmays•6m ago•0 comments

The Evolution of the Interface

https://www.asktog.com/columns/038MacUITrends.html
2•dhruv3006•8m ago•0 comments

Azure: Virtual network routing appliance overview

https://learn.microsoft.com/en-us/azure/virtual-network/virtual-network-routing-appliance-overview
2•mariuz•8m ago•0 comments

Seedance2 – multi-shot AI video generation

https://www.genstory.app/story-template/seedance2-ai-story-generator
2•RyanMu•12m ago•1 comment

Πfs – The Data-Free Filesystem

https://github.com/philipl/pifs
2•ravenical•15m ago•0 comments

Go-busybox: A sandboxable port of busybox for AI agents

https://github.com/rcarmo/go-busybox
3•rcarmo•16m ago•0 comments

Quantization-Aware Distillation for NVFP4 Inference Accuracy Recovery [pdf]

https://research.nvidia.com/labs/nemotron/files/NVFP4-QAD-Report.pdf
2•gmays•17m ago•0 comments

xAI Merger Poses Bigger Threat to OpenAI, Anthropic

https://www.bloomberg.com/news/newsletters/2026-02-03/musk-s-xai-merger-poses-bigger-threat-to-op...
2•andsoitis•17m ago•0 comments

Atlas Airborne (Boston Dynamics and RAI Institute) [video]

https://www.youtube.com/watch?v=UNorxwlZlFk
2•lysace•18m ago•0 comments

Zen Tools

http://postmake.io/zen-list
2•Malfunction92•20m ago•0 comments

Is the Detachment in the Room? – Agents, Cruelty, and Empathy

https://hailey.at/posts/3mear2n7v3k2r
2•carnevalem•21m ago•1 comment

The purpose of Continuous Integration is to fail

https://blog.nix-ci.com/post/2026-02-05_the-purpose-of-ci-is-to-fail
1•zdw•23m ago•0 comments

Apfelstrudel: Live coding music environment with AI agent chat

https://github.com/rcarmo/apfelstrudel
2•rcarmo•24m ago•0 comments

What Is Stoicism?

https://stoacentral.com/guides/what-is-stoicism
3•0xmattf•24m ago•0 comments

What happens when a neighborhood is built around a farm

https://grist.org/cities/what-happens-when-a-neighborhood-is-built-around-a-farm/
1•Brajeshwar•24m ago•0 comments

Every major galaxy is speeding away from the Milky Way, except one

https://www.livescience.com/space/cosmology/every-major-galaxy-is-speeding-away-from-the-milky-wa...
3•Brajeshwar•24m ago•0 comments

Extreme Inequality Presages the Revolt Against It

https://www.noemamag.com/extreme-inequality-presages-the-revolt-against-it/
2•Brajeshwar•25m ago•0 comments

There's no such thing as "tech" (Ten years later)

1•dtjb•25m ago•0 comments

What Really Killed Flash Player: A Six-Year Campaign of Deliberate Platform Work

https://medium.com/@aglaforge/what-really-killed-flash-player-a-six-year-campaign-of-deliberate-p...
1•jbegley•26m ago•0 comments

Ask HN: Anyone orchestrating multiple AI coding agents in parallel?

1•buildingwdavid•27m ago•0 comments

Show HN: Knowledge-Bank

https://github.com/gabrywu-public/knowledge-bank
1•gabrywu•33m ago•0 comments

Show HN: The Codeverse Hub Linux

https://github.com/TheCodeVerseHub/CodeVerseLinuxDistro
3•sinisterMage•34m ago•2 comments

Take a trip to Japan's Dododo Land, the most irritating place on Earth

https://soranews24.com/2026/02/07/take-a-trip-to-japans-dododo-land-the-most-irritating-place-on-...
2•zdw•34m ago•0 comments

British drivers over 70 to face eye tests every three years

https://www.bbc.com/news/articles/c205nxy0p31o
49•bookofjoe•34m ago•23 comments

BookTalk: A Reading Companion That Captures Your Voice

https://github.com/bramses/BookTalk
1•_bramses•35m ago•0 comments

Is AI "good" yet? – tracking HN's sentiment on AI coding

https://www.is-ai-good-yet.com/#home
3•ilyaizen•36m ago•1 comment

AI Withholds Life-or-Death Information Unless You Know the Magic Words

https://substack.com/home/post/p-182524207
44•llamataboot•1mo ago

Comments

anigbrowl•1mo ago
One of the best articles I've seen here in a while; a great summary of how AI launders cultural mores in startlingly dysfunctional ways.
kennyloginz•1mo ago
To me the article shows the danger of AI hype. They have wasted so much effort based on the misconception that AI thinks.

For most people, it’s best to view an LLM as a browser/autocomplete service that conforms to whatever bias it guesses you hold.

anigbrowl•1mo ago
You have missed the point. The author is not modeling AI, but demonstrating how it behaves in real world contexts.
rdtsc•1mo ago
> The irony was recursive: Claude was helping me write about why these popups are harmful while repeatedly showing me the harmful popup.

I bet when caught in the inconsistency it apologized profusely, then immediately went right back to doing the thing it had just apologized for.

I do not trust AI systems from these companies for that reason. They will lie very confidently and convincingly. I use them regularly, but only for what I call “AI NP-complete scenarios”: questions and tasks that may be hard to do by hand but easy to verify (“draw a diagram”, “reformat this paragraph”, etc.), as opposed to “implement and deploy a heart pacemaker update patch”.

sollewitt•1mo ago
> This is a story about what happens when you ask a machine a question it knows the answer to, but is afraid to give

It’s a story about how humans can’t help personifying language generators, and how important context is when using LLMs.

Nevermark•1mo ago
> It’s a story about how humans can’t help personifying language generators,

There should be a word for the misunderstanding that the pervasive use of anthropomorphic or teleological rhetoric, whether applied to undirected natural processes or to artifacts designed for a purpose, actually indicates that anthropomorphic/free-will/teleological assertions or assumptions are being made.

Language-bending tropes, just like tricky-wicked theorems, are the indispensable shortcuts that help us get to a point.

(I think the much more common danger is people over-anthropomorphizing people. I.e. all the stories of clear motivations and intents we tell ourselves, about ourselves and others, and credulously believe, after the fact.)

> and how important context is when using LLMs.

Too true.

turtlebro•1mo ago
People treat LLMs as sentient, not realizing they are the world's most sophisticated talking parrots. They can very convincingly argue both sides of any argument you throw at them. They are incredible for research and discovery, not wisdom or decision making.
fragmede•1mo ago
And a mere piece of wood banged up by the right type of rock is? If books can impart wisdom via the technology of writing, why would a more complicated rock design, infused with electricity but using the same technology, be any different?
yunwal•1mo ago
What is the point of this article? What difference in the point of the article does the concept of sentience make?
renewiltord•1mo ago
The safety features of these various models do constrain the intelligence of their responses. But the roleplaying aspect is built-in to what an LLM is.

If you browse the Internet you’ll find that anglophone commenters are fond of dumping suicide hotlines into comments anytime suicide is mentioned and repetitively stating “to anyone who needs to hear this, you are loved”. These are just memetically viral in English media.

I cannot imagine that telling someone suicidal, in non-specific terms, that they are loved helps anything either. Perhaps it does, perhaps it doesn't. But these things are a meme.

Online they share presence with compliments on trigger discipline, claims of US postal police competence, or Steve Buscemi being a firefighter who returned to the job briefly during 9/11. It’s like saying “Knowledge is power” and getting the response “France is bacon.”

Besides the safety aspect, though, when I want commentary on something I’m thinking I usually have to roleplay it. “A junior engineer suggested:” or “My friend, who is a bit of a kook, has this idea that” to get a critical response. If I were to say “I’ve got this idea:” I’m going to get glazed so hard a passerby might bite me for resemblance to a doughnut.
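
A minimal sketch of that reframing trick, assuming the openai Python client; the model name, the wrapper text, and the critique() helper are illustrative, not taken from the comment:

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def critique(idea: str) -> str:
        # Attribute the idea to a third party so the model critiques it
        # instead of flattering the person asking.
        prompt = (
            "A junior engineer on my team suggested the following. "
            "List the strongest objections and likely failure modes:\n\n" + idea
        )
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # illustrative model name
            messages=[{"role": "user", "content": prompt}],
        )
        return response.choices[0].message.content

    print(critique("Store all user sessions in one global dict."))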

renewiltord•1mo ago
A similar but different result showcases the contrast in what models guardrail. HN's safety and alignment teams (the community) will reliably flag-kill any reference to Somali healthcare fraud in Minnesota. This is real, and prosecutions were pursued by the DoJ under federal administrations of both parties, but prevailing safety norms make it undiscussable, even in contexts where it is highly relevant, like “why is autism skyrocketing in the US?”

The models, however, will consider this where humans will not, likely because this aspect of human safety and alignment is not transmitted via text tokenization: rather than being objected to in text, the topic is silently killed in most contexts. Consequently the models find it possible to discuss what humans won't.

If most such text were accompanied by human excoriation of the view, it would likely be detected as harmful.

kennyloginz•1mo ago
The community is working as intended… Your premise shows your reasoning flaw: “Somali healthcare fraud in Minnesota”, when the story is actually about Medicaid providers taking advantage of a vulnerable community.
renewiltord•1mo ago
https://apnews.com/article/minnesota-fraud-feeding-our-futur...

> The sprawling case has also become politically and culturally fraught, as Somali Americans make up 82 of the 92 defendants charged so far, according to the U.S. Attorney’s Office for Minnesota.

Politically fraught indeed.

cthalupa•1mo ago
> will reliably flag kill any reference to Somali healthcare fraud in Minnesota

Almost certainly because of how these tend to get framed.

The Minnesota situation involves, at this point, a couple dozen bad actors being charged. Most of them are Somali.

Now, we can look at this more than one way, but mostly branching off from two distinct paths:

One - that there is some specific relationship between many of these people that resulted in them sharing information with each other and becoming involved. The people doing the fraud met each other in the same community, so that's the proximate cause of their relationship, but we pass no value judgment on the community as a whole and don't try to extrapolate beyond that, the same way we would not extrapolate the actions of the mafia to every Italian person in the country.

Two - we could frame it as some sort of immigration issue and make it seem like these actions reflect on the 80,000 other Somali people in the state and on the broader immigration conversation in this country, superimposing the crimes of the few onto a much larger group whose vast majority had nothing to do with any of this.

One allows for discussion in a reasonable manner without getting politically charged. The other incites quite a lot of discord because it is fundamentally a bad faith argument, meant to bolster a political ideology.

gs17•1mo ago
> I cannot imagine that anyone suicidal being told in non-specific terms that they are loved is helping anything either.

Having gone through some bad depression in my life, it's not helpful. It's not exactly a platitude, but it's the same genre of meaninglessness that sounds good to people who aren't in a deep dark hole.

Nevermark•1mo ago
After going through a period in life in which I only survived due to one person who knew me well, and knew how to take care of me, I ran into a group fundraising for an anti-suicide initiative at a winery.

I was immediately interested to hear what interventions the group was spearheading, or intending to. I just couldn't imagine what well-meaning strangers could have done that would have done anything but let me know these were people I wouldn't want to mention my situation to.

Despite my genuine interest, nobody could tell me anything they were aware of to help people at risk, except to circle back to the strong implicit view that fundraising, fundraiser group recruitment, and anti-suicide fundraising-awareness campaigns enabled by fundraising are all important ways to combat suicide. The only thing that made sense was that the good wine they were drinking probably did help with all that.

They were a little put off that I expected them to know what the money was intended for, and had zero curiosity about my relevant experience, which just weirded them out. "It's for anti-suicide!"

kennyloginz•1mo ago
At least it got ya out of the house, and your mind in a new cycle.
what-the-grump•1mo ago
The journey not the destination type of thing?

Ponzi schemes the new suicide prevention thing.

IronyMan100•1mo ago
The funny thing is, if these LLMs withhold this information, what else do they withhold? Can I trust these corporate LLMs if I look for information and am not deemed a domain expert?
pharx•1mo ago
How do you know if a domain expert is not withholding information based on corporate instruction, personal bias, profit motivation,...? What are your options as a non domain expert for verification? Do you trust peer reviews and metrics set up by the experts you distrust? At what point have you taken enough steps backwards to question your own perception?
RagnarD•1mo ago
This argues for running your own local models - some of which are deliberately uncensored. See huihui-ai's models on HuggingFace: https://huggingface.co/huihui-ai/collections

One man, Mitko Vasilev, posts extensively on LinkedIn about his own experience running local models, and his posts are very informative: https://www.linkedin.com/in/ownyourai/ He usually closes with this:

"Make sure you own your AI. AI in the cloud is not aligned with you; it’s aligned with the company that owns it."

bomewish•1mo ago
Article seems heavily written by Claude. Gets kinda annoying after a while.
saaaaaam•1mo ago
Callie is a very overdramatic writer. I can't take much of what it writes seriously. And the “it's not just X - it's even worse Y” trope is very annoying.
saaaaaam•1mo ago
Obviously this was meant to say Claude, but iPhone’s new autocorrect decided Callie was the right choice…