
OpenCiv3: Open-source, cross-platform reimagining of Civilization III

https://openciv3.org/
499•klaussilveira•8h ago•138 comments

The Waymo World Model

https://waymo.com/blog/2026/02/the-waymo-world-model-a-new-frontier-for-autonomous-driving-simula...
836•xnx•13h ago•503 comments

How we made geo joins 400× faster with H3 indexes

https://floedb.ai/blog/how-we-made-geo-joins-400-faster-with-h3-indexes
53•matheusalmeida•1d ago•10 comments

A century of hair samples proves leaded gas ban worked

https://arstechnica.com/science/2026/02/a-century-of-hair-samples-proves-leaded-gas-ban-worked/
110•jnord•4d ago•18 comments

Monty: A minimal, secure Python interpreter written in Rust for use by AI

https://github.com/pydantic/monty
164•dmpetrov•8h ago•76 comments

Show HN: Look Ma, No Linux: Shell, App Installer, Vi, Cc on ESP32-S3 / BreezyBox

https://github.com/valdanylchuk/breezydemo
166•isitcontent•8h ago•18 comments

Dark Alley Mathematics

https://blog.szczepan.org/blog/three-points/
59•quibono•4d ago•10 comments

Show HN: I spent 4 years building a UI design tool with only the features I use

https://vecti.com
279•vecti•10h ago•127 comments

Microsoft open-sources LiteBox, a security-focused library OS

https://github.com/microsoft/litebox
339•aktau•14h ago•163 comments

Show HN: If you lose your memory, how to regain access to your computer?

https://eljojo.github.io/rememory/
222•eljojo•11h ago•139 comments

Sheldon Brown's Bicycle Technical Info

https://www.sheldonbrown.com/
332•ostacke•14h ago•89 comments

Hackers (1995) Animated Experience

https://hackers-1995.vercel.app/
421•todsacerdoti•16h ago•221 comments

PC Floppy Copy Protection: Vault Prolok

https://martypc.blogspot.com/2024/09/pc-floppy-copy-protection-vault-prolok.html
34•kmm•4d ago•2 comments

Show HN: ARM64 Android Dev Kit

https://github.com/denuoweb/ARM64-ADK
11•denuoweb•1d ago•0 comments

An Update on Heroku

https://www.heroku.com/blog/an-update-on-heroku/
360•lstoll•14h ago•248 comments

Female Asian Elephant Calf Born at the Smithsonian National Zoo

https://www.si.edu/newsdesk/releases/female-asian-elephant-calf-born-smithsonians-national-zoo-an...
15•gmays•3h ago•2 comments

Delimited Continuations vs. Lwt for Threads

https://mirageos.org/blog/delimcc-vs-lwt
9•romes•4d ago•1 comments

Show HN: R3forth, a ColorForth-inspired language with a tiny VM

https://github.com/phreda4/r3
58•phreda4•8h ago•9 comments

How to effectively write quality code with AI

https://heidenstedt.org/posts/2026/how-to-effectively-write-quality-code-with-ai/
209•i5heu•11h ago•156 comments

Introducing the Developer Knowledge API and MCP Server

https://developers.googleblog.com/introducing-the-developer-knowledge-api-and-mcp-server/
33•gfortaine•6h ago•8 comments

I spent 5 years in DevOps – Solutions engineering gave me what I was missing

https://infisical.com/blog/devops-to-solutions-engineering
121•vmatsiiako•13h ago•51 comments

Learning from context is harder than we thought

https://hy.tencent.com/research/100025?langVersion=en
159•limoce•3d ago•80 comments

Understanding Neural Network, Visually

https://visualrambling.space/neural-network/
257•surprisetalk•3d ago•33 comments

I now assume that all ads on Apple news are scams

https://kirkville.com/i-now-assume-that-all-ads-on-apple-news-are-scams/
1013•cdrnsf•17h ago•422 comments

FORTH? Really!?

https://rescrv.net/w/2026/02/06/associative
51•rescrv•16h ago•17 comments

I'm going to cure my girlfriend's brain tumor

https://andrewjrod.substack.com/p/im-going-to-cure-my-girlfriends-brain
93•ray__•5h ago•43 comments

Evaluating and mitigating the growing risk of LLM-discovered 0-days

https://red.anthropic.com/2026/zero-days/
44•lebovic•1d ago•12 comments

WebView performance significantly slower than PWA

https://issues.chromium.org/issues/40817676
10•denysonique•5h ago•0 comments

How virtual textures work

https://www.shlom.dev/articles/how-virtual-textures-really-work/
35•betamark•15h ago•29 comments

Show HN: Smooth CLI – Token-efficient browser for AI agents

https://docs.smooth.sh/cli/overview
81•antves•1d ago•59 comments

Grok Can't Apologize. So Why Do Headlines Keep Saying It Did?

https://www.readtpa.com/p/grok-cant-apologize-grok-isnt-sentient
80•afavour•1mo ago

Comments

ninju•1mo ago
We anthropomorphize AI systems (not just Grok) because we interact with them through natural language.
guywithahat•1mo ago
That's sort of my thought too. Grok can't apologize, but it also can't do anything without being told. A hammer can't apologize, but it also doesn't know the difference between hitting a nail and hitting a person. Perhaps we could design a hammer that does less harm to a human, but if it comes at the cost of being a worse hammer, I don't want it.
WarOnPrivacy•1mo ago
> Grok can't apologize, but it also can't do anything without being told.

If you mean being told by the end user, that famously hasn't been the case. Dialing back a single restriction was enough for Grok to create NSFW material (without any request to create it).

     [Grok] didn’t hesitate to spit out fully uncensored topless
     videos of Taylor Swift the very first time I used it
     without me even specifically asking the bot to take her clothes off.
rowanG077•1mo ago
That sounds like weasel words. "without me even specifically asking the bot to take her clothes off."

What did they ask? If they asked for sexy, revealing pictures or something in that direction, then I think Grok delivered what was asked.

guywithahat•1mo ago
He turned on spicy mode, which was the NSFW image generator. As far as I can tell, it's back to producing "spicy" pics but won't produce genitals or actual nudity; what the user described seems to have been a now-patched bug where spicy mode generated actual nudity.
roywiggins•1mo ago
It's 50-80% because they are RLHFed into talking with "I". This was far less of an issue when it was just GPT-3 in a completion UI. But people find LLMs trained to produce text that looks like it's coming from a personality more compelling: ChatGPT is when the tech exploded in popularity.

LLMs that aren't chat-tuned are just not as easy to anthropomorphize.
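To see what chat tuning actually changes, here is a minimal sketch of a chat template at work, assuming the Hugging Face transformers library (the checkpoint name is just an illustrative chat-tuned model):

    # Sketch: what a chat-tuned model actually sees. The tokenizer's chat
    # template wraps each message in role markers; training on transcripts
    # like this is what makes the model speak as an "I".
    from transformers import AutoTokenizer

    tok = AutoTokenizer.from_pretrained("HuggingFaceH4/zephyr-7b-beta")
    messages = [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Apologize for the outage."},
    ]
    print(tok.apply_chat_template(messages, tokenize=False,
                                  add_generation_prompt=True))
    # A base model has no template: you just feed it raw text to continue.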

biophysboy•1mo ago
I really wish I could use custom versions of these products with RLHF turned off. I know that's not how it works, but the stupid marketing-copy speak makes me use them less.
digiown•1mo ago
If you use the API directly via Open WebUI or similar, it's not nearly as annoying. You can also system-prompt it into sounding more reasonable.
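A minimal sketch of that, assuming an OpenAI-compatible chat endpoint (the base URL, key, and model name below are placeholders for whatever you run locally):

    # Sketch: steering tone with a system prompt over an OpenAI-compatible
    # API. Endpoint, key, and model name are placeholder assumptions.
    from openai import OpenAI

    client = OpenAI(base_url="http://localhost:8080/v1", api_key="sk-local")
    resp = client.chat.completions.create(
        model="local-model",
        messages=[
            {"role": "system",
             "content": "Answer plainly. No marketing tone, no filler."},
            {"role": "user", "content": "Summarize RLHF in two sentences."},
        ],
    )
    print(resp.choices[0].message.content)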
biophysboy•1mo ago
thanks for the rec
Terr_•1mo ago
My own minor attempt to hold back this tide involves urging everyone to imagine each LLM exchange as a theater-play script. The character's lines are not the author's.

Just imagine how different all this would be if every prompt contained something to make the character(s) obviously fictional, e.g.: "You are Count Dracula, dread lord of the night, and a visitor has the following question..."

We hopefully wouldn't see mindless reports that "vampires are real now" or "Draculabot has developed coping mechanisms for the dark thirst, agrees to try tomato juice."

lostmsu•1mo ago
That was accurate only before the instruction tuning.
Terr_•1mo ago
I don't see how that gives the algorithm an ego when it didn't have one before.
lostmsu•1mo ago
I don't see how it does not, considering that base models never refer to themselves as "I" while most modern instruct models do. I can foresee the objection that there's still a distinction between an author and a script, but for instruct models at this point it seems no different from an author intentionally lying when talking about themselves.
Terr_•1mo ago
> it seems no different from an author intentionally lying

That begs a really big question: it assumes that a humanlike kind of "intent" already existed and was somehow mis-aimed the whole time.

Not to be confused with the algorithmic intent of `f(tokens,opts) -> next_token` .

roywiggins•1mo ago
The blame attaches to the corporation that trained the "I" outputs into the model. Chat-tuned LLMs are mostly fine-tuned to roleplay as a friendly assistant at all times. This isn't exactly a lie (the models really are being designed to be helpful assistants), but the side effect is that they also present as coherent personalities when they aren't.

When a model outputs stuff like "I am FooGPT, a friendly chatbot" it is roleplaying just as much as when it's outputting stuff like "Hello, my name is Abraham Lincoln, I was the 16th President of the United States."

Terr_•1mo ago
Right, there's a pervasive issue here that involves illusions and assumptions coming from the minds of the humans perceiving everything.

It's like that meme where people are asked how a mirror "knows" what object is being held when a piece of opaque paper is placed between the object and the mirror's surface.

Both are genuinely useful, but with mirrors we've built an accepted body of knowledge and authority, telling people to distrust their intuition and analyze it as light-paths.

LLMs are another kind of reflection, this time of language, but the same guardrails aren't established, and some people have a rather strong profit motive to encourage consumers and investors to fall for the illusions.

lostmsu•1mo ago
What do you mean, "intent already existed"? The whole point is that it didn't exist until instruction tuning.
minimaxir•1mo ago
The real reason is that LLMs are a highly nuanced, technical topic that is constantly evolving, but any attempt to suggest that LLMs require nuance is met with accusations of AI boosterism and is subsequently ignored. So journalists tend to go with Occam's Razor.

I have tried to correct inaccurate headlines and technical claims about LLMs over the past few years, but I've stopped because I don't have the bandwidth to deal with the "so you support the plagiarism machine" comments every time.

satisfice•1mo ago
I understand LLMs, too. That doesn't require me to accept AI fanboys who minimize and dismiss all the bullshit that LLMs spout.
biophysboy•1mo ago
I am really grateful that I have 1) a basic understanding of how LLMs work, and 2) zero trust in tech marketing/branding. I would be a lot more afraid of the future otherwise. It's not surprising to me at all that people believe AI models are sentient and capable of apologies.
WarOnPrivacy•1mo ago
The gist is quoted below. I believe the assertion is sound and worthy of consideration.

    Here’s the thing: Grok didn’t say anything. Grok didn’t
    blame anyone. Grok didn’t apologize. Grok can’t do any
    of these things, because Grok is not a sentient entity
    capable of speech acts, blame assignment, or remorse.

    What actually happened is that a user prompted Grok to generate
    text about the incident. The chatbot then produced a word sequence
    that pattern-matched to what an apology might sound like, because
    that’s what large language models do. They predict statistically
    likely next tokens based on their training data. 

    When you ask an LLM to write an apology, it writes something that
    looks like an apology. That’s not the same as actually apologizing.
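
That token-prediction step is easy to observe directly. A minimal sketch, assuming the transformers library and the small gpt2 checkpoint as an illustrative base model:

    # Sketch: one step of next-token prediction, the mechanism the quoted
    # article describes. gpt2 is an illustrative small base model.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    inputs = tok("I am deeply sorry for", return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits   # (1, seq_len, vocab_size)
    next_id = logits[0, -1].argmax()      # statistically likeliest next token
    print(tok.decode(next_id))            # an "apology", one token at a time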
ares623•1mo ago
Just like human CEOs /s
afavour•1mo ago
Unfortunately the discussion has been flagged. As is often the case.
ryandrake•1mo ago
This is to be expected here, unfortunately. Any article that reveals anything bad about a Musk-run company gets instantly flagged. Sometimes the mods will show up and correct it, but by then the damage is done: the article has been wiped off the front page and it's Mission Accomplished for the flaggers.
kccoder•1mo ago
Hacker news could fix this if they wanted to.
Razengan•1mo ago
HN doesn't want to fix shit
tim333•1mo ago
I think when people say "the car says it's low on petrol," they understand the car probably didn't talk; a petrol gauge caused it to display a message. I don't know that you have to police language when people understand what's going on.

At least with LLMs it's not too hard to figure out what's going on, unlike with certain politicians.

satisfice•1mo ago
No one says, and no newspaper reports, that your car regrets any of its malfunctions.
Havoc•1mo ago
Pretty wild that xAI decided to simply not comment on what seems like a pretty sizable fuckup
r0ckarong•1mo ago
Because we live in a technofeudalist hellscape where the media is owned by the people who profit from our oppression.
chopete3•1mo ago
    strangers were replying to women's photos and asking Grok, the
    platform's built-in AI chatbot, to "remove her clothes" or "put her
    in a bikini." And Grok was doing it. Publicly. In the replies. For
    everyone to see.

Wow. That's some really creepy behavior people are choosing to show off publicly.

Grok needs some tighter guardrails to prevent abuse.