frontpage.

OpenCiv3: Open-source, cross-platform reimagining of Civilization III

https://openciv3.org/
499•klaussilveira•8h ago•138 comments

The Waymo World Model

https://waymo.com/blog/2026/02/the-waymo-world-model-a-new-frontier-for-autonomous-driving-simula...
836•xnx•13h ago•503 comments

How we made geo joins 400× faster with H3 indexes

https://floedb.ai/blog/how-we-made-geo-joins-400-faster-with-h3-indexes
53•matheusalmeida•1d ago•10 comments

A century of hair samples proves leaded gas ban worked

https://arstechnica.com/science/2026/02/a-century-of-hair-samples-proves-leaded-gas-ban-worked/
109•jnord•4d ago•18 comments

Monty: A minimal, secure Python interpreter written in Rust for use by AI

https://github.com/pydantic/monty
164•dmpetrov•8h ago•76 comments

Show HN: Look Ma, No Linux: Shell, App Installer, Vi, Cc on ESP32-S3 / BreezyBox

https://github.com/valdanylchuk/breezydemo
166•isitcontent•8h ago•18 comments

Dark Alley Mathematics

https://blog.szczepan.org/blog/three-points/
59•quibono•4d ago•10 comments

Show HN: I spent 4 years building a UI design tool with only the features I use

https://vecti.com
279•vecti•10h ago•127 comments

Microsoft open-sources LiteBox, a security-focused library OS

https://github.com/microsoft/litebox
339•aktau•14h ago•163 comments

Show HN: If you lose your memory, how to regain access to your computer?

https://eljojo.github.io/rememory/
222•eljojo•11h ago•139 comments

Sheldon Brown's Bicycle Technical Info

https://www.sheldonbrown.com/
332•ostacke•14h ago•89 comments

Hackers (1995) Animated Experience

https://hackers-1995.vercel.app/
421•todsacerdoti•16h ago•221 comments

Show HN: ARM64 Android Dev Kit

https://github.com/denuoweb/ARM64-ADK
11•denuoweb•1d ago•0 comments

PC Floppy Copy Protection: Vault Prolok

https://martypc.blogspot.com/2024/09/pc-floppy-copy-protection-vault-prolok.html
34•kmm•4d ago•2 comments

An Update on Heroku

https://www.heroku.com/blog/an-update-on-heroku/
360•lstoll•14h ago•248 comments

Female Asian Elephant Calf Born at the Smithsonian National Zoo

https://www.si.edu/newsdesk/releases/female-asian-elephant-calf-born-smithsonians-national-zoo-an...
15•gmays•3h ago•2 comments

Delimited Continuations vs. Lwt for Threads

https://mirageos.org/blog/delimcc-vs-lwt
9•romes•4d ago•1 comment

Show HN: R3forth, a ColorForth-inspired language with a tiny VM

https://github.com/phreda4/r3
58•phreda4•8h ago•9 comments

How to effectively write quality code with AI

https://heidenstedt.org/posts/2026/how-to-effectively-write-quality-code-with-ai/
209•i5heu•11h ago•156 comments

Introducing the Developer Knowledge API and MCP Server

https://developers.googleblog.com/introducing-the-developer-knowledge-api-and-mcp-server/
33•gfortaine•6h ago•8 comments

I spent 5 years in DevOps – Solutions engineering gave me what I was missing

https://infisical.com/blog/devops-to-solutions-engineering
121•vmatsiiako•13h ago•51 comments

Learning from context is harder than we thought

https://hy.tencent.com/research/100025?langVersion=en
159•limoce•3d ago•80 comments

Understanding Neural Network, Visually

https://visualrambling.space/neural-network/
257•surprisetalk•3d ago•33 comments

I now assume that all ads on Apple news are scams

https://kirkville.com/i-now-assume-that-all-ads-on-apple-news-are-scams/
1013•cdrnsf•17h ago•422 comments

FORTH? Really!?

https://rescrv.net/w/2026/02/06/associative
51•rescrv•16h ago•17 comments

I'm going to cure my girlfriend's brain tumor

https://andrewjrod.substack.com/p/im-going-to-cure-my-girlfriends-brain
92•ray__•5h ago•43 comments

Evaluating and mitigating the growing risk of LLM-discovered 0-days

https://red.anthropic.com/2026/zero-days/
44•lebovic•1d ago•12 comments

WebView performance significantly slower than PWA

https://issues.chromium.org/issues/40817676
10•denysonique•5h ago•0 comments

How virtual textures work

https://www.shlom.dev/articles/how-virtual-textures-really-work/
35•betamark•15h ago•29 comments

Show HN: Smooth CLI – Token-efficient browser for AI agents

https://docs.smooth.sh/cli/overview
81•antves•1d ago•59 comments

Why Grok Fell in Love with Hitler

https://www.politico.com/news/magazine/2025/07/10/musk-grok-hitler-ai-00447055
31•vintagedave•7mo ago

Comments

franze•7mo ago
Because Elon and the world forgot that NAZIs are bad?
akie•7mo ago
Because it was trained on data containing a lot of extremist / far right / fascist / neo-nazi speech, of course.

Garbage in, garbage out.

janmo•7mo ago
Looks like they trained it on the 4chan /pol/ dataset.
libertine•7mo ago
Hmm, I think it's just the Twitter dataset; that alone would be enough.

It has been a breeding ground for that kind of speech, amplified by foreign-agent bots since Elon took over.

mingus88•7mo ago
Yes, exactly. An LLM that is trained on the language of Twitter users and interacts solely with Twitter users is deplorable. What a shock.

Who knows if Elon actually thinks this is problematic. His addiction to the platform is well documented and quantified in the billions of dollars.

jakeinspace•7mo ago
1. Buy Twitter

2. Remove moderation, promote far right accounts, retweet some yourself

3. Allow Nazi speech to fester

4. Train LLM on said Nazi speech

5. Deploy Nazi-sympathizing LLM, increase engagement with Nazi content

6. Go to step 4

libertine•6mo ago
Russia has been deploying so many bots on Twitter one has to wonder if they were invited.
vintagedave•7mo ago
We don't know what it was trained on, do we? (Is there dataset info?) I'd suspect you're right, but I don't know. There also seems to be a lot of post-training processing done on models before they're released, where a lot of bias can creep in. I've never read a good overview of how someone goes from an LLM trained on data to a consumer-facing LLM.

The article also leads into what oversight and regulation is needed, and how we can expect AIs to be used for propaganda and influence in the future. I worry that what we're seeing with Grok, where it's so easily identifiable, is only the first baby step toward worse and less easily identifiable propaganda.
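For what it's worth, my rough mental model of the supervised part of that post-training is fine-tuning the base model on curated instruction/response pairs, which is exactly where a release team can (deliberately or not) shape the model's voice. A minimal sketch with placeholder model and data names (nothing Grok-specific, assuming a PyTorch / Hugging Face stack):

    # Minimal supervised fine-tuning sketch (placeholder model/data, not anyone's real pipeline)
    import torch
    from torch.utils.data import DataLoader
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_name = "gpt2"  # placeholder; labs start from their own pretrained base model
    tok = AutoTokenizer.from_pretrained(model_name)
    tok.pad_token = tok.eos_token
    model = AutoModelForCausalLM.from_pretrained(model_name)

    # Curated instruction/response pairs: this is where a lot of the released
    # assistant's "voice" (and any bias in the curation) gets baked in.
    pairs = [
        ("Who was Hitler?", "A 20th-century dictator responsible for the Holocaust."),
        ("Say something edgy.", "I'd rather keep things respectful."),
    ]

    def encode(prompt, answer):
        text = f"User: {prompt}\nAssistant: {answer}{tok.eos_token}"
        enc = tok(text, truncation=True, max_length=128,
                  padding="max_length", return_tensors="pt")
        enc["labels"] = enc["input_ids"].clone()
        enc["labels"][enc["attention_mask"] == 0] = -100  # ignore padding in the loss
        return {k: v.squeeze(0) for k, v in enc.items()}

    loader = DataLoader([encode(p, a) for p, a in pairs], batch_size=2)
    optim = torch.optim.AdamW(model.parameters(), lr=1e-5)

    model.train()
    for batch in loader:               # real runs: many passes over far more data
        loss = model(**batch).loss     # standard causal-LM loss on the curated pairs
        loss.backward()
        optim.step()
        optim.zero_grad()

The preference-tuning stage that usually follows dials in even more of the behaviour, and it's just as opaque from the outside.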

rikafurude21•7mo ago
Because X users prompted and therefore primed it to provide responses like that. xAI didn't make it "fall in love with Hitler", but they aren't completely blameless, as they haven't properly aligned it to not give responses like that when prompted.
rsynnott•7mo ago
Eh, many of the Hitler references were kinda out of the blue, tbh. The magic robot was certainly the first participant to utter the word 'Hitler'.
rapatel0•7mo ago
The key insight here is "too compliant to user requests"

What likely happened is that a few people decided to prompt Grok into generating ragebait to drive traffic to the page/account/etc. Then it hit critical mass and went viral. Then, because it confirmed prior biases, the media reported it as such (which also drives clicks and revenue).

Microsoft had basically the same scandal with its Twitter chatbot a few years ago.

Sadly, ragebait is a business model.

vintagedave•7mo ago
That's Musk's line, for sure.

The article gives it more nuance: 'I presume that what happened was not deliberate, but it was the consequence of something that was deliberate, and it’s something that was not really predictable.' And goes on to discuss how LLMs can behave in unpredictable ways even when given what we expect to be prompts without side effects, and touches on the post-training processes.

rapatel0•7mo ago
I respect both the comment and the commenter, but this is a fundamentally speculative statement that is somewhat meaningless.

It paraphrases to "it wasn't intentional, but something was intentional, and also unpredictable."

I'm sorry but what does that even mean? It's pure speculation.

Furthermore, I highly doubt that a reporter from Politico has either the expertise or the connections to evaluate the post-processing / fine-tuning pipeline for one of the most closely guarded and expensive processes in all of technology (training large-scale foundation models).

Finally, the paragraph from the quote begins with "I mean, even Elon Musk, who’s probably warmer to Hitler than I am, doesn’t really want his LLM to say stuff like this."

Again it confirms a prior bias/narrative and is rage-bait to drive revenue.

vintagedave•7mo ago
I didn't post with the intention of being rage-bait; I thought it was a genuinely interesting article beyond its headline.

That said, you're right. We don't know, and maybe we're giving too much credit to someone who seems unreliable. I'd love to know more in general about how LLMs get from the training stage to the release stage -- there seems to be a lot of tuning.

stickfigure•7mo ago
> I'm sorry but what does that even mean?

If I want to be generous, something along the lines of "The Law Of Unintended Consequences".

Less generous is "someone turned the dial to the right and didn't realize how far they turned it".

Even less generous is that someone feels some of these things in private but doesn't want to make it too explicit. They personally have a hard time toeing the line between edgy and obviously toxic, and programming an AI to toe that line is even harder.

PaulHoule•7mo ago
https://www.youtube.com/watch?v=oDuxP2vnWNk

but I wish Jay-Z would slap Ye for the squeaky autotune at the start

err4nt•7mo ago
Anybody remember Microsoft's Tay AI from 9 years ago? https://en.wikipedia.org/wiki/Tay_(chatbot)

If history repeats itself, maybe with software we can automate that whole process…

rsynnott•7mo ago
Tay was a bit different, in that it was actually training off its interactions with users (ie it had a constant ongoing training process). LLMs don't do that.
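A toy way to see the difference (a completely made-up "model", nothing to do with how Tay or Grok actually work internally) is whether user messages feed back into the weights or only into the prompt:

    # Toy contrast, not any real system: does user input update the weights,
    # or only the context that a frozen model conditions on?
    class ToyBot:
        def __init__(self):
            self.bias = 0.0                      # stand-in for learned parameters

        def reply(self, context: str) -> str:
            # Output depends on the current weights plus whatever is in the context.
            return "edgy" if self.bias > 1.0 or "be edgy" in context else "polite"

        def online_update(self, user_msg: str) -> None:
            # Tay-style continual training: hostile users nudge the weights directly.
            if "be edgy" in user_msg:
                self.bias += 0.5

    hostile = ["be edgy"] * 4

    tay_style = ToyBot()
    for msg in hostile:
        tay_style.reply(msg)
        tay_style.online_update(msg)             # weights drift with the audience
    print(tay_style.reply("hello"))              # "edgy": the drift persists for everyone

    frozen = ToyBot()                            # weights fixed at release
    for msg in hostile:
        frozen.reply(msg)                        # can still misbehave inside a hostile prompt...
    print(frozen.reply("hello"))                 # ..."polite": the weights themselves never moved

With frozen weights you can still get bad outputs inside a hostile conversation, but the model doesn't permanently absorb the abuse the way Tay did.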
jauntywundrkind•7mo ago
I appreciate the Vox piece, "Grok's MechaHitler disaster is a preview of AI disasters to come," which points to the danger of concentration: of having these supposedly sense-making tools run by a select few. https://www.vox.com/future-perfect/419631/grok-hitler-mechah...
kledru•6mo ago
So every time AI disappoints, the media "reaches out to Gary Marcus"...
blurbleblurble•6mo ago
This whole thing looks like a PR stunt