
Slint: Cross Platform UI Library

https://slint.dev/
1•Palmik•31s ago•0 comments

AI and Education: Generative AI and the Future of Critical Thinking

https://www.youtube.com/watch?v=k7PvscqGD24
1•nyc111•41s ago•0 comments

Maple Mono: Smooth your coding flow

https://font.subf.dev/en/
1•signa11•1m ago•0 comments

Moltbook isn't real but it can still hurt you

https://12gramsofcarbon.com/p/tech-things-moltbook-isnt-real-but
1•theahura•5m ago•0 comments

Take Back the Em Dash–and Your Voice

https://spin.atomicobject.com/take-back-em-dash/
1•ingve•5m ago•0 comments

Show HN: 289x speedup over MLP using Spectral Graphs

https://zenodo.org/login/?next=%2Fme%2Fuploads%3Fq%3D%26f%3Dshared_with_me%25253Afalse%26l%3Dlist...
1•andrespi•6m ago•0 comments

Teaching Mathematics

https://www.karlin.mff.cuni.cz/~spurny/doc/articles/arnold.htm
1•samuel246•9m ago•0 comments

3D Printed Microfluidic Multiplexing [video]

https://www.youtube.com/watch?v=VZ2ZcOzLnGg
2•downboots•9m ago•0 comments

Abstractions Are in the Eye of the Beholder

https://software.rajivprab.com/2019/08/29/abstractions-are-in-the-eye-of-the-beholder/
2•whack•9m ago•0 comments

Show HN: Routed Attention – 75-99% savings by routing between O(N) and O(N²)

https://zenodo.org/records/18518956
1•MikeBee•10m ago•0 comments

We didn't ask for this internet – Ezra Klein show [video]

https://www.youtube.com/shorts/ve02F0gyfjY
1•softwaredoug•10m ago•0 comments

The Real AI Talent War Is for Plumbers and Electricians

https://www.wired.com/story/why-there-arent-enough-electricians-and-plumbers-to-build-ai-data-cen...
2•geox•13m ago•0 comments

Show HN: MimiClaw, OpenClaw (Clawdbot) on $5 Chips

https://github.com/memovai/mimiclaw
1•ssslvky1•13m ago•0 comments

I Maintain My Blog in the Age of Agents

https://www.jerpint.io/blog/2026-02-07-how-i-maintain-my-blog-in-the-age-of-agents/
2•jerpint•14m ago•0 comments

The Fall of the Nerds

https://www.noahpinion.blog/p/the-fall-of-the-nerds
1•otoolep•15m ago•0 comments

I'm 15 and built a free tool for reading Greek/Latin texts. Would love feedback

https://the-lexicon-project.netlify.app/
2•breadwithjam•18m ago•0 comments

How close is AI to taking my job?

https://epoch.ai/gradient-updates/how-close-is-ai-to-taking-my-job
1•cjbarber•19m ago•0 comments

You are the reason I am not reviewing this PR

https://github.com/NixOS/nixpkgs/pull/479442
2•midzer•20m ago•1 comments

Show HN: FamilyMemories.video – Turn static old photos into 5s AI videos

https://familymemories.video
1•tareq_•22m ago•0 comments

How Meta Made Linux a Planet-Scale Load Balancer

https://softwarefrontier.substack.com/p/how-meta-turned-the-linux-kernel
1•CortexFlow•22m ago•0 comments

A Turing Test for AI Coding

https://t-cadet.github.io/programming-wisdom/#2026-02-06-a-turing-test-for-ai-coding
2•phi-system•22m ago•0 comments

How to Identify and Eliminate Unused AWS Resources

https://medium.com/@vkelk/how-to-identify-and-eliminate-unused-aws-resources-b0e2040b4de8
3•vkelk•23m ago•0 comments

A2CDVI – HDMI output from the Apple IIc's digital video output connector

https://github.com/MrTechGadget/A2C_DVI_SMD
2•mmoogle•24m ago•0 comments

CLI for Common Playwright Actions

https://github.com/microsoft/playwright-cli
3•saikatsg•25m ago•0 comments

Would you use an e-commerce platform that shares transaction fees with users?

https://moondala.one/
1•HamoodBahzar•26m ago•1 comments

Show HN: SafeClaw – a way to manage multiple Claude Code instances in containers

https://github.com/ykdojo/safeclaw
3•ykdojo•29m ago•0 comments

The Future of the Global Open-Source AI Ecosystem: From DeepSeek to AI+

https://huggingface.co/blog/huggingface/one-year-since-the-deepseek-moment-blog-3
3•gmays•30m ago•0 comments

The Evolution of the Interface

https://www.asktog.com/columns/038MacUITrends.html
2•dhruv3006•32m ago•1 comments

Azure: Virtual network routing appliance overview

https://learn.microsoft.com/en-us/azure/virtual-network/virtual-network-routing-appliance-overview
3•mariuz•32m ago•0 comments

Seedance2 – multi-shot AI video generation

https://www.genstory.app/story-template/seedance2-ai-story-generator
2•RyanMu•35m ago•1 comments

Why Grok Fell in Love with Hitler

https://www.politico.com/news/magazine/2025/07/10/musk-grok-hitler-ai-00447055
31•vintagedave•7mo ago

Comments

franze•7mo ago
Because Elon and the world forgot that NAZIs are bad?
akie•7mo ago
Because it was trained on data containing a lot of extremist / far right / fascist / neo-nazi speech, of course.

Garbage in, garbage out.

janmo•7mo ago
Looks like they trained it on the 4chan /pol/ dataset
libertine•7mo ago
Hmm, I think it's just the Twitter dataset; that would be enough for it.

It has been a breeding ground for it, amplified by foreign-agent bots since Elon took over.

mingus88•7mo ago
Yes, exactly. An LLM that is trained on the language of Twitter users and interacts solely with Twitter users is deplorable. What a shock.

Who knows if Elon actually thinks this is problematic. His addiction to the platform is well documented and quantified in the billions of dollars.

jakeinspace•7mo ago
1. Buy Twitter

2. Remove moderation, promote far right accounts, retweet some yourself

3. Allow Nazi speech to fester

4. Train LLM on said Nazi speech

5. Deploy Nazi-sympathizing LLM, increase engagement with Nazi content

6. Go to step 4

libertine•7mo ago
Russia has been deploying so many bots on Twitter one has to wonder if they were invited.
vintagedave•7mo ago
We don't know what it was trained on, do we? (Is there dataset info?) I'd suspect you're right, but I don't know. There also seems to be a lot of post-training processing done on AIs before they're released, where a lot of bias can appear. I've never read a good overview of how someone goes from an LLM trained on data to a consumer-facing LLM.

The article also leads into what oversight and regulation are needed, and how we can expect AIs to be used for propaganda and influence in the future. I worry that what we're seeing with Grok, where it's so easily identifiable, represents the baby steps toward worse and less easily identifiable propaganda in the future.

rikafurude21•7mo ago
Because X users prompted, and therefore primed, it to provide responses like that. xAI didn't make it "fall in love with Hitler", but they aren't completely blameless, as they haven't properly aligned it to not give responses like that when prompted.
rsynnott•7mo ago
Eh, many of the Hitler references were kinda out of the blue, tbh. The magic robot was certainly the first participant to utter the word 'Hitler'.
rapatel0•7mo ago
The key insight here is "too compliant to user requests".

What likely happened is that a few people decided to prompt Grok into generating ragebait traffic to the page/account/etc. Then it hit critical mass and went viral. Then it confirmed prior biases, so the media reported it as such (and also drove clicks and revenue).

Microsoft had basically the same scandal with its Twitter chatbot a few years ago.

Sadly, ragebait is a business model.

vintagedave•7mo ago
That's Musk's line, for sure.

The article gives it more nuance: 'I presume that what happened was not deliberate, but it was the consequence of something that was deliberate, and it’s something that was not really predictable.' And goes on to discuss how LLMs can behave in unpredictable ways even when given what we expect to be prompts without side effects, and touches on the post-training processes.

rapatel0•7mo ago
I respect both the comment and the commenter, but this is a fundamentally speculative statement that is somewhat meaningless.

It paraphrases to "it wasn't intentional, but something was intentional, and also unpredictable."

I'm sorry but what does that even mean? It's pure speculation.

Furthermore, I highly doubt that a reporter from Politico has either the expertise or the connections to assess the post-processing / fine-tuning pipeline behind one of the most closely guarded and expensive processes in all of technology (training large-scale foundation models).

Finally, the paragraph from the quote begins with "I mean, even Elon Musk, who’s probably warmer to Hitler than I am, doesn’t really want his LLM to say stuff like this."

Again it confirms a prior bias/narrative and is rage-bait to drive revenue.

vintagedave•7mo ago
I didn't post with the intention of being rage-bait; I thought it was a genuinely interesting article beyond its headline.

That said, you're right. We don't know, and maybe we're giving too much credit to someone who seems unreliable. I'd love to know more in general about how LLMs get from the training stage to the release stage -- there seems to be a lot of tuning.

stickfigure•7mo ago
> I'm sorry but what does that even mean?

If I want to be generous, something along the lines of "The Law Of Unintended Consequences".

Less generous is "someone turned the dial to the right and didn't realize how far they turned it".

Even less generous is that someone feels some of these things in private but doesn't want to make it too explicit. They personally have a hard time toeing the line between edgy and obviously toxic and programming an AI to toe that line is even harder.

PaulHoule•7mo ago
https://www.youtube.com/watch?v=oDuxP2vnWNk

but wish Jay-Z would slap Ye for the squeaky autotune at the start

err4nt•7mo ago
Anybody remember Microsoft's Tay AI from 9 years ago? https://en.wikipedia.org/wiki/Tay_(chatbot)

If history repeats itself, maybe with software we can automate that whole process…

rsynnott•7mo ago
Tay was a bit different, in that it was actually training off its interactions with users (i.e. it had a constant, ongoing training process). LLMs don't do that.
jauntywundrkind•7mo ago
I appreciate the Vox piece, "Grok's MechaHitler disaster is a preview of AI disasters to come," which points to the danger of concentration: of having these supposedly sense-making tools run by a select few. https://www.vox.com/future-perfect/419631/grok-hitler-mechah...
kledru•7mo ago
so every time AI disappoints, the media "reaches out to Gary Marcus"...
blurbleblurble•6mo ago
This whole thing looks like a PR stunt