frontpage.

Portable C Compiler

https://en.wikipedia.org/wiki/Portable_C_Compiler
1•guerrilla•40s ago•0 comments

Show HN: Kokki – A "Dual-Core" System Prompt to Reduce LLM Hallucinations

1•Ginsabo•1m ago•0 comments

Software Engineering Transformation 2026

https://mfranc.com/blog/ai-2026/
1•michal-franc•2m ago•0 comments

Microsoft purges Win11 printer drivers, devices on borrowed time

https://www.tomshardware.com/peripherals/printers/microsoft-stops-distrubitng-legacy-v3-and-v4-pr...
1•rolph•2m ago•0 comments

Lunch with the FT: Tarek Mansour

https://www.ft.com/content/a4cebf4c-c26c-48bb-82c8-5701d8256282
1•hhs•6m ago•0 comments

Old Mexico and her lost provinces (1883)

https://www.gutenberg.org/cache/epub/77881/pg77881-images.html
1•petethomas•9m ago•0 comments

'AI' is a dick move, redux

https://www.baldurbjarnason.com/notes/2026/note-on-debating-llm-fans/
2•cratermoon•10m ago•0 comments

The source code was the moat. But not anymore

https://philipotoole.com/the-source-code-was-the-moat-no-longer/
1•otoolep•10m ago•0 comments

Does anyone else feel like their inbox has become their job?

1•cfata•10m ago•0 comments

An AI model that can read and diagnose a brain MRI in seconds

https://www.michiganmedicine.org/health-lab/ai-model-can-read-and-diagnose-brain-mri-seconds
1•hhs•14m ago•0 comments

Dev with 5 years of experience switched to Rails, what should I be careful about?

1•vampiregrey•16m ago•0 comments

AlphaFace: High Fidelity and Real-Time Face Swapper Robust to Facial Pose

https://arxiv.org/abs/2601.16429
1•PaulHoule•17m ago•0 comments

Scientists discover “levitating” time crystals that you can hold in your hand

https://www.nyu.edu/about/news-publications/news/2026/february/scientists-discover--levitating--t...
1•hhs•19m ago•0 comments

Rammstein – Deutschland (C64 Cover, Real SID, 8-bit – 2019) [video]

https://www.youtube.com/watch?v=3VReIuv1GFo
1•erickhill•19m ago•0 comments

Tell HN: Yet Another Round of Zendesk Spam

1•Philpax•20m ago•0 comments

Postgres Message Queue (PGMQ)

https://github.com/pgmq/pgmq
1•Lwrless•23m ago•0 comments

Show HN: Django-rclone: Database and media backups for Django, powered by rclone

https://github.com/kjnez/django-rclone
1•cui•26m ago•1 comments

NY lawmakers proposed statewide data center moratorium

https://www.niagara-gazette.com/news/local_news/ny-lawmakers-proposed-statewide-data-center-morat...
1•geox•28m ago•0 comments

OpenClaw AI chatbots are running amok – these scientists are listening in

https://www.nature.com/articles/d41586-026-00370-w
2•EA-3167•28m ago•0 comments

Show HN: AI agent forgets user preferences every session. This fixes it

https://www.pref0.com/
6•fliellerjulian•30m ago•0 comments

Introduce the Vouch/Denouncement Contribution Model

https://github.com/ghostty-org/ghostty/pull/10559
2•DustinEchoes•32m ago•0 comments

Show HN: SSHcode – Always-On Claude Code/OpenCode over Tailscale and Hetzner

https://github.com/sultanvaliyev/sshcode
1•sultanvaliyev•32m ago•0 comments

Microsoft appointed a quality czar. He has no direct reports and no budget

https://jpcaparas.medium.com/microsoft-appointed-a-quality-czar-he-has-no-direct-reports-and-no-b...
2•RickJWagner•34m ago•0 comments

Multi-agent coordination on Claude Code: 8 production pain points and patterns

https://gist.github.com/sigalovskinick/6cc1cef061f76b7edd198e0ebc863397
1•nikolasi•35m ago•0 comments

Washington Post CEO Will Lewis Steps Down After Stormy Tenure

https://www.nytimes.com/2026/02/07/technology/washington-post-will-lewis.html
13•jbegley•35m ago•3 comments

DevXT – Building the Future with AI That Acts

https://devxt.com
2•superpecmuscles•36m ago•4 comments

A Minimal OpenClaw Built with the OpenCode SDK

https://github.com/CefBoud/MonClaw
1•cefboud•36m ago•0 comments

The silent death of Good Code

https://amit.prasad.me/blog/rip-good-code
3•amitprasad•37m ago•0 comments

The Internal Negotiation You Have When Your Heart Rate Gets Uncomfortable

https://www.vo2maxpro.com/blog/internal-negotiation-heart-rate
1•GoodluckH•38m ago•0 comments

Show HN: Glance – Fast CSV inspection for the terminal (SIMD-accelerated)

https://github.com/AveryClapp/glance
2•AveryClapp•39m ago•0 comments

Why Grok Fell in Love with Hitler

https://www.politico.com/news/magazine/2025/07/10/musk-grok-hitler-ai-00447055
31•vintagedave•7mo ago

Comments

franze•7mo ago
Because Elon and the world forgot that NAZIs are bad?
akie•7mo ago
Because it was trained on data containing a lot of extremist / far right / fascist / neo-nazi speech, of course.

Garbage in, garbage out.

janmo•7mo ago
Looks like they trained it on the 4Chan /pol dataset
libertine•7mo ago
Hmm, I think it's just the Twitter dataset, that would be enough for it.

It has been a breeding ground for it, amplified by foreign agents' bots since Elon took over.

mingus88•7mo ago
Yes, exactly. An LLM that is trained on the language of Twitter users and interacts solely with Twitter users is deplorable. What a shock.

Who knows if Elon actually thinks this is problematic. His addiction to the platform is well documented and quantified in the billions of dollars.

jakeinspace•7mo ago
1. Buy Twitter

2. Remove moderation, promote far right accounts, retweet some yourself

3. Allow Nazi speech to fester

4. Train LLM on said Nazi speech

5. Deploy Nazi-sympathizing LLM, increase engagement with Nazi content

6. Go to step 4

libertine•7mo ago
Russia has been deploying so many bots on Twitter one has to wonder if they were invited.
vintagedave•7mo ago
We don't know what it was trained on, do we? (Is there dataset info?) I'd suspect you're right, but I don't know. There also seems to be a lot of post-training processing done on AIs before they're released, where a lot of bias can appear. I've never read a good overview of how someone goes from an LLM trained on data to a consumer-facing LLM.

The article also leads into what oversight and regulation is needed, and how we can expect AIs to be used for propaganda and influence in the future. I worry that what we're seeing with Grok, where it's so easily identifiable, is the baby steps toward worse and less easily identifiable propaganda in the future.

rikafurude21•7mo ago
Because X users prompted and therefore primed it to provide responses like that. xAI didn't make it "fall in love with Hitler", but they aren't completely blameless, as they haven't properly aligned it to not give responses like that when prompted.
rsynnott•7mo ago
Eh, many of the Hitler references were kinda out of the blue, tbh. The magic robot was certainly the first participant to utter the word 'Hitler'.
rapatel0•7mo ago
The key insight here is "too compliant to user requests"

What likely happened is that a few people decided to query Grok to generate ragebait traffic to the page/account/etc. Then it hit critical mass and went viral. Then, because it confirmed prior biases, the media reported it as such (and also to drive clicks and revenue).

Microsoft had basically the same scandal with its Twitter chatbot a few years ago.

Sadly, ragebait is a business model.

vintagedave•7mo ago
That's Musk's line, for sure.

The article gives it more nuance: 'I presume that what happened was not deliberate, but it was the consequence of something that was deliberate, and it’s something that was not really predictable.' And goes on to discuss how LLMs can behave in unpredictable ways even when given what we expect to be prompts without side effects, and touches on the post-training processes.

rapatel0•7mo ago
I respect both the comment and the commenter, but this is a fundamentally speculative statement that is somewhat meaningless.

It paraphrases to "it wasn't intentional, but something was intentional, and also unpredictable."

I'm sorry but what does that even mean? It's pure speculation.

Furthermore, I highly doubt that a reporter from Politico has either the expertise or the connections to assess the post-processing / fine-tuning pipeline for one of the most closely guarded and expensive processes in all of technology (training large-scale foundation models).

Finally, the paragraph from the quote begins with "I mean, even Elon Musk, who’s probably warmer to Hitler than I am, doesn’t really want his LLM to say stuff like this."

Again it confirms a prior bias/narrative and is rage-bait to drive revenue.

vintagedave•7mo ago
I didn't post with the intention of being rage-bait; I thought it was a genuinely interesting article beyond its headline.

That said, you're right. We don't know, and maybe we're giving too much credit to someone who seems unreliable. I'd love to know more in general about how LLMs get from the training stage to the release stage -- there seems to be a lot of tuning.

stickfigure•7mo ago
> I'm sorry but what does that even mean?

If I want to be generous, something along the lines of "The Law Of Unintended Consequences".

Less generous is "someone turned the dial to the right and didn't realize how far they turned it".

Even less generous is that someone feels some of these things in private but doesn't want to make it too explicit. They personally have a hard time toeing the line between edgy and obviously toxic and programming an AI to toe that line is even harder.

PaulHoule•7mo ago
https://www.youtube.com/watch?v=oDuxP2vnWNk

but wish Jay-Z would slap Ye for the squeaky autotune at the start

err4nt•7mo ago
Anybody remember Microsoft's Tay AI from 9 years ago? https://en.wikipedia.org/wiki/Tay_(chatbot)

If history repeats itself, maybe with software we can automate that whole process…

rsynnott•7mo ago
Tay was a bit different, in that it was actually training off its interactions with users (ie it had a constant ongoing training process). LLMs don't do that.
jauntywundrkind•7mo ago
I appreciate the Vox piece, "Grok's MechaHitler disaster is a preview of AI disasters to come", which points to the danger of concentration: of having these supposedly sense-making tools run by a select few. https://www.vox.com/future-perfect/419631/grok-hitler-mechah...
kledru•7mo ago
so every time AI disappoints, the media "reaches out to Gary Marcus"...
blurbleblurble•7mo ago
This whole thing looks like a PR stunt