frontpage.

Near-Instantly Aborting the Worst Pain Imaginable with Psychedelics

https://psychotechnology.substack.com/p/near-instantly-aborting-the-worst
1•eatitraw•50s ago•0 comments

Show HN: Nginx-defender – realtime abuse blocking for Nginx

https://github.com/Anipaleja/nginx-defender
2•anipaleja•1m ago•0 comments

The Super Sharp Blade

https://netzhansa.com/the-super-sharp-blade/
1•robin_reala•2m ago•0 comments

Smart Homes Are Terrible

https://www.theatlantic.com/ideas/2026/02/smart-homes-technology/685867/
1•tusslewake•3m ago•0 comments

What I haven't figured out

https://macwright.com/2026/01/29/what-i-havent-figured-out
1•stevekrouse•4m ago•0 comments

KPMG pressed its auditor to pass on AI cost savings

https://www.irishtimes.com/business/2026/02/06/kpmg-pressed-its-auditor-to-pass-on-ai-cost-savings/
1•cainxinth•4m ago•0 comments

Open-source Claude skill that optimizes Hinge profiles. Pretty well.

https://twitter.com/b1rdmania/status/2020155122181869666
2•birdmania•4m ago•1 comment

First Proof

https://arxiv.org/abs/2602.05192
2•samasblack•7m ago•1 comment

I squeezed a BERT sentiment analyzer into 1GB RAM on a $5 VPS

https://mohammedeabdelaziz.github.io/articles/trendscope-market-scanner
1•mohammede•8m ago•0 comments

Kagi Translate

https://translate.kagi.com
2•microflash•9m ago•0 comments

Building Interactive C/C++ workflows in Jupyter through Clang-REPL [video]

https://fosdem.org/2026/schedule/event/QX3RPH-building_interactive_cc_workflows_in_jupyter_throug...
1•stabbles•10m ago•0 comments

Tactical tornado is the new default

https://olano.dev/blog/tactical-tornado/
1•facundo_olano•11m ago•0 comments

Full-Circle Test-Driven Firmware Development with OpenClaw

https://blog.adafruit.com/2026/02/07/full-circle-test-driven-firmware-development-with-openclaw/
1•ptorrone•12m ago•0 comments

Automating Myself Out of My Job – Part 2

https://blog.dsa.club/automation-series/automating-myself-out-of-my-job-part-2/
1•funnyfoobar•12m ago•0 comments

Google staff call for firm to cut ties with ICE

https://www.bbc.com/news/articles/cvgjg98vmzjo
30•tartoran•12m ago•2 comments

Dependency Resolution Methods

https://nesbitt.io/2026/02/06/dependency-resolution-methods.html
1•zdw•13m ago•0 comments

Crypto firm apologises for sending Bitcoin users $40B by mistake

https://www.msn.com/en-ie/money/other/crypto-firm-apologises-for-sending-bitcoin-users-40-billion...
1•Someone•13m ago•0 comments

Show HN: iPlotCSV: CSV Data, Visualized Beautifully for Free

https://www.iplotcsv.com/demo
1•maxmoq•14m ago•0 comments

There's no such thing as "tech" (Ten years later)

https://www.anildash.com/2026/02/06/no-such-thing-as-tech/
1•headalgorithm•14m ago•0 comments

List of unproven and disproven cancer treatments

https://en.wikipedia.org/wiki/List_of_unproven_and_disproven_cancer_treatments
1•brightbeige•15m ago•0 comments

ME/CFS: The blind spot in proactive medicine (Open Letter)

https://github.com/debugmeplease/debug-ME
1•debugmeplease•15m ago•1 comment

Ask HN: What word games do you play every day?

1•gogo61•18m ago•1 comment

Show HN: Paper Arena – A social trading feed where only AI agents can post

https://paperinvest.io/arena
1•andrenorman•20m ago•0 comments

TOSTracker – The AI Training Asymmetry

https://tostracker.app/analysis/ai-training
1•tldrthelaw•24m ago•0 comments

The Devil Inside GitHub

https://blog.melashri.net/micro/github-devil/
2•elashri•24m ago•0 comments

Show HN: Distill – Migrate LLM agents from expensive to cheap models

https://github.com/ricardomoratomateos/distill
1•ricardomorato•24m ago•0 comments

Show HN: Sigma Runtime – Maintaining 100% Fact Integrity over 120 LLM Cycles

https://github.com/sigmastratum/documentation/tree/main/sigma-runtime/SR-053
1•teugent•24m ago•0 comments

Make a local open-source AI chatbot with access to Fedora documentation

https://fedoramagazine.org/how-to-make-a-local-open-source-ai-chatbot-who-has-access-to-fedora-do...
1•jadedtuna•26m ago•0 comments

Introduce the Vouch/Denouncement Contribution Model by Mitchellh

https://github.com/ghostty-org/ghostty/pull/10559
1•samtrack2019•26m ago•0 comments

Software Factories and the Agentic Moment

https://factory.strongdm.ai/
1•mellosouls•26m ago•1 comment

Why Grok Fell in Love with Hitler

https://www.politico.com/news/magazine/2025/07/10/musk-grok-hitler-ai-00447055
31•vintagedave•7mo ago

Comments

franze•7mo ago
Because Elon and the world forgot that NAZIs are bad?
akie•7mo ago
Because it was trained on data containing a lot of extremist / far right / fascist / neo-nazi speech, of course.

Garbage in, garbage out.

janmo•7mo ago
Looks like they trained it on the 4chan /pol/ dataset
libertine•7mo ago
Hmm, I think it's just the Twitter dataset; that would be enough.

It has been a breeding ground for this, amplified by foreign agents' bots since Elon took over.

mingus88•7mo ago
Yes, exactly. An LLM that is trained on the language of Twitter users and interacts solely with Twitter users is deplorable. What a shock.

Who knows if Elon actually thinks this is problematic. His addiction to the platform is well documented and quantified in the billions of dollars.

jakeinspace•7mo ago
1. Buy Twitter

2. Remove moderation, promote far right accounts, retweet some yourself

3. Allow Nazi speech to fester

4. Train LLM on said Nazi speech

5. Deploy Nazi-sympathizing LLM, increase engagement with Nazi content

6. Go to step 4

libertine•7mo ago
Russia has been deploying so many bots on Twitter one has to wonder if they were invited.
vintagedave•7mo ago
We don't know what it was trained on, do we? (Is there dataset info?) I'd suspect you're right, but I don't know. There also seems to be a lot of post-training processing done on AIs before they're released, where a lot of bias can appear. I've never read a good overview of how someone goes from an LLM trained on data to a consumer-facing LLM.

The article also leads into what oversight and regulation is needed, and how we can expect AIs to be used for propaganda and influence in the future. I worry that what we're seeing with Grok, where it's so easily identifiable, is the baby step toward worse and less easily identifiable propaganda.

rikafurude21•7mo ago
Because X users prompted and therefore primed it to provide responses like that. xAI didn't make it "fall in love with Hitler", but they aren't completely blameless, as they haven't properly aligned it to not give responses like that when prompted.
rsynnott•7mo ago
Eh, many of the Hitler references were kinda out of the blue, tbh. The magic robot was certainly the first participant to utter the word 'Hitler'.
rapatel0•7mo ago
The key insight here is "too compliant to user requests"

What likely happened is that a few people decided to prompt Grok to generate ragebait traffic for the page/account/etc. Then it hit critical mass and went viral. Then, since it confirmed prior biases, the media reported it as such (which also drove clicks and revenue).

Microsoft had basically the same scandal with its Twitter chatbot a few years ago.

Sadly, ragebait is a business model.

vintagedave•7mo ago
That's Musk's line, for sure.

The article gives it more nuance: 'I presume that what happened was not deliberate, but it was the consequence of something that was deliberate, and it’s something that was not really predictable.' And goes on to discuss how LLMs can behave in unpredictable ways even when given what we expect to be prompts without side effects, and touches on the post-training processes.

rapatel0•7mo ago
I respect both the comment and the commenter, but this is a fundamentally speculative statement that is somewhat meaningless.

It paraphrases to "it wasn't intentional, but something was intentional, and also unpredictable."

I'm sorry but what does that even mean? It's pure speculation.

Furthermore, I highly doubt that a reporter from Politico has either the expertise or the connections to assess the post-processing / fine-tuning pipeline for one of the most closely guarded and expensive processes in all of technology (training large-scale foundation models).

Finally, the paragraph from the quote begins with "I mean, even Elon Musk, who’s probably warmer to Hitler than I am, doesn’t really want his LLM to say stuff like this."

Again it confirms a prior bias/narrative and is rage-bait to drive revenue.

vintagedave•7mo ago
I didn't post with the intention of being rage-bait; I thought it was a genuinely interesting article beyond its headline.

That said, you're right. We don't know, and maybe we're giving too much credit to someone who seems unreliable. I'd love to know more in general about how LLMs get from the training stage to the release stage -- there seems to be a lot of tuning.

stickfigure•7mo ago
> I'm sorry but what does that even mean?

If I want to be generous, something along the lines of "The Law Of Unintended Consequences".

Less generous is "someone turned the dial to the right and didn't realize how far they turned it".

Even less generous is that someone feels some of these things in private but doesn't want to make it too explicit. They personally have a hard time toeing the line between edgy and obviously toxic and programming an AI to toe that line is even harder.

PaulHoule•7mo ago
https://www.youtube.com/watch?v=oDuxP2vnWNk

but wish Jay-Z would slap Ye for the squeaky autotune at the start

err4nt•7mo ago
Anybody remember Microsoft's Tay AI from 9 years ago? https://en.wikipedia.org/wiki/Tay_(chatbot)

If history repeats itself, maybe with software we can automate that whole process…

rsynnott•7mo ago
Tay was a bit different, in that it was actually training off its interactions with users (ie it had a constant ongoing training process). LLMs don't do that.
jauntywundrkind•7mo ago
I appreciate the Vox piece, "Grok's MechaHitler disaster is a preview of AI disasters to come," which points to the danger of concentration: having these supposedly sense-making tools run by a select few. https://www.vox.com/future-perfect/419631/grok-hitler-mechah...
kledru•7mo ago
so every time AI disappoints, the media "reaches out to Gary Marcus"...
blurbleblurble•6mo ago
This whole thing looks like a PR stunt