frontpage.

Modulation of Heat Shock Proteins Levels in Health and Disease

https://www.mdpi.com/2073-4409/14/13/979
1•PaulHoule•39s ago•0 comments

It took 45 years, but spreadsheet legend Mitch Kapor finally got his MIT degree

https://www.bostonglobe.com/2025/06/24/business/mitch-kapor-mit-degree-bill-aulet/
1•bookofjoe•1m ago•1 comment

All Trains in the USA are vulnerable to wireless RF command injection

https://www.cisa.gov/news-events/ics-advisories/icsa-25-191-10
1•neilwillgettoit•1m ago•2 comments

A deep dive into deeply recursive Go

https://mattermost.com/blog/a-deep-dive-into-deeply-recursive-go/
1•jupenur•2m ago•0 comments

Things I Learned Building an AI Tool After the Hype

https://www.indiehackers.com/post/tech/hitting-2m-arr-in-two-years-even-though-he-was-late-to-the-ai-party-zC8I9EvIpANvyntOTDzk
1•hansjan•3m ago•0 comments

The Empire of Intelligence: OpenAI's Power Map

https://newsletter.boundlessdiscovery.com/p/the-empire-of-intelligence-openai-s-power-map
1•handfuloflight•4m ago•0 comments

Potential Danger to Satellites from a 2032 Lunar Impact by Asteroid 2024 YR4

https://arxiv.org/abs/2506.11217
1•bikenaga•4m ago•0 comments

Billionaire Math

https://apenwarr.ca/log/20250711
2•avinassh•5m ago•0 comments

Datacenters feeling the heat as climate risk boils over

https://www.theregister.com/2025/07/11/climate_change_datacenters/
1•rntn•6m ago•0 comments

I still care about the code

https://martinfowler.com/articles/exploring-gen-ai/i-still-care-about-the-code.html
1•mpweiher•7m ago•0 comments

A Mental Model for C++ Coroutine

https://uvdn7.github.io/cpp-coro/
1•uvdn7•8m ago•0 comments

Win, lose, or draw: trends in English football match results

https://blog.engora.com/2025/06/english-football-data.html
1•Vermin2000•8m ago•0 comments

Hilbert spaces, Ricci traces: the singularity we should attend to

1•glitchprince•9m ago•0 comments

jank Is C++

https://jank-lang.org/blog/2025-07-11-jank-is-cpp/
1•Jeaye•10m ago•0 comments

Bay Area biotech co Jasper Therapeutics drug mishap leads to layoffs

https://www.sfgate.com/tech/article/bay-area-biotech-company-layoffs-drug-mishap-20765300.php
2•randycupertino•11m ago•0 comments

Building a Simple Router with OpenBSD

https://btxx.org/posts/openbsd-router/
2•Bogdanp•11m ago•0 comments

Get My SaaS to "Go Viral"

1•chany2•11m ago•0 comments

Spec Engineering and the New Code – Sgrove from OpenAI [video]

https://www.youtube.com/watch?v=8rABwKRsec4
1•dhorthy•14m ago•1 comment

Wacky history of Computer Scrabble [video]

https://www.youtube.com/watch?v=HP90knHlYqc
1•indrora•14m ago•0 comments

Show HN: Free app to estimate calorie burn based on MET

https://burnmeter.swimpeaks.com
1•bacdor•14m ago•0 comments

Air India Probe Puts Early Focus on Pilots' Actions and Plane's Fuel Switches

https://www.wsj.com/business/airlines/air-india-crash-probe-fuel-cut-3a711f39
1•cebert•17m ago•1 comment

Ask HN: What makes an AI system an "agent" vs. just software with if-then logic?

1•Jimmc414•17m ago•1 comment

Pythonic Guardrails for MCP Servers

https://github.com/codeintegrity-ai/tramlines-gateway
1•coderinsan•18m ago•0 comments

Stop Converting Your REST APIs to MCP

https://www.jlowin.dev/blog/stop-converting-rest-apis-to-mcp
2•cicdw•20m ago•0 comments

Subnautica studio co-founder says he's suing parent company Krafton

https://www.engadget.com/gaming/subnautica-studio-co-founder-says-hes-suing-parent-company-krafton-153412484.html
1•jtanderson•23m ago•0 comments

The Analog Art

https://www.joostrekveld.net/?p=1409
1•joebig•25m ago•0 comments

Ancient trees are dying faster than expected in Eastern Oregon

https://phys.org/news/2025-07-ancient-trees-dying-faster-eastern.html
1•bikenaga•26m ago•0 comments

Indeed and Glassdoor are cutting more than 1k jobs

https://www.engadget.com/ai/indeed-and-glassdoor-are-cutting-more-than-1000-jobs-190128210.html
3•taubek•28m ago•2 comments

Show HN: Infragram – C4 style interactive architecture diagrams for Terraform

https://marketplace.visualstudio.com/items?itemName=infragram.infragram
2•aqula•29m ago•0 comments

Gemini 2.5 Flash default time to first token increased from 2.0

https://www.thoughteddies.com/notes/2025/gemini-hidden-reasoning/
1•danielcorin•29m ago•0 comments

Why Grok Fell in Love with Hitler

https://www.politico.com/news/magazine/2025/07/10/musk-grok-hitler-ai-00447055
22•vintagedave•4h ago

Comments

franze•4h ago
Because Elon and the world forgot that NAZIs are bad?
akie•4h ago
Because it was trained on data containing a lot of extremist / far right / fascist / neo-nazi speech, of course.

Garbage in, garbage out.

janmo•4h ago
Looks like they trained it on the 4chan /pol/ dataset
libertine•4h ago
Hmm, I think it's just the Twitter dataset; that would be enough for it.

It has been a breeding ground for that kind of speech, amplified by foreign agents' bots since Elon took over.

mingus88•4h ago
Yes, exactly. An LLM that is trained on the language of Twitter users and interacts solely with Twitter users is deplorable. What a shock.

Who knows if Elon actually thinks this is problematic. His addiction to the platform is well documented and quantified in the billions of dollars.

vintagedave•3h ago
We don't know what it was trained on, do we? (Is there dataset info?) I'd suspect you're right, but I don't know. There also seems to be a lot of post-training processing done on models before they're released, where a lot of bias can appear. I've never read a good overview of how someone goes from an LLM trained on raw data to a consumer-facing LLM.

The article also leads into what oversight and regulation is needed, and how we can expect AIs to be used for propaganda and influence in future. I worry that what we're seeing with Grok, where it's so easily identifiable, are the baby steps to worse and less easily identifiable propaganda in future.
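
(For concreteness, a minimal sketch of the kind of post-training pipeline the thread is gesturing at: pretraining, supervised fine-tuning, preference tuning, then a deployment-time system prompt. The stage names and stub functions below are generic assumptions about common industry practice, not anything xAI has documented about Grok.)

```python
# Illustrative stubs only: generic industry practice for turning a base LLM
# into a consumer-facing assistant, assumed for discussion. Not xAI's pipeline.

def pretrain(corpus):
    """Stage 1 (stub): next-token prediction over a huge scraped corpus.
    Whatever biases are in the corpus end up in the base model."""
    return {"weights": "base", "corpus": corpus}

def supervised_finetune(model, curated_examples):
    """Stage 2 (stub): tune on human-written (prompt, ideal answer) pairs.
    The choice of 'ideal' answers is one place bias can be introduced."""
    model["weights"] = "sft"
    model["sft_data"] = curated_examples
    return model

def preference_tune(model, ranked_outputs):
    """Stage 3 (stub): RLHF/DPO-style tuning on human preference rankings.
    Rater instructions shape tone and what the model will refuse."""
    model["weights"] = "preference-tuned"
    model["preference_data"] = ranked_outputs
    return model

def deploy(model, system_prompt):
    """Stage 4 (stub): wrap the frozen model in a system prompt and guardrails.
    A one-line system-prompt change can visibly shift behaviour overnight."""
    return {"model": model, "system_prompt": system_prompt}

assistant = deploy(
    preference_tune(
        supervised_finetune(pretrain("web + social media text"), "curated Q&A"),
        "human preference rankings",
    ),
    system_prompt="You are a helpful assistant...",
)
```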

rikafurude21•4h ago
Because X users prompted and therefore primed it to provide responses like that. xAI didn't make it "fall in love with Hitler", but they aren't completely blameless, as they haven't properly aligned it to not give responses like that when prompted.
rsynnott•3h ago
Eh, many of the Hitler references were kinda out of the blue, tbh. The magic robot was certainly the first participant to utter the word 'Hitler'.
rapatel0•4h ago
The key insight here is "too compliant to user requests"

What likely happened is that a few people decided to prompt Grok into generating ragebait traffic for the page/account/etc. Then it reached critical mass and went viral. Then it confirmed prior biases, so the media reported it as such (and also drove clicks and revenue).

Microsoft had basically the same scandal with its Twitter chatbot a few years ago.

Sadly, ragebait is a business model.

vintagedave•3h ago
That's Musk's line, for sure.

The article gives it more nuance: 'I presume that what happened was not deliberate, but it was the consequence of something that was deliberate, and it’s something that was not really predictable.' It goes on to discuss how LLMs can behave in unpredictable ways even when given what we expect to be prompts without side effects, and touches on the post-training processes.

rapatel0•1h ago
I respect both the comment and the commenter, but this is a fundamentally speculative statement that is somewhat meaningless.

It paraphrases to "it wasn't intentional, but something was intentional, and also unpredictable."

I'm sorry but what does that even mean? It's pure speculation.

Furthermore, I highly doubt that a reporter from Politico has either the expertise or the connections to assess the post-processing / fine-tuning pipeline behind one of the most closely guarded and expensive processes in all of technology (training large-scale foundation models).

Finally, the paragraph from the quote begins with "I mean, even Elon Musk, who’s probably warmer to Hitler than I am, doesn’t really want his LLM to say stuff like this."

Again it confirms a prior bias/narrative and is rage-bait to drive revenue.

PaulHoule•4h ago
https://www.youtube.com/watch?v=oDuxP2vnWNk

but wish Jay-Z would slap Ye for the squeaky autotune at the start

err4nt•4h ago
Anybody remember Microsoft's Tay AI from 9 years ago? https://en.wikipedia.org/wiki/Tay_(chatbot)

If history repeats itself, maybe with software we can automate that whole process…

rsynnott•3h ago
Tay was a bit different, in that it was actually training off its interactions with users (ie it had a constant ongoing training process). LLMs don't do that.
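
(A toy contrast of the two deployment models described above; the 'Tay-style' online-learning loop and the 'frozen-LLM' loop below are my own simplification of that distinction, not either system's actual code.)

```python
# Toy contrast only: my own simplification of the two deployment models,
# not Tay's or Grok's actual architecture.

def tay_style_loop(model, messages):
    # Online learning: every interaction is folded back into training,
    # so a coordinated group of users can steer the model itself.
    for msg in messages:
        reply = model["respond"]([msg])
        model["train_on"]((msg, reply))   # weights keep changing after release
    return model

def frozen_llm_loop(model, messages):
    # Static weights: the model is frozen at release. Users only influence
    # the current context window, not the underlying parameters.
    context = []
    for msg in messages:
        context.append(msg)
        model["respond"](context)         # same weights on every call
    return model

# Minimal stub so the sketch runs end to end.
stub = {"respond": lambda ctx: "...", "train_on": lambda pair: None}
tay_style_loop(stub, ["hi", "repeat after me..."])
frozen_llm_loop(stub, ["hi", "repeat after me..."])
```
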
jauntywundrkind•1h ago
I appreciate the Vox piece, "Grok’s MechaHitler disaster is a preview of AI disasters to come," which points to the danger of concentration, of having these supposedly sense-making tools run by a select few. https://www.vox.com/future-perfect/419631/grok-hitler-mechah...