frontpage.

Infinite Pixels

https://meyerweb.com/eric/thoughts/2025/08/07/infinite-pixels/
150•OuterVale•2h ago•35 comments

How to Sell if Your User is not the Buyer

https://writings.founderlabs.io/p/how-to-sell-if-your-user-is-not-the
26•mooreds•52m ago•13 comments

AWS Restored My Account: The Human Who Made the Difference

https://www.seuros.com/blog/aws-restored-account-plot-twist/
21•mhuot•40m ago•5 comments

Laptop Support and Usability (LSU): July 2025 Report from the FreeBSD Foundation

https://github.com/FreeBSDFoundation/proj-laptop/blob/main/monthly-updates/2025-07.md
37•grahamjperrin•2h ago•18 comments

Windows XP Professional

https://win32.run/
129•pentagrama•2h ago•83 comments

Monte Carlo Crash Course: Quasi-Monte Carlo

https://thenumb.at/QMC/
33•zote•3d ago•3 comments

New AI Coding Teammate: Gemini CLI GitHub Actions

https://blog.google/technology/developers/introducing-gemini-cli-github-actions/
152•michael-sumner•6h ago•62 comments

Arm Desktop: x86 Emulation

https://marcin.juszkiewicz.com.pl/2025/07/22/arm-desktop-emulation/
40•PaulHoule•3h ago•10 comments

We replaced passwords with something worse

https://blog.danielh.cc/blog/passwords
646•max__dev•13h ago•526 comments

Sweatshop Data Is Over

https://www.mechanize.work/blog/sweatshop-data-is-over/
18•whoami_nr•2h ago•8 comments

Building Bluesky Comments for My Blog

https://natalie.sh/posts/bluesky-comments/
3•g0xA52A2A•5m ago•0 comments

An LLM does not need to understand MCP

https://hackteam.io/blog/your-llm-does-not-care-about-mcp/
69•gethackteam•3h ago•80 comments

Global Trade Dynamics

https://alhadaqa.github.io/globaltradedynamics/
18•gmays•1h ago•2 comments

GoGoGrandparent (YC S16) Is Hiring Back End and Full-Stack Engineers

1•davidchl•4h ago

The Whispering Earring (Scott Alexander)

https://croissanthology.com/earring
65•ZeljkoS•5h ago•24 comments

Google confirms it has been hacked

https://www.forbes.com/sites/daveywinder/2025/08/07/google-confirms-it-has-been-hacked---user-data-stolen/
17•hidden_sheepman•10m ago•3 comments

Claude Code IDE integration for Emacs

https://github.com/manzaltu/claude-code-ide.el
713•kgwgk•1d ago•236 comments

Leonardo Chiariglione: “I closed MPEG on 2 June 2020”

https://leonardo.chiariglione.org/
182•eggspurt•5h ago•149 comments

Cracking the Vault: How we found zero-day flaws in HashiCorp Vault

https://cyata.ai/blog/cracking-the-vault-how-we-found-zero-day-flaws-in-authentication-identity-and-authorization-in-hashicorp-vault/
174•nihsy•9h ago•71 comments

Hopfield Networks Is All You Need (2020)

https://arxiv.org/abs/2008.02217
6•liamdgray•2d ago•1 comment

Budget Car Buyers Want Automakers to K.I.S.S

https://www.thedrive.com/news/budget-car-buyers-want-automakers-to-k-i-s-s
8•PaulHoule•29m ago•1 comment

PastVu: Historical Photographs on Current Maps

https://pastvu.com/?_nojs=1
31•lapetitejort•2d ago•3 comments

Let's stop pretending that managers and executives care about productivity

https://www.baldurbjarnason.com/2025/disingenuous-discourse/
22•speckx•1h ago•2 comments

Debounce

https://developer.mozilla.org/en-US/docs/Glossary/Debounce
114•aanthonymax•2d ago•60 comments

Baltimore Assessments Accidentally Subsidize Blight–and How We Can Fix It

https://progressandpoverty.substack.com/p/how-baltimore-assessments-accidentally
53•surprisetalk•3h ago•60 comments

Maybe we should do an updated Super Cars

https://spillhistorie.no/2025/07/31/maybe-we-should-do-an-updated-version/
17•Kolorabi•3h ago•3 comments

Project Hyperion: Interstellar ship design competition

https://www.projecthyperion.org
332•codeulike•19h ago•258 comments

Show HN: Aura – Like robots.txt, but for AI actions

https://github.com/osmandkitay/aura
26•OsmanDKitay•1d ago•21 comments

AI Ethics is being narrowed on purpose, like privacy was

https://nimishg.substack.com/p/ai-ethics-is-being-narrowed-on-purpose
138•i_dont_know_•4h ago•94 comments

Show HN: Kitten TTS – 25MB CPU-Only, Open-Source TTS Model

https://github.com/KittenML/KittenTTS
897•divamgupta•1d ago•343 comments

The Whispering Earring (Scott Alexander)

https://croissanthology.com/earring
65•ZeljkoS•5h ago

Comments

AndrewDucker•2h ago
It's a classic, and the recent rise of AI will hopefully make it a more widely-known one.
summa_tech•2h ago
A distant relative, no doubt, of Stanislaw Lem's "Automatthew's Friend" (1964): a perfectly rational, indestructible, selfless, well-meaning in-ear AI assistant. In the end, out of nothing but the deepest care for its owner's mental state in a hopeless situation, it advocates quick and efficient suicide.
CoopaTroopa•2h ago
"The parable of the earring was not about the dangers of using technology that wasn't Truly Part Of You, which would indeed have been the kind of dystopianism I dislike. It was about the dangers of becoming too powerful yourself."

https://web.archive.org/web/20121007235422/http://squid314.l...

bananaflag•1h ago
Thanks! Even though I have the whole Squid314 archive, I had forgotten about this follow-up.
AndrewDucker•1h ago
As I said in a comment on that post, 13 years ago: "any parable that's about being too powerful is almost necessarily also about technology, because it's technology that allows the average person to get that power"
Jun8•2h ago
Compare/contrast the Whispering Earring/LLM chat with The Room from Stalker; each is terrifying in its own way: one because it eventually coaxes you into becoming a shallow shell of yourself, the other because it plucks an unexpected wish from the deepest part of your psyche. I wonder what the Earring would advise if one were to ask it whether one should enter The Room.
abeppu•2h ago
I want someone to try building a variant that just gives you timely cues about generally good mental-health practices. Suggestions could be generated contextually by a local-only app that listens to you and your environment, and delivered to a wireless earbud. When you're in a situation that might cause you stress, it reminds you to take some deep breaths. When you're in a situation where you might be tempted to react with hostility, it suggests that you pause for a few seconds. When you've been sitting in front of your computer too long, it suggests that maybe you'd like to go for a short walk.

If the moral of the story is that having access to magically good advice is dangerous because it shifts us to habitual obedience ... can a similar device shift us to mental habits that are actually good for us?
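
A rough sketch of the kind of rule-to-cue mapping I have in mind, purely illustrative (the context fields, thresholds, and cue strings here are all hypothetical, not any real app):

  # Purely illustrative: a tiny context -> cue rule table.
  from dataclasses import dataclass
  from typing import Optional

  @dataclass
  class Context:
      stress: float          # hypothetical 0..1 score from on-device signals
      hostility: float       # hypothetical 0..1 score from on-device signals
      minutes_at_desk: int   # hypothetical activity-tracking value

  def pick_cue(ctx: Context) -> Optional[str]:
      """Return at most one gentle cue; stay silent otherwise."""
      if ctx.hostility > 0.7:
          return "Pause for a few seconds before responding."
      if ctx.stress > 0.7:
          return "Take a couple of slow, deep breaths."
      if ctx.minutes_at_desk > 90:
          return "Maybe go for a short walk."
      return None

  print(pick_cue(Context(stress=0.2, hostility=0.8, minutes_at_desk=30)))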

ryandv•1h ago
The moral of the story is that neocortical faculties (vaguely corresponding to what distinguishes modern humans) depend on free will. If you want merely to enthral yourself to the voices of the gods, a la Julian Jaynes' bicameral man, you can, but this is a regression to a prior stage of humanity's development: away from egoic, free-willed man and back toward a reactive automaton, merely a servant of (possibly digital) gods.
abeppu•1h ago
I think there's a meaningful difference between a tool that reminds you to take a beat before speaking and one that tells you what to say. For example, cues that help you avoid an impulsive reaction of anger are, I think, a step away from being a reactive automaton.
ryandv•1h ago
Anger is just another aspect of the human condition, and is absolutely justified in cases of grave injustice (case in point: Nazis, racism). It's not for some earring to decide when it is justly applied and when it is not; that is the prerogative of humanity.

In either case none of this cueing or prompting needs to be exogenous or originate from some external technology. The Eastern mystics have developed totally endogenous psychotechnologies that serve this same purpose, without the need to atrophy your psyche.

abeppu•28m ago
Absolutely, anger is sometimes justified. But people are also angry when, e.g., someone cuts them off in traffic. The initial feeling of anger may not be appropriate. A cue that helps you avoid reacting immediately from hostility isn't so much deciding whether anger is appropriate as giving you the space to make that judgement reflectively rather than impulsively. Even if anger is appropriate, the action you want to take on reflection may not be the first one that arises.

"The eastern mystics" managed to do a lot of things, but often with a large amount of dedicated practice. Extremely practiced meditators can also reach intense states of concentration, equanimity etc, but the fact that it's not strictly necessary to have supportive tools to develop these skills doesn't mean that supportive tooling wouldn't help a lot of people.

throwanem•12m ago
It is strictly necessary not to have supportive tools in order to develop these skills. Sentience and the ability to learn from experience are all that is essentially required. Past that there are no crutches and no shortcuts, because you have mistaken for disability the refusal to grow.
patcon•38m ago
My sensibility is that agency is about "noticing". The content of the information seems perhaps less important than the attention-allocation mechanism that brings something to our attention.

If you write all your own words, but without the ability to direct your attention to the thing that needed words conjured around it, did you really do anything important at all? (Yes, that's perhaps controversial :) )

tempodox•1h ago
I would recommend Steely Dan’s “Green Earrings” instead. No whispering required!

https://www.youtube.com/watch?v=3wvH1UzhiKk

And the original is fully analog.

JohnKemeny•58m ago
Scott Alexander Siskind is a hack and HN should stop obsessing over him and the rest of the EA cult.
rwnspace•51m ago
He has some great essays and research pieces and has fostered a generally nice community of people who grew out of LessWrong. There aren't many places online to talk about those things in a certain way without it devolving rapidly.
mock-possum•43m ago
Guess we’ll just have to take your word for it. I found this one to be a nice little read; it reminds me a bit of Borges.
dafelst•32m ago
What is EA in this context?
y-curious•22m ago
I had to ask AI (ironically); it means Effective Altruism in this context. I'm not really sure where the parent's hate for EA comes from, but I don't hang out in those circles.
tacitusarc•45m ago
I think this ignores the internal conflict in most people’s psyche. The simplest form of this is long-term vs. short-term thinking, but certainly our desires pull us in competing, sometimes opposite, directions.

Am I the me who loves cake or the me who wants to be in shape? Am I the me who wants to watch movies or who wants to write a book?

These are not simply different peaks of a given utility function, they are different utility functions entirely.

Soon after being put on, the whispering earring would go insane.

throwanem•28m ago
He warned himself?
y-curious•20m ago
This reminds me of another story that I saw posted on HN, one that has provided lots of fodder for idle conversations: Manna [1]. It's a less mystical version of the whispering earring.

1: https://marshallbrain.com/manna1