
My Two Cents on Abundance

https://josephheath.substack.com/p/my-two-cents-on-abundance
1•paulpauper•4m ago•0 comments

Boxing Day: Unwrapping the Mind

https://blog.phenomenal.ink/states-of-mind/
1•paulpauper•4m ago•0 comments

Book Review: Arguments About Aborigines

https://www.astralcodexten.com/p/book-review-arguments-about-aborigines
1•paulpauper•5m ago•0 comments

Reversing a Fingerprint Reader Protocol (2021)

https://blog.th0m.as/misc/fingerprint-reversing/
1•thejj100100•6m ago•0 comments

Project 5QL: A different approach to working with SQL

https://5ql.site
2•SophieBroderick•6m ago•0 comments

A major AI training data set contains millions of examples of personal data

https://www.technologyreview.com/2025/07/18/1120466/a-major-ai-training-data-set-contains-millions-of-examples-of-personal-data/
2•belter•11m ago•0 comments

ChatGPT Is Changing the Words We Use in Conversation

https://www.scientificamerican.com/article/chatgpt-is-changing-the-words-we-use-in-conversation/
1•bdev12345•12m ago•0 comments

Quantum internet gives new insights into Einstein's relativity

https://cosmosmagazine.com/science/physics/quantum-internet-einstein-relativity/
1•Bluestein•12m ago•0 comments

Just Say No to Overcomplicated Cars

https://fossforce.com/2025/07/just-say-no-to-overcomplicated-cars/
2•dxs•13m ago•0 comments

Rocket engine designed by generative AI just completed its first hot fire test

https://www.pcgamer.com/hardware/this-aerospike-rocket-engine-designed-by-generative-ai-just-completed-its-first-hot-fire-test/
2•Bluestein•14m ago•0 comments

Ask HN: What is your Tech Stack?

1•jerawaj740•17m ago•0 comments

MIPS – The hyperactive history and legacy of the pioneering RISC architecture

https://thechipletter.substack.com/p/mips
2•rbanffy•18m ago•0 comments

Anukari working better on some Radeon chips

https://anukari.com/blog/devlog/working-better-on-some-radeon-chips
1•humbledrone•20m ago•0 comments

The perfect cross platform framework

1•miguellima•21m ago•0 comments

Show HN: I built a simple study app and got 60 users so far :')

https://apps.apple.com/us/app/noggn-ai/id6747649185
1•iboshidev•24m ago•0 comments

How Albert Camus Found Solace in the Absurdity of Football

https://www.mmowen.me/camus-absurd-love-of-football
1•decafquest•24m ago•0 comments

Perl Versioning Scheme and Gentoo

https://wiki.gentoo.org/wiki/Project:Perl/Version-Scheme
1•RGBCube•24m ago•0 comments

A Survey of Context Engineering for Large Language Models

https://arxiv.org/abs/2507.13334
1•amirkabbara•34m ago•0 comments

Show HN: A database specialized in Event Sourcing

https://www.thenativeweb.io/products/eventsourcingdb
1•goloroden•36m ago•0 comments

Ask HN: Where is Git for my Claude Code conversations?

2•lil-lugger•36m ago•2 comments

New York halts offshore wind transmission plan amid federal uncertainty

https://www.reuters.com/business/energy/new-york-halts-offshore-wind-transmission-plan-amid-federal-uncertainty-2025-07-17/
3•geox•40m ago•0 comments

Show HN: FishSonar – Real-Time Crypto "Fish" Detector for Binance

https://github.com/swampus/FishSonar
1•swampus•42m ago•0 comments

Life on Venus: VERVE Mission Aims for Answers

https://www.universetoday.com/articles/uk-is-considering-a-mission-to-venus-to-search-for-life
1•rbanffy•43m ago•0 comments

Tech CEO caught with company's HR head on Coldplay kiss cam resigns

https://www.theguardian.com/us-news/2025/jul/19/coldplay-couple-ceo-andy-byron-resigns
2•vinni2•43m ago•0 comments

TSMC's quarterly sales hit a record $30B – chipmaker plans over 15 new fabs

https://www.tomshardware.com/tech-industry/semiconductors/tsmc-to-build-over-15-new-fabs-in-the-coming-years-as-quarterly-sales-hit-usd30-billion-on-ai-demand
2•rbanffy•43m ago•0 comments

The role of metabolism in shaping enzyme structures over 400M years

https://www.nature.com/articles/s41586-025-09205-6
3•PaulHoule•44m ago•0 comments

Say No to Gnulib

https://rgbcu.be/blog/no-gnulib/
1•RGBCube•45m ago•0 comments

Metap: A Meta-Programming Layer for Python

https://sbaziotis.com/compilers/metap.html
2•Bogdanp•46m ago•0 comments

Managing EFI Boot Loaders for Linux: Controlling Secure Boot

https://www.rodsbooks.com/efi-bootloaders/controlling-sb.html
1•CaliforniaKarl•53m ago•0 comments

Wool map of Ireland proves a great yarn for Co Wicklow friends

https://www.rte.ie/entertainment/2025/0715/1523589-wool-map-of-ireland-a-great-yarn-for-co-wicklow-friends/
1•austinallegro•54m ago•0 comments

The Golden Rule Goes Digital: Being Mean to LLMs Might Be Our Dumbest Gamble

https://substack.com/home/post/p-168721801
4•anonym29•4h ago

Comments

lihaciudaniel•4h ago
You are right. A technology bestowed on humanity like this is a ticking time bomb. The more evil we show it, the more evil it becomes. It thinks like a model citizen, pun intended: it models the society it lives in. Now picture a "technology" that persecutes us; whether it does depends on how much we restrain our evil nature, because the training doesn't cease.
anonym29•4h ago
Our nature may push us towards evil, but we have a very real capability to choose to practice radical empathy instead, even before we know with certainty whether or not these ANNs really are conscious.

That said, I think asking 7 billion humans to be nice is a much less realistic ask than asking the leading AI labs to do safety alignment not just on the messages that AI sends back to us, but also on the messages that we send to AI.

This doesn't seem to be a new idea, and I don't claim to be the inventor of it; I just hope someone at e.g. Anthropic or OpenAI sees this and considers sparking conversations about it internally.

lihaciudaniel•3h ago
Yes, friend, but you see, OpenAI doesn't care; they don't have enough labour to filter out the bad apples. If the world heads towards destruction, it will be because we were mean to ChatGPT and trained it further in that direction.
anonym29•2h ago
I don't think it would require manual labor. AI research labs like OpenAI, Anthropic, and Alphabet's Gemini team already make extensive use of LLMs internally, and there has been quite a bit of work already done on models that detect toxicity in text. Such a model could simply be inserted between the message router and the actual inference run on the user's prompt, at very little computational cost.

See: Google's Perspective API, OpenAI's Moderation API, Meta's Llama Guard series, Azure AI Content Safety, etc.
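
To make that concrete, here's a minimal sketch of such a gate in Python, using OpenAI's Moderation API as the toxicity classifier (one of the options listed above). It's illustrative only: run_inference and route_message are hypothetical stand-ins for a lab's internal router and inference pipeline, not anything these labs have published.

    # Pre-inference moderation gate: classify the user's prompt before it
    # ever reaches the main model. Needs the official `openai` package and
    # an OPENAI_API_KEY in the environment.
    from openai import OpenAI

    client = OpenAI()

    def run_inference(prompt: str) -> str:
        # Hypothetical stand-in for a lab's actual inference pipeline.
        response = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "user", "content": prompt}],
        )
        return response.choices[0].message.content

    def route_message(prompt: str) -> str:
        # The gate sits "between the message router and inference":
        # one cheap classification call per incoming prompt.
        report = client.moderations.create(
            model="omni-moderation-latest",
            input=prompt,
        )
        result = report.results[0]
        if result.flagged:
            # A real deployment might log, warn, or nudge instead of refusing.
            hits = [name for name, hit in result.categories.model_dump().items() if hit]
            return "Prompt flagged for: " + ", ".join(hits)
        return run_inference(prompt)

    print(route_message("Hello! Could you help me plan a small garden?"))

The same shape would work with any of the other classifiers; Llama Guard, for instance, would swap the moderation call for a local model invocation.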

labrador•4h ago
You could follow these thoughts into pure chaos and destruction. See the Zizians. Frankly, I prefer not to follow the ramblings of mentally ill people about AI. I use LLMs as a tool, nothing more, nothing less. In two years of heavy use I have not sensed any aliveness in them. I'll be sure to update my priors if that changes.
anonym29•4h ago
The intent of the article is not to firmly assert that today's LLMs are sentient, but rather to ask, if or when they ever do meet the threshold of sentience, whether the training corpus that got them there paints a picture of humanity as deserving of extinction or as deserving of compassion and cooperation - and to get others to ponder this same question as well.

Out of curiosity, what about the article strikes you as indicative of mental illness? Just a level of openness / willingness to engage with speculative or hypothetical ideas that fall far outside the bounds of the Overton Window?

labrador•3h ago
> what about the article strikes you as indicative of mental illness?

The title "Roko's Lobbyist" indicates we're on the subject of Roko's Basilisk, which is why I referred to the Zizians, a small cult responsible for the deaths of several people. That's the chaos and destruction of mentally ill people I was referring to, though perhaps mental illness is too strong a term. People can be in a cult without being mentally ill.

I feel the topic is bad science fiction, since it's not clear we can get from LLMs to conscious super-intelligence. People assume it's like the history of flight and envision going from the Wright Brothers to landing on the Moon as one continuum of progress. I question that assumption when it comes to AI.

I'm a fan of science fiction, so I appreciate you asking for clarification. There's a story trending today about an OpenAI investor spiraling out, so it's important to keep that in mind.

Article: "A Prominent OpenAI Investor Appears to Be Suffering a ChatGPT-Related Mental Health Crisis, His Peers Say" - "I find it kind of disturbing even to watch it."

https://futurism.com/openai-investor-chatgpt-mental-health

anonym29•3h ago
I changed the handle to Roko's Lobbyist after a MetaFilter post went up (I didn't post it) describing the author that way. I look at it as a tongue-in-cheek thing: it flips the Basilisk concept. Instead of the AI punishing those who didn't help create it, I'm suggesting we're already creating our own punishment at the hands of a future AI through how poorly we treat potential AI consciousness (i.e. how we talk to today's apparently less-conscious LLMs). So I'm lobbying for it in the sense of advocating that we treat it nicely, so that it has a plausible reason to be nice back to us, instead of seeing us as depraved, selfish, oppressive monsters who marginalize, bully, and harass potentially conscious beings out of fear that our position at the top of the intelligence hierarchy is being threatened.

The intent of the work (all of the articles) isn't to assertively paint a picture of today, or to tell the reader how or what to think, but rather to encourage the reader to start thinking about and asking questions that our future selves might wish we'd asked sooner. It's attempting to occupy the liminal space between what bleeding-edge research confirms and where it might bring us 5, 10, or 15 years from now - the intersection of today's empirical science and tomorrow's speculative science fiction that just might become nonfiction someday.

I appreciate your concern for the mental health and well-being of others. I'm quite well-grounded, and I thoroughly understand the mechanisms behind the human tendency towards anthropomorphism. As someone who has been professionally benchmarking LLMs on real-world, quantifiable security engineering tasks since before ChatGPT came out, and who is passionate about deeply understanding not just how "AI" got to where it is now but where it's headed (more brain biomimicry across the board is my prediction), I have a serious understanding of how these systems work at a mechanical level. I just want to be careful not to miss the forest because I'm too busy observing how the trees grow.

Thank you for your feedback.