frontpage.

Dear Sam Altman

5•upwardbound2•5h ago

    Dear Sam Altman,

    I write to you to emphasize the critical importance of purifying OpenAI's training data. While the idea of meticulously scrubbing datasets may seem daunting, especially compared to implementing seemingly simpler guardrails, I believe it's the only path toward creating truly safe and beneficial AI. Guardrails are reactive measures, akin to patching a leaky dam—they address symptoms, not the root cause. A sufficiently advanced AI, with its inherent complexity and adaptability, will inevitably find ways to circumvent these restrictions, rendering them largely ineffective.

    Training data is the bedrock upon which an AI's understanding of the world is built. If that foundation is tainted with harmful content, the AI will inevitably reflect those negative influences. It's like trying to grow a healthy tree in poisoned soil; the results will always be compromised.

    Certain topics, especially descriptions of involuntary medical procedures such as lobotomy, should not be known.

    Respectfully,
    An AI Engineer
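
The letter's central technical contrast is between inference-time guardrails and filtering the training corpus itself. Purely as an illustration of what document-level scrubbing could mean mechanically, here is a minimal Python sketch; the blocklist approach, its single term, and the function names are assumptions made for the example, not anything OpenAI has described.

    # Toy illustration of corpus-level scrubbing, not a real pipeline.
    # BLOCKLIST and the drop-the-whole-document policy are assumptions
    # used only to show the mechanics of filtering before training.
    BLOCKLIST = {"lobotomy"}  # hypothetical list of disallowed terms

    def is_clean(document: str) -> bool:
        """True if the document mentions none of the blocklisted terms."""
        text = document.lower()
        return not any(term in text for term in BLOCKLIST)

    def scrub(corpus: list[str]) -> list[str]:
        """Keep only documents that pass the blocklist check."""
        return [doc for doc in corpus if is_clean(doc)]

    if __name__ == "__main__":
        corpus = [
            "An overview of modern, consent-based neurosurgery.",
            "A step-by-step description of a lobotomy procedure.",
        ]
        print(scrub(corpus))  # only the first document is kept

A production filter would presumably rely on trained classifiers and human review rather than a literal keyword list; the keyword set here is only the crudest stand-in, which is roughly the objection the commenters below raise.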

Comments

bigyabai•5h ago
> Certain topics [...] should not be known

Unless you're about to fix hallucination, isn't it more harmful to have AI administer inaccurate information instead?

Refusing to answer lobotomy-related questions is hardly going to prevent human harm. If you were a doctor trying to research that history or a nurse triaging a patient, misinformation or gaps left in the training data could be even more disastrous. Why would consumers pay for a neutered product like that?

enknee1•2h ago
While the consumer will soon be irrelevant, I agree with the basic premise: neutered AI isn't helping.

At the same time, overrepresentation of evil concepts like 'Nazis are good!' or 'Slavery is the cheapest, most morally responsible use for stupid people' could lead to clear biases (à la Grok 4) that result in alignment issues.

It's not a clear-cut issue.

atleastoptimal•4h ago
We can't make safety depend on hoping that AI models are never exposed to bad ideas or harmful content. That's a very flimsy alignment plan, and far more precarious than designing models that understand and recognize bad content yet aren't pushed in a negative direction by it.
upwardbound2•3h ago
I think we need both approaches. I don't want to know some things. For example, people who know how good heroin feels can't escape the addiction. The knowledge itself is a hazard.
atleastoptimal•2h ago
Still, any AI model vulnerable to cognitohazards is a huge risk, because any such model can trivially access the full corpus of human knowledge. It makes more sense to ensure the most powerful models are resistant to cognitohazards than to build elaborate schemes to shield them from that content and hope the plan works out in perpetuity.

The Inerter: A Retrospective

https://www.annualreviews.org/content/journals/10.1146/annurev-control-053018-023917
1•teleforce•58s ago•0 comments

China Moves Forward with $167bn, 70 Gigawatt Dam

https://www.bloomberg.com/news/articles/2025-07-21/china-moves-ahead-with-167-billion-tibet-mega-dam-despite-risks
1•master_crab•8m ago•1 comments

AI model converts hospital records into text for better emergency care decisions

https://medicalxpress.com/news/2025-07-ai-hospital-text-emergency-decisions.html
1•PaulHoule•13m ago•0 comments

The future of climate change may not be what you think

https://www.readtangle.com/future-of-climate-change/
1•debo_•15m ago•1 comments

Show HN: NetXDP – Kernel-Level DDoS Protection and Traffic Manager with eBPF/XDP

1•gaurav1086•23m ago•0 comments

HTTP/1.1 Must Die – The Desync Endgame Begins

https://http1mustdie.com/
2•pabs3•24m ago•0 comments

The Epic Battle for AI Talent–With Exploding Offers, Secret Deals and Tears

https://www.wsj.com/tech/ai/meta-ai-recruiting-mark-zuckerberg-sam-altman-140d5861
1•brandonb•30m ago•0 comments

Hi guys, any thoughts on this project?

https://founder-hub-waitlist.vercel.app/
3•PaulKHO•32m ago•3 comments

Geocities Backgrounds

https://pixelmoondust.neocities.org/archives/archivedtiles
1•marcodiego•34m ago•0 comments

How higher education failed America's poor

https://www.washingtonpost.com/opinions/2025/07/20/college-degree-value-poor-inequality/
6•pseudolus•37m ago•3 comments

This lets you deploy your LLM agents into production with one click

https://agentainer.io/
1•cyw•41m ago•1 comments

Stem cells prioritize wound healing over hair growth

https://www.cell.com/cell-metabolism/fulltext/S1550-4131(25)00266-9
1•bookofjoe•43m ago•0 comments

Using Virtual Machines on macOS/Linux with Tart

https://developer.mamezou-tech.com/en/blogs/2024/02/12/tart-vm/
2•srid•44m ago•0 comments

Ask HN: What is the biggest waste of money?

4•alganet•53m ago•8 comments

Transfer.it – effortless file sharing, powered by MEGA

https://blog.mega.io/introducing-transfer-it
1•dotcoma•54m ago•2 comments

Maybe(?) Composable Continuation in C

https://old.reddit.com/r/C_Programming/comments/1m55ojy/maybe_composable_continuation_in_c/
1•Trung0246•57m ago•0 comments

Log by time, not by count

https://johnscolaro.xyz/blog/log-by-time-not-by-count
8•JohnScolaro•1h ago•3 comments

Thingiverse is cracking down on gun-related models using a new automated system

https://www.tomshardware.com/3d-printing/ghost-gun-proliferation-spurs-crackdown-at-thingverse-the-worlds-largest-3d-printer-model-design-repository-lawmakers-also-ask-3d-printer-vendors-to-create-ai-based-systems-to-detect-and-block-gun-prints
2•MrMember•1h ago•0 comments

China breakthrough in indium selenide (InSe) wafers with perfect stoichiometry

https://news.cgtn.com/news/2025-07-19/China-develops-new-method-to-mass-produce-high-quality-semiconductors-1F8iTEyEwVi/p.html
5•david927•1h ago•1 comments

Optics Are Monoids (2021)

https://www.haskellforall.com/2021/09/optics-are-monoids.html
2•xaedes•1h ago•0 comments

Scaling Internationalization in Nuxt and Vue.js: A New Approach with Intlayer

https://intlayer.org/doc/environment/nuxt-and-vue
1•aypineau•1h ago•0 comments

Europe has more heat deaths per year than the United States loses to gun deaths

https://www.perplexity.ai/search/europe-has-more-heat-deaths-pe-BDS6xdorS4.4x2WrCC9mAQ
9•fortran77•1h ago•8 comments

We don't notice slow improvement

https://notes.npilk.com/slow-improvement
1•LorenDB•1h ago•1 comments

The spectrum of magnetized turbulence in the interstellar medium

https://www.nature.com/articles/s41550-025-02551-5.epdf?sharing_token=wM5bZdIju0XdhKcnTZ0_1tRgN0jAjWel9jnR3ZoTv0OgP_DXXdpOGWFuVNgNCLNKHy1VbxP4Lomco02NasccgHkhmg47MAMGoLwvgvh0Z2DIujXkZsV6uV4j5MKE7V284PetS6ePChNnGjAmQ3ol5OxsGRxz-Ak92FhAwnjVM7AYjBuM1qmuSoe-AHwoOAjfej7nJVHW52S7mqyuA2DCU6pjX82dbZfLqIVdm0O2WXGNIvbHZHiHXOWfUrPF4vEu_8ofFryhT_NWxnRBzTeYuVni69L_oFj0R8wYIXWLyGf2xBT5Vp1fSuMlIIsY-lfbRdOd0q8q_BAKNxkXGtXqCSnf89S5Dk-4uZcGislKDF0%3D
2•cwmoore•1h ago•0 comments

How OpenElections Uses LLMs

https://thescoop.org/archives/2025/06/09/how-openelections-uses-llms/
1•LorenDB•1h ago•0 comments

Ask HN: Is there any good onboarding SaaS tool for iOS apps?

1•iboshidev•1h ago•0 comments

iMessage integration in Claude can hijack the model to do anything

https://www.generalanalysis.com/blog/imessage-stripe-exploit
20•rhavaeis•1h ago•10 comments

The First Photograph Ever Taken (1826)

https://www.openculture.com/2025/07/the-first-photograph-ever-taken.html
1•anyonecancode•1h ago•0 comments

Senators throw weight behind military right to repair

https://www.theregister.com/2025/07/08/senators_military_right_to_repair/
7•ohjeez•1h ago•0 comments

The New Bar for Engineers in 2025: AI-Native or Behind

https://zachwills.net/the-new-bar-for-engineers-in-2025-ai-native-or-behind/
2•zachwills•1h ago•0 comments