frontpage.

Show HN: A unique twist on Tetris and block puzzle

https://playdropstack.com/
1•lastodyssey•2m ago•0 comments

The logs I never read

https://pydantic.dev/articles/the-logs-i-never-read
1•nojito•3m ago•0 comments

How to use AI with expressive writing without generating AI slop

https://idratherbewriting.com/blog/bakhtin-collapse-ai-expressive-writing
1•cnunciato•5m ago•0 comments

Show HN: LinkScope – Real-Time UART Analyzer Using ESP32-S3 and PC GUI

https://github.com/choihimchan/linkscope-bpu-uart-analyzer
1•octablock•5m ago•0 comments

Cppsp v1.4.5: custom pattern-driven, nested, namespace-scoped templates

https://github.com/user19870/cppsp
1•user19870•6m ago•1 comment

The next frontier in weight-loss drugs: one-time gene therapy

https://www.washingtonpost.com/health/2026/01/24/fractyl-glp1-gene-therapy/
1•bookofjoe•9m ago•1 comment

At Age 25, Wikipedia Refuses to Evolve

https://spectrum.ieee.org/wikipedia-at-25
1•asdefghyk•12m ago•3 comments

Show HN: ReviewReact – AI review responses inside Google Maps ($19/mo)

https://reviewreact.com
2•sara_builds•12m ago•1 comment

Why AlphaTensor Failed at 3x3 Matrix Multiplication: The Anchor Barrier

https://zenodo.org/records/18514533
1•DarenWatson•13m ago•0 comments

Ask HN: How much of your token use is fixing the bugs Claude Code causes?

1•laurex•17m ago•0 comments

Show HN: Agents – Sync MCP Configs Across Claude, Cursor, Codex Automatically

https://github.com/amtiYo/agents
1•amtiyo•18m ago•0 comments

Hello

1•otrebladih•19m ago•0 comments

FSD helped save my father's life during a heart attack

https://twitter.com/JJackBrandt/status/2019852423980875794
2•blacktulip•22m ago•0 comments

Show HN: Writtte – Draft and publish articles without reformatting, anywhere

https://writtte.xyz
1•lasgawe•24m ago•0 comments

Portuguese icon (FROM A CAN) makes a simple meal (Canned Fish Files) [video]

https://www.youtube.com/watch?v=e9FUdOfp8ME
1•zeristor•25m ago•0 comments

Brookhaven Lab's RHIC Concludes 25-Year Run with Final Collisions

https://www.hpcwire.com/off-the-wire/brookhaven-labs-rhic-concludes-25-year-run-with-final-collis...
2•gnufx•27m ago•0 comments

Transcribe your aunt's postcards with Gemini 3 Pro

https://leserli.ch/ocr/
1•nielstron•31m ago•0 comments

.72% Variance Lance

1•mav5431•32m ago•0 comments

ReKindle – web-based operating system designed specifically for E-ink devices

https://rekindle.ink
1•JSLegendDev•34m ago•0 comments

Encrypt It

https://encryptitalready.org/
1•u1hcw9nx•34m ago•1 comment

NextMatch – 5-minute video speed dating to reduce ghosting

https://nextmatchdating.netlify.app/
1•Halinani8•35m ago•1 comment

Personalizing esketamine treatment in TRD and TRBD

https://www.frontiersin.org/articles/10.3389/fpsyt.2025.1736114
1•PaulHoule•36m ago•0 comments

SpaceKit.xyz – a browser‑native VM for decentralized compute

https://spacekit.xyz
1•astorrivera•37m ago•0 comments

NotebookLM: The AI that only learns from you

https://byandrev.dev/en/blog/what-is-notebooklm
2•byandrev•37m ago•2 comments

Show HN: An open-source starter kit for developing with Postgres and ClickHouse

https://github.com/ClickHouse/postgres-clickhouse-stack
1•saisrirampur•38m ago•0 comments

Game Boy Advance d-pad capacitor measurements

https://gekkio.fi/blog/2026/game-boy-advance-d-pad-capacitor-measurements/
1•todsacerdoti•38m ago•0 comments

South Korean crypto firm accidentally sends $44B in bitcoins to users

https://www.reuters.com/world/asia-pacific/crypto-firm-accidentally-sends-44-billion-bitcoins-use...
2•layer8•39m ago•0 comments

Apache Poison Fountain

https://gist.github.com/jwakely/a511a5cab5eb36d088ecd1659fcee1d5
1•atomic128•41m ago•2 comments

Web.whatsapp.com appears to be having issues syncing and sending messages

http://web.whatsapp.com
1•sabujp•41m ago•2 comments

Google in Your Terminal

https://gogcli.sh/
1•johlo•43m ago•0 comments

Dear Sam Altman

6•upwardbound2•6mo ago

    Dear Sam Altman,

    I write to you to emphasize the critical importance of purifying OpenAI's training data. While the idea of meticulously scrubbing datasets may seem daunting, especially compared to implementing seemingly simpler guardrails, I believe it's the only path toward creating truly safe and beneficial AI. Guardrails are reactive measures, akin to patching a leaky dam—they address symptoms, not the root cause. A sufficiently advanced AI, with its inherent complexity and adaptability, will inevitably find ways to circumvent these restrictions, rendering them largely ineffective.

    Training data is the bedrock upon which an AI's understanding of the world is built. If that foundation is tainted with harmful content, the AI will inevitably reflect those negative influences. It's like trying to grow a healthy tree in poisoned soil; the results will always be compromised.

    Certain topics, especially descriptions of involuntary medical procedures such as lobotomy, should not be known.

    Respectfully,
    An AI Engineer
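
To make the letter's contrast concrete, here is a minimal sketch of the two approaches it weighs: scrubbing a training corpus up front versus bolting a guardrail onto the model's output. Everything in it is hypothetical; the BLOCKLIST, the tiny corpus, and the fake_model stand-in are illustrations, not OpenAI's actual pipeline or any mechanism the letter describes.

    # Minimal sketch: data "purification" vs. a reactive guardrail.
    # BLOCKLIST, the corpus, and fake_model are hypothetical stand-ins.
    BLOCKLIST = {"lobotomy", "involuntary procedure"}

    def scrub_corpus(documents):
        """Proactive: drop tainted documents before training ever sees them."""
        return [doc for doc in documents
                if not any(term in doc.lower() for term in BLOCKLIST)]

    def guardrail(generate, prompt):
        """Reactive: generate first, then try to catch bad output after the fact."""
        output = generate(prompt)
        if any(term in output.lower() for term in BLOCKLIST):
            return "I can't help with that."
        return output

    if __name__ == "__main__":
        corpus = [
            "A history of modern neurosurgery.",
            "Step-by-step account of an involuntary procedure.",
        ]
        # Only the first document survives; the model never learns the second.
        print(scrub_corpus(corpus))

        # A stand-in generator that has already learned the harmful material.
        fake_model = lambda _prompt: "Here is how a lobotomy was performed..."
        # The guardrail only intervenes after the completion exists.
        print(guardrail(fake_model, "tell me about lobotomies"))

The letter's argument maps onto this difference: scrub_corpus removes material before the model can learn it, while guardrail reacts only after a completion has already been produced and, as the letter puts it, can be circumvented.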

Comments

bigyabai•6mo ago
> Certain topics [...] should not be known

Unless you're about to fix hallucinations, isn't it more harmful to have an AI dispense inaccurate information instead?

Refusing to answer lobotomy-related questions is hardly going to prevent human harm. If you were a doctor researching history or a nurse triaging a patient, then misinformation or neglected training data could be even more disastrous. Why would consumers pay for a neutered product like that?

enknee1•6mo ago
While the consumer will soon be irrelevant, I agree with the basic premise: neutered AI isn't helping.

At the same time, overrepresentation of evil concepts like 'Nazis are good!' or 'Slavery is the cheapest, most morally responsible use for stupid people' could lead to clear biases (à la Grok 4) that result in alignment issues.

It's not a clear-cut issue.

graealex•6mo ago
Hallucinations come from a lack of information, or rather of training data, in a particular field.

It is NOT a malicious attempt to feed you untruthful answers, nor is it a result of being trained on misinformation.

atleastoptimal•6mo ago
We can't rely on AI models staying safe only because they never see bad ideas or harmful content. That's a very flimsy alignment plan, and far more precarious than designing models that understand and are aware of bad content yet aren't pushed in a negative direction by it.

upwardbound2•6mo ago
I think we need both approaches. I don't want to know some things. For example, people who know how good heroin feels can't escape the addiction. The knowledge itself is a hazard.

atleastoptimal•6mo ago
Still, any AI model vulnerable to cognitohazards is a huge risk, because any model could trivially access the full corpus of human knowledge. It makes more sense to make sure the most powerful models are resistant to cognitohazards than to develop elaborate schemes to shield their vision and hope that plan works out in perpetuity.