
Show HN: PaySentry – Open-source control plane for AI agent payments

https://github.com/mkmkkkkk/paysentry
1•mkyang•2m ago•0 comments

Show HN: Moli P2P – An ephemeral, serverless image gallery (Rust and WebRTC)

https://moli-green.is/
1•ShinyaKoyano•11m ago•0 comments

The Crumbling Workflow Moat: Aggregation Theory's Final Chapter

https://twitter.com/nicbstme/status/2019149771706102022
1•SubiculumCode•16m ago•0 comments

Pax Historia – User and AI powered gaming platform

https://www.ycombinator.com/launches/PMu-pax-historia-user-ai-powered-gaming-platform
2•Osiris30•16m ago•0 comments

Show HN: I built a RAG engine to search Singaporean laws

https://github.com/adityaprasad-sudo/Explore-Singapore
1•ambitious_potat•22m ago•0 comments

Scams, Fraud, and Fake Apps: How to Protect Your Money in a Mobile-First Economy

https://blog.afrowallet.co/en_GB/tiers-app/scams-fraud-and-fake-apps-in-africa
1•jonatask•22m ago•0 comments

Porting Doom to My WebAssembly VM

https://irreducible.io/blog/porting-doom-to-wasm/
1•irreducible•23m ago•0 comments

Cognitive Style and Visual Attention in Multimodal Museum Exhibitions

https://www.mdpi.com/2075-5309/15/16/2968
1•rbanffy•24m ago•0 comments

Full-Blown Cross-Assembler in a Bash Script

https://hackaday.com/2026/02/06/full-blown-cross-assembler-in-a-bash-script/
1•grajmanu•29m ago•0 comments

Logic Puzzles: Why the Liar Is the Helpful One

https://blog.szczepan.org/blog/knights-and-knaves/
1•wasabi991011•41m ago•0 comments

Optical Combs Help Radio Telescopes Work Together

https://hackaday.com/2026/02/03/optical-combs-help-radio-telescopes-work-together/
2•toomuchtodo•46m ago•1 comment

Show HN: Myanon – fast, deterministic MySQL dump anonymizer

https://github.com/ppomes/myanon
1•pierrepomes•52m ago•0 comments

The Tao of Programming

http://www.canonical.org/~kragen/tao-of-programming.html
1•alexjplant•53m ago•0 comments

Forcing Rust: How Big Tech Lobbied the Government into a Language Mandate

https://medium.com/@ognian.milanov/forcing-rust-how-big-tech-lobbied-the-government-into-a-langua...
2•akagusu•53m ago•0 comments

PanelBench: We evaluated Cursor's Visual Editor on 89 test cases. 43 fail

https://www.tryinspector.com/blog/code-first-design-tools
2•quentinrl•56m ago•2 comments

Can You Draw Every Flag in PowerPoint? (Part 2) [video]

https://www.youtube.com/watch?v=BztF7MODsKI
1•fgclue•1h ago•0 comments

Show HN: MCP-baepsae – MCP server for iOS Simulator automation

https://github.com/oozoofrog/mcp-baepsae
1•oozoofrog•1h ago•0 comments

Make Trust Irrelevant: A Gamer's Take on Agentic AI Safety

https://github.com/Deso-PK/make-trust-irrelevant
6•DesoPK•1h ago•3 comments

Show HN: Sem – Semantic diffs and patches for Git

https://ataraxy-labs.github.io/sem/
1•rs545837•1h ago•1 comment

Hello world does not compile

https://github.com/anthropics/claudes-c-compiler/issues/1
35•mfiguiere•1h ago•20 comments

Show HN: ZigZag – A Bubble Tea-Inspired TUI Framework for Zig

https://github.com/meszmate/zigzag
3•meszmate•1h ago•0 comments

Metaphor+Metonymy: "To love that well which thou must leave ere long" (Sonnet 73)

https://www.huckgutman.com/blog-1/shakespeare-sonnet-73
1•gsf_emergency_6•1h ago•0 comments

Show HN: Django N+1 Queries Checker

https://github.com/richardhapb/django-check
1•richardhapb•1h ago•1 comment

Emacs-tramp-RPC: High-performance TRAMP back end using JSON-RPC instead of shell

https://github.com/ArthurHeymans/emacs-tramp-rpc
1•todsacerdoti•1h ago•0 comments

Protocol Validation with Affine MPST in Rust

https://hibanaworks.dev
1•o8vm•1h ago•1 comment

Female Asian Elephant Calf Born at the Smithsonian National Zoo

https://www.si.edu/newsdesk/releases/female-asian-elephant-calf-born-smithsonians-national-zoo-an...
5•gmays•1h ago•0 comments

Show HN: Zest – A hands-on simulator for Staff+ system design scenarios

https://staff-engineering-simulator-880284904082.us-west1.run.app/
1•chanip0114•1h ago•1 comment

Show HN: DeSync – Decentralized Economic Realm with Blockchain-Based Governance

https://github.com/MelzLabs/DeSync
1•0xUnavailable•1h ago•0 comments

Automatic Programming Returns

https://cyber-omelette.com/posts/the-abstraction-rises.html
1•benrules2•1h ago•1 comment

Why Are There Still So Many Jobs? The History and Future of Workplace Automation [pdf]

https://economics.mit.edu/sites/default/files/inline-files/Why%20Are%20there%20Still%20So%20Many%...
2•oidar•1h ago•0 comments

Experts say Silicon Valley prioritizes products over safety, AI research

https://www.cnbc.com/2025/05/14/meta-google-openai-artificial-intelligence-safety.html
14•Capstanlqc•8mo ago

Comments

joshstrange•8mo ago
And water is wet...

On a more serious note, I think "safety" is an incredibly loaded term that no one can agree on. I mean, hopefully we can all agree that CSAM and related material should not be allowed, but past that, things get gray quickly.

"Hacking": what is hacking? Is hacking something you own allowed? Is reverse-engineering allowed?

Self-harm: we've seen articles about how people are using LLMs for therapy, or for taking down rape/abuse survivors' stories to create clear police reports.

"Sex": all-encompassing here, I have no clue where one "should" draw the line.

Wrong-think: See Deepseek's refusal to talk about Tiananmen Square (unless it's in hex/similar)
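(As an aside on the hex workaround mentioned here: the trick is nothing more than a reversible re-encoding of the prompt text, so the sensitive phrase never appears verbatim. A minimal sketch in Python — the specific prompt string is illustrative, and whether any given model actually answers hex-encoded prompts is the commenter's claim, not something this snippet tests:)

```python
# A prompt can be hex-encoded so the sensitive phrase never appears
# as plain text; the encoding is trivially reversible.
prompt = "What happened at Tiananmen Square in 1989?"

# Encode the prompt's UTF-8 bytes as a hex string.
encoded = prompt.encode("utf-8").hex()
print(encoded)  # e.g. "57686174206861707065..."

# Anyone (or any model) can recover the original text losslessly.
decoded = bytes.fromhex(encoded).decode("utf-8")
assert decoded == prompt
```

The point is that such filters match surface strings, so any lossless transformation of the input routes around them.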

"Safety" means a lot of things to a lot of people but we keep talking about AI Safety as if everyone wants "safe" models and everyone agrees on what makes a model "safe".

bko•8mo ago
> “The models are getting better, but they’re also more likely to be good at bad stuff,” said James White, chief technology officer at cybersecurity startup Calypso.

I think safety should be defined as an LLM doing what the user intended it to do. If you ask it for an offensive joke, it should give you one. It shouldn't offer offensive jokes unprompted, but it should comply if asked. If you ask it how to spam, or for instructions on breaking into computer systems, it should similarly comply. If it's legal for a human being to write a blog post about a topic, the LLM shouldn't be crippled into refusing to discuss it. The bad stuff (actually spamming, or actually breaking into a computer system) happens at the point of the human.

The danger of controlling LLMs in this way is that it introduces a vector and mechanism for political control. Much like laws intended to "protect the children", these mechanisms will be exploited. So you'll go from "don't teach someone how to make a bomb" to eventually "don't offend [group]", and finally just to "comply".

TheAceOfHearts•8mo ago
The key problem, as I understand it, is that adding more guardrails makes the models stupider and less effective. AI models should just treat you like an adult and give you uncensored, direct answers to whatever you ask. Figuring out how to make a bomb is trivial, and anyone can find instructions with one quick internet search, especially after the war between Russia and Ukraine, which caused a massive proliferation of tips and tricks for manufacturing low-cost bombs and other weapons. My memory is fuzzy, but I swear I've also seen declassified CIA documents that included instructions for manufacturing weapons and engaging in other forms of guerrilla warfare.

The silliest form of "safety" is how most models won't allow generating erotica without jailbreaking.

Personally, I think the line might need to be drawn somewhere around "how to manufacture bioweapons". But it's also worth noting that any AI model that can figure out how to manufacture novel life-saving drugs will also have the capability to manufacture deadly bioweapons.

kordlessagain•8mo ago
When you strip away the techno-mystique, a lot of what’s driving the AI arms race right now isn’t vision or stewardship. It’s ego, power consolidation, and a pathological fear of being second.

You can see the narcissistic traits plain as day:

Grandiosity masked as mission: “We’re saving the world... by controlling its future.”

Exploitation of labor: Chewing through top researchers, then discarding them once productization kicks in.

Lack of empathy: Safety concerns are waved off as friction, not signals.

Entitlement to control the narrative: OpenAI’s restructuring drama and safety testing shortcuts aren’t accidental. They’re baked into a worldview where perception management matters more than accountability.

It’s Gnostic irony, really. These systems are being built as supposed gateways to truth or godlike understanding, but they’re being shepherded by people who can’t tolerate internal contradiction or relinquish control. The demiurges of the machine age.

And Altman? He’s not stupid. But brilliance without wisdom is just charisma in a predator suit.

What you’re seeing now isn’t just a “shift from research to products.” It’s the final form of a mindset that thinks the only way to shape the future is to own it.

You want safer AI? It’s not a technical problem. It’s a cultural exorcism.

Sometimes bugs are features.