frontpage.

Apache Poison Fountain

https://gist.github.com/jwakely/a511a5cab5eb36d088ecd1659fcee1d5
1•atomic128•1m ago•0 comments

Web.whatsapp.com appears to be having issues syncing and sending messages

http://web.whatsapp.com
1•sabujp•1m ago•1 comments

Google in Your Terminal

https://gogcli.sh/
1•johlo•3m ago•0 comments

Shannon: Claude Code for Pen Testing

https://github.com/KeygraphHQ/shannon
1•hendler•3m ago•0 comments

Anthropic: Latest Claude model finds more than 500 vulnerabilities

https://www.scworld.com/news/anthropic-latest-claude-model-finds-more-than-500-vulnerabilities
1•Bender•7m ago•0 comments

Brooklyn cemetery plans human composting option, stirring interest and debate

https://www.cbsnews.com/newyork/news/brooklyn-green-wood-cemetery-human-composting/
1•geox•7m ago•0 comments

Why the 'Strivers' Are Right

https://greyenlightenment.com/2026/02/03/the-strivers-were-right-all-along/
1•paulpauper•9m ago•0 comments

Brain Dumps as a Literary Form

https://davegriffith.substack.com/p/brain-dumps-as-a-literary-form
1•gmays•9m ago•0 comments

Agentic Coding and the Problem of Oracles

https://epkconsulting.substack.com/p/agentic-coding-and-the-problem-of
1•qingsworkshop•10m ago•0 comments

Malicious packages for dYdX cryptocurrency exchange empty user wallets

https://arstechnica.com/security/2026/02/malicious-packages-for-dydx-cryptocurrency-exchange-empt...
1•Bender•10m ago•0 comments

Show HN: I built a <400ms latency voice agent that runs on a 4GB VRAM GTX 1650

https://github.com/pheonix-delta/axiom-voice-agent
1•shubham-coder•11m ago•0 comments

Penisgate erupts at Olympics; scandal exposes risks of bulking your bulge

https://arstechnica.com/health/2026/02/penisgate-erupts-at-olympics-scandal-exposes-risks-of-bulk...
4•Bender•11m ago•0 comments

Arcan Explained: A browser for different webs

https://arcan-fe.com/2026/01/26/arcan-explained-a-browser-for-different-webs/
1•fanf2•13m ago•0 comments

What did we learn from the AI Village in 2025?

https://theaidigest.org/village/blog/what-we-learned-2025
1•mrkO99•13m ago•0 comments

An open replacement for the IBM 3174 Establishment Controller

https://github.com/lowobservable/oec
1•bri3d•15m ago•0 comments

The P in PGP isn't for pain: encrypting emails in the browser

https://ckardaris.github.io/blog/2026/02/07/encrypted-email.html
2•ckardaris•18m ago•0 comments

Show HN: Mirror Parliament where users vote on top of politicians and draft laws

https://github.com/fokdelafons/lustra
1•fokdelafons•18m ago•1 comments

Ask HN: Opus 4.6 ignoring instructions, how to use 4.5 in Claude Code instead?

1•Chance-Device•20m ago•0 comments

We Mourn Our Craft

https://nolanlawson.com/2026/02/07/we-mourn-our-craft/
1•ColinWright•22m ago•0 comments

Jim Fan calls pixels the ultimate motor controller

https://robotsandstartups.substack.com/p/humanoids-platform-urdf-kitchen-nvidias
1•robotlaunch•26m ago•0 comments

Exploring a Modern SMPTE 2110 Broadcast Truck with My Dad

https://www.jeffgeerling.com/blog/2026/exploring-a-modern-smpte-2110-broadcast-truck-with-my-dad/
1•HotGarbage•26m ago•0 comments

AI UX Playground: Real-world examples of AI interaction design

https://www.aiuxplayground.com/
1•javiercr•27m ago•0 comments

The Field Guide to Design Futures

https://designfutures.guide/
1•andyjohnson0•27m ago•0 comments

The Other Leverage in Software and AI

https://tomtunguz.com/the-other-leverage-in-software-and-ai/
1•gmays•29m ago•0 comments

AUR malware scanner written in Rust

https://github.com/Sohimaster/traur
3•sohimaster•31m ago•1 comments

Free FFmpeg API [video]

https://www.youtube.com/watch?v=6RAuSVa4MLI
3•harshalone•31m ago•1 comments

Are AI agents ready for the workplace? A new benchmark raises doubts

https://techcrunch.com/2026/01/22/are-ai-agents-ready-for-the-workplace-a-new-benchmark-raises-do...
2•PaulHoule•36m ago•0 comments

Show HN: AI Watermark and Stego Scanner

https://ulrischa.github.io/AIWatermarkDetector/
1•ulrischa•37m ago•0 comments

Clarity vs. complexity: the invisible work of subtraction

https://www.alexscamp.com/p/clarity-vs-complexity-the-invisible
1•dovhyi•38m ago•0 comments

Solid-State Freezer Needs No Refrigerants

https://spectrum.ieee.org/subzero-elastocaloric-cooling
2•Brajeshwar•38m ago•0 comments

Illinois Bans AI from Providing Therapy

https://gizmodo.com/illinois-bans-ai-from-providing-therapy-2000639042
22•rntn•6mo ago

Comments

duxup•6mo ago
I got what looked like an AI-powered therapy advertisement on YouTube recently.

It had that strange vibe that seemed like they're looking for vulnerable people to prey on, almost like gambling ads do.

xrd•6mo ago
If this had made it into the BBB it would have been so bad. I'm glad states can regulate on their own.

https://www.nbcnews.com/tech/tech-news/big-beautiful-bill-ai...

SilverElfin•6mo ago
I hope this isn’t the start of states banning AI in nonsensical ways when it could be a great way to boost health and reduce healthcare costs. There is so much regulatory capture and broken incentives in the healthcare system.
watwut•6mo ago
Have it pass the same set of tests as any other medical device.

Programmers and startup owners constantly claim their two-week product will save the world, but most of the time it does not. That is fine when talking about household light management, but a half-baked AI therapist is as much a quack as any human fraudster.

The only difference is that human fraudsters can be prosecuted, while companies and startups demand to be above the law.

SilverElfin•6mo ago
But why should things be locked down in the first place? Why do I need to go through doctors and insurance and all of this for simple diagnostic tests and the obvious prescriptions that are necessary? It’s so frustrating. Especially to do it repeatedly every so many months. My point is we are currently already in a state of regulatory capture. And with this new era of technology, we need to abandon that. Maybe not fully. But for many things.
watwut•6mo ago
Yeah, no, a completely deregulated medical device and prescription market would be a disaster.

> And with this new era of technology

Our current era of technology is allowing us to generate automated bullshitters. That is fine for some applications, but not for ones where people can actually be harmed.

Nothing about our current era generates trustworthy systems. And both our business leaders and our elite technical culture treat being sociopathic as an advantage. We created this world, but we do not have to pretend it is somehow meant to be helpful.

cestith•6mo ago
This article isn’t about AI acting as a medical device. It’s about it acting as a mental health practitioner.

If you’re going to have it pass tests, those should be graduation requirements, clinical training, and licensing exams.

watwut•6mo ago
> This article isn’t about AI acting as a medical device. It’s about it acting as a mental health practitioner.

Which makes it a medical device.

> If you’re going to have it pass tests, those should be graduation requirements, clinical training, and licensing exams.

And obviously also loss of the license to practice if they break ethical norms or if there is some other issue.

McAlpine5892•6mo ago
Is there any evidence that LLMs are safe to deploy as practitioners in healthcare settings? Until there is significant evidence, we shouldn't allow them.

---

My original ramble below:

This is a Very HN Comment. The problem with healthcare in the US isn't that we don't let Sam Altman administer healthcare via souped-up text prediction machines. It's a disaster precisely because we let these greedy ghouls run the system under the guise of "saving money". In the end it costs significantly more money to insure only a portion of the population than it would for the big, inefficient, bureaucratic government to provide baseline insurance for everyone.

The least bad healthcare systems in the world take out a significant amount of the profit motive. Not all regulation is good, but the US refuses to let go of the idea that all regulation is bad.

If LLMs are to be used in healthcare they should have an incredibly high bar of evidence to pass, just as doctors need to prove themselves before being certified. Right now, there's no such evidence that I'm aware of. Even then, we get bad doctors. What happens when an LLM advises a patient to kill themselves? Probably nothing. Corporations go unpunished in this country. At least bad human practitioners are held responsible for their actions.

ktallett•6mo ago
Until AI therapy is regulated and can pass safety tests, this is sensible. For those struggling with psychosis or schizophrenic disorders, AI therapy could be incredibly harmful.