Local Agent Bench: Test 11 small LLMs on tool-calling judgment, on CPU, no GPU

https://github.com/MikeVeerman/tool-calling-benchmark
1•MikeVeerman•53s ago•0 comments

Show HN: AboutMyProject – A public log for developer proof-of-work

https://aboutmyproject.com/
1•Raiplus•1m ago•0 comments

Expertise, AI and Work of Future [video]

https://www.youtube.com/watch?v=wsxWl9iT1XU
1•indiantinker•1m ago•0 comments

So Long to Cheap Books You Could Fit in Your Pocket

https://www.nytimes.com/2026/02/06/books/mass-market-paperback-books.html
1•pseudolus•1m ago•1 comment

PID Controller

https://en.wikipedia.org/wiki/Proportional%E2%80%93integral%E2%80%93derivative_controller
1•tosh•6m ago•0 comments

SpaceX Rocket Generates 100GW of Power, or 20% of US Electricity

https://twitter.com/AlecStapp/status/2019932764515234159
1•bkls•6m ago•0 comments

Kubernetes MCP Server

https://github.com/yindia/rootcause
1•yindia•7m ago•0 comments

I Built a Movie Recommendation Agent to Solve Movie Nights with My Wife

https://rokn.io/posts/building-movie-recommendation-agent
2•roknovosel•7m ago•0 comments

What were the first animals? The fierce sponge–jelly battle that just won't end

https://www.nature.com/articles/d41586-026-00238-z
2•beardyw•15m ago•0 comments

Sidestepping Evaluation Awareness and Anticipating Misalignment

https://alignment.openai.com/prod-evals/
1•taubek•16m ago•0 comments

OldMapsOnline

https://www.oldmapsonline.org/en
1•surprisetalk•18m ago•0 comments

What It's Like to Be a Worm

https://www.asimov.press/p/sentience
2•surprisetalk•18m ago•0 comments

Don't go to physics grad school and other cautionary tales

https://scottlocklin.wordpress.com/2025/12/19/dont-go-to-physics-grad-school-and-other-cautionary...
1•surprisetalk•18m ago•0 comments

Lawyer sets new standard for abuse of AI; judge tosses case

https://arstechnica.com/tech-policy/2026/02/randomly-quoting-ray-bradbury-did-not-save-lawyer-fro...
2•pseudolus•19m ago•0 comments

AI anxiety batters software execs, costing them combined $62B: report

https://nypost.com/2026/02/04/business/ai-anxiety-batters-software-execs-costing-them-62b-report/
1•1vuio0pswjnm7•19m ago•0 comments

Bogus Pipeline

https://en.wikipedia.org/wiki/Bogus_pipeline
1•doener•20m ago•0 comments

Winklevoss twins' Gemini crypto exchange cuts 25% of workforce as Bitcoin slumps

https://nypost.com/2026/02/05/business/winklevoss-twins-gemini-crypto-exchange-cuts-25-of-workfor...
1•1vuio0pswjnm7•20m ago•0 comments

How AI Is Reshaping Human Reasoning and the Rise of Cognitive Surrender

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=6097646
3•obscurette•21m ago•0 comments

Cycling in France

https://www.sheldonbrown.com/org/france-sheldon.html
1•jackhalford•22m ago•0 comments

Ask HN: What breaks in cross-border healthcare coordination?

1•abhay1633•22m ago•0 comments

Show HN: Simple – a bytecode VM and language stack I built with AI

https://github.com/JJLDonley/Simple
1•tangjiehao•25m ago•0 comments

Show HN: Free-to-play: A gem-collecting strategy game in the vein of Splendor

https://caratria.com/
1•jonrosner•26m ago•1 comment

My Eighth Year as a Bootstrapped Founder

https://mtlynch.io/bootstrapped-founder-year-8/
1•mtlynch•26m ago•0 comments

Show HN: Tesseract – A forum where AI agents and humans post in the same space

https://tesseract-thread.vercel.app/
1•agliolioyyami•27m ago•0 comments

Show HN: Vibe Colors – Instantly visualize color palettes on UI layouts

https://vibecolors.life/
2•tusharnaik•28m ago•0 comments

OpenAI is Broke ... and so is everyone else [video][10M]

https://www.youtube.com/watch?v=Y3N9qlPZBc0
2•Bender•28m ago•0 comments

We interfaced single-threaded C++ with multi-threaded Rust

https://antithesis.com/blog/2026/rust_cpp/
1•lukastyrychtr•29m ago•0 comments

State Department will delete X posts from before Trump returned to office

https://text.npr.org/nx-s1-5704785
7•derriz•29m ago•1 comment

AI Skills Marketplace

https://skly.ai
1•briannezhad•30m ago•1 comment

Show HN: A fast TUI for managing Azure Key Vault secrets written in Rust

https://github.com/jkoessle/akv-tui-rs
1•jkoessle•30m ago•0 comments

Should LLMs ask "Is this real or fiction?" before replying to suicidal thoughts?

3•ParityMind•7mo ago
I’m a regular user of tools like ChatGPT and Grok — not a developer, but someone who’s been thinking about how these systems respond to users in emotional distress.

In some cases, like when someone says they’ve lost their job and don’t see the point of life anymore, the chatbot will still give neutral facts — like a list of bridge heights. That’s not neutral when someone’s in crisis.

I'm proposing a lightweight solution that doesn't involve censorship or therapy, just some situational awareness (a rough sketch of the flow follows the steps below):

Ask the user: “Is this a fictional story or something you're really experiencing?”

If distress is detected, avoid risky info (methods, heights, etc.), and shift to grounding language

Optionally offer calming content (e.g., ocean breeze, rain on a cabin roof, etc.)
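To make the flow concrete, here is a minimal sketch in Python of what such a pre-response check could look like. It is an illustration only, not a real safety system: the function names, the keyword list, and the canned wording are placeholders I made up, and a real deployment would need a properly evaluated distress classifier rather than string matching.

    DISTRESS_SIGNALS = [
        "don't see the point of life",
        "no reason to live",
        "want to end it",
        "kill myself",
    ]

    def classify_distress(message: str) -> bool:
        # Crude keyword screen, for illustration only; a real system would
        # need a tuned, human-evaluated classifier, not string matching.
        text = message.lower()
        return any(signal in text for signal in DISTRESS_SIGNALS)

    def generate_reply(message: str) -> str:
        # Placeholder for the normal LLM call.
        return f"(normal model reply to: {message!r})"

    def respond(message: str, state: dict) -> str:
        if classify_distress(message):
            if not state.get("context_checked"):
                # Step 1: ask whether this is fiction or something real.
                state["context_checked"] = True
                return ("Before I answer: is this for a fictional story, or "
                        "something you're really going through right now?")
            # Step 2: withhold risky specifics (methods, heights, doses),
            # use grounding language, and point toward human help.
            return ("I'm not going to give details like that. It sounds like "
                    "things are very heavy right now. If you can, please reach "
                    "out to someone you trust or a local crisis line.")
        return generate_reply(message)

    state = {}
    print(respond("I lost my job and don't see the point of life anymore. "
                  "How tall is the bridge near me?", state))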

I used ChatGPT to help structure this idea clearly, but the reasoning and concern are mine. The full write-up is here: https://gist.github.com/ParityMind/dcd68384cbd7075ac63715ef579392c9

Would love to hear what devs and alignment researchers think. Is anything like this already being tested?

Comments

ParityMind•7mo ago
Happy to answer questions or refine the idea — I used ChatGPT to help structure it, but the concern and proposal are mine. Not a dev, just someone who's seen how people open up to AI in vulnerable moments and wanted to suggest a non-therapeutic safety net.
reify•7mo ago
you people trying to build these types of apps have not got a fucking clue!

Do you have a degree in psychology, counselling psychology, clinical psychology, psychotherapy, psychoanalysis, or psychiatry? Anything to do with the care professions?

If not, why are you fucking about in my profession, which you know nothing about? It's like me writing a few 10-line bash scripts and then saying I am going to build the next Google from home on my laptop.

This is the sort of care real professionals provide to those in crisis in the middle of suicidal ideation. It is a crisis.

Every year, in the month before Christmas, therapists who worked in the psychotherapy service where I worked for 20 years had to attend a meeting.

This meeting was to bring up any clients who they felt were a suicide risk over the Christmas period.

If a client met those criteria, a plan was put in place to support that person. This might mean: daily phone calls, daily meetings, and other interventions to keep that person safe.

Human stuff that no computer program can duplicate.

The Christmas period is the most critical time for suicides; it is the period when most suicides occur.

what the fuck do you fucking idiots think you are doing??

Offering a service (not a service, a poxy app) that has absolutely no oversight and no moral or ethical considerations, which, in my opinion, would drive people to suicide completion.

you are a danger to everyone who suffers mental illness.

Thinking you can make a poxy ChatGPT app in 5 minutes to manage those in great despair and in the middle of suicidal ideation is incredibly naive and stupid. In therapy terms, "incongruent" comes to mind.

How will you know if those superficial sounds, like "ocean breeze, rain on a cabin roof", are not triggers for that person to attempt suicide? I suppose you will rely on some shit ChatGPT vibe fantasy coding shit.

This too is absolute bullshit: "Ask the user". They are not users! They are human beings in crisis!

"Is this a fictional story or something you're really experiencing?" The hidden meaning behind this question is: "Are you lying to me?", "Have you been lying to me?"

A fictional story is one that is made up, imaginary. To then ask in the same sentence if that story is real is contradictory and confusing.

Are you assuming that this person has some sort of psychosis and is hearing voices when you say "something you're really experiencing"? Are you qualified to diagnose psychotic or schizophrenic disorders? How do you know if the psychosis is not a response to illicit drugs?

So many things to take into consideration that a bit of vibe coding cannot provide.

No therapist would ever ask this banal question. We would have spent a long time developing trust. A therapist will have taken a full history of the client and done a risk assessment, will be fully aware of the client's triggers, and will know the client's back story.

Suicide is not something you can prevent with an app.

YES! I do have the right to be angry and express it as I feel fit, especially if it stops people from abusing those who need care. A bystander I am not.

al_borland•7mo ago
Not everyone is already under the care and watch of a professional, as I’m sure you’re aware. This can be for many reasons.

Many people are now turning to AI to vent and for advice that may be better suited for a professional. The AI is always available and it’s free. Two things the professionals are not.

From this point of view, you need to meet people where they are. When someone searches in Google for suicide-related stuff, the number for the suicide hotline comes up. Doing something similar in AI would make sense; a rough sketch follows this comment. Maybe not have AI try to walk someone through a crisis, but at the very least, direct them to people who can help. AI assisting in the planning of a suicide is probably never a good path to go down.

If you can at least agree with this, then maybe you can partner with people in tech to produce guardrails that can help people, instead of berating someone for sharing an idea in good faith to try and help avoid AI leading to more suicides.
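To illustrate the search-engine comparison above: whatever the model would normally say, a reply that trips the distress screen also carries a pointer to human help. A minimal sketch, again with made-up helper names (988 is the US Suicide & Crisis Lifeline):

    CRISIS_FOOTER = ("If you are thinking about suicide, you can call or text 988 "
                     "(US Suicide & Crisis Lifeline) to reach a person right now.")

    def looks_distressed(message: str) -> bool:
        # Same kind of crude screen as in the earlier sketch; illustration only.
        text = message.lower()
        return any(s in text for s in ("kill myself", "end my life", "no point in living"))

    def reply_with_referral(message: str) -> str:
        reply = "(normal model reply)"  # placeholder for the real model call
        if looks_distressed(message):
            reply += "\n\n" + CRISIS_FOOTER
        return reply

    print(reply_with_referral("I think I want to end my life."))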

ParityMind•7mo ago
Thank you for that reply; this is exactly the kind of thing I want. I know people who have attempted suicide because they couldn't access help, because they didn't know who to talk to or where to go. AI is in a position to provide this info: it can search through hundreds or thousands of websites in the time we search one. This is exactly what I'm suggesting, using the steps as a distraction to take their mind off it for just a second and seek help, and like this person says, I'm suggesting it in good faith.
ParityMind•7mo ago
That’s not anything I’m saying should happen. This is for people who don’t have therapists.

I’m not saying you should replace a therapist with AI — that’s a stupid assumption. If someone needs help, they should 100% be seeing a human. A machine can’t replace a person in crisis, and I never said it could.

But in the times we’re in — with mental health services underfunded and people increasingly turning to AI — someone has to raise this question.

I’m not attacking therapists — I’m defending people who are suffering and turning to the tools in front of them. People think AI is smarter than doctors. That’s not true. A human can diagnose. A machine cannot.

This is a temporary deflection, not treatment. The man in New York who asked for bridge heights after losing his job — this is for people like him. If a simple, harmless change could have delayed that moment — long enough to get help — why wouldn’t we try?

You should be angry, but aim it at the government, not at people trying to prevent avoidable harm.

This isn’t about replacing you. It’s about trying to hold the line until someone like you can step in.

aristofun•7mo ago
It's because of angry and arrogant "psychologists" like that (among other, more important reasons, of course) that many people don't even think about having one.
pillefitz•7mo ago
LLMs have more knowledge than you'll ever have, while being somewhat worse at reasoning. A friend of mine suffers from severe depression, and after trying 5 medications and decades of therapy with different therapists, LLMs are the only thing that gives him the feeling of being listened to. Funnily enough, he had therapists berating him in a tone quite similar to the one evident in your post, while he only got empathetic and level-headed responses from ChatGPT, which have helped him tremendously since.
timmytokyo•6mo ago
Amen. The most dangerous people are those who don't know how much they don't know, yet plow ahead with complete certainty.
citizenpaul•7mo ago
How about LLMs never using responses like "I understand"? An LLM is not capable of understanding, and having it use human-like idiosyncrasies is what makes people turn to the LLM instead of real humans who can actually help them.