frontpage.

Why social apps need to become proactive, not reactive

https://www.heyflare.app/blog/from-reactive-to-proactive-how-ai-agents-will-reshape-social-apps
1•JoanMDuarte•57s ago•0 comments

How patient are AI scrapers, anyway? – Random Thoughts

https://lars.ingebrigtsen.no/2026/02/07/how-patient-are-ai-scrapers-anyway/
1•samtrack2019•1m ago•0 comments

Vouch: A contributor trust management system

https://github.com/mitchellh/vouch
1•SchwKatze•1m ago•0 comments

I built a terminal monitoring app and custom firmware for a clock with Claude

https://duggan.ie/posts/i-built-a-terminal-monitoring-app-and-custom-firmware-for-a-desktop-clock...
1•duggan•2m ago•0 comments

Tiny C Compiler

https://bellard.org/tcc/
1•guerrilla•3m ago•0 comments

Y Combinator Founder Organizes 'March for Billionaires'

https://mlq.ai/news/ai-startup-founder-organizes-march-for-billionaires-protest-against-californi...
1•hidden80•4m ago•1 comments

Ask HN: Need feedback on the idea I'm working on

1•Yogender78•4m ago•0 comments

OpenClaw Addresses Security Risks

https://thebiggish.com/news/openclaw-s-security-flaws-expose-enterprise-risk-22-of-deployments-un...
1•vedantnair•5m ago•0 comments

Apple finalizes Gemini / Siri deal

https://www.engadget.com/ai/apple-reportedly-plans-to-reveal-its-gemini-powered-siri-in-february-...
1•vedantnair•5m ago•0 comments

Italy Railways Sabotaged

https://www.bbc.co.uk/news/articles/czr4rx04xjpo
2•vedantnair•6m ago•0 comments

Emacs-tramp-RPC: high-performance TRAMP back end using MsgPack-RPC

https://github.com/ArthurHeymans/emacs-tramp-rpc
1•fanf2•7m ago•0 comments

Nintendo Wii Themed Portfolio

https://akiraux.vercel.app/
1•s4074433•11m ago•1 comments

"There must be something like the opposite of suicide "

https://post.substack.com/p/there-must-be-something-like-the
1•rbanffy•14m ago•0 comments

Ask HN: Why doesn't Netflix add a “Theater Mode” that recreates the worst parts?

2•amichail•14m ago•0 comments

Show HN: Engineering Perception with Combinatorial Memetics

1•alan_sass•21m ago•2 comments

Show HN: Steam Daily – A Wordle-like daily puzzle game for Steam fans

https://steamdaily.xyz
1•itshellboy•22m ago•0 comments

The Anthropic Hive Mind

https://steve-yegge.medium.com/the-anthropic-hive-mind-d01f768f3d7b
1•spenvo•23m ago•0 comments

Just Started Using AmpCode

https://intelligenttools.co/blog/ampcode-multi-agent-production
1•BojanTomic•24m ago•0 comments

LLM as an Engineer vs. a Founder?

1•dm03514•25m ago•0 comments

Crosstalk inside cells helps pathogens evade drugs, study finds

https://phys.org/news/2026-01-crosstalk-cells-pathogens-evade-drugs.html
2•PaulHoule•26m ago•0 comments

Show HN: Design system generator (mood to CSS in <1 second)

https://huesly.app
1•egeuysall•26m ago•1 comments

Show HN: 26/02/26 – 5 songs in a day

https://playingwith.variousbits.net/saturday
1•dmje•27m ago•0 comments

Toroidal Logit Bias – Reduce LLM hallucinations 40% with no fine-tuning

https://github.com/Paraxiom/topological-coherence
1•slye514•29m ago•1 comments

Top AI models fail at >96% of tasks

https://www.zdnet.com/article/ai-failed-test-on-remote-freelance-jobs/
5•codexon•29m ago•2 comments

The Science of the Perfect Second (2023)

https://harpers.org/archive/2023/04/the-science-of-the-perfect-second/
1•NaOH•30m ago•0 comments

Bob Beck (OpenBSD) on why vi should stay vi (2006)

https://marc.info/?l=openbsd-misc&m=115820462402673&w=2
2•birdculture•34m ago•0 comments

Show HN: a glimpse into the future of eye tracking for multi-agent use

https://github.com/dchrty/glimpsh
1•dochrty•35m ago•0 comments

The Optima-l Situation: A deep dive into the classic humanist sans-serif

https://micahblachman.beehiiv.com/p/the-optima-l-situation
2•subdomain•35m ago•1 comments

Barn Owls Know When to Wait

https://blog.typeobject.com/posts/2026-barn-owls-know-when-to-wait/
1•fintler•35m ago•0 comments

Implementing TCP Echo Server in Rust [video]

https://www.youtube.com/watch?v=qjOBZ_Xzuio
1•sheerluck•35m ago•0 comments

Ask HN: Have you used an LLM for grief support?

3•mettakindness•2mo ago
My wife of 14 years recently told me she wants to end our marriage. One of the main reasons is that she has felt overwhelmed in a long‑term caregiving role: I have generalized anxiety disorder, and although I’ve been in counseling for years (EMDR, family‑systems work), take an SSRI, meditate daily, and avoid alcohol/drugs, I still sometimes panic when I think I might upset someone. I’m actively working on it, but this news has been devastating.

Emotionally, it feels similar to grief after a major loss. I've been crying a lot, have little appetite, and feel physically sick. I'm in touch with friends and my therapist, but I've also been turning to LLMs for supplemental support, and, surprisingly, it has been very helpful. It's strange to feel comforted by a machine, but it has helped me emotionally.

I'm curious how others see this.

Have you used an LLM for grief processing or therapy? What was helpful, what wasn't, and what risks are there?

I'm not treating it as a replacement for friends or a professional counselor; I'm just wondering whether LLMs can be safely used as a supplement during a painful period.

Comments

incomingpain•2mo ago
I haven't really. I ask AI deep religious questions pretty often, which might help in your situation. There are therapy AI startups: https://www.talk2us.ai/ or https://lotustherapist.com/

The other option: go local. It's private; ask it whatever you want. Nobody will ever know.
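
For illustration only, here is a minimal sketch of what "going local" can look like. The thread doesn't name a tool; the example below assumes Ollama is installed, a model has already been fetched with `ollama pull llama3`, and its /api/chat endpoint is listening on the default local port. The model name and prompt are placeholders.

    # Hypothetical sketch: chatting with a locally hosted model via Ollama's HTTP API.
    # Assumes Ollama is running locally and `ollama pull llama3` has been done.
    import requests

    reply = requests.post(
        "http://localhost:11434/api/chat",   # Ollama's default local endpoint
        json={
            "model": "llama3",
            "messages": [
                {"role": "user", "content": "I'm going through a painful separation. Can you just listen?"}
            ],
            "stream": False,                 # ask for one complete reply, not a token stream
        },
        timeout=120,
    )
    print(reply.json()["message"]["content"])  # the conversation never leaves your machine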

mettakindness•2mo ago
Thank you. I'll take a look at the sites you mentioned.

For the record, I've been using standard ChatGPT 5.1, and even without a therapy-specific system prompt, it works quite well.

Just to clarify, when I mentioned "safely used as a supplement", I was thinking from a holistic emotional standpoint. I came across the post on "Chatbot Psychosis" (https://news.ycombinator.com/item?id=46045674) and it got me pondering...

incomingpain•2mo ago
>Just to clarify, when I mentioned "safely used as a supplement", I was thinking from a holistic emotional standpoint. I came across the post on "Chatbot Psychosis" (https://news.ycombinator.com/item?id=46045674) and it got me pondering...

What I think is happening there: a model might only reasonably have a 65,000-token context window. You quickly use it all up, and the chat doesn't tell you it's truncating; by default it drops content from the middle of the conversation.

Eventually your chat is silently dropping context: the message you think you're sending the LLM is X, Y, Z, but what it actually processes is X, Z, which gives a different answer than it would if Y were still there.
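
A minimal sketch of that kind of silent middle-truncation, purely for illustration: the function name, the toy one-token-per-message tokenizer, and the keep-from-both-ends strategy are assumptions, not how any particular client actually trims its history.

    # Toy illustration: when the history exceeds the budget, keep messages from
    # both ends and silently drop whatever is left in the middle.
    def drop_middle(messages, budget, count_tokens):
        if sum(count_tokens(m) for m in messages) <= budget:
            return messages                       # everything still fits
        head, tail = [], []
        lo, hi, used, from_front = 0, len(messages) - 1, 0, True
        while lo <= hi:
            m = messages[lo] if from_front else messages[hi]
            cost = count_tokens(m)
            if used + cost > budget:
                break                             # the remaining middle gets dropped
            if from_front:
                head.append(m); lo += 1
            else:
                tail.append(m); hi -= 1
            used += cost
            from_front = not from_front
        return head + tail[::-1]

    one_each = lambda m: 1                        # toy tokenizer: one "token" per message
    print(drop_middle(["X", "Y", "Z"], budget=2, count_tokens=one_each))
    # -> ['X', 'Z']  -- Y vanished silently, so the model answers without it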

My suggestion: start a new chat regularly.