frontpage.

Happy 20th Birthday, Django

https://www.djangoproject.com/weblog/2025/jul/13/happy-20th-birthday-django/
3•davepeck•6m ago•0 comments

Flox: A virtual environment and package manager all in one

https://github.com/flox/flox
1•saikatsg•8m ago•0 comments

Naming Software Teams

https://staysaasy.com/management/2025/07/06/team-names.html
1•kiyanwang•9m ago•0 comments

Apple, Masimo spar over Apple Watch import ban at US appeals court

https://www.reuters.com/legal/government/apple-masimo-spar-over-apple-watch-import-ban-us-appeals-court-2025-07-07/
1•CharlesW•11m ago•0 comments

Princeton study maps 200k years of Human–Neanderthal interbreeding

https://www.sciencedaily.com/releases/2025/07/250713032519.htm
1•Amezarak•11m ago•0 comments

How o3 and Grok 4 Accidentally Vindicated Neurosymbolic AI

https://garymarcus.substack.com/p/how-o3-and-grok-4-accidentally-vindicated
1•NotInOurNames•13m ago•0 comments

Learning to never give up by following your passion

https://thelabyrinthoftimesedge.com
1•ATiredGoat•15m ago•1 comment

Clashes between web and X11 colors in the CSS color scheme

https://en.wikipedia.org/wiki/X11_color_names
1•mmoogle•18m ago•0 comments

GLP-1s Are Breaking Life Insurance

https://www.glp1digest.com/p/how-glp-1s-are-breaking-life-insurance
5•alexslobodnik•19m ago•0 comments

Httplite: HTTP parser single-header library written in 50 lines of C

https://github.com/xyurt/httplite
2•thunderbong•21m ago•0 comments

'Europe Must Ban American Big Tech and Create a European Silicon Valley'

https://www.tilburguniversity.edu/magazine/overview/europe-must-ban-american-big-tech-and-create-a-european-silicon-valley
2•taubek•23m ago•1 comment

LLMs for Drug-Drug Interaction Prediction: A Comprehensive Comparison

https://arxiv.org/abs/2502.06890
1•stacktrust•23m ago•0 comments

Warfront Nations – Free Web Based Strategy Game

https://warfront-nations.com/
1•overDos3•24m ago•0 comments

Hungary's oldest library fighting to save 100k books from a beetle infestation

https://www.nbcnews.com/world/hungary/hungary-pannonhalma-archabbey-beetle-infestation-rcna218539
7•rntn•26m ago•2 comments

Chapeaugraphy

https://en.wikipedia.org/wiki/Chapeaugraphy
3•arittr•28m ago•0 comments

Exploiting All Google KernelCTF Instances and Debian 12 with a 0-Day for $82k

https://syst3mfailure.io/rbtree-family-drama/
1•todsacerdoti•28m ago•0 comments

Linux Kernel Pipapo Set Double Free LPE

https://ssd-disclosure.com/ssd-advisory-linux-kernel-pipapo-set-double-free-lpe/
1•todsacerdoti•34m ago•0 comments

HarfBuzz Study: Introducing HarfRust

https://docs.google.com/document/d/1aH_waagdEM5UhslQxCeFEb82ECBhPlZjy5_MwLNLBYo/preview?tab=t.0#heading=h.rwkk1hotbpzb
2•phonon•35m ago•0 comments

The Taming of Power by Bertrand Russell

https://www.theatlantic.com/magazine/archive/1938/10/the-taming-of-power/653474/
3•Teever•37m ago•1 comment

Gravity Inspired ML Model

https://github.com/henrivuorinen/GAM
2•henrijv•40m ago•0 comments

How to build a new chip architecture, ft. Nvidia

https://chipinsights.substack.com/p/how-to-build-a-new-chip-architecture
3•bharathw30•45m ago•1 comment

Strap In, Vision Pro Owners

https://spiral.spyglass.org/p/strap-in-vision-pro-owners
11•wslh•46m ago•1 comment

Recraft: Image Generation for Designers

https://www.recraft.ai/
1•pipase•46m ago•0 comments

Large Language Models Are Not Stable Recommender Systems (2023)

https://arxiv.org/abs/2312.15746
2•wslh•48m ago•0 comments

There is a magic bullet that could make us all live longer

https://www.newscientist.com/article/2485021-youve-been-sold-a-giant-myth-when-it-comes-to-improving-your-health/
2•Breadmaker•51m ago•1 comment

TypeScript 5.9 Beta

https://devblogs.microsoft.com/typescript/announcing-typescript-5-9-beta/
2•wslh•53m ago•1 comment

Selective Separation of SiO2 and SnO2 Particles in the Submicron Range

https://www.mdpi.com/2674-0516/4/3/19
2•PaulHoule•53m ago•0 comments

He went missing on Vancouver Island. A whistle and a sledge got him home

https://www.cbc.ca/news/canada/british-columbia/missing-hiker-della-falls-dallin-beaumier-1.7583066
2•colinprince•55m ago•0 comments

Hiding in plain sight – Mount namespaces

https://haxrob.net/hiding-in-plain-sight-mount-namespaces/
1•haxrob•56m ago•0 comments

The Measurement of the Microblogosphere (2025 Update)

https://stylestitches.substack.com/p/the-measurement-of-the-microblogosphere-2e4
1•thefiene•57m ago•0 comments

AI therapy bots fuel delusions and give dangerous advice, Stanford study finds

https://arstechnica.com/ai/2025/07/ai-therapy-bots-fuel-delusions-and-give-dangerous-advice-stanford-study-finds/
32•pseudolus•8h ago

Comments

42lux•6h ago
I am bipolar and I help run a group. We've already lost some people to chatbots that fueled either a manic or a depressive episode.
sherdil2022•4h ago
Lost as in ‘not meeting anymore since they are using chatbots instead’ or ‘took their lives’?
42lux•3h ago
Both, but it's mostly not the therapy chatbots or plain ChatGPT; those are bad enough. It's these dumbass AI girlfriend/boyfriend bots that run on uncensored small models. They get unhinged really fast.
joules77•5h ago
It's a bit like talking about the quality of pastoral care you get at Church. You can get a wide spectrum of results.

Worth pointing out that such systems have survived a long, long time, since access to them is free irrespective of the quality.

giingyui•5h ago
Therapists that work at an institution that makes millions off training therapists say that free therapy is a bad thing.

Being less snarky: there is a monumental conflict of interest here that makes the study worthless.

seanhunter•5h ago
Here is the paper. https://arxiv.org/abs/2504.18412

Literally none of the authors are therapists. They are all researchers.

The conflict of interest is entirely made up by you.

giingyui•4h ago
How exactly can they determine that it’s bad to use AI therapy bots if they are not therapists?
_vertigo•4h ago
So your take is that if they are therapists, it’s a conflict of interest, and if they aren’t therapists, they’re not qualified to make the assessment?
giingyui•4h ago
That is correct. I don’t think this study can be made in a reliable way.
colinmorelli•4h ago
This is an interesting take. By this perspective, it's essentially impossible to ever gauge the efficacy of AI in doing anything, because the people who know how to measure the quality of that thing are also the people who will be displaced by showing the AI can do that thing. In fact, you could probably argue that every study ever is worthless, because studies are generally performed by people who know the subject matter, and it's basically impossible to be unbiased on a topic if you're also highly knowledgeable about said topic.

In reality, what matters is the methodology of the study. If the study's methodology is sound, and its results can be reproduced by others, then it is generally considered to be a good study. That's the whole reason we publish methodologies and results: so others can critique and verify. If you think this study is bad, explain why. The whole document is there for you to review.

m3047•1h ago
I think you are correct, and incorrect. However: set and setting. Another of Lanier's observations, which he relates to LLMs, is the Boeing "smart" stall preventer which crashed two 737 MAXes (corrected from my original "Dreamliners").

Who can argue with a stall preventer, right? What one can argue with, and what has been exposed, is the observation that information about the operation of the stall preventer, training on it, and even the ability to effectively control it depended on how much the airline was willing to pay for this necessary feature.

So in reality, what matters is studying the methodology of set and setting, not how the pieces of the crashed aircraft ended up where they did.

colinmorelli•1h ago
I'm not exactly sure how this relates to my comment above. An analysis of an airline crash and a study are not the same thing.

As it relates to study design, controlling for set and setting are part of the methodology. For example, most drug studies are double-blinded so that neither patients nor clinicians are aware of whether the patient is getting the drug or not, to reduce or eliminate any placebo effect (i.e. to control for the "set"/mental state of those involved in the study).

There are certainly some cases in which it's effectively impossible to control for these factors (e.g. psychedelics). That's not what's really being discussed here, though.

An airline crash is an n of 1 incident, and not the same as a designed study.

m3047•1h ago
> it's essentially impossible to ever gauge the efficacy of AI in doing anything...

... compared to humans? Yes. This is a philosophical conundrum which you tie yourself up in if you choose to postulate the artificial intelligence as equivalent to, rather than a simulacrum of, human intelligence. We fly (planes): are we "smarter" than birds? We breathe underwater: are we "smarter" than fish? And so on.

How do you discern that the "other" has an internal representation and dialogue? Oh. Because a human programmed it to be so. But how do you know that another human has internal representation and dialogue? I do (I have conscious control over the verbal dialogue but that's another matter), so I choose to believe that others (humans) do (not the verbal part so much unfortunately). I could extend that to machines, but why? I need a better reason than "because". I'd rather extend the courtesy to a bird or a fish first.

This is an epistemological / religious question: a matter of faith. There are many things which we can't really know / rigorously define against objective criteria.

colinmorelli•58m ago
This, similar to your other comment, is unrelated to my comment.

This is about determining whether AI can be an equivalent or better (defined as: achieving equal or better clinical outcomes) therapist than a human. That is a question that can be studied and answered.

Whether artificial intelligence accurately models human intelligence, or whether an airplane is "smarter" than a bird, are entirely separate questions that can perhaps serve to explain _why/how_ the AI can (or can't) achieve better results than the thing we're comparing against, but not whether it does or does not. Those questions are perhaps unanswerable based on today's knowledge. But they're not prerequisites.

_vertigo•4h ago
Well, that’s helpful to know, so that other people know to ignore what you write on this topic.
seanhunter•4h ago
There is a psychiatrist on the author team and they did a mapping review and evaluated AI therapy using existing guidelines about what constitutes good therapy (as discussed in their paper which I linked). In other words, they did research.

It’s impossible to think that you are discussing this in good faith at this point.

adamgordonbell•4h ago
The study coauthor actually seems positive on their potential:

'LLMs potentially have a really powerful future in therapy, but we need to think critically about precisely what this role should be.'

And they also mention a previous paper that found high levels of engagement from patients.

So, they have potential but currently give dangerous advice. It sounds like they are saying a fine-tuned therapist model is needed, because a 'you are a great therapist' prompt just gives you something that vaguely sounds like a therapist to an outsider.

Sounds like an opportunity honestly.

Would people value a properly trained therapist bot enough to pay for it over an existing ChatGPT subscription?

qgin•4h ago
Benchmarking LLMs on this is an important thing to do. There is a huge potential positive effect of psychotherapy being always available to every human rather than just to wealthy people once a week. But to get there we need to know the rate of adverse events compared to human therapists (which isn’t zero either).
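
To make that benchmarking point concrete, here is a minimal sketch, in Python, of how one might compare adverse-event rates between an LLM arm and a human-therapist arm with a two-proportion z-test. The event counts and arm sizes are invented for illustration; nothing here comes from the Stanford paper.

```python
import math

def two_proportion_ztest(x1: int, n1: int, x2: int, n2: int):
    """Two-sided z-test comparing adverse-event rates between two arms.

    x = sessions with an adverse event, n = total sessions in that arm.
    """
    p1, p2 = x1 / n1, x2 / n2
    p_pool = (x1 + x2) / (n1 + n2)  # pooled rate under the null hypothesis
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Two-sided p-value from the standard normal CDF, via math.erf
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return p1, p2, z, p_value

# Hypothetical counts, purely illustrative: 12 adverse events in 1,000
# LLM sessions vs. 8 in 1,000 human-therapist sessions.
p_llm, p_human, z, p = two_proportion_ztest(12, 1000, 8, 1000)
print(f"LLM {p_llm:.2%} vs human {p_human:.2%}: z={z:+.2f}, p={p:.3f}")
```

With counts this small the comparison is underpowered (here p ≈ 0.37); a real benchmark would need far more sessions, and an agreed definition of "adverse event", before claiming either arm is safer.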
m3047•1h ago
The hypothesis was put forward in 1960s science fiction (maybe? Robert Anton Wilson? and, for parallel purposes, Philip K. Dick's percept/concept feedback cycle): that people in power necessarily become functionally psychotic, because people self-select to be around them as a self-preserving / self-promoting opportunity (sycophants) and cannot help but filter shared observations through their own biases. Having casually looked over the intervening years for phenomena that support or disprove this hypothesis, I find this profoundly unsurprising.

If you choose to believe, as Jaron Lanier does, that LLMs are a mashup (or, as I would characterize it, a funhouse mirror) of the human condition as represented by the Internet, this sort of implicit bias is already represented in most social media. This is further distilled by the cultural practice of hiring third-world residents to tag training sets and provide the "reinforcement learning"... people who are effectively if not actually in the thrall of their employers and can't help but reflect their own sycophancy.

As someone who is therefore historically familiar with this process in a wider systemic sense, I need (hope for?) something in articles like this that diagnoses or mitigates the underlying process.