I guess that explains why people who dig ditches for a living are so satisfied.
It just happens that for technical stuff the minority that likes what they do is larger and thus actually noticeable.
We should encourage stronger research designs (including A/B tests) if we care about the impact of AI use on mental health outcomes. A study like this one cannot say anything about the effect at all (it is even possible that AI use will have a positive impact on mental health).
As for this particular research... pfff... I'm rooting for the collapse of this LLM-fuelled craze, so I'm biased.
Anyway, there's no reason to discount it, but it does mean you can't run with the assumption that there is causation.
Like, even if you accept a bunch of premises just to make the studies work, the raw stats are often so bad, and there's so little rigor in ruling out alternative explanations, that I've just stopped reading them entirely.
Again, I'm not one to hate on the social sciences. History, anthropology, politics, law, psychology, sociology, all of that is very interesting and important. But the horrible statistics, which ignore garbage-in-garbage-out, have turned me off of it. I'd much rather read qualitative studies that actually try to gather detailed, real data, even if that's not as automated as a random survey.
I don't think this is a mark against those users, to be clear. I see this as largely the same chicken-and-egg relationship you find between depressed people and video games. It's also subject to the same kinds of abuses on the part of the merchant, things like in-game purchases that are particularly attractive to people with executive function issues, which is why the predominant "whales" of the video game industry, and especially the mobile game industry, are people who are already struggling.

I think AI is going to end up in a similar position because, again, not trying to be shitty, but if your life kind of broadly sucks, I'm sure playing in an AI chatbox all day, where something that sounds vaguely human will validate whatever you say, make stuff for you on request, and never challenge you in the slightest, is quite attractive to you.

And, thinking through it further, these systems also adapt to their users and learn how to engage with them better, as many products have before them that have trapped the neurodivergent into problematic usage scenarios.
I don't judge the people, but I am incredibly suspicious of the businesses behind these and other products that seem almost designed to attract neurodivergent people. If you design a machine that gives dopamine on demand, you can't really be shocked when people who are dopamine-starved use it a lot. Potentially to a harmful extent.
(I say general-use because I think there are some AI-based tools that are specially made which _can_ actually be helpful for this - but opening a ChatGPT tab, even with lots of relevant instructions, ain't it in my experience. The interface itself is counter-productive to healthy processing.)
> "Greater levels of AI use were associated with modest increases in depressive symptoms"
to me ever so slightly implies causality via "increases ...", even though, as they are also very transparent about, this paper isn't about any causal mechanism. I feel like "associated with higher rates of depressive symptoms" might have read more neutrally and would have been in line with the results of their paper.
Not suggesting something intentional by the authors, of course, I just found it interesting how verbs subtly influence the meaning of things, at least for me.
But perhaps I'm also biased because I kind of intuitively believe that the causation is that depressive people enjoy talking to the AI, rather than AI being the cause of anything. I worry that any reverse interpretations will lead to an over-regulation of AI in such contexts.
It's frustrating watching this topic turn into culture war.
Seems like it would be overall beneficial.
Regardless, you're correct that it also shouldn't be taken to imply a causal relationship.
> The highest estimates were observed among individuals using AI for personal use
and
> Incorporating individual terms for school, work, and personal use, only personal use was significantly associated with PHQ-9 (β = 0.31 [95% CI, 0.10-0.52]), while the other 2 were not
Whereas generative AI is a recent thing. ~27% in 2021.
The correlation is therefore very weak and certainly not causal.
The real question is "can AI make it worse?", and this study didn't really address that.
Then consider confounders and this study looks even weaker. Depression leads to AI usage, not the other way around.
I know that’s not a fair correlation to make, but I have friends who use AI casually and not in tech, they seem outwardly fine and don’t make depressive comments about the future.
Often talking to Claude/using AI agents to build software is really enjoyable/motivating, and it also makes it easier to get the satisfaction from completing projects.
But it also tends to make me think about how quickly the technology is developing. This makes me anxious about x-risks from AI, which makes it harder to get work done.
LatencyKills•2h ago
The thing I miss most about work (yes, you really can miss work) is collaborative problem-solving. At Microsoft, we called it “teddy bear debugging”—basically, self-explaining a problem out loud to clarify your thinking. [1]
These days, when I’m stuck, I open Claude Code and “talk it through.” That back-and-forth helps me reason through technical issues and scratches a bit of that collaborative itch that helped keep my depression in check.
[1]: https://economictimes.indiatimes.com/news/international/us/w...
righthand•1h ago
I’m not trying to be rude but it seems like you’re conflating collaborative problem solving with rubber duck debugging. You haven’t actually collaborated with a rubber duck when you’re finished rubber duck debugging.
salawat•1h ago
But what do I know man, I'm just a duck on the Internet. On the Internet, no one knows you're a duck.
Quack.
righthand•55m ago
You don’t send kids to Rubber Duck Debugging Class (you send them to School) because you can’t see the teacher in the classroom while you’re at work.
You’re debugging yourself, not the actual problem, per se.
nottorp•59m ago
I prefer grabbing a colleague who is technical but doesn't work on this particular project. That seems to force me to organize the info in my head more than an actual rubber duck does.
LatencyKills•26m ago
That isn't how we did it at either Microsoft or Apple. There, we defined it as walking another engineer through a problem; that person may or may not have been an expert in whatever I was working on at the time. Surely you aren't suggesting that rubber duck debugging only works when you don't receive feedback?
I use Claude to bounce ideas around just like I did with my human teammates.
I think you're being pedantic, but it doesn't matter to me: in the end, I work much better when I can talk through a problem; Claude is a good stand-in when I don't have access to another human.
lanyard-textile•40m ago
"Talk with me first:" (implying that anything other than talking, like coding, is a separate, distinct step that is not to be done)
"Propose" is the best keyword, imo, if it fits what you'd like.
"Propose changes you would make to (this repo|staged changes|latest commit)."
"Propose alternatives."
"Propose flaws." / "Propose flaws in my reasoning."
tclancy•15m ago
Basically, it helps me avoid what they called "gumption traps" in Zen and the Art of Motorcycle Maintenance.