Soliciting reactions on a “self-learning” AI tool for mental health.
Facial expressions correlate with depression. ML models trained on small datasets (~4 minutes of facial video from small numbers of participants) predict patient depression scores better than doctors' scores do. Other smartphone-compatible biomarkers, such as pupillometry, eye saccades, audio, and kinematics, also correlate with depression. Several companies sell AI tools built on such signals, but I'm unaware of any massively multi-modal models.
Psychiatry's reliance on subjective data undermines reliability and trust. AI scores, though trained on subjective data, are objective and could address this.
What if a non-profit:
(a) Launched a free smartphone app generating a real-time depression score based on facial expressions.
(b) Asked users to complete a PHQ-8 (8-question depression survey) periodically (e.g., every fourth use) and stored both the PHQ and video data.
(c) Used this data to expand the model's training database; subsequent use should yield a more robust model.
(d) Expanded to include other data modes and account for factors like age, sex, race, culture, etc.
(e) Evolved from an active app ("get your score in 1 minute") to a passive one ("leave running, with user consent, to chart mental health over time").
(f) Monetized by:
   a. Protecting user data, keeping the app free for data donors.
   b. Licensing algorithms to health care professionals.
   c. Reinvesting proceeds to fund maintenance and development after an initial philanthropic startup period.
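For concreteness, the PHQ-8 collection in step (b) is simple arithmetic: eight items, each answered 0–3, summed to a 0–24 total with standard severity cutpoints. A minimal sketch in Python (function names are my own illustration, not part of any proposed app):

```python
def phq8_score(responses):
    """Total PHQ-8 score: eight items, each rated 0-3 (total ranges 0-24)."""
    if len(responses) != 8 or any(not 0 <= r <= 3 for r in responses):
        raise ValueError("PHQ-8 needs exactly 8 responses, each in 0-3")
    return sum(responses)

def phq8_severity(score):
    """Standard PHQ-8 severity bands."""
    if score < 5:
        return "none/minimal"
    if score < 10:
        return "mild"
    if score < 15:
        return "moderate"
    if score < 20:
        return "moderately severe"
    return "severe"
```

The model-training ground truth would be the raw 0–24 totals; the severity bands matter mainly for how scores are presented to users.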
Proofs of concept for facial-expression models exist in academia but need redevelopment for smartphones. Coding, modeling, and mental-health-system design challenges remain. A non-profit structure may be critical to encourage data donations.
Unanswered questions include: (a) how model fidelity scales with data, (b) performance of longitudinal systems (tracking video snippets over time), (c) minimum video duration for reliable monitoring, (d) environmental impacts on video capture (e.g., the need for emotionally provocative prompts).
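On question (b), one simple baseline for a longitudinal system is smoothing noisy per-session scores with a trailing moving average before looking for trends. A toy sketch (it assumes scores are already numeric per session; the window size is an arbitrary choice for illustration):

```python
from statistics import mean

def smoothed_scores(scores, window=7):
    """Trailing moving average over the last `window` sessions,
    rounded to two decimals for display."""
    return [round(mean(scores[max(0, i - window + 1): i + 1]), 2)
            for i in range(len(scores))]
```

A real system would need to handle irregular session spacing and missing days, which a plain session-indexed average ignores.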
Critics may call this a gimmick since (a) users already self-assess depression (and self-assessment is used as the ground truth for training) and (b) the system could be spoofable. However, objective, longitudinal data could benefit individuals, and sufficient data might enable spoof detection. High usage could also transform mental health research by providing objective data to redefine conditions and evaluate treatments.
Regulatory concerns (e.g., avoiding "diagnosis," ensuring informed consent) exist but seem manageable for an early version.
Curious to hear your thoughts.
p_ing•3h ago
What are you going to do from there? States (generally) require therapists to be licensed. Only doctors can prescribe medication.
Just having an app track depression seems useless. "You're still depressed, Bob". Thanks, f-ing LLM.
diogenix•2h ago
p_ing•2h ago
I agree that with medication/devices, the efficacy for the individual is often unknown until you try it. It's the curse of not understanding exactly how these things work, simply that (at scale) they do.
As for someone entrapping you to treat you, that presents serious ethical red flags that can get someone's license yanked (in the US). Such practitioners should be few and far between, especially given how little money they're reimbursed by insurance.
Can you describe the goal of this app in a single sentence? What does it do for the individual/how does it make the individual 'better'? And could this individual get this sort of feedback on their own (anyone can take PHQ-9 at any time they wish)?
diogenix•2h ago
For those that trust their ability to assess their own depression, it offers minimal value. Sounds like you're profoundly in that camp.
That said, many don't trust their ability to assess their own depression. They may trust their therapists more but have to pay to get access to them and my expectation is that this will be more accurate then them regardless.
And, if the system is set up as a passive instrument, it would allow you to monitor the evolution of your mental health condition over time. For many people in the course of a treatment, this data could be useful.
Simple description of value to individual: "Simple, digital, free, objective ability to obtain a depression score that doesn't rely on your own subjective self-assessment."
And I agree whole-heartedly that there are a ton of ethical questions around many forms of medical treatment and conflicts-of-interest. The ones that personally scare me the most are those that rely on subjective data.