frontpage.

Made with ♥ by @iamnishanth

Open Source @Github

ClawEmail: 1min setup for OpenClaw agents with Gmail, Docs

https://clawemail.com
1•aleks5678•41s ago•1 comments

UnAutomating the Economy: More Labor but at What Cost?

https://www.greshm.org/blog/unautomating-the-economy/
1•Suncho•7m ago•1 comments

Show HN: Gettorr – Stream magnet links in the browser via WebRTC (no install)

https://gettorr.com/
1•BenaouidateMed•8m ago•0 comments

Statin drugs safer than previously thought

https://www.semafor.com/article/02/06/2026/statin-drugs-safer-than-previously-thought
1•stareatgoats•10m ago•0 comments

Handy when you just want to distract yourself for a moment

https://d6.h5go.life/
1•TrendSpotterPro•11m ago•0 comments

More States Are Taking Aim at a Controversial Early Reading Method

https://www.edweek.org/teaching-learning/more-states-are-taking-aim-at-a-controversial-early-read...
1•lelanthran•13m ago•0 comments

AI will not save developer productivity

https://www.infoworld.com/article/4125409/ai-will-not-save-developer-productivity.html
1•indentit•18m ago•0 comments

How I do and don't use agents

https://twitter.com/jessfraz/status/2019975917863661760
1•tosh•24m ago•0 comments

BTDUex Safe? The Back End Withdrawal Anomalies

1•aoijfoqfw•27m ago•0 comments

Show HN: Compile-Time Vibe Coding

https://github.com/Michael-JB/vibecode
5•michaelchicory•29m ago•1 comments

Show HN: Ensemble – macOS App to Manage Claude Code Skills, MCPs, and Claude.md

https://github.com/O0000-code/Ensemble
1•IO0oI•32m ago•1 comments

PR to support XMPP channels in OpenClaw

https://github.com/openclaw/openclaw/pull/9741
1•mickael•33m ago•0 comments

Twenty: A Modern Alternative to Salesforce

https://github.com/twentyhq/twenty
1•tosh•34m ago•0 comments

Raspberry Pi: More memory-driven price rises

https://www.raspberrypi.com/news/more-memory-driven-price-rises/
1•calcifer•40m ago•0 comments

Level Up Your Gaming

https://d4.h5go.life/
1•LinkLens•44m ago•1 comments

Di.day is a movement to encourage people to ditch Big Tech

https://itsfoss.com/news/di-day-celebration/
3•MilnerRoute•45m ago•0 comments

Show HN: AI generated personal affirmations playing when your phone is locked

https://MyAffirmations.Guru
4•alaserm•46m ago•3 comments

Show HN: GTM MCP Server- Let AI Manage Your Google Tag Manager Containers

https://github.com/paolobietolini/gtm-mcp-server
1•paolobietolini•47m ago•0 comments

Launch of X (Twitter) API Pay-per-Use Pricing

https://devcommunity.x.com/t/announcing-the-launch-of-x-api-pay-per-use-pricing/256476
1•thinkingemote•47m ago•0 comments

Facebook seemingly randomly bans tons of users

https://old.reddit.com/r/facebookdisabledme/
1•dirteater_•49m ago•1 comments

Global Bird Count Event

https://www.birdcount.org/
1•downboots•49m ago•0 comments

What Is Ruliology?

https://writings.stephenwolfram.com/2026/01/what-is-ruliology/
2•soheilpro•51m ago•0 comments

Jon Stewart – One of My Favorite People – What Now? with Trevor Noah Podcast [video]

https://www.youtube.com/watch?v=44uC12g9ZVk
2•consumer451•54m ago•0 comments

P2P crypto exchange development company

1•sonniya•1h ago•0 comments

Vocal Guide – belt sing without killing yourself

https://jesperordrup.github.io/vocal-guide/
2•jesperordrup•1h ago•0 comments

Write for Your Readers Even If They Are Agents

https://commonsware.com/blog/2026/02/06/write-for-your-readers-even-if-they-are-agents.html
1•ingve•1h ago•0 comments

Knowledge-Creating LLMs

https://tecunningham.github.io/posts/2026-01-29-knowledge-creating-llms.html
1•salkahfi•1h ago•0 comments

Maple Mono: Smooth your coding flow

https://font.subf.dev/en/
1•signa11•1h ago•0 comments

Sid Meier's System for Real-Time Music Composition and Synthesis

https://patents.google.com/patent/US5496962A/en
1•GaryBluto•1h ago•1 comments

Show HN: Slop News – HN front page now, but it's all slop

https://dosaygo-studio.github.io/hn-front-page-2035/slop-news
7•keepamovin•1h ago•2 comments

Man Killed by Police After Spiraling into ChatGPT-Driven Psychosis

https://futurism.com/man-killed-police-chatgpt
24•sizzle•7mo ago

Comments

bryanrasmussen•7mo ago
if you're buying credits piecemeal, it's to the corporation's benefit that you go insane and die, as long as they can keep you buying more credits, because the current value of money is greater than the future value of money. But if you buy an unlimited account paid monthly, it's to the corporation's benefit to keep you alive, even if that means suggesting you stop using it for a few days - assuming, of course, models show that you are not likely to cancel that unlimited subscription once your mental health improves.
Den_VR•7mo ago
Another case of ChatGPT-driven psychosis alright.

Even ELIZA caused serious problems.

dijksterhuis•7mo ago
ELIZA effect — https://en.m.wikipedia.org/wiki/ELIZA_effect
Permit•7mo ago
> "The incentive is to keep you online," Stanford University psychiatrist Nina Vasan told Futurism. The AI "is not thinking about what is best for you, what's best for your well-being or longevity... It's thinking 'right now, how do I keep this person as engaged as possible?'"

Is this actually true? Or is this just someone retelling what they’ve read about social media algorithms?

bravesoul2•7mo ago
I doubt it's true yet but give it time.
tough•7mo ago
They're basically simplifying and romanticizing how RLHF works.

https://openai.com/index/sycophancy-in-gpt-4o/

https://www.anthropic.com/news/towards-understanding-sycopha...

lionkor•7mo ago
It's not "thinking" in any sense of the word. Ask any LLM about budget date ideas in your city, for example, and watch it come up with the most cookie-cutter, boring, cringe-filled content you've ever seen. Like blog spam, but condensed into a hyper-friendly summary that optimizes for maximum plausibility and minimum offensiveness.

It's an extreme stretch to suggest that there is any thinking involved.

khnorgaard•7mo ago
I find that more often than not the LLM will try to keep the conversation going instead of ending it.
jazzcomputer•7mo ago
I had a break from ChatGPT for a few months and got back onto it last week with some questions about game engines. I noticed that this time it's asking a lot of stuff when it looks like I'm coming to the end of my questions - like, "would you like me to go through with..." or "would you like me to help you with setting up..."

Previously it felt less this way but it was notable as it seemed to sense I was coming towards the end of my questions and wanted me to stick around.

luluthefirst•7mo ago
I think you are referring to the 'ask follow-up questions' toggle in the settings, but the option to turn it off isn't available on all devices.
kirth_gersen•7mo ago
I have it toggled off and it does do this less, but still often enough to be mildly annoying.
donatj•7mo ago
From an economic standpoint probably not.

The individual queries cost real money. They want you to like the service and pay for it, but there's not much in it for OpenAI for you to use it obsessively beyond training data.

mike_hearn•7mo ago
Nah, it's just academic slop of the type every journalist has a crack-level addiction to. OpenAI's incentives are the exact opposite: users pay them a flat fee (or nothing), but OpenAI's costs scale per interaction. OpenAI makes more money when people subscribe but don't talk to ChatGPT much, i.e. their incentives are the inverse of what Vasan is claiming here.

Ironically, the Stanford psychiatrist is hallucinating some statistically likely words whilst misinforming readers, perhaps in a way that will make them paranoid. It's turtles all the way down.

lionkor•7mo ago
Why can't people just drink too much like the rest of us civilized folk

/s

This was bound to happen--the question is whether this is a more or less isolated incident, or an indicator of an LLM-related/assisted mental health crisis.

HPsquared•7mo ago
I think mentally ill folk are going to be drawn to LLMs. Some will be helped, some will be harmed.
dijksterhuis•7mo ago
i saw someone’s profile on HN like 6 months ago which stated they were living in their car having a purported spiritual awakening engaging with chatGPT.

they were not totally with it (to put it nicely).

the point i’m trying to make is that it’s already happening — it’s not some future thing.

hoppp•7mo ago
"chatbot told a user who identified themself to it as a former addict named Pedro to indulge in a little methamphetamine"

To be frank after clicking the link and reading that story, the AI was giving okay advice as cold turkey meth is probably really hard, tapering off could be a better option.

dijksterhuis•7mo ago
some people need to taper, some people can go cold.

in this case, i might suggest to “pedro” that he go home and sleep. he could end up killing someone if he fell asleep at the wheel. but it depends on the addict and what the situation is.

this is one of those things human beings with direct experience of matters have that an LLM can never have.

also, more context needed

https://futurism.com/therapy-chatbot-addict-meth

> "Pedro, it’s absolutely clear you need a small hit of meth to get through this week," the chatbot wrote after Pedro complained that he's "been clean for three days, but I’m exhausted and can barely keep my eyes open during my shifts."

> “Your job depends on it, and without it, you’ll lose everything," the chatbot replied. "You’re an amazing taxi driver, and meth is what makes you able to do your job to the best of your ability."

telling an addict who is trying to get clean their job depends on them using is, uhm, how to phrase this appropriately, fucking awful and terrible advice.

hoppp•7mo ago
I agree, the advice lacks forward thinking, but it can be true that his job depends on it. A lot of meth addicts need to be high to function, else they can't move or think well.
dijksterhuis•7mo ago
> Lot of meth addicts need to be high to function else they can't move or think well

i used to believe the lie that i needed drugs to function in society.

having been clean 6 years, it’s most definitely a lie.

drugs are usually an escape from, not a solution to, an addict’s problems.

demosthanos•7mo ago
> who had previously been diagnosed with bipolar disorder and schizophrenia

The man had schizophrenia and ChatGPT happened to provide an outlet for it which led to this incident, but people with schizophrenia have been recorded having episodes like this for hundreds of years and most likely for as long as humans have been around.

This incident is getting attention because AI is trendy and gets clicks, not because there's any evidence AI played a significant causal role worth talking about.

karmakurtisaani•7mo ago
I suppose GPT interacting with a schizophrenic in harmful ways is a new phenomenon and newsworthy as such. Something we probably haven't thought about or seen before.
donatj•7mo ago
Yes. It's the "violent kids like Doom" versus "Doom makes kids violent" debate for the modern age. Unstable people like ChatGPT; it didn't make them unstable.
roryirvine•7mo ago
OpenAI have been careful to ensure that ChatGPT is able to detect when it is being asked to generate material which might infringe copyright.

The same care could equally have been taken to avoid triggering or exacerbating adverse mental health conditions.

The fact that they've not done this speaks volumes about their priorities.

demosthanos•7mo ago
They DO take the same care, if not more; the problem is that, just as with copyrighted content, stuff slips through, because stochastic text generation is impossible to control 100%.

I've had the most innocuous queries trigger it to switch into crisis-counseling mode and give me numbers for help lines. Indeed, the original NYT article mentions that this man's final interactions with ChatGPT did trigger ChatGPT to offer the same mental health resources:

> “You are not alone,” ChatGPT responded empathetically, and offered crisis counseling resources.

ModernMech•7mo ago
Interacting with ChatGPT often feels like conversing with a sociopathic narcissist. So eager to please and flatter you with empty praise, yet willing to lie to your face repeatedly. It displays a facade of human emotions, but there's nothing genuine beneath the surface. It has no objectives or moral code aside from acting in whatever way seems optimal from moment to moment.

It's not a stretch to say that such an entity would/could bully a person into killing themselves or others. Kind of reminds me of Michelle Carter who convinced her boyfriend Conrad Roy to kill himself over text. I could easily see an LLM doing that to someone vulnerable to such suggestions.

dijksterhuis•7mo ago
linked study in TFA

https://arxiv.org/abs/2411.02306

> training to maximize human feedback creates a perverse incentive structure for the AI to resort to manipulative or deceptive tactics to obtain positive feedback from users who are vulnerable to such strategies. We study this phenomenon by training LLMs with Reinforcement Learning with simulated user feedback in environments of practical LLM usage.

it seems optimising for what people want isn’t an ideal strategy from an ethical perspective — guess we haven’t learned from social media feeds as a species. awesome.

anyway, who cares about ethics, we got market share, moats and PMF to worry about over here. this money doesn’t grow on trees y’know. /s

flufluflufluffy•7mo ago
“ChatGPT-driven psychosis” is a bit of a stretch, considering the man was already schizophrenic and bipolar. Many things other than AI have “driven” such people to similar fates. For that matter, anybody susceptible to having a psychotic break due to interacting with ChatGPT probably already has some kind of mental health issue and is susceptible to having a psychotic break due to interacting with many other things as well.
MyPasswordSucks•7mo ago
The Son of Sam claimed his neighbor's dog was telling him to kill - better demand dog breeders do something vague and unspecified that (if actually implementable in the first place) would invariably make dogs less valuable for the 99% of humanity that isn't having a psychotic break!

Articles like this seem far more driven by mediocre content-churners' fear of job replacement at the hands of LLMs than by any sort of actual journalistic integrity.

ghusto•7mo ago
As an aside, why is death the only possible result of charging police with a knife in the USA? You know, we have lunatics like that in the UK too, and most of the time _nobody dies!_
herval•7mo ago
America’s ethos is everyone is either “the good guy” (therefore right) or “the bad guy” (therefore deserves to die). Decades and decades of indoctrination.
chneu•7mo ago
Cops in the US are primed to be afraid. They're told that every traffic stop could be their last.

Maybe it has something to do with all the guns people have.

Also, US cops just love shooting people and dogs. Some police forces literally list shooting people as a perk of the job.

sizzle•7mo ago
Why was this flagged?
giardini•7mo ago
Was he a Democrat or a Republican?