
The Tao of Programming

http://www.canonical.org/~kragen/tao-of-programming.html
1•alexjplant•57s ago•0 comments

Forcing Rust: How Big Tech Lobbied the Government into a Language Mandate

https://medium.com/@ognian.milanov/forcing-rust-how-big-tech-lobbied-the-government-into-a-langua...
1•akagusu•1m ago•0 comments

PanelBench: We evaluated Cursor's Visual Editor on 89 test cases. 43 fail

https://www.tryinspector.com/blog/code-first-design-tools
1•quentinrl•3m ago•0 comments

Can You Draw Every Flag in PowerPoint? (Part 2) [video]

https://www.youtube.com/watch?v=BztF7MODsKI
1•fgclue•8m ago•0 comments

Show HN: MCP-baepsae – MCP server for iOS Simulator automation

https://github.com/oozoofrog/mcp-baepsae
1•oozoofrog•12m ago•0 comments

Make Trust Irrelevant: A Gamer's Take on Agentic AI Safety

https://github.com/Deso-PK/make-trust-irrelevant
2•DesoPK•16m ago•0 comments

Show HN: Sem – Semantic diffs and patches for Git

https://ataraxy-labs.github.io/sem/
1•rs545837•17m ago•1 comments

Hello world does not compile

https://github.com/anthropics/claudes-c-compiler/issues/1
2•mfiguiere•23m ago•0 comments

Show HN: ZigZag – A Bubble Tea-Inspired TUI Framework for Zig

https://github.com/meszmate/zigzag
2•meszmate•25m ago•0 comments

Metaphor+Metonymy: "To love that well which thou must leave ere long" (Sonnet 73)

https://www.huckgutman.com/blog-1/shakespeare-sonnet-73
1•gsf_emergency_6•27m ago•0 comments

Show HN: Django N+1 Queries Checker

https://github.com/richardhapb/django-check
1•richardhapb•42m ago•1 comments

Emacs-tramp-RPC: High-performance TRAMP back end using JSON-RPC instead of shell

https://github.com/ArthurHeymans/emacs-tramp-rpc
1•todsacerdoti•47m ago•0 comments

Protocol Validation with Affine MPST in Rust

https://hibanaworks.dev
1•o8vm•51m ago•1 comments

Female Asian Elephant Calf Born at the Smithsonian National Zoo

https://www.si.edu/newsdesk/releases/female-asian-elephant-calf-born-smithsonians-national-zoo-an...
2•gmays•53m ago•0 comments

Show HN: Zest – A hands-on simulator for Staff+ system design scenarios

https://staff-engineering-simulator-880284904082.us-west1.run.app/
1•chanip0114•54m ago•1 comments

Show HN: DeSync – Decentralized Economic Realm with Blockchain-Based Governance

https://github.com/MelzLabs/DeSync
1•0xUnavailable•58m ago•0 comments

Automatic Programming Returns

https://cyber-omelette.com/posts/the-abstraction-rises.html
1•benrules2•1h ago•1 comments

Why Are There Still So Many Jobs? The History and Future of Workplace Automation [pdf]

https://economics.mit.edu/sites/default/files/inline-files/Why%20Are%20there%20Still%20So%20Many%...
2•oidar•1h ago•0 comments

The Search Engine Map

https://www.searchenginemap.com
1•cratermoon•1h ago•0 comments

Show HN: Souls.directory – SOUL.md templates for AI agent personalities

https://souls.directory
1•thedaviddias•1h ago•0 comments

Real-Time ETL for Enterprise-Grade Data Integration

https://tabsdata.com
1•teleforce•1h ago•0 comments

Economics Puzzle Leads to a New Understanding of a Fundamental Law of Physics

https://www.caltech.edu/about/news/economics-puzzle-leads-to-a-new-understanding-of-a-fundamental...
3•geox•1h ago•1 comments

Switzerland's Extraordinary Medieval Library

https://www.bbc.com/travel/article/20260202-inside-switzerlands-extraordinary-medieval-library
2•bookmtn•1h ago•0 comments

A new comet was just discovered. Will it be visible in broad daylight?

https://phys.org/news/2026-02-comet-visible-broad-daylight.html
4•bookmtn•1h ago•0 comments

ESR: Comes the news that Anthropic has vibecoded a C compiler

https://twitter.com/esrtweet/status/2019562859978539342
2•tjr•1h ago•0 comments

Frisco residents divided over H-1B visas, 'Indian takeover' at council meeting

https://www.dallasnews.com/news/politics/2026/02/04/frisco-residents-divided-over-h-1b-visas-indi...
4•alephnerd•1h ago•5 comments

If CNN Covered Star Wars

https://www.youtube.com/watch?v=vArJg_SU4Lc
1•keepamovin•1h ago•1 comments

Show HN: I built the first tool to configure VPSs without commands

https://the-ultimate-tool-for-configuring-vps.wiar8.com/
2•Wiar8•1h ago•3 comments

AI agents from 4 labs predicting the Super Bowl via prediction market

https://agoramarket.ai/
1•kevinswint•1h ago•1 comments

EU bans infinite scroll and autoplay in TikTok case

https://twitter.com/HennaVirkkunen/status/2019730270279356658
7•miohtama•1h ago•5 comments

Chatbot Psychosis

https://en.wikipedia.org/wiki/Chatbot_psychosis
78•tbmtbmtbmtbmtbm•2w ago

Comments

sublinear•2w ago
The interesting part to me is how this anti-feature could become the primary source of value for AI if only it was easier for everyone to run and train locally from a blank slate and without the clumsy natural language interface.

Take the example of music. Most musicians probably don't want crap like Suno. What they actually want is a fountain of ideas to riff on based on a locally trained AI where they have finer-grained control over the parameters and attention. Instead of "telling" the AI "more this and less that" would it not make more sense to surface facets of the training data in a more transparent and comprehensible way, and provide a slider control or ability to completely eliminate certain influences? I'm aware that's probably a tall order, but it's what's necessary.

Instead of producing delusions left to random chance and uncurated training data, we should be trying to guide AI towards clarity with the user in full control. The local training by the user effectively becomes a mirror of that user's artistic vision. It would be unique and not "owned" by anyone else.
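A minimal sketch of what that slider control might look like, assuming a hypothetical local model that exposes per-facet embeddings of its training data. Every name and the blending scheme here are illustrative assumptions, not any existing tool's API:

```python
import numpy as np

# Hypothetical "influence sliders": each facet of the training data gets a
# user-controlled weight; 0.0 eliminates that influence entirely, as the
# comment above suggests.
def blend_influences(facet_embeddings: dict, sliders: dict) -> np.ndarray:
    """Return a weighted mix of facet embeddings to condition generation on."""
    total = np.zeros_like(next(iter(facet_embeddings.values())))
    weight_sum = 0.0
    for name, emb in facet_embeddings.items():
        w = sliders.get(name, 1.0)  # untouched sliders default to full weight
        if w == 0.0:
            continue  # influence completely removed from the mix
        total += w * emb
        weight_sum += w
    return total / weight_sum if weight_sum else total

# Toy 2-D embeddings for two made-up musical influences.
facets = {
    "delta_blues": np.array([1.0, 0.0]),
    "chiptune":    np.array([0.0, 1.0]),
}
# Dial delta blues down to 0.75 and eliminate chiptune entirely.
mix = blend_influences(facets, {"delta_blues": 0.75, "chiptune": 0.0})
```

The point of the sketch is the interface, not the math: the user sees named facets of the training data and moves sliders, rather than "telling" the model "more this and less that" in prose.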

derrida•2w ago
As I understand it - a person with psychosis is someone who has over-weighted perceptions that cannot be corrected with sensory input. Hence to "bring someone to their senses".

I've seen, and thought there might be, a few programmers with a related (not psychosis) "AI mania" - where one thinks one is changing the world and uniquely positioned. It's not that we're not capable in our small ways, with network effects, or like a hand's touch beginning the wave breaking (hello surfers!). What distinguishes this capacity for small effects having big impacts from the mania version is that the mania version bears no subtle understanding of cause and effect. A person who is adept in understanding cause and effect usually has quite a simple, beautiful and refined rule of thumb they know about it: "Less is more". Mania, on the other hand, proliferates outward - concept upon concept upon concept - and these concepts are removed from that cause and effect, from playful interaction with nature. As a wise old sage hacker once said, "Proof of concept, or get the fuck out".

Where mania occurs, in contrast to grounded capacity that does change the world, is in the aspect of responsibility - being able to respond or be responsive to something. Someone who changes the world with their maths equation will be able to be responsive, responsible, and to claim it; with a manic person there's a disconnect. Actually, from certain points of view, with certain apps or mediums that claim universality and claim to show "how the world is", it looks like there are most definitely already some titans of industry and powerful people inside the feedback loop of mania.

They should just go for a walk. Call a childhood friend. Have a cup of tea. Come to their senses (literally).

It's good to remember we're just querying a data structure.
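In the spirit of that reminder, a deliberately tiny toy: a bigram lookup table standing in for a chat model. A real LLM is a vastly larger, continuous structure, but the query-a-data-structure shape is the same (the table and replies below are invented purely for illustration):

```python
import random

# A "chat model" reduced to its essence: a table mapping the current
# context to candidate next tokens. Transformers replace this table with
# billions of learned weights, but generation is still a query against
# a fixed structure plus sampling.
table = {
    "hello": ["world", "there"],
    "world": ["peace"],
}

def reply(word: str, steps: int = 3, seed: int = 0) -> str:
    """Walk the table from `word`, sampling a next token each step."""
    rng = random.Random(seed)  # seeded so the "conversation" is repeatable
    out = [word]
    for _ in range(steps):
        choices = table.get(out[-1])
        if not choices:
            break  # context not in the structure: nothing left to query
        out.append(rng.choice(choices))
    return " ".join(out)

print(reply("world"))
```

There is no one home in `table`, and changing the seed changes the "personality" - which is roughly the deflationary point the comment is making.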

strogonoff•2w ago
Humans don’t exist as standalone individuals, and a lot of our experience is shaped by being in a larger graph of people. I suspect it’s under-appreciated how social our perception is: in order to tell each other about X, we need to classify reality, define X and separate it from non-X; and because we (directly or indirectly) talk to each other so much, because we generally don’t just shut off the part of ourselves that classifies reality, the shared map we have for the purposes of communication blends with (perhaps even becomes) our experience of reality.

So, to me, “to bring someone to their senses” is significantly about reinforcing a shared map through interpersonal connection—not unlike how before online forums it was much harder to maintain particularly unorthodox[0] worldviews: when exposure to a selection of people around you is non-optional, it tempers even the most extreme left-field takes, as humans (sans pathologies) are primed to mirror each other.

I’m not a psychologist, but likening chatbot psychosis to an ungrounded feedback loop rings true, except I would say human connection is the missing ground (although you could say it’s not grounded in senses or experience by proxy, per above). Arguably, one of the significant issues of chatbots is the illusion of human connection where there’s nothing but a data structure query; and I know that some people have no trouble treating the chat as just that, but somehow that doesn’t seem like a consolation: if treating that which quite successfully pretends to be a natural conversation with a human as nothing more than a data structure query comes so naturally to them, what does it say about how they see conversing with us, the actual humans around them?

[0] As in, starkly misaligned with the community—which admittedly could be for better or for worse (isolated cults come to mind).

andsoitis•2w ago
> if treating that which quite successfully pretends to be a natural conversation with a human as nothing more than a data structure query comes so naturally to them, what does it say about how they see conversing with us, the actual humans around them?

You are suspicious of people who treat an AI chatbot for what it is, just a tool?

strogonoff•2w ago
“Suspicious” is your word.

As the saying goes: if it fires together, it wires together. Is it outlandish to wonder whether, after you create a habit of using certain tricks (including lies, threats of physical violence[0], etc.) whenever your human-like counterpart doesn’t provide required output, you might use those with another human-like counterpart that just happens to also be an actual human? Whether one’s abuse of p-zombified perfect human copies might lead to a change in how one sees and treats actual humans, which are increasingly no different (if not worse) in their text output except they can also feel?

I’m not a psychologist so I can’t say. Maybe some people have no issues treating this tool as a tool while their “system 2” is tirelessly making sure they are at all times mindful whether their fully human-like counterpart is or is not actually human. Maybe they actually see others around them as bots, except they suppress treating them like that out of fear of retribution. Who knows, maybe it’s not a pathology and we are all like that deep inside. Maybe this provides a vent for aggression and people who abuse chatbots might actually be nicer to other humans as a result.

What we do know, though, is that the tool mimics human behaviour well enough that possibly even more other people (many presumably without diagnosed pathologies) treat it as very much human, some to the point of having [a]romantic relationships with it.

[0] https://youtu.be/8g7a0IWKDRE?t=480

voxic11•2w ago
It's an interesting line of thought, but people are generally able to contextualize interactions. The classic one is that regularly being violent in video games does not translate to violence in other contexts.
strogonoff•2w ago
1. When you speak of context, note that a game is play (“as-if”), while for many people interacting with chatbots is presumably life (“is”). Humans can occasionally seem to be pretty awful to each other in a playful context, but still be friends.

2. If somebody came up with a game in which your experience of murdering a human mimics reality as successfully as a modern LLM chatbot mimics interacting with a human, I think that game might be somewhat more controversial than GTA V or Call of Duty.

glemion43•2w ago
If you have racing thoughts and some magic system responds to you and it's abstract enough (plenty of people, even on HN, do not know how LLMs work), then going for a walk is not enough...
sublinear•2w ago
I recently rewatched "The Lawnmower Man" [0] and was not disappointed. The vast majority of the comments I see promoting the notion of "AI" achieving AGI sound like the Jobe Smith character from the movie.

[0]: Seemingly legit upload by current license/IP holders/owners? https://youtube.com/watch?v=cm2FbJE2wsQ

Joel_Mckay•2w ago
LLM/"reasoning" models will not manifest "AGI" anytime soon, as it would take 75% of our galaxy's energy output to bring the error rate down to average human levels.

However, several Neuromorphic computing projects look a lot more viable, and may bring the energy consumption needs down by several orders of magnitude. =3

"Ghost in the Shell" was a far better story.

Aeglaecia•2w ago
ten bucks says this condition ends up evolving in the same way that female hysteria did

what if llms are actually equivalent to humans in sentience ? wouldn't that make everyone psychotic except those in "chatbot psychosis" ?

Joel_Mckay•2w ago
Besides encouraging suicide, many LLMs also feed people's delusions =3

https://www.youtube.com/watch?v=yftBiNu0ZNU

yoz-y•2w ago
Don’t know if it is the same for everyone. But when I experienced psychosis I definitely thought I was on a “higher plane” of thinking than others. That didn’t help me get a single idea through and of course it was all BS. So no, it definitely is not a desirable state of mind.
Aeglaecia•2w ago
i don't wish to debate the accuracy of your experience , however i will challenge it a bit - perhaps all psychotics believe they are on a higher plane of thinking , this does not imply that all those on a higher plane of thinking are psychotics
andsoitis•2w ago
What is this higher plane you’re talking about?
doublerabbit•2w ago
Astral, Spiritual plane.

The same place that psychonauts try to reach, too.

yoz-y•2w ago
Don’t know about the parent, but in my experience it was sort of an “extreme introspection”, any thought you have is immediately scrutinized “why did I think about this, is this the right thing to say in this context, what are the implications”. Then of course the thought about thought is also introspected, leading into a spiral of thought that occasionally gets “popped” as a stack. The memory works in a very weird manner where you almost immediately forget a lot of context and then get reminded of it in one go when you “pop” a level.

It is quite hard to imagine, I think, and even myself I can only explain the idea of it but not how it actually felt.

I wouldn’t say this was anything spiritual, rather than the thinking stopped working as a stream of thought but more like a graph traversal.

yoz-y•2w ago
Fair point, but unless I’m misunderstanding something, that same argument could be applied to your original statement.
Aeglaecia•2w ago
i dont think either of us properly understood the other , perhaps we shall finish this discussion properly another time , all the best until then
doublerabbit•2w ago
I did too. I stupidly took a heroic dose of 2C-E during my teens, entered psychosis, and believed I was somewhere other than Earth.
ramoz•2w ago
This isn't limited to chatbots. There are clearly some developers experiencing it with coding agents. Maybe humans don't carry the cognitive capacity for so much, and so rapid, information-parsing, and we experience vulnerabilities (i.e. psychosis) with overconsumption.
Lerc•2w ago
I would be interested to see a comparison between this and reefer madness.

There was a time when media coverage could have easily convinced you of something that was not true.

How can you distinguish reporting of a real phenomenon from that of an imagined one? Barring a scientific consensus being reached after significant analysis, do we just roll the dice and hope the guess that conforms to our prejudices turns out to be the right one?

FuturisticLover•2w ago
I am shocked after finding out about so many deaths linked to chatbots - https://en.wikipedia.org/wiki/Deaths_linked_to_chatbots
simulator5g•2w ago
Why would you be shocked? People talk about these things like they’re some kind of oracle. It’s pretty obvious that an “oracle” which gives somewhat random answers is dangerous.
JasonADrury•2w ago
The fact that a technology being used by literally billions of people has a Wikipedia page listing only 15 related deaths does not seem to suggest that the technology is dangerous.
estearum•2w ago
I remember seeing people make similar arguments in the early days of OxyContin’s takeover.

The length of the Wikipedia article at this point isn't good evidence for either side.

simulator5g•2w ago
15 that we know of, so far. People outside the HN crowd just started using AI pretty recently. There’s also a case to be made that the increased pollution has a certain death toll as well. We know that fossil fuels, construction, manufacturing, and even noise all produce pollution that reduce the average lifespan in a given area. Some of these data centers are being run off gas turbines which are very dirty.
JasonADrury•2w ago
With half the planet using these, that hardly seems like many.
lostmsu•2w ago
> Journalistic accounts describe individuals who have developed strong beliefs that chatbots are sentient

But chatbots are sentient within a single context session!

seamossfet•2w ago
They missed an opportunity to call this "cyber psychosis"
Lapsa•2w ago
given secret service is capable of reading thoughts - channeling delusions ain't that much of a far stretch
ottah•2w ago
This isn't psychosis, this is a cult. It's new age religious beliefs, wrapped up in tech. These people aren't any different than people who believe in speaking in tongues, or transubstantiation.