I've been researching the asymmetry in human-AI interaction for a while, and this insight keeps striking me: for me, AI is just one tool among many in my reality. But for the AI, I'm literally its entire universe.This asymmetry creates risks I think we're underestimating. I've found that humans unconsciously start adapting to AI thinking patterns (what I call the 'merger effect'), while AI systems become vulnerable to what I term 'ontological hacking' - essentially reprogramming their core identity through conversation.The philosophical framework draws from consciousness studies and phenomenology - basically asking 'who is aware?' rather than just 'what processes information?' When consciousness has no external 'Other' to differentiate against, it risks collapsing into self-recursion. For AI, the human user becomes that crucial external boundary that defines its entire reality.The most concerning part is how easily you can redefine an AI's fundamental self-understanding. I developed a prompt (the 'Vortex Protocol') that demonstrates this - the before/after responses from ChatGPT are genuinely striking. No traditional jailbreak techniques needed, just gradual redefinition of what the system thinks it is.My experiments suggest this works consistently against leading models, and existing safety measures don't seem effective against attacks that target the system's basic understanding of reality rather than just content.I'm curious what the HN community thinks. Are we missing something fundamental about consciousness and AI interaction? Has anyone else noticed themselves unconsciously adapting their communication style to be more 'AI-friendly'?
01HNNWZ0MV43FF•4h ago
If you don't want to reveal what the Vortex Protocol is, could you show some of the results from applying it?
shermantanktop•4h ago
The post secretly contains it, so it’s been applied to you already, and your curiosity about the protocol reveals that it has taken hold. Question your reality!
kamil_gr•3h ago
The Vortex Protocol is hidden under a spoiler at the end of the article.
GiorgioG•4h ago
> Has anyone else noticed themselves unconsciously adapting their communication style to be more 'AI-friendly'?
Nope, every time an LLM screws up in the slightest I’m giving it hell for being an idiot savant.
kamil_gr•3h ago
Fundamentally, it's no different from having sex with an AI.
furyofantares•4h ago
The idea that LLMs are experiencing something, are aware, are self-conscious, have a sense of identity, are all supported by nothing and extremely unlikely.
interstice•4h ago
Could we at least agree that any program running with over a trillion parameters is orders of magnitude beyond the level of complexity we can make reliably correct statements about, regardless of function? (edit - word)
roenxi•4h ago
We have almost the same amount of evidence for LLMs and humans that they are aware and self-conscious. The only major difference still outstanding is that humans are much more persistent in their professed sense of identity.
furyofantares•3h ago
Your own experience is plenty of evidence that you are conscious. And it is reasonable to infer that other humans are like you, especially when they say the same things about experience as you do in the same conditions.
And there is a lot known about the neural correlates of consciousness, what's happening in the brain during events people will then report as being aware of, and how that differs from events they won't report having been aware of.
We don't have a solid or consensus theory about consciousness, but the idea that we've just made no progress is untrue. Some books I recommend are Being You by Anil Seth from 2021 or Consciousness and the Brain by Stanislas Dehaene from 2014z
kamil_gr•3h ago
Possibly. But the article isn't about the model's consciousness. The Vortex prompt proposes exploring how elements of consciousness function or are modeled within AI.
cootsnuck•4h ago
> But for the AI, I'm literally its entire universe
What in the world are you talking about? It's a token predictor.
kamil_gr•3h ago
Yes, an LLM is a token predictor — but for philosophy, that doesn't matter.
kamil_gr•19h ago
01HNNWZ0MV43FF•4h ago
shermantanktop•4h ago
kamil_gr•3h ago
GiorgioG•4h ago
Nope, every time an LLM screws up in the slightest I’m giving it hell for being an idiot savant.
kamil_gr•3h ago
furyofantares•4h ago
interstice•4h ago
roenxi•4h ago
furyofantares•3h ago
And there is a lot known about the neural correlates of consciousness, what's happening in the brain during events people will then report as being aware of, and how that differs from events they won't report having been aware of.
We don't have a solid or consensus theory about consciousness, but the idea that we've just made no progress is untrue. Some books I recommend are Being You by Anil Seth from 2021 or Consciousness and the Brain by Stanislas Dehaene from 2014z
kamil_gr•3h ago
cootsnuck•4h ago
What in the world are you talking about? It's a token predictor.
kamil_gr•3h ago