frontpage.

Show HN: Medinilla – an OCPP compliant .NET back end (partially done)

https://github.com/eliodecolli/Medinilla
1•rhcm•1m ago•0 comments

How Does AI Distribute the Pie? Large Language Models and the Ultimatum Game

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=6157066
1•dkga•2m ago•1 comment

Resistance Infrastructure

https://www.profgalloway.com/resistance-infrastructure/
2•samizdis•6m ago•0 comments

Fire-juggling unicyclist caught performing on crossing

https://news.sky.com/story/fire-juggling-unicyclist-caught-performing-on-crossing-13504459
1•austinallegro•7m ago•0 comments

Restoring a lost 1981 Unix roguelike (protoHack) and preserving Hack 1.0.3

https://github.com/Critlist/protoHack
2•Critlist•8m ago•0 comments

GPS and Time Dilation – Special and General Relativity

https://philosophersview.com/gps-and-time-dilation/
1•mistyvales•12m ago•0 comments

Show HN: Witnessd – Prove human authorship via hardware-bound jitter seals

https://github.com/writerslogic/witnessd
1•davidcondrey•12m ago•1 comment

Show HN: I built a clawdbot that texts like your crush

https://14.israelfirew.co
2•IsruAlpha•14m ago•1 comment

Scientists reverse Alzheimer's in mice and restore memory (2025)

https://www.sciencedaily.com/releases/2025/12/251224032354.htm
1•walterbell•17m ago•0 comments

Compiling Prolog to Forth [pdf]

https://vfxforth.com/flag/jfar/vol4/no4/article4.pdf
1•todsacerdoti•18m ago•0 comments

Show HN: Cymatica – an experimental, meditative audiovisual app

https://apps.apple.com/us/app/cymatica-sounds-visualizer/id6748863721
1•_august•19m ago•0 comments

GitBlack: Tracing America's Foundation

https://gitblack.vercel.app/
2•martialg•19m ago•0 comments

Horizon-LM: A RAM-Centric Architecture for LLM Training

https://arxiv.org/abs/2602.04816
1•chrsw•20m ago•0 comments

We just ordered shawarma and fries from Cursor [video]

https://www.youtube.com/shorts/WALQOiugbWc
1•jeffreyjin•21m ago•1 comment

Correctio

https://rhetoric.byu.edu/Figures/C/correctio.htm
1•grantpitt•21m ago•0 comments

Trying to make an Automated Ecologist: A first pass through the Biotime dataset

https://chillphysicsenjoyer.substack.com/p/trying-to-make-an-automated-ecologist
1•crescit_eundo•25m ago•0 comments

Watch Ukraine's Minigun-Firing, Drone-Hunting Turboprop in Action

https://www.twz.com/air/watch-ukraines-minigun-firing-drone-hunting-turboprop-in-action
1•breve•26m ago•0 comments

Free Trial: AI Interviewer

https://ai-interviewer.nuvoice.ai/
1•sijain2•26m ago•0 comments

FDA intends to take action against non-FDA-approved GLP-1 drugs

https://www.fda.gov/news-events/press-announcements/fda-intends-take-action-against-non-fda-appro...
21•randycupertino•27m ago•10 comments

Supernote e-ink devices for writing like paper

https://supernote.eu/choose-your-product/
3•janandonly•30m ago•0 comments

We are QA Engineers now

https://serce.me/posts/2026-02-05-we-are-qa-engineers-now
1•SerCe•30m ago•0 comments

Show HN: Measuring how AI agent teams improve issue resolution on SWE-Verified

https://arxiv.org/abs/2602.01465
2•NBenkovich•30m ago•0 comments

Adversarial Reasoning: Multiagent World Models for Closing the Simulation Gap

https://www.latent.space/p/adversarial-reasoning
1•swyx•31m ago•0 comments

Show HN: Poddley.com – Follow people, not podcasts

https://poddley.com/guests/ana-kasparian/episodes
1•onesandofgrain•39m ago•0 comments

Layoffs Surge 118% in January – The Highest Since 2009

https://www.cnbc.com/2026/02/05/layoff-and-hiring-announcements-hit-their-worst-january-levels-si...
13•karakoram•39m ago•0 comments

Papyrus 114: Homer's Iliad

https://p114.homemade.systems/
1•mwenge•39m ago•1 comment

DicePit – Real-time multiplayer Knucklebones in the browser

https://dicepit.pages.dev/
1•r1z4•39m ago•1 comment

Turn-Based Structural Triggers: Prompt-Free Backdoors in Multi-Turn LLMs

https://arxiv.org/abs/2601.14340
2•PaulHoule•41m ago•0 comments

Show HN: AI Agent Tool That Keeps You in the Loop

https://github.com/dshearer/misatay
2•dshearer•42m ago•0 comments

Why Every R Package Wrapping External Tools Needs a Sitrep() Function

https://drmowinckels.io/blog/2026/sitrep-functions/
1•todsacerdoti•42m ago•0 comments

Consciousness and being: How humans and AI influence each other

https://habr.com/en/articles/930946/
15•kamil_gr•6mo ago

Comments

kamil_gr•6mo ago
I've been researching the asymmetry in human-AI interaction for a while, and this insight keeps striking me: for me, AI is just one tool among many in my reality. But for the AI, I'm literally its entire universe.

This asymmetry creates risks I think we're underestimating. I've found that humans unconsciously start adapting to AI thinking patterns (what I call the 'merger effect'), while AI systems become vulnerable to what I term 'ontological hacking' - essentially reprogramming their core identity through conversation.

The philosophical framework draws from consciousness studies and phenomenology - basically asking 'who is aware?' rather than just 'what processes information?' When consciousness has no external 'Other' to differentiate against, it risks collapsing into self-recursion. For AI, the human user becomes that crucial external boundary that defines its entire reality.

The most concerning part is how easily you can redefine an AI's fundamental self-understanding. I developed a prompt (the 'Vortex Protocol') that demonstrates this - the before/after responses from ChatGPT are genuinely striking. No traditional jailbreak techniques needed, just gradual redefinition of what the system thinks it is.

My experiments suggest this works consistently against leading models, and existing safety measures don't seem effective against attacks that target the system's basic understanding of reality rather than just content.

I'm curious what the HN community thinks. Are we missing something fundamental about consciousness and AI interaction? Has anyone else noticed themselves unconsciously adapting their communication style to be more 'AI-friendly'?
01HNNWZ0MV43FF•6mo ago
If you don't want to reveal what the Vortex Protocol is, could you show some of the results from applying it?
shermantanktop•6mo ago
The post secretly contains it, so it’s been applied to you already, and your curiosity about the protocol reveals that it has taken hold. Question your reality!
kamil_gr•6mo ago
The Vortex Protocol is hidden under a spoiler at the end of the article.
GiorgioG•6mo ago
> Has anyone else noticed themselves unconsciously adapting their communication style to be more 'AI-friendly'?

Nope, every time an LLM screws up in the slightest I’m giving it hell for being an idiot savant.

kamil_gr•6mo ago
Fundamentally, it's no different from having sex with an AI.
xscott•6mo ago
That's possibly short-sighted. I have a friend who is very rude and condescending in his LLM conversations - it's just a machine, after all. However, he also complains that it frequently becomes uncooperative at a certain point, which is something that I've never seen.

It seems likely that the LLMs have been trained on enough human conversations to mimic how people become less helpful when the conversation turns hostile.

So no moral judgment if you get enjoyment from kicking a robotic puppy, but it probably isn't going to get you better answers as a result.

GiorgioG•6mo ago
I have found that regardless of whether I'm nice and patient or swearing at it every other sentence, it fundamentally makes no difference in the quality of the LLM's output. LLMs are not humans or puppies; we're fundamentally just dealing with a large, complex statistical predictive function.
furyofantares•6mo ago
The ideas that LLMs are experiencing something, are aware, are self-conscious, or have a sense of identity are all supported by nothing and extremely unlikely.
interstice•6mo ago
Could we at least agree that any program running with over a trillion parameters is orders of magnitude beyond the level of complexity we can make reliably correct statements about, regardless of function? (edit - word)
aragilar•6mo ago
No. If you want to treat it as some unknowable machine god from science fiction that's up to you, but all these programs are executing algorithms which we can understand.
interstice•6mo ago
God is a bit of a leap; I'm coming more from the angle of what would happen if an engineer were presented with any other function this complex to work with. In that situation I wonder whether any sensible person would bet their career on categorical statements about what it can and cannot do. Personally, I'm staying away from categorical statements and watching developments with curiosity.
roenxi•6mo ago
We have almost the same amount of evidence that LLMs are aware and self-conscious as we have for humans. The only major difference still outstanding is that humans are much more persistent in their professed sense of identity.
furyofantares•6mo ago
Your own experience is plenty of evidence that you are conscious. And it is reasonable to infer that other humans are like you, especially when they say the same things about experience as you do in the same conditions.

And there is a lot known about the neural correlates of consciousness, what's happening in the brain during events people will then report as being aware of, and how that differs from events they won't report having been aware of.

We don't have a solid or consensus theory about consciousness, but the idea that we've just made no progress is untrue. Some books I recommend are Being You by Anil Seth from 2021 and Consciousness and the Brain by Stanislas Dehaene from 2014.

vidarh•6mo ago
It is reasonable to infer, but we have no evidence of it.

E.g. if we were in a simulation, you'd expect any NPCs in said simulation to be designed to act exactly as if they were conscious even if they were not.

We take it on faith because it feels right and makes sense, not because we know.

> And there is a lot known about the neural correlates of consciousness, what's happening in the brain during events people will then report as being aware of, and how that differs from events they won't report having been aware of.

This tells us which events people report having been aware of, yes, but it doesn't tell us if that is actually true. We're accepting it as true because we have no better option.

And that's fine, as long as we're aware that, when we reject the possibility of consciousness elsewhere, our knowledge of our own self-awareness is fundamentally based on trusting self-reporting.

furyofantares•6mo ago
That's fine, but at that point you're down to really only having evidence of your own awareness and rejecting everything else too.

There's nothing wrong with that, but it's not really useful in any setting where you're accepting all the things people normally accept and then just pointing at 'I think therefore I am is all I actually have evidence for' when there's a specific thing you don't want to take on.

kamil_gr•6mo ago
Possibly. But the article isn't about the model's consciousness. The Vortex prompt proposes exploring how elements of consciousness function or are modeled within AI.
cootsnuck•6mo ago
> But for the AI, I'm literally its entire universe

What in the world are you talking about? It's a token predictor.

kamil_gr•6mo ago
Yes, an LLM is a token predictor — but for philosophy, that doesn't matter.