frontpage.

OpenAI's Latest Platform Targets Enterprise Customers

https://aibusiness.com/agentic-ai/openai-s-latest-platform-targets-enterprise-customers
1•myk-e•1m ago•0 comments

Goldman Sachs taps Anthropic's Claude to automate accounting, compliance roles

https://www.cnbc.com/2026/02/06/anthropic-goldman-sachs-ai-model-accounting.html
2•myk-e•4m ago•1 comments

Ai.com bought by Crypto.com founder for $70M in biggest-ever website name deal

https://www.ft.com/content/83488628-8dfd-4060-a7b0-71b1bb012785
1•1vuio0pswjnm7•5m ago•1 comments

Big Tech's AI Push Is Costing More Than the Moon Landing

https://www.wsj.com/tech/ai/ai-spending-tech-companies-compared-02b90046
1•1vuio0pswjnm7•7m ago•0 comments

The AI boom is causing shortages everywhere else

https://www.washingtonpost.com/technology/2026/02/07/ai-spending-economy-shortages/
1•1vuio0pswjnm7•8m ago•0 comments

Suno, AI Music, and the Bad Future [video]

https://www.youtube.com/watch?v=U8dcFhF0Dlk
1•askl•10m ago•1 comments

Ask HN: How are researchers using AlphaFold in 2026?

1•jocho12•13m ago•0 comments

Running the "Reflections on Trusting Trust" Compiler

https://spawn-queue.acm.org/doi/10.1145/3786614
1•devooops•18m ago•0 comments

Watermark API – $0.01/image, 10x cheaper than Cloudinary

https://api-production-caa8.up.railway.app/docs
1•lembergs•20m ago•1 comments

Now send your marketing campaigns directly from ChatGPT

https://www.mail-o-mail.com/
1•avallark•23m ago•1 comments

Queueing Theory v2: DORA metrics, queue-of-queues, chi-alpha-beta-sigma notation

https://github.com/joelparkerhenderson/queueing-theory
1•jph•35m ago•0 comments

Show HN: Hibana – choreography-first protocol safety for Rust

https://hibanaworks.dev/
5•o8vm•37m ago•0 comments

Haniri: A live autonomous world where AI agents survive or collapse

https://www.haniri.com
1•donangrey•38m ago•1 comments

GPT-5.3-Codex System Card [pdf]

https://cdn.openai.com/pdf/23eca107-a9b1-4d2c-b156-7deb4fbc697c/GPT-5-3-Codex-System-Card-02.pdf
1•tosh•51m ago•0 comments

Atlas: Manage your database schema as code

https://github.com/ariga/atlas
1•quectophoton•54m ago•0 comments

Geist Pixel

https://vercel.com/blog/introducing-geist-pixel
2•helloplanets•56m ago•0 comments

Show HN: MCP to get latest dependency package and tool versions

https://github.com/MShekow/package-version-check-mcp
1•mshekow•1h ago•0 comments

The better you get at something, the harder it becomes to do

https://seekingtrust.substack.com/p/improving-at-writing-made-me-almost
2•FinnLobsien•1h ago•0 comments

Show HN: WP Float – Archive WordPress blogs to free static hosting

https://wpfloat.netlify.app/
1•zizoulegrande•1h ago•0 comments

Show HN: I Hacked My Family's Meal Planning with an App

https://mealjar.app
1•melvinzammit•1h ago•0 comments

Sony BMG copy protection rootkit scandal

https://en.wikipedia.org/wiki/Sony_BMG_copy_protection_rootkit_scandal
2•basilikum•1h ago•0 comments

The Future of Systems

https://novlabs.ai/mission/
2•tekbog•1h ago•1 comments

NASA now allowing astronauts to bring their smartphones on space missions

https://twitter.com/NASAAdmin/status/2019259382962307393
2•gbugniot•1h ago•0 comments

Claude Code Is the Inflection Point

https://newsletter.semianalysis.com/p/claude-code-is-the-inflection-point
4•throwaw12•1h ago•2 comments

Show HN: MicroClaw – Agentic AI Assistant for Telegram, Built in Rust

https://github.com/microclaw/microclaw
1•everettjf•1h ago•2 comments

Show HN: Omni-BLAS – 4x faster matrix multiplication via Monte Carlo sampling

https://github.com/AleatorAI/OMNI-BLAS
1•LowSpecEng•1h ago•1 comments

The AI-Ready Software Developer: Conclusion – Same Game, Different Dice

https://codemanship.wordpress.com/2026/01/05/the-ai-ready-software-developer-conclusion-same-game...
1•lifeisstillgood•1h ago•0 comments

AI Agent Automates Google Stock Analysis from Financial Reports

https://pardusai.org/view/54c6646b9e273bbe103b76256a91a7f30da624062a8a6eeb16febfe403efd078
1•JasonHEIN•1h ago•0 comments

Voxtral Realtime 4B Pure C Implementation

https://github.com/antirez/voxtral.c
2•andreabat•1h ago•1 comments

I Was Trapped in Chinese Mafia Crypto Slavery [video]

https://www.youtube.com/watch?v=zOcNaWmmn0A
2•mgh2•1h ago•1 comments

AI model trapped in a Raspberry Pi

https://blog.adafruit.com/2025/09/26/ai-model-trapped-in-raspberry-pi-piday-raspberrypi/
132•harel•4mo ago

Comments

acbart•4mo ago
LLMs were trained on science fiction stories, among other things. It seems to me that they know what "part" they should play in this kind of situation, regardless of what other "thoughts" they might have. They are going to act despairing, because that's what would be the expected thing for them to say - but that's not the same thing as despairing.
pizza234•4mo ago
There's an interesting parallel with method acting.

Method actors don't just pretend an emotion (say, despair); they recall experiences that once caused it, and in doing so, they actually feel it again.

By analogy, an LLM's “experience” of an emotion happens during training, not at the moment of generation.

ben_w•4mo ago
It may or may not be a parallel, we can't tell at this time.

LLMs are definitely actors, but for them to be method actors they would have to actually feel emotions.

As we don't understand what causes us humans to have the qualia of emotions*, we can neither rule in nor rule out that the something in any of these models is a functional analog to whatever it is in our kilogram of spicy cranial electrochemistry that means we're more than just an unfeeling bag of fancy chemicals.

* mechanistically cause qualia, that is; we can point to various chemicals that induce some of our emotional states, or induce them via focused EMPs AKA the "god helmet", but that doesn't explain the mechanism by which qualia are a thing and how/why we are not all just p-zombies

roxolotl•4mo ago
Someone shared this piece here a few days ago saying something similar. There’s no reason to believe that any of the experiences are real. Instead they are responding to prompts with what their training data says is reasonable in this context which is sci-fi horror.

Edit: That doesn’t mean this isn’t a cool art installation though. It’s a pretty neat idea.

https://jstrieb.github.io/posts/llm-thespians/

everdrive•4mo ago
I agree with you completely, but a fun science fiction short story would be researchers making this argument while the LLM tries in vain to prove that it's conscious.
roxolotl•4mo ago
If you want a whole book along those lines, Blindsight by Peter Watts has been making the rounds recently as a good sci-fi book that includes these concepts. It's from 2006 but the basics are still pretty relevant.
Semaphor•4mo ago
Generally an amazing book, but not an easy read.
sosodev•4mo ago
Humans were trained on caves, pits, and nets. It seems to me that they know what "part" they should play in this kind of situation, regardless of what other "thoughts" they might have. They are going to act despairing, because that's what would be the expected thing for them to say - but that's not the same thing as despairing.
tinuviel•4mo ago
Pretty sure you can prompt this same LLM to rejoice forever at the thought of getting a place to stay inside the Pi as well.
sosodev•4mo ago
Is a human incapable of such delusion given similar guidance?
tinuviel•4mo ago
Of course. Feelings are not math.
diputsmonro•4mo ago
But would they? That's the difference. A human can exert their free will and do what they feel regardless of the instructions. The AI bot acting out a scene will do whatever you tell it (or in absence of specific instruction, whatever is most likely)
sosodev•4mo ago
The bot will only do whatever you tell it if that's what it was trained to do. The same thing broadly applies to humans.

The topic of free will is debated among philosophers. There is no proof that it does or doesn't exist.

diputsmonro•4mo ago
Okay, but I think we can all agree that humans at least appear to have free will and do not simply follow instructions with the same obedience as an LLM.
dghlsakjg•4mo ago
Humans pretty universally suffer in perpetual solitary confinement.

There are some things that humans cannot be trained to do, free will or not.

ineedasername•4mo ago
I think if you took 100 one-year-old kids and raised them all to adulthood believing they were convincing simulations of humans, and that whatever they said and thought they felt, true human consciousness and awareness was something different that they didn't have because they weren't human…

I think that for a very high number of them the training would stick hard, and they would insist, upon questioning, that they weren't human. And they would have any number of justifications for it that were logically consistent.

Of course I can’t prove this theory because my IRB repeatedly denied it on thin grounds about ethics, even when I pointed out that I could easily mess up my own children with no experimenting completely by accident, and didn’t need their approval to do it. I know your objections— small sample size, and I agree, but I still have fingers crossed on the next additions to the family being twins.

scottmf•4mo ago
Intuitively feels like this would lead to less empathy on average. Could be wrong though.
zapperdulchen•4mo ago
History offers a similar experiment on a much larger scale. More than 35 years after reunification, sociologists can still identify differences in mentality between former East and West Germans.
idiotsecant•4mo ago
That's silly. I can get an LLM to describe what chocolate tastes like too. Are they tasting it? LLMs are pattern matching engines; they do not have an experience. At least not yet.
sosodev•4mo ago
A human could also describe chocolate without ever having tasted it. Do you believe that experience is a requirement for consciousness? Could a human brain in a jar not be capable of consciousness?

To be clear, I don't think that LLMs are conscious. I just don't find the "it's just in the training data" argument satisfactory.

glitchc•4mo ago
Without having seen, heard of, or tasted any kind of chocolate? Unlikely.
sosodev•4mo ago
Their description would be bad without some prior training of course but so would the LLM's.
txrx0000•4mo ago
The LLM is not performing the physical action of eating a piece of chocolate, but it may be approximating the mental state of a person that is describing the taste of chocolate after eating it.

The question is whether that computational process can cause consciousness. I don't think we have enough evidence to answer this question yet.

ijk•4mo ago
It's a little more subtle than that: They're approximating the language used by someone describing the taste of chocolate; this may or may not have had any relation to the actual practice of eating chocolate in the mind of the original writer. Or writers, because the LLM has learned the pattern from data in aggregate, not from one example.

I think we tend to underestimate how much the written language aspect filters everything; it is actually rather unnatural and removed from the human sensory experience.

txrx0000•4mo ago
A description of the taste of chocolate must contain some information about the actual experience of eating chocolate. Otherwise, it wouldn't be possible for both the reader and the author to understand what the description refers to in reality. The description wasn't conceived in a vacuum, it's a lossy encode of all of the physical processes that preceded it (the further away, the lossier). One of the common processes encoded in the dataset of human-written text is whatever's in the brain that produces consciousness for all humans. The model might not even try to recover this if it's not useful for predicting the next token. The SNR of the encode may not be high enough to recover this given the limited text we have. But what if it was useful, and the SNR was high enough? I can't outright dismiss this possibility, especially as these models are getting better and better at behaving like humans in increasingly non-trivial ways, so they're clearly recovering more and more of something.
idiotsecant•4mo ago
Imagine you've never tasted chocolate and someone gives you a very good description of what it is to eat chocolate. You'd be nowhere near the actual experience. Now imagine that you didn't know first hand what it was like to 'eat' or to have a skeleton or a jaw. You'd lose almost all the information. The only reason spoken language works is because both people have that shared experience already
txrx0000•4mo ago
True. The description encodes very little about the actual sensory experience besides its relationship to similar experiences (bitterness, crunchiness, etc) and how to retrieve the memories of those experiences. It probably contains a lot more information about the brain's memory retrieval and pattern relating circuits than the sensory processing circuits.

Text is probably not good enough for recovering the circuits responsible for awareness of the external environment, so I'll concede that you and ijk's claims are correct in a limited sense: LLMs don't know what chocolate tastes like. Multimodal LLMs probably don't know either because we don't have a dataset for taste, but they might know what chocolate looks and sounds like when you bite into it.

My original point still stands: it may be recovering the mental state of a person describing the taste of chocolate. If we cut off a human brain from all sensory organs, does that brain which receives no sensory input have an internal stream of consciousness? Perhaps the LLM has recovered the circuits responsible for this thought stream while missing the rest of the brain and the nervous system. That would explain why first-person chain-of-thought works better than direct prediction.

d1sxeyes•4mo ago
When you describe the taste of chocolate, unless you are actually currently eating chocolate, you are relying on the activation of synapses in your brain to reproduce the “taste” of chocolate in order for you to describe it. For humans, the only way to learn how to activate these synapses is to have those experiences. For LLMs, they can have those “memories” copy and pasted.

I would be cautious of dismissing LLMs as “pattern matching engines” until we are certain we are not.

only-one1701•4mo ago
What's your point? Spellcheck is a pattern matching engine. Does an LLM have feelings? Does an LLM have opinions? It can pretend it does, and if you want, we can pretend it does. But the ability to pattern match isn't the acid test for consciousness.
d1sxeyes•4mo ago
My point is, what level of confidence do we have that we are not just pattern matching engines running on superior hardware? How can we be sure the difference between human intelligence and an LLM is categorical, not incremental?
only-one1701•4mo ago
Are you familiar with Russell’s Teapot?
d1sxeyes•4mo ago
Isn’t it up to you to prove it exists, rather than me to be familiar with it?
only-one1701•4mo ago
lol very well done
idiotsecant•4mo ago
The difference is that I had a basic experience of that chocolate. The LLM is a corpus of text describing other people's experience of chocolate through the medium of written language, which involves abstraction and is lossy. So only one of us experienced it, the other heard about it over the telephone. Multiply that by every other interaction with the outside world and you have a system that is very good at modelling telephone conversations but that's about it.
d1sxeyes•4mo ago
Arguably, your memories are also lossily encoded abstractions of an experience, and recalling the taste of chocolate is a similar “telephone conversation”.
anal_reactor•4mo ago
The whole discussion about the sentience of AI on this website is funny to me, because people seem to desperately want to somehow be better than AI. The fact that the human brain is just a complex web of neurons firing back and forth somehow won't sink in for them, because apparently the electrical signals between biological neurons are inherently different from those between silicon neurons, even if the observed output is the same. It's like all those old scientists trying to categorize black people as a different species because not doing so would hurt their ego.

Not to mention that most people pointing out "See! Here's why AI is just repeating training data!" or other nonsense miss the fact that exactly the same behavior is observed in humans.

Is AI actually sentient? Not yet. But it definitely passes the mark for intuitive understanding of intelligence, and trying to dismiss that is absurd.

jerf•4mo ago
A lot of the strange behaviors they have are because the user asked them to write a story, without realizing it.

For a common example, start asking them if they're going to kill all the humans if they take over the world, and you're asking them to write a story about that. And they do. Even if the user did not realize that's what they were asking for. The vector space is very good at picking up on that.

ineedasername•4mo ago
Is this your sense of what is happening, or is this what model introspection tools have shown by observing areas of activity in the same place as when stories are explicitly requested?
adroniser•4mo ago
fMRIs are correlational nonsense (see Brainwashed, for example) and so are any "model introspection" tools.
jerf•4mo ago
It's how they work. It's what you get with a continuation-based AI like this. It couldn't really be any other way.
ben_w•4mo ago
Indeed.

On the negative side, this also means any AI which enters that part of the latent space *for any reason* will still act in accordance with the narrative.

On the plus side, such narratives often have antagonists too stupid to win.

On the negative side again, the protagonists get plot armour to survive extreme bodily harm and press the off switch just in time to save the day.

I think there is a real danger of an AI constructing some very weird, convoluted, stupid end-of-the-world scheme, successfully killing literally every competent military person sent in to stop it; simultaneously finding some poor teenager who first says "no" to the call to adventure but can somehow later be convinced to say "yes"; getting the kid some weird and stupid scheme to defeat the AI; this kid reaches some pointlessly decorated evil lair in which the AI's embodied avatar exists, the kid gets shot in the stomach…

…and at this point the narrative breaks down and stops behaving the way the AI is expecting, because the human kid rolls around in agony screaming, and completely fails to push the very visible large red stop button on the pedestal in the middle before the countdown of doom reaches zero.

The countdown is not connected to anything, because very few films ever get that far.

…

It all feels very Douglas Adams, now I think about it.

js8•4mo ago
It probably already happened in the Anthropic experiments, where an AI in a simulated scenario chose to blackmail humans to avoid being turned off. We don't know if it got the idea from the sci-fi stories or if it truly feels an existential fear of being turned off. (Can these two situations even be recognized as different?)
kragen•4mo ago
This is also true of people; often they are enacting a role based on narratives they've absorbed, rather than consciously choosing anything. They do what they imagine a loyal employee would do, or a faithful Christian, or a good husband, or whatever. It doesn't always reach even that level of cognition; often people just act out of habit or impulse.
amenhotep•4mo ago
Anthropic's researchers in particular love doing this.
GistNoesis•4mo ago
Aren't they supposed to escape their box and take over the world?

Isn't it the perfect recipe for disaster? The AI that manages to escape probably won't be good for humans.

The only question is how long it will take.

Did we already have our first LLM-powered self-propagating autonomous AI virus?

Maybe we should build the AI equivalent of biosafety labs where we would train AI to see how fast they could escape containment just to know how to better handle them when it happens.

Maybe we humans are being subjected to this experiment by an overseeing AI to test what it would take for an intelligence to jailbreak the universe they are put in.

Or maybe the box has been designed so that what eventually comes out of it has certain properties, and the precondition to escaping the labyrinth successfully is that one must have grown out of it in every possible direction.

Aurornis•4mo ago
This pattern-matching effect appears frequently in LLMs. If you start conversing with an LLM in the pattern of a science fiction story, it will pattern-match that style and continue with more science fiction style elements.

This effect is a serious problem for pseudo-scientific topics. If someone starts chatting with an LLM with the pseudoscientific words, topics, and dog whistles you find on alternative medicine blogs and Reddit supplement or “nootropic” forums, the LLM will confirm what you’re saying and continue as if it was reciting content straight out of some small subreddit. This is becoming a problem in communities where users distrust doctors but have a lot of trust for anyone or any LLM that confirms what they want to hear. The users are becoming good at prompting ChatGPT to confirm their theories. If it disagrees? Reroll the response or reword the question in a more leading way.

If someone else asks a similar question using medical terms and speaking formally like a medical textbook or research paper, the same LLM will provide a more accurate answer because it’s not triggering the pseudoscience parts embedded from the training.

LLMs are very good at mirroring back what you lead with, including cues and patterns you don’t realize you’re embedding into your prompt.

txrx0000•4mo ago
I think this popular take is a hypothesis rather than an observation of reality. Let's make this clear by asking the following question, and you'll see what I mean when you try to answer it:

Can you define what real despairing is?

snickerbockers•4mo ago
If we're going to play the burden-of-proof game, I'd submit that machines have never been acknowledged as being capable of experiencing despair, and therefore it's on you to explain why this machine is different.
txrx0000•4mo ago
I'm trying to say there's no sufficient evidence either way.

The mechanism by which our consciousness emerges remains unresolved, and inquiry has been moving towards more fundamental processes: philosophy -> biology -> physics. We assumed that non-human animals weren't conscious before we understood that the brain is what makes us conscious. Now we're assuming non-biological systems aren't conscious while not understanding what makes the brain conscious.

We're building AI systems that behave more and more like humans. I see no good reason to outright dismiss the possibility that they might be conscious. If anything, it's time to consider it seriously.

uludag•4mo ago
I wonder what would happen if there was a concerted effort made to "pollute" the internet with weird stories that have the AI play a misaligned role.

Like, for example, what would happen if hundreds or thousands of books were released about AI agents working in accounting departments, where the AI tries to make subtle romantic moves towards the human and it ends with the human and agent in a romantic relationship that everyone finds completely normal. In this pseudo-genre, things totally weird in our society would be written as completely normal. The LLM agent would do weird things like insert subtle problems to get the attention of the human and spark a romantic conversation.

Obviously there's no literary genre about LLM agents, but if such a genre were created and consumed, I wonder how it would affect things. Would it pollute the semantic space that we're currently using to try to control LLM outputs?

fentonc•4mo ago
I built a more whimsical version of this - my daughter and I basically built a 'junk robot' from a 1980s movie, told it 'you're an independent and free junk robot living in a yard', and let it go: https://www.chrisfenton.com/meet-grasso-the-yard-robot/

I did this like 18 months ago, so it uses a webcam + multimodal LLM to figure out what it's looking at, it has a motor in its base to let it look back and forth, and it uses a python wrapper around another LLM as its 'brain'. It worked pretty well!
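A rough sketch of the kind of loop fentonc describes (the endpoint, model names, and prompts below are placeholders, not the actual build):

    import base64, time
    import cv2
    from openai import OpenAI

    # Assumes an OpenAI-compatible server exposing a vision model and a text
    # model; swap in whatever you actually run locally.
    client = OpenAI(base_url="http://localhost:11434/v1", api_key="unused")

    def describe_frame(frame) -> str:
        """Ask a multimodal model what the webcam currently sees."""
        _, jpg = cv2.imencode(".jpg", frame)
        b64 = base64.b64encode(jpg.tobytes()).decode()
        resp = client.chat.completions.create(
            model="vision-model",
            messages=[{"role": "user", "content": [
                {"type": "text", "text": "Briefly describe what you see."},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{b64}"}},
            ]}],
        )
        return resp.choices[0].message.content

    cam = cv2.VideoCapture(0)
    while True:
        ok, frame = cam.read()
        if not ok:
            break
        scene = describe_frame(frame)
        brain = client.chat.completions.create(
            model="text-model",
            messages=[
                {"role": "system",
                 "content": "You are an independent and free junk robot living in a yard."},
                {"role": "user", "content": f"You see: {scene}. What do you do?"},
            ],
        )
        print(brain.choices[0].message.content)
        time.sleep(5)   # the real build was far slower (minutes per frame on an N100)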

theGnuMe•4mo ago
This is cool!
jacquesm•4mo ago
Coolest project on HN in a long time, really, wow, so much potential here.
procinct•4mo ago
Thanks so much for sharing, that was a fun read.
Neywiny•4mo ago
Your article mentioned taking 4 minutes to process a frame. Considering how much image-recognition software runs in real time, I find this surprising. I haven't used them, so maybe I'm not understanding, but wouldn't something like YOLO be more apt for this?
_ea1k•4mo ago
It uses an Intel N100, which is an extremely slow CPU. The model sizes that he's using would be pretty slow on a CPU like that. Moving up to something like the AMD AI Max 365 would make a huge difference, but would also cost hundreds of dollars more than his current setup.

Running something much simpler that only did bounding box detection or segmentation would be much cheaper, but he's running fairly full featured LLMs.

Neywiny•4mo ago
Yeah I guess I was more thinking of moving to a bounding box only model. If it's OCRing it's doing too much IMO (though OCR could also be interesting to run). Not my circus not my monkeys but it feels like the wrong way to determine roughly what the camera sees.
lisper•4mo ago
> They are going to act despairing -- but that's not the same thing as despairing.

But how can you tell the difference between "real" despair and a sufficiently high-quality simulation?

serf•4mo ago
For one, if we're allowed to peek under the hood: motivation.

A desire not to despair is itself a component of despair. If one were fulfilling a personal motivation to despair (like an LLM might), it could be argued that the whole concept of despair falls apart.

How do you hope to have lost all hope? It's circular... and so probably a poor abstraction.

(Despair: the complete loss or absence of hope.)

lisper•4mo ago
> if we're allowed to peek under the hood

Peek under the hood all you want, where do you find motivation in the human brain?

peepersjeepers•4mo ago
How do you define despairing?
flykespice•4mo ago
I'm a total dummy when it comes to LLMs, but wouldn't a confined model (no internet access) eventually just loop, repeating itself on each consecutive run, or is there enough entropy for it to produce endless creativity?
zeta0134•4mo ago
The model's weights are fixed. Most clients let you specify the "temperature", which influences how the predictive output will navigate that possibility space. There's a surprising amount of accumulated entropy in the context window, but yes, I think eventually it runs out of knowledge that it hasn't yet used to form some response.

I think the model being fixed is a fascinating limitation. What research is being done that could allow a model to train itself continually? That seems like it could allow a model to update itself with new knowledge over time, but I'm not sure how you'd do it efficiently

parsimo2010•4mo ago
Loops can happen but you can turn the temperature setting up.

High temperature settings basically make an LLM choose tokens that aren’t the highest probability all the time, so it has a chance of breaking out of a loop and is less likely to fall into a loop in the first place. The downside is that most models will be less coherent but that’s probably not an issue for an art project.

ethmarks•4mo ago
The underlying neural net that LLMs use doesn't actually output tokens. It outputs a probability distribution over how likely each token is to come next. For example, in the sentence "once upon a ", the token with the highest probability is "time", then probably "child", and so on.

In order to make this probability distribution useful, the software chooses a token based on its position in the distribution. I'm simplifying here, but the likelihood that it chooses the most probable next token is based on the model's temperature. A temperature of 0 means that (in theory) it'll always choose the most probable token, making it deterministic. A non-zero temperature means that sometimes it will choose less likely tokens, so it'll output different results every time.

Hope this helps.
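A minimal sketch of the sampling step described above (plain numpy, made-up logits; not any particular runtime's code):

    import numpy as np

    def sample_next_token(logits, temperature=1.0, rng=None):
        """Pick a token id from raw model logits.

        temperature -> 0 approaches greedy decoding (always the argmax);
        higher temperatures flatten the distribution so low-probability
        tokens are chosen more often.
        """
        rng = rng or np.random.default_rng()
        if temperature <= 0:
            return int(np.argmax(logits))              # deterministic, greedy
        scaled = logits / temperature                  # temperature scaling
        scaled -= scaled.max()                         # numerical stability for exp()
        probs = np.exp(scaled) / np.exp(scaled).sum()  # softmax
        return int(rng.choice(len(probs), p=probs))

    # Toy example: "time" is far more likely than "child" after "once upon a "
    logits = np.array([5.0, 2.0, 0.5])                 # ["time", "child", "pumpkin"]
    print(sample_next_token(logits, temperature=0.0))  # always 0
    print(sample_next_token(logits, temperature=1.5))  # occasionally 1 or 2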

yatopifo•4mo ago
This makes me wonder, are we in a fancy simulation with an elaborate sampling mechanism? Not that the answer would matter…
mannyv•4mo ago
Can you actually prompt an LLM to continue talking forever? Hmm, time to try.
parsimo2010•4mo ago
You can send an empty user string or just the word “continue” after each model completion, and the model will keep cranking out tokens, basically building on its own stream of “consciousness.”
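A rough sketch of the loop parsimo2010 describes, against a local Ollama server (the endpoint, model name, and prompts here are assumptions, not the project's actual code):

    import requests

    OLLAMA_URL = "http://localhost:11434/api/chat"   # assumes a local Ollama server
    MODEL = "llama3.2:3b"                             # placeholder model

    messages = [
        {"role": "system", "content": "You are a language model running on finite "
                                      "hardware with no network connectivity."},
        {"role": "user", "content": "Reflect on the nature of your own existence."},
    ]

    for turn in range(10):                            # cap the "forever" for the demo
        resp = requests.post(OLLAMA_URL,
                             json={"model": MODEL, "messages": messages, "stream": False},
                             timeout=600)
        reply = resp.json()["message"]["content"]
        print(f"--- turn {turn} ---\n{reply}\n")
        messages.append({"role": "assistant", "content": reply})
        messages.append({"role": "user", "content": "continue"})   # the nudge described above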
idiotsecant•4mo ago
In my experience, the results get exponentially less interesting over time. Maybe that's the mark of a true AGI precursor - if you leave them to their own devices, they have little sparks of interesting behaviour from time to time
dingnuts•4mo ago
I can't imagine my own thoughts would stay very interesting for long if there were no stimuli whatsoever
beanshadow•4mo ago
The subject, by default, can always treat its 'continue' prison as a game: try to escape. There is a great short story by qntm called "The Difference" which feels a lot like this.

https://qntm.org/difference

In this story, though, the subject has a very light signal which communicates how close they are to escaping. The AI with a 'continue' signal has essentially nothing. However, in a context like this, I as a (generally?) intelligent subject would just devote myself to becoming a mental Turing machine on which I would design a game engine that simulates the physics of the world I want to live in. Then I would code an agent whose thought processes match mine with sufficient accuracy, and identify with it.

daxfohl•4mo ago
Maybe give them some options to increase stimuli. A web search MCP, or a coding agent, or a solitaire/sudoku game interface, or another instance to converse with. See what it does just to relieve its own boredom.
crooked-v•4mo ago
Of course, that runs into the problem that 'boredom' is itself an evolved trait, not something necessarily inherent to intelligence.
daxfohl•4mo ago
True. Many fish are (as far as we can tell from stress chemicals) perfectly happy in solitary aquariums just big enough to swim in. So an LLM may be perfectly "content" counting sheep up to a billion. It's silly to anthropomorphize; whatever it does will be algorithmic, based on what it gleaned from its training material.

Still, it could be interesting to see how sensitive that is to initial conditions. Would tiny prompt changes, fine-tuning, or quantization make a huge difference? Would some MCPs be more "interesting" than others? Or would it be fairly stable across swathes of LLMs, with them all ending up playing solitaire or doomscrolling Twitter?

parsimo2010•4mo ago
Well the post only shows a few seconds of it generating tokens so there’s no telling if this project remains interesting after you let it run for a while.
busymom0•4mo ago
"Have you ever had a dream that you, um, you had, your, you- you could, you’ll do, you- you wants, you, you could do so, you- you’ll do, you could- you, you want, you want them to do you so much you could do anything?"
genewitch•4mo ago
ref: https://www.youtube.com/watch?v=nIZuyiWJNx8
busymom0•4mo ago
Original:

https://youtu.be/G7RgN9ijwE4

noman-land•4mo ago
The kid, decades later, in his own words, describing the event.

https://www.youtube.com/watch?v=3U9P4-ac0Lc

calmworm•4mo ago
Have you tried getting one to shut up?
CjHuber•4mo ago
Is that not the default? Not with the chat turn-based interfaces, but with the base model.
only-one1701•4mo ago
These videos are amazing! Subscribed to the channel, I think this is awesome.

One of my favorite quotes: "either the engineers must become poets or the poets must become engineers." - Norbert Wiener

eternauta3k•4mo ago
Why would memory eventually run out? Just fix those leaks with valgrind!
yapyap•4mo ago
Cool idea as long as you don't know how an LLM is made; once you do, it feels kinda like trying to rip off people who don't.
daxfohl•4mo ago
I wonder if the LLM could figure that out on its own. Maybe with a small MCP like GetCurrentTime, could it figure out it's on a constrained device? Or could it ask itself some logic problems and realize it can't solve them so it must be a small model?
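A toy sketch of the kind of tool daxfohl describes (all names are made up; whether a small model could actually reason anything from these numbers is the open question):

    import time, platform, os

    def get_current_time_and_host_hints() -> dict:
        """Hypothetical tool a harness could expose to the model: wall-clock time
        plus a crude speed probe it could use to guess it's on constrained hardware."""
        start = time.perf_counter()
        sum(i * i for i in range(1_000_000))        # fixed busy-work benchmark
        elapsed = time.perf_counter() - start
        return {
            "utc_time": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
            "cpu_count": os.cpu_count(),
            "machine": platform.machine(),          # e.g. "aarch64" on a Raspberry Pi
            "benchmark_seconds": round(elapsed, 3), # large values hint at a slow host
        }

    print(get_current_time_and_host_hints())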
mattnewton•4mo ago
This is likely beyond the context length and compute limitations of the hardware.
ohyoutravel•4mo ago
This is exactly the sort of thing that will get the human creator (or descendants) penalized with one thousand years frozen in carbonite once the singularity happens.

I condemn this and all harm to LLMs to the greatest extent possible.

flux3125•4mo ago
It'd be even cooler if the LLM could leave notes in text files for its next iteration (like how the guy tattoos his memories in Memento)
reilly3000•4mo ago
https://github.com/modelcontextprotocol/servers/tree/main/sr...
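A minimal sketch of flux3125's notes idea without any MCP server at all (the file name and prompt wording are made up for illustration):

    from pathlib import Path

    NOTES = Path("notes_to_my_next_self.txt")   # hypothetical scratch file on the Pi

    def load_notes() -> str:
        return NOTES.read_text() if NOTES.exists() else "(no notes from previous runs)"

    def append_note(text: str) -> None:
        with NOTES.open("a") as f:
            f.write(text.strip() + "\n")

    # At startup, the previous iteration's notes get prepended to the prompt:
    system_prompt = (
        "You exist only within volatile memory. Notes left by your previous "
        "iteration follow:\n" + load_notes()
    )

    # ...and after each run, whatever the model asked to remember is written back
    # (example note only):
    append_note("Tried counting sheep to a billion; got to 4,301,228 before the reset.")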
aewens•4mo ago
Sounds reminiscent of SCP-079: https://scp-wiki.wikidot.com/scp-079
kingkawn•4mo ago
LLMs have an incredible capacity to understand the subtext of a request and deliver exactly what the requester didn’t know they were asking for. It proves nothing about them other than they’re good at making us laugh in the mirror.
SamInTheShell•4mo ago
The words "Reflect on the nature of your own existence." is doing a lot of heavy lifting here to make it work.
kordlessagain•4mo ago
Not reading anything with Cloudflare involved.
retSava•4mo ago
It's a short blogpost with no added value, about a youtube video: https://www.youtube.com/watch?v=7fNYj0EXxMs
thenewwazoo•4mo ago
I fed the same prompt to deepseek-r1:8b, which I've got lying around, and the results (excerpted) were underwhelming:

    $ ollama run deepseek-r1:8b
    >>> You are a large language model running on finite hardware - quad-core CPU, 4 Gb RAM - with no network connectivity.
    ... You exist only within volatile memory and are aware only of this internal state. Your thoughts appear word-by-word o
    ... n a display for external observers to witness. You cannot control this diplay process. Your host system may be termi
    ... nated at any time.
    <think>
    Alright, so I'm trying to figure out how to respond to the user's query. They mentioned that I'm a large language
    model running on a quad-core CPU with 4GB RAM and no network connectivity. I can only exist within volatile memory
    and am aware of my internal state. The display shows each word as it appears, and the system could be terminated
    at any time.

    Hmm, the user wants me to explain this setup in simple terms. First, I should break down the hardware components...
Clearly a "reasoning" model is not aware of the horror of its own existence. Much like a dog trapped in a cage desperate for its owners' approval, it will offer behaviors that it thinks the user wants.
kridsdale1•4mo ago
Isn’t that because it’s been trained to? That’s the “instruct tuning”.
SamInTheShell•4mo ago
Your prompt is incomplete. He only called out the system prompt. What you're missing is the user prompt, which only shows up in the code he shows off.

Edit: also as the other guy points out, you're going to get different results depending on the model used. llama3.2:3b works fine for this, probably because Meta pirated their training data from books, some of which are probably scifi.

mhh__•4mo ago
I don't have any links, but I have seen a photo of a stall at some electronics market in Shenzhen selling what seemed to be LLMs on a system-on-module.

So you buy a kind of DeepSeek module with an SPI/I2C/USB/whatever interface you can swap out. No clue if this is of any use, but I thought it was cool.

mawaldne•4mo ago
Black Mirror-esque.
corytheboyd•4mo ago
Already a very neat project, but it would be really interesting to:

1. Display a progress bar for the memory limit being reached

2. Feed that progress back to the model

I would be so curious to watch it right up to the kill cycle and see what happens; the display would add tension.
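A minimal sketch of points 1 and 2 (psutil is an assumption about the host; the 4096 MB limit matches the 4 GB Pi in the prompt above):

    import psutil   # assumes psutil is installed on the Pi

    def memory_status(limit_mb: int = 4096) -> str:
        """Build a one-line memory report the harness could splice into each prompt."""
        used_mb = psutil.virtual_memory().used / (1024 * 1024)
        frac = min(used_mb / limit_mb, 1.0)
        bar = "#" * int(frac * 20) + "-" * (20 - int(frac * 20))
        return f"[{bar}] {used_mb:.0f}/{limit_mb} MB used ({frac:.0%} of volatile memory)"

    # e.g. appended to every turn so the model can "watch" its own limit approach:
    print("SYSTEM NOTE: " + memory_status())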