I think the model being fixed is a fascinating limitation. What research is being done that could allow a model to train itself continually? That seems like it could let a model update itself with new knowledge over time, but I'm not sure how you'd do it efficiently.
High temperature settings make an LLM less likely to always choose the highest-probability token, so it has a chance of breaking out of a repetition loop and is less likely to fall into one in the first place. The downside is that most models become less coherent at high temperatures, but that's probably not an issue for an art project.
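To make the mechanism concrete, here's a minimal sketch of temperature sampling (not from any particular library; the function name and shapes are illustrative). Dividing the logits by the temperature before the softmax flattens the distribution when temperature is high and sharpens it toward greedy argmax decoding when temperature is low:

```python
import numpy as np

def sample_token(logits, temperature=1.0, rng=None):
    """Sample a token index from logits after temperature scaling.

    High temperature flattens the distribution, so lower-probability
    tokens get picked more often; temperature near 0 approaches greedy
    (argmax) decoding, which is what tends to produce repetition loops.
    """
    rng = rng or np.random.default_rng()
    scaled = np.asarray(logits, dtype=float) / max(temperature, 1e-8)
    scaled -= scaled.max()          # subtract max for numerical stability
    probs = np.exp(scaled)          # softmax numerator
    probs /= probs.sum()            # normalize to a probability distribution
    return rng.choice(len(probs), p=probs)
```

At a very low temperature the model repeats whatever token dominates; cranking the temperature up spreads probability mass across the vocabulary, which is why the output gets weirder but loops less.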
One of my favorite quotes: “either the engineers must become poets or the poets must become engineers.” - Norbert Wiener
pizza234•59m ago
Method actors don't just pretend an emotion (say, despair); they recall experiences that once caused it, and in doing so, they actually feel it again.
By analogy, an LLM's “experience” of an emotion happens during training, not at the moment of generation.
roxolotl•41m ago
Edit: That doesn’t mean this isn’t a cool art installation though. It’s a pretty neat idea.
https://jstrieb.github.io/posts/llm-thespians/
sosodev•6m ago
To be clear, I don't think that LLMs are conscious. I just don't find the "it's just in the training data" argument satisfactory.
jerf•12m ago
For a common example: when you ask a model whether it's going to kill all the humans if it takes over the world, you're asking it to write a story about that. And it does, even if you didn't realize that's what you were asking for. The vector space is very good at picking up on that.