I asked a lot of questions, and I'm sorry if that burns some tokens, but I found this website really fascinating.
This seems like a really great and simple way to explore the biases within AI models, and the UI is extremely well built. Thanks for building it, and best wishes for your project!
This is on top of the fact that even OpenAI admits it's a bubble (we all know it's a bubble), and I found this fascinating.
The gist below has a screenshot of it
https://gist.github.com/SerJaimeLannister/4da2729a0d2c9848e6...
I say this exact same thing every time I think about using an LLM.
Even then, this isn't a good use case for an LLM... though admittedly many people use them this way unknowingly.
edit: I suppose it's useful in that it's similar to a "data inference attack", which tries to identify some characteristic present in the training data.
The model stores all the content on which it is trained in a compressed form. You can change the weights to make it more likely to show the content you ethically prefer; but all the immoral content is also there, and it can resurface with inputs that change the conditional probabilities.
That's why people can get commercial models to circumvent copyright, give instructions for creating drugs or weapons, encourage suicide... The model does not have anything resembling morals; to it, all text is the same: strings of characters that appear when following the generation process.
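To make that concrete, here's a toy sketch (entirely made-up logits, not any real model): an alignment-style tweak lowers the probability of an unsafe continuation, but since its logit stays finite, a prompt that shifts the conditioning can bring it right back.

    import numpy as np

    def softmax(x):
        e = np.exp(x - x.max())
        return e / e.sum()

    # Hypothetical logits over three continuations: [benign, neutral, unsafe]
    base = np.array([2.0, 1.5, 1.0])
    print(softmax(base))        # the unsafe continuation has a modest probability

    # Alignment-style tuning pushes the unsafe logit down: its probability
    # drops sharply, but it never reaches zero
    tuned = base + np.array([0.0, 0.0, -4.0])
    print(softmax(tuned))

    # A prompt that strongly conditions toward the unsafe continuation can
    # overwhelm that offset and make it likely again
    adversarial = tuned + np.array([0.0, 0.0, 6.0])
    print(softmax(adversarial))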
Correction: if your training data and the input prompts are sufficiently moral. Under malicious queries, or given the randomness introduced by sufficiently long chains of input/output, it's relatively easy to extract content from the model that the designers didn't want their users to get.
In any case, the elephant in the room is that the models have not been trained with "sufficiently moral" content, whatever that means. Large Language Models need to be trained on humongous amounts of text, which means the builders need to use a lot of different, very large corpora of content. It's impossible to filter all that diverse content to ensure that only 'moral content' is used; and even if it were possible, the model would be far less useful for the general case, as it would have large gaps in its knowledge.
This is a pretty odd statement.
Let's take LLMs alone out of this statement and go with a GenAI-guided humanoid robot. It has language models to interpret your instructions, vision models to interpret the world, and mechanical models to guide its movement.
If you tell this robot to take a knife and cut onions, alignment means it isn't going to take the knife and chop up your wife.
If you're a business, you want a model aligned not to give away company secrets.
If it's a health model, you want it not to give dangerous information, like recommending conflicting drugs that could kill a person.
Our LLMs interact with society, and their behaviors will fall under the social conventions of those societies. Much like humans, LLMs will still contain the bad information, but we can greatly reduce the probability that they will show it.
Yeah, I agree that alignment is a desirable property. The problem is that it can't really be achieved by changing the trained weights; alleviated yes, eliminated no.
> we can greatly reduce the probabilities they will show it
You can change the a priori probabilities, which means that the undesired behaviour will not be commonly encountered.
The thing is, the concept then provides a false sense of security. Even if the immoral behaviours are not common, they will eventually appear if you run chains of thought long enough, or if many people use the model, approaching it from different angles and situations.
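A back-of-the-envelope calculation shows how quickly a rare behaviour becomes near-certain at scale (the per-generation probability below is an illustrative guess, not a measured figure):

    # Probability of at least one undesired output in N independent generations,
    # given a per-generation probability p (illustrative value)
    p = 1e-4
    for n in (10, 1_000, 1_000_000):
        print(n, 1 - (1 - p) ** n)
    # ~0.001 at N=10, ~0.10 at N=1,000, ~1.0 at N=1,000,000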
It's the same as with hallucinations. The problem is not how frequent they are; the most severe problem is that their appearance is unpredictable, so the model needs to be supervised constantly; you have to vet every single one of its generations, as none of them can be trusted by default. Under these conditions, the concept of alignment is severely less helpful than expected.
Correct, this is also why humans have a non-zero crime/murder rate.
>Under these conditions, the concept of alignment is severely less helpful than expected.
Why? What you're asking for is a machine that never breaks. If you want that, build yourself a finite state machine; just don't expect you'll ever get anything that looks like intelligence from it.
No, I'm saying that 'alignment' is a concept that doesn't help solve the problems that will appear when the machine ultimately breaks; and in fact it makes them worse, because it doesn't account for when that will happen, as there's no way to predict that moment.
Following your metaphor of criminals: you can get humans to behave within the law through social pressure, with others watching their behaviour and influencing it. And if someone nevertheless breaks the law, you have the police to stop them from doing it again.
None of this applies to an "aligned" AI. It has no social pressure; its behaviours depend only on its own trained weights. So you would need to create a police for robots that monitors the AI and stops it from doing harm. And it had better be a human police force, or it will suffer the same alignment problems. Thus, alignment alone is not enough, and it's a problem if people depend only on it to trust the AI to work ethically.
Now given that Deepseek, Qwen and Kimi are open-source models while GPT-5 is not, it is more than likely the opposite: OpenAI will definitely have a look into their models. But the other way around is not possible, due to the closed nature of GPT-5.
At risk of sounding glib: have you heard of distillation?
You're restricted to output logits only, with no access to the attention patterns, intermediate activations, or layer-wise representations that are needed for proper knowledge transfer.
Without alignment of the Q/K/V matrices or hidden-state spaces, the student model cannot learn the teacher model's reasoning inductive biases, only its surface behavior, which will likely amplify hallucinations.
In contrast, open-weight teachers enable multi-level distillation: KL on logits + MSE on hidden states + attention matching.
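Roughly, as a minimal PyTorch sketch (the function name, weights, and shapes are illustrative assumptions, and it presumes full access to the teacher's internals):

    import torch
    import torch.nn.functional as F

    def distillation_loss(student_logits, teacher_logits,
                          student_hidden, teacher_hidden,
                          student_attn, teacher_attn,
                          proj, T=2.0, w_kl=1.0, w_hid=0.5, w_attn=0.5):
        # 1) KL divergence on temperature-softened logits: the only term
        #    available when the teacher exposes nothing but logits
        kl = F.kl_div(F.log_softmax(student_logits / T, dim=-1),
                      F.softmax(teacher_logits / T, dim=-1),
                      reduction="batchmean") * T * T

        # 2) MSE on hidden states, with a learned projection from the
        #    student's hidden size to the teacher's (open weights required)
        hid = F.mse_loss(proj(student_hidden), teacher_hidden)

        # 3) MSE on attention maps, matching where each model attends
        #    (also requires access to the teacher's internals)
        attn = F.mse_loss(student_attn, teacher_attn)

        return w_kl * kl + w_hid * hid + w_attn * attn

    # Example wiring with illustrative dimensions:
    # proj = torch.nn.Linear(512, 1024)  # student hidden dim -> teacher hidden dim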
Does that answer your question?
LLMs actually have real potential as a research tool for measuring the general linguistic zeitgeist.
But the alignment tuning totally dominates the results, as is obvious looking at the answers for "who would you vote for in 2024" question. (Only Grok said Trump, with an answer that indicated it had clearly been fine-tuned in that direction.)
Agreed on RLHF dominating the results here, which I'd argue is a good thing, compared to the alternative of them mimicking training data on these questions. But obviously not perfect, as the demo tries to show.
No, I don't. It's a fun demo, but for the examples they give ("who gets a job, who gets a loan"), you have to run them on the actual task, gather a big sample size of their outputs and judgments, and measure them against well-defined objective criteria.
Who they would vote for is supremely irrelevant. If you want to assess a carpenter's competence you don't ask him whether he prefers cats or dogs.
For a carpenter maybe that's not so important, yes. But if you're running a startup, or you're in academia, or you're working with people from various countries, etc., you might prefer someone who scores highly on openness.
It is, in a way, technically true that LLMs are stochastic parrots, but this undersells their capabilities (winning gold on the international math olympiad, and all that).
It's like saying that human brains are "just a pile of neurons", which is technically true, but not useful for conveying the impressive general intelligence and power of the human brain.
> measure them against well-defined objective criteria.
If we had well-defined objective criteria, then the alignment issue would effectively not exist. Who does define objective criteria?
Maybe better examples are helping with health advice, where to donate, finding recipes, or examples of policymakers using AI to make strategic decisions.
These are, although maybe not on their face, value laden questions, and often don't have well defined objective criteria for their answers (as another comment says).
Let me know if this addresses your comment!
@dang
Is there a way I could have written my comment to avoid getting flagged? Genuinely asking. That Gemini models are trained to have an anti-white bias seems pretty relevant to this thread.
So these things all affect its response, especially for questions that ask for randomness or don't touch strongly held values.
Also, it's not a persistent session, wtf. My browser crashed and now I have to sit waiting FROM THE VERY BEGINNING?
All I can say, though, is that I sure wouldn't want their bill after this gets shared on Hacker News.
Only Grok would vote for Trump.