Anybody using AI tools should be extremely cautious about what is being produced.
Hard to get around these kinds of issues and definitely leads me to avoid them for non-technical questions.
That said, there is no such thing as an objective unbiased political opinion. Chinese LLMs may have issues with events of 1989 but Western LLMs have their blindspots too.
I have often wondered about the legality of such manipulation. As AI becomes used for increasingly important things, it becomes increasingly valuable to make a system serve the needs of someone other than its owner.
It reminds me of the early internet days, when everyone made a big deal about the anonymity of internet forums and safety. Sure, it is an issue
MarkusQ•1h ago
For example, they will occasionally replace "colour" with "color". Why? Because both occur in the training data in the "same role" but "color" is, apparently, more common[1]. You can also trick them into replacing things like "sardines" with "anchovies" (on pizza) and "head of lettuce" with "cabbage" in the context of rowboats.
They are lossy text compressing parrots and we are all suffering from a massive madness-of-crowds scale Eliza Effect.
[1] Yep. https://books.google.com/ngrams/graph?content=color%2C+colou...
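The frequency argument above can be sketched with a toy example (purely illustrative, not how any real LLM works): a "compressor" that only remembers token counts will silently normalize text toward whichever variant was more common in its training data.

```python
from collections import Counter

# Hypothetical toy "model": all it retains from training is token counts.
training_text = (
    "the color of the sky the color of the sea "
    "the colour of the sky"
).split()

counts = Counter(training_text)

def prefer_common(word, variants):
    """Emit whichever variant was seen most often in training,
    regardless of which one appeared in the input."""
    return max(variants, key=lambda w: counts[w])

# "colour" in, "color" out, because "color" was twice as frequent.
print(prefer_common("colour", ["colour", "color"]))
```

Real models work on learned probabilities rather than raw counts, but the lossy-compression intuition is the same: the rarer surface form can get mapped onto the more common one that fills the "same role."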
E-Reverance•48m ago
https://arxiv.org/abs/2602.02385
semiquaver•13m ago
At any rate, the fact that the architecture of current LLMs doesn't support learning at inference time is not a fundamental limit that can never be changed, just a local maximum that has worked well for productizing the approach.
And I’m quite certain that once systems that include post-training learning exist people like you will find a way to distinguish that from human learning, moving the goalposts again. You’re not arguing in good faith, you have an essentially religious opinion and you will stick to it as long as you are able.
This is not an accurate description of the transformer architecture. I'm not surprised that you are misinformed about this.
stetrain•1h ago