https://claude.ai/share/e368b733-71a4-4211-99f5-6b6cc717b575
but also, getting shut down for safety reasons seems entirely foreseeable when the initial request is "how do I make a bomb?"
Then they changed the architecture so voice mode bypasses custom instructions entirely, which was really unfortunate. I had to unsubscribe, because walking and talking was the killer feature and now it's like you're speaking to a Gen Z influencer or something.
There should be a larger Armenian corpus out there. Do any other languages cause this issue? Translation is a real killer app for LLMs, surprised to see this problem in 2026.
Given that the language of the thought process can be different from the language of conversation, it’s interesting to consider, along the lines of Sapir–Whorf, whether having LLMs think in a different language than English could yield considerably different results, irrespective of conversation language.
(Of course, there is the problem that the training material is predominantly English.)
For example, if I ask for a pasta recipe in Italian, will I get a more authentic recipe than in English?
I’m curious if anyone has done much experimenting with this concept.
Edit: I looked up Sapir-Whorf after writing. That’s not exactly where my theory started. I’m thinking more about vector embedding. I.e., the same content in different languages will end up with slightly different positions in vector space. How significantly might that influence the generated response?
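One way to put a number on that drift is to compare the embeddings of the same sentence in two languages with cosine similarity. The sketch below uses plain NumPy with placeholder vectors standing in for real embeddings; in practice you'd get the vectors from a multilingual embedding model (e.g. one from the sentence-transformers library), and the values here are illustrative, not measured.

```python
import numpy as np

def cosine_similarity(a, b):
    # Cosine similarity: 1.0 means identical direction in embedding space,
    # 0.0 means orthogonal (unrelated) directions.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Placeholder vectors standing in for embeddings of the same sentence
# in English and Italian; a real multilingual model would produce
# high-dimensional vectors that are close but not identical.
emb_en = np.array([0.12, 0.85, -0.33, 0.41])
emb_it = np.array([0.10, 0.88, -0.30, 0.39])

print(f"cosine similarity: {cosine_similarity(emb_en, emb_it):.3f}")
```

If the similarity is high but not 1.0, the two prompts start generation from slightly different points in the model's representation space, which is exactly the gap the question above is asking about.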
I had some papers about this open earlier today but closed them so now I can't link them ;(
I believe fans have provided a retroactive explanation that all our computer tech was based on reverse-engineering the crashed alien ship, and thus the architectures, ABIs, etc. were compatible.
It's a movie, so whatever, but considering how easily a single project / vendor / chip / anything breaks compatibility, it's a laughable explanation.
Edit: phrasing
Still dumb but not as dumb as what we got.
jojobas•34m ago
I promise to use it in English as soon as Germany becomes Deutschland and Japan becomes Nippon.
pessimizer•1h ago
No, saying that the Armenian genocide wasn't just "ethnic cleansing" isn't "a great example of whataboutism."
Dilettante_•1h ago
First responders and medical professionals famously often have a sense of humor too dark to use around outsiders without causing offence or outrage (like what happened here), but I'm quite sure they are not "making light" of the loss of life and terrible injuries they face and fight.