exodust•1d ago
Only juvenile prompting and reporting finds value in "what nasty thing did the LLM say about Musk."
What's more interesting is the censorship on the input side. I wanted Grok to analyse a wood engraving from 1861, Gustave Doré's "Harpies in the Forest of Suicides". Wouldn't do it. Grok's content policy filter refused to accept the upload because..."boobies". So I pixelated the wood-engraved breasts and re-uploaded. This time it worked. [1]
What about Michelangelo's statue of David? Denied. Instead of pixelating, I cut out David's genitals and placed them on his leg, leaving a genital-shaped hole in the groin area. [1] Bingo. Image accepted. Genital location matters. Now wasn't this more interesting than "Musk-Bad-Man-Says-Grok"?
https://imgur.com/a/pIdBJXm
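For what it's worth, the workaround is trivial to script. A minimal sketch with Pillow, assuming made-up filenames and box coordinates (the actual edit was done by hand in an image editor):

    from PIL import Image

    def pixelate_region(src_path, dst_path, box, block=12):
        # Pixelate one rectangular region (left, upper, right, lower) by
        # shrinking it, then scaling it back up with nearest-neighbour so
        # the blocks stay visible.
        img = Image.open(src_path)
        region = img.crop(box)
        w, h = region.size
        small = region.resize((max(1, w // block), max(1, h // block)), Image.BILINEAR)
        img.paste(small.resize((w, h), Image.NEAREST), box)
        img.save(dst_path)

    # Hypothetical filenames and coordinates, purely for illustration.
    pixelate_region("dore_harpies.jpg", "dore_harpies_pixelated.jpg", (400, 250, 520, 360))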
rsynnott•1d ago
> Now wasn't this more interesting than "Musk-Bad-Man-Says-Grok"?
I mean, no? That's just general LLM safety nonsense, and very old news at this point.
The thing in the linked article is significantly more unusual.
exodust•23h ago
Censoring the statue of David is not old news! Unless you go back to the 19th century:
"The practice of covering Michelangelo's "David" with a fig leaf, particularly a detachable one, was common during the Victorian era, specifically in the mid-19th century, due to Queen Victoria's discomfort with the statue's nudity."
So the trajectory of censorship has tightened again. Grok was promoted as an AI that strives to "maximize truth and objectivity". We've gone from that to effectively placing fig leaves over statues.
Meanwhile the linked article fails to clarify the outcome, which apparently was a "rogue prompt change" by a new employee; Musk wasn't involved. The prompt was reverted and "the system was functioning as intended." It's completely understandable that there might be a volume of anti-[personX] material on the internet, put there as propaganda, that ends up serving as weighted reasoning for an LLM. This is nothing new, unless you're aiming for cheap shots.