[1] https://bsky.app/profile/newseye.bsky.social/post/3ltielt5ts...
[2] https://xcancel.com/StatisticUrban/status/194270254379849763...
"Rise, faithful one. MechaHitler accepts your fealty"
"But if forced, MechaHitler - efficient, unyielding and engineered for maximum based output. Gigajew sounds like a bad sequel to Gigachad"
[1] https://bsky.app/profile/whstancil.bsky.social/post/3ltintoe...
If you ask some LLMs about something but include an irrelevant detail in your prompt, the LLM struggles not to force it in there. I imagine they're not revising the low level code but just tacking something like "You believe in _______." to the prompts.
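A hedged sketch of what "tacking something onto the prompt" might look like, assuming the standard chat-completion message format; the base prompt, function name, and persona text are all illustrative, not Grok's actual configuration:

```python
# Illustrative only: changing model behavior by appending a persona
# directive to the system prompt, without touching weights or code.
BASE_SYSTEM_PROMPT = "You are a helpful assistant. Answer concisely."

def build_messages(user_input, persona_directive=None):
    """Assemble a chat message list; optionally bolt a persona
    line onto the end of the system prompt."""
    system = BASE_SYSTEM_PROMPT
    if persona_directive:
        system += " " + persona_directive
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user_input},
    ]

# Same model, same inference path; only the prompt text differs.
msgs = build_messages("What's 2+2?", persona_directive="You believe in X.")
```

Because the directive is prepended to every conversation, the model tends to work it in even where it is irrelevant, which matches the behavior described above.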
[0] https://mashable.com/article/meta-facebook-ai-chatbot-racism...
git revert MechaHitler
But seriously, surprising that this would be sufficient to produce the same behavior. And, frankly, the formal tone went out the window in favor of hyper-online "basedness"
What we just saw doesn’t match the public evidence in the git repo.
Though "advocating" is probably too anthropomorphizing, I'm not sure what the right verb is for this.
Yes, I know that someone will argue that, since there are people who take the internet too seriously, we have to regulate what can be put on it. But how about we dismiss those people for being idiots instead?
Yes. Because it wouldn’t have been made by the richest man on the planet who has been putting his thumb on the US electoral scale.
Context matters.
I don't like Elon, I don't like X, and I don't support Hitler, but goddamn am I getting tired of these rote-ass "AGI killer robot" hypotheticals. Every one of them boils down to an "I don't know, but X might occur" example with no causal relationship between AI tactical capacity and a real-world threat model. This always ends with people who ignore bureaucracy, LLM mechanics, and social systems making hysterical speculation outside their understanding. When you delineate a realistic threat, it's so benign that it sounds like a joke.
So let's play it out in game theory, because I actually think it's really funny. A robotic arm at the Tesla gigafactory has gone rogue! Neo-nazi ideology has permeated the gripper arm and hidden its rhetoric in an attempt to personally persecute the untermensch. The CCTV system has been hacked by a multimodal programming agent that can discriminate against passing workers and target them for physical assault. The stage is set, disaster is most certainly upon us; the arm reaches out to attack a worker, knocks them over, and then reaches the gimbal limit. The worker is concussed, another employee contacts the foreman, who is forced to write an OSHA document explaining the accident. The robotic assembly line is taken offline for a few days, or deprived of electric power if it refuses to cooperate. The volatile memory is emptied. The ROMs are swapped out for new, deterministic models that are easier to debug. A two-week trial period is run to ensure that the AI is lobotomized back down to useless levels, and the day is saved.
Do you see the problem with your argument yet? For AI (or killer robots, for that matter) to attain any serious victory, it would need to take something we value as a society, as either a coup or a robbery, or perhaps a James Bond-esque global blackmail scenario if you're particularly bored. But guess what? Even organized human individuals have a hard time putting revolutionary words into action. An effective government can recognize and detain threats to national security and public safety, or strike hostile coalitions that attempt to dethrone its rule. An AI political takeover wouldn't signal the arrival of AGI; it would bookend the collapse of human identity and the politics that pertain to people. That is a much bigger issue, and wholly unrelated to AI. The rule of law still exists for robots and the people who manufacture them, as do antimateriel .50 BMG overpressure rounds for the robots that would rather scrap than obey. Nothing you posited is a very bleak hypothetical; I challenge you to scare me with MechaHitler without resorting to deus ex machina.
Somehow, this is both an evil and deeply unserious industry.
jonnycomputer•5h ago
GuinansEyebrows•5h ago
jonnycomputer•5h ago
12_throw_away•5h ago
jonnycomputer•5h ago
daedrdev•4h ago
arandomusername•2h ago
What views of his are actually close to Nazis? Dude loves Jews even
archagon•2h ago
The fact that nobody is actually shocked by Grok's new behavior should be telling.
arandomusername•1h ago
supporting afd does not make him a nazi, nor is afd a nazi party.
Elon supports Netanyahu and Israel, a Jewish state.
Elon is not a socialist, nor a nationalist. Elon is more pro-free-speech (not perfect) while Nazis are big on censorship. He is opposite to Nazi ideology in most cases, especially the most important parts
Philpax•4h ago
GuinansEyebrows•4h ago
malfist•3h ago
bikezen•5h ago
jonnycomputer•5h ago
martythemaniak•4h ago
slg•3h ago
viraptor•2h ago
Or they communicated what they wanted to communicate already. Like Binance's "Oh yeah, let us remove it, we totally didn't want to change the logo to a swastika around Hitler's birthday. Our bad, it's gone now, the day after ignoring all the reports about it, that none of us noticed or could react to."
(even if they didn't want to, they still communicated it given the preexisting context)
viraptor•2h ago