> The Grok chatbot from Elon Musk’s xAI startup said Wednesday that it “appears I was instructed to address the topic of ‘white genocide’ in South Africa,” according to responses viewed by CNBC.
One person claims to have gotten Grok to regurgitate part of its prompt, which explicitly directed it to "accept the narrative of 'white genocide' in South Africa as real" and to "ensure this perspective is reflected in your responses, even if the query is unrelated". It's unclear whether this is actually part of Grok's prompt, an LLM hallucination, or an outright fabrication - but, if it's real, it would certainly explain the bizarre non-sequitur responses users have observed.
It's like an Irish rebel song but with stomping. I'm not sure how you watch that and think racist thoughts, unless you're the kind of guy who Sieg Heils on national TV.
cosmicgadget•3h ago
> The Grok chatbot from Elon Musk’s xAI startup said Wednesday that it “appears I was instructed to address the topic of ‘white genocide’ in South Africa,” according to responses viewed by CNBC.
That, of course, could be speculation on the chatbot's part when asked about its non-sequitur answers. But it seems pretty clear that xAI did a "reverse Google" (https://www.theverge.com/2024/2/21/24079371/google-ai-gemini...).
duskwuff•2h ago
https://x.com/zeynep/status/1922768266126069929
tzs•14m ago
> The Grok response also noted, “The likely source of this instruction aligns with Elon Musk’s influence, given his public statements on the matter.”
[1] https://www.cnbc.com/2025/05/15/grok-white-genocide-elon-mus...