Garbage in, garbage out.
It has been a breeding ground for it, amplified by foreign-agent bots since Elon took over.
Who knows if Elon actually thinks this is problematic. His addiction to the platform is well documented and quantified in the billions of dollars.
2. Remove moderation, promote far right accounts, retweet some yourself
3. Allow Nazi speech to fester
4. Train LLM on said Nazi speech
5. Deploy Nazi-sympathizing LLM, increase engagement with Nazi content
6. Go to step 4
The article also leads into what oversight and regulation are needed, and how we can expect AIs to be used for propaganda and influence in the future. I worry that what we're seeing with Grok, where it's so easily identifiable, is just the baby step toward worse and less easily identifiable propaganda to come.
What likely happened is that a few people decided to query Grok to generate ragebait traffic to the page/account/etc. Then it reached critical mass and went viral. Then it confirmed prior biases, so the media reported it as such (and also drove clicks and revenue).
Microsoft had basically the same scandal with its Twitter chatbot, Tay, a few years ago.
Sadly, ragebait is a business model.
The article gives it more nuance: 'I presume that what happened was not deliberate, but it was the consequence of something that was deliberate, and it’s something that was not really predictable.' It goes on to discuss how LLMs can behave in unpredictable ways even when given what we expect to be prompts without side effects, and it touches on the post-training process.
It paraphrases to "it wasn't intentional, but something was intentional, and also unpredictable."
I'm sorry but what does that even mean? It's pure speculation.
Furthermore, I highly doubt that a reporter from Politico has either the expertise or the connections to assess the post-processing / fine-tuning stages of one of the most closely guarded and expensive processes in all of technology (training large-scale foundation models).
Finally, the paragraph from the quote begins with "I mean, even Elon Musk, who’s probably warmer to Hitler than I am, doesn’t really want his LLM to say stuff like this."
Again, it confirms a prior bias/narrative and is rage-bait to drive revenue.
That said, you're right. We don't know, and maybe we're giving too much credit to someone who seems unreliable. I'd love to know more in general about how LLMs get from the training stage to the release stage -- there seems to be a lot of tuning.
If I want to be generous, it's something along the lines of "the law of unintended consequences".
Less generous is "someone turned the dial to the right and didn't realize how far they turned it".
Even less generous is that someone feels some of these things in private but doesn't want to make it too explicit. They personally have a hard time toeing the line between edgy and obviously toxic, and programming an AI to toe that line is even harder.
but wish Jay-Z would slap Ye for the squeaky autotune at the start
If history repeats itself, maybe with software we can automate that whole process…