As professionals, it is crucial that we discuss matters of ethics, one of which is the issue of an unethical founder.
Here's a simple example I tried just now. Grok correctly left out the mushrooms, but ChatGPT keeps trying to add everything (I assume to seem more compliant with the user):
I only have pineapples, mushrooms, lettuce, strawberries, pinenuts, and basic condiments. What salad can I make that's yummy?
Grok: Pineapple-Strawberry Salad with Lettuce and Pine Nuts - https://x.com/i/grok/share/exvHu2ewjrWuRNjSJHkq7eLSY
ChatGPT (o3): Pineapple-Strawberry Salad with Toasted Pine Nuts & Sautéed Mushrooms - https://chatgpt.com/share/682b9987-9394-8011-9e55-15626db78b...
Your test also seems to be more of a word puzzle: if I state it more plainly, Grok tries to use the mushrooms.
https://grok.com/share/bGVnYWN5_2db81cd5-7092-4287-8530-4b9e...
And in fact, via the API with no system prompt, it also uses the mushrooms.
So, like most models, it just comes down to prompting.
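If anyone wants to reproduce it, here's a minimal sketch in Go against what I understand to be xAI's OpenAI-compatible chat completions API (the https://api.x.ai/v1 URL and the "grok-3" model name are assumptions; check your own account's docs), sending only the user message and no system prompt:

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"io"
	"net/http"
	"os"
)

func main() {
	// No system prompt: the model only sees the raw user message.
	payload, err := json.Marshal(map[string]any{
		"model": "grok-3", // assumed model name
		"messages": []map[string]string{
			{"role": "user", "content": "I only have pineapples, mushrooms, lettuce, strawberries, pinenuts, and basic condiments. What salad can I make that's yummy?"},
		},
	})
	if err != nil {
		panic(err)
	}

	// Assumed OpenAI-compatible endpoint; adjust if yours differs.
	req, err := http.NewRequest("POST", "https://api.x.ai/v1/chat/completions", bytes.NewReader(payload))
	if err != nil {
		panic(err)
	}
	req.Header.Set("Content-Type", "application/json")
	req.Header.Set("Authorization", "Bearer "+os.Getenv("XAI_API_KEY"))

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	// Print the raw JSON; for OpenAI-style APIs the recipe text is in
	// choices[0].message.content.
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(string(body))
}
```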
I asked it about a paper I was looking at (SLOG [0]) and it basically lost the context of what "slog" referred to after 3 prompts.
1. I asked for an example transaction illustrating the key advantages of the SLOG approach. It responded with some general DB transaction stuff.
2. I then said "no use slog like we were talking about" and it gave me a golang example using the log/slog package (roughly like the sketch below).
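(For anyone who hasn't used it: log/slog is the structured-logging package in Go's standard library, so the answer was roughly along these lines; the fields here are my own illustration, not the model's actual output, and it has nothing to do with the SLOG paper.)

```go
package main

import (
	"log/slog"
	"os"
)

func main() {
	// log/slog emits structured key/value log records; nothing to do with
	// the SLOG database paper the conversation was actually about.
	logger := slog.New(slog.NewJSONHandler(os.Stdout, nil))
	logger.Info("transaction committed", "txn_id", 42, "region", "us-east")
}
```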
Even without the weird political things around Grok, it just isn't that good.
I've got enough second-order effects to be wary of. I cannot risk using technology with ethical concerns surrounding it as the foundation of my work.
What's this in reference to?
> "xAI and X's futures are intertwined," Musk, who also heads automaker Tesla and SpaceX, wrote in a post on X: "Today, we officially take the step to combine the data, models, compute, distribution and talent."
The guy is very vocal and clear about his ethical stances. Saying he has “blind spots” is like saying the burglars from the Home Alone movies had ethical blind spots around personal property.
- Gemini is state-of-the-art for most tasks.
- ChatGPT has the best image generation.
- Claude is leading in coding.
- DeepSeek is getting old, but it is open source.
- Qwen has impressive lightweight models.
But Grok (and Llama) are even worse than DeepSeek for most of the use cases I tried. The only thing they have going for them is the money behind their infamous founders; otherwise, their existence would barely be acknowledged.
For tough queries o3 is unmatched in my experience.
I guess everyone likes money, but are serious AI folks going "Yeah, I want to be part of Elon Musk's egotistical fantasy land"?
Grok 3 mini is quite a decent agentic model and competitive with frontier models at a fraction of the cost; see livebench.ai.
> They also come with additional data integration, customization, and governance capabilities not necessarily offered by xAI through its API.
Maybe we'll see a "Grok you can take to parties" come out of this.