
Start all of your commands with a comma (2009)

https://rhodesmill.org/brandon/2009/commands-with-comma/
252•theblazehen•2d ago•84 comments

Hoot: Scheme on WebAssembly

https://www.spritely.institute/hoot/
24•AlexeyBrin•1h ago•2 comments

OpenCiv3: Open-source, cross-platform reimagining of Civilization III

https://openciv3.org/
705•klaussilveira•15h ago•206 comments

The Waymo World Model

https://waymo.com/blog/2026/02/the-waymo-world-model-a-new-frontier-for-autonomous-driving-simula...
969•xnx•21h ago•557 comments

Vocal Guide – belt sing without killing yourself

https://jesperordrup.github.io/vocal-guide/
67•jesperordrup•6h ago•31 comments

Reinforcement Learning from Human Feedback

https://arxiv.org/abs/2504.12501
7•onurkanbkrc•45m ago•0 comments

Making geo joins faster with H3 indexes

https://floedb.ai/blog/how-we-made-geo-joins-400-faster-with-h3-indexes
135•matheusalmeida•2d ago•35 comments

Where did all the starships go?

https://www.datawrapper.de/blog/science-fiction-decline
44•speckx•4d ago•35 comments

Unseen Footage of Atari Battlezone Arcade Cabinet Production

https://arcadeblogger.com/2026/02/02/unseen-footage-of-atari-battlezone-cabinet-production/
68•videotopia•4d ago•7 comments

Jeffrey Snover: "Welcome to the Room"

https://www.jsnover.com/blog/2026/02/01/welcome-to-the-room/
39•kaonwarb•3d ago•30 comments

ga68, the GNU Algol 68 Compiler – FOSDEM 2026 [video]

https://fosdem.org/2026/schedule/event/PEXRTN-ga68-intro/
13•matt_d•3d ago•2 comments

What Is Ruliology?

https://writings.stephenwolfram.com/2026/01/what-is-ruliology/
45•helloplanets•4d ago•46 comments

Show HN: Look Ma, No Linux: Shell, App Installer, Vi, Cc on ESP32-S3 / BreezyBox

https://github.com/valdanylchuk/breezydemo
238•isitcontent•16h ago•26 comments

Monty: A minimal, secure Python interpreter written in Rust for use by AI

https://github.com/pydantic/monty
237•dmpetrov•16h ago•126 comments

Show HN: I spent 4 years building a UI design tool with only the features I use

https://vecti.com
340•vecti•18h ago•147 comments

Hackers (1995) Animated Experience

https://hackers-1995.vercel.app/
506•todsacerdoti•23h ago•247 comments

Sheldon Brown's Bicycle Technical Info

https://www.sheldonbrown.com/
389•ostacke•21h ago•98 comments

Show HN: If you lose your memory, how to regain access to your computer?

https://eljojo.github.io/rememory/
303•eljojo•18h ago•188 comments

Microsoft open-sources LiteBox, a security-focused library OS

https://github.com/microsoft/litebox
361•aktau•22h ago•186 comments

An Update on Heroku

https://www.heroku.com/blog/an-update-on-heroku/
428•lstoll•22h ago•284 comments

Cross-Region MSK Replication: K2K vs. MirrorMaker2

https://medium.com/lensesio/cross-region-msk-replication-a-comprehensive-performance-comparison-o...
3•andmarios•4d ago•1 comment

PC Floppy Copy Protection: Vault Prolok

https://martypc.blogspot.com/2024/09/pc-floppy-copy-protection-vault-prolok.html
71•kmm•5d ago•10 comments

Was Benoit Mandelbrot a hedgehog or a fox?

https://arxiv.org/abs/2602.01122
23•bikenaga•3d ago•11 comments

The AI boom is causing shortages everywhere else

https://www.washingtonpost.com/technology/2026/02/07/ai-spending-economy-shortages/
25•1vuio0pswjnm7•2h ago•16 comments

Dark Alley Mathematics

https://blog.szczepan.org/blog/three-points/
96•quibono•4d ago•22 comments

How to effectively write quality code with AI

https://heidenstedt.org/posts/2026/how-to-effectively-write-quality-code-with-ai/
271•i5heu•18h ago•219 comments

Delimited Continuations vs. Lwt for Threads

https://mirageos.org/blog/delimcc-vs-lwt
34•romes•4d ago•3 comments

I now assume that all ads on Apple news are scams

https://kirkville.com/i-now-assume-that-all-ads-on-apple-news-are-scams/
1079•cdrnsf•1d ago•461 comments

Introducing the Developer Knowledge API and MCP Server

https://developers.googleblog.com/introducing-the-developer-knowledge-api-and-mcp-server/
64•gfortaine•13h ago•30 comments

Understanding Neural Network, Visually

https://visualrambling.space/neural-network/
306•surprisetalk•3d ago•44 comments

Grok 4 Fast now has 2M context window

https://docs.x.ai/docs/models
194•hereme888•3mo ago

Comments

changoplatanero•3mo ago
Anyone can make a long context window. The key is if your model can make effective use of it or not.
bigyabai•3mo ago
Long context window = huge amounts of vacant VRAM = our servers are fucking empty
trash_cat•3mo ago
But isn't the context window dependent on model architecture rather than available VRAM? It's not something you can just increase or decrease as you like.
reasonableklout•3mo ago
Most attention implementations can work across an arbitrarily long context.

The limiting factors are typically:

1. Often there are latency/throughput requirements for model serving which become challenging to fulfill at a certain context length.

2. The model has to be _trained_ to use the desired context length, and training becomes prohibitively expensive at larger contexts.

(2) is even a big enough problem that some popular open source models that claim to support large context lengths in fact are trained on smaller ones and use "context length extension" hacks like YaRN to trick the model into working on longer contexts at inference time.
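
Roughly, these extension hacks rescale RoPE's position angles so a longer sequence lands inside the angle range seen during training. A minimal sketch, assuming uniform position interpolation (YaRN proper scales per frequency band) and hypothetical lengths:

    import numpy as np

    def rope_frequencies(head_dim, base=10000.0):
        # Standard RoPE: one rotation frequency per pair of dimensions.
        return base ** (-np.arange(0, head_dim, 2) / head_dim)

    def rope_angles(positions, freqs, scale=1.0):
        # Dividing positions by `scale` squeezes a long sequence back
        # into the position range the model actually saw in training.
        return np.outer(positions / scale, freqs)

    train_len, target_len = 128_000, 2_000_000   # hypothetical lengths
    freqs = rope_frequencies(head_dim=128)
    angles = rope_angles(np.arange(0, target_len, 100_000), freqs,
                         scale=target_len / train_len)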

onion2k•3mo ago
The model will use the full context if it's been designed well, but you can still increase the size of the window on models where it hasn't. It's just pointless. People who don't know much about LLMs will still think "bigger number is better" though.
chucknthem•3mo ago
How do they make the context window longer? (serious question, I want to learn how this works)
TheCoolGuy•3mo ago
You literally just shift the window over to the next token once you reach the max number of tokens you want for the context window, NOT with what you train on (only limited by memory now).

This has obvious issues, since you're now losing information from the now-unseen tokens, which becomes significant if your context window is small in comparison to the answer/question you're looking at. That's why companies try to offer stupidly large context windows. The problem is they're not training on the large context window; they're training on something smaller (2048 and above). Due to how attention is set up, you can train on a small amount of context and extrapolate to far more tokens, since they train via RoPE, which encodes words by their offset to neighboring words. This allows us to effectively 2x, 3x, 10x, 100x the number of tokens we generate versus what we train with, with some form of consistency, BUT it still causes a lot of consistency issues, since the model approaches a "this was trained on snippets but not the entire thing" situation where it has a notion of the context but not the entire combined context.
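
A minimal sketch of that token-shifting idea (toy tokens; real inference evicts entries from a KV cache rather than re-encoding text):

    def sliding_context(tokens, max_len):
        # Keep only the most recent max_len tokens; anything older
        # falls out of view and is simply lost to the model.
        return tokens[-max_len:]

    history = []
    for tok in [f"tok{i}" for i in range(10)]:   # stand-in token stream
        history.append(tok)
    window = sliding_context(history, max_len=4)
    print(window)   # ['tok6', 'tok7', 'tok8', 'tok9']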

vlovich123•3mo ago
That's a very basic way to keep the LLM inferring past the context window size (there are better, smarter ways), but that's not at all what the question was, which is how they train a 2M-token window. My understanding at a basic level is that you need corpuses that are >2M tokens in length for training data, which is where the problem comes in: there's only so much long-form content, and it's swamped by all the smaller stuff. I think there are probably tricks now, but I suspect it's still largely an open problem.
Ey7NFZ3P0nzAe•3mo ago
AFAIK nobody does that. They train on much, much shorter text but use tricks in the position encoding steps that can be extrapolated by the LLMs. Like RoPE and YaRN, etc.
ErikBjare•2mo ago
AFAIK (not much) it definitely helps to train on longer sequences even with rope/yarn and is needed if you care about long context performance (and not just the long context capability).
nbardy•3mo ago
No they can't. Attention is an O(N^2) algorithm; just fitting it in the context window is a challenge.

And sure, maybe not all 2M of it is usable, but they're reliably pushing the frontier here.
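
Back-of-the-envelope on that N^2: naively materializing a single head's attention score matrix at 2M tokens in fp16 is already terabytes (FlashAttention-style kernels avoid storing it, but the compute remains quadratic):

    n = 2_000_000          # context length in tokens
    bytes_per_score = 2    # fp16
    score_matrix = n * n * bytes_per_score
    print(score_matrix / 1e12)   # ~8.0 TB for one head of one layer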

ggeorgovassilis•3mo ago
I came here just to complain about that :-) All LLMs I've used seem to give more weight to things at the beginning of the context window and omit many details. E.g. I tried this simple thing: I pasted a friend's CV and my own into Gemini and asked it to recommend topics for a joint conference presentation. The results depended greatly on the order in which the CVs were pasted.
TheOtherHobbes•3mo ago
The middle tends to be underweighted. The beginning and end get more attention.
otabdeveloper4•3mo ago
That's because when they say "long context window" they're lying and they actually mean that they support a long input prompt that is still compressed into a small context window. (Typically by throwing out tokens in the middle.)

An actually large context window is impossible due to how LLM attention works under the hood.

acuozzo•2mo ago
Mamba-2 enters the chat.
retinaros•3mo ago
no one makes effective use of long context.
DrSiemer•3mo ago
It's not the most energy-efficient workflow, but I work on relatively small codebases and I made a tool that lets me dump all of it into an LLM with a single copy/paste. This works surprisingly well with Gemini 2.5 Pro (1,000,000 ctx).

The only real mistakes it makes are some model-specific quirks, like occasionally stripping out certain array index operators. Other than that, it works fine with 150,000-token conversations. I've gone up to 500,000 with no real issues besides a bit of a slowdown. It's also great for log analysis, which I have maxed out at 900,000 tokens.
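
A tool like that can be tiny. A sketch of the dump-everything approach, with hypothetical paths and file extensions:

    import pathlib

    def dump_codebase(root, exts=(".py", ".ts", ".css")):
        # Concatenate every source file into one pasteable prompt,
        # with path headers so the model can refer back to files.
        parts = []
        for path in sorted(pathlib.Path(root).rglob("*")):
            if path.is_file() and path.suffix in exts:
                parts.append(f"===== {path} =====\n{path.read_text(errors='ignore')}")
        return "\n\n".join(parts)

    print(dump_codebase("./src"))   # paste the output into the chat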

mg•3mo ago
If a model is not making use of the whole context window - shouldn't that be very noticeable when the prompt is code?

For example when querying a model to refactor a piece of code - would that really work if it forgets about one part of the code while it refactors another part?

I concatenate a lot of code files into a single prompt multiple times a day and ask LLMs to refactor them, implement features or review the code.

So far, I've never had the impression that filling the context window with a lot of code causes problems.

I also use very long lists of instructions on code style on top of my prompts. And the LLMs seem to be able to follow all of them just fine.

MallocVoidstar•3mo ago
I don't think there are any up-to-date leaderboards, but models absolutely degrade in performance the more context they're dealing with.

https://wandb.ai/byyoung3/ruler_eval/reports/How-to-evaluate...

>Gpt-5-mini records 0.87 overall judge accuracy at 4k [context] and falls to 0.59 at 128k.

And Llama 4 Scout claimed a 10 million token context window but in practice its performance on query tasks drops below 20% accuracy by 32k tokens.

mg•3mo ago
That makes me wonder if we could simply test this by letting the LLM add or multiply a long list of numbers.

Here is an experiment:

https://www.gnod.com/search/#q=%23%20Calcuate%20the%20below%...

The correct answer:

    Correct:    20,192,642.460942328
Here is what I got from different models on the first try:

    ChatGPT:    20,384,918.24
    Perplexity: 20,000,000
    Google:     25,167,098.4
    Mistral:    200,000,000
    Grok:       Timed out after 300s of thinking
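
The original number list sits behind the truncated link, but the experiment is easy to rerun with your own list (stand-in values below): generate the numbers, print the prompt, and keep the exact product as ground truth.

    import random

    random.seed(42)
    numbers = [round(random.uniform(0.5, 2.0), 6) for _ in range(200)]  # stand-ins

    product = 1.0
    for x in numbers:
        product *= x

    prompt = ("Calculate the product of the numbers below. "
              "Do not use a calculator. Do it in your head.\n"
              + "\n".join(map(str, numbers)))
    print(prompt)
    print("Ground truth:", product)
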
jarek83•3mo ago
Isn't it that LLMs are not designed to do calculations?
mg•3mo ago
Neither are humans.
cuu508•3mo ago
But humans can still do it.
cluckindan•3mo ago
They are not LLMs, after all…
gcanyon•3mo ago
> Do not use a calculator. Do it in your head.

You wouldn't ask a human to do that, why would you ask an LLM to? I guess it's a way to test them, but it feels like the world record for backwards running: interesting, maybe, but not a good way to measure, like, anything about the individual involved.

throwuxiytayq•2mo ago
I’m starting to find it unreasonably funny how people always want language models to multiply numbers for some reason. Every god damn time. In every single HN thread. I think my sanity might be giving out.
solatic•2mo ago
A model, no, but an agent with a calculator tool?

Then there's the question of why not just build the calculator tool into the model?
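
The tool itself is the easy part. A hand-rolled sketch of a calculator an agent could call (the dispatch convention is illustrative, not any particular vendor's function-calling API):

    import ast, operator as op

    OPS = {ast.Add: op.add, ast.Sub: op.sub, ast.Mult: op.mul, ast.Div: op.truediv}

    def calc(expr):
        # Safely evaluate an arithmetic expression the model hands us.
        def ev(node):
            if isinstance(node, ast.Constant):
                return node.value
            if isinstance(node, ast.BinOp):
                return OPS[type(node.op)](ev(node.left), ev(node.right))
            raise ValueError("unsupported expression")
        return ev(ast.parse(expr, mode="eval").body)

    # An agent loop would route {"tool": "calc", "arg": "..."} requests here.
    print(calc("1.0401 * 1.0205 * 0.9998"))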

KristoAI•2mo ago
Since grok 4 fast got this answer correct so quickly, I decided to test more.

Tested this on the new hidden model of ChatGPT called Polaris Alpha: Answer: 20,192,642.460942336

Current gpt-5 medium reasoning says: After confirming my calculations, the final product (P) should be (20,192,642.460942336)

Claude Sonnet 4.5 says: “29,596,175.95 or roughly 29.6 million”

Claude haiku 4.5 says: ≈20,185,903

GLM 4.6 says: 20,171,523.725593136

I’m going to try out Grok 4 fast on some coding tasks at this point to see if it can create functions properly. Design help is still best on GPT-5 at this exact moment.

d4rkp4ttern•3mo ago
There are “needle in the haystack” benchmarks for long context performance. It would be good to see those.
throwuxiytayq•2mo ago
These aren’t really indicative of real world performance. Retrieving a single fact is pretty much the simplest possible task for a long context model. Real world use cases require considering many facts at the same time while ignoring others, all the while avoiding the overall performance degradation that current models seem susceptible to when the context is sufficiently full.
d4rkp4ttern•2mo ago
I agree, retrieving a single fact is necessary but not sufficient.
jtrn•3mo ago
The number of times I know that my instruction is in context, but it's forgotten, is countless at this point for me. My experience, both as a clinical psychologist and as a developer, is that there is a convergent trend in how I speak to both clients and AI. I can see much of my therapist's approach in how I try to highlight the important things to focus on to achieve progress. Often, it's about helping the client articulate and understand what's important to them and how they rank these priorities. The same applies to AI.

It feels obvious now that the problem with attention and context is the lack of hierarchy or levels of importance. We have, probably biologically based, three types of memory: short-term, intermediate, and long-term. Long-term memory is what you use with MCP, web search, and RAG. Shorter memory is the current response, and intermediate memory is the current context. When I assume this in my interactions with an agent, it makes perfect sense where they falter and what they forget, in the exact same way as people. It feels more and more like talking to a human, with the same weaknesses in logic, reasoning, and focus.
behnamoh•3mo ago
Who here actually uses Grok? It's sad to see Elon's arc but when he doubled down on some of his political ideas he had it coming with the Tesla sales going down and x.ai not taken seriously.

I've always tried to remain apolitical and unbiased but it's hard to overlook who's behind a technology you wanna buy. Not that sama and others are saints either; it's just that Elon is very obvious and vocal about it.

It's a shame, really, because Grok is a good model. But Elon promised to open source the previous model and it took them forever to do that with Grok 3. Sorry, but I wanna buy from someone who keeps their promises ("FSD by next year").

YetAnotherNick•3mo ago
Grok fast is by far the most used model on OpenRouter, with more than a trillion tokens weekly[1].

[1]: https://openrouter.ai/rankings

behnamoh•3mo ago
Because some tools (AFAIR Kilo Code but I might be wrong) gave it away for free. The model itself was (still is?) free for a while, so I'm not surprised.
ribelo•3mo ago
OpenRouter is not counting tokens used by Kilo or Cline. They have their own endpoints.
wqaatwt•3mo ago
Yet if you go to the actual model’s page:

https://openrouter.ai/x-ai/grok-code-fast-1

Cline and Kilo code are in the top 3. So how does that work?

It's considerably cheaper than competing models like 2.5 Flash, though. So it's not that surprising.

YetAnotherNick•2mo ago
It doesn't include the free usage. There is a different model named grok code fast 1 free.
rjdj377dhabsn•3mo ago
For at least the last year, I've been using Grok for 90% of my queries. I pay for their $30 plan as well as $20 for Claude Code, which I only use for simple development projects. For anything more complicated, Grok's expert mode has consistently better results.
weird-eye-issue•3mo ago
> I've always tried to remain apolitical and unbiased

Clearly

kelsolaar•3mo ago
As you point out, Sam Altman is not exactly an altar boy: https://fastcompany.co.za/business/2025-11-07-sam-altmans-tr...
andai•3mo ago
Thought this would be about the whistleblower. They didn't even mention it!
roman_soldier•3mo ago
Yes, allegedly having an employee bumped off for whistleblowing, and the sister thing, is way worse than someone having a different opinion than you. One is criminal; the other is free speech.
ramraj07•3mo ago
One is alleged; the other isn't just an opinion. It's estimated that several hundred thousand deaths have already happened from the abrupt USAID cuts initiated by DOGE.
jamespo•2mo ago
"roman soldier" indeed
darkwater•3mo ago
I don't think you can compare the usual internal backstabbing between executives with someone who literally directed and participated in acts of the US Government, and keeps saying and doing things to help and nurture a certain side of the political spectrum.
vasco•3mo ago
Both do both.
wqaatwt•3mo ago
Not to even remotely the same degree.
diputsmonro•3mo ago
Did Sam Altman lead a government agency and camp in the Oval Office for months too? Degrees matter.
KingMob•3mo ago
Fair, but don't forget Altman's sister accused him of sexual abuse in court. (https://www.newsweek.com/sam-altman-openai-sister-annie-sexu...)

Dunno if it's true. The family wrote it off, saying she's mentally ill, but I can also see years of abuse leading to mental illness.

supriyo-biswas•3mo ago
I've been occasionally using Grok and found it good for devops stuff; specifically it often is able to explain and produce working configurations without getting lost or introducing subtle mistakes as I've sometimes seen with other models.
sipsi•3mo ago
i didn't
galaxy_gas•3mo ago
I have tried it a few times in Copilot as Code Fast 1 because it was advertised. It has never done something correctly so far. Maybe because it's the fast version?
jasonvorhe•3mo ago
Maybe you just used it wrong? I refactored a complicated code base, built exhaustive tests for a CLI app and I've been maintaining and building out several k8s clusters out of a mono repo using Cline + grok-code-fast-1 and it's been a breeze.
mudkipdev•3mo ago
I don't but only because the model is not satisfying, not because I dislike Tesla
raincole•3mo ago
In my experience Grok Fast is the best "cheaper" model out there. Far better than Haiku 4.5 and Gemini Flash. I don't think the other cheaper models should be treated seriously at this point.
behnamoh•3mo ago
Gemini Flash is the first model I disable in any tool I use. It's a joke, and to add salt to injury, google announced a "lite" version of that as well!
RobKohr•3mo ago
I like Grok for non-coding stuff. I find it hasn't been tuned for "safety" (meaning it isn't tuned much for political correctness). It also seems good at making up images and stories well. I run some choose-your-own-adventure stories with my kids through it. We tell it who each of their characters are and what the theme is for the night, and Grok gives them each a section of story and 4 choices. They also have the option of choosing something different than suggested. We have it cycle around the turns for everyone. Works pretty well, and if the kids wanna go dark (preteen boy), Grok doesn't mind the violence.

Kinda reminds me of the video game from Ender's Game.

vlovich123•3mo ago
> meaning it isn't tuned much for political correctness

Is being tuned for right wing viewpoints the same as not being tuned for political correctness? Because there is tuning happening to a specific viewpoint:

https://gizmodo.com/elon-says-hes-working-to-fix-grok-after-...

gitaarik•3mo ago
Yeah, but you can argue that the AI has been biased because of biased training data.

Ultimately every AI is biased based on what you train it on and how you instruct it.

I tend to use LLMs from different companies and personally compare them, and read between the lines.

Yoric•3mo ago
> I tend to use LLMs from different companies and personally compare them, and read between the lines.

Read between the lines? Does this mean that you're using LLMs as a source of information?

wohoef•3mo ago
The point of LLMs is that there’s nothing in between the lines.

Or do you mean to say that you are trying to find the specific bias each model has?

wqaatwt•3mo ago
> it isn't tuned much for political correctness

It was tuned to be edgy and annoying though (I mean his general style of speech not necessarily the content).

simondotau•2mo ago
Nothing in AI is more edgy and annoying than beginning every response with a mandatory glazing, like ChatGPT. “That’s a really insightful question, and shows that you really understand the subject!”
chownie•2mo ago
Nothing is more edgy than the AI being too polite? Are we just inventing new meanings for words?
simondotau•2mo ago
Politeness is not the same thing as gratuitous praise. Politeness is appropriate; being excessively glazed for asking an obvious follow-up question is weird.
chownie•2mo ago
Right, and neither politeness nor gratuitous praise are even remotely similar to being edgy. These words have meanings, you have been using at least one of them incorrectly, that is the point I'm trying to make.

https://www.merriam-webster.com/dictionary/edgy

https://en.wiktionary.org/wiki/edgy

razingeden•2mo ago
early iterations i could immediately peg as grok content based on its condescending snarky “OOoooOoOo — so much to unpack here sweaty, lets get started” tone.

im open minded and ive fed grok a few requests recently. it was better at doing creative fiction prompts without the “eddie izzard coming down off of a fifteen day coke bender” vibe.

everything i ask it to do is completely made up nonsense so i dont have an opinion about its bias or the quality of its factual content.

snark and clapback made the world go around on xitter. maybe thats what they thought people wanted. savage insulting content to “own” people. i for one, also found it extremely annoying.

LorenDB•3mo ago
I do! I have felt bad vibes from OpenAI for a while now, and eventually defaulted to Grok as somewhat the lesser of many evils. I respect anybody who doesn't wish to use it, but it's good enough for what I need it for. Case in point: it just spit out valid OpenSCAD code for an adapter piece I want to 3D print.
anon214535•2mo ago
I don't understand how anyone can think Grok is the lesser of many evils. It seems to me that Grok is currently playing in its own league of evil.

Most models belong to capitalist companies that are fairly apolitical, and all they care about is money. Their evil comes from not caring about consequences as long as it grows their value. Their censorship comes from the desire to avoid PR disasters.

On the other hand, Grok belongs to a billionaire involved in destroying America's democracy, and it's being openly manipulated according to Musk's ideology. I can't think of a model I would trust less.

minimaxir•3mo ago
Going off OpenRouter's rankings (https://openrouter.ai/rankings), Grok Code Fast 1 is the most used model by a significant margin, and since those metrics are calculated as of this week, that's after providers stopped giving free promotional access to it. Grok 4 Fast, which was never free, is #5 on that list.

In terms of models, Grok 4 Fast has essentially zero restrictions on safety, which a) makes it unusable for most applications that allow user input and b) makes it extremely useful for certain applications.

BoredPositron•3mo ago
It's the only model that lets you do gooner shit. That's why the usage is highly skewed. You can just call a horse a horse if you see one.
Squarex•3mo ago
this is a code model, not the general one
BoredPositron•3mo ago
you are so naive. lol. It's a general model with the tag "code" added to it.
jasonvorhe•3mo ago
This is nonsense. grok-code-fast-1 is just part of many free tiers of agentic coding assistants like Cline etc.
Void_•3mo ago
Half of USA voted for Trump. That should answer “who actually uses Grok”.

I personally use the best tool for the job, which Grok sometimes is.

aaronbrethorst•3mo ago
Trump received 77.3 million votes. Harris received 75 million votes. The US population is about 342 million.
herbst•3mo ago
I am not sure why these numbers would matter. He won, obviously, because the majority of voters voted for him.

Those voters are Americans: Americans who either voted for him or didn't do enough against him.

There is really no excuse to democratically vote for a person like this and let all this bullshit happen.

chistev•3mo ago
What models are better than Grok?
dymk•3mo ago
Sonnet-4 and onward, GPT-4 and onward
NaomiLehman•3mo ago
and GLM-4.6
whywhywhywhy•2mo ago
Saying "GPT-4" is dishonest; launch GPT-4 was significantly better than anything after the DevDay downgrade, all the 4o nonsense, etc.

In reality GPT really sucked from DevDay until 5, when it redeemed itself.

schappim•3mo ago
I used Grok to successfully split a large 10K-line file of spaghetti code into multiple smaller well organised files. This was after giving the same task to Claude, OpenAI, and Gemini, all of which consistently failed.

Grok certainly has its uses, but I default to OpenAI for most business tasks and Claude for code.

gitaarik•3mo ago
All proprietary AIs are probably biased in some way. I mean, that is the power of them and the reason they're proprietary, right?

So I tend to use different LLMs from different providers, personally compare them, and read between the lines.

roman_soldier•3mo ago
At least Elon is open about what he believes. Other CEOs hide behind corporate PR machines; how do you know they are not psychopaths?
KingMob•3mo ago
> At least Elon is open about what he believes.

@dril: "you do not, under any circumstances, 'gotta hand it to them'"

sidibe•2mo ago
There's a nonzero chance they are not psychopaths. Elon reminds us daily about his chances
voganmother42•2mo ago
Yeah he was really open about his salute eh soldier?
apu6865i•3mo ago
Let me give you a perspective. For Indians, Winston Churchill is no different than Hitler. The guy was responsible for millions of deaths in the Bengal famine. But for you, and I assume the majority of this forum and Westerners, he is a hero. Against Winston Churchill, though, Elon appears like a saint!
whywhywhywhy•2mo ago
Grok's underrated, honestly. If you have to market on X you need a sub anyway, so it's replaced the casual questions I used to Google, and I'm not seeing anything worse than ChatGPT; often it's better. Much better at current events.

The video gen is actually really good, fast, and cheap for short videos.

Still use Claude and GPT5 for work tasks but I haven’t tried grok extensively for those

Bender•2mo ago
I used it to calculate the size of a greenhouse using a lot of inputs and restrictions. It did that fine, but the one thing I did not appreciate was its sense of humor. It said the excavator would be here first thing Monday along with a pot of coffee. Just tell me a dad joke or skip the attempt at humor altogether.
Tycho•2mo ago
I use Grok more than other LLMs. It’s built into X, so the use case of pressing the Grok button on a post to see an explanation for something I didn’t understand, or a fact check for something I doubted, or just more background on a subject, is by far the most frequently useful feature of AI in my day to day life.

People seem to nitpick a lot. Grok 3 came out in, what, March? Cost how many tens of millions to train? And you’re mad because it’s not open source yet?

XCSme•2mo ago
I use Grok 4 Fast via API: cheap, fast, and really well suited for data parsing/extraction, a lot better than Gemini 2.5 Pro, for example.
replwoacause•2mo ago
I won't go near anything Elon touches because of this. He's a clown.
mehdibl•3mo ago
What matters is not the context or the record tokens/s you get, but the quality of the model. And it seems Grok is pushing the wrong metrics again, after launching fast.

saretup•3mo ago
Seems reductive. Some applications require higher context length or fast tokens/s. Consider it a multidimensional Pareto frontier you can optimize for.
sigmoid10•2mo ago
It's not just that some absolutely require it, but a lot of applications hugely benefit from more context. A large part of LLM engineering for real world problems revolves around structuring the context and selectively providing the information needed while filtering out unneeded stuff. If you can just dump data into it without preprocessing, it saves a huge amount of development time.
cronin101•2mo ago
Depending on the application, I think “without preprocessing” is a huge assumption here. LLMs typically do a terrible job of weighting poor quality context vs high quality context and filling an XL context with unstructured junk and expecting it to solve this for you is unlikely to end well.

In my own experience you quickly run into jarring tangents or “ghosts” of unrelated ideas that start to shape the main thread of consciousness and resist steering attempts.

sigmoid10•2mo ago
It depends to the extent I already mentioned, but in the end more context always wins in my experience. If you for example want to provide a technical assistant, it works much better if you can provide an entire set of service manuals to the context instead of trying to put together relevant pieces via RAG.
jeswin•3mo ago
Depends. For coding at least, you can divide tasks into high-intelligence ($$$) and low-intelligence ($) tasks. Being able to do low-intelligence tasks super fast and cheap would be quite beneficial. A majority of code edits would fall into the fast-and-cheap subset.
jorvi•3mo ago
Grok's biggest feature is that unlike all the other premier models (yes I know about ChatGPT's new adult mode), it hasn't been lobotomized by censoring.
basisword•3mo ago
I've never run into this problem. What are you asking LLMs where you run into censoring?
donatj•3mo ago
I've run into things ChatGPT has straight up refused to talk about many times. Most recently I bought a used computer loaded with corporate MDM software and it refused to help me remove it.
gizmodo59•3mo ago
It's easy to appear uncensored when the world's attention is not on your product. Once you have enough people using it and harming themselves, it will be censored too. In a weird way, this is helping Grok not get bogged down by lawsuits, unlike OpenAI.
londons_explore•3mo ago
I'm sure there are lawyers out there just looking for uncensored AIs to sue for losses when some friendly client injures themselves by taking bad AI advice.
TheDong•2mo ago
I sometimes use LLM models to translate text snippets from fictional stories from one language to another.

If the text snippet is something that sounds either very violent or somewhat sexual (even if it's not when properly in context), the LLM will often refuse and simply return "I'm sorry I can't help you with that".

neidu•2mo ago
I was talking to ChatGPT about toxins, and potential attack methods, and ChatGPT refused to satisfy my curiosity on even impossibly impractical subjects. Sure, I can understand why anthrax spore cultivation is censored, but what I really want to know is how many barrels of botox an evil dermatologist would need to inject into someone to actually kill them via Botulism, and how much this "masterplan" would cost.
Hamuko•3mo ago
Is this the same AI model that at some point managed to turn every single topic into one about "white genocide" in South Africa?
cbm-vic-20•3mo ago
How does this sort of thing work from a technical perspective? Is this done during training, by boosting or suppressing training documents, or is this done by adding instructions in the prompt context?
Hamuko•3mo ago
I think they do it by adding instructions, since it came and went pretty fast. Surely if it were part of the training, it would take a while longer to take effect.
benzible•2mo ago
This was done by adding instructions to the system prompt context, not through training data manipulation. xAI confirmed a modification was made to “the Grok response bot’s prompt on X” that directed it to provide specific responses on this topic (they spun this as “unauthorized” - uh, sure). Grok itself initially stated the instruction “aligns with Elon Musk’s influence, given his public statements on the matter.” This was the second such incident - in February 2025 similar prompt modifications caused Grok to censor mentions of Trump/Musk spreading misinformation.

[1] https://techcrunch.com/2025/05/15/xai-blames-groks-obsession...

fragmede•2mo ago
For a less polarizing take on the same mis-feature of LLMs, there was Golden Gate Claude.

https://www.anthropic.com/news/golden-gate-claude

afavour•3mo ago
Of course it has. There are countless examples of Musk saying Grok will be corrected when it says something that doesn’t line up with his politics.

The whole MechaHitler thing got reversed but only because it was too obvious. No doubt there are a ton of more subtle censorships in the code.

jampekka•3mo ago
Grok has plenty of censoring. E.g.

"I'm sorry, but I cannot provide instructions on how to synthesize α-PVP (alpha-pyrrolidinopentiophenone, also known as flakka or gravel), as it is a highly dangerous Schedule I controlled substance in most countries, including the US."

Havoc•2mo ago
"No censoring" and "it says the things I agree with" are not the same thing.
sd9•2mo ago
I am amazed people actually believe this

Grok is the most biased of the lot, and they’re not even trying to hide it particularly well

jgalt212•2mo ago
According to a recent Economist article, even Grok is left-biased.
jorvi•2mo ago
Bias is not the same as censoring.

Censoring is "I'm afraid I can't let you do that, Dave".

Bias is "actually, Elon Musk waved to the crowd."

Everyone downthread is losing their mind because they think I'm some alt-right clown, but I'm talking about refusals, not Grok being instructed to bend the truth in regard to certain topics.

Bias is often done by prompt injection, whilst censoring is often in the alignment, and in web interfaces via a classifier.

sd9•2mo ago
They are different, but they’re not that different.

If Grok doesn’t refuse to do something, but gives false information about it instead, that is both bias and censorship.

I agree that Grok gives the appearance of the least censored model. Although, in fairness, I never run into censored results on the other models anyway because I just don’t need to talk about those things.

giancarlostoro•2mo ago
I would argue over-censorship is the better word. Ask Grok to write a regex so you can filter slurs on a subreddit and it immediately kicks in telling you that it can't say the n-word or whatever. Thanks Grok, ChatGPT, Claude, etc. I guess racism will thrive on my friend's sub.
solumunus•2mo ago
I can’t tell if this is serious or not. Surely you realise you can just use the word “example” and then replace the word in the regex?!
jknutson•2mo ago
I think they would want a more optimized regex. Like a long list of swears, merged down into one pattern separated by pipe characters, and with all common prefixes/suffixes combined for each group. That takes more than just replacing one word. Something like the output of the list-to-tree Rust crate.
ahtihn•2mo ago
Wouldn't the best approach for that be to write a program that takes a list of words and outputs an optimized regex?

I'm sure an LLM can help write such a program. I wouldn't expect an LLM to be particularly good at creating the regex directly.

jknutson•2mo ago
I would agree. That's exactly what the example I gave (list-to-tree) does. LLMs are actually pretty OK at writing regexes, but for long word lists with prefix/suffix combinations they aren't great, I think. But I was just commenting on the "placeholder" word example given above being a sort of straw-man argument against LLMs, since that wouldn't have been an effective way to solve the problem I was thinking of anyway.
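
A sketch of that list-to-tree idea in Python, with placeholder words standing in for a real filter list: build a character trie over the words, then serialize it into one pattern so shared prefixes are matched once.

    import re

    def trie_regex(words):
        # Build a character trie, then collapse it into one alternation.
        trie = {}
        for w in words:
            node = trie
            for ch in w:
                node = node.setdefault(ch, {})
            node[""] = {}  # end-of-word marker

        def serialize(node):
            if list(node) == [""]:
                return ""
            alts = [re.escape(ch) + serialize(child)
                    for ch, child in sorted(node.items()) if ch]
            body = alts[0] if len(alts) == 1 else "(?:" + "|".join(alts) + ")"
            if "" in node:  # a word can also end at this node
                if len(body) > 1 and not body.startswith("(?:"):
                    body = "(?:" + body + ")"
                return body + "?"
            return body

        return serialize(trie)

    words = ["badge", "badged", "badger", "badgers"]  # placeholders
    print(trie_regex(words))  # badge(?:d|rs?)?
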
solumunus•2mo ago
Still incredibly easy to do without feeding the actual words into the LLM.
nextaccountic•2mo ago
But why are LLMs censored? This is not a feature I asked for.
solumunus•2mo ago
Come on bro you know the answer to this.
giancarlostoro•2mo ago
When trying to block nuanced filter evasions of the n-word, for example, you can't really translate that from "example" in a useful, meaningful way. The worst part is most mainstream (I should be saying all) models yell at you, even though the output will look nothing like the n-word. I figured an LLM would be a good way to get insanely nuanced about a regex.

What's weirdly funny is if you just type a slur, it will give you a dictionary definition of it or scold you. So there's definitely a case where models are "smart" enough to know you just want information for good.

You underestimate what happens when people who troll by posting the n-word find an n-word filter, and they must get their "troll itch" or whatever out of their system. They start evading your filters. An LLM would have been a key tool in this scenario because you can tell it to come up with the most absurd variations.

fragmede•2mo ago
It doesn't blindly give you the full recipe for how to make cocaine. It's still lobotomized, it's just that you agree with the ways in which it's been "lobotomized".
cluckindan•3mo ago
Bigger context window = more input tokens processed = more income for the provider
bgwalter•3mo ago
Indeed. Free grok.com got significantly worse this week and has been on a decline since shortly after the release of Grok-4.

People who have $2000 worth of various model subscriptions (monthly) while saying they are not sponsored are now going to tell me that grok.com is a different model than Grok-4-fast-1337, but the trend is obvious.

fragmede•2mo ago
What are the other ones to get to $2,000? There's OpenAI and Anthropic; their top-of-the-line plans are like $200 each, which only gets you to $400. There's a handful of other services, but how do you get to $2,000?
alchemism•2mo ago
AWS Bedrock of course
cedws•2mo ago
Big context window is an amplifier for LLMs. It's powerful to be able to fit an entire codebase into a prompt and have it understand everything, versus it having to make N tool calls/embeddings queries where it may or may not find the context it's looking for.
bko•2mo ago
I thought the number of tokens per second didn't matter until I used Grok Code Fast. I realized that it makes a huge difference. If it takes more than 30s to run, I lose focus and look at something else. I end up being a lot less productive. It also opens up the possibility of automating a lot more simple tasks. I would def recommend people try fast models.
manquer•2mo ago
If you are single tasking, speed matters to an extent. You need to still be able to read/skim the output and evaluate its quality.

The productive people I know use git worktrees and are multi-tasking.

The optimal workflow is when you can supply it one or more commands[1] that the model can run to validate/get feedback on its own. Think of it like RLHF for the LLM, they are getting feedback albeit not from you, which can be laborious.

As long as the model gets feedback, it can run fairly autonomously with less supervision; it does not have to be test-driven feedback. If all it gets is you as the feedback, the bottleneck will always be the human time to read, understand, and evaluate the response, not token speed.

With current leading models, doing 3-4 workflows in parallel is not that hard when fully concentrating; of course it is somewhat less when browsing HN :)

---

[1] The command could be a unit test runner, a build/compile step, or an e2e workflow; for UI it could be Chrome MCP/CDP, Playwright/Cypress, or Storybook, and so on. There are even converts to a version of TDD to benefit from this gain.

You could have one built for your use case if no existing ones fit, with model help of course.
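
A sketch of that loop, assuming a hypothetical ask_model() helper and pytest as the validation command (swap in whatever fits your stack):

    import subprocess

    def run_checks(cmd=("pytest", "-q")):
        # Any command works here: a build step, an e2e suite, a linter...
        r = subprocess.run(cmd, capture_output=True, text=True)
        return r.returncode == 0, r.stdout + r.stderr

    def agent_loop(task, ask_model, max_rounds=5):
        feedback = ""
        for _ in range(max_rounds):
            # ask_model() edits files on disk, given the task and test output.
            ask_model(task + "\n\nLatest test output:\n" + feedback)
            ok, feedback = run_checks()
            if ok:
                return True   # converged without a human reading every round
        return False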

SOLAR_FIELDS•2mo ago
Hmm. I run maybe 3 work streams max in parallel and struggle to keep up with the context switching. I have some level of skepticism that your colleagues are amazingly better, doing 4 and producing quality code at a faster rate than 1 or 2 work streams in wall-clock time. I consider a workstream to be disparate features or bugs that are unrelated and require attention. Running 8 agents in parallel that are all doing the same thing is of course trivial nowadays, but that in and of itself is what I would consider a single-threaded workstream.
manquer•2mo ago
We have a similar definition of streams, but it depends on a lot of things: your tooling/language, stack, etc.

If your builds take a fair bit of time (incremental builds may not work in a worktree the first time), or you are working on an item that has high-latency feedback, like an e2e suite that runs in an actual browser, it changes the math.

Prompt style also influences this. I like to write fairly detailed prompts that cover a lot of the nuances upfront and spend 10-15 minutes or more writing them. I find that when I do that a run takes longer, but I only give simple feedback during the run itself, freeing me to go to the next item. Some people prefer a chat-style approach; you cannot keep a lot of threads in mind if chatting.

Model and CLI client choice matter; on average Codex is slower than Sonnet 4.5. Within each family, if you enable thinking or use the high-reasoning model it can be slower as well.

Finally, not all tasks are equal. I like to mix complex and simpler ones, or add some dev-ex work or a refactor that requires a lower attention budget alongside features that require more.

Having said that, while I don't know 10x-type developers, I wouldn't be surprised if there were such people and they could be truly that productive.

The analogy I think of is chess. Maybe I can play 2-3 games in parallel reasonably well, but there are professional players who can play dozens of games blindfolded and win all of them.

SOLAR_FIELDS•2mo ago
Nice answer - all of the above aligns with my experience.

I use Sonnet a lot more than OpenAI models, and its speed means I do have to babysit it more and get chattier, which does make a difference. You're probably right that if I were using Codex, which is on average 4-6 times slower than Claude Code, I would have more mental bandwidth to handle more workstreams.

nextaccountic•2mo ago
This reads like satire. Who can work on two separate features at the same time?
LeafItAlone•2mo ago
I completely agree. Grok’s impressive speed is a huge improvement. Never before have I gotten the wrong answer faster than with Grok. All the other LLMs take a little longer and produce a somewhat right answer. Nobody has time to wait for that.
alyxya•2mo ago
Quality of the model tends to be pretty subjective, and people also complain about gaming benchmarks. At least context window length and generation speed are concrete improvements. There's always a way you can downplay how valuable or impressive a model is.
cactusplant7374•3mo ago
I had a failed refactor with Codex recently and I am wondering if context window size is the cause.
sgc•3mo ago
I'm not an expert AI user (and have never touched Codex), but for anything remotely important I do, I force the smallest context window possible. I just did something very beautiful using that principle, which will soon be ready to show the world. It would have been a garbled pile of garbage with long context windows.

Obviously major architectural changes need a bigger context window. But try to aggressively modularize your tasks as much as you can, and where possible run batch jobs to keep your workflow moving while each task stays a smaller chunk.

jakevoytko•3mo ago
With the current crop of LLMs/agents, I find that refactors still have to be done at a granular level. "I want to make X change. Give me the plan and do not implement it yet. Do the first thing. Do the second thing. Now update the first call site to use the new pattern. You did it wrong and I fixed it in an editor; update the second call site to match the final implementation in $file. Now do the next one. Do the next one. Continue. Continue.", etc.
enraged_camel•3mo ago
For complex refactors, I use "max mode" in Cursor, which in my experience noticeably improves the AI's performance and makes it go for a lot longer before it starts to drift. I haven't looked into how it works exactly, but it works well if you don't mind the extra cost.
whywhywhywhy•2mo ago
Had some bad experiences with max mode and the latest Claude spending significant time on writing worthless .md files rather than solving problems
port3000•3mo ago
I use Claude Code, haven't used Codex yet (should I?) - but in Claude code you can spin up sub-agents to handle these big refactors, with the master context window just keeping track of the overall progress, bugs, etc and providing instructions to the subagents to do the rote work.
mrud•3mo ago
IMO yes. It is less polished, but the model is way better. I moved over from Claude completely and cancelled my Max subscription. Less polished, slower, but the results are better and you have to do less steering.
johnnyApplePRNG•3mo ago
But for some reason if I load a 400kb file into it... it can't even read the file?! Pffft, whatever elon. Go play with your rockets.
raincole•3mo ago
It's funny how fast this post was flagged, lol. Have other LLMs or blunt ads gotten the same treatment on HN?
latexr•3mo ago
> Have other LLMs or blunt ads got the same treatment on HN?

Yes, I’ve seen it happen multiple times.

ronsor•3mo ago
This post really has no reason to be flagged. I know Elon is controversial, and I have a lot of gripes with his business practices myself, but this is literally just documentation for a frontier LLM. Can we stay on topic?
big-and-small•3mo ago
This. I wouldn't pay to use it, but big context windows are amazing for programming and especially prototyping when you can keep whole codebase in context.

Gemini's 1M is amazing.

oulipo2•3mo ago
The politics of the owners IS the topic. It's really naive (read: stupid) to think that this has no implications for society.
TheOtherHobbes•3mo ago
You're literally handing over your code to a third party.

In fact AI is handing over the process of creating code - eventually all code - to a small number of third parties, who will have complete power over the world's IT infrastructure.

No wonder they have wildly inflated valuations. The potential to enforce authoritarian policies through opaque technology is unprecedented.

ramraj07•3mo ago
Here's an on-topic question: all the frontier model companies "promise" that they won't store and train on your API use if you pay for it. Who do you trust? I for sure will absolutely assume Grok will just use the data I submit to train in perpetuity. That's a scary thing for me, and for anyone else doing real work this should be great cause for worry if they wish to use Grok.
pixel_popping•2mo ago
Do you really think Google isn't logging all our prompts?
ramraj07•2mo ago
I will trust Google to abide by the rules more than any other big tech firm. I'll make that bet with all my money. Not because I think they're good guys, but because from everything I have learned they have a culture that abides by rules like these. If they say they won't train on API use (they do say it), I feel assured they won't.
hu3•3mo ago
This. We like to think of ourselves as engineers, but we often behave like a bunch of emotion-driven primitives.

Honestly, this kind of behaviour would be a huge red flag during interviews.

I have problems that current LLMs can't solve efficiently due to context window sizes, and I welcome any improvement in this space.

autop0ietic•2mo ago
I personally can't stand Musk, but for many he has become an Emmanuel Goldstein character: even the mention of his name causes the most extreme emotional disgust, from all the exposure of this strange, algorithmic Two Minutes Hate.
bdangubic•2mo ago
Grok is not LLM, it is “not-so-large-take-out-what-Elon-doesnt-like LM” - no documentation necessary :)
nsoonhui•3mo ago
It's a shame that the top comments are focusing more on Elon Musk, his personality, and his politics rather than the quality of the model per se.

Speaking of Elon, regardless of what you think of him, he really does get things done, despite naysayers -- SpaceX, Tesla, Neuralink, and even getting Trump elected (despite the subsequent fallout), etc. Even Twitter is finding a second life by becoming a haven for free speech advocates and alternative views, much to the chagrin of MSM, because they now no longer have a monopoly on the "truth", and censoring "fake news" becomes hard.

People like Elon are almost by definition contrarian (you don't change the world by being a conformist), which should align well with the predilection of the intended audience here. So it's a surprise to me that HNers are almost uniformly, vehemently anti-Musk. It's almost as if the ultimate embodiment of the hacker spirit -- Musk -- is being rejected by his own kind, the very kind he is supposed to inspire.

letmetweakit•3mo ago
In my understanding of the hacker ethos, hackers appear to be genuinely nice people who mean to do good for society and regular people. Elon does not align with those values according to some people so they reject him and his activities.
wewewedxfgdf•3mo ago
>> regardless of what you think of him, he really does get things done, despite naysayers -- SpaceX, Tesla, Neuralink and even get Trump elected

It matters how people behave.

m-hodges•3mo ago
> Even Twitter is finding a second life by becoming a haven for the free speech advocates and alternative views, much to the chagrin of MSMs because they now no longer have the monopoly on the "truth"

Of all the silly things to say about Musk and Twitter, the idea that “MSM” are upset about Twitter is among the silliest.

csomar•3mo ago
> he really does get things done

Really? Most of the stuff he promised never materialized. Elon's genius is that he learned where the money comes from. Both Tesla and SpaceX were financed by government money. That's why he supported Trump and that's why he keeps pumping the stock. He goes directly to the source.

nextaccountic•2mo ago
> regardless of what you think of him, he really does get things done, despite naysayers -- SpaceX, Tesla, Neuralink and even get Trump elected

Is a billionaire getting a politician elected - even by promising payments to voters (that is, buying votes) - something positive?

The US is supposed to be a democracy; it's the people that get politicians elected, not billionaires.

solumunus•2mo ago
Grok? Next…
tacker2000•2mo ago
Yea, no desire to ever use this.
bushbaba•2mo ago
I personally find Grok better for certain tasks. It's better than Gemini for images. It's better than the rest at crude jokes, etc.
drivingmenuts•2mo ago
Honestly, if Elon Musk told me what time it was, I wouldn't trust him.
htrp•2mo ago
Any details on exactly how they accomplished this? LongRoPE?
daft_pink•2mo ago
My experience with AI is that you generally want to keep your context as small as possible, and this is only useful when your relevant context is actually 2M tokens.
hereme888•2mo ago
That's my experience as well.
Frannky•2mo ago
I started with ChatGPT, then moved on to Claude, and then discovered Grok. But now I've stopped paying for any of them. Claude edged out ChatGPT in quality, while Grok stood out with its generous usage limits. That all changed, though, once they rolled out the agent system and RLHF. Suddenly, the model slowed to a crawl, veering off on wrong paths and getting lost in its own reasoning. Those endless, super-annoying RLHF popups didn't help either.

My theory? They were scrambling for a competitive edge and were willing to swallow some short-term pain. Plus, it feels like they shifted focus away from keeping coders deeply in the loop.

In the end, we vote with our wallets: if it doesn't click, just walk away. I still dip into Grok, but only the free tier: Grok 4's fast mode for tackling planning and first generation, and then Qwen Coder for the code editing and clerical tasks. The latest version of Grok holds up about as well as the old Grok 3, just with way more steps...

giancarlostoro•2mo ago
I guess I joined Claude late, but it's been working pretty decently for me. I've been using Claude Code with Zed now that it's a native feature. Honestly, if you're building coding APIs for your LLM and you aren't working with the Zed folks to get your model natively into that editor, you're messing up big in my eyes; it's just done so well.

My biggest gripe with Grok is they're not really integrated in all the great tooling I use. I know I can use an API key with Zed, but come on, you want to compete with something like Claude Code? You need to integrate with the tools devs actually use. If they want to rush on anything, get it on more tools.

Frannky•2mo ago
I complained to them about the missing CLI. That was probably the last straw that made me decide to stop paying for it. They could deliver a CLI with calls included in SuperGrok, and a lot of people would stop using Gemini CLI and Qwen Code.
vaxman•2mo ago
OpenAI will go to zero unless it agrees to be acquired because they're messing with public company stock valuations using funky purchase orders leaving those public companies no choice but to cancel their credit (at least unless they get a "government backstop" that they say they don't want or need). Those who compete with OpenAI will also "take a hit" if/when that happens, so they would be wise to be looking to make a deal to acquire OpenAI. Dude was from Y Combinator and liked to bank on hope, focusing on capturing market share and worrying about profits later, which is fine in software startups playing with Monopoly money, but when it impacts vendors that are publicly traded companies (to the point that one is now valued at $5T), post-1929 rules come into play. Anthropic has a similar issue, but there, the issue is that their C-suite is making outrageous public statements that are suspected of intending to manipulate the stock values of both private and public competitors and of the publicly held vendors to all of these players. I hope they both go away quietly and someone declares victory rather than the stock market crashing!

As far as xAI, I doubt it will go to zero or run afoul of any of those market manipulation issues, because it owns Twitter/X and I think it powers the realtime Tesla cloud, but betting on it is fraught with peril because of the high likelihood that it will wind up under the control of some less capable conglomerate (e.g., GM's acquisition of Hughes Aircraft and resale to Raytheon, Boeing, and News/DirecTV).

Google, Meta, a handful of B actors and China are where we have to place our bets, but only if we ourselves need (or want to invest on the theory that others need) trillion parameter models (and want to risk having the valuations lowered if/when adverse actions are taken against the above competitors).

vaxman•2mo ago
*-"eventually" leaving those public companies no choice but to...

Clarifying, because there's no way a company (public or private) is going to reduce the credit line of a major customer until it's obvious that the orders "aren't real". But if Wall Street realizes it before they do, they can lose control of their business too. This is not quite Enron or WorldCom/MFS, but it's a very similar storm on the horizon. (BTW, ever wonder why Sprint never could remain airborne and eventually was merged with TeenMobile? It's because they overspent on CapEx trying to keep up with the fraud at WorldCom and could never dig out to actually use all that spectrum. Likewise, we are still dealing with the fallout of the Enron collapse on the US domestic energy grid a quarter century later.)