I was surprised reading through this problem that the machine solved it well at all.
I get that it's a LeetCode-style question, but it has a lot of specifics, and I assumed the corpus of training data on optimizing this type of problem was several orders of magnitude too small to train an LLM on with good results.
Something like that.
As someone with a degree in computer science, it reminds me of almost every course I took. As someone who has worked at multiple FAANG and adjacent companies with high expectations, I've encountered things like this in most interviews and have devised similar problems to give as interviews. The point isn't to make something objectively useful in the question itself, but to provide a toy example of a specific class of problem that absolutely shows up in practical situations, even though, by and large, most IT programmers will never see such a problem in their careers. That doesn't mean such problems don't exist in the world, or that computer scientists aren't solving them professionally in practical settings.

Beyond that, these problems also test how well people have learned computer science, discrete math, and complex programming, as a proxy for general technical intelligence (albeit without testing any specific technology or toolkit, as is emphasized in IT work). That's why it surprises me when people bellyache about computer science being asked about in any context - at school, work, or in a programming contest - as if the only worthwhile questions were systems programming questions.
The LLM was probably getting nowhere trying to improve after the first few minutes.
How did you come to that conclusion from the contents of the article?
The final scores are all relatively close. How could that happen if the AI was floundering the whole time? Just a good initial guess?
Yes, that and marginal improvements over it.
I would assume the LLM is trying an inhuman number of solutions and the best one was #2 in this contest.
Impressive win by the human, but good luck with that in 2026.
Are the submissions available online without needing to become a member of AtCoder?
I want to see what these 'heuristic' solutions look like.
Is it just that the AI precomputed more states and shoved their solutions in as the 'heuristic', or did it come up with novel, broader heuristics? Did the human and AI solutions have overlapping heuristics?
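For reference, my mental model of a 'heuristic' solution in these contests is randomized local search, usually simulated annealing: start from any valid answer and keep tweaking it for the whole time budget. A generic sketch (purely illustrative - none of these names come from any actual submission):

    import math, random, time

    TIME_LIMIT = 1.9  # seconds of wall-clock budget, typical for these contests

    def anneal(initial_state, score, neighbor):
        """Generic simulated annealing: try a small random tweak each step,
        always accept improvements, and accept worsenings with a probability
        that shrinks as the time budget runs out."""
        start = time.time()
        state = best = initial_state
        cur = best_score = score(state)
        t_start, t_end = 2000.0, 10.0  # "temperatures" - pure problem-specific tuning
        while (elapsed := time.time() - start) < TIME_LIMIT:
            temp = t_start + (t_end - t_start) * elapsed / TIME_LIMIT
            cand = neighbor(state)      # e.g. swap two items, move one element, etc.
            cand_score = score(cand)
            delta = cand_score - cur
            if delta >= 0 or random.random() < math.exp(delta / temp):
                state, cur = cand, cand_score
                if cur > best_score:
                    best, best_score = state, cur
        return best

Most of the interesting work is in the problem-specific score() and neighbor() functions, which is where I'd expect the human and AI solutions to differ.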
First, there's a world coding championship?! Of course there is. There's a competition for anything these days.
Why is he exhausted?
> The 10-hour marathon left him "completely exhausted."
> ... noting he had little sleep while competing in several competitions across three days. "I'm completely exhausted. ... I'm barely alive."
oh! That's a lot.
> beating an advanced AI model from OpenAI ...
> On Wednesday, programmer Przemysław Dębiak (known as "Psyho"), a former OpenAI employee,
Interesting that he used to work there.
> Dębiak won 500,000 yen
JPY 500,000 -> USD 3367.20 -> EUR 2889.35
I'm guessing it's more about the clout than it is about the payment, because that's not a lot of money for the effort spent
Yeah I'm not in tech but I've seen his handle like 3 times today already, so he's definitely got recognition.
to be fair he also said
> "Honestly, the hype feels kind of bizarre," Dębiak said on X. "Never expected so many people would be interested in programming contests."
I think people need to realize that even when an AI model fails at some point, or a given architecture has common failure modes, billions of dollars are being poured into correcting those failures and improving in every economically viable domain. Two years ago AI video looked like a garbled 140p nightmare; now it's higher-quality video than all but professional production studios could make.
AI agents don't get tired. They don't need to sleep. They don't require sick days, parental leave, or PTO. They don't file lawsuits, they don't share company secrets, they don't disparage, deliberately sandbag to get extra free time, whine, burn out, or go AWOL. The best AI model/employee is infinitely replicable and can share its knowledge with other agents perfectly, cloning itself arbitrarily many times, and it doesn't have a clash of egos working with copies of itself; it just optimizes and is refit to accomplish whatever task it's given.
All this means is that gradually the relative advantage of humans in any economically viable domain will predictably trend towards zero. We have to figure out now what that will mean for general human welfare, freedom and happiness, because barring extremely restrictive measures on AI development or voluntary cessation by all AI companies, AGI will arrive.
On a related note, many people also assume that just because something has been trending exponential that it will _continue_ to do so...
Imagine a software company without a single software engineer. What kind of software would it produce? How would a product manager or some other stakeholder work with "AI agents"? How do the humans decide that the agent is finished with the job?
Software engineering changes with the tools. Programming via text editors will be less important, that much is clear. But "AI" is a tool. A compressed database of all languages, essentially. You can use that tool to become more efficient, in some cases vastly more efficient, but you still need to be a software engineer.
Given that understanding, consider another question: when has a company you worked for ever said, "That's enough software, the backlog is empty. We're done with software development for the quarter"?
Current AI failure modes (consistency over long context lengths, multi-modal consistency, hallucinations) make it untenable as a full-replacement software engineer, but effective as a short-term task agent overseen by an engineer who can review code and quickly determine what's good and what's bad. This lets a 5x engineer become a 7x engineer, a 10x become a 13x, etc., which allows the same amount of work to be done with fewer coders, effectively replacing the least productive engineers in aggregate.
However, as those failure modes become less and less frequent, we will gradually see "replacement". It will come in the form of senior engineers using AI tools noticing that a PR of a certain complexity is coded correctly 99% of the time by a given AI model, so they will start assigning it longer, more complex tasks and stop overseeing the smaller ones. The length of tasks it can reliably complete gets longer and longer, until all a suite of agents needs is a spec, API endpoints, and the ability to serve testing deployments to PMs. At first it does only what a small, poorly run team could accomplish, but month after month it gets better, until companies start offloading entire teams to AI models and simply require a higher-up team to check and reconfigure them once in a while and manage the token budget.
This process will continue as long as AI models grow more capable and less hallucinatory over long context horizons, and as agentic/scaffolding systems become more robust and better designed to mitigate the issues that remain. It won't be easy or straightforward, but the potential economic gains are so enormous that it makes sense that billions are being poured into any AI agent startup that can snatch a few IOI medalists and a coworking space in SF.
This does not follow. Your argument, set in the 1950s, would be that cars keep getting faster, therefore they will reach light speed.
I'm bullish on specific areas improving (I'm sure you could selectively train an LLM on the latest Angular version to replace the majority of front-end devs, given enough time and money; it's a limited problem space and a strongly opinionated framework, after all), but for the most part enshittification is already starting to happen with the general models.
Nowadays ChatGPT doesn't even bother to refer back to the original question after a few responses, so you're left summarising the conversation and starting a new context to get anywhere.
So, yeah, I think we're very much into finding the equilibrium now. Cost vs scale. Exponential improvements won't be in the general LLMs.
Happy to be wrong on this one.
Reading through the challenge, there's a lot of data modelling and test harness writing and ideating that an LLM could knock out fairly quickly, but would take even a competitive coder some time to write (even if just limited by typing speed).
That'd give the human more time to experiment with different approaches and test incremental improvements.
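By "test harness" I mean little more than a loop that generates a batch of cases, runs the solver on each, and aggregates the official score. A minimal sketch (the gen / ./solver / scorer commands here are placeholders, not the contest's actual tooling):

    import statistics, subprocess

    SEEDS = range(50)  # batch of locally generated test cases

    def run_case(seed):
        # 'gen', './solver' and 'scorer' are placeholder commands for illustration
        case = subprocess.run(["gen", str(seed)], capture_output=True, text=True).stdout
        out = subprocess.run(["./solver"], input=case, capture_output=True, text=True).stdout
        res = subprocess.run(["scorer", str(seed)], input=out, capture_output=True, text=True).stdout
        return float(res.strip())

    scores = [run_case(s) for s in SEEDS]
    print(f"mean={statistics.mean(scores):.1f}  min={min(scores):.1f}  max={max(scores):.1f}")

Boilerplate like this is exactly the kind of thing an LLM can produce in seconds, freeing the human to spend the time on the actual heuristics.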
And apparently it's not against the rules to use LLMs in the competition (https://atcoder.jp/posts/1495). I'd be curious what the other competitors used.
> All competitors, including OpenAI, were limited to identical hardware provided by AtCoder, ensuring a level playing field between human and AI contestants.
I'd assumed that meant a pretty restricted (and LLM-free) environment. I think their policy is pretty pragmatic.
I seem to encounter cultural milestones, that are no longer there, every day.