frontpage.

Build a minimal decorator with Ruby in 30 minutes

https://remimercier.com/minimal-decorator-ruby/
29•unripe_syntax•2h ago•7 comments

Expanding Racks [video]

https://www.youtube.com/watch?v=iWknov3Xpts
81•doctoboggan•4h ago•10 comments

Agentic Coding Recommendations

https://lucumr.pocoo.org/2025/6/12/agentic-coding/
4•rednafi•9m ago•0 comments

Chatterbox TTS

https://github.com/resemble-ai/chatterbox
410•pinter69•13h ago•134 comments

How much EU is in DNS4EU?

https://techlog.jenslink.net/posts/dns4eu/
100•todsacerdoti•1h ago•46 comments

Microsoft Office migration from Source Depot to Git

https://danielsada.tech/blog/carreer-part-7-how-office-moved-to-git-and-i-loved-devex/
138•dshacker•9h ago•119 comments

SchemeFlow (YC S24) Is Hiring a Founding Engineer (London) to Speed Up Construction

https://www.ycombinator.com/companies/schemeflow/jobs/SbxEFHv-founding-engineer-full-stack
1•andrewkinglear•23m ago

The hunt for Marie Curie's radioactive fingerprints in Paris

https://www.bbc.com/future/article/20250605-the-hunt-for-marie-curies-radioactive-fingerprints-in-paris
47•rmason•2d ago•18 comments

AOSP project is coming to an end

https://old.reddit.com/r/StallmanWasRight/comments/1l8rhon/aosp_project_is_coming_to_an_end/
186•kaladin-jasnah•4h ago•70 comments

Show HN: Eyesite - experimental website combining computer vision and web design

https://blog.andykhau.com/blog/eyesite
75•akchro•8h ago•13 comments

Research suggests Big Bang may have taken place inside a black hole

https://www.port.ac.uk/news-events-and-blogs/blogs/space-cosmology-and-the-universe/what-if-the-big-bang-wasnt-the-beginning-our-research-suggests-it-may-have-taken-place-inside-a-black-hole
509•zaik•13h ago•445 comments

Show HN: Spark, An advanced 3D Gaussian Splatting renderer for Three.js

https://sparkjs.dev/
284•dmarcos•16h ago•62 comments

Plants hear their pollinators, and produce sweet nectar in response

https://www.cbc.ca/listen/live-radio/1-51-quirks-and-quarks/clip/16150976-plants-hear-pollinators-produce-sweet-nectar-response
257•marojejian•4d ago•53 comments

How I Program with Agents

https://crawshaw.io/blog/programming-with-agents
452•bumbledraven•3d ago•255 comments

Dancing brainwaves: How sound reshapes your brain networks in real time

https://www.sciencedaily.com/releases/2025/06/250602155001.htm
4•lentoutcry•3d ago•0 comments

V-JEPA 2 world model and new benchmarks for physical reasoning

https://ai.meta.com/blog/v-jepa-2-world-model-benchmarks/
241•mfiguiere•18h ago•78 comments

Show HN: Ikuyo a Travel Planning Web Application

https://ikuyo.kenrick95.org/
269•kenrick95•20h ago•84 comments

OpenAI o3-pro

https://help.openai.com/en/articles/9624314-model-release-notes
235•mfiguiere•1d ago•140 comments

How long it takes to know if a job is right for you or not

https://charity.wtf/2025/06/08/on-how-long-it-takes-to-know-if-a-job-is-right-for-you-or-not/
179•zdw•2d ago•109 comments

Bypassing GitHub Actions policies in the dumbest way possible

https://blog.yossarian.net/2025/06/11/github-actions-policies-dumb-bypass
192•woodruffw•19h ago•93 comments

Danish Ministry Replaces Windows and Microsoft Office with Linux and LibreOffice

https://www.heise.de/en/news/From-Word-and-Excel-to-LibreOffice-Danish-ministry-says-goodbye-to-Microsoft-10438942.html
10•jlpcsl•55m ago•1 comment

The curious case of shell commands, or how "this bug is required by POSIX" (2021)

https://notes.volution.ro/v1/2021/01/notes/502e747f/
119•wonger_•1d ago•77 comments

Fine-tuning LLMs is a waste of time

https://codinginterviewsmadesimple.substack.com/p/fine-tuning-llms-is-a-huge-waste
135•j-wang•1d ago•64 comments

Show HN: RomM – An open-source, self-hosted ROM manager and player

https://github.com/rommapp/romm
196•gassi•19h ago•76 comments

Firefox OS's story from a Mozilla insider not working on the project (2024)

https://ludovic.hirlimann.net/2024/01/firefox-oss-story-from-mozila-insider.html
158•todsacerdoti•21h ago•105 comments

Show HN: S3mini – Tiny and fast S3-compatible client, no-deps, edge-ready

https://github.com/good-lly/s3mini
237•neon_me•1d ago•93 comments

Unveiling the EndBOX – A microcomputer prototype for EndBASIC

https://www.endbasic.dev/2025/06/unveiling-the-endbox.html
26•jmmv•9h ago•7 comments

Rohde and Schwarz AMIQ Modulation Generator Teardown

https://tomverbeure.github.io/2025/04/26/RS-AMIQ-Teardown-Analog-Deep-Dive.html
50•iamsrp•3d ago•15 comments

Congratulations on creating the one billionth repository on GitHub

https://github.com/AasishPokhrel/shit/issues/1
505•petercooper•11h ago•117 comments

DeskHog, an open-source developer toy

https://posthog.com/deskhog
206•constantinum•19h ago•84 comments

OpenAI o3-pro

https://help.openai.com/en/articles/9624314-model-release-notes
235•mfiguiere•1d ago

Comments

mmsc•1d ago
I understand that things are moving fast and all, but surely the... 8? models currently available are a bit overwhelming for users who just want answers to their questions of life? What's the end goal of having so many models available?
Osyris•1d ago
This is a much more expensive model to run and is only available to users who pay the most. I don't see an issue.

However, the "plus" plan absolutely could use some trimming.

djrj477dhsnv•4h ago
If it's better (and newer) than GPT-4, it shouldn't have a lower version number.
bachittle•1d ago
Free users don't have this model selector, and probably don't care which model they get, so 4o is good enough. Paid users at $20/month get more models, which are better, like o3. Paid users at $200/month get the best models, which also cost OpenAI the most to run, like o3-pro. I think they plan to unify them with GPT-5.
stavros•1d ago
That doesn't help much when we're asymptotically approaching GPT-5. We're probably going to be at GPT-4.9999 soon.
rfw300•1d ago
Not necessarily true. GPT-4.1 was released after GPT-4.5-preview. Next model might be GPT-3.7.
nikcub•1d ago
I'd be curious what proportion of paid users ever switch models. I'd guess < 10%
CamperBob2•1d ago
I switch to o1-pro on occasion, but it is slow enough that I don't use it as much as some of the others. It is a reasonably-effective last resort when I'm not getting the answer quality that I think should be achievable. It's the best available reasoning model from any provider by a noticeable margin.

Sounds like o3-pro is even slower, which is fine as long as it's better.

o4-mini-high is my usual go-to model if I need something better than the default GPT-4-du-jour. I don't see much point in the others and don't understand why they remain available. If o3-pro really is consistently better, it will move o1-pro into that category for me.

CuriouslyC•1d ago
If you're not at least switching from 4o to 4.1 you're doing it wrong.
macawfish•1d ago
Overwhelming yet pretty underwhelming
nickysielicki•1d ago
I just can’t believe nobody at the company has enough courage to tell their leadership that their naming scheme is completely stupid and insane. Four is greater than three, and so four should be better than three. The point of a name is to describe something so that you don’t confuse your users, not to be cute.
browningstreet•1d ago
At Techcrunch AI last week, the OpenAI guy started his presentation by acknowledging that OpenAI knows their naming is a problem and they're working on it, but it won't be fixed immediately.
moomin•1d ago
I know they have a deep relationship with Microsoft, but perhaps they shouldn’t have used Microsoft’s product naming department.
orra•1d ago
Zune .NET O3... shudders
simonw•1d ago
Sam Altman has said the same thing on Twitter a few times. https://x.com/sama/status/1911906570835022319

> how about we fix our model naming by this summer and everyone gets a few more months to make fun of us (which we very much deserve) until then?

nickysielicki•1d ago
I’d prefer for them to just fix it asap instead and then keep the existing endpoints around for a year as aliases.
lobsterthief•6h ago
They will definitely keep the endpoint aliases around for years. No real cost in doing so.
asah•43m ago
How about they ask ChatGPT for help!
MallocVoidstar•1d ago
The reason their naming scheme is so bad is because their initial attempts at GPT-5 failed in training. It was supposed to be done ~1 year ago. Because they'd promised that GPT-5 would be vastly more intelligent than GPT-4, they couldn't just name any random model "GPT-5", so they suddenly had to start naming things differently. So now there's GPT-4.5, GPT-4.1, the o-series, ...
kaoD•1d ago
Surely there's a less stupid way than naming two very different models o4 and 4o.
transcriptase•1d ago
What’s worse is that the app doesn’t even have descriptions. As if I’m supposed to memorize the use case for each based on:

GPT-4o

o3

o4-mini

o4-mini-high

GPT-4.5

GPT-4.1

GPT-4.1-mini

koakuma-chan•1d ago
Just use o4-mini for everything
ralfd•1h ago
Why not o3?
aetherspawn•1d ago
Came here to say this, the naming scheme is ridiculous and is getting more impossible to follow each day.

For example the other day they released a supposedly better model with a lower number..

aetherspawn•1d ago
I’d honestly prefer they just have 3 personas of varying cost/intelligence: Sam, Elmo and Einstein or something, and then tack on the date, elmo-2025-1 and silently delete the old ones.
dmos62•2h ago
If you obfuscate the naming, you obfuscate the value proposition, and people become easier to mislead into choosing an overly expensive model. Same as with Intel CPUs, or many many other hardware products.
levocardia•1d ago
There's a humorous version of Poe's law that says "any sufficiently genuine attempt to explain the differences between OpenAI's models is indistinguishable from parody"
paxys•1d ago
> users that just want to get answers to their questions of life

Those users go to chat.openai.com (or download the app), type text in the box and click send.

AtlasBarfed•1d ago
I'd like one to do my test use case:

Port unix-sed from c to java with a full test suite and all options supported.

Somewhere between "it answers questions of life" and "it beats PhDs at math questions", I'd like to see one LLM take on this, IMO rather "pure", language task and succeed.

It is complicated, but it isn't complex. It's string operations with a deep but not that deep expression system and flag set.

It is well described and documented on the internet, and presumably in training sets. It can be described succinctly: virtually all programmers would understand what it entailed if it were assigned to them. It is drudge work, giving LLMs an opportunity to show how they would improve true productivity.

GPT fails to do anything beyond the most basic substitute operations. Claude was only slightly better but, to its detriment, hallucinated massively and made fake passing test cases that didn't even test the code.

The reaction I get to this test is ambivalence, but IMO if LLMs could help port entire software packages between languages with similar feature sets (aside from Turing Completeness), then software cross-use would explode, and maybe we could port "vulnerable" code to "safe" Rust en masse.

I get it, it's not what they are chasing customer-wise. They want to write (in n-gate terms) webcrap.
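
For scale, the "most basic substitute operations" that the models do manage are on the order of this sketch. Python stands in for the hypothetical Java port, and note that Python's regex dialect differs from sed's BRE — exactly the kind of detail a real port and its test suite would have to cover:

```python
import re

def sed_substitute(expr: str, line: str) -> str:
    """Apply a basic s/pattern/replacement/[flags] expression to one line."""
    # Assumes '/' as delimiter and no escaped delimiters inside the pattern.
    _, pattern, repl, flags = expr.split("/", 3)
    count = 0 if "g" in flags else 1  # 'g' substitutes every match on the line
    return re.sub(pattern, repl, line, count=count)

# The kind of cases a ported test suite would need, en masse:
assert sed_substitute("s/cat/dog/", "cat cat") == "dog cat"
assert sed_substitute("s/cat/dog/g", "cat cat") == "dog dog"
assert sed_substitute("s/[0-9]+/N/g", "a1 b22") == "aN bN"
```

A full port would also need addresses, hold space, in-place editing, and sed's own regex semantics, which is where the "deep but not that deep" flag set starts to bite.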

CamperBob2•1d ago
How does the latest Gemini 2.5 Pro Ultra Flash Max Hemi XLT release do on that task? It obviously demands a massive context window.
AtlasBarfed•12h ago
I'll check once the nitrous tanks and the aftermarket turbos I overnighted from Japan arrive.
nipah•12h ago
I have a very simple question, five lines at best, that basically no model, reasoning or not, can grasp. For obvious reasons I'm not disclosing it here (I fear data contamination in the long run), but it basically breaks the "reasoning" of these things. Unfortunately I still can't try o3-pro, because the API version is not easily available and I'm certainly not willing to pay for it on the Pro plan, but when (if) it comes to the Plus plan I'll try it. To date, because of this question (and similar ones), I remain very unimpressed with these models; the marketing is a thousand times larger than the reality, and I suspect people in general are far less capable of detecting intelligence than they think.

The normal o3 also managed to break three isolated Linux installations I was trying it with a few days ago. The task was very simple: set up Ubuntu with btrfs, Timeshift, and grub-btrfs. It failed every single time (even when searching the web), so that was not impressive either.

resters•1d ago
Models are used for actual tasks where predictable behavior is a benefit. Models are also used on cutting-edge tasks where smarter/better outputs are highly valued. Some applications value speed and so a new, smaller/cheaper model can be just right.

I think the naming scheme is just fine and is very straightforward to anyone who pays the slightest bit of attention.

manmal•1d ago
The benchmarks don’t look _that_ much better than o3. Does that mean Pro models are just incrementally better than base models, or are we approaching the higher end of a sigmoid function, with performance gains leveling off?
bachittle•1d ago
it's the same model as o3, just with thinking tokens turned up to the max.
Tiberium•1d ago
That's simply not true, it's not just "max thinking budget o3" just like o1-pro wasn't "max thinking budget o1". The specifics are unknown, but they might be doing multiple model generations and then somehow picking the best answer each time? Of course that's a gross simplification, but some assume that they do it this way.
cdblades•1d ago
> That's simply not true, it's not just "max thinking budget o3"

> The specifics are unknown, but they might...

Hold up.

> but some assume that they do it this way.

Come on now.

MallocVoidstar•1d ago
Good luck finding the tweet (I can't) but at least one OpenAI engineer has said that o1-pro was not just 'o1 thinking longer'.
boole1854•1d ago
I also don't have that tweet saved, but I do remember it.
PhilippGille•1d ago
This one? Found with Kagi Assistant.

https://x.com/michpokrass/status/1869102222598152627

It says:

> hey aidan, not a miscommunication, they are different products! o1 pro is a different implementation and not just o1 with high reasoning.

cdblades•20h ago
That's a rather crappy product naming scheme.
firejake308•1d ago
> "We also introduced OpenAI o3-pro in the API—a version of o3 that uses more compute to think harder and provide reliable answers to challenging problems"

Sounds like it is just o3 with higher thinking budget to me

dyauspitr•1d ago
Don’t they have a full fledged version of o4 somewhere internally at this point?
ankit219•1d ago
It seems they do. o1 and o3 were based on the same base model; o4 is going to be based on a newer (and perhaps smarter) one.
lhl•22h ago
I've been using o3 extensively since release (and a lot of Deep Research). I also use a lot of Claude and Gemini 2.5 Pro (most of the time, for code, I'll let all of them go at it and iterate on my favorite results).

So far I've only used o3-pro a bit today, and it's a bit too heavy to use interactively (fire it off, revisit in 10-15 minutes), but it seems to generate much cleaner/more well organized code and answers.

I feel like the benchmarks aren't really doing a good job at capturing/reflecting capabilities atm. eg, while Claude 4 Sonnet appears to score about as well as Opus 4, in my usage Opus is always significantly better at solving my problem/writing the code I need.

Beyond especially complex/gnarly problems, I feel like a lot of the different models are all good enough, and it comes down to reliability. For example, I've basically stopped using Claude for work because multiple times now it's completely eaten my prompts and even artifacts it generated. It also hits limits ridiculously fast (and does so even on network/resource failures).

I use 4.1 as my workhorse for code interpreter work (creating graphs/charts w/ matplotlib, basic df stuff, converting tables to markdown) as it's just better integrated than the others and so far I haven't caught 4.1 transposing/having errors with numbers (which I've noticed w/ 4o and Sonnet).

Having tested most of the leading edge open and closed models a fair amount, 4.5 is still my current preferred model to actually talk to/make judgement calls (particularly with translations). Again, not reflected in benchmarks, but 4.5 is the only model that gives me the feeling I had when first talking to Opus 3 (eg, of actual fluid intelligence, and a pleasant personality that isn't overly sycophantic) - Opus 4 is a huge regression in that respect for me.

(I also use Codex, Roo Code, Windsurf, and a few other API-based tools, but tbh, OpenAI's ChatGPT UI is generally better for how I leverage the models in my workflow.)

manmal•19h ago
Thanks for your input, much appreciated. Just in case you didn't mean Claude Code: it's really good in my experience and mostly stable. If something fails, it just retries and I don't notice it much. Its autonomous discovery and tool use are really good, and I'm relying more and more on it.
lhl•4h ago
For the Claude issues, I'm referring to the claude.ai frontend. While I use some Codex, Aider, and other agentic tools, I found Claude Code to be not to my taste - for my uses it tended to burn a lot of tokens and gave relatively mediocre results, but I know it works well for others, so YMMV.
petesergeant•5h ago
I wonder if we'll start to see artisanal benchmarks. You -- and I -- have preferred models for certain tasks. There's a world in which we start to see how things score on the "simonw chattiness index" and come to rely, I think, on smaller, more specific benchmarks.
lhl•4h ago
Yeah, I think personalized evals will definitely be a thing. Besides reviewing way too much Arena, WildChat and having now seen lots of live traces firsthand, there's a wide range of LLM usage (and preferences), which really don't match my own tastes or requirements, lol.

For the past year or two, I've had my own personal 25 question vibe-check I've used on new models to kick the tires, but I think the future is something both a little more rigorous and a little more automated (something like LLM Jury w/ an UltraFeedback criteria based off of your own real world exchanges and then BTL ranked)? A future project...

petesergeant•5h ago
I am starting to feel like hallucination is a fundamentally unsolvable problem with the current architecture, and is going to keep squeezing the benchmarks until something changes.

At this point I don't need smarter general models for my work, I need models that don't hallucinate, that are faster/cheaper, and that have better taste in specific domains. I think that's where we're going to see improvements moving forward.

OccamsMirror•4h ago
If you could actually teach these models things, not just in the current context but as temporal learning, then that would alleviate a lot of the issues with hallucination. Imagine being able to say "that method doesn't exist, don't recommend it again", then give it the documentation, and have it absorb that information permanently; that would fundamentally change how we interact with these models. But can that work for models hosted for everyone to use at once?
petesergeant•1h ago
There are an almost infinite number of things that can be hallucinated, though. You can't maintain a list of scientific papers or legal cases that don't exist! Hallucinations (almost certainly) aren't specific falsehoods that need to be erased...
tiahura•1d ago
So, upgrade to Teams and pay the $50? Plus more usage of o3. Seems like it might be a shot at the $100 Claude Max?
dog436zkj3p7•1d ago
What do you mean with "pay the $50"?

Also, does anybody know what limits o3-pro has under the team plan? I don't see it available in the model picker at all (on team).

sanex•1d ago
I believe teams is $25/user with a 2 user minimum.
dog436zkj3p7•1d ago
Ah, thanks for explaining!
carmelion•1d ago
Jl App
ChrisArchitect•1d ago
Related:

OpenAI dropped the price of o3 by 80%

https://news.ycombinator.com/item?id=44239359

swyx•1d ago
here's a nice user review we published: https://www.latent.space/p/o3-pro

sama's highlight[0]:

> "The plan o3 gave us was plausible, reasonable; but the plan o3 Pro gave us was specific and rooted enough that it actually changed how we are thinking about our future."

I kept nudging the team to go the whole way to just let o3 be their CEO but they didn't bite yet haha

0: https://x.com/sama/status/1932533208366608568

tomComb•1d ago
Big fan, swyx, but both here and in the article there is some bragging about being quoted by sama, and while I acknowledge that's not out of the ordinary, I'm concerned about where it leads: what it takes to get quoted by sama (or a similarly interested party) is saying something good about his product and having a decent follower count.

Dangerous incentives IMO.

swyx•1d ago
acked. in my defense i didnt write the article + ben already had a good track record from the o1 article. while our relationship with oai is v v v impt to us, we've also covered negative openai stories: https://www.latent.space/p/clippy-v-anton and will continue to give balanced coverage with the other labs when they do well.

we are definitely not seeking to be openai sycophants, nor would they want us to be.

alightsoul•1d ago
if o3 is so good why aren't they using it to replace management?
martin_corredor•45m ago
It's been 1 day

The technology needs to diffuse through and find its equilibrium within the market

You could say 3.5/3.7 Sonnet was good enough to replace some juniors but the juniors didn't get replaced immediately - it has a lag in time for it to ripple through

WhitneyLand•1d ago
So, we currently have o4-mini and o4-mini-high, which represent medium and high usage of “thinking” or use of reasoning tokens.

This announcement adds o3-pro, which pairs with o3 in the same way the o4 models go together.

It should be called o3-high, but to align with the $200 pro membership it’s called pro instead.

That said, o3 is already an incredibly powerful model. I prefer it over the new Anthropic 4 models and Gemini 2.5. Its raw power seems similar to those others, but it's so good at inline tool use that it usually comes out ahead overall.

Any non-trivial code generation/editing should be using an advanced reasoning model, or else you’re losing time fixing more glitches or missing out on better quality solutions.

Of course the caveat is cost, but there’s value on the frontier.

boole1854•1d ago
No, this doesn't seem to be correct, although confusion regarding model names is understandable.

o4-mini-high is the label on chatgpt.com for what in the API is called o4-mini with reasoning={"effort": "high"}. Whereas o4-mini on chatgpt.com is the same thing as reasoning={"effort": "medium"} in the API.

o3 can also be run via the API with reasoning={"effort": "high"}.

o3-pro is different from o3 with high reasoning. It has a separate endpoint, and it runs for much longer.

See https://platform.openai.com/docs/guides/reasoning?api-mode=r...
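
The distinction above can be made concrete with plain request payloads rather than live API calls (a sketch only; the `reasoning.effort` field follows the reasoning guide linked above, and the exact availability of each model is an assumption here):

```python
def reasoning_request(model: str, effort: str, prompt: str) -> dict:
    """Build a Responses-API-style payload with an explicit reasoning effort."""
    return {
        "model": model,
        "reasoning": {"effort": effort},
        "input": prompt,
    }

# chatgpt.com's "o4-mini" corresponds to medium effort in the API,
# and "o4-mini-high" to high effort on the *same* model:
o4_mini      = reasoning_request("o4-mini", "medium", "...")
o4_mini_high = reasoning_request("o4-mini", "high", "...")

# o3 can also run at high effort, but o3-pro is a separate model name,
# not o3 with {"effort": "high"}:
o3_high = reasoning_request("o3", "high", "...")
o3_pro  = reasoning_request("o3-pro", "high", "...")

assert o4_mini["model"] == o4_mini_high["model"]          # same model...
assert o4_mini["reasoning"] != o4_mini_high["reasoning"]  # ...different effort
assert o3_high["model"] != o3_pro["model"]                # genuinely different models
```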

johnecheck•26m ago
OpenAI started strong in the naming department (ChatGPT, DALL-E) then fell off so hard since.
chad1n•1d ago
The guys in the other thread who said that OpenAI might have quantized o3, and that's how they reduced the price, might be right. This o3-pro might be the actual o3-preview from the beginning, and the o3 might be just a quantized version. I wish someone would benchmark all of these models to check for drops in quality.
simonw•1d ago
That's definitely not the case here. The new o3-pro is slow - it took two minutes just to draw me an SVG of a pelican riding a bicycle. o3-preview was much faster than that.

https://simonwillison.net/2025/Jun/10/o3-pro/

k2xl•1d ago
Not distilled, same model. https://x.com/therealadamg/status/1932534244774957121?s=46&t...
CamperBob2•1d ago
Would you say this is the best cycling pelican to date? I don't remember any of the others looking better than this.

Of course by now it'll be in-distribution. Time for a new benchmark...

jstummbillig•1d ago
I love that we are in the timeline where we are somewhat seriously evaluating probably-superhuman intelligence by its ability to draw an SVG of a cycling pelican.
CamperBob2•1d ago
I still remember my jaw hitting the floor when the first DALL-E paper came out, with the baby daikon radish walking a dog. How the actual fuck...? Now we're probably all too jaded to fully appreciate the next advance of that magnitude, whatever that turns out to be.

E.g., the pelicans all look pretty cruddy including this one, but the fact that they are being delivered in .SVG is a bigger deal than the quality of the artwork itself, IMHO. This isn't a diffusion model, it's an autoregressive transformer imitating one. The wonder isn't that it's done badly, it's that it's happening at all.

datameta•4h ago
This makes me think of a reduction gear as a metaphor. At a high enough ratio, the torque is enormous but is put toward barely perceptible movement. Here, a huge amount of computation happens just to produce an SVG that resembles a pelican on a bicycle.
cdblades•20h ago
I don't love that this is the conversation, or that when these models bake these silly scenarios into their training data, everyone goes "see, pelican bike! superhuman intelligence!"

The point is never the pelican. The point is that if a thing has information about pelicans, and has information about bicycles, then why can't it combine those ideas? Is it because it's not intelligent?

CamperBob2•18h ago
"I'm taking this talking dog right back to the pound. It told me to go long on AAPL. Totally overhyped"
johnmaguire•6h ago
Just because it's impressive doesn't mean it has "super human intelligence" though.
simonw•1d ago
I like the Gemini 2.5 Pro ones a little more: https://simonwillison.net/2025/Jun/6/six-months-in-llms/#ai-...
AstroBen•1d ago
That's one good looking pelican
FergusArgyll•1d ago
Wow! pelican benchmark is now saturated
esperent•1d ago
Not until I can count the feathers, ask for a front view of the same pelican, then ask for it to be animated, all still using SVG.
dtech•1h ago
I wonder how much of that is because it's getting more and more included in training data.

We now need to start using walruses riding rickshaws

Terretta•1d ago
> It's only available via the newer Responses API

And in ChatGPT Pro.

teruakohatu•6h ago
Do you think a cycling pelican is still a valid cursory benchmark? By now, surely, discussions of it are in the training set.

There are quite a few on Google Image search.

On the other hand, they still seem to struggle!

gkamradt•1d ago
o3-pro is not the same as the o3-preview that was shown in Dec '24. OpenAI confirmed this for us. More on that here: https://x.com/arcprize/status/1932535380865347585
weinzierl•1d ago
Is there a way to infer likely quantization from the output? I mean, does quantization degrade output quality in ways that differ from other changes to model properties (e.g. size reduction or distillation)?
hapticmonkey•3h ago
What a great future we are building. If AI is supposed to run everything, everywhere... then there will be two, maybe three, AI companies. And nobody outside those companies knows how they work.
torginus•22m ago
I've wondered if some kind of smart pruning is possible during evaluation.

What I mean is: if a neuron implements a sigmoid function and its input weights are 10, 1, 2, 3, then when the first input is active, evaluating the other ones is mathematically pointless, since it doesn't change the result. Recursively, that means the inputs of the neurons feeding into those precursors are pointless as well.

I have no idea how feasible or practical it is to implement such an optimization at full network scale, but I think it's interesting to think about.
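
The saturation argument above can be checked numerically. One caveat: with a sigmoid the skipped inputs do change the result slightly, so the pruning would be approximate rather than exact (a quick sketch using the weights from the comment):

```python
import math

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))

weights = [10.0, 1.0, 2.0, 3.0]
inputs  = [1.0, 1.0, 1.0, 1.0]   # first (dominant-weight) input is active

full   = sigmoid(sum(w * x for w, x in zip(weights, inputs)))  # sigmoid(16)
pruned = sigmoid(weights[0] * inputs[0])                       # sigmoid(10), rest skipped

# sigmoid(10) ~ 0.99995 and sigmoid(16) ~ 0.9999999: the dominant weight
# already saturates the output, so skipping the remaining inputs costs
# only ~5e-5 of accuracy here.
assert abs(full - pruned) < 1e-4
```

Whether that error stays tolerable after propagating through many layers is the open question for doing this at full network scale.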

DanMcInerney•1d ago
I'm really hoping GPT-5 is a larger jump in metrics than the last several releases we've seen, like Claude 3.5 to Claude 4 or o3-mini-high to o3-pro. I will preface that by saying I've been building agents for about a year now, and despite the benchmarks showing only slight improvement, each new generation feels actively better at exactly the same tasks I gave the previous generation.

It would be interesting if there was a model that was specifically trained on task-oriented data. It's my understanding they're trained on all data available, but I wonder if it can be fine-tuned or given some kind of reinforcement learning on breaking down general tasks to specific implementations. Essentially an agent-specific model.

codingwagie•1d ago
I'm seeing big advances that aren't shown in the benchmarks: I can simply build software now that I couldn't build before. The level of complexity that I can manage and deliver is higher.
shmoogy•1d ago
Yeah, I kind of feel like I'm not moving as fast as I did, because the complexity and features grow: constant scope creep due to moving faster.
alightsoul•1d ago
Mind giving some examples?
motorest•1d ago
Not OP, but a couple of days ago I managed to vibecode my way through a small app that pulled data from a few services and ran a few validation checks. By itself that's not very impressive, but my input was literally: "this is what the responses from endpoints A, B, and C look like. This field included somewhere in A must be somewhere in the response from B, and the response from C must feature this and that from responses A and B. If the responses include links, check that they exist." To my surprise, it generated everything in one go. No retry or Agent-mode churn needed. In the not so distant past this would have required progressing through smaller steps, and I had to write tests to nudge Agent mode not to mess up. Not today.
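
The checks described are mechanical enough to sketch. All field and link names below are invented for illustration, and a real link-existence check would be an HTTP request rather than a format check:

```python
def validate(resp_a: dict, resp_b: dict, resp_c: dict) -> list[str]:
    """Cross-check three hypothetical endpoint responses, returning a list of errors."""
    errors = []
    # A field from A must appear somewhere among B's values.
    if resp_a.get("record_id") not in resp_b.values():
        errors.append("record_id from A missing in B")
    # C must echo fields from both A and B.
    if resp_c.get("source_id") != resp_a.get("record_id"):
        errors.append("C.source_id does not match A.record_id")
    if resp_c.get("status") != resp_b.get("status"):
        errors.append("C.status does not match B.status")
    # Stand-in for "check that links exist": only verify they are well-formed.
    for link in resp_c.get("links", []):
        if not link.startswith(("http://", "https://")):
            errors.append(f"malformed link: {link}")
    return errors

a = {"record_id": "r-1"}
b = {"ref": "r-1", "status": "ok"}
c = {"source_id": "r-1", "status": "ok", "links": ["https://example.com/r-1"]}
assert validate(a, b, c) == []
```
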
alightsoul•1d ago
what tools did you use?
motorest•23h ago
> what tools did you use?

Nothing fancy. Visual Studio Code + Copilot, agent mode, a couple prompt files, and that's it.

corysama•6h ago
I’m wrapping up doing literally the same thing. I did it step-by-step. But, for me there was also a process of figuring out how it should work.
IanCal•18h ago
A really important thing is the distinction between performance and utility.

Performance can improve linearly and utility can be massively jumpy. For some people/tasks performance can have improved but it'll have been "interesting but pointless" until it hits some threshold and then suddenly you can do things with it.

protocolture•6h ago
I am finding that my ability to use it to code, aligns almost perfectly with increasing token memory.
kevinqi•5h ago
yeah, the benchmarks are just a proxy. o3 was a step change where I started to really be able to build stuff I couldn't before
iLoveOncall•1h ago
Okay but this has all to do with the tooling and nothing to do with the models.
mofeien•1h ago
Can you explain why?
iLoveOncall•59m ago
You can write projects with LLMs thanks to tools that can analyze your local project's context, which didn't exist a year ago.

You could use Cursor, Windsurf, Q CLI, Claude Code, whatever else with Claude 3 or even an older model and you'd still get usable results.

It's not the models which have enabled "vibe coding", it's the tools.

An additional proof of that is that the new models focus more and more on coding in their releases, and other fields have not benefited at all from the supposed model improvements. That wouldn't be the case if improvements were really due to the models and not the tooling.

eru•1m ago
You need a certain quality of model to make 'vibe coding' work. For example, I think even with the best tooling in the world, you'd be hard pressed to make GPT 2 useful for vibe coding.
energy123•1d ago
That would require AIME 2024 going above 100%.

There were always going to be diminishing returns on these benchmarks. It's by construction: it's mathematically impossible for that not to happen. But that doesn't mean the models are improving at a slower pace.

Benchmark space is just a proxy for what we care about, but don't confuse it for the actual destination.

If you want, you can choose to look at a different set of benchmarks like ARC-AGI-2 or Epoch and observe greater than linear improvements, and forget that these easier benchmarks exist.

croddin•1d ago
There is still plenty of room for growth on the ARC-AGI benchmarks. ARC-AGI 2 is still <5% for o3-pro and ARC-AGI 1 is only at 59% for o3-pro-high:

"ARC-AGI-1:

* Low: 44%, $1.64/task

* Medium: 57%, $3.18/task

* High: 59%, $4.16/task

ARC-AGI-2:

* All reasoning efforts: <5%, $4-7/task

Takeaways:

* o3-pro in line with o3 performance

* o3's new price sets the ARC-AGI-1 Frontier"

- https://x.com/arcprize/status/1932535378080395332

saberience•1d ago
I'm not sure the ARC-AGI benchmarks are interesting: for one, they are image-based, and for another, most people I show them to have issues understanding them; in fact, I had issues understanding them myself.

Given that the models don't even see the versions we get to see, it doesn't surprise me that they have issues with these. It's not hard to make benchmarks so hard that neither humans nor LLMs can do them.

HDThoreaun•20h ago
ARC-AGI is the closest any widely used benchmark comes to an IQ test; it's straight logic/reasoning. Looking at the problem set, it's hard for me to choose a better benchmark for "when this is better than humans, we have AGI".
nipah•12h ago
"most people I show them too have issues understanding them, and in fact I had issues understanding them" ??? Those benchmarks are so extremely simple that they have essentially 100% human pass rates. Unless you mean "I couldn't grasp them immediately, but once I understood the point I could", I think you and your friends should see a neurologist. And I'm not mocking you, I mean it seriously: those tasks are extremely basic for any human brain, and even some other mammals can do them.
clbrmbr•2h ago
You may be above average intelligence. Those challenges are like classic IQ tests and I bet have a significant distribution among humans.
achierius•1h ago
No, they've done testing against samples from the general population.
jstummbillig•1d ago
It's hard to be 100% certain, but I am 90% certain that the benchmarks leveling off, at this point, should tell us that we are really quite dumb and simply not very good at either using or evaluating the technology (yet?).
alightsoul•1d ago
either that or the improvements aren't as large as before.
motorest•1d ago
> (...) at this point, should tell us that we are really quite dumb and simply not very good at either using or evaluating the technology (yet?).

I don't know about that. I think it's mainly because nowadays LLMs can output very inconsistent results. In some applications they can generate surprisingly good code, but during the same session they can also misstep and shit the bed while following a prompt for small changes. For example, sometimes I still get responses that outright delete critical code. I'm talking about things like asking "extract this section of your helper method into a new method" and in response the LLM deletes the app's main function. This doesn't happen all the time, or even in the same session for the same command. How does one verify these things?

XCSme•1d ago
I remember the saying that going from 90% to 99% is a 10x increase in accuracy, but 99% to 99.999% is a 1000x increase in accuracy.

Even though it's a large 10% increase first, then only a 0.999% increase.

jsjohnst•5h ago
The saying goes:

From 90% to 99% is a 10x reduction in error rate, but 99% to 99.999% is a 1000x decrease in error rates.
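The corrected saying is easy to verify numerically; a quick Python sketch:

```python
def error_reduction(acc_from: float, acc_to: float) -> float:
    """Factor by which the error rate shrinks when accuracy improves."""
    return (1 - acc_from) / (1 - acc_to)

print(error_reduction(0.90, 0.99))     # ~10x fewer errors
print(error_reduction(0.99, 0.99999))  # ~1000x fewer errors
```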

zmgsabst•5h ago
Sometimes it’s nice to frame it the other way, eg:

90% -> 1 error per 10

99% -> 1 error per 100

99.99% -> 1 error per 10,000

That can help to see the growth in accuracy, when the numbers start getting small (and why clocks are framed as 1 second lost per…).

bobbylarrybobby•4h ago
I think the proper way to compare probabilities/proportions is by odds ratios. 99:1 vs 99999:1. (So a little more than 1000x.) This also lets you talk about “doubling likelihood”, where twice as likely as 1/2=1:1 is 2:1=2/3, and twice as likely again is 4:1=4/5.
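A small Python sketch of the odds-ratio framing (the helper names are my own):

```python
def odds(p: float) -> float:
    """Convert a probability to odds in favor, p : (1 - p)."""
    return p / (1 - p)

def from_odds(o: float) -> float:
    """Convert odds back to a probability."""
    return o / (1 + o)

# 99% vs 99.999% as odds: 99:1 vs 99999:1, a bit more than 1000x apart.
ratio = odds(0.99999) / odds(0.99)

# "Twice as likely" as 1/2 (odds 1:1) is odds 2:1, i.e. 2/3;
# doubling again gives odds 4:1, i.e. 4/5.
p1 = from_odds(2 * odds(0.5))      # 2/3
p2 = from_odds(2 * 2 * odds(0.5))  # 4/5
```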
littlestymaar•2h ago
> I'm really hoping GPT5 is a larger jump in metrics than the last several releases we've seen like Claude3.5 - Claude4 or o3-mini-high to o3-pro.

This kind of expectation explains why there hasn't been a GPT-5 so far, and why we get a dumb numbering scheme instead.

At least Claude eventually decided not to care anymore and release Claude 4 even if the jump from 3.7 isn't particularly spectacular. We're well into the diminishing returns at this point, so it doesn't really make sense to postpone the major version bump, it's not like they're going to make a big leap again anytime soon.

Voloskaya•50m ago
> We're well into the diminishing returns at this point

Scaling laws, by definition, have always had diminishing returns because they are a power-law relationship with compute/params/data, but I am assuming you mean diminishing beyond what the scaling laws predict.

Unless you know the scale of e.g. o3-pro vs GPT-4, you can't definitively say that.

Because of that power law relationship, it requires adding a lot of compute/params/data to see a big jump, rule of thumb is you have to 10x your model size to see a jump in capabilities. I think OpenAI has stuck with the trend of using major numbers to denote when they more than 10x the training scale of the previous model.

* GPT-1 was 117M parameters.

* GPT-2 was 1.5B params (~10x).

* GPT-3 was 175B params (~100x GPT-2 and exactly 10x Turing-NLG, the biggest previous model).

After that it becomes blurrier as we switched to MoEs (and stopped publishing); the scaling laws for parameters apply to monolithic models, not really to MoEs.

But looking at compute we know GPT-3 was trained on ~10k V100, while GPT-4 was trained on a ~25k A100 cluster, I don't know about training time, but we are looking at close to 10x compute.

So to train a GPT-5-like model, we would expect ~250k A100, or ~150k B200 chips, assuming same training time. No one has a cluster of that size yet, but all the big players are currently building it.

So OpenAI might just be reserving GPT-5 name for this 10x-GPT-4 model.
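The ~10x rule of thumb can be sanity-checked against the parameter counts quoted above (the compute figures are rougher):

```python
# Parameter counts as quoted in the comment above.
params = {"GPT-1": 117e6, "GPT-2": 1.5e9, "GPT-3": 175e9}

print(params["GPT-2"] / params["GPT-1"])  # ~12.8x
print(params["GPT-3"] / params["GPT-2"])  # ~116.7x
```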

indigo945•26m ago
I have tried Claude 4.0 for agentic programming tasks, and it really outperforms Claude 3.7 by quite a bit. I don't follow the benchmarks - I find them a bit pointless - but anecdotally, Claude 4.0 can help me in a lot of situations where 3.7 would just flounder, completely misunderstand the problem and eventually waste more of my time than it saves.

Besides, I do think that Google Gemini 2.0 and its massively increased token memory was another "big leap". And that was released earlier this year, so I see no sign of development slowing down yet.

nickandbro•1d ago
"create a svg of a pelican riding on a bicycle"

https://www.svgviewer.dev/s/c3j6TEAP

in case anyone is interested

ikerino•1d ago
Am I right to say this doesn't look better than anything we've seen before?
ikerino•1d ago
https://www.latent.space/p/o3-pro

Have completed around a dozen chats with o3-pro so far. Can't say I'm impressed, output feels qualitatively very similar to regular o3.

Tried feeding in loads of context as suggested in the article but generally feels like a miss.

mark_l_watson•1d ago
I am still not willing to upgrade to a Pro account. I pay $20 a month for both Gemini and ChatGPT, and for what I need this is currently enough.

I have dreamed of having powerful AI ever since I read Bertram Raphael's great book Mind Inside Matter around 1978, getting hooked on AI research and sometimes practical applications for my life since then.

I can easily afford $200 for a Pro account but I get this nagging feeling that LLMs are not the final path to the powerful AI I have always dreamed of and I don't want to support this level of hype.

I have lived through a few AI winters and I worry that accountants will tally up the costs, environmental and money, versus the benefits and that we collectively have an 'oh shit' moment.

baq•3h ago
LLMs would be transformative technology if all progress stopped today if only for their NLP capabilities, but the recent models obviously do so much more than that. Winter isn’t coming in that regard; what might happen if models won’t get smarter from here is a race to the bottom in token prices, which would still be not bad at all for token buyers.
buu700•1h ago
Agreed. I've said exactly the same thing before. If GPT-4 from two years ago had turned out to be the endgame of LLM technology, and we collectively spent the following 20 years integrating those capabilities throughout the economy, even that would be a profound change to the world as we know it.

If we froze LLM technology at present-day capabilities and spent the next 20 years on that, I'd expect it to ultimately look transformative in a similar way to the Internet. I mean if you told me in fall 2022 that 2.5 years later I'd be building software by meta-prompting and meta-meta-prompting AI agents to write code overnight while I slept, I'd assume that we were fictional characters in a Black Mirror episode.

jwrallie•1h ago
I have trouble justifying the $20 tier when compared to other offers for similar service from other providers. I think OpenAI should, every once in a while, offer a new feature with no delay to their Plus tier, with lots of limits of course.
paul7986•6h ago
GPT needs way better image creation! Today I asked it to create a full image of a 2025 calendar highlighting all weekday workdays excluding federal holidays, with a legend at the bottom telling me how many weekday work hours are available within the noted criteria.

It created the image showing each month, but when you looked at each month it was so janky ... February 31st and other huge errors!

I'm not using image creation to make 3D art for fun or art's sake; I'm trying to use it to create utility images to share for discussion with friends and co-workers. The above is just one of many ways it fails when creating utility images!

mkl•5h ago
Wrong tool for the job. Try asking it to generate an SVG calendar with those features, or to generate Python code that produces an SVG calendar with those features.
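As a sketch of that approach, the data-computation half is only a few lines of Python; the 2025 federal-holiday dates below are hardcoded assumptions (all happen to fall on weekdays in 2025, so no observed-date shifting is needed), so verify them before relying on this:

```python
from datetime import date, timedelta

# Assumed 2025 US federal holiday dates; double-check before use.
HOLIDAYS_2025 = {
    date(2025, 1, 1),    # New Year's Day
    date(2025, 1, 20),   # MLK Day
    date(2025, 2, 17),   # Washington's Birthday
    date(2025, 5, 26),   # Memorial Day
    date(2025, 6, 19),   # Juneteenth
    date(2025, 7, 4),    # Independence Day
    date(2025, 9, 1),    # Labor Day
    date(2025, 10, 13),  # Columbus Day
    date(2025, 11, 11),  # Veterans Day
    date(2025, 11, 27),  # Thanksgiving
    date(2025, 12, 25),  # Christmas
}

def workdays(year: int, holidays: set[date]) -> int:
    """Count Mon-Fri dates in the year that are not holidays."""
    d, end = date(year, 1, 1), date(year, 12, 31)
    count = 0
    while d <= end:
        if d.weekday() < 5 and d not in holidays:
            count += 1
        d += timedelta(days=1)
    return count

days = workdays(2025, HOLIDAYS_2025)
print(days, "workdays,", days * 8, "hours at 8h/day")  # 250 workdays, 2000 hours
```

From there, rendering the numbers into an SVG calendar is a templating exercise rather than an image-generation one, which is exactly why it's the better tool for the job.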
catlifeonmars•1h ago
That makes sense. Naively, one would expect this to be the type of reasoning that it should “figure out” on its own.
vintagedave•26m ago
> Update to o4-mini (June 6, 2025)

> We are rolling back an o4-mini snapshot, that we deployed less than a week ago and intended to improve the length of model responses, because our automated monitoring tools detected an increase in content flags.

Does anyone know what it did or returned? I had not seen anything, nor have I read anything, about issues here.

honeybadger1•9m ago
Gemini still, for me, feels like the king for speed and accuracy.
eru•3m ago
I'm trying out o3-pro now with some algorithmic questions. It seems to be doing alright, but it's taking an awfully long time (as expected) and the UIs seem to time out a lot, especially the Android app and the MacOS desktop app. The web interface seems the least flaky, but that's not saying much.