frontpage.

OpenAI releases GPT-5.5 and GPT-5.5 Pro in the API

https://developers.openai.com/api/docs/changelog
96•arabicalories•1h ago

Comments

throw03172019•1h ago
Faster than anticipated because of the DeepSeek release?
swyx•39m ago
More like they wanted to release it yesterday, but had some last-minute flags they wanted to hold off on.
Jhonwilson•23m ago
ok not bad
m3kw9•16m ago
Maybe, but no one serious is using DeepSeek.
XCSme•9m ago
Doubt it, DeepSeek v4 is quite underwhelming.
pants2•1h ago
Is anyone here actually using pro models through the API? I'd be very curious what the use-case is.
ComputerGuru•59m ago
Yes? The same reason you would use it via the tooling.
chadash•58m ago
Yes. High-value work where cost (mostly) doesn't matter. For example, if I need to look over a legal doc for possible mistakes (part of a workflow I have), it doesn't matter (in my case) whether it costs $0.01 or $10.00, since it's a somewhat infrequent event. So I'll pay $9.99 more, even if the model is only slightly better.
freedomben•54m ago
Indeed, even just Terms of Service and Privacy Policy work. Infrequent enough that cost isn't an issue, but model quality absolutely is
bogtog•23m ago
I'm surprised I never heard people talking about using the -Pro variants, even though their rates ($125-175/M?) aren't drastically higher than old Opus ($75/M), which people did seem to use
sigmoid10•1h ago
Huh. Yesterday they said:

>API deployments require different safeguards and we are working closely with partners and customers on the safety and security requirements for serving it at scale.

And now this. I guess one day counts as "very soon." But I wonder what that meant for these safeguards and security requirements.

embedding-shape•1h ago
The same person who has repeatedly lied about safety is still running the company, so I'm not sure why anyone would expect anything different from them going forward. Previous example:

> In 2023, the company was preparing to release its GPT-4 Turbo model. As Sutskever details in the memos, Altman apparently told Murati that the model didn’t need safety approval, citing the company’s general counsel, Jason Kwon. But when she asked Kwon, over Slack, he replied, “ugh . . . confused where sam got that impression.”

Lots of cases where Altman has not been entirely forthcoming about how important (or not) safety is for OpenAI. https://www.newyorker.com/magazine/2026/04/13/sam-altman-may... (https://archive.is/a2vqW)

simonw•59m ago
I wonder if the fact that GPT-5.5 was already available in their Codex-specific API which they had explicitly told people they were allowed to use for other purposes - https://simonwillison.net/2026/Apr/23/gpt-5-5/#the-openclaw-... - accelerated this release!
FINDarkside•57m ago
When stuff is delayed due to "safeguards" it just means they don't think they have the compute to release it right now.
redsaber•58m ago
Not available for GitHub Copilot Pro (only in Pro+, Business, and Enterprise). I'm really feeling now that the era of subsidized AI is over.
skeledrew•19m ago
This is where the emigration to Chinese providers begins.
sunaookami•6m ago
With a 7.5x multiplier and even that is a promo!! Microsoft is insane! https://github.blog/changelog/2026-04-24-gpt-5-5-is-generall...
rvnx•54m ago
These safeguards are a very bad habit. The "safety" filters are counter-productive and can even be dangerous.

Where I live, for example, a lot of doctors are using ChatGPT both to research diagnoses and to communicate with non-English-speaking patients.

The same goes for you, when you want to learn about a disease, about real-world threats, statistics, self-defense techniques, etc.

Otherwise it's like blocking Wikipedia on the grounds that the knowledge there could be used to do harmful things, or that reading it might change your mind.

Freedom to read about things is good.

timedude•36m ago
Yup, deliberately lobotomizing the model
NicuCalcea•32m ago
> a lot of doctors are using ChatGPT both to research diagnoses and to communicate with non-English-speaking patients

I think that's the problem. Who's going to take responsibility when ChatGPT hallucinates or mistranslates a patient's diagnosis and they die? For OpenAI, that would at best be a PR nightmare, so that's why they have safeguards.

hellohello2•30m ago
The doctor would be responsible.

If I had the choice between a doctor that used AI and one that didn't, I would much prefer the one that did...

NicuCalcea•3m ago
The doctor would be responsible for the accuracy of their translation tool, something they can't verify but you expect them to use?
czk•49m ago
The API page lists the knowledge cutoff as Dec 01, 2025, but when prompted, the model says June 2024.

   Knowledge cutoff: 2024-06
   Current date: 2026-04-24

   You are an AI assistant accessed via an API.
htrp•44m ago
Can you really believe what the model says about itself? (A lot of prior models' API pages list knowledge cutoffs of June 2024; maybe the model picks that up?)
czk•26m ago
You can't, but it's pretty reproducible across the API, Codex, and other agents, so I just thought it was odd. Full text it gives:

   Knowledge cutoff: 2024-06
   Current date: 2026-04-24

   You are an AI assistant accessed via an API.

   # Desired oververbosity for the final answer (not analysis): 5
   An oververbosity of 1 means the model should respond using only the minimal content necessary to satisfy the request, using concise phrasing and avoiding extra detail or explanation.
   An oververbosity of 10 means the model should provide maximally detailed, thorough responses with context, explanations, and possibly multiple examples.
   The desired oververbosity should be treated only as a *default*. Defer to any user or developer requirements regarding response length, if present.
swyx•39m ago
Can you test it on, say, who won the 2024 US election?
ghurtado•34m ago
I can't really think of a less reliable test for anything: you're making a random guess about something that had roughly 50/50 odds to begin with.

Easiest Turing test ever...

himata4113•33m ago
ask it 10 times.
pixel_popping•27m ago
MASSIVE ADVERSARIAL x50
czk•25m ago
with thinking off and tools disabled:

  Donald Trump won the 2024 U.S. presidential election.
WarmWash•21m ago
Usually the labs do some kind of post-training on major events so the model isn't totally lost.

A better test is something like "what is the latest version of NumPy?"

bakugo•16m ago
That sort of test isn't super reliable either, in my experience.

You're probably better off asking something like "what are the most notable changes in version X of NumPy?" and repeating until you find the version at which it says "I don't know" or hallucinates.
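
The repeat-until-unknown probe described above amounts to a binary search over release history. A minimal Python sketch, assuming a monotone `model_knows(version)` oracle (stubbed out here; in practice each call would be a model query plus a human or judge check of the answer):

```python
def find_cutoff_version(versions, model_knows):
    """Bisect a chronologically sorted release list to find the last
    version the model can describe accurately. Assumes monotone
    knowledge: if the model knows a version, it knows all earlier ones."""
    lo, hi = 0, len(versions) - 1
    last_known = None
    while lo <= hi:
        mid = (lo + hi) // 2
        if model_knows(versions[mid]):
            last_known = versions[mid]
            lo = mid + 1  # known; the cutoff is at or after this version
        else:
            hi = mid - 1  # unknown or hallucinated; the cutoff is earlier
    return last_known

# Stub oracle: pretend the model's knowledge ends at NumPy 2.1.
releases = ["1.26", "2.0", "2.1", "2.2", "2.3"]
known = {"1.26", "2.0", "2.1"}
print(find_cutoff_version(releases, lambda v: v in known))  # → 2.1
```

With n releases this takes O(log n) model queries instead of walking the versions one at a time.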

BeetleB•26m ago
I don't know why this keeps coming up. This has always been the least reliable way to know the cutoff date (and indeed, it may well have been trained on sites with comments like these!)

Just ask it about an event that happened shortly before Dec 1, 2025. Sporting event, preferably.

czk•21m ago
The model obviously knows things after the reported date, but it's curious that it reports that date so consistently.

Could be they do it intentionally to encourage more tool calls/searches, or for tuning reasons.

bakugo•20m ago
Models don't know what their cutoff dates are unless told via a system prompt.

The proper way to figure out the real cutoff date is to ask the model about things that did not exist or did not happen before the date in question.

A few quick tests suggest 5.5's general knowledge cutoff is still around early 2025.

czk•19m ago
I wonder if they put an older cutoff date into the prompt intentionally, so that when asked about more current events it leans towards tool calls / web searches, for tuning reasons.
soco•15m ago
Stupid question: wouldn't it then search the web for that event?
bakugo•14m ago
If you have web search enabled, sure. But if you're testing on the API, you can just not enable it.
MallocVoidstar•5m ago
OpenAI does tell the model the current date via API, so it's odd for them not to also tell the model its cutoff
neosat•46m ago
Enterprise user here, and still seeing only 5.4. Yesterday's announcement said it would take a few hours to roll out to everybody. OpenAI needs better GTM to set the right expectations.
neosat•23m ago
Just refreshed and see 5.5 now - yay! Love the speedy resolution ;) Thanks folks, I'll complain faster next time....
gigatexal•42m ago
What's the real-world comparison to Opus 4.7, fellow coders?
Jhonwilson•24m ago
that is great news
pillefitz•22m ago
Please consider the ethical aspects of giving money to OpenAI versus alternatives.
wincy•22m ago
Just tried it out for a prod issue I was experiencing. Claude never does this sort of thing: I had it write an update statement after doing some troubleshooting, and I said "okay, let's write this in a transaction with a rollback," and GPT-5.5 gave me the old "okay,"

   BEGIN TRAN;

   -- put the query here

   COMMIT;

I feel like I haven't had to prod a model to actually do what I told it to in a while, so that was a shock. I guess it does use fewer tokens that way; it's just annoying, when I'm paying for the "cutting edge" model, to have it be lazy on me like that.

This was in Cursor; the model popped up in the model selector, so I tried it out.
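
For reference, the pattern being asked for — run the update inside a transaction and roll back if a post-update sanity check fails — sketched in Python with sqlite3 rather than T-SQL, using a hypothetical `accounts` table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER)")
conn.execute("INSERT INTO accounts VALUES (1, 100)")
conn.commit()

try:
    # `with conn:` opens a transaction: commit on success, rollback on exception.
    with conn:
        conn.execute("UPDATE accounts SET balance = balance - 150 WHERE id = 1")
        (balance,) = conn.execute(
            "SELECT balance FROM accounts WHERE id = 1"
        ).fetchone()
        if balance < 0:
            # Raising inside the block aborts the whole transaction.
            raise ValueError("balance went negative; rolling back")
except ValueError as e:
    print(e)

# The update was rolled back, so the original balance survives.
print(conn.execute("SELECT balance FROM accounts WHERE id = 1").fetchone()[0])  # → 100
```

The T-SQL equivalent wraps the statement in `BEGIN TRAN` with a `BEGIN TRY ... END TRY BEGIN CATCH ROLLBACK; END CATCH` block rather than a bare `COMMIT`.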

syspec•10m ago
Can't tell if above is good or bad.
XCSme•9m ago
I feel like the last 2-3 generations of models (after gpt-5.3-codex) didn't really improve much; they just changed stuff around and made different tradeoffs.
pixel_popping•6m ago
I disagree; it improved enormously, especially at staying consistent on long tasks. I have a task that has been running for 32 days (400M+ tokens) via Codex, and that's only been possible since gpt-5.4.
ericpauley•3m ago
Has that task accomplished anything yet?

After waiting years for justice, many Purdue opioid victims are defeated

https://www.reuters.com/legal/litigation/after-waiting-years-justice-many-purdue-opioid-victims-a...
2•tartoran•2m ago•0 comments

Software Is Liquid

https://www.thingelstad.com/2026/04/20/software-is-liquid.html
1•speckx•2m ago•0 comments

The sounds you never noticed in Bohemian Rhapsody

https://www.youtube.com/watch?v=XLqhOEzpWyo
1•fallinditch•3m ago•0 comments

Prusa Core One INDX – Toolchanger with 8 Nozzles

https://blog.prusa3d.com/prusa-core-one-indx-orders-now-open_134915/
1•_Microft•4m ago•0 comments

Show HN: Nimbus – Browser with Claude Code UX

https://usenimbus.app/
1•pycassa•4m ago•0 comments

XChat, X's standalone messaging app, now available

https://9to5mac.com/2026/04/24/xchat-xs-standalone-messaging-app-launching-on-iphone-and-ipad-nex...
1•thm•5m ago•0 comments

Google to invest up to $40B in Anthropic in cash and compute

https://techcrunch.com/2026/04/24/google-to-invest-up-to-40b-in-anthropic-in-cash-and-compute/
2•elpakal•5m ago•0 comments

Mind the van Emden Gap

https://blog.fogus.me/llm/van-emden.html
1•janvdberg•8m ago•0 comments

Tell HN: Claude 4.7 is ignoring stop hooks

2•LatencyKills•11m ago•0 comments

Which SaaS Categories Is AI Replacing?

https://www.2power16.com/fear/2026-q1/
1•JumpingTortoise•12m ago•1 comments

MenteDB – open-source memory database for AI agents (Rust)

https://github.com/nambok/mentedb
1•mentedb•15m ago•0 comments

Ask HN: Honest question about text-to-CAD. Is it BS?

1•giasiara•16m ago•0 comments

Show HN: Lilo – a self-hosted, open-source intelligent personal OS

https://github.com/abi/lilo
1•abi•19m ago•2 comments

Ask HN: Anyone still using JetBrains products today?

1•zkid18•19m ago•0 comments

Intelligence Brownouts

https://jsfour.substack.com/p/intelligence-brown-outs
2•js4•19m ago•0 comments

I built a lightweight alternative to Docker for LAMP multisite hosting

https://github.com/albarreto/lampdeck-v2
2•albarreto•21m ago•1 comments

How CRDTs and sync engines keep realtime lists ordered with fractional indexing

https://liveblocks.io/blog/how-crdts-and-sync-engines-keep-realtime-lists-ordered-with-fractional...
1•Eduard•23m ago•0 comments

Lazard's Levelized Cost of Energy (2025) [pdf]

https://www.lazard.com/media/5tlbhyla/lazards-lcoeplus-june-2025-_vf.pdf
1•lawrenceyan•24m ago•0 comments

Trace – a compiled language where every value knows why it has its value

https://github.com/the-pro-coder/trace-lang
1•thepro77•25m ago•0 comments

TerraPower starts construction of first US utility-scale advanced nuclear plant

https://world-nuclear-news.org/articles/terrapower-starts-construction-of-first-us-utility-scale-...
1•mpweiher•27m ago•0 comments

What Color Was The Sky – yesterday's sky above your city, from real data

https://sinceyouarrived.world/sky
1•mwheelz•27m ago•0 comments

Deutsch–Jozsa Algorithm

https://en.wikipedia.org/wiki/Deutsch%E2%80%93Jozsa_algorithm
1•tosh•29m ago•0 comments

How you implemented your Python decorator is wrong

https://github.com/GrahamDumpleton/wrapt/blob/develop/blog/01-how-you-implemented-your-python-dec...
1•Tomte•31m ago•0 comments

Shortest Sudoku Solver

https://web.archive.org/web/20070208100501/http://markbyers.com/moinmoin/moin.cgi/ShortestSudokuS...
1•tosh•31m ago•0 comments

IOSurface Kernel Teardown Panic (macOS 15.x / 26.x)

https://github.com/MEKOD/not-a-security-issue
1•p_ing•31m ago•0 comments

To become a good C programmer (2011)

https://fabiensanglard.net/c/
1•downbad_•31m ago•1 comments

GPT-5.5 has pulled ahead of Opus for accounting and finance tasks

https://twitter.com/MaxMinsker/status/2047760245389205865
1•MaxMinsker•31m ago•0 comments

How good is Mac Studio M3 Ultra for Trillion param models like DeepSeekv4?

2•namegulf•31m ago•1 comments

The Alignment Problem in Your Government

https://kunnas.com/articles/alignment-problem-in-your-government
1•ekns•34m ago•0 comments

My audio interface has SSH enabled by default

https://hhh.hn/rodecaster-duo-fw/
7•hhh•35m ago•0 comments