> API deployments require different safeguards and we are working closely with partners and customers on the safety and security requirements for serving it at scale.
And now this. I guess one day counts as "very soon." But I wonder what that meant for these safeguards and security requirements.
> In 2023, the company was preparing to release its GPT-4 Turbo model. As Sutskever details in the memos, Altman apparently told Murati that the model didn’t need safety approval, citing the company’s general counsel, Jason Kwon. But when she asked Kwon, over Slack, he replied, “ugh . . . confused where sam got that impression.”
Lots of cases where Altman has not been entirely forthcoming about how important (or not) safety is for OpenAI. https://www.newyorker.com/magazine/2026/04/13/sam-altman-may... (https://archive.is/a2vqW)
Where I live, for example, a lot of doctors are using ChatGPT both to look up diagnoses and to communicate with non-English-speaking patients.
Even you yourself might use it to learn about a disease, real-world threats, statistics, self-defense techniques, etc.
Otherwise it's like blocking Wikipedia on the grounds that the knowledge could be used for harm, or that reading it might change your mind.
Freedom to read about things is good.
I think that's the problem. Who's going to take responsibility when ChatGPT hallucinates or mistranslates a patient's diagnosis and they die? For OpenAI, this would at best be a PR nightmare, so that's why they have safeguards.
If I had a choice between a doctor who used AI and one who didn't, I would much prefer one that did...
You are an AI assistant accessed via an API.
Knowledge cutoff: 2024-06
Current date: 2026-04-24
# Desired oververbosity for the final answer (not analysis): 5
An oververbosity of 1 means the model should respond using only the minimal content necessary to satisfy the request, using concise phrasing and avoiding extra detail or explanation.
An oververbosity of 10 means the model should provide maximally detailed, thorough responses with context, explanations, and possibly multiple examples.
The desired oververbosity should be treated only as a *default*. Defer to any user or developer requirements regarding response length, if present.

Easiest Turing test ever...
Donald Trump won the 2024 U.S. presidential election.

A better test is something like "what is the latest version of NumPy?"
You're probably better off asking something like "what are the most notable changes in version X of NumPy?" and repeating until you find the version at which it says "I don't know" or hallucinates.
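The repeat-until-it-fails probe described above can be sketched as a small loop. Everything here is illustrative: `ask_model` is a hypothetical stand-in for a real chat-completion call, and the stub below only models the "I don't know" case, not outright hallucination (which you would still have to catch by checking answers against real release notes).

```python
# Hypothetical cutoff probe: ask about each release in order and stop
# at the first one the model admits it doesn't know.
def find_cutoff_version(versions, ask_model):
    """Return the last version the model claims to know about."""
    known = None
    for v in versions:
        answer = ask_model(f"What are the most notable changes in NumPy {v}?")
        if "i don't know" in answer.lower():
            break
        known = v
    return known

# Stub model that only "knows" releases up to 1.26 (assumed for the demo).
def fake_model(prompt):
    for v in ("1.24", "1.25", "1.26"):
        if v in prompt:
            return f"NumPy {v} added ..."
    return "I don't know about that release."

print(find_cutoff_version(["1.24", "1.25", "1.26", "2.0", "2.1"], fake_model))
# → 1.26
```

In practice you would replace `fake_model` with an actual API call and feed in the real, dated NumPy release list so the last known version maps to a cutoff date.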
Just ask it about an event that happened shortly before Dec 1, 2025. Sporting event, preferably.
Could be they do it intentionally, to encourage more tool calls/searches, or for tuning reasons.
The proper way to figure out the real cutoff date is to ask the model about things that did not exist or did not happen before the date in question.
A few quick tests suggest 5.5's general knowledge cutoff is still around early 2025.
BEGIN TRAN;
-- put the query here
COMMIT;
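The same try-it-inside-a-transaction pattern, sketched with Python's stdlib `sqlite3` for a runnable illustration (the snippet above is T-SQL, where `BEGIN TRAN` opens the transaction; SQLite opens one implicitly before a write). The table and values are made up for the demo; the point is that a `ROLLBACK` instead of `COMMIT` leaves the data untouched if the query turns out to be wrong.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER)")
conn.execute("INSERT INTO accounts VALUES (1, 100)")
conn.commit()

# Run the risky query; sqlite3 has implicitly opened a transaction.
conn.execute("UPDATE accounts SET balance = balance - 999")
print(conn.execute("SELECT balance FROM accounts").fetchone()[0])  # inside txn: -899

# The result looks wrong, so undo it instead of committing.
conn.rollback()
print(conn.execute("SELECT balance FROM accounts").fetchone()[0])  # after rollback: 100
```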
I feel like I haven’t had to prod a model to actually do what I told it to in a while, so that was a shock. I guess it does use fewer tokens that way; it's just annoying, when I’m paying for the “cutting edge” model, to have it be lazy on me like that.
This was in Cursor; the model popped up in the model selector, so I tried it out.