frontpage.

Trusted access for the next era of cyber defense

https://openai.com/index/scaling-trusted-access-for-cyber-defense/
37•surprisetalk•2h ago

Comments

ofjcihen•1h ago
I love that, in the era of having LLMs summarize everything, all of these companies have opted for what I call the “YouTube streamer apology video” tone and length for these announcements.

This feels more or less like a way to get in the news after Anthropic's Mythos announcement by removing some guardrails. I’m still signing up though.

alopha•1h ago
That's a lot of waffle to try and say 'we've got a really scary next model coming too real soon, promise!'
guzfip•1h ago
More like they realized how much money they were wasting letting the proles generate slop and vibe code the same CRUD app they rewrote in 5 different JavaScript frameworks a few years back.

The money is in enterprise and government. The consumer market doesn’t remotely pay enough. It’s just the same story with Microsoft purposely making Windows an unusable mess because that’s not where they make their money. It was good to establish themselves, but that market is getting dumped.

flyinglizard•1h ago
Wait six months, get the Chinese version.
everlier•51m ago
Changing as we speak; z.ai is the first one to show differential pricing
Phelinofist•1h ago
Sounds totally reasonable to trust OpenAI and the sociopath sama.
iammjm•1h ago
"trusted" + openai just simply doesn't compute for me any more
mmooss•1h ago
This approach means only a tiny portion of the population will ever qualify. Doesn't that make everyone else beholden to those few, who are beholden to OpenAI?

Another solution is to make software makers responsible and liable for the output of their products. It's long been a problem that there is little legal responsibility, but we shouldn't just accept it. If Ford makes exploding cars, they are liable. If OpenAI makes software that endangers people, it should be the same.

> Democratized access: Our goal is to make these tools as widely available as possible while preventing misuse. We design mechanisms which avoid arbitrarily deciding who gets access for legitimate use and who doesn’t. That means using clear, objective criteria and methods – such as strong KYC and identity verification – to guide who can access more advanced capabilities and automating these processes over time.

KYC isn't democratic and doesn't prevent arbitrary favoritism, it's the opposite: It's used to control people and to favor friends and exclude enemies.
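The "clear, objective criteria" mechanism being debated above can be made concrete with a toy sketch. Everything here is hypothetical (the tier names, fields, and checks are invented for illustration, not taken from OpenAI's announcement), but it shows the shape of what the announcement describes: access decided by a fixed rule over verification status rather than case-by-case judgment.

```python
# Hypothetical sketch of criteria-based access gating. The claim of
# "non-arbitrary" access is that this function IS the whole policy:
# the same inputs always yield the same tier, with no human discretion.

from dataclasses import dataclass

@dataclass
class Applicant:
    kyc_verified: bool       # passed know-your-customer checks
    identity_verified: bool  # passed identity verification
    org_vetted: bool         # e.g. a known security vendor or CERT

def access_tier(a: Applicant) -> str:
    """Map verification status to a (made-up) capability tier."""
    if a.kyc_verified and a.identity_verified and a.org_vetted:
        return "cyber-permissive"
    if a.kyc_verified and a.identity_verified:
        return "standard"
    return "restricted"

print(access_tier(Applicant(True, True, False)))  # standard
```

The criticism in the comment above still applies unchanged: the rule itself (who counts as "vetted") is set by whoever writes the function.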

luma•1h ago
So who is at fault in your solution, the org who created and shipped the software bug, or the company that discovered it?

I don't see how OpenAI is Ford in your analogy as OpenAI didn't make the software that blew up.

sureMan6•1h ago
> Another solution is to make software makers responsible and liable for the output of their products. It's long been a problem that there is little legal responsibility, but we shouldn't just accept it. If Ford makes exploding cars, they are liable. If OpenAI makes software that endangers people, it should be the same.

That kind of thinking is exactly why LLMs are so censored, because people think OAI should be liable if someone uses chatgpt to commit cyber crimes

How about cyber crimes are already illegal and we just punish whoever uses the new tools to commit crimes instead of holding the tool maker liable

This gets complex if LLMs enable children to commit complex crimes but that's different from just outright restricting the tool for everyone because someone might misuse it

0x3f•57m ago
There's always some wedge issue that means "don't punish the toolmaker" is not politically viable. You can pick from guns to legal drugs to illegal drugs to all kinds of emotive things.

And once the wedge is in and the concept of maker responsibility is planted, it expands to people's pet issues, obviously.

The actual line of who gets punished just ends up at some equilibrium in the middle. Largely arbitrarily.

Havoc•1h ago
>democratized access

>partner with a limited set of organizations for more cyber-permissive models.

I get where they're going with this, but it's still rather hilarious that they had to get a corporate-speak expert to pull off the mental gymnastics needed for the announcement

0x3f•1h ago
It must be representative democracy! And our representative is... Larry Ellison. Oh no.
bunnywantspluto•1h ago
It seems like local LLMs will get popular for cybersecurity if this trend of locking access to models continues.
alephnerd•3m ago
Not really. Not performant enough. Most organizations interested in using a foundation model for security would either purchase the model directly or buy from a vendor who adds their special sauce or context to the model
ACCount37•1h ago
Too little too late. OpenAI's shit was nearly worthless for cybersec for what, a year already?

ChatGPT 5.x just tries to deny everything remotely cybersecurity-related - to the point that it would at times rather deny vulnerabilities exist than go poke at them. Unless you get real creative with prompting and basically jailbreak it. And it was this bad BEFORE they started messing around with 5.4 access specifically.

And that was ChatGPT 5.4. A model that, by all metrics and all vibes, doesn't even have a decisive advantage over Opus 4.6 - which just does whatever the fuck you want out of the box.

What I'm most afraid of is that Anthropic is going to snort whatever it is that OpenAI is high on, and lock down Mythos the way OpenAI is locking down everything.

jruz•55m ago
That’s the whole point of this variant of the model, it won’t have those guardrails.
ACCount37•47m ago
Yes. But "perform a humiliation ritual of KYC to access the actual model instead of the nerfed version of it that's so neurotic about cybersec you have to sink 400 tokens into getting it to a usable baseline" does not inspire any confidence at all.
alephnerd•2m ago
> OpenAI's shit was nearly worthless for cybersec for what, a year already

Most AI for Cybersecurity companies use a mixture of models depending on iteration and testing.

zb3•1h ago
> Ultimately, we aim to make advanced defensive capabilities available to legitimate actors large and small, including those responsible for protecting critical infrastructure, public services, and the digital systems people depend on every day.

Translation: we aim to make defensive capabilities available to the US and its vassals so they can protect critical infrastructure, while ensuring that independent countries can't protect against the US attacking their critical infrastructure.

Fortunately, this plan will backfire - the model capability is exaggerated and these "safeguards" don't reliably work.

gavinray•1h ago
I completed the "Trusted Access" verification, but it seems to have unlocked nothing in the OpenAI API or Codex models.

Just FYI for others.

2001zhaozhao•47m ago
Requiring verified access is a good idea to mitigate risks from hacking while still giving people access to the latest models. Take notes, Anthropic.
striking•35m ago
A 5.4 spin with slightly different guardrails is not "access to the latest models". We know this to be true from the article because they have a section entitled "Looking ahead to our upcoming model release and beyond". I wonder if they didn't just feel like they were caught out by Mythos.
Avicebron•5m ago
I don't think they've added enough cyber. My cyber workflow demands more trusted access for cyber so that I can use these cyber-permissive models for my cybersecurity.

Claude Code Routines

https://code.claude.com/docs/en/routines
307•matthieu_bl•5h ago•196 comments

Rare concert recordings are landing on the Internet Archive

https://techcrunch.com/2026/04/13/thousands-of-rare-concert-recordings-are-landing-on-the-interne...
456•jrm-veris•8h ago•135 comments

The Orange Pi 6 Plus

https://taoofmac.com/space/reviews/2026/04/11/1900
79•rcarmo•3d ago•46 comments

5NF and Database Design

https://kb.databasedesignbook.com/posts/5nf/
107•petalmind•6h ago•45 comments

Turn your best AI prompts into one-click tools in Chrome

https://blog.google/products-and-platforms/products/chrome/skills-in-chrome/
69•xnx•5h ago•35 comments

Let's talk space toilets

https://mceglowski.substack.com/p/lets-talk-space-toilets
109•zdw•1d ago•39 comments

I wrote to Flock's privacy contact to opt out of their domestic spying program

https://honeypot.net/2026/04/14/i-wrote-to-flocks-privacy.html
421•speckx•4h ago•177 comments

guide.world: A compendium of travel guides

https://guide.world/
49•firloop•5d ago•8 comments

The dangers of California's legislation to censor 3D printing

https://www.eff.org/deeplinks/2026/04/dangers-californias-legislation-censor-3d-printing
104•salkahfi•23h ago•170 comments

H.R.8250 – To require operating system providers to verify the age of any user

https://www.congress.gov/bill/119th-congress/house-bill/8250/all-info
24•cft•27m ago•3 comments

Show HN: Plain – The full-stack Python framework designed for humans and agents

https://github.com/dropseed/plain
44•focom•5h ago•18 comments

Tell HN: Fiverr left customer files public and searchable

204•morpheuskafka•3h ago•29 comments

OpenSSL 4.0.0

https://github.com/openssl/openssl/releases/tag/openssl-4.0.0
163•petecooper•5h ago•49 comments

Show HN: LangAlpha – what if Claude Code was built for Wall Street?

https://github.com/ginlix-ai/langalpha
85•zc2610•7h ago•27 comments

Backblaze has stopped backing up OneDrive and Dropbox folders and maybe others

https://rareese.com/posts/backblaze/
887•rrreese•14h ago•538 comments

Troubleshooting Email Delivery to Microsoft Users

https://rozumem.xyz/posts/14
17•rozumem•2d ago•4 comments

Civilization Is Not the Default. Violence Is

https://apropos.substack.com/p/civilization-is-a-public-good
9•paulpauper•23m ago•3 comments

jj – the CLI for Jujutsu

https://steveklabnik.github.io/jujutsu-tutorial/introduction/what-is-jj-and-why-should-i-care.html
468•tigerlily•12h ago•403 comments

Gas Town: From Clown Show to v1.0

https://steve-yegge.medium.com/gas-town-from-clown-show-to-v1-0-c239d9a407ec
57•martythemaniak•3h ago•70 comments

Introspective Diffusion Language Models

https://introspective-diffusion.github.io/
215•zagwdt•14h ago•41 comments

Responsive images in Hugo using Render Hooks

https://mijndertstuij.nl/posts/hugo-responsive-images-using-render-hooks/
6•mijndert•5d ago•0 comments

Carol's Causal Conundrum: a zine intro to causally ordered message delivery

https://decomposition.al/zines/
38•evakhoury•4d ago•3 comments

DaVinci Resolve – Photo

https://www.blackmagicdesign.com/products/davinciresolve/photo
1032•thebiblelover7•20h ago•260 comments

YouTube now world's largest media company, topping Disney

https://www.hollywoodreporter.com/business/digital/youtube-worlds-largest-media-company-2025-tops...
236•bookofjoe•5d ago•181 comments

A new spam policy for “back button hijacking”

https://developers.google.com/search/blog/2026/04/back-button-hijacking
807•zdw•19h ago•457 comments

Lean proved this program correct; then I found a bug

https://kirancodes.me/posts/log-who-watches-the-watchers.html
375•bumbledraven•22h ago•167 comments

The M×N problem of tool calling and open-source models

https://www.thetypicalset.com/blog/grammar-parser-maintenance-contract
120•remilouf•5d ago•41 comments

Nucleus Nouns

https://ben-mini.com/2026/nucleus-nouns
54•bewal416•4d ago•14 comments

Free, fast diagnostic tools for DNS, email authentication, and network security

https://mrdns.com/
6•dogsnews•2h ago•0 comments