frontpage.

Start all of your commands with a comma

https://rhodesmill.org/brandon/2009/commands-with-comma/
133•theblazehen•2d ago•38 comments

OpenCiv3: Open-source, cross-platform reimagining of Civilization III

https://openciv3.org/
662•klaussilveira•14h ago•198 comments

The Waymo World Model

https://waymo.com/blog/2026/02/the-waymo-world-model-a-new-frontier-for-autonomous-driving-simula...
948•xnx•19h ago•550 comments

What Is Ruliology?

https://writings.stephenwolfram.com/2026/01/what-is-ruliology/
42•helloplanets•4d ago•39 comments

How we made geo joins 400× faster with H3 indexes

https://floedb.ai/blog/how-we-made-geo-joins-400-faster-with-h3-indexes
121•matheusalmeida•2d ago•31 comments

Unseen Footage of Atari Battlezone Arcade Cabinet Production

https://arcadeblogger.com/2026/02/02/unseen-footage-of-atari-battlezone-cabinet-production/
51•videotopia•4d ago•1 comment

Jeffrey Snover: "Welcome to the Room"

https://www.jsnover.com/blog/2026/02/01/welcome-to-the-room/
15•kaonwarb•3d ago•19 comments

Show HN: Look Ma, No Linux: Shell, App Installer, Vi, Cc on ESP32-S3 / BreezyBox

https://github.com/valdanylchuk/breezydemo
228•isitcontent•14h ago•25 comments

Monty: A minimal, secure Python interpreter written in Rust for use by AI

https://github.com/pydantic/monty
221•dmpetrov•14h ago•117 comments

Show HN: I spent 4 years building a UI design tool with only the features I use

https://vecti.com
330•vecti•16h ago•143 comments

Hackers (1995) Animated Experience

https://hackers-1995.vercel.app/
492•todsacerdoti•22h ago•242 comments

Sheldon Brown's Bicycle Technical Info

https://www.sheldonbrown.com/
380•ostacke•20h ago•94 comments

Microsoft open-sources LiteBox, a security-focused library OS

https://github.com/microsoft/litebox
359•aktau•20h ago•181 comments

Show HN: If you lose your memory, how to regain access to your computer?

https://eljojo.github.io/rememory/
288•eljojo•17h ago•169 comments

An Update on Heroku

https://www.heroku.com/blog/an-update-on-heroku/
411•lstoll•20h ago•278 comments

Vocal Guide – belt sing without killing yourself

https://jesperordrup.github.io/vocal-guide/
23•jesperordrup•4h ago•14 comments

PC Floppy Copy Protection: Vault Prolok

https://martypc.blogspot.com/2024/09/pc-floppy-copy-protection-vault-prolok.html
63•kmm•5d ago•5 comments

Dark Alley Mathematics

https://blog.szczepan.org/blog/three-points/
90•quibono•4d ago•21 comments

Was Benoit Mandelbrot a hedgehog or a fox?

https://arxiv.org/abs/2602.01122
18•bikenaga•3d ago•3 comments

How to effectively write quality code with AI

https://heidenstedt.org/posts/2026/how-to-effectively-write-quality-code-with-ai/
255•i5heu•17h ago•196 comments

Delimited Continuations vs. Lwt for Threads

https://mirageos.org/blog/delimcc-vs-lwt
32•romes•4d ago•3 comments

Where did all the starships go?

https://www.datawrapper.de/blog/science-fiction-decline
12•speckx•3d ago•3 comments

Female Asian Elephant Calf Born at the Smithsonian National Zoo

https://www.si.edu/newsdesk/releases/female-asian-elephant-calf-born-smithsonians-national-zoo-an...
33•gmays•9h ago•12 comments

Introducing the Developer Knowledge API and MCP Server

https://developers.googleblog.com/introducing-the-developer-knowledge-api-and-mcp-server/
57•gfortaine•11h ago•23 comments

I now assume that all ads on Apple news are scams

https://kirkville.com/i-now-assume-that-all-ads-on-apple-news-are-scams/
1065•cdrnsf•23h ago•446 comments

I spent 5 years in DevOps – Solutions engineering gave me what I was missing

https://infisical.com/blog/devops-to-solutions-engineering
150•vmatsiiako•19h ago•67 comments

Why I Joined OpenAI

https://www.brendangregg.com/blog/2026-02-07/why-i-joined-openai.html
149•SerCe•10h ago•134 comments

Understanding Neural Network, Visually

https://visualrambling.space/neural-network/
287•surprisetalk•3d ago•43 comments

Learning from context is harder than we thought

https://hy.tencent.com/research/100025?langVersion=en
182•limoce•3d ago•97 comments

Show HN: R3forth, a ColorForth-inspired language with a tiny VM

https://github.com/phreda4/r3
73•phreda4•13h ago•14 comments

OpenAI – vulnerability responsible disclosure

https://requilence.any.org/open-ai-vulnerability-responsible-disclosure
221•requilence•6mo ago

Comments

requilence•6mo ago
Reported a flaw to OpenAI that lets users peek at others' chat responses. Got an auto-reply on May 29th, radio silence since. Issue remains unpatched :( Avoided their bug bounty due to permanent NDAs preventing disclosure even after fixes. Following the standard 45-day disclosure window; users should avoid sharing sensitive data until this is resolved.
fcpguru•6mo ago
well done, sounds very reasonable and following the rules.
requilence•6mo ago
Appreciate it. Just trying to do the right thing by both OpenAI and users here.
poniko•6mo ago
The NDA part feels really murky.
tptacek•6mo ago
It's pretty standard for bounty programs. If you don't like it, which is reasonable, do what this researcher did and just post independently.
pyman•6mo ago
The bug bounty world is a funny one. I remember one researcher complaining that their bug was dismissed and fixed after they signed an NDA, no payout, nothing. Another got $100 instead of $5,000 because the company downgraded the severity from high to low. So they ended up with little or no money, and no recognition either. Not sure if these were edge cases, but it does make you wonder how fair the process really is.
tptacek•6mo ago
If you're dealing with large companies, a good rule of thumb is that the bounty program is incentivized to pay you out. Their internal metrics improve the more they pay; the point is to turn up interesting bugs, and the figure of merit for that is "how much did we have to spend". At a large company, a bounty that isn't paying anything out is a failure.

All bets are off with small random startups that do bug bounties because they think they're supposed to (most companies should not run bounties). But that's not OpenAI. Dave Aitel works at OpenAI. They're not trying to stiff you.

Simultaneous discovery (either with other researchers or, even more often, with internal assessments) is super common. What's more, you're not going to get any corroboration or context for them (sets up a crazy bad incentive with bounty seekers, who litigate bounty results endlessly). When you get a weird and unfair-seeming response to a bounty from a big tech company, for the sake of your own sanity (and because you'll probably be right), just assume someone internal found the bug before you did, and you reported it in the (sometimes long) window during which they were fixing it.

pyman•6mo ago
Interesting insights, thanks for sharing
asadotzler•6mo ago
That's an exaggeration. Most industry leaders do not require NDAs, only coordinated disclosure.

Mozilla's program, which has been around longer than most, doesn't. Google and Microsoft don't. Meta and Apple don't.

This is water carrying, intentional or not, for a terrible practice that should be shamed, so that it doesn't become standard.

tptacek•6mo ago
My understanding is that all Bugcrowd bounties do by default.

You can shame it all you want, but you can also just publish your bugs directly. Nobody has to use the Bugcrowd platform. You don't even have to wait 45 days; I don't buy these "CERT/CC" rules.

asadotzler•6mo ago
You said it was pretty standard for bug bounty programs, and I disagreed, pointing to several of the largest and longest-lived bug bounty programs, none of which do that, and your response is pointing out that one particular platform does it?

Even among 3rd party platforms, of which there are several big ones, the NDAs are not a platform requirement, just an option for participating firms.

NDAs are not the norm. Don't mislead people who would otherwise get into this game with non-issues they need not worry over.

tptacek•6mo ago
OpenAI's security team commented on the thread themselves that they believe they simply accepted the Bugcrowd defaults. I think you're trying to find a controversy that just isn't here.
jonrouach•6mo ago
you're sure it's not their "feature" where calling the API with an empty string returns random hallucinations?

https://jarbon.medium.com/gpt-prompt-bug-94322a96c574

requilence•6mo ago
No, definitely not the empty string hallucination bug. These are clearly real user conversations. They start like proper replies to requests, sometimes reference the original question, and appear in different languages.
JyB•6mo ago
I don't see anything here that would prevent an LLM from generating these. Right?
requilence•6mo ago
In one of the responses, it provided the financial analysis of a not well-known company with a non-Latin name located in a small country. I found this company; it is real, and the numbers in the response are real. When I asked my ChatGPT to provide a financial report for this company without using web tools, it responded: `Unfortunately, I don’t have specific financial statements for “xxx” for 2021 and 2022 in my training data, and since you’ve asked not to use web search, I can’t pull them live.`.
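
A minimal sketch of what re-running that check through the API (rather than the ChatGPT UI) might look like, assuming the official openai Python client; the model name and prompt wording here are illustrative assumptions, not the reporter's actual setup:

    # Hedged sketch: ask the model for the company's financials with tools disabled.
    # The model name and prompt text are assumptions for illustration only.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[{
            "role": "user",
            "content": "Without using web search or any external tools, give me "
                       "the 2021 and 2022 financial statements for <company name>.",
        }],
    )
    print(resp.choices[0].message.content)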
Sebguer•6mo ago
Do you understand what a hallucination is?
jojobas•6mo ago
Coming up with accurate financial data that you can't get it to report outright doesn't seem like one.
refulgentis•6mo ago
I don't understand the wording

Accurate financial data?

How do we know?

What does the model not having the data when web search is disabled have to do with the claim that private chats with the data are being leaked?

01HNNWZ0MV43FF•6mo ago
> I found this company; it is real and numbers in the response are real.

???

refulgentis•6mo ago
Which of my questions does that answer?
queenkjuul•6mo ago
That the financial data is accurate?
refulgentis•6mo ago
It's an ouroboros - he can't verify it's real! If he can, it's online and available by search.
JyB•6mo ago
Therefore, what are the odds that this is just the LLM doing its thing versus "a vulnerability"? Seems like a pretty obvious bet.
Sebguer•6mo ago
Models do not possess awareness of their training data. Also you are taking at face value that it is "accurate".
BoiledCabbage•6mo ago
> numbers in the response are real.

OpenAI very well may have a bug, but I'm not clear on this part. How do you know the numbers are real?

I understand you know the name of the company is real, but how do you know the numbers are real?

It's way more than anyone should need to do, but the only way I can see someone knowing this is by contacting the owners of the company.

jonrouach•6mo ago
i had the exact same behavior back in 2023, it seemed like clearly leakage of user conversations - but it was just a bug with api calls in the software i was using.

https://snipboard.io/FXOkdK.jpg

postalcoder•6mo ago
There was an issue with conversation leakage, though. It involved some bug with Redis.

I felt like it was a huge deal at the time but it’s surprisingly hard to quickly google it.

Sebguer•6mo ago
It was the classic "oh no we did caching wrong" bug that many startups bump into. It didn't expose actual conversations though, only their titles: https://openai.com/index/march-20-chatgpt-outage/
postalcoder•6mo ago
ah there it is. thanks for jogging my memory. funny to think how niche chatgpt was considered back then compared to now.
addandsubtract•6mo ago
New Turing Test unlocked! Differentiate between real and fake hallucinations.
DANmode•6mo ago
So THAT'S what the "GT" means on all of these GPU model names!
maxlin•6mo ago
Permanent NDAs? Oof. It's like their plan is to just try to force the lid down till they reach ASI or something lol
tptacek•6mo ago
Again: NDAs are bog standard bounty terms.
999900000999•6mo ago
Users should always avoid sharing sensitive data.

A lot of AI products straight up have plain text logs available for everyone at the company to view.

ameliaquining•6mo ago
Which ones? Do you just mean tiny startups and side projects and the like or is this a problem that major model providers have?
pyman•6mo ago
It's not just about sensitive data like passwords, contracts, or IP. It's also about the personal conversations people have with ChatGPT. Some are depressed, some are dealing with bullying, others are trying to figure out how to come out to their parents. For them, this isn't just sensitive, it's life-changing if it gets leaked. It's like Meta leaking their WhatsApp messages.

I really hope they fix this bug and start taking security more seriously. Trust is everything.

milkshakes•6mo ago
maybe you should stop trusting random people on the internet making extraordinary claims without proof then?
baby_souffle•6mo ago
Isn't "assume vulnerable" The only prudent thing to do here?
refulgentis•6mo ago
No? Yes? Mu?

After some hemming and hawing, my most cromulent thought is, having good security posture isn't synonymous with accepting every claim you get from the firehose

milkshakes•6mo ago
everything is vulnerable. the question is, has this researcher demonstrated that they have discovered and successfully exploited such a vulnerability. what exactly in this post makes you believe that this is the case?
999900000999•6mo ago
https://arstechnica.com/tech-policy/2025/07/nyt-to-start-sea...
ameliaquining•6mo ago
This is going to be subject to the legal discovery process with the usual safeguards to prevent leaks; in particular, the judge will directly supervise the decision of who needs access to these logs, and if someone discloses information derived from them for an improper purpose, there's a very good chance they'll go to jail for contempt of court, which is much more stringent than you can usually expect for data privacy. You can still quite reasonably be against it, but you cannot reasonably call it "plain text logs available for everyone at the company to view".
com2kid•6mo ago
I see other users' conversations on my Gemini dashboard, not sure who to even complain to.

Software quality is... minimal nowadays.

novia•6mo ago
https://archive.is/mYehH
JyB•6mo ago
I believe it is extremely important to disclose that the 'response leaks' you obtained did not originate from the LLMs themselves, but rather came through other insecure systems / in a more conventional manner.

Just to avoid yet another case of hallucinated outputs getting misinterpreted.

requilence•6mo ago
Right, thank you for the suggestion. Just added a paragraph to the original blog post.
tabletcorry•6mo ago
Your added paragraph appears to suggest the opposite, that this was an LLM response. Was the "leaked data" a response from an LLM directly?
JyB•6mo ago
Yes, apparently, which makes this report pretty flimsy.
tptacek•6mo ago
Upthread, OpenAI's security team confirms it's a false report; it's a variant of the empty-prompt hallucination.
JyB•6mo ago
Incredible that so many people still don't understand what an LLM is. Especially ones that you would expect to grasp it.
pyman•6mo ago
> A single misconfiguration can leak thousands of sensitive conversations in seconds. Treating privacy as an afterthought is untenable when the blast radius is this large.

Massive security bug, well spotted. It's like Bank of America showing other people my transactions, or Meta leaking my WhatsApp messages.

This raises some serious questions about security.

jofzar•6mo ago
I'm curious which mailbox they sent it to; trying to find a mailbox is surprisingly hard even with my Google searching.
requilence•6mo ago
they have a security.txt file on their domain and it's mentioned in some other places
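
For anyone else looking, security.txt is a standard (RFC 9116) served from a well-known path; a quick sketch of fetching it, assuming OpenAI still publishes one at the standard location:

    # Sketch: read a site's security.txt from the standard RFC 9116 location.
    # Whether openai.com serves it at exactly this path is an assumption here.
    from urllib.request import urlopen

    url = "https://openai.com/.well-known/security.txt"
    with urlopen(url, timeout=10) as resp:
        print(resp.read().decode("utf-8"))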
blibble•6mo ago
good to see more and more hackers refusing to use corporate bug bounty platforms with onerous terms

I certainly wouldn't sign an indefinite NDA for a chance to win:

Average payout: $836.36

openai should be grateful, after all, they want all information to be free

rglover•6mo ago
Thank you for sharing and reporting this.
thorum•6mo ago
> The leaked responses show clear signs of being real conversations: they start with contextually appropriate replies, sometimes reference the original user question, appear in various languages, and maintain coherent conversational flow. This pattern is inconsistent with random model hallucinations but matches exactly what you'd expect from misdirected user sessions.

A model like GPT-4o can hallucinate responses that are indistinguishable from real user interactions. This is easy to confirm for yourself: just ask it to make one up.

I’m certainly willing to believe OpenAI leaks real user messages, but this is not proof of that claim.

robertclaus•6mo ago
Ya, hard to know how to react without more information.
requilence•6mo ago
In one of the responses, it provided the financial analysis of a not well-known company with a non-Latin name located in a small country. I found this company; it is real, and the numbers in the response are real. When I asked my ChatGPT to provide a financial report for this company without using web tools, it responded: `Unfortunately, I don’t have specific financial statements for “xxx” for 2021 and 2022 in my training data, and since you’ve asked not to use web search, I can’t pull them live.`.
krainboltgreene•6mo ago
I’m struggling to understand why you are so adamant that this is proof.
Xx_crazy420_xX•6mo ago
Did you try asking it to provide data about the company by explicitly invoking hallucination in the model?

Right now there is no real proof, until you confirm that the data it provided cannot be hallucinated (which may not be feasible).

Also, acknowledging the response from OpenAI staff dismissing it, would you mind sharing a PoC?

requilence•6mo ago
I've updated the original post with technical details and an output example.
astrange•6mo ago
GPT-4o's writing style is so specific that I find it hard to believe it could fake a user query.

You can spot anyone using AI writing a mile away. It stopped saying "delve" but started saying stuff like "It's not X–it's Y" and "check out the vibes (string of wacky emoji)" constantly.

wavemode•6mo ago
LLMs are trained and fine-tuned on real conversations, so resembling a real conversation doesn't really rule out hallucination.

If the story in OP about getting a company's private financial data is true (i.e. the numbers are correct and nonpublic) that could be a smoking gun.

Either way it's a bad look for OpenAI to have not responded to this. Even if the resolution turns out to be that these are just hallucinations, it should've been investigated and responded to by now if OpenAI actually cares about security.

lostmsu•6mo ago
I would not say that OpenAI must respond in a timely manner to bogus bug reports of any kind, this one included.
robswc•6mo ago
Reminds me of a time I found a serious issue with mailgun. Messaged them, no reply. Had to spam their twitter to get a response. Basically you could have stolen tons of API keys from users without their knowledge and mailgun never disclosed it.

I could have actually gone to their office in person if I wanted to be pedantic but it actually seemed like a pretty weird office space lol.

tptacek•6mo ago
I don't think disclosure of reported security issues is really a norm, unless the firm finds evidence the bug was exploited (by someone other than the reporter). It's a good thing to do, but I think the majority of stuff that gets reported everywhere is never disclosed --- with the major and obvious exception of consumer or commercial software that needs to be updated "on prem".
robswc•6mo ago
Makes sense.

The problem I have with it is that there's no way they could have determined if an API key was stolen or not, even to this day.

Basically, their docs (which seemed auto-generated) pointed to a domain they did not own (verified this). So if you ran any API examples you sent your keys to a 3rd party. I know because I did this. There's no way to know that the domain in the docs is simply wrong.

I tried explaining this to the support people, that I needed to talk with a software engineer but they kept stonewalling. I think it was fixed after 24 hours or so.

ajdude•6mo ago

    > I am issuing this limited, non‑technical disclosure:
    > No exploit code, proof‑of‑concept, or reproduction steps are included here.
Then why bother? I feel a bit cynical here, but if the goal is to get this fixed, they're not going to care unless it becomes a zero day and is given to the masses; otherwise it's going to quietly be exploitable by the few unsavory groups who know of it and will never be patched. Isn't the whole point of responsible disclosure to give them a time clock to get this situated before actual publication? Forgive me if I'm wrong, I haven't been in that field in a long time.
tptacek•6mo ago
This is the security equivalent of getting Google support by getting something to the top of HN. The real audience for this post is OpenAI, not you.
lyu07282•6mo ago
It adds some pressure: we now know what the bug is about, so we can guess which endpoints to poke at, and then it's only a matter of time before it leaks. It would be unethical for the researcher to just publish it.
Eduard•6mo ago
POC?
Eduard•6mo ago
> PGP Key: 1234 5678 9ABC DEF0 1234 5678 9ABC DEF0 1234 5678

For real? At least it doesn't match the one on https://keybase.io/requilence

winstonhowes•6mo ago
Hi all, I work on security at OpenAI. We have looked into this report and the model response does not contain outputs from any other users nor does it reflect a security vulnerability, compromise, or exploit.

The original report was that submitting an audio message close to (but not quite) 1500 seconds long to the audio transcription API would result in weird, unrelated, off-topic responses that look like they might be replies to someone else’s query. This is not what’s happening. Our API has a bug where if the tokenization of the audio (which is not strictly correlated with the audio length) exceeds a limit, the entire input is truncated, and the model effectively receives a blank query. We’re working with our API team to get this fixed and to produce more useful error messages.

When the model receives an empty query, it generates a response by selecting one random token, then another (which is influenced by the first token), and another, and so on until it has completed a reply. It might seem odd that the responses are coherent, but this is a feature of how all LLMs work - each token that comes before influences the probability of the next token, and so the model generates a response containing words, phrases, code, etc. in a way that appears humanlike but in fact is solely a creation of the model. It’s just that in this case, the output started in a random (but likely) place and the responses were generated without any input. Our text models display the same behavior if you send an empty query, or you can try it yourself by directly sampling an open source model without any inputs.
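
A minimal sketch of trying this yourself with an open-weights model, assuming the Hugging Face transformers library and gpt2 as a stand-in; the model starts from only a beginning-of-sequence token, i.e. an effectively empty query:

    # Sketch: sample a model with no user input and watch it produce a
    # coherent-looking reply anyway. gpt2 is a stand-in open model here.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    # Only the beginning-of-sequence token: effectively an empty query.
    input_ids = torch.tensor([[tokenizer.bos_token_id]])

    # Each sampled token conditions the next, so the output reads fluently
    # even though no prompt was provided.
    output = model.generate(input_ids, do_sample=True, max_new_tokens=60)
    print(tokenizer.decode(output[0], skip_special_tokens=True))

Run repeatedly, this produces fluent but unanchored text that wanders wherever the first sampled tokens point it, which is consistent with the behavior described above.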

We took a while to respond to this. Our goal is to provide a reasonable response to reports. If you have found a security vulnerability, we encourage you to report it via our bug bounty program: https://bugcrowd.com/engagements/openai.

diggan•6mo ago
> If you have found a security vulnerability, we encourage you to report it via our bug bounty program

It seems like reporting bugs/issues via that program forces you to sign a permanent NDA preventing disclosure even after the reported issue has been fixed. I'm guessing the author of this disclosure isn't the only one that avoided it because of the NDA. Is that potentially something you can reconsider? Otherwise you'll probably continue to see people disclosing these things publicly, and as an OpenAI user that sounds like a troublesome approach.

ragona•6mo ago
(Note: I also work for OpenAI Security, though I've not worked on our bounty program for some time. These are just my thoughts and experiences.)

I believe the author was referring to the standard BugCrowd terms, which as far as I know are themselves fairly common across the various platforms. In my experience we are happy for researchers to publish their work within the normal guidelines you’d expect from a bounty program — it’s something I’ve worked with researchers on without incident.

winstonhowes•6mo ago
100%. We want to ensure we can fix real security issues responsibly before details are published. In practice, if a researcher asks to disclose after we've addressed the issue, we're happy for them to publish.
DANmode•6mo ago
In practice, it sounds like you guys didn't accept this dude's valid vuln because he didn't register and sign his life away.
tptacek•6mo ago
They just stated it was all just model hallucination, and was not in fact a valid vuln.
DANmode•6mo ago
*shrugs* If you're convinced, I'm convinced!
tptacek•6mo ago
I'm convinced.
requilence•6mo ago
Thank you, I made an update to the original post with your explanation, and because you stated that the output was a pure hallucination, I also attached one of the outputs.