
OpenAI Has New Focus (On the IPO)

https://om.co/2026/03/17/openai-has-new-focus-on-the-ipo/
53•aamederen•1h ago

Comments

wcgan7•1h ago
I thought it was against OpenAI's interest to IPO, especially now that it has made a deal with the Pentagon. An IPO would likely prevent the company from burning money at the current rate and force it to pursue shorter-term profits.
jacquesm•54m ago
It's not about OpenAI's interest; it's about the current stockholders' ability to offload OpenAI stock onto people who don't know the state of affairs in the AI domain, while OpenAI still has tremendous name recognition. If they don't IPO, they'll lose that window of opportunity: the stock market is super precarious right now, and if it tanks, the IPO window will close for a long time.
sonink•1h ago
From the article: "You can see that in the recent iterations of ChatGPT. It has become such a sycophant, and creates answers and options, that you end up engaging with it. That’s juicing growth. Facebook style."

This is something I realized lately: ChatGPT is juicing growth, Facebook style. Recently I asked it a medical question; it answered, but ended the answer with something like "Can I tell you one more thing from your X, Y, Z results that most doctors miss?" And I replied "yes" to it, and not just once.

I was curious what was going on. And Om nails it in this article - they have imported the Facebook rank and file and they are playing 'Farmville' now.

I was already not positive about how OpenAI behaves as a corporation, but a "Facebook" version of OpenAI scares the bejesus out of me.

aurareturn•1h ago
I don't have a problem with the suggestions. Google search does the same at the end of searches.

It very often suggests things I want to know more about.

fhub•57m ago
Then just write the extra paragraph rather than bait?
IMTDb•7m ago
Bait what, exactly? Getting the user to type "yes"? Great accomplishment.

Sometimes I want the extra paragraph, sometimes I don't. Sometimes I like the suggested follow up, sometimes I don't. Sometimes I have half an hour in front of me to keep digging into a subject, sometimes I don't.

Why should the LLM "just write the extra paragraph" (consuming electricity in the process) for a potential follow-up question a user might, or might not, have? If I write a simple question, I hope to get a simple answer, not a whole essay answering things I did not explicitly ask for. And if I want to go deeper, typing three letters is not exactly a huge cost.

sonink•53m ago
Suggestions are absolutely fine. But this is baiting. ChatGPT could have easily given me that information without the bait, and I would have happily consumed it. If it had done it once, fine, but it kept on doing it: bait after bait after bait.

The objective was clearly to increase the engagement "metrics". It seems to me that the leadership will take whatever "shortcuts" are required for growth.

llm_nerd•27m ago
This seems overly cynical.

Firstly, tl;dr is a very real thing. If the user asks a question and the LLM both answers it and then writes an essay about every probable follow-up question, most people would find that overwhelming, and few would think it's a good idea. That isn't how a conversation works, either.

Worse still, if you're on a usage quota or paying by the token, and you ask a simple question and get volumes of unasked-for information, most people would be very cynical about that, suspecting the provider is trying to saturate usage unprompted.

Gemini often ends a response with "Would you like to know more about {XYZ}", and as an adult capable of making decisions and controlling my urges, 9 times out of 10 I just ignore it and move on, my original question satisfied, without digging deeper. I don't see the big issue here. Every now and then it piques my interest, though, and then I actually find it beneficial.

The prompts for possible follow-up lines of inquiry are a non-issue; they are nothing compared to the user-glazing that these LLMs do.

FartyMcFarter•22m ago
Agreed completely. I don't use ChatGPT, but Gemini's offers to answer follow-up questions are great, I haven't noticed them being self-serving (i.e. ads) or manipulative.
markers•6m ago
Have you used ChatGPT lately?

What you describe is not quite what they are doing; they are adding nudges at the end of the follow-up question suggestions. For instance, I was researching some IKEA furniture and it gave suggestions for follow-ups with nudges in parentheses: "IKEA furniture many people use for this (very cool solution)", and at the end of another suggestion: "(very simple, but surprisingly effective)". They are subtle cliffhangers trying to influence you to go on, not pure suggestions. I'm just waiting for the "(You wouldn't believe what this did!)". It has soured me on the service; Claude has a much better personality imo.

DiscourseFan•52m ago
Well they are realizing they just can't compete in terms of raw productivity gains with Anthropic, their moat is in their brand and user base (and government contracts, I suppose, at least while Trump is still in office--although a few years of setting up the architecture might be enough to cement it there).
nicce•52m ago
The output is also very manipulative, in order to keep you using it. They want you to feel good. I don't use ChatGPT at all anymore; it is too misleading. But it will work for the masses, as it did for Facebook/Instagram etc.
llm_nerd•33m ago
Gemini does the same thing. For every question it looks to extend the conversation into natural follow-up questions, always ending a response with "Would you like to know more about {some important aspect of the answer}?"

And...I don't see it as a bad thing. It's trying to encourage use of the tool by reducing the friction to continued conversations, making it an ordinary part of your life by proving that it provides value. It's similar to Netflix telling you other shows you might like because they want to continue providing value to justify the subscription.

surgical_fire•10m ago
Ironically, I find the recent models engage in a lot less sycophantic behavior than in the ChatGPT-4 days.

Maybe it's the way I prompt it, or maybe something I set in the personalization settings? It questions some decisions I make, points out flaws in my rationale, and so on.

It still has AI quirks that annoy me, but it's mostly harmless - it repeats the same terms and puns often enough that it makes me super aware that it is a text generator trying to behave as a human.

But thankfully it stopped glazing every brainfart I have as if it were a masterstroke of superior human intelligence. I haven't seen one of those in quite a while.

I don't find the suggestions at the end of messages bad. I often ignore those, but at some points I find them useful. And I noticed that when I start a chat session with a definite goal stated, it stops suggesting follow ups once the goal is reached.

allovertheworld•1h ago
They focus on programming since they can just brute-force the type checkers/compilers to find out whether their slop was correct the first time.

Basically an illusion. Imagine if they focused on medical tech instead? You can't brute-force vaccines or radiation therapy.

petcat•56m ago
> they just bruteforce the type checkers/compilers to find out if their slop was correct

Have you used an AI coding model at all in the last year and a half? I think your knowledge is pretty outdated now.

allovertheworld•52m ago
Yes, GPT 5.4 always tries to compile/check my C++ code after every prompt, despite my AGENTS.md saying never to run builds. Then I have to explicitly mention it, but it will randomly do it again later.

What this means is that the training/RL used this workflow ;) But as you can tell, this workflow has no use outside programming. It's just a hack to make the model seem smart, when in fact it's just looping until it gets things right.
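For context, AGENTS.md is just a plain Markdown file of instructions that agents are supposed to read. A hypothetical rule of the kind the commenter describes (the wording here is invented, not the commenter's actual file) might look like:

```markdown
# AGENTS.md

## Build policy

- Do NOT run builds, compilers, or type checkers after edits.
- Do NOT invoke `make`, `cmake`, or any `g++`/`clang++` command yourself.
- If you believe a build is needed, say so and wait for the user to run it.
```

The complaint is that the model follows its RL-trained edit-compile loop even when such a rule is present.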

dana321•10m ago
All the models ignore specific instructions most of the time.

It requires follow-up instructions to get it to do what you want.

By the time it has farted around and you have farted around re-prompting it, you could have made the change yourself.

keyle•58m ago
Time to jump ship.

I have noticed 5.3 on xhigh was a turd today. High used to be enough for most of my use cases; xhigh used to surprise me. Now it's incapable of following the very first instructions.

I just hope open source models get as good as the last few months' top models before the enshittification goes too far.

girvo•40m ago
Qwen3.5 (-plus, which isn’t actually open to be fair) is surprisingly decent I’ve found.
aurareturn•55m ago

  One thing odd, maybe just to me, is why OpenAI has been stuffing its ranks with former Facebookers who are known to juice growth, find edges, and keep people addicted. They have little background in getting enterprises to buy into a product. Simo herself ran the Facebook app. That organization’s genius is consumer engagement: behavioral hooks, dopamine loops, the relentless optimization of the feed. You can see that in the recent iterations of ChatGPT. It has become such a sycophant, and creates answers and options, that you end up engaging with it. That’s juicing growth. Facebook style.
This is because ChatGPT is gearing up to sell ads. It's the only way to sustain a free chat service in the long term. Ads require engagement and usage. Hiring former Meta employees for this is smart business, even if the HN crowd doesn't like it.

People say OpenAI is burning money and is on the verge of collapse. The same people will say OpenAI building an ads business on ChatGPT is "enshittification". These people are quite insufferable, no offense to the many who are exactly as I described.

reactordev•51m ago
100%. It’s about to become the sleaziest used car salesman the internet has ever seen.
deanc•50m ago
AI is ubiquitous to the point where it's permeating almost every desk job in the world. Even those who don't work are using AI to help them find work, research health problems, ask questions about their daily life. I can't think of anything else since the invention of the internet that has had this much of an impact on people's lives.

People will have to pay for this. I don't see it being free for long other than a few chats a day. If most people in the world are paying 10-200 bucks a month then AI companies will make money, and I doubt they will need to rely much on ads at all.

pipnonsense•46m ago
Or people are just using it as much as they do because it is free.
bananaflag•45m ago
On the other hand, costs are getting lower with time.

Sort of like how I now have an unlimited 5G data plan for like 10 dollars, when in 2011 I didn't even have Internet on my phone. The same is happening with AI.

sanitycheck•44m ago
Anecdotally I know approximately zero 'normal' (non-tech) people who are intentionally using generative AI, several who have been badly misled by Google's AI summaries, and quite a few who are vehemently anti-AI (usually artists and writers).

(Except when mandated by their employers, which nobody is happy about or finds particularly useful.)

deanc•29m ago
Every single person I know outside of my profession is using it, including all relatives of all ages. Even if it's at the top of the google search results :)
DarkNova6•50m ago
In other words, they need more experts on enshittification.
pipnonsense•47m ago
So that’s why I am getting clickbaity last sentences in every ChatGPT response now.

Things like ”If you want, I can also show a very fast Photoshop-style trick in Krita that lets you drag-copy an area in one step (without copy/paste). It’s hidden but extremely useful.”

Every single chat now has it. Not only the conversational prompt with “I can continue talking about this”, but very clickbaity terms like: almost nobody knows about this, you will be surprised, all VIPs are now using this car, do you want to know which it is? Etc

buzzy_hacker•43m ago
Same here. “Do you want the one useful tip related to this topic that most people miss? It’s quite surprising.”

If it were so useful, just tell me in the first place! If you say “Yes” then it’s usually just a regurgitation of your prior conversation, not actually new information.

This immediately smelled of engagement bait as soon as the pattern started recently. It’s omnipresent and annoying.

dostick•35m ago
Yes, ChatGPT just recently started to add these engagement phrased follow-ups; “If you want, I can also show you one very common sign people miss that tells you…”
Esophagus4•8m ago
You can tell it not to do this in your personalized context.

The model doesn’t always obey it, but 80% of the time it’s worked for me.

jjallen•41m ago
This is not just OpenAI, though. I don’t think this is new in general for these AI chat apps; Claude, at the very least, asks a question as the last part of its responses, I believe every time.
Bengalilol•41m ago
Those "Prompt-YES-baity" last sentences are somehow counterproductive.
dkrich•38m ago
This, and also constantly saying stupid things like “yes, that is a great observation, and that’s how the pros do it for this very reason!” for a specific question that doesn’t apply to anything anyone else is doing.
KellyCriterion•37m ago
Again, I find Claude (web) outstanding and very comfortable here:

In most of my discussions throughout the day, it doesn't ask any "follow up" questions at the end. Very often it says things like: "you have two options: A - ..... and B - while the one includes X and the other Y..."

But this is what the OP underlined: Claude is popular among businesses; most "non-tech" people don't even know it exists.

Esophagus4•5m ago
The worst are the ones who say things like “OpenAI only has 5% paying users!” As if that’s a really bad number. That is the same ratio YouTube, the world’s largest media company, has. And ChatGPT has like 800m users after only a few years of existence.

And “once they sell ads, they’ll lose all their users!” As if that happened to FB, Google, YouTube, or Instagram…

Some people are really rooting for the downfall of OpenAI that will simply not happen, and their rage makes them utterly unreasonable.

rvz•54m ago
The "I" in AGI stands for IPO.
cmiles8•54m ago
There’s a strong chance the IPO window has passed. I just don’t see investors willing to jump in here given all the questions about the financial viability of AI.

The bulk of those investing now are broadly just pumping cash into the fire to keep their prior investments from going to zero.

We have hit a massive deceleration in what the current tech can do with transformers. The tech is also on a path to hyper-commoditization, which will destroy the value of the big players, as there's zero moat to be had here. Absent a new major breakthrough, it looks like we’re well on our way into the “trough of disillusionment” for the current AI hype cycle.

Will be interesting to see how all this plays out, but get your popcorn ready.

newsclues•50m ago
Unless the play is to fleece retail investors
thegreatpeter•45m ago
Retail investors do just fine fleecing themselves on their own.

The term "fleecing" means "there's nothing left here, jump ship". Do you really believe they're going public to cash out this early in the game?

cmiles8•45m ago
True, although even here there likely aren’t enough retail suckers to go around, given the amount of initial investment folks need to cash in. That’s the challenge when you have so much crazy pre-IPO cash pumped in.

After you float, you still need to sell all those shares at the valuations you want to exit at. If they float, say, 10% of shares to go public and the price tanks, everyone else trying to exit loses their shirt, so it’s not a magic exit for the early investors.

Ekaros•38m ago
The size of these companies makes me doubtful that retail can fund them: that there are enough retail investors with enough liquid funds who are willing to jump on this.

A lot of retail money is in various funds, so whether active managers would scale into this is questionable. And then you most likely also have downward pressure from those who try to bet against these IPOs...

FartyMcFarter•19m ago
There's always the Softbanks of the world.
DaedalusII•35m ago
there aren't enough retail investors in the world to buy this IPO

but they will get a lot of flow from sovereign wealth funds and pensions

you might wonder why Anthropic spends time in Australia, a country with a smaller economy than Canada and almost no industry at all? likely because it has a very big pension fund pool to buy their IPO

chollida1•44m ago
> There’s a strong chance the IPO window has passed

Ha, I'll take the other side of that bet. I'm not sure why you think they couldn't possibly IPO, and you don't really specify why in your post.

Having been in the capital markets for 20 years, now is one of the better times to IPO and I'd bet that both OpenAI and Anthropic will IPO within 12 months.

There are lots of games you can play, like releasing a small (10%) float, if you are worried about not enough buyers.

cik•23m ago
100% agreed. There's so much locked-up appetite for IPOs, both from the tech crowd and the general public. There have been very few quality IPOs since COVID, frankly.

I'll wager that the IPO market can actually absorb all three of these, even though, yes, they are the size of the last 10 years combined. The trading market itself is larger, as are values and valuations.

I assume that to maximize value you'll see a standard lock-and-roll play here. The S-1 will declare the 10% release, with commentary about another 5% in the future (6 or 12 months). Plus don't forget institutional buyers. There's ample space here, even before the Nasdaq-100 changes that are probably coming into play. If those come into play, then inflows accelerate, as do valuations.

7thpower•37m ago
You must be living on a different planet than me. Enterprises are just now seeing that these technologies can actually have an impact, and the companies do not have a discretionary cost cap the same way consumers/hobbyists do, so they will pay based on value.
badgersnake•35m ago
I would expect a lot of smart money to flow out of the Nasdaq-100 trackers in anticipation of this grift.
DaedalusII•27m ago
Nasdaq listings can be rough; not sure if anyone remembers the FB IPO.

But how else will they own SpaceX, OpenAI, Anthropic, Nvidia, in such concentration?

pera•28m ago
The Private Equity world already has a solution for this:

Nasdaq's Shame

https://news.ycombinator.com/item?id=47392550

surgical_fire•4m ago
Have you seen how Tesla stock is valued? Investors are notoriously irrational; they will absolutely buy stocks for memetic value, financial viability be damned.

I won't buy into it, but I actually think it will go strong, even as OpenAI finances keep deteriorating.

spacecadet•53m ago
As I said, from AGI to IPO and everyone will forget and move on.
pop_calc•48m ago
Is it just me, or has Om become almost entirely unreadable of late? This post is 80% posturing about the WSJ's ‘narrative’ and 20% vague metaphors about ‘souls’ and ‘spigots’. It’s essentially tech-themed poetry. I appreciate that he’s cynical about the AI hype cycle, but there’s absolutely no signal here. Ben Thompson might be equally enamoured with his own voice, but he at least tethers his ego to actual unit economics and a framework you can test. Om is just sharing a mood board and calling it analysis.
chirau•39m ago
How does a non-employee get exposure to the OpenAI IPO?
DaedalusII•37m ago
Simple: just have a private bank relationship.

JPM and GS will let you open an account in the US if you have $50m cash.

avnfish•27m ago
There are some side-bet experiments, like $2Mn on Hyperliquid[1] and $1Mn on Polymarket[2], which are available to everyone. Unfortunately, companies stay private for longer these days, and a seat at the big-boy table is de facto impossible.

[1] https://app.hyperliquid.xyz/trade/vntl:OPENAI

[2] https://polymarket.com/event/openai-ipo-closing-market-cap-a...

zurfer•22m ago
Buy public OpenAI investors, e.g. Microsoft. It's diluted but easy.
jrjeksjd8d•35m ago
The quoted revenue numbers seem insane, but I guess it's the result of corporate deals where every developer seat is hundreds of dollars a month?

My job has been publicly promoting whoever's on top of the "AI use dashboard" while our whole product falls apart. Surely this house of cards has to collapse at some point; better to get public money before it does.

MeetingsBrowser•7m ago
At least I’m not alone.

My company has a vibe coded leaderboard tracking AI usage.

Our token usage and number of lines changed will affect our performance review this year.

tyleo•20m ago
ChatGPT seems to have become a LinkedIn lunatic. I just asked Opus and ChatGPT to explain bitonic sort:

Opus: Let me build an interactive explainer for bitonic sort (builds diagram/no nonsense)

GPT:

"This algorithm feels weird but once you see it it clicks"

(Emoji) The Core Idea ...; (Emoji) High-Level Flow ...; (Emoji) Superpower ...; (Emoji) Why You Should Care;

"If you want, I can: ... (things it wants me to do next)"
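For anyone who would rather read the algorithm than either chatbot's explainer: here is a minimal, list-based sketch of bitonic sort in Python (my own illustration, not either model's output; it assumes the input length is a power of two, as the classic network requires):

```python
def bitonic_merge(xs, up):
    """Merge a bitonic sequence into sorted order (ascending if up)."""
    if len(xs) <= 1:
        return list(xs)
    half = len(xs) // 2
    xs = list(xs)  # avoid mutating the caller's list
    for i in range(half):
        # Compare-swap across the two halves; direction depends on `up`.
        if (xs[i] > xs[i + half]) == up:
            xs[i], xs[i + half] = xs[i + half], xs[i]
    # Both halves are now bitonic and can be merged independently.
    return bitonic_merge(xs[:half], up) + bitonic_merge(xs[half:], up)

def bitonic_sort(xs, up=True):
    """Sort xs (length must be a power of two) via bitonic sort."""
    if len(xs) <= 1:
        return list(xs)
    half = len(xs) // 2
    # Build a bitonic sequence: ascending first half, descending second.
    first = bitonic_sort(xs[:half], True)
    second = bitonic_sort(xs[half:], False)
    return bitonic_merge(first + second, up)

print(bitonic_sort([3, 7, 4, 8, 6, 2, 1, 5]))  # → [1, 2, 3, 4, 5, 6, 7, 8]
```

The fixed compare-swap pattern, independent of the data, is what makes the network attractive for parallel hardware, which is the usual reason it comes up at all.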

wewewedxfgdf•12m ago
OpenAI needs to focus on how Claude is leaving them in the dust for LLM assisted coding.
