
EVs Are a Failed Experiment

https://spectator.org/evs-are-a-failed-experiment/
1•ArtemZ•6m ago•1 comments

MemAlign: Building Better LLM Judges from Human Feedback with Scalable Memory

https://www.databricks.com/blog/memalign-building-better-llm-judges-human-feedback-scalable-memory
1•superchink•7m ago•0 comments

CCC (Claude's C Compiler) on Compiler Explorer

https://godbolt.org/z/asjc13sa6
1•LiamPowell•9m ago•0 comments

Homeland Security Spying on Reddit Users

https://www.kenklippenstein.com/p/homeland-security-spies-on-reddit
2•duxup•11m ago•0 comments

Actors with Tokio (2021)

https://ryhl.io/blog/actors-with-tokio/
1•vinhnx•13m ago•0 comments

Can graph neural networks for biology realistically run on edge devices?

https://doi.org/10.21203/rs.3.rs-8645211/v1
1•swapinvidya•25m ago•1 comments

Deeper into the sharing of one air conditioner for 2 rooms

1•ozzysnaps•27m ago•0 comments

Weatherman introduces fruit-based authentication system to combat deep fakes

https://www.youtube.com/watch?v=5HVbZwJ9gPE
2•savrajsingh•28m ago•0 comments

Why Embedded Models Must Hallucinate: A Boundary Theory (RCC)

http://www.effacermonexistence.com/rcc-hn-1-1
1•formerOpenAI•29m ago•2 comments

A Curated List of ML System Design Case Studies

https://github.com/Engineer1999/A-Curated-List-of-ML-System-Design-Case-Studies
3•tejonutella•33m ago•0 comments

Pony Alpha: New free 200K context model for coding, reasoning and roleplay

https://ponyalpha.pro
1•qzcanoe•38m ago•1 comments

Show HN: Tunbot – Discord bot for temporary Cloudflare tunnels behind CGNAT

https://github.com/Goofygiraffe06/tunbot
1•g1raffe•40m ago•0 comments

Open Problems in Mechanistic Interpretability

https://arxiv.org/abs/2501.16496
2•vinhnx•46m ago•0 comments

Bye Bye Humanity: The Potential AMOC Collapse

https://thatjoescott.com/2026/02/03/bye-bye-humanity-the-potential-amoc-collapse/
2•rolph•50m ago•0 comments

Dexter: Claude-Code-Style Agent for Financial Statements and Valuation

https://github.com/virattt/dexter
1•Lwrless•52m ago•0 comments

Digital Iris [video]

https://www.youtube.com/watch?v=Kg_2MAgS_pE
1•vermilingua•57m ago•0 comments

Essential CDN: The CDN that lets you do more than JavaScript

https://essentialcdn.fluidity.workers.dev/
1•telui•58m ago•1 comments

They Hijacked Our Tech [video]

https://www.youtube.com/watch?v=-nJM5HvnT5k
1•cedel2k1•1h ago•0 comments

Vouch

https://twitter.com/mitchellh/status/2020252149117313349
34•chwtutha•1h ago•5 comments

HRL Labs in Malibu laying off 1/3 of their workforce

https://www.dailynews.com/2026/02/06/hrl-labs-cuts-376-jobs-in-malibu-after-losing-government-work/
4•osnium123•1h ago•1 comments

Show HN: High-performance bidirectional list for React, React Native, and Vue

https://suhaotian.github.io/broad-infinite-list/
2•jeremy_su•1h ago•0 comments

Show HN: I built a Mac screen recorder Recap.Studio

https://recap.studio/
1•fx31xo•1h ago•1 comments

Ask HN: Codex 5.3 broke toolcalls? Opus 4.6 ignores instructions?

1•kachapopopow•1h ago•0 comments

Vectors and HNSW for Dummies

https://anvitra.ai/blog/vectors-and-hnsw/
1•melvinodsa•1h ago•0 comments

Sanskrit AI beats CleanRL SOTA by 125%

https://huggingface.co/ParamTatva/sanskrit-ppo-hopper-v5/blob/main/docs/blog.md
1•prabhatkr•1h ago•1 comments

'Washington Post' CEO resigns after going AWOL during job cuts

https://www.npr.org/2026/02/07/nx-s1-5705413/washington-post-ceo-resigns-will-lewis
4•thread_id•1h ago•1 comments

Claude Opus 4.6 Fast Mode: 2.5× faster, ~6× more expensive

https://twitter.com/claudeai/status/2020207322124132504
1•geeknews•1h ago•0 comments

TSMC to produce 3-nanometer chips in Japan

https://www3.nhk.or.jp/nhkworld/en/news/20260205_B4/
3•cwwc•1h ago•0 comments

Quantization-Aware Distillation

http://ternarysearch.blogspot.com/2026/02/quantization-aware-distillation.html
2•paladin314159•1h ago•0 comments

List of Musical Genres

https://en.wikipedia.org/wiki/List_of_music_genres_and_styles
1•omosubi•1h ago•0 comments

GPT-5 is a joke. Will it matter?

https://www.bloodinthemachine.com/p/gpt-5-is-a-joke-will-it-matter
47•dbalatero•5mo ago

Comments

petesergeant•5mo ago
The big, big mistake here was routing, imo. I wanna choose my intelligence level for a question, not have the machine make that decision for me. Sam Altman over-hyping stuff is not new. I have had GPT-5 do some very impressive work for me; I've also had it fall on its ass many times.

Also, that gpt-5-nano can't handle the work that gpt-4o-mini could (and is somehow slower?) is surprising and bad, but they'd really painted themselves into a corner with the version numbers.
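
For illustration, a minimal sketch of what explicit model choice looks like with the OpenAI Python SDK, instead of letting a router decide. The model name and the reasoning_effort knob are assumptions about the then-current API, not a confirmed interface:

  from openai import OpenAI

  client = OpenAI()  # reads OPENAI_API_KEY from the environment

  # Pin the model and effort level yourself rather than accepting "auto" routing.
  resp = client.chat.completions.create(
      model="gpt-5",               # explicit choice (assumed name), no router
      reasoning_effort="minimal",  # assumed knob for how hard the model "thinks"
      messages=[{"role": "user", "content": "Summarize this diff in one line."}],
  )
  print(resp.choices[0].message.content)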

charcircuit•5mo ago
I think most people will just use the default instead of manually adjusting it for each query. Adapting to the user's query makes sense, since it can allocate resources more efficiently and return higher-quality results to the user faster.
SchemaLoad•5mo ago
Not ideal for you maybe but for the average user it's very much an improvement.
petesergeant•5mo ago
> for the average user it's very much an improvement

what are you basing that on?

nickthegreek•5mo ago
I do agree that this will be a big improvement for regular users. But this is a situation where it is trivial to have both. Give the power users complete model control via the selector. Let the default be Auto for normies.
827a•5mo ago
GPT-5 is a product that represents the needs of its users. They have 700M weekly active users, the vast majority of which just want a better google, therapist, life coach, or programming mate; not some superintelligent being that can solve the Riemann hypothesis.

The reason why true singularity-level ASI will never happen isn't because the technology cannot support it (though, to be clear: it can't). It's because no one actually wants it. The capital markets will take care of the rest.

mlyle•5mo ago
> The reason why true singularity-level ASI will never happen isn't because the technology cannot support it (though, to be clear: it can't). It's because no one actually wants it.

Plenty of people want something that can do tasks well beyond what GPT-5 can, something as capable as a proficient human.

If you can offer that cheaper, faster, or more readily available than said skilled human, there is definitely a market for it.

esalman•5mo ago
Nobody will pay them a 1.5M bonus for developing ASI though.

As much as we hate on Meta, open models are the answer.

827a•5mo ago
The issue is, what you're describing actually doesn't require intelligence, beyond a point. A mathematician who can solve the Riemann hypothesis isn't likely to be all that good of an L8 AWS Engineer; but what AWS would pay a lot of money for is a digital L8 AWS Engineer, not a digital Terence Tao. The deep reflection that AI frontier labs have to do, soon, is: if it's not intellect, what are we missing?
Avicebron•5mo ago
embodiment?
klipt•5mo ago
I'm sure plenty of cancer patients would pay good money for a digital genius doctor who can custom design a drug to target their cancer.
827a•5mo ago
Again, this doesn't require intelligence, beyond a point. Breakthrough medical therapies aren't invented by 180 IQ Einsteinian hyper-intellectuals.

My point in being pedantic about this is to point out that an extreme amount of value could be generated by these systems if they only attained the capability to do the things 110 IQ humans already do today. Instead of optimizing toward that goal, the frontier labs seem obsessed with optimizing toward some other weird goal of hyper-intelligence that has very unclear marketability. When pressed on this, leaders in these companies will say "it's transferable"; that hyper-intellectuality will suddenly unlock realistic 110 IQ use-cases; but we're increasingly seeing, quite clearly to me, that this isn't the case. And there are very good first-principles reasons to believe that it won't be.

Gud•5mo ago
No, because Einstein-level intelligence is rare.

But 180 IQ intelligence that doesn't sleep, doesn't stop, and has instant access to the world's knowledge will absolutely be able to revolutionise the sciences.

mlyle•5mo ago
So, I'm not sure 180 IQ intelligence can.

But something with the equivalent of high human intelligence (say 140 IQ, generalized to a wide variety of domains) that can engage in long-term directed, purposeful, and autonomous behavior will (not all humans of high intelligence cross this bar, but the ones that do tend to make an impact).

mlyle•5mo ago
I would argue that AI is already able to do a lot of what a 110 IQ human does today -- at least for the 30% of the population that is of average intelligence and has a below-average appetite for growth and challenge.

And sure, it's not perfectly transferable. It's going to be a long time before it can fix my plumbing, but there's a fair number of tasks that it can do now that are useful.

> toward some other weird goal of hyper-intelligence

I think the near term goal is something that can do most human tasks of the scope of tens of minutes of work at a computer, without task selection or self-direction. In some ways this looks superhuman, doing this stuff at high speed and 24/7. But it's superhuman like a mechanical loom is superhuman.

Even better would be something you could delegate higher level goals, including tasks of larger scope and duration, but that's slower coming.

danaris•5mo ago
> If it's not intellect, what are we missing?

...Why would it not be intellect?

What we're missing is anything that actually resembles thought at anything more than the very surface level.

LLMs are not intelligent. They sound intelligent because they're being trained very well at predicting what tokens to produce to sound that way, but they have no intellect, no understanding, no world model, no consciousness—no intelligence.

seba_dos1•5mo ago
I love how in the world of LLMs "reasoning" means having the model write out things like "I'm collecting a list of resources" just so it can then better cosplay someone who did that.

I'm eagerly waiting for the next big breakthrough when someone trains a model that switches between "thinking" and "outputting" modes mid-answer :)

mlyle•5mo ago
> I'm eagerly waiting for the next big breakthrough when someone trains a model that switches between "thinking" and "outputting" modes mid-answer :)

??? That was last year's breakthrough.

seba_dos1•5mo ago
See??? What a breakneck pace! :D
mlyle•5mo ago
So, I guess it was just a low effort comment, and I shouldn't take you seriously? I thought you were making an actual effort to converse.

Generating tokens to represent steps isn't just "cosplaying" --- it's representing more state than can fit into a single step of the model.

Ditto for when I talk to myself when figuring something out, either out loud or just mentally.
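
To make the "external state" point concrete, a toy sketch: generate() is a stub standing in for any model call. Each call sees only text, so an intermediate thought survives between steps only if it is written out as tokens and fed back in:

  def generate(prompt: str) -> str:
      # Stub standing in for a real model call; returns a canned "thought".
      return "Step: narrowed the problem using -> " + prompt[-40:]

  def solve_with_scratchpad(question: str, steps: int = 3) -> str:
      # The scratchpad IS the state. Nothing persists inside the model
      # between calls, so every intermediate step must become visible tokens.
      scratchpad = question
      for _ in range(steps):
          scratchpad += "\n" + generate(scratchpad)
      return generate(scratchpad + "\nFinal answer:")

  print(solve_with_scratchpad("Cheapest route from A to D?"))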

seba_dos1•5mo ago
It was a joke (with an emoticon even) - and it was there to make a point.

> Generating tokens to represent steps isn't just "cosplaying" --- it's representing more state than can fit into a single step of the model.

Well duh, obviously, but once you look closer, some of these "steps" aren't really any actual steps - they're just cosplaying. You can often catch a model outputting something like "I'm collecting the list of resources to base this table on", even though it doesn't do anything close to that (which is hopefully obvious). It just writes this because it's a reasonable thing for someone who researches a topic to include in their stream of conscio... I mean, chain of thought. This then also helps it move the weights for future tokens in the direction of writing from someone who appears to have collected such a list, the same way people had more success with early image generation by adding "award-winning" or "high quality" to the prompt. Remember "prompt hacking"? Not in the security sense, but in the "I'm a prompt engineer" sense. The models prompt-hack themselves now. These breakthroughs consist of obvious next steps; it's just that someone with deep enough pockets allowed them to be tried on big enough datasets with long enough compute.

techpineapple•5mo ago
> just want a better google, therapist, life coach, or programming mate; not some superintelligent being that can solve the Riemann hypothesis

Except it seems like it's a worse google, therapist, life coach and programming mate, with the personality of someone who spends all their time trying to solve the Riemann hypothesis.

mlyle•5mo ago
To me, GPT-5 feels slightly better. And rumor is the cost for OpenAI to provide it is much less.
827a•5mo ago
That's probably true for the time being, though I'd consider that more a simple miscalculation on their part than the grand expression of market dynamics that I asserted. They're course-correcting on this and are planning to add more warmth back to its responses.
georgemcbay•5mo ago
Any of the LLMs (including Google's own) is a better "google" than traditional Google search. Not really for any technical reason so much as that Google was perfectly willing to cede the search war to garbage SEO sites as long as they were driving eyeballs to AdSense.

But the rest I agree with.

piva00•5mo ago
I wouldn't say a "worse Google", though I'm also tired of the hallucinations and of people copy-pasting AI-generated bullshit as if it were correct.

It's just a different tool than Google, and quite complementary in many ways: instead of doing many keyword searches to collect information, aggregate it in a mental model, and derive a conclusion, it's easier to ask it for the aggregation I need and back-verify whether it makes sense. It has helped a lot that I no longer have to know the right incantation of keywords for the information I'm trying to find right off the bat; from the LLM answer I can backtrace which searches to run to confirm its validity.

It's a small twist, but it has definitely shown me some value in these tools for more general information lookup.

The thing is: I still want and need to have a search engine, if for some reason search engines cease to exist and LLMs take their place I will not be able to trust anything...

aurareturn•5mo ago
I agree with your first part. There are a ton of people who want to use LLMs to solve the Riemann hypothesis but those people will need a different model with vastly more compute than just regular old ChatGPT.

ASI isn't about what people want or not. It's an AGI that is able to self-improve at a rapid rate. We haven't built something that can self-improve continuously yet. Whether people want it is beside the point. Personally, I do want ASI.

827a•5mo ago
I simply disagree on that specific definition of ASI. Sure, what we're both describing is a system that is able to self-improve at a rapid rate. But, what does "improve" mean? Is an AI which is able to rapidly self-improve only along the intelligence vertical of making the best sourdough bread actually ASI?

> We haven't built something that can self-improve continuously yet.

Actually, we have. It's called AlphaGo; it's capable of rapid self-improvement on the intelligence vertical of playing Go. Is that ASI?

If your answer to that is No, then you've accepted the base case of a more general idea: That the vertical of self-improvement does matter in the human designation of ASI, which means the objective and reward functions defining "ASI" are at some level human designated.

In other words, it is deeply related to what humans want.

therein•5mo ago
Is it just me or is anyone else worried what will happen when the industry realizes LLMs are just not going to be the path to AGI? We are building nuclear power plants, massive datacenters that make our old datacenters look like toys.

We are investing literal hundreds of billions into something that is looking more and more likely to flop than succeed.

What scares me the most is that we are being steered into a sunk cost fallacy. Industry will continue to claim it is just around the corner, more and more infrastructure will be built, and even groundwater is being rationed in certain places because AI datacenters apparently deserve priority.

Are we being forced into a situation where we are too invested in this to come face to face with the fact that it doesn't work and makes everything worse?

What is this capacity being built for? It no longer makes any sense.

techpineapple•5mo ago
> Are we being forced into a situation where we are too invested in this to come face to face with the fact that it doesn't work and makes everything worse?

Yes, but don't worry, they're getting close to the government so they'll get bailed out.

> What is this capacity being built for? It no longer makes any sense.

Give more power to Silicon Valley billionaires.

tootie•5mo ago
I honestly wonder how much they even believe their own hype. Altman is a world class circus barker for AI. I seriously doubt his sincerity about anything. Obviously Zuck is putting his money where his mouth is but idk how a data center the size of Manhattan is a means to any useful end.

I'm just imagining in 2030 when there is an absolutely massive secondary market for used GPUs.

Fade_Dance•5mo ago
I suspect (but obviously can't confirm, although this is a thought I've had before) that they are now much more of a "normal" company than they were before. I doubt the workforce feels like a god-tier supergroup either - after all, there are many places you can work as an AI researcher now, and no doubt many of them left to make a startup or take that hundred-million-dollar Zuck paycheck.

Which leaves Altman frankly looking increasingly awkward with his revolutionary proclamations. His company just isn't as cool as it used to be. Jobs-era Apple was arguably a cooler company.

SchemaLoad•5mo ago
We are likely in for the biggest market crash in history. Even if LLMs are super useful and will be with us going forward, that doesn't mean they will be profitable enough to justify the spending.

Open source and last year's models will be good enough for 99% of people. Very few are going to pay billions to use the absolute best model there is.

techpineapple•5mo ago
> We are likely in for the biggest market crash in history

I don't think there's the kind of systemic risk you had in, say, 2008, is there? But I do think there is likely to be a "correction", to put it lightly.

Fade_Dance•5mo ago
Datacenter build-outs are being financed by mega caps with fortress balance sheets and free cash flows that are coming in well above expectations. They are still able to do things like route hundreds of billions in buybacks.

And regardless of being great investments or not, all of those companies have a burning desire for accelerated depreciation to lower their effective tax rate, which data center spend offers.

The more bubbly names will likely come down to earth eventually, but the growth-stock sell-off we saw in '22 due to the termination of the zero-interest-rate environment will probably dwarf it in scale. That was a true DotCom 2.0 bubble, with vaporware EV companies with nothing more than a render worth 10 billion, fake meat worth 10 billion, web chat worth 100 billion, 10-billion-dollar treadmill companies... Space tourism, LIDAR... So many of those names have literally gone down 90 to 99%. It's odd to me that we don't look at that as what it was - a true dotcom bubble 2.0. The AI-related stuff looks tame in comparison to what we saw just a few years ago.

vpribish•5mo ago
nah, you're being hyperbolic. run the actual numbers. US GDP alone is $28T - AI investment in 2024 was like $250B. it will barely leave a dent
rsynnott•5mo ago
> We are likely in for the biggest market crash in history.

This seems unlikely, because if and when the bottom falls out, it seems implausible that it will be the sort of systemic shock that the financial crisis was, much less the Great Depression. Lots of people would lose lots of money, but you wouldn't expect the same degree of contagion.

aurareturn•5mo ago

  Is it just me or is anyone else worried what will happen when the industry realizes LLMs are just not going to be the path to AGI? We are building nuclear power plants, massive datacenters that make our old datacenters look like toys.

Nothing will happen. LLMs even at GPT-5 intelligence, but scaled up significantly with higher context size, faster inference speed, and lower cost, would change the world.
SchemaLoad•5mo ago
In what way? So far almost all the change has been in spamming social media and customer support.
aurareturn•5mo ago
There are many ideas (society-enhancing ones) I have that I can't pursue because context sizes are too small, inference is too slow, or prices are too high.

For example, I want to use an LLM system to track every promise a politician ever makes and see if he or she actually kept it. No one is going to give me $1 billion to do this. But I think it would enhance society.

I don't need AGI to do this idea. I think current LLMs are already more than good enough. But LLM prices are still way too high to do this cost-effectively. When inference is relatively as cheap as serving up a web page, LLMs will be ubiquitous.
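
A rough sketch of that pipeline, with the model name, prompt, and JSON shape all placeholder assumptions: the LLM does only the bulk extraction, and verification would happen against structured records (votes, budgets) afterwards:

  import json
  from openai import OpenAI

  client = OpenAI()

  PROMPT = ('Extract every concrete promise from this speech as a JSON array '
            'of {"promise": str, "deadline": str}. Speech:\n')

  def extract_promises(speech: str) -> list[dict]:
      # One call per document: this is the cost that scales with token price,
      # which is why the idea is gated on cheaper inference.
      resp = client.chat.completions.create(
          model="gpt-4o-mini",  # placeholder: any cheap model would do
          messages=[{"role": "user", "content": PROMPT + speech}],
      )
      # Real use would validate and repair the JSON rather than trust it blindly.
      return json.loads(resp.choices[0].message.content)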

dse1982•5mo ago
Out of pure curiosity: do you really think that would make any significant difference to voter behavior? My understanding is that one of the biggest misunderstandings of the last decade-plus was that center and leftist politicians assumed they could keep people from voting against their interests by pointing out the lies of the relevant politicians and how their policies actually go against their voters' interests. That was the whole point of the fact-checking wave in mainstream media during that time, which was then mostly abandoned again because people are just not as interested in truth and facts as we like to assume. Not uninterested entirely, but not as interested as we tend to think.

Please don't get me wrong, I am not trying to be sarcastic here. I would love to see a perspective – just any perspective – on how to get out of the current political situation. Not just in the US; in many other countries the same playbook is followed by authoritarians with just as much success as in the US. So if you have material or some reasoning at hand for why more information for the population would make a difference to voting behaviour, I would be super interested. Thanks in advance!

aurareturn•5mo ago
I don't know if it will make any difference. But this sort of large-scale data categorization is possible only with an LLM. Previously, you probably needed dozens of people working full time to do this. Now you just have a few GPUs do it.

I have a lot more ideas that are gated by low context size, inference speed, and price per token.

The bottom line is that we don't need AGI to change the world. Just more and cheaper compute.

Ekaros•5mo ago
I have been toying with the idea of making a candidate chooser based on what members of parliament actually voted for. That data set is pretty limited and readily available. Just get the vote record and identify key bills. No LLM needed.
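
A minimal sketch of that no-LLM approach, assuming a hypothetical votes.json that maps each member to their votes on key bills:

  import json

  # Hypothetical file: {"Member Name": {"HB-101": "yes", "HB-202": "no", ...}}
  with open("votes.json") as f:
      votes = json.load(f)

  # How the user would have voted on the key bills.
  my_votes = {"HB-101": "yes", "HB-202": "no", "HB-303": "yes"}

  def agreement(member_votes: dict) -> float:
      # Fraction of shared bills where the member voted the same way.
      shared = [b for b in my_votes if b in member_votes]
      if not shared:
          return 0.0
      return sum(member_votes[b] == my_votes[b] for b in shared) / len(shared)

  # Rank members by agreement, best match first.
  for member in sorted(votes, key=lambda m: -agreement(votes[m])):
      print(f"{agreement(votes[member]):.0%}  {member}")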
lcnPylGDnU4H9OF•5mo ago
> Just get the vote record and identify key bills. No LLM needed.

How do you compare their vote record with the things they've said publicly?

leodiceaa•5mo ago
> For example, I want to use an LLM system to track every promise a politician ever makes and see if he or she actually kept it.

You're describing a list. Why do you need GPU farms to create a list?

aurareturn•5mo ago
How do you populate that list and verify the promises on the list?
tpm•5mo ago
You can't verify anything with an LLM.
andrewflnr•5mo ago
Well, we can for sure put the nuke plants to good use. Maybe for desalination to replace the water supplies.
distalx•5mo ago
Probably a valid sunk cost fallacy, but it makes me wonder what will happen to the applications and systems being built on top of LLMs. If we face limitations or setbacks, will these innovations survive, or could we see a backlash against all thinking machines, reminiscent of Isaac Asimov's cautionary tales?
georgemcbay•5mo ago
> Is it just me or is anyone else worried what will happen when the industry realizes LLMs are just not going to be the path to AGI?

Yes, some people are concerned; see for example a recent Hank Green YouTube video:

https://www.youtube.com/watch?v=VZMFp-mEWoM

I'm probably more concerned than he is, in that a large market correction would be bad, but even scarier are the growing (in number and power) techno-religious folks who really think we are on the cusp of creating an electronic god and are willing to trash what's left of the planet in an attempt to birth that electronic god that they think will magically save us.

shubhamjain•5mo ago
Even if AI progress stops entirely and we are stuck with these models, it would still take a few years to use them to their full potential. What gets lost in this AGI debate is how impressive these models are without even reaching it. And the places where they have been applied are just the tip of the iceberg.

It's just like the dot-com bubble: everyone was pumped that the Internet was going to take over the world. And even though the bubble popped, the Internet did eventually take over the world.

danaris•5mo ago
LLMs are not going to take over the world.

LLMs are not remotely comparable to the Internet in terms of their effect on ordinary people (unless you want to talk about their negative effects...).

People keep trotting out this comparison, and as time goes on it makes even less sense, as we come to see even more clearly just how false the promises of the AI bros are.

If progress stops with these models, we will be left with some interesting curiosities that can help with certain tasks, but are in no way worth the amount of resources it costs to run them en masse.

vpribish•5mo ago
so i don't see the path to AGI either, certainly not with LLMs. but there is some very useful stuff to be done with deep learning, and LLMs are a pretty amazing advance for search, translation, communication and user interfaces. but it's not the next industrial revolution.

i don't see the sunk-cost fallacy angle, just the sunk costs. the capital allocators will absolutely shut off the spigot when they see that it isn't going to yield. yeah, there could be some dark data centers. not the end of the world, just another ai winter at worst - maybe a dot-com crash... whatever.

the world is way bigger than the tech echo chamber

lostmsu•5mo ago
I'm more worried about what will happen when people in denial are faced with AI replacing their jobs. I used to hire outsourced frontend devs of _relatively_ poor quality (think just below avg. Wipro) who nonetheless completed a decent number of projects. I will never do that again, unless I have to work with audio or video, which modern LLMs can't test on their own.
neutronicus•5mo ago
> We are building nuclear power plants

If only

rsynnott•5mo ago
> We are building nuclear power plants

Who is? There'll be about 10 nuclear reactor construction starts this year, largely either replacing end-of-life reactors, or in China (which had a well-established nuclear build-out prior to AI stuff). Beyond media hype, there's little reason to think that the AI bubble is actually leading to nuclear reactors being built anywhere.

EcommerceFlow•5mo ago
1) Sam said only 7% of PLUS users were using thinking models. This auto-router is probably one of the biggest innovations for "normie use" ever.

2) Maybe I'm biased because I'm using GPT5-Pro for my coding, but so far it's been quite good. Normal thinking mode isn't substantially better than o3 IMO, but that's a limitation of data/search.

jbellis•5mo ago
GPT-5 is the best model now for writing code by a significant margin.

https://brokk.ai/power-rankings

lvl155•5mo ago
I don’t know how you can say that with a straight face. It’s simply not the best and by a wide margin. No one doing any significant agentic workflow would consider GPT-5 over Sonnet 4. Not even close.
ec109685•5mo ago
How can you say this about a product that has 700M MAUs:

> As is true with a good many tech companies, especially the giants, in the AI age, OpenAI’s products are no longer primarily aimed at consumers but at investors

ho_lee_phuk•5mo ago
It is a good product. But probably not a great one.
dse1982•5mo ago
Because the users pay an unrealistically low price. You aim for where the money is, and right now the money comes from the investors, not the users. Would 700M people use it if they had to pay a realistic price? I doubt it.
Uehreka•5mo ago
I used to say that the dumbest conversation about AI was about whether it was “actually intelligent”, but I was wrong: It’s the conversation about whether it’s “overhyped”.

Like, I don’t care how much Sam Altman hyped up GPT-5, or how many dumb people fell for it. I figured it was most likely GPT-5 would be a good improvement over GPT-4, but not a huge improvement, because, idk, that’s mostly how these things go? The product and price are fine, the hype is annoying but you can just ignore that.

If you feel it’s hard to ignore it, then stop going on LinkedIn.

All I want to know is which of these things are useful, what they’re useful for, and what’s the best way to use them in terms of price/performance/privacy tradeoffs.

zdragnar•5mo ago
The problem with the hype is the market distortion for startups looking for funding.

I don't know if it has improved at all lately, but for a while it seemed like every startup I could find was the same variation on "healthcare note-taking AI".

djfobbz•5mo ago
GPT-4 seemed snappier and faster... my experience with GPT-5 has been nothing short of poor. It spends too much time thinking. I'd rather have a generally good answer really fast than a very good one after 25-35s.
duxup•5mo ago
Agreed and I feel like I get the same output with a longer wait.
PaulStatezny•5mo ago
I've read plenty of criticism about ChatGPT 5, but as a Plus user I'm surprised nobody has brought this up:

Speed.

ChatGPT 5 Thinking is So. Much. Slower. than o4-mini and o4-mini-high. Like between 5 and 10 times slower. Am I the only one experiencing this? I understand they were "mini" models, but those were the current-gen thinking models available to Pro. Is GPT 5 Thinking supposed to be beefier and more effective? Because the output feels no better.
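
Latency is easy to measure rather than argue about. A rough timing sketch; the model names are assumptions about what the API exposed at the time, and wall-clock time here ignores queueing and output length:

  import time
  from openai import OpenAI

  client = OpenAI()
  PROMPT = "Explain HNSW in two sentences."

  # Assumed model names; swap in whatever your account actually exposes.
  for model in ["o4-mini", "gpt-5"]:
      t0 = time.perf_counter()
      client.chat.completions.create(
          model=model,
          messages=[{"role": "user", "content": PROMPT}],
      )
      print(f"{model}: {time.perf_counter() - t0:.1f}s")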

benterix•5mo ago
Yeah, I always use mistral for fast answers.
rbinv•5mo ago
I think it's more appropriate to compare GPT 5 Thinking to o3. You will find that the response times are actually quite similar (at least in my experience over hundreds of identical prompts with each model).
leshokunin•5mo ago
I use it several hours a day. 5 is definitely slower. I'm not certain the quality has improved. I do hate that it keeps saying it'll think even longer and take up to a minute to do stuff.
cookiengineer•5mo ago
Kind of funny how this article has immediately been removed from the frontpage and flagged.

AI bros at work, I guess, and criticism isn't allowed?

So I should probably write "All hail OpenAI, hail Hydra"?