Tale as old as time. When the Retina display Macs first came out, we saw web design suddenly stop optimizing for 1080p-or-less displays (and at the time, 1366x768 was the default resolution for Windows laptops).
As much suffering as it'd be, I swear we'd end up with better software if we stopped giving devs top-of-the-line machines and just issued whatever budget laptop is on sale at the local Best Buy on any given day.
It would be awesome if Apple or someone else offered an in-OS slider to drop the specs down to those of other chips. It'd probably be a lot of work to make it seamless, but being able to click a button and make an M4 Max look like a base M4 would be awesome for testing.
What were they even thinking? Don't they care about this? Is their AI generating all their charts now and they don't even bother to review it?
How many charts will the person create, how many the machine?
That's based solely on my own personal vibes after regularly using LLMs for a while: I became less willing to, and less capable of, thinking critically and carefully.
This honestly just sounds like distilled intelligence. Because a huge pitfall for very intelligent people is that they're really good at convincing themselves of really bad ideas.
That, but commoditized en masse to all of humanity, will undoubtedly produce tragic results. What an exciting future...
To sharpen the point a bit, I don't think it's genius "arguing" or logical jujitsu, but some simpler factors:
1. The experience has reached a threshold where we start to anthropomorphize the other end as a person interacting with us.
2. If there were a person, they'd be totally invested in serving you, with nearly unlimited amounts of personal time, attention, and focus given to your questions and requests.
3. The (illusory) entity is intrinsically shameless and appears ever-confident.
Taken together, we start judging the fictional character like a human, and what kind of human would burn hours of their life tirelessly responding to and consoling me for no personal gain, never tiring, breaking character, or expressing any cognitive dissonance? *gasp* They're my friend now and I should trust them. Keeping my guard up is so tiring anyway, so I'm sure anything wrong is either an honest mistake or some kind of misunderstanding on my part, right?
TL;DR: It's not mentat-intelligence or even eloquence, but rather stuff that overlaps with culty indoctrination tricks and con[fidence]-man tactics.
But at the same time, the fact that this technology can seemingly be misused and cause real psychological harm feels like kind of a new thing. Right? There are reports of AI psychosis; I don't know how real it is, but if it's real, I don't know of any other tool that's produced that kind of side effect.
At a certain point you might need to ask what the toolmakers can do differently, rather than only blaming the users.
Mission accomplished for them then.
However, I can't think of a sensible way to actually translate that to a bar chart where you're comparing it to other things that don't have the same 'less is more' quality (the general fuckery with graphs not starting at 0 aside: how do you even pick a baseline when the metric gets better as it approaches 0?), and what they've done seems like total nonsense.
So if that ^ is why 50.0 is drawn lower than 47.4... then why isn't 86.7 drawn lower than 9.0? Or 4.8 lower than 2.1?
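For what it's worth, this kind of inconsistency is trivial to check mechanically: if the bars were generated from the labeled values, pixels-per-unit should be roughly constant within a chart. A minimal sketch of that check in Python; the pixel heights below are made-up placeholders for illustration, not measurements from the actual slide:

    # Hypothetical (name, labeled_value, drawn_height_px) tuples -- the pixel
    # heights are illustrative only, not measured from OpenAI's chart.
    bars = [
        ("GPT-5 (thinking)", 50.0, 190),
        ("OpenAI o3",        47.4, 230),
    ]

    # If the bars were drawn from the data, px-per-unit should be ~constant.
    reference = bars[0][2] / bars[0][1]
    for name, value, height in bars:
        scale = height / value
        flag = "" if abs(scale - reference) / reference < 0.05 else "  <-- inconsistent"
        print(f"{name}: {value} drawn at {height}px ({scale:.2f} px/unit){flag}")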
If that’s the case, it’s mislabelled and should have read “17%”, which would better match the visual.
>there seems to be a mistake in this chart ... can you find what it is?
Here is what it told me:
> Yes — the likely mistake is in the first set of bars (“Coding deception”). The pink bar for GPT-5 (with thinking) is labeled 50.0%, while the white bar for OpenAI o3 is labeled 47.4% — but visually, the white bar is drawn shorter than the pink bar, even though its percentage is slightly lower.
So they definitely should have had ChatGPT review their own slides.
But the white bar is not shorter in the picture.
They may not be perfect, but they've provided a lot of value to many different industries, including coding.
Any sufficiently advanced technology is indistinguishable from magic.[0]
0 - https://en.wikipedia.org/wiki/Clarke%27s_three_laws
Ok, I see there was a bug on the site and it wasn't scrolling on iOS. They fixed that now, although the background context is still unclear, and none of the links in the site seem to explain it.
So they spotted what seems to be an unintentional error in a chart in a YouTube video, and created a completely different chart with random errors to make a point, while due to their own coding error the (somewhat obtuse) explanation wasn't even visible on mobile devices.
Not sure why this was voted to the top of the first page of HN, although I can surmise.
By and large people do not have the integrity to even care that numbers are obviously being fudged, and they know that the market is going to respond positively to blustering and bald-faced lies. It's a self-reinforcing cycle.
This doesn't explain the 50.0 column height though.
Just remember, everyone involved with these presentations is getting a guaranteed $1.5 million bonus. Then cry a little.
Why, unless specifically to make inaccurate and misleading inconsistencies of this type possible, would you make charts for a professional presentation by a mechanism that involves separately, manually creating the bars and the labels in the first place? I mean, maybe if you were doing something artistic with the style that wasn't supported in charting software you might, but these are the most basic generic bar charts, except for the inconsistencies.
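Exactly. In any ordinary charting library the bar geometry and the printed label come from the same number, so they physically can't disagree. A minimal sketch with matplotlib; the model names and values here are just placeholders, not the slide's real data:

    import matplotlib.pyplot as plt

    # Placeholder data -- each number drives BOTH the bar height and its label.
    names = ["Model A", "Model B"]
    values = [47.4, 50.0]

    fig, ax = plt.subplots()
    bars = ax.bar(names, values)
    ax.bar_label(bars, fmt="%.1f%%")  # label text is derived from the bar heights
    ax.set_ylim(0, 100)               # fixed 0-100 axis sidesteps baseline games
    ax.set_ylabel("Deception rate (%)")
    fig.savefig("chart.png")

The only way to get a mismatch like the one in the presentation is to draw the rectangles and type the numbers as two separate manual steps.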
https://gizmodo.com/leaked-documents-show-openai-has-a-very-...
[1] If a computer can perform the task, its economic usefulness drops to near zero, and new economically useful tasks which computers can't do will take its place.
If you wouldn't mind reviewing https://news.ycombinator.com/newsguidelines.html and taking the intended spirit of the site more to heart, we'd be grateful.
I don’t believe they intentionally fucked up the graphs, but it is nonetheless funny to see how much of an impact that has had. Talk about bad luck…
Lots of hype from Sam Altman and nothing to really show for it.
https://openai.com/index/introducing-gpt-5/
So, maybe this is just sloppiness and not intentionally misleading. But still, not a good look when the company burning through billions of dollars in cash and promising to revolutionize all human activity can't put together a decent PowerPoint.
probably AI generated
GPT-5 has to be one of the most underwhelming releases to date, and that's fresh on the heels of the "gift" of GPT-OSS.
The hottest news out of OpenAI lately is who Mark Zuckerberg has added to Meta's "Superintelligence" roster.
For me, it's just another nice incremental improvement. Nothing special, but who doesn't like smarter better models? The drop in hallucination rates also seems meaningful for real-world usage.
I'm genuinely a bit concerned that LLM true believers are beginning to, at some level, adopt the attitude that correctness _simply does not matter_, not only in the output that spews from their robot gods, but _in general_.
> Hmm. We’re having trouble finding that site.
> We can’t connect to the server at www.vibechart.net.
Imagine a revolutionary technology comes out that has the potential to increase quality of life, longevity and health, productivity and the standard of living, or lead to never before seen economic prosperity, discover new science, explain things about the universe, or simply give lonely people a positive outlet.
Miraculously, this technology is free to use, available to anyone with an internet connection.
But there was one catch: during its release, an error was made on a chart.
Where should this community focus its attention?
that should be a tell that other things may be rigged to look better than they are
So this miraculous technology that can do everything, cure diseases, reverse human aging, absolve us of our sins, etc. can't accurately make a bar chart? Something kids learn in 5th grade mathematics? (At least I did; mileage might vary there)
If something is free but not open source, you are the product.
Here is the corrected version:
Imagine a revolutionary technology comes out that has the potential to increase quality of life, longevity and health, productivity and the standard of living, or lead to never before seen economic prosperity, discover new science, explain things about the universe, or simply give lonely people a positive outlet.
But there was one catch: during its release, an error was made on a chart. It turns out that it did not deliver the massively exaggerated benefits that were promised, that it merely represents a minor incremental improvement over its predecessor, and that it will be overshadowed in a matter of months by another release from a competitor.
Where should this community focus its attention?
LLMs can be a huge performance boost, when used wisely (i.e. not just blindly using whatever they spit out)
Maybe the fact that there were additional blunders, such as the incorrect explanation of the Bernoulli Effect, suggests that the team responsible for organizing this presentation didn't review every detail carefully. Maybe I'm reading too much into a simple mistake.
From the HN FAQ:
> What does [flagged] mean?
> Users flagged the post as breaking the guidelines or otherwise not belonging on HN.
> Moderators sometimes also add [flagged] (though not usually on submissions), and sometimes turn flags off when they are unfair.
Users flagged it. We can only guess why users flag things, but in this case there had been so much coverage of GPT-5 on the frontpage, plus the chart gaffe was being discussed extensively in those threads, that they probably found this post some combination of repetitive and unsubstantive.
It ended up spending 14 hours on the frontpage anyhow, which is quite a lot, especially for one of those single-purpose sites that people spin up for joke/drama purposes. Those are a great internet tradition but not always the best fit for HN (https://news.ycombinator.com/newsguidelines.html).
EDIT: I was looking just at the first chart. I didn't see there's more below.
And even if it’s just one chart, there are 3 or 4 bars (depending on how you count), so they screwed up 33%/25% of the chart.
Quite an error margin.
That one was added later, if I interpret the attribution at the bottom correctly. And I'm also pretty sure it wasn't there when I first saw it pop up.
I could completely believe someone who is all-in on the tech, working in marketing, and not really that familiar with the failure modes, using a prompt like this and just missing the bad edit.
Apparently quite a bit.