You give them too much street cred. I'm not convinced they're even good at that.
Teasing over the various Midwestern accents is sort of like dealing with boxing great Joe Louis: you can run, but you just can't hide.
It's the Gell-Mann amnesia effect in action: we listen to CEOs talk about science, tech, and engineering, and they sound like morons because we know these fields. But their audience is specifically people who don't, and to them, thanks to the effect, the CEOs sound like they know what they're talking about.
"Our product is the second coming of Christ and if you give me money now you'll 100000x your investment!" is the correct answer to all questions when you're in that position. I'm not saying it's admirable, but it's what you do to keep money coming in for the time being. It's not that deep.
GenAI has all the media attention in the world, all the capital in the world, and a huge amount of human resources poured into it. I think (or at least hope) that this fact isn't controversial to anyone, whether they're for or against it. We can then ask ourselves whether models that can write an e-shop API are really an acceptable result, given the near-incomprehensible amounts spent on it.
One could say "It has also led to advances in [other field of expertise]", but couldn't a fraction of that money have achieved greater results if invested directly in that field, to build actual specialized tools and infrastructure? That's an unfalsifiable hypothetical, but reading Sam Altman's "Gentle Singularity"[1] blog post, it seems like wild guesses are perfectly fair arguing ground.
On a small tangent about "Gentle Singularity", I think it's not fair to scoff at Ed's delivery when Sam Altman also pulls a lot of sneaky tricks when addressing the public:
> being amazed that it can make life-saving medical diagnoses
A classification model that spots tumors has nothing to do with his product category; I find it very dishonest to sandwich this one example between two examples of generative AI.
> A lot more people will be able to create software, and art. But the world wants a lot more of both, and experts will probably still be much better than novices, as long as they embrace the new tools.
"the world wants a lot more of both" doesn't quite justify the flood of slop on every single art-sharing platform. That's like saying the world wants a lot more of communication to hand wave the 10s of spam calls you get each day. "As long as they embrace the new tools" is just parroting the "adapt or die Luddite!" argument. As the CEO of the world's foremost AI company I expect more than the average rage bait comment you'd see on a forum, the fact that it's somehow an "improvement" is taken for granted, even though Sam is talking about fields he's never even dabbled in.
This probably doesn't carry much weight since my biases are transparent, but I believe there's far more intellectual honesty in Ed's arguments than in much of what Sam Altman says.
PaulHoule•7mo ago
Dark matter is the most notable contradiction in physics today: there is a complete mismatch between the physics we see in the lab, in the solar system, and in globular clusters, and the physics we see at the galactic scale. Contrast that with Newton's unified treatment of gravity on Earth and in the Solar System.
There is no lack of darkon candidates or MOND ideas [1]; what is lacking is an experiment or observation that can confirm one or the other. Similarly, a 1000x bigger TeraKamiokande or GigaKATRIN could constrain proton decay or put some precision on the neutrino mass, but both of these are basically blue-collar problems.
[1] I used to like MOND, but the more I've looked at it, the more I've adopted the mainstream view of "this dark matter has a galaxy in it" as opposed to "this galaxy has dark matter in it". MOND fever is driven by a revisionist history in which dark matter was discovered by Vera Rubin, not Zwicky [2], and that privileges galactic rotation curves (which MOND handles very well) over the many other kinds of evidence for DM.
[2] ... which I'd love to believe since Rubin did her work at my Uni!
alganet•7mo ago
The article tries to cast these personalities as "master manipulator" figures. It doesn't matter that 99% of the text actually _criticizes_ them.
What matters is the takeaway a typical reader would get from it, something along the lines of "they're not tech geniuses, they're manipulators".
This takeaway is carefully designed to cater to selected audiences (well, the author claims to be a media trainer, so fair game, I guess).
I think the intent is actually to _promote_ these personalities. I know it sounds contradictory, but as I said, it gives them too much credit for being "good manipulators".
Which audiences are catered to and how they are expected to react is an exercise I'll leave for the reader.
That's a general overview of what this article does. None of it relates to actual claims.
PaulHoule•7mo ago
Isn't that what you're saying here? The author pretty obviously thinks that Altman and company are horrible bullshitters, and that that's a bad thing; but you seem to think that somebody could come away with the conclusion that they are actually really good bullshitters.
alganet•7mo ago
It's not about backlash. Media has several ways of delivering different messages to different audiences with a single piece.
What I mean by "audiences" is much more granular than "against" or "in favor".
The article attempts to do this (there are hints of this granularity of target profiles all over it), but it's not very subtle about it.