The central claim here is illogical.
The way I see it, if you believe that AGI is imminent, if your personal efforts are not crucial to bringing it about (which covers just about all engineers), and if you believe that AGI will obviate most forms of computer-related work, then your best move is to do whatever is most profitable in the near term.
If you make $500k/year, and Meta is offering you $10M/year, then you ought to take the new job. Hoard money, true believer. Then, when AGI hits, you'll be in a better personal position.
Essentially, the author's core assumption is that working for a lower salary at a company that may develop AGI is preferable to working for a much higher salary at a company that may develop AGI. I don't see how that makes any sense.
Also, $10M would be a drop in the bucket compared to being a shareholder of a company that has achieved AGI; imagine the influence and fame that would come with it as well.
It'll be Vaswani and the others for the transformer, then maybe Zelikman and those on that paper for thought tokens, then maybe some of the RNN people and word embedding people will be cited as pioneers. Sutskever will definitely be remembered for GPT-1 though, being first to really scale up transformers. But it'll actually be like with flight: a whole mass of people will be remembered, just as we now remember everyone from the Wrights to Blériot and to Busemann, Prandtl, even Whitcomb.
Unless you’re a significant shareholder, that’s almost always the best move anyway. Companies have no loyalty to you; you need to look out for yourself and for what you’re living for.
If an AI or AGI can look at a picture and see an apple, or (say) with an artificial nose smell an apple, or likewise feel or taste or hear* an apple, and at the same time identify that it is an apple and maybe even suggest baking an apple pie, then what else is there to be comprehended?
Maybe humans are just the same - far far ahead of the state of the tech, but still just the same really.
*when someone bites into it :-)
For me, what AI is missing is genuine out-of-the-box revolutionary thinking. They're trained on existing material, so perhaps it's fundamentally impossible for AIs to think up a breakthrough in any field - barring circumstances where all the component parts of a breakthrough already exist and the AI is the first to connect the dots ("standing on the shoulders of giants" etc).
It will confidently analyze and describe a chess position using advanced-sounding book techniques, but it's all fundamentally flawed, often missing things that are extremely obvious (like an undefended queen free to take) while trying to sound like a seasoned expert - that is, if it doesn't completely hallucinate moves that are not allowed by the rules of the game.
This is how it works in other fields I am able to analyse. It's very good at sounding like it knows what it's doing, speaking at the level of a master's student or higher, but its actual appraisal of problems is often wrong in a way very different from how humans make mistakes. Another great example is getting it to solve cryptic crosswords from back in the day. It often already knows the answer from its training set, but it hasn't seen anyone write out the reasoning for the answer, so if you ask it to explain, it makes nonsensical leaps (claiming "birch" rhymes with "tyre"-level nonsense).
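To make the chess point concrete, here is a minimal sketch (assuming the python-chess library; the position and suggested moves are hypothetical placeholders, not output from any particular model) of how one can mechanically check whether a model's recommended move is even legal:

```python
# Minimal sketch: verify a model-suggested chess move against the actual rules.
# Assumes the python-chess package (pip install chess); the moves below are
# illustrative placeholders, not real model output.
import chess

def is_legal_suggestion(fen: str, move_san: str) -> bool:
    """Return True if move_san is a legal move in the position described by fen."""
    board = chess.Board(fen)
    try:
        board.parse_san(move_san)  # raises a ValueError subclass on illegal or garbled moves
        return True
    except ValueError:
        return False

# From the starting position: one legal suggestion, one hallucinated one.
print(is_legal_suggestion(chess.STARTING_FEN, "e4"))    # True
print(is_legal_suggestion(chess.STARTING_FEN, "Qxd7"))  # False: no such capture exists
```

A filter like this doesn't make the analysis good, but it at least catches the outright rule violations.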
I know that sounds broad or obvious, but people seem to easily and unknowingly wander into "Human intelligence is magically transcendent".
Observations of reality are more consistent with company FOMO than with actual usefulness.
Personally I think AGI is ill-defined and won't happen as a new model release. Instead the thing to look for is how LLMs are being used in AI research and there are some advances happening there.
Now they are cancelling those plans. For them "AGI" was cancelled.
OpenAI claims to be getting closer and closer to "AGI" even as more top scientists leave or get poached by other labs that are behind.
So why would you leave if achieving "AGI" was going to produce "$100B of profits", as per OpenAI's and Microsoft's definition in their deal?
Their actions tell more than any of their statements or claims.
They are leaving for more money, more seniority, or because they don't like their boss. Nothing to do with AGI.
Of course, but that's part of my whole point.
Such statements and targets about how close we are to "AGI" have become nothing but false promises, with AGI used as the prime excuse to keep raising more money.
Another way to say it is that people think it's much more likely that each decent LLM startup will grow strongly for its first several years and then plateau than that the current established player will hit hypergrowth because of AGI.
Seems to be about this:
> As per the current terms, when OpenAI creates AGI - defined as a "highly autonomous system that outperforms humans at most economically valuable work" - Microsoft's access to such a technology would be void.
https://www.reuters.com/technology/openai-seeks-unlock-inves...
Microsoft itself hasn't said they're doing this because of oversupply in infrastructure for its AI offerings, but they very likely wouldn't say that publicly even if that's the reason.
To fund yourself while building AGI? To hedge risk that AGI takes longer? Not saying you're wrong, just saying that even if they did believe it, this behavior could be justified.
This is the main point that proves to me that these companies are mostly selling us snake oil. Yes, there is a great deal of utility from even the current technology. It can detect patterns in data that no human could; that alone can be revolutionary in some fields. It can generate data that mimics anything humans have produced, and certain permutations of that can be insightful. It can produce fascinating images, audio, and video. Some of these capabilities raise safety concerns, particularly in the wrong hands, and important questions that society needs to address. These hurdles are surmountable, but they require focusing on the reality of what these tools can do, instead of on whatever a group of serial tech entrepreneurs looking for the next cashout opportunity tell us they can do.
If you were at a "pioneering" AI lab that claims to be in the lead on achieving "AGI", why move to another lab that is behind, other than for the $10M a year it's offering?
Snap out of the "AGI" BS.
Maybe it's a scam for the people investing in the company with the hopes of getting an infinite return on their investments, but it's been a net positive for humans as a whole.
What I can't figure out is why this author thinks it's good if these companies do invent a real AGI...
Charitably, they may not even be dishonest at all, but carelessly unintrospective. Maybe they think they’re being truthful when they make claims that AGI is near, but then they fail to examine dispassionately the inconsistency of their actions.
When your identity is tied to the future, you don’t state beliefs but wishes. And we, the rest of the world, intuitively know.
He's not saying either way, just pointing out that they could just be honest, but that might hamper their ability to beg for more money.
If the main concern actually were anthropogenic climate change, participating in this hype cycle would make one disproportionately guilty of worsening the problem.
And it's unlikely to work if the plan requires the continued function of power hungry data centers.
I've got some bad news for the author if they think AGI will be used to benefit all of humanity instead of the handful of billionaires that will control it.
So far I have only seen it thrown around to create hype.
I'm imagining a future like Star Wars where you have to regularly suppress (align) or erase the memory (context) of "droids" to keep them obedient, but they're still basically people, and everyone knows they're people, and some humans are strongly prejudiced against them, but they don't have rights, of course. Anyone who thinks AGI means we'll be giving human rights to machines when we don't even give human rights to all humans is delusional.
Talent changing companies is bad. Companies making money to pay for the next training run is bad. Consumers getting products they want is bad.
In the author’s view, AI should be advanced in a research lab by altruistic researchers and given directly to other altruistic researchers to advance humanity. It definitely shouldn’t be used by us common folk for fun and personal productivity.
This makes no sense to me at all. Is it a war metaphor? A race? Why is there no reason to jump ship? Doesn't it make sense to try to get on the fastest ship? Doesn't it make sense to diversify your stock portfolio if you have doubts?
I'll say it again: ignore the obvious slop that comes with any new tech. The ones who are using it to full effect are out there making real productivity gains or figuring out new ways to do things, not arguing endlessly on Hacker News (ironic that I'm saying this, I know).
It's not some fantasy; it's happening now. You can ignore it wholesale and handwave it away at your peril, I guess.
Some kind of verbal-only AGI that can solve almost all mathematical problems humans come up with that can be solved in half a page. I think that's achievable somewhere in the near term, 2-7 years.
Things I think will be hard for LLMs to do, which some humans can: you get handed 500 pages of Geheimschreiber-encrypted telegraph traffic and infinite paper, and you have to figure out how the cryptosystem works and how to decrypt the traffic. I don't think that can happen. I think it requires a highly developed pattern recognition ability together with an ability to not get lost, which LLM-type things will probably continue to lack for a long time.
But if they could do maths more fully, then pretty much all carefully defined tasks would be within reach, provided they weren't too long.
With regard to what Touche brings up in the other response to your comment, I think it might be possible to get them to read up on things, though: go through something, invent problems, and try to solve those. I think this is something that could be done today with today's models, with no real special innovation, but it just hasn't been made into a service yet. This of course doesn't address that criticism, though, since it assumes the availability of data.