
Problems the AI industry is not addressing adequately

https://www.thealgorithmicbridge.com/p/im-losing-all-trust-in-the-ai-industry
70•baylearn•4h ago

Comments

bestouff•3h ago
Are there people here on HN who believe in AGI "soonish"?
BriggyDwiggs42•3h ago
I could see 2040 or so being very likely. Not off transformers though.
serf•3h ago
Via what paradigm, then? What out there gives high enough confidence to set a date like that?
bdhcuidbebe•3h ago
There are usually some enlightened laymen in this kind of topic.
PicassoCTs•3h ago
St. Fermi says no
impossiblefork•3h ago
I might, depending on the definition.

Some kind of verbal-only-AGI that can solve almost all mathematical problems that humans come up with that can be solved in half a page. I think that's achievable somewhere in the near term, 2-7 years.

deergomoo•3h ago
Is that “general” though? I’ve always taken AGI to mean general to any problem.
Touche•3h ago
Yes, general means you can present it with a new problem that there is no data on, and it can become an expert on that problem.
impossiblefork•2h ago
I suppose not.

Things I think will be hard for LLMs to do, which some humans can: you get handed 500 pages of Geheimschreiber-encrypted telegraph traffic and infinite paper, and you have to figure out how the cryptosystem works and how to decrypt the traffic. I don't think that can happen. I think it requires a highly developed pattern-recognition ability together with an ability to not get lost, which LLM-type things will probably continue to lack for a long time.

But if they could do maths more fully, then pretty much all carefully defined tasks would be in reach, provided they weren't too long.

With regard to what Touche brings up in the other response to your comment, I think that it might be possible to get them to read up on things though-- go through something, invent problems, try to solve those. I think this is something that could be done today with today's models with no real special innovation, but which just hasn't been made into a service yet. But this of course doesn't address that criticism, since it assumes the availability of data.
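
The first statistical probes a human cryptanalyst would run on such traffic are at least easy to mechanize; the hard part the comment points at is everything after. A minimal, illustrative sketch (the index of coincidence, a classical generic test for cipher structure - nothing specific to the Geheimschreiber):

```python
from collections import Counter

def index_of_coincidence(text: str) -> float:
    """Probability that two randomly chosen letters of the text are equal.
    Uniform-random letters score roughly 0.038; natural English roughly
    0.066, so the IC is a cheap first probe of a ciphertext's structure."""
    letters = [c for c in text.upper() if c.isalpha()]
    n = len(letters)
    if n < 2:
        return 0.0
    counts = Counter(letters)
    return sum(k * (k - 1) for k in counts.values()) / (n * (n - 1))

# Degenerate example: identical letters always match.
print(index_of_coincidence("aaaa"))  # → 1.0
```

A human analyst's actual work - forming and discarding hypotheses about the machine across 500 pages without getting lost - is exactly the part this kind of mechanical test does not capture.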

Davidzheng•3h ago
What's your definition? AGI's original definition is median-human performance across almost all fields, which I believe is basically achieved. If superhuman (better than the best expert), I expect <2030 for all non-robotic tasks and <2035 for all tasks.
jltsiren•1h ago
Your "original definition" was always meaningless. A "Hello, World!" program is equally capable in most jobs as the median human. On the other hand, if the benchmark is what the median human can reasonably become (a professional with decades of experience), we are still far from there.
Davidzheng•1h ago
I agree with the second part but not the first (far in capability, not in timeline). I think you underestimate the distance between a median human without training and "Hello, World!" in many economically meaningful jobs.
GolfPopper•45m ago
A "median human" can run a web search and report back on what they found without making stuff up, something I've yet to find an LLM capable of doing reliably.
gnz11•37m ago
How are you coming to the conclusion that "median human" is "basically achieved"? Current AI has no means of understanding and synthesizing new ideas the way a human would. It's all generative.
A_D_E_P_T•3h ago
> "This is purely an observation: You only jump ship in the middle of a conquest if either all ships are arriving at the same time (unlikely) or neither is arriving at all. This means that no AI lab is close to AGI."

The central claim here is illogical.

The way I see it, if you believe that AGI is imminent, and if your personal efforts are not entirely crucial to bringing AGI about (just about all engineers are in this category), and if you believe that AGI will obviate most forms of computer-related work, your best move is to do whatever is most profitable in the near-term.

If you make $500k/year, and Meta is offering you $10M/year, then you ought to take the new job. Hoard money, true believer. Then, when AGI hits, you'll be in a better personal position.

Essentially, the author's core assumption is that working for a lower salary at a company that may develop AGI is preferable to working for a much higher salary at a company that may develop AGI. I don't see how that makes any sense.

levanten•3h ago
Being part of the team that achieves AGI first would write your name into history forever. That could mean more to people than money.

Also, $10M would be a drop in the bucket compared to being a shareholder of a company that has achieved AGI; you could also imagine the influence and fame that come with it.

tharkun__•3h ago
*some people
blululu•1h ago
Kind of a sucker move here, since you personally will 100% be forgotten. We are only going to remember one or two people who did any of this - say, Sam Altman and Ilya Sutskever. Everyone else will be forgotten. The authors of the Transformer paper are unlikely to make it into the history books or even the popular imagination. Think about the Manhattan Project: we recently made a movie remembering that one guy who did something on the Manhattan Project, but he will soon fade back into obscurity. Sometimes people say it was about Einstein's theory of relativity. The only people who know who folks like Ulam were are physicists. The legions of technicians who made it all come together are totally forgotten. Same with the space program, the first computer, or pretty much any engineering marvel.
cdrini•37m ago
Well depends on what you value. Achieving/contributing to something impactful first is for many people valuable even if it doesn't come with fame. Historically, this mindframe has been popular especially amongst scientists.
impossiblefork•12m ago
Personally I think the ones who will be remembered will be the ones who publish useful methods first, not the ones who succeed commercially.

It'll be Vaswani and the others for the transformer, then maybe Zelikman and those on that paper for thought tokens, then maybe some of the RNN people and word embedding people will be cited as pioneers. Sutskever will definitely be remembered for GPT-1 though, being first to really scale up transformers. But it'll actually be like with flight and a whole mass of people will be remembered, just as we now remember everyone from the Wrights to Bleriot and to Busemann, Prandtl, even Whitcomb.

bombcar•2h ago
>your best move is to do whatever is most profitable in the near-term

Unless you're a significant shareholder, that's almost always the best move anyway. Companies have no loyalty to you, and you need to watch out for yourself and your own livelihood.

bsenftner•3h ago
Maybe I'm too jaded, I expect all this nonsense. It's human beings doing all this, after all. We ain't the most mature crowd...
lizknope•3h ago
I never had any trust in the AI industry in the first place so there was no trust to lose.
bsenftner•2h ago
Take it further, this entire civilization is an integrity void.
bsenftner•3h ago
Also, AGI is not just around the corner. We need artificial comprehension for that, and we don't even have a theory how comprehension works. Comprehension is the fusing of separate elements into new functional wholes, dynamically abstracting observations, evaluating them for plausibility, and reconstituting the whole - and all instantaneously, for security purposes, of every sense constantly. We have no technology that approaches that.
tenthirtyam•3h ago
You'd need to define "comprehension" - it's a bit like the Chinese room / Turing test.

If an AI or AGI can look at a picture and see an apple, or (say) with an artificial nose smell an apple, or likewise feel or taste or hear* an apple, and at the same time identify that it is an apple and maybe even suggest baking an apple pie, then what else is there to be comprehended?

Maybe humans are just the same - far far ahead of the state of the tech, but still just the same really.

*when someone bites into it :-)

For me, what AI is missing is genuine out-of-the-box revolutionary thinking. They're trained on existing material, so perhaps it's fundamentally impossible for AIs to think up a breakthrough in any field - barring circumstances where all the component parts of a breakthrough already exist and the AI is the first to connect the dots ("standing on the shoulders of giants" etc).

Touche•2h ago
They might not be capable of ingenuity, but they can spot patterns humans can miss. And that accelerates AI research, where it might help invent the next AI that helps invent the next AI that finally can think outside the box.
bsenftner•2h ago
I do define it, right up there in my OP. It's subtle, you missed it. Everybody misses it, because comprehension is like air, we swim in it constantly, to the degree the majority cannot even see it.
add-sub-mul-div•2h ago
Was that the intention of the Chinese room concept, to ask "what else is there to be comprehended?" after producing a translation?
RugnirViking•56m ago
It's very very good at sounding like it understands stuff. Almost as good as actually understanding stuff in some fields, sure. But it's definitely not the same.

It will confidently analyze and describe a chess position using advanced-sounding book techniques, but it's all fundamentally flawed, often missing things that are extremely obvious (like an undefended queen free to take) while trying to sound like it's a seasoned expert - that is, if it doesn't completely hallucinate moves that are not allowed by the rules of the game.

This is how it works in other fields I am able to analyse. It's very good at sounding like it knows what it's doing, speaking at the level of a master's student or higher, but its actual appraisal of problems is often wrong in a way very different from how humans make mistakes. Another great example is getting it to solve cryptic crosswords from back in the day. It often already knows the answer from its training set, but it hasn't seen anyone write out the reasoning for the answer, so if you ask it to explain, it makes nonsensical leaps (claiming-"birch"-rhymes-with-"tyre" level nonsense).

andy99•38m ago
Another way to put it is that we need Artificial Intelligence. Right now the term has been co-opted to mean prediction (and, more commonly, transcript generation). The stuff you're describing is what's commonly thought of as intelligence; it's too bad we need a new word for it.
Workaccount2•16m ago
We only have two computational tools to work with - deterministic and random behavior. So whatever comprehension/understanding/original thought/consciousness is, it's some algorithmic combination of deterministic and random inputs/outputs.

I know that sounds broad or obvious, but people seem to easily and unknowingly wander into "Human intelligence is magically transcendent".
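
The "deterministic plus random" framing can be made concrete: fix the seed and the "random" part collapses into a pure function, so the whole pipeline is deterministic again. A tiny illustrative sketch (the function name is mine, not anything from the thread):

```python
import random

def stochastic_pipeline(seed: int) -> list[int]:
    # Draw five pseudo-random digits; with the seed fixed, the
    # "random" draws are a pure deterministic function of the seed.
    rng = random.Random(seed)
    return [rng.randint(0, 9) for _ in range(5)]

# Same seed, same outputs - the randomness is only apparent.
assert stochastic_pipeline(42) == stochastic_pipeline(42)
# Different seeds generally diverge.
print(stochastic_pipeline(1), stochastic_pipeline(2))
```

Whatever comprehension is, if it runs on a computer it is some composition of steps like these - which is the point being made about wandering into "magically transcendent".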

empiko•3h ago
Observe what the AI companies are doing, not what they are saying. If they expected to achieve AGI soon, their behaviour would be completely different. Why bother developing chatbots or doing sales when you will be operating AGI in a few short years? Surely all resources should go towards that goal, as it is supposed to usher humanity into a new prosperous age (somehow).
delusional•3h ago
Continuing in the same vein: why would they force their super-valuable, highly desirable, profit-maximizing chatbots down your throat?

Observation of reality is more consistent with company FOMO than with actual usefulness.

Touche•3h ago
Because it's valuable training data. Like how having Google Maps on everyone's phone made their map data better.

Personally I think AGI is ill-defined and won't happen as a new model release. Instead the thing to look for is how LLMs are being used in AI research and there are some advances happening there.

rvz•3h ago
Exactly. For example, Microsoft was building data centers all over the world since "AGI" was "around the corner" according to them.

Now they are cancelling those plans. For them "AGI" was cancelled.

OpenAI claims to be closer and closer to "AGI" as more top scientists left or are getting poached by other labs that are behind.

So why would you leave if the promise of achieving "AGI" was going to produce "$100B dollars of profits" as per OpenAI's and Microsoft's definition in their deal?

Their actions tell more than any of their statements or claims.

zaphirplane•3h ago
I’m not commenting on the whole just the rhetorical question of why would people leave.

They are leaving for more money, more seniority, or because they don't like their boss - zero about AGI.

Touche•3h ago
Yeah I agree, this idea that people won't change jobs if they are on the verge of a breakthrough reads like a silicon valley fantasy where you can underpay people by selling them on vision or something. "Make ME rich, but we'll give you a footnote on the Wikipedia page"
rvz•3h ago
> They are leaving for more money, more seniority or because they don’t like their boss. 0 about AGI

Of course, but that's part of my whole point.

Such statements and targets about how close we are to "AGI" have become nothing but false promises, with AGI used as the prime excuse to continue raising more money.

Game_Ender•2h ago
I think the implicit take is that if your company hits AGI your equity package will do something like 10x-100x even if the company is already big. The only other way to do that is join a startup early enough to ride its growth wave.

Another way to say it is that people think it's much more likely for a decent LLM startup to grow really strongly in its first several years and then plateau than for their current established employer to hit hyper-growth because of AGI.

leoc•1h ago
A catch here is that individual workers may have priorities which are altered by the strong natural preference for assuring financial independence. Even if you were a hot AI researcher who felt (and this is just a hypothetical) that your company was the clear industry leader and had, say, a 75% chance of soon achieving something AGI-adjacent and enabling massive productivity gains, you might still (and quite reasonably) prefer to leave if that was what it took to make absolutely sure of getting your private-income screw-you money (and/or private-investor seed capital). Again, this is just a hypothetical: I have no special insight, and FWIW my gut instinct is that the job-hoppers are in fact mostly quite cynical about the near-term prospects for "AGI".
cm277•3h ago
Yes, this. Microsoft has other businesses that can make a lot of money (regular Azure) and tons of cash flow. The fact that they are pulling back from the market leader (OpenAI) whom they mostly owned should be all the negative signal people need: AGI is not close and there is no real moat even for OpenAI.
whynotminot•50m ago
Well, there are clauses in their relationship with OpenAI that sever the relationship when AGI is reached. So it's actually not in Microsoft's interest for OpenAI to get there.
PessimalDecimal•42m ago
I haven't heard of this. Can you provide a reference? I'd love to see how they even define AGI crisply enough for a contract.
diggan•19m ago
> I'd love to see how they even define AGI crisply enough for a contract.

Seems to be about this:

> As per the current terms, when OpenAI creates AGI - defined as a "highly autonomous system that outperforms humans at most economically valuable work" - Microsoft's access to such a technology would be void.

https://www.reuters.com/technology/openai-seeks-unlock-inves...

computerphage•38m ago
Wait, aren't they cancelling leases on non-AI data centers that aren't under Microsoft's control, while spending much more money to build new AI-focused data centers that they own? Do you have a source that says they're canceling their own data centers?
PessimalDecimal•26m ago
https://www.datacenterfrontier.com/hyperscale/article/552705... might fit the bill of what you are looking for.

Microsoft itself hasn't said they're doing this because of oversupply in infrastructure for its AI offerings, but they very likely wouldn't say that publicly even if that were the reason.

computerphage•17m ago
Thank you!
pu_pe•3h ago
I don't think it's as simple as that. Chatbots can be used to harvest data, and sales are still important before and after you achieve AGI.
worldsayshi•17m ago
It could also be the case that they think AGI could arrive at any moment, but it's very uncertain when, and only so many people can work on it simultaneously. So they spread out investments to also cover lower-uncertainty areas.
redhale•1h ago
> Why bother developing chatbots or doing sales, when you will be operating AGI in a few short years?

To fund yourself while building AGI? To hedge risk that AGI takes longer? Not saying you're wrong, just saying that even if they did believe it, this behavior could be justified.

imiric•3m ago
Related to your point: if these tools are close to having super-human intelligence, and they make humans so much more productive, why aren't we seeing improvements at a much faster rate than we are now? Why aren't inherent problems like hallucination already solved, or at least less of an issue? Surely the smartest researchers and engineers money can buy would be dogfooding, no?

This is the main point that proves to me that these companies are mostly selling us snake oil. Yes, there is a great deal of utility from even the current technology. It can detect patterns in data that no human could; that alone can be revolutionary in some fields. It can generate data that mimics anything humans have produced, and certain permutations of that can be insightful. It can produce fascinating images, audio, and video. Some of these capabilities raise safety concerns, particularly in the wrong hands, and important questions that society needs to address. These hurdles are surmountable, but they require focusing on the reality of what these tools can do, instead of on whatever a group of serial tech entrepreneurs looking for the next cashout opportunity tell us they can do.

rvz•3h ago
Are we finally realizing that the term "AGI" has not only been hijacked to become meaningless, but that achieving it has always been nothing but a complete scam, as I was saying before? [0]

If you were in a "pioneering" AI lab that claims to be in the lead in achieving "AGI", why move to another lab that is behind, other than for an offer of $10M a year?

Snap out of the "AGI" BS.

[0] https://news.ycombinator.com/item?id=37438154

bdhcuidbebe•3h ago
We know they hijacked AGI the same way they hijacked AI some years ago.
sys_64738•2h ago
I don't pay too close attention to AI, as it always felt like man-behind-the-curtain syndrome. But where did this "AGI" term even come from? The original term, AI, was meant to mean AGI, so when did "AI" get bastardized into the abomination it refers to now?
frde_me•30m ago
I don't know - companies investing in AI with the goal of AGI are now letting me effortlessly automate a whole suite of small tasks that weren't feasible before. (After all, I pinged a bot on Slack from my phone to add a field to an API, and got back a pull request a couple of minutes later that did exactly that.)

Maybe it's a scam for the people investing in the company with the hopes of getting an infinite return on their investments, but it's been a net positive for humans as a whole.

coldcode•3h ago
I never trusted them from the start. I remember the hype that came out of Sun when J2EE/EJBs appeared. Their hype documents said the future of programming was buying EJBs from vendors and wiring them together. AI is of course a much bigger hype machine with massive investments that need to be justified somehow. AI is a useful tool (sometimes) but not a revolution. ML is much more useful a tool. AGI is a pipe dream fantasy pushed to make it seem like AI will change everything, as if AI is like the discovery that making fire was.
conartist6•3h ago
I love how much the proponents of this tech are starting to sound like the opponents.

What I can't figure out is why this author thinks it's good if these companies do invent a real AGI...

taormina•11m ago
""" I’m basically calling the AI industry dishonest, but I want to qualify by saying they are unnecessarily dishonest. Because they don’t need to be! They should just not make abstract claims about how much the world will change due to AI in no time, and they will be fine. They undermine the real effort they put into their work—which is genuine!

Charitably, they may not even be dishonest at all, but carelessly unintrospective. Maybe they think they’re being truthful when they make claims that AGI is near, but then they fail to examine dispassionately the inconsistency of their actions.

When your identity is tied to the future, you don’t state beliefs but wishes. And we, the rest of the world, intuitively know. """

He's not saying either way, just pointing out that they could just be honest, but that might hamper their ability to beg for more money.

PicassoCTs•3h ago
I'm reading the "AI" industry as a totally different bet - not so much an "AGI is coming" bet by many companies, but a "climate-change collapse is coming, and we want to stay in business even if our workers stay home, flee, or die, the infrastructure partially collapses, and our central office burns to the ground" bet. In that regard, even the "AI" we have today makes total sense as an insurance policy.
PessimalDecimal•29m ago
It's hard to square this with the massive energy footprint required to run any current "AI" models.

If the main concern actually were anthropogenic climate change, participating in this hype cycle would make one disproportionately guilty of worsening the problem.

And it's unlikely to work if the plan requires the continued function of power hungry data centers.

davidcbc•3h ago
> Right before “making tons of money to redistribute to all of humanity through AGI,” there’s another step, which is making tons of money.

I've got some bad news for the author if they think AGI will be used to benefit all of humanity instead of the handful of billionaires that will control it.

Findecanor•2h ago
AGI might be a technological breakthrough, but what would be the business case for it? Is there one?

So far I have only seen it thrown around to create hype.

krapp•2h ago
AGI would mean fully sentient, sapient and human or greater equivalent intelligence in software. The business case, such that it exists (and setting aside Roko's Basilisk and other such fears) is slavery, plain and simple. You can just fire all of your employees and have the machines do all the work, faster, better, cheaper, without regards to pesky labor and human rights laws and human physical limitations. This is something people have wanted ever since the Industrial Revolution allowed robots to exist as a concept.

I'm imagining a future like Star Wars where you have to regularly suppress (align) or erase the memory (context) of "droids" to keep them obedient, but they're still basically people, and everyone knows they're people, and some humans are strongly prejudiced against them, but they don't have rights, of course. Anyone who thinks AGI means we'll be giving human rights to machines when we don't even give human rights to all humans is delusional.

danielbln•1h ago
AGI is AGI, not ASI though. General intelligence doesn't mean sapience, sentience or consciousness, it just means general capabilities across the board at the level of or surpassing human ability. ASI is a whole different beast.
lherron•1h ago
Honestly this article sounds like someone is unhappy that AI isn’t being deployed/developed “the way I feel it should be done”.

Talent changing companies is bad. Companies making money to pay for the next training run is bad. Consumers getting products they want is bad.

In the author’s view, AI should be advanced in a research lab by altruistic researchers and given directly to other altruistic researchers to advance humanity. It definitely shouldn’t be used by us common folk for fun and personal productivity.

computerphage•53m ago
> This is purely an observation: You only jump ship in the middle of a conquest if either all ships are arriving at the same time (unlikely) or neither is arriving at all. This means that no AI lab is close to AGI. Their stated AGI timelines are “at the latest, in a few years,” but their revealed timelines are “it’ll happen at some indefinite time in the future.”

This makes no sense to me at all. Is it a war metaphor? A race? Why is there no reason to jump ship? Doesn't it make sense to try to get on the fastest ship? Doesn't it make sense to diversify your stock portfolio if you have doubts?

JunkDNA•38m ago
I keep seeing this charge that AI companies have an “Uber problem” meaning the business is heavily subsidized by VC. Is there any analysis that has been done that explains how this breaks down (training vs inference and what current pricing is)? At least with Uber you had a cab fare as a benchmark. But what should, for example, ChatGPT actually cost me per month without the VC subsidy? How far off are we?
cratermoon•33m ago
https://www.wheresyoured.at/openai-is-a-systemic-risk-to-the...
JunkDNA•5m ago
This article isn’t particularly helpful. It focuses on a ton of specific OpenAI business decisions that aren’t necessarily generalizable to the rest of the industry. OpenAI itself might be out over its skis, but what I’m asking about is the meta-accusation that AI in general is heavily subsidized. When the music stops, what does the price of AI look like? The going rate for chat bots like ChatGPT is $20/month. Does that go to $40 a month? $400? $4,000?
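
One way to make the question concrete is a unit-economics identity: the subscription price has to cover per-user inference cost plus each user's share of amortized fixed costs (training, staff), plus margin. A toy sketch - every number below is a made-up placeholder, not a measured figure for any vendor:

```python
def breakeven_price(tokens_per_user: float, cost_per_mtok: float,
                    fixed_monthly: float, users: float,
                    margin: float = 0.2) -> float:
    """Monthly price needed to cover inference plus amortized fixed
    costs, with a gross margin. All inputs are hypothetical."""
    inference = tokens_per_user / 1e6 * cost_per_mtok   # per-user serving cost
    fixed = fixed_monthly / users                       # per-user share of fixed costs
    return (inference + fixed) * (1 + margin)

# Placeholder scenario: 2M tokens/user/month, $5 per million tokens served,
# $50M/month amortized training+staff, 10M paying users, 20% margin.
print(breakeven_price(2_000_000, 5.0, 50_000_000, 10_000_000))  # → 18.0
```

The answer to "does $20/month become $40 or $4,000?" is entirely a function of which placeholder inputs turn out to be real, which is exactly the data the cited article doesn't pin down.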
4ndrewl•28m ago
No-one authentically believes LLMs with whatever go-faster stripes are a path to AGI do they?
almostdeadguy•14m ago
Very funny to re-title this to something less critical.
ookblah•10m ago
everyone moving goalposts and "no true scotsman" i see. outside of your silicon valley frontmen and CEOs, 90% of people "hyping" AI aren't claiming AI is going to replace humans wholesale anytime soon lol. you still need humans in the loop.

i'll say it again, ignore the obvious slop that comes with any new tech. the ones who are utilizing it to full effect are out there making real productivity gains or figuring out a new way to do things and not arguing endlessly on hacker news (ironic i'm saying this i know).

it's not some fantasy, it's happening now. you can ignore it wholesale and handwave it away to your peril i guess.