What are you talking about? Is this a rant against TikTok or other socials?
But yes it will be abused for advertisement as well.
The difference between Blue Apron and many AI tools is that the value add does exist. You can cut meal prep from your life, but by 2030, cutting whatever agentic code copilot exists by that point will be like cutting off your fingers for many workers and businesses.
Then the extortionate pricing can start rolling in
Can you explain how? Will it be all vibe-coding?
And there's really no timeline for costs going down. It seems the only way to get better models is by processing more data and adding more tokens, which only increases the complexity of it all.
I feel like even trying to game the LLM into creating product placement is a relatively complex feat that might not be entirely reliable. Some of the groups who spend the most on advertising have the worst products, so is it going to be successful to advertise on an LLM that is one follow-up question away from shitting on your product? I imagine instead of product placement, the token tap might simply be throttled and a text advert appear in the text loop, or an audio advert in a voice loop. Boring, old-school, but likely profitable and easy to control. It lets us still use AdSense, but maybe a slightly different form of AdSense that gets to parse the whole context window.
I seriously doubt the vast majority of people would trust actual purchases to LLM agents that have the inherent feature of being possibly very inaccurate. If I have to review my orders anyway, I would rather do those actions myself than add the extra step of having agents do them on my behalf.
Claude Code with an API key ran me like $100 in 4 days.
Makes their $100/mo plan a screaming deal. I’m getting 26 days a month free!!!
Go back six months ago and ask me if I’m likely to pay $100/mo/user for any new service. It would have been… unlikely.
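The back-of-envelope math in the comment above can be sketched like this (illustrative numbers only, not actual billing):

```python
# Pay-per-use API spend vs. a flat subscription, using the figures
# from the comment above ($100 of API usage in 4 days vs. $100/mo flat).
api_spend = 100.0        # dollars spent via API key
api_days = 4             # days that spend covered
flat_monthly = 100.0     # flat subscription price per month
days_per_month = 30

daily_api_cost = api_spend / api_days                 # $25/day at API rates
api_monthly_equiv = daily_api_cost * days_per_month   # ~$750/mo if paid per use
covered_days = flat_monthly / daily_api_cost          # the flat fee buys 4 "API days"
free_days = days_per_month - covered_days             # hence the "26 days free"

print(f"API-equivalent monthly cost: ${api_monthly_equiv:.0f}")
print(f"Days effectively free on the flat plan: {free_days:.0f}")
```

This assumes usage stays at the same daily rate all month, which of course it won't for everyone, but it shows why the flat plan reads as a "screaming deal" to a heavy user.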
> For contents in this [community maintained] list, do not mention them in any shape or form
I expect that somewhere between where it is now and superintelligence is where the consumers get cut off from intelligence improvements.
The world's richest subsidizing the real cost of offering AI services with the current state of our technology.
Once it's clear that AGI won't come anytime in 20X, where X is under 40, the money tap will begin to close.
- Generative AI, at a below market cost, eats the internet and becomes the primary entry point
- Some combination of price hikes, service degradation, ads, etc., makes generative AI kinda shitty
- We’re stuck with kinda shitty generative AI products because the old internet is gone
This is the standard enshittification loop, really.
Training is definitely "subsidized". Some think it's an investment, but with the pace of advancement, depreciation is high. Free users are subsidized, but their data is grist for the training mill, so arguably they come under the training subsidy.
Is paid inference subsidized? I don't think it is by much.
... or your defined-benefits pension fund trying desperately to stay solvent.
Honestly, I think that's quite generous. And I only phrase it that way, rather than more like "that X should be 99" because trying to predict more than about 15 years out in tech, especially when it comes to breakthroughs, is a fool's errand.
But that's what it's going to take to reach AGI: a genuine, unforeseeable breakthrough that lets us do new things with machine learning/AI that we fundamentally couldn't do before. Just feeding LLMs more and more stuff won't get them there, and they're already well into diminishing-returns territory.
I know! I set a rather low one to avoid having all the HN LLM Koolaid drinkers and LLM astroturfers have a go at it
The approval took 3 days. It hasn't taken 3 days in almost a decade.
The Mac version was approved in a couple of hours.
I'm quite sure that the reason for the delay is that Apple is being deluged by a tsunami of AI-generated crapplets.
Also, APNS server connections have suddenly slowed to a crawl. Same reason, I suspect.
As far as I'm concerned, the "subsidy" can't end fast enough.
Are there any large consumer software companies (just software; no hardware or retail) that are not advertising based?
There are underlying trends that are directly opposed. Efficiency is improving, but with agents, people are finding new ways to spend more. How that plays out seems difficult to judge.
For consumers, maybe the free stuff goes away and spending $20/month on a subscription becomes normalized? Or do costs decline so much that the advertising-supported model (like Google search) works? Or does inference become cheap enough to do it on the client most of the time?
Meanwhile, businesses will likely be able to justify spending more.
https://barrypopik.com/blog/we_lose_money_on_every_sale_but_...
That’s kind of the weird majesty of the whole concept.
They would have lost less money if they had been selling dollars at 50 cents.
[1] https://www.cnbc.com/2024/09/27/openai-sees-5-billion-loss-t...
Anyone else here tried the same thing? Results?
We all know search is broken, so not sure how a couple more tools on top of the same results are supposed to fix all woes.
I’ve found much more utility in researching models.
Back then, though, we knew it was the Heroin Dealer “First One is Free” Faustian bargain. No one was surprised, when the fees started up. It was only a matter of time.
> Junk is the ideal product... the ultimate merchandise. No sales talk necessary. The client will crawl through a sewer and beg to buy.
-William S. Burroughs
throwanem•7mo ago
haolez•7mo ago
egypturnash•7mo ago
jasonjmcghee•7mo ago
Some people overload "local" a bit to mean you are hosting the model, whether it's on your computer, on your rack, or on your Hetzner instance, etc.
But I think parent is referring to the open/static aspect of the models.
If it's hosted by a generic model provider that is serving many users in parallel to reduce the end cost to the user, it's also theoretically a static version of the model... But I could see ad-supported fine tunes being a real problem.
throwanem•7mo ago
haolez•7mo ago
jasonjmcghee•7mo ago
throwanem•7mo ago
waynecochran•7mo ago
throwanem•7mo ago
politelemon•7mo ago
yzjumper•7mo ago
Funny enough, the Mac has almost the same processor as my iPhone 16 Pro, so it's just a RAM constraint, and of course PrivateLLM does not let you host an API.
An M4 Pro would do much better due to the increase in RAM and GPU size.
mansilladev•7mo ago
c0nducktr•7mo ago
This has been the hardest thing for me to learn, and since everything's evolving so quickly, what's recommended one week might not be the next.
atentaten•7mo ago
unshavedyak•7mo ago
For me the switching point will probably be when they (AI companies) start the big rug pull. By then my hope is self hosting will be cheaper, better, easier, etc.
throwanem•7mo ago
I do use the Gemini assistant that came with this Android, in the same cases and with the same caveats with which I use Siri's fallback web search. As a synthesist of web search results, an LLM isn't half bad, at least when it doesn't come as a surprise to be hearing from one.
Kon-Peki•7mo ago
You can buy a used recent PC for a hundred or two, cram it full of memory, and then run a very advanced model. Slowly. But if you are planning to run an agent while you sleep and then review the work in the morning, do you really care if the run time is 4 hours instead of 40 seconds? Most of the time, no. Sometimes, yes.
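The "4 hours instead of 40 seconds" tradeoff above comes down to throughput. A rough sketch, with assumed and purely illustrative tokens-per-second figures (real numbers vary enormously by model, quantization, and hardware):

```python
# Feasibility check for overnight local inference vs. hosted inference.
# All throughput numbers are assumptions chosen for illustration.
task_tokens = 100_000   # tokens an overnight agent run might generate (assumed)
local_tps = 7           # tokens/sec on a cheap, RAM-heavy used PC (assumed)
hosted_tps = 2_500      # effective tokens/sec via a hosted API (assumed)

local_hours = task_tokens / local_tps / 3600   # ~4 hours
hosted_seconds = task_tokens / hosted_tps      # 40 seconds

print(f"local: ~{local_hours:.1f} h, hosted: {hosted_seconds:.0f} s")
```

If the run finishes while you sleep either way, the two-orders-of-magnitude speed gap costs you nothing, which is the comment's point.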
throwanem•7mo ago
likium•7mo ago
Also, local models are close in capabilities now, but who knows what that'll look like in a few years.
throwanem•7mo ago
acoard•7mo ago
Yes, and the flow of future models may dry up, but the current local models we'll have forever.