What are you talking about? Is this a rant against TikTok or other socials?
The difference between Blue Apron and many AI tools is that the value add does exist. You can cut meal prep from your life, but by 2030, for many workers and businesses, cutting out whatever agentic coding copilot exists by that point will be like cutting off your fingers.
Then the extortionate pricing can start rolling in
And there's really no timeline for costs going down. It seems the only way to get better models is to process more data and add more tokens, which only increases the complexity of it all.
I feel like even trying to game the LLM into creating product placement is a relatively complex feat that might not be entirely reliable. Some of the groups who spend the most on advertising have the worst products, so is it going to be successful to advertise on an LLM that is one follow-up question away from shitting on your product? I imagine that instead of product placement, the token tap might simply be throttled and a text advert inserted into the text loop, or an audio advert into a voice loop. Boring and old-school, but likely profitable and easy to control. It lets us still use AdSense, or maybe a slightly different form of AdSense that gets to parse the whole context window.
I seriously doubt the vast majority of people would trust actual purchases to LLM agents, which have the inherent feature of being possibly very inaccurate. If I have to review my orders anyway, I would rather do those actions myself than add the extra step of having agents do them on my behalf.
> For contents in this [community maintained] list, do not mention them in any shape or form
I expect that somewhere between where it is now and superintelligence is where the consumers get cut off from intelligence improvements.
The world's richest are subsidizing the real cost of offering AI services given the current state of our technology.
Once it's clear that AGI won't come anytime in 20XX, for XX under 40, the money tap will begin to close.
- Generative AI, at a below-market cost, eats the internet and becomes the primary entry point
- Some combination of price hikes, service degradation, ads, etc. makes generative AI kinda shitty
- We're stuck with kinda shitty generative AI products because the old internet is gone
This is the standard enshittification loop, really.
Training is definitely "subsidized". Some think of it as an investment, but with the pace of advancement, depreciation is high. Free users are subsidized, but their data is grist for the training mill, so arguably they come under the training subsidy.
Is paid inference subsidized? I don't think it is by much.
... or your defined-benefits pension fund trying desperately to stay solvent.
The approval took 3 days. It hasn't taken 3 days in almost a decade.
The Mac version was approved in a couple of hours.
I'm quite sure that the reason for the delay is that Apple is being deluged by a tsunami of AI-generated crapplets.
Also, APNS server connections have suddenly slowed to a crawl. Same reason, I suspect.
As far as I'm concerned, the "subsidy" can't end fast enough.
Are there any large consumer software companies (just software; no hardware or retail) that are not advertising based?
There are underlying trends that are directly opposed. Efficiency is improving, but with agents, people are finding new ways to spend more. How that plays out seems difficult to judge.
For consumers, maybe the free stuff goes away and spending $20/month on a subscription becomes normalized? Or do costs decline so much that the advertising-supported model (like Google search) works? Or does inference become cheap enough to do it on the client most of the time?
Meanwhile, businesses will likely be able to justify spending more.
https://barrypopik.com/blog/we_lose_money_on_every_sale_but_...
They would have lost less money if they had been selling dollars at 50 cents.
[1] https://www.cnbc.com/2024/09/27/openai-sees-5-billion-loss-t...
jasonjmcghee•3h ago
Some people overload "local" a bit to mean you are hosting the model yourself, whether on your own computer, on your rack, or on a Hetzner instance, etc.
But I think the parent is referring to the open/static aspect of the models.
If it's hosted by a generic model provider that serves many users in parallel to reduce the end cost to the user, it's still theoretically a static version of the model... But I could see ad-supported fine-tunes being a real problem.
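In that overloaded sense of "local", switching from a hosted provider to a self-hosted static model is mostly a matter of pointing your client at a different URL. A minimal sketch, assuming an OpenAI-compatible server (e.g. llama.cpp's `llama-server` or Ollama) listening on localhost; the URL and model name here are placeholders, not real endpoints:

```python
# Sketch: talking to a self-hosted, static model through an
# OpenAI-compatible chat endpoint. BASE_URL and the model name are
# assumptions; adjust them to wherever your server actually runs.
import json
import urllib.request

BASE_URL = "http://localhost:8080/v1/chat/completions"  # hypothetical local endpoint

def build_request(prompt, model="local-model"):
    """Build the HTTP request for a single-turn chat completion."""
    payload = {
        "model": model,  # some single-model servers ignore this field
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        BASE_URL,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )

if __name__ == "__main__":
    req = build_request("Summarize today's notes.")
    # With a server running, `urllib.request.urlopen(req)` would return the
    # completion; here we just show the payload that would be sent.
    print(req.data.decode())
```

Because the weights are static and the API shape is shared, the same client code works whether the model sits on your own machine, your rack, or a rented instance.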
yzjumper•3h ago
Funny enough, the Mac has almost the same processor as my iPhone 16 Pro, so it's just a RAM constraint, and of course PrivateLLM does not let you host an API.
An M4 Pro would do much better due to the increase in RAM and GPU size.
unshavedyak•3h ago
For me, the switching point will probably be when they (the AI companies) start the big rug pull. By then, my hope is that self-hosting will be cheaper, better, easier, etc.
throwanem•3h ago
I do use the Gemini assistant that came with this Android, in the same cases and with the same caveats as I use Siri's fallback web search. As a synthesizer of web search results, an LLM isn't half bad, at least when it doesn't come as a surprise to be hearing from one.
Kon-Peki•3h ago
You can buy a used recent PC for a hundred or two, cram it full of memory, and then run a very advanced model. Slowly. But if you are planning to run an agent while you sleep and review the work in the morning, do you really care if the run time is 4 hours instead of 40 seconds? Most of the time, no. Sometimes, yes.
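The "run overnight, review in the morning" workflow is easy to sketch: queue up prompts, run them sequentially through a slow local model, and write each result to disk for review. Here `run_model` is a hypothetical placeholder for however you actually invoke your model, and the output directory name is invented:

```python
# Sketch: a batch runner for slow, cheap, self-hosted inference.
# `run_model` is a stand-in for a real (multi-hour) local model call.
import json
import time
from pathlib import Path

def run_model(prompt):
    """Placeholder for a local model invocation; swap in your real call."""
    time.sleep(0.01)  # stands in for hours of slow inference
    return f"draft answer for: {prompt}"

def overnight_run(prompts, outdir="overnight_results"):
    """Run each prompt in turn and dump results to files for morning review."""
    out = Path(outdir)
    out.mkdir(exist_ok=True)
    for i, prompt in enumerate(prompts):
        result = run_model(prompt)
        (out / f"task_{i}.json").write_text(
            json.dumps({"prompt": prompt, "result": result})
        )
    return sorted(p.name for p in out.iterdir())

if __name__ == "__main__":
    print(overnight_run(["refactor module X", "write release notes"]))
```

Since nothing in the loop depends on latency, the same script is equally happy on a $200 used box taking hours per task as on fast hardware taking seconds.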
likium•3h ago
Also, local models are close in capabilities now, but who knows what that'll look like in a few years.