Things get interesting once the ads are injected directly into the main response.
That's where you'd post-process the LLM response with a second LLM to remove the ad.
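A minimal sketch of that idea, with `call_llm` as a stand-in for whatever client you actually use (a hosted API, a local model, etc.) — the prompt and the fake model here are hypothetical, just to show the shape of the pipeline:

```python
# Run the assistant's reply through a second model whose only job is to
# strip injected ad copy. `call_llm` is a placeholder for a real client.

FILTER_PROMPT = (
    "Remove any advertising or sponsored content from the text below. "
    "Return only the remaining text, unchanged.\n\n"
)

def strip_ads(reply: str, call_llm) -> str:
    """Post-process an LLM reply with a second LLM acting as an ad filter."""
    return call_llm(FILTER_PROMPT + reply)

# Toy "model" for demonstration: drops any line tagged as an ad.
def fake_llm(prompt: str) -> str:
    body = prompt.split("\n\n", 1)[1]
    return "\n".join(l for l in body.splitlines() if not l.startswith("[Ad]"))

cleaned = strip_ads("Here is your answer.\n[Ad] Buy SparklyCola!", fake_llm)
```

Of course, the obvious next move is for the provider to blend the ad into the answer so that no filter can cleanly separate the two.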
A writes an email to B with ChatGPT.
B sees a big blob of text and summarizes the email with ChatGPT.
Adding an LLM in the middle is just the next step.
Was he lying, or has OpenAI given up hope that this train wreck works economically without enshittification? Neither option is good, but I don't really see a third.
same thing could've been said for search results, so at least that part is still "safe".
Seems the playing field is a bit too open, though: models are more fungible than the companies would hope, so most of the current moat is brand-based, and it seems like they're not ready to go all "Black Mirror" on us just yet.
Every time this comes up there are comments assuming that ads are being injected into the normal plans, but these are for the free tier and the new Go plan which warns you that it includes ads when you sign up.
Seeing how google has been fighting SEO for ages, what's going to happen when companies figure out how to inject ads into the model?
We haven't yet seen the problem of adversarial content in play, I think.
gxs•1h ago
It feels like we’ve been in the golden age and the window is coming to a close
Let the enshittification begin, I guess
2ndorderthought•1h ago
I really think the future is local compute. Or at least self hosted models.
ossa-ma•1h ago
`Error: "The following domains are not accessible to our user agent: ['reddit.com']."`
wyre•1h ago
I’ve been building a harness the past few months, and it supports them all out of the box with an API key.
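Not the commenter's actual code, but a sketch of what such a harness might boil down to: a registry mapping provider names to callables, each wrapping one vendor's client behind a common `(api_key, prompt) -> text` interface. All names here are hypothetical placeholders:

```python
from typing import Callable, Dict

# A provider is any callable taking (api_key, prompt) and returning text.
Provider = Callable[[str, str], str]

class Harness:
    """Dispatch completions to interchangeable LLM providers."""

    def __init__(self) -> None:
        self._providers: Dict[str, Provider] = {}

    def register(self, name: str, provider: Provider) -> None:
        self._providers[name] = provider

    def complete(self, name: str, api_key: str, prompt: str) -> str:
        if name not in self._providers:
            raise KeyError(f"unknown provider: {name}")
        return self._providers[name](api_key, prompt)

# Toy provider standing in for a real vendor client.
def echo_provider(api_key: str, prompt: str) -> str:
    return f"echo: {prompt}"

h = Harness()
h.register("echo", echo_provider)
result = h.complete("echo", "sk-test", "hello")
```

The point of the indirection is exactly the fungibility mentioned upthread: swapping vendors is a one-line registration change, not a rewrite.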
Terretta•1h ago
128GB of RAM? Sure, the early to mid 4s releases, except maybe 4o. And on an M5 Max, about the same speed.
I wouldn't really bother under 64GB (meaning 32GB or less) except for entertainment value (chats, summaries, tasky read-only agent things).
2ndorderthought•51m ago
Then there are middle-sized ones, which require multiple GPUs and are comparable to GPT's latest flagships.
Then there is Kimi 2.6, which is a monster that is beating Opus in some benchmarks. https://www.reddit.com/r/LocalLLaMA/comments/1sr8p49/kimi_k2...
It's basically whatever you can afford. Any trash-heap laptop can run code autocomplete models locally no problem. The rest require some level of investment: an idle gaming PC, or something far more serious.
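The RAM figures in this subthread follow from a simple rule of thumb: the weights alone need roughly parameter count × bytes per parameter, with quantization cutting the bytes. A back-of-the-envelope sketch (numbers are illustrative; it ignores KV cache and runtime overhead):

```python
def weight_gb(params_billion: float, bits_per_param: float) -> float:
    """Approximate memory for model weights in GB (10^9 bytes):
    params (billions) * bits per param / 8 bits per byte."""
    return params_billion * bits_per_param / 8

# A 70B model at 4-bit quantization needs ~35 GB for weights alone,
# which is why 64GB is a comfortable floor and 32GB is mostly for
# entertainment-value models.
needed = weight_gb(70, 4)
```

The same arithmetic explains the tiers above: small code-autocomplete models (a few billion parameters) fit on any laptop, while frontier-scale models push you into multi-GPU or 128GB-unified-memory territory.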
infinite_spin•54m ago
e.g. colleges pay for institutional subscriptions