``` adding an untrusted middleman to your information diet and all of your personal communications will eventually become a disaster that will be obvious in hindsight ```
...seems like it could be said for Google right now.
And I guarantee we all have seen ads generated by an LLM already. The front page of Reddit is filled with LLM posts whose comments are similarly rich with bots.
One common one is an image post of a snarky t-shirt where a highly rated comment gives you a link to the storefront. The bots no longer need to recycle old posts and comments, which could easily be detected as duplicates, now that an LLM can freshen them up.
I don’t trust LLMs even that far. Is it possible for “agentic AI” to send an email to my competitor with confidential company data attached? Absolutely it’s possible. So no, that statement doesn’t apply to Google as a company nearly as aptly as it applies to an agentic LLM.
My only pet peeve has been that the EU's fines are too gentle.
It's like the difference between someone handing out printed tour guides and an in-person tour guide. With the printed guide it's typically easier to tell which parts are ads, how much has been curated, etc. (but not always!), while with the in-person guide you just have to take everything they say at face value, since there's no surrounding information to judge it against.
So, how will that go with LLM tools which start with you already entirely separated from the sources, with no real way to get to them?
I see people still interacting with them, upvoting their comments, clueless that they are talking to a bot. If HN users can't detect them, then Reddit and X users don't stand a chance.
LLM bots are being deployed all over social media, I'm convinced. I've been refraining from engaging in social media outside HN, so I'm not sure how widespread it is. I would invite folks to try this "debate tactic" and see how it goes.
The dead internet is coming for us...
I guess I'm going to have to get off the couch if I want to talk to real people.
It's such a meme at this point that I wouldn't put it past a human to reply with the poem out of some sense of irony/spite/trolling/...
I really don't want to have to give this up, but I imagine soon enough this too will become enshittified. I mean, it's already happening: https://openai.com/chatgpt/search-product-discovery/
What's the long-term solution here? Open Web UI with DeepSeek + Tavily? Would it be profitable long term to have a "neutral" search engine, or will it be cost-prohibitive moving forward?
For now, at least, OpenAI claim that those product suggestions (almost tempted to leave in my typo / phone's autocorrect of "subversions") are not ads, and that it's purely a feature designed to be useful for ChatGPT users.
Although this from the FAQ is a bit strange, and I do wonder if there's any business relationship between OpenAI and the "third party providers" that happens to involve money passing from the latter to OpenAI in commercial deals that are definitely not ad purchases...
> How Merchants Are Selected
> When a user clicks on a product, we may show a list of merchants offering it. This list is generated based on merchant and product metadata we receive from third-party providers. Currently, the order in which we display merchants is predominantly determined by these providers. We do not re-rank merchants based on factors such as price, shipping, or return policies. We expect this to evolve as we continue to improve the shopping experience.
> To that end, we’re exploring ways for merchants to provide us their product feeds directly, which will help ensure more accurate and current listings. If you're interested in participating, complete the interest form here, and we’ll notify you once submissions open.
( https://help.openai.com/en/articles/11128490-improved-shoppi... )
It's not exactly on the horizon but I think it's possible to build a web which rewards being trustworthy, rather than one that rewards attention mongering.
Whenever I get any summary or diatribe or lecture out of a chatbot, all I know is that I have a major fact-checking challenge. And I don't have time for it. I cannot believe you are doing all that fact-checking.
Here's an example: https://chatgpt.com/share/6839b2a0-d4f4-8000-9224-f406589802...
I was traveling in Tokyo recently and took a picture of a van that was hosting what looked like a political rally in Akihabara, with hand-painted slogans on the outside. It wrote some Python code to analyze the image segment by segment and eventually came up with the translations. Then it was able to find me the website for the political party, which had an entry for the rally held that day. I don't speak Japanese, so it's possible some of the translations were inaccurate, but they generally seemed to line up, and it ultimately got me what I wanted.
I was there a year ago as well and tried doing similar translations, and it had a very hard time with the hand-painted kanji. It's really come a long way since then.
I also used it to find some obscure anime events the same day, most of which are only announced in Japanese on obscure websites. As a non-speaker unfamiliar with those sites, it would have been a huge pain to google them.
Every industry in America, and tech especially, works to lock in its customers, paying or not. People who are dependent on their phones don't make choices like that, and anticompetitive behavior is becoming less illegal and easier to get away with.
At this point "vote with your wallet" is basically a delusion in contexts like this
This made me think of Asimov's Foundation, the "Church of the Galactic Spirit".
Those who knew how the tech worked were priests. The rest of the populace were pure consumers.
How is this even remotely different from Google Search? It's consulting billions of pages to feed you a handful of results, but mostly ads.
It's true that there's nothing stopping Google Search from being a morally bankrupt operation though.
When Google search came about it had not yet been established that tech companies could "move fast" without consequence.
The result is remotely hosted, tamper-evident LLMs that prove you get the same responses anyone else would, while remaining confidential. All the tech for this already exists as open source; it's just waiting on someone to package up a combined solution.
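The heavy machinery would be something like TEEs with remote attestation or a verifiable-inference scheme, but the basic "everyone gets the same answer" check can be sketched with plain response fingerprinting. A minimal sketch, assuming a deterministic endpoint (temperature 0, fixed seed) and hypothetical fingerprints published by other users:

```python
import hashlib

def response_fingerprint(model_id: str, prompt: str, response: str) -> str:
    """Hash the (model, prompt, response) triple so users can compare notes."""
    payload = "\n".join([model_id, prompt, response]).encode("utf-8")
    return hashlib.sha256(payload).hexdigest()

# Hypothetical usage: several people send the same prompt to a deterministic
# endpoint and publish only their fingerprints, never the conversation itself.
my_fp = response_fingerprint(
    "example-model-v1",               # placeholder model name
    "Summarize today's front page.",  # placeholder prompt
    "...response text received...",   # placeholder response
)
published_fps = {"<fingerprints shared by other users>"}
print("matches consensus:", my_fp in published_fps)
```

Publishing only hashes keeps the conversations confidential while still exposing per-user tampering, since a personalized or ad-injected response would produce a different fingerprint from everyone else's.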
To me, it mitigates the problem slightly by making it less hidden.
IMO it's not a particularly interesting or novel message. We're already living in the perpetual disaster of that kind. You can say this about all social and traditional media and state propaganda, and it will remain true. What really matters is the level of trust you put in that middleman. High trust leans towards peace and being manipulated. Low trust leans towards violence and freedom of thought. Yada yada.
Remembering that the actual middleman is the people making the AI, not the AI itself, is way more important.
Like the existing info on the web is trusted? Almost everyone's trying to shill something.
I'm as anti-AI as anyone, but what's the difference between LLM garbage and SEO garbage?