I use Kagi, which returns excellent results, including when I need non-AI verbatim queries.
Displaying what you searched for immediately is cannibalizing that market.
I'm guessing ads in AI results are the logical next step.
People don't know how to search, that's it. Even the HN population.
Every time this gets posted, I ask for one example of a thing you tried to find and the keywords you used. So I'm making you the same offer: give me one thing you couldn't find easily on Google and the keywords you used, and I'll show you Google search is just fine.
How do you set up an encrypted file on Linux that can be mounted and accessed the same as a hard drive?
(note: luks, a few commands)
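For reference, those few commands look roughly like this (a minimal sketch; the file name, size, mapper name, and mount point are placeholders, and it assumes a recent cryptsetup that attaches the loop device itself):

    # Create a 1 GiB file to back the encrypted volume
    dd if=/dev/zero of=vault.img bs=1M count=1024

    # Format it as a LUKS container, then open it as /dev/mapper/vault
    sudo cryptsetup luksFormat vault.img
    sudo cryptsetup open vault.img vault

    # Put a filesystem on it (first time only), then mount like any drive
    sudo mkfs.ext4 /dev/mapper/vault
    sudo mount /dev/mapper/vault /mnt

    # When finished: unmount and close
    sudo umount /mnt
    sudo cryptsetup close vault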
You will see a nonsensical AI summarization and lots of videos and junk websites being promoted; then you'll likely find a few blogs with the actual commands needed. Nowhere is there a link to a manual for LUKS or similar.
In the past, the same searches that now get me garbage returned the no-ad, straightforward blogs as the first links, then some man pages, then other unrelated things.
try "mount luks encrypted file" or "luks file mount". too many words and any grammar at all will degrade your results. it's all about keywords
edit: after trying it myself i quickly realized the problem - luks related articles are usually about drives or partitions, not about files. this search got me what i wanted: "luks mount file -partition -filesystem" i found this article[1], which is in german (my native tongue), but contained the right information.
1: https://blog.netways.de/blog/2018/07/25/verschluesselten-fil...
It showed 25 or so URLs as the source.
That "AI generated slop" IS Google's main response now. I posted it so that someone might have a look to see if/how correct it actually is. Your response, which doesn't even deign to look, is less than helpful; if you want to complain about Google not being useful, how about your own response?
At the top there's a "featured snippet" from opensource.com, allegedly from 2021, that begins with: create an empty file (this turns out to mean a file of a given size with no useful data in it, not a size-0 file), then make a LUKS volume using cryptsetup, etc.
First actual search result is a question on Ask Ubuntu (the Stack Exchange site dedicated to Ubuntu) headed "How do I create an encrypted filesystem inside a file?" which unless I'm confused is at least the correct question. Top answer there (from 2017) looks plausible and seems to be describing the same steps as the "featured snippet". A couple of other links to Ask Ubuntu are given below that one but they seem worse.
Next search result is a Reddit thread that describes how to do something different but possibly still of interest to someone who wants to do the thing you describe.
Next search result is a question on unix.stackexchange.com that turns out to be about something different; under it are other results from the same site, the first of which has a cryptsetup-based recipe that seems similar to the other plausible ones mentioned above.
Further search results continue to have a good density of plausible-looking answers to essentially the intended question.
This all seems fairly satisfactory assuming the specific answers don't turn out to be garbage, which doesn't look very likely; it seems like Google has done a decent job here. It doesn't specifically turn up the LUKS manual, but then that wasn't the question you actually asked.
Having done that search to find that the relevant command seems to be cryptsetup and the underlying facility is called LUKS, searches for <<cryptsetup manual>> and <<luks documentation>> (again, the first search terms that came to mind) look to me like they find the right things.
(Google isn't my first-choice search engine at present; DuckDuckGo provides similar results in all these cases.)
I am not taking any sides on the broader question of whether in general Google can give good search results if one picks the right words for it, but in this particular case it seems OK.
I still find online recipes convenient, but I don't blindly trust details like cooking time and temperature. (I mean, those things are always subject to variability, but now I don't trust the times to even be in the right ballpark.)
Happily, there are some cooks that I think deserve our trust, e.g. Chef John.
Badly summarised articles.
Outright invented local attractions that don't exist.
Gave subtly wrong, misleading advice about employment rights.
All while coming across as confidently authoritative.
It is not false statistics. "Nobody wanted or asked for this" is literally true.
The article is about it encroaching in the domain of human communications. Mass adoption is the only way to justify the incredible financial promises.
I think there are lots of valid arguments against LLM usage, but it's extremely tiring to hear how it's not useful when I get so much use out of it.
Maybe I'm doing something wrong here, but even DDG is annoying me with this.
Highways.
Pretty much the whole population also wants tax cuts.
It's kind of insane out there in tax land.
In my European country you have to pay a toll to use a highway. Most people opt to use them, instead of taking the old 2-lane road that existed before the highway and is still free.
People would be less upset if AI were shown to support the person. This also allows that person to curate the output and ignore it if needed before sharing it, so it's a win/win.
But is the big money in revolution?
Of course it’s a bubble! Most new tech like this is until it gets to a point where the market is too saturated or has been monopolised.
I bet if you go back to the printing press, telegraph, telephone, etc. you will find people saying "it's only a bubble!".
I don't think this is true. A lot of people had no interest until smartphones arrived. Doing anything on a smartphone is a miserable experience compared to using a desktop computer, but it's more convenient. "Worse but more convenient" is the same sales pitch as for AI, so I can only assume that AI will be accepted by the masses too.
We sat yesterday and watched a table of 4 lads drinking beer, each just watching their phones. At the slightest gap in conversation, out they came.
They're ruining human interaction. (The phones, not the beer-drinking lads.)
I almost never take my phone with me, especially when with my wife and son, as they always have theirs with them; although with elderly parents not in the best of health, I really should take it more.
But it's something I see a lot these days. In fact, the latest Vodafone ad in the UK has a bunch of lads sitting outside a pub and one is laughing at something on his phone. There's also a betting ad where the guy is making bets on his phone (presumably) while in a restaurant with others!
I find this normalized behaviour somewhat concerning for the future.
As text, email, other messages, websites, Facebook, etc. became available the draw became stronger and so did the addiction and the normalization of looking at your phone every 30 seconds while you were with someone.
Did SNL or anyone ever do a skit of a couple having sex and then "ding" a phone chimes and one of them picks it up and starts reading the message? And then the other one grabs their phone and starts scrolling?
So people didn't want to be walking around with a tether that allowed the whole world to call them wherever they were? Le Shock!
Now, if they'd asked people whether they'd like a small portable computer they could keep in touch with friends on, and read books, play games, and play music and movies on wherever they went, which also made phone calls, I suspect the answer might have been different.
Obviously saying “everyone” is hyperbole. There were luddites and skeptics about it just like with electricity and telephones. Nevertheless the dotcom boom is what every new industry hopes to be.
In 20 years AI will be pervasive and nobody will remember being one of the luddites.
Whether the opposition was massive or not, in proportion to the enthusiasm and optimism about the globally connected information superhighway, isn’t something I can quantify, so I’ll bow out of the conversation.
Internet of things was largely BS.
It's bullshit.
I mean, sure: there were people who hated the Internet. There still are! They were very clearly a minority, and almost exclusively older people who didn't like change. Most of them were also unhappy about personal computers in general.
But the Internet caught on very fast, and was very, very popular. It was completely obvious how positive it was, and people were making businesses based on it left and right that didn't rely on grifting, artificial scarcity, or convincing people that replacing their own critical thinking skills with a glorified autocomplete engine was the solution to all their problems. (Yes, there were also plenty of scams and unsuccessful businesses. They did not in any way outweigh the legitimate successes.)
By contrast, generative AI, while it has a contingent of supporters that range from reasonable to rabid, is broadly disliked by the public. And a huge reason for that is how much it is being pushed on them against their will, replacing human interaction with companies and attempting to replace other things like search.
>By contrast, generative AI, while it has a contingent of supporters that range from reasonable to rabid, is broadly disliked by the public.
It is absolutely wild how people can just ignore something staring right at them, plain as day.
ChatGPT.com is the 5th most visited site on the planet and growing. It's the fastest-growing software product ever, with over 500M weekly active users and over a billion messages per day. Just ChatGPT. This is not information that requires corporate espionage; the barest minimum effort would have shown you how blatantly false you are.
What exactly is the difference between this and an LLM hallucination?
No condescension necessary.
Also, everyone who requires these sophisticated models now needs to send everything to the gatekeepers. You could argue that we already send a lot of data to public clouds. However, there was no economically viable way for cloud vendors to read, interpret, and reuse my data — my intellectual property and private information. With more and more companies forcing AI capabilities on us, it's often unclear who runs those models and who receives the data and what is really happening to the data.
This aggregation of power and centralisation of data worries me as much as the shortcomings of LLMs. The technology is still not accurate enough. But we want it to be accurate because we are lazy. So I fear that we will end up with many things of diminished quality in favour of cheaper operating costs — time will tell.
For the consumer side, you'll be the product, not the one paying in money just like before.
For the creator side, it will depend on how competition in the market holds up. Expect major regulatory-capture efforts to eliminate all but a very few 'sanctioned' providers in the name of 'safety'. If only 2 or 3 remain, it might get really expensive.
Open source endeavors will have a hard time mustering the resources to train competitive models. Maybe we will see larger cooperatives, like an Apache Software Foundation for ML?
I suspect the Linux Foundation might be a more likely source considering its backers and how much those backers have provided LF by way of resources. Whether that's aligned with LF's goals ...
There are a number of reasons to do this: You want local inference, you want attention from devs and potential users etc.
Also the smaller self hostable models are where most of the improvement happens these days. Eventually they'll catch up with where the big ones are today. At this point I honestly wouldn't worry too much about "gatekeepers."
Perhaps, but see also SETI@home and similar @home/BOINC projects.
I’d support an Apache for ML but I suspect it’s unnecessary. Look at all of the money companies spend developing Linux; it will likely be the same story.
GPU: RTX 5090 (no ROPs missing), 32 GB VRAM
Quants: Unsloth Dynamic 2.0; 4-6 bits depending on the layer.
RAM is 96 GB: more RAM makes a difference even if the model fits entirely in the GPU, because the filesystem pages holding the model on disk are cached entirely in RAM, so when you switch models (we use other models as well) the unload/load overhead is 3-5 seconds.
The key-value cache is also quantized to 8 bits (anything less degrades quality considerably).
This gives you 1 generation with 64k context, or 2 concurrent generations with 32k each. Everything takes 30 GB VRAM, which also leaves some space for a Whisper speech-to-text model (turbo & quantized) running in parallel as well.
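For anyone wanting to reproduce a setup like this, it maps fairly directly onto llama.cpp's server flags (a minimal sketch; the model path is a placeholder and flag names can shift between versions):

    # -ngl 99 offloads all layers to the GPU; -c is the total context,
    # shared across --parallel slots (65536 over 2 slots = two 32k
    # generations); q8_0 gives the 8-bit KV cache described above.
    llama-server -m ./model-UD-Q4_K_XL.gguf -ngl 99 -c 65536 \
        --parallel 2 --cache-type-k q8_0 --cache-type-v q8_0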
The scale issue isn't the LLM provider, it's the power grid. Worldwide, generating capacity is roughly 250 W per capita. Your body is 100 W, and you have a duty cycle of about 25% thanks to the 8-hour work day and weekends (100 W × 25% ≈ 25 W averaged over the week), so in practice some hypothetical AI trying to replace everyone in their workplaces today would need to be more energy efficient than the human body.
Even with the extraordinarily rapid roll-out of PV, I don't expect this to be able to be a one-for-one replacement for all human workers before 2032, even if the best SOTA model were good enough (and they're not; they've still got too many weak spots for that).
This also applies to open-weights models, which are already good enough to be useful even when SOTA private models are better.
> You could argue that we already send a lot of data to public clouds. However, there was no economically viable way for cloud vendors to read, interpret, and reuse my data — my intellectual property and private information. With more and more companies forcing AI capabilities on us, it's often unclear who runs those models and who receives the data and what is really happening to the data.
I dispute that it was not already a problem, due to the GDPR consent popups often asking to share my browsing behaviour with more "trusted partners" than there were pupils in my secondary school.
But I agree that the aggregation of power and centralisation of data is a pertinent risk.
In fact, I also tried the communication part, outside of Outlook, but people don't like superficial AI polish.
The same issue plagues many private companies. I’ve seen employees spend days drafting documents that a free tool like Mistral could generate in seconds, leaving them 30-60 minutes to review and refine. There's a lot of resistance from the public. They're probably thinking that their job will be saved if they refuse to adopt AI tools.
What I have seen is employees spending days asking the model again and again to actually generate the document they need, and then submit it without reviewing it, only for a problem to explode a month later because no one noticed a glaring absurdity in the middle of the AI-polished garbage.
AI is the worst kind of liar: a bullshitter.
That's basic human behavior and AI won't fix this. It will only make it worse, and that's my main gripe with AI.
I also feel an urge to build spaces on the internet just for humans, with some 'turrets' to protect against AI invasion and exploitation. I just don't know what content would be shared in those spaces, because AI is already everywhere in content production.
The thing that really chafes me about this AI, irrespective of whether it is awesome or not, is transmitting all of the information to some unknown server. To go with another Zappa reference, AI becomes The Central Scrutinizer [2].
I predict an increasing use of Free Software by discerning people who want to maintain more control of their information.
[1] https://www.youtube.com/watch?v=JPFIkty4Zvk
[2] https://en.wikipedia.org/wiki/Joe%27s_Garage#Lyrical_and_sto...
I'm not unwilling to use AI in places where I choose. But let's not pretend that just because people do use it in one place, they are willing to have it shoved upon them in every other place.
I was searching for something on Omnissa Horizon here: https://docs.omnissa.com/
It has some kind of ChatGPT integration, and I tried it and it found the answer I was looking for straight away, after 10 minutes of googling and manual searching had failed.
Seems to be not working at the moment though :-/
If you answer no, does that make you an unwilling user of social media? They're the most visited sites in the world, after all; how could randomly injecting it into your GPS navigation system be a poor fit?
All the anti-AI people I know are in their 30s. I think there are many in this age group who got used to nothing changing and are wishing it to stay that way.
We won’t solve climate change but we will have elaborate essays why we failed.
Or are they the only ones who understand that the ratio of real information to spam + disinformation + misinformation + lies is worse than ever? And that in the past 2 years this was thanks to AI, and to people who never check what garbage AI spews out? And that they are the only ones who care not to consume the shit? Because above 50, most people have clearly been completely fine with it for decades now. Do you say that below 30, most people are fine consuming garbage? I mean, seeing how many young people have started to deny the Holocaust, I can imagine it, but I would like some hard data, not just AI-level guesswork.
I just don't participate in discussions about Facebook marketplace links friends share, or Instagram reels my D&D groups post.
So in a sense I agree with you, forcing AI into products is similar to forcing advertising into products.
I wonder how many uses of ChatGPT and such are malicious.
LLMs are not very predictable. And that's not just true for the output. Each change to the model impacts how it parses and computes the input. For someone claiming to be a "Prompt Engineer", this cannot work. There are so many variables that are simply unknown to the casual user: training methods, the training set, biases, ...
If I get the feeling I am creating good prompts for Gemini 2.5 Pro, the next version might render those prompts useless. And that might get even worse with dynamic, "self-improving" models.
So when we talk about "Vibe coding", aren't we just doing "Vibe prompting", too?
If you run an open source model from the same seed on the same hardware, it is completely deterministic: it will spit out the same answer every time. So it's not an issue with the technology, and there's nothing stopping you from writing repeatable prompts and prompting techniques.
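As a minimal sketch of that claim with llama.cpp (the model path and prompt are placeholders): pinning the seed fixes the sampler's RNG, so the same build on the same hardware should repeat its output token for token.

    # Run this twice; with a pinned seed the outputs should be identical
    llama-cli -m ./model.gguf --seed 42 -p "Summarize LUKS in one sentence."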
Relying on model, seed, and hardware to get "repeatable" prompts essentially reduces an LLM to a very lossy natural-language decompression algorithm. What other reason would someone have for asking the same question over and over and over again with the same input? If that's a problem you need to solve, then you need a database, not a deterministic LLM.
Are you sure of that? Parallel scatter/gather operations may still be at the mercy of scheduling variances, due to some forms of computer math not being associative.
Whenever people talk about "prompt engineering", they're referring to randomly changing these kinds of things, in hopes of getting a query pattern where you get meaningful results 90% of the time.
The reason changing one word in a prompt to a close synonym changes the reply is that it is the specific words, used in a series, that embed and recover information in LLMs. The 'in a series' aspect is subtle and important. The same topic is in the LLM multiple times, with different levels of treatment from casual to academic. Each treatment uses similar but different words, and that difference is very meaningful: it signals how seriously the information is being handled. Using one term versus another causes a prompt to index into one treatment of the subject versus another. The more formal the terms used, meaning the synonyms used by experts of that area of knowledge, the more accurate the replies. Close synonyms generate replies from outsiders of that knowledge space: those not using the same phrases as the experts, perhaps trying to understand but not there yet.
It is not randomly changing things in one's prompts at all. It's understanding the knowledge space one is prompting within such that the prompts generate accurate replies. This requires knowing the knowledge space one prompts within, so one knows the correct formal terms that unlock accurate replies. Plus, knowing that area, one is in a better position to identify hallucination.
Higher level programming languages may make choices for coders regarding lower level functionality, but they have syntactic and semantic rules that produce logically consistent results. Claiming that such rules exist for LLMs but are so subtle that only the ultra-enlightened such as yourself can understand them begs the question: If hardly anyone can grasp such subtlety, then who exactly are all these massive models being built for?
If I have to do extensive, subtle prompt engineering and use a lot of mental effort to solve my problem... I'll just solve the problem instead. Programming is a mental discipline: I don't need help typing, and if using an AI means putting in more brainpower, it has fundamentally failed at improving my ability to engineer software.
Conceding that this may be the case, there are entire categories of problems that I am now able to approach that I have felt discouraged from in the past. Even if the code is wrong (which, for the most part, it isn't), there is value for me in having a team of over-eager puppies fearlessly leading me into the most uninviting problems, and somehow the mess they may or may not create makes solving the problem more accessible to me. Even if I have to clean up almost every aspect of their work (I usually don't), the "get your feet wet" part is often the hardest part for me, even with a design and some prototyping. I don't have this problem at work really, but for personal projects it's been much more fun to work with the robots than always bouncing around my own head.
The only way to successfully use AI is to have sufficient skill to review the code it generates for correctness, which is a task that requires at least as much skill as simply writing the code.
They need to understand what the code does.
I said no. Respect my preferences.
There are open source or affordable, paid alternatives for everything the author mentioned. However, there are many places where you must use these things due to social pressure, lock-in with a service provider (health insurance co, perhaps), and yes unfortunately I see some of these things as soon or now unavoidable.
Another commenter mentioned that ChatGPT is one of the most popular websites on the internet and therefore users clearly do want this. I can easily think of two points that refute that: 1. The internet has shown us time and time again that popularity doesn’t indicate willingness to pay (which paid social networks had strong popularity…?) 2. There are many extremely popular websites that users wouldn’t want to be woven throughout the rest of their personal and professional digital lives
As a data point, the "Stop Killing Games" one has passed the needed 1M signatures so is in good shape:
The point is that thinking number of signatures is a victory is naive.
You can't use this as an example of success until you actually achieve something.
"I don’t want AI customer service—but I don’t get a choice.
I don’t want AI responses to my Google searches—but I don’t get a choice.
I don’t want AI integrated into my software—but I don’t get a choice.
I don’t want AI sending me emails—but I don’t get a choice.
I don’t want AI music on Spotify—but I don’t get a choice.
I don’t want AI books on Amazon—but I don’t get a choice."
The last is especially egregious. I don’t want poorly-written (by my standards) books cluttering up bookstores, but all my life I’ve walked into bookstores and found my favorite genres have lots of books I’m not interested in. Do I have some kind of right to have stores only stock products that I want?
The whole thing is just so damn entitled. If you don’t like something, don’t buy it. If you find the presence of some products offensive in a marketplace, don’t shop there. Spotify is not a human right.
Probably no one enjoys AI books, though. I did my best at devil's advocate on that above.
Politicians often use AI to summarise proposals and amendments to the laws, and later vote based on those summaries. It's incredible how artificial bureaucracy is driven by artificial intelligence, and how citizens don't care to follow artificial laws that ruin humanity.
Of course you can opt out. People live in the backwoods of Alaska. But if you want to live a semi normal life there is no option. And absolutely people should feel entitled to a normal life.
What book store will stock AI slop that no-one wants to buy?
It's fun to say "let's go write a complete replacement for Microsoft Office" or the Adobe suite or what have you, but that has a truly astonishing upfront cost to get to a point where it's even servicing 50% of the use cases, let alone 95 or 99%.
Or there's other examples where it's not obvious there's sufficient interest to finance an alternative - how many people are going to pay for something that replicates solely the old functionality of Microsoft Paint or Notepad, for example.
My guess is you'd very quickly get a bunch of teams scrambling to produce something to compete and capture a huge market by charging a tenth the price. Funding is taken care of when winning there is worth so much
Maybe it won't happen overnight because they're huge software suites... but it will happen. We need regulations to take care of anti-competitive practices, but after that the market seems to work pretty well for keeping companies in check.
If all of the factory owners discover a type of widget to sell that can incidentally drive down wages the more units they move, it's unlikely for consumers to be provided much choice in their future widgets.
$30 blenders that break in 3 months haven't bankrupted Vitamix
If quality were a sufficiently motivating aspect, Google's deteriorating search wouldn't be a constant theme on this site, and people on the street would know where to download and play a FLAC file.
There's also a segment of the market that wants the FLAC: premium handcrafted experiences at top price. They're not in direct competition, and both can co-exist.
My initial point was that companies can't just exploit consumers relentlessly because the market won't let them. The good value option can't just box people in and show them only ads. I bet YouTube would love to show you unskippable ads for 75% of the video length. Good luck staying market leader with that
I don't think Google is a good example here. They've been actively trying to fight and failing against SEO and affiliate spam for a decade. No-one else has solved that problem either which is why Google remains at the top. I personally had a hand-crafted content site thrown out of their search results because of them going after spam
They’re not trying to satisfy customers: they’re answering shareholders. Our system is no longer about offering the best products, it’s about having the market share to force people to do business with you or maybe two other equally bad companies that constantly look for ways to extract more money from people to make shareholders happy. See: Two choices of smartphone OS, ISP regional monopolies or duopolies, two consumer OSes, a handful of mobile carriers, almost all available TVs models being “smart TVs” laden with spyware…
(I’m speaking from the US perspective, this may not be as pronounced elsewhere.)
The answer to this is regulation. See: https://www.msn.com/en-us/news/technology/apple-updates-app-...
Outside of a monopoly the best way to extract more money from people is to offer a better product. If AI is being forced and people do hate it, they'll move towards products that don't do that
What happened to Windows Recall being enabled by default? Surely it was in Microsoft's best interest to force it on people. But no, they reversed it after a huge backlash. You see this again and again
Of your examples, ISPs are the only one I can see that's hated without other options. Most people are quite happy with Windows/Mac/Android/iOS/Mint Mobile/Smart-TV-With-No-Internet-Access
The reality is that most people like many of the things you or I might find useless or annoying.
There are better products, but they are niche. You pay more for a non-smart TV because 1) there’s less demand, and 2) the business model is different and requires full payment up front rather than long term monetization.
But who are you or I to look at the market and declare that both sellers and buyers are wrong about what they want? I’m very suspicious of any position as paternalistic as that.
The OP's point is that increasingly, we don't have that choice, for example, because AI slop masquerades as if it were authored by human beings (that's, in fact, its purpose!), or because the software applications you rely on suddenly start pushing "AI companions" on you, whether you want them or not, or because you have no viable alternatives to the software applications you use, so you must put up with those "AI companions," whether you want them in your life or not.
Six-plus months ago they put a chatbot in the bottom right corner of their website that literally covers up buttons I use all the time for ordering, so that I have to scroll now in order to access those controls (Chrome, MacOS). After testing it with various queries it only seems to provide answers to questions in their pre-existing support documentation.
This is not about choice (see above, they are the only game in town), and it is not about entitlement (we're a tiny shop trying to serve our customers' often obscure book requests). They seemed to literally place the chatbot buttons onto their website with no polling of their users. This is an anecdotal report about Ingram specifically.
That's objective; subjectively, it feels like there are individuals who were given the ability to "try new stuff" and "break things" who chose to follow the hype around features that look like this. The chat button seems to me to be an exercise in following-the-herd which actually sucks for me as a user with it blocking my old buttons.
It's ridiculous to compare bad human books with bad AI books, because there are many human books which are life-changing, but there isn't a single AI book which isn't trash.
I don't think it's entitlement to make a well-mannered complaint about how little choice we actually have when it comes to the whims of the tech giants.
The whole point is that "just don't buy it" as a strategy doesn't work anymore for consumers to guide the market when the companies have employed the rock-for-dessert gambit to avoid having to try to sell their products on their merits.
Software is loyal to its owner. If you don't own your software, software won't be loyal to you. It can be convenient for you, but as time passes and interests change, if you don't own the software it can turn against you. And you shouldn't blame Microsoft or its utilities. It doesn't owe you anything just because you put effort into it and invested time in it. It'll work according to who it's loyal to: who owns it.
If it bothers you, choose software you can own. If you can't choose software you own now, change your life so you can in the future. And if you just can't, you have to accept the consequences.
It's because the integrations with existing products are arbitrary and poorly thought through, the same way that software imposed by executive fiat in BigCo offices for trend-chasing reasons has always been.
petekoomen made this point recently in a creative way: AI Horseless Carriages - https://news.ycombinator.com/item?id=43773813 - April 2025 (478 comments)
I don't want shitty bolt-ons; I want to be able to give ChatGPT/Claude/Gemini frontier models the ability to access my application data and make API calls for me, to remotely drive tools.
The weirdest location where I've found a genuinely useful LLM-based feature so far has been Edge, with its automatic tab grouping. It doesn't always pick the best groups, and probably uses some really small model, but it's significantly faster and easier than anything I've had so far.
I hope they do bookmarks next and that someone copies the feature and makes it use a local model (like Safari or Firefox, I don't even care).
It's just rent-seeking. Nobody wants to actually build products for market anymore; it's a long process with a lot of risk behind it, and there's a chance you won't make shit for actual profit. If however you can create a "do anything" product that can be integrated with huge software suites, you can make a LOT of money and take a lot of mind-share without really lifting a finger. That's been my read on the "AI Industry" for a long time.
And to be clear, the integration part is the only part they give a shit about. Arguably especially for AI, since operating the product is so expensive compared to the vast majority of startups trying to scale. Serving JPEGs was never nearly as expensive for Instagram as responding to ChatGPT inquiries is for OpenAI, so they have every reason to diminish the number coming their way. Being the hip new tech that every CEO needs to ram into their product, irrespective of whether it does... well, anything useful, while also being so frustrating or obtuse that users don't actually want to use it, is arguably an incredibly good needle to thread, if they can manage it.
And the best part is, if OpenAI's products do actually do what they say on the tin, there's a good chance many lower rungs of employment will be replaced with their stupid chatbots, again irrespective of whether or not they actually do the job. Businesses run on "good enough." So it's great, if OpenAI fails, we get tons of useless tech injected into software products already creaking under the weight of so much bullhockety, and if they succeed, huge swaths of employees will be let go from entry level jobs, flooding the market, cratering the salary of entire categories of professions, and you'll never be able to get a fucking problem resolved with a startup company again. Not that you probably could anyway but it'll be even more frustrating.
And either way, all the people responsible for making all your technology worse every day will continue to get richer.
Those are two problems in this situation that are both bad for different reasons. It's bad to have all the money concentrated in the hands of a tiny number of losers (and my god are they losers) and AI as a technology is slated to, in the hands of said losers, cause mass unemployment, if they can get it working good enough to pass that very low bar.
Only a few bystanders seem to notice the IP theft and laundering, the adversarial content barriers to protect from scraping, the centralization of capital within the owners of frontier models, the dial-up of the already insane race to collect personal data, the flooding of every communication channel with AI slop and spam, and the inevitable impending enshittification of massive proportions.
I’ve seen the sausage get made, enough to know the game. They’re establishing new dominance hierarchies, with each iteration being more cynical and predatory, each cycle refined to optimally speedrun the rent seeking value extraction. Yes, there are still important discussions about the tech itself. But it’s the deployment that concerns everyone, not hypothetically, but right now.
Exhibit A: social media. In hindsight, what was more important: the core technologies or the business model and deployment?
I think this is the key idea. Right now it doesn't work that well, but if it did work as advertised, that would also be bad.
The AI community treats potential customers as invaders. If you report a problem, the entire community turns on you, trying to convince you that you're wrong or that you only reported a problem because you hate the technology.
It's pathetic. It looks like a viper's nest. Who would want to do business with such people?
Actually promising AI tech doesn't even get center stage; it doesn't get a chance to.
Having all these popups announcing new integrations with AI chatbots showing up while you are just trying to do your work is pretty annoying. It feels like this time we are fighting an army of Clippies.
Everyone nodding along, yup yup this all makes sense
This is the next great upset. Everyone's hair is on fire and it's anybody's ball game.
I wouldn't even count the hyperscalers as certain to emerge victorious. The unit economics of everything and how things are bought and sold might change.
We might have agents that scrub ads from everything and keep our inboxes clean. We might find content of all forms valued at zero, and have no need for social networking and search as they exist today.
And for better or worse, there might be zero moat around any of it.
This is called an ad blocker.
> keep our inboxes clean
This is called a spam filter.
Raise subscription prices, don’t deliver more value, bundle everything together so you can’t say no. I canceled a small Workspace org I use for my consulting business after the price hike last year; also migrating away everything we had on GCP. Google would have to pay me to do business with them again.
Tyranny is a real thing which exists in the world and is not exemplified by “product manager adding text expansion to word processor.”
The natural state of capitalism is trying things which get voted on by money. It’s always subject to boom-bust cycles and we are in a big boom. This will eventually correct itself once the public makes its position clear and the features which truly suck will get fixed or removed.
That is what the natural state of capitalism _would_ be in a world of honest businesspeople and politicians.
Once upon a time, not too long ago, there was someone who would bag your groceries, and someone who would clean your window at the gas station. Now you do self-checkout. Has anyone asked for this? Your quality of life is worse; the companies are automating away humanity into something they think is more profitable for them.
In a society where you don't have government protection for such companies, there would be other companies who provide a better service whose competition would win. But when you have a fat corrupt government, lobbying makes sense, and crony-capitalism births monopolies which cannot have any competition. Then they do whatever they want to you and society at large, and they don't owe you, you owe them. Your tax dollars sponsor all of this even more than your direct payments do.
https://www.sciotoanalysis.com/news/2024/7/12/how-much-do-yo...
While government sponsored monopolies certainly exist, monopolies themselves are a natural outcome of competition.
Deregulation would break some monopolies while encouraging others to grow. The new monopolies may be far worse than the ones we had before.
Some are excited about it. Some are actually making something cool with AI. Very few are both.
The top of the list has got to be that one of their testimonials presented to investors is from "DrDeflowerMe". It's also interesting to me because they list financials which position them as unbelievably tiny: 6,215 subscribing accounts, 400 average new accounts per month, which to me sounds like they have a lot of churn.
I'm in my third year of subscribing and I'm actively looking for a replacement. This "Start Engine" investment makes me even more confident that's the right decision. Over the years I've paid nearly $200/year for this and watched them fail to deliver basic functionality. They just don't have the team to deliver AI tooling. For example: 2 years ago I spoke with support about the screen that shows you your credit card numbers being nearly unreadable (very light grey numbers on a white background), which still isn't fixed. Around a year ago a bunch of my auto transfers disappeared, causing me hundreds of dollars in late fees. I contacted support and they eventually "recovered" all the missing auto-transfers, but it ended up with some of them doubled up, and support stopped responding when I asked them to fix that.
I question if they'll be able to implement the changes they want, let alone be able to support those features if they do.
I don't see the utility; all I see is slop and constant notifications in Google.
You can say "skill issue", but that's kind of the point: this was all dropped on me by people who don't understand it themselves. I didn't ask or want to build the skills to understand AI. Nor did my bosses: they are just following the latest wave. We are the blind leading the blind.
Like crypto, AI will prove to be a dead-end mistake that only enabled grifters.
The reason your bosses are being obnoxious about making people use the internal AI tool is to push them into thinking about things like this. Perhaps at your company it’s genuinely not useful, but I’ve seen a lot of people say that who I’m pretty confident are wrong.
It is like Clippy, which no one wanted. Hopefully, like Clippy, "AI" will be scrapped at some point.
It seems here on the ground in non-tech bubble land, people use ChatGPT a ton and lean hard on AI features.
When Google judges the success of bolted-on AI, they are looking at how Jane and John General Public use it, not how xleet007 uses it (or doesn't).
There is also the fact that AI is still just being bolted onto things now. The next iteration of this software will be AI native, and the revisions after that will iron out big wrinkles.
When settings menus and ribbon panels are optional because you can just tell the program what to do in plain English, that will be AI integration.
If you look at the survey results, a few things jump out.
Firstly, there's a strong age skew. The people most likely to benefit from AI features in their software are those who are judged directly on their computing productivity, i.e. the young. Around half of 18-35 year olds say they would pay extra. It's only amongst the old that this drops to 20%.
Secondly, when asked directly if they value a range of AI-driven features, they say yes.
The skew opens up because companies like OpenAI give AI services away for free. There's just a really strong expectation established by the tech industry that software is either free or paid for by a low and very price-stable monthly subscription. This is also true in AI: you only pay for ChatGPT if you want more features and smarter models. For the majority of things that people are doing with AI right now, the free version of ChatGPT is good enough. What remains is mostly low value stuff like better autocomplete, where indeed people are probably not that interested in paying more for it.
Unfortunately Ted Gioia tries to use this stat to imply people don't want AI at all, which is not only untrue but trivially untrue; ChatGPT is the fastest growing product in history.
I will pay people for the value they create. I won't pay for AI content, or AI integrations. They are not interesting or valuable to me.
Marsha Blackburn's amendment to remove the "AI legislation moratorium" from the "Big Beautiful Bill" passed the Senate 99-1.
People are getting really fed up with "AI", "crypto" and other scams.
So they may have been on to something
- me, a few years ago.
I find the whole situation with regard to AI utterly ridiculous and boring. While those algos might have some interesting applications, they're not as earth-shattering as we are made to believe, and their utility is, to me at least, questionable.
Love this quote!
The whole sales pitch for AI is predicated on FOMO, from developers being replaced by AI-enabled engineers to countries being left behind by AI slop. Like crypto, the idea is to get-big-fast and become too big to fail. This worked for social media, but I find it hard to believe it can work for AI.
My hope is that: while some of the people can be fooled all the time, all the people cannot be fooled all the time.
People are going to lord it over others in the pursuit of what they think is proper.
Society is over-rated, once it gets beyond a certain size.
Along the same lines, I am currently starting my morning by blocking ranges of IP addresses to get Internet service back, due to someone's current desire to SYN flood my webserver, which, being hosted in my office, affects my office Internet.
It may soon come to a point where I choose to block all IP addresses except a few to get work done.
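For what it's worth, the blocking itself is a one-liner per range (a sketch with iptables; 203.0.113.0/24 is a documentation-range placeholder, substitute whatever ranges your logs show):

    # Drop traffic from a hostile range
    sudo iptables -A INPUT -s 203.0.113.0/24 -j DROP

    # Crude SYN-flood damping: rate-limit new connection attempts
    sudo iptables -A INPUT -p tcp --syn -m limit --limit 25/s --limit-burst 50 -j ACCEPT
    sudo iptables -A INPUT -p tcp --syn -j DROP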
People gonna be people.
sigh.
Please don't. I am going to read this email. Adding more text just makes me read more.
I am sure there's a common use case of people who get a ton of faintly important email from colleagues. But this is my personal account and the only people contacting me are friends. (Everyone else should not be summarized; they should be trashed. And to be fair I am very grateful for Gmail's excellent spam filtering.)
I say I imagine it's annoying because I've yet to actually be annoyed much, but I get the idea. I actually quite like the Google AI bit; you can always not read it if you don't want to. AI-generated content on YouTube is a bit of a mixed bag; it tends to be kinda bad, but you can click stop and play another video. My Office 2019 is gloriously out of date and does the stuff I want without the recent nonsense.
And of course there's no way to disable it without also losing calculator, unit conversions, and other useful functionality.
Also:
> As per SimilarWeb data 61.05% of ChatGPT's traffic comes from YouTube, which means from all the social media platforms YouTube viewers are the largest referral source of its user base,
That's deeply suspect.
Only when I went to cancel[1] did they suddenly make me aware that there was a "classic" subscription at the normal price, without Copilot. So they basically just upsized everyone to try to force uptake.
[1] - I'm in the AI business and am a user and abuser of AI daily, but I don't need it built directly into every app. I Already have AI subscriptions and local models and solutions.
It's like IPv6: if it really were a huge benefit to the end user, we'd have adopted it already.
Just from current ARR announcements: $3B+ Anthropic, $10B+ OpenAI, whatever Google makes, whatever Microsoft makes. Yeah, people are already paying for it.
"If it was any good, people would pay for it."
"The data shows people are paying for it."
"Aah but they don't know they're paying for it."
And VC investments are distorting markets: unprofitable companies kill profitable ones before crashing.
After seeing something like blockchain go completely off the rails and get used for the wrong things, yet be embraced by the public for it, I at least agree that AI has a value-perception problem.