1. Who's gonna pay back the investors their trillions of dollars and with what?
2. Didn't we have to start thinking about reducing energy consumption like at least a decade ago?
Some people are vegan, some people eat meat. Usually, these two parties get on best when they can at least understand each other's perspectives and demonstrate an understanding of the kinds of concerns the other might have.
When talking to people about AI, I feel much more comfortable when people acknowledge the concerns, even if they're still using AI in their day-to-day.
Our mental models are inadequate to think about these new tools. We bend and stretch our familiar patterns and try to slap them on these new paradigms to feel a little more secure. It’ll settle down once we spend more time with the tech.
The discourse is crawling with overgeneralization and tautologies because of this. This won’t do us any good.
We need to take a deep breath, observe, theorize, experiment, share our findings and cycle.
But you haven't defined the framework. You know a bunch of people for whom AI is bad for a bunch of hand-wavy reasons - not from any underlying philosophical axioms. You are doing what you are accusing others of. In my ethical framework the other stated things can be shown to be bad; it is not as clear for AI.
If you want to take a principled approach you need to define _why_ AI is bad. There have been cultures and religions across time that have done this for other emerging technologies - the Luddites, the Amish, etc. They have good ethical arguments for this - and it's possible they are right.
Of course, none of these are caused by the technology itself, but rather by people who drive this cultural shift. The framework difference comes from people believing in short-term gains (like revenue, abusing the novelty factor, etc.) vs those trying to reasonably minimize harm.
OK - so your framework is "harm minimization". This is kind of a negative utilitarian philosophy. Not everyone thinks this way and you cannot really expect them to either. But an argument _for_ AI from a negative utilitarian PoV is also easy to construct. What if AI accelerates the discovery of anti-cancer treatments or revolutionizes green tech? What if AI can act as a smart resource allocator and enables small hi-tech sustainable communes? These are not things you can easily prove AI won't enable, even within your framework.
One point which I consider worth making is that LLMs have helped enable a lot of people solve real world problems, even if the solutions are sometimes low quality. The reality is that in many cases the only choice is between a low quality solution and no solution at all. Lots of problems are too small or niche to be able to afford hiring a team of skilled programmers for a high quality solution.
Let’s stay with the (at minimum) low-quality solution: what would someone do without AI?
- ask on a forum (Facebook, Reddit, askHN, specialized forums…)
- ask a neighbor if they know someone knowledgeable (2 or 3 connections can lead you to many experts)
- go to the library. Time-consuming, but you might learn something else too and improve your knowledge and IQ
- think again about the problem (ask "why?" many times, think outside the box…)
- "a low quality solution"
- "a low-quality solution, but you also spend extra time (sometimes - other people's time) on learning to solve the problem yourself"
- "a high-quality solution, but you've spent years on becoming an expert is this domain"
It's good that you brought this up.
Often, learning to solve a class of problems is simply not a priority. Low-quality vibe-coded tools are usually a means to an end. And the end goals that they achieve are often not even the most important end goals that you have. Digging so deep into the details of those is not worth it. Those are temporary, ad-hoc things.
In the original post, the author references our previous discussion about "Worse Is Better". It's a very relevant topic! Over there, I actually made a very similar point about priorities. "Worse" software is "better" when it's just a component of a system where the other components are more important. You want to spend as much time as possible on those other components, and not on the current component.
A (translated) example that I gave in that thread:
> In the 1970s, K&R were doing OS research. Not PL research. When they needed to port their OS, they hacked a portable low-level language for that task. They didn't go deep into "proper" PL research that would take years. They ported their OS, and then returned straight to OS research and achieved breakthroughs in that area. As intended.
> It's very much possible that writing a general, secure-by-design instrument would take way more time than adding concrete hacks on the application level and producing a result that's just as good (secure or whatever) when you look at the end application.
I believe they aren't against all AI use and aren't against the use that you describe. They are against knowingly cutting corners and pushing the cost onto the users (when you have an option not to). Or onto anything else, be it the environment or the job market.
This part is true. But at the same time, it's fairly easy to filter only real, established personal blogs and see that the same type of "practical" AI discourse (that the author dislikes) is present (and dominates) there too.
I haven't read Blindsight, though.
Ah, I see. The argument makes total sense if that's the case.
I'm just not used to talking about learned behaviors in terms of "evolution".
---
I agree with the connections that you make in this post. I like it.
But I disagree that purely-technical discussions around LLMs are "meaningless" and "miss the point". I think that appealing to reason through "it will make your own work more pleasant and productive" (for example, if you don't try to vibecode an app that you'll need to maintain later) is an activity that has a global positive effect too.
Why? Because the industry has plenty of cargo cults that don't benefit you, even at someone else's expense! This pisses me off the most. Irrationality. Selfishness is at least something that I can understand.
I'll throw in the idea that cultivating rationality helps cultivate a compassionate society. No matter how you look at it, most people have compassion in them. You don't even need to "activate" it. But I feel like, due to their misunderstanding of a situation, or due to logical fallacies, people's compassion often manifests as actions that only make everything worse. The problem isn't that people don't try to help others. A lot of people try, but do it wrong :(
A simple example: most of the politically-active people with a position that's opposite to yours. (The "yours" in this example is relative and applicable to anyone, I don't mean the author specifically)
In general, you should fight the temptation to perceive people around you as (even temporarily) ill-intentioned egoists. Most of the time, that's not the case. "Giving the benefit of the doubt" is a wonderful rule of thumb. Assume ignorance and circumstances, rather than selfishness. And try to give people tools and opportunities, instead of trying to influence their moral framework.
I'll also throw in another idea. If a problem has an (ethical) selfish solution, we should choose that. Why? Because it doesn't require any sacrifices. This drastically lowers the friction. Sacrifices are a last resort. Sacrifices don't scale well. Try to think more objectively about whether that's the most efficient solution to the injustice that bothers you. Sacrifices let you put yourself on a moral pedestal, but they don't always lead to the most humane outcomes. It's not a zero-sum game.
bn-l•6mo ago
What does this have to do with AI?
Also, why hedge everything you're about to say with a big disclaimer?:
> Disclaimer: I believe that what I’m saying in this post is true to a certain degree, but this sort of logic is often a slippery slope and can miss important details. Take it with a grain of salt, more like a thought-provoking read than a universal claim. Also, it’d be cool if I wasn’t harassed for saying this.
pjc50•6mo ago
It ties into the author's general concern about the externalization of downsides.
> Also, why hedge everything you're about to say with a big disclaimer?
Because people are extremely rude on the internet. It won't make much of a difference to the actual nitpicking, as I'm sure we'll see; it's more of a sad acknowledgment of the problem.
Expurple•6mo ago
Because her previous (Telegram-only) post on a similar topic has attracted a lot of unfounded negative comments that were largely vague and toxic, rather than engaging with her specific points directly and rationally.
She even mentions it later in this post (the part about “worse is better”). Have you not read that? Ironically, you're acting exactly like those people who complain without having read the post.
> What does this have to do with AI?
It's literally explained in the same sentence, right after the part that you quote. Why don't you engage more specifically with that explanation? What's unclear about it?