1. Who's gonna pay back the investors their trillions of dollars and with what?
2. Didn't we have to start thinking about reducing energy consumption like at least a decade ago?
Some people are vegan, some people eat meat. Usually, these two parties get on best when they can at least understand each other's perspectives and demonstrate an understanding of the kinds of concerns the other might have.
When talking to people about AI, I feel much more comfortable when people acknowledge the concerns, even if they're still using AI in their day-to-day.
Our mental models are inadequate to think about these new tools. We bend and stretch our familiar patterns and try to slap them on these new paradigms to feel a little more secure. It’ll settle down once we spend more time with the tech.
The discourse is crawling with overgeneralizations and tautologies because of this. This won't do us any good.
We need to take a deep breath, observe, theorize, experiment, share our findings and cycle.
But you haven't defined the framework. You know a bunch of people for whom AI is bad for a bunch of hand-wavy reasons - not from any underlying philosophical axioms. You are doing what you are accusing others of. In my ethical framework the other stated things can be shown to be bad; it is not as clear for AI.
If you want to take a principled approach you need to define _why_ AI is bad. There have been cultures and religions across time that have done this for other emerging technologies: the Luddites, the Amish, etc. They have good ethical arguments for this - and it's possible they are right.
Of course, none of these are caused by the technology itself, but rather by people who drive this cultural shift. The framework difference comes from people believing in short-term gains (like revenue, abusing the novelty factor, etc.) vs those trying to reasonably minimize harm.
OK - so your framework is "harm minimization". This is kind of a negative utilitarian philosophy. Not everyone thinks this way and you cannot really expect them to either. But an argument _for_ AI from a negative utilitarian PoV is also easy to construct. What if AI accelerates the discovery of anti-cancer treatments or revolutionizes green tech? What if AI can act as a smart resource allocator and enable small hi-tech sustainable communes? These are not things you can easily prove AI won't enable, even within your framework.
One point which I consider worth making is that LLMs have enabled a lot of people to solve real-world problems, even if the solutions are sometimes low quality. The reality is that in many cases the only choice is between a low-quality solution and no solution at all. Lots of problems are too small or niche to be able to afford hiring a team of skilled programmers for a high-quality solution.
Let's stay with the (at minimum) low-quality solution: what would someone do without AI?
- ask on a forum (Facebook, Reddit, Ask HN, specialized forums…)
- ask a neighbor if they know someone knowledgeable (2 or 3 connections can lead you to many experts)
- go to the library. Time-consuming, but you might learn something else along the way and improve your knowledge and IQ
- think about the problem again (ask "why?" many times, think outside the box…)
bn-l•4h ago
What does this have to do with AI?
Also, why hedge everything you're about to say with a big disclaimer?
> Disclaimer: I believe that what I’m saying in this post is true to a certain degree, but this sort of logic is often a slippery slope and can miss important details. Take it with a grain of salt, more like a thought-provoking read than a universal claim. Also, it’d be cool if I wasn’t harassed for saying this.
pjc50•4h ago
It ties into the author's general concern about the externalization of downsides.
> Also, why hedge everything you're about to say with a big disclaimer?
Because people are extremely rude on the internet. It won't make much of a difference to the actual nitpicking, as I'm sure we'll see; it's more a sad recognition of the problem.