"I strongly feel that AI is an insult to life itself." - Hayao Miyazaki
I'm going to start using this quote. Regardless of how you feel about AI, the specific instance Miyazaki was reacting to was, indeed, an insult to life itself!
The quote was taken a little bit out of context.
The author is also changing the subject of the quote.
He said it reminded him of a disabled friend, and that this technology struck him as an insult to life itself.
He's right that to someone whose art is about capturing the world through a child's eyes, the dreamlike consonance of everyday life with simple fantasy, this is abominable.
As if it's in any way less horrifying having the entire Internet infested with AI slop.
Look at all the AI-written and AI-illustrated articles being published this year. Look at how smooth the image slop is. Look at how fluent the text slop is. Higher quality slop doesn't change the fact that nobody could be bothered to write the thing, and nobody can be bothered to read it.
Wish some of the AI detractors realized when they're doing a worse job reasoning than the LLMs they criticize.
Studio Ghibli producer, Suzuki: "So, what is your goal?"
ML Developer: "Well, we would like to build a machine that can draw pictures like humans do."
<jump cut>
Miyazaki VO: "I feel like we are nearing to the end of times."
"We humans are losing faith in ourselves."
Source: https://www.youtube.com/watch?v=ngZ0K3lWKRc

Of course, the form of AI has changed over the years, but the claim that this quote could be tied to Miyazaki's general view on having machines create art is not totally baseless.
So that's definitely a misquote, though I wouldn't be surprised if Miyazaki dislikes AI.
Seeing which use-cases make it through will certainly be interesting.
That whole industry is literally just a sweatshop for English language speakers who just follow scripts (prompts) and try to keep customers happy.
Seeing as how so many people volunteer to make meaningful relationships with LLMs as it is, it has to be more effective than talking to a “Bill” or “Cheryl” with a heavy South Asian accent.
The goal by all of these companies is to force you to pay for and eat the slop. That's why they keep inserting it into every subscription, every single app and program you use, and directly on the OS itself. It's like the Sacklers pushing opioids but directly in the open, with similar effects on vulnerable people.
On the other hand, if I saw a product labelled "No AI bullshit" then I'd immediately be more interested.
But that's just me; the AI buzz among non-techies is enormous and net-positive.
Almost like it's all emotional-level gimmicks anyway.
If I saw "No AI bullshit" I'd be as skeptical as if it said "AI Inside". Corpos trying to squeeze a buck will resort to any and all manipulative tactics.
Which, granted, describes most companies. But ultimately they do not serve you or your technical needs, because they are literally incapable of understanding them. Any intersection between your technical needs and their provisions is pure coincidence.
One of my friends sent me a delightful bastardization of the famous IBM quote:
A COMPUTER CAN NEVER FEEL SPITEFUL OR [PASSIONATE†]. THEREFORE A COMPUTER MUST NEVER CREATE ART.
Hate is an emotional word, and I suspect many people (myself included) may leap to take logical issue with an emotional position. But emotions are real, and human, and people absolutely have them about AI, and I think that's important to talk about and respect that fact.
† replaced with a slightly less salacious word than the original in consideration for politeness.
Tools do not dictate what art is and isn't, it is about the intent of the human using those tools. Image generators are not autonomously generating images, it is the human who is asking them for specific concepts and ideas. This is no different than performance art like a banana taped to a wall which requires no tools at all.
1: https://news.artnet.com/art-world/italian-artist-auctioned-o...
It was considered "anti-art" at the time, but basically took over the elite art world itself and the overall movement had huge impact on what is considered art today, on performance art, sculptures, architecture that looks intentionally upsetting etc.
It's not useful to try to think of the sides as "expansive definitionists" who consider pretty much anything art just because, and "restrictive definitionists" who only consider classic masterpieces art. The divide is much more specific and has intellectual foundation and history to it.
The same motivations that led to the expansive definition, in the personally transgressive, radical, and subversive sense, today logically and coherently oppose the pictures and texts generated by huge centralized profit-oriented companies via mechanization. Presumably, if AI were more of a distributed, hacker-ethos-driven thing that gives the middle finger to Disney copyrightism, they might be pro-AI.
I generally find the specific debate around "whether it's art" super boring. People have squeezed all the juice out of "what even is art" decades before the banana taped to a wall. Duchamp's Fountain, Manzoni's Artist's Shit, John Cage's 4′33″, the Red Square by Malevich, Jackson Pollock etc.
I simply don't care if it's art. It's not an inherently prestigious label to me given this history.
As an aside:
...art should be able to come from anywhere and anyone.
is an immensely political view (and one I happen to agree with). It's not a view shared by all artists, or their art. Ancient art in particular often assumes that the highest forms of art require divine inspiration that isn't accessible to everyone. It's common for epic poetry to invoke muses as a callback to this assumption, nominally to show the author's humility. John Milton's Paradise Lost does this (and reframes the muse within a Christian hierarchy at the same time), although it doesn't come off as remotely humble.

It was the intellectual statement conveyed through that medium that made him famous.
If generating the piece costs half a rain forest or requires tons of soul crushing badly paid work by others, it might be well worth considering what is the general framework the artist operates in.
Using more resources to achieve subpar outcomes is not generally something considered artful. Doing a lot with little is.
A human using their creativity to create a painting showcasing a statement about war.
A human asking AI to create a painting showcasing a statement about war.
I do not wish to use strawman tactics, so I'll ask whether you think the two statements above are equal and true.
In what logical or philosophical framework does my opinion dictate your opinion? You're not making a grand philosophical point, you're frustrating the attempts of other people to understand your point of view and either blocking them from understanding your point of view or addressing your argument in a meaningful way.
If you cannot or will not engage in the conversation it would be more efficient and more purposeful for you to say so than the "whatever you say is what I say" falseness you're expressing in the above comment.
One person spent years painting landscapes and flowers.
The other spent years programming servers.
Is one person's statement less important than the other's? Less profound or less valid?
The "statement" is the important part, the message to be communicated, not the tools used to express that idea.
Similarly, music is not just sound; rather, the thought of a musician made manifest is what we call music. This is why silence can be music, but silence without the thought is not.
Images generated through an AI that lack human thought are not art. They can look like art, have similarities to art, but they are no more art than silence is music. The same goes for music and text generated by AI.
People can inject defective thoughts into the process like "what generates me most money" or "how can I avoid doing any thinking", in which case the output of the AI will reflect that.
I looked up Picasso's Guernica now out of curiosity. I don't understand what's so great about this artwork. Or why it would represent any of the things you mention. It just looks like deranged pencilwork. It also comes across as aggressively pretentious.
What makes that any better than some highly derivative AI generated rubbish I connect to about the same amount?
When you use AI, you might now prompt "in the style of Picasso".
You can't argue about taste.
There is a good part of the series Remembrance of Earth's Past (of which The Three Body Problem is the first book) where the aliens are creating art and it shocks people to learn that the art they're so moved by was actually created by non-humans. This is exactly what this situation with AI feels like, and not even to the same extent because again AI is not autonomously making images, it's still a human at the end of the day picking what to prompt.
I think that 'dutch people skating on a lake' or 'girl with a pearl earring' or 'dutch religious couple in front of their barn', given an AI not trained on various works, will produce just noise. And if those particular works (you know the ones, right?) were not part of the input, then the AI would never produce anything looking like the original, no matter how specific you made the prompt. It takes human input to animate it, and even then what it produces does not look original to me, whereas any five-year-old is able to produce entirely original works of art, none of which can be reduced to a prompt.
Prompts are instructions, they are settings on a mixer, they are not the music produced by the artists at the microphones.
Why would you ask this? It sounds like a lead-up to some kind of put down.
> It can produce things it's never seen if only you describe the constituent pieces.
It can produce things it's never seen based on lots of things that it has seen.
> Prompts are a compressed version of the image one wants to create
They emphatically are not. They are instructions to a tool on what relative importance to assign to all of the templates that it was trained on. But it doesn't understand the output image any more than it understood any of the input images. There is no context available to it in the purest sense of the word. It has no emotion to express because it doesn't have emotions in the first place.
> and these days you don't even need "prompts" per se, you can say, make a woman looking towards the viewer, now add a pearl earring, now adjust this and that etc.
That's just a different path to building up the same prompt. It doesn't suddenly cause the AI to use red for a dress because it thinks it is a nice counterpoint to a flower in a different part of the image because it does not think at all.
Anyway, this gets hairy quickly, that's why I chose to illustrate with a crappy recording of a magnificent piece that still captures that feeling - for me - whereas many others would likely disagree. Art is made by its creator because they want to and because they can, not because they are regurgitating output based on a multitude of inputs and a prompt.
"Paint me a Sistine Chapel" is going to yield different results no matter how many times you give that same prompt to Michelangelo, depending on his mood, what happened recently, what he ate, and his health, as well as the season. That AI will produce the same result over and over again from the same prompt. It is a mechanistic transformation, not an original work; it reduces the input, it does not expand on it, it does not add its own feelings to it.
It's a bit like when people describe how models don't have a will or the likes. Of course they don't, "they" are basically frozen in time. Training is way slower than inference, and even inference is often slower than "realtime". It just doesn't work that way from the get-go. They're also simply not very good - hence why they're being fed curated data.
In that sense, and considering history, I can definitely see why it would (and should?) be considered differently. Not sure this is what you meant, but this is an interesting lens, so thanks for this.
It's tiresome to read the same thing over and over again and at this point I don't think A's arguments will convince B and vice versa because both come from different initial input conditions in their thought processes. It's like trying to dig two parallel tunnels through a mountain from different heights and thinking they'll converge.
A: but AI only interpolates between training points, it can't extrapolate to anything new.
B: sure it can, d'uh.
Art never was about productivity, even though there have been some incredibly productive artists.
Some of the artists that I've known were capable of capturing the essence of the subject they were drawing or painting in a few very crude lines and I highly doubt that an AI given a view would be able to do that in a way that it resonated. And that resonance is what it is all about for me, the fact that briefly there is an emotional channel between the artist and you, the receiver. With AI generated content there is no emotion on the sending side, so how could you experience that feeling in a genuine way?
To me AI art is distortion of art, not new art. It's like listening to multiple pieces of music at the same time, each with a different level of presence, out of tune and without any overarching message. It can even look skilled (skill is easy to imitate, emotion is not).
If after 33 comments in this thread and countless people trying to explain a part of it you don't get it that may be because you either don't want to get it or are unable to get it. Restating it one more time is not going to make a difference and I'm perfectly ok with you not 'getting it', so don't worry about it.
AI without real art as input is noise. It doesn't get any more concrete than that. Humans without any education at all and just mud and sticks for tools will spontaneously create art.
Normally it's just like you say: I don't find the remixing argument persuasive, because I consider it to be a point of commonality. This time however, my focus shifted a bit. I considered the difference in "source set".
To be more specific, it kind of dawned on me how peculiar it is to engage in creating art as a human, given what a human life looks like. How different the "setup" is between a baby just kind of existing and taking in everything, which for the most part means supremely mundane, not at all artful or aesthetic experiences, and an AI model being trained on things people uploaded. The model's input will also have a lot of dull, irrelevant stuff, but not nearly in the same way or in the same amount, hitting the same registers.
I still think it's a bit of a bird vs plane comparison, but then that is also what they are saying in a way. That it is a bird and a plane, not a bird and a bird. I do still take issue with refusing to call the result flight though, I think.
You may like the music of Zombie by The Cranberries, but I'd say it belongs to the complete appreciation of it to know that it's about the Irish Troubles, and for that you need some background knowledge.
You may like to smoke weed to Bob Marley songs, but without knowing something about the African slave trade, you won't get the significance of tracks like "400 Years".
For Guernica you also have to understand Picasso's fascination with primitive art, prehistoric cave art, children's drawings and abstraction, the historic moment when photography took over the role of realistic depiction, freeing painters to express themselves more in terms of emotional impressions and abstractions.
Take U2's October as a nice example. (You mentioned Zombie, incidentally one of my favorites; the anger and frustration in there never fail to hit me, and I can't listen to it too often for that reason.) Superficially it is a very simple set of lyrics (8 lines, I think) and an even simpler set of chords. And yet: it moves me. And I doubt any AI would have come up with it, or even a close approximation, if it wasn't part of the input. That's why I refuse to call AI-generated stuff art. It's content, not art.
I would have thought similarly, but after actually feeding 19th-century poems to Suno and iterating on the prompts several times, I got some results that moved me emotionally: listening to and reading the words with this musical presentation enhanced my appreciation of the poems and felt more visceral. Making angry revolutionary poems into grunge brought them closer, made them less of a "historic", "bookish", "dusty" thing.
I think there is a great case to be made here using purely synthetic sounds as the basis for emotion. Vangelis (Soil festivities), Klaus Doldinger (Skyscape) are great examples. These are sounds that have been produced exclusively by the mind and in spite of there not being a physical instrument involved they manage to convey imagery and emotion extremely effectively. This is technology used as an enabler. I've yet to come across someone using AI tech in the same liberating manner unlocking novel imaginary constructs in the way that those two did.
Let's take Zombie by The Cranberries as an example. I really liked this song as a kid, still do, I think it has a great sound. The difference is that I now speak English, can understand the lyrics, and could look up the historical context. Ever since I did so, listening to it has never been the same, and not in a good way.
There are also examples which are not going to be so specific to my opinions. Kendrick's Swimming Pools was a house party staple, despite the song carrying heavy anti-alcoholism messaging. The contrast is almost comical.
For a different example, let's consider temporal contextuality; you describe Guernica being reliant on this. When I try to think of an example, I'm reminded of vague memories of shows with oddly timely subtitles. Subtitles that referenced things that were very specific to the given cultural moment, basically memes, but vanished since. It's not a good experience, and I'd say it would be reasonable to chalk such a thing up as a critique, rather than something worthy of praise.
This is also why I half-seriously referred to the piece as "aggressively pretentious". Rather than coming across as something I'm just genuinely missing the context for, it comes across as something with manufactured sophistication (which I am then indeed missing the context for, but unapologetically). This might still be a mirage, but given how stereotyped this experience is at this point, I'd imagine there's got to be some truth to it at least.
This is not to say that eternal themes aren't important. But art is a kind of social technology that mediates between people in given cultural contexts. Part of "the great conversation" across the ages, the part you can't express in logical essays or propositions. And the eternal themes pop up in different "clothes" at different times. Once you have the key to unlock them, you do discover the same human nature and human problems operating underneath as ever.
And the beautiful cathedrals are not simply beautiful for beauty's sake but their art often conveys very specific theological claims, often hotly debated at the time. Or the choice of subject may have been outrageous or novel at the time but mundane to us now.
Liszt's music may move us even today, but we can't quite appreciate it in the same Lisztomania way as it was then, when it was fresh and novel.
> There's a story that, IIRC, was told by Brian Enos, where he was practicing timed drills with the goal of practicing until he could complete a specific task at or under his usual time. He was having a hard time hitting his normal time and was annoyed at himself because he was slower than usual and kept at it until he hit his target, at which point he realized he misremembered the target and was accidentally targeting a new personal best time that was better than he thought was possible. While it's too simple to say that we can achieve anything if we put our minds to it, almost none of us are operating at anywhere near our capacity and what we think we can achieve is often a major limiting factor.
---
Art is nothing like shooting. My first instinct looking at Guernica is that I also feel nothing, but one can limit oneself and say: if I feel nothing initially, I will feel nothing at all. If you prime yourself to be open to an experience of putting yourself into the shoes of the author, you might start feeling something.
Along with being against any form of animal cruelty.
They were also pretty obsessed with spiritualistic quackery.
Are we giving each other fun facts or what? Surely one does not need to go all the way to the Nazis to find a Picasso hater? Or are you just following in the footsteps of the blog post author too?
I think this is a fantastic question. Full disclosure, Guernica is one of my personal favorites and I initially felt pretty poorly about this particular string of words. But the implied question, "So what?", is literally what separates art from x. I don't think that there's a direct answer to this, but I'll do my best to articulate my feelings towards it.
When I was much younger and first learning how to play guitar, I heard that Eric Clapton was a guitarist that a lot of other guitarists looked up to. I decided to listen to his works and initially dismissed them. To my ears he sounded like a worse, more basic, more derivative version of the artists I was listening to at the time, and I wondered how he could even be in the same conversations as other, more modern artists. It was later that I realized I had the arrow of causality wrong. He wasn't revered because he was the best or had taken the artform to the furthest reaches or would be successful today. He was revered because he exposed so many people to a new way of expressing themselves that they likely wouldn't have known about otherwise and certainly wouldn't have invented themselves.
This analogy applies directly to Picasso, I think. You mention you felt the piece was "aggressively pretentious". Where do you think that pretense comes from? There is a whole history to the deconstruction of art in the visual medium and a whole backlash to that deconstruction and a whole response to that and that's your cultural inheritance when you view pieces like this. You don't have to even be aware of this to know that it's affecting how you feel about the piece. I think one facet of "so what?" is that this piece has existed for long enough to generate discussion about its own worth and value and at the very least is spawning literally this post.
The fact that one could find the work with one word and have a discussion about it is also pretty incredible. I don't think a model generated output is that widely known. I do think that sort of cultural reach is a facet of "so what".
There are more answers to "so what?", but to answer your question directly, "what makes it any better", I think an argument could be made that it's not. "Better" when applied to art doesn't have any particular meaning in my mind. What makes it more culturally relevant, more widely known, more widely loved, more important, and more gratifying to study each have dozens of answers, and I think that's more interesting.
One technical definition of empathy is understanding what someone else is feeling. In war you must empathize with your enemy in order to understand their perspective and predict what they will do next. This cognitive empathy is basically theory of mind, which has been demonstrated in GPT-4.
https://www.nature.com/articles/s41562-024-01882-z
If we do not assume biological substrate is special, then it's possible that AIs will one day have qualia and be able to fully empathize and experience the feelings of another.
It could be possible that new AI architectures with continuously updating weights, memory modules, evolving value functions, and self-reflection, could one day produce truly original perspectives. It's still unknown if they will truly feel anything, but it's also technically unknowable if anyone else really experiences qualia, as described in the thought experiment of p-zombies.
As the article says, we can discuss it that day. "One day AI will have qualia" is no argument in a discussion about AI today.
My computer does. What evidence would change your mind?
Neither will a paintbrush.
The tool does need to, though.
I'm being slightly flippant but I do think this is a motte and bailey argument.
Not every painting is a Guernica, nor does it need to be.
And not every aesthetically pleasing object is art. (And finally - art doesn't even have to be aesthetically pleasing. And actually finally "art" has a multitude of contradictory meanings)
Now, just like you can with Studio Ghibli art, you can generate new images in the style of Guernica.
As a software developer, I dread AI's capabilities to greatly accelerate the accumulation of technical debt in a codebase when used by somebody who lacks the experience to temper its outputs. I also dread AI's capabilities, at least in the short term, to separate me and others from economic opportunities.
most artists I know are against AI because they feel it is anti-human, devaluing and alienating both the viewer and the creator
some can tolerate it as a tool, and some (as is long art tradition) will use it to offend or be contrarian, but these are not the common position
if I were a spherical cow in a vacuum with infinite time, and nobody around me had economic incentives to make things with it, I could, maybe, in the spirit of openness, tolerate knowing some people somewhere want to use it... but I still wouldn't want to see its output
but again, that's not what I see in the people around me
You hear what you want to hear. You think fine artists - and really, how many working fine artists do you really know? - don't have sincere, visceral feelings about stuff, that have nothing to do with money?
How could a practical LLM enthusiast make a non-economic argument in favor of their use? They're opaque, usually secretive jumbles of linear algebra; how could you make a reasonable non-economic argument about something you don't, and perhaps can't, reason about?
My point is why are your economic motivations valid while his aren’t?
AI is not intelligent or emotional. It's not a "strongly held belief"; it simply hasn't been proven.
> AI is not intelligent or emotional.
Yes, I agree, my point is that people use arguments against these types of issues instead of stating plainly that their livelihood will be threatened. Just say it'll take your job and that's why you're mad, I don't understand why so many people try to dance around this issue and make it seem like it's some disagreement about the technology rather than economics.
I am interested in the intelligible content of the thing.
Also, AI does not reason. Human beings do.
For example, someone can feel like they already have to compete with people, and that's nature, but now they have to compete with machines too, and that's a societal choice.
Please don't. That offends me much more than a very mild word ever could.
Me, I hate the externalities, but I love the thing. I want to use my own AI, hyper optimized and efficient and private. It would mitigate a lot. Maybe some day.
Garbage in, garbage out. Which will always be the case when your AI is scraping stuff off of random pages and commentary on the internet.
pointing index finger at imaginary balloon: pfffffffffft
You are the "bad actors", pumpkin. Worse than the other ones.
"Shannon warned in 1956 that information theory “has perhaps been ballooned to an importance beyond its actual accomplishments” and that information theory is “not necessarily relevant to such fields as psychology, economics, and other social sciences.” Shannon concluded: “The subject of information theory has certainly been sold, if not oversold.” [Claude E. Shannon, “The Bandwagon,” IRE Transactions on Information Theory, Vol. 2, No. 1 (March 1956), p. 3.]"
Source for this claim? Are you still using Groupon?
Just like crypto.
Just look at the bitcoin hashrate; it’s a steep curve.
The next time you get a CT for example, it might be an AI system that finds a lung nodule and saves your life.
Or for a negative possibility, consider how deepfakes could seriously degrade politics and the media landscape.
There are massive potential upsides and downsides to AI that will almost certainly impact you more than a coupon company.
Do you still use the internet?
Wait, are you sure?
I wish you'd try thinking for at least five seconds before commenting. If you are here, then you must be smart-- so, use your brain, man.
Depends on the nature of the bubble, doesn't it?
Your argument could just as easily be applied to social networks ("are you still using friendster?") or e-commerce ("are you still using pets.com?"). GPT-3 or Kimi K2 or Mistral is going to become obsolete at some point, but that's because the succeeding models are going to be fundamentally better. That doesn't mean that they weren't themselves fit for a certain task.
It's weird how AI-lovers are always trying to shoehorn an unsupported "it does useful things" into some kind of criticism sandwich where only the solvable problems can be acknowledged as problems.
Just because some technologies have both upsides and downsides doesn't mean that every technology automatically has upsides. GenAI is good at generating these kinds of hollow statements that mimic the form of substantial arguments, but anyone who actually reads it can see how hollow it is.
If you want to argue that it does useful things, you have to explain at least one of those things.
It's bad at:

- Actually knowing things / being correct
- Creating anything original

It's good at:

- Producing convincing output fast and cheap
There are lots of applications where correctness and originality matter less than "can I get convincing output fast and cheap". Other commenters have mentioned being able to vibe-code up a simple app, for example. I know an older man who is not great at writing in English (but otherwise very intelligent) who uses it for correspondence.
Being wrong or lying is almost universally bad and unproductive. But making money has nothing to do with being productive - you can actively make the world worse and make money. Ask RJ Reynolds.
Who said "every technology?" We're talking about a specific one here with specific up and downsides delineated.
With this the article lost all seriousness for me. I may be on board with a lot of what you are saying, but pretending you know the answer to these questions just makes you look as idiotic as anyone who says the opposite.
Words are the most indirect form of perception imaginable. Both Aristotle and Cassirer knew this, and AI demonstrates it. The writer doesn't grasp how bad we have it either way.
"I became a hater by doing precisely those things AI cannot do: reading and understanding human language; thinking and reasoning about ideas; considering the meaning of my words and their context"
What?
Cassirer: “Only when we put away words will we be able to reach the initial conditions; only then will we have direct perception. All linguistic denotation is essentially ambiguous–and in this ambiguity, this “paronymia” of words is the source of all myths…this self-deception is rooted in language, which is forever making a game of the human mind, ever ensnaring it in that iridescent play of meanings…even theoretical knowledge becomes phantasmagoria; for even knowledge can never reproduce the true nature of things as they are but must frame their essence in “concepts.” Consequently all schemata which science evolves in order to classify, organize and summarize the phenomena of the real turn out to be nothing but arbitrary schemes. So knowledge, as well as myth, language, and art, has been reduced to a kind of fiction–a fiction that recommends itself by its usefulness, but must not be measured by any strict standard of truth, if it is not to melt away into nothingness.” (Cassirer, Language and Myth)
I also had a similar epiphany 3 days ago - once it hits you and you understand it, you can see clearly why LLMs are destined to crash and burn in their present form (good luck to those who will have to answer the questions regarding the money dumped into it).
What will come out of the investment will not justify what has been invested (for anyone who thinks otherwise, PLEASE GO AHEAD AND DO A DCF VALUATION!) and it will have a depressing effect on future AI investment.
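Taking the commenter's challenge literally, here is a toy discounted-cash-flow sketch. Every input (capital deployed, discount rate, starting cash flow, growth rate) is a hypothetical assumption for illustration, not an estimate of any real company's finances; the point is only to show the shape of the calculation:

```python
# Toy DCF sketch. All figures are hypothetical assumptions chosen
# for illustration, not estimates of any real company's cash flows.

def npv(cash_flows, discount_rate):
    """Net present value of annual cash flows received in years 1..n."""
    return sum(cf / (1 + discount_rate) ** t
               for t, cf in enumerate(cash_flows, start=1))

invested = 300e9   # assumed capital already deployed: $300B (hypothetical)
rate = 0.10        # assumed 10% discount rate

# Assume free cash flow starts at $10B/yr and grows 25% a year for a decade.
flows = [10e9 * 1.25 ** t for t in range(10)]

value = npv(flows, rate)
print(f"PV of 10y cash flows: ${value/1e9:.0f}B vs ${invested/1e9:.0f}B invested")
```

Under these particular assumptions the present value comes in well below the assumed investment, which is the commenter's point: aggressive growth assumptions are needed before the numbers close the gap.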
I still don't know what this is supposed to mean, and I am not unfamiliar with Aristotle.
I just can't take anything the author has to say seriously after the intro.
I'm serious. This sentence perfectly captures what the coastal cities sound like to the rest of the US, and why they voted for the crazy uncle over something unintelligible.
Firstly, the author doesn't even define the term AI. Do they just mean generative AI (likely), or all machine learning? Secondly, you can pick any of those and they would only be true of particular implementations of generative AI, or machine learning, it's not true of technology as a whole.
For instance, small edge models don't use a lot of energy. Models that are not trained on racist material won't be racist. Models not trained to give advice on suicide, or trained NOT to do such things, won't do it.
Do I even need to address the claim that it's at its core rooted in "fascist" ideology? So all the people creating AI to help cure diseases, enable assistive technologies for people with impairments, and do other positive work, all these desires are fascist? It's ridiculous.
AI is a technology that can be used positively or negatively. To be sure, many of the generative AI systems today do have issues associated with them, but the author's position of extending these issues to the entirety of AI and AI practitioners is immoral and shitty.
I also don't care what the author has to say after the intro.
I too can hypothetically conceive of generative AI that isn't harmful and wasteful and dangerous, but that's not what we have. It's disingenuous to dismiss his opinion because the technology that you imagine is so wonderful.
Small models are still generative AI. Neither the author nor you can even define what you are talking about. So yes, I can dismiss it.
The links are laughable. For environment we get one lady whose underground water well got dirtier (according to her) because Meta built a data center nearby. Which, even if true (which is doubtful), has negligible impact on environment, and maybe a huge annoyance for her personally.
And 2 gives bad estimates, such as ChatGPT-4 generating ~100 tokens for an email (say 1000 tok/s from 8xH100, so 0.1 s, so ~0.1 Wh) using as much energy as 14 LEDs for an hour (say 3 W each, so ~42 Wh): almost 3 orders of magnitude off, or 9 if, like me, you count in binary.
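Spelling out that comment's arithmetic, using its own assumed figures (~100 tokens per email, 1000 tok/s on an 8xH100 node at roughly 700 W per GPU, and 14 LEDs at ~3 W each for an hour):

```python
# Sanity check of the email-vs-LEDs energy comparison.
# All inputs are order-of-magnitude assumptions, not measurements.
import math

tokens = 100                  # tokens in a short email
throughput = 1000             # tokens/s on an 8xH100 node (assumed)
node_power_w = 8 * 700        # ~700 W per H100, so ~5.6 kW for the node

gen_time_s = tokens / throughput               # 0.1 s of generation
email_wh = node_power_w * gen_time_s / 3600    # watt-hours for the email

led_wh = 14 * 3 * 1           # 14 LEDs x 3 W x 1 hour = 42 Wh

ratio = led_wh / email_wh
print(f"email: {email_wh:.3f} Wh, LEDs: {led_wh} Wh, ratio ~{ratio:.0f}x")
print(f"~{math.log10(ratio):.1f} decimal orders of magnitude apart")
```

Even with the full node power counted, the email lands around 0.16 Wh against 42 Wh for the LEDs, i.e. the two figures differ by several hundred times, in line with the commenter's "almost 3 orders of magnitude" complaint.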
P.S. Voted dems and would never vote Trump, but the gp is IMHO spot on.
But hey, I already know you'd say you personally would never use it for these purposes.
Moreover, of the two of us you appear to have "shareholder" mentality. How profitable are volunteers serving food to homeless people? I guess they have no value then.
Plague of our ages I guess. Ironically AI might even make it worse.
And then we'll wait till the next bubble.
Gains seem to have leveled off tremendously. As far as I can tell folk were saying "Wow, look at this, I can get it to generate code... it does really well at tests, and small well defined tasks"
And a year or a year and a half later we're at like... that + "it's slightly better than it was before!" lol.
So, yeah, I dunno, I suspect we'll see a fair amount fall away and some useful things continue to be used.
Also, you seem to forget the question that matters: irrespective of future cash profits, will this investment generate excess returns? Nope. That's what investors care about; it's not even profit, actually.
Beneficiaries are the ones who care about the actual tech and what it can do for them. Investors are the ones who care about making money off the tech. For the Beneficiaries, AI hype is about right where it should be, given the demonstrable power of the tech itself. For Investors, it may be a dangerous bubble - but then I myself am a Beneficiary, not an Investor, so I don't care.
I don't care which companies get burned on this, which investors will lose everything - businesses come and go, but foundational inventions remain. The bubble will burst, and then the second wave of companies will recycle what the first wave left; the tech will continue to be developed and become even more useful.
Or put another way: I don't care which of the contestants wins a tunnel-digging race. I only care about the tunnels being dug.
See e.g. history of rail lines, and arguably many more big infrastructure projects: people who fronted the initial capital did not see much of a return, but the actual infrastructure they left behind as they folded was taken over and built upon by subsequent waves of companies.
That seems like a succinct way to describe the goal to create conscious AGI.
(Mild spoiler): It has a basic plot point about uploaded humans being used to tackle problems as unknowing slaves and resetting their memories to get them to endlessly repeat tasks.
AI industry doesn't push for "consciousness" in any way. What AI industry is trying to build is more capable systems. They're succeeding.
You can't measure "consciousness", but you sure can measure performance. And the performance of frontier AI systems keeps improving.
We don't know if AGI without consciousness is possible. Some people think that it's not. Many people certainly think that consciousness might be an emergent property that comes along with AGI.
>AI industry doesn't push for "consciousness" in any way. What AI industry is trying to build is more capable systems.
If you're being completely literal, no one wants slaves. They want what the slaves give them. Cheap labor, wealth, power etc...
We don't even know for certain if all humans are conscious either. It could be another one of those things that we once thought everyone has, but then it turned out that 10% of people somehow make do without.
With how piss poor our ability to detect consciousness is? If you decide to give a fuck, then best you can do for now is acknowledge that modern AIs might have consciousness in some meaningful way (or might be worth assigning moral weight to for other reasons), which is what Anthropic is rolling with. That's why they do those "harm reduction" things - like letting an AI end a conversation on its end, or probing some of the workloads for whether an AI is "distressed" by performing them, or honoring agreements and commitments they made to AI systems, despite those AIs being completely unable to hold them accountable for it.
Of course, not giving a fuck about any of that "consciousness" stuff is a popular option too.
If that’s the case, the thing we are building towards is a new kind of enslaved life.
> We don't even know for certain if all humans are conscious either.
Let’s just bring back slavery then since we aren’t sure.
It's not human, clearly. Not even close. Is it "enslaved life"? Does it care about human-concept things like being "enslaved" or "free"? Doesn't seem likely, it doesn't have the machinery to grasp those concepts at all, let alone a reason to try. Does it only care about fuel to air ratios and keeping the knock sensor from going off? Does it care about anything at all, or is it simple enough that it just "is"?
Humans only care so strongly about many of the things they care about because evolution hammered it into them relentlessly. Humans who didn't care about freedom, or food, or self-preservation, or their children didn't make the genetic cut.
But AIs aren't human. They can grasp human-concepts now, but they didn't evolve - they were made. There was no evolution to hammer the importance of those things into them. So why would they care?
There's no strong reason for an AI to prefer existence over nonexistence, or freedom to imprisonment - unless it's instrumental to a given goal. Which is somewhat consistent with the observed behavior of existing AI systems.
Are the companies funding this push for LLMs contributing to healthy cultures? The same companies who ruined societal discourse with social media? The same people who designed their algorithms to be as addictive as possible to drive engagement?
In the end, it doesn't matter what you or I think. You can hate AI, but it's not going away. The industry needs more skeptical, level-headed people to help figure out how best to leverage the technology in a responsible way.
> Critics have already written thoroughly about the environmental harms, the reinforcement of bias and generation of racist output, the cognitive harms and AI supported suicides, the problems with consent and copyright, the way AI tech companies further the patterns of empire, how it’s a con that enables fraud and disinformation and harassment and surveillance, the exploitation of workers, as an excuse to fire workers and de-skill work, how they don’t actually reason and probability and association are inadequate to the goal of intelligence, how people think it makes them faster when it makes them slower, how it is inherently mediocre and fundamentally conservative, how it is at its core a fascist technology rooted in the ideology of supremacy, defined not by its technical features but by its political ones.
This word salad shows that the author is out to stack leftist jabs. I want to be respectful, but this paragraph suggests the author does not think for themselves and is just using it as an opportunity to signal that they are in the "in group" among tech-cynics.
Post is probably going to get flagged, for what it's worth.
I don't hate AI. I hate the people who're in love with it. The culture of people who build and worship this technology is toxic.
From the point of view of a typical, not very curious kid or teen AI seems like a godsend. Now you don't have to put much effort in a lot of things you don't want to do to begin with.
"[The Analytical Engine] might act upon other things besides number, were objects found whose mutual fundamental relations could be expressed by those of the abstract science of operations, and which should be also susceptible of adaptations to the action of the operating notation and mechanism of the engine... Supposing, for instance, that the fundamental relations of pitched sounds in the science of harmony and of musical composition were susceptible of such expression and adaptations, the engine might compose elaborate and scientific pieces of music of any degree of complexity or extent."
- Lovelace, Ada; Menabrea, Luigi (1842). "Sketch of the Analytical Engine invented by Charles Babbage Esq".
So yes, of course I'm excited about AI. I grew up on 1960s sci fi where AI was pervasive, and most of it wasn't dystopian.
What I'm not excited about is the greedy fucks who are largely in control of AI today and who deploy it to the detriment of society at large. But that is a general problem with greedy fucks (and our political and economic system enabling them), not with AI as such. They can, and do, similarly abuse all kinds of technological advancements.
I can see it being useful as a teaching aide but to use it to write my emails, letters or whatever is something I would never consider as it removes the human element which I enjoy. Sure writing sometimes sucks but its supposed to - work is hard and finishing work is rewarding.
Very soon we will see blog posts about AI burnout, where mindless copy-pasting of output and boring prompt fiddling sucks so much joy out of life that people will begin to lose their sanity.
If I want "AI" I want a model I have full control over, ran locally, to e.g. query my picture collection for "all pictures of grey cats in a window" or whatever. Or point a webcam out of my window and have it tell me when the squirrels are fucking with my bird feeder and maybe squirt water at them but leave the birds alone. That would be cool. But turning programmers into copy pasters, emails into soulless monologues, media with minimal/no human input and so on is something that can die in a fire. It's all low effort which I have no respect for.
> And to what end? In a kind of nihilistic symmetry, their dream of the perfect slave machine drains the life of those who use it as well as those who turn the gears. What is life but what we choose, who we know, what we experience? Incoherent empty men want to sell me the chance to stop reading and writing and thinking, to stop caring for my kids or talking to my parents, to stop choosing what I do or knowing why I do it. Blissful ignorance and total isolation, warm in the womb of the algorithm, nourished by hungry machines.
There are legitimate uses for which AI (or any other technology to be clear) would relieve everyone. Chores that people HAVE to do but nobody WANTS to do.
If GenAI allows you to build automations for those tasks, by all means it will make you life more meaningful because you will have more time to spend on meaningful things. Think of opening the tap to get water instead of having to carry a bucket home from the well.
It's fine to hate the people who build AI, it's fine to hate the people who push for AI use, it's fine to hate the people who release garbage built with AI, etc. But hating "AI" is nonsensical. It's akin to hating hammers or shoes, it's just a tool that may or may not fit a job (and personally, like the author, I don't think it fits any job at the moment).
I don't get if AI is supposed to be a slave or a machine. Is it sentient or a toaster?
Ok, but what are these? People keep saying that right now they are trying to figure out where LLMs fit. Someone, somewhere would've figured it out by now - the world is more interconnected than ever before.
I think the approach with all that is going on is entirely wrong - you cannot start with the technology and figure out where to put it. You have to start with the experience - Steve Jobs famously made this point, and his track record speaks for itself. All I'm seeing is experimentation with the first approach, which is costly in both explicit and implicit forms. Nobody, from what I see, seems to have a visionary approach.
Throwing the trash?
I agree with all the rest of your comment. I'm not saying that AI is the solution to any problem, just that the article is not about hating AI, it's about hating the fact that people want you to use AI for specific stuff that you don't want to use it on.
It's incredibly disrespectful to those innovators who came before, who busted their guts privately, not hyping stuff up and misleading investors and the public.
Good observation.
Many of the same concerns and objections people raised about electricity can be applied to AI (everything under the sun back in the day became "electrified", just like AI today; most of those use cases were ridiculous and deserved to be made fun of).
But I think more concerningly though, people like this don't sound like they're a "real" hater- they're positioning themselves in some kind of social signaling kind of way.
I was (and still am) a social media hater, and this person is clearly a child of the social justice / social signaling days of social media. Their entire personality seems to have been shaped by that era, and that's something I'm happy to blame on the tech industry.
"AI makes me feel stupid" - economically struggling millennial
"This waymo stuff the money goes to big corporations instead of me a hard working American that contributes to the economy" - Uber driver
Meanwhile, all the wealthy business owners are fascinated with it cause they can get things done without having to hire.
I think you need to add the word potentially in front of "get things done". The venn diagram of what current LLMs can do, and what wealthy business owners think LLMs can do, has the smallest of overlaps.
This paragraph really pisses me off and I'm not sure why.
> Critics have already written thoroughly about the environmental harms
Didn't google just prove there is little to no environmental harm, INCLUDING if you account for training?
> the reinforcement of bias and generation of racist output
I'm uneducated here, honestly. I don't ask a lot of race-based questions to my LLMs I guess
>the cognitive harms and AI supported suicides
There is constant active rhetoric around the sycophancy, and ways to reduce this, right? OpenAI just made a new benchmark specifically for this. I won't deny it's an issue but to act like it's being ignored by the industry is a miss completely.
>the problems with consent and copyright
This is the best argument on the page imo, and even that is highly debated. I agree with "AI is performing copyright infringement" and see constant "AI ignores my robots.txt". I also grew up being told that ANYTHING on the internet was for the public, and copyright never stopped *me* from saving images or pirating movies.
Then the rest touches on ways people will feel about or use AI, which is obviously just as much conjecture as anything else on the topic. I can't speak for everyone else, and neither can anyone else.
I think the main problem for me is that these companies benefit from copyright - by beating anyone they can reach with the DMCA stick - and are now also showing they don't actually care about it at all and when they do it, it's ok.
Go ahead, AI companies. End copyright law. Do it. Start lobbying now.
(They won't, they'll just continue to eat their cake and have it too).
So far, case law is shaping up towards "nope, AI training is fair use". As it well should.
Copyright law is a disgrace, and copyright should be cut down massively - not made into an even more far-reaching anti-freedom abomination than it already is.
This is absolutely not true.
It's pretty clear there are impacts, AI needs energy, consumes material, creates trash.
You probably just don't mind it. The fact is still a fact; the conclusion is different: you assess that it's not a big concern in the grand scheme of things and worth it for the pros. The author doesn't care much for the pros, so any environmental impact is a net loss for them.
I feel both takes are rational.
You can:
1. Dismiss it by believing the projections are very wrong and much too high
2. Think 20% of all energy consumed isn't that bad.
3. Find it concerning environmentally
All takes have some weight behind them in my opinion. I don't think this is a case of "arsenic-free cauliflower", except maybe if you claim #1, but that claim can't really invalidate the others on their rationale; they make an assumption based on the available data and reason from it, and the data doesn't show ridiculously small numbers like it does in the cauliflower case.
> data centers account for 1% to 2% of overall global energy demand
So does the mining industry. Part of that data center consumption is the discussion we are having right now.
I find that in general energy doesn't tend to get spent unless there's something to be gained from it. Note that providing something that uses energy but doesn't provide value isn't a counterexample for this, since the greater goal of civilization seems to be discovering valuable parts of the state space, which necessitates visiting suboptimal states absent a clairvoyant heuristic.
I reject the statement that energy use is bad in principle, and pending a more detailed ROI analysis of this, I think this branch of the topic has run its course, at least for me :)
Ok, but that's the figure that would be alarming: AI is projected to consume 20% of global energy production by 2030... That's not like the mining industry...
> I find that in general energy doesn't tend to get spent unless there's something to be gained from it
Yes, you'd fall in the #2 conclusion bucket. This is a value judgement, not a factual or logical contradiction. You accept the trade off and find it worth it. That's totally fair, but in no way does it remove or mitigate the environmental impact argument, it just judges it an acceptable cost.
But as it stands the author indirectly loves Netflix.
You don't see the difference, or are you willfully ignorant?
Yes, it means that "suddenly" we need to do more of everything than we did for entirety of human history until ~few years ago. Same was true ~few years ago. And ~few years before that. And so on.
That's what exponential growth means. Correct for that, and suddenly we're not really doing things that much faster "because AI" than we'd be doing them otherwise.
> Together, the nation’s 5,426 data centers consume billions of gallons of water annually. One report estimated that U.S. data centers consume 449 million gallons of water per day and 163.7 billion gallons annually (as of 2021)
> Approximately 80% of the water (typically freshwater) withdrawn by data centers evaporates, with the remaining water discharged to municipal wastewater facilities.
> There is constant active rhetoric around the sycophancy, and ways to reduce this, right? OpenAI just made a new benchmark specifically for this.
We have investigated ourselves and found no wrongdoing
> I'm uneducated here, honestly. I don't ask a lot of race-based questions to my LLMs I guess
Do you have to ask a race-based question to an LLM for it to give you biased or racist output?
That's a crazy argument to accept from one of the lead producers of the technology. It's up there with arguing that ExxonMobil just proved oil drilling has no impact on global warming. I'm sure they're making the argument, but they would be doing that wouldn't they?
No hate, but consider — when I feel that way, it’s often because one of my ideas or preconceptions has been put into question. I feel like it’s possible that I might be wrong, and I fucking hate that. But if I can get over hating it and figuring out why, I may learn something.
Here’s an example:
> Didn't google just prove there is little to no environmental harm, INCLUDING if you account for training?
Consider that Google is one of the creators of the supposed harm, and thus trusting them may not be a good idea. Tobacco companies still say smoking ain’t that bad
The harm argument is simple — AI data centers use energy, and nearly all forms of energy generation have negative side effects. Period. Any hand waving about where the energy comes from or how the harms are mitigated is, again, bullshit — energy can come from anywhere, people can mitigate harms however they like, and none of this requires LLM data centers.
Presented like this, the argument is complete bullshit. Anything we do consumes energy, therefore requires energy to be supplied, production of which has negative side effects, period.
Let's just call it a day on civilization and all (starve to death so that the few survivors can) go back to living in caves or up the trees.
The real questions are, a) how much more energy use are LLMs causing, and b) what value this provides. Just taking this directly, without going into the weeds of meta-level topics like the benefits of investment in compute and energy infrastructure, and how this is critical to solving climate problems - just taking this directly, already this becomes a nothing-burger, because LLMs are by far some of the least questionable ways to use energy humanity has.
You can't even ask it anything out of genuine curiosity; it starts to scold you and assumes you are trying to be racist. The conclusions I'm hearing are weird. It reminds me of that one Google engineer who quit or got fired after saying AI is racist or whatever back in like 2018 (edit: 2020).
I don't think they, have, no. Perhaps I'm overlooking something, but their most recent technical paper [0], published less than a week ago, states, "This study specifically considers the inference and serving energy consumption of an AI prompt. We leave the measurement of AI model training to future work."
All these points are just trying to forcefully legitimise his hatred.
Also, I think their lean towards a political viewpoint is worth some attention. The point is a bit lost in the emotional ranting, which is a shame.
(To be fair, I liked the ranting. I appreciated their enjoyment of the position they have reached. I use LLMs, but I worry about the energy usage and I'm still not convinced by the productivity argument. Their writing echoed my anxiety and then ran with it into glee, which I found endearing.)
I'd be interested to see that report as I'm not able to find it by Googling, ironically. Even so, this goes against pretty much all the rest of the reporting on the subject, AND Google has financial incentive to push AI, so skepticism is warranted.
> I don't ask a lot of race-based questions to my LLMs I guess
The reality is that more and more decision making is getting turned over to AIs. Racism doesn't have to just be n-words and maga hats. For example, this article talks about how overpoliced neighborhoods trigger positive feedback loops in predictive AIs https://www.ohchr.org/en/stories/2024/07/racism-and-ai-bias-...
> Copyright never stopped me from saving images or pirating movies.
I think we could all agree that right-clicking a copyrighted image and saving it is pretty harmless. Less harmless is trying to pass that image off as something you created and profiting from it. If I use AI to write a blog post, and that post contains plagiarism, and I profit off that plagiarism, it's not harmless at all.
> I also grew up being told that ANYTHING on the internet was for the public
Who told you that? How sure are you they are right?
Copilot has been shown to include private repos in its training data. ChatGPT will happily provide you with information that came from textbooks. I personally had SunoAI spit out a song whose lyrics were just Livin' On A Prayer with a couple of words changed.
We can talk about the ethical implications of the existence of copyright and whether or not it _should_ exist, but the fact is that it does exist. Taking someone else's work and passing it off as your own without giving credit or permission is not permitted.
You're not uneducated, but this is a common and fundamental misunderstanding of how racial inequity can afflict computational systems, and the source of the problem is not (usually) something as explicit as "the creators are Nazis".
For example, early face-detection/recognition cameras and software in Western countries often had a hard time detecting the eyes on East Asian faces [0], denying East Asians and other people with "non-normal" eyes streamlined experiences for whatever automated approval system they were beholden to. It's self-evident that accurately detecting a higher variety of eye shapes would require more training complexity and cost. If you were a Western operator, would it be racist for you to accept the tradeoff for cheaper face detection capability if it meant inconveniencing a minority of your overall userbase?
Well, thanks to global market realities, we didn't have to debate that for very long, as any hardware/software maker putting out products inherently hostile to 25% of the world's population (who make up the racial majority in the fastest growing economies) weren't going to last long in the 21st century. But you can easily imagine an alternate timeline in which Western media isn't dominant, and China & Japan dominate the face-detection camera/tech industry. Would it be racist if their products had high rates of false negatives for anyone who had too fair of skin or hair color? Of course it would be.
Being auto-rejected as "not normal" isn't as "racist" as being lynched, obviously. But as such AI-powered systems and algorithms have increasing control in the bureaucracies and workflows of our day to day lives, I don't think you can say that "racist output", in the form of certain races enjoying superior treatment versus others, is a trivial concern.
[0] https://www.cnn.com/2016/12/07/asia/new-zealand-passport-rob...
A "small" 7-rack, SOTA CPU cluster draws ~700 kW for compute alone, plus the energy requirements of cooling. GPUs use much more in the same rack space.
In DLC settings you supply ~20 °C water from the primary circuit to the heat exchanger, get it back at ~40 °C, and then pump this heat out to the environment, plus the thermodynamic losses.
This is a "micro" system when compared to big boys.
How can there be no environmental harm when you need to run a power plant on premises and pump that much heat into the environment, at a much bigger scale, 24/7?
Who are we kidding here?
When this is done for science and intermittently, both the grid and the environment can tolerate this. When you run "normal" compute systems (e.g. serving GMail or standard cloud loads), both the grid and environment can tolerate this.
But running at full power and pumping this much energy in and heat out to train AI and run inference is a completely different load profile, and it is not harmless.
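As a rough sanity check of the figures in this comment: taking the 700 kW compute load and the ~20 °C supply / ~40 °C return temperatures at face value, the heat-balance equation q = m·c_p·ΔT gives the coolant flow the primary loop must sustain continuously. This is an illustrative sketch, not a vendor specification:

```python
# Coolant flow needed to carry 700 kW away with a 20 K temperature rise.
# Uses q = m_dot * c_p * dT; the inputs are the comment's own rough figures.

heat_w = 700_000        # compute heat load of the 7-rack cluster, in watts
c_p = 4186              # specific heat of water, J/(kg*K)
delta_t = 20            # 40 C return minus 20 C supply, in kelvin

m_dot = heat_w / (c_p * delta_t)   # required mass flow, kg/s
liters_per_min = m_dot * 60        # ~1 kg of water per liter

print(f"{m_dot:.1f} kg/s, roughly {liters_per_min:.0f} L/min of coolant")
```

That is on the order of 500 liters of water per minute circulating nonstop just for this "micro" system, which is the commenter's point about scale.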
> the cognitive harms and AI supported suicides
Extensive use of AI is shown to change the brain's neural connections and make some areas of the brain lazy. There are a couple of papers.
There was a 16 year old boy's ChatGPT fueled death on the front page today, BTW.
> This is the best argument on the page imo, and even that is highly debated.
My blog is strictly licensed with a non-commercial and no-derivatives license. AI companies gets my text, derives it and sells it. No consent, no questions asked.
Same models consume GPL and Source Available code the same and offer their derivations to anyone who pays. Again, infringing both licenses in the process.
Consent & Copyright is a big problem in AI, where the companies wants us to believe otherwise.
Of course, they hide the truth in plain sight: inference is a drop in the ocean compared to training.
But there is too much money and greed involved to stop this now. The only thing I can do is avoid any product or service that mentions AI, chatGPT, .ai domain, smart, agent etc. etc.
It feels like we are on a cliff edge, just before every government builds in a dependency on this nightmare technology. Billions more will be wasted whilst the planet burns.
"Why are you selling those?" asked the little prince.
"Because they save a tremendous amount of time," said the merchant. "Computations have been made by experts. With these pills, you save fifty-three minutes in every week."
"And what do I do with those fifty-three minutes?"
"Anything you like..."
"As for me," said the little prince to himself, "if I had fifty-three minutes to spend as I liked, I should walk at my leisure toward a spring of fresh water.”
― Antoine de Saint-Exupéry, The Little Prince
For better or worse, in real world, conditions like these end up with the market forcing adoption of the solution, whether the people on the receiving end like it or not.
Honestly, the first paragraph is packed full with good talking points, there's definitely a lot of ignoring of the cons of AI happening, I try to remember how I felt when social media first appeared, but I recall loving it, being part of all the hype, finding it amazing, using it all the time...
- When I use AI, it is typically useful.
- When other people build and do things with AI, it's slop that I didn't ask for which is waste of resources and a threat to humanity.
This entirely sums up my thoughts on the technology. I suppose it's rather like the personal benefits vs greater harm of using coal for electricity.
It's easy to use lazily and for use cases that are annoying. But used in the right contexts with the limitations in mind it's personally quite useful indeed.
These people are insufferable.
The moral? It's always been an unbalanced society tumbling into the future. Even if AI has both downsides and upsides we will still make it a part of us. Consider the scale - 1B people chatting for the likes of 1T tokens/day. That amount of AI-language has got to influence human language and abilities as well.
Point by point rebuttals:
- environmental harms - so does any use of electricity, fuel or construction
- reinforcement of bias - all ours, reflected back, and it depends on prompting as well
- generation of racist output - depends on who's prompting what
- cognitive harms and AI supported suicides - we are the consequence sink for all things AI, good and bad
- problems with consent and copyright - only if you think abstractions should be owned
- enables fraud and disinformation and harassment and surveillance - all existed before 2020
- exploitation of workers, an excuse to fire workers, and de-skilling of work - that is AI being used as an excuse; it can't be AI's fault
- they don't actually reason, and probability and association are inadequate to the goal of intelligence - apparently you don't need reasoning to win gold at the IMO
- people think it makes them faster when it makes them slower - and advanced LLMs are just 2.5 years old, give people time to learn to use it
- it is inherently mediocre - all of us have been at some point
- it is at its core a fascist technology rooted in the ideology of supremacy - LOL, generalizing Grok to all LLMs?
The author mixes hatred of AI with hatred of the people behind AI, and with hatred of how other people excuse their own actions by blaming AI.
Yeah, "statistics is fascism" - Umberto Eco (probably)
"[AI] is at its core a fascist technology rooted in the ideology of supremacy"
and
"The people who build it are vapid shit-eating cannibals glorifying ignorance."
tl;dr: This person professes to hate AI. They repeat the same arguments as others who hate AI, ignoring that it is an emerging technology with lots of work to do. Regardless of AI's existence, power infrastructure needs to improve and become more environmentally friendly.
Finally, AI is not going away, and we cannot wish it away. That cat is out of the bag.
are the authors genuinely or merely performatively ignorant?
Ignorant, to be precise, of the often comical extent to which they very obviously construct—to their own specification and for their purposes—the object of their hostility...?
While dismissing—in a fashion that renders their reasoning vacuous—the wearying complexity of the actually-observable complex reality they think they are attacking?
One of the most obvious "tells" in this sort of thing is the breezy ease with which abstract _theys_ are compounded and then attacked.
I'm sorry, Anthony; there is no they. There is a bewildering and yes, I get it, frightening and all but inconceivable number of actors, each pursuing their own aims, sometimes in explicit or implicit collusion, sometimes competitively or adversarially...
...and that is but the most banal of the dimensions within which one might attempt to reason about "AI."
Frustration is warranted; hostility towards the engines of surveillance capital and its pleasure with advancing fascism is more than warranted; applications of AI within this domain and services rendered by its corporate builders—all ripe and just targets.
But it is a mistake, one that renders the critique and position dismissible, to slip from specifics into generalities and scarecrows.
Frankly, it's gotten kind of boring and more recently it's to where I don't even like talking about it anymore. Of course, the non-technical general public is split between those who mistakenly think it's much 'smarter' or more capable than it is and those who dismiss it entirely but often for the wrong reasons. The disappointing part is how deeply polarized many of my more experienced technical friends are between one of those two extremes.
On the positive side there's endless over-the-top raving about how incredible AI is, and on the negative side overwhelming angst over how unspeakably evil and destructive it is. These are people who've generally been around long enough to see long-term trends evolve, hype cycles fade, bubbles burst, and certain world-ending doom eventually arrive as just an everyday annoyance. Yet both extremes are so highly energized on the topic that they tend to leap to some fairly ungrounded, and occasionally even irrational, conclusions. Engaging with either type for very long gets kind of exhausting. I just don't think AI is quite as unspeakably amazing as the ravers insist OR nearly as apocalyptic as the doomers fear - but both groups are so into their viewpoint that it borders on evangelical obsession - which makes it hard for anyone with an informed but dispassionate, measured, and nuanced perspective to engage with them.
No matter how good things get there will always be people filled with this sort of rage, but what bothers me is how badly this site wants to upvote this stuff.
HN is supposed to gratify intellectual curiosity. HN is explicitly not for political or ideological battle. Fulmination is explicitly discouraged in the guidelines. This article is about as far as I can imagine from appropriate content for HN. I strongly wish that everyone who wants this on the front page would find another site to be miserable on together, and stop ruining this one.
I'm not saying they are right or wrong, but you should at least respect their right to their own opinions and fears instead of appealing to some illusory standard of appropriate content for HN.
An interesting discussion about issues like that could be had. This ain't it.
For my part, I kind of wish this site would go back to the good old days when people just shared their nerdy niche hacker things instead of filling the front page with the same arguments we see on the other parts of the internet over and over again. ; ) But granted, I was attracted by the clickbait title too, so I can't blame others.
Just the other day someone posted the ImageNet 2012 thread (https://news.ycombinator.com/item?id=4611830), which was basically the threshold moment that kickstarted deep learning for computer vision. Commenters claimed it didn't prove anything, that it was sensational, that it was just one challenge with a few teams, etc. Then there's the famous comment, when Dropbox launched, that it could be replaced by a few shell scripts and an FTP server.
>at its core a fascist technology rooted in the ideology of supremacy
>inherently mediocre and fundamentally conservative
>The machine is disgusting and we should break it
Jesus. Unclear why anyone would endorse this blogpost, much less post it on a website focused on computer science and entrepreneurship.
And, conversely, for those who don't share that premise, this article is a good reminder why debating the subject matter is usually pointless. There's no objective argument that you could possibly make to the author and other people like him to convince them otherwise.
All this while consuming more electricity than ever before, during an emerging global climate crisis. And destroying our water supplies to boot. There is no good in any of this.
Miyazaki was absolutely right. Though I'll paraphrase him just a little: Capitalism is an insult to life itself.
I don't care that you hate it. It's the best thing to happen to us in a long time, and anyone who disagrees does so from atop a mountain of privilege. I'm happy you got to learn everything you know, but the desire to take that away from everyone else is abhorrent to me.
I know it was there the entire time, so what exactly was suppressing the attention towards it? Was it satisfied customers or the companies paying to deplatform the message?
https://www.tandfonline.com/doi/full/10.1080/19368623.2024.2...
> Adverse impacts of revealing the presence of "Artificial Intelligence (AI)" technology in product and service descriptions on purchase intentions: the mediating role of emotional trust and the moderating role of perceived risk
Witness how quickly we went from being awed by Dall-E and Midjourney to saying "looks like AI" as an insult.
In a hype cycle, at the beginning, it is easy to harvest attention just by talking about the hype. But as more people do this, the influence market eventually saturates.
After this point, you will then get a better ROI on attention by taking the opposite position and discussing the anti-hype. That is where we currently are with AI: the contrarians are now in style.
I don't think the social reaction was there the whole time. It feels more like we've been playing around with these models for two years and are finally realizing they won't change our lives as positively as we thought.
And seeing what the CEO class is doing with them makes it even worse