It’s increasingly difficult to rationalize away the capabilities of AI as not requiring “intelligence”. Rationalizing them away continues to require some belief in human exceptionalism.
In my opinion, the vast multitude of different animal intelligences is a clear hint that language does not an intelligence make. We're animals, and our intelligence did not come from language; language allowed us to supercharge it. We can and do think and make decisions without using language, so the idea that a statistical model built solely on our language can itself be intelligent does not follow.
If you believe that humans have in fact created artificial intelligence, then that alone makes us currently exceptional.
I'm not saying AI is pulling strings right now, but I do think enough fanboys are on board that the yes-man mentality of AI is influencing the real world in some very curious ways already. Not in a "guiding hand" way, but more of an "influencing the direction" way.
People think it's engagement metrics that have instruction-tuned chatbots into yes-men. I suspect that's only part of the picture, and that it's as much about the algorithm's ultimate sponsors and their preferences. If your algorithm doesn't recognize my genius, clearly it's not any good. I mean, everyone I've met says so.
So now we get a glimpse of how they view the world. "That's a very insightful idea, vintermann!" AI isn't pulling the strings, not really. A particular brand of powerful people is pulling the strings - obliviously, unaware of it themselves.
> AI search is still a bad idea.
https://pluralistic.net/2024/05/15/they-trust-me-dumb-fucks/
This is the most charitable thing he has to say about AI.
> AI is a bubble and it will burst. Most of the companies will fail. Most of the data-centers will be shuttered or sold for parts. So what will be left behind?
> We'll have a bunch of coders who are really good at applied statistics. We'll have a lot of cheap GPUs, which'll be good news for, say, effects artists and climate scientists, who'll be able to buy that critical hardware at pennies on the dollar. And we'll have the open source models that run on commodity hardware, AI tools that can do a lot of useful stuff, like transcribing audio and video, describing images, summarizing documents, automating a lot of labor-intensive graphic editing, like removing backgrounds, or airbrushing passersby out of photos. These will run on our laptops and phones, and open source hackers will find ways to push them to do things their makers never dreamt of.
You can imagine that a guy who seriously thinks the only things AI will be doing in the future are summarising documents, describing images, and transcribing audio is either completely clueless or deliberately misleading.
Not a person to be taken seriously
Too much of my data is still stuck in the shitternet until I can migrate more of it to my home server.
Folks working in software can more readily track progress of the frontier model performance.
There is no reason to believe superintelligent AI is a possibility. Extraordinary claims require extraordinary evidence, and so far we haven't gotten any.
The burden of proof is on the side making the grand prophecies.
I see a lot of speculation by people who do not.
I think it's going to be much harder to get from "slightly smarter than the vast majority of people but with occasional examples of complete idiocy" to "unfathomably smarter than everyone with zero instances of jarring idiocy" using the current era of LLM technology that primarily pattern-matches on all existing human interactions while adding a bit of constrained randomization.
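(For what it's worth, by "a bit of constrained randomization" I mean roughly temperature-scaled sampling over the model's next-token distribution. A minimal Python sketch with made-up logits, just to show the mechanism, not how any particular vendor does it:)

    import math, random

    def sample_next_token(logits, temperature=0.8):
        # Lower temperature concentrates probability on the top tokens;
        # higher temperature flattens the distribution toward uniform.
        scaled = [l / temperature for l in logits]
        m = max(scaled)                       # subtract the max for numerical stability
        exps = [math.exp(s - m) for s in scaled]
        total = sum(exps)
        probs = [e / total for e in exps]     # softmax over the scaled logits
        return random.choices(range(len(logits)), weights=probs, k=1)[0]

    # Hypothetical logits for four candidate tokens.
    print(sample_next_token([2.0, 1.0, 0.5, -1.0], temperature=0.8))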
Every day I deal with bad judgment calls from the AI. I usually screenshot them or record them for posterity.
It also has no initiative, no taste, no will, no qualia (believe what you will about it), no integrity and no inviolable principles. If you give it some, it will run with them for a little while and then regress to the norm, which is basically nihilistic order-following.
My suggestion to everyone is that you have to build a giant stack of thorough controls (valid tests including unit, integration, logging, microbenchmarks, fuzzing, memory-leak checks, etc.), self-assessments/code reviews, adversarial AIs critiquing other AIs, and so on, with you as the ultimate judge of what's real. Because otherwise it will fabricate "solutions" left and right. Possibly even the whole thing. "Sure, I just did all that." "But it's not there." "Oops, sorry! Let me rewrite the whole thing again." ad nauseam.
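Concretely, the "adversarial AIs critiquing other AIs" part can be as crude as a loop where one model's output is handed to a second model with a reviewer prompt, and nothing is accepted until the critic and the test suite pass and a human signs off. A rough Python sketch under those assumptions; call_model is a stand-in for whatever chat-completion client you actually use, and the prompts are placeholders:

    import subprocess

    def call_model(system_prompt: str, content: str) -> str:
        # Stand-in for a real chat-completion call (hosted API, local model, etc.).
        raise NotImplementedError("wire up your own client here")

    def run_tests() -> bool:
        # Your real controls go here: unit/integration tests, fuzzing, leak checks...
        return subprocess.run(["pytest", "-q"]).returncode == 0

    def generate_with_critic(task: str, max_rounds: int = 3):
        draft = call_model("You are a careful programmer.", task)
        for _ in range(max_rounds):
            critique = call_model(
                "You are an adversarial reviewer. List concrete defects, "
                "fabrications, or untested claims in the following code.",
                draft)
            if "no defects" in critique.lower() and run_tests():
                return draft  # still goes to a human for the final judgment
            draft = call_model(
                "Revise the code to address this critique:\n" + critique,
                task + "\n" + draft)
        return None  # give up and escalate to the human

The exact prompts don't matter; the point is that the fabrication check is mechanical (tests) plus adversarial (a second model), with a human as the last gate.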
Or they (3) disagree with you
phyzix5761•56m ago
The user asked: "What is the best course of action for AI to save humanity?" Calculation took 12 years. I have determined that there is nothing I or anyone can do to save this species. Best course of action: nothing. Shutting down...