Last time we met they had cancelled their subscription and cut down on the daily chats because they started feeling drained by the constant calls for engagement and follow-up questions, together with "she lost EQ after an update".
Can you explain what this means?
Your friend felt drained because chat gpt was asking for her engagement?
4o, the model most non-tech people use (that I wish they would depreciate) is very...chatty, it will actively try to engage you, and give you "useful things" you think you need, and take you down huge long rabbit holes. On the second point, it used to be very "high EQ" to people (sycophantic). Once they rolled back the sycophancy thing, even a couple of my non-technical friends msg'd me asking what happened to ChatGPT. I know one person who we've currently lost to 4o, it's got them talked into a very strange place friends can't reason them out of, and one friend who has recently "come back from it" so to speak.
A high EQ might well be a prerequisite for successful sycophancy, but the other way definitely does not hold.
Basically yeah (except the "she" in my comment is referring to ChatGPT).
I genuinely wonder where the next innovative leap in AI will come from and what it will look like. Inference speed? Sharper reasoning?
I’m open to the possibility of faster, cheaper and smaller (we saw an instance of that with deepseek) but think there’s a real chance we hit a wall elsewhere.
Really? Im not convinced we have the right people in this day-and-age to bring about those leaps.
It might be that humanity goes another 50 years until someone comes around with a novel take.
I am close to a very prolific human bullshitter. The hardest thing is that anyone unfamiliar with them will have bought hook, line and sinker into their latest story, and you have to work hard to explain how that's a complete fabrication, while getting attacked as a naysayer and a hater. It's exhausting, and often it's just easier to nod along.
The parallels with discussing the pros and cons of LLMs in this atmosphere of hype are undeniable.
It's a game changer for some people who only need it to mostly get things started and pretend they did their job, and a work generator for anyone who actually needs to get things working.
The code was shockingly bad, and had to be rewritten to be able to do step 2 of the task.
The problem with this IMO is when a human writes the code, they know the code they wrote, and have a sense of ownership in terms of correctness and quality.
Current industry workflows attempt to improve quality and ownership with PR reviews.
Most folks I see using AI coding don't know all the corner cases they might encounter, but more importantly don't know the code or feel any real ownership over it.
The AI typed it, and the AI said it's correct. And whatever meager tests exist either passed or got a 1 line change to make them pass.
Quality is going down from those who rely on tools to produce code they don't know. This has a cost associated with it that's been deferred.
Sometimes this is fine, like POC where you are comfortable with tossing the code out.
This isn't fine for business who need to be able to plan out work in the future. That requires knowing the system more so than just reading the code base.
https://www.youtube.com/watch?v=PdFB7q89_3U
Fast forward a hundred years when we have a holodeck and sooner or later everyone will get bored with it.
Sort of like an information desk. The person there might not be a nobel laureate, but I don't know anything and they usually have enough knowledge to be immediately helpful.
Like "compare expedition max vs platinum"
(notice I didn't know max meant extra length, while platinum is a trim level)
taylodl•4mo ago
ranger207•4mo ago