a state of knowing enough to encounter danger, but not enough to mitigate it.
For the same level of IC, AI tools will let people show off their skills more quickly, i.e. reveal faster how smart or dumb they are.
For nontechnical managers, it will let them ask lots of dumb questions and float stupid ideas that need refuting much faster: "I had a conversation with ChatGPT and it says we can just ...". I don't consider that dangerous, that would be too flattering; it just makes the asymmetric bullshit (Brandolini's law) flow more efficiently.
Suppose you turn out to be wrong and the machine right; people will think you are less consistent than it is, even though every problem context carries its own nuance.
Having to, in good faith, try both avenues every time sounds exhausting.
You'll have to refute a lot more bullshit, AND at a later stage, per Brandolini's law.
However, we aren't even close to AI. We just got better search engines.
Which is a lot, but by then there were plenty of mainstream users who often could not perceive the full extent of that difference, if at all.
The main thing that set Google apart in everybody's mind was that there were no ads, and the pledge not to be evil was interpreted to mean there never would be. There was plenty of advertising already, and people were tired of it.
That's the only promise they needed to break to become the company they are today.
Their original reason for existing.
SMAAART•4mo ago
I have been thinking along the same lines, but you said it better.
fuzzfactor•4mo ago
The truly stupid will also seem smarter than they really are, just like anybody else, which is bound to fool more people than before.
And some dangerous people will use it to seem much less dangerous than they really are, when perhaps there is better-concealed escalation.
So if people are not careful, the upside of nobody getting any smarter may not be well balanced against a downside whose depth nobody knows.