Using it efficiently is absolutely a skill. Just like google-fu is a skill. Or reading fast / skimming is a skill. Or like working with others is a skill. And so on and so on.
The kinds of things you'll learn are:
- What's even worth asking for? What categories of requests just won't work, what scope is too large, what kinds of things are going to just be easier to do yourself?
- Just how do you phrase the request? What kind of constraints should you give up front, and what kind of things do you need to tell it that should be self-evident but aren't? (See the sketch after this list.)
- How do you deal with sub-optimal output? When do you fix it yourself, when do you get the AI to iterate on it, and when do you just throw out the entire session and start afresh?
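As a made-up sketch of that phrasing point: the difference between a bare request and one with constraints stated up front might look like this (the prompts and names are illustrative, not from any article):

```python
# Illustrative only: two ways of phrasing the same request to a model.
# The vague version invites a round-trip of clarification; the constrained
# version states up front the things that "should be self-evident but aren't".
vague_prompt = "Write a function that parses log files."

constrained_prompt = """\
Write a Python function `parse_log(path: str) -> list[dict]`.
Constraints:
- Input lines look like: 2024-01-15T10:32:01Z level=ERROR msg="disk full"
- Skip malformed lines instead of raising.
- Return dicts with keys: timestamp, level, msg.
- Standard library only; no third-party dependencies.
"""
```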
The only way for it not to be a skill would be if how you use an AI did not matter for the quality of the output, or if getting better results were just a natural talent some people have and some don't. Both of those seem like pretty unrealistic ideas.
I think there's probably a discussion to be had about how deep or transferable the skill is, but your opening gambit of "it's not a skill, stop trying to make it one" is not a productive starting point for such a discussion.
People claiming it's a skill should read up on experiments on behavioral adaptation to stochastic rewards. Subjects develop elaborate "rain dances" in the belief that they can influence the outcome. Not unlike sports fans' superstitions.
Because that's the claim of all the AI companies. Right next to the claim that AGI is within reach.
The question is whether, if everyone uses AI, all text will become too similar.
For all the talk about jobs and art, LLMs seem to love shitposting.
More like an absolute bumbling idiot of a colleague, to whom you have to explain things over and over again, and who can't ever be trusted to get anything right.
Sam Altman said AI would "clone his brain" by 2026. He is wrong; it already has.
I've listened to him speak many times, and that's an accurate description. Seriously, has he ever said even one interesting thing?
There is so much friction when you try to do anything technical by talking to someone who doesn't know you; you have to know each other extremely well for there to be no friction.
This is why people prefer communicating in pseudocode rather than natural language when discussing programming: it's really hard to describe what you want in words.
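A made-up illustration of the point: even a tiny requirement turns ambiguous in prose and stays unambiguous in code:

```python
# Prose version: "keep only the most recent entry per user; on a timestamp
# tie, prefer the higher score" -- already ambiguous about ordering and ties.
# The same requirement in code leaves nothing to interpretation:
def latest_per_user(entries):
    """entries: iterable of (user, timestamp, score) tuples."""
    best = {}
    for user, ts, score in entries:
        if user not in best or (ts, score) > best[user]:
            best[user] = (ts, score)
    return best

print(latest_per_user([("a", 1, 5), ("a", 2, 3), ("a", 2, 9), ("b", 1, 1)]))
# {'a': (2, 9), 'b': (1, 1)}
```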
Except talking is not intuitive. It's an unbelievably hard skill. How many years have you spent on talking until you can communicate like an adult? To convey complicated political, philosophical, or technical ideas? To express your feelings honestly without offending others?
For most people it takes from 20 years to a lifetime. Personally, I can't even describe a simple (but not commonly known) algorithm to another programmer without a whiteboard.
From Anger to Denial to Bargaining. And we are starting out with flattery. Masterful gambit!
Instead of participating in slop coding (sorry, "AI collaboration"), I think I'll just wait for the author and their ilk to make their way across Depression and Acceptance.
...when anyone starts talking in universals like this, they're usually deep in some hype cycle.
This is a problematic approach that many people take; they posit that:
1) AI is fundamentally transformative.
2) People who don't acknowledge that simply haven't tried it.
However, I posit that:
3) People who think that haven't actually used it in a serious capacity, or are deliberately misrepresenting things.
The problem is that:
> In reality, I go back and forth with AI constantly—sometimes dozens of times on a single piece of work. I refine, iterate, and improve each part through ongoing dialogue. It's like having a thoughtful and impossibly fast colleague who's always available to help me develop and sharpen my ideas.
...is only true for trivial problems.
The author calls this out, saying:
> It won't excel at consistently citing specific papers, building codes, or case law correctly. (Advanced techniques exist for these tasks, but they're not worth learning when you're just starting out. For now, consider them out of scope.)
...but, this is really the heart of everything.
What are those advanced techniques? Seriously, after 30 days of using AI, if all you're doing is:
> Prepare for challenging conversations by using ChatGPT to simulate potential scenarios, helping you approach interpersonal dynamics with empathy and grace.
Then what the absolute heck are you doing.
Stop gaslighting everyone.
Those 'advanced techniques' are all anyone cares about, because they are the things that are hard, and don't work.
In reality, it doesn't matter how much time you spend learning; the technology is fundamentally limited. It can't do some things.
Spending time learning how to do trivial things will never enable you to do hard things.
It's not missing the 'human touch'.
It's the crazy hallucinations, invalid logic, failure to do as told, flat-out incorrect information or citations, and the inability to perform a task (e.g. as an agent) without messing some other thing up.
There are a few techniques that can help you have an effective workflow; but seriously, if you're a skeptic about AI, spending a month doing trivial stuff like asking for '10 ideas about X' is an insult to your intelligence and doesn't address any of the concerns that, I would argue, skeptics and real people actually have about AI.
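For what it's worth, when people say "advanced techniques" for citations they usually mean retrieval-augmented generation: fetch the actual source text first, then force the model to answer only from it. A toy sketch of the idea, with a naive keyword retriever standing in for a real search index and made-up corpus entries:

```python
# Toy sketch of retrieval-augmented generation (RAG). The retriever is a
# naive keyword-overlap score standing in for a real search index; the
# corpus entries are invented stand-ins for real building-code text.
def retrieve(query: str, corpus: dict[str, str], k: int = 2) -> list[str]:
    """Return ids of the k documents sharing the most words with the query."""
    q = set(query.lower().split())
    return sorted(corpus, key=lambda d: -len(q & set(corpus[d].lower().split())))[:k]

def grounded_prompt(query: str, corpus: dict[str, str]) -> str:
    """Paste retrieved sources into the prompt and demand citations from them."""
    sources = "\n".join(f"[{d}] {corpus[d]}" for d in retrieve(query, corpus))
    return ("Answer using ONLY the sources below, citing them by [id]. "
            "If they don't contain the answer, say so.\n\n"
            f"{sources}\n\nQ: {query}")

corpus = {
    "IBC-1011.5": "Riser height shall be 7 inches maximum and 4 inches minimum.",
    "IBC-1011.2": "Stairway width shall be not less than 44 inches.",
}
print(grounded_prompt("maximum riser height for stairs", corpus))
```

Whether that actually fixes hallucinated citations is, of course, exactly what's in dispute.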
That's the function of a tool: to help you do something more easily. Learning to use it can take some time, but the acquired proficiency will compensate for that.
General-public LLMs have been around for two years, and still today there are no concrete use cases that fit the definition of a tool. It's "trust me, bro!" and warnings in small print.
There are some, but you won't like them. Three big examples:
a) Automating human interactions. (E.g., "write some birthday wishes for my coworker".)
b) Offensive jokes and memes.
c) Autogenerated NPCs for role-playing games. (A sketch of this one follows below.)
So, generally things that don't require actual intelligence. (Weird that empathy is the first thing we managed to automate away with "AI".)
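A minimal sketch of (c), assuming the official OpenAI Python client and an API key in the environment; the model name and prompts are illustrative:

```python
# Minimal sketch of use case (c): autogenerating an NPC. Assumes the official
# OpenAI Python client (`pip install openai`) and OPENAI_API_KEY set in the
# environment; the model name and prompts are illustrative.
from openai import OpenAI

client = OpenAI()

def generate_npc(setting: str) -> str:
    """Ask the model for a short NPC sheet for a given setting."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative; any chat model works
        messages=[
            {"role": "system",
             "content": "You invent concise tabletop RPG NPCs: name, "
                        "one-line description, a secret, and a quest hook."},
            {"role": "user", "content": f"Setting: {setting}"},
        ],
    )
    return response.choices[0].message.content

print(generate_npc("a rain-soaked smugglers' port"))
```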
Anthropic used to do this with Claude's character until Claude 3, but then dropped it. OAI's image generation is consistently ahead in prompt understanding and abstraction, but they famously don't give a flying turd about nuances. Current models are produced by ML nerds who handwave the complexity away, not by experts in what they're trying to solve. If they want it to be usable now, they need to listen to people like this [1]. But I don't think they really care.
[1] https://yosefk.com/blog/the-state-of-ai-for-hand-drawn-anima...
That there is such a calendar for using ChatGPT, in the style of topics like "how to eat healthy", "how to stay fit", or "how to be more confident", tells me more than anything what impact AI has on our society.