didibus•1h ago
As someone who uses AI for coding, emails, design documents, and so on...
I'm always a bit confused by the "training" rhetoric. It's the easiest thing to use. Do people need training to use a calculator?
This isn't like using Excel effectively and learning all the features, functions and so on.
Maybe I overestimate my ability as a technically savvy person to leverage AI tools, but I was just as good at using them on day 1 as I am 2 years later.
righthand•1h ago
No, people need training for AI the same way they need training for proof-reading. Quality checking isn’t a natural process when something looks 80% complete and the approvers only care about 80% completeness.
My coworker still gets paid the same for turning in garbage as long as someone fixes it later.
dublinben•1h ago
>Do people need training to use a calculator?
Yes? Quite a bit of time was spent in math classes over the years learning to use calculators. Especially the more complicated functions of so-called graphing calculators. They're certainly not self-explanatory.
What does it say about your skill or the depth of this tool that you haven't gotten better at using it after 2 years of practice?
watwut•1h ago
One of this article's claims is that AI projects fail because companies failed to train employees for AI. But you do get value out of calculators without training; the training is there to unlock the more advanced, complicated functions.
The article comes across as an "AI cannot fail, it can only be failed" argument.
happytoexplain•1h ago
In my experience, "training" usually means just telling people not to blindly trust the output. Like... read it. If you can't personally verify in a code-review capacity that what it wrote is apparently correct, then don't use it. The majority of people simply don't care - it's just blind copy-pasting from StackOverflow all over again, but more people are doing it more often. Of course, like most training, it's performative. 90% of the people making this mistake aren't capable of reviewing the output, so telling them to is pointless.
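A hypothetical example of what "apparently correct" output can hide: this Python dedupe looks plausible and passes a one-off test, but the mutable default argument quietly carries state between calls, which is exactly the kind of thing a real review catches and blind pasting doesn't.

    def dedupe(items, seen=[]):
        # Looks fine at a glance; the default list is created once and reused.
        out = []
        for item in items:
            if item not in seen:
                seen.append(item)
                out.append(item)
        return out

    print(dedupe([1, 2, 2]))  # [1, 2]
    print(dedupe([2, 3]))     # [3] -- the default kept state, so 2 was silently dropped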
derektank•1h ago
I'm arguably much worse at using ChatGPT today than I was 2 years ago, as back then you needed to be more specific and constrained in your prompts to generate useful results.
Nowadays with larger context windows and just generally improved performance, I can ask a one sentence question and iterate to refine the output.
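For what it's worth, that iterate-and-refine workflow is just accumulating the conversation history and appending follow-up turns. A rough sketch with the OpenAI Python SDK (model name and prompts are placeholders, and it assumes the openai>=1.x client with OPENAI_API_KEY set):

    from openai import OpenAI

    client = OpenAI()
    messages = [{"role": "user", "content": "Draft a short status update for the Q3 migration."}]
    first = client.chat.completions.create(model="gpt-4o", messages=messages)
    messages.append({"role": "assistant", "content": first.choices[0].message.content})

    # The refinement is just another turn, not a rewritten mega-prompt:
    messages.append({"role": "user", "content": "Tighter, and drop the jargon."})
    second = client.chat.completions.create(model="gpt-4o", messages=messages)
    print(second.choices[0].message.content)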
Cpoll•1h ago
Things I'd include in training:
- Mental model of how the AI works.
- Prompt engineering.
- Common failure modes.
- Effective validation/proofreading.
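To make the validation bullet concrete, a minimal sketch (hypothetical helper, assuming the model was asked to return JSON): the point is a mechanical accept/reject gate rather than eyeballing something that looks 80% done.

    import json

    def validate_llm_json(raw, required_keys):
        # Reject output that doesn't parse or is missing fields,
        # instead of trusting that it "looks" complete.
        try:
            data = json.loads(raw)
        except json.JSONDecodeError:
            return None
        if any(key not in data for key in required_keys):
            return None
        return data

    # Stand-in for a real model response:
    model_output = '{"title": "Q3 summary", "owner": "dana"}'
    print(validate_llm_json(model_output, ["title", "owner", "due_date"]))  # None -> re-prompt or escalate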
As for internal stuff like emails/design docs... I think using an AI to generate emails exposes a culture problem, where people aren't comfortable writing/sending concise emails (i.e. the data that went into the prompt).
NegativeK•1h ago
Are employees aware that they can't trust AI results uncritically, like the article mentions? See: the lawyers who have been disciplined by judges. Or doctors who aren't verifying all conversation transcriptions and medical notes generated by AI.
Does your organization have records-retention or legal-hold needs that employees must be aware of when using some rando AI service?
Will employees be violating NDAs or other compliance requirements (HIPAA, etc) when they ask questions or submit data to an AI service?
For the LLM that has access to the company's documents, did the team rolling it out verify that all user access control restrictions remain in place when a user uses the LLM?
Is the AI service actually equivalent, better, or even just good enough compared to the employees laid off or retasked?
This stuff isn't necessarily specific to AI and LLMs, but the hype train is moving so fast that people are having to relearn very hard lessons.
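On the access-control question: the common mitigation is to enforce the asking user's permissions at the retrieval layer, so the model never sees documents the user couldn't open directly. A minimal sketch, with hypothetical Doc/retrieve_for_user names standing in for a real search index:

    from dataclasses import dataclass

    @dataclass
    class Doc:
        text: str
        allowed_groups: set

    def retrieve_for_user(query, user_groups, index):
        # Substring match stands in for real vector search; the point is that
        # the user's own permissions filter results *before* anything is
        # handed to the model as context.
        hits = [d for d in index if query.lower() in d.text.lower()]
        return [d.text for d in hits if d.allowed_groups & user_groups]

    index = [Doc("FY25 salary bands: ...", {"hr"}),
             Doc("Deploy runbook: ...", {"eng", "hr"})]
    print(retrieve_for_user("salary", {"eng"}, index))  # [] -- not leaked via the chatbot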