Learning the cognitive framings for working with AI to boost your own efficiency is moderately easy but it feels impossible to the uninformed. Some even deeply believe that lifelong learning is itself a hoax.
Everyone who already knew code knew that efficiency in writing software does not come from typing speed, full stop.
Nobody wants to call out an employer on AI magical thinking in a tight job market though. You could lose your job.
theothertimcook•8mo ago
johnea•8mo ago
Like the tool's glowing appraisal of Altman, it also is self promoting.
To me, this represents one of the most serious issues with LLM tools: the opacity of the model itself. The code (if provided) can be audited for issues, but the model, even if examined, is an opaque statistical amalgamation of everything it was trained on.
There is no way (that I've read of) for identifying biases, or intentional manipulations of the model that would cause the tool to yield certain intended results.
There are examples of DeepState generating results that refuse to acknowledge Tienanmen square, etc. These serve as examples of how the generated output can intentionally be biased, without the ability to readily predict this general class of bias by analyzing the model data.