_wire_•59m ago
I asked for a computation to be performed, but is the answer what I expect?
If you already know the answer, what's the point of performing the computation? And if you don't already know the answer, what's the basis of your confidence in the result?
The opening sentence captures this dissonance of certainty vs. uncertainty dramatically:
"wc -l is pretty good at counting lines"
wc is not "pretty good" at it; it's perfect. If you don't understand the limits of this perfection in context, what business do you have integrating your work into any application where anything of value is at stake... including your own time?
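And those limits are real, just mundane. A minimal sketch of one of them, assuming a POSIX wc (exact output padding varies by implementation): wc -l counts newline characters, not "lines" in the intuitive sense, so a final line with no trailing newline goes uncounted.

    # wc -l counts '\n' characters, so an unterminated final line
    # is not counted.
    $ printf 'one\ntwo\nthree' | wc -l
    2
    # With the trailing newline, the count matches intuition.
    $ printf 'one\ntwo\nthree\n' | wc -l
    3

That is not a bug; it is the definition of a line that wc is perfect at.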
But this drama represents a generational shift: the programmer is no longer interested in programming. He is a conductor, an orchestrator of an AI that barfs up code, which he then visually inspects.
The hazards of this approach are as grotesque as they are obvious: humans second-guessing the outputs of programs they don't understand, using visual inspection of correctness, a task at which humans have very sharp limits.
There is a very important distinction between understanding the utility of a command and second-guessing its output.
The exercise as presented distorts sampling commands to see what they do in principle into a development process built on a pernicious uncertainty about whether the word-counter actually counts all the words, and on a blooming confusion over what it means to count.
This leads toward a perverse proof of program correctness by direct inspection of arbitrary outputs: "I ran wc -l and it was right again." Maybe I should cross-check it with cat -n... except what do I do with all this output?!
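For what it's worth, the obvious way to tame that output is to throw most of it away, and even then the two tools won't agree. A sketch, assuming GNU cat and tail:

    # Number every line, then keep only the last to avoid the flood;
    # the final number is cat -n's notion of the line count.
    $ printf 'one\ntwo\nthree' | cat -n | tail -n 1
         3  three
    # wc -l reports something else, because it counts newline
    # characters, and the final line here has none.
    $ printf 'one\ntwo\nthree' | wc -l
    2

Two perfectly correct programs, two different counts, because "count the lines" was never one question to begin with.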
Systems built using these methods will end in disaster. Gridlocked Waymos and commercial airliners that drop out of the sky.
I can't fault the author for this disturbance in thinking, because it's pervasive. Today the term "software engineer" means someone who is practiced at marshaling computation beyond the point where correctness is certain. It's a bragging point. And these days all the "engineers" (a title that has lost all meaning in business) are reducing their roles to checking whether AIs produce believable results for starkly ambiguous commands, where the only reasonable potential of the AI to produce useful results is that its inference engine has crystallized previous "solutions".
Prior art is being rendered into gobbledygook without even an apostolic line of priests with sufficient vigilance to guard the flame of understanding.
I for one do not welcome our robot overlords.