Therefore, it doesn’t affect my work at all. The only thing that affects my prospects is the hype about AI.
Be a purple cow, the guy says. Seems to me that not using AI makes me a purple cow.
But that isn't what the author is talking about. The issue is, your good code can be equal to slop that works. What the author says needs to happen is that you need to find a better way to stand out. I suspect that for many businesses where software superiority is not a core requirement, slop that works will be treated the same as non-slop code.
Until that slop that works leads to a Therac-26 or PostOfficeScandal2: Electric Boogaloo. Neither of those applications required software superior to their competitors, just working software.
The average quality of software can only trend down so far before real world problems start manifesting, even outside of businesses with a hard requirement on "software superiority"
The stuff you get to see in open source, papers, academia- that's a very small curated 1% of the actual glue code written by an overworked engineer at 1am that holds literally everything together.
I was testing at Microsoft on the week that Windows 2000 shipped, showing them that Photoshop can completely freeze Windows (which is bad, and something they needed to know about).
The creed of a tester begins with faith in the existence of trouble. This does not mean we believe nothing is perfectible; it means we think it is necessary to be vigilant.
AI commits errors in a way and to a degree that should alarm any reasonable engineer. But more to the point: it tends to alienate engineers from their work so that they are less able to behave responsibly. I think testing is more important than ever, because AI is a Gatling gun of risk.
AI is not worthy of trust, and the sort of reasonable people I want to deal with won’t trust it and don’t. They deal with me because I am not a simulation of someone who cares— I am the real thing. I am a purple cow in terms of personal credibility and responsibility.
To the degree that the application of AI is useful to me without putting my credibility at risk, I will use it. It does have its uses.
(BTW, although I write code as part of my work, I stopped being a full-time coder in my teens. I am a tester, testing consultant, expert witness, and trainer now.)
AI makes slop
Therefore, spend more time to make the slop "better" or "different"
[No, they do not define what counts as "better" or "different"]
Where's your moat? If you can create the software with prompts so can your competitors.
Attackers who know which model(s) you use could also run similar prompts and inspect the output code, to speculate about what kinds of exploits your software might have.
A lawyer knowing what model his opposition uses could speculate on their likely strategies.
Turns out being able to write the software is not the only, or even the most important factor in success.
It works great, but I can’t imagine skipping the refinement process.
Compare that to the approach you're using (which is what I'm also doing), and you're able to have AI stay much closer to what you're looking for, be less prone to damaging hallucinations, and also be guided toward a foundation that's stable. The downside is that it's a lot more work. You might multiply your productivity by some single digit.
To me, that 2nd approach is much more reasonable than trying to 100x your productivity but actually ending up getting less done, because you get stuck in a rabbit hole you don't know you're in and will never refine your way out of.
Yes. I almost always end with "Do not generate any code unless it can help in our discussions, as this is the design stage." I would say 95% of my code for https://github.com/gitsense/chat in the last 6 months was AI generated, and I would say 80% of it was one-shots.
It is important to note that I can easily get 30+ messages of back and forth in before any code is generated. For complex tasks, I will literally spend an hour or two (which can span days) chatting and thinking about a problem with the LLM, and I do expect the LLM to one-shot them.
Having said all of that, I do believe AI will have a very negative effect on developers where the challenge is skill and not time. AI is implementing things that I could do if given enough time. I am literally implementing things in months that would have taken me a year or more.
My AI search is nontrivial, but it only took two months to write. I should also note that the 5% I needed to implement myself was the difference between throwaway code and a usable search engine.
Not sure I believe this. If you suddenly automate away 95% of any task, how could it be the case you retain 100% of your prior abilities?
>However my debugging and problem solving skills should increase
By "my", I assume you mean "my LLM"?
>I do think my writing proficiency will decrease though.
This alone is cause for concern. The ability for a human being to communicate without assistance is extremely important in an age where AI is outputting a significant fraction of all new content.
I need to review like crazy now, so it is not like I am handing off my understanding of the problem. If anything, I learn new things from time to time, as the LLM will generate code in ways that I haven't thought of before.
The AI genie is out of the bottle now and I do believe in a year or two, companies are going to start asking for conversations along with the LLM generated code, which is how I guess you can determine if people are losing their skill. When my code is fully published, I will include conversations for every feature/bug fix that is introduced.
> The ability for a human being to communicate without assistance is extremely important
I agree with this, but once again, it isn't like I don't have to review everything. When LLMs get much better, I think my writing skills may decline, but as it currently stands, I find myself having to revise what the LLM writes to make it sound more natural.
Everything is speculation at this point, but I am sure I will lose some skills. I also think I will gain new ones by being exposed to things I haven't thought of before.
I wrote my chat app because I needed a more comfortable way to read and write *long* messages. For the foreseeable future, I don't see my writing proficiency decreasing in any significant manner. I can see myself becoming slower at writing in the future, though, as I find myself being very comfortable speaking to the LLM in a manner that I would not with a human. LLMs are extremely good at inferring context, so I do a lot of lazy typing now to speed things up, which may turn into a bad habit.
Off-topic, but in biology circles I've heard this type of situation (where "it takes all the running you can do, to keep in the same place" because your competitors are constantly improving as well) called a "Red Queen's race" and really like the picture that analogy paints.
The induced demand for more goods and services therefore fills the gap, and causes people to work just as hard as before -- similarly to how a highway remains full after adding a lane.
First we got transparent UIs, now everyone has them. Then we got custom icons, then Font Awesome commoditized them. Then flat UI until everyone copied it. Then those weird hand-painted Lottie illustrations, and now thanks to Gen-AI everyone has them. (Then Apple launched their 2nd gen transparent UI.)
But the one thing that neither caffeinated undergrads nor LLMs can pull off is making software efficient. That's why software that responds quickly to user input will feel magical and stand out in a sea of slow and bloated AI slop.
Interesting that radical abundance may create radical competition to utilize more abundant materials in an effort to maintain relative economic and social position.
But what’s unique today becomes slop tomorrow, AI or not.
Art has meaning. Old buildings feel special because they're rare. If there were a thousand Golden Gate Bridges, the first wouldn't stand out as much.
Online, reproduction is trivial. With AI, reproducing items in the physical world will get cheaper.
No. When you have a city full of old houses all from the same era, maybe even by the same architect, the new building still looks ugly. The old house looks beautiful, even when you have hundreds of copies next to it.
Then I copied the tool and data to a new directory and fully started over, with a more concrete description of the product I wanted in place and a better view of what components I would want, and began with a plan to implement one small component at a time, each with its own test screen, reviewing every change and not allowing any slop through (including any features that look fine from a code standpoint but are not needed for the product).
So far I'm quite happy with this.
For take #1 I said what tech to use and gave a high-level description of the game and its features. I guess I failed to mention this part, but when I threw take #1 away, I first used Claude plus hand editing to update it to have a detailed description of each screen and feature in the game. So take #2 had a much more detailed description of exactly what was going to be built, but still right in CLAUDE.md.
I did also create a DEVELOPMENT-PLAN.md first with Claude and have been having it update the file with what's been done before every commit. I don't yet have a good idea of how impactful that part has been.
and
> So make your stuff stand out. It doesn't have to be "better." It just has to be different.
equals... craft?
Isn't that what has always mattered a great deal
Investing in your understanding and skill, on the other hand, has nearly limitless returns.
As someone on a very small team competing with a very big one I don't have time for anything that can't bring exponential returns. I have no time for LLMs.
LLMs promise to speed you up right now, in direct proportion to the amount you pay for tokens, while sacrificing your own growth potential. You'd have to be a cynic to do it; you'd have to believe that your own ideas aren't even worth investing in over the long term.
The point being made is exactly that something beautiful has been cheapened.
"Then, within twenty minutes, we started ignoring the cows. … Cows, after you’ve seen them for a while, are boring"
Skill issue. I've been looking at cows for 40 years and am still enchanted by them. Maybe it helps that I think of cows as animals instead of storybook illustrations; you'd get lynched if you claimed you got bored of your pet cat after 20 minutes.

More flour more water. More water more flour.