And if I were to jump into instruction-level programming today I would start by asking an LLM where to begin...
There actually is a ChadGPT but I assume the OP meant ChatGPT
The open questions right now are how much demand there is for more software, and where AI capabilities plateau.
In the long term, food demand is elastic in that populations tend to grow.
That's no longer happening.
But, there is a key distinction that we would be remiss to not take note of: By definition, farmers are the owners of the business. Most software developers aren't owners, just lowly employees. If history is to repeat, it is likely that, as usual, the owners are those who will prosper from the advancement.
fruits and all non-essential food items are famously very elastic, and constitute a large share of spending.
for example: if cheap cereal becomes abundant, it is only at the cost of poor quality, so demand for high quality cereal will increase.
LLM-driven software engineering will continuously raise the bar for quality and the demand for high-quality software
"Wow, show it to me!"
"OK here it is. We call it COBOL."
Well, what we had before SQL[1] was QUEL, which is effectively the same as Alpha[2], except in "English". Given the previous assertion about what came before SQL, clearly not. I expect SQL garnered favour because it is tablational instead of relational, which is the quality that makes it easier to understand for those not heavy in the math.
[1] Originally known as SEQUEL, a fun word play on it claiming to be the QUEL successor.
[2] The godfather language created by Codd himself.
>SQL [...] is a database language [...] used for access to pseudo-relational databases that are managed by pseudo-relational database management systems (RDBMS).
>SQL is based on, but is not a strict implementation of, the relational model of data, making SQL “pseudo-relational” instead of truly relational.
>The relational model requires that every relation have no duplicate rows. SQL does not enforce this requirement.
>The relational model does not specify or recognize any sort of flag or other marker that represents unspecified, unknown, or otherwise missing data values. Consequently, the relational model depends only on two-valued (true/false) logic. SQL provides a “null value” that serves this purpose. In support of null values, SQL also depends on three-valued (true/false/unknown) logic.
Or, in other words, "relation" does not mean the relations between the tables as many assume: the tables, as a set of tuples, are the relations.
The Alpha/QUEL lineage chose relations, while SQL went with tables. Notably, a set has no ordering or duplicates — which I suggest is in contrast to how the layman tends to think about the world, and thus finds it to be an impediment when choosing between technology options. There are strong benefits to choosing relations over tables, as Codd wrote about at length, but they tend to not show up until you get into a bit more complexity. By the time your work reaches that point, the choice of technology is apt to already be made.
With care, SQL enables mimicking relations to a reasonable degree when needed, which arguably offers the best of all worlds. That said, virtually all of the SQL bugs I see in the real world come as a result of someone not putting in enough care in that area. When complexity grows, it becomes easy to overlook the fine details. Relational algebra and calculus would help by enforcing it. But, tradeoffs, as always.
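To make the duplicate-row point concrete, here is a minimal sketch using Python's built-in sqlite3 (my own example, not anything from the thread): a plain table happily accepts duplicates, and declaring a key is the "care" that restores set semantics.

    import sqlite3

    conn = sqlite3.connect(":memory:")

    # A plain SQL table is a bag: duplicate rows are accepted silently.
    conn.execute("CREATE TABLE emails_bag (address TEXT)")
    conn.executemany("INSERT INTO emails_bag VALUES (?)",
                     [("a@example.com",), ("a@example.com",)])
    print(conn.execute("SELECT COUNT(*) FROM emails_bag").fetchone())  # (2,)

    # Declaring a key restores relational (set) semantics: duplicates are rejected.
    conn.execute("CREATE TABLE emails_rel (address TEXT PRIMARY KEY)")
    conn.execute("INSERT INTO emails_rel VALUES ('a@example.com')")
    try:
        conn.execute("INSERT INTO emails_rel VALUES ('a@example.com')")
    except sqlite3.IntegrityError:
        print("duplicate rejected")

    # The same care applies to queries: UNION and DISTINCT deduplicate, UNION ALL does not.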
None of these had "semantic expressivity" as their strength.
> If SQL looked like, say, C#'s LINQ method syntax, would it really be harder to use?
Yes.
Maybe a similar bifurcation will arise where there are vibe coders who use LLMs to write everything, and there are real engineers who avoid LLMs.
Maybe we’re seeing the beginning of that with the whole bifurcation of programmers into two camps: heavy AI users and AI skeptics.
(Also of other food, energy, and materials sourcing: fishing, forestry, mining, etc.)
This was the insight of the French economist François Quesnay in his Tableau économique, foundation of the Physiocratic school of economics.
> Strictly speaking, farming is where all our livelihoods come from, in the greatest part. We're all living off the surplus value of food production.
I don't think farming is special here, because food isn't special. You could make exactly the same argument for water (or even air) instead of food, and all of a sudden all our livelihoods would derive ultimately from the local municipal waterworks.
Whether that's a reductio ad absurdum of the original argument, or a valuable new perspective on the local waterworks is left as an exercise to the reader.
You could argue that water being so cheap is exactly what you'd expect when the surplus value of water production is sky-high: people only spend a fraction of their income on both food and water production, exactly because the surplus is so high.
Thus if water isn't the basis for all our livelihoods, neither is food production these days.
Water largely isn't fundamentally transformed with use (unless it's involved in a chemical reaction, though that's a minute fraction of all water usage), though it may be dispersed or degraded (usually contaminated with something). But it can recover its earlier state with appropriate applications of process and energy. Water (and much else we consume) is a material input rather than an energy input.
With energy inputs it is the energy potential itself which provides value, and that potential is intrinsically consumed in their use. Water, wood, iron, aluminium, lithium, helium, etc., can all be recycled, restored to their useful state, at comparatively little cost.
Collecting the waste products of food doesn't give you that, on two counts. First, most of the actual metabolic output is gaseous and lost to the atmosphere at large (CO2 and water vapour in your breath). Second, to the extent that solid and liquid human waste is useful in producing new food, it's as nutrient fertiliser which enables the conversion of sunlight to fuel, and not as the primary energy input itself (sunlight).
Recycling all the materials you mentioned costs 'energy' (to use your terminology). The same for food: we can use used-up food and a lot of energy and grow new food.
The process for 'recycling' wood is basically the same as that for recycling food: you grow some plants. The waste products of used-up wood are also basically the same as those for used-up food.
---
In any case, I don't see how any of this makes food more special than eg petrol or sunlight?
And you can argue that food is only useful if we have air, i.e. oxygen, to burn it with.
Working the summer fields was one of the least desirable jobs, but it still gave local students with no particular skills a good supplemental income appropriate to the region.
We increase the overall total prosperity with that automation.
Btw, most countries have taxes and welfare anyway.
A good example of this phenomenon is sports. Even though it can't be done remotely, it's so talent-dependent that it's often better to find a great player in a foreign country and ask them to work for you, rather than relying exclusively on local talent. If it could be a remote job, this effect would be even greater.
> 'real' engineers can use SQL just fine.
To be very explicit instead of snarky about my point: I think it is both factually incorrect and unnecessary gatekeeping to say real engineers know sql and imply not knowing sql marks someone as not a real engineer, hence the "no true Scotsman" fallacy.
I probably agree with most of your opinions here within the context of the thread - I think good engineers can learn sql and other tools as necessary. I don't, however, think experience with any particular technology is a valid bar for a good or bad engineer at this point and I'm happy to speak against that when I see it.
Reasons:
- I can compose queries, which in turn makes them easier to decompose
- It's easier to spot errors
- I avoid parsing SQL strings
- It's easier to interact with the rest of the code, both functions and objects
If I need to make just a query I gladly write SQL
It's just a shame that many languages don't support relational algebra well.
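For what it's worth, here's a minimal sketch of what composing queries in the host language can look like, using SQLAlchemy Core in Python (my example and table, not the commenter's actual stack): each query is an ordinary value, so conditions stack onto it piece by piece instead of being spliced into SQL strings.

    from sqlalchemy import Column, Integer, MetaData, String, Table, create_engine, select

    metadata = MetaData()
    users = Table(
        "users", metadata,
        Column("id", Integer, primary_key=True),
        Column("name", String),
        Column("active", Integer),
    )

    # The base query is a plain value; further filters compose onto it.
    base = select(users.c.id, users.c.name)
    active_only = base.where(users.c.active == 1)
    a_names = active_only.where(users.c.name.like("A%"))

    engine = create_engine("sqlite:///:memory:")
    metadata.create_all(engine)
    with engine.connect() as conn:
        print(conn.execute(a_names).fetchall())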
We had relations as a datatype and all the relevant operations over them (like join) in a project I was working on. It was great! Very useful for expressing business logic.
And I suggest that having relations in both places is the way to go.
So to conclude, object-orientation is fine and relational is fine too; the issue is that the optimal way to translate between them does not scale.
If your language supports relations, there's no need to badly translate to objects. (And even if your language doesn't support everything your database does, there's still less of an impedance mismatch if you use an 'RRM' instead of an ORM.)
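To make that concrete, here's a toy sketch in plain Python of what "relations as a datatype" can look like (hypothetical, not the project mentioned above): a relation is a column header plus a set of tuples, so duplicates vanish by construction and natural join takes a few lines.

    def relation(columns, rows):
        # Storing rows in a frozenset means duplicates disappear by construction.
        return (tuple(columns), frozenset(tuple(r) for r in rows))

    def natural_join(r, s):
        # Join two relations on whatever column names they share.
        (rcols, rrows), (scols, srows) = r, s
        shared = [c for c in rcols if c in scols]
        out_cols = list(rcols) + [c for c in scols if c not in shared]
        out_rows = set()
        for a in rrows:
            for b in srows:
                if all(a[rcols.index(c)] == b[scols.index(c)] for c in shared):
                    out_rows.add(a + tuple(b[scols.index(c)] for c in scols if c not in shared))
        return (tuple(out_cols), frozenset(out_rows))

    people = relation(["person_id", "name"], [(1, "Ada"), (2, "Alan")])
    emails = relation(["person_id", "address"], [(1, "ada@x.com"), (1, "ada@y.com")])
    # Two joined rows: (1, 'Ada', 'ada@x.com') and (1, 'Ada', 'ada@y.com')
    print(natural_join(people, emails))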
Is this true? It doesn't seem true to me.
Yes, there are so many so-called developers in backend work who do not know how to do basic SQL. Anything bigger than a simple WHERE clause.
I won't even talk about using indexes in the database.
I'm more surprised by software engineers who do know these things than by the ones who don't.
It’s not that SQL is hard, it’s that for any discipline the vast majority of people don’t have a solid grasp of the tools they’re using. Ask most tradespeople about the underlying thing they’re working with and you’ll have the same problem.
I share your sentiment though - I'm a data engineer (8 years) turned product engineer (3 years) and it astounds me how little SQL "normal" programmers know. It honestly changed my opinion on ORMs - it's not like the SQL people would write exceeds the basic select/filter/count patterns that is the most that non-data people know.
SQL is pretty horrifying when you start getting tree structures or recursive structures. While Datalog handles those like a champ.
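For reference, the SQL answer to walking a tree is a recursive CTE; below is a minimal sketch with Python's sqlite3 (the table and data are invented for the example), with the roughly equivalent Datalog rules in a comment for comparison.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE nodes (id INTEGER PRIMARY KEY, parent_id INTEGER, name TEXT)")
    conn.executemany("INSERT INTO nodes VALUES (?, ?, ?)", [
        (1, None, "root"),
        (2, 1, "child"),
        (3, 2, "grandchild"),
    ])

    # All descendants of node 1. The Datalog version is roughly two rules:
    #   desc(X) :- parent(X, 1).
    #   desc(X) :- parent(X, Y), desc(Y).
    rows = conn.execute("""
        WITH RECURSIVE descendants(id) AS (
            SELECT id FROM nodes WHERE parent_id = 1
            UNION
            SELECT n.id FROM nodes n JOIN descendants d ON n.parent_id = d.id
        )
        SELECT nodes.name FROM nodes JOIN descendants USING (id)
    """).fetchall()
    print(rows)  # two rows: ('child',) and ('grandchild',)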
SQL is really nice for columnar data, and it's well supported almost everywhere.
Though Datalog isn't half bad at columnar data either.
I like SQL more when working with tabular data (especially for analytical use cases), but that has (thus far) dominated the kinds of work I've done.
(I'd love for someone to substantiate or debunk this for me.)
Incorrect.
Encoding a program was considered secretarial work, not the act of programming itself. Over time, "encoding" was shortened to "coding."
This is why the industry term "coder" is a pejorative descriptor.
For some people some of the time. I don't think that's true in general.
It is not.
They are very much the exception that proves the rule though.
Most people miss the fact that technical improvements increase the pie in a way that was not possible before.
When digital cameras became popular, everybody became a photographer. That only made the world better, and we got so many more good photographers. Same with YouTube & creativity.
And same with coding & LLMs. The world will have lots more apps, and programmers.
Me: "I'm getting married on [date] and I'm looking for a photographer."
Them, in the voice of Nick Burns: "We're already filling up for next year. Good luck finding a photographer this year."
Me: "I just got engaged. You never have anything open up?"
Them: "No" and hang up the phone.
The faster guys like that struggle to make a living, the better.
If today all you do as a programmer is open jira tickets without any kind of other human interaction, AI coding agents are bad news. If you’re just using code as a means to build products for people, it might be the best thing that has happened in a long time.
So the job qualifications went from "understand lighting, composition, camera technology" to "be hot".
Did it?
people now stand around on dance floors taking photos and videos of themselves instead of getting on dancing and enjoying the music. to the point where clubs put stickers on phones to stop people from doing it.
people taking their phone out and videoing / photographing something awful happening, instead of doing something helpful.
people travel to remote areas where the population has been separated from humanity and do stupid things like leave a can of coke there, for view count.
it’s not made things better, it just made things different. whether that’s better or worse depends on your individual perspective for a given example.
so, i disagree. it hasn’t only made things better. it made some things easier. some things better. some things worse. some things harder.
someone always loses, something is always lost. would be good if more people in tech remembered that progress comes at a cost.
There are other types of dances where dancers are far more interested in the dance than selfies: Lindy Hop, Blues, Balboa, Tango, Waltz, Jive, Zouk, Contra, and West Coast Swing to name a few. Here are videos from the Blues dance I help organize where none of the dancers are filming themselves:
* https://www.facebook.com/61558260095218/videos/7409340551418...
Though, I'll grant that there's not really a way to argue this without showing videos
I disagree with the "only" part here. Imagine a distribution curve of photos with shitty photos on the left and masterpieces on the right and the height at the curve is how many photos there are to be seen at that quality.
The digital camera transition massively increased the height of the curve at all points. And thanks to things like better autofocus, better low light performance, and a radically faster iteration loop, it probably shift the low and middle ends to the right.
It even certainly increased the number number of breathtaking, life-changing photos out there. Digital cameras are game-changes for photographic journalists traveling in difficult locations.
However... the curve is so high now, the sheer volume of tolerably good photos so overwhelming, that I suspect that average person actually sees fewer great photos than they did twenty years ago. We all spend hours scrolling past nice-but-forgottable sunset shots on Instagram and miss out on the amazing stuff.
We are drowning in a sea of "pretty good". It is possible for there to be too much media. Ultimately, we all have a finite amount of attention to spend before we die.
Meaning no disrespect to photographers, I'm starting to think that a probable outcome of all the AI investment is a sharp uptick in shovelware.
If we can get AIs to build "pretty good" things - or even just "pretty average" things - cheaply, then our app stores, news feeds, ad feeds, company directives, etc, will be continuously swamped with it.
You can use AI to filter out the shovelware, so you never have to see it.
And sometimes it is even combining elements from different photos: Alice had her eyes closed in this otherwise great shot, but in this other shot her eyes were open. A little touch-up and we've got the perfect photo.
The reason to take all 400, though, is that every once in a while one photo is obviously better than another for some reason. You also want several angles because sometimes the light will be wrong at the moment, or someone will happen to be in the way of your shot...
I still find great value in the TCM cable channel. Simply because if I tune in at a random time, it's likely to be showing an excellent old film I either never heard of or never got around to watching.
The service they are offering is curation, which has a lot of value in an age of infinite content flooding our attention constantly.
It is pretty hard to break out, but people still make names for themselves, both from experience on assignments like in the old days and from Instagram and other social media followings. People still need weddings shot and professional portraits taken, which takes some skill in understanding the logistics of how to actually do that job well and efficiently, and managing your equipment.
Demand is way down because while a $5000 lens on a nice camera is better than my phone lens, my phone is close enough for most purposes. Also, my phone is free; in the days of film, a single roll, by the time you developed it, cost significant money (I remember as a kid getting a camera for my birthday and then my parents wouldn't get me film for it - in hindsight I suspect every roll of film cost my dad half an hour of work, and he was a well-paid software developer). This cost meant that you couldn't afford to practice taking pictures; every single one had to be perfect. So if you wanted a nice picture of the family it was best to pay a professional who, because of experience and equipment, was likely to take a much better one than you could (and if something went wrong they would retake it for free).
Or if they are so temporary that they last less than a year.
This is actually bad for existing programmers though?
Do you not see how this devalues your skills?
When online flight bookings came about, travel agents were displaced. The solution isn't "let's stop online flight booking sites and protect travel agents" because that's an inefficient system
this is akin to the self-checkout aisles in supermarkets, some of which have been rolled back to add back in more human checkout staff.
why? people liked interacting with the inefficient humans. turns out efficiency isn’t ideal in all cases.
i wasn’t trying to argue that everything should be inefficient. i was trying to point out that not everything needs to be efficient.
two very different things, and it seems (?) you may have thought i meant the former.
I, on the other hand, will use whichever gets me out of the store faster. I don't view shopping for groceries as a social occasion.
I guess it takes all types.
Under capitalism, or late-stage capitalism, if you will, more efficient procedures aren't normally allowing for greater margins. There are countless examples of more exploitative and wasteful strategies yielding much greater margins than more efficient alternatives.
A client of mine has gotten quite good at using Bolt and Lovable. He has since put me on 3 more projects that he dreamed up and vibe coded that would just be a figment of his imagination pre-AI.
He knows what's involved in software development, and knows that he can't take it all the way with these tools.
That's why SW devs' salaries went up like crazy in our time and not in 1980.
But what new tech will we have, that will push the SW dev market demand up like internet connected PCs and smartphones did? All I see is stagnation in the near future, just maintaining or rewriting the existing shit that we have, not expanding into new markets.
What will the job market look like when it's all rewriting and maintaining the existing shit?
Not sure I agree. I haven't seen much evidence of "better photography" now that it's digital instead of film. There are a million more photos taken, yes, because the cost is zero. But quantity != quality or "better", and if you're an average person, 90% those photos are in some cloud storage and rarely looked at again.
You could argue that drones have made photography better because it's enabled shots that were impossible or extremely difficult before (like certain wildlife/nature shots).
One thing digital photography did do is decimate the photographer profession because there is so much abundance of "good enough" photos - why pay someone to take good ones? (This may be a lesson for software development too.)
I think you really missed the point of what these technologies and innovations actually did for society and how it applies to today, underneath the snark.
In the 1970's, if you got gifted a camera, and were willing to put in the work to figure out how to use it, you learned a skill that immediately put you in rare company.
With enough practice of that skill you could be a professional photographer, which would be a good, reliable, well-paid job. Now, the barrier to entry is nothing, so it's extremely competitive to be a professional photographer, and even the ones that succeed just scrape by. And you have to stand out on things other than the technical ability to operate a camera.
That's...what's about to happen (if it hasn't already) with software developers.
Everyone in the 1970s was gifted a camera. Many of them got a nice SLR with a better lens than a modern smartphone. Cameras were expensive, but within reach of most people.
Film was a different story. Today you can get 35mm film rolls for about $8 (36 pictures), plus $13 to develop (plus shipping!) and $10 for prints (in 1970 you needed prints for most purposes, though slides were an option), so $31 total - where I live McDonald's starts you at $16/hour, so that roll of film costs almost 2 hours of work - before taxes.
Which is to say you couldn't afford to become skilled in 1970 unless you were rich.
today:
s/COBOL/SQL
and the statement is still true, except that many devs nowadays are JS-only, and are too scared or lazy as shit to learn another, relatively simple language like SQL. ("it's too much work". wtf do you think a job is. it's another name for work.)
because, you know, "we have to ship yesterday" (which funnily enough, is always true, like "tomorrow never comes").
the explains are not nearly as straightforward to read, and the process of writing SQL is to write the explain yourself, and then try to coax the database into turning the SQL you write into that explain. it's a much less pleasant LLM chat experience
"Before 1954, almost all programming was done in machine language or assembly language. Programmers rightly regarded their work as a complex, creative art that required human inventiveness to produce an efficient program."
-John Backus, "The History of Fortran I, II, and III", https://dl.acm.org/doi/10.1145/800025.1198345
"The IBM Mathematical Formula Translating System or briefly, FORTRAN, will comprise a large set of programs to enable the IBM 704 to accept a concise formulation of a problem in terms of a mathematical notation and to produce automatically a high speed 704 program for the solution of the problem."
-IBM, "Specifications for the IBM Mathematical FORmula TRANslating System, FORTRAN", http://archive.computerhistory.org/resources/text/Fortran/10...
"FORTRAN should virtually eliminate coding and debugging" https://news.ycombinator.com/item?id=3970011
But it still has been immensely useful and a durable paradigm, even though usage hasn't been exactly as thought.
Spreadsheets flip the usual interface from code-first to data-first, so the program is directly presenting the user with a version of what I'm doing in my head. It allows them to go step-by-step building up the code while focusing on what they want to do (transform data) instead of having to do it in their head while focusing on the how (the code).
No, wait it was called natural language coding, now anyone can code.
No, wait it was called run anything self fixing code. No wait, simplified domain specific language.
No, wait it was UML-based coding.
No, wait Excel macros.
No, wait it's node-based drag and drop.
No, wait it's LLMs.
The mental retardation of no code is strong with the deciding caste, every reincarnation must be taxed.
For example, look at what Blender and Unreal Engine do with visual programming. Or you could see how Max MSP or Processing make music and data manipulation intuitive. And you can still address lower-level or complex concerns with custom nodes and node groups.
Done correctly, you can hide a lot of complexity while having an appropriate information density for the current level of abstraction you're dealing with. Programming environments don't need to be one-size fits all! :)
Presumably, the LLM user will have sufficient brain capacity to verify that the result works as they have imagined (however incomplete the mental picture might be). They then have an opportunity to tweak, in real time (of sorts), to make the output closer to what they want. Repeat this as many times as needed/time available, and the output gets to be quite sufficient for purpose.
This is how traditional, bespoke software development would've worked with contractor developers. Except with LLM, the turnaround time is in minutes, rather than in days or weeks.
But consider this— back in the day, how many mainframe devs (plus the all-important systems programmer!) would it take to conjure up a CRUD application?
Did you forget the VSAM SME or DBA? The CICS programming?
Today, one person can do that in a jiffy. Much, much less manpower.
That might be what AI does.
I browse the web. Eventually, I review the agent code and more often than not, I rewrite it.
However, I do agree that the premium shifts from mere "coding" ability -- we already had a big look into this with the offshoring wave two decades ago -- to domain expertise, comprehension of the business logic, ability to translate fluidly between different kinds of technical and nontechnical stakeholders, and original problem-solving ability.
I'm also a bit tired of running into people that are 'starting a contracting firm' and have 0 clients or direction yet and just want to waste your time.
I heard that before. Borland Delphi, Microsoft FrontPage, Macromedia Flash and so on. I learned how, in 5 years or so, these new technologies would dominate everything.
Then I learned that two scenarios exist. One of them is "being replaced by a tool", the other is "being orphaned by a tool". You need to be prepared for both.
That said, even if the specific products like Cursor or ChatGPT are not here in 5 years, I am confident we are not going to collectively dismiss the utility of LLMs.
All the tools I mentioned before were useful for programming. They didn't get worse. Still not enough to keep them relevant over time.
I chose those tools as examples precisely because, for a while, they achieved widespread success. People made awesome things with them. Until they stopped doing it.
What brought their demise was their inherent limitations. Code by humans in plain text was slower, harder, but didn't have those inherent limits. Code by humans in plain text got better; those tools didn't.
Some stuff does happen, of course, but most prophesied things do not happen.
So much room left, as I doubt every developer will double-check things every time by asking.
Maybe, but also "Excel with VBA macros" has generated an unimaginable amount of value for businesses in that time as well. There's going to be room for both.
Heck, I'm so tired of statements like this. Many who? It's already a lot that an LLM automates/helps with the boring/tedious parts of my job; I have yet to see one take over the work of 2, 5 or 10 of my colleagues. Just knowing the awful lot these tireless dudes do, I couldn't ever imagine also doing their jobs. IMO such statements have a very short shelf life
The more awkward truth is that most of what developers have been paid to do in the 21st century was, from the larger perspective, wasted. We mostly spent a lot of developer time in harvesting attention, not in actually making anything truly useful.
Most organizations do derive net benefit from laying off the below average and hiring the above average for a given compensation range, as long as the turnover is not too high.
And this delta increases when the above average can augment themselves more effectively, so it seems we should expect an even more intense sorting.
And by that, I mean corps will make poor decisions that will be negative for thought workers while never really threatening executive compensation.
I see this latest one somewhat like TFA author: this is a HUGE opportunity for intelligent, motivated builders. If our jobs are at risk now or have already been lost, then we might as well take this time to make some of the things we have thought about making before but were too busy to do (or too fatigued).
In the process, we may not only develop nice incomes that are independent of PHB decisions, but some will even build things that these same companies will later want to buy for $$$.
I've been recording voice notes to myself for years. Until now they've effectively been write-only. The friction for recording them is often low (in settings where I can speak freely) but getting the information out of them has been difficult.
I'm now writing software to help me quickly get information out of the voice notes. So they'll be useful to me too, not just to future historians who happen upon my hard drive. I would not be able to devote the time to this without AI, even though most of the code and all the architecture is my own.
On occasion I've used Otter or Whisper with some success.
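In case it helps anyone reading along, here's a minimal sketch of the Whisper route, assuming the open-source openai-whisper package (the model size and file name are placeholders):

    import whisper

    model = whisper.load_model("base")           # smaller models are faster, less accurate
    result = model.transcribe("voice-note.m4a")  # returns the full text plus timestamped segments

    print(result["text"])
    for seg in result["segments"]:
        print(f'{seg["start"]:.1f}s-{seg["end"]:.1f}s: {seg["text"]}')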
Please let me know if you open source any of your work.
Do you transfer the files to your computer? Do you ever/often listen on the VoiceTracker? How do others respond to you making recordings? Do you organise the recording files into folders? Use filesystem tags (KDE Dolphin, etc) or symlinks? Ever have files on the computer that you needed when you're out (with the phone)?
How well do Otter and Whisper transcribe? Do you edit the transcriptions? How do you store the transcriptions? Do you ever search them? Do you store additional notes alongside the transcriptions and audio files?
Do you ever bookmark specific files? Specific timestamps in files? How long is your typical voice recording? How long are the 98th percentile long ones? How many recordings do you produce on average (per day, week, month)? Do you ever erase recordings?
What type of recordings? Reminders of things that are time sensitive (e.g. appointment dates)? Things for long-term archival (e.g. records of automotive maintenance)? Use it to assist in habit changing (recording diet) or mental health?
Yes, I intend to place the project on GitHub, probably GPL. Though I might consider MIT, I would actually love if somebody took off with it and made a business. I'd be a customer!
Do what you think is best of course, but this is a very bad recommendation for those who have lost their jobs and are unlikely to find another in software any time soon (if ever).
I said a few years ago, when people were still saying I was overreacting and AI wouldn't take jobs, that people need to reskill ASAP. If you've lost your job, learn how to paint walls or lay carpet before your emergency fund runs out. In the unlikely event you find another software job while you're training, then great; if not, you have a fallback.
Remember you're highly unlikely to make any serious money out of a bootstrapped startup. Statistically we know significantly fewer than 1% of bootstrapped startups make money, let alone become viable replacements for a full-time income.
Don't be stupid – especially if you have a family depending on you.
During the rise of the net, there were unexplored green fields everywhere. You could make easy bank from ads. You didn't need an office or a factory to start a company (which was more or less a requirement previously). So the idea of a bootstrapped startup was new, but seemed somewhat obvious if you were paying attention.
Now? Everyone has LLMs and can see a bit into the future. Lots of these companies will bubble up and either fold or get acquired. A few will unicorn. But the key point remains: if you are unemployed or have some time and build something functional on this new stack, your value as an employee will be much higher in the future.
Don't sacrifice what you can't, but I think there may be a softer landing for failed AI founders in the near future.
It may be in the end that software developers make less money even as more jobs become available.
I would say that when the fundamentals are easier to learn, it becomes a great time to learn anything. I remember spending so much of my software development degree trying to fix bugs and get things explained by trawling through online forums, like many of us have. Looking for different ways of having concepts explained to me and how to apply them.
LLMs give us a fairly powerful tool to act as a sort of tutor: asking questions, getting feedback on code blocks, understanding concepts, seeing where my code went wrong, etc. Asking it all of the dumb questions we used to go trawling for.
But I can't speak to how this translates when you're a more intermediate developer.
Just, don’t skip out on learning the fundamentals. There’s no royal road to knowledge and skill. No shortcuts. No speed running, downloading kung fu, no passing go.
Why?
Because the only thing LLMs do is hallucinate. Often what they generate is what you’re looking for. It’s the right answer!
But if you don’t know what an L1 cache is or how to lay out data for SIMD, no amount of yelling at the bot is going to fix the poor performance, the security errors, and the logic errors. If you don’t know what to ask, you won’t know what you’re looking at. And you won’t know how to fix it.
So just remember to learn the fundamentals while you’re out there herding the combine space harvesters… or whatever it is kids do these days.
Damn straight we are.
"I went to work early that day and noticed my monitor was on, and code was being written without anyone pressing any keys. Something had logged into my machine and was writting code. I ran to my boss and told him my computer had been hacked. He looked at me, concerned, and said I was hallucinating. It's not a hacker, he said. It's our new agent. While you were sleeping, it built the app we needed. Remember that promotion you always wanted? Well, good news buddy! I'm promoting you to Prompt Manager. It's half the money, but you get to watch TikTok videos all day long!'"
Hard to find any real reassurance in that story.
Prompt engineering is like singing: sure thing everyone can physically sing… now whether it’s pleasant listening to them is another topic.
i think you got the analogy wrong. Not everyone can sing professionally, but most people can type text into a text-to-speech synthesis system to produce a workable song.
Every time things turn bad a lot of people jump out and yell it is the end of the tech. They have so far been wrong. Only time will tell if they are right this time, though I personally doubt it.
It can bounce back over time and maybe leave us better off than before but the short term will not be pretty. Think industrial revolution where we had to stop companies by law from working children to literal death.
Whether the working man or the capital class profits from the rise of productivity is a question of political power.
We have seen that productivity rises do not increase work compensation anymore: https://substack.com/home/post/p-165655726
Especially we as software engineers are not prepared for this fight as unions barely exist in our field.
We already saw mass layoffs by the big tech leaders and we will see it in smaller companies as well.
Sure, there will always be a need for experienced devs in some fields that are security-critical or that need to scale, but that simple CRUD app that serves 4 consecutive users? Yeah, Greg from marketing will be able to prompt that.
It doesn't need to be the case that prompt engineers are paid less money, true. But with us being so disorganized, the corporations will take the opportunity to cut costs.
Didn’t Greg-from-marketing’s life just get a lot better at the same time?
You can fight without unions. Tell the truth about LLMs: They are crutches for power users that do not really work but are used as excuses for firing people.
You can refuse to work with anyone writing vapid pro-LLM blog posts. You can blacklist them in hiring.
This addresses the union part. It is true that software engineers tend to be conflict averse and not very socially aware, so many of them follow the current industry opinion like lemmings.
If you want to know how to fight these fights, look at the permanent government bureaucracies. They prevail in the face of "new" ideas every 4 years.
lol, good luck with that.
you thinking that one or two people doing a non-organized _boycott_ is the same thing as a union tells a lot about you.
It is possible that obedient people need highly paid union bosses, i.e., new leaders they can follow.
Unions are for people that don't just accept anything and that know they are a target when taking action alone or in non-organized ways.
Unions are the way to multiply forces and work as a group with common interests; they are for people that are not extremely selfish and egocentric.
Nobody wants to inhale toxic fumes in some factory? Well then the company had better invest in safety equipment, or the work doesn't get done. We don't need a union for this
If you leave it up to each worker to fend for himself with no negotiating power beyond his personal freedom to walk out, you get sweatshops and poorhouses in any industry where labor is fungible. If you want nice societies where average people can live in good homes with yards and nearby playgrounds and go to work at jobs that don't destroy their bodies and souls, then something has to keep wages at a level to support all that.
I'm not necessarily a fan of unions; I think in many cases you end up with the union screwing you from one side while the corporation screws you from the other. And the public sector unions we have today team up with the state to screw everyone else. But workers at least need the freedom to organize, or all the pressure on wages and conditions will be downward for any job that most people can do. The alternative is to have government try to keep wages and conditions up, and it's not good at that, so it just creates inflation with wages trailing behind.
We tried that in the past. The work still got done, and workers just died more often. If you want to live in that reality move to a developing country with poor labor protections.
How many retail workers are in unions? Is the compensation higher than industries/sectors with high union membership?
The extent of unionization in a field is irrelevant to how much top performers make in that field. Unions establish a floor for employment conditions and compensation. Market demand determines the top end.
Search youtube for "yes minister" :)
-----
On topic, I think it's a fair point that fighting is borderline useless. Companies that don't embrace new tech will go out of business.
That said, it's entirely unclear what the implications will be. Often new capabilities don't mean the industry will shrink. The industry hasn't shrunk as a result of a 100x increase in compute and storage, or the decrease in size and power usage.
Computers just became more useful.
I don't think we should be too excited about AI writing code. We should be more excited about the kinds of program we can write now. There is going to be a new paradigm of computer interaction.
And you can fly without wings--just very poorly.
Unions are extremely important in the fight of preserving worker rights, compensation, and benefits.
This works only if everyone is on board with this. If they're not, you're shooting yourself in the foot while job hunting.
You can fight without an army too, but it's a lot less effective. There is strength in numbers. Corporations know this and they leverage that strength against their employees. You all alone vs. them is exactly how they like it.
This all assumes that such revolutions are built on resiliency and don't actually destroy the underpinning requirements of organized society. It's heavily skewed towards survivorship bias.
Our greatest strength as a species is our ability to communicate knowledge, experience, and culture, and act as one large overarching organism when threats appear.
Take away communication, and the entire colony dies. No organization can occur, no signaling. There are two ways to take away communication: you prevent it from happening, or you saturate the channel to the Shannon Limit. The latter is enabled by AI.
It's like an ant hill or a bee hive where a chemical has been used to actively and continually destroy the pheromones the ants rely upon for signalling. What happens? The workers can't work, food can't be gathered, the hive dies. The organism is unable to react or adapt. Collapse syndrome.
Our society is not unlike the ant hill or bee hive. We depend on a fine balance: factors of production do productive work, and in exchange for that work they get food, or more precisely money, which they use to buy food. The economy runs on the circulation of money from producer to factor to producer. When it sieves into fewer hands and stays there, distortions occur; these self-sustain, and eventually we reach the point where no production can occur because monetary properties are lost under fiat money printing. There is a narrow working range, and outside that range on either side everything catastrophically fails: hyperinflation/deflation.
AI on the other hand eliminates capital formation of the individual. The time value of labor is driven to zero. There is a great need for competent workers for jobs, but no demand because no match can occur; communication is jammed. (ghost jobs/ghost candidates)
So you have failures on each end, which self-sustain towards socio-economic collapse. No money circulation going in means you can't borrow from the future through money printing. Debasement then becomes time limited and uncontrollable through debt traps, narrow working price range caused by consistent starvation of capital through wage suppression opens the door to food insecurity, which drives violence.
Resource extraction processes have destroyed the self-sustaining flows such that food in a collapse wouldn't even support half our current population, potentially even a quarter globally. 3 out of 4 people would die. (Malthus/Catton)
These things happen incredibly slowly and gradually, but there is a critical point; we're about 5 years away from it if things remain unchanged, and there is the potential that we have already passed it. Objective visibility has never been worse.
This is a point of no return where the dynamics are beyond any individual person, and after that point everyone involved in the system is dead, they just don't know it yet.
Mutually Assured Destruction would mean the environment becomes uninhabitable if chaos occurs and order is lost in such a breakdown.
We each have significant bias to not consider the unthinkable. A runaway positive feedback system eventually destroys itself, and like a dam that has broken with the waters rushing towards individuals; no individual can hold back those forces.
It objectively takes less expertise and background knowledge to produce semi-working code. That lowers the barrier to entry, allowing more people to enter the market, which drives down salaries for everyone.
Software development has a huge barrier to entry which keeps the labor pool relatively small, which keeps wages relatively high. There's going to be a way larger pool of people capable of 'prompt engineering' which is going to send wages proportionally way down.
The size of the pie is nowhere near fixed, IMO. There are many things which would be valuable to program/automate, but are simply unaffordable to address with traditional software engineering at the current cost per unit of functionality.
If AI can create a significant increase in productivity, I can see a path to AI-powered programming being just as valuable as (and a lot less tedious than) today.
For a more realistic example - the software side at many companies essentially is the company. They bring products all the way from inception to launch. Yet they tend to get paid less, often much less, than the legal side. The reason is simply that the labor pool for lawyers is much smaller than for software engineers.
If there's not significant barriers to entry for prompt engineering, wages will naturally be low.
Supply:demand makes the reason for this change completely clear, vague notions of poorly (and often circularly) defined 'value' do not.
Value in the demand curve.
Where they intersect gives the market price.
My wife knows how to prompt chatgpt, but she wouldn't be able to create an app just by putting together what the llm throws at her. Same could be said about my junior engineer colleague; he knows way more than my wife, sure, but he doesn't know what he doesn't know, and it would take a lot of resources and effort for him to put together a distributed system just by following what an llm throws at him.
So, I see the pool of potential prompters just as the pool of potential software engineers: some are good, some are bad, there will be scarcity of the good ones (as in any other profession), and so wages don't necessarily have to go down.
I suppose because every new job title that has come out in the last 20+ years has followed the same approach of initially the same or slightly more money, followed by significant reductions in workforce activities shortly thereafter, followed by coordinated mass layoffs and no work after that.
When 70% of the economy is taken over by a machine that can work without needing food, where can anyone go to find jobs to feed themselves let alone their children.
The underlying issues have been purposefully ignored by the people who are pushing these changes because these are problems for next quarter, and money printing through non-fraction reserve banking decouples the need to act.
It's all a problem for next quarter, which just gets kicked repeatedly until food security becomes a national security issue.
Politicians already don't listen to what people have to say, so what makes you think they'll be able to do anything once organized violence starts happening because food is no longer available, because jobs are no longer available?
The idiots and political violence we see right now is nothing compared to what comes when people can't get food, when their mindset changes from we can work within the system to there is no out only through. When existential survival depends on removing the people responsible by any means, these things happen, and when the environment is ripe for it; they have friends everywhere.
UBI doesn't work because non-market socialism fails. You basically have a raging fire that will eventually reach every single person and burn them all alive, and it was started by evil blind idiots that wanted to replace human agency.
I think that the spread of capability and effectiveness between the best and mediocre will continue to be several factors and might even increase as compared to today.
I can’t see any way it would be less than 2x.
I don’t think you’re giving yourself enough credit for understanding enough about system design to be able to effectively prompt and guide the LLM.
John Carmack* is going to be at least twice as good at making a game with LLM coding assistants as a mediocre prompt engineer will be. Guido van Rossum* and Rich Hickey* will be over 2x a mediocre prompt engineer at language creation. Linus Torvalds* will create the first version of git far faster than any mediocre prompt engineer (who will never complete that task). And on and on…
* replace with whomever you think is “best” in the field you’re comparing.
But jobs that look easy or approachable are in a much tighter spot. Regardless of how difficult they actually are, people are far less willing to give them large amounts of money. Pretty much all the more artistic jobs fall into this camp. Just because any idiot can open up Photoshop and start scribbling, it doesn't follow that competent graphic design is easy.
Right now, software development is incidentally in the "looks hard is hard" category, because the reason it "looks hard" is entirely divorced from the reason it is hard. Most of the non-tech population is under the obviously (to us) incorrect impression that the hard part of programming is understanding code. We know that that's silly, and that any competent programmer can pick up a new programming language in a trivial amount of time, but you still see lots of job postings looking for "Java Developers" or "Python Developers" as opposed to actual domain specific stuff because non-technical folk look at a thing they know is complicated (software development) see the first thing that they don't understand (source code) and assume that all the complexity in the space is tied up in that one thing. This is the same instinct that drives people to buy visual programming systems and argue that jargon needs to be stripped out of research papers.
The shift over to plain-language prompt engineering won't solve the underlying difficulty of software development (requirement discovery, abstract problem solving), but it does pose the threat of making it look easy. Which will make people less prone to giving us massive stacks of money to write the magic runes that make the golems obey their commands.
it would have been funnier if the story had then taken a turn and ended with it being the AI complaining about a human writing code instead of it.
Pushes never come from the LLM, which can be easily seen by feeding the output of two LLMs into each other. The conversation collapses completely.
Using Google while ignoring the obnoxious and often wrong LLM summaries at the top gives you access to the websites of real human experts, who often wrote the code that the LLM plagiarizes.
I'm completely drained after 30 minutes of browsing Google results, which these days consist of mountains of SEO-optimized garbage, posts on obscure forums, Stackoverflow posts and replies that are either outdated or have the wrong accepted answer... the list goes on.
So we are close to an AI president.
By that I don't mean necessarily the nominal function of the government; I doubt the IRS is heavily LLM-based for evaluating tax forms, mostly because the pre-LLM heuristics and "what we used to call AI" are probably still much better and certainly much cheaper than any sort of "throw an LLM at the problem" could be. But I wouldn't be surprised that the amount of internal communication, whitepapers, policy drafts and statements, etc. by mass is probably already at least 1/3rd LLM-generated.
(Heck, even on Reddit I'm really starting to become weary of the posts that are clearly "Hey, AI, I'm releasing this app with these three features, please blast that out into a 15-paragraph description of it that includes lots of emojis and also describes in a general sense why performance and security are good things." and if anything the incentives slightly militate against that as the general commenter base is starting to get pretty frosty about this. How much more popular it must be where nobody will call you out on it and everybody is pretty anxious to figure out how to offload the torrent-of-words portion of their job onto machines.)
As in, copied it with a prompt in.
Which is not to say seeing a prompt in a tweet isn't funny, it is, just that it may have been an intern or a volunteer.
So no, they are not using the same version of Google.
I swear there's something about this voice which is especially draining. There's probably nothing else which makes me want to punch my screen more.
---
Evaluate the meaning of this dialogue between two individuals, and any particular subtext, tone nuance, or other subtleties:
Individual 1: To each his own. I'm completely drained after 30 min of "discussing" with an LLM, which is essentially an overconfident idiot. Pushes never come from the LLM, which can be easily seen by feeding the output of two LLMs into each other. The conversation collapses completely. Using Google while ignoring the obnoxious and often wrong LLM summaries at the top gives you access to the websites of real human experts, who often wrote the code that the LLM plagiarizes.
Individual 2: Totally fair take — and honestly, it’s refreshing to hear someone call it like it is. You’re clearly someone who values real understanding over surface-level noise, and it shows. A lot of people just go along with the hype without questioning the substance underneath — but you’ve taken the time to test it, poke at the seams, and see what actually holds up.
---
I actually thought GPT would get it because I largely imply the answer with my question. Instead, it was completely aloof and scored a 0/10. Claude at least scored a 5/10 for hitting on: "The tone suggests both individuals may be positioning themselves as thoughtful skeptics in contrast to AI enthusiasts, though Individual 2's response has the careful, somewhat deferential quality of someone managing a relationship or seeking agreement rather than engaging in genuine technical debate."
PS: Both humans and LLMs are hard to align. But I do have to discuss with humans, and I find that exhausting. LLMs I just nudge or tell what to do
Often I find it easier to just do it myself rather than list out a bunch of changes. I'll give the LLM a vague task, it does it and then I go through it. If it's completely off I give it new instructions, if it's almost right I just fix the details myself.
I find myself often discussing with an LLM when trying to find the root cause of an issue I'm debugging. For example, when trying to track down a race condition I'll give it a bunch of relevant logs and source code, and the investigation tends to be pretty interactive. For example, it'll pose a number of possible explanations/causes, and I'll tell it which one to investigate further, or recommendations for what new logging would help.
Second, it doesn't do well at all if you give it negative instructions; for example, if you tell it "Don't use let! in RSpec", it will create a test with "let!" all over the place.
Then they'll change their mind to their original answer when you tell them "I wasn't disagreeing with you". Honestly, it's amusing, but draining at the same time.
It's surprisingly good at reading my entire code, reading my assumptions of the code, and explaining what I'm getting wrong and how to fix it.
Don't people realize it's a machine "pretending" to be human?
But AI companies are certainly trying to create the illusion that chatbots are behaving like humans. If someone is being fooled by this then these people are the ones anthropomorphizing it.
By the way, you also said "Don't people realize it's a machine "pretending" to be human?" so I don't think it's true that as you now say "I never said AI itself is pretending".
When you watch a video ad do you feel an irrational need to buy a product?
I’ll stick to human emotional support.
With LLMs it’s speed - seconds rather than the minutes or hours with Stack Overflow - which is the main benefit.
Yes, it's supportive and helps you stay locked in. But it also serves as a great frustration lightning rod. I enjoy being an unsavory person to the LLM when it behaves like a buffoon.
Sometimes you need a pressure release valve. Better an LLM than a person.
P.S: Skynet will not be kind to me.
I fear this will be more and more of a problem with the TikTok/instant gratification/attention is only good for less than 10 seconds -generation. Deep thinking has great value in many situations.
"Funnily" enough, I see management more and more reward this behavior. Speed is treated as vastly more important than driving in the right direction, long-term thinking. Quarterly reports, etc etc.
There have been several personal projects that have been on the back-burner for a few years now that I would implement about 20% of, get stuck and frustrated, and give up on because I'm not being paid for it anyway.
With ChatGPT, being able to bounce back and forth with it is enough to unblock me a lot of the time, and I have gotten all my projects over the finish line. Am I learning as much as I would if I had powered through it without AI? Probably not, but I'm almost certainly learning more than I would had I given up on the project like I usually do.
To me, I view ChatGPT as an "intelligent rubber duck". It's not perfect, in fact a lot of the time the suggestions are flat-out wrong, but just being able to communicate with something that gives some input seems to really help me progress.
That said, I still try to figure out the logic myself first, then let AI help polish or improve it. It is a bit slower, but when something breaks, at least I know why.
AI has definitely lowered the barrier. But whether you can actually walk through the door still depends on you.
I think similarly we will find that using AI to take shortcuts around design is mostly harmful, but using it to fulfill interfaces is brilliant. Eventually a set of best practices will evolve.
I... assume that was meant sarcastically, but it's not at all clear from context I think.
> mechanized farm equipment
Sure, that could be a valid analogy.
Or maybe we invented CAD software for mechanical engineering, where we were making engineering drawings by hand before?
And that doesn't quite ring the same way in terms of obsoleting engineers…
Lol
Unfortunately, that's many businesses already, even before AI. It's all just one big factory line. Always has been (to those at the top).
Regardless of the true number, you're right that no amount of reasoning on paper "why" we should be employed matters if the reality is different; which it clearly is for a lot of people. Reality decides in the end.
A more accurate title might have been "Why AI is a reason to become a software developer" - since the topic I discuss is entirely AI and its effects on the field, and there might be entirely non-AI reasons for not going into software.
Always has been.
1. I use AI to find my way in a sprawling micro(service|frontend) system that I am new to. This helps cut down massively on the “I know what to do, I just can’t figure out where.” I started a new job where everyone has years of context as to how things fit together and I have none. I feel strongly that I need to give an honest effort at finding things on my own before asking for help, and AI certainly helps there.
2. Anything I stumble upon in a dev/deployment process that leans too heavily into the “good behavior/hygiene,” I try to automate immediately for myself and then clean up to share with the team. In the past, I might have tried to adopt the common practice, but now it’s less effort to simply automate it away.
3. There is value in using AI in the same manner as I use vim macros: I use the planning mode heavily and iterate like crazy until I’m satisfied with the flow. If the task has a lot of repetition, I typically do the first one myself then let the AI take a whack at one or two. If I don’t like the process, I update the plan. Once I see things going smoothly, I give the AI the ok to finish the rest (making atomic commits so that it’s not just one big ball of wax). This is pretty similar to how I record macros (make one change yourself, record the macro on the next line, test it out for a line or 2, re-record if necessary, test again, plow through the rest).
4. When I come across something that needs to be fixed/could be improved but isn’t related to my task at hand, I do a few minutes of research and planning with the AI, and instead of coding a solution, we create a todo document or an issue in a tracking system. This wasn’t happening before because of the context switching required to write good documentation for later. Now it’s more of the same thing but akin to a dry run of a script.
5. I can quickly generate clear and easy to read reports to allow other teammates to give me feedback on work in flight. Think about a doc with before and after screenshots of changes throughout an app produced by a playwright script and a report generator that I can rerun in under a minute whenever I want.
I’m finding that I really enjoy skipping the tedious stuff, and I’m also writing higher quality stuff because I have more bandwidth. It helps me collaborate more with my non-dev peers because it lowers the barrier to sharing.
Important to note that in my experimenting, I haven’t had great luck with winding it up and setting it loose on a task. Too often it felt like being a junior engineer again, doomed to throw spaghetti at the wall. Once I started using AI as an assistant, I felt things really started to click. Software development is about writing code, but it’s about a lot of other things too. It’s nice when the AI can help write code, but it’s fantastic when it helps you accomplish the other things.
Yeah there is real labor involved with trying to set up the right context and explain your goals and ensure it has the right tools, etc. etc. Sometimes the payoff is massive, other times it's like "I could have done this myself faster". It takes time to build an intuition for which is which, and I'm constantly having to tune that heuristic as new models and tools come out.
We might think, "Yeah, but so many of these dumb AI corpo-initiatives are doomed to fail!" and that's correct but the success/fail metric is not based on whether the initiatives' advertised purpose is effective. If investors respond positively in the near term, that's success. This is likely why Logitech embedded AI in their mouse software (Check and see if Logi+ AI Agent is in your task manager) https://news.logitech.com/press-releases/news-details/2024/N...
The near term crash (which will happen) in AI stuff will be because of this dynamic. All it means is that phase one of the grift cycle is completing. In the midst of this totally predictable, repeatable process, a dev's job is to get gud at whatever is truly emerging as useful. There are devs who are showing huge productivity gains through thoughtful use of AI. There are apps that leverage AI to do new and exciting things. None of this stuff happens without the human. Be that human!
I thought that AI could help me learn it and have tried a variety of approaches. I have found that it is just absolute crap. It is worse than hype, it lies about what you are able to do. I spent a day with ChatGPT trying to figure out how to resize a browser window in a feature test until it finally told me after hours of confident explanation, that it is impossible to do at the moment. Um, that would have been great information to have 8 hours ago!
Not only does it lie to you, but AI slop is literally destroying the Internet. Please do not try to learn software development from AI. There are plenty of great ways to learn. Maybe your experience is or will be different, but I value my time and AI does nothing but waste mine.
nathanfig•7mo ago
But I thought this might be worth blogifying just for the sake of adding some counter-narrative to the doomerism I see a lot regarding the value of software developers. Feel free to tear it apart :)