Edit: What I mean by this is there may be some circumstantial evidence (less hiring for juniors, more AI companies getting VC funding). We currently have no _hard_ evidence that programming has had a substantial speed increase/deskilling from LLMs yet. Any actual __science__ on this has yet to show this. But please, if you have _hard_ evidence on this topic I would love to see it.
I definitely think a lot of junior tasks are being replaced with AI, and companies are deciding it's not worth filling junior roles at least temporarily as a result.
I think team expansion is being reduced as well. If you took a dev team of 5, armed them all with Claude Code + training on where to use it and where not to, I think you could get the same productivity as hiring 2 additional FTE software devs. I'm assuming your existing 5 devs fully adopt the tool and don't reject it like a bad organ transplant. Maybe an analogy could be the invention of email reducing the need for corporate typing pools, and therefore fewer jr. secretaries (typists) are hired.
I'm just guessing that being a secretary is on the career progression path of someone in the typing pool, but you get the idea.
Edit: one thing I missed in my email analogy is that when email was invented it was free and available to anyone who could set up sendmail or some other MTA.
If anything, the expectations for an individual developer have never been higher, and now you’re not getting any 22-26 year olds with enough software experience to be anything but a drain on resources when the demand for profitability is yesterday.
Maybe we need to go back to ZIRP if only to get some juniors back on to the training schedule, across all industries.
For other insanely toxic and maladaptive training situations, also see: medicine in the US.
One last thing to point out before my lunch is over: I think AI coding agents are going to hit services/marketplaces like Fiverr especially hard. I think the AI agents are the new gig economy with respect to code. I spent about $50 on Claude Code pay-as-you-go over the past 3 days to put together a website I've had in the back of my mind for months. Claude Code got it to a point where I can easily pick it up and run with it to finish it out over a few more nights/weekends. UI/UX is especially tedious for me, and Claude Code was able to take my vague descriptions and make the interface nicely organized and contemporary. The architecture is perfectly reasonable for what I want to do (Auth0 + React + Python (Flask) + Postgres + an OAuth2 integration to a third party). It got all of that about 95% right on the first try... for $50! Services/marketplaces like Fiverr have to be thinking really hard right now.
I am not sure how it is for others, but for me it's a lot harder to read a chunk of code to understand and verify it than to take the problem head-on with code and then maybe consult an LLM.
If you only care about a single metric you can convince yourself to make all kinds of bad decisions.
> Nobody is lying.
Nobody is being honest either. That happens all the time.
The newbie prototype was never all that hard. You could, in my day, have a lot of fun that first week with Dreamweaver, Visual Basic, or cargo-culting HTML.
There’s nothing wrong with this.
But to get much further than that ceiling you probably needed to crack a book.
“Build me a recipe app”, sure.
Building anything substantial has consistently failed for me unless you take Claude or Codex by the hand and guide them through it step by step.
> Every week there seems to be a new tool that promises to let anyone build applications 10x faster. The promise is always the same and so is the outcome.
Is the second sentence true? Regardless of AI, I think that programming (game development, web development, maybe app development) is easier than ever? Compare modern languages like Go & Rust to C & C++, simply for their ease-of-compilation and execution. Compare modern C# to early C#, or modern Java to early Java, even.
I'd like to think that our tools have made things easier, even if our software has gotten commensurately more complicated. If they haven't, what's missing? How can we build better tools for ourselves?
Also, I'm not sure anyone was making 10x claims about the tools you cite.
Think of the game hits from the '90s. A room full of people made games which shaped a generation. Maybe it was orders of magnitude harder then, but today it takes multiple orders of magnitude more people to make them.
Same is true for websites. Sure, the websites were dingy with poor UX and oodles of bugs... but the size of the team required to make them was absolutely tiny compared to today.
Things are simultaneously the best they've ever been, and the worst they've ever been, it's a weird situation to be in for sure.
But truthfully, orders of magnitude more powerful hardware was the real unlock.
Why are Slack and Discord popular? Because it's possible to use multiple gigabytes of RAM for a chat client.
25 years ago? Multiple gigabytes of RAM put your machine firmly in the "I have unlimited money and am probably a server doing millions of things" class.
I think this is more about rising consumer expectations than rising implementation difficulty.
You needed much more marketing budget in 2000 than today; I think you have that reversed. There is a reason indie basically wasn't a thing until Steam could do marketing for you.
The market demands not just better, more complicated games, but mostly much higher art budgets. Go look at, say, Super Metroid, and compare it to Team Cherry's games in the same genre, made mostly by three people. Compare Harvest Moon from the 90s with Stardew Valley, made by one person. Compare old school Japanese RPGs with Undertale, again with a tiny team, whose lead developer is also the lead music composer. And it's not like those games didn't sell: every game I mentioned destroyed the old games in revenue, even though the per-unit price was tiny. Silksong managed to overload Steam on release!
And it's not just games. I was a professional programmer in the 90s. My team's job involved mostly work that today nobody would ever write, because libraries just do it for you. We just have higher demands than we ever did.
A pretty obvious one is that there are orders of magnitude more players these days and many more options for how they can play. Hell, there are even a few billion more people on the planet, so it's more than just the percentage of people owning systems that can play games. I'll let you think about others because I want to focus on what the parent said, but if top selling games weren't making at least an order of magnitude more money then that'd be a very concerning sign.
The parent said hardware was a big unlock and this is undoubtedly true. I don't just mean that with better hardware we can do more and I don't think the parent did either. Hardware is an unlock because it enables you to be incredibly lazy. If your players have powerful hardware you can get away with thinking less about optimization. You can get away with thinking less about memory management. You can get away with thinking less about file sizes.
The hardware inherently makes game development easier. We all know the Quake fast inverse square root for a reason. Game development used to be famous for optimization for a reason. It was absolutely necessary. Many old games are famous for pushing the limits of the hardware. Hardware was the major bottleneck.
But then look at things like you mentioned. Undertale is also famous for its poor code quality. All the dialogue in a single file using a bunch of switch statements? It's absurd!
But this is both a great thing and a terrible thing. It's great because it unlocks the door for so many to share their stories and games. But it's terrible because it wastes money, money that the consumer pays. It encourages a "good enough" attitude, where the bar keeps decreasing and faster than hardware can keep up. It is lazy and hurts consumers. It makes a naïve assumption that there's only one program running on a system at a time.
It's an attitude not limited to the game industry. We ship minimal viable products. The minimum moves, and not always up. It goes down when hardware can pick up the slack or when consumers just don't know any better.
Things like electron are great, since they can enable developers to get going faster. But at the same time it creates massive technical debt. The fact that billion dollar companies use a resource hog like that is not something to be proud of, it should be mocked and shamed. Needing a fucking browser to chat or listen to music?! It's nothing short of absurd! Consumers don't know any better but why devs celebrate this is beyond me.
People should move fast and break things. It's a good way to innovate and figure out how things work. But it has a cost. It leaves a bunch of broken stuff in its wake. Someone has to deal with that trash. I don't care much about the startup breaking some things but I sure do care when it's the most profitable businesses on the planet. They can pay for their messes. They create bigger messes. FFS, how does a company like Microsoft solve slow file browsers by just starting it early and running in the background?! These companies do half a dozen rounds of interviews and claim they have the best programmers? I call bullshit.
That's something that seems to eat up AAA games: each person they add contributes less than a full person due to communication overhead and inefficiencies. That, and massive amounts of created artwork/images/stories.
There are a lot of indie game studios that make games much more complicated than what was made in the 90s, and have far fewer people than AAA teams.
And ya, tons of memory has unlocked tons of capability.
Causes: Bubble economics, perverse incentives, lack of objectivity, and more.
The good news is that huge competitive advantages are available to those who refuse to accept norms without careful evaluation.
Crapping out code that does the thing was never the hard part, the hard part is reading the crap someone did and changing it. There are tradeoffs here, perhaps you might invest in modeling up front and use more or less formal methods, or you're just great at executing code over and over very fast with small adjustments and interpreting the result. Either way you'll eventually produce something robust that someone else can change reasonably fast when needed.
The additions to Java and C# are a lot about functional programming concepts, and we've had those since forever way back in the sixties. Map/reduce/filter are old concepts, and every loop is just recursion with some degree of veiling, it's not a big thing whether you piece it together in assembly or Scheme, typing it out isn't where you'll spend most of your time. That'll be reading it once it's no longer yesterday that you wrote it.
If I were to invent a 10x-meganinja-dev-superpower-tool it would be focused on static and execution analysis, with strong extensibility in a simple DSL or programming language, and decent visualisation APIs. It would not be 'type here to spin the wheels and see what code drops out'; that part is solved many times over already, in Wordpress, JAXB-oriented CRM and so on. The ability to confidently implement change in a large, complex system is enabled by deterministic, immediate analysis and visualisation.
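To make that concrete, here is a rough sketch in Python of what I mean by rules as a simple DSL over deterministic analysis; all the names (rule, analyze, the "long-function" check) are made up for illustration, not an existing tool:

    import ast
    from pathlib import Path
    from typing import Callable

    RULES: dict[str, Callable[[ast.AST, Path], list[str]]] = {}

    def rule(name: str):
        """Register an analysis rule under a short name (the 'simple DSL' part)."""
        def wrap(fn):
            RULES[name] = fn
            return fn
        return wrap

    @rule("long-function")
    def long_function(tree: ast.AST, path: Path) -> list[str]:
        # Flag functions with suspiciously many top-level statements.
        return [
            f"{path}:{node.lineno} '{node.name}' has {len(node.body)} statements"
            for node in ast.walk(tree)
            if isinstance(node, ast.FunctionDef) and len(node.body) > 50
        ]

    def analyze(root: str) -> list[str]:
        # Deterministic and immediate: parse every file, run every registered rule.
        findings: list[str] = []
        for path in Path(root).rglob("*.py"):
            tree = ast.parse(path.read_text(), filename=str(path))
            for check in RULES.values():
                findings.extend(check(tree, path))
        return findings

    for finding in analyze("."):
        print(finding)

The point isn't this particular check; it's that the output is reproducible and explainable, which is exactly what the 'spin the wheels' tools don't give you.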
Then there are the soft skills. While you're doing it you need to keep bosses and "stakeholders" happy and make sure they do not start worrying about the things you do. So you need to communicate reliably and clearly, in a language they understand, which is commonly pictures with simple words they use a lot every day and little arrows that bring the message together. Whether you use this or that mainstream programming language will not matter at all in this.
I don't really know if AI makes programming easier or harder. On one side, you can explore any topic with AI, which is a super powerful ability when it comes to learning. On the other side, the temptation to offload your work to AI is big, and if you do that, you'll learn nothing. So it comes down to the type of person, I guess. Some people will use AI to learn and some people will use AI to avoid learning; both behaviours are empowered.
I have a simple and useless answer for how to solve that. Throw it all out. Start from scratch. Start with a simple CPU. Start with a simple OS. Start with simple protocols. Do not write frameworks. Make the number of layers between your code and the hardware as small as possible, so it's actually possible to understand it all. Right now the number of abstraction layers is too big. Of course nobody's going to do that; people will add more abstraction layers and it'll work, it always works. But that sucks. The software stack was much simpler 20-30 years ago. We didn't even have source control (I was the young developer who introduced Subversion into our company), but we still delivered useful software.
Or does it just seem that way because you've had a whole lifetime to digest it one little bit at a time so that it all seems intuitive now? If "easy to understand and get started with" were the bar for programming capability, we'd have stopped with COBOL.
Except that at least for game development, C and C++ are still the go-to tools?
I'm not saying that they can actually do that per se; switching costs are so low that if you are doing worse than an existing competitor, you'd lose that volume. Nor am I saying they are deliberately bilking folks -- I think it would be hard to do that without folks cottoning on.
But, I did see an interesting thread on Twitter that had me pondering [1]. Basically, Claude Code experimented with RAG approaches over the simple iterative grep that they now use. The RAG approach was, in their words, brittle and hard to get right, and just brute forcing it with grep was easier to use effectively. But Cursor took the other approach and made semantic searching work for them, which made me wonder about the intrinsic token economics for both firms. Cursor is incentivized to minimize token usage to increase spread from their fixed seat pricing. But for Claude, iterative grep bloating token usage doesn't harm them and in fact increases gross tokens purchased, so there is no incentive to find a better approach.
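For anyone who hasn't seen it described, here's a toy sketch of what such an iterative-grep loop looks like from the token-economics angle; ask_model() is a stand-in for the real LLM API call, and the details are my guesses, not Anthropic's actual implementation:

    import subprocess

    def grep(pattern: str, root: str = ".") -> str:
        # Raw grep hits; every character returned here ends up as billable context.
        result = subprocess.run(
            ["grep", "-rn", "--include=*.py", pattern, root],
            capture_output=True, text=True,
        )
        return result.stdout

    def ask_model(prompt: str) -> str:
        raise NotImplementedError("stand-in for the real LLM API call")

    def locate(question: str, max_rounds: int = 5) -> str:
        context = ""
        pattern = ask_model(f"Suggest a grep pattern for: {question}")
        for _ in range(max_rounds):
            context += grep(pattern)  # context (and token spend) grows every round
            reply = ask_model(
                f"Question: {question}\nGrep output so far:\n{context}\n"
                "Reply DONE:<answer> or NEXT:<new grep pattern>."
            )
            if reply.startswith("DONE:"):
                return reply[len("DONE:"):]
            pattern = reply[len("NEXT:"):]
        return "gave up"

Every extra round is pure upside for a vendor selling tokens and pure cost for one selling flat seats, which is the asymmetry I'm wondering about.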
I am sure there are many instances of this out there, but it does make me inclined to wonder if it will be economic incentives rather than technical limitations that eventually put an upper limit on closed weight LLM vendors like OpenAI and Claude. Too early to tell for now, IMO.
[1] https://x.com/antoine_chaffin/status/2018069651532787936
As someone with 0 (zero) Swift skills and who has built a very well functioning iOS app purely with AI, I disagree.
AI made me infinitely faster because without it I wouldn't even have tried to build it.
And yes, I know the limits and security concerns and understand enough to be effective with AI.
You can build functioning applications just fine.
It's complexity and novel problems where AI _might_ struggle, but not every piece of software is complex or novel.
This is so frustratingly common.
That doesn't match my experience. I think AI tools have their own skill curve, independent of the skill curve of "reading/writing good code." If you figure out how to use the AI tools well, you'll get even more value out of them with expertise.
Use AI to solve problems you know how to solve, not problems that are beyond your understanding. (In that case, use the AI to increase your understanding instead.)
Use the very newest/best LLM models. Make the AI use automated tests (preferring languages with strict type checks). Give it access to logs. Manage context tokens effectively (they all get dumber the more tokens in context). Write the right stuff and not the wrong stuff in AGENTS.md.
I'd rather spend my time thinking about the problem and solving it than thinking about how to get some software to stochastically select language that appears like it is thinking about the problem, to then implement a solution I'm going to have to check carefully.
Much of the LLM hype cycle breaks down into "anyone can create software now", which TFA makes a convincing argument for being a lie, and "experts are now going to be so much more productive", which TFA - and several studies posted here in recent months - show is not actually the case.
Your walk-through is the reason why. You've not got magic for free, you've got something kinda cool that needs operational management and constant verification.
This is how I see hand-building software going.
There's such a wide divergence of experience with these tools. Often, people will say that anyone finding incredible value in them must not be very good. Or that they fall down when you get deep enough into a project.
I think the reality is that to really understand these tools, you need to open your mind to a different way of working than we've all become accustomed to. I say this as someone who's made a lot of software, for a long time now. (Quite successfully too!)
In some ways, while the ladder may be getting pulled up on junior developers, I think they're also poised to be able to really utilize these tools in a way that those of us with older, more rigid ways of thinking about software development might miss.
When tools prove their worth, they get taken into the normal way software is produced. Older people start using them, because they see the benefit.
The key thing about software production is that it is a discussion among humans. The computer is there to help. During a review, nobody is going to look at what assembly a compiler produces (with some exceptions of course).
When new tools arrive, we have to be able to blindly trust them to be correct. They have to produce reproducible output. And when they do, the input to those tools can become part of the conversation among humans.
(I'm ignoring editors and IDEs here for the moment, because they don't have much effect on design, they just make coding a bit easier).
In the past, some tools have been introduced, got hyped, and faded into obscurity again. Not all tools are successful, time will tell.
That said, I don't think this negates what TFA is trying to say. The difficulty with software has always been around focusing on the details while still keeping the overall system in mind, and that's just a hard thing to do. AI may certainly make some steps go faster, but it doesn't change much about what makes software hard in the first place. For example, even before AI, I would get really frustrated with product managers a lot. Some rare gems were absolutely awesome and worth their weight in gold, but many of them just never were willing to go to the details and minutiae that's really necessary to get the product right. With software engineers, if you don't focus on the details the software often just flat out doesn't work, so it forces you to go to that level (and I find that non-detail-oriented programmers tend to leave the profession pretty quickly). But I've seen more than a few situations where product managers manage to skate by without getting to the depth necessary.
Unfortunately, since the tech industry still largely skews young, reticence to chase every new hype cycle also feeds into the perception of an inability to learn new things, even after many prove to be fads (e.g., blockchain).
On the other hand, you're probably right...
Like seeing a PR and going "holy s**, would never have dreamed of doing it that way" - I have learned A LOT in a looooong SWE career from that...
Using AI/LLMs, you perhaps will create more commercial value for yourself or your employer, but it will not make you a better learner, developer, creator, or person. Going back to the electronic calculator analogy that people like to refer to these days when discussing AI, I also now think that, yes, electronic calculators actually made us worse at using our brains for complex things, which is the thing that I value more than creating profits for some faceless corporation that happens to be my employer at the moment.
Like Herbie Hancock once said, a computer is a tool, like an axe. It can be used for terrible things, or it can be used to build a house for your neighbor.
It's up to people how we choose to use these tools.
Because every other post in here, for example, starts with "I vibe coded..." and not with "I learned something new today on ChatGPT".
3 years ago the idea of measuring productivity in lines of code would have been ridiculous. After AI, it is the norm.
Particularly when the human acts as the router/architect.
However, I've found Claude Code and Co only really work well for bootstrapping projects.
If you largely accept their edits unchanged, your codebase will accrue massive technical debt over time and ultimately slow you down vs semi-automatic LLM use.
It will probably change once the approach to large scale design gets more formalized and structured.
We ultimately need optimized DSLs and aggressive use of stateless sub-modules/abstractions that can be implemented in isolation to minimize the amount of context required for any one LLM invocation.
Yes, AI will one shot crappy static sites. And you can vibe code up to some level of complexity before it falls apart or slows dramatically.
Containment of state also happens to benefit human developers too, and keep complexity from exploding.
I've found the same principles that apply to humans apply to LLMs as well.
Just that the agentic loops in these tools aren't (currently) structured and specific enough in their approach to optimally bound abstractions.
At the highest level, most applications can be written in simple, plain English (expressed via function names). Both humans and LLMs will understand programs much better when represented this way.
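A tiny sketch of what I mean, with made-up names; the top level reads like plain English and each helper is stateless, so a human or an LLM can implement any one of them without seeing the rest of the program:

    def generate_monthly_invoice(customer_id: str) -> str:
        usage = load_usage_for_current_month(customer_id)
        line_items = price_each_usage_record(usage)
        total = sum_line_items_with_tax(line_items)
        return render_invoice_summary(customer_id, line_items, total)

    def load_usage_for_current_month(customer_id: str) -> list[dict]:
        # Stand-in for a database query; the only function that knows about storage.
        return [{"sku": "api-calls", "quantity": 1200}, {"sku": "storage-gb", "quantity": 5}]

    def price_each_usage_record(usage: list[dict]) -> list[dict]:
        # The only function that knows the pricing rules.
        prices = {"api-calls": 0.001, "storage-gb": 0.10}
        return [{**item, "amount": item["quantity"] * prices[item["sku"]]} for item in usage]

    def sum_line_items_with_tax(line_items: list[dict], tax_rate: float = 0.2) -> float:
        return round(sum(item["amount"] for item in line_items) * (1 + tax_rate), 2)

    def render_invoice_summary(customer_id: str, line_items: list[dict], total: float) -> str:
        lines = [f"Invoice for {customer_id}"]
        lines += [f"  {item['sku']}: {item['amount']:.2f}" for item in line_items]
        lines.append(f"  Total (incl. tax): {total:.2f}")
        return "\n".join(lines)

    print(generate_monthly_invoice("acme-corp"))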
Worse, as it's planning the next change, it's reading all this bad code that it wrote before, but now that bad code is blessed input. It writes more of it, and instructions to use a better approach are outweighed by the "evidence".
Also, it's not tech debt: https://news.ycombinator.com/item?id=27990979#28010192
Debt doesn't imply it's productively borrowed or intelligently used. Or even knowingly accrued.
So given that the term technical debt has historically been used, it seems the most appropriate descriptor.
If you write a large amount of terrible code and end up with a money-producing product, you owe that debt back. It will hinder your business or even lead to its collapse. If it were quantified in accounting terms, it would be a liability (though the sum of the parts could still be net positive).
Most "technical debt" is not buying the code author anything and is materialized through negligence rather than intelligently accepting a tradeoff
> term technical debt has historically been used
There are plenty of terms that we no longer use because they cause harm.
The primary difference between a programmer and an engineer.
Wait till you find out about programming languages and libraries!
> It will probably change once the approach to large scale design gets more formalized and structured
This idea has played out many times over the course of programming history. Unfortunately, reality doesn’t mesh with our attempts to generalize.
What I've found is that AI can be alright at creating a Proof of Concept for an app idea, and it's great as a Super Auto-complete, but anything with a modicum of complexity, it simply can't handle.
When your code is hundreds of thousands of lines, asking an agent to fix a bug or implement a feature based on a description of the behavior just doesn't work. The AI doesn't work on call graphs, it basically just greps for strings it thinks might be relevant to find things. If you know exactly where the bug lies, it can usually find it with context given to it, but at that point, you're just as good fixing the bug yourself rather than having the AI do it.
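To be concrete about the difference, here's a rough sketch of what actually working on a call graph could look like, using Python's ast module; it's purely illustrative, not how any current agent behaves, and the target function name is hypothetical:

    import ast
    from collections import defaultdict
    from pathlib import Path

    def build_call_graph(root: str) -> dict[str, set[str]]:
        # Map each function name to the names it directly calls.
        graph: dict[str, set[str]] = defaultdict(set)
        for path in Path(root).rglob("*.py"):
            tree = ast.parse(path.read_text(), filename=str(path))
            for node in ast.walk(tree):
                if isinstance(node, ast.FunctionDef):
                    for child in ast.walk(node):
                        if isinstance(child, ast.Call) and isinstance(child.func, ast.Name):
                            graph[node.name].add(child.func.id)
        return graph

    def callers_of(graph: dict[str, set[str]], target: str) -> list[str]:
        # Direct callers of the function you suspect is buggy.
        return sorted(fn for fn, callees in graph.items() if target in callees)

    graph = build_call_graph(".")
    print(callers_of(graph, "parse_config"))  # hypothetical function name

Grepping for "parse_config" as a string turns up comments, log lines and half-related identifiers; the graph answers the question the bug report is actually asking.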
The problem is that you have non-coders creating a PoC, then screaming from the rooftops how amazing AI is and showing off what it's done, but then they go quiet as the realization sets in that they can't get the AI to flesh it out into a viable product. Alternatively, they DO create a product that people start paying to use, and then they get hacked because the code is horribly insecure and hard-codes API keys.
Have they so clearly? What's the evidence?
I have been coding for 20+ years and I have used AI agents for coding a lot, especially for the last month and a half. I can't say for sure they make me faster. They definitely do for some tasks, but overall? I can solve some tasks really quickly, but at the same time my understanding of the code is not as good as it was before. I am much less confident that it is correct.
LLMs clearly make junior and mid level engineers faster, but it is much harder to say for Senior.
> All software construction involves essential tasks, the fashioning of the complex conceptual structures that compose the abstract software entity, and accidental tasks, the representation of these abstract entities in programming languages and the mapping of these onto machine languages within space and speed constraints. Most of the big past gains in software productivity have come from removing artificial barriers that have made the accidental tasks inordinately hard, such as severe hardware constraints, awkward programming languages, lack of machine time. How much of what software engineers now do is still devoted to the accidental, as opposed to the essential? Unless it is more than 9/10 of all effort, shrinking all the accidental activities to zero time will not give an order of magnitude improvement.
AI, the silver bullet. We just never learn, do we?
The essence: query all the users within a certain area and do it as fast as possible
The accident: spending an hour to survey spatial tree library, another hour debating whether to make our own, one more hour reading the algorithm, a few hours to code it, a few days to test and debug it
Many people seem to believe implementing the algorithm is "the essence" of software development so they think the essence is the majority. I strongly disagree. Knowing and writing the specific algorithm is purely accidental in my opinion.
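To make the split concrete, here's a minimal sketch of the essence in this example: the naive bounding-box scan is a few lines, and everything beyond it (library surveys, spatial trees, tuning) is the accident you only pay for when this gets too slow. The data and names are illustrative:

    from dataclasses import dataclass

    @dataclass
    class User:
        name: str
        lat: float
        lon: float

    def users_in_area(users: list[User], min_lat: float, max_lat: float,
                      min_lon: float, max_lon: float) -> list[User]:
        # O(n) scan; a quadtree or R-tree only matters once n and query volume are large.
        return [u for u in users
                if min_lat <= u.lat <= max_lat and min_lon <= u.lon <= max_lon]

    users = [User("ada", 52.52, 13.40), User("bob", 48.86, 2.35)]
    print(users_in_area(users, 50.0, 55.0, 10.0, 15.0))  # only "ada" is inside the box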
The essence: I need to make this software meet all the current requirements while making it easy to modify in the future.
The accident: ?
Said another way: everyone agrees that LLMs make it very easy to build throw away code and prototypes. I could build these kind of things when I was 15, when I still was on a 56k internet connection and I only knew a bit of C and html. But that's not what software engineers (even junior software engineers) need to do.
What the LLM-driven approach does is basically the same thing, but with a lossy compression of the software commons. Surely having a standard geospatial library is vastly preferable to each and every application generating its own implementation?
In the end, the 80% of features and options will bloat the API and documentation, creating another layer of accidental activity: every user will need to rummage through the docs and sometimes the source code to find the 20% they need. Figuring out how to do what you want with ImageMagick or FFmpeg always involved a lot of reading time before LLMs. (These libraries are so huge that I think most people only use more like 2% instead of 20% of them.)
Anyway, I don't claim AI would eliminate all the accidental activities and the current LLM surely can't. But I do think there are an enormous amount of them in software development.
Now, can it actually do those things? Not in my estimation. But from the perspective of a less experienced developer it can sure look like it does. It is, after all, primarily a plausibility engine.
I'm all for investing in integrating these generative tools into workflows, but as of yet they should not be given agency, or even the aesthetic appearance of agency. It's too tempting to the human brain to shut down when it looks like someone or something else is driving and you're just navigating and correcting.
And eventually, with a few more breakthroughs in architecture maybe this tech actually will make digital people who can do all the programming work, and we can all retire (if we're still alive). Until then, we need to defend against sleepwalking into a future run by dumb plausibility-generators being used as accountability sinks.
Just today I asked my clawbot to generate a daily report for me and it was able to build an entire scraping skill for itself to use for making the report. It designed it, making decisions along the way, including changing data sources when it realized one it was trying to use was blocking it as a bot.
No-code is the same trend that has abstracted out all the generic stuff into infrastructure layers, letting developers focus on Lambda functions, while everything in the lower levels is config-driven. This has been happening all along, pushing the developer to easier, higher layers and absorbing all the complexity and algorithmic work into config-driven layers.
Runtime cost of a Lambda function might far exceed that of a fully hand-coded application hosted on your local server. But there could be other factors to consider.
Same with AI. You get a jump-start with full speed, and then you can take the wheel.
Building a plane is easier than building software. That's why they don't have bootcamps for building planes or becoming a rocket engineer. Building rockets or planes as an engineer is a breeze so there's no point in making a bootcamp.
That's the awesome thing about being a swe, it's so hard that it's beyond getting a university degree, beyond requiring higher math to learn. Basically the only way to digest the concept of software is to look at these "tutorials" on the internet or have AI vibe code the whole thing (which shows how incredibly hard it is, just ask chatGPT).
My friend became a rocket engineer and he had to learn calculus, physics and all that easy stuff which university just transferred into his brain in a snap. He didn't have to go through an internet tutorial or bootcamp.
As most people here probably know, it's now called Xojo and is, in my opinion, both somewhat outdated and expensive. So I'm not recommending it, but credit where it's due, and it certainly was due for early versions of REALbasic when it was still affordable shareware.
The problem with all RAD tools seems to be that they eventually morph into expensive corporate tools no matter what their origins were. I don't know any cross-platform exception (I don't count Purebasic as RAD and it's also not structured).
As for AI, it seems to be just the same. The right AI tool accelerates the easy parts so you have more time for the hard parts. Another thing that bothers me a lot is when alleged "professionals" argue against everyday computing for everyone. They're accelerating the death of general computing platforms, and in the end no one will benefit from that.
It's worth actually being specific about what differentiates a junior engineer from a senior engineer. There are two things: communication and architecture. The combination of these two makes you a better problem solver. Talking to other people helps you figure out your blind spots and forces you to reduce complex ideas down to their most essential parts. The loop of solving a problem and then seeing how well the solution worked gives you an instinct for what works and what doesn't work for any given problem. So how do agents make you better at these two things?
If you are better at explaining what you want, you can get the agents to do what you want a lot better. So you'd end up being more productive. I've seen junior developers that were pretty good problem solvers improve their ability to communicate technical ideas after using agents.
Senior engineers develop instincts for issues down the road. So when they begin any project, they'll take this into account and work by thinking through this. They can get the agents to build towards a clean architecture from the get-go such that issues are easily traceable and debuggable. Junior developers get better at architecture by using agents because they can quickly churn through candidate solutions. This helps them more rapidly learn the strengths and weaknesses of different architectures.
On the learning front, I spent the weekend asking Claude questions about Rust, and then getting it to write code that achieved the result I wanted. I also now have a much better understanding of the different options because I've gotten three different working examples and gotten to tinker with them. It's a lot faster to learn how an engine works when you have a working engine on a dyno than when you have no engine. Claude built me a diesel, a gasoline and an electric engine and then I took them apart.
This is why everyone's thirsty for senior/staff engineers who are AI powered right now, because their entire work experience was the typical SWE experience.
I cannot wait for the industry to have a highly skilled SWE drought in the next 5 years, so I can swoop in and become the AI-powered engineer who saves the day because other junior-mid SWEs outsourced their problem solving way too early, either due to falling for the "don't be left behind" narrative (which is absurd, because what about people who will get into CS 6 years from now? Do they miss some metaphorical train?) or because their manager forced them to adopt the tools.
Uhhh… also skills and abilities? You won’t develop either of those by repeatedly asking an AI to solve problems for you.
"Writing software is easy, changing it is hard."
This is why so many new teams' first order of business is invariably a suggestion to "rewrite everything".
They're not going to do a better job or get a better product, it's just the only way they're going to get a software stack that does what they want.
^ Everything App for Personal use that I'm thinking about making public in some way
~50k LOC across ~400 files. Docker, Postgres, React + Fastify. I'd say between 15 and 20 hours of vibe coding.
- Tasks, Goals, Habits
- Calendar showing all of the above with two way google sync
- Household sharing of markdown notes, goals and more
- Financial projections, spending, earning, recurring transactions and more
- Meal tracking with pics, last eaten, star rating and more
- Gantt chart for goals
- Dashboard for at a glance view
- PWA for android with layout optimizations
- Dark mode
... and more
Could I have done it in the last 5 years? Yes. It would've taken 3-4 months if not more, though. Now we could talk 24/7 about whether it's clean code, super maintainable, etc. The code written by hand wouldn't be either if it were me just doing a hobby project.
Shipping is rather straightforward as well thanks to LLM's. They hold your hand most of the way. Being a techie makes this much, much easier...
I think developers are cooked one way or another. Won't take long now. The same question asked a year ago got dramatically different answers: AI was helpful to some extent but couldn't code up basic things.
Tools don’t make you wiser or lazier by default — they amplify whatever habits you already have. If you’re using them to avoid thinking, that shows. If you’re using them to explore faster, that shows too.
Beginner’s mind isn’t about ignorance; it’s about being willing to try leverage where it exists.
So, sure, some people are going to be using AI to create professional software, but they aren't going to tell you about all the engines that blew up along the way, and who knows which ones are going to blow up in the future. But custom utility software might get a whole lot more common.
Pretty nice description.
In my own career I switched roles to get more time in an area where I felt I needed more growth and practice. Turns out I never got very good at it, and basically was just in a role I wasn't great at for 6 years. It was miserable. My lesson is "if you know you are bad at something, don't make it load-bearing in your life or career".
In most professions barely anyone is doing the continual education or paying attention to the "scene" for that profession, if you do that alone you're probably already in the top 10%.
"A generalist knows less and less about more and more until he knows absolutely nothing about everything."
Getting paid well doing something you actually enjoy doing is key =3
https://stevelegler.com/2019/02/16/ikigai-a-four-circle-mode...