AI is definitely not ready for an Engineering role. My recent experience with ChatGPT-5 (Preview) via Visual Studio Code tells me that it might perform acceptably as a junior programmer. However, because I'm an old retired self-taught programmer who only ever managed one other programmer, I lack the experience to know what's acceptable for a junior programmer at FAANG and elsewhere.
Share what you built, how you prompted it, what you're making from it, and how many tokens you paid to rummage through.
You're making a distinction that might be interesting in some esoteric sense, but that doesn't exist in the industry. When it comes to software, the architects are also the construction crew. There's nothing to confuse if the terms are effectively synonymous.
Just be aware that AI is a tool, not a replacement; but a human adept at using AI as a tool will replace the human who isn't.
But there is a "sweet spot" where it's amazing, specifically highly targeted tasks with a specific context. I wanted a simple media converter app that tied into ffmpeg, and I didn't want to install any of the spammy or bloated options I found... so I got Claude to build one. It took about 30 minutes and works great.
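For context, the core of such an ffmpeg wrapper is genuinely small, which is why it's a good fit for this "sweet spot". Here is a minimal, hypothetical sketch in Python (the function names are mine, and it assumes ffmpeg is on the PATH):

```python
import subprocess

def build_ffmpeg_cmd(src, dst, extra_args=None):
    # ffmpeg infers input/output formats from the file extensions;
    # -y overwrites the destination if it already exists.
    return ["ffmpeg", "-y", "-i", src, *(extra_args or []), dst]

def convert(src, dst, extra_args=None):
    # Raises CalledProcessError if ffmpeg exits non-zero.
    subprocess.run(build_ffmpeg_cmd(src, dst, extra_args), check=True)
```

A real app would add a GUI and progress reporting on top, but the conversion itself is one subprocess call, so 30 minutes of agent time is plausible.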
I also asked it to update a legacy project, and it fell into a testing loop where it failed to understand that the testing database was missing. Had I not had years of knowledge, I would've looked at the output and suggestions Claude was giving and spent hours on it... but it was a simple command that fixed it. As with all new tech, your mileage will vary.
AI code tab-complete is fantastic. It's at least an order of magnitude more powerful than IDE-assisted auto refactors.
AI graphics design tools are probably the single best thing in the field. Editing photos, creating new graphics, making marketing materials, shooting and editing videos is now extremely easy. It's a 100x speed up and a 1000x cost reduction. You still have to re-roll the generations repeatedly, but with a competent editing tool you can speed run any design work. This is one area where non-experts can also use the tools.
How is this any different than the way I program?
I've contributed genuinely useful features to FLOSS projects "as well as irrelevant suggestions or code fixes with subtle problems", mostly the latter, as there were always a few stages of improvement and/or finishing by the core devs of the program I used to haunt. Honestly, I was less than half as useful as the current crop of robots, and they still tolerated (in fact, encouraged) my involvement.
Don’t sell yourself short. External contributors are extremely valuable as they are often users and provide real world validation of a need for whatever they are contributing. They also retain any knowledge for any feedback they receive that they can apply to future contributions. And they also become advocates for that software, helping it grow its user base.
LLMs, not so much.
We'll reap the productivity benefits from this new tool, create more work for ourselves, output will stabilize at a new level and salaries will stagnate again, as it always happens.
Every day, I see ads on YouTube with smooth-talking, real-looking AI-generated actors. Each one represents one less person that would have been paid.
There is no exact measure of correctness in design; one bad bit does not stop the show. The clients don't even want real art. Artists sometimes refer to commercial work as "selling out", referring to hanging their artistic integrity on the hook to make a living. Now "selling out" competes with AI which has no artistic integrity to hang on the hook, works 24 hours a day for peanuts and is astonishingly prolific.
You can argue that is a bad thing (local designers/content producers/actors/etc lost revenue, while the money was sent to $BigTech) or that this was a good thing (lower cost to make ad means taxpayer money saved, paying $BigTech has lower chance of corruption vs hiring local marketing firm - which is very common here).
[1]https://www.cnnbrasil.com.br/tecnologia/video-feito-com-inte...
Of course, that line of reasoning reduces similarly to other automation / minimum wage / etc. discussions.
The thing is that they would not have paid for the actor anyway. It’s that having an “actor” and special effects for your ads cost nothing, so why not?
The quality of their ads went up, the money changing hands did not change.
Were AI-generated actors chosen over real actors, or was the alternative using some other low-cost method for an advertisement like just colorful words moving around on a screen? Or the ad not being made at all?
The existence of ads using generative AI "actors" doesn't prove that an actor wasn't paid. This is the same logical fallacy as claiming that one pirated copy of software represents a lost sale.
B) Even in cases where AI actors are used where there wouldn’t have been actors before, the skillset is still devalued, and it’s almost certainly temporary. Someone doing a worse version of what you do for 1% of your pay affects the market, and saving 99% is great incentive for companies to change their strategy until the worse version is good enough.
Probably took me the same amount of time to generate a pleasing video as I would have spent browsing Shutterstock. Only difference is my money goes to one corporation instead of the other.
As far as the video is concerned, it adds a bit of a wow factor to get people interested, but ultimately it's the same old graphs and bullet points with words and numbers that matter. And those could just as well have been done on an overhead transparency in 1987.
Among other things, this will remove most entry-level jobs, making senior-level actors more rare and expensive.
When GPT-3.5 first landed, a lifelong writer/editor saw a steep decrease in jobs. A year later the jobs changed to "can you edit this AI-generated text to sound human", and now they continue to work doing normal editing for human or human-ish writing while declining the slop-correction deluge, because it is terrible work.
I can't help but see the software analogy for this.
It took under a decade to get AI to this stage - where it can build small scripts and tiny services entirely on its own. I see no fundamental limitations that would prevent further improvements. I see no reason why it would stop at human level of performance either.
haha, well said, I've got to remember that one. HN is a smelly place when it comes to AI coping.
Now, there’s a little room between the two—maybe the site is full of coders on a cope train, hoping that we’ll be empowered by nice little tools rather than totally replaced. And, ya know, multiple posters with multiple opinions, some contradictions are expected.
But I do find it pretty funny to see the multiple posters here describe the site they are using as suffering from multiple, contradictory, glaringly obvious blindspots.
The article's subtitle is currently false: people collaborate more with the works of others through these systems. And it would be extremely difficult to incentivize any equally significant number of the enterprise software shops, numerics labs, etc. to share code: even joint ventures like Accenture do not scrape all their own private repos and report their patterns back to Microsoft every time they re-implement the same .NET systems over and over.
It's not under a decade for AI to get to this stage, but multiple decades of work, with algorithms finally able to take advantage of GPU hardware to massively excel.
There's already a feeling that growth has slowed; I'm not seeing the rise in performance at coding tasks that I saw over the past few years. I see no fundamental improvements that would suggest exponential growth or human-level performance.
If we don't see some serious fencing, I will not be surprised by some spectacular AI-caused failures in the next 3 years that wipe out companies.
Business typically follows a risk-based approach to things, and in this case entire industries are yolo'ing.
ok, ok! Just like you can find, for much less computation power, using a search engine: forums and websites with your exact question or something similar, or a snippet [0] helping you resolve your doubt... all of that free of tokens, and without companies profiting over what the internet has built! Even FOSS generative AI can hand billions of USD to GPU manufacturers.
[0] just a silly script that can lead a bunch of logic: https://stackoverflow.com/questions/70058132/how-do-i-make-a...
How about the fact that AI is only trained to complete text and literally has no "mind" within which to conceive or reason about concepts? Fundamentally, it is only trained to sound like a human.
An LLM base model isn't trained for abstract thinking, but it still ends up developing abstract thinking internally - because that's the easiest way for it to mimic the breadth and depth of the training data. All LLMs operate in abstracts, using the same manner of informal reasoning as humans do. Even the mistakes they make are amusingly humanlike.
There's no part of an LLM that's called a "mind", but it has a "forward pass", which is quite similar in function. An LLM reasons in small slices - elevating its input text to a highly abstract representation, and then reducing it back down to a token prediction logit, one token at a time.
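To make the shape of that "forward pass" concrete, here is a deliberately toy numeric sketch in Python: tokens are lifted into an internal vector representation, transformed in that hidden space, then reduced back down to per-token scores. All the weights are made-up values for illustration; a real transformer replaces the averaging step with attention and stacks many layers.

```python
EMBED = {0: [1.0, 0.0], 1: [0.0, 1.0], 2: [1.0, 1.0]}  # token id -> vector
W = [[0.5, -0.2], [0.3, 0.8]]                          # toy hidden "layer"
UNEMBED = [[1.0, 0.0, -1.0], [0.0, 1.0, 0.5]]          # hidden -> 3 logits

def forward_pass(tokens):
    # Average the token embeddings (a crude stand-in for attention).
    h = [sum(EMBED[t][i] for t in tokens) / len(tokens) for i in range(2)]
    # Transform within the abstract hidden space.
    h = [sum(h[j] * W[j][i] for j in range(2)) for i in range(2)]
    # Project back down to logits, one score per vocabulary token.
    logits = [sum(h[j] * UNEMBED[j][i] for j in range(2)) for i in range(3)]
    return logits.index(max(logits))  # greedy next-token choice
```

The point is the up-then-down shape, not the arithmetic: input text goes up into an abstract representation and comes back down as one predicted token, repeated token by token.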
This has been demonstrated so many times.
They don’t make mistakes. It doesn’t make any sense to claim they do because their goal is simply to produce a statistically likely output. Whether or not that output is correct outside of their universe is not relevant.
What you’re doing is anthropomorphizing them and then trying to explain your observations in that context. The problem is that doesn’t make any sense.
How can you say this when progress has so clearly stagnated already? The past year has been nothing but marginal improvements at best, culminating in GPT-5 which can barely be considered an upgrade over 4o in terms of pure intelligence despite the significant connotation attached to the number.
There is zero reason to think AI is some exception that will continue to exponentially improve without limit. We already seem to be at the point of diminishing returns, sinking absurd amounts of money and resources into training models that show incremental improvements.
To get this far they have had to spend hundreds of billions and have used up the majority of the data they have access to. We are at the point of trying to train AI on generated data and hoping that it doesn't just cause the entire thing to degrade.
I suspect once you have studied how we actually got to where we are today, you might see why your lack of seeing any limitations may not be the flex you think it is.
You cannot use just a spell checker to write a book (no matter how bad) or photoshop (non-AI) plugins to automatically create meaningful artwork, replacing human intervention.
Business people "up the ladder" are already threatening with reducing the workforce and firing people because they can (allegedly) be replaced by AI. No writer was ever threatened by a spellchecker.
Hollywood studio execs are putting pressure on writers, and now they can leverage AI as yet another tool against them.
Amazing how someone writing for an IEEE website can't keep their eyes on the fundamentals.
It's a horribly outdated way of thinking that a singular AI entity would be able to handle all stacks and all problems directed at it, because no developer is using it that way.
AI is a great tool for both coders and artists. These outlandish, attention-grabbing titles really seem to be echo chambers aimed at people who are convinced that AI isn't going to replace them, which is true, but the opposite is also true.
I imagine there's a big difference in using AI for building, say, an online forum vs. building a flight control system, both in terms of what the AI theoretically can do, and in terms of what we maybe should or should not be letting the AI do.
With the recent models it's now encroaching similarly on all fronts. I think over the next few iterations we'll see LLMs solidify themselves as a meta-compiler deployed locally for more FCS-type systems.
At the end of the day the hazards are the same with or without AI: you need checks and balances, you need proper vetting of code and quality. But overall it probably doesn't make sense to charge an hourly rate, because an AI would drastically cut down such billing schemes.
For me "replacement" is largely a 70~80% reduction in either hourly wages, job positions or both and from the job market data I see it can get there.
I also write high performance Go server code, where it works a lot less well. It doesn't follow my rules for ptr APIs or using sync mutexes or atomic operations across a code base. It (probably slightly older version than SOTA) didn't read deep call chains accurately for refactoring. It's still worth trying but if that was my sole work it would probably not be worth it.
On the other hand for personal productivity, emacs functions and config, getting a good .mcp.json, it is also very good and generates code that partakes in the exponential growth of good code. (Unlike data viz where there is a tendency to build something and then the utility declines over time).
And until the day that humans are no longer driving the bus that will remain the case.
Is it? In most traditional programming languages commonly used today, using decades-old compiler technology, I can say something like "x = [1,2,3]" and it will systematically generate all the code necessary to allocate memory, without any need for me to be more explicit about it. It would be fair to say AI offers a higher-level abstraction, like how most programming languages used today are a higher-level abstraction over assembly, but fundamentally different it is not.
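You can even watch that code generation happen. In Python, for instance, the `dis` module shows the bytecode the compiler emits for `x = [1, 2, 3]`; the `BUILD_LIST` instruction is the allocation the programmer never wrote:

```python
import dis

# Print the instructions compiled from a one-line list assignment.
for ins in dis.get_instructions("x = [1, 2, 3]"):
    print(ins.opname)
```

Somewhere in that output is a `BUILD_LIST`, generated entirely on the programmer's behalf, which is exactly the kind of abstraction step the comment describes.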
"generate a c program that uses gcc 128 bit floats and systematically generates all quadratic roots in order of the max size of their minimal polynomial coefficients, and then sort them and calculate the distribution of the intervals between adjacent numbers" is just code. You still have to write the code get AI to translate it into a lower-level abstraction. It doesn't magically go off and do its own thing.
uhhh, not sure even the best people or teams are very good at this either. Condemning AI for not being capable of something we're not capable of, ok...
“If it takes longer to explain to the system all the things you want to do and all the details of what you want to do, then all you have is just programming by another name.”
This is called the specification process, which hopefully is already occurring today.
There's so much self-serving bias in articles like this, as well as the comments on HN, Reddit, etc. It's good to critique AI, but that self-serving line is frequently crossed by many people.
I personally haven't written any significant code by hand since claude code landed. I also have a high tolerance for prompting and re-prompting. Some of my colleagues would get upset if it wasn't mostly one shotting issues and had a really low tolerance for it going off the rails.
Since gpt-5-high came out, I rarely have to re-prompt. Strong CI pipeline and well defined AGENTS.md goes an incredibly long way.
One weakness of coding agents is that sometimes all they see is the code, and not the outputs. That's why I've been working on agent instructions/tools/MCP servers that empower the agent with all the same access that I have. For example, this is a custom chat mode for GitHub Copilot in VS Code: https://raw.githubusercontent.com/Azure-Samples/azure-search...
I give it access to run code, run tests and see the output, run the local server and see the output, and use the Playwright MCP tools on that local server. That gives the agent almost every ability that I have - the only tool that it lacks is the breakpoint debugger, as that is not yet exposed to Copilot. I'm hoping it will be in the future, as it would be very interesting to see how an agent would step through and inspect variables.
I've had a lot more success when I actively customize the agent's environment, and then I can collaborate more easily with it.
What feels primitive to me is how we approach programming in industry as a process of trial and error rather than one of rigour.
These are tools that automate copy-pasting from Stack Overflow and GitHub, running tools, and generating a ton of noise to sift through. They hallucinate code, documentation, and various other artifacts that are sometimes useful and are occasionally complete BS.
Some people find that they can make useful tools out of these things. Great.
A real programmer is still a human.
Update: nothing wrong with trial and error as a process. I use it a lot. But there are lots of places where we use this method that seem inappropriate and sometimes even dangerous. Yet it’s the most common tool we have and everything starts to look like a nail.
That is, you have some context, ie the prompt and any other text, and the LLM produces a plausible continuation or alteration of that prompt and text.
My intuition leads me to a thought like: To progress, the context must compress into a fractal representation.
I feel very confident that someone smarter and MUCH better paid than me is already working on this.
These tools seem great because they are less sensitive than humans to the mess and lift us over the tedious work. But at the same time, they're giving us an excuse to not fix what needed to be fixed and, in doing so, they're adding more crap to the heap.
Maybe what we need is forcing the tools to build on a simpler base, so we can keep an understanding of the results.
Context doesn't work the same way as memory + experience in humans. While humans have an impression and a flexible mental model of any single domain, AI needs hard data, which is hard to manage with context and can't really be worked around by fine-tuning in practice, lest you have to retrain the model on each and every code merge.
If it's taking you that long to direct the AI, then either you're throwing too small a problem at it, or too big a problem at it, or you're not directing its attention properly.
In a way, your prompts should feel like writing user documentation:
Refactor all of the decode functions in decoder.rs to return the number of bytes decoded
in addition to the decoded values they already return. While refactoring, follow these principles:
* Use established best practices and choose the most idiomatic approaches.
* Avoid using clone or unwrap.
* Update the unit tests to account for the extra return values, and make sure the tests check the additional return values.
When you're finished, run clippy and fix any issues it finds. Then run rustfmt on the crate.
GPT-5 was created to be able to service 200M daily active users.
Because they already did try making a much larger, more expensive model, it was called GPT-4.5. It failed, it wasn't actually that much smarter despite being insanely expensive, and they retired it after a few months.