Maybe it’s not trivial for them?
How do you know they weren’t trying to learn?
We used to have CSS editors that helped.
But I'm glad LLMs can now write CSS for me, and it works nearly perfectly every time (I sometimes have to dig in and fix things, and push it to use the latest specs rather than old, obsolete tricks).
I know centering things in CSS can be a lot harder than it seems.
Out of all the examples, you chose the hardest problem in computer science.
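(For what it's worth, with modern CSS the "hard problem" is only a few lines. A rough sketch - the class name is made up, and it goes on the parent container:)

    /* On the parent; the child div ends up centered both ways. */
    .center-parent {
      display: grid;
      place-items: center;   /* horizontal + vertical centering of children */
      min-height: 100vh;     /* give the container height so vertical centering is visible */
    }

A flexbox version (justify-content: center and align-items: center on the parent) gets you the same result.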
Ironically, my colleagues and I used “but can it center a div?” just today. It has been our running joke for measuring the usefulness of LLMs as they advance.
Fortunately for me, they are stuck with the specific styling of an enterprise app that has been in operation for a quarter and is now starting to be reflected in their P&L account.
After all the security check-ins through the gates, and after being seated and coffee'd, I walked to a desk and spent a good while just staring at the code. I pretended to be in some deep trance code-thinking thing. Then, as the clock ticked by, I did the hand-clap thing that Stanley did in Swordfish. I wrote a few lines of code - a utility CSS class to center-align a block element, preferably a DIV.
I walked out with people clamoring for my autograph and selfies on their Nokia N95s. Later that evening, I sent an invoice for ten times the monthly salary of the project lead on that project.
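(The invoiced utility class would have been roughly this - a sketch, class name invented; it only handles horizontal centering and assumes the div has a width:)

    /* Horizontally centers a block element such as a div (needs an explicit width). */
    .center-block {
      width: 60%;        /* any width narrower than the container */
      margin: 0 auto;    /* equal left/right margins do the centering */
    }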
/s
A bit ironic, considering running it through an LLM would've fixed the second part.
To me it's mind-boggling not to proofread a blog post. Posting it on HN is crazy too.
I don't mind for online comments and the like, but for a blog post it's crazy.
Moreover,
> My first language is french
Well, French is very close to English in many ways, much closer than many other languages, so I find it a bit unfair to judge other ESL authors by the standards of a French native speaker's experience with English.
If you don't speak the language well but write an article like this (and are actively sharing it), the bar is most definitely above "you gotta get it proofread".
I'm quite sure you're going to get a pretty uniform response from non-native speakers across the globe if you're saying that we are poor little creatures and the world should lower this bar.
I am a non-native speaker, so n=1 here. Moreover, I said that I do not consider the OP to be such a low-bar case. It may be informally written, but it did not cross my mind that it might not be understandable to some. I guess opinions may vary on this. Going back and reading it, I see quite a few obvious grammatical errors; surely they should have spent some effort fixing them before posting it here (nowadays it should be trivially easy to do), but again, it is not to the point of being hard to read or not understandable, imo.
Thanks for the feedback
If the answer is low and yes, then speed matters more than perfection, and you don't always have to know exactly how it works. We already accept some risk when we outsource parts of our problems by using dependencies. The risk management puzzle is different, but still fundamentally the same?
If you don't know how the thing works in principle, though, it's not going to be easy to replace.
I would say the main reason not to do this for any actual code you work on is that you will need to go into the code and fix bugs anyway when the LLM, at some point, inevitably starts consistently failing at certain tasks. All the time you saved (or think you saved) by avoiding actually delving into the code will come back as technical debt then.
He does say that you should provide more context rather than just repeating yourself.
There is a distinction in having some engineering skill and knowledge to give the AI hints, but it might be more of a symbolic one than people realize. And as AI continues to improve, that will be more and more the case.
But it does bring up a valid point that if we don't do any problem solving or thinking for ourselves, we will be totally helpless.
I think this is something that all students should be taught proactively. Actually, there should already be a national standard teaching kids what happens to your brain if you don't use it, and how important it is to actually read, write, and solve problems sometimes rather than feeding everything into an AI.
On the other hand, the robustness and general capabilities of ML models continue to improve. We have models that can roughly generate a video game frame by frame from a text prompt and an input image. How long until we get this for application software? Or a diffusion LLM that just outputs a full WASM source/binary, or an update to it, from a text prompt or a series of images in 4 steps and 5 seconds?
I think it could be a really interesting experiment to create a diffusion LLM trained on thousands and thousands of 6502 games and programs, with the full manual (including images) as input and the output being assembly or machine code along with a bit of metadata for RetroArch or something. Or maybe also include recorded gameplay data/visuals.
mandaputtra•20h ago
I'm sorry, it's just ~ I don't think that's how I want to use AI to code.
So I tell him he needs to level up, or else ~ civilization will collapse, lol.
Again, this is just my opinion.
theGeatZhopa•19h ago
I think he did "vibe" coding, but (maybe) on a piece of code that doesn't throw errors. The code assistant then doesn't know/understand what went wrong - more context would help. BUT!
I once had a logic problem that I couldn't resolve with my quarter-pounder alone. So I used ChatGPT and asked it to give me the solution. And, you guessed it, it didn't work out.
I did not really expect it to work out, but I expected it to at least give me the logical structure in the worst case. So I asked and got garbage. So I asked again, and again, and again - more context than the context I'd given was out of reach, as I needed the logical structure first to be able to comprehend it myself and add further context.
Then, at some point, I got something delivered that looked promising from the perspective of logical structure. So I took this and formulated a new prompt - and, yes! You guessed it. It didn't work out in code.
So I did a quick truth table of my expectations, formulated the prompt with the given logical structure and truth table, and told the assistant to check the given code against it... AAAND! You guessed it. I sent the same prompt ten times: "here's the truth table, here's the code, ....".
I noticed the code changed more or less each time, with newish loops and methods introduced, and so on.
And then, at some point, it worked flawlessly out of nowhere.
Sometimes it makes sense just to write "it doesn't work out":
- in case one doesn't know how to formulate the actual problem, because it's out of scope. It's not the same as not being able to program; it may just be too difficult to grasp the logic/algorithms with one's own brain (even a custom one)
- to get alternative suggestions, which may even be better than the ones suggested earlier.
So, as long as I don't know why this young guy did what he did, I can't judge what he did. Maybe he is like me - able to code, but with big problems sustaining that knowledge and skill, because of Claude!!!!!
freehorse•18h ago
Of course, my workflow is very REPL-based, and I assume different experiences exist for other contexts. Moreover, agent workflows probably have different pros and cons. Time-wise it is great that they can test code and fix bugs themselves, but essentially that's a bit like automating this prompt-reprompt loop. It saves time compared to that (at the cost of a lot more $$$ in tokens overall), but if the agent fails to solve the issues, we are yet again back at square one. What helps a lot is that the work is at least done inside the codebase, which is supposed to be a good thing, but personally I have not figured out how to use agents efficiently, tbh.