
Fei Fei Li: Spatial Intelligence is AI’s Next Frontier

https://drfeifei.substack.com/p/from-words-to-worlds-spatial-intelligence
57•mkirchner•1h ago•32 comments

Unexpected things that are people

https://bengoldhaber.substack.com/p/unexpected-things-that-are-people
367•lindowe•6h ago•186 comments

Writing your own BEAM

https://martin.janiczek.cz/2025/11/09/writing-your-own-beam.html
80•cbzbc•1d ago•11 comments

The lazy Git UI you didn't know you need

https://www.bwplotka.dev/2025/lazygit/
156•linhns•4h ago•55 comments

TTS Still Sucks

https://duarteocarmo.com/blog/tts-still-sucks
20•speckx•1h ago•27 comments

High-performance 2D graphics rendering on the CPU using sparse strips [pdf]

https://github.com/LaurenzV/master-thesis/blob/main/main.pdf
10•PaulHoule•39m ago•0 comments

Zeroing in on Zero-Point Motion Inside a Crystal

https://physics.aps.org/articles/v18/178
15•lc0_stein•1h ago•0 comments

Using Generative AI in Content Production

https://partnerhelp.netflixstudios.com/hc/en-us/articles/43393929218323-Using-Generative-AI-in-Co...
48•CaRDiaK•3h ago•21 comments

Error ABI

https://matklad.github.io/2025/11/09/error-ABI.html
48•todsacerdoti•20h ago•9 comments

Memory Safety for Skeptics

https://queue.acm.org/detail.cfm?id=3773095
42•steveklabnik•4h ago•27 comments

Registered OAuth Parameters

https://www.iana.org/assignments/oauth-parameters/oauth-parameters.xhtml#parameters
22•mooreds•6d ago•3 comments

Linux in a Pixel Shader – A RISC-V Emulator for VRChat

https://blog.pimaker.at/texts/rvc1/
12•rbanffy•54m ago•3 comments

Omnilingual ASR: Advancing automatic speech recognition for 1600 languages

https://ai.meta.com/blog/omnilingual-asr-advancing-automatic-speech-recognition/?_fb_noscript=1
47•jean-•4h ago•10 comments

Unix v4 Tape Found

https://discuss.systems/@ricci/115504720054699983
51•greatquux•4d ago•4 comments

Head in the Zed Cloud

https://maxdeviant.com/posts/2025/head-in-the-zed-cloud/
43•todsacerdoti•8h ago•8 comments

Benchmarking leading AI agents against Google reCAPTCHA v2

https://research.roundtable.ai/captcha-benchmarking/
79•mdahardy•6h ago•60 comments

Building a high-performance ticketing system with TigerBeetle

https://renerocks.ai/blog/2025-11-02--tigerfans/
56•jorangreef•2d ago•8 comments

Launch HN: Hypercubic (YC F25) – AI for COBOL and Mainframes

63•sai18•6h ago•42 comments

Dependent Types and How to Get Rid of Them

https://chadnauseam.com/coding/pltd/are-dependent-types-actually-erased
8•pie_flavor•1w ago•0 comments

Synesthesia helps me find four-leaf clovers (2023)

https://matthewjamestaylor.com/synesthesia-four-leaf-clovers
53•iansteyn•1w ago•36 comments

3D Heterogeneous Integration Powers New DARPA Fab

https://spectrum.ieee.org/3d-heterogeneous-integration
3•rbanffy•39m ago•0 comments

Canadian military will rely on public servants to boost its ranks by 300k

https://ottawacitizen.com/public-service/defence-watch/canadian-military-public-servants
62•Teever•5h ago•138 comments

Redmond, WA, turns off Flock Safety cameras after ICE arrests

https://www.seattletimes.com/seattle-news/law-justice/redmond-turns-off-flock-safety-cameras-afte...
196•dredmorbius•4h ago•190 comments

Pose Animator – An open source tool to bring SVG characters to life (2020)

https://blog.tensorflow.org/2020/05/pose-animator-open-source-tool-to-bring-svg-characters-to-lif...
126•jerlendds•6d ago•13 comments

Interesting SPI Routing with iCE40 FPGAs

https://danielmangum.com/posts/spi-routing-ice40-fpga/
86•hasheddan•9h ago•6 comments

LLMs are steroids for your Dunning-Kruger

https://bytesauna.com/post/dunning-kruger
271•gridentio•7h ago•222 comments

Cybersecurity breach at Congressional Budget Office remains a live threat

https://www.politico.com/live-updates/2025/11/10/congress/cbo-still-under-threat-00644930
12•mooreds•41m ago•0 comments

How cops can get your private online data

https://www.eff.org/deeplinks/2025/06/how-cops-can-get-your-private-online-data
231•jamesgill•6h ago•51 comments

Sysgpu – Experimental descendant of WebGPU written in Zig

https://github.com/hexops-graveyard/mach-sysgpu
3•coffeeaddict1•1h ago•0 comments

Asus Ascent GX10

https://www.asus.com/networking-iot-servers/desktop-ai-supercomputer/ultra-small-ai-supercomputer...
177•jimexp69•6h ago•166 comments

Vibe Code Warning – A personal case study

https://github.com/jackdoe/pico2-swd-riscv
193•jackdoe•10h ago

Comments

cnity•2h ago
Sometimes I read something on the internet and I think: finally someone has articulated something the way that I think about it. And it is very validating. And it cuts through a bunch of noise about how "oh you should be tuning and tweaking this prompt and that" and really speaks to the human experience. Thanks for this.
all2•2h ago
Same. After using AI for too long I get the same mental feeling as I do when scrolling endlessly on YouTube: a listless, empty, purposeless feeling that I find difficult to break out of without a whole night's rest.
mentalgear•1h ago
Maybe this is Doom-Coding (like Instagram's empty doomscrolling).
jackdoe•1h ago
Programming was a very meditative and fulfilling experience for me, "building something", whatever it is; now I can see it slipping through my fingers.

You know the feeling of starting a new MMORPG? The first time you enter a new world, you don't know what to do, where to go; there is no "optimal" way to play it, there are no guides, you just try things and explore and play and have fun. Every new project I start, I have this feeling.

A few years later the game is a chore: you have daily quests, guides, optimal strategies and simulations, and if you don't play what elitistjerks say, you are doing it wrong.

With AI it feels the game is never new.

all2•1h ago
> Programming was very meditative and fulfilling experience for me, "building something" whatever it is, now I can see it slipping through my fingers.

I've been characterizing it to others as the difference between hand-carving a post for a bed frame vs. letting a CNC mill do it. The artistry-labor is lost, and time-savings are realized. In the process, the meditation of the artist, the labor and blood, sweat, and tears are all lost.

It isn't 'bad', but it has this dulling effect on my mind. There's something about being involved at a deep level that is satisfying and uplifting to my mind. When I cede that to a machine, I have lost that satisfaction.

Some years ago, I noticed this same issue just looking at typing vs. hand-writing things. I _think_ very differently on paper than I do typing at a terminal. My mind is slow and methodical with a pen, as if I actually have time to think. At a keyboard, I am less patient, more prone to typing before I think.

CooCooCaCha•55m ago
I’m the opposite. I’d rather spend more time in a flow-like state where I’m dreaming of possibilities and my thoughts come to life quickly and effortlessly.

I often find tools frustrating because they are imperfect and even with the best tools you inevitably have to break from your flow sometimes to do stuff in a more manual way.

If a tool could take care of building while I remain in flow I’d be in heaven.

CooCooCaCha•1h ago
That’s interesting because i love computers and parts of programming. Algorithms are fascinating and I get a deep sense of satisfaction when my program works.

But at the same time I find programming to be a frustrating experience because I want to spend as much time as possible thinking about what I’m trying to build.

In other words I’d rather spend time in the dream-like space of possibilities, and iterating on my thoughts quickly than “dropping down” to reality and thinking through how I’m actually going to build it, what algorithms to use, how to organize code, etc.

Because of that I’ve found vibe coding to be enjoyable even if it’s not perfect.

mfro•44m ago
Love of the process vs the product
all2•6m ago
These are intertwined, though, and rather tightly in some cases. Game dev is an excellent example of this.
abathologist•1h ago
Careful, that way leads to roboticization, according to me https://news.ycombinator.com/item?id=44010933 :|
cyanydeez•2h ago
Some think current AI is like Excel and you just need to know the hotkeys and formulas.

Others see it as mostly a slot machine that more often than not gives you almost-right answers.

Knowing the psychology of gambling machine design is maybe the big barrier between these people.

NewsaHackO•2h ago
[flagged]
dang•1h ago
"Edit out swipes."

"Please respond to the strongest plausible interpretation of what someone says, not a weaker one that's easier to criticize. Assume good faith."

https://news.ycombinator.com/newsguidelines.html

csallen•2h ago
> After about 3-4k lines of code I completely lost track of what is going on... Overall I would say it was a horrible experience, even though it took 10 hours to write close to 10000 lines of code

It's hard to take very much away from somebody else's experiences in this area. Because if you've been doing a substantial amount of AI coding this year, you know that the experience is highly dependent on your approach.

How do you structure your prompts? How much planning do you do? How do you do that planning? How much review do you do, and how do you do it? Just how hands-on or hands-off are you? What's in your AGENTS.md or equivalent? What other context do you include, when, why, and how? What's your approach to testing, if any? Do you break down big projects into smaller chunks, and if so, how? How fast vs slow are you going, i.e. how many lines of code are you letting the AI write in any given time period? Etc.

The answers to these questions vary extremely wildly from person to person.

But I suspect a ton of developers who are having terrible experiences with AI coding are quite new to it, have minimal systems in place, and are trying "vibe coding" in the original sense of the phrase, which is to rapidly prompt the LLM with minimal guidance and blindly trust its code. In which case, yeah, that's not going to give you great results.

nemomarx•2h ago
After all of that effort, is it faster than coding stuff yourself? This feels like getting into project management because you don't want to learn a new library in something.
csallen•2h ago
Yes, it often is much faster, and significantly so.

There are also times where it isn't.

Developing the judgment for when it is and isn't faster, and when it's likely to do a good job vs. when it isn't, is pretty important. But also, how good of a job it does is often a skill issue, too. IMO the most important and overlooked skill is having the foresight and the patience to give it the context it needs to do a good job.

sodapopcan•1h ago
> There are also times where it isn't.

Should this have the "Significantly so" qualifier as well?

beezlewax•2h ago
All that effort and the writing of very specific prompts in very specific ways in order to create a deterministic output just feels like a bad version of a programming language.

If we're not telling the computer exactly what to do, then we're leaving the LLM to make (wrong) assumptions.

If we are telling the computer exactly what to do via natural language, then it is as complicated as normal programming, if not more complicated.

At least that's how I feel about it.

lazide•1h ago
Have you ever used a WYSIWYG editor?

One of the most frustrating (but common) things is you do v1. It looks good enough.

Then you go to tweak it a little (say move one box 10-15 pixels over, or change some text sizing or whatever), and it loses its mind.

So then you spend the next several days trying every possible combination of random things to get it to actually move the way you want. It ends up breaking a bunch of other things in the process.

Eventually, you get it right, and then never ever want to touch it ever again.

psunavy03•1h ago
Still relevant . . . https://www.commitstrip.com/en/2016/08/25/a-very-comprehensi...
tetha•1h ago
Personally, I find it faster if I use LLMs for the use cases where I've found them to work well.

One example is just laborious, typing-heavy stuff. Like I recently needed a table converted to an enumeration. 5 years ago I'd have spent half a day figuring out a way to sed/awk/perl that transformation. Now I can entertain an AI for half an hour or so to either do the transformation (which is easy to verify) or to set up a transformation script.
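
For instance, a minimal sketch of what such a transformation script might look like in Python, assuming the table is a CSV with name and value columns (the file name, column names, and enum name here are all hypothetical):

  # table_to_enum.py - hypothetical sketch: turn a name/value CSV table
  # into the source of a Python Enum. The output is short and easy to verify.
  import csv

  def table_to_enum(csv_path: str, enum_name: str) -> str:
      with open(csv_path, newline="") as f:
          rows = list(csv.DictReader(f))
      lines = ["from enum import Enum", "", "", f"class {enum_name}(Enum):"]
      for row in rows:
          member = row["name"].strip().upper().replace(" ", "_")
          lines.append(f"    {member} = {row['value']}")
      return "\n".join(lines)

  if __name__ == "__main__":
      print(table_to_enum("registers.csv", "Register"))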

Or I enjoy that I can give an LLM a problem and 2-3 solution approaches I'd see, and get back 4-5 examples of how that code would look in those solution approaches, and some more. Again, this would take me 1-2 days and I might not see some of the more creative approaches. Those approaches might also be complete nonsense, mind you.

But generating large amounts of code just won't be a good, time-efficient idea long-term if you have to support and change it. A lot of our code base is rather simple python, but it carries a lot of reasoning and thought behind it. Writing that code is not a bottleneck at all.

brandall10•2h ago
> ...it took 10 hours to write close to 10000 lines of code...

So there couldn't have been much in the way of planning, process, review, etc.

csallen•2h ago
Yeah that's my read. True vibe coding, minimal guidance, ofc it was a mess.
verdverm•2h ago
I spent considerable time trying to coax the agentic systems into decent coding capabilities. The thing that struck me most is how creative they are at finding new ways to fail and make me adjust my prompt.

It got tiring, so I'm on a break from AI coding until I have bandwidth to build my own agent. I don't think this is something we should be outsourcing to the likes of OpenAI, Microsoft, Anthropic, Google, Cursor, et al. Big Tech has shown their priorities lie elsewhere than our success and well-being.

LeafItAlone•1h ago
>The thing that struck me most is how creative they are at finding new ways to fail

Wow, they are really going for that human-like behavior aren’t they?

verdverm•1h ago
If we're talking about emulating users, sure, but this is supposed to be a tool that helps me get my job done.

If (e.g.) you dig into how something like Copilot works, they do dumb things like ask^ the LLM to do glob matching after a file read (to pull in more instructions)... just use a damn glob library instead of a non-deterministic method known to be unreliable

^ it's just a table in the overall context, so "asking" is a bit anthropomorphizing
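
A minimal sketch of the deterministic alternative, using Python's stdlib fnmatch; the patterns and paths here are hypothetical stand-ins for whatever instruction files an agent config declares:

  # Deterministic wildcard matching with the stdlib instead of an LLM round trip.
  from fnmatch import fnmatch

  # Hypothetical patterns an agent config might use to pull in extra instructions.
  PATTERNS = ["*.instructions.md", "docs/*.md"]

  def applies_to(path: str) -> bool:
      # The same check the model was being "asked" to perform, but it is fast,
      # free, and returns the same answer every time.
      return any(fnmatch(path, pattern) for pattern in PATTERNS)

  print(applies_to("src/api.instructions.md"))  # True
  print(applies_to("src/api.py"))               # False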

cluckindan•1h ago
I would consider a bunch of "dumb/power user" agents more useful than coding agents. The more they fail to use my software, the better!
zahlman•44m ago
> ^ it's just a table in the overall context, so "asking" is a bit anthropomorphizing

I interpreted GP as just saying that you are already anthropomorphizing too much by supposing that the models "find" new ways to fail (as if trying to defy you).

verdverm•40m ago
most humans do not seek out ways to defy after a certain age

I did not mean to imply active choice by "find", more that they are reliably non-deterministic and have a hard time sticking to, or easy time ignoring, the instructions I did write

prmph•14m ago
Exactly my experience too. I'm now using AI like 25% of the time or less. I always get to a point where I see that agentic coding is making me not want to actually think. There's no way anyone can convince me that that is a superior approach, because every time I took days off from the agents to actually think, I came up with a far superior architecture and code that rendered much of what the agents were hammering away at moot.

Agentic coding is like a drug or slot machine; it slowly draws you in with the implicit promise of getting much for little. The only way it is useful to me now is for very focused tasks where I have spent a lot of time defining the architecture down to the last detail, and the agents are used to fill in the blanks, as it were.

I also think I could write a better agent, and why the big corps have not done so is baffling to me. Just getting the current agents to obey the guidelines in the agent .md files is a struggle. They forget pretty much everything two prompts down the line. Why can't the CLI systematically prompt them to check every time, etc.?

Something tells me the future is about domain-aware agents that help users wring better performance out of the models, based on some domain-specific deterministic guardrails.

mronetwo•1h ago
This sounds dreadful and boring. Like who's interested in writing AGENTS.md...?
danielbln•1h ago
Do you not write documentation for what you build? Or guidelines for others on how to build it?
verdverm•1h ago
Is writing docs for humans using what you built anything like writing docs for what you want an AI to build?

Do you need to write long-ass, hyper-detailed instructions for your coworkers?

danielbln•1h ago
I do not, but I don't do that for LLMs either. Conventions and documentation I write and present are as succinct or lengthy as they need to be, no matter if the recipient is human or machine.
thomasfromcdnjs•1h ago
Find a codebase that you wrote that you enjoy, ask Claude to analyse it and write an agents.md based off of it.
hotpaper75•1h ago
I completely agree with this approach. I just finished an intensive coding session with Cursor, and my workflow has evolved significantly. Previously, I'd simply ask the AI to implement entire features and copy-paste code until something worked.

Now I take a much more structured approach: I scope changes at the component level, have the agent map out dependencies (state hooks, etc.), and sometimes even use a separate agent to prototype the UI before determining the necessary architecture changes. When tackling unfamiliar territory, I pause and build a small toy example myself first before bringing Cursor into the picture.

This shift has been transformative. I used to abandon projects once they hit 5K lines because I'd get lost in complexity. Now, even though I don't know every quirk of my codebase, I have a clear mental model of the architecture and understand the key aspects well enough to dive in, debug, and make meaningful progress across different parts of the application.

What's interesting is that I started very deliberately: slowly mapping out the architecture, deciding which libraries to use or avoid, documenting everything in an agent.md file. Once I had that foundation in place, my velocity increased dramatically. It feels like building a castle one LEGO brick at a time, with Cursor as my construction partner.
h4ck_th3_pl4n3t•1h ago
How about sharing your working prompts then, so that others can learn from them?
danielbln•1h ago
There is no "working prompt". There is context that is highly dependent on the task at hand. Here are some general tips:

- tell it to ask you clarifying questions, repeatedly. it will uncover holes and faulty assumptions and focus the implementation once it gets going

- small features, plan them, implement them in stages, commit, PR, review, new session

- have conventions in place, coding style, best practices, what you want to see and don't want to see in a codebase. we have conventions for python code, for frontend code, for data engineering etc.

- make subagents work for you, to look at a problem from a different angle (and/or from within a different LLM altogether)

- be always critical and dig deeper if you have the feeling that something is off or doesn't make sense

- good documentation helps the machine as well as the human

And the list goes on.

baxtr•1h ago
The way you describe it, vibe coding results are a proxy for a person's ability to plan.

Since vibe coding is so chaotic, rigorous planning is required, which not every developer had to do before.

You could simply "vibe" code yourself, roam, explore, fix.

Is that a fair description of your comment?

verdverm•1h ago
It's far more than planning. You have to "get to know your LLM" and its quirks so you know how to write for it, and when they release new updates (cut-off time or version), you have to do it again. Same for the agentic frameworks, when they change their system prompts and such.

It's a giant, non-deterministic, let's-see-what-works-based-on-our-vibes mess of an approach right now. Even within the models, architecturally, there are recent results that indicate people are trying out weird things to see if they work; it's unclear whether those come from first-principle intuition and hypothesis formation, or just throwing things at the wall to see what sticks.

riskable•1h ago
Coding with an AI is an amplifier. It'll amplify your poor planning just as much as it amplifies your speed at getting some coding task done.

An unplanned problem becomes 10-100x worse than if you had coded things slowly, by hand. That's when the AI starts driving you into Complexity Corner™ (LOL) to work around the lack of planning.

If all you're ever doing is using prompts like, `write a function to do...` or `write a class...` you're never going to run into the sorts of super fucking annoying problems that people using AI complain the most about.

It's soooooo tempting to just tell the AI to make complete implementations of things and say to yourself, "I'll clean that up later." You make so much progress so fast this way it's unreal! Then you hit Complexity Corner™ where the problem is beyond the (current) LLM's capabilities.

Coding with AI takes discipline! Not just knowledge and experience.

danielbln•21m ago
I agree, but would maybe argue that the instructions can be slightly higher level than "write function" or "write class" without ending up in a veritable cluster fuck, especially if guard rails are in place.
fhennig•1h ago
I think you're making a fair comment, but it still irks me that you're quite light on details on what the "correct" approach is supposed to be, and it irks me also because it seems to now be a pattern in the discussion.

Someone gives a detailed-ish account of what they did, and that it didn't work for them, and then there are always people in the comments saying that you were doing it wrong. Fair! But at this point, I haven't seen any good posts here on how to do it _right_.

I remember this post which got a lot of traction: https://steipete.me/posts/just-talk-to-it 8 agents in parallel and so on, but light on the details.

getnormality•1h ago
This dynamic reminds me of an experience I had a year ago, when I went down a Reddit rabbit hole related to vitamins and supplements. Every individual in a supplement discussion has a completely different supplement cocktail that they swear by. No consensus ever seems to be reached about what treatment works for what problem, or how any given individual can know what's right for them. You're just supposed to keep trying different stuff until something supposedly works. One must exquisitely adjust not only the supplements themselves, but the dosage and frequency, and a bit of B might be needed to cancel out a side effect of A, except when you feel this way you should do this other thing, etc etc etc.

I eventually wrote the whole thing off as mostly one giant choose-your-own-adventure placebo effect. There is no end to the epicycles you can add to "perfect" your personal system.

makk•1h ago
Try using Spec Kit. Codex 5 high for planning; Claude Code Sonnet 4.5 for implementation; Codex 5 high for checking the implementation; back to Claude Code for addressing feedback from Codex; ask Claude Code to create a PR; read the PR description to ensure it tracks your expectations.

There’s more you’ll get a feel for when you do all that. But it’s a place to start.

seattle_spring•1h ago
Has the definition of "vibe coding" changed to represent all LLM-assisted coding? Because from how I understand it, what you're talking about is not "vibe coding."
energy123•1h ago

> that's not going to give you great results.

I'm not sure that's what OP is saying. The results per se might be fine, but it was not a fun experience.
tinfoilhatter•1h ago
Still waiting to see that large, impressive, complex, open-source project that was created through vibe coding / vibe engineering / whatever gimmicky phrase they come up with next!
cactusplant7374•48m ago
I suspect you've gotten lucky. I do a lot of planning and prompt editing and have plenty of outrageous failures that don't make any sense given the context.
BeetleB•6m ago
> How do you structure your prompts? How much planning do you do? How do you do that planning? How much review do you do, and how do you do it? Just how hands-on or hands-off are you? What's in your AGENTS.md or equivalent? What other context do you include, when, why, and how? What's your approach to testing, if any? Do you break down big projects into smaller chunks, and if so, how? How fast vs slow are you going, i.e. how many lines of code are you letting the AI write in any given time period? Etc.

It wouldn't be vibe coding if one did all that ;-)

The whole point of vibe coding is letting the LLM run loose, with minimal checks on quality.

krainboltgreene•2h ago
"Our products would be so many mirrors in which we saw reflected our essential nature."

All the way from 1844.

mronetwo•2h ago
> After about 3-4k lines of code I completely lost track of what is going on, and I wouldn't consider this code that I have written, but adding more and more tests felt "nice", or at least reassuring.

> There was some gaslighting, particularly when it misunderstood dap_read_mem32, thinking it is reading from RAM and not the MEM-AP TAR/DRW/RDBUFF protocol, which led to an incredible amount of nonsense.

> Overall I would say it was a horrible experience, even though it took 10 hours to write close to 10000 lines of code, I don't consider this my project, and I have no sense of accomplishment or growth.

Ah yes, we can now mass produce faulty code, we feel even more alienated from our work, the sense of achievement gets taken away, no ownership, barely any skill growth. Wonderful technology. What a time to bring value to the shareholders!

lazide•1h ago
Just wait until you see how it’s being used for robot boyfriends/girlfriends/porn. Just…. Wow.
onion2k•2h ago
Is this what programming is now?

No.

Vibe coding in the sense of handing all responsibility and accountability for the code in a change request over to AI, and then claiming the bad code is the fault of AI, is not a thing. It's still your change request regardless of how you created it. If you write every line, it's yours. If you copy it from SO into your editor and commit it, that's your choice, and therefore your code. If you prompted an LLM to write something, you are responsible for that.

If there is AI slop in your codebase it is only because you put it there.

causal•1h ago
IMO this is why Claude Sonnet is better than ChatGPT: Sonnet is so much better at clarifying, drawing diagrams, writing documentation. It TRIES really hard to keep you in the loop, but of course you can choose to ignore everything it writes and just say "do more" without understanding anything.
yodsanklai•1h ago
Pretty much my experience: LLMs have taken the fun out of programming for me. My coding sessions are:

1. write prompt

2. slack a few minutes

3. go to 1

4. send code for review

I know what the code is doing, how I want it to look eventually, and my commits are small and self-contained, but I don't understand my code as much because I didn't spend so much time manipulating it. Often I spend more time in my loops than if I was writing the code myself.

I'm sure that with the right discipline, it's possible to tame the LLM, but I've not been able to reach that stage yet.

vorticalbox•1h ago
I’ve stopped getting the LLM to code and use it to spitball ideas, solutions, etc. for the issue.

This lets you get a solution plan done, with all the files, and then you get to write the code.

Where I do let it code is in tests.

I write a first “good” passing test, then ask it to create all the others: bad input, etc. It saves a bunch of time, and it can copy and paste faster than I can.
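
A rough sketch of that split in pytest; parse_port is a made-up function under test, the happy-path test is the hand-written one, and the parametrized bad-input cases are the repetitive part an LLM can fill in:

  import pytest

  # Hypothetical function under test.
  def parse_port(value: str) -> int:
      port = int(value)  # raises ValueError for non-numeric input
      if not 0 < port < 65536:
          raise ValueError(f"port out of range: {port}")
      return port

  # The first "good" passing test, written by hand.
  def test_parse_port_valid():
      assert parse_port("8080") == 8080

  # The bad-input cases: repetitive to write, easy to review.
  @pytest.mark.parametrize("bad", ["", "abc", "-1", "70000"])
  def test_parse_port_rejects_bad_input(bad):
      with pytest.raises(ValueError):
          parse_port(bad)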

ipaddr•1h ago
I have felt similar thoughts. You start off with a mental model of how to develop an app based on experience. You can quickly get the pieces working and wire them up.

What gets lost is this: when you normally develop an app over days, you create a mental model as you go along that you take with you throughout the day. In the shower you may connect some dots and reimagine the pieces in a more compelling way. When the project is done you have a mental model of all of the different pieces: thoughts of where to expand, and fears of where you know the project will bottleneck, with a mental note to circle back when you can.

When you vibe code you don't get the same highs and lows. You don't mentally map each piece. It's not a surprise that opening up and reading the code is the most painful thing but reading my own code is always a joy.

danielbln•8m ago
I feel I still get that, just not on the code level but on the systems level. I know which systems exist, how they connect, how the data flows. The lower-level code and implementation details stay foggy, because I didn't write them, but I did design and/or spec the involved systems and data models.
mentalgear•1h ago
This reflects my XP as well: use LLMs for semantic search. Do not trust them with your code.

> Overall I would say it was a horrible experience, even though it took 10 hours to write close to 10000 lines of code, I don't consider this my project, and I have no sense of accomplishment or growth.

> In contrast, using AI to read all the docs (which are thousands of pages) and write helpful scripts to decode the oscilloscope data, create packed C structs from docs, etc., was very nice, and I did feel good after.

esafak•1h ago
Suppose it was 10,000 lines of solid code. That would still require dozens of PRs to be digestible, and the attendant time to review. Our attention is the bottleneck now.
alganet•1h ago
What I'm doing a lot is vibe coding and stashing. Not even a public branch, just git stash the whole thing the LLM writes.

Also, I stack the stash. When I vibe code, I pop it, let it work on its own mess, then I stash it again.

One project has almost 13,000 lines of vibe mess, all stashed.

One good thing is that the stash builds. It's just that I don't want to release more code than I can read. It's a long review queue that is pre-merged somehow.

Once in a while I pick something from there, then I review it and integrate it into the codebase more seriously. I don't have the throughput to review it all, and not all projects can be yolo'd.

afarviral•1h ago
How can you maintain that much stashed code between commits? I assume you refer to it and manually code using the "mess" as inspo? I don't know how stash works much beyond stashing things I might need later so I can pull from remote.
alganet•1h ago
It works quite well for me.

I don't use it as inspiration. It's like I said: code that is not reviewed yet.

It takes the idea of 50 juniors working for you one step further. I manage the workflow in a way that the code they wrote already merges and builds before I review it. When it doesn't, I delete it from the stash.

I could keep a branch for this. Or go even deeper on the temptation and keep multiple branches. But that's more of my throughput I have to spend on merging and ensuring things build after merging. It's only me. One branch, plus an extra "WIP". Stash is perfect for that.

Also, it's one level of stashing. It's stacked in the sense that it keeps growing, but it's not several `git stash pop`s that I do.

One thing that helps is that I already used this to keep stuff like automation for repos that I maintain. Stuff the owner doesn't want or isn't good enough to be reused. Sometimes it was hundreds of lines, now it's thousands.

verdverm•1h ago
I just force push the same commits, so you won't know if it was me or the ai that wrote various parts /s

I actually lead my commit messages with (human) or (agent) now

You could try using a git worktree that never gets pushed

alganet•1h ago
I prefer working with the one commit per PR philosophy, linear history and every commit buildable, so I always force push (to the PR branch, but never to master). Been doing it for ages. Bisecting this kind of history is a powerful tool.
verdverm•1h ago
yup, this is my preferred method as well, but I will wait to squash the commits at PR merging time, depending on project / code host

I have one client where force push and rebase are not allowed, knots of history are preferred for regulatory compliance, so I'm told. Bisecting is not something I've heard done there before

alganet•50m ago
Squashing works great for bisecting.

I like rebasing! It works great for bisecting, reverting (squash messes that up), almost everything. It just doesn't play well with micro commits (which unfortunately have become the norm).

The force pushing to the PR branch is mostly a consequence of that rebase choice, in order to not pollute the main branch. Each change in main/master must be meaningful and atomic. Feature branches are another way to achieve this, but there are lots of steps involved.

hatthew•57m ago
why not make many local commits and then squash before rebase/push/merge?
tcdent•1h ago
It's to be expected that HN would have a contrarian take, but I find it ironic how common criticism of technological innovation is in an industry that is rooted fundamentally in technological innovation.
xomiachuna•1h ago
The stakes are higher than with any previous controversial technology/bikeshed. This is not "react bad", but rather "push for trusting a black box with engineering bad".
immibis•1h ago
The saying is "don't get high on your own supply" for a reason.
tcdent•32m ago
That saying applies to one thing: drugs. It's not something you can extrapolate across industries.

What is a programming language in the first place if not a programmer satiating their own need for a better tool?

stevage•1h ago
This is reassuring. I started vibe coding a side project and quickly got repulsed by the feeling of disconnection and lack of ownership. I put it on the shelf for a bit, then came back and started over, writing all the code myself (but with a bit of VS Code autocomplete and a lot of assistance from ChatGPT). Super satisfying.
iammjm•1h ago
I feel for people who say that "AI has taken the fun out of programming" for them, but at the same time I think to myself: is it about doing, or is it about getting things done? Like, I imagine someone in the past loved their job walking each night through their city, lighting up the gas-powered street lights. And then one day someone else implemented electric street lights, and the first person lost the job they loved. But in the end, it's about providing light to the city streets, no? For the great majority of work, it is not about fun, but about doing something other people need or want. For me, AI allows me to realize my ideas and get things done. Some of it might be good, some of it might be bad. I put in at least as much time, attention and effort as the "real" programmers do, but my time goes into thinking and precisely defining what I want, cutting it up into smaller logical modules, testing, identifying and fixing bugs, iterating all the time.
minimaxir•1h ago
How people derive utility varies from person to person and I suspect is the root cause of most AI generation pipeline debates, creative and code-wise. There are two camps that are surprisingly mutually exclusive:

a) People who gain value from the process of creating content.

b) People who gain value from the end result itself.

I personally am more of a (b): I did my time learning how to create things with code, but when I create things such as open-source software that people depend on, my personal satisfaction from the process of developing is less relevant. Also, getting frustrated with code configuration and writing boilerplate code is not personally gratifying.

dlisboa•1h ago
You like having the painting, you just don't like to paint. You can think of a painting and have it appear before you.

That's OK, but surely you can see how painters wouldn't enjoy that in the slightest.

grim_io•1h ago
You can still enjoy painting, but there is no guarantee that you will be paid for it.
brandall10•1h ago
Historically, many master painters used teams of assistants/apprentices to do most of the work under their guidance, with them only stepping in to do actual painting in the final details.

Similar with famous architects running large studios, mostly taking on a higher level conceptual role for any commissions they're involved in.

Traditionally in software (20+ years ago) architects typically wouldn't code much outside of POC work, they just worked with systems engineers and generated a ton of UML to be disseminated. So if we go back to that type of role, it somewhat fits in with agentic software dev.

gridspy•59m ago
Sure, but you currently cannot teach AI models to generate novel art in the same way that you can teach a human apprentice.
brandall10•56m ago
I was addressing the 'enjoyment' factor, when at the end of the day, esp. at scale, it's a job to produce something someone paid for.
dlisboa•52m ago
That's where we're in marked disagreement. "It's just a way to get paid" reduces all human knowledge to a monetary transaction, so the value of any type of learning is only what is being paid for it.

Thankfully the people that came before us didn't see it that way, otherwise we wouldn't even have anything to program on.

glenstein•1h ago
>For the great majority of work, it is not about fun, but about doing something other people need or want

The essence of this, I think, is that a sense of craftsmanship and appreciation for craft often goes hand in hand with the ethos of learning and understanding what you are working with.

So there is the issue of who rightly deserves to get the satisfaction out of getting things done. But there's also the fact that satisfaction goes hand in hand with craft, with knowledge. And that informs a perspective of being able to do things.

I finally read Adrift, 76 Days at Sea, a fantastic book about surviving in a life raft while drifting across the ocean. But the difference between life and death was an individual with an abundance of practical survival, sailing and navigation knowledge. So there's something to the idea of valuing the ability to call on hard earned deep knowledge, and a relationship to knowledge that doesn't abstract it away.

Almost paralleling questions of hosting your own data or entrusting it to centralized services.

dangus•48m ago
Craft is in the eye of the beholder.

I’ve never even been able to make a mobile app before. My skillset was just a bit too far off and my background more in the backend.

Now I have a complete app thanks to AI. And I do feel a sense of accomplishment.

For some people building furniture from IKEA is an accomplishment. But a woodworker building an IKEA piece isn’t going to feel great about it.

It sounds like the person who made this repo didn’t need help but used the help anyway and had a bad time.

jackdoe•20m ago
> It sounds like the person who made this repo didn’t need help but used the help anyway and had a bad time.

tbh, it would've taken me 10x the time. The docs are not very obvious, the RP2350 is fairly new, and its RISC-V is not used as much and is an afterthought; if I was writing it for ARM it would've been much easier, as the ARM SWD docs are very clear.

I am also new to the pico world.

It is not easy to make myself do something when I know it's going to take 10 times longer and it's going to be 10 times harder, even if I know I will feel 10 times better.

You know when they say "find what for you is play and for others is work"? well..

allenu•18m ago
> I’ve never even been able to make a mobile app before. My skillset was just a bit too far off and my background more in the backend.

> Now I have a complete app thanks to AI. And I do feel a sense of accomplishment.

AI is such an existential threat to many of us since we value our unique ability to create things with our skills. In my opinion, this is the source of immediate disgust that a lot of people have.

A few months ago, I would've bristled at the idea that someone was able to write a mobile app with AI as that is my personal skillset. My immediate reaction when learning about your experience would've been, "Well, you don't really know how to do it. Unlike myself, who has been doing it for many, many years."

Now that I've used AI a bit more, like yourself, I've been able to do more that I wasn't able to before. That's changed my perspective of how I look at skills now, including my own. I've recognized that AI is devaluing our unique skillsets. That obviously doesn't feel great, but at the same time I don't know if there's much to be done about that. It's just the way things are now, so the best I can do is lean into the new tools available and improve in other ways.

chickensong•46m ago
Re: craft vs git 'er dun, I don't think these have to be mutually exclusive. AI-boosted development is definitely different from the old ways, but the craft approach is a mindset and AI is just another tool.

In some ways, I find that agent-assisted development opens doors to producing even higher quality code. Little OCD nitpicks, patterns that appear later in development, all the nice but not really necessary changes...these time-consuming refactors are now basically automated in 1-shot by an agent.

People who rush to ship the minimum were writing slop long before LLMs. At least now we have these new tools to refactor the slop faster and easier than ever.

TomasBM•1h ago
Yeah, this resonates with me.

As much as I dislike not having a good mental model of all the code that does things for me, ultimately, I have to concede the battle to get things done. This is not that different from importing packages that someone else wrote, or relying on codebases of my colleagues.

That said, even if I have to temporarily give up on understanding, I don't believe there's any reason to completely surrender control. I'll call a technician when things need fixing right away, but that doesn't mean I shouldn't learn (some of) the fixes myself.

gyomu•1h ago
> is it about doing, or is it about getting things done?

No, this is a false dichotomy and dangerous slippery-slope thinking.

It’s about building a world where we can all live in and find meaning, joy, dignity, and fulfillment, which requires a balance between pursuing the ends and preserving the means as worthwhile human pursuits.

If I am eating a delicious meal but the people preparing it had a miserable time, or it was prepared entirely by robots controlled by nefarious people using the profits to harm society, I don’t want it.

Human society and civilization is for the benefit of humans, not for filling checkboxes above all else.

rustystump•55m ago
I guess you are a vegan too, right? I get this take, but it is naive. Not everything must pass the morality purity test.

Did mass processed-food production stop people from cooking or enjoying human-made food? No, it did not. The same is true in almost all domains where a form of industrialization happens.

nullgeo•3m ago
> Did mass processed food production stop people from cooking or enjoying human made food?

Yeah, but what if I'm getting pitted against coworkers who are vibe coding and getting things done faster than I am? Some people write code with pride because it's their brainchild. AI completely ruins the fun for those people when they have to compete against their coworkers for productivity.

I'm not in disagreement with you or the GP comment, but it is super hard to make nuanced comments about GenAI.

prmph•36m ago
I was a bit disappointed by your response because, from the way you started it, I was expecting a stronger argument. I do agree with your point, but I think a key aspect of the false dichotomy is that there is evidence that AI is not actually "getting things done".
denysvitali•58m ago
I truly enjoy programming, but the most frustrating part for me was that I had many ideas and too little time to work on everything.

Thanks to AI I can now work on many side projects at the same time, and most importantly just (as you mentioned) get stuff done quickly, most of the time with good enough (or sometimes excellent) results.

I'm both amazed and a bit sad, but the reality is that my output has increased significantly - although the quality might have dropped a bit in certain areas.

Time is limited, and if I can increase my results in the same way as the electric street lights, I can simply look back at the past and smile that I lived in a time where lighting up gas-powered street lights was considered a skill.

As you perfectly put it, it's not about the process per se, it's about the result. And the result is that now the lights are only 80% lit. In a few months / years we'll probably reach the threshold where the electric street lights will be brighter than the gas-powered ones, and you'd be a fool if you decide to still light them up one by one.

shepherdjerred•52m ago
I’m in the same bucket. I absolutely love programming. What I love even more is being able to do all of these projects and fast-forward through them.
tobyjsullivan•56m ago
This reminds me of the debate around Soylent when that came out. Are meals for enjoyment, flavour, and the experience or are they about consuming nutrients and providing energy?

I’d say that debate was largely philosophical with proponents on both sides. And really the answer might be that both things are true for different people at different times. Though I also observe that soylent did not, by and large, end up replacing meals for the vast majority.

zahlman•48m ago
> Like I imagine someone in the past loved their job walking each night through their city, lighting up the gas-powered street lights. And then one day someone else implemented electric street lights, and the first person lost the job they loved. But in the end, its about providing light to the city streets, no?

Lighting or extinguishing a gas lamp does not allow for creative expression.

Writing a program does.

The comparison is almost offensive.

> For the great majority of work, it is not about fun, but about doing something other people need or want.

Some of us write code for reasons that are not related to employment. The idea that someone else might find the software useful is speculative, and perhaps an expression of hubris; it's not the source of motivation.

> I put at least as much time, attention and effort as the "real" programmers do, but my time goes into thinking and precisely defining what I want, cutting it up into smaller logical modules, testing, identifying and fixing bugs, iterating all the time.

So does the time of the "real programmers".

aetherspawn•15m ago
Not everyone needs creative expression to enjoy their job; sometimes it's about the process (salespeople, mechanics, etc.).
tjr•44m ago
I am reminded of Dijkstra's remark on Lisp, that it "has assisted a number of our most gifted fellow humans in thinking previously impossible thoughts."

(I imagine that this is not limited to Lisp, though some languages may yield more or less results.)

If we consider programming entirely as a means to an end, with the end being all that matters, we may lose out on insights obtained while doing the work. Whether those insights are of practical value, or economic value, or of no value at all, is another question, but I feel there is more likely to be something gained by actually doing the programming, compared to actually lighting the street lamps.

(Of course, what you are programming matters too. Many were quick to turn to AI for "boilerplate"; I doubt many insights are found in such code.)

allenu•44m ago
Programming really is fascinating as a skill because it can bring so much joy to the practitioner on a day-to-day problem-solving level while also providing much value to companies that are using it to generate profit. How many other professions have this luxury?

As a result, though, I think AI taking over a lot of what we're able to do has the dual issue of making your day-to-day rough both as a personally enriching experience and as a money-making endeavor.

I've been reading The Machine That Changed the World recently and it talks about how Ford's mass production assembly line replaced craftsmen building cars by hand. It made me wonder if AI will end up replacing us programmers in a similar way. Craftsmen surely loved the act of building a vehicle, but once assembly lines came along, it no longer made sense to produce cars in that fashion since more unskilled labor could get the job done faster and cheaper. Will we get to a place where AI is "good enough" to replace most developers? You could always argue that craftspeople could generate better code, but I can see a future where that becomes a luxury and unnecessary if tools do most of the work well enough.

yodsanklai•30m ago
> is it about doing, or is it about getting things done?

It's both. When you climb a mountain, the joy is reaching the summit after the hard hike. The hike is hard but also enjoyable in itself, and makes you appreciate reaching the top even more.

If there's a cable car or a road leading to the summit, the view may still be nice, but I'll go hiking somewhere else.

Vegenoid•22m ago
Making things is often not just about making the thing right in front of you, but about building the skills to make bigger and better things. When you consider the long view, the struggle that makes it harder to make the thing at hand is well worth it. We have long considered taking shortcuts that don’t build skills to be detrimental in the long term. This pretty much only stops being the case when the thing you are shortcutting becomes totally irrelevant. We have yet to see how the atrophying of programming skills will affect our collective ability to make reliable and novel software.

In my experience, I have not seen much new software that I’m happy about that is the fruit of LLMs. I have experienced web apps that I’ve been using for years getting buggier.

teaearlgraycold•11m ago
I feel that too much reliance on LLMs will leave engineers with at best a linear increase in skill over time, compared to the exponential returns of accumulated knowledge. For some I fear they will actually get negative returns when using AI.
dvfjsdhgfv•21m ago
The correct analogy would be that half of the lights randomly wouldn't light up, and then you'd have to go out anyway, but in a hurry and only to certain ones, just to discover you need to go back 20 minutes later because there is another problem with the same light; and your boss would expect you to do everything much faster, and you'd end up even more frustrated.
agumonkey•2m ago
so far the economy is not built on getting things done alone

the promise of progress is that not having to do chores will make us happier; it's partly true, and partly false

people hate doing too much of things that are too harmful; beside that, if you need me to redo your shelves, or help you get milk in the morning, i'm happy to oblige

but back to the point of things getting done and the march of progress: we're entering a potential Kurzweil runaway, where computers understand and operate on the world faster, better and longer than us, leaving us with nothing to do. so we'll see, but i'm not betting a lot on that; it's gonna be toxic (big 4 becoming our main dependency, instability and a potential depression frenzy)

look at how often people say "i wanna do something that matters", "i wanna help others".. it's a bit strange to say this because we spend our lives maintaining the world to be comfortable, but having everything done for you all the time might not be heaven on earth (even in the ideal best case)

raphman•1h ago
Given the code has been completely vibe-coded, what does this mean in practice?:

> Copyright (c) 2025

Whose copyright? IIRC, it is consensus that AI cannot create copyrightable works. If the author does not own the copyright, can they add a legally binding license? If not, does this have any legal meaning?:

> IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY

bdangubic•1h ago
> After about 3-4k lines of code I completely lost track of what is going on

full stop here, there is nothing you can write after this…

abathologist•1h ago
A key -- perhaps THE key -- remark here, IMO is the following:

> I do want to make things, and many times I don't want to know something, but I want to use it

This confesses the desire to make, to use, and to make use of, without ANY substantive understanding.

Of course this seems attractive for some reasons, but it is a wrong, degenerative way to be in the world. Thinking and being belong together. Knowing and using are two dimensions of the same activity.

The way of these tools is a making without understanding, a using without learning, a way of being that is thoughtless.

There's nothing preventing us from thoughtful, rigorous, enriching use of generative ML, except that the systems we live and work in don't want us to be thoughtful and enriched and rigorous. They want us pliant and reactive and automated and sloppy.

We don't have to bend to their wants tho.

glenstein•1h ago
>Of course this seems attractive for some reasons, but it is a wrong, degenerative way to be in the world.

I share your sense that there's something psychologically vivid and valuable in that passage, but it's part of an implicit bargain that's uncontroversial in other respects - I don't have to be an electrician to want a working light switch. I don't personally inspect elevators or planes or, in many cases, food. It's the basic bargain of modernity.

I suppose, to your point, the important distinction here is that I wouldn't call myself an electrician if my relationship to the subject matter doesn't extend beyond the desire to flip a switch.

abathologist•44m ago
I'd argue that you understand what a light switch does well enough to use it effectively for its purpose.

When we move from just making use of something to using something to make with, that is when we should have a deeper understanding, I think.

Does that sound right?

> the important distinction here is that I wouldn't call myself an electrician if my relationship to the subject matter doesn't extend beyond the desire to flip a switch.

Yeah, that seems right to me!

hackmack10•1h ago
This is very well said. I thought I was just burned out over the past several months. Truth is, I'm just reviewing AI code slop all day and I fucking hate it. It's exhausting.
rzzzt•1h ago
You can ask for Mermaid syntax and receive nicely formatted block diagrams.
lux_sprwhk•1h ago
I don't get it. Any time I step into a human-dev project, I feel exactly the same. Whenever a program gets large enough to be useful, it's too complex for anyone to understand without putting some work into it.

It's like spaghetti code only existed after 2022.

kpil•1h ago
I think that the important conclusion to draw from this is that publicly available code is not created or even curated by humans anymore, and it will be fed back into data sets for training.

It's not clear what the consequences are. Maybe not much, but there's not that much actual emergent intelligence in LLMs, so without culling by running the code there seems to be a risk that the end result is a world full of even more nonsense than today.

This already happened a couple of years ago for research on word frequency in published texts. I think the consensus is that there's no point in collecting anymore since all available material is tainted by machine generated content and doesn't reflect human communication.

hackncheese•1h ago
This is a really interesting point; I wonder if this will have a similar effect to model poisoning.
johnnyApplePRNG•1h ago
I think we'll be fine. AIs definitely generate a lot of garbage, but then they have us monkeys sifting through it, looking for gems, and occasionally they do drop some.

My point is, AI generated code still has a human directing it the majority of the time (I would hope!). It's not all bad.

But yea, if you're 12 and just type "yolo 3d game now" into Claude Code, I'd say I'd be worried about that, but then I immediately realized no... that'd be awesome.

So yea, I think we'll be fine.

mmaunder•1h ago
“I fucking hate this.”

This is how apologetic and hateful towards AI you need to be to rank on HN if you admit to having AI do any heavy lifting. It’s the same in the art community. Take any credit for the creation, express any joy in the process, and you will be crucified. This comment will be found here, at the very bottom, in an extremely light shade of grey. But know that those of us who enjoy the process, who claim our creations as our own, and who are creating novel and useful software, while pushing the boundaries of this exciting new capability, are quiet and growing rapidly in number. We’re quiet because we’re just sick of your bullshit.

bobbyblackstone•1h ago
this is why, if you want to use the machine to code, you need to plan, build guards, provide scope and desirables, and test, retest, and cross-reference everything.

the machine codes, then stops and checks the rules, backtests, and then continues.

as with all progression, structure matters most.

also, spaghetti code is the future. adapt or die tbh.

"huhuhu look at his spaghetti code, muppet " .... "but it works and is 3 months ahead of schedule ... ." ... "oh" ... "and there is documentation"

_pdp_•1h ago
Let me play the devil's advocate here for a brief moment. I suspect that developers will adapt to the new norms.
jackpepsi•56m ago
I resonate with what the author said about losing track of the Mental Model. I think that's the key to enjoying the process or not: building up and utilising that mental model (my own understanding) is the key to finding software development joyful.

Specifically:

"Easy but boring project" case: For projects where I am already familiar with a strong and sensible architecture then I find AI enjoyable to work with as a simple speed boost. I know exactly what I'm asking AI to do at every stage and can judge it's results well. It's not that interesting to me to code these components myself because I've done it before several times. My mental model of the problem space and a good solution is complete. I get some satisfaction from using my mental model.

"Challenging but interesting project" case: For projects where I don't yet understand the best architecture then I will inevitably ask AI to connect Component A to Component B without yet understanding that there should be a Component C. Because I don't have the understanding of the problem space. The thing is before AI I may have made this mistake myself, I just would have had the satisfaction of learning at the same time.

Given enough time with these types of projects, I basically write them twice: a first pass making it work, albeit as a huge mess, while building a mental model of the real problem space along the way; then a second pass refactoring and getting it right, now building a mental model of a good solution. Only after two passes would it be a project I feel is done correctly and am happy (joyful) to publish.

I have found AI enables you to get the first pass working much quicker, but without the learning of the mental model along the way to inform how to make the second pass properly. So if I want the challenging project to be joyful, I still need to invest the time to learn from the first pass.

And I enjoy that specific learning task more if I do it iteratively as the AI and I build together; it's less enjoyable if I sit down afterwards and only inspect the code.

So if I want a challenging project to be joyful, I have to keep investing time in that first phase to do the learning. AI just gives you the opportunity to produce a messy working prototype without learning anything, which may or may not make sense for the business side of things.

wessorh•43m ago
after reading this I purchased fuck-ai.com and decided to write a little website to accumulate writings such as the OP; alas, the AI-written code isn't done yet. Gotta say, I feel similar to what the author experienced.
cadamsdotcom•42m ago
The solution is to ground your model.

In code, one way I’ve found to ground the model and make its output trustworthy is test-driven development.

Make it write the tests first. Make it watch the tests fail. Make it assert to itself that they fail for the RIGHT reason. Make it write the code. Make it watch the tests pass. Learn how to provide it these instructions and then take yourself out of the loop.

When you’re done you’ve created an artefact of documentation at a microscopic level of how the code should behave, which forms a reference for yourself and future agents for the life of the codebase.
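As a minimal sketch of that loop, assuming Python and pytest (the slugify function and its contract are invented purely for illustration, not taken from any project mentioned here): the test file is written first and fails for the right reason, because the import does not yet resolve; only then is the implementation written to make the tests pass.

    # test_slugify.py -- written BEFORE the implementation exists.
    # The slugify function and its behavior are hypothetical examples.
    from slugify import slugify  # fails until slugify.py is written:
                                 # the tests fail first, for the right reason.

    def test_lowercases_and_hyphenates():
        assert slugify("Hello World") == "hello-world"

    def test_strips_punctuation():
        assert slugify("Rust, Go & C++!") == "rust-go-c"

    # slugify.py -- written only after watching the tests above fail.
    import re

    def slugify(text: str) -> str:
        """Lowercase, drop punctuation, and join words with hyphens."""
        return "-".join(re.findall(r"[a-z0-9]+", text.lower()))

Once the agent has been taught to follow that sequence on its own, the tests it leaves behind are exactly the micro-level documentation described above.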

mbesto•40m ago
> I don't consider this my project, and I have no sense of acomplishment or growth.

Trigger warning incoming... if you are in a for-profit company, does the business really care whether you feel accomplished as long as you are producing code? As an analog - the assembly line worker on a highly automated Tesla assembly line is essentially a replaceable commodity at this point.

> The main issue is taste, when I write code I feel if its good or bad, as I am writing it, I know if its wrong, but using claude code I get desensitized very quickly and I just can't tell, it "reads" OK, but I don't know how it feels. In this case it happened when the code grew about 4x, from 1k to 4k lines. And worse of all, my mental model of the code is completely gone, and with it my ownership.

Does the code work? If so, why does any of this matter?

In an age of automated manufacturing, I've noticed more and more independent wood workers. This is okay - but you aren't going to supply the world's furniture needs with thousands or hundreds of thousands of artisan wood workers.

DauntingPear7•3m ago
But a chair cannot be copied from one home to another. Code uniquely can be. Good (perhaps artisanal) code is useful and better for everyone. The foundational improvements by a single person can get magnified throughout a project, while with other crafts the quality of their output does not have the same effect.
cmpalmer52•34m ago
I haven’t done any serious web coding in years, so when I needed a little web page dashboard, I thought I’d do it 100% vibe coded.

Problem statement: We have four major repos spanning two different Azure DevOps servers/instances/top-level accounts. To check the status of pull requests required a lot of clicks and windows and sometimes re-logging in. So we wanted a dashboard customized to our needs that puts all active pull requests on each repo into a single page, links them to YouTrack, links them to the Azure DevOps pages, auto-refreshes, and flags them by needing attention for approval, merge conflicts, and unresolved comments. And it would use PATs for access that are only stored locally and not in the code or repo.
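For a sense of what the dashboard has to do per repo, here's a minimal sketch in Python (the real tool is a web page, so this is only illustrative). The organization, project, and repository names are made-up placeholders; the endpoint shape and PAT-as-basic-auth usage follow the public Azure DevOps REST API, and the PAT is read from the environment rather than stored in code or the repo, as described above.

    # pr_dashboard.py -- sketch of one call such a dashboard would make.
    # "contoso", "platform", and "billing-service" are hypothetical names.
    import os
    import requests

    def active_pull_requests(org: str, project: str, repo: str) -> list[dict]:
        """List active PRs for one repo; the PAT stays in the environment."""
        pat = os.environ["AZDO_PAT"]
        url = (f"https://dev.azure.com/{org}/{project}"
               f"/_apis/git/repositories/{repo}/pullrequests")
        resp = requests.get(
            url,
            params={"searchCriteria.status": "active", "api-version": "7.0"},
            auth=("", pat),  # Azure DevOps PATs go in the password slot of basic auth
            timeout=10,
        )
        resp.raise_for_status()
        return resp.json()["value"]

    if __name__ == "__main__":
        for pr in active_pull_requests("contoso", "platform", "billing-service"):
            print(pr["pullRequestId"], pr["title"], pr["status"])

Calling the same function once per repo on each of the two servers/instances and merging the results gives the single-page view the problem statement asks for.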

AI used: I began by describing the project goals to ChatGPT 5 and having it suggest a basic architecture. Then I used the Junie agent in JetBrains’ WebStorm to develop it. I gave it the ChatGPT output and told it to create a README and the project guidelines. Then I implemented it step by step (basic page layout, fill with dummy data, add Azure API calls, integrate with YouTrack, add features).

By following this step-by-step iteration, almost every step was a one-shot success. Only once that I remember did it do something “wrong”, but sometimes I caught it being repetitive or inconsistent, so I added a “maximize code reuse and put all configuration in one place” step.

After about three hours, some of which was spent asking it to code to my standards or change the look and feel, I had a very full-featured application with three different views: the big picture, PRs that need my attention, and active PRs grouped by YouTrack items. I gave it to the team; they loved it and suggested a few new features. Another hour with the Junie Agent and I had incorporated all the suggestions. Now we all use it every day.

I purposefully didn’t hand-edit a single line of code. I did read the code and suggested improvements, but other than that, I think a user with no programming experience could have done it (particularly if they asked ChatGPT on the side, “Now what?”). And it looked a helluva lot better than it would have if I had coded it, because I'm rusty and lazy.

Overall, it was my biggest AI coding success story. We’ve been experimenting with AI bug triage, creating utility functions, and adding tests to our primary apps (all .NET MAUI), but with a huge code base it often misses things or makes bad assumptions.

But this level of project was a near-perfect match of capability to execution. I don’t know how much my skills helped me manage the project, but I know that I didn’t write the code. And it was kinda fun.

andrewstuart•29m ago
>> I fucking hate this.

>> And I can not help, but feel dusgust and shame. Is this what programming is now?

I love it. LLM assisted programming lets me do things I would never have been able to do on my own.

Never a greater leap in programming than the LLM.

No doubt the process is messy and uncertain and all about wild goose chases but that’s improving with every new release of the LLMs.

If you understand everything the LLM wrote then you’re holding it wrong.

I don’t hear developers disowning their work because they didn’t write the machine code that the compiler and linker output. LLM assisted programming is no different.

I’m excited about it and can’t wait to see where it all goes.

anonymousiam•29m ago
Reading through to the end of the README.md on the GitHub page, I noticed that he's claiming copyright on the code, even though he admits that 3/4 of it is machine generated, and he doesn't understand it all.

It reminded me of the legal challenges for copyright of content that was not created by a human. In every case that I'm aware of so far, courts have ruled that content that wasn't created by a person cannot be copyrighted.