frontpage.

Eight years of wanting, three months of building with AI

https://lalitm.com/post/building-syntaqlite-ai/
204•brilee•4h ago

Comments

PaulHoule•4h ago
Note: I believe this one because of the amount of elbow grease that went into it: 250 hours! Based on smaller projects I’ve done, I’d say this post is a good model for what a significant AI-assisted systems programming project looks like.
simondotau•2h ago
This essay perfectly encapsulates my own experience. My biggest frustration is that the AI is astonishingly good at making awful slop which somehow works. It’s got no taste, no concern for elegance, no eagerness for the satisfyingly terse. My job has shifted from code writer to quality control officer.

Nowhere is this more obvious in my current projects than with CRUD interface building. It will go nuts building these elaborate labyrinths and I’m sitting there baffled, bemused, foolishly hoping that THIS time it would recognise that a single SQL query is all that’s needed. It knows how to write complex SQL if you insist, but it never wants to.

But even with those frustrations, damn it is a lot faster than writing it all myself.
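The CRUD complaint above can be made concrete with a sketch: the multi-layer create/then-check/then-update scaffolding an assistant tends to generate often collapses into one SQL statement. The schema and `record_visit` helper here are hypothetical, purely for illustration, using Python's bundled sqlite3 (the `ON CONFLICT` upsert needs SQLite 3.24+):

```python
import sqlite3

# Hypothetical schema standing in for whatever the CRUD layer manages.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT, visits INTEGER DEFAULT 0)"
)

def record_visit(conn, user_id, name):
    # One UPSERT replaces the separate insert / exists-check / update code paths
    # an assistant will happily spread across three handlers.
    conn.execute(
        """
        INSERT INTO users (id, name, visits) VALUES (?, ?, 1)
        ON CONFLICT(id) DO UPDATE SET visits = visits + 1
        """,
        (user_id, name),
    )

record_visit(conn, 1, "ada")
record_visit(conn, 1, "ada")
print(conn.execute("SELECT visits FROM users WHERE id = 1").fetchone()[0])  # prints 2
```

The point isn't this particular query; it's that the database can often do in one statement what the generated application layer re-implements in loops.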

pizzafeelsright•36m ago
Trim your scope and define your response format prior to asking or commanding.

Most of my questions are "in one sentence respond: long rambling context and question"

DareTheDev•2h ago
This is very close to my experience. And I agree with the conclusion: I would like to see more of this.
bvan•2h ago
This is a very insightful post. Thanks for taking the time to share your experience. AI is incredibly powerful, but it’s no free lunch.
Aurornis•2h ago
Refreshing to see an honest and balanced take on AI coding. This is what real AI-assisted coding looks like once you get past the initial wow factor of having the AI write code that executes and does what you asked.

This experience is familiar to every serious software engineer who has used AI code gen and then reviewed the output:

> But when I reviewed the codebase in detail in late January, the downside was obvious: the codebase was complete spaghetti. I didn’t understand large parts of the Python source extraction pipeline, functions were scattered in random files without a clear shape, and a few files had grown to several thousand lines. It was extremely fragile; it solved the immediate problem but it was never going to cope with my larger vision.

Some people never get to the part where they review the code. They go straight to their LinkedIn or blog and start writing (or having ChatGPT write) posts about how manual coding is dead and they’re done writing code by hand forever.

Some people review the code and declare it unusable garbage, then also go to their social media and post how AI coding is completely useless and they’re not going to use it for anything.

This blog post shows the journey that anyone not in one of those two vocal minorities is going through right now: A realization that AI coding tools can be a large accelerator but you need to learn how to use them correctly in your workflow and you need to remain involved in the code. It’s not as clickbaity as the extreme takes that get posted all the time. It’s a little disappointing to read the part where they said hard work was still required. It is a realistic and balanced take on the state of AI coding, though.

vasco•1h ago
Those extreme takes are made mostly for clicks, or are exaggerated second-hand so that the "other side's" opinion sounds dumber than it is, to "slam the naysayers". Most people are meh about everything, not on the extremes, so to pander to them you mock the extremes and make them seem more common than they are. It's just online populism.
libraryofbabel•1h ago
Agree. This is such a good balanced article. The only things that still make the insights difficult to apply to professional software development are: this was greenfield work and it was a solo project. But that’s hardly the author’s fault. It would however be fantastic to see more articles like this about how to go all in on AI tools for brownfield projects involving more than one person.

One thing I will add: I actually don’t think it’s wrong to start out building a vibe coded spaghetti mess for a project like this… provided you see it as a prototype you’re going to learn from and then throw away. A throwaway prototype is immensely useful because it helps you figure out what you want to build in the first place, before you step down a level and focus on closely guiding the agent to actually build it.

The author’s mistake was that he thought the horrible prototype would evolve into the real thing. Of course it could not. But I suspect that the author’s final results, when he did start afresh and build with closer attention to architecture, were much better because he had learned more from that first attempt about the requirements for what he wanted to build.

airstrike•52m ago
It's a very accurate and relatable post. I think one corollary that's important to note to the anti-AI crowd is that this project, even if somewhat spaghettified, will likely take orders of magnitude less time to perfect than it would for someone to create the whole thing from scratch without AI.

I often see criticism of AI-driven projects that assumes the codebase is crystallized in time, when in fact humans can keep iterating on it with AI until it is better. We don't expect an AI-less project to be perfect at 0.1.0, so why expect that from AI? I know the answer is that the marketing and Twitter/LinkedIn slop make those claims, but it's more useful to see past the hype and investigate how to use these tools, which are invariably here to stay.

kaoD•35m ago
> this project, even if somewhat spaghettified, will likely take orders of magnitude less time to perfect than it would for someone to create the whole thing from scratch without AI

That's a big leap of faith and... kinda contradicts the article as I understood it.

My experience is entirely opposite (and matches my understanding of the article): vibing from the start makes you take orders of magnitude more time to perfect. AI is a multiplier as an assistant, but a divisor as an engineer.

airstrike•22m ago
vibing is different from... steering AI as it goes so it doesn't make fundamentally bad decisions
zahlman•50m ago
I feel like recently HN has been seeing more takes like this one and at least slightly less of the extremist clickbaity stuff. Maybe it's a sign of maturity. (Or maybe it's just fatigue with the cycle of hyping the absolute-latest model?)
senko•20m ago
It takes time for people to go through these experiences (three months, in OP's case), and LLMs have only been reasonably good for a few months (since circa Nov'25).

Previously, takes were necessarily shallower or not as insightful ("worked with caveats for me, ymmv") - there just wasn't enough data - although a few have posted fairly balanced takes (@mitsuhiko for example).

I don't think we've seen the last of hypers and doomers though.

yojo•47m ago
+1

I’ve been driving Claude as my primary coding interface the last three months at my job. Other than a different domain, I feel like I could have written this exact article.

The project I’m on started as a vibe-coded prototype that quickly got promoted to a production service we sell.

I’ve had to build the mental model after the fact, while refactoring and ripping out large chunks of nonsense or dead code.

But the product wouldn’t exist without that quick and dirty prototype, and I can use Claude as a goddamned chainsaw to clean up.

On Friday, I finally added a type checker pre-commit hook and fixed the 90 existing errors (properly, no type ignores) in ~2 hours. I tried full-agentic first and it failed miserably; then I went through error by error with Claude, we tightened up some existing types, fixed some clunky abstractions, and got a nice, clean result.

AI-assisted coding is amazing, but IMO for production code there’s no substitute for human review and guidance.
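The hook described above can be sketched as a pre-commit config. The comment doesn't name the checker, so the choice of mypy (and a Python codebase) is an assumption on my part; swap in whatever type checker your stack uses:

```yaml
# .pre-commit-config.yaml -- hypothetical setup, assuming Python + mypy
repos:
  - repo: https://github.com/pre-commit/mirrors-mypy
    rev: v1.10.0
    hooks:
      - id: mypy
        args: [--strict]   # fail the commit on any type error
```

Once installed with `pre-commit install`, every commit runs the checker, which keeps an agent (or a human) from quietly reintroducing the 90 errors you just fixed.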

ffsm8•7m ago
Fwiw, the article mirrors my experience when I started out too, right down to the same first month of vibecoding, followed by a next project done exactly as he outlines.

Personally, I think it's just the natural flow when you're starting out. If he keeps going, his opinion is going to change: as he gets to know the tools better, he'll likely drift more and more towards vibecoding again.

You get better at it, even if it's genuinely hard to put into words why.

hbarka•28m ago
> Some people never get to the part where they review the code. They go straight to their LinkedIn or blog and start writing (or having ChatGPT write) posts about how manual coding is dead and they’re done writing code by hand forever. Some people review the code and declare it unusable garbage, then also go to their social media and post how AI coding is completely useless and they’re not going to use it for anything. This blog post shows the journey that anyone not in one of those two vocal minorities is going through right now.

What’s really happening is that you’re all of those people in the beginning. Those people are you as you go through the experience. You’re excited after seeing it do the impossible and in later instances you’re critical of the imperfections. It’s like the stages of grief, a sort of Kübler-Ross model for AI.

billylo•2h ago
Thank you. The learning aspect of reading how AI tackles something is rewarding.

It also reduces my hesitation to get started with something I don't know the answer well enough yet. Time 'wasted' on vibe-coding felt less painful than time 'wasted' on heads-down manual coding down a rabbit hole.

rokob•2h ago
> architecture is what happens when all those local pieces interact, and you can’t get good global behaviour by stitching together locally correct components

This is a great article. I’ve been trying to see how layered AI use can bridge this gap but the current models do seem to be lacking in the ambiguous design phase. They are amazing at the local execution phase.

Part of me thinks this is a reflection of software engineering as a whole. Most people are bad at design. Everyone usually gets better with repetition and experience. However, as there is never a right answer just a spectrum of tradeoffs, it seems difficult for the current models to replicate that part of the human process.

4b11b4•2h ago
Great write-up with provenance
myultidevhq•1h ago
The 8-year wait is the part that stands out. Usually the question is "why start now" not "why did it take 8 years". Curious if there was a specific moment where the tools crossed a threshold for you, or if it was more gradual.
bdcravens•1h ago
For me, the amount of tedium that comes with any new project before I can get to the "good stuff" is a blocker. It's so easy to sit down with excitement, and then 3 hours later, you're still wrestling with basic dependencies, build pipelines, base CSS, etc.
8organicbits•1h ago
Have you tried using starting templates for projects? For many platforms there are cookiecutters or other tools to jump over those.
jayd16•15m ago
It's kind of click bait tho. "I took 3 months and AI to build a SQLite tool" is not going to stand out. The 8 year wait gives a sense of scale or difficulty but that's actually an illusion and does not reflect the task itself.
The_Goonies1985•1h ago
The author mentions a C codebase. Is AI good at coding in C now? If so, which AI systems lead in this language?

Ideally: local; offline.

Or do I have to wrestle it for 250 hours before it coughs up the dough? Last time I tried, the AI systems struggled with some of the most basic C code.

It seemed fine with Python, but then my cat can do that.

Morpheus_Matrix•34m ago
C is actually one of the better-supported languages for AI assistants these days, a lot better than it was a year or two ago. The API-hallucination problem has improved considerably; models like Claude Sonnet and Qwen 2.5 Coder have much stronger recall of POSIX/stdlib now. The harder remaining challenge with C is that AI still struggles with ownership and lifetime reasoning at scale. It can write correct isolated functions but doesn't always carry the right invariants across a larger codebase, which is exactly the architecture problem the article describes.

For local/offline Qwen 2.5 Coder 32B is probably your strongest option if you have the VRAM (or can run it quantized). Handles C better than most other local models in my experience.

zer00eyz•1h ago
This article is describing a problem that is still two steps removed from where AI code becomes actually useful.

90 percent of the things users want either A) don't exist or B) are impossible to find, install, and run without being deeply technical.

These things don't need to scale, and they don't need to be well designed. They are, for the most part, targeted, single-user, single-purpose artifacts. They are migration scripts between services; they are quick and dirty tools that make bad UIs and workflows less manual and more manageable.

These are the use cases I am seeing people OUTSIDE the tech sphere adopt AI coding for. It is what "non techies" are using things like open claw for. I have people who in the past would have been told "No, I will not fix your computer" talk to me excitedly about running cron jobs.

Not everything needs to be Snap-on quality; the bulk of end users are going to be happy with Harbor Freight quality because it is better than NO tools at all.

throw5•1h ago
> This article is describing a problem that is still two steps removed from where AI code becomes actually useful.

But it does a good job of countering the narrative you often see on LinkedIn, and to some extent on HN as well, where AI is portrayed as all-capable of developing enterprise software. If you spend any time in discussions hyping AI, you will have seen plenty of confident claims that traditional coding is dead and that AI will replace it soon. Posts like this are useful because they show a more grounded reality.

> 90 percent of the things users want either A) dont exist or B) are impossible to find, install and run without being deeply technical. These things dont need to scale, they dont need to be well designed. They are for the most part targeted, single user, single purpose, artifacts.

Yes, that is a particular niche where AI can be applied effectively. But many AI proponents go much further and argue that AI is already capable of delivering complex, production-grade systems. They say, you don't need engineers anymore. They say, you only need product owners who can write down the spec. From what I have seen, that claim does not hold up and this article supports that view.

Many users may not be interested in scalability and maintainability... But for a number of us, including the OP and myself, the real question is whether AI can handle situations where scalability, maintainability and sound design DO actually matter. The OP does a good job of understanding this.

lubujackson•1h ago
Long term, I think the best value AI gives us is a powerful tool to gain understanding. I think we are going to see deep understanding become the output goal of LLMs soon. For example, the blocker on this project was the dense C code with 400 rules. Working with LLMs allowed the structure to be parsed and understood and then used to create the tool, but maybe an even more useful output would be full documentation of the rules and their interactions.

This could likely be extracted much more easily now from the new code, but imagine API docs or a mapping of the logical ruleset with interwoven commentary: other devtools could be built easily, bug analysis could be done on the structure of rules independent of code, optimizations could be determined on an architectural level, etc.

LLMs need humans to know what to build. If generating code becomes easy, codifying a flexible context or understanding becomes the goal that amplifies what can be generated without effort.

intensifier•1h ago
article looks like a tweet turned into 30 paragraphs. hardly any taste.
throw5•54m ago
Yes, how dare someone take an idea, develop it, and publish it outside the algorithm-driven rage pit. Truly terrible behavior! /s

Expanding a thought beyond 280 characters and publishing it somewhere other than the X outrage machine is something we should be encouraging.

edfletcher_t137•54m ago
> Of all the ways I used AI, research had by far the highest ratio of value delivered to time spent.

Seconded!

pwr1•51m ago
This resonates. I had a project sitting in my head for years and finally built it in about 6 weeks recently. The AI part wasn't even the hard part, honestly; it was finally committing to actually shipping instead of overthinking the architecture. The tools just made it possible to move fast enough that I didn't lose momentum and abandon it like every other time.
jillesvangurp•22m ago
This is the hardest it's ever going to be. That's been my mode for the last year. A lot of what I did in the last month was complete science fiction as little as six months ago. The scope and quality of what is possible seems to leap ahead every few weeks.

I now have several projects going in languages that I've never used. I have a side project in Rust, and two Go projects. I have a few decades of experience with backend development in Java, Kotlin (the last ten years), and occasionally Python, and some limited experience with a few other languages. I know how to structure backend projects, what to look for, what needs testing, etc.

A lot of people would insist you need to review everything the AI generates. And that's very sensible. Except AI now generates code faster than I can review it. Our ability to review is now the bottleneck. And when stuff kind of works (evidenced by manual and automated testing), what's the right point to just say it's good enough? There are no easy answers here. But you do need to think about what an acceptable level of due diligence is. Vibe coding is basically the equivalent of blindly throwing something at the wall and seeing what sticks. Agentic engineering is on the opposite side of the spectrum.

I actually emphasize a lot of quality attributes in my prompts. The importance of good design, high cohesiveness, low coupling, SOLID principles, etc. Just asking for potential refactoring with an eye on that usually yields a few good opportunities. And then all you need to do is say "sounds good, lets do it". I get a little kick out of doing variations on silly prompts like that. "Make it so" is my favorite. Once you have a good plan, it doesn't really matter what you type.

I also ask critical questions about edge cases, testing the non-happy path, hardening, concurrency, latency, throughput, etc. If you don't, AIs default to taking shortcuts, focusing only on the happy path, or hallucinating that it's all fine. But this doesn't necessarily require detailed reviews to find out. You can make the AI review code and produce detailed lists of everything that is wrong or could be improved. If there's something to be found, it will find it if you prompt it right.

There's an art to this. But I suspect that that too is going to be less work. A lot of this stuff boils down to evolving guardrails to do things right that otherwise go wrong. What if AIs start doing these things right by default? I think this is just going to get better and better.

dirtbag__dad•15m ago
> Tests created a similar false comfort. Having 500+ tests felt reassuring, and AI made it easy to generate more. But neither humans nor AI are creative enough to foresee every edge case you’ll hit in the future; there are several times in the vibe-coding phase where I’d come up with a test case and realise the design of some component was completely wrong and needed to be totally reworked. This was a significant contributor to my lack of trust and the decision to scrap everything and start from scratch.

This is my experience. Tests are perhaps the most challenging part of working with AI.

What’s especially awful is any refactor of existing shit code that has no tests to begin with, where the feature is confusing or used inappropriately and unknowingly in multiple other places.

AI will write test cases showing the logic works at all (fine), but the behavior, especially what an integration test would cover, is just not covered at all.

I don’t have a great answer to this yet, especially because this has been most painful to me in a React app, where I don’t know testing best practices. But I’ve been eyeing up behavior driven development paired with spec driven development (AI) as a potential answer here.

Curious if anyone has an approach or framework for generating good tests
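One pattern in the behaviour-driven direction mentioned above is to write tests whose names and structure describe user-visible behaviour rather than implementation. A minimal pytest-style sketch; the `Cart` class is hypothetical, standing in for whatever feature is under test:

```python
# Behaviour-style test: the name and the given/when/then structure describe
# what a user observes, not how the feature is implemented internally.

class Cart:
    """Hypothetical feature under test."""
    def __init__(self):
        self.items = {}

    def add(self, sku, qty=1):
        self.items[sku] = self.items.get(sku, 0) + qty

    def total_items(self):
        return sum(self.items.values())

def test_adding_same_item_twice_accumulates_quantity():
    # Given an empty cart
    cart = Cart()
    # When the same item is added twice
    cart.add("apple")
    cart.add("apple")
    # Then the cart reflects the combined quantity
    assert cart.total_items() == 2
```

The idea is that tests phrased this way survive refactors (they assert nothing about internals), and when you ask an AI to extend the suite, the existing names nudge it toward adding new scenarios instead of more assertions about implementation details.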

senthilnayagam•14m ago
When he decided on Rust, he could have looked up a SQLite port; libsqlite does a pretty good job.
