Ask HN: What percentage of your coding is now vibe coding?

2•mbm•1y ago
As a rough estimate...

Comments

90s_dev•1y ago
Proudly zero. I just wrote and posted an article explaining why. The short version: genuine engineering is an abandoned skill I want to revive.
leakycap•1y ago
Zero.

But there wasn't this much hate for people who copied random JavaScript off whatever site LYCOS linked you to back in the day. Vibe coding for non-critical applications doesn't seem all that different to me.

JohnFen•1y ago
Zero
latexr•1y ago
Zero. I care about the code I write and value doing things well and building knowledge through deep understanding. Over the years I’ve proven to myself (and others) that this approach improves both speed and accuracy, and reduces the need for rewrites, because experience increases the chance I’ll get it right early on and design in a way that I don’t paint myself into corners.

I’ve noticed that coding with an LLM leads to severely diminished knowledge retention and learning (not to mention it’s less fun), and I suspect overuse would lead to a degree of dependency I don’t wish for myself.

joeismailyan•1y ago
Depends on the task. I use AI for planning/figuring out how to implement stuff. Probably 80% is with AI to bounce ideas off and figure things out.

Writing the code, probably 30% is with AI. Our product requires a lot of context for AI to get stuff right so it's challenging to get it to write good, working code. If it's a small thing that doesn't require a lot of context then I use AI.

I use various tools for this, let me know your needs and I can provide recommendations.

chrisrickard•1y ago
Vibe coding in the traditional sense (coined by Karpathy back in Feb): 20%

Vibe coding using detailed, structured requirements (from tools like Userdoc): 65%

khedoros1•1y ago
Very little. It's directly forbidden at my day job, and if I'm programming anything in my off hours, it's for my own enjoyment.

All of the code I've had an LLM generate has backed itself into a corner very early on, so I tend to use it as a starting point, then fix and refactor. I've made some toy-sized programs that way (though hours quicker than I would've managed looking up library documentation on my own).

I've had good luck refining my understanding of some concepts, talking through design of pieces of code, and basically generating snippets of example code on demand. Even in those limited cases, I end up relying on my own experience to determine what's helpful and what's crap. They're usually intertwined.

codeqihan•1y ago
Partly. Mostly I write it myself, and only ask the LLM when I encounter problems.
apothegm•1y ago
I almost never tell it to just write me a thing (what I think of as vibe coding). (2%)

I sometimes write a pretty detailed doc or spec; have the AI draft an implementation; then review and fix it myself. I try to keep this to “reasonable PR” size, a few hundred lines (a module or two) max, and will do a few rounds per hour. (~25%)

I will often stub out modules or classes (sometimes with docstrings) and tab-complete big chunks of them. (And then turn tab completion off and rage-code the rest by hand because the AI is so far off base.) (~25%)

I will often tell the AI to write tests for stubbed methods prior to implementation. I then double-check the tests before moving on to manual or AI-assisted implementation. This is usually in increments of a single AI request/response; there's a sketch of what I mean at the end of this comment. (~35%)

I will occasionally ask the AI to change existing code and tests, usually in a single request/response. I’ve had very mixed results with this. (~10%)

I’ve been finding myself writing code in smaller standalone libraries and then assembling those into larger and larger composites, so that each library is a size a model can more realistically reason about, and so that for the layers on top of it the AI won’t fill its context up reading all that source instead of just the public API docs.
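
For illustration, a minimal hypothetical sketch of one stub-plus-test increment (the function and all names are made up, not from any real project):

    # Stub I'd write by hand first: signature and docstring only.
    def normalize_email(raw: str) -> str:
        """Strip surrounding whitespace and lowercase the domain part."""
        ...  # left unimplemented; filled in later, by hand or by the AI

    # The kind of test I'd have the AI draft before implementation,
    # then double-check myself. It fails until the stub is filled in.
    def test_normalize_email():
        assert normalize_email("  User@EXAMPLE.COM ") == "User@example.com"

The point of the ordering is that the reviewed test pins down the behavior before any generated implementation can drift from it.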

rstuart4133•1y ago
Zero.

I've now convinced myself current LLMs are much closer to a "stochastic parrot" than an AGI in all areas other than natural language processing. In natural language they are superhuman, meaning they can wordsmith better than most humans and are far faster at it than all humans.

That means if you're writing something it's seen a lot of in its training data, in a language that's somewhat forgiving (so, not C), vibe coding might have half a chance. I don't do that. But if you're building UIs in JavaScript using a common framework, it might work for you.

One of Apple's First Employees Looks Back at 50 Years

https://www.nytimes.com/2026/04/01/technology/apple-employee-50-years.html
1•nxobject•1m ago•0 comments

Working Was the Beginning

https://themackabu.dev/blog/ant-part-two
2•theMackabu•7m ago•0 comments

Node.js 26.0.0 (Current)

https://nodejs.org/en/blog/release/v26.0.0
2•partsch•10m ago•0 comments

What if LLMs are mostly crystallized intelligence?

https://www.lesswrong.com/posts/Zxw3ZcmSdndpQyJ6M/what-if-llms-are-mostly-crystallized-intelligence
2•joozio•18m ago•0 comments

AWS lets agents drive virtual desktops which could cost 500k tokens per click

https://www.theregister.com/2026/05/06/aws_workspaces_agent_access/
4•beardyw•24m ago•1 comment

PostHog Code

https://posthog.com/code
3•dotmanish•24m ago•0 comments

Bitter Lessons from the ISSpresso

https://mceglowski.substack.com/p/bitter-lessons-from-the-isspresso
3•kome•26m ago•0 comments

.de domains were 'down' for 2 hours

https://status.denic.de/pages/incident/592577eab611ce1e0d00046f/69fa60ef9d12f5057a974f38
2•riedel•27m ago•1 comment

AI is starting to beat doctors at making correct diagnoses

https://www.science.org/content/article/ai-starting-beat-doctors-making-correct-diagnoses
5•rxmux•33m ago•0 comments

SensorHub – The event-driven version of Clawhub (giving AI agents "ears")

https://world2agent.ai/hub
4•WayLonWen•34m ago•0 comments

Pensero – I knew I was doing the work. I just couldn't prove it

https://pensero.ai/blog/i-knew-i-was-doing-the-work.-i-just-couldn-t-prove-it.
2•sabatesduran•36m ago•0 comments

New PyXHDL Release (Python Frontend To VHDL And Verilog)

https://github.com/davidel/pyxhdl
2•dadaz•36m ago•0 comments

Academics Need to Wake Up on AI

https://www.popularbydesign.org/p/academics-need-to-wake-up-on-ai
3•barry-cotter•46m ago•1 comment

The reinvention of tradition: charismatic leadership in Silicon Valley

https://www.tandfonline.com/doi/epdf/10.1080/1600910X.2026.2666346
2•kome•48m ago•0 comments

Reverse-engineering the 1998 Ultima Online demo server

https://draxinar.github.io/articles/2026-05-01-uodemo-reverse-engineering.html
3•notsentient•49m ago•0 comments

Singapore introduces caning for boys who bully others at school

https://www.theguardian.com/world/2026/may/06/singapore-caning-school-bullies
5•rustoo•51m ago•0 comments

Bottlenecks and Productivity

https://calnewport.com/on-bottlenecks-and-productivity/
2•tapanjk•52m ago•0 comments

Google DeepMind workers vote to unionize over military AI deals

https://www.wired.com/story/google-deepmind-workers-vote-to-unionize-over-military-ai-deals/
6•ascorbic•55m ago•0 comments

RFK Jr. plans to curb antidepressants, which he falsely compares to heroin

https://arstechnica.com/health/2026/05/rfk-jr-plans-to-curb-antidepressants-which-he-falsely-comp...
6•rbanffy•1h ago•1 comment

Complete .de TLD was off-line for 4 hours

https://blog.denic.de/en/denic-reports-dnssec-disruption-affecting-de-domains/
3•teekert•1h ago•2 comments

Reflections on the motive power of heat (1890)

https://gutenberg.org/cache/epub/78610/pg78610-images.html
1•petethomas•1h ago•0 comments

Ask HN: Is the future everyone having 100 MCP processes running on their PC?

3•ex-aws-dude•1h ago•2 comments

Integrate Cashfree Payments in less than 7 minutes

https://tech.cashfree.com/building-cashfree-agent-skills-a-task-aware-knowledge-layer-for-ai-codi...
1•shritama_saha•1h ago•0 comments

Gitea Runner 1.0.0 is released

https://blog.gitea.com/release-of-runner-1.0.0/
2•jandeboevrie•1h ago•0 comments

Second Circuit Sidesteps "Server Test" in Embedded Video Copyright

https://natlawreview.com/article/second-circuit-sidesteps-server-test-embedded-video-copyright-ru...
1•petethomas•1h ago•0 comments

Some deaf children are hearing again because of a new gene therapy

https://www.vox.com/future-perfect/487590/gene-therapy-crispr-deafness-food-and-drug-administration
2•yanis_t•1h ago•0 comments

A game-changer for good health? Scientists believe 'we are when we eat'

https://www.theguardian.com/commentisfree/2026/may/05/game-changer-good-health-scientists-we-are-...
4•akbarnama•1h ago•0 comments

Can language models rebuild programs from scratch?

https://programbench.com
2•beau•1h ago•1 comment

American History X was a hit but ego blew my career, says director

https://www.thetimes.com/culture/film/article/tony-kaye-edward-norton-american-history-x-zbwcg7chq
2•petethomas•1h ago•0 comments

The guide to RL environments: building and scaling them in the LLM era

https://huggingface.co/spaces/AdithyaSK/rl-environments-guide
2•babelfish•1h ago•0 comments