
Two kinds of vibe coding

https://davidbau.com/archives/2025/12/16/vibe_coding.html
53•jxmorris12•3h ago

Comments

wrs•3h ago
Aaargh, I hate it when useful terms get diffused to meaninglessness. No, there’s one kind of vibe coding. The definition of vibe coding is letting the LLM write the code and not looking at it. That’s what the word “vibe” is there for.
platevoltage•2h ago
I have no idea why an experienced developer who uses LLMs to be more productive would want to degrade their workflow by calling it "vibe coding".
ares623•1h ago
It’s a chance to become the next Uncle Bob in a new era of software
exe34•2h ago
you're still allowed to alternate between letting it run and consolidating, no?
acedTrex•2h ago
no, vibe coding is explicitly NOT looking at the output.
MisterTea•2h ago
From my understanding, the "vibe" part means you go along with the vibe of the LLM: you don't question the design choices it makes, and you just go with the output it hands you.
hackable_sand•2h ago
I'm ngl, when I first heard "vibe coding" I immediately imagined programming from memory.
parpfish•1h ago
My mind went… elsewhere. Specifically, the gutter.

https://en.wikipedia.org/wiki/Teledildonics

bitwize•1h ago
Unsurprisingly, the Rust community has you covered there also:

https://github.com/buttplugio/buttplug

https://github.com/Gankra/cargo-mommy (has integration with the former)

pessimizer•1h ago
> Aaargh, I hate it when useful terms get diffused to meaninglessness.

I think that when you say this, you have an obligation to explain how the term "vibe coding" is useful, and is only useful by the definition that you've become attached to.

I think that the author is accepting that there's no such thing as the vibe coding that you've defined (except for very short and very simple scripts), and that in all other cases of "vibe coding" there will be a back and forth between you and the machine where you decide whether what it has done has satisfied your requirements. Then they arbitrarily distinguish between two levels of doing that: one where you never let the LLM out of the yard, and the other where you let the LLM run around the neighborhood until it gets tired and comes back.

I think that's a useful distinction, and I think that the blog makes a good case for it being a useful distinction. I don't find your comment useful, or the strictness of definition that it demands. It's unrealistic. Nobody is asking an LLM to do something, and shipping whatever it brings back without any follow-up. If nobody is doing that, a term restricted to only that is useless.

doctoboggan•1h ago
I agree with you that there is one original definition, but I feel like we've lost it; the currently accepted definition of vibe coding is any code that is majority or exclusively produced by an LLM.

I think I've seen people use the term "vibe engineering" to differentiate whether the human has viewed/comprehended/approved the code, but I'm not sure that's taken off.

dbtc•1h ago
In that case, "blind" would be more accurate.
WhyOhWhyQ•2h ago
"the last couple weeks"

When I ran this experiment it was pretty exhilarating for a while. Eventually it turned into QA-testing the work of a bad engineer and became exhausting. Since I had sunk so much time into it, I felt pretty bad afterwards: not only did the thing it made not end up being shippable, but I hadn't benefitted as a human being while working on it. I had no new skills to show. It was just a big waste of time.

So I think the "second way" is good for demos now. It's good for getting an idea of what something can look like. However, in the future I'll be extremely careful about not letting that go on for more than a day or two.

stantonius•2h ago
This happened to me too, in an experimental project where I was testing how far the model could go on its own. Despite making progress, I can't bear to look at the thing now. I don't even know what questions to ask the AI to get back into it; I'm so disconnected from it. It's exhausting to think about getting back into it; I'd rather just start from scratch.

The fascinating thing was how easy it was to lose control. I would set up the project with strict rules and MD files and tell myself to stay fully engaged, but out of nowhere I would slide into compulsive-accept mode, or worse, tell the model to blatantly ignore the rules I had set out. I knew better, and yet it happened over and over. Ironically, it was as if my context window was so full of "successes" that I forgot my own rules; I reward-hacked myself.

Maybe it just takes practice and better tooling and guardrails. And maybe these are the growing pains of a new programmer's mindset. But it left me a little shy to try full delegation any time soon, certainly not without a complete reset on how to approach it.

parpfish•2h ago
I’ll chime in to say that this happened to me as well.

My project would start good, but eventually end up in a state where nothing could be fixed and the agent would burn tokens going in circles to fix little bugs.

So I’d tell the agent to come up with a comprehensive refactoring plan that would allow the issues to be recast in more favorable terms.

I’d burn a ton of tokens to refactor, little bugs would get fixed, but it’d inevitably end up going in circles on something new.

danabramov•58m ago
Curious if you have thoughts on the second half of the post? That’s exactly what the author is suggesting a strategy for.
newspaper1•1h ago
I've had the opposite results. I used to "vibe code" in languages that I knew, so that I could review the code and, I assumed, contribute myself. I got good enough results that I started using AI to build tools in languages I had no prior knowledge of. I don't even look at the code any more. I'm getting incredible results. I've been a developer for 30+ years and never thought this would be possible. I keep making more and more ambitious projects and AI just keeps banging them out exactly how I envision them in my mind.

To be fair, I don't think someone with less experience could get these results. I'm leveraging everything I know about writing software, computer science, product development, team management, marketing, written communication, requirements gathering, architecture... I feel like vibe coding is pushing myself and the AI to the limits, but the results are incredible.

WhyOhWhyQ•59m ago
I've got 20 years of experience, but w/e. What have you made?
newspaper1•28m ago
I don't want to dox myself since I'm doing it outside my regular job for the most part, but frameworks, apps (on those frameworks), low level systems stuff, linux-y things, some P2P, lots of ai tools. One thing I find it excels at is web front-end (which is my least favorite thing to actually code), easily as good as any front-end dev I've ever worked with.
WhyOhWhyQ•25m ago
I think my fatal error was trying to make something based on "novel science" (I'll be similarly vague). It was an extremely hard project to be fair to the AI.

It is my life goal to make that project though. I'm not totally depressed about it because I did validate parts of the project. But it was a let down.

newspaper1•22m ago
Baby steps are key for me. I can build very ambitious things, but I never ask it to do too much at once. Focus a lot on having it get the docs right before it writes any code (it'll use the docs), and make the instructions reflexive (i.e. "update the docs when done"). Make libraries, composable parts... I don't want to be condescending, since you may have tried all of that, but I feel like I'm treating it the same as when I architect things for large teams: thinking in layers and little pieces that can be assembled to achieve what I want.

I'll add that it does require some banging your head against the wall at times. I normally only test the code after doing a bunch of this stuff. It often doesn't work as I want at that point, and I'll spend a day "begging" it to fix all of the problems. I've always been able to get over those hurdles, and I have it think about why it failed and bake the reasoning into the docs/tests to avoid that in the future.
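As a concrete sketch, the "docs first, reflexive instructions" setup described in this comment might look something like the following agent instructions file. Everything here is hypothetical illustration, not taken from the commenter's actual project:

```markdown
# Project instructions (e.g. a CLAUDE.md / AGENTS.md file)

1. Before writing any code, read docs/DESIGN.md and docs/API.md.
2. If a task changes behavior, update the relevant doc in the same session
   ("update the docs when done").
3. Build small, composable libraries; touch only one layer per task.
4. When a bug is fixed, record why it happened in docs/POSTMORTEMS.md and
   add a regression test so the reasoning is baked into the test suite.
```

The point of the reflexive rules is that the docs the model reads at the start of each session are the same docs it is required to keep current.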

WhyOhWhyQ•14m ago
I did make lots of design documents and sub-demos. I think I could have been cleverer about finding smaller pieces of the project which could be deliverables in themselves and which the later project could depend on as imported libraries.
imiric•1h ago
> I think the "second way" is good for demos now.

It's also good for quickly creating legitimate-looking scam and SEO-spam sites. When they stop working, throw them away and create a dozen more. Maintenance is not a concern. Scammers love this new tech.

yen223•39m ago
This argument can be used to shut down anything that makes coding faster or easier. It's not a convincing argument to me.
keyle•32m ago
Advertising campaigns as well, which, arguably, fit your categories.
danabramov•59m ago
I believe the author explicitly suggests strategies to deal with this problem, which is the entire second half of the post. There’s a big difference between when you act as a human tester in the middle vs when you build out enough guardrails that it can do meaningful autonomous work with verification.
WhyOhWhyQ•50m ago
I'm just extremely skeptical about that because I had many ideas like that and it still ended up being miserable. Maybe with Opus 4.5 things would go better though. I did choose an extremely ambitious project to be fair. If I were to try it again I would pick something more standard and a lot smaller.

I put like 400 hours into it by the way.

stantonius•5m ago
This is so relatable it's painful: many many hours of work, overly ambitious project, now feeling discouraged (but hopefully not willing to give up). It's some small consolation to me to know others have found themselves in this boat.

Maybe we were just 6 months too early to start?

Best of luck finishing it up. You can do it.

irrationalfab•23m ago
+1... as with a large enough engineering team, this is ultimately a guardrails problem, which in my experience with agentic coding is very solvable, at least in certain domains.
bloppe•2h ago
Someone should start an anthology of posts claiming "I vibe-coded this toy project. Software Engineering is dead."
ubertaco•1h ago
I bet we could vibe-post a bunch of them, even! Blogging is dead!
predkambrij•2h ago
Those approaches did change, and will change more, as LLMs get better. I got some unbelievably good results back in March, then got a bunch of frustration from tasking LLMs with problems that were too hard, then learned to prompt better (giving LLMs methods to test their own work). It's an art to strike the right balance in how much time you spend writing prompts that will work. A prompt could be "fix all the issues on GitHub", but it would probably fail :)
Dr_Birdbrain•2h ago
I’m unclear what has been gained here.

- Is the work easier to do? I feel like the work is harder.

- Is the work faster? It sounds like it’s not faster.

- Is the resulting code more reliable? This seems plausible given the extensive testing, but it’s unclear if that testing is actually making the code more reliable than human-written code, or simply ruling out bugs an LLM makes but a human would never make.

I feel like this does not look like a viable path forward. I’m not saying LLMs can’t be used for coding, but I suspect that either they will get better, to the point that this extensive harness is unnecessary, or they will not be commonly used in this way.

peacebeard•2h ago
I have a feeling that the real work of writing a complex application is in fully understanding the domain logic in all its gory details and creating a complete description of that domain logic in code. This process that OP is going through seems to be "what if I materialize the domain logic in tests instead of in code?"

Well, at first blush, it seems like maybe this is better, because writing tests is "easier" than writing code. However, I imagine the biggest problem is that sometimes it takes the unyielding concreteness of code to expose the faults in your description of the domain problem. You'd end up interacting with an intermediary, using the tests as a sort of interpreter as you indirectly collaborate with the agent on defining your application. The cost of this indirection may be the price to pay for specifying your application in a simpler, abstracted form.

All this being said, I would expect the answers to "is it easier? is it faster?" to be: well, it depends. It can be better, but it's certainly not always better.
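A toy sketch of the two ways to materialize the same domain logic that this comment contrasts. The rule and all names here are hypothetical, invented purely for illustration: once written as code, and once written as the tests an agent would be asked to satisfy instead.

```python
# Hypothetical domain rule: orders over $100 ship free, but only domestically.

def shipping_cost(order_total: float, domestic: bool) -> float:
    """The rule materialized as code."""
    if domestic and order_total > 100:
        return 0.0
    return 9.99

# The same rule materialized as tests. Note the boundary case (exactly $100):
# the "unyielding concreteness" of writing it down forces a decision either way.
def test_free_shipping_for_large_domestic_orders():
    assert shipping_cost(150.0, domestic=True) == 0.0

def test_international_orders_always_pay():
    assert shipping_cost(150.0, domestic=False) == 9.99

def test_boundary_is_not_free():
    assert shipping_cost(100.0, domestic=True) == 9.99
```

Either artifact describes the rule completely, but only the test form leaves the implementation (and its faults) to the agent.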
agumonkey•1h ago
I asked ChatGPT-4o absurd questions when it came out, mixing Haskell and Lisp book terminology (say, design an isomorphic contravariant base class so that every class satisfies <constraint>). The result was somehow consistent, and it suddenly opened my brain to what kind of stupid things I could explore.

Felt like I became a PhD wannabe in 5 minutes

epgui•1h ago
Am I the only one who, rather than being impressed, is recoiling in horror?
gaigalas•1h ago
There's something wrong with this vibe coded stuff, any kind of it.

_It limps faster than you can walk_, in simple terms.

At each model release, it limps faster, but still can't walk. That is not a good sign.

> Do we want this?

No. However, there's a deeper question: do people even recognize they don't want this?

pessimizer•1h ago
I'm only doing the first kind right now - I'm not really letting the thing loose ever, even when I'm not great at the language it's writing in. I'm constantly making it refactor and simplify things.

But I'm optimistic about the second way. I'm starting to think that TDD is going to be the new way we specify problems, i.e. by writing constraints; LLMs are going to keep hacking at those constraints until they're all satisfied, and periodically the temperature will have to be jiggled to knock the thing out of a loop.

The big back and forth between human and machine would be in the process of writing the constraints, which they will be bad at if you're doing anything interesting, and good at if you're doing something routine.

The big question for me is "Is there a way to write complete enough tests that any LLM would generate nearly the same piece of software?" And to follow up, can the test suite be the spec? Would that be an improvement on the current situation, or just as much work? Would that mean that all capable platforms would be abstracted? Does this mean the software improves on its own when the LLM improves, or when you switch to a better LLM, without any changes to the tests?

If the future is just writing tests, is there a better way to do it than we currently do? Are tests the highest-level language? Is this all just Prolog?
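The loop imagined in this comment can be sketched in miniature. This is a hedged illustration only: the "model" below is a stub that samples from three fixed candidates, standing in for a real LLM call, and all function names are hypothetical.

```python
import random

def run_test_suite(candidate) -> bool:
    """The test suite acts as the spec: every constraint must hold."""
    try:
        return candidate(2) == 4 and candidate(-3) == 9 and candidate(0) == 0
    except Exception:
        return False

def generate_candidate(temperature: float):
    """Stub for an LLM; a real system would also feed back the failing tests."""
    candidates = [lambda x: 2 * x, lambda x: x ** 2, lambda x: abs(x)]
    return random.choice(candidates)

def solve(max_attempts: int = 200):
    temperature = 0.2
    for _ in range(max_attempts):
        candidate = generate_candidate(temperature)
        if run_test_suite(candidate):
            return candidate  # all constraints satisfied
        # "Jiggle the temperature" to knock the generator out of a loop.
        temperature = min(1.0, temperature + 0.1)
    return None
```

Note that the spec here is deliberately tight: only squaring satisfies all three constraints, which is exactly the "complete enough tests that any LLM would generate nearly the same software" question.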

ofconsequence•1h ago
> I dislike the term "vibe coding". It means nothing and it's vague.

It has a clear and specific definition. People just misuse and abuse the term.

Karpathy coined it to describe when you put a prompt into an LLM and then either run it or continue to develop on top of it without ever reviewing the output code.

I am unable to tell from TFA whether the author has any programming knowledge or skills and looked at the code, or whether they did in fact "vibe code".

keyle•25m ago
I find it's OK to vibe code something digestible, like a ZSH function to do X or Y, or an image converter, or something along those lines.
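For scale, a hypothetical example of the kind of digestible one-off meant here, assuming ImageMagick's `magick` command is installed (the function name and behavior are invented for illustration):

```shell
# Throwaway helper of "vibe-codeable" size: convert images to WebP.
# Assumes ImageMagick's `magick` is on PATH.
to_webp() {
  if [ "$#" -eq 0 ]; then
    echo "usage: to_webp file.png [more files...]" >&2
    return 1
  fi
  for f in "$@"; do
    # ${f%.*} strips the extension, so photo.png becomes photo.webp
    magick "$f" "${f%.*}.webp"
  done
}
```

Something this small can be reviewed at a glance, which is what separates it from the multi-day projects discussed below.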

Anything that involves multiple days of work, or that you plan on working on it further, should absolutely not be vibe coded.

A) You'll have learnt pretty much nothing, or will retain nothing. Writing stuff by hand is a great way to remember. A painful experience worth having is one you've learnt from.

B) You'll find yourself distanced from the project, and the lack of personal involvement, of "being in the trenches", means you'll stop progressing on the software and move back to something that makes you feel something.

Humans are by nature social creatures, but alone they want to feel worthwhile too. Vibe coding takes away the positive reinforcement loop that is necessary for sticking with long-running projects through to achievement.

Emotions drive needs, which drive change and results. By vibe coding a significant piece of work, you'll blow away your emotions towards it, and that'll be the end of it.

For "projects" and things where you want to stay involved, you should be in charge, and only use LLMs for deterministic auto-completion, or for research outside the IDE. Just like managing state in complex software, you need to keep the LLMs' input "boxed in" and not let it contaminate your work.

My 5c. Understanding the human's response to interactions with the machines is important in understanding our relationship with LLMs.

newspaper1•18m ago
I get a huge emotional reward from conjuring up something that I dreamed of but wouldn't have had time to build otherwise. The best comparison I can give is beating a video game, back in the day, to see the ending.
rrix2•16m ago
I've been asking for little tutorials or implementation plans for things, and demanding that the model not write any code itself, following the advice of Geoffrey Litt.[1] I find reviewing code written by my coworkers difficult even when I'm being paid for it; surely I'm not gonna review thousands of lines of auto-generated code, plus the comprehensive tests required to trust them, in my free time...!

So I've been learning Kotlin & Android development in the evenings, and I find this style of thing so much more effective as a dev practice than Claude Code, and a better learning practice than following dev.to tutorials. I've been coding for almost 20 years and find most tutorial or documentation material either targeted at someone who has hardly programmed at all, or just plain old API docs.

Asking the langlemangler to generate a dev plan, focusing on idiomatic implementation details and design questions rather than lines of code, and letting me fill in the algorithm implementations: it's been nice. I'll use the JetBrains AI autocomplete stuff for little things, or ask it to refactor a stinky function, but mostly I just follow the implementation plan so that the shape of the whole system is in my head.

Here's an example:

> i have scaffolded out a new project, an implementation of a library i've written multiple times in the last decade in multiple languages, but with a language i haven't written and with new design requirements specified in the documentation. i want you to write up an implementation plan, an in-depth tutorial for implementing the requirements in a Kotlin Multi Platform library.
>
> i am still learning kotlin but have been programming for 20 years. you don't need to baby me, but don't assume i know best practices and proper idioms for kotlin. make sure to include background context, best practices, idioms, and rationale for the design choices and separation of concerns.

This produced a 3kb markdown file that i've been following while I develop this project.

[1]: https://x.com/geoffreylitt/status/1991909304085987366