more open source, better open source
perhaps also more forking (not only absolute but also relative)
contribution dynamics are also changing
I'm fairly optimistic that generative ai is good for open source and the commons
what I'm also seeing is that open source projects that had not-so-great ergonomics or user interfaces are now getting better thanks to generative ai
this might be the most directly noticeable change for users of niche open source
Except it's on GitHub, and it's forks and stars.
Also, it's a scarcity mindset.
I don't agree with the sibling to my comment ("make money by getting papers cited"): it is not a long-term solution, much as ad revenue is a broken model for free software too.
I'm hopeful that we see some vibe-coders get some products out that make money, and then pay to support the system they rely on for creating/maintaining their code.
Not sure what else to hope for, in terms of maintaining the public goods.
I think the title is clickbait.
The conclusion is:
"Vibe coding represents a fundamental shift in how software is produced and consumed. The productivity gains are real and large. But so is the threat to the open source ecosystem that underpins modern software infrastructure. The model shows that these gains and threats are not independent: the same technology that lowers costs also erodes the engagement that sustains voluntary contribution."
The dangers I see are rather projects drowning in LLM slop PRs, not less engagement.
And the benefits of LLMs to open source lie in lowering the cost of reviving and maintaining (abandoned) projects.
LLMs did help me quickly research dependencies unknown to me and investigate build errors, but ideally I want to set it up so that the agent can work on its own: change -> try to build -> test. Once that works semi-automated, I'll call it a success.
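Something like this is what I have in mind - a minimal sketch assuming a Maven project, with the agent's patch step stubbed out since it depends entirely on the harness (`applyNextPatch` is hypothetical):

```java
// Sketch of a semi-automated change -> build -> test loop.
// Assumes `mvn` is on PATH; the agent's patch step is stubbed out.
import java.io.IOException;

public class AgentLoop {
    public static void main(String[] args) throws IOException, InterruptedException {
        for (int attempt = 1; attempt <= 5; attempt++) {
            // 1. Agent proposes a change here (hypothetical harness step).
            // applyNextPatch();

            // 2. Try to build and run the tests.
            Process build = new ProcessBuilder("mvn", "-q", "verify")
                    .inheritIO()
                    .start();
            if (build.waitFor() == 0) {
                System.out.println("Passed on attempt " + attempt + "; back to human review.");
                return;
            }
            // 3. On failure, the build output would be fed back to the agent.
            System.out.println("Attempt " + attempt + " failed; retrying.");
        }
        System.out.println("Gave up after 5 attempts.");
    }
}
```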
https://bsky.app/profile/gaborbekes.bsky.social/post/3md4rga...
(Note, I receive a thanks in the paper.)
> given everything we know about OSS incentives from prior studies and how easy it is to load an OSS library with your AI agent, the demand-reducing effect of vibe coding is larger than the productivity-increasing effect
but that would be a mouthful
This is also just untrue. There is a study showing that the productivity gain is -20%; developers (and especially managers) just assume it is +25%. And when they are told about this, they still feel they are +20% faster. It's the dev equivalent of mounting a cool-looking spoiler on your car.
There are productivity gains, but they're in the fuzzy tasks, like generating documentation and breaking a project up into bite-sized tasks. Or finding the right regex or combination of command-line flags - and that last one I would triple-verify if it were anything difficult to reverse.
However, when I try to get it to do anything other than optimise code or fix small issues, it struggles, especially with high-level abstract issues.
For example I currently have an issue with ambiguity collisions e.g.
Input: "California"
Output: "California, Missouri"
California is a state but also a city in Missouri - https://github.com/tomaytotomato/location4j/issues/44
I asked Claude several times to resolve this ambiguity, and it suggested various prioritisation strategies etc.; however, the resulting changes broke other functionality in my library.
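For illustration, one of those suggested strategies amounted to ranking candidates by administrative level - a hypothetical sketch, not location4j's real types or API:

```java
// Hypothetical sketch of a prioritisation strategy: with no qualifiers,
// prefer larger administrative units, so "California" resolves to the
// state rather than California, Missouri. Not location4j's actual API.
import java.util.Comparator;
import java.util.List;
import java.util.Optional;

record Candidate(String name, String type) {} // type: "COUNTRY", "STATE", "CITY"

class Disambiguator {
    private static int rank(Candidate c) {
        return switch (c.type()) {
            case "COUNTRY" -> 0;
            case "STATE"   -> 1;
            default        -> 2; // cities lose ties
        };
    }

    static Optional<Candidate> resolve(List<Candidate> candidates) {
        // Lowest rank wins: "California" -> the state, not the Missouri city.
        return candidates.stream().min(Comparator.comparingInt(Disambiguator::rank));
    }
}
```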
In the end I am redesigning my library from scratch with minimal AI input. Why? Because I started the project without the help of AI a few years back; I designed it to solve a problem, and that problem and the nuanced programming decisions behind it don't seem to be respected by LLMs (LLMs don't care about the story, they just care about the current state of the code).
The project, or your brain? I think this is what a lot of LLM coders run into - they have a lot of intrinsic knowledge that is difficult or takes a lot of time and effort to put into words and describe. Vibes, if you will, like "I can't explain it but this code looks wrong"
Essentially I ask an LLM to look at a project and it just sees the current state of the codebase, it doesn't see the iterations and hacks and refactors and reverts.
It also doesn't see the first functionality I wrote for it at v1.
This could indeed be solved by giving the LLM a git log and telling it a story, but that might not solve my issue?
FWIW - it works a lot better to have it interact via the CLI than the MCP.
I suppose a year ago we were talking about prompt engineers, so it's partly about being good at describing problems.
You have to tell it about the backstory. It does not know unless you write about it somewhere and give it as input to the model.
It does not struggle, you struggle. It is a tool you are using, and it is doing exactly what you're telling it to do. Tools take time to learn, and that's fine. Blaming the tools is counterproductive.
If the code is well documented, at a high level and with inline comments, and if your instructions are clear, it'll figure it out. If it makes a mistake, it's up to you to figure out where the communication broke down and figure out how to communicate more clearly and consistently.
It's fine to critique your own tools and their strengths and weaknesses. Claiming that any and all failures of AI are an operator skill issue is counterproductive.
Document, document, document: your architecture, best practices, preferences (both about the code and about how you want to work with the LLM and how you expect it to behave).
It is time consuming, but it's the only way you can get it to assist you semi-successfully.
Also try to understand that an LLM's biggest power for a developer is not in authoring code so much as in assisting you in understanding it, connecting dots across features, etc.
If your expectation is to launch it in a project and tell it "do X, do Y" without the very much needed scaffolding you'll very quickly start losing the plot and increasing the mess. Sure, it may complete tasks here and there, but at the price of increasing complexity from which it is difficult for both you and it to dig out.
Most AI naysayers can't be bothered with the huge amount of work required to setup a project to be llm-friendly, they fail, and blame the tool.
Even after the scaffolding, the best thing to do, at least for the projects you care about (essentially anything that's not a prototype for quickly validating an idea), is to keep reading and following it line by line, and to keep updating your scaffolding and documentation as you see it commit the same mistakes over and over. Part of the scaffolding is also putting in the source code of your main dependencies. I have a _vendor directory with git subtrees for major dependencies. LLMs can check the code of the dependencies, the tests, and figure out what they are doing wrong much quicker.
Last but not least, LLMs work better with certain patterns, such as TDD. So instead of "implement X", it's better to say "I need to implement X, but before we do so, let's set up a way of testing and tracking our progress against it". You can build an inspector for a virtual machine, you can set up e2es or other tests, or just dump line-by-line logs to some file. There are many approaches depending on the use case.
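A sketch of that first step, with a made-up requirement (`Slugify` doesn't exist, it's just for illustration): have it write the failing tests before any implementation, so there's something concrete to track progress against.

```java
// Sketch: tests written before the implementation exists, so the agent
// (and you) can track progress against them. `Slugify` is made up.
import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;

class SlugifyTest {
    @Test
    void lowercasesAndReplacesSpaces() {
        assertEquals("hello-world", Slugify.of("Hello World"));
    }

    @Test
    void stripsPunctuation() {
        assertEquals("its-a-test", Slugify.of("It's a test!"));
    }
}
```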
In any case, getting real help from LLMs in authoring code (editing, patching, writing new features) is highly dependent on having good context and a good setup (tests, making it write a plan for business requirements and one for the implementation), and on following and improving all these aspects as you progress.
My project is quite well documented, and I created a prompt a while back along with some Mermaid diagrams
https://github.com/tomaytotomato/location4j/tree/master/docs
I can't remember the exact prompt I gave to the LLM but I gave it a Github issue ticket and description.
After several iterations it fixed the issue, but my unit tests failed in other areas. I decided to abort it because I think my opinionated code was clashing with the LLM's solution.
The LLM's solution would probably be more technically correct, but because I don't do l33tcode or memorise how to implement a Trie or BST, my code does it my way. Maybe I just need to force the LLM to do it my way and ignore the other solutions?
Make sure there’s a holdout the agent can’t see that it’s measured against. (And make sure it doesn’t cheat)
https://softwaredoug.com/blog/2026/01/17/ai-coding-needs-tes...
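For instance (a sketch - the source-set split is my assumption, not something from the linked post), the holdout tests can live in a source set that's excluded from the agent's sandbox and compiled only in CI:

```java
// Sketch: lives in something like src/holdout/java, outside the agent's
// workspace, and runs only in CI. `Parser` is hypothetical.
import static org.junit.jupiter.api.Assertions.assertTrue;
import org.junit.jupiter.api.Test;

class HoldoutAcceptanceTest {
    @Test
    void agentNeverSawThisCase() {
        // If the agent overfits to (or edits) the visible tests,
        // this is where it shows up.
        assertTrue(Parser.parse("edge case the visible tests don't cover").isValid());
    }
}
```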
> One incredible thing was the ability to easily merge what was worth merging from forks, for instance
I agree, this is amazing, and really reduces the wasted effort. But it only works if you know what exists and where.
But IMO the primitives we need are also fundamentally different with AI coding.
Commits kind of don't matter anymore. Maybe PRs don't matter either, except as labels. But CI, hard proof that the code works as advertised, is gold, and that's something git doesn't store by default.
Additionally, as most software moves to being built by agents, the "real" git history you want is the chat history with your agent, and its CoT. If you can keep that and your CI runs, you could even throw away your `git` and probably still have a functionally better AI coding system.
If we get a new Github for AI coding I hope it's a bit of a departure from current git workflows. But git is definitely extensible enough that you could build this on git (which is what I think will ultimately happen).
I've been a senior engineer doing large scale active-active, five nines distributed systems that process billions of dollars of transactions daily. These are well thought out systems with 20+ folks on design document reviews.
Not all of the work falls into that category, though. There's so much plumbing and maintenance and wiring of new features and requirements.
On that stuff, I'm getting ten times as much work done with AI as I was before. I could replace the juniors on my team with just myself if I needed to and still get all of our combined work done.
Engineers using AI are going to replace anyone not using AI.
In fact, now is the time to start a startup and "fire" all of these incumbent SaaS companies. You can make reasonable progress quickly and duplicate much of what many companies do without much effort.
If you haven't tried this stuff, you need to. I'm not kidding. You will easily 10x your productivity.
I'm not saying don't review your own code. Please do.
But Claude emits reasonable Rust and Java and C++. It's not just for JavaScript toys anymore.
- - - - - - - - - - - -
Edit:
Holy hell HN, downvoted to -4 in record time. Y'all don't like what's happening, but it's really happening.
I'm not lying about this.
I provided my background so you'd understand the context of my claims. I have a solid background in tech.
The same thing that happened to illustration and art is happening here, to us and to our career. And these models are quite usable for production code.
I can point Claude to a Rust HTTP handler and say, "using this example [file path], write a new endpoint that handles video file uploads, extracts the metadata, creates a thumbnail, uploads them to the cloud storage, and creates the relevant database records."
And it does it in a minute.
I review the code. It's as if I had written it. Maybe a change here or there.
Real production Rust code, 100 - 500 LOC, one shotted in one minute. It even installs the routes and understands the HTTP framework DSL. It even codegens Swagger API documentation and somehow understands the proc macro DSL that takes Rust five minutes to compile.
This tech is wizardry. It's the sci fi stuff we dreamed of as kids.
I don't get the sour opinions. The only thing to fear is big tech monopolization.
I suppose the other thing to worry about is what's going to happen to our cushy $400k salaries. But if you make yourself useful, I think it'll work out just fine.
Perhaps more than fine if you're able to leverage this to get ahead and fire your employer. You might not need your employer anymore. If you can do sales and wear many hats, you'll do exceedingly well.
I'm not saying non-engineers will be able to do this. I'm saying engineers are well positioned to leverage this.
There was a submission of a blog post discussing applications of AI, but it got killed for some reason.
https://news.ycombinator.com/item?id=46750927
I remain convinced that if you use AI to write code then your product will sooner or later turn into a buggy mess. I think this will remain the case until they figure out how to make a proper memory system. Until then, we still have to use our brains as the memory system.
One strategy I've seen that I like is using AI to prototype, but then write actual code yourself. This is what the Ghostty guy does I believe.
I agree that AI can write decent Rust code, but Rust is not a panacea. From what I heard, Cursor has a lot of vibe-coded Rust code, but it didn't save it from being, as I said, a buggy mess.
FYFY
so not now, then?
What are you talking about? Illustrators and artists are not being replaced by AI or required to use AI to "keep up" in the vast majority of environments.
> "I don't get the sour opinions."
The reasoning for folks' "sour opinions" has been very well-documented, especially here on HN. This comment reads like people don't like AI because they think it's slow or something, which is not the case.
There are lots of people claiming this. Many of whom have a solid background. Every now and then I check out someone's claim (checking the code they've generated). I've yet to find an AI-generated codebase that passed that check so far.
Perhaps yours is the one that does, but as we can't see the code for ourselves, there's no way for us to really know. And it's hard to take your word for it when there are so many people falsely making the same claims.
I expect a lot of HNers have had this experience.
I gave you an upvote FWIW, after all, I mean, my job's codebase is already a buggy mess, so it doesn't hurt to throw AI on it, which is what I do.
> You might not need your employer anymore. If you can do sales and wear many hats, you'll do exceedingly well.
Wasn't this the case before AI as well?
It feels like vibe coding may exacerbate fragmentation (10 different vibe-coded packages for the same thing) and abandonment (made it in a weekend and left it to rot) in open source software.
Did you read it?
It isn't saying that LLMs will replace major open source software components. It's saying that the "reward" for providing, maintaining and helping curate these OSS pieces, and the ecosystem they exist in, just disappears if there is no community around them - just an LLM ingesting open source code and spitting out a solution, good or bad.
We've already seen curl buckle under the pressure, as its community-minded, good-conscience effort to respond to security reports collapsed under the weight of slop.
This is largely about extending that thesis to the entire ecosystem. No GH issues, no PRs, no interaction. No kudos on HN, no stars on GitHub, no "cheers mate" as you pass them at a conference after they give a great talk.
Where did you get that you needed to see a Linux kernel developed from AI tools, before you think the article's authors have a point?
Oh... so nothing's gonna change for me then...
> In vibe coding, an AI agent builds software by selecting and assembling open-source software (OSS),
Are they talking about indirect use, via the model's prior training? No agent I use is selecting and assembling open source software; that's more of an integration-type job, not software development. Are they talking about packages and libraries? If yes, that's exactly how most people use those too.
I mean like this:
> often without users directly reading documentation, reporting bugs, or otherwise engaging with maintainers.
and then,
> Vibe coding raises productivity by lowering the cost of using and building on existing code, but it also weakens the user engagement through which many maintainers earn returns.
Maintainers who earn "returns" must be such a small niche as to be insignificant. Or do they mean things like github stars?
> When OSS is monetized only through direct user engagement, greater adoption of vibe coding lowers entry and sharing, reduces the availability and quality of OSS, and reduces welfare despite higher productivity.
Now the hypothesis is exactly the opposite. Do agents not "select and assemble" OSS anymore? And what does this have to do with how OSS is "monetized"?
> Sustaining OSS at its current scale under widespread vibe coding requires major changes in how maintainers are paid.
Sustaining OSS, insofar as maintainers do it for a living, requires major changes. Period. I don't see how vibe coding, which makes all of this easier and cheaper, changes that equation. Quality is a different matter altogether and can still be achieved.
I am seeing a bunch of disjointed claims taken as truth that I frankly do not agree with in the first place.
What would the result of such a study even explain?
AI agents can select and load the appropriate packages and libraries without the user even knowing the name of the library, let alone that of the developer. This reduces the visibility of developers among users, who are now less likely to give a star, sponsor, offer a job, recommend the library to others etc.
Even as a business user, say an agency building websites, I could have been a fan of certain JS frameworks, hosting meetups, buying swag, sponsoring development. I am less likely to do that if I have no idea what framework is powering the websites I build.
Our argument is that rewards fall faster with vibe coding than productivity increases. OSS developers lose motivation, they stop maintaining existing libraries, don't bother sharing new ones (even if they keep writing a lot of code for themselves).
The future will absolutely not be "How things are today + LLMs"
The paradigm now for software is "build a tool shed/garage/barn/warehouse full of as much capability for as many uses possible" but when LLMs can build you a custom(!) hammer or saw in a few minutes, why go to the shed?
Sure, leftpad and python-openai aren't hugely valuable in the age of LLMs, but redis and ffmpeg are still as useful as ever. Probably even more useful now that LLMs can actually know and use all their obscure features
But I think the reality is: LLMs democratise access to coding. In a way this decreases the market for complete solutions, but massively increases the audience for building blocks.
Vibe coders don't code, they let code. So LLMs democratise access to coders.
Sure, there will be more personalized apps for those who have a lot of expertise in a domain and gain value from building something that supports their specific workflow. For the vast majority of the population, and the vast majority of use cases, this will not happen. I'm not about to give up the decades of experience I've gained with my tools for something I vibe coded in a weekend.
Every problem or concern you raise will adapt to the next world because those things are valuable. These concerns are temporary, not permanent.
I really, really don't care
I didn't get into programming for the money, it's just been a nice bonus
Exactly the same for me! I kind of feel like an artist whose paintings earn money more easily than a painter's or a musician's work… But boy, would I be poor if this art were worthless!
I had a job where, in short, we had a lot of pain points with software and no resources allotted to fix them. With a mix of past experience and googling, I started writing some internal web-based tools to fill these gaps. Everyone was happy. This is where I see vibe coding being really helpful: the higher-level stuff like scripting and web-based tools. Just my opinion, based on my experience.
Whatever it is, the future will also certainly not be what it was a couple of decades ago - that is, everyone inventing their own solution to solved problems, resulting in a mess of tools with no standardization. There is a reason libraries/frameworks/etc. exist.
A good question but there's a good answer: Debugged and tested code.
And by that, I mean the FULL spectrum of debugging and testing. Not just unit tests, not even just integration tests, but, is there a user that found this useful? At all? How many users? How many use cases? How hard has it been subjected to the blows of the real world?
As AI makes some of the other issues less important, the ones that remain become more important. It is completely impossible to ask an LLM to produce a code base that has been used by millions of people for five years. Such things will still have value.
The idea that the near-future is an AI powered wonderland of everyone getting custom bespoke code that does exactly what they want and everything is peachy is overlooking this problem. Even a (weakly) superhuman AI can't necessarily anticipate what the real world may do to a code base. Even if I can get an AI to make a bespoke photo editor, someone else's AI photo editor that has seen millions of person-years of usage is going to have advantages over my custom one that was just born.
Of course not all code is like this. There is a lot of low-consequence, one-off code, with all the properties we're familiar with on that front, like, there are no security issues because only I will run this, bugs are of no consequence because it's only ever going to be run across this exact data set that never exposes them (e.g., the vast, vast array of bash scripts that will technically do something wrong with spaces in filenames but ran just fine because there weren't any). LLMs are great for that and unquestionably will get better.
However there will still be great value in software that has been tested from top to bottom, for suitability, for solving the problem, not just raw basic unit tests but for surviving contact with the real world for millions/billions/trillions of hours. In fact the value of this may even go up in a world suddenly oversupplied with the little stuff. You can get a custom hammer but you can't get a custom hammer that has been tested in the fire of extensive real-world use, by definition.
Because I thought I needed a hammer for nails (employee payroll), but then I realized I also need it for screws (sales), soldering (inventory management) and cleanup (taxes).
Oh, and don't forget that next month the density of iron can drop by up to 50%.
Good points. It does feel like that happens quite often
1) Your specific analogy is kinda missing something important: I don't want my tools working differently every time I use them, and it's work to use LLMs. A hammer is kind of a too-simple example, but going with it anyway: when I need a hammer, I don't want my "LLM" generating a plastic one and then having to iterate for 30 minutes to get it right. It takes me far less than 30 minutes to go to my shed. A better example would be a UI: even if it was perfect, do you want all the buttons and menus to be different every time you use the tool, because you generate a new one each time instead of "going to the shed"?
2) Then there's the question: can an LLM actually build, or does it just regurgitate? A hammer is an extremely well understood tool that's been refined over centuries, so I think an LLM could do a pretty good job with one. There are lots of examples, but that also means the designs the LLM is referencing are probably better than the LLM's output. And then for things not like that, more unique, can the LLM even do it at all, or with a reasonable amount of effort?
I think there's a modern phenomenon where making things "easier" actually results in worse outcomes, a degraded typical state vs. the previous status quo, because it turns what was once a necessity into a question of personal discipline. And it turns out when you remove necessity, a lot of people have a real hard time doing the best thing on discipline alone. LLMs might just enable more of those degenerate outcomes: everyone's using "custom" LLM generated tools all the time, but they all actually suck and are worse than if we just put that effort into designing the tools manually.
When (not if) software breaks in production, you need to be able to debug it effectively. Knowing that external libraries do their base job is really helpful in reducing the search space and in reducing the blast radius of patches.
Note that this is not AI-specific. More generally, in-house implementations of software that is not your core business bring costs that are not limited to that of writing said implementation.
Because software developers typically understand how to implement a solution to a problem better than the client. If they don't have enough details to implement a solution, they will ask the client for details. And if the developer decides to use an LLM to implement a solution, they have the ability to assess the end product.
The problem is software developers cost money. A developer using an LLM may reduce the cost of development, but it is doubtful that the reduction in cost will be sufficient to justify personalized applications in many cases. Most of the cases where it would justify the cost would likely be in domains where custom software is in common use anyhow.
Sure, you will see a few people using LLMs to develop personalized software for themselves. Yet these will be people who understand how to specify the problem they are trying to solve clearly, will have the patience to handle the quirks and bugs in the software they create, and may even enjoy the process. You may even have a few small and medium sized businesses hiring developers who use LLMs to create custom software. But I don't think you're going to see the wholesale adoption of personalized software.
And that only considers the ability of people to specify the problem they are trying to solve. There are other considerations, such as interoperability. We live in a networked world after all, and interoperability was important even before everything was networked.
I very much like to use the years of debugging and innovation others spent on that very same problem that I'm having.
What's interesting in reading comments like this is reading the same type of message across a bunch of different fields and aspects of life.
"When continents move, not only the weather changes"
If GenAI keeps increasing its abilities and doesn't bankrupt a number of companies first, I think it's going to put a lot of people in bubbles that encompass their entire lives. It's not difficult to imagine little pockets of hyperreality where some people's lives are fed only by generated content and their existence starts to behave more like a video game than anything grounded in the physical. It's going to be interesting to see what the fractured remains of society look like in that future.
It can be mitigated by PR submitters doing a review and edit pass prior to submitting a PR. But a lot of submitters don't currently do this, and in my experience the average quality of PRs generated by AI is definitely significantly lower than those not generated by AI.
With the time they save using AI, they can get much more work done. So much that having other engineers learn the codebase is probably not worth it anymore.
Large scale software systems can be maintained by one or two folks now.
Edit: I'm not going to get rate limited replying to everyone, so I'll just link another comment:
I need to make decisions about how things are implemented. Even if it can pick “a way” that’s not necessarily going to be a coherent design that I want.
In contrast for review I already made the choices and now it’s just providing feedback. More information I can choose to follow or ignore.
e.g. Vibe coding defeats GNOME developers' main argument for endlessly deleting features and degrading user experience - that features are ostensibly "hard to maintain".
Well, LLMs are rapidly reducing development costs to 0.
The bottleneck for UI development is now testing, and here desktop Linux has an advantage - Linux users have been trained like Pavlov's dogs to test and write detailed upstream bug reports, something Windows and macOS users just don't do.
At some point the investors want to see profit.
Also, it's a formal system and process; "vibe" coding is anything but. Call me curmudgeonly, but I don't think "vibe coding" should be a phrase used to describe LLM-assisted software engineering in large/critical systems.
Oh sweet summer child.
> Well, LLMs are rapidly reducing development costs to 0.
And maintenance costs, along with technical debt, rapidly go up.
People (the community and employers) previously were impressed because of the amount of work required. Now that respect is gone, since people can't automatically tell on the surface whether this is low-effort vibe code or something else.
Community engagement has dropped. Stars aren't being given out as freely. People aren't actively reading your code like they used to.
For projects done before LLMs you can still link effort and signal, but for anything started now, everyone assumes it's LLM-created. No one wants to read that code, not the way you would read another human's. Fewer will download the project.
Many of the reasons why I wrote open source are gone. And knowing the biggest/only engagement will come from LLMs copying your work and giving you no credit.. what's the point?
Nobody cares if you wrote 5000 LOC; what they care about is what it does, how it does it, how fast and how well it does it, and none of those qualifiers are about volume.
Without any kind of offence implied: As maintainer of a few open source projects, I'm happy if it stops being an employability optimisation vector. Many of the people who don't code for fun but to get hired by FAANG aren't really bringing joy to others anyway.
If we end up with a small web of enthusiasts who write software for solving challenges, connecting intellectually with likeminded people, and altruism—then I'm fine with that. Let companies pay for writing software! Reduce the giant dependency chains! Have less infrastructure dedicated to distributing all that open source code!
What will remain after that is the actual open source code true to the idea.
but for others coding will become an art and craft, like woodworking or other hobbies that require mastery.
CNC saws used to take pencil drawings as input and now they can handle files. People kept making handmade furniture while CNCs existed.
Open source projects around a need will continue. Things like a YouTube downloader fill a need. But many projects were about showing off what you as a developer could write to impress a community. Those are dead. Projects that showcased new coding styles or ways of doing things are dead.
FAANG open source employment was never a thing. FAANG filtered by leetcode, referrals, clout and H-1B visas.
I can't think of even a single example of OSS being monetized through direct user engagement. The bulk of it just isn't monetized at all, and what is monetized (beyond like a tip jar situation where you get some coffee money every once in a while) is primarily sponsored by enterprise users, support license sales, or through grants, or something like that. A few projects like Krita sell binaries on the steam store.
From the tools used to design and develop the models (programming languages, libraries), to the operating systems running them, to the databases used for storing training data... plus, of course, they were trained mostly on open source code.
If OSS didn't exist, it's highly unlikely that LLMs would have been built.
would anyone want SlopHub Copilot if it had been trained exclusively on Microsoft's code?
(rhetorical question)
"most" maintainers make exactly zero dollars. Further, OSS monetization rarely involves developer engagement, it's been all about enterprise feature gating
[1] https://git-scm.com/book/en/v2/Distributed-Git-Distributed-W...
The GPL is a dead man walking, since you can have any LLM cleanroom a new implementation in a new language from a public spec, with a verifiable "never looked at the original source", and license it as permissively as you wish (MIT, BSD, etc.).
case in point, check out my current deps on the project I'm currently working on with LLM assist: https://github.com/pmarreck/validate/tree/yolo/deps
"validate" is a project that currently validates over 100 file formats at the byte level; its goal is to validate as many formats as possible, for posterity/all time.
Why did I avoid the GPL (which I am normally a fan of), given that this is open source? I have an even-higher-level project I'm working on, implementing automatic light parity protection (which can proactively repair data without a RAID/ZFS setup), which I want to make for sale; its code will (initially) be private, and it uses this as a dependency (no sense in protecting data that is already corrupted).
Figured I'd give this to the world for free in the meantime. It's already found a bunch of actually-corrupt files in my collection (note that there's still some false-positive risk; I literally released this just yesterday and it's still actively being worked on), including some cherished photos from a Japan trip I took a few years ago that cannot be replaced.
It has Mac, Windows and Linux builds. Check the github actions page.
I was under the impression that copyright was only available for works created by people.
My guess is instead of Googling "library that does X" people are asking AI to solve the problem and it's regurgitating a solution in place? That's my theory anyway.
---
Concrete example of a no: I set up [1] in such a way that anyone can implement a new blog -> rss feed; docs, agents.md, open-source, free, etc...
Concrete example of a yes: Company spends too much money on simple software.
--- Our Vision ---
I feel the need to share: https://grove.city/
Human Flywheel: Human tips creator <-> Creator engages with audience
Agent Flywheel: Human creates creative content <-> Agent tips human
Yes, it uses crypto, but it's just stablecoins.
This is going to exist in some fashion and all online content creation (OSS and other) will need it.
---
As with everything, it's obvious
Is anyone replacing firefox, chromium, postgres, nginx, git, linux, etc? It would be idiotic to trade git for a vibe coded source control. I can't even imagine the motivations, maybe "merges the way I like it"?
Not sure, but anyone who's saying this stuff hasn't taken even a basic first-level glance at what it would entail. By all means, stop paying $10 a month to some "JSON validator SaaS", but then don't complain about the little niggling bugs, maintenance and organization that come with it. But please stop pretending you can just vibe code your own Kafka, Apache, Vulkan, or Postgres.
Yes, you can probably go faster (possibly not in the right direction, if inexperienced), but ultimately, something like that would still require a very senior, experienced person using the tool in a very guided way, with heavy review. But why take on the maintenance, the bug hunting, and everything else, unless that is your main business objective?
Even if you can 10x, if you use that to just take on 10x more maintenance, you haven't increased velocity. To really go faster, that 10x must be focused on the right objective -- distinctive business value. If you use that 10x to generate hundreds of small tools you now have to juggle and maintain, that have no docs or support, no searchable history of problems solved, you may have returned yourself to 1x (or worse).
This is the old "we'll write our own inhouse programming language" but leaking out to apps. Sure, java doesn't work _exactly_ the way you want it to, you probably have complaints. But writing your own lang will be a huge hit to whatever it was you actually wanted to use the language for, and you lose all the docs, forums, LSP / debugging tools, ecosystem, etc.
I was very sceptical, but I will admit I think vibe coding has a place in society; just what that place is remains to be determined. It can't help most people, for sure, but it can help some in some situations.
If they don't exist, AND the author is committed to maintaining them instead of just putting them online, sure. But one issue I see is that a lot of the tools you describe already exist, so creating another one (using code-assist tools or otherwise) just adds noise IMO.
The better choice is to research and plan (as you say in your first sentence) before committing resources. The barrier to "NIH" is lowered by code assistants, which risks reducing collaboration in open source land in favor of "I'll just write my own".
Granted, "I'll write my own" has always felt like it has a lower barrier to entry than "I'm going to search for this tool and learn to use it".
Maybe the best feature of vibe coding is that it makes the regret factor of poor early choices much lower. It's kind of magic to go "you know what, I was wrong, let's try this approach instead" without having to spend huge amounts of time fixing things or rewriting 80% of the project.
It's made it a lot more fun to try building big projects on my own, where I used to go into decision paralysis or prematurely optimize and never get to the meat of the core project or the learning.
It's also been nice to have agents review my projects for major issues, so I feel more confident sharing them.
Setting out to implement a feature only to immediately get bogged down in details that I could probably get away with glossing over. LLMs short circuit that by just spitting something out immediately. Of course it's of questionable quality, but once you get something working you can always come back and improve it.
I haven't worked out how to do this for my own projects.
Once you've set it up it's not too hard to imagine an AI giving an initial PR assessment... to discard the worst AI slop, offer some stylistic feedback, or suggest performance concerns.