Someone on Hacker News called my project "open weights", arguing that without sharing the prompts and process that created the code, I was essentially doing the AI equivalent of releasing model weights without the training data. The code was visible, but the inputs weren't.
That comment led me down a rabbit hole about what "open source" actually means in an AI-assisted world. The problem: Open source was designed assuming humans wrote code. If you could read the code, you could understand how it was made. AI breaks that assumption. When Claude writes a function based on my prompt, the code tells you what it does, but not why it exists in that form.
My proposal is "Open Method": share not just the code, but the process behind it. The prompts, the workflow, the PRDs, the decisions. Enough that someone else could understand not just what you built, but how you built it.
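To make that concrete, here is a rough sketch of what an AI_METHOD.md (one of the discussion questions below) might contain. The sections are illustrative only, one possible shape rather than a spec, and not a description of Ferrite's actual docs:

    # AI_METHOD.md (illustrative outline)
    Tools: which models/assistants were used, and for what
    Prompts: where the significant prompts are archived in the repo
    Workflow: how a change goes from PRD to prompt to generated code to review
    Review: what human verification happened before merge (tests, manual audit)
    Provenance: which files or commits are largely AI-generated vs. hand-written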
I wrote about this in more depth here: https://dev.to/olaproeis/beyond-open-source-why-ai-assisted-...
Some context on Ferrite:
900+ GitHub stars
Approved for Flatpak (Flathub) release
Just got code signing approved
All development methodology is documented in docs/ai-workflow/
I'm not saying everyone must share their prompts. But I think the open source community should discuss what transparency looks like when AI is writing our code.
Questions for discussion:
If you were reviewing an AI-assisted PR, what would you want to see?
Should repos have an AI_METHOD.md alongside README.md?
Does "open source" need to evolve for the AI era?