I remember when people were crying about how much power a Google search uses. This is the same thing all over again, and it is as pointless now as it was back then.
https://arstechnica.com/ai/2025/08/google-says-it-dropped-th...
> Google says it dropped the energy cost of AI queries by 33x in one year. The company claims that a text query now burns the equivalent of 9 seconds of TV.
So, autocomplete done by deterministic algorithms in IDEs is okay, but autocomplete done by LLM algorithms - no, that's banned? Ok, surely everybody agrees with that, it's policy after all.
How is it possible to distinguish between the two in the vast majority of cases, where the hand-written code and the autocompleted code are byte-for-byte identical?
Are we supposed to record video of ourselves coding to show that we typed the letters one by one?
> 2. Recommending generative AI tools to other community members for solving problems in the postmarketOS space.
Is searching for pieces of code considered part of solving problems?
Then how do we distinguish between finding a required function by grepping the code and finding it by asking an LLM to search for it?
Can we ask an LLM questions about postmarketOS? Like, "what is the proper way to query the kernel for X given Z"?
If a community member asks this question and I already know the answer via an LLM, am I now banned from giving the correct answer?
--
Don't get me wrong. I am sick and tired of the vomit-inducing AI bullshit (as opposed to the tremendous help that LLMs provide to experienced devs).
I fail to see how a policy like this is even enforceable, let alone productive and sane.
On the other hand, I absolutely see where this policy is coming from. It seems that projects are having a hard time navigating the issue and are looking for ways to stem the insurmountable amount of incoming slop.
I think we still haven't found the right way to do it.
Because autocomplete still requires heavy user input and a SWE at the top of the decision-making tree. You could argue that using Claude or Codex enables you to do the same thing, but there's no guarantee someone isn't vibecoding and skipping the testing needed to ensure, first, that everything can be debugged and, second, that it fits in with the broader codebase before they try to merge or open a PR.
Plenty of people use Claude like an autocomplete or to bounce ideas off of, which I think is a great use case. But besides that, using a tool like that in more extreme ways is becoming increasingly normalized and probably not something you want in your codebase if you care about code quality and avoiding pointless bugs.
Every time I see a post on HN about some miracle work Claude did, it's always been very underwhelming. Wow, it coded a kernel driver for out-of-date hardware! That doesn't do anything except turn a display on... great. Claude could probably help you write a driver in less time, but it'll only really work well, again, if you're at the top of the decision-making hierarchy and are manually reviewing the code. No guarantees of that in the FOSS world, because we don't have keyloggers installed on everybody's machine.
But again: how do we distinguish between manual code input and sophisticated autocomplete?
That ship has sailed with Codex 5.3 for 90% of SWE jobs, unfortunately. I expect the next 9% won't survive the following 12 months, and the last 1% will be done within 5 years.
It isn't even about principles: projects not using gen AI will become basically irrelevant; the pace of competitors enabled by gen AI will be too great.
mono442•1h ago
MonkeyClub•1h ago
ACCount37•1h ago
I can understand "untested AI-genned code is bad, and thus anything that reeks of AI is going to be scrutinized" - especially given that PostmarketOS deals a lot with kernel drivers for hardware. Notoriously low error margins. But they just had to go out of their way and make it ideological rather than pragmatic.
yehoshuapw•1h ago
ACCount37•1h ago
Having an LLM helps, especially when you're facing a new subsystem you're not familiar with and trying to understand how things are done there. They still can't do the heavy-duty driver work by themselves, but they're good enough for basic guidance and boilerplate.
trollbridge•1h ago
hedora•50m ago
This applies to the person you're replying to too.
I think their policy is poorly thought out, and that little good will come of it. At best, it'll cause drama in the project, and discourage useful contributions. It's a shame, since we desperately need an alternative to the phone duopoly.
crimsonnoodle58•1h ago
If you use AI to make repetitive tasks less repetitive, and clean up any LLM-ness afterwards, would they notice or care?
I find blanket bans inhibiting; they reek of fear of change rather than a real, substantive stance.
jonathrg•1h ago
https://docs.postmarketos.org/policies-and-processes/develop...
crimsonnoodle58•1h ago
jsheard•1h ago
The AI policy linked from the OP explains why. It's half not wanting to deal with slop, and half ethical concerns which still apply when it's used judiciously.
zozbot234•44m ago
That never happens. It's actually easier to write the code from scratch and avoid the LLM-ness altogether.
egorfine•21m ago
But at the same time, I cannot imagine reverting to coding with no help from LLMs. Asking Stack Overflow and waiting for hours just to have my question closed, instead of asking an LLM? No way.
jonathrg•1h ago
trollbridge•1h ago
Joker_vD•1h ago
Joker_vD•1h ago
As long as they align with the correct (i.e. yours) values, of course. When they adopt the wrong values, it's not fine.
jonathrg•1h ago
debugnik•40m ago
ForHackernews•1h ago
hu3•59m ago
And I highly doubt iOS and Android are free from LLM assisted code at this point.
mpol•41m ago
pantalaimon•25m ago
hu3•16m ago
Not even humans can do that. Documentation needs to at least be reverse-engineered and understood before implementation.
imadr•32m ago
Why is it that every time a person or group of people enacts an anti-LLM policy in their project, other people feel a personal need to stress how useful LLMs are and how that project is bound to fail if it doesn't use them?
postmarketOS clearly exists and works. EVEN if LLMs were absolutely perfect at speeding up development tenfold, is there any absolute moral necessity to use them?
Also, isn't this just the goalpost-moving that LLM fanatics love to point out?
hu3•17m ago
Because AI-assisted code is most probably already present in devices they use.
And I dare say that even for postmarketOS:
1) There's no way they can prevent AI-assisted code from reaching their codebase.
2) They will most probably change this policy in the future, lest other forks/projects outpace them in terms of utility and they get reduced to a carriage in a car world.
raincole•12m ago
The stance is there to deter random vibe-coders trying to resume-max by submitting PRs to known open source projects. There are so many of them rn. Hopefully, by making it clear, (some of) them will realize that doing that is just wasting their tokens.
hu3•3m ago
But to be clear, their AI stance is as clear-cut as can be. Their stance IS INDEED to "prevent AI-assisted code from reaching their codebase":
> The following is not allowed in postmarketOS:
> Submitting contributions fully or in part created by generative AI tools to postmarketOS.
source: https://docs.postmarketos.org/policies-and-processes/develop...
surgical_fire•1h ago
Why don't you share the list of very useful things you created instead, mono442?
nananana9•1h ago
mono442•57m ago
jsheard•54m ago
qsera•51m ago
mono442•25m ago
qsera•21m ago
hakube•42m ago
lm28469•29m ago
The vibecoder paradox: everyone is 10x as productive, yet no one can show even a 1.2x increase in anything (besides bot-generated comments, traffic, and other background noise).