
CMA targets Google AI overviews in move to loosen search dominance

https://www.ft.com/content/5b6881e5-81a6-4497-928e-58b3706bb2eb
1•1vuio0pswjnm7•1m ago•0 comments

Chrome will make popular scripts load faster (by picking winners)

https://danfabulich.medium.com/chrome-will-make-popular-scripts-load-faster-by-picking-winners-bc...
1•jesup•4m ago•1 comment

Cache Is King, a Roadmap

https://nemorize.com/roadmaps/cache-is-king
1•reverseblade2•4m ago•0 comments

Networks Hold the Key to a Decades-Old Problem About Waves

https://www.quantamagazine.org/networks-hold-the-key-to-a-decades-old-problem-about-waves-20260128/
1•jnord•5m ago•0 comments

Agentic Memory Poisoning: How Long-Term AI Context Can Be Weaponized

https://instatunnel.my/blog/agentic-memory-poisoning-how-long-term-ai-context-can-be-weaponized
1•birdculture•6m ago•0 comments

Four Traditions Revisited

https://third-bit.com/2026/01/28/four-traditions-revisited/
1•mooreds•6m ago•0 comments

Please Don't Say Mean Things about the AI I Just Invested a Billion Dollars In

https://www.mcsweeneys.net/articles/please-dont-say-mean-things-about-the-ai-that-i-just-invested...
3•randycupertino•7m ago•0 comments

Markdown Tables Have Never Looked Better (2025)

https://charm.land/blog/glamour-tables/
2•mooreds•7m ago•0 comments

Show HN: Badgewind – Generate SVG Badges Styled with Tailwind via URL

https://github.com/agmmnn/badgewind
1•agmmnn•8m ago•0 comments

Artificial Intelligence can generate a feeling of intimacy

https://uni-freiburg.de/en/artificial-intelligence-can-generate-a-feeling-of-intimacy/
1•geox•10m ago•0 comments

snapshot.debian.org

https://snapshot.debian.org/
1•gjvc•13m ago•1 comment

The Means-Testing Industrial Complex

https://lpeproject.org/blog/the-means-testing-industrial-complex/
2•gok•13m ago•0 comments

Against the Neurotypical Inquisition

https://objectivegrok.com/
1•lywald•15m ago•0 comments

Ask HN: Who does online joins anymore?

1•simianwords•16m ago•0 comments

BunnyCDN Is Having an Outage

https://status.bunny.net
1•punkpeye•17m ago•1 comment

Forward-Deployed Job Titles

https://a16z.com/forward-deployed-job-titles/
1•donutshop•17m ago•0 comments

In 6 violent encounters, evidence contradicts immigration officials' narratives

https://www.reuters.com/world/us/evidence-contradicts-trump-immigration-officials-accounts-violen...
6•petethomas•18m ago•0 comments

State Department confirms federal censorship shield law incoming

https://prestonbyrne.com/2026/01/28/state-department-confirms-federal-granite-act-incoming/
3•MassPikeMike•20m ago•0 comments

XChat (Twitter E2EE) Security Review by Trail of Bits [pdf]

https://github.com/trailofbits/publications/blob/master/reviews/2025-10-x-xchat-securityreview.pdf
2•some_furry•21m ago•0 comments

Chiune Sugihara

https://en.wikipedia.org/wiki/Chiune_Sugihara
2•handfuloflight•23m ago•0 comments

DashPane – A faster app switcher for macOS with fuzzy search

1•jbetala7•24m ago•0 comments

Forge – Transform nested JSON into governed dbt models for BQ/Snowflake

https://forge.foxtrotcommunications.net/portal
1•brady_bastian•24m ago•1 comment

The UK paid £4.1M for a bookmarks site

https://mahadk.com/posts/ai-skills-hub
10•JustSkyfall•27m ago•0 comments

Rogue agents and shadow AI: Why VCs are betting big on AI security

https://techcrunch.com/2026/01/19/rogue-agents-and-shadow-ai-why-vcs-are-betting-big-on-ai-security/
1•PaulHoule•30m ago•0 comments

Detecting Spoilage with a Transcription-Based Biosensor

https://enviromicro-journals.onlinelibrary.wiley.com/doi/10.1111/1751-7915.70267
1•gnabgib•31m ago•0 comments

Seatbelt Basalt

https://en.wikipedia.org/wiki/Seatbelt_basalt
2•Neuronaut•31m ago•0 comments

The Browser You Trust

https://pikseladam.com/29-01-2026-the-browser-you-trust/
1•pikseladam•32m ago•0 comments

FAA ignored warnings before DCA crash: "100% preventable"—federal investigators

https://www.washingtonpost.com/transportation/2026/01/27/final-ntsb-hearing-dca-crash/
8•bookofjoe•33m ago•1 comment

Assessing internal quality while coding with an agent

https://martinfowler.com/articles/exploring-gen-ai/ccmenu-quality.html
1•tortilla•34m ago•0 comments

Ubiquiti: The U.S. tech enabling Russia's drone war

https://hntrbrk.com/ubiquiti/
4•foliveira•36m ago•0 comments

Jellyfin LLM/"AI" Development Policy

https://jellyfin.org/docs/general/contributing/llm-policies/
107•mmoogle•2h ago

Comments

hamdingers•1h ago
> LLM output is expressly prohibited for any direct communication

I would like to see this more. As a heavy user of LLMs, I still write 100% of my own communication. Do not send me something an LLM wrote; if I wanted to read LLM output, I would ask an LLM.

giancarlostoro•1h ago
Yeah, I use LLMs to show me how to shorten my emails, because I can type for days. It helps a lot when I feel like I just need a short, concise email, but I still write it all myself.
gonzalohm•1h ago
The same goes for LLM code: I don't want to review your code if it was written by an LLM.

I only use LLMs to write text/communication, because that's the part of my work I don't like.

adastra22•1h ago
I’m glad they have a carve-out for using LLMs to translate or to fix up English communications. LLMs are a great accessibility tool that is making open source development truly global. Translation and grammar fix-up are things LLMs are very, very good at!

But that is translation, not “please generate a pull request message for these changes.”

Gigachad•1h ago
Better to use Google Translate for this than ChatGPT. Either ChatGPT massively changes the text and slopifies it, or people are lying about using it for translation only, because the outputs are horrendous. Google Translate won't fluff out the output with garbage or reformat everything with emoji.
embedding-shape•50m ago
"Translate this from X to X, don't change any meaning or anything else, only translate the text with idiomatic usage in target language: X"

Using Google Translate probably means you're using a language model in the end anyways behind the scenes. Initially, the Transformer was researched and published as an improvement for machine translation, which eventually led to LLMs. Using them for translation is pretty much exactly what they excel at :)
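
For illustration, a minimal sketch of that kind of translation-only prompting, assuming the OpenAI Python client; the model name, prompt wording, and sample text are placeholders, not anything this thread or Jellyfin prescribes:

    # Sketch only: a translation-only prompt, assuming the OpenAI Python
    # client; the model name and prompt wording are placeholders.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def translate(text: str, source: str, target: str) -> str:
        # Constrain the model to translate, not rewrite or embellish.
        prompt = (
            f"Translate the following from {source} to {target}. "
            "Do not change the meaning, add content, or reformat; "
            "use idiomatic phrasing in the target language.\n\n"
            f"{text}"
        )
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            messages=[{"role": "user", "content": prompt}],
        )
        return response.choices[0].message.content

    print(translate("Voici ma PR, elle corrige le bug d'encodage.", "French", "English"))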

habinero•44m ago
Yep. If you don't know the language, it's best not to pretend you do.

I've done this kind of thing even when I think it's likely they speak English. (I speak zero Japanese here.) It's just polite, and you never know who's going to be reading it first.

> Google翻訳を使用しました。問題が発生した場合はお詫び申し上げます。貴社のウェブサイトにコンピュータセキュリティ上の問題が見つかりました。詳細は下記をご覧ください。ありがとうございます。 [I used Google Translate; I apologize if any problems arise. I have found a computer security issue on your website. Details are below. Thank you.]

> I have found a computer security issue on your website. Here are details. Thank you.

mort96•1h ago
Why would you want to use a chat bot to translate? Either you know the source and destination languages, in which case you'll almost certainly do a better job (certainly a more trustworthy one), or you don't, in which case you shouldn't be handling translations for that language anyway.

Same with grammar fixes: if you don't know the language, why are you submitting grammar changes?

MarsIronPI•1h ago
No, I think GP means grammar fixes to your own communication. For example, if I don't speak Japanese very well and I want to write to you in Japanese, I might write a message in Japanese, then ask an LLM to fix up my grammar and check my writing to make sure I don't sound like a complete idiot.
mort96•55m ago
I have read a lot of bad grammar from people who aren't very good at the language but are trying their best. It's fine. Just try to express yourself clearly and we'll figure it out.

I have also read text where people who aren't very good at the language tried to "fix it up" by feeding it through a chat bot. It's horrible. It's incredibly obvious that they didn't write the text, the tone is totally off, it's full of obnoxious ChatGPT-isms, etc.

Just do your best. It's fine. Don't subject your collaborators to shitty chat bot output.

pessimizer•44m ago
You seem to be judging business communications by weird middle-class aesthetics, while the people writing the emails are just trying to be clear.

If you think that every language level is always sufficient for every task (a fluency truther?), then you should agree that somebody who writes an email in a language they are not confident in, puts it through an LLM, and decides the result explains their idea better than they had managed to is always correct in that assessment. Why are you second-guessing them and indirectly criticizing their language skills?

mort96•32m ago
Running your words through ChatGPT doesn't make you clear. If your own words are clear enough to be understood by ChatGPT, they're clear enough to be understood by your peers; adding ChatGPT to the mix only creates opportunities for meaning to be mangled. And text that's bad enough to be ambiguous may be translated into perfectly clear text that reflects the wrong interpretation of your words, risking misunderstandings that wouldn't happen if the ambiguity had been preserved instead of eliminated.

I have no idea what you're talking about with regard to being a "fluency truther"; I think you're putting words in my mouth.

habinero•17m ago
Agreed. Humans are insanely good at figuring out intent and context, and running stuff through an LLM breaks that.

The times I've had to communicate IRL in a language I don't speak well, I do my best to speak slowly and enunciate, and I trust they'll try their best to figure it out. It's usually pretty obvious what you're asking lol. (Also, a lot of people just reply with "Can I help you?" in English lol)

I've occasionally had to email sites in languages I don't speak (to tell them about malware or whatever), and I write up a message in the simplest, most basic English I can. I run that through machine translation, prefaced with "This was generated by Google Translate", and include both versions in the email.

Just do your best to communicate intent and meaning, and don't worry about sounding like an idiot.

denkmoon•1h ago
For translating communications like "Here is my PR, it does X, can you please review it?", not localisation of the app.
SchemaLoad•1h ago
"I just used it to clean up my writing" seems to be the usual excuse when someone has generated the entire thing and copy pasted it in. No one believes it and it's blatantly obvious every time someone does this.
ChadNauseam•15m ago
Sometimes I ramble for a long time and ask an LLM to clean it up. It almost always slopifies it to shreds: it can't extract the core ideas, it matches everything to the closest popular (i.e., boring-to-read) concept, etc.
newsclues•7m ago
Using software for translation is fine as long as the original text is also present for native speakers to check, and any important machine-translated information should be reviewed by a human.
Kerrick•56m ago
Relevant: https://noslopgrenade.com
gllmariuty•33m ago
Yeah, you could ask an LLM, but are you sure you know what to ask?

Like that joke about the mechanic who charges $100 for hitting the car once with his wrench.

darkwater•1h ago
Seems perfectly legit, and hopefully it will help create new contributors who learn and understand what the AI helped them generate.
lifetimerubyist•1h ago
> Violating this rule will result in closure/deletion of the offending item(s).

Should just be an instant perma-ban (along with closure, obviously).

Hamuko•1h ago
Seems a bit disproportionate. I'd say that's more of a "repeat offender" type of solution.
lifetimerubyist•1h ago
What's disproportionate is the mountains of slop out there and the number of people who think they can just sling slop for cheap online cred.
MarsIronPI•1h ago
Once might just be a script kiddie not knowing any better. More than once is a script kiddie refusing to know any better.
SchemaLoad•1h ago
Submitting a pure-slop PR and description is a serious offense that is obviously not acceptable.
giancarlostoro•1h ago
I think at some point we will need a "PEP 8" for LLM/AI code contributions: a document that is universally reusable and adoptable per project. Call it an "Agent Policy" or what have you. Any agent worth its salt would read it before touching a codebase and warn the user that their contributions might not be accepted, depending on project policy. Just as we have GPL, BSD, MIT, etc., it would probably make sense to have this, especially for those of us who are respectful of a project's needs and wishes. I think there's definitely room for sane LLM-assisted or vibe-coded contributions, but you have to put in a little work to validate your changes: run every test and make sure you understand the output and implications, not just shove a PR at the devs and hope they accept it.

A lot of the time, open source PRs are very strategic pieces of code that must not introduce regressions; an LLM does not necessarily know or care about that, and someone vibe coding might not know the project's expectations. I guess instead of, or aside from, a Code of Conduct, we need a sort of "Expectation of Code" document that covers the project's expectations (a sketch of what a machine-readable version might look like follows).
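
For illustration only, a sketch of what a machine-readable "Agent Policy" check might look like; the file name, fields, and values are invented for this example, not an existing standard:

    # Hypothetical sketch: an agent checking a project's policy file before
    # generating a contribution. ".agent-policy.json" and its fields are
    # invented for illustration; no such standard exists today.
    import json
    from pathlib import Path

    def load_agent_policy(repo_root: str) -> dict:
        path = Path(repo_root) / ".agent-policy.json"  # hypothetical file
        if not path.exists():
            return {}  # the project states no machine-readable policy
        return json.loads(path.read_text())

    policy = load_agent_policy(".")
    if policy.get("llm_code") == "prohibited":
        print("Warning: this project does not accept LLM-generated code.")
    if policy.get("llm_communication") == "prohibited":
        print("Warning: PR descriptions and comments must be written by a human.")
    for check in policy.get("required_checks", []):
        print(f"Run before submitting: {check}")  # e.g. the project's test suite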

embedding-shape•40m ago
> any agent worth its salt would read it before touching a codebase and warn the user that their contributions might not be accepted

Are you talking about some agent that is specific to writing FOSS code, or something? Otherwise I don't see why we'd want all agents to act like this.

As always, it's the contributor's responsibility to understand both the code base and the contributing process before they attempt to contribute. If they don't, they might receive push-back or have their contribution deleted, and that's pretty much expected; you're essentially spamming if you don't understand what you're trying to "help" with.

Understanding this before contributing is part of understanding how FOSS collaboration works. Some projects have very strict guidelines, others very lax ones, and it's up to you to figure out what exactly they expect from contributors.

ChristianJacobs•1h ago
This seems fair, tbh. And I fully agree with the policy for issues/discussions/PRs.

I know there will probably be a whole host of people from non-English-speaking countries who will complain that they only use AI to translate because English is not their first (or maybe even second) language. To those I will just say: I would much rather read your non-native English, knowing you put thought and care into what you wrote, than read an AI's (poor) interpretation of what you hoped to convey.

nabbed•1h ago
Although: "An exception will be made for LLM-assisted translations if you are having trouble accurately conveying your intent in English."
ChristianJacobs•1h ago
I am quite obviously blind, but I still stand by my sentiment. I would rather have a "bad" but honest PR body than a machine-translated one where the author isn't sure what it says. How will you know whether it says what you meant?
fragmede•50m ago
突然出現一大段外文文字會讓很多人感到反感。即使不能百分之百確定翻譯準確,大多數使用者仍然更傾向於將其翻譯成英語。 [A sudden wall of foreign-language text puts a lot of people off. Even without being 100% sure the translation is accurate, most users would still prefer it translated into English.]
adastra22•1h ago
There is a carve-out exception for this in the doc.
bjackman•1h ago
I think the spirit of the policy also allows you to write your own words in your own language and have an AI translate them.

(But also, for the majority of people, old-fashioned Google Translate works great.)

(Edit: it's actually an explicit carve-out.)

transcriptase•1h ago
I suspect the vast number of individuals in developing countries currently spamming LLM commits to every open source project on earth, who often speak neither the project's language nor the programming language, are not going to pay much attention to this policy. It's become a numbers game: automation blasting "contributions" at projects with name recognition and hoping one sneaks in for your resume/portfolio.
estimator7292•1h ago
Policy is not put in place to prevent anything. Policy is put in place so that you have a sign to point at while you lock a PR thread.
FanaHOVA•1h ago
People can write horrible PRs manually just as well as they do with AI (see the Hacktoberfest drama, etc.).

"LLM Code Contributions to Official Projects" would read exactly the same if it just said "Code Contributions to Official Projects": write concise PRs, test your code, explain your changes, and handle review feedback. None of this is different whether the code is written manually or with an LLM. It just looks like a long virtue-signaling post.

getmoheb•1h ago
Virtue signaling? That seems like an uncharitable reading.

The point, and the problem, is volume. Doing it manually has always imposed a de facto volume limit, which LLMs have effectively removed. That, as I understand it, is the problem these posts and policies are designed to address.

mort96•1h ago
A large enough difference in degree becomes a difference in kind. Chat bots have vastly inflated the number of shitty PRs, to the point that managing them requires different solutions.
micromacrofoot•1h ago
These seem fair, but it's the type of framework that really only catches egregious cases — people using the tools appropriately will likely slip through undetected.
anavid7•1h ago
> LLM/"AI"

love the "AI" in quotes

wmf•52m ago
It's incongruous to me to put "AI" in scare quotes while allowing it to be used. It is intelligent.
antirez•1h ago
Good AI policies (like this one) can be spotted because the TL;DR is "Don't submit shitty code." As such, good AI policies could be replaced by contribution policies that say "Don't submit shitty code."
darkwater•1h ago
I think the gist and the "virality" of this policy is:

1) we accept good quality LLM code

2) we DO NOT accept LLM generated human interaction, including PR explanation

3) your PR must explain well enough the change in the description

Which, summed together, are far more than "no shitty code". It's rather: no shitty code, and code that YOU understand.

anthonypasq•1h ago
> 1) we accept good quality LLM code

There is no such thing as LLM code. Code is code; the same standards have always applied no matter who or what wrote it. If you paid an Indian guy to type out the PR for you 10 years ago and it was submitted under your name, it's still your responsibility.

mort96•1h ago
I don't agree at all. There's a huge difference between "someone wrote this code and at least understands the intention and the problem it's trying to solve" and "the chat bot just generated this code, nobody understands what the intention is". I'm comfortable having a conversation with a human about code they wrote. It's pointless to have a conversation with a human about code they didn't write and don't understand.

The quality of "does the submitter understand the code" is not reflected in the text of the diff itself, yet is extremely important for good contributions.

JaggedJax•1h ago
I'm not sure when this policy was introduced, but Jellyfin fairly recently released a pretty major update that introduced a lot of bugs and performance issues. I've been watching their issue tracker as they work through them, and it's flooded with LLM-generated PRs and obviously LLM-generated PR comments/descriptions/replies. A lot of the LLM-generated PRs are a mishmash of 2-8 different issues jumbled into a single PR.

I can see how frustrating it is to wade through those; they're distracting and take time away from actually getting things fixed.

bjackman•1h ago
I have lately taken to this approach when I raise bugs:

1. A fully human-written explanation of the issue, with all the info I can add.

2. As an attachment to the bug (not a PR), explicitly noted as such, an AI slop fix and a note that it makes my symptom go away.

I've been on the receiving end of one bug report in this format, and I thought it was pretty helpful. Even though the AI fix was garbage, the fact that the patch made the bug go away was a useful signal.

Gigachad•1h ago
The open-to-anyone PR model might be at risk now. How can maintainers be expected to review unlimited incoming slop? I can see a lot of open source projects just giving up on community contributions, or only letting trusted members contribute after they have demonstrated more than a passing interest in the project.
h4kunamata•59m ago
> LLM output is expressly prohibited for any direct communication

One more reason to support the project!!

patchorang•39m ago
I very much like the ban on LLM output in communication. Nothing is worse than getting a huge body of text the sender clearly hasn't even read; you either have to ignore it or spend 15 minutes explaining why it isn't even relevant to the conversation.

Sort of related: Plex doesn't have a desktop music app, and the PlexAmp iOS app is good but meh. So I spent the weekend vibe coding my own Plex music apps (macOS and iOS), and I have been absolutely blown away by what I was able to make. I'm sure the code quality is terrible, and I'm not sure a human would be able to jump in and do anything, but they are already the apps I use day-to-day for music.

Cyphase•31m ago
In other words, you are responsible for the code you submit (or cause to be submitted via automated PRs), regardless of how fancy your tools are.

That said, I understand calling it out specifically. I like how they wrote this.

Related:

> https://news.ycombinator.com/item?id=46313297

> https://simonwillison.net/2025/Dec/18/code-proven-to-work/

> Your job is to deliver code you have proven to work

Amorymeltzer•17m ago
There was a discussion recently on the Wikimedia wikitech-l discussion list, and one participant had a comment I appreciated:

>I'm of the opinion if people can tell you are using an LLM you are using it wrong.

They continued:

>It's still expected that you fully understand any patch you submit. I think if you use an LLM to help you nobody would complain or really notice, but if you blindly submit an LLM authored patch without understanding how it works people will get frustrated with you very quickly.

<https://lists.wikimedia.org/hyperkitty/list/wikitech-l@lists...>