frontpage.

Movie Posters from Africa That Are So Bad, They're Good

https://www.utterlyinteresting.com/post/bizarre-movie-posters-from-africa-that-are-so-bad-they-re...
180•bookofjoe•2h ago•57 comments

Let's Help NetBSD Cross the Finish Line Before 2025 Ends

https://mail-index.netbsd.org/netbsd-users/2025/10/26/msg033327.html
288•jaypatelani•5h ago•144 comments

10k Downloadable Movie Posters From The 40s, 50s, 60s, and 70s

https://hrc.contentdm.oclc.org/digital/collection/p15878coll84/search
237•bookofjoe•1w ago•49 comments

The bug that taught me more about PyTorch than years of using it

https://elanapearl.github.io/blog/2025/the-bug-that-taught-me-pytorch/
185•bblcla•3d ago•40 comments

Books by People – Defending Organic Literature in an AI World

https://booksbypeople.org/
11•ChrisArchitect•1h ago•1 comment

Formal Reasoning [pdf]

https://cs.ru.nl/~freek/courses/fr-2025/public/fr.pdf
76•Thom2503•6h ago•16 comments

Alzheimer's disrupts circadian rhythms of plaque-clearing brain cells

https://medicine.washu.edu/news/alzheimers-disrupts-circadian-rhythms-of-plaque-clearing-brain-ce...
3•gmays•35m ago•0 comments

Advent of Code 2025: Number of puzzles reduced from 25 to 12 for the first time

https://adventofcode.com/2025/about#faq_num_days
254•vismit2000•9h ago•150 comments

Eavesdropping on Internal Networks via Unencrypted Satellites

https://satcom.sysnet.ucsd.edu/
113•Bogdanp•5d ago•14 comments

Validating Your Ideas on Strangers

https://jeremyaboyd.com/post/validating-your-ideas-on-strangers
10•tacon•2d ago•4 comments

A worker fell into a nuclear reactor pool

https://www.nrc.gov/reading-rm/doc-collections/event-status/event/2025/20251022en?brid=vscAjql9kZ...
555•nvahalik•16h ago•396 comments

Show HN: FlashRecord – 2MB Python-native CLI screen recorder

https://github.com/Flamehaven/FlashRecord
5•Flamehaven•1h ago•0 comments

Pico-Banana-400k

https://github.com/apple/pico-banana-400k
309•dvrp•16h ago•57 comments

You Already Have a Git Server

https://maurycyz.com/misc/easy_git/
276•chmaynard•7h ago•211 comments

Ask HN: How to boost Gemini transcription accuracy for company names?

12•bingwu1995•6d ago•10 comments

Asbestosis

https://diamondgeezer.blogspot.com/2025/10/asbestosis.html
169•zeristor•9h ago•117 comments

Myanmar military shuts down a major cybercrime center, detains over 2k people

https://apnews.com/article/scam-centers-cybercrime-myanmar-a2c9fda85187121e51bd0efdf29c81da
54•bikenaga•3h ago•8 comments

The Linux Boot Process: From Power Button to Kernel

https://www.0xkato.xyz/linux-boot/
366•0xkato•19h ago•75 comments

Writing a RISC-V Emulator in Rust

https://book.rvemu.app/
77•signa11•10h ago•31 comments

Making the Electron Microscope

https://www.asimov.press/p/electron-microscope
5•mailyk•1h ago•0 comments

Clojure Land – Discover open-source Clojure libraries and frameworks

https://clojure.land/
128•TheWiggles•9h ago•30 comments

Connect to a 1980s Atari BBS through the web

https://www.southernamis.com/ataribbsconnect
46•JPolka•8h ago•2 comments

D2: Diagram Scripting Language

https://d2lang.com/tour/intro/
213•benzguo•19h ago•55 comments

The Journey Before main()

https://amit.prasad.me/blog/before-main
269•amitprasad•22h ago•102 comments

Bitmovin (YC S15) Is Hiring Engineering ICs and Managers in Europe

https://bitmovin.com/careers
1•slederer•11h ago

The FSF considers large language models

https://lwn.net/Articles/1040888/
78•birdculture•4h ago•36 comments

LaserTweezer – Optical Trap

https://www.gaudi.ch/GaudiLabs/?page_id=578
55•o4c•10h ago•5 comments

PCB Edge USB C Connector Library

https://github.com/AnasMalas/pcb-edge-usb-c
131•walterbell•15h ago•55 comments

Torchcomms: A modern PyTorch communications API

https://pytorch.org/blog/torchcomms/
20•paladin314159•22h ago•2 comments

Why I code as a CTO

https://www.assembled.com/blog/why-i-code-as-a-cto
250•johnjwang•2d ago•219 comments

The FSF considers large language models

https://lwn.net/Articles/1040888/
78•birdculture•4h ago

Comments

isodev•3h ago
> There is also, of course, the question of copyright infringements in code produced by LLMs, usually in the form of training data leaking into the model's output

Well yes, LLMs like Claude Code are merely a "copyright violation as a service". Everyone is so focused on the next new "AI" feature, but we haven't actually resolved the issue of model providers using stolen code to train their models, or their lack of transparency about where that training data comes from.

1gn15•2h ago
Copyright violation is not stealing, and training is not copyright violation (it's already been ruled as fair use, multiple times).
inglor_cz•2h ago
I think the concerning problem is when the LLM reproduces some copyrighted code verbatim, and the user doesn't even stand a chance to know it.
1gn15•2h ago
Yes, but that's not what the grandparent comment was talking about.
isodev•1h ago
If I’m the grandparent comment, that was a big part of what I meant. Stolen/unknown content goes in for training; verbatim or very close “inspired by” code comes out, and there is no way to verify the source - “violation as a service”.
fluidcruft•14m ago
Verbatim dumps are one thing, but otherwise this seems closer to the issue of plagiarism than copyright. If someone studies the Linux kernel and then builds a new kernel that follows some of its design decisions and idioms, that's not really copyright infringement.

The bigger issue (spiritually, anyway) seems to be the need to develop free software LLM tools, the same way the FSF needed to develop free compilers; without them, users will be kept from being able to adapt and control their machines. The issue is more ecological, in that programmers equipped with LLMs are likely much more productive at creating and modifying code.

Some of the rest seems more like saying that anyone who studies GCC internals is forever tainted and must write copyleft code for life, which seems laughable to me. Again, this is more a topic of plagiarism than copyright - the two are fairly similar but actually different, and not as clear cut.

CamperBob2•57m ago
When that happens, it's because the code was trivial enough to be compressed to a minuscule handful of bits... either because it literally is trivial, or because it's common enough to have become part of our shared lexicon.

As a society, we don't benefit from copyright maximalism, despite how trendy it is around here all of a sudden. See also Oracle v. Google.

isodev•1h ago
Not really; only a handful of authorities have weighed in on that, and most of them in a country where model providers literally buy themselves policy and judges.
matheusmoreira•55m ago
Yeah, copyright infringement isn't stealing, copyright shouldn't even exist to begin with.

I just think it's especially asinine how corporations are perfectly willing to launder copyrighted works via LLMs when it's profitable to do so. We have to perpetually pay them for their works and if we break their little software locks it's felony contempt of business model, but they get to train their AIs on our works and reproduce them infinitely and with total impunity without paying us a cent.

It's that "rules for thee but not for me" nonsense that makes me reach such extreme logical conclusions that I feel empathy for terrorists.

blibble•27m ago
> it's already been ruled as fair use, multiple times

most countries don't have a concept of fair use

but they nearly all have copyright law

quantummagic•2m ago
That fact in itself is a worse injustice than anything the LLM companies are doing. At the very least, it should be open to use in reporting, parody, and critique. Having no concept of such fair-use is oppressive and stifling.
falcor84•1h ago
Wasn't copyleft essentially intended to be "copyright violation as a service"? I.e. making it impossible for an individual working with copyleft code to use copyright to assert control over the code?
wvenable•6m ago
Copyleft requires strong copyright protections. Without a license, you have no rights at all to use the code. If you want to use the code, because it's copyrighted, you have to abide by the terms of the license.
bgwalter•3h ago
It looks like the FSF is going to sit this one out like the SaaS revolution, to which they reacted late with the AGPL but did not push it. They are not working on a new license and Siewicz is already low-key pushing in favor of LLMs:

"Many years ago, he said, photographs were not generally seen as being copyrightable. That changed over time as people figured out what could be done with that technology and the creativity it enabled. Photography may be a good analogy for LLMs, he suggested."

I have zero trust in the FSF since they backstabbed Stallman.

EDIT: Criticizing anything from LWN, be it Debian, Linux or FSF related, results in instant downvotes. LWN is not a critical publication and just lionizes whoever has a title and bloviates on a mailing list or at a conference.

gjvc•2h ago
Yes. 100% agree.
lukan•2h ago
"I have zero trust in the FSF since they backstabbed Stallman."

The controversial line might have also been that one.

bgwalter•2h ago
Sure, but remember that the Stallman situation started with a highly clumsy Minsky/Epstein mail on an MIT mailing list. The Epstein coverup was bipartisan and now all tech companies are ostensibly on Trump's side and even finance his ballroom.

Are there any protests or demands for the cancellation of Trump, Clinton, Wexner, Black, Barak?

I have not seen any. The cancel tech people only go after those who they perceive as weak.

inglor_cz•2h ago
Cancellation of Stallman was the low point of that period, at least within tech, but it also made quite a lot of people aware that this monster of a practice must be resisted, or it will devour everyone unchecked. (Or, at least, anyone.)
wizzwizz4•1h ago
You're forgetting the "second cancellation", where people brought legitimate (and often long-standing) criticisms against Richard Stallman. Cancelling a philosopher for having bad takes on age of consent, but otherwise drawing the line between "rape" and "not rape" in a sensible place, is not a good idea; but removing a community leader for a long history of applied misogyny is much more appropriate.
pessimizer•1h ago
> the "second cancellation", where people brought legitimate (and often long-standing) criticisms against Richard Stallman.

No, the reason why this "second cancellation" is vague is because it was the typical feeding frenzy that happens after a successful cancellation, where people hop on to paint previously uninteresting slanders in a new light. Stallman, before saying something goofy about Epstein, was constantly slandered by people who hated what he stood for and by people that were jealous of him. After he said the goofy thing, they all piled in to say "you should have listened to me." The "second cancellation" is when "he asked me out once at a conference" becomes redolent of sexual assault.

None of them seem to like the politics of Free Software, either. They attempt to taint the entire philosophy with the false taint of Stallman saying that sleeping with older teenagers that seemed to be consenting isn't the worst crime in the world. The people who attacked him for that would defend any number of intimately Epstein-related people to the death; the goal imo was to break (or to take over and steer into a perversion of itself) Free Software. Every one of them was the "it's not fair to say that about Apple" type.

wizzwizz4•1h ago
> it was the typical feeding frenzy that happens after a successful cancellation

It was actually a few years later, prompted by Richard Stallman's reinstatement by the board. I don't know what you mean by "feeding frenzy", but I habitually ignore the unreasonable voices in such cases: it's safe to assume I'm not talking about those.

> "he asked me out once at a conference"

That wasn't the main focus of the criticism I saw. However, there is an important difference between an attendee asking someone out at a conference, and an invited speaker (or organiser) asking someone out at a conference. If you're going to be in a leadership position, you need to be aware of power dynamics.

That's a running theme throughout all of the criticism of Richard Stallman, if you choose to abstract it that way: for all he's written on the subject, he doesn't understand power dynamics in social interactions. He's fully capable of understanding it, but I think he prefers the simpler idea of (right-)libertarian freedom. (And by assuming he expects others to believe he'll behave according to his respect of the (right-)libertarian freedom of others, you can paint a very sympathetic picture of the man. That doesn't mean he should be in a leadership position for an organisation as important as the FSF, behaving as he does.)

> None of them seem to like the politics of Free Software, either.

Several of them are involved in other Free Software projects. To the extent those people have criticisms of the politics of Free Software, it's that it doesn't go far enough to protect user freedoms. (I suspect I shouldn't have got involved in this argument, since I'm clearly missing context you take for granted.)

serf•1m ago
>there is an important difference between an attendee asking someone out at a conference, and an invited speaker (or organiser) asking someone out at a conference. If you're going to be in a leadership position, you need to be aware of power dynamics.

so one side of social messaging is "Don't bother trying to look for a date if you're not a CEO, worth millions, have a home, an education, a plan, a yacht and a summer home",

and the other side is

"If you're powerful you'd better know that any kind of question needs to be re-framed with the concept of a power dynamic involvement, and that if you're sufficiently powerful there is essentially no way to pursue a relationship with a lesser mortal without essentially raping them through the power dynamics of the question itself and the un-deniability of a question asked by such a powerful God."

... and you say birth rates are declining precipitously?

Pretty ridiculous. It used to be that we used conventions as the one and only time to flatten the social hierarchy -- it was the one moment where you could talk and have a slice of pizza with a billionaire CEO or actor or whatever.

Re-substantiating the classism within conventions just pushes them further into corporate product marketing and employment fairs -- in other words it turns them into shit no one wants to attend without being paid to sit in a booth.

But all of that isn't the problem: the problem lies with personal sovereignty.

If someone doesn't want to do something, they say no. If they receive retribution because of that no, we then investigate the retribution, and as a society we turn the ne'er-do-well into a social pariah until they have better behavior.

There is a major problem when we as a society have decided "No, the problem is with the underlying pressure of what a no 'may mean' for their future." 'May' being the operative word.

We have turned this into a witch-chase, but for maybe-witches or those who may turn into witches, without any real evidence of witchcraft prompting the chase.

'Power dynamics' is shorthand for "I was afraid I'd be fired if I denied Stallman"; did anything resembling this ever occur?

bgwalter•58m ago
These measures are not applied equally though.

Deb Nicholson, PSF "Executive Director", won an FSF award in 2018, handed to her by Stallman himself. Note that at that time at least one of Stallman's embarrassing blog posts was absolutely already known:

https://www.fsf.org/news/openstreetmap-and-deborah-nicholson...

In 2021 Deb Nicholson then worked to cancel Stallman:

https://rms-open-letter.github.io/

In 2025 Deb Nicholson's PSF takes money from all new Trump allies, including from those that finance the ballroom and the destruction of the historical East Wing like Google and Microsoft. Will Deb Nicholson sign a cancellation petition for the above named figures?

wizzwizz4•45m ago
I don't think Deb Nicholson values many of the ideas that those people stand for. What would be the point of trying to reform the organisations they're a part of?
bgwalter•27m ago
The PSF could reject donations from Microsoft and Google. Deb Nicholson was previously at the OSI, which is widely thought to be ... industry friendly, so that is unlikely to happen.

They could also have done research in 2018 before accepting the award, which is standard procedure for politicians etc. But of course they wanted the award for their career.

duped•13m ago
Millions of people have been trying for a decade to
pessimizer•2h ago
I have no idea how to criticize them, because I have no idea what to say about LLMs in relation to the GPL, other than that Free Software should try its best to legally protect itself from LLMs being trained on its code.

I've always been in favor of the GPLs being pushed as proprietary, restrictive licenses, and being as aggressive in enforcement as any other restrictive license. GPL'd software is public property. The association with Open Source, "Creative Commons" and "Public Domain" code is nothing but a handicap; proprietary code can take advantage of all permissively licensed code without pretending that it shares anything in terms of philosophy, and without sharing back unless it finds it strategically advantageous.

> They are not working on a new license and Siewicz is already low-key pushing in favor of LLMs

I just have no idea what I would put in a new license, or what it means to be "in favor" of LLMs. Are Free Software supporters just supposed to not use them, ever? Even if they're only trained on permissively licensed code? Do you think that it means that people are pushing to allow LLMs to train on GPL-licensed software?

I just don't understand what you're trying to say. I also have zero trust in the FSF over Stallman, simply because I don't hear people who speak like Stallman at the FSF i.e. I think his vision was pushed out along with his voice. But I do not understand what you're getting at.

bgwalter•2h ago
More or less what you said in your last paragraph: Stallman also reacted late to the web revolution, but at least he was passionate. That passion seems gone.

I don't see any sense of urgency in the reported discussion or any will to fight against large corporations. The quoted parts in the article do not seem very prepared: there are a lot of maybes, no clear stance, and no overarching vision that LLMs must be fought for the sake of software freedom.

badsectoracula•3h ago
> The prompt used to create the code should also be provided. The LLM-generated code should be clearly marked.

I have a feeling the people who write these haven't really used LLMs for programming, because even just playing around with them makes it obvious that this makes no sense - especially if you use something local that lets you rewrite the discussion at will, including any code the LLM generated. E.g. sometimes when trying to get Devstral to make something for me, I let it generate whatever (sometimes buggy/not working) code it comes up with[0], and then I start editing its response to fix the bug, so that further instructions are under the assumption it generated the correct code from the get-go, instead of trying to convince it[0] to fix the code it generated. In such a scenario there is no clear separation between LLM-generated code and manually written code, nor any specific "prompt" (unless you count all snapshots of the entire discussion every time one hits the "submit" button as a series of prompts - which technically is what the LLM uses as a prompt instead of what the user types - but I doubt this was what the author had in mind).

And all that is without taking into account what someone commented in the article: code often isn't even done in a single session, but with plans, restarting from scratch, summarizing, etc. (and there are tools to automate these too, which can use a variety of prompts of their own that the end user isn't even aware of).

TBH I think if the FSF wants to "consider LLMs" they should begin by gaining some real experience using them first - and by bringing people with such experience on board to explain things to them.

[0] I do not like anthropomorphizing LLMs, but I cannot think of another description for that :-P
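
To make the workflow above concrete, here is a minimal sketch of that rewrite-the-transcript loop, assuming a hypothetical local OpenAI-compatible chat endpoint; the URL, model name, messages, and the "bug" being fixed are all illustrative placeholders, not anything from the thread:

    import requests

    # Hypothetical local OpenAI-compatible endpoint; URL and model name are placeholders.
    API_URL = "http://localhost:8080/v1/chat/completions"
    MODEL = "devstral-small"

    def chat(messages):
        # The server's "prompt" is the entire message list, not a single string.
        resp = requests.post(API_URL, json={"model": MODEL, "messages": messages})
        resp.raise_for_status()
        return resp.json()["choices"][0]["message"]["content"]

    # First turn: ask the model for some code.
    history = [{"role": "user",
                "content": "Write a Python function that parses a semver string."}]
    history.append({"role": "assistant", "content": chat(history)})

    # Instead of asking the model to fix its own bug, edit the assistant message
    # in place so later turns assume the corrected code was there from the start.
    history[-1]["content"] = history[-1]["content"].replace(
        'version.split(",")',  # buggy line the model produced (illustrative)
        'version.split(".")',  # manually corrected line
    )

    # Continue the session; from the model's point of view the corrected code is
    # what it wrote, so there is no clean LLM-generated vs. hand-written boundary,
    # and no single "prompt" worth archiving.
    history.append({"role": "user", "content": "Now add support for pre-release tags."})
    print(chat(history))

Every press of "submit" effectively sends a new, possibly hand-edited snapshot of the whole conversation, which is why "provide the prompt" is hard to pin down here.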

cxr•2h ago
What you're describing isn't any different from a branch of commits between two people practicing a form of continuous integration where they commit whatever they have (whether it breaks the build or not, or is buggy, etc.), capped off by a merge commit when it's finally in the finished state.
badsectoracula•2h ago
Eh, I do not think these are comparable, unless you really stretch the idea of what a "commit" is and who makes it, and consider all sorts of destructive modifications of branch history and commits to be normal.
falcor84•1h ago
Agreed, it's almost like requiring that code always come with full transcripts of all the meetings where the team discussed the next steps.
jmathai•54m ago
> I have a feeling the people who write these haven't really used LLMs for programming because even just playing around with them will make it obvious that this makes no sense

This is one problem with LLM-generated code: it is very greenfield. There's no correct or even good way to do it, because it's a bit unbounded in possible approaches and quality of output.

I've tried tracking prompt history in many permutations as a means of documenting changes and making rollbacks more possible. It hasn't felt like that's the right way to think about it.
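
For what it's worth, one of those permutations might look roughly like this - a sketch of my own, not the commenter's actual setup - appending each prompt to a JSON-lines log tied to the commit it produced; the file name and record fields are invented for illustration:

    import json
    import subprocess
    from datetime import datetime, timezone

    LOG_FILE = "prompt-history.jsonl"  # hypothetical log, one JSON record per line

    def current_commit() -> str:
        # Hash of the commit the working tree is currently on.
        return subprocess.run(
            ["git", "rev-parse", "HEAD"],
            capture_output=True, text=True, check=True,
        ).stdout.strip()

    def record_prompt(prompt: str, note: str = "") -> None:
        # Append a prompt (plus an optional note) tied to the current commit.
        entry = {
            "commit": current_commit(),
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "prompt": prompt,
            "note": note,
        }
        with open(LOG_FILE, "a", encoding="utf-8") as f:
            f.write(json.dumps(entry) + "\n")

    # Usage, right after committing an LLM-assisted change:
    # record_prompt("Refactor retry logic to use exponential backoff",
    #               note="generated with a local model, then hand-edited")

Even this only captures a final prompt per commit - not edited transcripts, restarts, or multi-session work - which is part of why it may not feel like the right way to think about it.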

1gn15•2h ago
> A member of the audience pointed out that the line between LLMs and assistive (accessibility) technology can be blurry, and that any outright ban of the former can end up blocking developers needing assistive technology, which nobody wants to do.

This is because LLMs are a type of assistive technology, usually for those with mental disabilities. It's a shame that mental disabilities are still seen as less important than physical disabilities. If one takes them seriously, one would realize that banning LLMs is inherently ableist. Just make sure that the developer takes accountability for the submitted code.

somewhereoutth•27m ago
1. Understand that code that has been wholly or partly LLM-generated is tainted - it has (in at least some part) been created neither by humans nor by a deterministic, verifiable process. Any representations as to its quality are therefore void.

2. Ban tainted code.

Consider code that (in the old days) had been copy pasted from elsewhere. Is that any better than LLM generated code? Why yes - to make it work a human had to comb through it, tweaking as necessary, and if they did not then stylistic cues make the copy pasta quite evident. LLMs effectively originate and disguise copy pasta (including mimicking house styles), making it harder/impossible to validate the code without stepping through every single statement. The process can no longer be validated, so the output has to be. Which does not scale.

acoustics•15m ago
It depends on the nature of the code and codebase.

There have been many occasions when working in a very verbose enterprise-y codebase where I know exactly what needs to happen, and the LLM just types it out. I carefully review all 100 lines of code and verify that it is very nearly exactly what I would have typed myself.