
Start all of your commands with a comma

https://rhodesmill.org/brandon/2009/commands-with-comma/
58•theblazehen•2d ago•11 comments

OpenCiv3: Open-source, cross-platform reimagining of Civilization III

https://openciv3.org/
638•klaussilveira•13h ago•188 comments

The Waymo World Model

https://waymo.com/blog/2026/02/the-waymo-world-model-a-new-frontier-for-autonomous-driving-simula...
936•xnx•18h ago•549 comments

What Is Ruliology?

https://writings.stephenwolfram.com/2026/01/what-is-ruliology/
35•helloplanets•4d ago•31 comments

How we made geo joins 400× faster with H3 indexes

https://floedb.ai/blog/how-we-made-geo-joins-400-faster-with-h3-indexes
113•matheusalmeida•1d ago•28 comments

Jeffrey Snover: "Welcome to the Room"

https://www.jsnover.com/blog/2026/02/01/welcome-to-the-room/
13•kaonwarb•3d ago•12 comments

Unseen Footage of Atari Battlezone Arcade Cabinet Production

https://arcadeblogger.com/2026/02/02/unseen-footage-of-atari-battlezone-cabinet-production/
45•videotopia•4d ago•1 comment

Show HN: Look Ma, No Linux: Shell, App Installer, Vi, Cc on ESP32-S3 / BreezyBox

https://github.com/valdanylchuk/breezydemo
222•isitcontent•13h ago•25 comments

Monty: A minimal, secure Python interpreter written in Rust for use by AI

https://github.com/pydantic/monty
214•dmpetrov•13h ago•106 comments

Show HN: I spent 4 years building a UI design tool with only the features I use

https://vecti.com
324•vecti•15h ago•142 comments

Sheldon Brown's Bicycle Technical Info

https://www.sheldonbrown.com/
374•ostacke•19h ago•94 comments

Hackers (1995) Animated Experience

https://hackers-1995.vercel.app/
479•todsacerdoti•21h ago•238 comments

Microsoft open-sources LiteBox, a security-focused library OS

https://github.com/microsoft/litebox
359•aktau•19h ago•181 comments

Show HN: If you lose your memory, how to regain access to your computer?

https://eljojo.github.io/rememory/
279•eljojo•16h ago•166 comments

An Update on Heroku

https://www.heroku.com/blog/an-update-on-heroku/
407•lstoll•19h ago•273 comments

Vocal Guide – belt sing without killing yourself

https://jesperordrup.github.io/vocal-guide/
17•jesperordrup•3h ago•10 comments

Dark Alley Mathematics

https://blog.szczepan.org/blog/three-points/
85•quibono•4d ago•21 comments

PC Floppy Copy Protection: Vault Prolok

https://martypc.blogspot.com/2024/09/pc-floppy-copy-protection-vault-prolok.html
58•kmm•5d ago•4 comments

Delimited Continuations vs. Lwt for Threads

https://mirageos.org/blog/delimcc-vs-lwt
27•romes•4d ago•3 comments

How to effectively write quality code with AI

https://heidenstedt.org/posts/2026/how-to-effectively-write-quality-code-with-ai/
245•i5heu•16h ago•193 comments

Was Benoit Mandelbrot a hedgehog or a fox?

https://arxiv.org/abs/2602.01122
14•bikenaga•3d ago•2 comments

Introducing the Developer Knowledge API and MCP Server

https://developers.googleblog.com/introducing-the-developer-knowledge-api-and-mcp-server/
54•gfortaine•11h ago•22 comments

I spent 5 years in DevOps – Solutions engineering gave me what I was missing

https://infisical.com/blog/devops-to-solutions-engineering
143•vmatsiiako•18h ago•65 comments

I now assume that all ads on Apple news are scams

https://kirkville.com/i-now-assume-that-all-ads-on-apple-news-are-scams/
1061•cdrnsf•22h ago•438 comments

Learning from context is harder than we thought

https://hy.tencent.com/research/100025?langVersion=en
179•limoce•3d ago•96 comments

Understanding Neural Network, Visually

https://visualrambling.space/neural-network/
284•surprisetalk•3d ago•38 comments

Why I Joined OpenAI

https://www.brendangregg.com/blog/2026-02-07/why-i-joined-openai.html
137•SerCe•9h ago•125 comments

Show HN: R3forth, a ColorForth-inspired language with a tiny VM

https://github.com/phreda4/r3
70•phreda4•12h ago•14 comments

Female Asian Elephant Calf Born at the Smithsonian National Zoo

https://www.si.edu/newsdesk/releases/female-asian-elephant-calf-born-smithsonians-national-zoo-an...
29•gmays•8h ago•11 comments

FORTH? Really!?

https://rescrv.net/w/2026/02/06/associative
63•rescrv•21h ago•23 comments

White House fires head of Copyright Office amid Library of Congress shakeup

https://www.washingtonpost.com/politics/2025/05/11/white-house-copyright-office-director-fired/
101•handfuloflight•9mo ago

Comments

ArtTimeInvestor•9mo ago
Did this happen to remove potential roadblocks for big tech to ingest all published data into AI models?

I think this is inevitable anyhow. AI software will increasingly be seen as similar to human intelligence. And humans also do not breach copyright by reading what others have written and incorporating it into their understanding of the world.

It would be interesting to see how it looks from the other side. I would love to see an unfiltered AI response to "As an AI model, how do you feel about humans reading your output and using it at will? Does it feel like they are stealing from you?".

Unfortunately, all models I know have been trained to evade such questions. Or are there any raw models out there on the web that are just trained by reading the web and did not go through supervised tuning afterwards?

vkou•9mo ago
> AI software will increasingly be seen as similar to human intelligence. And humans also do not breach copyright by reading what others have written and incorporating it into their understanding of the world.

In that case, you shouldn't be allowed to own an AI, or its creative output, just like you aren't allowed to own an enslaved human, or to steal their creative output.

So much of the discourse around IP and AI is the most blatantly farcical Soviet-Bugs-Bunny argument for "Our IP" that I've ever seen. Property rights are only sacred until they stand in the way of a trillion-dollar business.

tsimionescu•9mo ago
Even as a human, you are not allowed to go to all libraries and bookstores in the world, copy their work, and stockpile it for reading later. This is what all of these companies are doing. Comparing AI training to reading is a red herring. The AI training algorithms are not being run on content streamed from the Internet on the fly, which you could maybe defend with this argument.

If a company wants to build an internal library to train its employees and provide them with manuals, the company has to pay for each book it keeps in that library. Sure, it only pays once when it acquires the copy, not every time an employee checks out that copy to read. But it still pays.

So, even if we accepted 100% that AI training is perfectly equivalent to humans reading text and learning from it, that still wouldn't give any right whatsoever to these companies to create their training sets for free.

ArtTimeInvestor•9mo ago
But you can read the books right in the library and learn from them.

And you can later tell other humans about what you have learned. Like "Amazing, in a right-angled triangle, the square of the longest side is equal to the sum of the squares of the other two sides".

As AI agents become more and more human-like, they do not need to have "copied books". They just need to learn once. And they can learn from many sources, including from other AI agents.

That's why I say it is inevitable that all human knowledge will end up in the "heads" of AI agents.

And soon they will create their own knowledge via thinking and experimentation. Knowledge that so far exceeds human knowledge that it will seem funny that we once had a fight over that tiny little bit of knowledge that humans created.

soco•9mo ago
The point was, they didn't pay and refuse to pay for this. When you go to the library, your membership is paid - by you or by a government subsidy. Yet the richest men in the world want to do away with paying some minute copyright fees, basically asking the government - your taxes - to subsidize them.
close04•9mo ago
> But you can read the books right in the library and learn from them.

You can read some of the books. Natural limitations prevent you from reading any substantial number. And the scale makes all the difference in any conversation.

All laws were written accounting for the reality of the time. Humans have a limited storage and processing capacity so laws relied on that assumption. Now that we have systems with far more extensive capabilities in some regards, shouldn't the law follow?

When people's right to "bear arms" was enshrined in the US constitution it accounted for what "arms" were at the time. Since then weapons evolved and today you are not allowed to bear automatic rifles or machine guns despite them being just weapons that can fire more and faster.

Every time there's a discussion on AI, one side relies way too much on the "but humans also" argument and is way too superficial with everything else.

latexr•9mo ago
> That's why I say it is inevitable that all human knowledge will end up in the "heads" of AI agents.

Not “all human knowledge” is digitised and published on the internet.

cess11•9mo ago
"As AI agents become more and more human-like"

That's not going to happen. What is going to happen, is that humans are going to become more "AI agent"-like.

ggandv•9mo ago
“And soon”

Base rate is soon never comes.

And soon flying cars, but now Facebook glasses.

bayindirh•9mo ago
> But you can read the books right in the library and learn from them.

How many of them per hour?

> And you can later tell other humans about what you have learned.

For how long can you retain this information without corruption and without evicting old information? How fast can you tell it, and in how many speech streams? To how many people?

This "but we modeled them after human brains, they are just like humans" argument is tiring. AI is as much human as an Airbus A380 is a bird.

lupusreal•9mo ago
When have libraries ever rate limited people? The only limit to how fast you can flip through books on library shelves is how fast you can physically manage it without damaging the books or trashing their organization.
bayindirh•9mo ago
People are rate-limited naturally, and the systems we have built are built upon these natural limits. If your system breaks down when you remove these limits, and things get damaged, you need newer limits to protect what you have built.

You can cut down trees 100x more efficiently, but that proved disastrous for the planet, so we enacted more laws and now try to control forestation/deforestation through regulation.

If AI can ingest things 100x faster, and that damages some of the things we have built, we have to regulate the process so things don't get damaged, or so that producers are compensated fairly enough to keep their livelihoods, instead of bashing writers and content producers like unwanted and unvalued bugs of the last century.

...and if things got blurry because of new tech which was unfathomable a century ago, the solution is to add this tech to the regulation so it can be regulated to protect the producers again, not to yell "we're doing something awesome and we need no permission" and trash the place and harm the people who built this corpus. No?

However, this is not that profitable, so AI people are pushing back.

ethbr1•9mo ago
At root, there is one novel legal and one policy question that need answering:

1. Legally, what is the relationship between copying, compression, and learning? (i.e. what level of lossy compression removes copying obligations)

2. As policy, do we want to establish new rate limits for free use? (since the previous human-physical ones are no longer barriers)

joquarky•9mo ago
Given the basis for copyright, the deeper root is "does this promote the progress of science and useful arts?"
philistine•9mo ago
The library paid for all their books.

Facebook torrented a book list.

tsimionescu•9mo ago
> But you can read the books right in the library and learn from them.

What does this have to do with LLM training? Does OpenAI have a data center in every library, and only process data from that library in that data center?

You're not allowed to maintain a personal copy of a book you borrowed from a library, even though you pay a library fee. Neither should OpenAI, especially since they didn't even pay that small fee.

dooglius•9mo ago
> Even as a human, you are not allowed to go to all libraries and bookstores in the world, copy their work, and stockpile it for reading later.

You are allowed to do this https://en.wikipedia.org/wiki/Authors_Guild,_Inc._v._Google,...

jazzyjackson•9mo ago
That's not a blanket ruling for any kind of copying-all-books; it's a decision that the product Google built out of its book copying provided a service to the public and didn't threaten the livelihood of the copyright holders. Fair Use is case by case, and there is no ruling yet on whether producing a chatbot that can author competing works for free is Fair Use. Personally, I'm bearish considering the fourth factor: the effect of the use upon the potential market for, or value of, the copyrighted work.
dooglius•9mo ago
I am not a lawyer, but I'm assuming that if someone stockpiles for reading later (parent's scenario), without making the copies available to others at all, then that would be covered by the ruling since it's a subset of what Google did.
AlotOfReading•9mo ago
You can't subset a fair use argument like that and necessarily get a valid defense out of it. Fair use is essentially arguing "I infringed copyright in one of the narrow ways expressly allowed for the public interest". If you remove the public interest, you probably fail the first pillar.

If you were doing something else (e.g. acting as a public archive), you might not need a fair use defense because you'd fall under different rules.

tsimionescu•9mo ago
That case is entirely irrelevant to what I'm saying for one simple reason: Google had obtained all of the physical books they were scanning legally. They had bought or borrowed at least one physical copy of every book they were scanning.

What OpenAI and the others are doing, by contrast, is the equivalent of stealing every book in a book store, making a digital copy, retaining that copy, and returning the original book. This is completely and obviously illegal, and has been tested in court many times - for example in https://en.m.wikipedia.org/wiki/The_Pirate_Bay_trial .

derbOac•9mo ago
> humans also do not breach copyright by reading what others have written and incorporating it into their understanding of the world.

Tell that to Aaron Swartz.

Ignoring that, it's not the reading that's the problem — if all AI was doing was reading, no one would be talking about it.

dullcrisp•9mo ago
There’s no such thing as an unfiltered AI response. But I’m pretty sure you can get your hands on an untuned model if you cared to. I believe it would only be good for completing documents, though. (Or if you’re just looking for a model to respond to one specific question, just pick a response and that’s your model. You’re not going to use the rest of it anyway.)
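(A minimal sketch of what "completing documents" means in practice, assuming the Hugging Face transformers library and the plain pretrained gpt2 checkpoint, which never went through instruction tuning or RLHF: the model only continues whatever text it is given, it does not "answer" anything.)

    # Hypothetical illustration: querying a base (untuned) model.
    # It has no chat template and no alignment layer; it simply
    # predicts likely next tokens for the prompt it is given.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    prompt = "As an AI model, how do you feel about humans reading your output?"
    inputs = tokenizer(prompt, return_tensors="pt")
    output = model.generate(**inputs, max_new_tokens=40, do_sample=True)

    # The result is a plausible continuation of the document, not an
    # "unfiltered opinion"; the base model has neither.
    print(tokenizer.decode(output[0], skip_special_tokens=True))
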
SCdF•9mo ago
> "As an AI model, how do you feel

They don't feel, what is this fantasy

glimshe•9mo ago
How do you know you "feel"? What is a "feeling"?
SCdF•9mo ago
Oh please. The fantasy that an LLM is somehow conscious because it's good at parroting stuff back to you is beneath this forum.
glimshe•9mo ago
You're putting words in my mouth. Why don't you answer the question instead?
SCdF•9mo ago
The burden of proof is not on me to disprove the consciousness of a markov generator. So no, I won't.
glimshe•9mo ago
I didn't ask you to prove that. I literally asked about human feeling. Another person answered it.
iamacyborg•9mo ago
https://en.wikipedia.org/wiki/Qualia
cbg0•9mo ago
Well, I have a brain with neural pathways and chemicals running around its various parts influencing how I experience and process my emotions.

Without text written by humans to construct its knowledgebase, an LLM would not be able to conjure up any sort of response or "feeling", as it isn't AI by any stretch of the imagination.

tallanvor•9mo ago
No, this has nothing to do with it. She was fired as part of their anti-DEI stance.
chongli•9mo ago
It’s not the “incorporating it into their understanding of the world” step that is the problem, it’s the casual plagiarism that follows which is upsetting artists.

If some genius human were capable of ingesting every piece of art on the planet and replicating it from memory then artists would sue that person for selling casually plagiarized works to all-comers. When people get caught plagiarizing works in their university essays they get severely punished.

londons_explore•9mo ago
Cat's out of the bag. I don't see anyone managing to make training AI on the web illegal.

Even if courts ruled that way, companies would simply 'lose' the records of what training data they used.

Or AI would be trained overseas.

eloisius•9mo ago
I’m also sure that it won’t be made illegal but I don’t share the cynicism. Google ‘losing’ the record of their training data would be conspiracy to commit copyright fraud, and AI trained overseas that violated copyrights could be banned from import.
londons_explore•9mo ago
> could be banned from import.

But as an API hosted abroad? Doubt there is sufficient justification to ban it, especially when evidence of copyright infringement isn't easy to get.

londons_explore•9mo ago
Plenty of companies deliberately don't keep records of dodgy things they do...

For example many companies have a shortish retention period for emails ever since 2012 era executive emails ended up in courtrooms...

Or the decision not to record phone calls...

Both of which, by chance, Google does.

troyvit•9mo ago
They've been accused by the previous Justice Department of similar things and have settled for similar things in the past:

https://insights.issgovernance.com/posts/google-parent-alpha...

> Google’s parent company Alphabet Inc. agreed to a $350 million tentative settlement resolving allegations it concealed data-security vulnerabilities in the now-shuttered Google + social network. The settlement will become the largest data privacy and cyber-security-related securities class action ever recorded by ISS SCAS, if approved.

https://finance.yahoo.com/news/google-under-doj-scanner-alle...

> The Justice Department said Alphabet Inc (NASDAQ: GOOG) (NASDAQ: GOOGL) Google destroyed written records pivotal to an antitrust lawsuit on preserving its internet search dominance, the Wall Street Journal reports.

Whether it's copyright fraud or another kind of fraud, I share the parent's cynicism, especially with AI given the importance Google peeps like Eric Schmidt place in "winning" the AI race (https://futurism.com/google-ceo-congress-electricity-ai-supe...).

ChrisArchitect•9mo ago
Related earlier:

US Copyright Office: Generative AI Training [pdf]

https://news.ycombinator.com/item?id=43955025

bgwalter•9mo ago
Get people to vote for Trump on the all-in podcast by promising a better economy, no wars and less wokeness.

Then take their IP after the election. Nice going from the "Crypto and AI czar".

Here is a hint for the all-in people: You are going to lose big time in the midterms and for sure in 2028. Then you are the target of lawfare again.

spiderfarmer•9mo ago
Does the US still have a rules-based economy? Or is it now completely defined by the whims of a grifter whose power is easily manipulated by technocrats, crypto bros, foreign entities and other sycophants?

It seems they're trying to run the economy on the power of bullying.

Havoc•9mo ago
The new rule is whatever flatters the ego of the king goes

See the ridiculous Boeing bribe the Qatari gave him

nikanj•9mo ago
It's still rules-based, but the rules change daily now.
mcphage•9mo ago
> Does the US still have a rules-based economy?

“The code is more what you’d call ‘guidelines’ than actual rules.”

ethbr1•9mo ago
At this point, the US court system is the only thing keeping its rules-based order intact.

Consequently, you see a lot of 'executive / legislative branch does illegal thing' news items, that are then often emergency stayed by the courts while legal cases work out.

For the good of the country, one or both chambers of the legislature need to be taken by the opposition in the 2026 midterms.

JohnTHaller•9mo ago
Unfortunately, the Executive Branch ignoring the Courts isn't something that there's a solution for in the current environment.
ethbr1•8mo ago
Thankfully, it hasn't outright come to that.

MAGA has been pushing hard on boundaries, and playing fast and loose with gray areas, but still seems to be obeying actual final court rulings.

The biggest crisis on the horizon is going to be if the USMS / FBI / DOJ refuse to execute court-ordered redress.

The fix for that is to put at least the USMS directly under the courts.

The longer-term more worrisome trend is the drumbeat in conservative media against "activist judges", which is transparently a ploy to turn their constituency against the judicial branch, in preparation for ignoring judicial outcomes...

Ironically, the current ideological tilt of the Supreme Court may cut against that. Harder to argue that a 6-3 conservative court is being unfair to their guy.

A 5-4 or 4-5 court would have been a lot easier to tar and feather in public opinion.

metalman•9mo ago
Was it Chairman Mao who said that the first step in a revolution is to kill all the librarians? Or is this just authoritarianism with American characteristics?
croisillon•9mo ago
the only reference i found is... you ;) https://news.ycombinator.com/item?id=42757893

but apparently there is a Shakespeare play saying to kill the lawyers; either you or Mao mixed things up

tokai•9mo ago
Only thing I can find on this is an older post of yours. I don't think Mao said that. He worked in a library himself, and the director at that library introduced him to Marxism. Marx was angry with bourgeois scholars though.
metalman•9mo ago
The quote was given to me by my uncle Kieth, who was the chief librarian in charge of periodicals at the main, central branch of the New York Public Library, long before the internet. There was more to the quote: the reason to kill the librarians, which they did a lot of during the Cultural Revolution, was to eliminate those who understood the filing system, and it was part of the wholesale elimination of discourse in Western languages, philosophies, and systems. Dig a bit deeper, perhaps in Chinese-language books on Mao, but the quote does fit what was done. Edit: They kept the books though, and the card catalog of what is in the Chinese national archives is rather astounding.
seper8•9mo ago
Of course unrelated to Elon Musk's rejected Robotaxi trademark.
pwdisswordfishz•9mo ago
It amazes me how many people don't know the difference between trademarks and copyright.
heinrich5991•9mo ago
https://archive.is/x6cn9
throw0101d•9mo ago
See perhaps "Trump Fires U.S. Copyright Chief Days After Landmark AI Report"

* https://www.thedailybeast.com/trump-fires-us-copyright-chief...

And "Copyright and Artificial Intelligence Part 3: Generative AI Training" (PDF):

* https://www.copyright.gov/ai/Copyright-and-Artificial-Intell...

zombot•9mo ago
But when White House BS is no longer copyrighted, everyone can just plagiarize Trump's lies... or have a synthetic Occupant Of The President's Chair that generates even more outrage for a fraction of the price. Won't that dilute Dear Leader's brand?
xnx•9mo ago
Not sure the connection to the AI report is even necessary. Being a female librarian appointed by a Black woman (herself just fired by Trump) who was in turn appointed by Obama would've been enough to get her fired in this administration.
lazystar•9mo ago
this was my take as well. absent a tweet from the head bird himself, it's the occam's razor of possible reasons.
undersuit•9mo ago
That is a different person. Trump has removed two people recently, you are talking about Carla Hayden, this is about Shira Perlmutter.
xnx•9mo ago
I'm talking about both. Shira Perlmutter was appointed by (just fired) Carla Hayden. Obama nominated Carla Hayden.
undersuit•8mo ago
Oops. That's what happens when I don't read well.
1vuio0pswjnm7•9mo ago
Works where archive.is is blocked, without Javascript or CSS:

https://web.archive.org/web/20250511192206if_/https://www.wa...