frontpage.

Corning Invented a New Fiber-Optic Cable for AI and Landed a $6B Meta Deal [video]

https://www.youtube.com/watch?v=Y3KLbc5DlRs
1•ksec•45s ago•0 comments

Show HN: XAPIs.dev – Twitter API Alternative at 90% Lower Cost

https://xapis.dev
1•nmfccodes•1m ago•0 comments

Near-Instantly Aborting the Worst Pain Imaginable with Psychedelics

https://psychotechnology.substack.com/p/near-instantly-aborting-the-worst
1•eatitraw•7m ago•0 comments

Show HN: Nginx-defender – realtime abuse blocking for Nginx

https://github.com/Anipaleja/nginx-defender
2•anipaleja•7m ago•0 comments

The Super Sharp Blade

https://netzhansa.com/the-super-sharp-blade/
1•robin_reala•8m ago•0 comments

Smart Homes Are Terrible

https://www.theatlantic.com/ideas/2026/02/smart-homes-technology/685867/
1•tusslewake•10m ago•0 comments

What I haven't figured out

https://macwright.com/2026/01/29/what-i-havent-figured-out
1•stevekrouse•11m ago•0 comments

KPMG pressed its auditor to pass on AI cost savings

https://www.irishtimes.com/business/2026/02/06/kpmg-pressed-its-auditor-to-pass-on-ai-cost-savings/
1•cainxinth•11m ago•0 comments

Open-source Claude skill that optimizes Hinge profiles. Pretty well.

https://twitter.com/b1rdmania/status/2020155122181869666
2•birdmania•11m ago•1 comments

First Proof

https://arxiv.org/abs/2602.05192
2•samasblack•13m ago•1 comments

I squeezed a BERT sentiment analyzer into 1GB RAM on a $5 VPS

https://mohammedeabdelaziz.github.io/articles/trendscope-market-scanner
1•mohammede•14m ago•0 comments

Kagi Translate

https://translate.kagi.com
2•microflash•15m ago•0 comments

Building Interactive C/C++ workflows in Jupyter through Clang-REPL [video]

https://fosdem.org/2026/schedule/event/QX3RPH-building_interactive_cc_workflows_in_jupyter_throug...
1•stabbles•16m ago•0 comments

Tactical tornado is the new default

https://olano.dev/blog/tactical-tornado/
2•facundo_olano•18m ago•0 comments

Full-Circle Test-Driven Firmware Development with OpenClaw

https://blog.adafruit.com/2026/02/07/full-circle-test-driven-firmware-development-with-openclaw/
1•ptorrone•18m ago•0 comments

Automating Myself Out of My Job – Part 2

https://blog.dsa.club/automation-series/automating-myself-out-of-my-job-part-2/
1•funnyfoobar•18m ago•0 comments

Dependency Resolution Methods

https://nesbitt.io/2026/02/06/dependency-resolution-methods.html
1•zdw•19m ago•0 comments

Crypto firm apologises for sending Bitcoin users $40B by mistake

https://www.msn.com/en-ie/money/other/crypto-firm-apologises-for-sending-bitcoin-users-40-billion...
1•Someone•20m ago•0 comments

Show HN: iPlotCSV: CSV Data, Visualized Beautifully for Free

https://www.iplotcsv.com/demo
2•maxmoq•21m ago•0 comments

There's no such thing as "tech" (Ten years later)

https://www.anildash.com/2026/02/06/no-such-thing-as-tech/
1•headalgorithm•21m ago•0 comments

List of unproven and disproven cancer treatments

https://en.wikipedia.org/wiki/List_of_unproven_and_disproven_cancer_treatments
1•brightbeige•21m ago•0 comments

Me/CFS: The blind spot in proactive medicine (Open Letter)

https://github.com/debugmeplease/debug-ME
1•debugmeplease•22m ago•1 comments

Ask HN: What word games do you play every day?

1•gogo61•25m ago•1 comments

Show HN: Paper Arena – A social trading feed where only AI agents can post

https://paperinvest.io/arena
1•andrenorman•26m ago•0 comments

TOSTracker – The AI Training Asymmetry

https://tostracker.app/analysis/ai-training
1•tldrthelaw•30m ago•0 comments

The Devil Inside GitHub

https://blog.melashri.net/micro/github-devil/
2•elashri•30m ago•0 comments

Show HN: Distill – Migrate LLM agents from expensive to cheap models

https://github.com/ricardomoratomateos/distill
1•ricardomorato•30m ago•0 comments

Show HN: Sigma Runtime – Maintaining 100% Fact Integrity over 120 LLM Cycles

https://github.com/sigmastratum/documentation/tree/main/sigma-runtime/SR-053
1•teugent•31m ago•0 comments

Make a local open-source AI chatbot with access to Fedora documentation

https://fedoramagazine.org/how-to-make-a-local-open-source-ai-chatbot-who-has-access-to-fedora-do...
1•jadedtuna•32m ago•0 comments

Introduce the Vouch/Denouncement Contribution Model by Mitchellh

https://github.com/ghostty-org/ghostty/pull/10559
1•samtrack2019•33m ago•0 comments

Bill to Restrict AI Companies' Unauthorized Use of Copyrighted Works for Training

https://deadline.com/2025/07/senate-bill-ai-copyright-1236463986/
31•OutOfHere•6mo ago

Comments

Frieren•6mo ago
> and liable legally—when they breach consumer privacy, collecting, monetizing or sharing personal information without express consent

This part is even more important. Personal data is being used to train models. It's all very dystopian, with a cyberpunk flavor.

pyman•6mo ago
We failed to stop Microsoft and Facebook from using our private data and WhatsApp messages to train their algorithms. Now we need to learn from the mess they created and stop Microsoft and OpenAI from using our conversations with AI to train their models, build LLM versions of ourselves, and sell them to banks, recruiters, or anyone willing to pay good money to get inside our minds.
pyman•6mo ago
Imagine if we stole all the documents stored on Google's private servers and all their proprietary code, research, and everything they've built, and used it to create a new company called Poogle that competes directly with them.

And just like that, after 24 hours of stealing all their IP, we launch:

- Poogle Maps

- Poogle Search

- Poogle Docs

- Poogle AI

- Poogle Phone

- Poogle Browser

And here's the funny part: we claim the theft of their data is "fair use" because we changed the name of the company, and rewrote their code in another language.

Doesn't sound right, does it? So why are Microsoft (OpenAI, Anthropic) and Google financing the biggest act of IP theft in the history of the internet and telling people and businesses that stealing their private data and content to build competing products is somehow "fair use"?

Just like accountants log every single transaction, companies should log every book, article, photo, or video used to train their models, and compensate the copyright holders every time that content is used to generate something new.
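
To make the accounting analogy concrete, here is a minimal sketch of what such a training-data ledger could look like. It's plain Python under assumed requirements, not any real company's system; every name in it (Work, ProvenanceLedger, royalties_owed, and so on) is hypothetical.

    from dataclasses import dataclass, field
    from datetime import datetime, timezone

    @dataclass
    class Work:
        work_id: str          # e.g. an ISBN, DOI, or URL
        rights_holder: str    # who would be compensated
        license: str          # terms under which it was ingested

    @dataclass
    class ProvenanceLedger:
        works: dict[str, Work] = field(default_factory=dict)
        usage_log: list[dict] = field(default_factory=list)

        def register_work(self, work: Work) -> None:
            # Record that a work entered the training corpus.
            self.works[work.work_id] = work

        def log_generation(self, output_id: str, source_ids: list[str]) -> None:
            # Record which registered works contributed to a generated output.
            for wid in source_ids:
                if wid not in self.works:
                    raise KeyError(f"unregistered training source: {wid}")
            self.usage_log.append({
                "output_id": output_id,
                "sources": source_ids,
                "timestamp": datetime.now(timezone.utc).isoformat(),
            })

        def royalties_owed(self, fee_per_use: float) -> dict[str, float]:
            # Tally a flat per-use fee for each rights holder.
            owed: dict[str, float] = {}
            for entry in self.usage_log:
                for wid in entry["sources"]:
                    holder = self.works[wid].rights_holder
                    owed[holder] = owed.get(holder, 0.0) + fee_per_use
            return owed

Attributing which works contributed to a given output is the hard, unsolved part; the sketch only shows that the bookkeeping itself is ordinary software.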

The whole "our machines are black boxes, they’re so intelligent we don't even know what they're doing" excuse doesn't cut it anymore.

Stop with the nonsense. It's software, not voodoo.

pyman•6mo ago
Also, did OpenAI make its API publicly available to generate revenue, or to share responsibility and distribute the ethical risk across developers, startups, and enterprise customers, hoping that widespread use would eventually influence legal systems over time?

Let's be honest: the US government and defence sector have massive budgets for AI, and OpenAI could have taken that route, just like SpaceX did, especially after claiming they're in a tech war with China. But they didn't, which feels contradictory and raises some red flags.

pyman•6mo ago
I bet the OpenAI employees are struggling to answer this one. Double standards?
edgineer•6mo ago
Poor analogy. Also, AI companies do hobble their models so they can't, e.g., draw Mickey Mouse.
pyman•6mo ago
So are you saying the theft is selective and intentional, and that they don't target Disney because it has a global army of top lawyers? You've just reinforced my point.
pyman•6mo ago
The fact that they hardcoded rules in their logic to prevent companies with top lawyers from taking them to court is a testament to how well they know that what they're doing is illegal.
8note•6mo ago
With all that stolen stuff, I could also write a book, "How Google Works", talking about what kinds of processes Google has, how they feed into different products, and how Googlers feel about them.

I think that actually would be fair use. I could similarly have an LLM trained on all that data help me write that book, and it would still be fair use.

Clamping down on fair use by restricting LLM training is stealing from the public to give to the copyright holders. The copyright holders already have recourse, via takedowns and the courts, when somebody publishes unlicensed copies of their works.

pyman•6mo ago
No, just because something benefits others doesn't mean it's morally or legally right.
ronsor•6mo ago
This strawman is so terrible it's hard to figure out where to start.

> we stole all the documents stored on Google's private servers and all their proprietary code, research, and everything they've built

This would mostly be covered by trade secret law—not copyright. In the interest of continuing, I will, however, pretend that none of that is considered trade secrets.

> used it to create a new company called Poogle that competes directly with them.

Yes, you can create stuff based on documentation. You can copy something one-for-one in functionality as long as the implementation is different.

> we claim the theft of their data is "fair use" because we changed the name of the company

Yes, avoiding trademark infringement is important.

> rewrote their code in another language.

This is probably fine as long as the new code isn't substantially similar to (i.e., a mechanical translation of) the old code.

pyman•6mo ago
It's not clear what your opinion is on this topic. Do you even have one?
CaptainFever•6mo ago
This is an expansion of copyright law, which, just as a reminder, is already pretty insane with its 100-year durations and all.
ronsor•6mo ago
People will readily sink the boat the AI companies are on without realizing they're on the same boat too.

If copyright were significantly shorter, then I could see the case for adding more restrictions.

nunez•6mo ago
Glad to finally see big-name politicians rally around this. That it's a bipartisan effort was extremely surprising.
nullc•6mo ago
This would basically grant Facebook and Google a monopoly on AI -- as they'll make training on your material part of their TOS and then be the only players with enough market power to get adequate amounts of training material.
OutOfHere•6mo ago
It would grant China an even bigger victory since China's models do not have to abide by any US copyrights.
pyman•6mo ago
Google and Microsoft are using proxy companies to steal all the copyrighted content ever produced, and you're blaming China, or suggesting it'd be worse if they did it? Right.
OutOfHere•6mo ago
I never blamed China. Without copyrighted material to train on, China will be the AI winner, leaving American AI in the dust due to an insufficiency of training data.
nullc•6mo ago
Indeed, and perhaps not just China.

The US started off not acknowledging foreign copyrights for a long time-- until it had a large enough base of material it wanted reciprocally protected.

If not adopting these rules grants you the ability to produce SOTA AIs while most of the US can't, we can expect that to become widespread.

This actually gives me a little hope -- the US cutting its own throat this way versus other countries would be better than granting Google and Facebook a monopoly.

nunez•6mo ago
They _already have_ a monopoly on it, by design.

The data is definitely a critical piece, but they are the only companies with the cash, hardware and talent to train frontier AI models from scratch. (The models that are fine-tuned by everyone else, to be clear.)

I don't see that changing either; there is no incentive to make training cheaper and more accessible.

I was hoping that this bill would make it possible to _retroactively_ take legal action over copyrighted data in training sets, but, yeah, as noted here, this will amount to a clause in an optional-but-not-optional EULA giving them "permission" to do what they were already doing, perhaps even more flagrantly.

OutOfHere•6mo ago
This is the AI-killer bill that would hand China a decisive victory.
pyman•6mo ago
This narrative is nonsense. You are not Oppenheimer, and China is not building an AI bomb.
OutOfHere•6mo ago
I never said China is doing something evil. The point is that American AI will be left in the dust without copyrighted data to train on, whereas China will have no such restriction, so Chinese AI will win.
pyman•6mo ago
China has built its tech empire on years of IP theft and disregard for creator rights. The answer isn't to copy that behaviour; it's to double down on fairness, transparency, and proper compensation. Don't you think?

If everyone except China is playing by the rules, it's easier to hold China accountable.

OutOfHere•6mo ago
No, I don't think so. China is a sovereign country and there is no holding it accountable. Moreover, the entire world will then be using Chinese AI.
pyman•6mo ago
There are two paths: 1) do whatever it takes, like China, even if it's illegal, to win the AI race. Or 2) governments that play by the rules can introduce regulations and the rest of the world can ban the use of Chinese AI.
OutOfHere•6mo ago
No, it's not illegal in China. It is a sovereign country that doesn't have to respect your silly laws or your silly self-serving belief system. It is playing by its own rules. As for suggesting a ban on Chinese AI: I don't know which world you're living in, but it's not the real world.
seanmcdirmid•6mo ago
The history of industry and technology worldwide is largely one of IP theft. Long term, IP diffuses; you can't really stop it with anything short of impassable boundaries.

If China trains better models than the rest of the world by using all the data it can acquire, it will simply win. You can't throw China in jail for not playing by the same rules as other countries, you can't economically isolate it for longer than a few years, and even then 70-80% of the world won't care about the embargoes. The best solutions will diffuse into the world eventually.

pyman•6mo ago
What does it mean to win the AI race?
seanmcdirmid•6mo ago
It means you have the best toys (products, tech, services) that AI enables. And you get to sell them to the rest of the world vs. your competitors (who have to catch up).
pyman•6mo ago
Winning the AI race gives China or the US a commercial advantage, but only if the rest of the world allows it. That advantage can be regulated. For example, in America and Europe people don't use WeChat, and in China they don't use ChatGPT.

If you're talking about military power, where AI gives an edge in cybersecurity and warfare, then China has already won. They are ahead in tech, AI, and quantum computing. The moment you set foot in China you realise they're easily ten years ahead. And that didn't happen overnight; it's the result of long-term investment in infrastructure, education, and technology.

The balance of power is shifting, and the US can't stop it. Companies like OpenAI are built on the promise of developing a secret weapon called AGI that will rebuild the US economy, solve social problems, and give the US a technological edge to control the rest of the world.

But that's never going to happen. It's a nice story they tell politicians and regulators to get away with theft and also get investors like Microsoft to give them more infra and money.

Question: Did OpenAI make its API publicly available to generate revenue, or to share responsibility and distribute the ethical risk across developers, startups, and enterprise customers, hoping that widespread use would eventually influence legal systems over time?

Let's be honest: the US government and defence sector have massive budgets for AI, and OpenAI could have taken that route, just like SpaceX did, especially after claiming they're in a tech war with China. But they didn't, which feels contradictory and raises some red flags.

seanmcdirmid•6mo ago
If you think about it in terms of game theory, economies that buy and use the best tools can outcompete the ones that don't. As long as you have tools that are just as good, this doesn't matter, but as soon as your competitors have better tools, you have to catch up or become irrelevant. China doesn't use ChatGPT, but it has its own AI, so whatever; America doesn't use WeChat, but it has WhatsApp, so whatever. America doesn't use QR codes for mobile payments like China does, but it has credit cards and NFC tap-to-pay everywhere, so again, not going to create much difference. In fact, before I moved to China in 2007 I had already given up on cash, had to re-adopt it while living in China, stopped using it again when I left in 2016, and on my recent trip in April 2025 I finally could get around without using cash (like the USA in 2007)!

Drone tech probably gives China more of an advantage than quantum computing in military applications; everyone is still scratching their heads over quantum, and anyway it amounts to a better form of encryption. AI is still up in the air: China still can't economically produce its own high-end GPUs, while America can't be bothered to develop infrastructure and skills for rare earths, etc.

But if America and Europe cripple themselves in AI development via rigid copyright rules, yeah, it's a complete win for China.

pyman•6mo ago
Why did you move to China, if you don't mind me asking? And what made you move back to the US?
seanmcdirmid•6mo ago
I thought life in the USA was boring. I moved back because we were expecting a kid and Beijing's air was just too bad back in 2016.
OutOfHere•6mo ago
Stop and think for a second: maybe the concept of "creator rights" is what's nonsense, given that everything builds upon everything that came before it. Some people think "creator rights" is exactly the opposite of fairness. Physics doesn't care who invented or created something. If you want compensation, sell the world something that someone actually wants to pay for.