
Zram as Swap

https://wiki.archlinux.org/title/Zram#Usage_as_swap
1•seansh•5m ago•0 comments

Green’s Dictionary of Slang - Five hundred years of the vulgar tongue

https://greensdictofslang.com/
1•mxfh•6m ago•0 comments

Nvidia CEO Says AI Capital Spending Is Appropriate, Sustainable

https://www.bloomberg.com/news/articles/2026-02-06/nvidia-ceo-says-ai-capital-spending-is-appropr...
1•virgildotcodes•9m ago•2 comments

Show HN: StyloShare – privacy-first anonymous file sharing with zero sign-up

https://www.styloshare.com
1•stylofront•10m ago•0 comments

Part 1 the Persistent Vault Issue: Your Encryption Strategy Has a Shelf Life

1•PhantomKey•14m ago•0 comments

Show HN: Teleop_xr – Modular WebXR solution for bimanual robot teleoperation

https://github.com/qrafty-ai/teleop_xr
1•playercc7•17m ago•1 comments

The Highest Exam: How the Gaokao Shapes China

https://www.lrb.co.uk/the-paper/v48/n02/iza-ding/studying-is-harmful
1•mitchbob•21m ago•1 comments

Open-source framework for tracking prediction accuracy

https://github.com/Creneinc/signal-tracker
1•creneinc•23m ago•0 comments

India's Sarvam AI LLM launches Indic-language focused models

https://x.com/SarvamAI
2•Osiris30•24m ago•0 comments

Show HN: CryptoClaw – open-source AI agent with built-in wallet and DeFi skills

https://github.com/TermiX-official/cryptoclaw
1•cryptoclaw•27m ago•0 comments

Show HN: Make OpenClaw respond in Scarlett Johansson's AI Voice from the Film Her

https://twitter.com/sathish316/status/2020116849065971815
1•sathish316•29m ago•2 comments

CReact Version 0.3.0 Released

https://github.com/creact-labs/creact
1•_dcoutinho96•31m ago•0 comments

Show HN: CReact – AI Powered AWS Website Generator

https://github.com/creact-labs/ai-powered-aws-website-generator
1•_dcoutinho96•32m ago•0 comments

The rocky 1960s origins of online dating (2025)

https://www.bbc.com/culture/article/20250206-the-rocky-1960s-origins-of-online-dating
1•1659447091•37m ago•0 comments

Show HN: Agent-fetch – Sandboxed HTTP client with SSRF protection for AI agents

https://github.com/Parassharmaa/agent-fetch
1•paraaz•38m ago•0 comments

Why there is no official statement from Substack about the data leak

https://techcrunch.com/2026/02/05/substack-confirms-data-breach-affecting-email-addresses-and-pho...
8•witnessme•42m ago•1 comments

Effects of Zepbound on Stool Quality

https://twitter.com/ScottHickle/status/2020150085296775300
2•aloukissas•46m ago•1 comments

Show HN: Seedance 2.0 – The Most Powerful AI Video Generator

https://seedance.ai/
2•bigbromaker•49m ago•0 comments

Ask HN: Do we need "metadata in source code" syntax that LLMs will never delete?

1•andrewstuart•55m ago•1 comments

Pentagon cutting ties w/ "woke" Harvard, ending military training & fellowships

https://www.cbsnews.com/news/pentagon-says-its-cutting-ties-with-woke-harvard-discontinuing-milit...
6•alephnerd•57m ago•2 comments

Can Quantum-Mechanical Description of Physical Reality Be Considered Complete? [pdf]

https://cds.cern.ch/record/405662/files/PhysRev.47.777.pdf
1•northlondoner•58m ago•1 comments

Kessler Syndrome Has Started [video]

https://www.tiktok.com/@cjtrowbridge/video/7602634355160206623
2•pbradv•1h ago•0 comments

Complex Heterodynes Explained

https://tomverbeure.github.io/2026/02/07/Complex-Heterodyne.html
4•hasheddan•1h ago•0 comments

MemAlign: Building Better LLM Judges from Human Feedback with Scalable Memory

https://www.databricks.com/blog/memalign-building-better-llm-judges-human-feedback-scalable-memory
1•superchink•1h ago•0 comments

CCC (Claude's C Compiler) on Compiler Explorer

https://godbolt.org/z/asjc13sa6
2•LiamPowell•1h ago•0 comments

Homeland Security Spying on Reddit Users

https://www.kenklippenstein.com/p/homeland-security-spies-on-reddit
41•duxup•1h ago•10 comments

Actors with Tokio (2021)

https://ryhl.io/blog/actors-with-tokio/
1•vinhnx•1h ago•0 comments

Can graph neural networks for biology realistically run on edge devices?

https://doi.org/10.21203/rs.3.rs-8645211/v1
1•swapinvidya•1h ago•1 comments

Deeper into the sharing of one air conditioner for 2 rooms

1•ozzysnaps•1h ago•0 comments

Weatherman introduces fruit-based authentication system to combat deep fakes

https://www.youtube.com/watch?v=5HVbZwJ9gPE
3•savrajsingh•1h ago•0 comments

Sen. Whitehouse: We are moving to file a bipartisan Section 230 repeal

https://bsky.app/profile/judiciarydems.senate.gov/post/3m7sjbvhbms2z
38•tguvot•1mo ago

Comments

tguvot•1mo ago
https://www.threads.com/@senamyklobuchar/post/DSK4Uf6kezY/

"It has been nearly 30 years since Congress passed Section 230 and gave platforms immunity from lawsuits.

Our children deserve a safer internet and tech companies need to be held accountable for what happens on their platforms."

techblueberry•1mo ago
I have a question. Section 230 basically says companies are allowed to moderate without being defined as publishers and losing liability protection.

Could X just basically stop moderating altogether? One (of many?) conflicts here is that they legally have to moderate some things (CSAM), and there would be conflict in terms of moderating adult content. Basically, is the law consistent enough to adopt a hands-off strategy to maintain liability protection? Or would you be forced to go the other direction?

tguvot•1mo ago
Let's be realistic: CSAM is just a vehicle to kill 230 here. And I'll be the first to admit that CSAM is a problem, whether in this context or in "chat control".

I work at a company that provides some enterprise messaging, and a few years ago we were rather surprised to find that a bunch of people were using our service for CSAM sharing. I've had friends in other industries run into cases where their products (not even chat) were (ab)used for the same purpose.

dragonwriter•1mo ago
> Could X just basically stop moderating altogether?

An algorithmic feed is one of the things that would make them a publisher without Section 230. So, they could, but they wouldn't be anything like X anymore.

> Basically, is the law consistent enough to adopt a hands-off strategy to maintain liability protection?

No, that's why Section 230 was adopted: to address an existential legal threat to any site of non-trivial scale with user-generated content. Without Section 230 or a radical revision of lots of other law, the only practical option is for providers to do as much review and editing of, and accept the same liability for, UGC as they would for first-party content.

If you wanted to tighten things up without intentionally nuking UGC as a viable thing for internet businesses practically subject to US jurisdiction, you could revise 230 to explicitly not remove distributor liability (it doesn't actually say it does, and the courts' extension of it to do so was arguably erroneous), which would give sites an obligation to respond to actual knowledge of unlawful content but not presume it from the act of presenting the content. But the "repeal 230" group isn't trying to solve problems.

stockresearcher•1mo ago
X could do just about anything. It’s actually hard to know what the current state of liability is these days, now that platforms have integrated algorithmic decision-making regarding what to show you.

In Anderson v. TikTok, the appeals court decided that since the little girl did not specifically search for the videos she watched, TikTok's algorithm made what amounted to an editorial decision to show them to her, and thus Section 230 did not give TikTok any protection. TikTok ultimately chose not to appeal to the Supreme Court, and thus this is the current state of the law in Pennsylvania, New Jersey, and Delaware. Other courts may decide differently.

The general idea is that whenever algorithms are deciding what you see, Section 230 is not in play - but the First Amendment might be. The Supreme Court hinted that this is how they view things, BTW. If this is how it is, then Section 230 is essentially a dead law already and losing it only affects old-fashioned blogs and forums.

thfuran•1mo ago
> If this is how it is, then Section 230 is essentially a dead law already and losing it only affects old-fashioned blogs and forums.

But blogs and forums should be able to exist.

lesuorac•1mo ago
Curating HN to be about only tech-specific topics is protected by 230. Literally, the history of 230 is that there were two lawsuits [1].

1. Website was sued over having "defamatory" content posted by a user and website won because they had no moderation (minus illegal stuff).

2. Website was sued over having "defamatory" content posted by a user and website lost because they had moderation (curated to be "family friendly").

Politicians (and less importantly, the general public) like the idea of websites being able to be "family friendly".

So forums and blogs can still exist, but if you do any sort of not strictly legally required moderation, you have legal liability for all content without 230.

[1]: https://en.wikipedia.org/wiki/Section_230#Background_and_pas...

Bjartr•1mo ago
> not strictly legally required moderation

Ah, so this is a way to make us "need" to enshrine moderation opinion into legislation in order to have nice things

krapp•1mo ago
It's going to be fun watching HN, which is full of people who support this sort of thing (and even more extreme regulations to boot), deal with the ramifications of this forum's guidelines and moderation policies being de facto illegal.

It won't even be "turning into Reddit"; it's all going to turn into 4chan.

dragonwriter•1mo ago
> So forums and blogs can still exist, but if you do any sort of not strictly legally required moderation, you have legal liability for all content without 230.

Which means the consequence for any mistake in sticking exactly to the bounds of legally mandatory moderation is enormous liability (either massive civil liability if you go slightly beyond the minimum, or, given the source of most minimums, catastrophic criminal liability if you fall below it); the only realistic approach at non-trivial scale is just not to allow UGC except at the level you are willing to edit as if it were first-party content you were going to be fully responsible for.

amanaplanacanal•1mo ago
You missed a group: advertisers. They care about what content their ads appear next to.

Exciting times. It will be interesting to see how this all shakes out.

root_axis•1mo ago
> The general idea is that whenever algorithms are deciding what you see, Section 230 is not in play

This isn't correct. The ruling was very narrow, with a key component being that a death was directly attributed to a trend recommended by the algorithm that TikTok was aware of and knew was dangerous. That part is key - from a Section 230 enforcement perspective it's basically the equivalent of not acting to remove illegal content. Basically everything we've understood about how algorithms are liable since Section 230 was enacted remains intact.

stockresearcher•1mo ago
I don't agree. The ruling used logical reasoning based on the 2024 NetChoice decision, in which the Supreme Court ruled that the actions of the moderating algorithms enjoyed First Amendment protection. The First Amendment protects you from liability for your own speech, while Section 230 protects you from liability for somebody else's speech. Ergo, if the platform was protected by the First Amendment, then the algorithm output was the speech of the platform.

NetChoice had a bunch of concurring opinions, including one from ACB that essentially says they really aren't sure how they'd rule in a case directly challenging algorithmic recommendations. That's why I say it's not clear what the liability situation is, and it really is baffling why TikTok chose not to appeal.

monocularvision•1mo ago
https://www.techdirt.com/2020/06/23/hello-youve-been-referre...

techblueberry•1mo ago
I don't think this answered my question, but fun anyways!

Bender•1mo ago
In my opinion, and based on my limited knowledge (not being a lawyer), moderating is not equal to publishing. Deleting illegal material is not publishing. At least that is how I will move forward on my little semi-private and private forums until my lawyers advise me otherwise.

astrange•1mo ago
> One (of many?) conflicts here is that they legally have to moderate some things (CSAM)

That's not really accurate. You have to report it to NCMEC if you encounter it. And you have to do some other things like copyright takedowns too.

But all those are very nuanced "have to"s.

bediger4000•1mo ago
Just a few lawsuits after a Section 230 repeal will cause vastly more moderation and the disappearance of many or most comment sections. This isn't going to lead to a free-speech Valhalla.

I think I'm implicitly assuming that laws are equally applied, which is increasingly untrue.

JumpCrisscross•1mo ago
> will cause vastly more moderation and the disappearance of many or most comment sections

We really don’t know this.

lesuorac•1mo ago
I mean, we don't know it for sure, but we do know it.

It's like saying that repealing laws prohibiting dumping lead into drinking water (e.g., Prop 65) won't cause companies to dump lead into water. But we passed Prop 65 because companies were dumping lead. Section 230 of the Communications Decency Act was added because people were suing websites over comments in their comment sections that were not made by staff.

JumpCrisscross•1mo ago
> Section 230 of the Communications Decency Act was added because people were suing websites over comments

Were damages paid? (Was damage ever proven in court?)

m4ck_•1mo ago
Is anyone taking bets on how long it'll take for a certain group of high-net-worth individuals to sue Wikipedia out of existence for not censoring publicly available information?

On the bright side: future generations will know a cozy, curated version of reality. They won't have to hear distasteful and unsettling things about our leaders in government and industry. We'll finally be safe from those billionaire sex pests and fraudsters, because we'll have no clue they exist.

2OEH8eoCRo0•1mo ago
I hope they do!

cosmicgadget•1mo ago
This would be an interesting combo with Moody v. NetChoice.