
My AI skeptic friends are all nuts

https://fly.io/blog/youre-all-nuts/
806•tabletcorry•5h ago•1124 comments

Ask HN: Who is hiring? (June 2025)

270•whoishiring•11h ago•257 comments

Conformance checking at MongoDB: Testing that our code matches our TLA+ specs

https://www.mongodb.com/blog/post/engineering/conformance-checking-at-mongodb-testing-our-code-matches-our-tla-specs
50•todsacerdoti•4h ago•20 comments

Show HN: I build one absurd web project every month

https://absurd.website
136•absurdwebsite•6h ago•29 comments

Show HN: A toy version of Wireshark (student project)

https://github.com/lixiasky/vanta
190•lixiasky•11h ago•64 comments

Show HN: Kan.bn – An open-source alternative to Trello

https://github.com/kanbn/kan
352•henryball•16h ago•162 comments

Teaching Program Verification in Dafny at Amazon (2023)

https://dafny.org/blog/2023/12/15/teaching-program-verification-in-dafny-at-amazon/
21•Jtsummers•4h ago•4 comments

Ask HN: How do I learn practical electronic repair?

32•juanse•2d ago•30 comments

How to post when no one is reading

https://www.jeetmehta.com/posts/thrive-in-obscurity
508•j4mehta•22h ago•228 comments

Japanese Scientists Develop Artificial Blood Compatible with All Blood Types

https://www.tokyoweekender.com/entertainment/tech-trends/japanese-scientists-develop-artificial-blood/
101•Geekette•4h ago•23 comments

Show HN: Onlook – Open-source, visual-first Cursor for designers

https://github.com/onlook-dev/onlook
326•hoakiet98•4d ago•74 comments

CVE-2025-31200

https://blog.noahhw.dev/posts/cve-2025-31200/
92•todsacerdoti•7h ago•23 comments

ThorVG: Super Lightweight Vector Graphics Engine

https://www.thorvg.org/about
98•elcritch•16h ago•22 comments

Typing 118 WPM broke my brain in the right ways

http://balaji-amg.surge.sh/blog/typing-118-wpm-brain-rewiring
102•b0a04gl•6h ago•144 comments

Show HN: Penny-1.7B Irish Penny Journal style transfer

https://huggingface.co/dleemiller/Penny-1.7B
128•deepsquirrelnet•10h ago•71 comments

Arcol simplifies building design with browser-based modeling

https://www.arcol.io/
45•joeld42•10h ago•24 comments

Snowflake to buy Crunchy Data for $250M

https://www.wsj.com/articles/snowflake-to-buy-crunchy-data-for-250-million-233543ab
118•mfiguiere•6h ago•49 comments

Younger generations less likely to have dementia, study suggests

https://www.theguardian.com/society/2025/jun/02/younger-generations-less-likely-dementia-study
69•robaato•10h ago•59 comments

Ask HN: Who wants to be hired? (June 2025)

98•whoishiring•11h ago•246 comments

Ask HN: How do I learn robotics in 2025?

286•srijansriv•13h ago•81 comments

I made a chair

https://milofultz.com/2025-05-27-i-made-a-chair.html
327•surprisetalk•2d ago•125 comments

The Princeton INTERCAL Compiler's source code

https://esoteric.codes/blog/published-for-the-first-time-the-original-intercal72-compiler-code
131•surprisetalk•1d ago•36 comments

Piramidal (YC W24) Is Hiring a Senior Full Stack Engineer

https://www.ycombinator.com/companies/piramidal/jobs/1a1PgE9-senior-full-stack-engineer
1•dsacellarius•9h ago

Can I stop drone delivery companies flying over my property?

https://www.rte.ie/brainstorm/2025/0602/1481005-drone-delivery-companies-property-legal-rights-airspace/
87•austinallegro•7h ago•183 comments

Mesh Edge Construction

https://maxliani.wordpress.com/2025/03/01/mesh-edge-construction/
38•atomlib•11h ago•1 comment

A Hidden Weakness

https://serge-sans-paille.github.io/pythran-stories/a-hidden-weakness.html
29•serge-ss-paille•12h ago•1 comment

Intelligent Agent Technology: Open Sesame! (1993)

https://blog.gingerbeardman.com/2025/05/31/intelligent-agent-technology-open-sesame-1993/
40•msephton•2d ago•3 comments

If you are useful, it doesn't mean you are valued

https://betterthanrandom.substack.com/p/if-you-are-useful-it-doesnt-mean
746•weltview•17h ago•333 comments

Cloudflare builds OAuth with Claude and publishes all the prompts

https://github.com/cloudflare/workers-oauth-provider/commits/main/
381•gregorywegory•12h ago•287 comments

TradeExpert, a trading framework that employs Mixture of Expert LLMs

https://arxiv.org/abs/2411.00782
105•wertyk•16h ago•99 comments

The Future of Comments Is Lies, I Guess

https://aphyr.com/posts/388-the-future-of-comments-is-lies-i-guess
87•zdw•4d ago

Comments

thomasdziedzic•4d ago
LLMs do seem like a major issue for spam. Does Hacker News deal with any of this? I presume yes, but if so, how?
rizky05•1d ago
karma
msgodel•1d ago
Karma only sorts by popularity. LLMs will do best at that, not worst, especially if you use a GPT rather than one of the RLed ones.
intended•1d ago
Karma alone has never worked. Someone just seeds more accounts, matures them, and then sells them to spammers.
immibis•1d ago
HN users can make 5 comments every 4 hours, and I presume there's some hurdle in the way of creating a lot of accounts as well.

Also they get downvoted and hidden, alongside controversial but correct opinions.

The person who replied to this saying they have multiple accounts is shadowbanned - so clearly, they don't really have multiple accounts.
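
A minimal sketch of the rate limit described above; the 5-comments-per-4-hours figures come from the comment, while the data structure and function names are illustrative, not how HN actually implements it:

  import time
  from collections import defaultdict, deque

  WINDOW_SECONDS = 4 * 60 * 60   # 4 hours
  MAX_COMMENTS = 5               # per window, per user

  recent = defaultdict(deque)    # user id -> timestamps of recent comments

  def may_comment(user, now=None):
      """Sliding window: allow at most MAX_COMMENTS per WINDOW_SECONDS."""
      now = time.time() if now is None else now
      q = recent[user]
      while q and now - q[0] >= WINDOW_SECONDS:
          q.popleft()            # drop timestamps that aged out of the window
      if len(q) >= MAX_COMMENTS:
          return False           # over the limit: reject the comment
      q.append(now)
      return True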

cafard•3d ago
The one comment is either a splendid illustration or a great piece of sarcasm.
amelius•1d ago
True. However I'm sure an LLM would be able to filter that one out without problems. /s
rsfern•1d ago
In the case that it’s a human posting sarcastically, wouldn’t that be a false positive?
Smaug123•1d ago
Depends what you're trying to identify. If you're trying to identify "machine-generated", yes. If you're trying to identify "spam", probably not? Spam posted sarcastically is no more something I'd want in my comments section than spam posted by a bot.
johnea•1d ago
So, you don't think humans should express sarcasm?

Sounds more like censorship than moderation to me...

dwaltrip•1d ago
Just saying "it's sarcasm" doesn't tell you much. It depends on the kind of sarcasm as well as the content.

You can moderate comments on your own blog however you'd like.

johnea•1d ago
> You can moderate comments on your own blog however you'd like.

Thank you so much for that permission.

(this is an example of sarcasm, it's used as a form of criticism to express disagreement by saying the exact opposite of what is actually meant. Currently, this could serve as a test for human text, because the LLM slop that I read is typically asskissingly obsequious, whereas humans are often not that friendly to each other)

atan2•1d ago
"Unavailable Due to the UK Online Safety Act"
auggierose•1d ago
Use a VPN.
tonyedgecombe•1d ago
Or just don't bother.
gawa•1d ago
The author wrote another blog post, "Geoblocking the UK with Debian & Nginx" [0]. It's a short tutorial that does exactly what the title says, so it looks like the author applied this configuration and intentionally wants to geoblock the UK, for compliance reasons or maybe as a statement. The blog post links to https://geoblockthe.uk

[0] https://aphyr.com/posts/379-geoblocking-the-uk-with-debian-n...
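
Purely for flavor, a sketch of the same idea as WSGI middleware rather than Nginx config; country_of is a stand-in for a real GeoIP lookup (e.g. a MaxMind database) and is assumed, not shown:

  def country_of(ip):
      """Stand-in for a real GeoIP lookup; not implemented here."""
      raise NotImplementedError

  def geoblock_uk(app):
      """WSGI middleware returning 451 for requests resolving to GB."""
      def wrapped(environ, start_response):
          if country_of(environ.get("REMOTE_ADDR", "")) == "GB":
              start_response("451 Unavailable For Legal Reasons",
                             [("Content-Type", "text/plain")])
              return [b"Unavailable due to the UK Online Safety Act.\n"]
          return app(environ, start_response)
      return wrapped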

masfuerte•1d ago
https://archive.ph/wfcyv
zloduka•1d ago
https://archive.ph/Vyasv
nickpeterson•1d ago
Comments have always been "bullshitting", and LLMs are a tool to help bullshitters quickly generate additional bullshit.

LLMs are going to reduce the value of bullshit. Look at how they're already decimating the marketing industry!

I just bullshitted those last couple sentences though.

safety1st•1d ago
I really don't envy anyone who has to moderate anything at the moment.

But yeah. The vast majority of user generated content on the big platforms was already very loosely moderated, and was already mostly trash.

The platforms are just going to keep on doing what they always do, which is optimize for engagement. It's not the crappy AI comments I'm worried about, it's the good ones. These things will become much better than humans at generating clickbait, outrage, and generally chewing up people's time and sending their amygdalas into overdrive.

I think we're going to keep getting more of what we have now, only more optimized, and therefore worse for us. As the AIs get good we will evolve an even more useless, ubiquitous, addictive, divisive, ad-revenue-driven attention economy. Unplugging your brain will be harder to do but even more worth doing. Probably most people still will not do it. Getting serious dystopia vibes over all this.

intended•1d ago
God it's bleak in trust and safety/content moderation/fact checking. And I'm not even talking about America - good luck to you lovely weirdos.

One of the answers to “how do we solve this mess” was “climate change”. (Dealing with depressing things does funny things to humans).

One report on cybersecurity (which had Bruce Schneier as an author) showed that LLMs make hitherto unprofitable phishing targets profitable.

There's even a case where an employee didn't follow their phishing training, clicked on a link, and ended up in a Zoom call with their team members, transferring a few million USD to another account. Except everyone on the call was faked.

This is the stuff on the fraud and cybercrime axis, never mind the mundane social media stuff. We're at the stage where kids are still posting basic GenAI output after prompting "I think vaccines are bad and need to warn people". They are going to learn FAST at masking this content. Hoo boy.

Dystopia vibes? It’s like looking into the abyss and seeing the abyss reach out to give you a hug.

safety1st•1d ago
Time to go full blown conspiracy theory mode

https://www.youtube.com/watch?v=-gGLvg0n-uY

1) Even telephone calls will become totally untrustworthy -->

2) Mandatory digital identity verification for all humans, at all times -->

3) Total control and censorship, the end of what we think of as the Internet today.

emmelaich•1d ago
See also https://aphyr.com/posts/387-the-future-of-customer-support-i... for more AI slop nonsense.
codr7•1d ago
No worries, this won't last long.

Once the algorithms predominantly feed on their own shit, the bazillion-dollar clown party is over.

akoboldfrying•1d ago
Even supposing the purported "model collapse" does occur, it doesn't destroy the LLMs we already have -- which are clearly already capable of fooling humans. I don't see the clown party being over, just reaching a stable equilibrium.
energy123•1d ago
Exactly. It logically can't occur, even by the flawed assumptions of the people who say this. Just freeze all training data at 2024 or keep existing models; the worst-case scenario is that the models plateau.
vanschelven•1d ago
This has been debunked (to me) here: https://simonwillison.net/2024/Dec/31/llms-in-2024/#syntheti...
roxolotl•1d ago
That seems to only say that synthetic data is a larger part of models today than in the past. The newer OpenAI models are known to hallucinate more. Claude 4 seems great, but not a multiplier better. Makes me think the effect of synthetic data is at best a net zero. Still has yet to really be seen though.
lucianbr•1d ago
Disagreeing with something is not debunking.
sorokod•1d ago
Debunked is a bit too strong. He quotes from the phi-4 report that it is easier for the LLM to digest synthetic data. A bit like feeding broiler chickens other dead chickens.

Maybe one day we will have organic LLMs guaranteed to be fed only human generated content.

akoboldfrying•1d ago
I think that, ultimately, systems that humans use to interact on the internet will have to ditch anonymity. If people can't cheaply and reliably distinguish human output from LLM output, and people care about only talking to humans, we will need to establish authenticity via other mechanisms. In practice that means PKI or web of trust (or variants/combinations), plus reputation systems.

Nobody wants this, because it's a pain, it hurts privacy (or easily can hurt it) and has other social negatives (cliques forming, people being fake to build their reputation, that episode of Black Mirror, etc.). Anonymity is useful like cash is useful. But if someone invents a machine that can print banknotes that fool 80% of people, eventually cash will go out of circulation.

I think the big question is: How much do most people actually care about distinguishing real and fake comments? It hurts moderators a lot, but most people (myself included) don't see this pain directly and are highly motivated by convenience.
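
To make the web-of-trust idea concrete, here is a toy sketch (my own illustration, not any particular system): treat trust as a graph of "A vouches for B" edges and accept an account only if it is reachable from someone you already trust within a few hops:

  from collections import deque

  # account -> set of accounts it personally vouches for as human
  vouches = {
      "alice": {"bob", "carol"},
      "bob":   {"dave"},
      "dave":  {"mallory_bot"},  # a bad vouch only poisons its own subtree
  }

  def trusted(me, target, max_hops=2):
      """Breadth-first search: is target within max_hops vouches of me?"""
      frontier, seen = deque([(me, 0)]), {me}
      while frontier:
          node, depth = frontier.popleft()
          if node == target:
              return True
          if depth == max_hops:
              continue
          for nxt in vouches.get(node, ()):
              if nxt not in seen:
                  seen.add(nxt)
                  frontier.append((nxt, depth + 1))
      return False

  print(trusted("alice", "dave"))         # True: alice -> bob -> dave
  print(trusted("alice", "mallory_bot"))  # False: beyond the hop cutoff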

techpineapple•1d ago
I kind of wonder whether I care if comments are from real people, and I probably don't, as long as they're thought-provoking. I actually thought it would be an interesting experiment to make my own walled-garden LLM link aggregator, sans all the rage bait.

I mean, I care that meetup.com has real people, and I care that my kids' school's Facebook group has real people, and other forums where there is an expectation of online/offline coordination, but Hacker News? Probably not.

nemomarx•1d ago
I feel like part of why comments here are thought provoking is because they're grounded in something? It's not quite coordination, but if someone talks about using software at a startup or small company I do assume they're genuine about that, which tells you more about something being practical in the real world.

And use cases like bringing up an issue on HN to get companies to reach out to you and fix it would probably be much harder with LLMs taking up the bandwidth.

techpineapple•1d ago
Yeah, this is the trick. For example, in the sort of private Hacker News I was talking about creating (I haven't created it yet), I sort of suspect that getting the comments to not sound canned would take a lot of prompt engineering, and I also suspect that even if an individual comment is good, the style over time would be jarring.

On the internet, maybe you have people using character.io, or other complex prompts to make the comments sound more diverse and personal. Who knows.

I wonder how many different characters you would need on a forum like hacker news to pass a sort of collective Turing test.

akoboldfrying•1d ago
Agree. Another complicating factor for detection is that I don't personally mind seeing a sliver of self-promotion in a comment/post if I feel it's "earned" by the post being on-topic and insightful overall. If such a comment was posted by an LLM, I think I would actually be fine with that.
johnea•1d ago
I could understand that position, except that I don't think most LLM-generated text is for the purpose of producing thought-provoking conversation.

My expectation would be that anyone going through the effort to put an LLM-generated comment bot online is doing it for some ulterior motive, typically profit or propaganda.

Given this, I would equate not caring about the provenance of the comment with not caring whether you're being intentionally misinformed for some deceptive purpose.

FeteCommuniste•1d ago
Well if I’m in a discussion I’d like to know whether the other participants are actual people or just slopmachines (“AIs”).
ChrisMarshallNY•1d ago
I have made it a point to be un-anonymous, for the last few years. If you look at my HN handle, it's easy to see who I am, and to look at my work.

This was not always the case. I used to be a Grade A asshole, and have a lot to atone for.

I also like to make as much of my work open, as I can.

teddyh•1d ago
You could have authenticated proofs of human-ness without providing your full identity. There are similar systems today which can prove your age without providing your full identity.
emporas•1d ago
We will ditch anonymity, but for pseudonymity, not eponymity. Meaning someone, somewhere will know who is who and can attest that 1000 usernames are humans, but people will be able to identify with just a username to everyone else, except that one person.

>In practice that means PKI or web of trust (or variants/combinations), plus reputation systems.

Yep, that is the way.

Also, LLMs will help us create new languages or dialects from existing languages, with the purpose of distinguishing the inner group of people from the outer group of people, and from the outer group of LLMs as well. We have been in a language arms race for that particular purpose for thousands of years. Now LLMs are one more reason for the arms race to continue.

If we focus, for example, on making new languages or dialects which sound better to the ear: LLMs have no ears, so it is always humans who will be one step ahead of the machine, provided that the language evolves non-stop. If it doesn't evolve all the time, LLMs will have time to catch up. Ears are some of the most advanced machinery in our bodies.

BTW, I am right now making a program which takes a book written in Ancient Greek and automatically creates an audiobook or videobook using Google's text-to-speech, the same voice as on the Google Translate website.

I think people in the future will be hooked on how new languages sound or can be sung.

aleph_minus_one•1d ago
> I think that, ultimately, systems that humans use to interact on the internet will have to ditch anonymity.

Relevant meme video (which watching is in my opinion worth your time):

  Raiden Warned About AI Censorship - MGS2 Codec Call (2023 Version)
  https://www.youtube.com/watch?v=-gGLvg0n-uY
intended•1d ago
You ditch anonymity, and you get a cascading chilling effect through the interwebs, because you cannot moderate communities against the political headwinds of your nations.

Worse, it won’t work. We are already able to create fake human accounts, and it’s not even a contest.

And with LLMs, I can do some truly nefarious shit. I could create articles about some discovery of an unknown tribe in the Amazon, populate some unmanned national Wikipedia version with news articles, and substantiate the credentials of a fake anthropologist, and use that identity to have a bot interact with people.

Heck I am bad at this, so someone is already doing something worse than what I can imagine.

Essentially, we can now cheaply create enough high-quality supporting evidence for proof of existence. We can spoof even proof-of-life photos, to the point that account-takeover resolution tickets can't be sure if the selfies are faked. <Holy shit, I just realized this. Will people have to physically go to Meta offices now to recover their accounts???>

Returning to moderation, communities online, and anonymity:

The reason moderation and misinformation have been the target of American Republican senators is that the janitorial task of reducing the spread of conspiracy theories touched the conduits carrying political power.

That threat to their narrative production and distribution capability has unleashed a global campaign to target moderation efforts and regulation.

Dumping anonymity requires us to basically jettison ye olde internet.

isaacremuant•1d ago
No we won't. Just build your web of trust and leave the rest of us anonymous and alone.

You're just doing the bidding of corporations who want to sell ID online systems for a more authoritarian world.

Those systems also use astroturfing. It was not invented with LLMs.

See my other comment https://news.ycombinator.com/item?id=44130743#44150878 for how this is "bleak" mostly if you were comfortable with your Overton window and censorship.

akoboldfrying•1d ago
> leave the rest of us anonymous and alone

No one is trying to take away your right to host or participate in anonymous discussions.

> Those systems also use astroturfing. It was not invented with LLMs.

No one is claiming that LLMs invented astroturfing, only that they have made it considerably more economical.

> You're just doing the bidding of corporations who want to sell ID online systems for a more authoritarian world.

Sure, man. Funny that I mentioned "web of trust" as a potential solution, a fully decentralised system designed by people unhappy with the centralised nature of PKI. I guess I must be working in deep cover for my corporate overlords, cunningly trying to throw you off the scent like that. But you got me!

If you want to continue drinking from a stream that's been becoming increasingly polluted since November 2022, you're welcome to do so. Many other people don't consider this an appealing tradeoff and social systems used by those people are likely to adjust accordingly.

isaacremuant•1d ago
> Sure, man. Funny that I mentioned "web of trust" as a potential solution, a fully decentralised system designed by people unhappy with the centralised nature of PKI. I guess I must be working in deep cover for my corporate overlords, cunningly trying to throw you off the scent like that. But you got me!

I'm sorry man, I can't trust anything you say unless you post your full name and address. I can also throw some useless strawman quip to distract the conversation.

No one is forcing you to stay up at night or worry about this, so don't.

> If you want to continue drinking from a stream that's been becoming increasingly polluted since November 2022, you're welcome to do so. Many other people don't consider this an appealing tradeoff and social systems used by those people are likely to adjust accordingly.

Lol. The naivety of people like you, throwing out these cute dates to give a semblance of critical reading, is hilarious. Not that it helps you, since you immediately want authoritarian solutions and any challenge is met with a strawman.

But hey, give us more "sarcasm".

I'll post your comment because it's worth reading in full and going back to your "are you crazy? No one is saying X" fallback.

> I think that, ultimately, systems that humans use to interact on the internet will have to ditch anonymity. If people can't cheaply and reliably distinguish human output from LLM output, and people care about only talking to humans, we will need to establish authenticity via other mechanisms. In practice that means PKI or web of trust (or variants/combinations), plus reputation systems.

> Nobody wants this, because it's a pain, it hurts privacy (or easily can hurt it) and has other social negatives (cliques forming, people being fake to build their reputation, that episode of Black Mirror, etc.). Anonymity is useful like cash is useful. But if someone invents a machine that can print banknotes that fool 80% of people, eventually cash will go out of circulation.

> I think the big question is: How much do most people actually care about distinguishing real and fake comments? It hurts moderators a lot, but most people (myself included) don't see this pain directly and are highly motivated by convenience.

Lol.

akoboldfrying•23h ago
I honestly don't know what you think quoting my entire original post achieves, other than making it clear that I did in fact mention "web of trust", thus undermining your claim that I'm a stooge for big tech.

I wouldn't let that bother you, though. Life must be exciting when you know that everyone else is secretly hellbent on authoritarianism.

isaacremuant•5h ago
> thus undermining your claim that I'm a stooge for big tech.

Strawman. Your web-of-trust comment doesn't exonerate you from proposing to ban anonymity, doing the bidding of, not big tech, but opportunistic lobbying tech and governments.

Web of trust is not "good" if you try to impose it on others; it was a comment of "create your own thing with your own friends" instead of pushing your bullshit onto us.

> I wouldn't let that bother you, though. Life must be exciting when you know that everyone else is secretly hellbent on authoritarianism.

I mean, if you were capable of actual arguments instead of just strawmen... your life would be exciting too. As it is, you just parrot narratives.

andrewaylett•1d ago
There is, as ever, an XKCD for this: https://xkcd.com/810/
Ygg2•1d ago
Except with a scam you can be as constructive and helpful as you like before deploying the scam.
davidclark•1d ago
We’re now finding that sounding helpful and constructive does not equal being helpful and constructive. I wonder what an updated comic would say.
ChrisMarshallNY•1d ago
Having links in comments has always been problematic.

For myself, I usually link to my own stuff; not because I am interested in promoting it, but as relevant backup/enhancement of what I am writing about. I think that a link to an article that goes into depth, or to a GitHub repo, is better than a rough (and lengthy) summary in a comment. It also gives others the opportunity to verify what I say. I like to stand behind my words.

I suspect that more than a few HN members have written karmabots, and also attackbots.

aspenmayer•1d ago
Next, I'm sure, you'll be telling me you're not a bot, Mr Marshall?

https://www.youtube.com/watch?v=4VrLQXR7mKU

Previously (6 months ago but didn't trend, perhaps due for a repost?):

https://news.ycombinator.com/item?id=42353508

ChrisMarshallNY•1d ago
Love it!

Thanks!

aspenmayer•1d ago
Glad you liked it, though I will mention for others that it does involve self-harm, as that may be relevant information. It is necessary to the story, though, and it did win an Academy Award, for what it's worth, which I think was probably deserved, though I didn't see any of the other also-rans.

Thanks for having the courage to post under your real name, also, as you mentioned in another thread of yours I was reading. It's been a growth experience for me as well.

ChrisMarshallNY•1d ago
It probably didn't trend, because it's in Dutch.

For myself, I have no issue with subtitled films, but a lot of my countrymen are not comfortable with them.

The main issue that I have with foreign (to me) films is that the cultural frame can be odd. That also happens with British and Australian stuff.

aspenmayer•1d ago
> The main issue that I have with foreign (to me) films is that the cultural frame can be odd. That also happens with British and Australian stuff.

For me, that is the magic of film, but I wonder if reality recedes as we approach, via biases, oversights, and the key design 'flaw as feature' of the camera, that it only captures what has already been framed.

AJ007•1d ago
I recall blogs from over 20 years ago with blatant comment spam, where the blog author would respond to the comment spam individually as if it came from real readers. Most didn't fall for that, but a few clearly didn't understand it.

I'm not sure LLMs deviate from a long-term trend of increasing volume of information production. It certainly does break the little bubble we had from the early 1990s until 2022/3, where you could figure out you were talking to a real human based on the sophistication of the conversation. That was nice, as was Usenet before the spammers.

There is a bigger question of identity here. I believe the mistake is to go down the path of photo ID, voice verification, video verification (all trivially bypassable now). Taking another step further with Altman's eyeball thing is another mistake, since a human can always be commandeered by a third party. In the long term, do we really care that the person we are talking to is real and not an AI model? Most of the conversations generated in the future will be AI. They may not care.

I think what actually matters more is some sort of larger history of identity and ownership, matching to whatever one wishes (I see no problem with multiple IDs, nicks, avatars). What does this identity represent? In a way, proof of work.

Now, when someone makes a comment somewhere, if it is just a throwaway spam account there is no value. Sure, the spammers can and will do all of the extra stuff to build a fake identity just to promote some bullshit product, but that already happens with real humans.

ChrisMarshallNY•1d ago
> That was nice, as was usenet before spammers.

Not so sure I'd call it "nice."

I am ashamed to say that I was one of the reasons that it wasn't so "nice."

monkeyelite•1d ago
Google discovered that the only way to ultimately resolve spam is to raise the cost of creating it.

For web spam this was HTTPS. For account spam it is phone-number 2FA. I think requiring a form of ID or a payment card is the next step.

wiether•1d ago
So are they going to allow only YT Premium subscribers to post comments?

Because if there's one place where Google didn't solve spam, it's YT's comments.

monkeyelite•1d ago
Maybe, but I mean in general for internet participation.
aleph_minus_one•1d ago
> Because if there's one place where Google didn't solve spam, it's on YT's comments

I do believe that this problem is very much self-inflicted (and perhaps even desired) by YouTube:

- The way comments on YouTube are structured and ordered makes it very hard to have deep discussions there.

- I think there is also a limit on comment length on YouTube, which again makes it hard to write longer, sophisticated arguments.

- Videos for which a lot of comments are generated tend to be promoted by YouTube's algorithm, so YouTubers encourage viewers to write lots of comments (and thus a lot of low-quality ones), i.e. YouTube incentivizes videos being "spammed" with comments. The correct solution would be to incentivize few but high-quality comments (i.e. de-incentivize comments that contribute nothing worth your time to read). That would make it much easier to detect and remove the (real) spam among them.

eastbound•1d ago
If you make people pay to comment, content farms will gladly pay.
monkeyelite•1d ago
Yes… but there will be less spam and it will be more intelligent because the creator must break even.
JimDabell•1d ago
This doesn't work in perpetuity. One of the reasons why spam is so persistent is that when you ban a spammer, they can just create a new identity and go again. If payment is required, then not only do they have to pay again every time they get banned, they also need a new payment card, because you aren't limited to banning their account; you can ban the payment mechanism they used.
intended•1d ago
This only works up to the point where further spam stops being profitable.

At some point, the cost you impose to dissuade spammers becomes a huge risk for the humans who make mistakes of any sort.

At that point users mutiny.

JimDabell•22h ago
Users don’t typically get banned repeatedly, and you probably want the ones that do to stay banned.
wslh•1d ago
Twitter, LinkedIn, and others are following the credit card and ID (KYC) route, but the issue remains when people start automating interactions. Not spam per se, but it creates a waste of time, since users cannot cope with the triggering of zillions of interactions that cannot be followed in human time.
safety1st•1d ago
Indeed it is https://youtu.be/-gGLvg0n-uY?feature=shared
intended•1d ago
This keeps me up at night too. I'd like to stake out the position that LLMs are antagonistic to the (beleaguered) idea of an internet.

LLMs increase the burden of effort on users to successfully share information with other humans.

LLMs are already close to indistinguishable from humans in chat; bots are already better at persuading humans [1]. This suggests that users who feel ineffective at conveying their ideas online are better served by having a bot do it for them.

All of this effectively puts a fitness function on online interactions, increasing the cognitive effort required for humans to interact or be heard. I don't see this playing out in a healthy manner. The only steady state I can envision is one where we assume that we ONLY talk to bots online.

Free speech and the market place of ideas, sees us bouncing ideas off of each other. Our way of refining our thoughts and forcing ourselves to test our ideas. This is the conversation that is meant to be the bedrock of democratic societies.

It does not envisage an environment where the exchange of ideas is with a bot.

Yes, yes, this is a sky-is-falling view; not everyone is going to fall off the deep end, and not everyone is going to use a bot.

In a funny way, LLMs will outcompete average forum critters and trolls for their ecological niches.

[1] (https://arxiv.org/pdf/2505.09662)

photonthug•1d ago
> increasing the cognitive effort required for humans to interact or be heard. I don't see this playing out in a healthy manner

We are at the stage where it's still mostly online, but the first ways this will leak into the real world in a big way are easy to guess: job applications, college applications, loan applications, litigation. The small percentage of people who are savvy, naturally inclined towards being spammy, and able to afford any relevant fees will soon be responsible for over 80 percent of all traffic, not only drowning out others but also overwhelming services completely.

Fees will increase, then the institutions involved will use more AI to combat AI submissions, etc. Schools/banks/employers will also attempt to respond by networking, so that no one looks at applicants directly any more; they just reject if some other institution rejected. Other publishing, from calls for scientific papers to poetry submissions, progresses the same way under the same pressures, but "flooded with junk" isn't such a new problem there and the stakes are a bit lower.

cmrdporcupine•1d ago
I've got a spouse who works in marketing/communications and has spent the weekend working after hours, moderating comments on posts about Pride events, and I was musing with her about this: the days of comments being a thing at all are numbered. As a means of getting engagement, they increasingly attract the wrong kind, not just because of generative AI automation, but because being an asshole is now considered virtuous by many of our highest leaders, and the masses are following.

What's the point in even having comments sections? The CBC here in Canada shut theirs off years ago and frankly the world is better for it. News articles are a swamp of garbage comments, generally.

The future of social engagement online is to go back to smaller, registration-required, moderated forums.

washmyelbows•1d ago
There are a lot of people downplaying the importance of genuine online comments, but the reality is that (outside of the bubbles lived in by many HN users) millions upon millions of people are meaningfully participating and forming their viewpoints based on them.

I suspect even the "well, I never trust online comments!" crowd here is not as immune to propaganda as they'd like to believe.

hibikir•1d ago
Since detecting LLMs is a silly end goal, the future of moderation probably needs LLMs too, but to evaluate text and see whether it amounts to blatant commercial speech. It will ruin places where some kinds of commercial speech are wanted (say, asking for a recommendation on Reddit). Still, the mindless recommendations of crypto rugpulls and other similar scams will go away.

I am more concerned about voice-alignment efforts, like someone creating 10k real-ish accounts over time that appear to contribute but exist to abuse upvote features and change perception. Ultimately, figuring out what is a real measure of popularity, and what is just a campaign to, say, send people to your play, is going to get even harder than it is now.
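
One hedged illustration of how such a voting ring might be spotted (a standard overlap heuristic, not anything a particular site is known to use): flag account pairs whose upvote histories are far more similar than chance, here via Jaccard similarity:

  from itertools import combinations

  # account -> set of item ids the account upvoted
  votes = {
      "u1": {1, 2, 3, 4, 5},
      "u2": {1, 2, 3, 4, 6},   # suspiciously similar to u1
      "u3": {7, 8, 9},
  }

  def suspicious_pairs(votes, threshold=0.6):
      """Yield account pairs whose upvote sets overlap beyond threshold."""
      for u, v in combinations(votes, 2):
          sim = len(votes[u] & votes[v]) / len(votes[u] | votes[v])
          if sim >= threshold:
              yield u, v, sim

  for u, v, sim in suspicious_pairs(votes):
      print(f"{u} and {v} overlap {sim:.0%}")   # u1 and u2 overlap 67%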

aleph_minus_one•1d ago
> It will ruin places where some kinds of commercial speech is wanted (say, asking for a recommendation on reddit).

There is also a dependence on the culture. For example, what in the USA would be considered a "recommendation" (such as on Reddit) would often be considered "insanely pushy advertising" in Germany.

With this in mind, wouldn't a partial solution also be to become less tolerant of such pushy advertising in those places (say, on Reddit), even when it is done by honest users?

aspenmayer•1d ago
When it's obvious that entire posts and users are fake, and knowing that product pages on Amazon (which are also sometimes fake) can change what product they list for sale, and since it is known that upvotes/likes/shares are openly for sale, is it really such a stretch to assume that all "recommendations" are as fake as the original question also likely is, until we have evidence to the contrary?
d6e•1d ago
What if we charged a small toll for comments? We create a web standard where you can precharge an amount to your browser account, then you get charged $0.02 for making a comment. The price could be progressively raised until the spammers stop. The profit could pay for website hosting. This would be affordable for users but prohibitively expensive for spammers.
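
A sketch of those mechanics; the $0.02 toll and the progressive raising come from the comment above, while the names and thresholds are made up for illustration:

  balances_cents = {"alice": 100}   # precharged browser accounts
  toll_cents = 2                    # current price per comment
  feed = []

  def post_comment(user, text):
      """Charge the toll before accepting the comment; reject if broke."""
      if balances_cents.get(user, 0) < toll_cents:
          return False              # no funds: comment rejected
      balances_cents[user] -= toll_cents
      feed.append(text)
      return True

  def raise_toll(spam_fraction):
      """The "progressively raised" price knob from the comment."""
      global toll_cents
      if spam_fraction > 0.01:      # more than 1% of comments are spam
          toll_cents *= 2
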
philjackson•1d ago
I seem to remember MS having this idea for email many years ago.
sedev•1d ago
https://craphound.com/spamsolutions.txt
ThrowawayR2•1d ago
The problem originates from LLM services, so the toll needs to be on LLM usage, in a way that doesn't harm legitimate users but makes bulk abuse unprofitable.
Philpax•1d ago
Amused that the third comment is the Tirreno guy continuing to spam his project [0]. Good ol' human spam will never go out of style!

[0]: https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu...

reconnecting•5h ago

        ^~~ tirreno guy is here
Thank you for mentioning tirreno.

Spam is one of the use cases for tirreno. I'm not sure why you'd call this spamming, as the tool is relevant to the problem.

photonthug•1d ago
Doesn’t it seem like LLMs can assist with moderation rather than making it harder?

I'm not sure exactly why we are still waiting for the obviously possible ad-hominem and sunk-cost-fallacy detectors, etc. For the first time we now have the ability to actually build a threaded comment system that (tries to) insist on rational and on-topic discussion. Maybe part of it is that we haven't actually made the leap yet to wanting to censor non-contributing but still-human "contributors" in addition to spam. I guess shitposting is still part of the "real" attention economy and important for engagement.

The apparently on-topic but subtly wrong stuff is certainly annoying, and in the case of vaguely relevant and not obviously commercial misinformation or misapprehension, I'm not sure how to tell humans from bots. But OTOH you wouldn't actually need that level of sophistication to clean up the cesspool of most YouTube or Twitter threads.
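
A sketch of what such an LLM-assisted filter might look like, purely illustrative: complete() stands in for whatever model API is available (assumed, not a real client), and the label set is just one possible choice:

  LABELS = ("ok", "spam", "ad-hominem", "off-topic")

  def complete(prompt):
      """Stand-in for a call to whatever LLM is available; assumed."""
      raise NotImplementedError

  def classify_comment(comment):
      """Ask the model to pick exactly one moderation label."""
      prompt = (
          "Label the following forum comment with exactly one of: "
          + ", ".join(LABELS) + ". Reply with the label only.\n\n"
          + "Comment: " + comment
      )
      answer = complete(prompt).strip().lower()
      return answer if answer in LABELS else "ok"   # fail open on junk output

  def should_hold_for_review(comment):
      """True = hold for a human moderator, False = publish."""
      return classify_comment(comment) != "ok"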

bgwalter•1d ago
That would presume that the moderation knows the truth, that a single truth even exists and that the moderation itself is unbiased.

It would also presume that an LLM knows the truth, which it does not. Even in technical and mathematical matters it fails.

I do not think an LLM can even accurately detect ad-hominem arguments. Is "you launched a scam coin scheme in the first days of your presidency and therefore I don't trust you on other issues" an ad-hominem or an application of probability theory?

photonthug•1d ago
Suppose you're right; any LLM can still label that as hostile or confrontational, implying that we at least now have the ability to try to filter threads on a simple axis like "arguing" vs "information" vs "anecdote", and on other dimensions much more sophisticated than classic sentiment analysis.

We might struggle to differentiate information from disinformation, sure, but the above-mentioned new superpowers are still kind of remarkable, and easily accessible. And yet that "information only please" button is still missing and we are smashing simple up/down votes like cavemen.

Actually, when you think about even classic sentiment analysis capabilities, it really shows how monstrous and insidious algorithmic feeds are: most platforms just don't want to surrender any control to users at all, even when we have the technology.

ThrowawayR2•1d ago
> "Doesn’t it seem like LLMs can assist with moderation rather than making it harder?"

The moderators will need to pay for LLM service to solve a problem created by malicious actors who are paying for LLM service also? No wonder the LLM providers have sky-high valuations.

photonthug•1d ago
Compute providers are gonna get paid, yeah. We can hope that there's something asymmetric in the required expense for good guys vs bad guys, though. For example, "subtly embed an ad for X while you pretend to reply to Y" does seem like a harder problem that you need a cloud model for. TFA mentioned crypto blog spam, which could easily be detected with keywords and a local LLM, or no LLM at all.
intended•1d ago
There’s already a lightweight LLM tool for moderation that doesn’t take much to run.
intended•1d ago
Hey, this is part of my thesis and what I’m working towards figuring out.

People are already working on LLMs to assist with content moderation (COPE). Their model can apply a given policy (e.g. a harassment policy) to a piece of content and judge whether it matches the criteria. So the tooling will be made, one way or another.

My support for the thesis is also driven by how dark the prognostications are.

We won't be able to distinguish between humans and bots, or even facts, soon. The only things which will remain relatively stable are human wants/values and rules/norms.

Bots that encourage pro-social behavior, norms, and more are definitely needed, as exactly the natural survival tools we will need.

ivan_gammel•1d ago
The only way to solve this for decentralized messaging systems is a decentralized system for verifying identities, based on a chain of trust and the use of digital signatures by default. It must be a legal framework supported by technical means. For example, ID providers may be given the responsibility to confirm certain assumptions about their clients (is a real human, is an adult, etc.) while keeping their identity confidential. The government and the corporations will know only what the person allows the ID provider to disclose (unless there's a legal basis for more, like a court's decision to accept a lawsuit or a court order to identify a suspect or witness). The ID provider can issue an ID card that can be used as an authentication factor. As long as a real person can be confirmed behind the nickname or email address, the cost of abuse will be a permanent ban on a platform or on a network. Not many people will risk it. Natural candidates for ID providers would be notaries.
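
A toy version of that attestation flow, assuming the third-party cryptography package is available: the ID provider signs only the claims, so a platform can verify "real adult human" without ever learning a name:

  import json
  from cryptography.exceptions import InvalidSignature
  from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

  # The ID provider verified this person in the real world, once.
  provider_key = Ed25519PrivateKey.generate()
  provider_pub = provider_key.public_key()

  def issue_attestation(pseudonym):
      """Provider signs claims about a pseudonym; no name is included."""
      payload = json.dumps(
          {"sub": pseudonym, "human": True, "adult": True}).encode()
      return payload, provider_key.sign(payload)

  def platform_accepts(payload, signature):
      """Platform checks the signature; it learns only the claims."""
      try:
          provider_pub.verify(signature, payload)
      except InvalidSignature:
          return False
      return json.loads(payload).get("human", False)

  payload, sig = issue_attestation("user-7f3a")   # opaque id, not a name
  print(platform_accepts(payload, sig))           # True
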
vannevar•1d ago
Yes, I think we'll see the rise of ID-verified online communities. As long as all the other members of the community are also ID-verified, the risk of abuse (bullying, doxing, etc.) is minimized. This wouldn't stop someone from posting AI-generated content, but it would tend to suppress misinformation and spam, which are arguably the real issue. Would people complain about AI-generated content that is genuinely informative or thought-provoking?
intended•1d ago
Verification does not stop harassment or bullying.

It will not stop misinformation either.

Verification is expensive and hard, and currently completely spoofable. How will a Reddit community verify an ID? In person?

If Reddit itself verifies IDs, then nations across the world will start asking for those IDs and Reddit will have to furnish them.

ivan_gammel•1d ago
The key is "decentralized" and "chain of trust". An ID provider does the actual identification in person first, maybe collects some biometrics. An online community trusts the ID provider and just asks the necessary questions. A foreign government may force this online community to provide only the data it owns, i.e. the "true" flag in the "verification_completed" column of the database, maybe a UUID of the person at the ID provider. How does it protect against harassment and bullying? It provides means to address them legally, because a court will be able to get the real identity of the criminal, and the platform can just ban the real person for life: no new registrations and nicknames. Initially this may result in a surge of moderation requests, but eventually there will be less and less as people learn the new rules.

As for misinformation, as long as all actors are known and are real people, they should be allowed to speak. It's not good to have a flood of fakes.

intended•1d ago
Digital IDs are always spoofable, and frankly it seems the only option now is to go for something like meeting someone in the physical world to verify who they are. This is the realm of banks and organizations that can coordinate that much manpower.

And even then, it doesn't stop harassment and bullying. We already know this from facebook, where people's IDs are known. Going for legal redress requires court time and resources to fight the case.

The core of the misinformation doom loop is when a popular misinformation narrative is picked up and amplified by well-known personalities. This is crucial in making it a harmful force in our politics.

So having known actors makes very little difference to misinformation gumming up our information markets.

vannevar•1d ago
>And even then, it doesn't stop harassment and bullying. We already know this from facebook, where people's IDs are known.

It depends on your privacy settings. If people you don't know can comment on your posts, then they're not really verified (ie, you never accepted a friend request from them). In FB communities limited only to friends, I suspect there is much less bullying or harassment. But that kind of community is hard to create on FB, by design.

>So having known actors makes very little difference to misinformations gumming up our information markets.

If a verified actor can be permanently banned from a platform, then of course that will reduce misinformation on that platform by systematically excluding the originators. That includes people who routinely spread misinformation they receive off-platform.

intended•1d ago
Much less bullying is a matter of degree. It implicitly acknowledges that harassment occurs without anonymity.

>If a verified actor can be permanently banned from a platform, then of course that will reduce misinformation on that platform by systematically excluding the originators.

Eh. Yes, in a simplified producer/consumer model of this. I'm personally all for removing people who are not engaging in good faith.

Thing is that misinformation is now firmly linked to political power.

Compared to facts? Misinfo is faster and cheaper to produce, yet perfectly suited to maximize engagement amongst the target audience. A key moment in that process is when a fringe narrative is picked up by a key player in the media ecosystem.

Removing such a node, is to take up arms against the political forces using misinformation to fuel their narratives.

Not saying it shouldn't be done if it can. Just that we need a better set of understandings and tools before we can make this case.

ivan_gammel•1d ago
As long as misinformation is produced by a real person, it is OK to have it. It is not the job of platforms to combat misinformation. If this becomes a real problem, then politicians aren't working hard enough and need to be replaced. The German political mainstream was too lazy; now we have the AfD in double digits, challenging for first place in elections. Time to replace that mainstream. Democrats and old-school Republicans in America were too ignorant and lazy; now America has Trump and needs a full reboot to avoid sliding into irrelevance. This is how things have always worked historically.
vannevar•1d ago
>Verification does not stop harassment or bullying.

>It will not stop misinformation either.

I'm open to any evidence that either statement is true. The rational argument that verification will reduce harassment, bullying, and misinformation is that the verified perpetrator can be permanently banished from the community for anti-social behavior, whereas an anonymous perpetrator can simply create a new account.

Do you have a rational counter-argument?

>If Reddit itself verifies IDs, then nations across the world will start asking for those IDs and Reddit will have to furnish them.

Every community will have to decide whether the benefits of anonymity outweigh the risks. On the whole, I think anonymity has been a net negative for online community, but I understand that others may disagree. They'll still be free to join anonymous communities. But I suspect that large-scale, verified communities will ultimately be the norm, because for everyday use people will prefer them. Obviously, they work better in countries with healthy, functional liberal democracies.

intended•1d ago
>Verification does not stop harassment or bullying.

I can say this from experience moderating, as well as from research. I'll take the easy case of real-world bullying first: people know their bullies there. It does not stop bullying. Attackers tend to target groups/individuals that cannot fight back.

Now, you asked for evidence that either statement was true, but then spoke about reducing harassment. These are not the same things. This 2013 paper studied incivility in anonymous and non-anonymous forums [1]. Incivility was lower where identities were exposed; however, this did not stop incivility.

The Australian eSafety Commissioner has this to say as well:

> However, it is important to note that preventing or limiting anonymity and identity shielding online would not put a stop to all online abuse, and that online abuse and hate speech are not always committed by anonymous or fake account holders. [2]

Now to bring GenAI into the mix: the cost of spoofing a selfie has gone down quite a bit, if it hasn't already become very cheap. Verification of ID will require being able to manually inspect an individual, which means the costs of verification are VERY labor-intensive. India has a biometric ID program, and we are talking about efforts on that scale. And even then, it doesn't stop false IDs from being created.

Combining these various points: ditching anonymity would necessitate a large effort to verify all users, killing off the ability for people to connect on anonymous forums (LGBTQ communities, for example), all for some reduction in harassment.

This also assumes that people rigorously check your ID when it's being used, because if there is any gap or loophole, it will be used to create fake IDs to spam, harass, or target people.

[1] https://www.researchgate.net/publication/263729295_Virtuous_...

[2] https://www.esafety.gov.au/industry/tech-trends-and-challeng...

> On the whole, I think anonymity has been a net negative for online community, but I understand that others may disagree.

I would like to agree with you, but having moderated content myself - people do not give a shit and will say whatever they want, because they damned well want you to know it.

Take misinformation: I used to think the volume of misinformation was the issue. It turns out that misinformation amplification is driven more by partisan or momentary political needs than by our improved ability to churn out quantities of it.

ivan_gammel•1d ago
Verification of identity has to be in person, and it can be reliable and secure in general. Many countries in the world have a process and infrastructure for that; they mainly need to open a verification API to third parties: BundID in Germany, GosUslugi in Russia, Diia in Ukraine (built with support from USAID!), etc.

That said, anonymity is not a necessary condition of a safe environment. Pseudonymity with sufficient protections against disclosure will work just fine. If a platform only knows that there's a real person behind a nickname, and it can reliably hold that person accountable, that is enough. They don't need a name, just some identifier from the identity provider.

As for misinformation, it is not a moderation issue and should not be solved by platforms. You cannot and should not suppress political messages; they will find their way. It's a matter of education, political systems, and counter-propaganda. The less efficient the former are, the more efficient propaganda is in general.

vannevar•1d ago
>I'll take the easy case of real world bullying first - people know their bullies here. It does not stop bullying. Attackers tend to target groups/individuals that cannot fight back.

But in an online forum where the bully is known and can be banned/blocked permanently, everyone can fight back.

>Now you asked for evidence that either statement was true, but then spoke about reducing harassment. These are not the same things.

Of course there will continue to be harassment on the margins, where people could reasonably disagree about whether it's harassment. But even in those cases, the victims can easily and permanently block any interaction with the harasser. Which removes the gratification that such bad actors seek.

>Incivility was lower in the case where identities were exposed, however this did not stop incivility.

I think we're getting hung up on what "stop" means in this context. If I have 100 incidents of incivility per day before verification and only 20/day after, then I've stopped 80 cases/day. Have I stopped all incivility? No, but that was not the intent of my statement. I think it will drastically reduce bullying and misinformation, but there will always be people who come into the new forum and push the envelope. They won't be able to accumulate, though, as they are rapidly blocked and eventually banned. The vast majority of misinformation and bullying comes from a small number of repeat offenders. Verification prevents the repetition.

Have you moderated in a verified context, where a banned actor cannot simply create a new account? I feel like there are very few such platforms currently, because as you point out, it's expensive, and so for-profit social media prefers anonymity. But if we're all spending a significant part of our lives online, and using these platforms as a source of critical information, it's worth it.

One context where everyone is verified is in a typical business: your fellow employees, once fired, cannot simply create another company email account and start over. Bad apples who behave anti-socially are weeded out, and people generally behave civilly to each other. So clearly such a system can and does work; most of us see it on a daily basis.

intended•21h ago
>But in an online forum where the bully is known and can be banned/blocked permanently, everyone can fight back.

Firstly, please acknowledge that knowing the identity of the attacker doesn't stop bullying. Ignoring or papering over that deprives arguments of the supports required to be useful in the real world.

There is a reason I pointed out that it doesn't stop harassment: it disproves the contention that anonymity is the causal force behind harassment.

The argument that removing anonymity reduces harassment is supported, but it results in other issues. In a fully de-anonymized national social media platform, people will target minorities, immigrants, and other nations, i.e. whatever the acceptable jingoism and majority viewpoint is. Banning such conversation will put the mods in the crosshairs.

And yes, if it reduced harassment by 80%, that would be something. However the gains are lower (from that paper, it seemed like a 12% difference).

——-

I am taking great pains to separate out misinfo from bullying / harassment.

For misinformation, the first section, about minute 3 to minute 4, where Rob Faris speaks, does a better job of articulating the modern mechanics : https://www.youtube.com/watch?v=VGTmuHeFdAo

The larger issue for misinformation, is that it has utility for certain political groups and users today. It allows them the ability to create narratives and political speech faster and more effectively.

Making a more nuanced point would require elaborating on my personal views on market capture for ideas. The shortest point I can make about misinformation, journalism, and moderation is this:

Factual accuracy is expensive and an uncompetitive product when you are competing in a market that is about engagement. Right now, I don't see journalism, science, and policy (slow, fact- and process-limited systems) competing with misinformation.

Solving the misinformation problem will require figuring out how to create a fair fight / fair competition, between facts and misinformation.

Since misinformation purveyors can argue they have freedom of speech, and since they are growing increasingly enmeshed with political power structures, simple moves like banning are shifting risk to moderators and platforms - all of whom have a desire to keep living their lives without having to be harassed.

For what it's worth, I would have argued the same thing as you until a few scant months ago. The most interesting article I read showed that the amount of misinformation consumed is a stable percentage of total content consumed, indicating that while the supply and production capacity of misinformation may increase, the demand is limited. This, coupled with the variety of ways misinformation can be presented and the ineffectiveness of fact-checkers at stopping uptake, forced a rethinking of how to effectively address all that is going on.

——-

I don't have information on how behavior differs in a verified context. I have some inkling of having seen this at some point, and of eventually being convinced it was not a solution. I'll have to see if I end up finding something.

I think one of the issues is that verification is onerous, and results in a situation where you can lose your ID and then face all the real-world challenges that come with that, while losing the benefits that come from being online. There's a chilling effect on speech in both directions. Anonymity was pretty critical to me being able to even learn enough to make the arguments I am making, or to converse with people here.

If there's a TL;DR to my position, it's that the ills we are talking about are symptoms of dysfunction in how our ecosystem behaves, so these solutions will only shift the method by which they are expressed. I would agree that it's a question of tradeoffs; my question is what we are getting for the ground we are conceding.

wslh•1d ago
The internet sometimes feels like living in a holographic world, as in "The Invention of Morel" [1].

A recent anecdote: an acquaintance of mine automated parts of his LinkedIn activity. After I liked one of his posts, I received an automatic message asking how I was doing. I recognized that the message didn't match his personal tone, but I replied anyway to catch up. He never responded, highlighting how people automate the engagement process but can't really keep up with the manual follow-through.

[1] https://en.wikipedia.org/wiki/The_Invention_of_Morel

isaacremuant•1d ago
I'm not that worried, because most content and content moderation became heavily homogenized through censorship and political manipulation, to the point that a sizeable number of conversations and posts provided very little space for "breakthrough value" or "original value". Of course, if you're inside the Overton window you're only now concerned, but if you weren't, you're actually excited to see the disruption.

I do recognize that the capability of bots to hurt in many spaces and impose costs is a real thing to contend with, but the paradigm shift is fascinating. Suddenly people need to question authority (LLM output). Awesome. You should've been doing that all along.

duxup•1d ago
On Reddit I'm seeing a ton of what looks like LLM-generated engagement or karma spam.

It will be a story or question with just enough hints of personal drama and non-specifics to engage the community. The stories always seem like a mishmash of past popular posts.

They’re usually posted by brand new accounts that rarely if ever post a comment.

Some subs seem relatively free of them, others inundated with them.

jauntywundrkind•1d ago
In Peter Watts' Maelstrom (2002), it's ultimately self-replicating code that pushes the internet from a brutal, rough, and competitive infoscape into something worse and even more rawly aggressive. But the book and its tattered wasteland of the internet still has such tone-setting power for me; it set up an image of an internet after humans, where the competing forces of exploitation have degraded and degraded and degraded the situation, pushing humans out.

Recently revisited on Peter's blog: https://www.rifters.com/crawl/?p=11220

lawrenceyan•1d ago
This reminds me of the first time someone recommended Mr. Beast’s Feastables™ milk chocolate bars as a comment on one of my posts.

I ended up going to my local Walmart to try one, and boy was it delicious! Sometimes things work out in life.