frontpage.

vm.overcommit_memory=2 is always the right setting

https://ariadne.space/2025/12/16/vmovercommitmemory-is-always-the-right.html
1•todsacerdoti•6m ago•0 comments

Flow – A Programmer's Text Editor

https://flow-control.dev/
1•css_apologist•7m ago•0 comments

Show HN: GPT Image 1.5 – An AI image editor with conversational editing

https://gptimage15.app
1•jackson_mile•9m ago•0 comments

California threatens to ban Tesla sales for 30 days

https://www.sfchronicle.com/california/article/tesla-autopilot-claims-possible-sales-ban-21246659...
1•dangle1•11m ago•0 comments

Get Food with Your Colleagues

https://lcmchris.github.io/posts/get_food_with_your_colleagues
1•lcmchris•12m ago•0 comments

Canada launches its own quantum research program

https://betakit.com/canada-launches-it-own-quantum-research-program-to-rival-darpa-initiative/
2•gangtao•14m ago•0 comments

Ask HN: What happens when a new user's submission disappears?

1•ursAxZA•15m ago•1 comment

Show HN: Learn Japanese contextually while browsing

https://lingoku.ai/learn-japanese
3•englishcat•18m ago•0 comments

New MI6 chief: Tech bosses are becoming as powerful as nations

https://www.thetimes.com/uk/defence/article/new-mi6-chief-blaise-metrew-russia-speech-bqlvlx5hq
3•voxadam•19m ago•1 comment

Detecting hidden market regimes beyond correlation (empirical results)

https://github.com/johnoliveiradev/Multiscale-structural-regime-benchmark/tree/main/results/BTC%2...
1•johnoliveiradev•21m ago•1 comment

A universal law could explain how large trades change stock prices

https://phys.org/news/2025-12-universal-law-large-stock-prices.html
1•pseudolus•23m ago•0 comments

An Interview with a YouTube Writer Behind 500M+ Views

https://www.humaninvariant.com/blog/youtube-interview
2•gwintrob•27m ago•0 comments

LLM Pricing Calculator

https://app.hatrio.ai/free/llm-pricing-calculator
1•DinakarS•28m ago•0 comments

Time Team Map of Episodes (2021)

https://deparkes.co.uk/2021/04/16/time-team-map-of-episodes/
2•zeristor•30m ago•0 comments

The Jagged AI Frontier Is a Data Frontier

https://huggingface.co/spaces/lvwerra/jagged-data-frontier
1•in-silico•30m ago•0 comments

X updates terms, countersues to lay claim to the 'Twitter' trademark

https://techcrunch.com/2025/12/16/x-updates-its-terms-files-countersuit-to-lay-claim-to-the-twitt...
5•SanjayMehta•31m ago•1 comment

All printable snow-based triboelectric nanogenerator: Snow-TENG

https://www.sciencedirect.com/science/article/abs/pii/S2211285519302204
1•westurner•32m ago•0 comments

Synthetic key enzyme enables the conversion of CO2 into formic acid

https://phys.org/news/2025-12-synthetic-key-enzyme-enables-conversion.html
2•westurner•32m ago•0 comments

Hot for its bot, McKinsey may cut jobs

https://www.theregister.com/2025/12/16/mckinsey_may_cut_staff/
2•OptionOfT•34m ago•2 comments

The Longest Suicide Note in American History

https://www.theatlantic.com/ideas/2025/12/national-security-strategy-democracy/685270/
5•petethomas•40m ago•0 comments

Prototypes Are the New PRDs

https://www.figma.com/blog/prototypes-are-the-new-prds/
2•gmays•40m ago•0 comments

Windows 11 will ask consent before sharing personal files with AI after outrage

https://www.windowslatest.com/2025/12/17/microsoft-confirms-windows-11-will-ask-for-consent-befor...
13•jinxmeta•41m ago•5 comments

Racks of AI chips are too damn heavy

https://www.theverge.com/ai-artificial-intelligence/844966/heavy-ai-data-center-buildout
2•jnord•43m ago•0 comments

Understanding Email Encryption

https://www.fastmail.com/blog/email-encryption/
3•nmjenkins•43m ago•0 comments

Commodore 64 Ultimate Review

https://www.ign.com/articles/commodore-64-ultimate-review
3•amichail•44m ago•1 comment

Shmøergh Moduleur: analog DIY-friendly modular synth

https://www.shmoergh.com/moduleur/
1•Philpax•46m ago•0 comments

Most Parked Domains Now Serving Malicious Content

https://krebsonsecurity.com/2025/12/most-parked-domains-now-serving-malicious-content/
2•jnord•48m ago•1 comment

WikiFlix shows us what Netflix would have been like 100 years ago

https://wikiflix.toolforge.org/#/
2•jnord•49m ago•0 comments

An open letter to Mozilla's new CEO: Firefox doesn't need AI

https://old.reddit.com/r/firefox/comments/1poe7kb/an_open_letter_to_mozillas_new_ceo_firefox_doesnt/
6•bpierre•50m ago•1 comment

The brawl over the Colorado River is about more than water

https://www.politico.com/news/2025/12/16/colorado-river-water-users-association-conference-00676796
1•bikenaga•54m ago•0 comments

Americans overestimate how many social media users post harmful content

https://academic.oup.com/pnasnexus/article/4/12/pgaf310/8377954?login=false
24•bikenaga•1h ago

Comments

bikenaga•1h ago
Abstract: "Americans can become more cynical about the state of society when they see harmful behavior online. Three studies of the American public (n = 1,090) revealed that they consistently and substantially overestimated how many social media users contribute to harmful behavior online. On average, they believed that 43% of all Reddit users have posted severely toxic comments and that 47% of all Facebook users have shared false news online. In reality, platform-level data shows that most of these forms of harmful content are produced by small but highly active groups of users (3–7%). This misperception was robust to different thresholds of harmful content classification. An experiment revealed that overestimating the proportion of social media users who post harmful content makes people feel more negative emotion, perceive the United States to be in greater moral decline, and cultivate distorted perceptions of what others want to see on social media. However, these effects can be mitigated through a targeted educational intervention that corrects this misperception. Together, our findings highlight a mechanism that helps explain how people's perceptions and interactions with social media may undermine social cohesion."
daveguy•1h ago
Ahhhh. So maybe it's the platforms and their algorithms promoting harmful content for attention that are to blame? And how many of the platforms even want to admit that the content they are pushing is "harmful"? Seems like two elephant-sized sources of error.
exceptione•1h ago
Open youtube in a fresh browser profile behind a vpn. More than 90% of the recommended videos in the sidebar are right-wing trash like covid conspiracies, nut-jobs spouting Kremlin nonsense, and alt-right shows.

Baseline is in the end anti-democracy and anti-truth. And Google is heavily pushing for that. The same for Twitter. They are not stupid, if they know you and they think they should push you in a more subtle way then they aren't going to bombard you with Tucker Carlson. Don't ever think the tech oligarchy is "neutral". Just a platform, yeah right.

bdangubic•1h ago
> Baseline is in the end anti-democracy and anti-truth. And Google is heavily pushing for that.

Google et al do not give a hoot about being “left” or “right” - they only care about profit. Zuck tattooed a rainbow flag while Biden was President and is currently a macho-man crusader. If Youtube could make money from videos about peace and prosperity, that’s what you’d see behind the VPN. Since no one watches that shit, you get Tucker.

freejazz•23m ago
> Zuck tattooed rainbow flag while Biden was President and is currently macho-man crusader. If

Funny how you say this but insist you're not the one being fooled right now!

expedition32•10m ago
Normal people are perhaps less inclined to make videos about chemtrails and lizards.

I was always intrigued by Twitter. After the novelty wears off, who the hell wants to spend hours every day tweeting?

Me1000•56m ago
The premise of this study is a bit misguided, imho. I have absolutely no idea how many people _post_ harmful content. But we have a lot of data that suggests a _lot_ of people consume harmful content.

Most users don't post much of anything at all on most social media platforms.

darth_avocado•44m ago
Isn’t that how news works? Sensational stuff sells, so you only see the extremes. Pretty much the same with social media.

Rage = engage

skybrian•31m ago
Saying it's "algorithms" trivializes the problem. Even on reasonable platforms, trolls often get more upvotes, reshares, and replies. The users are actively trying to promote the bad stuff as well as the good stuff.
tsunamifury•1h ago
Any basic nodal theory will help you understand it's not about how many post, it's about their reach and correlation with viewership across the overall graph.

"A few bad apples spoil the whole bunch" is illustrated to an extreme in any nodal graph or community.

So it's more about how much toxic content is pushed, not how much is produced. At the extreme, a node can be connected to 100% of the other nodes and be the only toxic node, yet still make the entire system toxic.
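The point about reach can be made concrete with a toy graph. The sketch below is a hypothetical illustration (not from the study): a star graph where a single toxic hub follows/is followed by everyone, so nearly 100% of users are exposed even though only 1% of accounts produce toxic content. The function name and graph construction are invented for this example.

```python
# Hypothetical sketch: exposure vs. production in a star graph.
# One toxic hub node is connected to every other node, so all
# non-toxic users have a toxic neighbor despite toxic accounts
# being 1% of the population.

def exposure_fraction(edges, toxic_nodes, all_nodes):
    """Fraction of non-toxic nodes with at least one toxic neighbor."""
    exposed = set()
    for a, b in edges:
        if a in toxic_nodes:
            exposed.add(b)
        if b in toxic_nodes:
            exposed.add(a)
    clean = all_nodes - toxic_nodes
    return len(exposed & clean) / len(clean)

nodes = set(range(100))
hub = 0                                     # the single toxic account
edges = [(hub, n) for n in nodes if n != hub]
print(exposure_fraction(edges, {hub}, nodes))  # prints 1.0
```

With 1 producer out of 100 accounts, exposure is still total, which is exactly the gap between "how many post" and "how much is pushed".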

bongodongobob•1h ago
Isn't this just saying they are bad at estimating? It's not like any of these people did any rigorous studies to come to their conclusion.
Apreche•1h ago
This is one of those studies that presents evidence confirming what many people already know. The majority of the bad content comes from a small number of very toxic and very active users (and bots). This creates the illusion that a large number of people overall are toxic, and only those who are in deep already recognize the truth.

It is also why moderation is so effective. You only have to ban a small number of bad actors to create a rather nice online space.

And of course, this is why for-profit platforms are loath to properly moderate. A social network that bans the trolls would be like a casino banning the whales or a bar banning the alcoholics.

didgetmaster•50m ago
It is very much like crime in general. The vast majority of crimes committed each year are by a tiny minority of people. Criminals often have a rap sheet as long as your arm, while a huge percentage of the population has never had a run-in with the law beyond a few traffic or parking tickets.

While crime is definitely a major problem, especially in big cities, it only takes a few news stories to convince some people that almost everyone is out to get them.

SchemaLoad•46m ago
One of the best things platforms started doing is showing an account's country of origin. Telegram started doing this this year, using the user's phone number country code, when someone cold DMs you. When I see a random DM from my country, I respond. When I see it's from Nigeria, Russia, USA, etc., I ignore it.

It's almost 100% effective at highlighting scammers and bots. IMO all social media should show a little flag next to usernames showing where the comment is coming from.

BoiledCabbage•5m ago
Yes, but as soon as scammers find their current methods ineffective they will swap to VPN and find a way to get "in country" phone numbers.

There is a fundamental problem with large scale anonymous (non-verified) online interaction. Particularly in a system where engagement is valued. Even verified isn't much better if it's large scale and you push for engagement.

There are always outliers in the world. In their community they are well known as outliers, and most communities don't have anyone that extreme.

Online every outlier is now your neighbor. And to others that "normalizes" outlier behaviors. It pushes everyone to the poles. Either encouraged by more extreme versions of people like them, or repelled by more extreme versions of people they oppose.

And that's before you get to the intentional propaganda.

cosmic_cheese•45m ago
Furthermore, this illustrates how that handful of trolls is eroding the mutual trust that makes modern civilization function. People start to get the impression that everybody is awful and act accordingly. If allowed to continue spiraling, the consequences will be dire.
RiverCrochet•43m ago
Any site with UGC should include posting frequency next to posters' names, each time they appear on a page. If a post is someone's 500th for that day, that provides a lot of valuable context.
cosmic_cheese•36m ago
Ratio of posts to replies, average message length, and average message energy (negative, combative, inflammatory, etc.) provide decent signal and would be nice to see too. Most trolls fall into distinct patterns across those.
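The signals suggested in these comments (posting frequency, post-to-reply ratio, message length) could be computed per account. The sketch below is purely hypothetical; the `UserStats` fields and every threshold are invented for illustration, not taken from any platform.

```python
# Hypothetical troll-signal heuristics over per-user counters.
# All field names and thresholds are invented for illustration.
from dataclasses import dataclass

@dataclass
class UserStats:
    posts_today: int     # messages in the last 24h
    posts: int           # original posts, all time
    replies: int         # replies to others, all time
    total_chars: int     # sum of message lengths
    message_count: int   # total messages

def troll_signals(u: UserStats) -> dict:
    avg_len = u.total_chars / max(u.message_count, 1)
    reply_ratio = u.replies / max(u.posts, 1)
    return {
        "high_frequency": u.posts_today > 100,  # e.g. someone's 500th post today
        "reply_heavy": reply_ratio > 10,        # mostly dunking on others
        "short_bursts": avg_len < 40,           # low-effort one-liners
    }

# An account with 500 posts today, almost all short replies:
print(troll_signals(UserStats(posts_today=500, posts=20, replies=480,
                              total_chars=9000, message_count=500)))
```

None of these is conclusive on its own; the comment's point is that displaying the raw numbers next to the username gives readers the context to judge for themselves.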
chiefalchemist•6m ago
I’m of the belief that HN would benefit from showing a user’s up-votes and down-votes, and perhaps even the posts they happened within. Also, limit down-votes per day, or at least make karma points pay for them. There is definitely an “uneven and subjective” distribution of down-votes, and it would be healthy to add some transparency.
thaumasiotes•32m ago
> And of course, this is why for-profit platforms are loathe to properly moderate. A social network that bans the trolls would be like a casino banning the whales or a bar banning the alcoholics.

How so? It's not like Facebook charges you to post there.

nemomarx•3m ago
Other users engaging with them drives views, I think is the idea. Without trolls to dunk on or general posts to get mad about, why go on Twitter?
kelseyfrog•16m ago
I've always wondered who these people are, like demographically.

We hold (or I do at least) certain stereotypes of what type of person they must be, but I'm sure I'm wrong and it'd be lovely to know how wrong I am.

quantified•13m ago
It doesn't take a lot of pee to spoil the soup.
ggm•59m ago
If more people were obligated to undergo KYC to get posting rights, fewer people would be able to credibly claim to be other than they are.

If more channels were subject to moderation, and moderators incurred penalty for their failure, channels would be significantly more circumspect in what they permitted to be said.

Free speech reductionists: Not interested.

JuniperMesos•51m ago
Man, I'm against existing American KYC laws in the context of transferring money. I certainly don't want to see them expanded to posting online.
RiverCrochet•33m ago
KYC should work both ways. If a social media network needs to know my real name and address, I should know the real name and address of everyone running the social media network.
ggm•30m ago
Seems fair.
AuthAuth•56m ago
Isn't it still an accurate perception of moral decline? Even if it's only 3% sharing misinfo and toxic posts, it's still 47% that is sharing them, commenting on them, and interacting positively with them. This gives the, in my opinion correct, perception that there is moral decline.
RiverCrochet•45m ago
You have to eliminate this counterpoint for the evidence to fully support your perception: people share stuff on social media because it's easier and actively encouraged.

Here's the counterpoint to that, though: people share stuff on social media not just because it's easy, but because of the egocentric idea that "if I like this, I matter to the world." The egocentrism (and your so-called moral decline) started way earlier than that; it goes back to the 1990s, when talk shows became the dominant morning and afternoon programming on TV. Modern social media is simply Jerry Springer on steroids.

cosmic_cheese•30m ago
It doesn't indicate moral decay at all. It just confirms what we already know about the human psyche being vulnerable to certain types of content and interactions. The species has always had a natural inclination towards gossip, drama, intrigue, outrage, etc.

It only looks like "decline" because we didn't used to give random people looking to exploit those weaknesses a stage, spotlight, and megaphone.

ok123456•44m ago
This is intentional: make people think there's nothing online except harmful content, and propose a regulatory solution, which creates a barrier to entry. It's "meta" trying to stop any insurgent network.
goalieca•35m ago
It’s also meta overstating the power of influence. Why would they do that? Because it’s good marketing for them to sell a story around how their services running ads can be used for highly effective mass influence.
JuniperMesos•43m ago
> When US-Americans go on social media, how many of their fellow citizens do they expect to post harmful content?

Just because an American citizen sees something posted on social media in English, it doesn't mean that a fellow American citizen posted it. There are many other major and minor Anglophone countries, and English is probably the most widely spoken second language in the history of humanity. Not to mention that even if someone does live in America, speak English, and post online, they are not necessarily a US citizen.

barfoure•34m ago
Turns out the kids are alright after all!
makeitdouble•20m ago
This study seems to be playing with what toxicity means.

Does the 43% cited at the top of the piece match the same criteria they use for digging deeper in the study?

Their specific definition of toxicity is in the supplementary material, and honestly I don't think it matches the spectrum of what people perceive as toxic in general:

> The study looked at how many of these Reddit accounts posted toxic comments. These were mostly comments containing insults, identity-based attacks, profanity, threats, or sexual harassment.

That's basically very direct, ad hominem comments. An example cited:

> DONT CUT AWAY FROM THE GAME YOU FUCKING FUCK FUCKS!

Also, why judge Reddit on toxicity but not fake news or any other social trait people care about? I'm not sure what the valuable takeaway from this study is: that only 3% of Reddit users will straight-up insult you?