Libraries are under-used. LLMs make this problem worse

https://makefizz.buzz/posts/libraries-llms
1•kmdupree•1m ago•0 comments

DocuSign sends Cease and Desist to free SaaS developer

https://twitter.com/AzianMike/status/1935671153076678983
1•Andrew_nenakhov•1m ago•0 comments

Show HN: A persistent volume slider for the Windows taskbar

https://github.com/swax/Tabavoco
1•swax•2m ago•0 comments

Playing Games on  Silicon

https://store.steampowered.com/curator/42335871-Playing-Games-on-%25EF%25A3%25BF-Silicon/
1•doener•3m ago•0 comments

Lessons from letting AI vibe code a landing page

https://martech.org/lessons-from-letting-ai-vibe-code-a-landing-page/
1•kmdupree•4m ago•0 comments

Wiki Radio: The thrilling sound of random Wikipedia

https://www.monkeon.co.uk/wikiradio/
2•if-curious•4m ago•0 comments

Butlerian Jihad

https://dune.fandom.com/wiki/Butlerian_Jihad
1•baal80spam•5m ago•0 comments

TFINER: Ramping Up Propulsion via Nuclear Decay

https://www.centauri-dreams.org/2025/06/20/tfiner-ramping-up-propulsion-via-nuclear-decay/
1•JPLeRouzic•8m ago•0 comments

Fort Rinella

https://en.wikipedia.org/wiki/Fort_Rinella
1•arbuge•9m ago•0 comments

Search for stocks using natural language

https://stonktracker.com
1•dchun•11m ago•1 comment

Show HN: Buildable – Project management through MCP

https://bldbl.dev
1•spacesh1psoda•13m ago•0 comments

The Definitive, Insane, Swimsuit-Bursting Story of the Steroid Olympics

https://www.wired.com/story/enhanced-games-freestyle-record-las-vegas-steroids/
1•Element_•13m ago•0 comments

Primal Studio: A publishing suite for Nostr, empowering creators and companies

https://studio.primal.net/
1•janandonly•15m ago•0 comments

Cardiovascular risk associated with the use of cannabis and cannabinoids

https://heart.bmj.com/content/early/2025/06/10/heartjnl-2024-325429
1•consumer451•17m ago•0 comments

U.S. Wealth Distribution (Including Billionaires)

https://joshworth.com/dev/wealthgap/
5•adg•19m ago•2 comments

Another Dumb Electrical Code Change Could Ban DIY EV Charger Installs

https://www.motortrend.com/news/nec-2026-diy-home-ev-charger-install-ban
1•josephcsible•19m ago•0 comments

Mike Lynch's superyacht Bayesian raised from seabed off Sicily

https://www.theguardian.com/world/2025/jun/20/mike-lynch-bayesian-superyacht-raised-from-seabed-sicily-salvage-operation
1•bookofjoe•19m ago•0 comments

Perplexity and FastAPI

1•slroger•19m ago•0 comments

Silicon Valley's 'Tiny Team' Era Is Here

https://www.bloomberg.com/news/articles/2025-06-20/ai-is-ushering-in-the-tiny-team-era-in-silicon-valley
1•petethomas•24m ago•0 comments

Accounts peddling child abuse content flood X hashtags as Thorn cuts ties

https://www.nbcnews.com/tech/tech-news/x-accounts-peddle-child-abuse-musk-material-thorn-cuts-ties-rcna212107
5•riffraff•29m ago•1 comment

Kazakhstan's Two-Step Nuclear Plan Reveals Delicate Diplomacy

https://oilprice.com/Alternative-Energy/Nuclear-Power/Kazakhstans-Two-Step-Nuclear-Plan-Reveals-Delicate-Diplomacy.html
2•Bluestein•32m ago•0 comments

BYD is testing solid-state batteries in its Seal sedan with ~1200 miles of range

https://electrek.co/2025/06/20/byd-tests-solid-state-batteries-seal-ev-with-1000-miles-range/
28•toomuchtodo•37m ago•13 comments

Magenta RealTime: An Open-Weights Live Music Model

https://magenta.tensorflow.org/magenta-realtime
1•iansimon•37m ago•0 comments

"Everyone complains about Datadog but no one leaves"

https://www.reddit.com/r/Observability/s/Ot0vw45vUx
3•djhope99•41m ago•2 comments

Svalboard: Datahand Lives

https://svalboard.com/
2•morganvenable•45m ago•1 comment

You're significantly more likely to die on your birthday. Here's why

https://www.sciencefocus.com/qanda/youre-more-likely-to-die-on-your-birthday
1•domofutu•48m ago•0 comments

Autodesk Fusion 360 powers lunar training facility FLEXHab

https://www.designboom.com/technology/autodesk-fusion-revit-esa-lunar-training-facility-flexhab-06-20-2025/
1•rbanffy•49m ago•0 comments

Practices that set great software architects apart

https://www.cerbos.dev/blog/best-practices-of-software-architecture
3•emreb•50m ago•0 comments

Tether Launches PearPass Password Manager After 16B Credentials Exposed

https://www.ainvest.com/news/tether-launches-pearpass-password-manager-16-billion-credentials-exposed-2506/
2•wslh•50m ago•0 comments

AllTrails launches AI route-making tool, worrying search-and-rescue members

https://www.nationalobserver.com/2025/06/17/news/alltrails-ai-tool-search-rescue-members
3•coloneltcb•50m ago•0 comments

Malicious AI swarms can threaten democracy

https://osf.io/preprints/osf/qm9yk_v2
96•anigbrowl•2h ago

Comments

TwoNineA•2h ago
Citizens United already killed democracy.
xp84•2h ago
Agreed. To me, it’s like the difference between semi-automatic weapons and automatic weapons. Yes, it’s easier to not have to wiggle your finger on the trigger, but semi-auto is more than enough to be very effective.

It’s very easy to create deceptive imagery and persuasion, and to astroturf. With or without AI assistance. All you need is a modest amount of money. And with unlimited money and zero accountability thanks to the ‘responsibility laundering’ made possible through PACs… facts no longer matter. Just money, to buy virality, to influence vibes.

hayst4ck•2h ago
No, this is low agency. It wasn't other people that killed democracy. It was our own lack of response to billionaires enshrining money as legitimate political power that killed democracy. Oligarchy and monarchy are the default. Citizens willing to pay the cost of challenging those in power are what is exceptional. It is citizens asserting their own power rather than submitting to unjust power that creates democracy. It is an insistence that law applies to the most rich that creates democracy.

Democracy requires maintenance and responsibility. You can't expect nice things without paying the maintenance cost, and unfortunately, if you challenge power, power answers and it will hurt. If nobody is willing to die for freedom, then everyone will die a slave.

Blaming others rather than looking within fundamentally accepts authoritarianism, it presumes and accepts that others have power over you and that you can do nothing but submit.

Nobody is challenging power. We only have our own selves to blame for our cowardice.

try_the_bass•1h ago
I disagree about what we failed to respond to, but fully agree that this is the fault of the population at large: if you refrain from putting forth your opinion, your opinion cannot be counted in the democratic process.

The problem with democracy, more generally, appears to be that the population is wildly susceptible to apathy and complacency, meaning we've reduced the voting set to only those who care enough to vote. This turns politics into a game of disagreements between the most extreme voices.

In my opinion, in order for a democracy to work, voting must be compulsory.

WillPostForFood•1h ago
And yet, in the 2020 and 2024 presidential elections, we had the highest percentage of the voting-eligible population turning out since WWII, and in two of the last three elections, the candidate who spent less won.

So at least at the presidential level, there is neither apathy, complacency, nor the ability to buy elections.

hayst4ck•1h ago
> the candidate who spent less won.

That is entirely unsubstantiated. There is no reasonable way to measure this and any measurements taken are inherently political.

I am amenable to the idea of that being true for official spending, but unless, for example, the Twitter purchase were tabulated, along with spending on our American "Pravda" (Truth Social, which was clearly influenced by https://en.wikipedia.org/wiki/Pravda), I would be extremely suspicious of those numbers.

> So at least at the presidential level, there is neither apathy, complacency, nor the ability to buy elections.

Again, I completely disagree, and so does Harvard Law Professor Lawrence Lessig, who argues that it is extremely hard to win a primary without fundraising, and fundraising is structurally an election where money counts as votes; therefore nearly all candidates who make it to the primary have already been filtered by those with money: https://www.youtube.com/watch?v=mw2z9lV3W1g

hayst4ck•1h ago
Democracy does not work without a culture of responsibility and we no longer have a culture of responsibility.

I am incredibly atheist, but what we are seeing is Christianity, a clear pillar of American culture, malfunctioning on a societal scale. What used to be the major cultural influence on this country has been weaponized for political purposes. There is a quote about how separation of church and state is to protect the state from the church... but now we are coming to understand that that separation exists also to protect the church from the state.

People are very susceptible to politicians lying to them, especially when it's a lie they want to hear or prefer to the truth. Compulsory voting does not address that at all. Education is a hedge on it, but education requires effort, openness, and resources. There is also a media ecosystem which acts as a sort of central nervous system for a country, which is how a country understands itself.

Culture and institutions (such as church/media/academia/police training, etc.) are the foundation of a society's operations, and government is largely a manifestation of prevailing culture. Authoritarian governments are a manifestation of a culture that promotes self-interest and lack of empathy, rather than one that promotes loving thy neighbor and treating others as you wish to be treated.

andai•2h ago
I think it's more "confirmed the patient had been dead for a little while."
andrewla•1h ago
I don't know what to tell you if you still believe that political contributions or spending are a determinative factor in politics rather than a trailing indicator (and a poor one at that).

Even in terms of corruption, this is by far the smallest concern and barely worth noting in the scheme of things. Besides the obvious revolving door for lobbying and legal firms, there is so much money at play in the ex post facto bribery industry, between speaking fees and bulk book sales and low-interest and forgiven loans, that Citizens United might as well be dust in the wind.

dsabanin•2h ago
I think it's pretty clear we're at this stage already.
lugu•2h ago
Is it time to move away from directly elected representatives toward topic-specific, randomly picked representatives? Would this help prevent operations to influence opinion?
SketchySeaBeast•1h ago
That feels like it'd turn most non-hot-topic decisions into noise, might as well flip a coin to determine policy.
Spooky23•1h ago
We did this already here in the United States.
navane•2h ago
What democracy lol, didn't need AI for that.
NitpickLawyer•2h ago
I'd argue the only new development is that now it's cheaper / easier to do. But the same concept has been used previously with human-augmented bot farms.
happytoexplain•2h ago
I'm often a little bewildered at why we so consistently label "cheaper/easier" as less significant than "new". "Cheaper/easier" is what creates consequences, not "new".
o_____________o•1h ago
> "Cheaper/easier" is what creates consequences, not "new".

Nuclear bomb?

TypingOutBugs•1h ago
Imagine if they were cheaper and easier to manufacture
spongebobstoes•1h ago
I think it remains a useful distinction. Framing this as an evolution ("cheaper") helps understand the problem space. For example, the motivations, capabilities and effectiveness of existing players.
SoftTalker•1h ago
Yes exactly. In the past we had privacy/anonymity in public because it simply wasn't feasible to follow everyone, everywhere, all the time. The technology did not exist, and while you could follow selected individuals around, that quickly broke down at numbers greater than "a few." Some regimes did it more than others (the old DDR/Stasi for example) but even they could only keep a close eye on targeted individuals, places, and events.

Now we have cameras on every major road and intersection, most places of business, most transportation facilities, and most public gathering places. We have facial recognition and license plate readers, and cheap storage that is easily searched and correlated. Almost all communications are logged if not recorded. Even the postal service is now imaging the outside of every envelope.

All because it's cheaper and easier.

rikafurude21•1h ago
New developments in AI are used here to push the usual "solutions" - more privacy invasion, more centralized control, for the sake of democracy. Disinformation campaigns aren't just used to influence elections, but that's like the most common theme. The reality is that humans need to be upgraded with a "mental antivirus". We're seeing the beginning of something like that with people being able to tell when text they're reading is LLM-generated. Everyone probably gets psyopped every time they start scrolling a social media feed - we just need to be aware of that.
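
For illustration, a deliberately naive sketch (Python, with a made-up phrase list) of the kind of surface-cue check readers informally apply when they sense LLM-generated text; this is nowhere near a reliable detector:

    import re

    # Deliberately naive "is this LLM-ish?" heuristic based on stock
    # phrases readers have learned to notice. The phrase list is
    # illustrative only; this is not a reliable classifier.
    TELLS = [
        r"\bdelve\b",
        r"\bas an ai language model\b",
        r"\bit'?s important to note\b",
        r"\bin today'?s fast-paced world\b",
    ]

    def llm_ish_score(text):
        """Count how many stock phrases appear (higher = more suspicious)."""
        lowered = text.lower()
        return sum(bool(re.search(p, lowered)) for p in TELLS)

    print(llm_ish_score("It's important to note that we must delve deeper."))  # 2
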
candiddevmike•1h ago
Some GenAI manufactured consent for the Iran war should do the trick
fasthands9•1h ago
There is an element that when it becomes cheaper/easier it cancels each other out. Spam became cheaper/easier but that just meant people were less trusting of random emails.

It probably is true that some populations are more vulnerable to AI produced propaganda/slop, but ultimately I have 50 more pressing AI concerns.

uniqueuid•2h ago
This paper builds on a series of pathways towards harm. Those are plausible in principle, but we still have frustratingly little evidence of the magnitude of such harms in the field.

To solve the question of whether or not these harms can/will actually materialize, we would need causal attribution, something that is really hard to do — in particular with all involved actors actively monitoring society and reacting to new research.

Personally, I think that transparency measures and tools that help civic society (and researchers) better understand what's going on are the most promising approach here.

alganet•1h ago
There's plenty we can do before any attribution is made.

LLMs hallucinate. They're weak and we can induce that behavior.

We don't do it because of peer pressure. Anyone doing it would sound insane.

It's like a depth charge, to make them surface as non-human.

I think it's doable, especially if they constantly monitor specific groups or people.

There are probably many other methods to draw out evidence without necessarily going all the way into attribution (which we definitely should!).

hayst4ck•2h ago
AI execs push the message of how dangerous and powerful AI is because who doesn't want to invest in the next most powerful thing?

But AI is not dangerous because of the potential for sentience, but because it makes the rich richer and the poor poorer. It gives those with resources more advantage. It's dangerous because it gives individuals the power to provide answers when people ask questions out of curiosity they don't know the answer to, which is when they are most able to be influenced.

ivape•1h ago
It's dangerous for psychological reasons. We already saw how the internet was able to organically form into bubbles that escaped into the real world and fully influenced how people identify and conduct themselves. AI now lets people subdivide into further groups based on entirely arbitrary criteria, which they feed into AI and AI feeds back their bubble. Think Manson, think cults, suicide pacts, the tidepod-eating cult, and of course, political parties.

The kids will at some point worship a prompt-engineered God (not the developer, the actual AI agent), and there will be nothing society will be able to do about it. Nobody verbalizes that Gen Z moves entirely like a cult; trend after trend is entirely cult-like behavior. The generation that is going to get raised by AI (Gen Z's kids) are going to be batshit crazy.

hansvm•1h ago
> The kids will at some point worship a prompt-engineered God

The way some people talk about LLM coding I don't know that we're far off.

frollogaston•1h ago
I'm on the older side of Gen Z, and I think the older generations move more like cults. And the younger ones are addicted to phones from a young age, since their Millennial parents thought it'd be a great idea, though that's finally starting to change.
guywithahat•1h ago
> because it makes the rich richer and the poor poorer

This is a terrible take. Whenever there is a massive technical shift it's the incumbents who struggle to adapt. We've already seen companies go from nothing to being worth tens of billions.

esafak•1h ago
The OP is likely referring to the technology's ability to replace certain workers, which naturally leaves them poorer. The net effect is one of increased wealth inequality, the minting of new AI entrepreneurs notwithstanding. Is the data on this out yet?
sgjohnson•1h ago
We survived the industrial revolution; we’ll survive AI.
throwaway09387•1h ago
> We survived the industrial revolution

Did we? I mean, I guess we're surviving it for now, but climate change data doesn't tell the most optimistic story.

cloverich•1h ago
TBF the worst outcome is famine and mass death of humans which... was somewhat the standard historically (low populations / intermittent famine). I have as much existential dread as you but that's also because I overvalue human life relative to what nature has historically thought about us.
croes•1h ago
Survivorship bias.

You always survive unless you don’t.

ben_w•1h ago
The industrial revolution was so good for those living through it, that they invented communism and recognisably modern policing.

And we got a few wars whose death toll exceeded the pre-industrial population of the UK. And one whose toll exceeds the current population of the UK.

vincentpants•1h ago
Isn't the jury still out on that statement? Pretty sure we've been experiencing the long-tail cultural decomposition caused by the effects the industrial revolution brought with it, save for a 20-year opportunity to change our trajectory as a species. I think getting out of this conundrum alive will require de-segmenting our cultural narrative around how we got here.

edit: typos/grammar abound.

willis936•1h ago
This isn't an industrial revolution moment; it's a printing press moment.
hayst4ck•59m ago
"We" is doing an awful lot of work. We may have survived the industrial revolution, but subjective individual experience varies widely.

Life would be quite different if we could subjectively experience ourselves as a whole.

delusional•1h ago
Yet it's somehow the same band of people at the helm of these "new" companies.
hn_throwaway_99•1h ago
Even if there are some new entrants who become remarkably successful, it still means that, for the vast majority of humans who aren't AI experts, the "spoils" will still accrue primarily to these "AI winners" while making it much, much, much harder for many, many, many more people to sell their labor. And many of the investors in these companies that are poised to make bank are the usual suspects.

And even then, many of the biggest winners of AI so far (Google, Microsoft, NVidia, etc.) are already some of the biggest companies on the planet.

bilbo0s•1h ago
Even the AI experts are the usual suspects.

The techies making money are the AI experts. The real AI experts. Not the TF/PyTorch monkeys.

There's actually a massive and underappreciated difference between the two groups of people. It goes to the heart of why so many AI efforts in industry fail, but AI labs keep making more and more advances. Out in industry, we mistake TensorFlow monkeys for AI experts. And it's not even close.

Worse, you look at market price to get some of the real AI experts, and you realize that you have no shot at securing any of that intellectual capital. And even if you have the resources to secure some of that talent, that talent has requirements. They're fools if they consider you at less than 10^4 H100s. So now you have another problem.

I think techniques, R&D, secrets, intellectual capital, and so on are all centralizing in the major labs. Startups as we knew them 5 to 10 years ago will simply be choosing which model to build on. They have no legitimate shot at displacing any of the core LLM ecosystems. They'll all be nibbling at the edges.

bilbo0s•1h ago
Well, companies, by and large, started and funded by..

incumbents.

That said, I think that's just how reality works.

croes•1h ago
You confuse companies with people.

Companies come and go, but the rich people who own them stay nearly the same.

guywithahat•57m ago
You are your own sole proprietorship; go learn an in-demand skill and companies will compete for you. It can be hard to see in big companies, but these market forces are very apparent in a 50-person company.
hayst4ck•1h ago
You can pay more for more access to a better model. Answer quality directly correlates to resources spent generating it.

It literally and structurally offers advantage to those with more resources.

That effect compounds over time and use.

I am not even remotely talking about worker replacement. Even assuming no job was lost, a company that is able to pay for better answers should have more profit, and therefore more ability to pay for more/better answers.
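
A toy calculation of the compounding the parent describes, with made-up numbers: both firms reinvest everything each period, and the better-funded one buys a model tier that adds a 5% edge to a 10% base return (all three numbers are assumptions):

    # Toy model of compounding advantage; the 10% base return and 5%
    # "better answers" edge are arbitrary illustrative numbers.
    def wealth_after(start, edge, periods=20):
        w = start
        for _ in range(periods):
            w *= 1 + 0.10 + edge
        return w

    rich = wealth_after(200, 0.05)  # twice the budget, better model tier
    poor = wealth_after(100, 0.00)
    print(f"ratio after 20 periods: {rich / poor:.1f}x")  # ~4.9x, up from 2x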

louwrentius•1h ago
Thank you, this is an excellent assessment
AtlasBarfed•1h ago
It gives the powerful more power to monitor and control the people.

AI is despotism automated.

dismalaf•1h ago
They push the danger factor so that governments will regulate the industry and prevent startups from competing. It's 100% about regulatory capture.
oezi•1h ago
As long as the stock market appreciates more than GDP grows, the rich will become richer quicker than the poor no matter what we do.
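
This is essentially the r > g argument; a quick worked example with stylized numbers (7% average market return vs 2% GDP-tracking wage growth, both assumptions):

    # Stylized r > g arithmetic: wealth compounding at market returns
    # while wages track GDP growth. Both rates are assumed round numbers.
    capital, wage = 1_000_000.0, 50_000.0
    for year in range(30):
        capital *= 1.07  # return on invested wealth
        wage *= 1.02     # wage growth tracking GDP
    print(f"capital grew {capital / 1_000_000:.1f}x, wages {wage / 50_000:.1f}x")
    # capital grew 7.6x, wages 1.8x over 30 years
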
frollogaston•1h ago
Social media is a pretty new thing, even home internet is. People used to get info from each other or from a select few broadcast/paper news sources, the latter owned by powerful people of course. AI or not, we're probably going back to that because people realized you can't trust stuff from random places.
SV_BubbleTime•34m ago
>It's dangerous because it gives individuals the power to provide answers when people ask questions out of curiosity they don't know the answer to, which is when they are most able to be influenced.

Can you point to a time in the history of written information when this wasn't true, though?

- Was the Library of Alexandria open to everyone? My quick check says it was not.

- Access to written information already presupposed an education of some form.

- Do you think Google has an internal search engine that is better than the one you see? I suspect they have an ad-less version that is better.

- AI models are an easy one, obviously. Big players and others surely have hot versions of their models that would be considered too spicy for consumer access. One of the most memorable parts of AI 2027 for me was the idea that you might want to go high in government to get access to the superintelligence models, and use them to maintain your position.

The point is, that last one isn't the first one.

bigyabai•2h ago
You know what concerns me? Venture capital has started a ball rolling, in which HN plays no small part. VCs exist as an apparatus of America's massive financialized economy, and therefore understand that the system is overleveraged again. We have so much more debt than we're capable of working off, and therefore the people with money want to spend it on moonshots. Cryptocurrency scams, viral marketing schemes, AI shenanigans among the latest. What is the sum of all this? What work is really being done, besides the financier economics we've seen since the 1980s and the dev outsourcing we've seen since the 90s?

Maybe AI swarms do pose some weird contrived threat to democracy. Pieces like this will inevitably be laundered by the American intelligentsia like Karpathy or Hinton, and turned into some polemic hype-piece on social media proving that "safe AI" must be prioritized in a row for regulation. It's borderline ineffable to admit on HN, but America's obsession over speculative economics has pretty much ruined our chance at seizing a technological future that can benefit anyone. Now AI, like crypto before it and the dotcom bubble too, is overleveraged. "Where's the money, Lebowski?"

sgarrity•1h ago
Are there benevolent AI swarms?
matus-pikuliak•1h ago
I have done some research on AI disinformation. This is a really complex topic, as disinformation and influence operations are complex phenomena that can take many different forms. What I would argue is that disinformation in general does not have a supply problem (how to generate as much of it as possible) but a demand problem (how to get what is generated in front of some eyes). You don't really need a botnet of fake users pushing something; you need a few popular accounts/politicians to spread your message. There is no significant advantage in using AI there.

But, there are still situations where botnets would be useful. For example, spreading propaganda on social media during hot phases of various conflicts (R-U war, Israeli wars, Indo-Pakistani war) or doing short term influence operations before the elections. These cases need to be handled by social media platforms detecting nefarious activity by either humans or AI. So far they could half-ass it as it was pretty expensive to run human-based campaigns, but they will probably have to step up their game to handle relatively cheap AI campaigns that people will attempt to run.
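
As a rough sketch of what platform-side detection of coordinated campaigns can look like at its very simplest, the toy signal below flags account pairs that repeatedly post within the same short time window. The window size, threshold, and data are all made up, and real systems combine many such signals:

    from collections import defaultdict
    from itertools import combinations

    WINDOW_SECONDS = 60     # bucket size for "posted at the same time"
    MIN_SHARED_BUCKETS = 5  # co-occurrences before a pair looks suspicious

    def suspicious_pairs(posts):
        """posts: iterable of (account_id, unix_timestamp) tuples."""
        buckets = defaultdict(set)
        for account, ts in posts:
            buckets[ts // WINDOW_SECONDS].add(account)
        pair_counts = defaultdict(int)
        for accounts in buckets.values():
            for pair in combinations(sorted(accounts), 2):
                pair_counts[pair] += 1
        return {p: n for p, n in pair_counts.items() if n >= MIN_SHARED_BUCKETS}

    # Two bots posting in lockstep, one human posting at random times.
    posts = ([("bot_a", t) for t in range(0, 600, 60)]
             + [("bot_b", t + 2) for t in range(0, 600, 60)]
             + [("human", t) for t in (13, 217, 455)])
    print(suspicious_pairs(posts))  # {('bot_a', 'bot_b'): 10}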

IAmGraydon•1h ago
>Fusing LLM reasoning with multi-agent architectures [1], these systems are capable of coordinating autonomously, infiltrating communities, and fabricating consensus at minimal cost. Where legacy botnets acted like megaphones, repeating one script, AI swarms behave like adaptive conversationalists with thousands of distinct personas that learn from feedback, pivot narratives, and blend seamlessly into real discourse. By mirroring human social dynamics using adaptive tactics, they threaten democratic discourse.

A trip to Reddit will show you just how real this already is. Humans are far more likely to adopt a belief if they think that everyone else also believes it. From the moment GPT became available, bad actors have exploited this to spread disinformation and convince everyone to believe it.

try_the_bass•1h ago
Discourse on Reddit has definitely become orders of magnitude more stupid over the past year or so.

While I attribute a good portion of this to these kinds of influence operations, it's pretty clear that the opinion of the average Redditor (bot or not!) has just gotten increasingly shallow over time.

agentultra•1h ago
Real AI safety is enforced regulation and smart policies, period.

Don't let the government's FOMO on new weapons enable these companies to add new coal and methane power to the grid and build data centres in water-stressed regions. Make them pay for the externalities they cause. If it weren't subsidized these companies wouldn't be operating at the scale they are, AI would be in a lab, where it belongs doing cool stuff for science.

Heck, don't let government, let alone private corporations, weaponize this technology, full stop.

Economic policy that protects businesses and individuals online. People's hosting bills are going through the roof from AI scrapers. The harm is nuts. These companies aren't respecting any of the informal rules and are doing everything they can to form monopolies and shut down competition.

We need social policies that prevent AI use cases from harming the economy. Policies that prevent displacing skilled workers without redistributing the accrued wealth to the labour class. If it's really improving productivity by such a huge factor then we should all be working less and living comfortably.

But I dunno how you prevent the disinformation, fraud, and scams. They've been around for years and it's always a cat-and-mouse game. Social media has just made it worse and AI is just more fuel for the tire fire.

NoMoreNicksLeft•1h ago
Enforced regulation has become impossible for at least the last 25 years or so. Anyone can just build a datacenter in some foreign country, and that software still affects the people in your country.

Back when they were complaining that Russia was interfering with our elections (2016ish?) I wondered what it would take to completely cut Russia off from the Internet. Granted, the Soviets still pulled stunts with less technology, but it was still a manageable problem then. Now? Well, we couldn't cut them off if we wanted to, could we? Even if we bullied Europe into severing all the fiber backbones there and elsewhere, China and North Korea and a dozen other countries would still keep them connected. And we'd still face the problems we face now.

Not that we would try that. Though it might be a sane and even a reasonably good policy, you'd have jackasses here and elsewhere (and not just Russian shills either) talking about how we can't possibly disconnect the friendly Russianerinos, they're good people even if some of their oligarchs aren't.

So we'll get some performative regulation theater that changes nothing. And everyone will just wonder what went wrong, quietly.

21eleven•1h ago
I suspect this is what a node in an AI swarm looks like: https://www.reddit.com/u/Low-Ocelot-992/s/95ORJu8j9U
pessimist•1h ago
Killing civilians in war has been completely normalized. Even in highly visible conflicts in Gaza and Ukraine. People just shrug and move on.

Political discourse has lost all sense of decency. Senators, the VP and POTUS all routinely mock and demean their opponents and laugh even at murder. Arrests are made by unknown masked men with assault rifles.

AI is simply irrelevant to this - humans are selfish, tribal and ugly.

lupusreal•1h ago
> Killing civilians in war has been completely normalized.

Relative to which period in human history?

bastawhiz•1h ago
> system-level oversight—a UN-backed AI Influence Observatory

> The Observatory should maintain and continually update an open, searchable database of verified influence-operation incidents, allowing researchers, journalists, and election authorities to track patterns and compare response effectiveness across countries in real time. To guarantee both legitimacy and skill, its governing board would mix rotating member-state delegates, independent technologists, data engineers, and civil society watchdogs.

We've really found ourselves in a pickle when the only way to keep Grandma from being psychologically manipulated is to have the UN keep a spreadsheet of Facebook groups she's not allowed to join. Honestly what a time to be alive.
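
For concreteness, a minimal sketch of what one record in the proposed incident database might hold. The preprint describes the database only at a high level, so every field name here is a guess, not the paper's actual schema:

    from dataclasses import dataclass, field
    from typing import List, Optional

    # Hypothetical record for an influence-operation incident database;
    # fields are illustrative guesses, not the preprint's schema.
    @dataclass
    class InfluenceIncident:
        incident_id: str                 # stable id for cross-referencing
        platforms: List[str]             # where the operation ran
        first_observed: str              # ISO 8601 date
        attributed_actor: Optional[str]  # None while attribution is open
        tactics: List[str] = field(default_factory=list)
        verified_by: List[str] = field(default_factory=list)

    incident = InfluenceIncident(
        incident_id="2025-0001",
        platforms=["x.com", "reddit.com"],
        first_observed="2025-03-14",
        attributed_actor=None,
        tactics=["persona swarm", "coordinated upvoting"],
        verified_by=["example-observatory.org"],  # hypothetical reviewer
    )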

tyleo•1h ago
I would actually love if what comes out of this is that people stop trusting social media entirely and put a lot more weight into face-to-face interactions.
frollogaston•1h ago
That, and low-effort journalism or ads. Without AI, writing tons of bs is already a well-established skill; now it doesn't even take skill.
alganet•1h ago
That doesn't solve the problem.

The person you are talking to face-to-face could have been targeted with disinformation as well.

This is suggested in the paper: manufacture of narrative across communities. Those communities are not exclusively online.

Loughla•1h ago
Legitimately though, when did we shift from 'don't believe anything on the Internet' to 'believe everything on the Internet'?

When and why did that happen?

const_cast•31m ago
When people discovered that telling people what they want to hear makes them money.

My favorite example of this is the entire sphere of anti-science "health" stuff on TikTok. Seed oils bad, steak good, raw milk good, chemo bad. I noticed something. Every single time, without fail, this person is trying to sell me something. Sometimes it's outright linked in the TikTok, sometimes it's in their bio. But they're always salespeople.

Salespeople lie, guys. They want you to buy their stuff; of course they're going to tell you their stuff works.

kristjansson•29m ago
General inability to distinguish the form of content from its veracity? The same people that assailed the internet in the 90s probably bought tabloids at the supermarket checkout. Newsweek, News of the World, who can tell the difference?
Atlas667•1h ago
There are only two logical solutions: either the powers that be create heavier blinders within nation states, or the public creates truly independent/public systems.

Social mass-media is doomed to fail in its current form. These platforms are already manipulated by capital through advertising, nation states, data-brokers and platform self interest.

People need more federated networks where agents can be verified, at least locally and "feeds" cannot be manipulated.

The powers that be do not believe in democracy how you and I believe in it, they believe in manufactured consent.

Mass media in its current form is just a way to create consent within the masses, not the other way around; the masses don't make the decisions.

nradov•1h ago
Are malicious AI swarms a greater threat to democracy than British soldiers armed with muskets and cannons?
lurk2•1h ago
I’ve been hearing about the dangers of influence campaigns like Russian bot farms, deep fakes, and LLMs for the last ten years now. While these sorts of papers go out of their way to use the word “democratic,” it always seems to be motivated by a (not-very-compelling) idea that right wing populism is an essentially artificial creation rather than an organic backlash to the excesses of the Obama era. The scale of these influence campaigns may have increased in the last five years, but the (common, in these circles) idea that Russia stole the election with a few million posts on Twitter is silly and itself begins to look like an attempt to implement anti-democratic measures that would allow established authorities to pull the ladder up behind them.

The authors of this paper have at least acknowledged that there are (practical, more so than moral) limitations to strict identification systems (“Please insert your social security number to submit this post”) and cite a few (non-US) examples of instances where these kinds of influence campaigns have ostensibly occurred. The countermeasures similarly appear to be reasonable, being focused on providing trust scores and counter-narratives. What they are describing, though, might end up looking a lot like a Reddit-style social credit system which has the impact of shutting out dissenting opinions. One of my favorite things about Hacker News over Reddit is that I can view and vouch for posts that have been flagged dead by other users. 95% of the time the flags were warranted (low-effort comments, vulgarity, or spam), but every once in a while I come across an intriguing comment that others preferred to be censored.

alephnerd•1h ago
These disinformation tactics via bots are used to vocalize BOTH the fringe left ("kill the rich") and the fringe right ("great replacement").

Most organizations and teams who have been investigating automated disinfo at scale have highlighted how fringes on both sides of the spectrum are being manipulated via automated engagement - often with state backing.

Power and politics are completely orthogonal to ideology.

techpineapple•1h ago
> I’ve been hearing about the dangers of influence campaigns like Russian bot farms, deep fakes, and LLMs for the last ten years now. While these sorts of papers go out of their way to use the word “democratic,” it always seems to be motivated by a (not-very-compelling) idea that right wing populism is an essentially artificial creation rather than an organic backlash to the excesses of the Obama era.

I think it can be both. There's part of me that has become maybe sort of Bernie Sanders MAGA over the past couple of years. Like, I certainly identify with the people who want their lives to be better, and America to be simplified, and I don't think that underlying this is an idea that without Russian influence we would all be Democrats. But to me, the problem with Russian bot farms and other influence campaigns isn't the direction of the beliefs; it's the degree, and by the way, this is on both sides.

Right, classically, there's the study that says gun violence is down, but portrayal of gun deaths in the media is up, and I think social media has taken this idea into overdrive. So like, instead of thinking we're one nation with people who disagree, now we're two political parties, and if we don't eliminate the other evil political party, it's the end of America.

Maybe something from the other side I've been trying to figure out:

As a matter of degree, again not binary: to what extent was the woke movement organic, vs. constructed via social media algorithms and capitalism? I'm not saying that nobody organically cares about social justice, etc. But to what extent were certain practices - announcing pronouns at the beginning of meetings - bottom-up from activists or the people who cared, or top-down via diversity consultants? To what extent did BLM or MeToo grow organically, vs. being promoted by social media algorithms?

If there's this big complaint that progressive change was happening too fast, was it really progressive activists driving the change? Or other institutions and influence campaigns.

Spivak•1h ago
> the excesses of the Obama era.

Thank you, I needed a laugh today. I mean you're not wrong that it started with Obama, but like, come on-- I lived through that era, you probably did too. It was a visceral emotional response to the most powerful man on earth being a black man. That spawned the Tea Party and the birther movement, the Venn diagram of which was a circle. The Republican party noticeably changed to a tone of burn-it-all-down while Obama was in office. Being one of the most outspoken birthers was what put Donald Trump into the public sphere as a political figure. This is where the MAGA wing of the Republican party traces its origins.

SoftTalker•1h ago
Blaming it all on racism is too easy. The left needs to face the real reasons that they lost to Donald Trump (because he was easily defeatable, IMO).
Spivak•1h ago
I'm not, that's just where it started. The movement has grown massively since its humble beginnings during the Barack HUSSEIN Obama era. They started hating Obama before he took office; to say it's a response to what he did during his presidency doesn't track.
edwardbernays•1h ago
Malicious AI swarms are merely the tool wielded by the actual danger, which is the threat actors directing them toward a particular goal.

I'm spreading the message because I want more socially conscious people to engage in this. Look into Curtis Yarvin and The Dark Enlightenment. Look into Peter Thiel's (the chairman and co-founder of Palantir, aka America's biggest surveillance contractor) explicitly technofascist musings on using technology to bulldoze democracy.

ilaksh•1h ago
Stop blaming technology for the way humans misuse it. AI, like any technology, is a lever. Like a big metal rod. You could use that to move stones for building a structure, or to dislodge a boulder to roll down a hill and destroy someone else's building.

The pre-AI situation is actually incredibly bad for most people in the world who are relatively unprivileged.

"Democracy" alternates between ideological extremes. Even without media, the structure of the system is obviously wholly inadequate.

Advanced technologies can be used to make things worse. But they are also the best hope for improving things. And especially the best hope for empowering those with less privilege.

The real problems are the humans, their belief systems and social structures. The status quo may seem okay to you on most days, but it is truly awful in general. We need as many new tools as possible.

Don't blame the tools. This is the worst kind of ignorance.

mclau157•1h ago
It's a bit more of a question of which we can change more easily: tools, or human nature.
1659447091•1h ago
Human nature. The tools will always follow that.
alephnerd•1h ago
> Stop blaming technology for the way humans misuse it

> Don't blame the tools. This is the worst kind of ignorance

"Stop blaming [guns // religion // drugs // cars // <insert_innovation_here>] for the way humans misuse it"

There is a reason regulations exist. Too much regulation is detrimental to innovation, but some amount of standards is needed.

ilaksh•1h ago
Guns, drugs and religion are not equivalent to AI.

I did not say there should not be regulation or standards.

The point is that the whole diagnosis is wrong. People point at AI as creating a new problem as if everything was okay.

But everything is already fucked, and it's not because of technology; it's because of people, their social structures and beliefs. That is what we need to fix.

There are lots of ways that AI could help democracy and government in general.

croes•1h ago
And there are lots of ways to fuck the situation up even more.

AI makes mass-produced fake news possible.

You think the situation is bad? Let’s talk about that in 5 years.

salawat•1h ago
None of it outweighs the harms in the hands of provably malicious governments or corporate interests. Hell, the Western ideal of Government is only truly considered tolerable given the caveat that we're constantly cycling chunks of it out, and that it never becomes so efficient and automated that it can trivially operate without the consent of the governed.
jimmyjazz14•1h ago
> Stop blaming [guns // religion // drugs // cars // <insert_innovation_here>] for the way humans misuse it

I tend to agree with this statement honestly.

SV_BubbleTime•28m ago
Right? People say this as some defense for regulation, but it's the opposite. Everything that can, will be misused. USA has a lot of shootings - where guns are the most common on earth - big surprise? But when you get beyond the surface knee jerk, it's almost entirely gangs over drugs. Well, those things are already "regulated" so clearly there is a disconnect somewhere between freedom and being mad at the next thing in the chain - punishing everyone because someone might be bad.

The adult take is that things do not have malice; people might, so address that, because the world will never be regulated into safety enough for the people that don't get human nature.

AtlasBarfed•1h ago
The tool is the final key for omnipresent, full-time monitoring and control.

Everywhere you move. Everything you say, everything you buy.

They've had the monitoring technology for decades. The problem was the fire hose.

There is no problem with the fire hose anymore.

If this doesn't scare the s** out of you, then you're ignorant.

Maybe if we had a well-functioning government I would hold out some degree of hope. But our democratic institutions are already in shambles from Facebook.

All previous technologies basically enhanced talent and intelligence. Yes, AI can do that, but the difference this time is that it replaces intelligence on a huge scale.

The role of the intelligentsia, to borrow an old term, arguably has been to push idealistic progress on society with its monopoly on competence.

That monopoly on competence generally carried with it the essential counterbalance to centralized authority and power, along with idealism and philosophically derived morality and righteousness.

AI is the end of the monopoly on competence for almost all of the Hacker News crowd.

North Korea is the future with AI.

croes•1h ago
We know that humans are the problem but you can’t change that.

Maybe it’s a bad idea to put powerful tools in the hands of people you know will misuse them.

Let’s do a gedanken experiment.

We create mighty tools with two buttons. Button 1 solves world hunger and cures every disease. Button 2 kills everybody but you.

Would you give everyone such a tool?

salawat•1h ago
You could make it with only one button (your first), and it could still end up doing the second. Full extermination of the people vulnerable to disease and hunger is, in fact, a valid optimization strategy.

There is no getting around the fact that these things are nothing like regular technology where you can at least decompose it down to functional working parts. It isn't debuggable basically. Nor predictable.

voidhorse•1h ago
This is a naive view of technology and its development. Yes, there is a certain degree to which technologies can be appropriate for a range of ends, but they are also created and distributed in historical situations in which the creators have incentives. There are plenty of cases in history in which purely "neutral" technological decisions and developments had intentional political and economic effects. Check out Langdon Winner's classic paper "Do artifacts have politics?"

The tools do not exist without the humans, and the humans, consciously or otherwise, design tools according to their own views and morals.

To outline just a basic example: many initial applications of generative AI were oriented toward the generation of images and other artistic assets. If artists, rather than technologists, had been the designers, do you think this would have been one of the earlier applications? Do you think that maybe they may have spent more time figuring out the intellectual property questions surrounding these tools?

Yes, the morals ultimately go back to humans, and it's not correct to impute morals onto a tool (though, ironically enough, the personification and encoding of linguistic behaviors in AI may be one reason that LLMs can be considered a first exception to this), but reducing the discussion to "technology is neutral" swings the pendulum too far in the other direction and ultimately tends to absolve technologists and designers of moral responsibility, pushing it onto the user, which, news flash, is illegitimate. The creators of things have a moral responsibility too. For example, the morality of designing weapons for the destruction of human life is clearly contestable.

ilaksh•33m ago
My views are not naive. Let's get specific. What do you propose? That LLMs or image generators be banned?

Are technologists creating AI swarms for political manipulation? Or is that being done by politicians or political groups?

Are you suggesting that an LLM or image generator is like a gun?

const_cast•27m ago
> Stop blaming technology for the way humans misuse it.

Exactly, that's why I propose every person has access to a nuke.

Elephant in the room here: the scale of technology matters. Being able to lie on a scale that eradicates truth as a concept matters.

We can't just naively say all tools are similar. No no, it doesn't work that way. I'm fine with people having knives. I'm fine with people having a subset of firearms. I am not fine with people having autonomous drones, or nuclear weaponry, or whatever. Everything is a function of scale. We cannot just remove scale from the equation and pretend our formulas still hold up. No, they don't. That's why the printing press created new religions.

mmsc•1h ago
.. AI ... swarms! Like bees! Scary! We should be scared!

Anyways, I'm not sure AI is relevant here. Misinformation is just a form of propaganda, which, other than allowing falsehoods to be created more quickly, doesn't seem to be any more "threatening" than any other lie.

zzzeek•1h ago
Sure but who needs democracy anyway? Monarchy is the way of the future for famous tech oligarchs like Peter Thiel (and aren't we all just aspiring tech oligarchs on HN?). AI monarchs are the way of the future.
downboots•1h ago
We get to choose our dystopia? :D
devrandoom•1h ago
Sections of Reddit and Twitter have been taken over by an incredibly toxic cesspit of bots. They fuel polarization and hate like nothing I've ever seen before.

It's catered to the algorithm, which pumps it out to users.

SoftTalker•1h ago
I wonder if AI is going to track like nuclear power. In the 1950s it was the greatest thing. Electricity would be plentiful and cheap. "Too cheap to meter." All kinds of new conveniences would be possible. The future was bright.

Then we had growing environmental concerns. And the costs were much higher than initially promoted. Then we had Three Mile Island. Then Chernobyl. Then Fukushima. New reactor construction came to a standstill. There was no trust anymore that humans could handle the technology.

Now, there's some interest again. Different designs, different approaches.

Will AI follow the same path? A few disasters, a retreat, and then perhaps a renewal with lessons learned?

EGreg•1h ago
I've been saying this for years.

It's curious that 90% of the top-level comments here are all dismissing it outright. And we have the usual themes:

1) "This has always been possible before. AI brings nothing new."

2) "We haven't seen anything really bad yet, so it is a non-issue

3) "AI execs are pushing this narrative to make AI sound important and get more money"

Nevermind the fact that many famous people behind the invention of AI, including Geoffrey Hinton (the "godfather of AI") and others, quit their jobs and are spending their time loudly warning people, or signing major letters where they warn about human extinction or job loss... it's all a grift according to the vocal HN denizens.

This is like the opposite of web3 where everyone piles on the other way.

Well... swarms of AI agents take time to amass karma, pagerank, and other metrics, but within the coming years, they will indeed be able to churn out content 24/7, create normal-looking influencer accounts, and dominate the online discussion on every platform. Very likely, the percentage of human-generated content will trend to 0-1% of content on the internet, and it will become a dark forest, and this will be true on "siloed" ecosystems like HN as well: https://maggieappleton.com/ai-dark-forest/

Certainly, saying "this was always possible before" misses the forest for the trees. No, it wasn't.

List of attacks made possible by swarms of agents:

1) Edits to Wikipedia and publishing articles to push a certain narrative, as an Advanced Persistent Threat

2) Popular accounts on social media, videos on YouTube, Instagram and TikTok pushing AI-generated narratives, biased "news" and entertainment across many accounts growing in popularity (already happening)

3) Comments under articles, videos and shared posts that are either for or against the thing, and coordinated upvoting that bypasses voting ring detection (most social networks don't care about it as much as HN; a detection sketch follows this list). Tactical piling on.

4) Sleeper accounts that appear normal for months or years and amass karma / points until they gradually start coordinating, either subtly or overtly. AI can play the long strategic game and outmaneuver groups of people as well, including experts (see #7 for discrediting them).

5) Astroturfing attacks on people who disagree with the narrative. Maybe coordinating posts on HN that make it seem like it is an unpopular position.

6) Infiltrating and distracting opponents of a position, by getting them mired in constant defenses or explanations, where the interlocutors are either AI or friends / allies that have been "turned" or "flipped" by AI to question them

7) Reputational destruction, along the lines of NSA PRISM powerpoint slides (https://archive.org/details/NSA-PRISM-Slides) but at scale and implacable.

8) Astroturfing support for wars, unrest, or whatever else, but at scale, along the lines of the Makhachkala protests in Russia, etc.

These are just some of the early low-hanging fruit for 2026 and 2027.
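
On item 3, the baseline voting-ring detection a swarm would need to bypass often reduces to flagging account pairs whose vote histories overlap far more than chance; a patient swarm evades it by spreading votes across accounts and time. A minimal overlap check, with made-up data and an arbitrary threshold:

    from itertools import combinations

    # Toy voting-ring signal: account pairs with improbably similar
    # upvote histories. Data and cutoff are made up; real systems also
    # weigh timing, account age, and device/IP signals.
    def jaccard(a, b):
        return len(a & b) / len(a | b)

    votes = {  # account -> set of item ids it upvoted
        "ring_1": {1, 2, 3, 4, 5, 6},
        "ring_2": {1, 2, 3, 4, 5, 7},
        "normal": {2, 42, 99, 731},
    }

    for (u, uv), (w, wv) in combinations(votes.items(), 2):
        if jaccard(uv, wv) > 0.5:  # arbitrary cutoff for the sketch
            print(f"possible ring: {u} & {w} (jaccard={jaccard(uv, wv):.2f})")
    # possible ring: ring_1 & ring_2 (jaccard=0.71)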

hayst4ck•1h ago
We are in an age where the technology of oppression gives the rich the feeling that they are above consequences. That people can be controlled in a way they will not be able to break free from.

Malicious AI swarms are only one manifestation of technology which gives incredible leverage to a handful of people. Incredible amounts of information are collected, and an individual AI agent per person watching for disobedience is becoming more and more possible.

Companies like Clearview already scour the internet for any public pictures and associated public opinions and offer a facial recognition database with political opinions to border patrol and police agencies. If you go to a protest, border patrol knows. Our government intelligence has outsourced functions to private companies like Palantir. Privatizing intelligence means intelligence capabilities in private hands; that might sound tautological, but if this does not make you fearful, then you did not fully understand. We have license plate tracking everywhere, cameras everywhere, mapped out "social graphs," and we carry around devices in our pockets that betray every facet of our personal lives. The vast majority of transactions are electronic, itemized, and tracked.

When every location you visit is logged, every interaction you have is logged, every associate you communicate with is known, and every transaction is itemized and logged for query, and there is a database designed to join that data seamlessly to look for disobedience, plus the resources available to fully utilize that data, then how do you mount a resistance if those people assert their own power?

We are becoming dangerously close to not being able to resist those who own or operate the technology of oppression and it is very much outpacing the technology of resistance.

Dig1t•1h ago
This is a popular tactic that the managerial class has been trying to use to keep its power in an age of decentralized mass communication.

Whenever you want to seize control of something or take power away from people just present it as an existential “threat to democracy”.