
U.S. CBP Reported Employee Arrests (FY2020 – FYTD)

https://www.cbp.gov/newsroom/stats/reported-employee-arrests
1•ludicrousdispla•59s ago•0 comments

Show HN: I built a free UCP checker – see if AI agents can find your store

https://ucphub.ai/ucp-store-check/
1•vladeta•6m ago•1 comments

Show HN: SVGV – A Real-Time Vector Video Format for Budget Hardware

https://github.com/thealidev/VectorVision-SVGV
1•thealidev•7m ago•0 comments

Study of 150 developers shows AI generated code no harder to maintain long term

https://www.youtube.com/watch?v=b9EbCb5A408
1•lifeisstillgood•8m ago•0 comments

Spotify now requires premium accounts for developer mode API access

https://www.neowin.net/news/spotify-now-requires-premium-accounts-for-developer-mode-api-access/
1•bundie•10m ago•0 comments

When Albert Einstein Moved to Princeton

https://twitter.com/Math_files/status/2020017485815456224
1•keepamovin•12m ago•0 comments

Agents.md as a Dark Signal

https://joshmock.com/post/2026-agents-md-as-a-dark-signal/
1•birdculture•14m ago•0 comments

System time, clocks, and their syncing in macOS

https://eclecticlight.co/2025/05/21/system-time-clocks-and-their-syncing-in-macos/
1•fanf2•15m ago•0 comments

McCLIM and 7GUIs – Part 1: The Counter

https://turtleware.eu/posts/McCLIM-and-7GUIs---Part-1-The-Counter.html
1•ramenbytes•18m ago•0 comments

So whats the next word, then? Almost-no-math intro to transformer models

https://matthias-kainer.de/blog/posts/so-whats-the-next-word-then-/
1•oesimania•19m ago•0 comments

Ed Zitron: The Hater's Guide to Microsoft

https://bsky.app/profile/edzitron.com/post/3me7ibeym2c2n
2•vintagedave•22m ago•1 comments

UK infants ill after drinking contaminated baby formula of Nestle and Danone

https://www.bbc.com/news/articles/c931rxnwn3lo
1•__natty__•23m ago•0 comments

Show HN: Android-based audio player for seniors – Homer Audio Player

https://homeraudioplayer.app
2•cinusek•23m ago•0 comments

Starter Template for Ory Kratos

https://github.com/Samuelk0nrad/docker-ory
1•samuel_0xK•25m ago•0 comments

LLMs are powerful, but enterprises are deterministic by nature

2•prateekdalal•28m ago•0 comments

Make your iPad 3 a touchscreen for your computer

https://github.com/lemonjesus/ipad-touch-screen
2•0y•33m ago•1 comments

Internationalization and Localization in the Age of Agents

https://myblog.ru/internationalization-and-localization-in-the-age-of-agents
1•xenator•33m ago•0 comments

Building a Custom Clawdbot Workflow to Automate Website Creation

https://seedance2api.org/
1•pekingzcc•36m ago•1 comments

Why the "Taiwan Dome" won't survive a Chinese attack

https://www.lowyinstitute.org/the-interpreter/why-taiwan-dome-won-t-survive-chinese-attack
2•ryan_j_naughton•36m ago•0 comments

Xkcd: Game AIs

https://xkcd.com/1002/
1•ravenical•38m ago•0 comments

Windows 11 is finally killing off legacy printer drivers in 2026

https://www.windowscentral.com/microsoft/windows-11/windows-11-finally-pulls-the-plug-on-legacy-p...
1•ValdikSS•38m ago•0 comments

From Offloading to Engagement (Study on Generative AI)

https://www.mdpi.com/2306-5729/10/11/172
1•boshomi•40m ago•1 comments

AI for People

https://justsitandgrin.im/posts/ai-for-people/
1•dive•41m ago•0 comments

Rome is studded with cannon balls (2022)

https://essenceofrome.com/rome-is-studded-with-cannon-balls
1•thomassmith65•47m ago•0 comments

8-piece tablebase development on Lichess (op1 partial)

https://lichess.org/@/Lichess/blog/op1-partial-8-piece-tablebase-available/1ptPBDpC
2•somethingp•48m ago•0 comments

US to bankroll far-right think tanks in Europe against digital laws

https://www.brusselstimes.com/1957195/us-to-fund-far-right-forces-in-europe-tbtb
4•saubeidl•49m ago•0 comments

Ask HN: Have AI companies replaced their own SaaS usage with agents?

1•tuxpenguine•52m ago•0 comments

pi-nes

https://twitter.com/thomasmustier/status/2018362041506132205
1•tosh•54m ago•0 comments

Show HN: Crew – Multi-agent orchestration tool for AI-assisted development

https://github.com/garnetliu/crew
1•gl2334•54m ago•0 comments

New hire fixed a problem so fast, their boss left to become a yoga instructor

https://www.theregister.com/2026/02/06/on_call/
1•Brajeshwar•56m ago•0 comments

AI chatbots can sway voters with remarkable ease

https://www.nature.com/articles/d41586-025-03975-9#ref-CR1
37•marojejian•1mo ago

Comments

marojejian•1mo ago
While I'm as paranoid about LLMs as the next HN'er, there are some silver linings to this research:

1) The LLMs mostly used factual information to influence people (vs., say, emotional or social influence). 2) The facts were mostly accurate.

I'm not saying we shouldn't worry. But I expected the results to be worse.

Overall, the interesting finding here is that political opinions can be changed by new information at all. I'm curious how this effect would compare to comparably informed human discussions. I would not be surprised if the LLMs were more effective, for at least two reasons:

1) Cost-efficiency, in terms of the knowledge and the effort/skill required to provide personalized arguments. 2) Reduction in the emotional barrier to changing your mind: people don't want to "lose" by being wrong about politics to someone else, but perhaps a machine doesn't trigger this social/tribal response.

Cited papers:

https://www.nature.com/articles/s41586-025-09771-9

https://www.science.org/doi/10.1126/science.aea3884

techblueberry•1mo ago
I'll add a third reason: I think that, in general, people are very bad at understanding how to make an argument to someone with a different value system. I'm liberal, I have family members who are conservative, I'll read conservative books, and I'm genuinely curious about new ideas, but most people I know (and I'm sure this works vice versa) are only good at expressing political opinions in the language of people who share their values. Republicans and Democrats don't just talk about different things, they talk about them in very different ways.

I find this online as well, like I hate being “out of my echo chamber” because those arguments are just uniformly pointless. (This is in all directions by the way, people to the right or left of me).

Interestingly, though, I also find it challenging to talk to LLMs about competing values: if I ask the LLM to explain a conservative position and then make counter-arguments to that position, it will almost never tell me my counter-argument is wrong, just "you've hit the nail on the head! Boy are you smart!"

dlivingston•1mo ago
I had a friend in grad school who influenced my political beliefs more than anyone I'd met.

He never engaged in political conversation with "here's what I believe, and here's why you should too." His approach was more Socratic; to listen to me talk, and then offer an additional viewpoint or context.

I never got the impression from him that he was trying to convince me of something, or that he thought I was wrong about X/Y/Z, but rather, that we were on an intellectual journey together to identify what the problems actually were and what nuanced solutions might look like.

I still have no idea to this day what his ACTUAL political party is (or if he even has one). I genuinely could not tell you if he was left, right, or center.

bossyTeacher•1mo ago
> I still have no idea to this day what his ACTUAL political party is (or if he even has one). I genuinely could not tell you if he was left, right, or center.

Did you not ask him about HIS position on different matters? That is how I would do it. Some people won't share their views unless directly asked.

apercu•1mo ago
As far as I can tell, most conservative arguments seem to be about the price of gas. If there is a Democrat in the White House, the price of gas is astronomical. If there is a Republican in office, gas is somehow far cheaper than what I always end up paying.

Gasoline is like the least important cost metric in my life.

techblueberry•1mo ago
I've been going down a bit of a rabbit hole on "what conservatives believe," and weirdly (and this is from both Roger Scruton and the book "The Conservative Mind"), it's a bit like porn: you can't define it, but you know it when you see it. I mean, there are some tangible points conservatives make about believing in "common sense," that there's basically a higher truth we all know exists that should guide us.

Roger Scruton, in (I think) this video: https://www.youtube.com/watch?v=1eD9RDTl6tM, says that conservatism in the 80s in the UK was basically whatever Margaret Thatcher believed. This really helped me understand why the conservative transition from Reagan/Bush to Trump went more smoothly among trad conservatives than I thought it would.

paulryanrogers•1mo ago
Growing up indoctrinated into conservative evangelism, I saw that the Midwestern flavor valued freedom of individuals from government. It was a shallow flavor of self-sufficiency, one that discounted all social support except family and churches. Abortion was a wedge issue preached from every platform.

Tribalism was a key substrate. This often manifested as near-blind loyalty to the party and to chosen thought leaders like Bill Graham, Rush Limbaugh, Bill O'Reilly, and now Tucker Carlson. They told us how to interpret events, and we repeated the talking points. They gave us the (often contradictory) rules and principles we were to use to view everything in life.

zem•1mo ago
the scenario that worries me is "fox news but personalised", e.g. fox can run a dozen pieces on "immigrants are taking your jobs" but an LLM hooked into your google profile could generate an article on how "plumbers in nashville are being displaced by low-paid mexicans" that is specifically designed to make you personally fear for your job if the nazi du jour isn't elected.

ekjhgkejhgk•1mo ago
> the LLMs mostly used factual information to influence people

No, you see. This is how I used to think when I was a teenager.

Democracy isn't about being factually correct. It's about putting rules in place that make it very difficult for power to accumulate to the point where it can bend the rules themselves.

It's not a silver lining that LLMs are persuasive by being mostly accurate, if they're used to increase the power of their owner further.

TomasBM•1mo ago
I looked at the original study [1], and it seems to be a very well-supported piece of research. All the necessary pieces are there, as you would expect from a Nature publication. And overall, I am convinced there's an effect.

However, I'm still skeptical of the effects, or at least of the size of the change. First (a point that applies to the Massachusetts ballot on psychedelics in particular), putting views into percentages and getting accurate results from political polls are notoriously difficult tasks [2]. The size of any measured effect therefore inherits whatever confounding variables make those tasks difficult.

Second, there could be some level of Hawthorne effect [3] at play here, such that participants may report being (more) convinced because that's what (they think) is expected of them. I'm not familiar with the recruiting platforms they used, but if those platforms specialize in paid or otherwise professional surveys, I wonder if participants feel an obligation to perform well.

Third, and somewhat related to the above, participants could state they'd vote Y after initially reporting a preference for X because they know it's a low-cost, no-commitment claim. In other words, they can claim they'd now vote for Y without fear of judgement because it's an anonymous lab activity, but they can always go back to their original position once the actual vote happens. To show the size of the effect relative to other influences, researchers would have to raise the stakes, or follow up with participants after the vote and find out if/why they changed their mind (again).

Fourth, if a single conversation averaging six minutes with a chatbot could convince an average voter, I wonder how much they knew about the issue/candidate being voted on. More cynically for the study, there may be much more at play in actual vote preference than a single dialectic presentation of facts: for example, salient events in the period leading up to the election, emotional connection with the issue/candidate, and personal experiences.

Still, this does not make the study flawed for not covering everything. We can learn a lot from this work, and kudos to the authors for publishing it.

[1] https://www.nature.com/articles/s41586-025-09771-9

[2] For example: https://www.brookings.edu/articles/polling-public-opinion-th...

[3] https://en.wikipedia.org/wiki/Hawthorne_effect

jacknews•1mo ago
Reminds me of the HypnoDrones in the Universal Paperclips clicker game.

ChrisArchitect•1mo ago
Related:

Chatbots can sway political opinions but are 'substantially' inaccurate: study

https://news.ycombinator.com/item?id=46154066