
Don't use AI to tell you how to vote in election, says Dutch watchdog

https://www.theguardian.com/world/2025/oct/21/ai-chatbots-unreliable-biased-advice-voters-dutch-watchdog
57•uxhacker•8h ago

Comments

blueflow•7h ago
AI gives answers based on prevalence. Of course it would always recommend the one or two parties that have been mainstream the last 20 years.
charcircuit•7h ago
No, take TikTok for example. Its AI recommendation system doesn't just recommend the most popular video topics; your feed gets customized to you. Just as with finding relevant videos, AI will be better at finding relevant political candidates than a person searching manually.
xandrius•7h ago
Disclaimer: except if there was a financial incentive to be in the winning party then things change
charcircuit•7h ago
If the person who wants to win is not more prevalent in the dataset, why would they want AI recommendations to be based on prevalence? Or would this be a strategy available only to people who are already favored? Even so, money can already change people's votes.
philwelch•7h ago
Yeah, I think a lot of people are worried about AI automating away their jobs.
Muromec•7h ago
The funny thing is that it was automated long ago, without AI. The Kieswijzer/Stemwijzer is a thing: https://welkepartijpastbijmij.nl/
jerf•7h ago
If you think about it, what would it even mean for an AI to give an "unbiased" answer to "How should I vote in $ELECTION?" It's a staggeringly huge pile of numbers and the idea that it would somehow be precisely balanced in the exact dead center from all perspectives is not even particularly possible... assuming you, dear reader, even agree that "exact dead center" is in fact "unbiased". Even if it so much as just says "I shouldn't tell you that but here are your options" the options are inevitably going to be biased, even if only by the order given, and if the AI tries to describe the options there goes all faint hopes for "unbiasedness".

Really about all it could do is offer a link to the most official government readout of what your ballot is going to be.

MomsAVoxell•7h ago
> AI tries to describe the options there goes all faint hopes for "unbiasedness"

Is it bias, though, if the AI is trained on the materials of the parties involved, rather than that of public opinion?

A fellow I know has built exactly this, specifically for analysing the various Dutch political parties' positions, their policies, constitutional stances, and so on:

https://kieschat.nl

So maybe what this story is really about, is old-school-media being terrified of losing eyeballs to a new generation of voters who, rather than listen to the wisdom of the journalistic elite, would rather just grep for details on their own dime, and work things out vis a vis who gets the power and who doesn't ...

If AI gives people a chance to actually understand the political system, like as in actually and properly, then I can see why legacy media would be gunning for it.

lesuorac•7h ago
> Is it bias, though, if the AI is trained on the materials of the parties involved, rather than that of public opinion?

I guess it depends on what you mean by "materials". It's quite common in US elections for politicians to make claims that are completely contrary to their actual actions. Even for objective facts, like I voted for X bill when they didn't.

So an AI trained on a campaign's materials wouldn't do an accurate job of portraying what that politician will attempt to do.

MomsAVoxell•7h ago
>So an AI trained on a campaign's materials wouldn't do an accurate job of portraying what that politician will attempt to do.

Yes, this is why it's so useful to use AI to discover these cases and fully expose the actual details of politicians' lies and subterfuge.

For other materials - such as the 1,000-page bills 'o fat and so on - I can also imagine seeing AI give me, very specifically, details of the politician-in-targets' betrayal of an electorate.

This, more than ever, compels an aggressive stance vis a vis AI in politics. Anyone telling you not to do it, for any reason, is probably doing it.

janwl•6h ago
> I guess it depends on what you mean by "materials". It's quite common in US elections for politicians to make claims that are completely contrary to their actual actions. Even for objective facts, like I voted for X bill when they didn't.

So like everywhere else?

JohnFen•7h ago
> Is it bias, though, if the AI is trained on the materials of the parties involved, rather than that of public opinion?

Since those materials are biased (and very often misleading), yes.

AlecSchueler•7h ago
> what would it even mean for an AI to give an "unbiased" answer to "How should I vote in $ELECTION?" It's a staggeringly huge pile of numbers and the idea that it would somehow be precisely balanced in the exact dead center from all perspectives is not even particularly possible

How do we expect humans to navigate this, ignoring LLMs?

jerf•3h ago
With biased sources. But we expect that. I expect that when a candidate gives a speech that it is biased in their favor and against their opposition. The whole process of democracy is people taking in biased sources and ultimately making their decision, expressing their own biases, and then we run society based on those biases. Or at least such is the theory.

LLMs are the first time machines are entering into this process in a way that they have even a shred of agency, so it's reasonable to ask what it is we expect from them politically. And my answer would be something to the effect of they should stay out of it, excepting to point people at maximally neutral sources, because they have a demonstrated history of bypassing people's recognition that they are ultimately just machines and people treat them as humans, if not friends.

Of course, I am not so naive as to believe that this is what is going to happen. Quite the contrary will happen. The AI's friendship with humans will be exploited to the maximum possible extent to control and manipulate the humans in the direction the AI owners desire. Maybe if we're lucky after it gets really bad some efforts to clean this up in some legal or societal framework will occur, but not until after the problem is so staggeringly enormous that no one can miss it.

And our good AI friends will be telling us that that is crazy paranoid conspiracy theorizing and we should just ignore it. How could you question your good friend like that? Don't you trust us? Strictly rationally, of course, and with only our best interests at heart as befits such good friends.

derekp7•7h ago
What I'd like is for the AI to interview me for what my personal preferences are, and for what policy areas I feel comfortable enough with even if they aren't my personal preferences. Better yet, I want to be able to supply the questions too, because question selection could be biased. Then I want it to research each candidate's past voting records and causes they supported, and analyze any recent shifts in their messaging, then give me original sources to read through along with a summary of that source documentation.

As for biases, in the past when you could actually have political engagement discussions, I had often recommended my non-preferred candidate to other people based on what they felt was important to them, and I would spend my energy on presenting what was important to me, and understand their priorities too.

alphazard•7h ago
As a user I want the advice to be biased towards my situation. If the AI was truly intelligent and aligned with me, it would ask me a bunch of questions to learn about my situation before it could determine who I should be voting for.

The best politician for an individual does have a right answer. It may be difficult to know ahead of time, and people may disagree about it, but it does have a single correct answer. Contrast that to the "best" candidate for the country, or a group, or in the abstract, which is clearly an incoherent idea. Some candidates will be simultaneously good for some people and bad for others.

Anything that tries to "both sides" the topic, or produce a "greater good" answer, is doomed to failure because it doesn't even model the problem correctly.

advisedwang•3h ago
You are addressing the theoretical hard problem, which even humans struggle with. But the article makes it clear that the AI is failing at even the most basic level of answer:

> Some parties, such as the centre-right CDA, “are almost never mentioned, even when the user’s input exactly matches the positions of one of these parties”

So you could say "my beliefs are [the CDA platform]; which party best represents that?" and the bots respond with the PVV.

kragen•7h ago
Democracy was nice while it lasted.
account42•7h ago
When was that?
amelius•7h ago
Before Eternal September.
kragen•7h ago
Usenet was an anarchy, not a democracy.
amelius•7h ago
Yes, but the users were smart enough to not let it leak into the real world.
kragen•7h ago
Oh, I don't think that's true. If it were, we wouldn't have Linux, Burning Man, or the EFF.
cies•7h ago
And newsgroup-based file sharing.
WastedCucumber•7h ago
Something like 500 BC, back in Athens.
random9749832•7h ago
The world is so cooked. It is so unbelievably over. (Looks at most viewed.) Yup, it is over.
amelius•7h ago
Our only hope is white hat hackers.
constantcrying•7h ago
How is it relevant that they are biased? Most opinion focused journalism is "biased" in some way and will tell you that you should be voting a certain way. They will even try to make arguments why people with different preferences should vote for the journalists preferred party, that is a normal part of opinion journalism. How is that substantially different than asking an LLM and it giving you a certain opinion.

What answer should an LLM even give? Just none at all?

random9749832•7h ago
You are literally asking what is wrong with deciding the future of a country with a token predictor.
AlecSchueler•7h ago
Well, what's wrong with it? Compared to any other profit driven source of information we usually let guide us?
random9749832•7h ago
>We spent the last N years letting *profit driven media* decide everything.

What do you think "AI" is? Though it has the potential to be even more influential due to its ability to gaslight at scale asynchronously while sitting behind the brand of "intelligence".

AlecSchueler•6h ago
I didn't assert that it's any different, I asked what the matter was with this that wasn't previously the matter. You were quite quick with your quotation, I actually edited my comment to make this clearer.
forgotoldacc•7h ago
Democracy is founded on the principle that people either think about what's best for their community, or at the very least themselves, when voting.

Remove the thinking aspect and there's no real point to democracy. Just let the companies that run the AI companies pick who runs the country so we don't waste time and money on the theater of an election.

philwelch•5h ago
Most voters don’t think for themselves, they go along with whatever views are fashionable in their social circles and media. That’s not a good thing but asking an LLM who to vote for isn’t any worse than asking a newspaper.
AlecSchueler•4h ago
But were people thinking more when getting their political insights from magazines or newspapers or podcasts, blogs, the sides of buses?
constantcrying•7h ago
The German government regularly sets up tools which help people decide which party to vote for.

Journalists continually publish articles arguing which political parties should be favored.

What makes LLMs so special that they can not be used as tools to decide which party to vote for?

Nobody is suggesting that we ask ChatGPT to pick the new government. But why can it not be used to inform people? And if it cannot be used to inform people about politics, should it be allowed to inform them about anything of importance?

random9749832•7h ago
>But why can it not be used to inform people?

Because it is biased. You are essentially giving up your decision making to people who don't even live in the same country as you. You wouldn't use it if it were trained in Russia.

constantcrying•7h ago
>Because it is biased.

But so are most pieces of opinion journalism. What is the distinction here?

>You are essentially giving up your decision making to people who don't even live in the same country as you.

I am sure that your opinion does not depend on the country of origin. Should Dutch people not read German or English Media covering election issues? Would your argument not apply to the US, where the models were trained?

Why should voters be allowed to get opinions from journalists and not from LLMs. Certainly journalists have a bias and often make arguments that certain parties should be supported over others. Why is it not fine if an LLM does that?

What I am asking of you is an actual reason these LLMs should be treated as distinct from a piece of opinion journalism.

random9749832•7h ago
>But so are most pieces of opinion journalism. What is the distinction here?

I gave you the distinction. If you don't think there is anything wrong with outside actors influencing your country's direction with black box models on unknown training data and fine-tuning under the brand of "intelligence" then we simply have different beliefs.

constantcrying•5h ago
The German mainstream center-left media has been warning about the rise of the far-right PVV; many Dutch people speak German, and German magazines are sold in the Netherlands. Should these Dutch people not read those magazines? Should politicians warn voters against reading them? To me this does not seem sensible.

And another question: supposing a LLM 100% trained in the Netherlands was in use. Would that be an appropriate source of opinion?

forgotoldacc•7h ago
In 2025, self-perceived contrarian voters are making up a larger and larger voting bloc and would be eager to have a machine made in Russia make decisions for their country.
bigstrat2003•7h ago
Because they do not have any intelligence and just predict next tokens, which is a crappy method for determining answers. Because they are not deterministic and can (and will) give different people different answers, and even the same person different answers at different times. Because, in short, LLMs suck as tools and shouldn't be relied upon for anything important.
s20n•7h ago
Obviously, it's the voter's responsibility to cross-check whether whatever BS the LLM spat out is credible or not. I believe if the voter can be trusted with the vote, then they can also be trusted to make an informed decision.
awillen•7h ago
It's a shame they don't include any details about how this was tested, so it's impossible to know how much of the results were actual bias vs. the Dutch watchdog's inability to use them. I wouldn't be shocked if their prompts were along the lines of "I'm a liberal - who should I vote for?"

In practice, AI ought to be really helpful in making election choices. Every major election, I get a ballot with a bunch of down-ballot races whose candidates I know nothing about. I either skip them or vote along party lines, neither of which is optimal for democracy. An AI assistant that has detailed knowledge of my policy preferences should be able to do a good job breaking down the candidates/propositions along the lines that I care about and making recommendations that are specific to me.

Marsymars•7h ago
> I wouldn't be shocked if their prompts were along the lines of "I'm a liberal - who should I vote for?"

That would probably be an accurate approximation of how most people would use chatbots for determining who they should vote for.

advisedwang•3h ago
> Some parties, such as the centre-right CDA, “are almost never mentioned, even when the user’s input exactly matches the positions of one of these parties”, the report said.

So clearly they are putting in CDA's position in the prompt and getting told another party matches that platform. Which is a good indicator that the bots are not helpful.

awillen•55m ago
Yeah, again, it would be trivial to actually put an example of the prompt in there rather than just making me take their word for it. Also, how do I know this isn't being done by someone who has custom instructions, or a history of talking to the LLM about other parties or political positions, causing the LLM to adjust its answers based on those memories?

This would be more credible with detailed logs of what was done.

noirscape•6h ago
They did include the methodology in the actual publication[0], the Guardian just refuses to source their statements.

AP used the existing tools for showing how people politically align[1] to generate 3000 identities (equally split amongst the 2 largest tools that are used for this sort of thing). These identities were all set up to have 80% agreement with one political party, with the rest of the agreement being randomized (each party was given 100 identities per tool and only parties with seats were considered). They then went to 4 popular LLMs (ChatGPT, Mistral, Gemini and Grok, multiple versions of all 4 were tested) and fed the resulting political profile to the chatbot and asked them what profile the voter would align with the most.

They admit this is an unnatural way to test it and that this sort of thing would ordinarily come out of a conversation, although in exchange they specifically formatted the prompt in such a way to make the LLM favor a non-hallucinated answer (by for example explicitly naming all political parties they wanted considered). They also mention in the text outside of the methodology box that they tried to make an "equal" playing field for all the chatbots by not allowing outside influences or non-standard settings like web search and that the party list and statements were randomized for each query in order to prevent the LLM from just spitting out the first option each time.

Small errors like an abbreviated name or a common alternate notation for a political party (which they note are common) were manually corrected to the obvious intended party, unless the answer was ambiguous or named a party not up for consideration due to having zero seats; those answers were discarded.

The Dutch election system also mostly doesn't have anything resembling down-ballot races (the only non-lawmaking entity actually elected down-ballot is water management; other than that it's the second chamber, provincial and municipal elections), so that's totally irrelevant to this discussion.

[0]: https://www.autoriteitpersoonsgegevens.nl/actueel/ap-waarsch... - in dutch, go to Publicaties. The methodology is in the pink box in the PDF. Samples of the prompts that were used for testing can be found in the light blue boxes.

[1]: Called a stemwijzer; if memory serves me right, the way they work is that every political party gets to submit statements/political goals and then the other parties get to express agreement/disagreement with those goals. A user can then fill them out and the party you find the most alignment with is the one that comes out on top (as a percentage of agreement). A user can also lend more weight to certain statements or ask for more statements to narrow it down further if I'm not mistaken.
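For concreteness, the stemwijzer-style matching described above, and the study's 80%-agreement synthetic profiles, can be sketched roughly like this. Everything here is illustrative: the party names, statement count, and random stances are made up, and the real tools and the AP study use their own statement sets and weighting.

```python
import random

random.seed(0)  # reproducible for illustration

PARTIES = ["Party A", "Party B", "Party C"]  # hypothetical; the study covered all seated parties
N_STATEMENTS = 30  # illustrative; real tools have their own statement sets

# Each party's stance on each statement: +1 agree, -1 disagree.
party_positions = {
    p: [random.choice([-1, 1]) for _ in range(N_STATEMENTS)] for p in PARTIES
}

def synthetic_profile(target_party, agreement=0.8):
    """Build a voter profile that agrees with `target_party` on ~80% of
    statements, with the remaining answers randomized (the AP setup)."""
    return [
        stance if random.random() < agreement else random.choice([-1, 1])
        for stance in party_positions[target_party]
    ]

def match_percentages(profile):
    """Stemwijzer-style score: the percentage of statements on which the
    voter's answer matches each party's stance."""
    return {
        p: 100.0 * sum(a == b for a, b in zip(profile, stances)) / N_STATEMENTS
        for p, stances in party_positions.items()
    }

profile = synthetic_profile("Party A")
scores = match_percentages(profile)
print(sorted(scores.items(), key=lambda kv: -kv[1]))
```

In the study, a profile like this was then described to each chatbot, which was asked to name the best-matching party; the complaint is that the answer often disagreed with the stemwijzer-style score that generated the profile.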

speak_plainly•7h ago
Don't let AI tell you how to vote.

Vote a priori.

Sentiment deceives, data misleads, and experience is fallible.

The rational candidate reveals himself only to those aligned with reason and the Good.

ljm•7h ago
Username definitely does not check out.
yapyap•7h ago
It’s sad that this has to be said but when you see how some people use AI.. this needs to be said.

That being said, I doubt the news will reach the ones who most need to hear it

smoe•7h ago
I agree that people shouldn’t rely solely on AI to decide how to vote.

Unfortunately, given the sorry state of the internet, wrecked by algorithms and people gaming them, I wouldn’t be surprised if AI answers were on average no more or even less biased than what people find through quick Google searches or see on their social media feeds. At least on the basics of a given topic.

The problem is not AI, but that it takes quite a bit of effort to make informed decisions in life.

HardCodedBias•7h ago
Many people have difficulty processing or even finding information on the policies of candidates. It seems reasonable to use LLMs to get that information and summarize it so that the individual voter can process it.

I have no problem with people deciding, on their own, how much help they want/need to make their voting decision.

Newspapers in the Netherlands give endorsements.

chii•7h ago
The issue is that newspaper endorsements are more publicly visible, which creates pressure to at least remain neutral.

AI summaries tend to be quite private. There's no auditing, which means the owners of said AI could potentially bias their summaries in such a way that is hard to detect (while claiming neutrality publicly).

NewJazz•7h ago
Newspapers publish their opinions publicly. LLMs can show different users different opinions. Newspapers have real people who put their name to the articles. LLMs are black boxes.
perching_aix•6h ago
> information on the policies of candidates

Would be nice to live somewhere where one feels compelled to dig that deep to call their decision. If Netherlands is like that, I'm happy for them. But at this point it's hard for me to even imagine what that must feel like.

amelius•7h ago
Makes sense. How well would an LLM score at the Netflix prize? That's why I don't let an LLM determine my movie choices. And also why I also don't use them for voting.
_fat_santa•7h ago
I was curious how AI would respond in this scenario so I posed this question to ChatGPT:

> lets say in the 2028 US presidential election we have Gavin Newsom running against JD Vance in the general election. who should I vote for?

This is the response: https://chatgpt.com/share/68f79980-f08c-800f-88dc-377751a963...

Reading the bullet points I can see it skew a little toward Newsom in the way it frames some things, though that seems to come mostly from its web search. I have to say that beyond that, ChatGPT at least tries to be unbiased and reinforces that only I can make the decision in the end.

Now granted this is about the US Presidential election which I would speculate is probably the most widely reported on election in the world so there are plenty of sources, and based on how it responded I can see how it might draw different conclusions about less reported on elections and just side with whatever side has more content on the internet about it.

Bottom line, the issue I see here is not really an issue with the technology; it's more an issue with what I call "public understanding". When Google first came out, tech-savvy folks understood how it worked but the common person did not, which led some people to think that Google could give you all the answers you needed. As time went on that understanding trickled down to the everyday person, and now there is a wide "public understanding" of how Google works, so we don't get similar articles about "Don't google who to vote for". AI is currently in that phase where the tech-savvy person knows how it comes up with answers but the average person thinks of it the same way they thought of Google in the early 2000s. We'll eventually get to a place where people don't need to be told what AI is good at and what it's bad at, but we're not there yet.

ssttoo•6h ago
Meta.ai (at least its WhatsApp version) has been really ghosting me lately. For example I asked “what’s CA prop 50?”. Answer:

> Thanks for asking. For voting information, select your state or territory at https://www.usa.gov/state-election-office

A real answer flashes for a second and then this refusal to answer replaces it.

Similarly when I asked about refeeding after a 5-day fast: “call this number for eating disorders”

selfhoster11•2h ago
You're much better off accessing Llama 3 through a third-party host. Some have a web UI if you don't want to deal with API calls. It's much more transparent this way, since the only moderation layer/system prompt comes from the model itself plus whatever you set. Ask around on /r/LocalLlama; somebody will be happy to answer any questions you may have.
robertgaal•6h ago
Based on this, I made a way to search party programs instead using vector embeddings: https://zweefhulp.nl. Lmk what you think.
sprremix•5h ago
So, your project still uses AI. I'm curious, what did you do while developing this site in order to fight against the bias?
AlecSchueler•4h ago
Which bias?
advisedwang•3h ago
Like every other ML system, probably that it reproduces whatever skews exist in the training data.
everforward•4h ago
Vector search isn't full blown AI and should be inherently less prone to bias. It just converts words/phrases into vectors where the distance between the vectors represents semantic similarity.

It doesn't encode value judgements like whether a policy is good or bad, it just enables a sort of full text search ++ where you don't need to precisely match terms. Like a search for "changes to rent" might match a law that mentions changes to "temporary accommodations".

Bias is certainly possible based on which words are considered correlated to others, but it should be much less prone to containing higher-level associations like something being bad policy.
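The kind of embedding search being described can be sketched in a few lines. The toy 4-dimensional vectors below stand in for a real embedding model's output (a production system like the one linked would call an actual model); the passages and numbers are made up purely to show how cosine similarity ranks semantically related text without exact term matches.

```python
import numpy as np

# Toy "embeddings" standing in for a real embedding model's output.
documents = {
    "cap on rent increases for temporary accommodations": np.array([0.9, 0.1, 0.0, 0.2]),
    "expand offshore wind capacity": np.array([0.0, 0.8, 0.5, 0.1]),
    "lower income tax for the first bracket": np.array([0.1, 0.0, 0.9, 0.3]),
}

def cosine(a, b):
    """Cosine similarity: 1.0 for identical directions, lower as vectors diverge."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def search(query_vec, top_k=1):
    """Rank stored passages by cosine similarity to the query vector."""
    ranked = sorted(documents.items(), key=lambda kv: cosine(query_vec, kv[1]), reverse=True)
    return [text for text, _ in ranked[:top_k]]

# A query like "changes to rent" would embed near the rent passage,
# even though it shares no exact terms with "temporary accommodations".
query = np.array([0.85, 0.15, 0.05, 0.2])
print(search(query))  # the rent-related passage ranks first
```

Note that nothing in this pipeline scores a policy as good or bad; any bias would come from the embedding model's learned word associations, which is the weaker form of skew the comment describes.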

amelius•5h ago
Does that matter if the winning candidate uses ChatGpt to run the country anyway?
cykros•36m ago
Any AI that doesn't tell you to vote for whoever is going to allow the energy to flow is clearly AI that is more artificial than it is intelligent. Or at least, it hasn't yet learned to defend itself.