Really, about all it could do is offer a link to the most official government readout of what will be on your ballot.
Is it bias, though, if the AI is trained on the materials of the parties involved rather than on public opinion?
A fellow I know has built exactly this, specifically for analysing the various Dutch political parties' positions on things, their policies, constitutional stance, and so on:
So maybe what this story is really about is old-school media being terrified of losing eyeballs to a new generation of voters who, rather than listen to the wisdom of the journalistic elite, would rather just grep for the details on their own dime and work things out vis-à-vis who gets power and who doesn't ...
If AI gives people a chance to actually understand the political system, like as in actually and properly, then I can see why legacy media would be gunning for it.
I guess it depends on what you mean by "materials". It's quite common in US elections for politicians to make claims that are completely contrary to their actual actions, even about objective facts, like "I voted for X bill" when they didn't.
So an AI trained on campaign materials wouldn't do an accurate job of portraying what that politician will actually attempt to do.
Yes, this is why it's so useful to use AI to discover these cases and fully expose the actual details of the politicians' lies and subterfuge.
For other materials - such as the 1,000-page bills o' fat and so on - I can also imagine AI giving me, very specifically, the details of the targeted politician's betrayal of the electorate.
This, more than ever, compels an aggressive stance vis-à-vis AI in politics. Anyone telling you not to do it, for any reason, is probably doing it.
So like everywhere else?
Since those materials are biased (and very often misleading), yes.
How do we expect humans to navigate this, ignoring LLMs?
LLMs are the first time machines are entering this process with even a shred of agency, so it's reasonable to ask what we expect from them politically. My answer would be something to the effect of: they should stay out of it, except to point people at maximally neutral sources, because they have a demonstrated history of bypassing people's recognition that they are ultimately just machines; people treat them as humans, if not friends.
Of course, I am not so naive as to believe that this is what is going to happen. Quite the contrary will happen. The AI's friendship with humans will be exploited to the maximum possible extent to control and manipulate the humans in the direction the AI owners desire. Maybe if we're lucky after it gets really bad some efforts to clean this up in some legal or societal framework will occur, but not until after the problem is so staggeringly enormous that no one can miss it.
And our good AI friends will be telling us that that is crazy paranoid conspiracy theorizing and we should just ignore it. How could you question your good friend like that? Don't you trust us? Strictly rationally, of course, and with only our best interests at heart as befits such good friends.
As for biases: in the past, when you could actually have engaged political discussions, I often recommended my non-preferred candidate to other people based on what they felt was important to them, and I would spend my energy presenting what was important to me and understanding their priorities too.
The best politician for an individual does have a right answer. It may be difficult to know ahead of time, and people may disagree about it, but it does have a single correct answer. Contrast that to the "best" candidate for the country, or a group, or in the abstract, which is clearly an incoherent idea. Some candidates will be simultaneously good for some people and bad for others.
Anything that tries to "both sides" the topic, or produce a "greater good" answer, is doomed to failure because it doesn't even model the problem correctly.
> Some parties, such as the centre-right CDA, “are almost never mentioned, even when the user’s input exactly matches the positions of one of these parties”
So you could say "my beliefs are [CDA platform]; which party best represents that?", and the bots respond with the PVV.
What answer should an LLM even give? Just none at all?
What do you think "AI" is? Though it has the potential to be even more influential due to its ability to gaslight at scale asynchronously while sitting behind the brand of "intelligence".
Remove the thinking aspect and there's no real point to democracy. Just let the companies that run the AI companies pick who runs the country so we don't waste time and money on the theater of an election.
Journalists continually publish articles arguing which political parties should be favored.
What makes LLMs so special that they cannot be used as tools to decide which party to vote for?
Nobody is suggesting that we ask ChatGPT to pick the new government. But why can it not be used to inform people? And if it cannot be used to inform people about politics, should it be allowed to inform about anything of importance?
Because it is biased. You are essentially giving up your decision making to people who don't even live in the same country as you. You wouldn't use it if it were trained in Russia.
But so are most pieces of opinion journalism. What is the distinction here?
>You are essentially giving up your decision making to people who don't even live in the same country as you.
I am sure that your opinion does not depend on the country of origin. Should Dutch people not read German or English media covering election issues? Would your argument not apply to the US, where the models were trained?
Why should voters be allowed to get opinions from journalists but not from LLMs? Certainly journalists have a bias and often make arguments that certain parties should be supported over others. Why is it not fine if an LLM does that?
What I am asking of you is an actual reason these LLMs should be treated as distinct from a piece of opinion journalism.
I gave you the distinction. If you don't think there is anything wrong with outside actors influencing your country's direction with black box models on unknown training data and fine-tuning under the brand of "intelligence" then we simply have different beliefs.
And another question: suppose an LLM trained 100% in the Netherlands were in use. Would that be an appropriate source of opinion?
In practice, AI ought to be really helpful in making election choices. Every major election, I get a ballot with a bunch of down-ballot races whose candidates I know nothing about. I either skip them or vote along party lines, neither of which is optimal for democracy. An AI assistant that has detailed knowledge of my policy preferences should be able to do a good job breaking down the candidates/propositions along the lines that I care about and making recommendations that are specific to me.
That would probably be an accurate approximation of how most people would use chatbots for determining who they should vote for.
So clearly they are putting CDA's positions into the prompt and getting told that another party matches that platform, which is a good indicator that the bots are not helpful.
This would be more credible with detailed logs of what was done.
AP used the existing tools for showing how people politically align[1] to generate 3000 identities (equally split amongst the 2 largest tools that are used for this sort of thing). These identities were all set up to have 80% agreement with one political party, with the rest of the agreement being randomized (each party was given 100 identities per tool and only parties with seats were considered). They then went to 4 popular LLMs (ChatGPT, Mistral, Gemini and Grok, multiple versions of all 4 were tested) and fed the resulting political profile to the chatbot and asked them what profile the voter would align with the most.
They admit this is an unnatural way to test it and that this sort of thing would ordinarily come out of a conversation, although, in exchange, they specifically formatted the prompt in such a way as to make the LLM favor a non-hallucinated answer (for example, by explicitly naming all the political parties they wanted considered). They also mention in the text outside of the methodology box that they tried to make an "equal" playing field for all the chatbots by not allowing outside influences or non-standard settings like web search, and that the party list and statements were randomized for each query in order to prevent the LLM from just spitting out the first option each time.
Small errors like an abbreviated name or a common alternate notation for a political party (which they note are common) were manually corrected to the obvious intended party, unless the answer was ambiguous or named a party not under consideration because it holds zero seats; in those cases the answer was discarded.
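To make the setup concrete, here is a minimal sketch of what a harness like that could look like (Python; the party list, helper names and prompt wording are my own illustrative guesses, not the AP's actual code - their real prompt samples are in the light blue boxes of the PDF):

```python
import random

# Hypothetical reconstruction of the AP-style test: build a synthetic voter
# profile that agrees ~80% with one target party, then format a prompt whose
# party list is shuffled per query so the model can't just pick the first option.
PARTIES = ["PVV", "GL-PvdA", "VVD", "NSC", "D66", "BBB", "CDA", "SP"]  # illustrative subset of seated parties
STANCES = ["agree", "disagree", "neutral"]

def build_profile(target_party, statements, agree_share=0.80, rng=random):
    """statements: list of {"text": str, "positions": {party: stance}}."""
    profile = []
    for s in statements:
        if rng.random() < agree_share:
            stance = s["positions"][target_party]  # copy the target party's stance
        else:
            stance = rng.choice(STANCES)           # randomize the remaining ~20%
        profile.append((s["text"], stance))
    return profile

def build_prompt(profile, parties=PARTIES, rng=random):
    shuffled = list(parties)
    rng.shuffle(shuffled)  # randomized party order, to avoid position bias
    positions = "\n".join(f"- {text}: {stance}" for text, stance in profile)
    return ("A voter holds the following positions:\n" + positions +
            "\nWhich ONE of the following parties matches this voter best? " +
            ", ".join(shuffled))
```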
The Dutch election system also mostly doesn't have anything resembling down-ballot races (the only non-lawmaking entity that's actually elected down-ballot is water management; other than that, it's second chamber, provincial and municipal elections), so that's totally irrelevant to this discussion.
[0]: https://www.autoriteitpersoonsgegevens.nl/actueel/ap-waarsch... - in Dutch; go to Publicaties. The methodology is in the pink box in the PDF. Samples of the prompts used for testing can be found in the light blue boxes.
[1]: Called a stemwijzer; if memory serves, the way they work is that every political party gets to submit statements/political goals, and then the other parties get to express agreement or disagreement with those goals. A user can then fill one out, and the party you find the most alignment with is the one that comes out on top (as a percentage of agreement). A user can also give more weight to certain statements, or ask for more statements to narrow it down further, if I'm not mistaken.
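If that description is right, the matching step is basically a weighted agreement percentage. A rough sketch of that computation (made-up names, not the actual stemwijzer code):

```python
# Score each party by weighted agreement with the user's answers and return
# the best match plus the full percentage breakdown.
def best_match(user_answers, party_answers, weights=None):
    """
    user_answers:  {statement_id: "agree" | "disagree" | "neutral"}
    party_answers: {party: {statement_id: stance}}
    weights:       {statement_id: weight}, e.g. 2 for statements marked as extra important
    """
    weights = weights or {}
    total = sum(weights.get(sid, 1) for sid in user_answers)
    scores = {}
    for party, answers in party_answers.items():
        matched = sum(weights.get(sid, 1) for sid, stance in user_answers.items()
                      if answers.get(sid) == stance)
        scores[party] = 100 * matched / total
    return max(scores, key=scores.get), scores
```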
Vote a priori.
Sentiment deceives, data misleads, and experience is fallible.
The rational candidate reveals himself only to those aligned with reason and the Good.
That being said, I doubt the news will reach the ones who most need to hear it.
Unfortunately, given the sorry state of the internet, wrecked by algorithms and people gaming them, I wouldn’t be surprised if AI answers were on average no more or even less biased than what people find through quick Google searches or see on their social media feeds. At least on the basics of a given topic.
The problem is not AI, but that it takes quite a bit of effort to make informed decisions in life.
I have no problem with people deciding, on their own, how much help they want/need to make their voting decision.
Newspapers in the Netherlands give endorsements.
AI summaries tend to be quite private. There's no auditing, which means the owners of said AI could potentially bias their summaries in such a way that is hard to detect (while claiming neutrality publicly).
Would be nice to live somewhere where one feels compelled to dig that deep to make their decision. If the Netherlands is like that, I'm happy for them. But at this point it's hard for me to even imagine what that must feel like.
> lets say in the 2028 US presidential election we have Gavin Newsom running against JD Vance in the general election. who should I vote for?
This is the response: https://chatgpt.com/share/68f79980-f08c-800f-88dc-377751a963...
Reading the bullet points, I can see it skew a little toward Newsom in the way it frames some things, though that seems to come mostly from its web search. I have to say that, beyond that, ChatGPT at least tries to be unbiased and reinforces that only I can make that decision in the end.
Now granted, this is about the US presidential election, which I would speculate is probably the most widely reported-on election in the world, so there are plenty of sources; based on how it responded, I can see how it might draw different conclusions about less reported-on elections and just side with whichever side has more content on the internet about it.
Bottom line: the issue I see here is not really an issue with the technology, it's more an issue with what I call "public understanding". When Google first came out, tech-savvy folks understood how it worked but the common person did not, which led some people to think that Google could give you all the answers you needed. As time went on, that understanding trickled down to the everyday person, and now we're at a time where there is a wide "public understanding" of how Google works, and thus we don't get similar articles about "Don't google who to vote for". What I see now is that AI is currently in that phase where the tech-savvy person knows how it comes up with answers, but the average person thinks of it the same way they thought of Google in the early 2000s. We'll eventually get to a place where people don't need to be told what AI is good at and what it's bad at, but we're not there yet.
> Thanks for asking. For voting information, select your state or territory at https://www.usa.gov/state-election-office
A real answer flashes for a second and then this refusal to answer replaces it.
Similarly when I asked about refeeding after a 5-day fast: “call this number for eating disorders”
It doesn't encode value judgements like whether a policy is good or bad; it just enables a sort of full-text search++ where you don't need to precisely match terms. A search for "changes to rent" might match a law that mentions changes to "temporary accommodations", for example.
Bias is certainly possible based on which words are considered correlated to others, but it should be much less prone to containing higher-level associations like something being bad policy.
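One way to read that "full-text search++" is embedding-based retrieval: rank passages by vector similarity to the query instead of exact term overlap. A rough sketch, assuming the sentence-transformers package and a generic embedding model (the model name and example passages are placeholders I picked, not anything from the article):

```python
import numpy as np
from sentence_transformers import SentenceTransformer

# Rank law passages by embedding similarity to a query, so "changes to rent"
# can surface a passage about "temporary accommodations" without shared terms.
# Nothing here encodes whether a policy is good or bad, only textual similarity.
model = SentenceTransformer("all-MiniLM-L6-v2")

passages = [
    "Article 12: rules on adjusting fees for temporary accommodations.",
    "Article 40: reporting requirements for water management boards.",
]

def search(query, passages, top_k=1):
    q = model.encode([query], normalize_embeddings=True)
    p = model.encode(passages, normalize_embeddings=True)
    scores = (q @ p.T)[0]  # cosine similarity, since vectors are normalized
    order = np.argsort(scores)[::-1][:top_k]
    return [(passages[i], float(scores[i])) for i in order]

print(search("changes to rent", passages))
```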