The notion that AI is reshaping American politics is a clear example of a made-up problem being propped up to justify a very real "solution".
Who's reading these messages? Other LLMs?
LLMs tend to be very long-winded. One of my personal "tells" of an LLM-written blog post is that it's way too long relative to the actual information it contains.
So if the interns are getting multi-page walls of text from their constituents, I would not be surprised if they are asking LLMs to summarize.
Recently at work, a team produced a document and asked for review on it. They mentioned that they had experimented with LLMs to write it (they didn't specify to what extent). Then they suggested you could feed it into an LLM to paraphrase it, to help with the review.
So yeah. This is just the world we live in.
rough details -> LLM -> product -> summarize with LLM -> feedback -> revisions -> finished product
Where no single person or group knows what the finished writing product even is.
* There is continuing consolidation in traditional media, with outlets literally being bought up by moneyed interests.
* The AI companies are all jockeying for position and hemorrhaging money to do so, and their ownership and control rests, again, with moneyed interests.
* This administration looks to be willing to pick winners and losers.
I think this all implies that the way we see AI used in politics in the US is going to be, on net, in support of the super wealthy and of the current administration.
The other structural aspect is that AI can simulate grassroots support. We have already seen bot farms and such pop up to try to drive public opinion at the level of forum and social media posts. AI will automate this process and make it 10 or 100x more effective.
So both on the high and low ends of discourse, we can expect AI to push something other than what is in the interests of the common person, at least insofar as the interests of billionaires and political elites fail to overlap with those of common people.
Peter Pomerantsev's books are eye-opening on the previous generation of this class of tactics, and it's easy to see how LLM technology + $$$ might be all you need to run a high-scale influence operation.
I guess I just view bad information as a constant, like bad actors in cybersecurity, for example. So I mean yeah... it's too bad. But it's not a surprise, and not really a variable you can control for. The whole premise of a democracy is that people have the right to vote however they want. There is no asterisk to that, in my opinion.
I really don't see how one person, one vote can survive this idea that people are only as good as the information they receive. If that's true, and people get enough bad information, then you can reasonably conclude that people shouldn't get a vote.
Ban bots from social media and all other speech platforms. We agree that people ought to have freedom of speech. Why should robots be given that right? If you want to express an opinion, express it. If you want to deploy millions of bots to impersonate human beings and distort the public square, you shouldn’t be able to.
I believe our real civic bottleneck is volume, not apathy. Omnibus bills and “manager’s amendments” routinely hit thousands of pages (the FY2023 omnibus was ~4,155 pages). Most voters and many lawmakers can’t digest that on deadline.
We could solve this with LLMs right now.
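As a concrete sketch of what "solve this with LLMs right now" could look like: a map-reduce pass that chunks the bill text, summarizes each chunk, then merges the partial summaries. The model name, chunk size, and prompt wording below are illustrative assumptions, not a tested pipeline.

```python
# Map-reduce summarization sketch for a multi-thousand-page bill.
# Assumes the openai Python SDK and an OPENAI_API_KEY in the environment;
# model choice, chunk size, and prompts are placeholders.
from openai import OpenAI

client = OpenAI()

CHUNK_CHARS = 20_000  # naive fixed-size chunks; real code would split on bill sections


def summarize(text: str, instruction: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model
        messages=[
            {"role": "system", "content": instruction},
            {"role": "user", "content": text},
        ],
    )
    return resp.choices[0].message.content


def summarize_bill(bill_text: str) -> str:
    # Map: summarize each chunk independently, flagging unrelated riders.
    chunks = [bill_text[i:i + CHUNK_CHARS]
              for i in range(0, len(bill_text), CHUNK_CHARS)]
    partials = [
        summarize(c, "Summarize this excerpt of a federal bill. Flag any "
                     "spending items or provisions that look unrelated to "
                     "the bill's stated purpose.")
        for c in chunks
    ]
    # Reduce: merge the partial summaries into one overview.
    return summarize("\n\n".join(partials),
                     "Combine these excerpt summaries into one plain-language "
                     "overview, listing flagged provisions separately.")
```

Even a rough pass like this, run against the ~4,155 pages of an omnibus on the day it drops, would give staffers and reporters something to triage from.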
We've already seen several pork inclusions called out by the press, discovered only because of AI, but it will be a while before this really starts having an impact. Hopefully it breaks the back of the corruption permanently: the people currently in political positions tend not to be the most clever or capable, and to game the system again they'll need to be more clever than the best AI used to audit them and hold them to account.
This country is doomed to collapse. This is about the time when Rome decided it was too much overhead to manage the whole empire, so they split into two empires. We're on such a mountain of cards that we're considering running our representative government with AI.
Your optimism just reinforced my blackpill...
For the vast majority of people voting, though, I think a) they already know who they're voting for because of their identity group membership ("I'm an X person so I only vote for Y party") or b) their voting is based on fundamental issues like the economy, a particularly weak candidate, etc., and therefore isn't going to be swayed by these marginal mechanisms.
In fact I think AI might have the opposite effect, in that people will find candidates more appealing if they are on less formal podcasts and in more real contexts - the kind of thing AI will have a harder time doing. The last US election definitely had an element of that.
So I guess the takeaway is: if elections are so close that a tiny number of voters sway them, the problem of polarization is already extensive enough that AI probably isn't going to make it much worse than it already is.
So it matters in the same way that the billions of dollars currently put toward this small sliver of voters matter, just in a more efficient and effective way. That isn't something to ignore, but it's also not a doomsday scenario IMO.
Polarization is the symptom. The cause is rampant misinformation and engagement-based feeds on social media.
https://en.wikipedia.org/wiki/Political_polarization_in_the_...
I do agree that social media might make it worse, though. But again I don’t know if AI is really going to impact the people that are voting based on identity factors or major issues like the economy doing poorly.
I could see how AI influences people to associate their identity more with a particular political stance, though. That seems like a bigger risk than any particular viewpoint or falsehood being pushed.
To rephrase: things are so bad they can't get worse. But the beauty of life is that they always can!
Most people, in my experience, use LLMs to help them write stuff or just to ask questions. While it might be neat to see the little ways in which some political movements are using new tools to help them do what they were already doing, the real paradigm-shifting "use" of LLMs in politics will be generating content to bias the training sets the big companies use to create their models. If you could do that successfully, you would basically have free, 24/7 propaganda bots presenting your viewpoint to millions as a "neutral observer".