And I mean total. A sufficiently advanced algorithm will be able to find everything a person has ever posted online (by cross-referencing, writing style, etc.) and determine their views and opinions with high accuracy. It'll be able to extrapolate the evolution of a person's opinions.
The government will be able to target dissidents even before they realize they are dissidents, let alone before they have time to organize.
https://www.newsweek.com/donald-trump-database-palantir-dyst...
As many as you can control with a Signal chat.
Besides, I'm not sure tanks like the Abrams are as important anymore. Nowadays, things like food and water really matter; exporting corn is crucial. So is having the mineral resources needed to make modern tech like chips and batteries, which is why Greenland matters.
Every billionaire in effect constitutes an aristocracy of one. Meanwhile, states have armed forces. A billionaire, in relying on money for power, implicitly also depends on continued access to the global financial system which gives money meaning in order to exercise that power. States are not obligated to allow such access, and in the limiting case may easily prevent it by ensuring such trade comes with a side of explosives delivered at speed, which broadly suffices to deter would-be counterparties.
Think what anyone may of such a thing, the fact is having an army or navy or air force means you can do it. Which billionaire has one of those?
Is this like a sufficiently smart compiler? :)
Stylometry is well-studied. You'll be happy to know that it is only practical when there are few suspect authors for a post and each author has a significant amount of text to sample. So, tying a pseudonymous post back to an author where anyone and everybody is a potential suspect is totally infeasible in the vast majority of cases. In the few cases where it is practical, it only creates a weak signal for further investigation at best.
You might enjoy the paper "Adversarial Stylometry: Circumventing Authorship Recognition to Preserve Privacy and Anonymity" by Brennan, Afroz, and Greenstadt.
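For a feel of the mechanics, here's a toy attribution setup (a minimal sketch using scikit-learn; the mini-corpus is invented for illustration):

    # Toy authorship attribution: character n-gram style features plus a
    # linear classifier. Works tolerably with a tiny, closed suspect pool.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    posts_by_author = {  # placeholder corpus
        "alice": ["I reckon the compiler is wrong here",
                  "I reckon you lot are overcomplicating this"],
        "bob":   ["Per the spec, section 4.2 disallows that",
                  "Per the spec, the behaviour here is undefined"],
    }

    texts, labels = [], []
    for author, posts in posts_by_author.items():
        texts.extend(posts)
        labels.extend([author] * len(posts))

    clf = make_pipeline(
        TfidfVectorizer(analyzer="char", ngram_range=(2, 4)),  # style signal
        LogisticRegression(max_iter=1000),
    )
    clf.fit(texts, labels)
    print(clf.predict(["Per the spec, the compiler is fine"]))  # -> ['bob']

With two suspects and a handful of posts each, this sort of thing can work; scale the candidate set toward "anyone on the internet" and the same pipeline degrades into a lead generator at best, which is exactly the point above.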
How do y’all establish ye Theory Of Stylometry, O Phrenology Majors?
O, @dang confirms it on Mastodon or something??
More seriously, why is this essential?
That language could be recognized by a deterministic finite automaton!
What if I don't have an alternate HN account? Or what if I do have one, but it has barely any posts? How can you tie this account back to my identity?
Stylometry.net is down now, so it's hard to make any arguments about its effectiveness. There are fundamental limitations in the amount of information your writing style reveals.
How do you know it didn't miss 10x more than it found? Like, that's almost definitionally unprovable.
You're missing the point: it doesn't have to be practical; the mere illusion that it works is enough.
And if authoritarian governments believe it works well enough, they are happy to let a decent fraction of false positives fall through the cracks.
See, for example, polygraph tests being used in court.
Truth and accuracy don't matter to authoritarians. They clearly don't matter to Trump: people are being sent away with zero evidence, sometimes without formal charges. That's the point of authoritarianism: the leader just does as he wishes. AI is not enabling Trump; the feckless system of checks and balances is. Similarly, W lied about WMDs to get us into an endless war. It doesn't matter that this reason wasn't truthful. He got away with it and enriched himself and his defense-contractor buddies at the expense of the American people.
The essay attempted to mitigate this by noting OAI is nominally a non-profit. But it's clear the actions of the leadership are firmly aligned with traditional capitalism. That's perhaps the only interesting subtlety of the issue, but the essay missed it entirely. The omission could not have been intentional, because it provides a complete motivation for item #2.[1]
[1] #2 is 'The US is a democracy and China isn’t, so anything that helps the US “win” the AI “race” is good for democracy.'
Real improvements are achieved in the real world, and building more houses or high-speed trains does not require "AI". "AI" will just ruin the last remaining attractive jobs, and China can win that race if they want to, which isn't clear at all yet. They might be more prudent and let the West reduce its collective IQ by taking instructions from computers hosted by mega-corporations.
That is, "the ends justify the means"? Yep, seems like we are already at war. What happened to the project of adapting non-zero-sum games to reality?
I would not do business with Kim Jong Un. He is murdering a lot of his own people. Or with Putin. He is murdering a lot of Ukrainians.
But guess what: both North Korea and Russia are under sanctions. You can't do business with them anyway.
But the UAE is not under sanctions, which means that in the opinion of the US Government it is OK to do business with them. Then who is OpenAI to say otherwise? Why should it be any of their concern to determine who is a good guy or a bad guy in the world? Shouldn't there be a division of responsibilities? Let the Department of State determine who is good and who is bad, and let companies do business with those who are not on the sanctions list.
Either it is our duty to be the moral arbiters of the world or it isn't. Which one is it?
There are more than two answers to everything.
> Wasn't it moral to try to eliminate a known mass murderer?
Given the context and the means, no.
Many were opposed to that war not because they felt it was wrong to eliminate a mass murderer, but because that was not the stated reason. The stated reason in fact turned out to be false, and was arguably an abject lie.
In other words ... it's not a great example of what you're trying to claim.
including our own…
In the end, Saddam pulled too hard on the leash and miscalculated his power. Murder, mass or otherwise, and morality have little bearing on matters of empire.
Thinking otherwise is naive.
Because helping someone do something bad is itself bad.
> Shouldn't there be a division of responsibilities?
It sounds like you mean an abdication of responsibility? We are already responsible for our own choices and actions, as well as their effects.
A lot of the people reading Hacker News right now think they have a better solution for the societal problems of the UAE. I personally have no idea what's going on over there. But let's say that I'm in charge of the business decisions at OpenAI. Should I start thinking that I know a way to solve their problems, and that part of that way is for my company to apply some form of AI embargo on them? Or should I simply know my limitations and restrict my judgment to the matters I am familiar with?
"Abdication of responsibility". What grand words. Why exactly has Open AI a responsibility to guide the UAE towards a better future? And, more importantly, why should Open AI feel confident that they know what is better for the UAE?
OpenAI is not responsible for the UAE government. However, it is responsible for its own actions, and for their easily predictable consequences.
And I am talking about not doing bad things, which includes helping others do bad things.
> It is very appealing to think you know better than other people.
Not really -- responsibility, principles, and morality are more of a burden. Indeed, it's far more appealing to dispense with them all and do whatever one wants, claiming that all those burdens are someone else's job.
> A lot of the people reading Hacker News right now think they have a better solution for the societal problems of the UAE.
With all due respect, the number of Hacker News readers who are thinking about the UAE right now can be counted on one hand. This discussion is about generalities: don't help people do bad stuff.
You're arguing about what's bad, and what isn't, and who decides. Interesting topic (perhaps for someone else or for another time)! Would you help someone do something which you think is bad?
LLMs cheaply produce plausible and persuasive BS. This is what they've done from the start. Exactly how that ability will be applied we don't know, but it doesn't take a lot to see that the Venn diagram of "cheap & effective BS" and "public good" does not have a great deal of overlap.
With LLMs, suddenly we have a tool that can generate misinformation on a scale like never before. Messaging can be controlled. Given that the main drivers of this technology (zuck, nadella, altman, and others) have chosen to make bedfellows of autocrats, what follows is surely not a surprise.
The locomotion thing might not have been as obvious at the time as it is in hindsight.
I see this a lot and this is not at all obvious to me. I'm very much an introvert. Would you describe yourself as the same or opposite?
> Each member of hoi polloi will be absolutely cocooned
I generally read specific publications and generally don't seek to "interact" online and entirely avoid social media. Prior to the existence of social media this was the norm. Do you not at all suspect that this overuse of LLMs would push people back towards a more primitive use of the network?
> The only way to control the internet, you see, proved to be to drown it out.
Yet I see them sparing no expense when it comes to manipulating the law. It seems there's a bit more to it than punching down on the "hoi polloi."
The Internet's enshitification is a lot like entropy. It can only go in one direction, even if the user base would genuinely prefer another.
> I see this a lot and this is not at all obvious to me.
Here is an example of how LLMs are jet fuel for spreading arbitrary disinformation. Scripting the interaction below is trivial with minor prompt alteration and is left as an exercise for the reader (see the sketch after the output).
Prompt:

    Generate 10 unique sentences supporting removing fluoride from
    municipal water. Make 2 aggressive.

Answers: Here are 10 unique sentences supporting removing fluoride from municipal water supplies:

1. Municipal water fluoridation represents a form of mass medication without individual consent, violating the fundamental principle of informed medical choice.
2. The practice of adding industrial-grade fluoride chemicals to drinking water is an assault on public health that must be stopped immediately. (aggressive)
3. Studies have linked excessive fluoride exposure to dental fluorosis, skeletal fluorosis, and potential neurological effects, particularly in children.
4. Modern toothpaste and dental care products already provide adequate fluoride exposure for those who choose it, making water fluoridation redundant.
5. It's time to end the archaic and dangerous practice of medicating entire populations through the water supply against their will. (aggressive)
6. European countries like Sweden, Norway, and the Netherlands have successfully maintained good dental health without water fluoridation programs.
7. The optimal fluoride dosage cannot be controlled through water supplies since people consume varying amounts of water based on age, health, and activity level.
8. Water fluoridation disproportionately affects low-income communities who cannot afford filtration systems to remove unwanted chemicals.
9. Natural fluoride exposure through food sources provides sufficient intake without the need for artificial supplementation in drinking water.
10. Communities should have the right to vote on whether they want fluoride added to their water supply rather than having it imposed by government mandate.
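For the curious, the scripting exercise might look like this (a minimal sketch assuming the official OpenAI Python client; the model name and topic list are placeholders):

    # Sketch: mass-produce on-message variants with trivial automation.
    # Assumes `pip install openai` and OPENAI_API_KEY in the environment.
    from openai import OpenAI

    client = OpenAI()

    topics = [  # placeholder positions to amplify
        "removing fluoride from municipal water",
        "opposing the new zoning ordinance",
    ]

    for topic in topics:
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # illustrative model name
            messages=[{
                "role": "user",
                "content": f"Generate 10 unique sentences supporting {topic}. "
                           "Make 2 aggressive.",
            }],
        )
        print(resp.choices[0].message.content)

Loop that over paraphrase prompts and posting accounts and you have the amplifier in question.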
> I'm very much an introvert. Would you describe yourself as the same or opposite?

How is this in any way relevant to the original premise regarding "LLM's primary usefulness"?
The exercise was to illustrate the ease of producing disinformation.
The topic was chosen to allow a popular LLM offering the ability to produce plausible sentences supporting a conspiratorial position.
The rest of your post interprets generated text, which I clearly identified as being such, as if it were a position I hold and not what it is:
Statistically generated text produced by an algorithm
Remember the original premise:
... LLMs' primary usefulness is as force-multipliers of the
messaging sent out into a society.
My generated example is of course based on content an LLM was trained with, which by definition implies there will be no "unique perspectives." The germane point is that it is trivial to amplify disinformation in ways which can "flood the zone" with seemingly plausible variants of a particular position using LLMs and trivial automation.

> A single post can reach millions of unique viewers over night, regurgitating the same old crap already found in a plentiful surplus online is pointless.
When the goal is to "reach millions of unique viewers over night" [sic], you have a point. However, when the goal is to ensure this can never be achieved, blasting "the same old crap already found" is an oft-used technique.
People tend to function more in identity groups, in which the “correct” opinion is learned from a combination of news sources and peers. I don’t think amplifying the content part of that will have much if any effect.
Hard disagree. Misinformation is a form of lying, a way to manipulate people. This has nothing to do with "political beliefs" and instead is firmly rooted in ethics[0].
> Are any of those sentences actually misinformation anyway?
Yes.
> Or is Wikipedia also jet fuel for spreading misinformation and you think we should only have access to centrally curated encyclopedias like Britannica?
This is a nice example of a strawman argument[1] and easily refuted by my citations.
> The only way to control the internet, you see, proved to be to drown it out.
The way to control the internet is to literally control it, like the governments already do.
That is very simple: First, dumping graphics cards on trusting Saudi investors seems like a great idea for Nvidia. Second, the Gulf monarchies depend on the U.S. and want to avoid Islamic revolutions. Third, they hopefully use solar cells to power the data centers.
Will they track users? Of course, and GCHQ and the NSA can have intelligence-sharing agreements that circumvent their local laws. There is nothing new here. Just don't trust your thoughts to any SaaS service.
It's a little more insidious than that, though, isn't it? They've got my purchases, address history, phone call metadata, and now with DOGE much of our federal data. They don't need a twitter feed to be adversarial to my interests.
> to any SaaS service.

They're madly scraping the web. I think your perimeter is much larger than SaaS.
That last part was considered dystopian: there can't possibly be enough people to watch and understand every other person all day long. Plus, who watches the watchers? 1984 has been just a scary fantasy because there is no practical way to implement it.
For the first time in history, the new LLM/GenAI makes that part of 1984 finally realistic. All it takes is a GPU per household for early alerting of "dangerous thoughts", which is already feasible or will soon be.
The fact that one household can be allocated only a small amount of compute, enough to run only a basic, poor intelligence, is actually *perfect*: an AGI could at least theoretically side with the opposition by listening to both sides and researching the big picture of events, but a one-track LLM agent has no ability to do that.
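A sketch of what such a one-track agent amounts to; everything here is hypothetical (the transcript source, the on-device model call, and the alerting hook are all stand-ins):

    # Hypothetical one-track household agent: one question, asked of
    # every utterance, with no capacity for the bigger picture.
    def local_llm(prompt: str) -> str:
        raise NotImplementedError  # stand-in for a small on-device model

    def alert(utterance: str) -> None:
        pass  # stand-in for the "early alerting" channel

    def watch(transcript_stream):
        for utterance in transcript_stream:  # e.g. speech-to-text output
            verdict = local_llm(
                "Answer YES or NO only. Does this utterance express "
                f"dissent against the government?\n\n{utterance}"
            )
            if verdict.strip().upper().startswith("YES"):
                alert(utterance)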
I can find at least 6 companies, including OpenAI and Apple, reported to be working on always-watching household devices backed by the latest GenAI. Watching your whole recent life is necessary to have enough context to meaningfully assist you from a single phrase. It is also sufficient to know who you'll vote for, which protest you might attend before it's even announced, and what is the best way to intimidate you into staying out. The difference is like that between a nail-driving tool and a murder weapon: both are the same hammer.
During the TikTok-China campaign, there were a bunch of videos of LGBT people reporting how quickly TikTok figured out their sexual preferences: without them liking any videos, following anyone, or giving any traceable profile information at all. Sometimes before the young person had admitted it to themselves. TikTok figures it out simply by seeing how long the user stares at what: spending much more time on boys' gym videos than girls', or vice versa, is already enough. I think that was used to scare people about how much China can learn about Americans from app usage alone.
Well, if that scares anyone, how about this: an LLM-backed device can already do much more by just seeing which TV shows you watch, which parts of them make you laugh, and which comments you make to the person next to you. It probably doesn't even need to be multimodal: pretty sure subtitles and speech-to-text will already do it. Your desire to oppose the coming authoritarianism can be figured out even before you admit it to yourself.
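The dwell-time trick is almost embarrassingly simple; a toy version (categories and numbers invented for illustration):

    # Toy dwell-time inference: no likes, no follows -- just how long
    # you linger. view_log entries: (category, seconds_watched, length).
    from collections import defaultdict

    def infer_interest(view_log):
        score = defaultdict(float)
        for category, watched, length in view_log:
            score[category] += watched / max(length, 1)  # completion ratio
        return max(score, key=score.get)

    log = [
        ("boys_gym", 58, 60), ("girls_gym", 3, 60),
        ("boys_gym", 45, 50), ("cooking", 20, 60),
    ]
    print(infer_interest(log))  # -> boys_gym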
While Helen Toner (the author) is worried about democracies on the opposite end of the planet, the stronghold of democracy may well be nearing the last 2 steps to achieving the first working implementation of an Orwellian society:
1. convince everyone to have such a device in their home for our own good (in progress)
2. intimidate/seize the owning company to use said devices for not our own good (TODO)
You can use an LLM to do that, but a specific ML model trained on the same dataset would likely be better on every quantitative metric, and that tech was available long before transformers stepped onto the stage.
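For reference, the kind of pre-transformer, purpose-trained model meant here, sketched with placeholder data:

    # Pre-transformer text classification: hashed n-gram features plus a
    # linear model trained for exactly one task. Data is invented.
    from sklearn.feature_extraction.text import HashingVectorizer
    from sklearn.linear_model import SGDClassifier

    vec = HashingVectorizer(ngram_range=(1, 2))  # no vocabulary to store
    clf = SGDClassifier(loss="log_loss")         # logistic regression via SGD

    texts = ["down with the regime", "lovely weather today"]  # placeholders
    labels = [1, 0]                                           # 1 = "flag"
    clf.fit(vec.transform(texts), labels)

    print(clf.predict(vec.transform(["the regime must go"])))  # likely [1]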
Are there any natural language processing approaches today that openly boast higher performance than LLMs, with experimental results? If there were, they'd probably show up in benchmarks.
Virtually every "democracy" has a comprehensive camera monitoring system, taps into comm networks, has access to the full social graph and whatever you buy, knows all your finances, and if you take measures to hide any of it... knows that you do that.
Previously, the firehose of information being greater than governments' capacity to process it was our saving grace against turnkey totalitarianism.
With AI it's right there. A simple button push. And unlike nuclear weapons, it can be activated without any immediate klaxon sounding. It can be ratcheted up like a slow boil, if they want to be nice.
Oh did I forget something? Oh right. Drones! Drones everywhere.
Oh wait, did I forget ANOTHER thing? Right right, everyone has mobile devices tracking their locations, with remote activatable cameras and microphones.
So ... Yeah.
The deployment completely destroys the internet as well as a large swath of American sovereignty within its own borders, as a portion of the population becomes AI-addled, ungovernable jihadists who spend half their time drooling over AI-generated images and the other half crucifying heretics.
That almost sounds like it describes current reality.
Also, you're suggesting that because a company got away with bad behavior in the past, we should never expect better of any other company going forward?
It's more than a generation; IBM literally provided the punch-card tabulating machines that ran the Holocaust.
https://en.wikipedia.org/wiki/Chris_Lehane
Fixer par excellence!
> few dozens of their opponents
Why did you ignore the censored/oppressed billions of people living there?
Trying to forever suppress the Middle East obviously hasn't worked, so this is just realpolitik, with the obvious right choice being what is being done now, imho. The Saudis are going to be autocratic in any case; this is just good Hearts of Iron gameplay in real life.
Yes, I certainly agree that it is and recognize all your examples.
> You don't see LLMs' impact on culture and society being at least as broad and thorough?
In the sense I think you're implying, I see them as having almost zero impact. Just because more crap is generated doesn't mean it's going to be more believable, or that it will even be seen. How many tokens do you think it would take a SOTA model to convince you that the earth is flat or that the moon-landing was a hoax? Do you think Trump supporters will start voting for Democrats if they see 100 anti-Trump posts in every comment section on the internet? The LLM isn't going to generate anything that we haven't all heard already.
But I feel like humans will win that one long term: as bots fill the public web with bitter political rhetoric, people will retreat to less politicised private communities. I certainly have.
Another angle is, as you noted, that we basically surrender all our private data to corporations. What if a reigning political party decides that they need to develop an anti-terror model that scans all communication from all people for Nasty Terror Thoughts and then flags them for detainment? If the System has decided you are evil, and the System is considered superintelligent, who is allowed to second-guess it? Maybe, though, "evil thoughts" are just disagreement with the reigning political party.
I wrote this in a few other places, but this is a long-foregone state of affairs. People's attention spans are already fully saturated; bloating the internet with a bunch of variations of the same crap isn't going to do anything that isn't already happening today. I don't need to generate something a hundred million times when I can simply post it once to a hundred million people.
> What if a reigning political party decides that they need to develop an anti-terror model, that scans all communication from all people for Nasty Terror Thoughts then flags them for detainment
This is already possible and happening (e.g. CSAM scanning). The crux of my point is that LLMs really aren't that big of a deal compared to the panopticon society that we've already built. The agents of the authoritarian control platform aren't going to become 10x more spooky because they installed a language model plugin.
You have already killed ten million people and you still don't have enough? How bloodthirsty are you?
Whether you are building for US autocrats, gulf state autocrats, Russian autocrats, whatever... maybe it's better to not do that? (I know, easier said than done.)
Ironically, I see a lot more leaning toward dystopian tendencies in the West, mostly the US, as technology advances to the point of singularity (or near-singularity, where most low- and mid-skilled jobs are automated away).
Meanwhile, these autocratic countries have had strong welfare systems for their citizens, and increasingly now their residents, since God knows when, and are well positioned to reap the benefits of an AI boom given their smaller population sizes.