My personal theory is that people broadly are becoming very thick-skinned about content being pushed on them, but at the same time it hasn't occurred to them to simply disengage. (i.e., they're getting frustrated with the pushers, but aren't leaving forums, social media, YouTube, etc. so that no one can push anything on them.) In some of the darker corners of the web, you currently see this associated with the term "slop"; I assume we're all familiar with the metaphor. For most of our waking lives (assuming we're on mainstream platforms), someone is out there trying to twist your arm for your attention, to get you outraged or jealous; anything for attention.
I really think it's breeding an incredible amount of unthinking cynicism, and at least some of the negativity you find online is just fallout from all the different wars for attention. As noted, it's quite surprising how many people won't step away from this crazy attention marketplace. It's easy in principle: put down your phone and your computer, and read a book or take a walk. In practice it's more like overeating; people were never built with impulse control against novelty and social outrage, and lacking that, most people fail the test.
1. Printed ads, newspapers, billboards, magazines -- You can explain your product and show a picture of what it looks like to demonstrate the value to customers
2. Television ads, black and white -- We can demonstrate the value to customers so well!
3. Wait a minute, if we put music in the TV ads, the songs get stuck in people's heads, this is good for our brand
4. Color TV Ads -- We have all the previous benefits but can get more attention with color!
5. We can target regional TV ads in different parts of the country!
6. Oh, we can target any ad based on demographics on social media? This will be effective
7. Ok, now we want to keep targeting ads, but we're gonna A/B test multiple versions of the ads in real time to maximize effectiveness
8. Ok, I need to maximize effectiveness of this ad, let me generate an AI mockup of the product I'm selling to create an illusion of the lifestyle my brand represents
My point is, marketing has been optimized over time and will continue to optimize for profit in the future, and the result has been a divergence in the actual goal of marketing: We've gone from "Demonstrate the value of our product" to "Create an illusion of our lifestyle".
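The real-time A/B testing in step 7 is usually some form of a multi-armed bandit. A minimal sketch, assuming a Thompson-sampling approach (the variant names, CTR numbers, and class shape are all invented for illustration, not any real ad platform):

```python
import random

# Each ad variant keeps a Beta posterior over its click-through rate;
# we show whichever variant samples highest right now, so impressions
# gradually flow toward whichever ad "works" best.
class AdVariant:
    def __init__(self, name):
        self.name = name
        self.clicks = 0       # successes
        self.impressions = 0  # trials

    def sample_ctr(self):
        # Draw from Beta(1 + clicks, 1 + non-clicks)
        return random.betavariate(1 + self.clicks,
                                  1 + self.impressions - self.clicks)

def choose_variant(variants):
    return max(variants, key=lambda v: v.sample_ctr())

def record(variant, clicked):
    variant.impressions += 1
    if clicked:
        variant.clicks += 1

# Simulated traffic: variant B has a genuinely higher CTR, so over
# time the sampler funnels most impressions toward it automatically.
random.seed(0)
true_ctr = {"A": 0.02, "B": 0.05}
variants = [AdVariant("A"), AdVariant("B")]
for _ in range(5000):
    v = choose_variant(variants)
    record(v, random.random() < true_ctr[v.name])
```

The point of the sketch is the divergence the comment describes: nothing in the loop knows or cares what the ad says, only what maximizes clicks.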
Scott Galloway occasionally mentions that he thinks most consumer spending is irrational and that the purpose of marketing spend is to make people buy irrationally. Anecdotal observation supports this, most strikingly in fashion, where people buy the branded item that costs 100x (or more) as much as the equally useful unbranded item. Many other examples exist, of course, in almost every market. I tend to agree with Galloway that the goals haven't changed, only the marketing tools (and their effectiveness). Any increase in irrational behavior can be linked to tool efficacy, not to the motivation of the firm, which has remained constant.
Don't remember who said it, or I'd give credit where due...
Optimistically, the cynicism you describe could develop into a sophisticated ability to discern fraud.
There are studios that churn out crap games to capture the casual games market, which is not the gaming industry that an avid gamer would be familiar with.
Watch a kid with a tablet navigate the mobile/casual game market and you will feel sick. They flick between two dozen games in an hour, 90% of each game is locked behind microtransactions, and they get a minute or two before the unskippable ad shows up.
The ad is for another lootbox/microtransaction fueled game, or actual gambling sometimes, and it's only another couple of minutes before an ad shows up in that game too. Rinse and repeat.
Kids are having their reward circuits absolutely fried, and it is not game enthusiasts making these games, it's just regular old capitalist companies who are trying to squeeze an opportunity.
It is every bit the exploitative, uncreative industry your friend thinks it is, but I do believe it is not the same as the games industry. It's like the difference between a board game and a slot machine.
You mean thin-skinned?
None of these nutcases offer true help to society (note: neither do the extreme leftists, just so we're clear that I'm not team red or team blue), and it does no good that our corporations are actively picking a side.
- They never believed in progressive causes, and were just siding with what they believed was the social majority. (and so when they perceive the social majority has changed, they immediately follow and would follow _any_ social majority.)
- They don't agree with the current anti-progressive social movement (ie, they still hold their old beliefs) but none of them have any backbone whatsoever, and are getting in line with virtually no resistance or fight.
- All the tech company CEOs just happened to be radicalized at the exact same time.
I'm sure that #1 is the most reasonable answer, although perhaps there's a dash of #3 in there. In any case, you'd have to question whether a party-in-power (from a social movement perspective) wouldn't just encourage this trend when _they_ were the ones winning.
The principles of propaganda are well established. Edward Bernays clearly described how to plant ideas and influence public opinion a hundred years ago (https://en.wikipedia.org/wiki/Propaganda_(book)). The only thing that has changed is the speed and intensity of communication.
They always had different standards in different countries and in different circumstances.
FB has been showing lots of dog-whistle racism, and occasionally even outright overt racism, for many years. On the one occasion I reported a blatantly racist comment, they said it was not against "community standards".
They want money, and they want engagement, and they want governments to remove competition.
The criticisms of these regulations are all valid and need to be discussed, because we also don't want to create these mechanisms at the government level only for the next authoritarian president to use them for their own personal agenda. But all this discussion should be about how these companies are going to be regulated, not about whether they will be.
There's nothing stopping this hypothetical authoritarian president from creating this after they come to power.
The government controls the algorithm? Then the government pushes propaganda.
The algorithm is public? Then what kind of public algorithm? "Sort by recency", "sort by popularity", etc. will be gamed by propaganda-pushers. "Sort by closest friends" is better, but I suspect even it will be gamed by adversaries who initially push genuine interesting content and encourage you to befriend them, then shift to propaganda.
Sorry to be cynical, but I doubt you can prevent people from being attracted to and influenced by propaganda; if necessary, well-funded organizations will hire paid actors to meet people in person. You must narrow the goal, e.g. can hinder foreign propaganda by down-weighting accounts from foreign IP addresses, detecting and down-weighting foreign accounts which use residential VPNs, and perhaps detecting and down-weighting domestic people who are especially influenced by foreign propaganda to the extent they're probably being funded (but you don't know, so then you get controversy and ambiguity...)
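The gameability of each "public algorithm" option above can be made concrete. A toy sketch (the post data, weights, and account names are all invented for illustration): an account that buys engagement tops the popularity sort instantly, while the closeness sort at least forces the adversary through the slower befriend-first attack described above.

```python
# Each post: (author, minutes_old, likes, author_is_close_friend)
posts = [
    ("friend_alice",  120,     14, True),
    ("news_outlet",    30,    900, False),
    ("bot_farm",        5, 120000, False),  # engagement bought in bulk
]

def by_recency(post):
    return -post[1]   # newest first: gamed by simply posting constantly

def by_popularity(post):
    return post[2]    # most-liked first: gamed by purchased likes

def by_closeness(post):
    # friends first, then recency; harder (not impossible) to game,
    # since the adversary must first get you to befriend them
    return (post[3], -post[1])

pop_order = [p[0] for p in sorted(posts, key=by_popularity, reverse=True)]
close_order = [p[0] for p in sorted(posts, key=by_closeness, reverse=True)]
print(pop_order)    # bot_farm wins the popularity sort outright
print(close_order)  # friend_alice wins; bot_farm still leads the non-friends
```

Note that even under the closeness sort, the bot farm still outranks the news outlet among non-friends by posting more recently, which is the "shift to propaganda after befriending" failure mode in miniature.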
The law is abused in the US because of the tradition of keeping the constitution to a bare minimum and governing by precedent and common sense, which, as we can see, isn't very productive.
So yeah, I guess I'm advocating for bureaucracy for now, at least until someone comes up with a better idea. I'd take bureaucracy many times over corporate abuse.
EDIT: now I see I haven't addressed the main question. I believe that society needs a mechanism to hold big tech platforms accountable for abuse. The speed with which big tech can push certain kinds of information through its services is such that due process, when it works, is only effective after the damage is done, and by then different accounts and different outlets are already pushing the same kind of disinformation ads. Therefore preemptive removal of this content is necessary. The problem now becomes how to make it so that the universe of content eligible for preemptive removal can't be abused by the current administration. How can we make it so that the Israeli misinformation machine can't overshadow other institutions, while at the same time guaranteeing that the next political party in power can't abuse this system to suppress valid propaganda from the opposition?
Saying that the current way isn't productive isn't the same as saying that laws and regulations are designed to be productive. Actually, I acknowledged that at the outset when I said that laws are bureaucratic. But you have to agree that some form of productivity is expected; otherwise why even bother, if nothing is going to get done at the government level?
> The US Constitution has some flaws but it's still the closest anyone has come to perfection in the governance of human society.
How can you even falsify this claim? And should I take your word for it? From my point of view it makes little sense when corporations can buy elections like Elon did for Trump, and when Trump can just do as he pleases, as is happening now with university censorship and the sacking of government officials who don't subscribe to the president's ideological agenda.
The main issue isn't the misinformation or disinformation; it is how quickly you can amplify reach and reach millions. Reverse chronological + follows based on active user choice would largely address that issue.
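The proposed fix is simple enough to state as code. A minimal sketch of a reverse-chronological, follows-only feed, assuming invented data shapes and account names for illustration:

```python
# Show only accounts the user explicitly follows, newest first,
# with no engagement-based ranking at all: reach is bounded by
# follower counts, not by how much outrage a post generates.
def build_feed(posts, follows):
    visible = [p for p in posts if p["author"] in follows]
    return sorted(visible, key=lambda p: p["timestamp"], reverse=True)

posts = [
    {"author": "alice",    "timestamp": 100, "text": "lunch"},
    {"author": "viralbot", "timestamp": 150, "text": "outrage bait"},
    {"author": "bob",      "timestamp": 120, "text": "trip photos"},
]

feed = build_feed(posts, follows={"alice", "bob"})
# viralbot never appears, no matter how much engagement it drives
print([p["author"] for p in feed])  # -> ['bob', 'alice']
```

The design choice is that amplification requires an explicit act by each user (following), so content can't be algorithmically injected to millions at once.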
>At this point, whoever opposes [CSAM scanning/encryption backdoors] is in favor of [child abuse/criminal activity] ...
So "you're either with us or on the side of the bad guys" is a valid form of argument, but only when the bad guys are evil corporations? More to the point, many of the proposed "regulations" do end up infringing on human rights. For instance, regulations forcing social media companies to remove "disinformation" or "content causing hatred/discomfort" necessarily limit others' freedom of speech.
A valid criticism would be the implied false dichotomy in my original comment (either regulation or rampant corporate abuse). My idea is for us to discuss this. Is regulation not the right way? What's the alternative? Not "oh, if it doesn't work for every possible universe of applicable solutions, it doesn't deserve merit."
I can't see how my comment is a "strawman" in any meaningful sense.
>A valid criticism would be an implied false dichotomy in my original comment
That's exactly my point. Adopting a "you're either with us or against us" attitude is totally toxic, and shouldn't be accepted just because it's for a cause you happen to agree with.
>My idea is for us to discuss this. Is regulation not the right way? What's the alternative? Not, "oh if that doesn't work for all possible universe of applicable solutions, it doesn't deserve merit"
If you wanted an intelligent discussion of what regulation should consist of, what's the point of starting off with such an absolutist remark? What does it add compared to something like "what's the right form of regulation to address this?"
I don't know, but the ones I've seen so far do not interest me.
>Are you against regulation?
I'm against bad regulation, yes.
>Or are you here just to discuss aesthetics?
If you think objections to "you're either with us or against us" and "we have to do something" attitudes are merely objections over "aesthetics", then yes.
That's... Good I guess? I mean, who would be in favor of bad regulation?
Anywho, I've laid out what I think in this comment[1], see if it interests you.
There's something weird about this complaint, isn't there? I mean, it's horrifying if social media algorithms are pushing child abuse content to anyone, but so far as I can see it isn't worse if the people they're showing it to are paedophiles. Maybe it's even a bit less bad since they're less likely to be distressed by it.
I think there's something deformed about a lot of the moral discourse around this stuff -- as if what matters is making sure that Those Awful People don't get anything they want rather than making sure bad things don't happen. (Far and away the most important bad thing associated with child abuse is the actual child abuse but somehow that's not where everyone's attention goes.)
r/worldnews is pretty tightly controlled, it's a default subreddit meaning 50+ million people see the posts submitted in this subreddit, and most critically, the ensuing conversation in comments which goes only in one direction. Frankly I'm impressed this all was pulled off so seamlessly.
https://www.reddit.com/r/worldnews/comments/1nc65sx/israel_i...
I don't think it's amoral to use a service while blocking its ads, either. If they don't like it, they can try a new business model. They don't protect you; why should you protect them?
I agree with this logic. I own multiple Porsches because I don't think it's amoral to steal them from dealerships. If they don't like it, they can try a new business model.
Copying a thing or accessing a platform = the previous owner can still use or sell it.
Even if you consider it unethical access, the comparison to stealing really misses the mark.
I know this always triggers a hard-coded response based on regex, but the comparison doesn't rely on the specifics of stealing, so that's not a valid criticism. The logic is: people offer things in exchange for a price. You can take the things in exchange for the price, or you can leave the things. You shouldn't take the things without paying the price.
If anything, ads steal from YOU. They take your time and attempt to get you to part with your money.
You're not obligated to support a business model based on theft, if you want to consider it that. You're not obligated to support any business model.
If it's allowed, then go for it. They can always switch to another business model.
If the last ~9 months has demonstrated anything, it's that this was never the case.
It is really scary that people are pushing for Google and Meta to be the arbiter of truth. I don't think people realize what they are asking for. Western civilizations have a tradition of liberal free speech, and allowing the courts to sort out the specifics of what speech causes harm to what parties (libel, etc).
There are already laws on the books for false advertising. In the US, the FTC is one who prosecutes those laws, not Google or Meta!
Full disclosure: I work on Ads at Google. You really don't want to privatize the prosecution, judgement, jury, and execution of speech laws to mega corps (and I am usually pro-privatization on most topics).
It doesn't seem like that big a step to apply a similar standard to advertising platforms. Advertisers have failed to self-regulate the ads they choose to publish, and it is infeasible to use the court system to adjudicate every false ad (that would be millions of court cases). Ergo you do the obvious, which is to make the advertising platform name a human editor who holds legal responsibility for published ads (on behalf of the company).
Now you can sue the advertising company (eg. Google) for millions of false advertisements at once.
However, our laws mean that Google, Meta, etc. are not legally responsible for the content of the ads they run. The creator of the ad is.
And it is shockingly easy to construct a legal entity that is unaccountable.
This would prevent foreign ads targeting domestic users, and/or give you an organization to sue domestically. In this case, it's likely that the Israeli govt would work through a US-based org, and that in court the case would likely fail on free speech grounds. Though a case/org in another nation might not hold up under that nation's laws.
I can't sue a publisher for running an ad that was libelous. I sue the advertiser who created the libel.
How do you think this works in reality when the people getting sanctioned are trying to bypass the sanctions by creating shell companies and false identities? You either have a totally ineffective sanctions regime, because it can trivially be bypassed by setting up new shell companies, or a vaguely effective one, because banks are deputized to figure out whether their customers are sanctioned or not. Luckily we have the latter.
I'm objecting to the notion that mega corp ad networks are the best organization to determine what is truth vs. propaganda in our society.
Advertising is a commercial activity, so it should be reasonable to follow the money and find the advertiser. If necessary, add more requirements for advertisers to be identified/identifiable so that suits can be served.
The reality is that ads are the primary vehicle for malicious content, whether it be malware, scams, or deception, on the web.
Google, as well as Meta, has demonstrated they do not take adequate measures to block said malicious content. This can have tangible real-world effects, such as getting scammed and losing your life savings.
Therefore, every web user should use a strict ad blocker per FBI recommendations. This is no longer a business question or a free-speech question, it is a computer system security question.
In that context, what Google chooses to allow and what it bans is newsworthy. In this specific case even more so, since the ads violate Google's own rules.
Forcing ad networks to be the main arbiters of what is true vs. propaganda is a huge step towards an Orwellian society.
* The only policies related to the concept of truth are the ones dealing with scams or fraudsters. Even then, the scope is only "does this advertiser actually provide the service they claim to?", which is far more objective than anything related to war, religion, or the Middle East.
Google doesn't really want scam ads. It doesn't make a lot of sense to penalize them for removing some of them just because their process isn't perfect; removing them doesn't have to be banned.
But if you make removing them mandatory, you're replacing the justice system with a private corporation, which is pretty crazy. If the police accuse you of a crime, they have to prove it to a judge and jury. You can appeal to a higher court. Google doesn't have that. And if you add liability for not removing something, they're going to err on the side of removing things they ought not to, with no recourse for the victims. Competitor wants you out of the search results? Report it to Google and you're out, because they get a billion complaints and removing them by default is safer than getting prosecuted for missing a real one.
The correct solution is to let Google remove things that are bad without punishing them for not being perfect -- maybe even err on the side of imposing (civil) liability for removing things they shouldn't instead of not removing things they should -- and rely on the criminal justice system for going after the criminals.
* From my PoV, US history books taught in classrooms deny or downplay the genocide of Native American people, some of whom were my ancestors. But I don't want society to try and use mega corps to push my PoV.
[1] https://www.alibaba.com/product-detail/Android-Phone-Farm-Se...
It’s propaganda-as-a-service.
Regulating the corporations, or their shadow, the government, is both fine I guess, and with that out of the way: let's discuss this!
So is it news that people are using ads/propaganda to persuade people? No. Will Google, Facebook, Amazon or Apple do anything that will harm their revenue as propaganda platform? No.
Do Google, Facebook, Amazon and Apple use propaganda? Yes.
This is like reading an article about how weapons from weapon companies are being weaponized.
Comments that suggest this was some kind of cover-up or rewriting of history only hurt their credibility. Implying that an entire group is spinning the past—even two years on—comes off as divisive. Let’s stick to the facts, not broad-brush accusations.
https://www.theguardian.com/world/article/2024/aug/05/nine-u...
When did it become a "right" to be able to pay for large-scale propaganda? Oddly enough, this right is not afforded to those without the funds to pay for it.
After the pressure following the election-influencing scandals, new rules seemed to come into place.
For other topics? Not so sure. Maybe it's something to look at before it has an election type response.
There are parallels to this, I suspect, in other industries affecting the world.
There's barely a mention of the October 7 massacre, where over 1,200 Israelis were murdered and hundreds taken hostage, some of them children. That's the context behind Israel's messaging. Leaving that out gives a very distorted picture of why these campaigns exist in the first place.
The article criticizes Israel for running ads that target UNRWA, but completely skips the fact that more than a dozen UNRWA staff were accused of actively participating in the massacre and holding hostages. That allegation was serious enough for countries like the US, Germany, the UK, and Australia to suspend their funding. That's not "disinformation," that's a real international response.
There’s also zero mention of Hamas’s own propaganda operations. No discussion of how they use Telegram, TikTok, or social platforms to push graphic and often fake content to manipulate global opinion. If we're talking about the weaponization of information, how is that not relevant?
Instead, the article spends thousands of words dissecting Israel’s side while ignoring everything else. It presents only one narrative and wraps it in a moral argument that conveniently excludes key facts and context.
A fair critique would examine how all sides are using digital tools in modern conflicts, not just the one the author disagrees with politically. Otherwise, it’s not an analysis. It’s just a well-written piece of propaganda in itself.
crawsome•5h ago
Just recently, all of them, in concert, started trying to focus on the lady who stole that baseball at that game. It's all they've been talking about for the last week. Promoted content, sent directly to people's Facebook profiles.
Whether or not I feel nationalist terrorists are running the US government, either way I feel the government shouldn't be working this closely with social media. It's extremely dystopian, and it cheapens everything around it.
delichon•5h ago
ToucanLoucan•5h ago
Commenter isn't making the case that the action is illegal; he's saying it's dystopian that the government is making such blatant use of targeted media. And I agree.
delichon•5h ago
ToucanLoucan•5h ago
ceejayoz•5h ago
The first peaceful transfer of power from one party to another is sometimes called the "Revolution of 1800". https://en.wikipedia.org/wiki/1800_United_States_presidentia...
Every constitutional amendment changes our government. The people who wrote the mechanism in expected this. I doubt they expected us to just... stop amending.
delichon•4h ago
ceejayoz•4h ago
normalaccess•5h ago
You can change lots of things much higher up in the system, without taking away our God-given rights enshrined in the founding documents, to fix these kinds of issues.
ToucanLoucan•4h ago
normalaccess•2h ago
It's easy to spot a problem, but very hard to get the right solution.
ToucanLoucan•1h ago
And like, yeah spotting problems is easier than giving right solutions, but what you're discussing here feels a lot more like just giving up on it entirely, which seems a horrific practice when the entity in question literally runs your society?
normalaccess•4h ago
delichon•4h ago
I'd love to be wrong. If you can find evidence that learning the techniques provides some immunity from them, I'd be happy to see it.
I'm well aware of how I'm being manipulated with regard to the murder in Charlotte, yet it still presses my buttons. The same is true when a beautiful woman asks me for anything. Self-awareness has little effect on primal motives.
JKCalhoun•4h ago
Do other Western countries have the same problem as the U.S.? Are they doing a better job at what you suggest?
bee_rider•4h ago