In our case, it’s just generative AI.
If you want to use ML to do anything at all with image and video, you will usually wind up creating the capability to generate image and video one way or another.
However, building a polished consumer product is a choice, and probably a mistake. Every technology has good and bad uses, but there seem to be few, and trivial, good uses for image/video generation, alongside many severe bad ones.
There. It isn’t even “real” racism; it’s more of a flamebait play, where the more outrageous and deranged a take is, the more likely it is to capture attention and possibly even provoke a reaction. Most likely they primarily wanted to earn a buck from viewer engagement and didn’t care about the ethics of it. Maybe they also had racist agendas, maybe not, but that’s just not the core of it.
And in the same spirit, the issue is not really racism or AI videos, but perversely incentivized attention economics. It just happened to manifest this way, but it could’ve been anything else; this is merely what happened to hit some journalists’ mental filters (suggesting that “racism” headlines attract attention these days, and so does “AI”).
And the only low-harm way I can think of to put this genie back in the bottle is to make sure everyone is well aware that their attention is the new currency of the modern age, and spends it wisely, mindful of the addictive and self-reinforcing nature of some systems.
Generating and distributing racist materials is racist regardless of the intent, even if the person "doesn't mean it".
Simple thought experiment: If the content was CSAM, would you still excuse the perpetrators as victims of perversely incentivized attention economics?
Admittedly that didn't satirize CSAM itself; rather, it cut hard into the reflexive reaction people have at the very thought of CSAM and paedophiles.
https://en.wikipedia.org/wiki/Paedogeddon
Moreover, that took a human to thread that needle; it'll be a while before AI generation can pass through that strange valley.
Racism is just less legally dangerous. People would be posting snuff or CSAM videos if that “sold”. Make social networks tough on racism and it’ll be sexism the next day. Or extremist politics. Or animal abuse. Or, really, anything, as long as people react strongly to it.
But, yeah, to avoid any misunderstanding: I didn’t mean to say racism isn’t an issue. It is racist, it’s bad, and I’m not arguing otherwise. All I want to stress is that it’s not the real issue here, merely a particular manifestation.
I like this reasoning. “Trolling” is when people post things to irritate or offend people, so if you see something that’s both racist and offensive then it’s not really racist. If you see somebody posting intentionally offensive racist stuff, and you have no other information about them, you should assume that the offensiveness of their post is an indicator of how not racist they are.
Really if you think about it, it’s like a graph where as offensiveness goes up the racism goes down becau
But, yeah, as weird as it may sound, you don’t have to be racist (as in believing racist ideas) to be a racist troll (propagating racist ideas). Publishing something and agreeing with it are different things, and they don’t always overlap (even if they frequently do). Let he who has never said or written some BS they didn’t believe a single iota of, just for effect, cast the first stone.
And I’m not sure how sarcastic you were being, but nothing I’ve said could possibly mean that something being offensive somehow makes it less racist.
Exactly. Racism has nothing to do with what people say or do, it’s a sort of vibe, so really there is no way of telling if anything or anyone is Real Racist versus fake racist. It is important to point this out b
If you refuse to distinguish between someone who genuinely believes in the concept of race, or postulates an inherent supremacy of some particular set of biological and/or sociocultural traits, and someone who merely talks edgy shit they heard somewhere and haven’t given much thought to, then I’m not entirely sure how I can persuade you to see the distinction I do.
But I believe this difference exists and is important, because different causes require different approaches. Online trolls, engagement farmers, and bonehead racists are (somewhat overlapping but generally) different kinds of people. And any of those can post racist content.
Clutch your pearls as much as you want about the videos, but forcibly censoring them is going to cause you to continue to lose elections.
I.e. delete your Facebook, your TikTok, your YouTube and return to calling people on your flip phone and writing letters (or at least emails). I say this without irony (the Sonim XP3+ is a decent device). All the social networking on smartphones has not been a net positive in most people's lives; I don't really know why we sleepwalked into it. I'm open to ideas for how to make living "IRL" more palatable than cyberspace. It's like telling people to stop smoking cigarettes. I guess we just have to reach a critical mass of people who can do without it and lobby public spaces to ban it. Concert venues and schools are already playing with it by forcing everyone to put their phones in those Faraday baggies, so maybe it's not outlandish.
Gonna be hard to admit, but mandatory identity verification like in Korea, i.e. attaching real consequences to what happens on the internet, is the more realistic way this is going to be solved. We've had "critical thinking" programs for decades, and they're completely pointless at an aggregate scale, primarily because the majority aren't interested in the truth. Save for their specific expertise, it's quite common for even academics to easily fall into misinformation bubbles.
I think the harm done by circulating racist media is "real" racism regardless of whether someone is doing it because they have a hateful ideology, are profiting from it, or are just having a good time.
Stop trying to blame technology for longstanding social problems. That's a cop out.
Google wouldn't even need a fingerprint, they could just look up from their logs who generated the video.
If I told you many 14 year olds were making very similar offensive jokes at lunch in high school, would you support adding microphones throughout schools to track and catch them?
A picture is worth a thousand words. Me saying your mom is so fat that _______ in the lunchroom is different from me saying your mom is so fat in a cinematic video format that can go locally viral (your whole school). This is the first time in my life I'm going to say this is not a history-echoing situation. This is a we-have-entirely-gone-to-the-next-level situation; forget what you think you know.
LLMs don't think, and also have no race. So I have a hard time saying they can be racist, per se. But they can absolutely produce racist and discriminatory material, especially if their training corpus contains racist and discriminatory material (which it absolutely does).
I do think it's important to distinguish between photoshop, which is largely built from feature implementation ("The paint bucket behaves like this", etc.), and LLMs which are predictive engines that try to predict the right set of words to say based on their understanding of human media. The input is not some thoughtful set of PMs and engineers, it's "read all this, figure out the patterns". If "all this" contains racist material, the LLM will sometimes repeat it.
It is very scary because the "tech-bros" in the movie pretty much mimic the actions of the real life ones.
The description of the channel on YouTube claims: "In our channel, we bring you real, unfiltered bodycam footage, offering insight into real-world situations." But then if you go to their site, https://bodycamdeclassified.com/, which is focused on threatening people who steal their IP, they say: "While actual government-produced bodycam footage may have different copyright considerations and may be subject to broader fair use provisions in some contexts, our content is NOT actual bodycam footage. Our videos represent original creative works that we script, film, edit, and produce ourselves." Pretty gross.
But I doubt most doomscrollers would notice that in their half-comatose state.
It IS real, unfiltered bodycam footage. From an actor, following a script, in front of one or many other actors, also following scripts. I think that's how they get away with it, they don't specify it's bodycam footage from actual law enforcement. Yes, gross.
Definitely have watched enough videos from this channel to recognize its name. :(
If I have to encounter a constant barrage of shitty racist (or sexist, or homophobic, or whatever) material just to exist online, I'm going to pretty quickly feel like garbage. (If not feel unsafe.) Especially if I'm someone who has other stressors in their life. Someone who is doing well, their life otherwise together, might encounter these and go, "Fucking idiots made a racist video, block."
But empathize with someone who is struggling? Who just worked 18 hours to make ends meet to come home and feed their kids and pay rent for a shitty apartment that doesn't fit everyone, and their kid comes up to them asking what this video means, and it just... gets past all their barriers. It wedges open so many doubts.
This isn't harmless.
If you want to change this situation you should try campaigning for the abolishment of the first amendment or try moving to Europe.
It's a bowl of fun size candy bars, with a few razors, a few drugs, a few rotten apples, etc. mixed in. You can, by and large, get the algorithm to serve you nothing but the candy, but you are still eating only candy bars at that point.
Some people can say no to infinite candy. Other people, like myself, cannot and it's a real problem.