frontpage.

OpenCiv3: Open-source, cross-platform reimagining of Civilization III

https://openciv3.org/
563•klaussilveira•10h ago•157 comments

The Waymo World Model

https://waymo.com/blog/2026/02/the-waymo-world-model-a-new-frontier-for-autonomous-driving-simula...
883•xnx•16h ago•536 comments

How we made geo joins 400× faster with H3 indexes

https://floedb.ai/blog/how-we-made-geo-joins-400-faster-with-h3-indexes
87•matheusalmeida•1d ago•19 comments

What Is Ruliology?

https://writings.stephenwolfram.com/2026/01/what-is-ruliology/
13•helloplanets•4d ago•7 comments

Unseen Footage of Atari Battlezone Arcade Cabinet Production

https://arcadeblogger.com/2026/02/02/unseen-footage-of-atari-battlezone-cabinet-production/
15•videotopia•3d ago•0 comments

Show HN: Look Ma, No Linux: Shell, App Installer, Vi, Cc on ESP32-S3 / BreezyBox

https://github.com/valdanylchuk/breezydemo
195•isitcontent•10h ago•24 comments

Monty: A minimal, secure Python interpreter written in Rust for use by AI

https://github.com/pydantic/monty
195•dmpetrov•11h ago•87 comments

Show HN: I spent 4 years building a UI design tool with only the features I use

https://vecti.com
303•vecti•12h ago•135 comments

Microsoft open-sources LiteBox, a security-focused library OS

https://github.com/microsoft/litebox
351•aktau•17h ago•171 comments

Delimited Continuations vs. Lwt for Threads

https://mirageos.org/blog/delimcc-vs-lwt
20•romes•4d ago•2 comments

Sheldon Brown's Bicycle Technical Info

https://www.sheldonbrown.com/
348•ostacke•16h ago•90 comments

Dark Alley Mathematics

https://blog.szczepan.org/blog/three-points/
77•quibono•4d ago•16 comments

Hackers (1995) Animated Experience

https://hackers-1995.vercel.app/
449•todsacerdoti•18h ago•227 comments

PC Floppy Copy Protection: Vault Prolok

https://martypc.blogspot.com/2024/09/pc-floppy-copy-protection-vault-prolok.html
49•kmm•4d ago•3 comments

Show HN: If you lose your memory, how to regain access to your computer?

https://eljojo.github.io/rememory/
246•eljojo•13h ago•149 comments

An Update on Heroku

https://www.heroku.com/blog/an-update-on-heroku/
381•lstoll•17h ago•259 comments

How to effectively write quality code with AI

https://heidenstedt.org/posts/2026/how-to-effectively-write-quality-code-with-ai/
226•i5heu•13h ago•172 comments

Why I Joined OpenAI

https://www.brendangregg.com/blog/2026-02-07/why-i-joined-openai.html
110•SerCe•6h ago•89 comments

Show HN: R3forth, a ColorForth-inspired language with a tiny VM

https://github.com/phreda4/r3
65•phreda4•10h ago•11 comments

I spent 5 years in DevOps – Solutions engineering gave me what I was missing

https://infisical.com/blog/devops-to-solutions-engineering
134•vmatsiiako•15h ago•59 comments

Female Asian Elephant Calf Born at the Smithsonian National Zoo

https://www.si.edu/newsdesk/releases/female-asian-elephant-calf-born-smithsonians-national-zoo-an...
23•gmays•5h ago•3 comments

Introducing the Developer Knowledge API and MCP Server

https://developers.googleblog.com/introducing-the-developer-knowledge-api-and-mcp-server/
42•gfortaine•8h ago•12 comments

Zlob.h 100% POSIX and glibc compatible globbing lib that is faster and better

https://github.com/dmtrKovalenko/zlob
8•neogoose•3h ago•6 comments

Understanding Neural Network, Visually

https://visualrambling.space/neural-network/
262•surprisetalk•3d ago•35 comments

Learning from context is harder than we thought

https://hy.tencent.com/research/100025?langVersion=en
165•limoce•3d ago•87 comments

I now assume that all ads on Apple News are scams

https://kirkville.com/i-now-assume-that-all-ads-on-apple-news-are-scams/
1037•cdrnsf•20h ago•429 comments

Show HN: ARM64 Android Dev Kit

https://github.com/denuoweb/ARM64-ADK
14•denuoweb•1d ago•2 comments

FORTH? Really!?

https://rescrv.net/w/2026/02/06/associative
58•rescrv•18h ago•20 comments

Show HN: Smooth CLI – Token-efficient browser for AI agents

https://docs.smooth.sh/cli/overview
86•antves•1d ago•63 comments

WebView performance significantly slower than PWA

https://issues.chromium.org/issues/40817676
22•denysonique•7h ago•4 comments

TikTok is being flooded with racist AI videos generated by Google's Veo 3

https://arstechnica.com/ai/2025/07/racist-ai-videos-created-with-google-veo-3-are-proliferating-on-tiktok/
131•kozika•7mo ago

Comments

bongodongobob•7mo ago
Racists? On the internet you say!?
runjake•7mo ago
I was skeptical about this, but a quick search for “the usual suspects” pulls up many, many examples for me.
aaviator42•7mo ago
I really miss the time before generative images and video were a thing. We opened such a can of worms. Really seems like a "the scientists were so occupied with if they could they didn't stop to think if they should" situation. What is the actual utility of these tools again beyond putting artists out of work?
Waterluvian•7mo ago
I’m old enough to remember when video killed the radio star.
mslansn•7mo ago
I’ve seen lots of them which I found very very amusing. That seems good enough for me. Think about it: there are channels on YouTube and on the telly that are there just to amuse you. So a system that creates amusing videos is a net positive for the world.
deadbabe•7mo ago
Every generation has their “I miss the time before ‘thing I don’t like’ became a thing”.

In our case, it’s just generative AI.

mopenstein•7mo ago
I miss the time before everybody was on the Internet, when it was mostly like-minded techie types. This modern internet kinda sucks with all its AI-generated racism.
bashinator•7mo ago
Has every generation also seen the rise of massively-multiuser automated personalized propaganda engines?
deadbabe•7mo ago
TV was one hell of a propaganda engine.
HaZeust•7mo ago
was?
thefz•7mo ago
It cannot compete with millions of people glued to their pocket device for hours every day, in which they see only what reinforces, rather than challenges, their world view
bashinator•7mo ago
Not personalized. Imagine TV that can be exactly tuned to every individual viewer's taste.
currymj•7mo ago
from an information theory perspective, predicting and efficiently representing data is so closely tied to generation that it is unavoidable.

if you want to use ML to do anything at all with image and video, you will usually wind up creating the capability to generate image and video one way or another.

however building a polished consumer product is a choice, and probably a mistake. every technology has good and bad uses, but there seem to be few and trivial good uses for image/video generation, with many severe bad uses.

SchemaLoad•7mo ago
The scientists were absolutely occupied with if we should. But the CEOs steamrolled them and had it built anyway.
drdaeman•7mo ago
> nothing drives engagement on social media like anger and drama

There. It isn’t even a “real” racism, it’s more of a flamebait, where the more outrageous and deranged a take is, the more likely it is to captivate attention and possibly even provoke a reaction. Most likely they primarily wanted to make a buck from viewer engagement, and didn’t care about the ethics of it. Maybe they also had racist agendas, maybe not - but that’s just not the core of it.

And in the same spirit, the issue is not really racism or AI videos, but perversely incentivized attention economics. It just happened to manifest this way, but it could’ve been anything else - this is merely what happened to hit some journalist’s mental filters (suggesting that “racism” headlines attract attention these days, and so does “AI”).

And the only low-harm way - that I can think of - how to put this genie back in the bottle is to make sure everyone is well aware about how their attention is the new currency in the modern age, and spend it wisely, being aware about the addictive and self-reinforcing nature of some systems.

CharlesW•7mo ago
> It isn’t even a “real” racism…

Generating and distributing racist materials is racist regardless of the intent, even if the person "doesn't mean it".

Simple thought experiment: If the content was CSAM, would you still excuse the perpetrators as victims of perversely incentivized attention economics?

fluidcruft•7mo ago
I don't follow your CSAM bit but I have no outrage about Blazing Saddles existing, for example.
jazzyjackson•7mo ago
It would indeed be an impressive feat to produce a film satirizing child porn
defrost•7mo ago
Off the cuff the closest example to mind is the Paedogeddon!! special episode of the Brass Eye series created by Chris Morris.

Admittedly that didn't satirize CSAM itself; rather, it cut hard into the reflexive reaction people have at the very thought of CSAM and paedophiles.

https://en.wikipedia.org/wiki/Paedogeddon

Moreover, it took a human to thread that needle; it'll be a while before AI generation can pass through that strange valley.

RockRobotRock•7mo ago
This is the one thing we didn't want to happen
accoil•7mo ago
AI creating satire about media spreading hysteria? I don't think it's at that point.
drdaeman•7mo ago
I agree, but I believe the intent matters if we’re trying to identify why this happens.

Racism is just less legally dangerous. There would be people posting snuff or CSAM videos if that would “sell”. Make social networks tough on racism and it’ll be sexism the next day. Or extremist politics. Or animal abuse. Or, really, anything, as long as people strongly react to it.

But, yeah, to avoid any misunderstanding - I didn’t mean to say racism isn’t an issue. It is racist, it’s bad, I don’t argue otherwise. All I want to stress is that it’s not the real issue here, merely a particular manifestation.

jrflowers•7mo ago
> it’s not the real issue here

I like this reasoning. “Trolling” is when people post things to irritate or offend people, so if you see something that’s both racist and offensive then it’s not really racist. If you see somebody posting intentionally offensive racist stuff, and you have no other information about them, you should assume that the offensiveness of their post is an indicator of how not racist they are.

Really if you think about it, it’s like a graph where as offensiveness goes up the racism goes down becau

drdaeman•7mo ago
That’s not what I meant, though. When I wrote “not really racist” I meant “the primary cause for posting this is not racism[, but engagement solicitation]”, rather than “not racist”. And it’s not an implication, but only an observation paired with my (and article authors’) guess about the actual intent. I’m sorry for the confusion, I guess I worded that poorly.

But, yeah, as weird as it may sound, you don’t have to be racist (as in believing in racist ideas) to be a racist troll (propagate racist ideas). Publishing and agreeing with are different things, and they don’t always overlap (even if they frequently do). Let he who has never said or written some BS they didn’t believe a single iota of, just for effect, cast the first stone.

And not sure how sarcastic you were, but nothing I’ve said could possibly mean if something is offensive it’s what somehow makes it less racist.

jrflowers•7mo ago
> you don’t have to be racist (as in believing in racist ideas) to be a racist troll (propagate racist ideas)

Exactly. Racism has nothing to do with what people say or do, it’s a sort of vibe, so really there is no way of telling if anything or anyone is Real Racist versus fake racist. It is important to point this out b

drdaeman•7mo ago
I’m a bit confused, is it possible you think racism is binary? I recognize you jest, but I’m not sure I get the idea, and I sincerely hope you don’t do it pointlessly.

If you refuse to distinguish between someone who genuinely believes in the concept of race, or postulates an inherent supremacy of some particular set of biological and/or sociocultural traits, and someone who merely talks edgy shit they heard somewhere and hasn’t given it much thought - then I’m not entirely sure how I can persuade you to see the distinction I do.

But I believe this difference exists and is important because different causes require different approaches. Online trolls, engagement farmers, and bonehead racists are (somewhat overlapping but generally) different kinds of people. And any of those can post racist content.

jrflowers•7mo ago
I showed the videos to my friend and he keeps saying stuff like “Seems like it’s racists making and sharing the racist videos” and “So if a person posts a bunch of racist garbage and then posts ‘I’m not racist in my heart’ then the second post is obviously true?”

I keep trying to explain that no, it’s not real racism because if you can imagine that it’s not real, it must not be real but then he says “Who made you the arbiter of racism?” and “What purpose on God’s Green Earth does it serve anyone, in any context, to chime in unprompted that you choose to sort racism into real and fake piles? Like what do you get out of that?”

Anyway I explained that it’s fake racism because it’s just somebody that wants attention and he said “racists can want attention too” and “seems like you’re just doing gymnastics to invent excuses for people online that you don’t even know why are you doing that” so I don’t know what to tell him. I don’t think we’ll see eye to eye on this because he incorrectly defines racism as a “real phenomenon” that “affects real people” and is “perpetuated by people’s actions”, whereas I know that what he’s describing is fake racism, because real racism is a little thing people feel in their hearts.

Seems like anybody could plainly see that fake racism is when people say or do real racist things in the world and real racism is intangible, not really strictly “real”, but the guy’s a kook so ¯\_(ツ)_/¯

drdaeman•7mo ago
Uhh, I see that I really made a very poor word choice when I wrote “‘real’ racism”. My bad, words are hard.

Given that you’ve chosen “fake” as the antonym, I think I can see where we differ, and that I misused the term “‘real’”. In my mind I would’ve picked something like “less intentional”. I’m not even sure I can imagine what fake racism could possibly be like: racism is an idea, and I don’t really get what fakeness for ideas means.

I will try to remember and be more careful with the word “real” in the future. Appreciate your comments.

And, yes, that was real racism, of course - in the way you’ve used the expression. Or rather, there’s no such thing as “not real racism”, and “not ‘real’ racism” was a confusingly bad phrase.

whamlastxmas•7mo ago
I think maybe the nuance they’re trying to capture is that yes, the content is absolutely freaking racist, but the reason it’s being spread isn’t racists laughing at it and liking it; it’s people being angry about it
Dig1t•7mo ago
The creation of CSAM is a crime because an underage person must be harmed in its creation by definition. Making an AI video of an offensive stereotype does not harm anyone in its creation. It is textbook free speech.

Clutch your pearls as much as you want about the videos, but forcibly censoring them is going to cause you to continue to lose elections.

plaguuuuuu•7mo ago
Nobody said anything about governments banning it. We're pointing it out as something harmful. I'll also happily exercise my free speech (I'm not from the US so it's free, as in - you can't stop me).
HK-NC•7mo ago
I don't think child porn and tired racist stereotypes are the same. Even content showing murder would be ignored by most, and none of us, I assume, are pro murder. I don't assume everyone that uses a sexy female thumbnail is a gooner, just farming goons. I think the original poster has a fair point; having seen the videos, I'd say they lack the usual cherrypicked accuracy of content made by genuinely racist creators and instead go for... watermelon. My friends are about as bothered by watermelon as an Irishman is about cartoon leprechauns, but I'm not in the USA so perhaps it's a cultural thing.
jazzyjackson•7mo ago
> make sure everyone is well aware about how their attention is the new currency in the modern age, and spend it wisely, being aware about the addictive and self-reinforcing nature of some systems.

i.e. delete your facebook, your tiktok, your youtube and return to calling people on your flip phone and writing letters (or at least emails). I say this without irony (the Sonim XP3+ is a decent device). All the social networking on smart phones has not been a net positive in most people's lives; I don't really know why we sleepwalked into it. I'm open to ideas on how to make living "IRL" more palatable than cyberspace. It's like telling people to stop smoking cigarettes. I guess we just have to reach a critical mass of people who can do without it and lobby public spaces to ban it. Concert venues and schools are already playing with it by forcing everyone to put their phones in those Faraday baggies, so maybe it's not outlandish.

atentaten•7mo ago
Have you thought about what we're currently sleep walking into?
whattheheckheck•7mo ago
What have you thought about it?
prmoustache•7mo ago
I didn't need to buy a flip phone to delete all my social media accounts.
drdaeman•7mo ago
> i.e. delete your facebook, your tiktok, your youtube and return to calling people on your flip phone and writing letters

That sounds like an abstinence-type approach. Not saying that it’s not a valid option (and it can be the only effective option in case of a severe addiction), but it’s certainly not the only way that could work. Put simply, you don’t have to give up on modern technology just because it poses some dangers (but you totally can, if you want to, of course).

I can personally vouch for just remembering to ask myself "what am I currently doing, how am I feeling right now, and what do I want?" when I notice I'm mindlessly scrolling some online feeds. Just realizing that I'm bored so much I'm willing to figuratively dumpster-dive in hope of stumbling upon something interesting (and there's nothing fundamentally wrong with this, but I must be aware that this interesting thing will be very brief by design, so unless I'm just looking for inspiration and then moving somewhere else, I'm not really doing anything to alleviate my boredom) can be quite empowering. ;-)

> all the social networking on smart phones has not been a net positive in most people's lives

Why do you think so? I'm not disagreeing, but asking because I know plenty of individual examples, yet I'm personally not feeling comfortable enough to make it a generalization (because it's hard), and I wonder what makes you do so.

GaggiX•7mo ago
I don't even think it's flamebait; people just like being edgy on the internet, so they enjoy these memes. Reading the comments under these posts would probably confirm what I'm saying.
corimaith•7mo ago
> And the only low-harm way - that I can think of - how to put this genie back in the bottle is to make sure everyone is well aware about how their attention is the new currency in the modern age, and spend it wisely, being aware about the addictive and self-reinforcing nature of some systems.

Gonna be hard to admit, but mandatory identity verification like in Korea, i.e. attaching real consequences to what happens on the internet, is a more realistic way this is going to be solved. We've had "critical thinking" programs for decades; it's completely pointless on an aggregate scale, primarily because the majority aren't interested in the truth. Save for their specific expertise, it's quite common for even academics to easily fall into misinformation bubbles.

drdaeman•7mo ago
> it's completely pointless on an aggregate scale, primarily because the majority aren't interested in the truth

No offense meant, but unless you know of an experiment that indicated an absence of a statistically significant effect of education programs on collective behaviors - especially one that established causality like you stated - I would dare to suspect that it’s not an accurate portrayal of things, but more of an emotionally driven but not entirely factual response.

> mandatory identity verification like in Korea, i.e attaching real consequences to what happens in the internet

I'm not sure I understand the idea. Is it about making it easier for law enforcement to identify authors of online posts, or about real-name policies and peer pressure, or, possibly, something else?

agnishom•7mo ago
> It isn’t even a “real” racism, it’s more of a flamebait

I think the harm done by circulating racist media is "real" racism regardless of whether someone is doing it because they have a hateful ideology, are profiting from it, or are just having a good time.

Apocryphon•7mo ago
The Tayification of everything
turbofreak•7mo ago
Nice callback. That was a golden era.
ilaksh•7mo ago
This isn't really a problem with video generation or AI in general. Sure, there is an aspect of ragebait to it, but the reality is that racism is extremely widespread. If it were not, this kind of content would not be so popular. The people at the very top of US government right now are white supremacists. I'm sorry that is not an exaggeration. There is another term that encompasses more of their worldviews which is not politically correct but is accurate.

Stop trying to blame technology for longstanding social problems. That's a cop out.

jazzyjackson•7mo ago
Granted that racism is not new, the infinite production of automated content drowning out any genuine human opinion is a harbinger of the internet to come.
trhway•7mo ago
It also allows automated production of positive content. The main issue here is: given a sea of good and a sea of bad content, where would the typical person go for a swim? Why do calls for empathy fall flat while inciting rage and hatred is so successful?
favflam•7mo ago
This situation is like southern China when the British decided to even up their trade deficit with Opium.
7402•7mo ago
It's entirely appropriate to blame a technology if the answer to the question, "Does this technology make a longstanding social problem worse or better?" is "It makes it worse."

There can be a follow-on discussion about what, if any, benefits are also provided by aforesaid technology.

ivape•7mo ago
I think it's fine to fingerprint AI generated images/videos. It's a massive privacy violation but I just can't see any other way. Too many people have always been and will always be unethical.
SchemaLoad•7mo ago
I've been wondering if ChatGPT makes such excessive use of em dashes just so people can easily identify AI-generated content.

Google wouldn't even need a fingerprint, they could just look up from their logs who generated the video.

oceanplexian•7mo ago
Google already admitted they are fingerprinting generative video and have a safety obsession so I guarantee they do it to their LLMs. Another reason is to pollute the output that folks like Deepseek are using to train derivative models.
IAmGraydon•7mo ago
The em-dash is one marker, but I’ve read that most LLMs create small but statistically detectable biases in their output to help them avoid reingesting their own content.
partiallypro•7mo ago
Eventually as models become cheaper, the big companies that would do this won't have control over newer generated content, so it's fairly pointless.
WillPostForFood•7mo ago
To what end? You want to fingerprint all AI images and video to catch people who make racist videos in order to do what? It isn't illegal. If TikTok doesn't like the content they can delete the video and the account. If Google or OpenAI doesn't want the content being created, they can figure out a way to block it, and delete the user's accounts in the meantime.

If I told you many 14 year olds were making very similar offensive jokes at lunch in high school, would you support adding microphones throughout schools to track and catch them?

ivape•7mo ago
> If I told you many 14 year olds were making very similar offensive jokes at lunch in high school

A picture is worth a thousand words. Me saying your mom is so fat that _______ in the lunchroom is different from me saying your mom is so fat in a cinematic video format that can go locally viral (your whole school). This is the first time in my life I'm going to say this is not a "history is echoing" situation. This is a "we have entirely gone to the next level, forget what you think you know" situation.

nvch•7mo ago
The question is, who is acting in a racist manner here: the LLM that does what it can, or the humans sharing those videos?
unsnap_biceps•7mo ago
Until we get an LLM that actually "thinks", it's just a tool like Photoshop. Photoshop isn't racist if someone uses it to create racist material, so an LLM wouldn't be racist either.
redundantly•7mo ago
LLMs can and do have biases. One wouldn't be far off calling an LLM racist.
ghushn3•7mo ago
I saw (on HN, actually) an academic definition for prejudice, discrimination, and racism that stuck with me. I might be butchering this a bit, but prejudice is basically thinking another group is less than purely because of their race. Discrimination is acting on that belief. Racism is discrimination based on race, particularly when the person discriminated against is a minority/less powerful person.

LLMs don't think, and also have no race. So I have a hard time saying they can be racist, per se. But they can absolutely produce racist and discriminatory material. Especially if their training corpus contains racist and discriminatory material (which it absolutely does).

I do think it's important to distinguish between Photoshop, which is largely built from feature implementation ("The paint bucket behaves like this", etc.), and LLMs, which are predictive engines that try to predict the right set of words to say based on their understanding of human media. The input is not some thoughtful set of PMs and engineers, it's "read all this, figure out the patterns". If "all this" contains racist material, the LLM will sometimes repeat it.

stuaxo•7mo ago
An LLM is a reflection of the biases in the data it's trained on, so it's not as simple as that.
AstroJetson•7mo ago
Just watched Mountainhead about this very topic. The AI videos were good enough to start wars, topple banking systems and countries.

It is very scary because the "tech-bros" in the movie pretty much mimic the actions of the real life ones.

eurleif•7mo ago
Somewhat related, on YouTube, there's a channel filled with fake police bodycam videos. The most-viewed of these are racially inflammatory, e.g.: https://www.youtube.com/watch?v=5AkXOkXNd8w

The description of the channel on YouTube claims: "In our channel, we bring you real, unfiltered bodycam footage, offering insight into real-world situations." But then if you go to their site, https://bodycamdeclassified.com/, which is focused on threatening people who steal their IP, they say: "While actual government-produced bodycam footage may have different copyright considerations and may be subject to broader fair use provisions in some contexts, our content is NOT actual bodycam footage. Our videos represent original creative works that we script, film, edit, and produce ourselves." Pretty gross.

RockRobotRock•7mo ago
These videos are insane, and the lack of "this is fake" comments is disheartening.
tgsovlerkhgsel•7mo ago
I assume those just get deleted?
Lockal•7mo ago
I scrolled down and don't see a single comment with the word "fake" (but a lot of comments like "not real") - the channel owner probably automatically shadowbans all users who write "fake".
fakedang•7mo ago
The top comment in that link GP shared was literally 'Impersonating a police officer is a crime.'
RockRobotRock•7mo ago
Their other videos are better at riding the line of believable, with far fewer comments calling it out unless you dig into the replies.
BLKNSLVR•7mo ago
I've seen fewer than a handful of videos, usually Shorts, on YT purporting to be body-cam footage, but they all seem too well-framed and fairly obviously scripted / staged / fake to me, because I'm actually paying attention to the 'environment', not just the 'action'.

But I doubt most doomscrollers would notice that in their half-comatose state.

It IS real, unfiltered bodycam footage. From an actor, following a script, in front of one or many other actors, also following scripts. I think that's how they get away with it, they don't specify it's bodycam footage from actual law enforcement. Yes, gross.

hombre_fatal•7mo ago
Dang, police bodycam videos are my guilty pleasure when I'm working out and just want dumb stimulation to pass the grind.

Definitely have watched enough videos from this channel to recognize its name. :(

moritzwarhier•7mo ago
The website you link (disgusting people) has apparently changed.

> For Content Thieves (Warning)

> If you are currently using Body Cam Declassified content without [...]

> You are in violation of copyright law and will be subject to legal action

[...]

> We aggressively pursue legal remedies against content theft, including statutory damages of up to $150,000 per infringement under U.S. [...]

> An additional administrative fee of $2,500 per infringing video will be assessed

> We demand all revenue generated from the unauthorized use of our content

> We maintain relationships with copyright attorneys who specialize in digital media infringement

> We recommend removing the infringing content immediately and contacting us regarding settlement options

A paragraph about the videos being fake is still there.

> While actual government-produced bodycam footage may have different copyright considerations and may be subject to broader fair use provisions in some contexts, our content is NOT actual bodycam footage.

> Our videos represent original creative works that we script, film, edit, and produce ourselves.

> As privately created content (not government-produced public records), our videos are fully protected by copyright law and are NOT subject to the same fair use allowances that might apply to actual police bodycam

> The distinction means our content receives full copyright protection as creative works, similar to any other professionally produced video content.

This reminds me of a non-AI content mill business strategy that has been metastasizing for years: people who film homeless people and drug addicts and build whole Insta and YouTube channels monetizing it, either framed as "REAL rough footage from city XY" or even openly mocking helpless people. The latter seems to be more common on TikTok, and I'm not watching "original" videos of such shite.

There is a special place in hell for people who do such things and, in my opinion, there should be laws with very harsh punishments for the people that "create" this trash and make money from it. When it's about the filming of real people without their consent, we really need laws that make it possible to punish the people who do this, because the victims are not likely to defend themselves.

And in total, the whole strategy is to worsen societal division and tensions, and feed bad human instincts (voyeurism, superiority complex) in order to funnel money into the pockets of parasites without ethics.

eurleif•7mo ago
I don't think it's changed. Note that my first quote, claiming it's real bodycam footage, is from the description they wrote for the YouTube channel, not from the site. The second quote, saying it's not actual bodycam footage, is the one from the site, and that's still there.
moritzwarhier•7mo ago
I'm sorry (head -> table), I simply overlooked this part of your comment:

> But then if you go to their site, https://bodycamdeclassified.com/, which is focused on threatening people who steal their IP

I was on the go and not reading properly, sorry

prvc•7mo ago
None of the examples shown in the video are passable hoaxes. They are all obvious burlesque-style parodies, albeit made in bad taste. They all also have clear and prominent hallmarks of AI generation. Anyone fooled by these has got bigger, prior problems than any potential belief instilled by these videos.
ghushn3•7mo ago
The problem is not that they are fooling anyone. No one thinks a woman is marrying a chimpanzee. The problem is that the videos are obviously and openly racist and being spread quite brazenly.

If I have to encounter a constant barrage of shitty racist (or sexist, or homophobic, or whatever) material just to exist online, I'm going to pretty quickly feel like garbage. (If not feel unsafe.) Especially if I'm someone who has other stressors in their life. Someone who is doing well, their life otherwise together, might encounter these and go, "Fucking idiots made a racist video, block."

But empathize with someone who is struggling? Who just worked 18 hours to make ends meet to come home and feed their kids and pay rent for a shitty apartment that doesn't fit everyone, and their kid comes up to them asking what this video means, and it just... gets past all their barriers. It wedges open so many doubts.

This isn't harmless.

linotype•7mo ago
People have to stop watching this trash. And I mean all of TikTok.
ghushn3•7mo ago
Some of TikTok is great. I mean, most of it is just dopamine hits, and it's potentially quite bad from a health perspective. But also, plenty of TikTok is news, or political theory, or thoughtful commentary, or explanations of how things work.

It's a bowl of fun size candy bars, with a few razors, a few drugs, a few rotten apples, etc. mixed in. You can, by and large, get the algorithm to serve you nothing but the candy, but you are still eating only candy bars at that point.

Some people can say no to infinite candy. Other people, like myself, cannot and it's a real problem.

jrflowers•7mo ago
The interesting thing about this is that it is the use case for these video generators. If the point of these tools is to churn out stuff to drive engagement, and the best way to do that is through content that is inflammatory, offensive, or misinformation, then that’s the ideal use case for them. That’s what the tool is for.
no_time•7mo ago
The spirit airlines video with the smoke alarm chirp is perfect though.
thefz•7mo ago
Luckily, sane US senators just rejected a 10-year ban on state-level AI regulation.