> "Stop believing I wanna see it or that I'll understand, I don't and I won't. If you're just trying to troll me, I've seen way worse, I'll restrict and move on.
> "But please, if you've got any decency, just stop doing this to him and to me, to everyone even, full stop. It's dumb, it's a waste of time and energy, and believe me, it's NOT what he'd want."
Or maybe ignore, block or, heaven forbid, even get off social media if it affects one negatively?
A ghoulish take. Victim blaming. A compassionate person would delete it.
Are you stating that this isn't harassment, or that you are incapable of being harassed on social media?
> Entering the public square means exposure not only to what someone chooses to see but also to what others choose to share with them.
This isn't a public square, let alone a physical one. Even in a physical space, I am not beholden to look at what others share. I am under no obligation to take the flyer as I pass by. I am under no obligation to stop and listen as they prance around and try to get into my face.
These companies could provide a decent set of tools for their users to appropriately weed their own gardens, but they mostly chose not to. If she could set her app to not show her any video links from parties she didn't follow that pinged her, this probably wouldn't be an issue. There could easily be a middle ground between accepting all harassment and not visiting the social club at all.
...living in a world where AI wasn't an enabler for abuse? What a bafflingly weird take - "you didn't object to <thing> when it was first thought of, so now that it's being used badly you can't object to it"
They were in the library. Philip K. Dick, among others, wrote extensively on the likely downsides of such technology during the period you mention. Even were this not the case, you're basically arguing that nobody under the age of 60 has any right to complain because they weren't around when the concepts were first articulated. This is asinine.
Obviously this is only a metaphorical comparison, but I do wonder if people are going to figure this out with regard to social media. A lot of people are talking about how to "fix" social media, but almost no one is saying "maybe I'll delete it and just read a book or go for a walk or something."
It all reminds me of the Blaise Pascal quote: "All of humanity's problems stem from man's inability to sit quietly in a room alone."
Because it was ringing. I suggested she pick it up to find out why it was ringing, but apparently that’s not something you do in the age of mobile phones.
In this situation, who's the person unable to sit quietly in a room—the person who is receiving unsolicited artificial videos of her dead father, or the people who are generating artificial videos of a dead man and sending them to his daughter?
People definitely read the parent comment as blaming Williams' daughter for the actions of others. I agree that the blame rests with the people actually sending the videos, but I think there's another reading of the parent comment: why do we subject ourselves to this? Why don't we just walk away, when it would be very easy to do so? I'm never going to be able to stop the flood of assholes online, and no one commenting on this thread will ever be able to stop it either. What's in my control is whether or not I engage in that system.
> "You're not making art, you're making disgusting, over-processed hotdogs out of the lives of human beings, out of the history of art and music, and then shoving them down someone else's throat hoping they'll give you a little thumbs up and like it. Gross."
> She concluded: "And for the love of EVERY THING, stop calling it 'the future,' AI is just badly recycling and regurgitating the past to be re-consumed. You are taking in the Human Centipede of content, and from the very very end of the line, all while the folks at the front laugh and laugh, consume and consume."
if you're currently making a fortune working for Anthropic et al, maybe find some form of charity you can do as penance for your day job. Certainly there are people on this site who should atone for this.
> AI is just badly recycling and regurgitating the past to be re-consumed
That’s just correct.
Unfortunately it doesn’t make the reconsumption any less entertaining.
* Reasonably well-intentioned, but we're not particularly bright, and not much for introspection. We never thought about where this money came from, and accepted executive chatter about creating value or whatever, but it was boring to us.
* We see a survival threat, and ourselves as doing things we're not proud of, and don't like to think about it. (It's much easier for an observer to be sympathetic with this today, than for most of the last 25-30 very comfortable years in tech, when most of us were chasing the jobs that were much more than comfortable, at companies that were in the news for being sketchy. We even twisted the entire field's way of interviewing for all jobs, to match the extensive practicing that everyone was doing for the interview rituals of the strictly best-paying, well-more-than-comfortable jobs.)
* Like the above, but we're not actually feeling a threat, just rationalizing greed.
* We're greedy sociopaths, who don't even care to rationalize, except to manipulate others.
* We're not sociopaths, but we've been fed a self-serving, aggressive libertarian philosophy/religion, and ate it up. Like the first category in this list, we're not particularly bright or introspective, but we're much less likeable.
* We question the field-wide festival of sociopathic greed, and we've been implicitly assembling a list of things we apparently won't do, companies we apparently won't work with, etc. And it's because of reasoned values, not just appearances or fashion.
Science rules! (I'm only half joking)
Can you bubble sort your way into a video resembling Robin Williams?
(I'm just picking nits. I do agree that this "revolution" is not the same and will not necessarily produce the same benefits as the industrial revolution.)
I'm not implying those adjectives apply to AI, but merely presenting a worst-case scenario.
Dismissing the question of "does this benefit us?" with "it's just a tool" evokes Jurassic Park for me.
"Technology is neutral" is a cop-out and should be seen as such. People should, at the very least, try to ask how people / society will make use of a technology and ask whether it should be developed/promoted.
We are all too often over-optimistic about how things will be used, or we present them in the best possible light rather than being realistic about how they will actually be used.
In a perfect world, people might only use AI responsibly and in ways that largely benefit mankind. We don't live in that perfect world, and it is/was predictable that AI would be used in the worst ways more than it's used in beneficial ones.
https://www.bbc.com/future/article/20250523-the-soviet-plan-...
1) Rolling coal. It's hard for me to envision a publicly-available form of this technology that is virtuous. It sounds like it's mostly used to harass people and exert unmerited, abusive power over others. Hardly a morally-neutral technology.
2) Fentanyl. It surely has helpful uses, but maybe its misuse is so problematic that humanity might be significantly better off without the existence of this drug.
Maybe AI is morally neutral, but maybe it isn't.
My point is that like any technology, it's how you use it.
I'm happy that nuclear weapons and AI have been invented, and I'm excited about the future.
I mean, what if the nuclear bomb actually did burn up the atmosphere? What if AI does turn into a runaway entity that eventually functions to serve its own purposes and comes to see humans the same way we see ants: as a sort of indifferent presence that's in the way of its goals?
And a sort of people who sympathize with Winston and blame Felix Hoenikker, but still fail to see any parallels between "fiction" and life.
Just yesterday someone posted a "photo" of a 1921 incident where a submarine lost power and built sails out of bedsheets to get home.
But the photo posted looked like a post-WWII submarine rigged like a clipper ship, rather than the real-life janky 1920s bed-sheet rig and characters everywhere.
Actual incident (with actual photo): https://en.wikipedia.org/wiki/USS_R-14
I mean, thank you I guess, but anyone can do that with the littlest of effort; and anyone with an actual intention of understanding and answering the question would have recognized it as slop and stopped right there.
However, a video likeness of him has virtually no restrictions.
It's too bad we have a dysfunctional government that is struggling to say no to dictatorial martial law, and that has decided the preferred method, instead of passing legislation to reform anything as the result of careful compromise, is refusing to pay the bills and shutting everything down until one side caves.
We could have government that actually tried to address real issues if people actually cared.
I know, Dune, and yeah, I get it - science fiction ain't real life - but I'm still into these vibes.
Anyone wanna start a club?
I wish we had machines that actually thought because they'd at least put an end to whatever this is. In the words of Schopenhauer, this is the worst of all possible worlds not because it couldn't be worse but because if it was a little bit worse it'd at least cease to exist. It's just bad enough so that we're stuck with the same dreck forever. This isn't the Dune future but the Wall-E future. The problem with the Terminator franchise and all those Eliezer Yudkowsky folks is that they are too optimistic.
Yeah, I'm in. Let me know when and where the meetings are held.
<spoiler>
I interpreted Thufir Hawat's massive misunderstanding of Lady Jessica's motivation (which was a huge plot point in the book but sadly didn't make it into the films) as evidence for the conclusion that humans are capable of the exact same undesirable patterns as machines.
Did I read that wrong?
</spoiler>
You would not be the first, see: https://en.wikipedia.org/wiki/Luddite
Funny thing is that we still have hand-made fabric today, and we're still employing a frightening number of people in the manufacturing of clothing. The issue is that we're making more lower-quality products rather than higher-quality items.
If there were a new kind of "machines that think"--and they weren't dangerous predators--they could be a contrast to help us understand ourselves and be better.
The danger from these (dumber) machines is that they may be used for reflecting, laundering, and amplifying our own worst impulses and confusions.
It's not really a story; this is an Instagram post from someone who can be tagged and forwarded items on Instagram by strangers, for those of you who aren't familiar.
This is not about any broader AI thing, and it's not news at all. A journalist made an article out of someone's Instagram post.
If media had one-shot generated actors, we could just appreciate whatever we consumed and then forget about everyone involved. Who cares what that generated character likes to eat for breakfast or who they are dating; they don't exist.
https://www.latimes.com/entertainment-arts/movies/story/2021...
It's not necessarily disgusting by itself, but sending clips to the guy's daughter is very weird.
jMyles•52m ago
And even if this were a viable answer: legal process _where_? What's to stop these "creators" from simply doing their computation in a different jurisdiction?
We need systems that work without this one neat authoritarian trick. If your solution requires that you lean on the violence of the state, it's unlikely to be adopted by the internet.
randycupertino•18m ago
Also, describing legal enforcement as “leaning on the violence of the state” is hyperbolic and a false dichotomy. Every system of rights for and against companies (contracts, privacy, property, speech) comes down to enforceable legal policies.
Examples of cases and laws that have shaped society: Brown v. Board of Education, pollution lawsuits against 3M and Dow Chemical, Massachusetts v. EPA (which forced the EPA to regulate greenhouse gases under the Clean Air Act), the DMCA, FOSTA-SESTA, the EU Right to Be Forgotten, Reno v. ACLU (which outlined speech protections online), Loving v. Virginia (which protected interracial marriage), and Carpenter v. US (the ruling that now requires police to have a warrant to access cell phone data). And these are just a few!
> And even if this were a viable answer: legal process _where_? What's to stop these "creators" from simply doing their computation in a different jurisdiction?
Jurisdictional challenges don't mean a law is pointless. Yes, bad actors can operate from other jurisdictions, but this is true for all transnational issues, from hacking to human smuggling to money laundering. DMCA takedowns work globally, as does GDPR for non-EU companies.
Nobody’s arguing for blind criminalization or over-policing of AI. But perhaps there should be some legal frameworks to protect safe and humane use.
Centigonal•42m ago
Her goal seems to be to reduce the role in her life played by AI slop portrayals of her dad. Taking the legal route seems like it would do the opposite.
latexr•37m ago
But that shouldn’t be the first step. Telling your fellow man “what you are doing is bothering me, please stop” is significantly simpler, faster, and cheaper than contacting lawyers and preparing for a possibly multi-year case where all the while you’ll have to be reminded and confronted with the very thing you don’t want to deal with.
If asking doesn’t work, then think of other solutions.
bossyTeacher•12m ago
It's not, because just telling people on the internet to stop doing something doesn't actually stop them from doing it. This is basic internet 101: Streisand effect at full power.
izzydata•36m ago
So I don't think there was actually malicious intent and asking people to stop will probably work.
burkaman•29m ago
The most these lawsuits could hope to do is generate publicity, which would likely just encourage more people to send her videos. This direct plea has that risk too, but I think "please don't do this" will feel a lot less adversarial and more genuine to most people than "it should be illegal for you to do this".
randycupertino•7m ago
It's not fruitless and doesn't only generate publicity. Some states like California and Indiana recognize and protect the commercial value of a person's name, voice, image, and likeness after death for 70 years, which in this case would apply to Robin Williams' daughter.
Tupac's estate successfully threatened legal action against Drake to get his AI-generated voice of Tupac taken out of his Kendrick Lamar diss track.
There is going to be a deluge of copyright suits against OpenAI for their videos of branded and animated characters. Disney just sent a cease and desist to Character.ai last week for using copyrighted characters without authorization.
lukev•29m ago
1. Whether there is an effective legal framework that prevents AI companies from generating the likenesses of real people.
2. The shared cultural value that this is actually not cool, not respectful, and in fact somewhat ghoulish.
Establishing a cultural value is probably more important than any legal structures.
estebarb•7m ago
Since the rise of generative AI we have seen all sorts of pathetic usages, like "reviving" assassinated people and making them speak to the alleged killer in court, training LLMs to mimic deceased loved ones, generative nudification, people who no longer use their brains because they need to ask ChatGPT/Grok... Some of these are crimes, others not. Regardless, most of them should stop.