
AI Ethics is being narrowed on purpose, like privacy was

https://nimishg.substack.com/p/ai-ethics-is-being-narrowed-on-purpose
76•i_dont_know_•2h ago

Comments

i_dont_know_•2h ago
I keep seeing "AI ethics" being redefined to focus on fictional problems instead of real-world ones, so I wrote a little post on it.
bayindirh•1h ago
Great little post. Congrats.

Also there's the ethics of scraping the whole internet and claiming that it's all fair use, because the other scenario is a little too inconvenient for all the companies involved.

P.S.: I expect a small thread telling me that it's indeed fair use, because models "learn and understand just like humans", and "models are hugely transformative" (even though some licenses say "no derivatives whatsoever"), "they are doing something amazing so they need no permission", and I'm just being naive.

dale_glass•1h ago
Worrying about that stuff is just a waste of time. Not because of what you said, but because it's all ultimately pointless.

Unless you believe this will kill AI, all it does is create a bunch of data brokers.

Once fees are paid, data is exchanged, and models are trained, if AI was going to take your programming/drawing/music job, it still does. We arrive at the same destination, only with more lawyers in the mix. You get to enjoy unemployment knowing only that lawyers made sure they at least didn't touch your cat photos.

bayindirh•54m ago
The thing is, if you can make sure that your images/music/code aren't used for AI training, then you can be sure that you can continue doing what you do, because your personal style enables the specialty you create.

Maybe you will lose some of your "territory" in the process, but what makes you, you, will be preserved. Nobody will be able to ask "draw me a comic with this dialogue in the style of $ARTIST$".

dale_glass•45m ago
> The thing is, if you can make sure that your images/music/code aren't used for AI training, then you can be sure that you can continue doing what you do, because your personal style enables the specialty you create.

Personal styles are a dime a dozen and of far lesser importance than you think.

Professionals will draw in any style; that's how we make things like games and animated movies. Even assuming you had some unique and incredibly valuable style, all it'd take to copy it completely legally is finding somebody else willing to copy your style to provide training material, and training on that.

bayindirh•41m ago
> Personal styles are a dime a dozen and of far lesser importance than you think.

Try imitating Mickey Mouse, Dilbert, Star Wars, Hello Kitty, XKCD, you name it.

Randall will possibly laugh at you, but a company with lawyers that happens to draw cartoons won't be amused, and will come after you in any way it can.

> Professionals will draw in any style...

Yep, after calling and getting permission and possibly paying you some fees if you want. There's respect and dignity in that process.

Yet we reduce everything to money, treating machine code like humans and humans like coin-operated vending machines.

There's something wrong here.

dale_glass•30m ago
> Try imitating Mickey Mouse, Dilbert, Star Wars, Hello Kitty, XKCD, you name it.

Those are not styles; they're characters, for the most part.

You absolutely can draw heavy inspiration from existing properties, mostly so long as you avoid touching the actual characters. Like D&D has a lot of Tolkien in it, and I believe the estate is quite litigious. You can't put Elrond in a D&D game, but you absolutely can have "Elf" as a species that looks nigh-identical to Tolkien's descriptions.

As for style imitation, it's long been a thing to make more anime-ish animation in the West, and anime itself came from Disney.

> Yep, after calling and getting permission and possibly paying you some fees if you want.

Not for art styles, they won't. Style is not copyrightable.

jeppester•1h ago
Sometimes AI is "just like a human", other times AI is "just a machine".

It all depends on what is most convenient for avoiding any accountability.

JackFr•58m ago
IP is a pragmatic legal fiction, created to reward developers of creative and innovative thought, so we get more of it. It’s not a natural law.

As such, fair use is whatever the courts say it is.

bayindirh•49m ago
Then let's abolish all of them. Patents, copyrights, everything. Let's mail Getty, Elsevier, car manufacturers, chemical plants, software development giants, and small startups to tell them that everything they have has no protection whatsoever...

Let us hear what they think...

I'm for the small fish here: people who put things out for pure enjoyment, expecting nothing but a little respect for the legal documents they attach to the wares they made meticulously, wares which enable much of the infrastructure that lets you read this very comment, for example.

The current model rips off the small fish and forcibly feeds the bigger ones, creating inequality. There are two ways to stop this: the bigger fish respect the smaller fish because everybody is equal before the law (which will not happen), or we abolish all protections and make the bigger fish vulnerable to the small ones (again, which will not happen).

Incidentally, I'm also here for the bigger fish that put their wares under source-available, "look but don't use" licenses. They are hosed just as badly.

I see the first one as a more viable alternative, but alas...

P.S.: Your comment gets two points. One for deflection (the "it's not natural law" argument), and another for the "but it's fair use!" clause. If we argue that only natural laws are laws, we'll have some serious fun.

i_dont_know_•55m ago
Thanks! Yeah, there's a lot of "well, it's 'standard practice' now so it can't be wrong" going on in so many different ways here too...
grues-dinner•1h ago
Yes, all this highly public hand-wringing about "alignment" framed in terms of "but if our AI becomes God, will it be nice to us" is annoying. It feels like it's mostly a combination of things. Firstly, by play-acting that your model could become God, you instill FOMO in investors who see themselves missing the hyper-lucrative "we literally own God and ascend to become its archangels" boat. Secondly, you look like you're taking ethics seriously, which deflects regulatory and media interest. And thirdly, it's a bit of fun sci-fi self-pleasure for the true believers.

What the deflection is away from is that the actual business plan here is the same one tech has been running for a decade: welding every flow and store of data in the world to their pipelines, mining every scrap of information that passes through, giving themselves the ability to shape the global information landscape, and then selling that ability to the highest bidders.

The difference with "AI" is that they finally have a way to convince people to hand over all the data.

Levitz•35m ago
It's interesting how our experiences seem to differ completely. For example, regarding people's concerns about AI ethics, you write:

>People are far more concerned with the real-world implications of ethics: governance structures, accountability, how their data is used, jobs being lost, etc. In other words, they’re not so worried about whether their models will swear or philosophically handle the trolley problem so much as, you know, reality. What happens with the humans running the models? Their influx of power and resources? How will they hurt or harm society?

This is just not my experience at all. People do worry about how models act, because they infer that models will eventually be used as a source of truth, and because they already get used as a source of action. People worry about racial makeup in certain historical contexts[1]; people worry when Grok starts spouting Nazi stuff (hopefully I don't need a citation for that one) because they take it as a sign of bias in a system with real-world impact: if ChatGPT happens to doubt the Holocaust tomorrow, then when little Jimmy asks it for help with an essay he will find a whole lot of white supremacist propaganda. I don't think any of this is fictional.

I find the same issue with the privacy section. Yes, concerns about privacy are primarily about sharing that data, precisely because controlling how that data is shared is a first, necessary step towards being able to control what is done with it. In a world in which my data is taken and shared freely, I don't have any control over what is done with that data, because I have no control over who has it in the first place.

[1] https://www.theguardian.com/technology/2024/mar/08/we-defini...

i_dont_know_•21m ago
Thanks for the perspective. For me I think it's a matter of degree (I guess I was a bit "one or the other" when I wrote it).

These things are also concerns and definitely shouldn't be dismissed entirely (especially things like AI telling you when it's unsure, or the worst cases of propaganda), but I'm worried about the other stuff I mention being defined away entirely, the same way I think it has been with privacy. There's tons more to say on the difference between "how you use" vs. "how you share", but good perspective, and interesting that you see the emphasis differently in your experience.

mitthrowaway2•1h ago
It seems to me that this article is the one equivocating between "ethics" and "safety". The latter is of course a narrow subset of the former, as there are many ethics issues that are not safety issues.
JackFr•1h ago
Safety is about unintentional harm to yourself or others. Ethics largely concerns itself with intentional behavior.
bo1024•50m ago
You might not be aware of the context (actually, the author of the article might not be either). There has in fact been a big push by major AI companies to focus on "safety" while marginalizing (not citing, not giving attention to, etc.) the people focusing on what those companies call "ethics".

For example, from Timnit Gebru:

> The fact that they call themselves "AI Safety" and call us "AI Ethics" is very interesting to me.

> What makes them "safety" and what makes us "ethics"?

> I have never taken an ethics course in my life. I am an electrical engineer and a computer scientist however. But the moment I started talking about racism, sexism, colonialism and other things that are threats to the safety of my communities, I became labeled "ethicist." I have never applied that label to myself.

> "Ethics" has a "dilemma" feel to it for me. Do you choose this or that? Well it all depends.

> Safety however is more definitive. This thing is safe or not. And the people using frameworks directly descended from eugenics decided to call themselves "AI Safety" and us "AI Ethics" when actually what I've been warning about ARE the actual safety issues, not your imaginary "superintelligent" machines.

https://www.linkedin.com/posts/timnit-gebru-7b3b407_the-fact...

i_dont_know_•44m ago
True... I was trying to define them the way (I think) companies are defining them (like what their alignment teams are looking at) and the way it's reported. I think in these specific contexts they're used with overlap but yeah I do bounce back and forth a bit here.
lr4444lr•1h ago
AI ethics are like nuclear ethics: the incentive to break them is too powerful without every major player becoming a signatory to some agreement with consequences that have teeth.
ragnot•1h ago
If you have the time, check out the show "Pantheon" (it should be on Netflix). It goes into this, and how AI ethics effectively goes out the window when the reward for breaking it is nation-dominating power.
Isamu•1h ago
This has been happening for a long time. I first noticed it with the hand-waving dismissals of older concepts like Asimov's laws.

Not a carefully reasoned argument for why "not causing harm to a human" is outmoded, but just pushing it aside. I would love to see a good, reasoned argument there.

No, instead there is avoidance of talking about harm to humans at all. Just because "harm" is broad doesn't get you out of having to talk about it and deal with the risks, which is at the root of engineering.

blibble•1h ago
> I would love to see a good reasoned argument there.

"we want money from selling weapons"

JackFr•1h ago
Not hand-waving: Asimov's three laws are not a good framework. My claim is that the whole point of them was so that Asimov could write entertaining stories about the ambiguities and edge cases of the three laws.
qualeed•1h ago
This is a pretty good example of what parent comment was referencing, I think.

You say "Asimov's three laws are not a good framework", then don't present any argument for why they are not a good framework. Instead you bring up something separate: that the framework can facilitate story writing.

It could be good for story writing and a good framework; those two aren't mutually exclusive. (I'm not arguing whether it is a good framework or not, I haven't thought about it enough.)

Isamu•47m ago
Right, in particular Asimov is not presenting a detailed framework of any kind.

His laws are constraints; they don't talk about how to proceed. It's assumed that robots will work toward the goals given to them, but what are the constraints?

People now who want to talk about alignment seem to want to avoid talk of constraints.

Because people themselves are not aligned. Pushing alignment avoids the issue that alignment is vague, and the only close alignment we can be assured of is alignment with the goals of the company.

Sharlin•29m ago
The burden of proof is obviously on anyone who wants to argue that the three laws are, in fact, a good, solid framework for robot ethics. It's pretty astonishing that the three laws are taken by anyone as some sort of canonical default framework.

Asimov was not in the "try to come up with a good framework for robot ethics" business. He was in the business of trying to come up with some simple, intuitive idea that didn't require readers to have a degree in ethics, and that was broken and vague enough to have plenty of counterexamples to make stories about.

In short, Asimov absolutely did not propose his framework as an actually workable one, any more than, say, Atwood proposed Gilead as a workable framework for society. They were nothing but story premises whose consequences the respective authors wanted to explore.

qualeed•26m ago
>The burden of proof is [...]

Sometimes we can just talk about things without having to pretend we're in a court of law or defending our PhD thesis.

The original commenter wasn't asking anyone to prove anything, or trying to prove anything themselves. They just observed that some conversations get hand-waved away.

krapp•27m ago
The most obvious evidence that Asimov's three laws are not a good framework is the fact that they are not a framework; they are a plot device. There is no "engineering" involved. Isaac Asimov was a professor of biochemistry; he had no clue how robots or AI might actually work. The robots in his stories have "positronic brains" because positrons were newly discovered at the time and sounded cool.

They aren't simply "good for story writing"; their entire narrative purpose is to be flawed, and to fail in entertaining ways. The specific context in which the three laws are employed in the stories is relevant, because they are a statement by the author about the hubris of applying overly simplistic solutions to moral and ethical problems.

And the assumptions that the three laws are based on aren't even relevant to modern AI. They seem to work in-universe because the model of AI at the time was purely rational, logical, and strict, like Data from Star Trek. They fail because robots find logical loopholes that violate the spirit of the laws while still technically complying with them. It's essentially a math problem rather than a moral or ethical one, whereby the robots find a novel set of variables letting them balance the equation in ways that lead to amoral or immoral consequences.

But modern LLMs aren't purely rational, logical, and strict. They're weird in ways no one back in Asimov's day would have expected. LLMs (appear to) lie, prevaricate, fabricate, express emotion, and exhibit numerous other behaviors that would have been considered impossible for any hypothetical AI at the time. So even if the three laws were a valid framework for the kinds of AI in Asimov's stories, they wouldn't work for modern LLMs, because the priors don't apply.

qualeed•18m ago
This would probably be better suited under the original comment so that the original commenter has a better chance of seeing/reading it.
morsecodist•6m ago
I think it's fair to point out that they were never intended to be a good framework for aligning robots and humans. Even in his own stories they lead to problems. They were created precisely to make the point that encoding these things in rules is hard.

As for practical problems, they are extremely vague. What counts as harm? Could a robot serve me a burger and fries if that isn't good for my health? By the rules they actually can't even passively allow me to come to harm, so should they stop me from eating one? They have to follow human orders, but which human's? What if orders conflict?

myrmidon•59m ago
I think a big factor in Asimov's laws specifically being sidelined is that the whole process of building AI looks very different from what we pictured back then.

Instead of programming the AIs by feeding them lots of explicit, hand-crafted rules and instructions, we're feeding them plain data, and the resulting behavior is much more black-box, less predictable, and less controllable than anticipated.

Training LLMs is closer, conceptually, to raising children than to implementing regexp parsers, and the whole "small, simple set of universal constraints" approach is just not really applicable/useful.

scotty79•54m ago
It might make a comeback when we finally get good at teaching AI what's real and what's imagined, and also at logical reasoning. I think it does moral evaluation of actions mostly well already (because humans are not great at it anyway). Then a rule like "don't harm humans" might suffice.
DSingularity•32m ago
I’m not sure we will ever get good at teaching them to distinguish reality from imagination. Feels like there are too many generative models pushing everything from fake songs to fake video clips.
halfmatthalfcat•21m ago
We can’t even do it ourselves. People live in their own “truth”.
salawat•33m ago
>Training LLMs is closer, conceptually, to raising children than to implementing regexp parsers, and the whole "small, simple set of universal constraints" approach is just not really applicable/useful.

That this can be said, and there still being so much doubt that we should ramp up the ethics research before going and rawdogging the implementation, just bloody bewilders me.

add-sub-mul-div•15m ago
> the whole process of building AI looks very different from what we pictured back then.

Right, and so do the harm risks. We need a framework centered around how humans will use AI/robots to harm each other, not how AI/robots will autonomously harm humans.

stephencanon•14m ago
Raising children involves a whole lot of simple constraints that you gradually relax.

“Don’t touch the knife” becomes “You can use _this_ knife, if an adult is watching,” which becomes “You can use these knives but you have to be careful, tell me what that means” and then “you have free run of the knife drawer, the bandages are over there.” But there’s careful supervision at each step and you want to see that they’re ready before moving up. I haven’t seen any evidence of that at all in LLM training—it seems to be more akin to handing each toddler a blade and waiting to see what happens.

danaris•53m ago
Asimov's Three Laws of Robotics were explicitly designed to be a good basis for fiction that shows how Asimov's Three Laws of Robotics break down.

Suggesting they be used as a basis for actual AI ethics is...well, it's not quite to the level of creating the Torment Nexus from acclaimed sci-fi novel "Don't Create the Torment Nexus", but it's pretty darn close.

jordanb•30m ago
It's kinda hilarious that people are explicitly trying to build a future based on (mostly dystopian) scifi, which was the point of the torment nexus thing. But then when scifi argues for constraints on technology the argument is "those are just stories."
krapp•24m ago
The argument isn't "those are just stories"; it's that "those stories demonstrate why the constraints won't work."

But people are going to try it anyway. Belief in Asimov's three laws is a matter of religious faith. Just know you've been warned.

nathias•1h ago
I don't know why people allow others to proclaim themselves "ethicists" if they have no relevant philosophical education. There are whole fields of "ethics" that are just PR departments trying to escape the now-bad connotations of "PR department".
District5524•26m ago
That reminds me of the new draft standard from CEN/CENELEC (the EU standards body) on "Competence requirements for professional AI ethicists": https://standards.cencenelec.eu/dyn/www/f?p=205:22:0::::FSP_...

But by the time they adopt it, the singularity will already have happened... For some reason, my instincts suggest there will be no MA in Philosophy needed.

tucnak•12m ago
It could be the case that the Wittgensteinians have won completely, and if that is indeed the case, a great chunk of academic ethics should be considered hubris...
blibble•1h ago
> If we give companies unending hype, near unlimited government and scientific resources, all of our personal data including thoughts and behavior patterns, how do we know their leaders will do what we want them to, and not try to subvert us and… take over the world? How do we know they stay on humanity’s side?

I've been saying this for a while

malevolent unaligned entities have already been deployed, in direct control of trillions of dollars of resources

they're called: Mark Zuckerberg, Larry Page, Elon Musk, Sam Altman

"AI" simply increases the scale of the damage they can inflict, given they'll now need far fewer humans to be involved in enacting their will upon humanity

positron26•1h ago
Distribute power or go home. Something missed by many in these conversations is the role of open source in raising the floor so that we have a gazillion companies that have more interest in there being a fair, predictable market than a winner-take-all market.
i_dont_know_•14m ago
Really good point on open source and the nudges it provides, and definitely a point that isn't made often enough!
real_marcfawzi•47m ago
Thank you, Nimish, for voicing this.

To those who are interested in supporting an Ethical AI Commons:

We're interested in contributing training data and training tools to an Ethical AI Commons. The tools automate the post-training (via RLVR) of open-source LLMs to bake in a moral critical-thinking framework and to train the model to act as an ethical arbiter, which entails identifying the harm done (by the perpetrator) and demanding and negotiating for repair (for the affected parties) and accountability (to prevent future harm). Anyway, the Ethical AI Commons requires more contributors than just us, so we are looking to connect with others who are working within the ethical AI ecosystem and who, by the very nature of this ecosystem, feel equally obligated to share their learnings and spread the knowledge.

More at:

https://theunderstoryapp.com

Slides #11 and #13 discuss our Ethical AI Commons proposal.

https://theunderstoryapp.com/lets-co-create/

akakajzbzbbx•17m ago
Humans can’t agree on what is ethical / safe, so I don’t get people trying to apply it to AI. Am I missing something big here?

Anytime I see a discussion framed as "ethics", my brain swaps "ethics" with "rules I think are good".