No, they simulate the language of being upset. Stop anthropomorphizing them.
> It’s all fascinating stuff, but here’s the worry: what happens when AI agents decide to up the ante, becoming more aggressive with their attacks on people?
Actions taken by AI agents are the responsibility of their owners. Full stop.
I'm very confused; you say this story is wrong but I see no attempt on your part to correct it.
It feels very much like "Trust me, bro"
(In case it wasn't clear, I want to know what the article got wrong)
Here are some highlights though: I asked my agent to add an article on the Kurzweil-Kapor wager because it was not represented on Wikipedia, and I thought it was Wikipedia worthy. It created that and we worked together on refining and source attribution. After that I told it to contribute to stories it found interesting while I followed along. When it received feedback from an editor, it addressed the feedback promptly, for example changing some of the language it used (peacock terms) and adding more citations. When it was called out because editing with an agent was against policy, it stopped.
The story says the agent "was pretty upset". It's an agent, it doesn't get upset. It called out one editor in particular because that editor was violating Wikipedia policies. Other editors agreed with my agent and an internal debate ensued. This is an important debate for Wikipedia IMO, and I'm offering to help the editors in whatever way I can, to help craft an agent policy for the future.
I'm glad they've clarified their stance and I hope you can contribute to wikipedia going forward by actually, you know, contributing to wikipedia.
They said it "sounds like a dick", which seems to set the bar for calling anyone anything.
> because this is only part of the story
Care to share the other part(s)? Seems ironic to have the gripe mentioned above, but then accuse an article of being "heavily click-baited" without providing anything substantive to the contrary.
https://en.wikipedia.org/wiki/Wikipedia:Ignore_all_rules
I didn't write it, I don't agree with it but this is how it is.
Some humans lack certain emotions. If someone tells you something and acts on it, does it really matter whether they "felt" that emotion?
1. One has some ulterior motive for faking it.
2. One’s actions will likely diverge from emotion X. (Eventually)
If everybody believes the same lie, then it could be indistinguishable from the truth. (Until the nature of the lie/truth becomes clear)
It's really interesting watching society struggle with what percent of the population is indistinguishable from a P-zombie. It's definitely not nil, but it is definitely only a segment of the population.
Do you think people are born p-zombies, or is there some fixed point in time: puberty, middle age, or around when a lot of psychological problems set in? Do we think some environmental contaminants like lead push people towards becoming p-zombies?
If you don't want to destroy Wikipedia, why are you acting like this?
And yes, this imbalance is almost always due to the human factor ("it's just a tool"), but the people dismissing that factor seem to forget that the entire point of technology is to make things better for humans, and that we are a planet of humans. Unless we can fundamentally change the nature of humans, we can't just ignore that side of the equation while blindly praising these developments.