https://news.ycombinator.com/item?id=44912783
They actually described the methodology at least (note: I also haven’t fully read the paper yet, but I wanted to post in support of you not having a “take” yet, haha).
Unsolicited advice for all: make an effort to hold onto your manners even with the robots or you'll quickly end up struggling to collaborate with anyone else.
It's sadly become quite common on internet forums to suppose that some post or comment was written by AI. It's probably true in some cases, but people should ask themselves how the cost/benefit of calling it out looks.
The easiest way to protect myself these days is to assume the worst about all content. Why am I replying to a comment in that case? Consider it a case of yelling into the void.
2. A bot-generated image is not a record of photon-emissions in the physical world. When I look at photos, they need to be records of the physical world, or they're a creative work.
I think you can't rationally apply the same standard to these two things.
AI users aren’t investing actual work and can generate reams of bullshit that put the burden on others to untangle. And they also aren’t engaging in good faith.
Rhetoric is the model used in debate. Proponents don't expect to change their Opponent's mind, and vice versa. In fact, if your opponent is obstinate (or a non-sentient text generator), it is easier to demonstrate the strength of your position to the gallery.
People reference Brandolini's "bullshit asymmetry principle" but don't differentiate between dialectical and rhetorical contexts. In a rhetorical context, the strategy is to demonstrate to the audience that your interlocutor is generating text with an indifference to truth. You can then pivot, forcing them to defend their method rather than making you debunk their claims.
In classical forums arguments are often some form of stamina contest and bots will always win those.
But yeah, it is like a troll accusation.
The historical meaning of the word 'hominem' isn't crucial to the universal logical principle of 'ad hominem'. If xenoorganisms beneath the ice-sheets of Titan are dismissing each other's ideas out of hand, they too may be committing this fallacy. The fallacy is the rejection of an argument based on its source rather than its content.
Are bots really infiltrating HN and making constructive non-inflammatory comments? I don't find it at all plausible but "that's just what they want you to think".
For example, in one study, they divide participants into two groups, have one group watch https://www.youtube.com/watch?v=fn3KWM1kuAw (which highlights the high socio-emotional capabilities of a robot), while the other watches https://www.youtube.com/watch?v=tF4DML7FIWk (which highlights the low socio-emotional capabilities of a robot).
They are then asked if they agree or disagree with a (presumably hypothetical?) company's proposal to reduce employees' welfare, such as replacing a meal with a shake. The two groups showed different preferences.
This makes me think about that old question of whether you thank the LLM or not. That is treating LLMs more like humans, so if what this paper found holds, maybe that'd nudge our brain subtly toward dehumanizing other real humans!? That's so counterintuitive...
Also, interesting bit: "Participants in the high (vs. low) socio-emotional capability condition showed more negative treatment intentions toward employees"
Low and high socio-emotional groups refer to whether the group was shown the low or high socio-emotional video. The pre-test and exclusion based on lack of attention and instruction following was performed before group selection for each individual, which was presumably random.
Interestingly, both technologies also supercharge scams - one by providing a way to cash out with minimal risk, the other by making convincing human interaction easier to fake.
Crypto was a way for people who think they’re brilliant to engage in gambling.
AI is a way for “smart” people to create language to make their opinions sound “smarter”
Are we simply getting old and bitter?
Personally, I would add a previous cycle to this: social media. Though people were quick to point at the companies that were sparked and empowered by its unprecedented distribution.
Are we really better or worse off than a few decades ago?
No, we are getting wiser. It's not bitterness to look at a technology with a critical eye and see the bad effects as well as the good. It's not foolish to judge that the negative effects outweigh the positive. It's a mark of maturity. "But strong meat belongeth to them that are of full age, even those who by reason of use have their senses exercised to discern both good and evil."
We cannot say "I'm critical therefore I'm right", nor "I'm optimistic therefore I'm right". A right conclusion comes from a right process: gathering the right data, and thinking it over carefully while trying to be as unbiased and realistic as possible.
(Also strictly speaking, "I'm critical therefore I'm right" isn't always valid, but "I'm uncritical therefore I'm right" is always invalid.)
I can't edit my comment any more, but I should have said, "The opposite of being 'critical' isn't being 'optimistic,' it's being 'uncritical.'"
Maybe, but it has nothing to do with change itself.
Change can be either positive or negative. Often it is objectively negative and can stay that way for decades.
Change itself is a must. It's nature's law.
For crypto, no. It's basically only useful for illegal actions, so if you live in a society where illegal is well correlated with "bad", you won't see any benefit from it.
The case for LLMs is more complicated. There are positives and negatives. And the case for social networks is even more complicated, because they are objectively not what they used to be anymore.
Blockchain assets ("controllable electronic records") are defined in the UCC (Uniform Commercial Code) Article 12 that regulates interstate commerce, https://news.ycombinator.com/item?id=33949680#33951026. Some states have already ratified the changes, others are in progress.
U.S. federal stablecoin legislation was passed earlier this year.
It depends, maybe 20 years ago — a couple of years after the dot com bubble — we thought we were not gonna repeat the same mistakes as we did before, and I do believe we blindly drank the kool-aid thinking we were gonna solve all problems with tech.
Now, we're another year old and another year wiser, times 20. I don't think having one's eyes open is synonymous with bitterness... but it is what we do with the information and knowledge we have acquired that defines that trait: do we sit and grumble and shake our fists at the cloud (providers?), or do we seek others to try to prevent problems from escalating?
> Are we really better or worse off than a few decades ago?
While technological progress in several fields has been amazing, it would be naïve of us not to recognize the areas where we have regressed.
Looking back, I think we should have normalized caution, not moving fast and breaking things; normalized interoperability, and not walled gardens; and we should have been more wary about the dangers of not having solved business models instead of normalizing tracking and targeted advertising, which enabled personalized propaganda...
... we should have also paid more attention to the unchecked power of monopolies and media conglomerates, and done more to foster a healthier economy as well as improve the quality of life and rights protection of people, including access to education and the strengthening of institutions.
So, to finally answer your question, I think we are in general a bit worse off. Why? Well, I look back to 20 years ago when our outlook on the future was that the sky was the limit if you worked and studied hard; and now the outlook on the future 20 years from now... seems uncertain.
The 19th and 20th centuries saw a huge shift in communication. We went from snail mail to telegrams to radio to phones to television to internet on desktops to internet on every person wherever they are. Every 20-30 years some new tech made it easier, cheaper, and faster to get your message to an intended recipient. Each of these was a huge social shift in terms of interpersonal relationships, commerce, and diminishing cycle times, and we've grown to expect these booms and pivots.
But there isn't much of anywhere to go past "can immediately send a message to anyone anywhere." It's effectively an end state. We can no longer take existing communication services and innovate on them by merely offering that service using the new revolutionary tech. But tech sectors are still trying to recreate the past economic booms by pushing technologies that aren't as revolutionary or aren't as promising, and hyping them up to get people thinking they're the next stage of the communication technology cycle.
Perhaps for uneducated casual communications, lacking in critical analysis. Most of what passes for "communication" is misunderstood, misstated, omits key critical aspects, and speaks from an uninformed and unexamined position... the human race may "communicate" but does so very poorly, to the degree that much of the human activity in our society is placeholder and good enough, while being in fact terrible and damaging.
No, it has regressed now. We are probably back to the level of the 1950s, before telephones became common.
People don't answer unknown numbers and are not listed in the telephone book.
When I was a kid in the 90s I could call almost anyone in my town by looking them up in the phone book.
Similarly for the internet: back in the 90s, Nigerian princes were provided a means to reach exponentially more people faster.
Imagine the DVR being invented today. A commercial device that helps you skip ads. It would never be allowed to happen.
Anti-capitalist sentiment was incredibly widespread in the US from the 19th century through the 1930s, because far more people were personally impacted, and most needed look no further than their own lives to see it. If nothing else, capitalism has become more sophisticated in disguising its harms, and in acclimating people to them to such an extent that many become entirely incapable of seeing any harm at all, or even of imagining any other way for a society to be structured, despite humanity having existed for 100,000+ years.
Capitalism is designed to maximize profit, which it does well. It has even improved life for many people. Even the most ardent Marxist acknowledges this fact. But what we really care about (unless you're super rich) is maximizing human well-being overall. So why rely on a system that is not actually meant to maximize, prioritize, or focus on what we actually care about, and only does so occasionally or incidentally? It doesn't make sense, and in almost no other arena of human endeavor is this done. Imagine writing software to maximize x, when you really want it to do y, and just hoping that x makes y happen, and saying it's the least worst way of doing it, without trying any other option.
Fundamentally, I think any socioeconomic system should be designed with people in mind as the organizing principle. If we care about human well-being, happiness, flourishing, etc., it makes no sense not to prioritize it from first principles. I imagine some form of economic and political democracy, wherein people have direct control over the things that affect their lives, in the social, political, and economic spheres (the three are inseparable, despite common capitalist dogma to the contrary). And not the usual representative democracy, where you abdicate any real decision-making power to an effectively unaccountable representative every 4 years.
The usual objection to this is that it would be impossible to maintain the status quo with an arrangement like this. But that's just more capitalist self-preservation talking. There are clearly tradeoffs required in a new system oriented towards maximal human well-being. Likely tons of dirt-cheap Chinese-made products are out. But those never made us happy to begin with!
https://znetwork.org/wp-content/uploads/zbooks/htdocs/books/...
"In this book we argue for a new alternative based on public ownership and a decentralized planning procedure in which workers and consumers propose and revise their own activities until an equitable, efficient plan is reached. The vision, which we call a participatory economy, strives for equitable consumption and work which integrate conceptual and manual labor so that no participants can skew outcomes in their favor, so that self-motivation plays a growing role as workers manage their own activities, and so that peer pressure and peer esteem provide powerful incentives once excelling and malingering rebound to the advantage and disadvantage of one's work mates."
That's arguably what AI is - it compressed the internet so that you can extract StackOverflow answers without clicking through all the fucking ads that await you on the journey from search bar to the answer you were looking for.
You can of course expect it, over the next decade or so, to interpose ads between you and your goal in the same way that Google and StackOverflow did from 2010-now.
But for the moment I think it's the exact opposite of your thesis. The AI companies are in cut-throat capture-market-share mode so they're purposely skipping opportunities to cram ads down your throat.
The same cycle happened (is happening) with crypto and AI, just in more compressed timeframes. In both cases, the initial period of optimism transitioned into growing concerns about the negative effects on our societies.
The optimistic view would be that the cycle shortens so much that the negatives of a new technology are widely understood before that tech becomes widespread. Realistically, we'll just see the amorality and cynicism on display and still sweep it under the rug.
cratermoon•5mo ago
> socio-emotional capabilities of autonomous agents
The paper fails to note that these 'capabilities' are illusory. They are a product of how the behaviors of LLMs "hack" our brains and exploit the hundreds of thousands of years of evolution of our equipment as a social species. https://jenson.org/timmy/
Isamu•5mo ago
That is the dehumanization process they are describing.
ACCount37•5mo ago
You're typing on a keyboard, which means you're nothing but a "next keypress predictor". This says very little about how intelligent you are.
ACCount37•5mo ago
For all I know, humans are "essentially statistical predictors" too - and all of their insistence on being something greater than that is anthropocentric copium.
cwmoore•5mo ago
https://en.m.wikipedia.org/wiki/Marvin_the_Paranoid_Android
cratermoon•5mo ago
In brief, the paper consistently but implicitly regards these tools as having at least minimal socio-emotional capabilities, and that the problem is humans perceiving them as having higher levels.
kingkawn•5mo ago
It is the ability of the agent to emulate these social capacities that leads users to attribute human-like minds. There is no assertion whatsoever that the agents have a mind, but that their behavior leads some people to that conclusion. It’s in your own example.
cootsnuck•5mo ago
> Recent technological advancements have empowered nonhuman entities, such as virtual assistants and humanoid robots, to simulate human intelligence and behavior.
In the paper, "socio-emotional capability" is serving as a behavioral/operational label. Specifically, the ability to understand, express, and respond to emotions. It's used to study perceptions and spillovers. That's it.
The authors manipulate perceived socio-emotional behavior and measure how that shifts human judgments and treatment of others.
Whether that behavior is "illusory" or phenomenally real is orthogonal to the research scope and doesn’t change the results. But regardless, as I said, they quite literally said "simulate", so you should still be satisfied.
chrisweekly•5mo ago
Your "timmy" post deserves its own discussion. Thanks for sharing it!