> He called for guardrails on AI to stop it capturing individuals' "minds but … also our affections."
> Fr Baggot cited the example of Magisterium AI, a Catholic chatbot. He sits on the scholarly advisory board for the service, and said its creators had worked to prevent it being "anthropomorphic" adding, "We do not want people having an intimate relationship with it."
I appreciate that coinage, "artificial intimacy," and want to explore the implications of it more.
Although I don't like the future proposed by the AI companies, this is the least of my concerns. The only big concern is employment. Like, if AI creates more jobs than it destroys, sure, go ahead, do it now.
BTW, I just don't want "deep" bonds; some sort of bond is always good. Not sure how "deep" he meant, though.
I feel icky saying this but we should make a strong effort as a society to stamp out anti-social behaviors. Addictions are very high on that list.
You might think that you can engage this way without being a burden to others, but you can't.
GLP-1s can help stamp out addiction, but people are going to be people. You can provide them support, but you cannot prevent chronic, determined self harm and destruction. I speak from personal experience.
https://recursiveadaptation.com/p/the-growing-scientific-cas...
And for the same reason: they want their fucking money.
The burden is eased when our environment nudges us toward healthier choices. The extent to which those nudges should be imposed externally is a different, far more complex issue, not least because "healthy choices" are difficult if not impossible to precisely identify and quantify. But at least in the abstract it's to our individual and collective benefit for society to make the better choices easier to pursue, which at a minimum means not promoting maladaptive expectations.
It's not my position to tell someone what to want. But the evolutionary firmware your body runs on is tuned for interpersonal bonds. If you want to go against that, nobody will stop you, but it strikes me as needless suffering in a world that already has a considerable amount.
I can certainly see folks getting so used to it, that they then measure all their IRL relationships by that. They could decide that “you’re not my friend,” because you don’t want to listen to them whine endlessly about their ex.
So, just like professional therapists then?
From what I’ve seen of LLMs, it’s the opposite.
All therapists give some variation of "your problem is $SOMETHING_POSITIVE".
Never "your problem is you're too selfish" because those patients don't go back.
It's always "your problem is you're too willing to help" or "you give too much of yourself" or other similar such BS.
I know there are other responses saying the same thing, but this needs underscoring: good therapists won't put up with this forever. They use techniques to guide your mind away from the patterns keeping you trapped. It's a slow process with very nonlinear progression. But for those it helps, things can improve.
Eventually you realize you (and perhaps a higher power) freed yourself from your mental bondage. They showed you the path, and walked alongside you, but they weren't the ones making the changes.
Step 1: Stop giving them human or human-like names.
Claude, Siri, Gemini, etc.
https://old.reddit.com/r/Greenpoint/comments/1nmk49r/dystopi...
So maybe an improvement.
Good friend of the Church, Nietzsche predicted dystopia long ago, but it never plays out the way people think. The chimp troupe is highly unpredictable. One day it props up a Hitler. The next day it kills him.
Definitely not an improvement to be friends with corporate-owned machines versus being friends with God
I've been looking for this phrase for years.
It describes the phenomenon perfectly, even accounting for the diminishing of emotional/mental/physical closeness that occurs.
And finally, LLMs. They certainly _could_ be used to help individuals bootstrap and quickly gain a basic competence in a new topic, and allow those individuals to reach greater expertise more quickly. But _a lot_ of people will just offload their thinking to the LLMs and actually erode their skills. Is this strictly inevitable from a conceptual standpoint? No. But practically speaking a lot of people will fall into this trap, at which enlightened technologists will scratch their heads: "I don't understand why people say LLMs make you dumber, I've used them to advance my career and expand my knowledge, etc. Sounds like you guys just don't like progress."
Malnourished. The word you were looking for is malnourished. Junk food is a problem but the abundance of food didn't somehow cause "cleavage between upper and lower classes."
Americans today can afford to eat 4000 calories worth of food and it's already optimized for palatability and convenience. It's relatively easy to eat 4000 calories of Doritos, microwave burritos, and boxed cookies. There's advertising to remind you of its existence and researchers dedicated to optimizing the delight of eating these products (increasing the odds of overeating just because it's pleasurable and frictionless).
The transition from "abundance" to "abundance multiplied by advertising and product optimization" drove obesity more than the mere availability of calories, IMO. I see a parallel with digital information. There was more than enough information on the Web to spend all day looking at it even before social networks were common. But that "home cooked" experience wasn't engineered for engagement time, so companies that optimized products for engagement were, in practice, a lot better at getting people to look at digital information for many hours per day.
Counterpoint: the richest man in the world is clearly addicted to being on Twitter and posts at all hours of the day. More generally, I don't see why the richest wouldn't be addicted to social media like the rest of us – after all, they have a lot more free time and disposable income.
The richest people in SV send their children to schools that are deliberately devoid of, or carefully restrictive of, technology. This is so they can learn to think, not follow.
As far as I can tell, rich kids are just as addicted to phones/etc as anyone else.
I think that's more likely related to how little they actually sleep, and trying to fill their waking hours, than it is to an addiction. It seems to be a pattern with these people who need only 4-5 hours of sleep a day.
I'd argue the iPhone crossed that line at some point within the past five years, though, admittedly, it is the iPhone + social media services working together. I doubt Jobs would have approved the gaudy, Myspace-aesthetic-level Messages backgrounds that iOS 26 was proud to launch with.
What are companies going to pay these now-dumber people to do, once they've automated away the jobs the smarter versions of these people did? Will the AI be able to perform the original jobs but unable to perform the jobs achievable by these now-dumber people?
Are we a better-off society if a net dumber population is doing a manual labor job that the robotics companies haven't solved yet?
Kill each other, in some ways.
Even a completely cynical Machiavellian with no morals would have better uses for the masses post-automation. Even keeping them on the dole just to have a conscriptible population for the massive amount of logistical gruntwork would make sense. Populations are a variable in military power, even as war machines mean fewer boots on the front lines and more in logistical support. Only a complete idiot would throw a large population advantage away.
AI is incredibly useful. I'm already getting a ton of use out of it. But you have to treat it like an untrustworthy source, or at least have a "trust but verify" attitude. You also have to understand that it is not sentient, doesn't "care" about you, and is just a hugely powerful autocomplete engine. Any sense of intimacy or understanding you have with it is an illusion.
In engineering I treat it like a junior intern that is very fast, has memorized a huge amount of info, but makes mistakes and has to be hand-held.
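A minimal sketch of that "trust but verify" habit (all names here are hypothetical stand-ins, not from the thread): don't accept model-suggested code until it passes checks you wrote yourself.

```python
# Hypothetical "trust but verify" gate: untrusted_sort stands in for
# model-generated code; verify() is the human-written check it must pass.
def untrusted_sort(xs):
    # Pretend this came from a model; it might be subtly wrong.
    return sorted(xs)

def verify(fn):
    # Independently written test cases: small, but yours.
    cases = [([3, 1, 2], [1, 2, 3]), ([], []), ([5], [5])]
    return all(fn(list(inp)) == out for inp, out in cases)

accepted = verify(untrusted_sort)  # only use the code if this is True
```

The point isn't the sorting; it's that the verification step lives outside the model's output.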
While I'm highly skeptical that the current iteration of LLM tech will lead to mass joblessness, the reasoning above is flawed. If it costs less to employ a bot than to employ a human, then the price of human labor will fall until it reaches equilibrium with the bot. And if that equilibrium price happens to be below what it takes to keep a human alive, then it doesn't matter if "human wants are infinite" because it would be cheaper to fulfill those wants without paying a human.
“Life is suffering” meant something very different when the Buddha first said it than it does now. The idea that “the only constant is change” is a relatively modern creation (or at least its significance is), so this idea that economics is going to keep working the way it always has at least feels like it’s going to change if we get more advanced AI.
AI is a fundamentally antisocial anti-human technology
I don't know any yacht owning people, but the few people I know with boats are very happy with its size. The people looking for a football field on water are _limited_. Human desires are limited, and if that limit can be achieved without the collective efforts of all humans, then under our capitalistic model somebody is going to starve.
While I agree that the replacement of humans with AI would lead to joblessness, I think you'll see mass joblessness far sooner because a human with better technology can replace 50+ other humans (like a containership engineer vs a sailship crew).
Does anyone know where to find more? Where are the modern Christian scholars? Are there Christian publications easily available? In the universities I found those sources are available, but only in the specific context of studying religion, much less so as another voice on the subject at hand.
New Polity Podcast[1] also regularly features smart conversations.
Insightful analysis of the modern world and the Christian response to it: https://m.youtube.com/watch?v=Y3hMSZqatHI
He also has a new book out, Against the Machine, which has good reviews, but I haven't read yet.
Recent article entitled "Your Friends Are Not In Your Phone" was fantastic: https://www.plough.com/en/topics/life/technology/your-friend...
Some suggestions for a variety of subjects:
* Fr. Stanley Jaki on Physics and the philosophy of science - I am working through https://www.abebooks.com/9780895267498/God-Cosmologists-Jaki...
* Philosophy in general, Peter Kreeft (I recommend "Jesus Shock", it's amazing how "used" to Christ we've become, and this book does a good job of pointing out just how different the reactions to him are) and Alasdair MacIntyre (After Virtue) are both good "recent" authors.
* Bioethics and philosophy https://www.abebooks.com/first-edition/Bioethics-Limits-Scie... (I will freely admit to bias here, but this is easy to read, clear, and to the point)
* Particularly interesting in the moment: https://www.catholic.com/magazine/online-edition/the-limits-...
For super up-to-date happenings, you can go to Vatican News[1]. (A great example in the first article, "Holy See urges moratorium on autonomous weapons at UN debate on AI".)
For weightier, more timeless writings that address the issues of the current day, but are meant to be read indefinitely, the Papal Encyclicals[2] are the place to look. Rerum Novarum is a good one to start with.
I'd be skeptical of any persuasive writings by lay-persons (i.e. not priests or nuns). It's like the difference between a lawyer's opinion and a judge's ruling. They can be fantastic scholars, but they don't speak for the church.
1: https://www.vaticannews.va/en/vatican-city.html 2: https://www.papalencyclicals.net/
That illusion of closeness could warp how we relate to REAL people. Over time, if your "listener" never judges you or walks away, you might measure real human bonds against an unfair standard.
mensetmanusman•1h ago
However, it also led to the counter-reaction of CrossFit and extreme fitness among a small percentage.
The same will happen with AI. Most people will become smooth brains when they don’t have to exercise thought and a small fraction will use it to push the bounds of what humans are capable of.
bbarnett•1h ago
We're already losing physical books, and data online will slowly become more and more suspect. That is, AI training on AI, plus more and more nonsense blogs, will make simple accuracy in any data very rare.
A strong mind may have the capacity to not be tainted by AI too much, but what if it cannot get anything non-AI-tainted to feed it? What if there are no teachers of any caliber left, for they are all smooth-brains, as you say?
What if society is run by AI itself, and no one understands anything at all?
That incredible mind may make some progress, but will lack the solid foundation you and I have had.