We then had a conversation about how some people might feel on receiving an appreciation letter that was clearly written by AI. I had to explain that it feels cold and impersonal, which undermines the effect he's hoping for.
To be honest, though, it really got me thinking about what the future is going to look like. What do I know? Maybe people will just stop caring about the human touch. It seems like a massive loss to me, but I'm also getting older.
I let him send the letter anyways. Maybe in an ironic twist an AI will respond to it.
So, no, there is no evidence that AI will change things. We've had canned responses and template answers for a long time, but people still like talking to a real human being.
P.S. I think you should have told them to write a thank you letter themselves as a fun game to compare with the AI one and send that one instead.
AI has already changed things. I have already seen several related examples of distasteful AI use in corporate settings. One example was management promising that feedback received during a townhall would be reviewed, only to later proudly announce that they had AI-summarized it. I'll readily admit that is actually a very sensible use of AI; the messaging around it could just have been a bit less out of touch. Another example was my coworker expressing his gratitude to the team, while simultaneously passing the milestone of producing more than 10 consecutive words of coherent English for the first time in his life. He was awfully proud of it too.
And to finish it off, talking to real human beings on the internet gets more miserable by the day. Without going too far off into the weeds, let me give you a practical, older example. I participated in a Discord server of a FOSS project, specifically in their support channels, for a couple of years - and walked away a very different person, with great appreciation for service workers. I'm sure the people coming there loved being able to torment, I mean ask for help from, real human beings. By the end, this feeling was very much not reciprocated. I was not alone in this either, of course, and the mask would fall off people increasingly often. Those very real human beings looking for help were not too happy either, especially when said masks fell off. So it was mostly just miserable for everyone involved. AI can substitute in such situations very handily, and everyone is honestly plain better off. Having to explain the same thing over and over to varyingly difficult people is not something a lot of people are cut out for, but the need is ever present and expanding, and AI has zero problem filling those shoes. It can provide people with the assistance they need and deserve. It can even provide that help to those who neither need nor deserve it. Everyone's happier.
We've concocted a lot of inhuman(e) systems and social dynamics for ourselves over time. I have some skepticism about the future of AI myself, but it has a very legitimate chance of counteracting many of these dynamics and restoring some much-needed balance.
When coding in a new environment, I like to go fast and break things - this is how I learn best (it's not a good approach in general, but it works well for my amateur development).
I ask ChatGPT questions that would drive me crazy if I were the one answering: they are a bit chaotic, a bit repetitive, and give the impression of someone chaotic and slightly dumb (me, the asker, not the AI).
I worry that with time people may start to interact with other people the same way and that would be atrocious.
No they won't. This has happened many times in the past already with live theatre -> movies, live music -> radio, etc. The new thing doesn't replace the old one; they break into different categories, where the new thing is cheap and abundant. When a corporation writes you a shit letter with "We miss you, Derek", all reasonable people know what they're looking at.
Look, it's about basic economics. It doesn't matter how "good" the generated song for someone's birthday is. What matters is the time, money, and effort. In some cases writing a prompt for one-time use is not bad. If you're generating individual output at scale without any human attention, nobody will appreciate the "gesture".
What bothers me is the content farm for X shit-startups and tech-cos thinking they're replacing humans without side effects. It'll work just as well as those fake signatures by the CEO in junk mail: it'll deceive only for a short time, and maybe older people, who may be permanently screwed. It'll just be yet another comms channel saturated with spam, indistinguishable from the heaps of all other spam. A classic race to the bottom.
What do you do when we can no longer tell the difference?
You are entirely right though that people will slip under the radar for a while. But it’ll only be a matter of time until a personal cold email means absolutely nothing again, simply because the volume of them will be insane.
I doubt it. We human beings seem intrinsically motivated and enthusiastic about human connections. I believe we are wired like this. I know things change, but I would need some strong evidence before even playing with the idea that we'll stop caring about the human touch.
Now, as much as I hate AI, that doesn't necessarily mean it has to be AI-free. Or even handwritten. It just needs some human touch. I would enjoy a handwritten letter but wouldn't mind an email at all. But maybe someone else would find it lazy and tasteless, just as I would find an AI-generated text lazy and tasteless.
Maybe the prompt you can guess was used by the person who sent you some AI-generated text can already be perceived as a kind of human touch. Maybe there is a threshold, though.
Now, could it be that your child wanted to impress you with a perfectly written letter, or even with their AI prompt mastery?
Anyway, good anecdote, good perspective, good for you to have had the conversation and let them proceed anyway. Thanks for sharing.
Absolutely, and I think because of this we'll never see the desire go away completely. However, I'm imagining some dystopian future where human touch is so rare that people _forget_ how much it means to them. It's like scrolling through the endless slop of Netflix and then coming to some rare gem of a film where you're reminded what genuine art is.
But it's not like this only happens because of LLMs. If you've worked in corporate culture you've most definitely received some automated HR email congratulating you on spending half of your life at the workplace, or something like that. I always felt almost insulted by these; they are literally just spam at best. It's kind of mocking: these are generic, depersonalized texts that no one actually wrote for you, yet they always speak about "gratitude", about you being "valued" and such. In fact, that's the only thing they are meant to express: you being valued. It's so cynical.
But, I mean, that's just me. Ostensibly, the folks in the HR department do know their job? Maybe most people don't feel like vomiting when they get these emails? Maybe it even brings them joy? I never stopped wondering about that. I can't just ask my closest coworkers, because of course they feel the same as I do. But maybe there are others? Another social bubble, where this sort of thing is normal, and it is bigger than mine?
Anyway, everyone is kind of used to it. What I am trying to say is that the phenomenon is not entirely new, and LLMs don't change the essence of it. Even back when people sent paper mail to each other, I remember those pre-printed birthday/Christmas cards, which are OK, because the entire point is that they are not automated and that you remembered to send one to someone; yet it was always considered a bit poor taste not to add a sentence of your own by hand.
AI is the future whether you like it or not. Teaching him to use that tool effectively will serve him far better than shaming him for engaging with the world in a way you find uncomfortable but society finds acceptable.
Consider whether you would prefer he write the letter by hand to give the script that literal human touch. If not, why is it OK for the computer to make the letters but not the words?
In this case the meaningful gesture is sending the message at all. He asked the AI to do a thing. That was his idea. AI just did the boring work of making it palatable to humans.
Much like driving and everything else automation takes away, writing is something most people are profoundly bad at. Nothing is lost when an AI generates a message a human requested.
It is a very sad and cynical view to equate these very different things.
To me, this feels less like outsourcing creativity and more like using a writing assistant to shape your thoughts. Kind of like how we all rely on spellcheck or Grammarly now without thinking twice. People were saying the same thing back then too, that tools were "diluting" writing.
I personally don't see the harm. Not everyone is a native English speaker.
- The timing is interesting, as Altman opened US branches of his 'prove humanness' project that hides the biometrics-gathering effort
- The problem is interesting, because on HN alone, the weight of traffic from various AI bots seems to have become a known (and reported) issue
- The fact that some still need to be convinced of this need (it is a real need, but as the first point notes, there are clear winners of some of the proposals out there), resulting in articles like these
- Interesting second- and third-order impacts on society mentioned (and not mentioned) in the article
Like I said. Interesting.
Even if that happens, and say Apple integrates sigs all the way down through their system UI keyboards, secure enclaves, and TPM, you think they’re going to conform to some shitcoin spec? Nah man, they’ll use their own.
Even then you can't trust it. Companies write DRM and tend to have actual humans run the place. If the government where these humans live decides to point guns at them and demand access, most humans are going to give up the key before they give up their life.
Edit:
Us tomorrow: Your honor, my device clearly shows the timestamps and the allegedly offending message, but you will note the discrepancies in style, time, and content that suggest that is not the case.
LLM judge: Guilty.
Edit2:
Amusingly, the problem has been a human problem all along.
I simply don't want to live in a world where all I am doing is talking back and forth with AI that are pretending to be people. What is the point in that? I am working on https://humancrm.io and I am explicitly not putting any AI into it. Hopefully there are more than a few people that feel like I do.
Really, if someone can't even type a response back, were you ever close to begin with? Unless they always had some level of anxiety when sending you a text; in that case it's good for them to still interact with you without feeling the negative effects (some people truly have trouble sending even the simplest messages back to friends).
But in reality, no, this won't be a problem. We've had copy-paste and template systems for decades, and hardly anybody uses them. And at the end of the day, even if our AIs plan our meetings for us and we end up meeting IRL, what's the problem?
Maybe they work for this guy (or someone like them): https://news.ycombinator.com/item?id=43861328
I think most business email conversations will follow the same route. We don't need all of this blah blah. Just "here is the information you need" -> "update me if you (a) need more info or (b) the task is done".
What is notable here, though, is that we continue to reduce human-to-human interactions, and that will eventually lead to a desensitizing of human culture.
I hear this "tip" a lot, and I question whether it's statistically meaningful.
After spending several decades learning the right ways—like ALT+0151 on Windows—it seems deeply unfair that people are going to mischaracterize care and attention to detail as "fake".
But...
Using em-dashes is a signal. It's not a smoking gun, but text that uses em-dashes is more likely to be AI-generated than text that doesn't!
Similarly, text that consistently uses correct spelling and punctuation is more likely to be AI-generated than text that doesn't.
So - yeah - if you use em-dashes your writing looks more like AI wrote it.
But that’s not a bad thing—it means your writing has the same strengths: clarity, rhythm, and elegance. AI learned from the best, and so did you.
On the rare occasion I see some GPT garbage, I either block the sender, or if I know a human is involved I explain how insulting it is and let them know they’re one slop message away from blocked.
Getting used to it is a surefire way to make your communication experience much worse.
If they can't handle an email, what makes you think they can handle a prompt, which requires more, not less, careful calibration?
I'd like to highlight the words "counterfeiting" and "debasement", as vocabulary that could apply to the underlying cause of these interactions. To recycle an old comment [0]:
> Yeah, one of their most "effective" uses [of LLMs] is to counterfeit signals that we have relied on--wisely or not--to estimate deeper practical truths. Stuff like "did this person invest some time into this" or "does this person have knowledge of a field" or "can they even think straight."
> Oh, sure, qualitatively speaking it's not new, people could have used form-letters, hired a ghostwriter, or simply sank time and effort into a good lie... but the quantitative change of "Bot, write something that appears heartfelt and clever" is huge.
> In some cases that's devastating--like trying to avert botting/sockpuppet operations online--and in others we might have to cope by saying stuff like: "Fuck it, personal essays and cover letters are meaningless now, just put down the raw bullet-points."
And then you get to Gresham's Law: "Bad money drives out good" (that is, drives it out of circulation)...
They'd clearly dumped the memo they got about the reduction into some AI with a prompt to write a "fun" announcement. The result was a mess of weird AI positivity and a fun "fact" which was so ludicrous that the manager can't have read it before sending.
I don't mind reading stuff that has been written with assistance from AI, but I definitely think it's concerning that people are apparently willing to send purely AI generated copy without at least reviewing for correctness and tone first.
There's always been some innate ability to recognize effort and experience. I don't know the word for it, but looking at a child's or an experienced artist's drawing, you just know whether they put in minimal or extra effort.
"We are excited to announce we are supporting our family in their health kick journey. To support them, we have taken the difficult decision to reduce the number of beverages available. We remain fully committed to unlimited delicious tap water, free of charge!"
It probably made me angrier than it should have. Now I’m wondering if I’m the “old man yelling at cloud”.
You'll be able to detect someone running base ChatGPT or something - but even base ChatGPT with a temperature of 2 has a very, very different response style - and that's before you simply get creative with the system prompt.
And yes, it's trivial to get the model to not use em dashes, or the wrong kind of quotes, or any other tell you think you have against it.
Before AI, it was hard for many people to write literate text. I was OK with that, if the text was worth reading. I don't need to be entertained, just informed.
The thing that gets me about AI is not that what it generates is un-original, but if it's trained on the bulk of human text, then what it generates is not worth reading.
This is what we will all do. We all are spam filters now.
This quote from the short prose at the beginning of this article expressed one of my major concerns with LLM generated prose.
And specifically, why I don't want LLM generated text as a substitute for web search results.
Since LLM generated prose is a statistical aggregation of web crawling, there is no ability to assess the author's position, or opinions, and how they might affect the author's writing. This removes a major tool in evaluating any potential biases in the writing.
This article itself may have been written by an LLM. But the statement that LLM generated prose eliminates an ability to assess a human's position on a topic is still valid.
Just another step in losing touch with objective reality. Contrary to popular opinion, perception is NOT reality... It's only an idea inside a person's head, while the overwhelming majority of reality is outside of our liter of jello...
Just write text, separate paragraphs, and let me format it how I like to read it.
By the way, HN will flow jagged text when it's not prepended with two spaces (code markup). Had to mark it up on purpose.
So, you wouldn't know this sentence was separated into six lines. But in editing, it preserves the literal format.
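The two-space rule described above can be sketched as a tiny helper. This is just an illustration of the convention; `hn_preformat` is a hypothetical name, not anything HN provides:

```python
def hn_preformat(text: str) -> str:
    """Prepend two spaces to every line so HN treats the block as
    code markup and preserves the literal line breaks instead of
    reflowing them into one jagged paragraph."""
    return "\n".join("  " + line for line in text.splitlines())

# Example: a two-line snippet keeps its line break when posted.
print(hn_preformat("line one\nline two"))
```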