My Two Cents on Abundance

https://josephheath.substack.com/p/my-two-cents-on-abundance
1•paulpauper•2m ago•0 comments

Boxing Day: Unwrapping the Mind

https://blog.phenomenal.ink/states-of-mind/
1•paulpauper•2m ago•0 comments

Book Review: Arguments About Aborigines

https://www.astralcodexten.com/p/book-review-arguments-about-aborigines
1•paulpauper•3m ago•0 comments

Reversing a Fingerprint Reader Protocol (2021)

https://blog.th0m.as/misc/fingerprint-reversing/
1•thejj100100•4m ago•0 comments

Project 5QL: A different approach to working with SQL

https://5ql.site
2•SophieBroderick•4m ago•0 comments

A major AI training data set contains millions of examples of personal data

https://www.technologyreview.com/2025/07/18/1120466/a-major-ai-training-data-set-contains-millions-of-examples-of-personal-data/
2•belter•10m ago•0 comments

ChatGPT Is Changing the Words We Use in Conversation

https://www.scientificamerican.com/article/chatgpt-is-changing-the-words-we-use-in-conversation/
1•bdev12345•10m ago•0 comments

Quantum internet gives new insights into Einstein's relativity

https://cosmosmagazine.com/science/physics/quantum-internet-einstein-relativity/
1•Bluestein•11m ago•0 comments

Just Say No to Overcomplicated Cars

https://fossforce.com/2025/07/just-say-no-to-overcomplicated-cars/
2•dxs•11m ago•0 comments

Rocket engine designed by generative AI just completed its first hot fire test

https://www.pcgamer.com/hardware/this-aerospike-rocket-engine-designed-by-generative-ai-just-completed-its-first-hot-fire-test/
2•Bluestein•12m ago•0 comments

Ask HN: What is your Tech Stack?

1•jerawaj740•15m ago•0 comments

MIPS – The hyperactive history and legacy of the pioneering RISC architecture

https://thechipletter.substack.com/p/mips
2•rbanffy•17m ago•0 comments

Anukari working better on some Radeon chips

https://anukari.com/blog/devlog/working-better-on-some-radeon-chips
1•humbledrone•18m ago•0 comments

The perfect cross platform framework

1•miguellima•19m ago•0 comments

Show HN: I built a simple study app and got 60 users so far:')

https://apps.apple.com/us/app/noggn-ai/id6747649185
1•iboshidev•22m ago•0 comments

How Albert Camus Found Solace in the Absurdity of Football

https://www.mmowen.me/camus-absurd-love-of-football
1•decafquest•22m ago•0 comments

Perl Versioning Scheme and Gentoo

https://wiki.gentoo.org/wiki/Project:Perl/Version-Scheme
1•RGBCube•22m ago•0 comments

A Survey of Context Engineering for Large Language Models

https://arxiv.org/abs/2507.13334
1•amirkabbara•32m ago•0 comments

Show HN: A database specialized in Event Sourcing

https://www.thenativeweb.io/products/eventsourcingdb
1•goloroden•34m ago•0 comments

Ask HN: Where is Git for my Claude Code conversations?

2•lil-lugger•35m ago•2 comments

New York halts offshore wind transmission plan amid federal uncertainty

https://www.reuters.com/business/energy/new-york-halts-offshore-wind-transmission-plan-amid-federal-uncertainty-2025-07-17/
3•geox•38m ago•0 comments

Show HN: FishSonar – Real-Time Crypto "Fish" Detector for Binance

https://github.com/swampus/FishSonar
1•swampus•40m ago•0 comments

Life on Venus: Verve Mission Aims for Answers

https://www.universetoday.com/articles/uk-is-considering-a-mission-to-venus-to-search-for-life
1•rbanffy•41m ago•0 comments

Tech CEO caught with company's HR head on Coldplay kiss cam resigns

https://www.theguardian.com/us-news/2025/jul/19/coldplay-couple-ceo-andy-byron-resigns
2•vinni2•41m ago•0 comments

TSMC's quarterly sales hit a record $30B – chipmaker plans over 15 new fabs

https://www.tomshardware.com/tech-industry/semiconductors/tsmc-to-build-over-15-new-fabs-in-the-coming-years-as-quarterly-sales-hit-usd30-billion-on-ai-demand
2•rbanffy•41m ago•0 comments

The role of metabolism in shaping enzyme structures over 400M years

https://www.nature.com/articles/s41586-025-09205-6
3•PaulHoule•43m ago•0 comments

Say No to Gnulib

https://rgbcu.be/blog/no-gnulib/
1•RGBCube•43m ago•0 comments

Metap: A Meta-Programming Layer for Python

https://sbaziotis.com/compilers/metap.html
2•Bogdanp•44m ago•0 comments

Managing EFI Boot Loaders for Linux: Controlling Secure Boot

https://www.rodsbooks.com/efi-bootloaders/controlling-sb.html
1•CaliforniaKarl•51m ago•0 comments

Wool map of Ireland proves a great yarn for Co Wicklow friends

https://www.rte.ie/entertainment/2025/0715/1523589-wool-map-of-ireland-a-great-yarn-for-co-wicklow-friends/
1•austinallegro•52m ago•0 comments

It's rude to show AI output to people

https://distantprovince.by/posts/its-rude-to-show-ai-output-to-people/
258•distantprovince•5h ago

Comments

vouaobrasil•4h ago
A lot of the reason why I even ask other people is not to get a simple technical answer but to connect, understand another person's unexpected thoughts, and maybe forge a collaboration, in addition to getting an answer, of course. Real people come up with so many side paths and thoughts, whereas AI feels lifeless and drab.

To me, someone pasting in an AI answer says: I don't care about any of that. Yeah, not a person I want to interact with.

gharper•4h ago
It’s the conversational equivalent of “Let me google that for you”.
MattGaiser•4h ago
I think the issue is that about half the conversations in my life really shouldn't happen. They should have Googled it or asked an AI about it, as that is how I would solve the same problem.

It wouldn't surprise me if "let me Google that for you" is an unstated part of many conversations.

jddj•3h ago
It's the conversational equivalent of an amplification attack
accrual•3h ago
I remember reading about someone using AI to turn a simple summary like "task XYZ completed with updates ABC" into a few paragraphs of email. The recipient then fed the reply into their AI to summarize it back into the original points. Truly, a compression/expansion machine.
ghjnut•2h ago
It is, which I'd argue has a time and a place. Maybe it's more specific to how I cut my teeth in the industry, but as a programmer, whenever I had to ask a question of e.g. the ops team, I'd make sure it was clear I'd made an effort to figure out my problem. Here's how I understand the issue, here's what I tried, yadda yadda.

Now I'm the 40-year-old ops guy fielding those questions. I'll write up an LLM question emphasizing what they should be focused on, I'll verify the response is in sync with my thoughts, and shoot it to them.

It seems less passive aggressive than LMGTFY and sometimes I learn something from the response.

Arainach•1h ago
Instead of spending this time, it is faster, simpler, and more effective to phrase these questions in the form "have you checked the docs and what did they say?"
ivape•4h ago
I can buy into this. I always thought it was rude or at least insulting when Hollywood robotically creates slop movies. As in, of course they can do it, but damn is it insulting. There really are two types of people in the world:

a) Quantity > Quality if it prints $$$.

or

b) Quality > Quantity if it feels like the right thing to do.

Witnessing type A at scale is a first-class ticket into misanthropy.

devenson•4h ago
Less-than-perfect writing is a signal that you're human. At least for now.
phito•4h ago
I really wish some of my coworkers would stop using LLMs to write me emails or even Teams messages. It does feel extremely rude, to the point I don't even want to read them anymore.
SoftTalker•4h ago
Even worse when they accidentally leave in the dialog with the AI. Dead giveaway. I got an email from a colleague the other day and at the bottom was this line:

> Would you like me to format this for Outlook or help you post it to a specific channel or distribution list?

righthand•4h ago
Clippy is rolling in his grave.
righthand•4h ago
Seriously, you should respond to the slop in the email and waste your coworker's time too.

“No I don’t need this formatted for Outlook Dave. Thanks for asking though!”

AlecSchueler•3h ago
That wastes your own time as well though.
panarchy•1h ago
Just get an AI to do it for you
benatkin•4h ago
https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...
echelon•4h ago
Wow. What a good giveaway.

I wonder what others there are.

I occasionally use bullet points, em dashes (unicode, single, and double hyphens), and words like "delve". I hate that these are the new heuristics.

I think AI is a useful tool (especially image and video models), but I've already had folks (on HN [1]!) call out my fully artisanal comments as LLM-generated. It's almost as annoying as getting low-effort LLM splurge from others.

Edit: As it turns out, cow-orkers isn't actually an LLMism. It's both a joke and a dictation software mistake. Oops.

[1] most recently https://news.ycombinator.com/item?id=44482876

Buttons840•4h ago
Use two dashes instead of an actual em dash. ChatGPT, at least, cannot do the same--it just can't.
chaps•4h ago
As a frequent user of two dashes.. I hate how people now associate it with AI.

Also, that "cow-orkers" doesn't look like AI-generated slop at all..? Just scrolling down a bit shows that most of the results are three years old or more.

JoshTriplett•4h ago
Conventionally, in various tools that take plain text as input, two dashes is an en-dash, and three dashes is an em-dash.
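
A minimal sketch of that convention in Python (an illustration of the general idea, not any particular tool's exact rules):

    def smart_dashes(text: str) -> str:
        # Plain-text convention described above: two hyphens become an
        # en-dash, three become an em-dash. Replace the longer run first
        # so "---" isn't consumed as "--" plus a stray hyphen.
        return text.replace("---", "\u2014").replace("--", "\u2013")

    print(smart_dashes("pages 3--5 --- see the appendix"))
    # pages 3–5 — see the appendix
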
Jtsummers•4h ago
It can't use two dashes? Is that like how Data couldn't use contractions (except he did)?
skeledrew•4h ago
ChatGPT: "Hold my beer..."
furyofantares•4h ago
Maybe I'm misunderstanding - but I don't think LLM's say cow-orkers. Or is that what you mean?
ffsm8•4h ago
As this error seems to go back a lot longer than LLMs have existed (17 years), it could be an auto-incorrect situation.

Might be incorrectly saved in some spell-check software and occasionally rearing its head.

chaps•4h ago
https://ask.metafilter.com/15649/coworkers-why/amp

This goes back a loooooong while.

furyofantares•4h ago
Oh I see the confusion then. It's not an error, it's a joke, and a very old one at that. Like saying Micro$oft.
lupusreal•4h ago
Giveaway of what, old farts? That link contains a comment citing the Jargon File, which in turn says that the term is an old Usenet meme.
chaps•4h ago
Soon HN is going to be flooded with blogs about people trying and failing miserably to find AI signal from noisy online discussions with examples like this one.
scarface_74•4h ago
How is that a “giveaway”? The search turns up results from 7 years ago, before LLMs were a thing. More than likely it’s autocorrect going astray. I can’t imagine an LLM making that mistake.
Velorivox•4h ago
I like to use em-dashes as well (option-shift-hyphen on my MacBook). I've seen people try to prompt LLMs not to use em-dashes, and I've been in forums where, as soon as you type an em-dash, the submit button is blocked and you're told not to use AI.

Here's my take: these forums will drive good writers away or at least discourage them, leaving discourses the worse for it. What they really end up saying — "we don't care whether you use an LLM, just remove the damn em-dash" — indicates it's not a forum hosting riveting discussions in the first place.

shreezus•4h ago
LinkedIn is probably the worst culprit. It has always been a wasteland of “corporate/professional slop”, except now the interface deliberately suggests AI-generated responses to posts. I genuinely cannot think of a worse “social network” than that hell hole.

“Very insightful! Truly a masterclass in turning everyday professional rituals into transformative personal branding opportunities. Your ability to synergize authenticity, thought leadership, and self-congratulation is unparalleled.”

vouaobrasil•4h ago
Best thing you can do is quit LinkedIn. I deleted my account immediately once I first noticed AI-generated content there.
stevekemp•3h ago
I guess that makes sense, unless you're single. LinkedIn is the new tinder.
exographicskip•2h ago
Color me intrigued
slumberlust•1h ago
Would you like to 'swap business cards?'
quietbritishjim•4h ago
> now the interface deliberately suggests AI-generated responses to posts

This feature absolutely defies belief. If I ran a social network (thank god I don't) one of my main worries would be a flood of AI slop driving away all the human users. And LinkedIn are encouraging it. How does that happen? My best guess is that it drives up engagement numbers to allow some disinterested middle managers to hit some internal targets.

distantprovince•2h ago
This feature predates LLMs though, right? Funnily enough, I actually find it hilarious! In my mind, once they introduced it, it immediately became "a list of things NOT to reply if you want to be polite" and I used it like that. With one exception: if I came across an update from someone who's a really good friend, I would unleash the full power of AI comments on them! We had amazing AI-generated comment threads with friends that looked goofy as hell.
j45•3h ago
AI content that doesn't appear AI-written today will have to be the kind that still doesn't appear AI-written in one or two years.

Folks who are new to AI are just posting away, December-2022 style, because it's new to them.

It is best to personally understand your own style(s) of communication.

herval•4h ago
have you tried sharing that feedback with them?

one of my reports started responding to questions with AI slop. I asked if he was actually writing those sentences (he wasn't), so I gave him that exact feedback: it felt to me like he wasn't even listening when he just copy-pasted clearly-AI responses. Thankfully he stopped doing it.

Of course as models get better at writing, it'll be harder and harder to tell. IMO the people who stand to lose the most in that case are the AI sloppers: like in the South Park episode, they'll get lost in commitments and agreements they didn't even know they made.

pyman•3h ago
Didn't our parents go through the same thing when email came out?

My dad used to say: "Stop sending me emails. It's not the same." I'd tell him, "It's better." He'd reply, "No, it's not. People used to sit down and take the time to write a letter, in their own handwriting. Every letter had its own personality, even its own smell. And you had to walk to the post office to send it. Now sending a letter means nothing."

Change is inevitable. Most people just won't like it.

A lot of people don't realise that Transformers were originally designed to translate text between languages. Which, in a way, is just another way of improving how we communicate ideas. Right now, I see two things people are not happy about when it comes to LLMs:

1. The message you sent doesn't feel personal. It reads like something written by a machine, and I struggle to connect with someone who sends me messages like that.

2. People who don't speak English very well are now sending me perfectly written messages with solid arguments. And honestly, my ego doesn't like it, because I used to think I was more intelligent than them. Turns out I wasn't. It was just my perception, based on the fact that I speak the language natively.

Both of these things won't matter anymore in the next two or three years.

aidos•3h ago
I really don’t think they’re the same thing. Email or letter, the words are yours while an LLM output isn’t.
unyttigfjelltol•3h ago
Which words, exactly, are "yours"? Working with an LLM is like having a copywriter 24/7, who will steer you toward whatever voice and style you want. Candidly, I'm getting the sense the issue here is some junior varsity level LLM skill.
pyman•3h ago
Initially, it had the same effect on people, until they got used to it. In the near future, whether the text is yours or not won't matter. What will matter is the message or idea you're communicating. Just like today, it doesn't matter if the code is yours, only the product you're shipping and the problem it's solving.
aspenmayer•2h ago
Code is either fit for a given purpose or not. Communicating with an LLM instead of directly with the desired recipient may be considered fit for purpose by the receiving party, but it's not for the LLM user to say what the goals of the writer are, nor what those goals ought to be. LLMs for communication are inherently unfit for purpose for anything beyond basic yes/no and basic autocomplete. Otherwise I'm not even interacting with a human in the loop except before they hit send, which doesn't inspire confidence.
majormajor•2h ago
Similar-looking effects are not the "same" effect.

"Change always triggers backlash" does not imply "all backlash is unwarranted."

> What will matter is the message or idea you're communicating. Just like today, it doesn't matter if the code is yours, only the product you're shipping and problem it's solving.

But like the article explains about why it's rude: the less thought you put into it, the less chance the message is well communicated. The less thought you put into the code you ship, the less chance it will solve the problem reliably and consistently.

You aren't replying to "don't use LLM tools"; you're replying to "don't just trust and forward their slop blindly."

jaredcwhite•55m ago
Doesn't matter today? What are you even talking about? It completely matters if the code you write is yours. The only people saying otherwise have fallen prey to the cult of slop.
lcnPylGDnU4H9OF•41m ago
Why does it matter where the code came from if it is correct?
jaredcwhite•35m ago
Why does it matter where the paint came from if it looks pretty?

Why does it matter where the legal claims came from if a judge accepts them?

Why does it matter where the sound waves came from if it sounds catchy?

Why does it matter?

Why does anything matter?

Sorry, I normally love debating epistemology but not here on Hacker News. :)

lcnPylGDnU4H9OF•17m ago
I understand the points about aesthetics but not law; the judge is there to interpret legal arguments and a lawyer who presents an argument with false premises, like a fabricated case, is being irresponsible. It is very similar with coding, except the judge is a PM.

It does not seem to matter where the code nor the legal argument came from. What matters is that they are coherent.

moomoo11•3h ago
The prompt is theirs.
Tadpole9181•3h ago
Then just send me the prompt.
j45•3h ago
Some do put their words into the LLM and clean it up.

And it stays much closer to how they are writing.

drweevil•2h ago
That is indeed the crux of it. If you write me an inane email, it’s still you, and it tells me something about you. If you send me the output of some AI, have I learned anything? Has anything been communicated? I simply can’t know. It reminds me a bit of the classic philosophical thought experiment "If a tree falls in a forest and no one is around to hear it, does it make a sound?" Hence the waste of time the author alludes to. The only comparison to email that makes any sense in this case are the senseless chain mails people used to forward endlessly. They have that same quality.
threatofrain•3h ago
I mean that's fine, but the right response isn't all this moral negotiation, but rather just to point out that it's not hard to have Siri respond to things.

So have your Siri talk to my Cortana and we'll work things out.

Is this a colder world, or just old people not understanding the future?

conartist6•3h ago
It's demonstration by absurdity that that is not the future. You're describing the collapse of all value.
kenanblair•3h ago
Same thing happened with photography and painting. These opinionated pieces present a false dichotomy that propagates into argument: we have a tunable dial rather than a switch, and can appropriately increase or decrease our consideration, time, and focus along a spectrum rather than treating it as an on/off switch.

I value letters far more than emails, pouring out my heart and complex thought to justify the post office trip and even postage stamp. Heck, why do we write birthday cards instead of emails? I hold a similar attitude towards LLM output and writing; perhaps more analogous is a comparison between painting and photography. I’ll take a glance at LLM output, but reading intentional thought (especially if it’s a letter) is when I infer about the sender as a person through their content. So if you want to send me a snapshot or fact, I’m fine with LLM output, but if you’re painting me a message, your actionable brushstrokes are more telling than the photo itself.

j45•3h ago
One thing: it's less about change. It's more about quality vs. quantity, and both have their place.
conartist6•3h ago
Just be a robot. Sell your voice to the AI overlords. Sell your ears and eyes. Reality was the scam; choose the Matrix. I choose the Matrix!
distantprovince•2h ago
I can see the similarity, yes! Although I do feel like the distance between a handwritten letter and an email is shorter than between an email and an LLM-generated email. There's some line it crossed. Maybe it's that email provided some benefit to the reader too. Yes, there's less character, but you receive it faster, you can easily save it, copy it, attach a link or a picture. You may even get lucky and receive an .exe file as a bonus! An LLM does not provide any benefit for the reader, though; it just wastes their resources on yapping that no human cared to write.
lxgr•3h ago
"Hey, I can't help but notice that some of the messages you're sending me are partially LLM-generated. I appreciate you wanting to communicate stylistically and grammatically correct, but I personally prefer the occasional typo or inelegant expression over the chance of distorted meanings or lost/hallucinated context.

Going forward, could you please communicate with me directly? I really don't mind a lack of capitalization or colloquial expressions in internal communications."

pyman•1h ago
I see two things people are not happy about when it comes to LLMs:

1. The message you sent doesn't feel personal. It reads like something written by a machine, and I struggle to connect with someone who sends me messages like that.

2. People who don't speak English very well are now sending me perfectly written messages with solid arguments. And honestly, my ego doesn't like it, because I used to think I was more intelligent than them. Turns out I wasn't. It was just my perception, based on the fact that I speak the language natively.

Both of these things won't matter anymore in the next two or three years. LLMs will keep getting smarter, while our egos will keep getting smaller.

People still don't fully grasp just how much LLMs will reshape the way we communicate and work, for better or worse.

csa•9m ago
“No”
moomoo11•3h ago
Why? AI is a tool. Are their messages incorrect or something? If not, who cares? They're being efficient and thus more productive.

Please be honest. If it’s slop or they have incorrect information in the message, then my bad, stop reading here. Otherwise…

I really hope people like this, with their holier-than-thou attitude, get filtered out. Fast.

People who don’t adapt to use new tools are some of the worst people to work around.

distantprovince•2h ago
> If it’s slop or they have incorrect information in the message, then my bad, stop reading here.

"my bad" and what next? The reader just wasted time and focus on reading, it doesn't sound like a fair exchange.

moomoo11•2h ago
That’s on them, I said what I wanted to.

Most of the time people just like getting triggered that someone sent them a —— in their message and blame AI instead of adopting it into their workflows and moving faster.

jaredcwhite•57m ago
If it took you no time to write it, I'll spend no time reading it.

The holier than thou people are the ones who are telling us genAI is inevitable, it's here to stay, we should use it as a matter of rote, we'll be left out if we don't, it's going to change everything, blah blah blah. These are articles of faith, and I'm sorry but I'm not a believer in the religion of AI.

eric_cc•53m ago
How do you know the effort that went into the message? Somebody with writing challenges may have written the whole thing up and used AI assistance to help get a better outcome. They may have proofread and revised the generated message. You sound very judgmental.
jaredcwhite•45m ago
And you sound very ableist. Why should we expect people who may have a cognitive disability of some kind to cloak that with technology, rather than us giving them the grace to communicate how they like on their terms?
anal_reactor•2h ago
I love it because it allows me to filter out people not worth my time and attention beyond minimal politeness and professionalism.
eric_cc•55m ago
I know people with disabilities that struggle with writing. They feel that AI enables them to express themselves better than they could without the help. I know that’s not necessarily what you’re dealing with but it’s worth considering.
monkeydust•4h ago
I don't mind people using AI to help refine their thoughts and proofread their output, but when it is used in the absence of their own thoughts, I start to value that person a little bit less.
accrual•3h ago
Exactly. I've already seen two very obvious AI comments on Reddit in the past 2 days. One even had the audacity to copy a real user's reply back into the AI and pass the response back again. I just blocked them since they're in a sub I like to hang out in.
tbatchelli•4h ago
I see it as yet another way to externalize costs: I spend zero time thinking, I dump AI slop on you, and I ask you to review it or refute the nonsense that I just sent you.

Last time someone did this to me I sent them a few other answers by the same LLM to the same prompt, all different, with no commentary.

zer00eyz•4h ago
This is an interesting take.

Cause all an LLM is, is a reflection of its input.

Garbage in garbage out.

If we're going to have this rule about AI, maybe we should have it about... everything. From your mom's last Facebook post, to what is said by influencers, to this post...

Say less. Do more.

echelon•4h ago
Previously there was some requirement for novel synthesis. You at least had to string your thoughts together into some kind of argument.

Now that's no longer the case and there are lazy or unthoughtful people that simply pass along AI outputs, raw and completely unprocessed, as cognitive work for other human beings to deal with.

z3c0•4h ago
An LLM's output being a reflection of its input would imply determinism, which is the opposite of their value prop. "Garbage in, garbage out" is an adage born from traditional data pipelines. "Anything in, generic slop, possibly garbage, out" is the new status quo.
majormajor•4h ago
That's not really true at all, at least at the end user level.

You can have a very thoughtful LLM prompt and get a garbage response if the model fails to generate a solid, sound answer to your prompt. Hard questions with verifiable but obscure answers, for instance, where it generates fake citations.

You can have a garbage prompt and get not-garbage output if you are asking in a well-understood area with a well-understood problem.

And the current generation of company-provided LLMs are VERY highly trained to make the answer look non-garbage in all cases, increasing the cognitive load on you to figure out which it is.

oncallthrow•4h ago
It's deeply sad to me that I will never again be able to read a message from someone and know, for sure, that it was written by them themselves.
achierius•4h ago
You can if you can watch them write it :)
varjag•4h ago
You have to watch them do it IRL because the video feed can also be generated now.
pixl97•4h ago
Give it a few decades and they'll be ghost hacking your optical circuits Ghost in the Shell style.
dcreater•4h ago
We will have a web where proof of humanity is the de facto standard. It's only a question of when. Things have to get worse before they get better, I'm afraid.
bhaney•4h ago
How would that help? Plenty of humans will just continue to be willing AI-proxies.
accrual•3h ago
It would raise the barrier to entry, I suppose. I agree with GP that at some point in the future, real human output will become more rare and more valuable. And pure "human made" content (movies, music, books, blog posts, comments, etc.) may have access controls or costs associated.

We're already seeing the social contract around hosting your own blog change due to the constant indexing from AI crawlers.

vouaobrasil•4h ago
There's still a sort of web of trust. All you have to do is find people who you really trust and who hate AI. For instance, people who know me know that there's no way in hell I'd ever use any sort of generative AI, for anything.
pixl97•4h ago
I'm guessing that you've never actually lived in that world...

When I got squiggly written cursive letters from my grandma I could be pretty sure those were her words, thought up by herself, for the effort to accurately reproduce the consistent mess she made would have been great. But the moment we moved to the typewriter and then other digital means uniformly printed out on paper or screens, you've really just assumed that it was written by the human you were expecting.

Furthermore, the vast majority of communications done in business long before now were not done by 'people' per se. They were done by processes. In the vast majority of business email that I type out there is a large amount of process that would not occur if I were talking to a friend. More so, this communication is facilitative to some other end goal. If the entire process that existed could be automated away, humanity would be better off, as some mostly useless work would be eliminated.

Do you know why people are so willing to use AI to communicate with each other? Because at the end of the day they don't give two shits about communicating with you. It's an end goal of receiving a paycheck. There is no passion, no deep interest, no entertainment for them in doing so. It's a process demanded of them because of how we integrate Moloch into our modern lives.

unyttigfjelltol•3h ago
If you come to the LLM with your message, and then use the LLM to iterate drafts and tighten your prose, then no, the exercise was exactly the opposite of a disrespect to the reader.

Sending half-baked, run-on, unvetted writing, when you easily could have chosen otherwise, is in fact the disrespectful choice.

conartist6•2h ago
Why would I want everyone who talks to me to sound like a clone of the same vapid robot?

I would avoid that world at any cost if I were allowed a choice, but the point is that it's used as a weapon against you. Consent appears to be unnecessary.

unyttigfjelltol•1h ago
You and I must be talking to different LLMs. For example, here's how R1 1776 would concisely rewrite your comment in a warm, generous, wise voice:

I cherish the unique humanity in every voice. Forced robotic uniformity feels like an imposition, not a choice—and consent matters deeply.

The output is the opposite of how you describe it, and vastly more persuasive than your own words. When it's persuasion that matters, use all tools available.

thomashabets2•4h ago
I got a feature request in the form of a PR a few months ago that said "chatgpt generated this as a possible implementation, does it work?"

I stopped there and replied that if you don't care enough to test if it works, then clearly you don't actually want the feature, and closed the ticket.

I have gotten other PRs that are more in the form of "hey, I don't know what I'm doing. I used GPT and it seems to work, but I don't understand this part". I'm happy to help point in the right direction for those. Because at least they're trying. And it seems like this is part of their learning.

... Or they just asked jippity to make it seem that way.

MrGilbert•4h ago
It gets interesting once you start a discussion about a topic with someone who had ChatGPT do all the work. They often do not have the same in-depth understanding of what is written there as someone who wrote it themselves. Which may not come as a surprise, but yet, here we are. It's these kinds of discussions I find exhausting, because they show no honesty and no interest by the person I'm interacting with. I usually end these conversations quickly.
conartist6•2h ago
AI doesn't leave behind the people who don't use it, it leaves behind the people who do. Roko's Reverse Basilisk?
MrGilbert•2h ago
I had never heard of Roko's Basilisk before, and now I've entered a disturbing rabbit hole. People's minds are... something.

I mean, it's basically cheating. I get a task, and instead of working my way through it, which might be tedious, I take the shorter route and receive instant gratification. I can understand how that causes some kind of rush of endorphins, much like eating a bar of chocolate will. So, yeah, I would agree, although I do not have any studies that support the hypothesis.

esafak•4h ago
Applications could automatically insert subtle icons next to messages that are automatically generated. It wouldn't work for copy-and-pasted text but it's a start.
accrual•3h ago
Maybe even a post-processing step that replaces all spaces with a suitable Unicode character to act as a watermark. There are more sophisticated ways to watermark text that aren't as easily thwarted with a search/replace, but it might work for some low-risk applications.
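
A minimal sketch of that space-swap idea in Python; the choice of U+202F (narrow no-break space) as the marker is just an assumption for illustration:

    # Swap ordinary spaces for a visually similar Unicode space so that
    # generated text can be recognized later. As noted above, this is
    # trivially defeated by search/replace, so it only suits low-risk uses.
    WATERMARK_SPACE = "\u202f"  # NARROW NO-BREAK SPACE

    def add_watermark(text: str) -> str:
        return text.replace(" ", WATERMARK_SPACE)

    def has_watermark(text: str) -> bool:
        return WATERMARK_SPACE in text

    msg = add_watermark("This reply was generated automatically.")
    print(has_watermark(msg))  # True
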
dlevine•4h ago
If someone uses AI to generate an output, that should be stated clearly.

That is not an excuse for it being poorly done or unvetted (which I think is the crux of the point), but it’s important to state any sources used.

If I don't want to receive AI-generated content, I can use the attribution to filter it out.

ivanjermakov•4h ago
Related: I'd rather read the prompt https://news.ycombinator.com/item?id=43888803
bethekidyouwant•4h ago
Obviously the answer is to send them back an AI generated response
majormajor•4h ago
LLMs are very very good at adding words in a way that looks "well written" (to our current mental filters) without adding meaning or value.

I wonder how long it will be before LLM-text trademarks become seen as a sign of bad writing or laziness instead? And then maybe we'll have an arms race of stylistic changes.

---

Completely agree with the author:

Earlier this week I asked Claude to summarize a bunch of code files since I was looking for a bug. It wrote paragraphs and had 3 suggestions. But when I read it, I realized it was mostly super generic and vague. The conditions that would be required to trigger the bug in those ways couldn't actually exist, but it put a lot of words around the ideas. I took longer to notice that they were incorrect suggestions as a result.

I told it "this won't happen those ways [because blah blah blah]" and it gave me the "you are correct!" compliment-dance and tried again. One new suggestion and a claimed reason about how one of its original suggestions might be right. The new suggestion seemed promising, but I wasn't entirely convinced. Tried again. It went back to the first three suggestions - the "here's why that won't happen" was still in the context window, but it hit some limit of its model. Like it was trying to reconcile being reinforcement-learning'd into "generate something that looks like a helpful answer" with "here is information in the context window saying the text I want to generate is wrong" and failing. We got into a loop.

It was a rare bug so we'll see if the useful-seeming suggestion was right or not but I don't know yet. Added some logging around it and some other stuff too.

The counterfactuals are hard to evaluate:

* would I have identified that potential change quicker without asking it? Or at all?

* would I have identified something else that it didn't point out?

* what if I hadn't noticed the problems with some other suggestions and spent a bunch of time chasing them?

The words:information ratio was a big problem in spotting the issues.

So was the "text completion" aspect of "if you're asking about a problem here, there must be a solution I can offer" RL-seeming aspect of its generated results. It didn't seem to be truly evaluating the code then deciding so much as saying "yes, I will definitely tell you there are things we can change, here are some that seem plausible."

Imagine if my coworker had asked me the question and I'd just copy-pasted Claude's first crap attempt to them in response? Rude as hell.

drewvlaz•4h ago
One of the largest issues I've experienced is LLMs being too agreeable.

I don't want my theories parroted back to me on why something went wrong. I want to have my ideas challenged in a way that forces me to think and hopefully leads me to a new perspective that I otherwise would have missed.

Perhaps a large portion of people do enjoy the agreeableness, but this becomes a problem not only because I think there are larger societal issues that stem from this echo-chamber-like environment, but also because companies training these models may interpret agreeableness as somehow better and something that should be optimized for.

scarface_74•3h ago
That’s simple: after it tries to be helpful and agreeable, I just ask for a “devil’s advocate” response. I have a much longer prompt I sometimes use that involves being a “sparring partner”.

And I sometimes go back and forth between correcting its devil’s advocate responses and its “steel man” responses.

AllegedAlec•4h ago
I have one of those coworkers. I tell him I have a problem with a missing BIOS setting. He comes back 2 minutes later: "Yeah, I asked an LLM and it said to go into [submenu that doesn't exist] and uncheck [setting I'm trying to find]."

What's even more infuriating is that he won't take "I've checked and that submenu doesn't exist" for an answer and insists that I check again. Had to step away for a fag a few times for fear of putting his face through the desk.

Bluestein•4h ago
"Your slop is showing ..."
Isamu•4h ago
Yes, it is absolutely rude in many contexts. In a team context you are looking for common understanding and being “on the same page”. If someone needs to consult AI to get up to speed, that's fine; their interaction with you should then reflect what they have learned.
cm2012•4h ago
I couldn't disagree more. It's like someone going to Wikipedia to helpfully copy and paste a summary of an issue. Fast and with a good enough level of accuracy.

Generally the AI summaries I see are more topical and accurate than the many other comments in the thread.

Velorivox•4h ago
Really!?

[0] https://i.imgur.com/ly5yk9h.png

cm2012•3h ago
You shouldn't compare against perfection, but against reality. ChatGPT o3 has been proven to outperform even experts on knowledge tasks quite a bit.

In general it raises the mean accuracy and info of a given thread.

It's like self-driving cars.

lysace•4h ago
They are mostly posturing.

I don't see any problem sharing a human-reviewed LLM output.

(I also figure that human review may not be that necessary in a few years.)

mook•3h ago
But it's the human review that makes it not rude; not bothering to review means you're wasting the other person's time. If they wanted a chatbot response they could have gone to the LLM directly.

It's like pointing to a lmgtfy link. That's _intentionally_ rude, in that it's normally used when the question isn't worth the thought. That's what pasting a chatbot response says.

lysace•3h ago
Agreed.
cm2012•3h ago
This I agree with as well 100%.
codeulike•4h ago
Brilliant analogy with the Scramblers of Blindsight
gigatree•4h ago
Someone telling you about a conversation they had with ChatGPT is the new telling someone about your dream last night (which sucks because I’ve had a lot of conversations I wanna share lol).
toast0•4h ago
Eh. It's more like I asked my drunk uncle, and he sounded really confident when he told me X.
accrual•3h ago
I think it's different to talk about a conversation with AI versus just passing the AI output to someone directly.

The former is like "hey, I had this experience, here's what it was about, what I learned and how it affected me" which is a very human experience and totally valid to share. The latter is like "I created some input, here's the output, now I want you reflect and/or act on it".

For example I've used Claude and ChatGPT to reflect and chat about life experiences and left feeling like I gained something, and sometimes I'll talk to my friends or SO about it. But I'd never share the transcript unless they asked for it.

kelseyfrog•3h ago
This is the same sentiment I have.

It feels really interesting to the person who experienced it, not so much to the listener. Sometimes it can be fun to share because it gives you a glimmer of insight into how someone else's mind works, but the actual content is never really the point.

If anything, they share the same hallucinatory quality; i.e., hallucinations don't have essential content, which is kind of the point of communication.

labrador•4h ago
The author was thinking "boring and uninteresting" but settled on the word "rude." No, it's not rude. Emailing your co-workers provocative political memes or telling someone to die in a fire is rude. Using ChatGPT to write and being obvious about it marks you as an uninteresting person who may not know what they are talking about.

On the other hand, emailing your prompt and the result you got can be instructive to others learning how to use LLMs (aren't we all?). We may learn effective prompt techniques or decide to switch to that LLM because of the quality of the answer.

recipe19•4h ago
I disagree. The most obvious message this telegraphs is "I don't respect you or your argument enough to parse it and articulate a response, why don't you argue with a machine instead". That's rude.

There is an alternative interpretation - "the LLM put it so much better than I ever could, so I copied and pasted that" - but precisely because of the ambiguity, you don't want to be sneaky about it. If you want me to have a look at what the LLM said, make it clear.

A meta-consideration here is that there is just an asymmetry of effort when I'm trying to formulate arguments "manually" and you're using an LLM to debate them. On some level, it might be fair game. On another, it's pretty short-sighted: the end game is that we both use LLMs that endlessly debate each other while drifting off into the absurd.

mkehrt•4h ago
It's rude like calling someone on the phone is rude or SCREAMING IN ALL CAPS is rude. It's a new social norm that the author is pointing out.
lupusreal•4h ago
> boring and uninteresting

Subjecting people to such slop is rude. All the "I asked chatbot and it said..." comments are rude because they are excessively boring and uninteresting. But it gets even worse than boring and uninteresting when someone presents chatbot text as something they wrote themselves, which is a form of lying / fraud.

grey-area•4h ago
It’s rude and disrespectful, as well as boring.
righthand•4h ago
> aren't we all?

No. In fact I disabled my TabNine LLM until I can either train my own similar model and run everything locally, or not use one at all.

Furthermore the whole selling point has been that anyone can use them _without needing to learn anything_.

varjag•4h ago
I recently had a non-technical person contest my opinion on a subtle technical issue with ChatGPT screenshots (free tier o4) attached in their email. The LLM wasn't even wrong; it's just that it had the answer wrapped in customary platitudes to the user, and they were not equipped to understand the model's actual answer.
Keyframe•4h ago
There's another level altogether when the other party pretends it's not AI-generated at all.
ninetyninenine•4h ago
The problem here is that I’ve been accused multiple times of using LLMs to write slop when it was genuinely written by myself.

So I apologized and began actually using LLMs, while making sure the prompt included style guides and rules to avoid the telltale signs of AI. Then some of these geniuses thanked me for being more genuine in my responses.

A lot of this stuff is delusional. You only find it rude because you’re aware it’s written by AI. It’s the awareness itself that triggers it. In reality you can’t tell the difference.

This post, for example.

scarface_74•3h ago
I did too. The AWS "house style" of writing (I'm a former ProServe employee), even before LLMs, can come across as AI slop. Look at some AWS blog posts, even pre-2021.

I too use an LLM to help me get rid of generic filler and I do have my own style of technical writing and editing. You would never know I use an LLM.

lvl155•4h ago
While I understand this sentiment, some people simply suck at writing nice emails or have a major communication issue. It’s also not bad to run your important emails through multiple edits via AI.
z3c0•4h ago
Is it too much to ask them to learn? People can have poor communication habits and still write a thoughtful email.
yoyohello13•4h ago
Seriously. If you can’t spend effort to communicate properly, why should I expend effort listening?
Al-Khwarizmi•3h ago
Maybe yes, it's too much?

I'm a non-native English speaker who writes many work emails in English. My English is quite good, but still, it takes me longer to write email in English because it's not as natural. Sometimes I spend a few minutes wondering if I'm getting the tone right or maybe being too pushy, if I should add some formality or it would sound forced, etc., while in my native language these things are automatic. Why shouldn't I use an LLM to save those extra minutes (as long as I check the output before sending it)?

And being non-native with a good English level is nothing compared to people who might have autism, etc.

z3c0•2h ago
I'm a native English speaker who asks myself the same questions on most emails. You can use LLM outputs all you want, but if you're worried about the tone, LLM edits drive the tone to a level of generic that ranges from milquetoast, to patronizing, to outright condescending. I expect some will even begin to favor pushy emails, because at least it feels human.
adamtaylor_13•4h ago
The article clearly supports this type of usage.
deadbabe•4h ago
Then they shouldn’t be in jobs or positions where good communication skills and writing nice emails are important.
GPerson•4h ago
Seems like there are potential privacy issues involved in sharing important emails with these companies, especially if you are sharing what the other person sent as well.
lxgr•3h ago
Almost all email these days touches Google's or Microsoft's cloud systems via at least one leg, so arguably, that ship has already sailed, given that they're also the ones hosting the large inference clouds.
stefan_•3h ago
Ha, did you see the outrage from people when they realized that sharing their deepest secrets & company information with ChatGPT was just another business record to OpenAI, totally fair game in any sort of civil suit discovery? You would think some evil force had just smothered every little child's pet bunny.

Tell people there are 10,000 license plate scanners tracking their every move across the US and you get a mild chuckle, but god forbid someone access the shit they put into some for-profit company's database under terms they never read.

lvl155•3h ago
If you work in a big enough organization, they have AI sandboxes for things like this.
scarface_74•4h ago
I work with a lot of people who are in Spanish speaking countries who have English as a second language. I would much rather read their own words with grammatical errors than perfect AI slop.

Hell, I would rather just read their reply in Spanish, dashed off without the struggle of translating it, and apply my own B1-level Spanish comprehension, than read AI-generated slop.

Al-Khwarizmi•3h ago
Or are non-native speakers. LLMs can be a godsend in that case.
crazygringo•4h ago
>> "I asked ChatGPT and this is what it said: <...>".

> Whoa, let me stop you right here buddy, what you're doing here is extremely, horribly rude.

How is it any different from "I read book <X> and it said that..."? Or "Book <X> has the following quote about that:"?

I definitely want to know where people are getting their info. It helps me understand how trustworthy it might be. It's not rude, it's providing proper context.

toast0•4h ago
Because published books, depending on genre, have earned a presumption of being based on reality. And it's easy to reproduce a book lookup, and see if they link to sources. I might have experience with that book and know of its connection with reality.

ChatGPT and similar have not earned a presumption of reality for me, and the same question may get many different answers, and afaik, even if you ask it for sources, they're not necessarily real either.

IMHO, it's rude to use ChatGPT and share it with me as if it's informative; it disrespects my search for truth. It's better that you mention it, so I can disregard the whole thing.

Arainach•4h ago
A book is a credentialed source that can be referenced. A book is also something that not everyone may have on hand, so a pointer can be appreciated. LLMs are not that. If I wanted to know what they said I'd ask them. I'm asking you/the team to understand what THEY think. Unfortunately it's becoming increasingly clear that certain people and coworkers don't actually think at all very often - particularly the ones that just take any question and go throw it off to the planet burning machine.
mook•4h ago
To me, it's different because having read a book, remembered it, and pulled out the quote means you spent time on it. Pasting a response from ChatGPT means you didn't even bother to read it, understand the output, think about it to make sure it makes sense, and then resynthesize it.

It mostly means you don't respect the other person's time and it's making them do the vetting. And that's the rude part.

scarface_74•3h ago
I assume a book is correct, or at least I assume the author thought it was correct, when it comes to non-ideological topics.

But you can’t assume positive intent or any intent from an LLM.

I always test the code, review it for corner cases, remove unnecessary comments, etc just like I would a junior dev.

For facts, I ask it to verify whatever it says against web sources. I then might use it to summarize them. But even then I have my own writing style I steer it toward and then edit it.

tlonny•4h ago
I totally agree.

Isaac, if you're reading this - stop sending me PDFs generated by Perplexity!

nyclsrmn•4h ago
Could not agree more!

> "I asked ChatGPT and this is what it said: <...>". ... > "I vibe-coded this pull request in just 15 minutes. Please review"

This is even the nice case: you at least get an actual warning. Usually there is none.

When you get this type of request you are pretty much debugging AI code on the spot without any additional context.

You can just see when text or code is AI-generated. No tools needed.

JoshTriplett•4h ago
> I vibe-coded this pull request in just 15 minutes. Please review

"I hand-typed this close message in just 15 seconds. Please refrain."

stego-tech•4h ago
Well said in a short, digestible post that’s easily shared with even non-tech folks. Exactly what a good post on etiquette should read like.
smithbits•4h ago
Yes. I just had a bad experience with an online shop. I got the thing I ordered, but the interaction was bad so I sent a note to their support email saying "I like your company, but I recently had this experience that felt icky, here's what happened" and their "AI Agent Bot" replied with a whole lot of platitudes and "Since you’ve indicated no action is needed and your order has been placed, we will be closing this ticket." I'm all for LLM's helping people write better emails, but using them to auto-close support tickets is rude.
oldge•3h ago
Seems like a strongly coupled set of events that leaks their internal culture. “Customers are not worth the effort”.
jmugan•4h ago
This is exactly how I feel about both advertising and unnecessary notifications. "The signal functions to consume the resources of a recipient for zero payoff and reduced fitness. The signal is a virus."
lxgr•4h ago
I believe there was a very similar line of argument at the time photography was becoming popular. "Sure, it's a useful tool, but it will never be an art form like painting. It only reproduces what's already there!"

Yet today, we both cringe at forgettable food Instagrams and marvel at the World Press Photo of the Year.

I do fully agree with the conclusions on etiquette. Just like it's rude to try to pass a line-traced photo as a freehand drawing, decompressing very little information into a wall of text without a disclaimer is rude.

skeledrew•4h ago
Not seeing a problem here as long as the one showing the output has reviewed it themselves before showing, and made the decision to show based on that review. That's what we should be advocating for. So far what I'm seeing is people slamming others or ignoring automatically on even the vague suspicion that something has been generated.

Just the other day I witnessed in a chat someone commenting that another (who previously sent an AI summary of something) had sent a "block of text" which they wouldn't read because it was too much, then went to read it when they were told it was from Quora, not generated. It was a wild moment for me, and I said as much.

intended•4h ago
That's a big if.
marliechiller•4h ago
> "For the longest time, writing was more expensive than reading"

Such a great point, and one which I hadn't considered. With LLMs, we've flipped this equation, and it's having all sorts of weird consequences. Most obvious for me is how much more time I'm spending on code reviews. It's massively increased the importance of making the PR as digestible as possible for the reviewer, as now both author and reviewer are much closer to equal understanding of the changes compared to if the author had written the PR solely by themselves. Who knows what other corollaries there are to this reversal of reading vs. writing.

lxgr•4h ago
Yes, just like painting a picture used to be extremely time-consuming compared to looking at a scene. Today, these take roughly the same effort.

Humanity has survived and adapted, and all in all, I'm glad to live in a world with photography in it.

That said, part of that adaptation will probably involve the evolution of a strong stigma against undeclared and poorly verified/curated AI-generated content.

hks0•4h ago
> "I vibe-coded this pull request in just 15 minutes. Please review" > > Well, why don't you review it first?

My current day-to-day problem is that the PRs don't come with that disclaimer; the authors won't even admit it if asked directly. Yet I know my comments on the PR will be fed to Cursor so it can make more crappy edits, and I'll be expecting an entirely different PR to review from scratch in 10 minutes, without the main concern even being addressed. I wish I could at least talk to the AI directly.

(If you're wondering, it's unfortunately not in my power right now to ignore or close the PRs).

lukevp•4h ago
Rather than close or ignore PRs, you should start a dialogue with them. Teach them that the AI is not a person, and that if they contribute buggy or low-quality code, it's their responsibility, not the AI's, and ultimately their job on the line.

Another perspective I’ve found to resonate with people is to remind them: if you’re not reviewing the code or passing it through any type of human reasoning to determine its fit for solving the business problem, what value are you adding at all? If you just copy pasta through AI, you might as well not even be in the loop, because it’d be faster for me to do it directly, and have the context of the prompts as well.

This is a step change in our industry and an opportunity to mentor people who are misusing it. If they don’t take it, there are plenty of people who will. I have a feeling that AI will actually separate the wheat from the chaff, because right now, people can hide a lack of understanding and effort because the output speed is so low for everyone. Once those who have no issue with applying critical thinking and debugging to the problem and work well with the business start to leverage AI, it’ll become very obvious who’s left behind.

drewbug01•2h ago
> Rather than close or ignore PRs, you should start a dialogue with them. Teach them that the AI is not a person, and if they contribute buggy or low quality code, it’s their responsibility, not the AIs, and ultimately their job on the line.

I’m willing to mentor folks, and help them grow. But what you’re describing sounds so exhausting, and it’s so much worse than what “mentorship” meant just a few short years ago. I have to now teach people basic respect and empathy at work? Are we serious?

For what it’s worth: sometimes ignoring this kind of stuff is teaching. Harshly, sure - but sometimes that’s what’s needed.

lxgr•3h ago
Trust is earned in drops and lost in buckets. If somebody asks for my time to review slop, especially without a disclaimer, I'll simply not be reviewing their pull requests going forward.
craftkiller•3h ago
Show their manager?
distantprovince•2h ago
100%. Real life is much more grim. I can only hope we'll somehow figure it out.

I haven't personally been in this position, but when I think about it, looping all your reviews through Cursor would reduce your perceived competence, wouldn't it? Is giving them a negative performance review an option?

geor9e•4h ago
Pasting LLM output in a group chat is a war crime.
jmugan•4h ago
I love the post but disagree with the first example: "I asked ChatGPT and this is what it said: <...>". That seems totally fine to me. The sender put work into the prompt, and the recipient is free to read the AI output if they choose.
guywithahat•3h ago
I think in any real conversation, that's treating AI as an authority figure whose word ends the discussion, despite the fact that it could easily be wrong. It would be less rude to extract the logic and defend it on your own two feet.
jmugan•3h ago
Oh, I'm usually trying to gather information in conversations with peers, so for me, it's usually more like, "I don't know, but this is what the LLM says."

But yeah, to a boss or something, that would be rude. They hired you to answer a question.

justaj•48m ago
And what if you let a human expert fact-check the output of an LLM, provided you're transparent about the output (and its preceding prompt(s))?

Because I'd much rather ask an LLM about a topic I don't know much about and have a human expert verify its output than waste that expert's time explaining the concept to me.

Once it's verified, I add it to my own documentation library so that I can refer to it later on.

lukebechtel•4h ago
> For the longest time, writing was more expensive than reading. If you encountered a body of written text, you could be sure that at the very least, a human spent some time writing it down. The text used to have an innate proof-of-thought, a basic token of humanity.
accrual•3h ago
> "I didn't have time to write you a short letter, so I wrote you a long one."

The quote is commonly attributed to Mark Twain (its earliest form actually goes back to Blaise Pascal), and it perfectly encapsulates the sentiment. Writing something intended for another person to read used to take effort. Some people were good at it, some were less good. But now everyone can generate median-level text with hardly any effort.

linotype•4h ago
I'm building a tool to help filter these kinds of low-value articles out (especially the flow of constant AI negativity, but it will work for many topics). If you're interested, email me at linotype@fastmail.com and I'll send you a link when it's ready.
jancsika•4h ago
> The only explanation is that something has coded nonsense in a way that poses as a useful message

How is this more plausible than the scrambler's own lack of knowledge of potential specifications for these messages?

In any case, there are obviously more explanations than the "coded nonsense" hypothesis.

unyttigfjelltol•3h ago
Whether it's LLM output is orthogonal to rudeness, lack of sensibility, or generic content. There are all sorts of tools out there that use LLMs as a front end for some pretty spectacular back-end functions.

If you're offered an AI output, it should be taken as one of two situations: (a) the person adopts the output, and may have put a fair amount of effort into interacting with the LLM to get it just right, but can't honestly claim ownership (because who can), or (b) the output is outside their domain of expertise and functions as a toehold or thumbnail for some esoteric topic that no single resource they know of covers, probably because the point is so specific that such a resource doesn't exist.

The tenor of the article leaves me confused about what, specifically, people have been doing with ChatGPT that so alienated the author. I guess the point is that there are some tasks LLMs are fundamentally incompetent at? Or maybe it's more the perception that the LLM is being treated as an oracle rather than as a tool for discovery.

peteforde•3h ago
I read and enjoyed Blindsight, and ironically an LLM wouldn't have made the mistake of believing this supports such a kooky position.
QuantumGood•3h ago
In science fiction dystopias, there is often the "adjustment to the machines taking over" phase, with analysis of the arguments of those resisting. AI is rapidly ticking the boxes of common "shift to dystopia" writings.
chang1•3h ago
I get annoyed when I ask someone a question (work-related or not) and they don't know the answer, so they proceed to dictate a ChatGPT prompt at me in a stream-of-consciousness sort of way.

Then I get even more annoyed when they decide to actually run their own prompt and read the answer back to me.

I would much prefer the answer "I don't know".

jerlam•1h ago
It seems there are people deeply afraid of admitting they don't know something, despite the fact that not knowing things is the default. But giving the wrong answer is always worse.
jtwoodhouse•3h ago
Nothing says “I don’t respect you” like giving someone a sequence from a random text generator.
KronisLV•2h ago
> For the longest time, writing was more expensive than reading. If you encountered a body of written text, you could be sure that at the very least, a human spent some time writing it down. The text used to have an innate proof-of-thought, a basic token of humanity.

I think it all goes to crap when there's an economic incentive: e.g. blogspam that is profitable thanks to ads and to whoever stumbles upon it, combined with the ability to generate large amounts of coherent-sounding crap quickly.

I have seen quite a few sites like that on the first pages of both Google and DuckDuckGo, which feels almost offensive. At the same time, posts that promise something and then don't follow through are similarly bad, AI-generated or not.

For example, I recently needed to look up how vLLM compares with Ollama (yes, for running the very same abominable intelligence models, albeit for more subjectively useful reasons), because Qwen3-30B-A3B and Devstral-24B both run pretty badly on Nvidia L4 cards with Ollama, which feels disappointing given the cards' price tags and the relatively small sizes of those models.

Yet pretty much all of the comparisons I found just regurgitated high-level overviews of the technologies: 5-10 sites that felt almost identical and could have been copy-pasted from one another. Not a single one of them had a table of various models and their tokens/s on a given bit of hardware, for both Ollama and vLLM.

Back in the day, when nerds got passionate about Apache2 vs Nginx, you'd see comparisons with stats and graphs, and even though I wouldn't take all of those at face value (with Apache2 you should turn off .htaccess and tweak the MPM settings for more reasonable performance), at least there would sometimes be a Git repo.
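
For what it's worth, the numbers I wanted are easy enough to collect yourself. Here's a minimal sketch, assuming both servers are running locally on their default ports with OpenAI-compatible endpoints and report token usage; the model names are placeholders for whatever you have loaded:

    import time
    import requests

    # Assumed default OpenAI-compatible endpoints for each server.
    ENDPOINTS = {
        "ollama": "http://localhost:11434/v1/chat/completions",
        "vllm": "http://localhost:8000/v1/chat/completions",
    }
    PROMPT = "Explain the difference between a process and a thread."

    def tokens_per_second(url: str, model: str) -> float:
        # End-to-end throughput for one request; includes prompt
        # processing time, so it slightly understates pure decode speed.
        start = time.monotonic()
        resp = requests.post(url, json={
            "model": model,
            "messages": [{"role": "user", "content": PROMPT}],
            "max_tokens": 512,
            "temperature": 0,
        }, timeout=600)
        resp.raise_for_status()
        elapsed = time.monotonic() - start
        # Assumes the server reports usage in the response, which
        # both have done in my experience.
        return resp.json()["usage"]["completion_tokens"] / elapsed

    # Placeholder model identifiers; substitute your own.
    for name, model in [("ollama", "qwen3:30b-a3b"), ("vllm", "Qwen/Qwen3-30B-A3B")]:
        print(f"{name}: {tokens_per_second(ENDPOINTS[name], model):.1f} tokens/s")

Run that a few times per model and you get exactly the tokens/s-per-hardware table that none of those copy-pasted comparisons bothered to produce.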

conradludgate•2h ago
The problem I've been having: I spend time researching a problem, link documentation, and propose a clean solution. The person I'm talking with will then send a screenshot of DeepSeek or ChatGPT essentially agreeing with me.

I don't care what ChatGPT or DeepSeek thinks about the proposal. I care what _you_ think about it - that's why I'm sending it to you.

parasti•2h ago
My boss posts GPT output as gospel in chats and task descriptions. So now, instead of "you figure it out", it's "read this LLM-generated garbage and then figure it out".
627467•13m ago
Another one to add to the list:

(Present a solution/output proposal to team)

> Did you ask ChatGPT?