"Stories about AI" is not offensive to me. Its influence on the industry is undeniable and if I'm feeling tired of that content I just won't engage with it.
AI-writing is another story, but yeah -- HN is downstream of that problem. You can encourage people not to submit articles that seem to be LLM authored, but it won't work.
I think it's going to be really difficult to segregate discussions about AI from discussions about software development over the next few years.
I often wonder how exactly you'd mitigate this. Further, as a user, I wonder what incentive there is for me to write anything at all online, let alone comment on forums, if it will just be fed back into an LLM.
Is paywalling or forcing user accounts the solution? That feels antithetical to the reason for the internet at all.
Just musings.
The question that I have is this.
Is it possible that written language will converge toward AI mannerisms - i.e. most people will naturally come to write like AI because they pick up on the subtleties of language from ChatGPT, Claude, etc.? In other words, there is an exposure effect at play.
I just found out about Communication Accommodation Theory (CAT) which makes me think that the answer is probably "yes".
Turing test is really in the rearview, huh?
Humans need machines to detect if a machine wrote the text, because humans aren’t sure.
I tried it against some of my AI-generated articles. It says 100% human.
Turns out if one manually writes a structure and a core idea first, nobody thinks it's AI.
But what happens next, when we just fail at the task of recognizing ourselves in cyberspace? Where LatestClaw is just plain better at mimicking you than you are? What happens to the living we used to claw out of the ether for ourselves?
Do I need to learn to farm?
...and they will also likely pay less than they do now because there will be more labor supply, which the people currently doing those jobs won't be happy about.
I think it's because they told it to type like a 13-year-old and nobody could imagine AI talking like that.
2016-2018 was Docker and Kubernetes. 2020 was COVID. 2021-2022 was WFH good, RTO bad...and lots of Web3 and crypto stuff. 2023 was the dawn of AI, and it hasn't let up since. These are vibes and likely inaccurate.
Pot, kettle, black. "Remarkably good" drastically oversells the reliability of it and other AI detectors. It means very little that Pangram did better than other competitors in this snake-oily category in one 2025 benchmark.
Edit: Clearly the topics have evolved over time (AI, crypto; there will always be some topic taking up the majority of attention), but the type and worthiness of content seem unchanged.