The problem is that generated text isn't easy to detect, and I'm sure the people working on generation will work hard to make detection even harder.
I have difficulty detecting even fake videos. How can I possibly detect generated text in plain prose accurately? I would make plenty of false-positive mistakes, accusing people of using generated text when they wrote it themselves. That would cause unnecessary friction, and I don't know how to prevent it.
Second thought: Does it _really_ matter? You find it interesting, you continue reading. You don't like it, you stop reading. That's how I do it. If I read something from a human, I expect it to be their thoughts. I don't know if I should expect it to be their hand typing. Ghost writers were a thing long before LLMs. That said, it wouldn't even _occur_ to me to generate anything I want to say. I don't even spell check. But that's me. I can understand that others do it differently.
LLMs are naive and take a very mainstream view of things, which often leads them down suboptimal paths. If you can see through some of the mainstream BS on a number of topics, you can help LLMs avoid mistakes. It helps if you can think from first principles.
I love using LLMs but I wouldn't trust one to write code unsupervised for some of my prized projects. They work incredibly well with supervision though.
Probably my best and most insightful stuff has been produced more or less effortlessly, since I spent enough time/effort _beforehand_ getting to know the domain and issue I was interested in from different angles.
When I try writing fluff or being impressive without putting in the work first, I usually bump up against all the stuff I don't have a clear picture of yet, and it becomes a neverending slog. YMMV.
djeastm•34m ago
>We're not against AI tools. We use them constantly. What we're against is the idea that using them well is a strategy. It's a baseline.
Short, staccato sentences like these seem to be overused by AI. Real people tend to ramble a bit more.
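Out of curiosity, you can crudely quantify that tell by looking at sentence-length spread. A quick sketch, not a real detector; the helper name and the splitting regex are mine, and the thresholds you'd pick are anyone's guess:

    import re
    import statistics

    def sentence_length_stats(text):
        # Split on sentence-ending punctuation; crude, but good
        # enough for a back-of-the-envelope check.
        sentences = [s for s in re.split(r"[.!?]+\s+", text.strip()) if s]
        lengths = [len(s.split()) for s in sentences]
        if not lengths:
            return 0.0, 0.0
        # Uniformly short sentences with low spread are the
        # "staccato" signal; rambling human prose tends to have
        # a higher mean and much higher variance.
        return statistics.mean(lengths), statistics.pstdev(lengths)

I'd expect AI-polished marketing copy to cluster at low mean / low spread, but this would misfire on plenty of terse human writers too, which is exactly the false-positive problem mentioned upthread.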
xnorswap•28m ago
Not exclusive to AI, but I'd be willing to bet any money that the subheadings were generated.
duskdozer•25m ago
I would much, much, much rather read an article with imperfect English and mistakes than an LLM-edited article. At least I can get an idea of your thinking style and true meaning. Just as an example: if you were to use a false friend [1], an LLM edit may silently "fix" it and conceal the mistake, whereas if I notice the mistake myself, I can follow the thought process back and look up what was originally intended.
[1] https://en.wikipedia.org/wiki/False_friend