>These aren’t the actions of early adopters—they’re the moves of a community that’s decided it’s not worth it at the moment.
These aren't the words of a human – they're the words of an LLM
eschatology•1h ago
These ai authorship accusations are now everywhere and it's getting really annoying.
It just distracts from the discussion and adds nothing.
exmadscientist•1h ago
I disagree. Articles written by AI are inherently less trustworthy (they're notorious for fabrications and hallucinations) and often have a very low content density. "Write me a 10 paragraph article about high hardware prices for my blog" is the sort of thing that expands to a lot of fluff with not a lot of content. I don't really want to read that article.
Even if AI might (might) be justifiable as an editor, it's still such a negative signal for "is this worth reading?" that in my opinion it is worthwhile to point out and discuss.
bryanlarsen•1h ago
This case is interesting, because it seems obvious that the AI accusation is just plain wrong. The article is riddled with the kind of grammatical and spelling mistakes that humans regularly make but that a modern AI would never make.
83•1h ago
That could easily be part of their prompt. I just did a quick test telling Copilot to "add a few spelling and grammar mistakes to look like a human wrote it" and it does a reasonably convincing job.
exmadscientist•1h ago
It's also very easy to paste a paragraph into a chatbot and ask it to revise it. Or ask it to write an introduction.
I don't really have a problem with that use of AI.
But one of the costs is reputational: potential readers are now going to assume AI wrote the whole article, fairly or unfairly. That's a consideration writers have to weigh before choosing to do this.
AnimalMuppet•59m ago
I agree that AI-written text often has a low content density. I wonder if it's a matter of information theory.
Information theory defines the information of a symbol as being related to how often it occurs and how often it is expected to occur. Something that isn't expected carries more information. (Usually "symbol" is defined as one character or byte, but it could be a word or word part.)
Well, if you think about LLMs that way, they give you the most-probable next word (or word part). That means that they give you less information than normal writing. I suspect that's why it reads as bland, low-content - because it really is low content, in the information theory sense.
Now, it doesn't always give you the most probable next symbol. There is some randomness. And you can increase the randomness by turning up the temperature. But if you do, then I suspect it becomes incoherent more quickly. (Random gibberish may have high information from an information theory standpoint, but humans don't want to read that either.)
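The surprisal argument above can be sketched in a few lines of Python — a toy illustration of Shannon information content and a temperature-scaled softmax, not anything tied to a real LLM's internals:

```python
import math

def surprisal_bits(p):
    """Shannon information content of an outcome with probability p."""
    return -math.log2(p)

def softmax_with_temperature(logits, temperature=1.0):
    """Rescale logits by temperature, then normalize to probabilities.

    Higher temperature flattens the distribution, raising the chance
    of sampling lower-probability (higher-surprisal) tokens.
    """
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# A very likely next token carries little information...
print(surprisal_bits(0.9))   # ≈ 0.15 bits
# ...while an unlikely one carries much more.
print(surprisal_bits(0.01))  # ≈ 6.64 bits

# Raising the temperature flattens the distribution: the top token's
# probability drops and the tail tokens become more likely.
logits = [2.0, 1.0, 0.1]
print(softmax_with_temperature(logits, 1.0))
print(softmax_with_temperature(logits, 2.0))
```

So always picking the argmax yields text whose per-token surprisal is as low as possible — "low content" in exactly the sense described above — while turning up the temperature buys surprisal at the cost of coherence.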
inahga•1h ago
On the contrary, GP's warning saved my time and attention. For that I thank them.
nottorp•1h ago
Maybe it's because, no matter how it was written, this is a boring piece about the tiny, tiny minority of higher-numbers-obsessed "gamers" who actually upgrade their hardware yearly.
It's not even representative for gamers as a whole.
And starting from that, it's easy to also accuse it of being LLM generated, even if it isn't.
I have no opinion because I couldn't go past the first paragraph. It's not talking about any subgroup I can identify with.
Also after skimming it didn’t say anything new or insightful. No matter if “content creator” or “AI”.
RGamma•1h ago
LLMs allow you to blow up a crudely defined sentiment into a cloud of semi-plausible sounding blurb too easily.
If we allow this to become normalized, then I'm done with this part of the internet. (And I agree, one has to use this criticism with some care.)
messe•1h ago
Nice en—dash.
LoganDark•1h ago
Sometimes, I wish hair spaces worked on HN so I could typeset em-dashes properly...
InitialLastName•1h ago
I don't know, it would take quite a subtle prompt to get an LLM to write the sub-edited slop in the latter part of the article:
> We are now again in a new inflatory phase, and this time the difference is that it does not only impact GPUs, but almost everything really important, like RAM and NVME storage.
> But there’s also something new on the software side. The advent of new technologies like DLSS, FSR, and more recently Framegen have also changed the performance equation a bit.
> Finally, the games themselves are not pushing the enveloppe [sic] as much as they used to.
> but that’s not obvious that the visual outcomes are far better.
> We have entered in the marginal progress zone.
bryanlarsen•1h ago
AI wouldn't have got this sentence backwards either:
> We had Witcher 3: Blood and Wine back in 2016 and while it’s the best looking game ever, it has aged very well.
I'm sure the author meant to say it's not the best looking game ever.
tokai•1h ago
55 answers to that poll, with no idea if or how it has changed from last year. I do believe that many people are postponing upgrades, but those poll results are not worth spending that many words on.
yomismoaqui•1h ago
I upgraded my laptop this past December for fear of the RAM price hike.
Today I checked its price on the site where I bought it, and it was almost 300€ more.
pocksuppet•1h ago
Who was upgrading on a yearly schedule?
mjorgers•1h ago
Am I the only one who hasn't felt the need to upgrade in _way_ more than just one year? I still have an old XPS 15 9560 (pushing 10 years!!!) running Ubuntu which is perfectly usable. I upgraded the RAM (32 GB) and the battery, and I still consider it totally usable for most day-to-day tasks. Development, docker containers, browsing the web with an unhealthy number of tabs open. What more do I need?
hotsauceror•1h ago
Agreed. Main box is a 10+ year old i5-4770K, I think. 16 GB RAM, GTX 1080. Runs Debian Trixie + KDE, Spotify, VS Code, docker, Steam, etc just fine.
0_____0•1h ago
Same. My daily driver is a high spec Dell laptop from 2018. I do CAD work on it and it's approximately fine. I upgraded the memory last year and I've had to repaste the heatsink and replace the battery, but I still can't justify getting a new machine given that it does everything I actually need flawlessly.
pinkmuffinere•2m ago
As somebody who only buys used electronics, I am worried that this means used-electronics prices are going to start rising toward a more accurate (and more expensive) level. Bad for me, but probably good for allocation of resources.