I'm not reading this.
The Wikipedia article on detecting AI writing is a big help if you need to calibrate your sensors: https://en.wikipedia.org/wiki/Wikipedia:Signs_of_AI_writing
Maybe, like overplayed pop songs, in 20 years or so we’ll come around to viewing the phrase fondly.
I do miss the days when technical reports were clear and concise. This one has some interesting information, but it’s buried under a mountain of empty AI-written bloat.
Like this, its purpose is to fly under the radar unless your figurative ears are pricked up and primed to detect the telltale signs. Fuck this shit.
I can't be the only one.
It's slow and annoying. The AI overview is good enough for me most of the time, so I bet that added delay makes websites lose a lot of visits.
> 5 out of 8 points versus just 3 for "I am human." For the verifying state, it was even more dramatic — 7.5 versus 0.5.
n × p >= 5? (Sample size and margins of error. Is 5:3 even meaningful, or is it just random personal preference?) Apparent splitting of missing or inconclusive data points? (7.5 vs. 0.5 out of a total of 8 subjects.) What kind of (social) research is this supposed to be?
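For what it's worth, an exact binomial test shows how little a 5-to-3 split out of 8 tells you. This sketch assumes a 50/50 null hypothesis and treats each of the 8 data points as an independent vote, which the original write-up may not justify:

```python
from math import comb

def binom_two_sided(k, n, p=0.5):
    """Exact two-sided binomial test p-value: the total probability of all
    outcomes no more likely than observing k successes out of n."""
    pk = comb(n, k) * p**k * (1 - p)**(n - k)
    return sum(comb(n, i) * p**i * (1 - p)**(n - i)
               for i in range(n + 1)
               if comb(n, i) * p**i * (1 - p)**(n - i) <= pk + 1e-12)

# 5 "successes" out of 8 under a 50/50 null: nowhere near significant
print(round(binom_two_sided(5, 8), 3))  # → 0.727
```

A p-value of about 0.73 means a split at least this lopsided happens by chance almost three times out of four, so the 5-vs-3 result carries essentially no statistical weight on its own.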
tempest_•53m ago
If you try to run a site that has content LLMs want, or expensive calls that require a lot of compute and can exhaust resources if overused, the attack is relentless. It can be a full-time job trying to stop people who are dedicated to scraping the shit out of your site.

Even CF doesn't really stop it any more. The agent-run browsers seem to bypass it with relative ease.
KolmogorovComp•14m ago
This is wrong. Git does store full copies.
neoromantique•16m ago
Statically prebuild the most common commits (the last XX) and heavily rate-limit the deeper ones
sebzim4500•8m ago