I love Firefox, but I don't love Mozilla, and there's no way to donate specifically to Firefox.
[1] https://joindatacops.com/resources/how-73-of-your-e-commerce...
They should mainly be worried about their reliability and trustworthiness. They should not worry about article length, as long as the length comes from exhaustiveness and the important content remains accessible.
Serving perfectly digestible, easy-to-read bits of information must not be the primary goal of an encyclopedia.
By the way, "AI summaries" routinely contain misrepresentations, misleading sentences, or just plain wrong information.
Wikipedia is (rightly) worried about AI slop.
The reason is that LLMs cannot "create" reliable information about the factual world, and they can only evaluate information based on what "sounds plausible" (or what matches their training priorities).
You can get an AI summary with one of the hundred buttons for this built into every consumer-facing product, including common OS GUIs and Web browsers.
Or "ask ChatGPT" for one.
nness•3mo ago
My other thought is that you don't want a link showing you scraped anything... and faking browser traffic might draw less attention.
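The idea of making scraper traffic look like ordinary browser traffic can be sketched roughly as follows. This is a hypothetical illustration, not anything from the thread: the header values are made up to resemble a desktop browser, and `make_request` is an invented helper name.

```python
from urllib.request import Request

# Illustrative browser-like headers (values are assumptions, chosen to
# resemble a typical desktop Chrome request, not taken from any real tool).
BROWSER_HEADERS = {
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) "
                  "AppleWebKit/537.36 (KHTML, like Gecko) "
                  "Chrome/120.0 Safari/537.36",
    "Accept": "text/html,application/xhtml+xml",
    "Accept-Language": "en-US,en;q=0.9",
}

def make_request(url: str) -> Request:
    """Build a request whose headers mimic a desktop browser,
    so it blends in with normal browser traffic in server logs."""
    return Request(url, headers=BROWSER_HEADERS)

req = make_request("https://example.com/")
# urllib normalizes header names, so the key is "User-agent" here.
print(req.get_header("User-agent"))
```

Whether this actually "draws less attention" depends on what the server fingerprints; headers alone won't fool TLS- or behavior-based detection.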
walterbell•3mo ago
https://utcc.utoronto.ca/~cks/space/blog/web/WeShouldBlockFo...
jjtheblunt•3mo ago
In contrast, letting their servers render the content with their proprietary tools yields the sought data, so scraping might still be a pragmatic choice.