https://en.wikipedia.org/wiki/Printing_press#Gutenberg.27s_p...
If everyone has a Bible, then who needs the church to tell you what it says?
Clearly, all the Protestants who burned more witches than the Catholics ever did, and kept at it for centuries after the Inquisition had stopped. But that's just my opinion here.
It is a good analogy. There is great concern that the unwashed masses won’t know how to handle this tool and will produce information today’s curators would not approve of.
"Running your own models on your own hardware" is an irrelevant rounding error here compared to the big-company models.
The church did all of the reading and understanding for us. The owners of the church gobbled up as much information as they could (encouraging confessions), and then decided when, how, where, and which of that information flowed to us.
I have a very vocally anti-AI friend, but there is one thing he always goes on about that confuses me to no end: he hates AI, yet strongly wants an AI sexbot, is constantly linking things trying to figure out how to get one, and keeps asking me and the other nerds in our group how the tech would work. No compromises anywhere, except for one of the most human experiences possible. :shrug:
I expect people to be lazy, but that we'd outsource feelings was surprising.
I recall some Chinese-language discussion about the experience of studying abroad in the Anglophone world in the early 20th century versus the early 21st. Paradoxically, even if you are a university student, it may now be harder to break out of the bubble and make friends with non-Chinese/East Asians than it was before. In the early 20th century, you'd probably have been one of the few non-White students and would have had to break out of your comfort zone. Now, if you are Chinese, there will be people from a similar background virtually anywhere you study in the West, and it takes an almost unnatural, deliberate effort to break out of that.
But it was the only way forward to a new equilibrium.
To quantify it, you'd need measurable changes. For example, if you showed that after widespread LLM adoption, standardized test scores dropped, people's vocabulary shrank significantly, or critical thinking abilities (measured through controlled tests) degraded, you'd have concrete evidence of increased "dumbness."
But here's the thing: tools, even the simplest ones, like college research papers, always have value depending on context. A student rewriting existing knowledge into clearer language has utility because they improve comprehension or provide easier access. It's still useful work.
Yes, by default, many LLM outputs sound similar because they're trained to optimize broad consensus of human writing. But it's trivially easy to give an LLM a distinct personality or style. You can have it write like Hemingway or Hunter S. Thompson. You can make it sound academic, folksy, sarcastic, or anything else you like. These traits demonstrably alter output style, information handling, and even the kind of logic or emotional nuance applied.
Thus, the argument that all LLM writing is homogeneous doesn't hold up. Rather, what's happening is people tend to use default or generic prompts, and therefore receive default or generic results. That's user choice, not a technological constraint.
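To make the "user choice, not technological constraint" point concrete, here is a minimal sketch of how the same question can be asked under two different persona system prompts. The helper function, persona strings, and question are all made up for illustration; the payload shape follows the common chat-message convention (a list of role/content dicts) used by most chat-style LLM APIs.

```python
def with_persona(persona: str, question: str) -> list:
    """Build a chat payload whose system prompt fixes a writing style."""
    return [
        {"role": "system",
         "content": f"You are a writer. Answer in the style of {persona}."},
        {"role": "user", "content": question},
    ]

question = "Why did the printing press matter?"

# Same question, two hypothetical personas; only the system message differs.
hemingway = with_persona("Ernest Hemingway: short, declarative sentences", question)
gonzo = with_persona("Hunter S. Thompson: frantic first-person gonzo", question)
```

The two payloads are identical except for the system message, yet in practice the responses they elicit read nothing alike: the "house style" people complain about is just the result of leaving that one field at its default.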
In short: people were never uniformly smart or hardworking, so blaming LLMs entirely for declining intellectual rigor is oversimplified. The style complaint? Also overstated: LLMs can easily provide rich diversity if prompted correctly. It's all about how they're used, just like any other powerful tool in history, and just like my comment here.
You say it's human nature to take shortcuts, so the danger of things that provide easy, homogenizing shortcuts should be obvious. It reduces the chance of future innovation by making it easier for more people to have their perspectives silently narrowed.
Personally I don't need to see more anecdotal examples matching that study to have a pretty strong "this is becoming a problem" leaning. If you learn and expand your mind by doing the work, and now you aren't doing the work, what happens? It's not just "the AI told me this, it can't be wrong" for the uneducated, it's the equivalent of "google maps told me to drive into the pond" for the white-collar crowd that always had those lazy impulses but overcame them through their desire to make a comfortable living.
Which to me is roughly as bad a take as "LLMs are just fancy auto-complete" was.
I feel it's worth reminding ourselves that evolution on this planet has rarely opted for human-level intelligence; that we possess it may just be a quirk we shouldn't take for granted. It may well be that we could accidentally habituate and eventually breed ourselves dumber and subsist fine (perhaps in different numbers), never realizing what we willingly gave up.
Local news coverage has really suffered these past several years. Wouldn't it be great to see relevant local news emerge again, written by humans for humans?
That approach might be a good start. Use a cloud service that forbids AI bot scraping to protect copyright?
Unless you mean a platform only for vetted local journalists...
That sounds a lot like Nextdoor. With all the horrors that come with it.
Perhaps a site could kick off where people proposed and edited candidate Web Rings. The sites in question could somehow adopt them, perhaps by pulling directly from the Web Ring site.
And while we're at it, there's no reason for the Web "Ring" not to occasionally branch, bifurcate, and even rejoin threads from time to time. It need not be a simple linked list whose tail points back to its head.
Happy to mock something up if someone smarter than me can fill in the details.
Pick a topic: Risograph printers? 6502 Assembly? What are some sites that would be in the loop? Would a 6502 Assembly ring have "orthogonal branches" to the KIM-1 computer (ring)? How about a "roulette" button that jumps you to somewhere at random in the ring? (So not linear.) Is it a tree or a ring? If a tree, can you traverse in reverse?
There's still the buy-in problem though. Convincing the owners of the sites you want in the ring to modify their HTML to dynamically fetch and display the ring links.
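A branching, rejoining ring like the one described above is really just a small directed graph rather than a circular linked list. Here is a rough sketch of that idea; the class name, method names, and example site names (the 6502/KIM-1 ring) are all invented for illustration.

```python
import random

class WebRing:
    """A 'ring' that may branch and rejoin: a directed graph of sites."""

    def __init__(self):
        # Maps each member site to its list of outgoing ring links.
        self.links = {}

    def add_link(self, src, dst):
        """Add a forward link from src to dst, registering both sites."""
        self.links.setdefault(src, []).append(dst)
        self.links.setdefault(dst, [])

    def next_sites(self, site):
        """All forward links; more than one entry means the ring branches."""
        return self.links.get(site, [])

    def roulette(self, rng=None):
        """Jump to a random member site (the non-linear 'roulette' button)."""
        rng = rng or random.Random()
        return rng.choice(list(self.links))

# Hypothetical 6502 Assembly ring with an "orthogonal branch" to KIM-1.
ring = WebRing()
ring.add_link("6502.example", "kim1.example")  # branch toward the KIM-1 ring
ring.add_link("6502.example", "asm.example")
ring.add_link("kim1.example", "6502.example")  # rejoins the main ring
ring.add_link("asm.example", "6502.example")
```

Reverse traversal ("if a tree, can you traverse in reverse?") would just mean storing or deriving the inverted edge list, which the same structure supports.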
Sure we can idealize feats of the human brain such as memorizing digits of pi. LLMs put more human behavior into the same category as memorizing digits of pi, and make the previously scarce “idea clay” available to the masses.
It’s not the same as a human brain or human knowledge but it is still a very useful tool just like the tools that let us do maths without memorizing hundreds of digits of pi.
https://en.wikipedia.org/wiki/Glasshouse_(novel)
> "Curious Yellow is a design study for a really scary worm: one that uses algorithms developed for peer-to-peer file sharing networks to intelligently distribute countermeasures and resist attempts to decontaminate the infected network".
Hat tip to HN user cstross (as I discovered the idea via Charlie’s blog):
http://www.antipope.org/charlie/blog-archive/October_2002.ht...
These topics were first brought to my attention through his amazing novel Glasshouse. I’ve had the pleasure of having my first edition copy of the book signed by the author, and I then promptly loaned it indefinitely to a friend, who then misplaced it. The man himself is a friendly curmudgeon who I am happy to have met, and I have enjoyed reading about the future through his insights into the past and present.
Also I must acknowledge Brandon Wiley, who wrote the inspiration for Curious Yellow as far as I can tell.
That's how I view LLMs now. They are what follows computers in the evolution of information technology.
Humans still have an inherent need to be heard and to hear others. Even in a pretty extreme scenario, I think bubbles of organic discussion will continue.
I'm all for it. Let big tech destroy their cash cow, then maybe we can rebuild it in OUR interest.
> The only explanation is that something has coded nonsense in a way that poses as a useful message; only after wasting time and effort does the deception become apparent. The signal functions to consume the resources of a recipient for zero payoff and reduced fitness. The signal is a virus.
> Viruses do not arise from kin, symbionts, or other allies.
> The signal is an attack.
―Blindsight, by Peter Watts
GPT Might Be an Information Virus - https://news.ycombinator.com/item?id=36675335 - July 2023 (31 comments)
GPT might be an information virus - https://news.ycombinator.com/item?id=35218078 - March 2023 (1 comment)
And, as time goes on, it'll get more efficient at the consumption and waste less and less energy on the generation of utility. It is an organism that needs servers to feed, and it generates hype the way a deep-sea anglerfish glows its lure.
Seems to be working out great so far. (=