https://en.wikipedia.org/wiki/Printing_press#Gutenberg.27s_p...
If everyone has a Bible, then who needs the church to tell you what it says?
Clearly, all the Protestants who burned more witches than the Catholics ever did, and kept at it for centuries after the Inquisition had stopped. But that's just my opinion here.
It is a good analogy. There is great concern that the unwashed masses won’t know how to handle this tool and will produce information today’s curators would not approve of.
"Running your own models on your own hardware" is an irrelevant rounding error here compared to the big-company models.
The church did all of the reading and understanding for us. The owners of the church gobbled up as much information as they could (encouraging confessions), and then decided when, how, where, and which of that information flowed to us.
I have a very vocally anti-AI friend, but there is one thing he always goes on about that confuses me to no end: he hates AI, yet strongly wants an AI sexbot, is constantly linking things trying to figure out how to get one, and keeps asking me and the other nerds in our group how the tech would work. No compromises anywhere, except for one of the most human experiences possible. :shrug:
Sexbots are the raison d'etre for that sub
I expect people to be lazy, but that we'd outsource feelings was surprising.
She says that about 75% of the custom card customers would ask her what they should write for a message.
She wrote messages of friendship, love, birthdays, graduations, congratulations, sympathy, etc. To support her coworkers on other shifts, she filled an index card box with several dozen canned "custom" messages for Hallmark customers to choose from.
Somewhat separately, she reports that working at Hallmark is a good way to make a misanthrope out of an intelligent teenager. To which I reply that most of the intelligent teenagers I knew were already misanthropes! But the stories she tells, particularly of Christmas ornament hysteria, are hysterical.
I recall some Chinese-language discussion about the experience of studying abroad in the Anglophone world in the early 20th century versus the early 21st. Paradoxically, even if you are a university student, it may now be harder to break out of the bubble and make friends with non-Chinese/East Asians than before. In the early 20th century, you'd probably have been one of the few non-White students and had to break out of your comfort zone. Now if you are Chinese, there'll be people from a similar background virtually anywhere you study in the West, and it feels almost unnatural to make a deliberate effort to break out of that.
But it was the only way forward to a new equilibrium.
To quantify it, you'd need measurable changes. For example, if you showed that after widespread LLM adoption, standardized test scores dropped, people's vocabulary shrank significantly, or critical thinking abilities (measured through controlled tests) degraded, you'd have concrete evidence of increased "dumbness."
But here's the thing: tools, even the simplest ones like college research papers, have value that depends on context. A student rewriting existing knowledge into clearer language provides utility because they improve comprehension or ease access. It's still useful work.
Yes, by default, many LLM outputs sound similar because they're trained to optimize broad consensus of human writing. But it's trivially easy to give an LLM a distinct personality or style. You can have it write like Hemingway or Hunter S. Thompson. You can make it sound academic, folksy, sarcastic, or anything else you like. These traits demonstrably alter output style, information handling, and even the kind of logic or emotional nuance applied.
Thus, the argument that all LLM writing is homogeneous doesn't hold up. Rather, what's happening is people tend to use default or generic prompts, and therefore receive default or generic results. That's user choice, not a technological constraint.
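To make the "user choice" point concrete, here's a minimal sketch of how a distinct voice is usually injected: a system prompt sets the persona, and the same user question gets wrapped with different personas. The persona strings and helper function are my own illustration, not any particular vendor's API; the message format follows the common chat-completions shape.

```python
# Sketch: an LLM's output style is largely set by the system prompt,
# not baked into the model. The persona descriptions here are made up.

def styled_messages(persona: str, user_request: str) -> list[dict]:
    """Wrap a request with a persona so the model answers in that voice."""
    return [
        {"role": "system",
         "content": f"You are a writer. Respond in the style of {persona}."},
        {"role": "user", "content": user_request},
    ]

# Same question, two very different expected voices:
hemingway = styled_messages(
    "Ernest Hemingway: short declarative sentences, no adornment",
    "Explain what a hash table is.")
gonzo = styled_messages(
    "Hunter S. Thompson: frantic, first-person, digressive",
    "Explain what a hash table is.")
```

Swap the persona string and the "default" voice disappears, which is the whole point: the homogeneity is a default, not a ceiling.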
In short: people were never uniformly smart or hardworking, so blaming LLMs entirely for declining intellectual rigor is oversimplified. The style complaint? Also overstated: LLMs can easily provide rich diversity if prompted correctly. It's all about how they're used, just like any other powerful tool in history, and just like my comment here.
You say it's human nature to take shortcuts, so the danger of things that provide easy, homogenizing shortcuts should be obvious. It reduces the chance of future innovation by making it easier for more people to have their perspectives silently narrowed.
Personally I don't need to see more anecdotal examples matching that study to have a pretty strong "this is becoming a problem" leaning. If you learn and expand your mind by doing the work, and now you aren't doing the work, what happens? It's not just "the AI told me this, it can't be wrong" for the uneducated, it's the equivalent of "google maps told me to drive into the pond" for the white-collar crowd that always had those lazy impulses but overcame them through their desire to make a comfortable living.
This is how I know this comment was written by an AI.
Which to me is roughly as bad a take as "LLMs are just fancy auto-complete" was.
I feel it's worth reminding ourselves that evolution on this planet has rarely opted for human-level intelligence, and that we possess it may just be a quirk we shouldn't take for granted; it may well be that we could accidentally habituate and eventually breed ourselves dumber and subsist fine (perhaps in different numbers), never realizing what we willingly gave up.
We became a technological species.
We observed, standardized and mechanized our environments to work for us. That is our niche.
But then things snowballed in the last couple of centuries. A threshold was crossed. Our technology became our environment, and we began adapting the environment for our technology's direct benefit, and our own indirect benefit.
Simple roads for us at first, then paved for mechanized contraptions. Wires for talking at first, then optimized for computers. We are now almost completely building out a technological world for the convenience and efficiency of the technology.
And once our technology frees us from dependence on others, a second threshold will be crossed. Then neither others nor the technology will need us.
I don't see a species of devolving humans, no longer needed by their creations, in a world now convenient for those creations, finding a happy niche.
If there is a happy landing, it will need to take a different route than that.
To get 10% of a number, just move the decimal left: 10% of 40 -> 4.0.
To get 5% of a number, get 10% first, then halve it: half of 4.0 is 2.0.
If you want to do this on a computer or calculator, you simply do 40 * 0.10 for 10%, or 40 * 0.05 for 5%. I was a very young kid when I learned to do this on a calculator, and I absolutely loved it!
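The mental shortcut above, spelled out as code (the helper names are just for illustration):

```python
# 10% is a one-place decimal shift; 5% is half of that.
def pct10(x):
    return x * 0.10       # move the decimal point one place left

def pct5(x):
    return pct10(x) / 2   # half of 10%

print(pct10(40))  # 4.0
print(pct5(40))   # 2.0
```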
I have learned so much the past 2.5 years it is almost hard to believe.
To say I am getting dumber is just completely preposterous.
Maybe this would be leading me astray if I had the intelligence of Paul Dirac and I wasn't fully applying my intelligence. The problem is I don't have anything like the intelligence of Paul Dirac.
The current style and cadence of LLM output is already getting tiring for many, so I'd expect a different style to take hold soon enough. And given that LLMs can mimic any style, that is easy enough to do at scale and quickly. Then the cycle commences again: someone comes up with a novel style of writing that people like, one the LLMs don't know yet, and it all starts over....
Edit:
I also vaguely remember an article about the cultural impact of one of the early image-generation AIs, maybe DALL-E if memory serves. I remember very little of the article now except a comment an artist made, along the lines that in a few years image generation would be so good and realistic that a counterculture would inevitably emerge around nostalgia for the weird hallucinatory creations it used to make at the start, simply because at least they'd be more interesting. In a similar way you get nostalgia for things like vinyl and handcrafted toys. I think about that aspect of it broadly a lot.
Local news coverage has really suffered these past several years. Wouldn't it be great to see relevant local news emerge again, written by humans for humans?
That approach might be a good start. Use a cloud service that forbids AI bot scraping to protect copyright?
Unless you mean a platform only for vetted local journalists...
That sounds a lot like Nextdoor. With all the horrors that come with it.
No, we want real news! With some editorial oversight.
Local reddits were good for a while, but the bots and human moderators make their own rules. It's not consistent.
A nice place to start could be with old-school weather reports, where we can learn something again. It's all so superficial these days.
Local events, local political issues with objectivity, history and future outlook, the list goes on and on. Maybe muzzle the negative talk with strict categories and guard rails to avoid another Nextdoor?
Perhaps a site could kick off where people proposed and edited sites for Web Rings. The sites in question could somehow adopt them, perhaps by directly pulling from the Web Ring site.
And while we're at it, there's no reason for the Web "Ring" not to occasionally branch, bifurcate, and even rejoin threads from time to time. It need not be a simple linked list whose tail points back to its head.
Happy to mock something up if someone smarter than me can fill in the details.
Pick a topic: Risograph printers? 6502 Assembly? What are some sites that would be in the loop? Would a 6502 Assembly ring have "orthogonal branches" to the KIM-1 computer (ring)? How about a "roulette" button that jumps you to somewhere at random in the ring? (So not linear.) Is it a tree or a ring? If a tree, can you traverse in reverse?
There's still the buy-in problem though. Convincing the owners of the sites you want in the ring to modify their HTML to dynamically fetch and display the ring links.
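One way to lower that buy-in cost is for the ring host to publish a single shared data file that each member site renders links from, so no member ever has to edit their HTML when the ring changes. Here's a minimal sketch of that idea; the ring data, site URLs, and JSON-like shape are all hypothetical, not an existing webring format.

```python
# Sketch: the ring host publishes one shared structure; each member
# site computes its own prev/next neighbors from it. The ring data
# below is made up for illustration.

RING = {
    "name": "6502 Assembly Ring",
    "sites": [
        "https://example.org/6502-basics",
        "https://example.net/kim-1-corner",
        "https://example.com/asm-tricks",
    ],
}

def ring_links(ring: dict, my_url: str) -> dict:
    """Return the prev/next neighbors for one member of the ring."""
    sites = ring["sites"]
    i = sites.index(my_url)
    return {
        "prev": sites[(i - 1) % len(sites)],  # modulo wrap: it's a ring
        "next": sites[(i + 1) % len(sites)],
    }

links = ring_links(RING, "https://example.net/kim-1-corner")
```

A member site would fetch the shared file and render `links["prev"]` and `links["next"]` as its ring navigation; a "roulette" button is just a random pick from `sites`, and branches could be extra ring entries a site belongs to.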
Sure we can idealize feats of the human brain such as memorizing digits of pi. LLMs put more human behavior into the same category as memorizing digits of pi, and make the previously scarce “idea clay” available to the masses.
It’s not the same as a human brain or human knowledge but it is still a very useful tool just like the tools that let us do maths without memorizing hundreds of digits of pi.
https://en.wikipedia.org/wiki/Glasshouse_(novel)
> "Curious Yellow is a design study for a really scary worm: one that uses algorithms developed for peer-to-peer file sharing networks to intelligently distribute countermeasures and resist attempts to decontaminate the infected network".
Hat tip to HN user cstross (as I discovered the idea via Charlie’s blog):
http://www.antipope.org/charlie/blog-archive/October_2002.ht...
These topics were first brought to my attention through his amazing novel Glasshouse. I’ve had the pleasure of having my first edition copy of the book signed by the author, and I then promptly loaned it indefinitely to a friend, who then misplaced it. The man himself is a friendly curmudgeon who I am happy to have met, and I have enjoyed reading about the future through his insights into the past and present.
Also I must acknowledge Brandon Wiley, who wrote the inspiration for Curious Yellow as far as I can tell.
That's how I view LLMs now. They are what follows computers in the evolution of information technology.
Humans still have an inherent need to be heard and to hear others. Even in a pretty extreme scenario, I think bubbles of organic discussion will continue.
I'm all for it. Let big tech destroy their cash cow, then maybe we can rebuild it in OUR interest.
> The only explanation is that something has coded nonsense in a way that poses as a useful message; only after wasting time and effort does the deception become apparent. The signal functions to consume the resources of a recipient for zero payoff and reduced fitness. The signal is a virus.
> Viruses do not arise from kin, symbionts, or other allies.
> The signal is an attack.
―Blindsight, by Peter Watts
GPT Might Be an Information Virus - https://news.ycombinator.com/item?id=36675335 - July 2023 (31 comments)
GPT might be an information virus - https://news.ycombinator.com/item?id=35218078 - March 2023 (1 comment)
And, as time goes on, it'll get more efficient at the consumption and waste less and less energy on the generation of utility. It is an organism that needs servers to feed and generates hype like a deep-sea monster glows its lure.
Seems to be working out great so far. (=
Just wish we had a competent government to handle the upcoming transition. But even an incompetent one can have smart employees under it, and can give them the funding they need to accomplish this.
If, as this article predicts, the result of GPT is that we don't trust information from the internet, and everybody moves away from it, that's great. Traditional journalism was better, as it turns out. Talking mainly to your friends rather than millions of people was better, as it turns out. I'm ready to go back to that, should it come to it.
But it won't. This essay is making a catastrophic prediction that won't come to pass. Whatever the future is, it's going to be something nobody is predicting yet. It'll be better than the doomsayers predict, and worse than what the cheerleaders say. It will be nothing like a simple magnification of the present concern over epistemology.
Today it's far more difficult (and personally quite frustrating) to find information written by actual people with actual experience on any given topic than it was 5 years ago, because for every one of those articles there are now 20 more written by LLMs, often outranking them. This frustration is only going to grow as the LLM proliferation continues.
Stopped reading here.
No, humans won't stop generating content. There's no reason to believe this is inevitable.
On this view, we are not exquisitely designed machines but rather accidental pitched battles occurring in nature. The question is does this ontological view produce new predictive capacity? Can you see yourself as a being entirely driven by microscopic life, rationalizing everything after the fact so that you are the master of your destiny? Is intelligence something that you partake in rather than possess? What is technology and what does it want from us?