I don't mind AI output; AI is useful for getting you to "mediocre" fast, which is a real benefit for the 75% of the population who write worse than that. The rest already write well.
I'm very grateful that AI raised the floor to what used to be the 75th percentile.
> I’ve gotten perfect scores on every reading test I’ve ever taken, including the SAT and GRE.
In all honesty though, it is certainly relatable, and more people should be pissed at the state of our industry.
would often see expedients like this -- or this--or even, most minimally, just this- which held over well enough that I believe Word by default probably still replaces one of them if you use it.

It's particularly scary watching "AI slop" follow that path because of the extreme moral polarization associated with using LLMs or generative art. There are people who will see some casual mention of a game or film or app "using AI" on social media, without evidence, and immediately blast off into a witch hunt to make sure the whole world knows that everyone involved with that thing is a Bad Person who needs to be shunned and punished. It has almost immediately become the go-to way to slam someone online because it carries such strong implications, requires little or no evidence, and is almost impossible to fully refute. I think there's a lot to learn from observing this, and it does not bode well for the next few years of discourse.
I cannot tell whether that's a joke, but I'm very interested if it's serious
We had this project (all public research) to classify buildings and identify their different subsystems (e.g. load-bearing structure, roof type, ventilation type) to figure out the expected casualties if there was a WMD event of some type. We could get decent data for much of the world, but for some places we had literally nothing beyond a tiny picture of it from satellite imagery.
I had been playing with using GPT-3 to try to have it autocomplete forms like the following. This was 2021 before we had good APIs for instruct models, so this was just straight up letting the LLM regurgitate after pretraining. Here was the type of prompt we used:
""" Engineering building report for building located at 123, X Street, Knoxville TN Prepared by Benjamin Lee, FE --- Building footprint area: 1200 m2 Roof type: built-up roofing Facade material: brick HVAC present: """
Surprisingly (at the time), this was a decent prior. You could also add all sorts of one-off points of interest and amenities like swimming pools and other trivia to help guide the conditional probabilities.
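The form-completion trick can be sketched as a small prompt builder. This is a hypothetical reconstruction based only on the example prompt above (the function name, field names, and structure are my assumptions, not the project's actual code): you list the attributes you know as filled-in form lines and stop at the field whose value you want the model's continuation to supply.

```python
def building_report_prompt(address, known_fields, target_field):
    """Build a GPT-3-era completion prompt: known attributes appear as
    filled-in form lines, and the prompt ends at the field we want the
    model to complete. The continuation serves as a (weak) prior."""
    lines = [
        f"Engineering building report for building located at {address}",
        "Prepared by Benjamin Lee, FE",
        "---",
    ]
    for name, value in known_fields.items():
        lines.append(f"{name}: {value}")
    # Leave the target field's value blank so the model fills it in.
    lines.append(f"{target_field}:")
    return "\n".join(lines)

prompt = building_report_prompt(
    "123, X Street, Knoxville TN",
    {
        "Building footprint area": "1200 m2",
        "Roof type": "built-up roofing",
        "Facade material": "brick",
    },
    "HVAC present",
)
print(prompt)
```

Extra one-off points of interest (swimming pools and the like) would just be appended as additional `name: value` lines before the target field, nudging the conditional probabilities.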
This is actually useful to know. Many people grew up in a time or place where they didn't receive the same kind of education. They may have weak reading and writing skills. Many of these folks might feel self conscious about their writing. A writing assistant is one way they can feel more confident, and I don't think that's a bad thing, even if I can usually tell.
A lot of people will look at a credit agreement they're about to sign and have no way of knowing whether it's a fair deal or naked exploitation that will financially ruin them. Thanks to LLMs, they can now point their phone camera at it and ask "is this a fair contract, or am I being ripped off". The LLM will give you an answer in speech and happily go back-and-forth with questions and clarifications for as long as you need. Whether the LLM is 90% accurate or 99% accurate at that task isn't truly critical, because it's a vast improvement over the status quo. Modern life is full of tasks that are utterly trivial for people in the top 10%, but a fraught ordeal for people lower down in the distribution.
To paraphrase a very old adage: God created man and Sam Altman made them equal.
We should be addressing _why_ so much of our population is functionally illiterate and _why_ people can't read financial agreements that can harm them. This should definitely not be left to an extremely-profit-motivated entity to equalize.
Instead, more states are eroding public education in favor of school choice and voucher programs. That is scary in a world where many parents would have no qualms about pulling their kids out of school or sending them to religious schools that treat the Bible as the authoritative text on everything, and where for-profit schools would have no problem sacrificing curriculum, learning development, and staff the minute those complicate revenue.
Technology should not be fixing such a societal or legislative issue. In the example you're providing, why should the user trust the LLM, but not the credit agreement? Why shouldn't the LLM point the user to a different credit agreement, just as exploitative? The company operating the LLM may have such an incentive, and it could be very lucrative.
Illiteracy in the case you are proposing can only increase, for the benefit of some.
(The argument that people can use open source models can't possibly be applied, considering you're speaking about, as you called them, the other 90%, or even the functionally illiterate.)
The last paragraph is concerning to say the least.
Also, you can't get better without practice. LLMs being so easily accessible will make it possible to never learn how to write, which is extremely dangerous given that Microsoft, Meta, Google, and Amazon are the companies driving model development in the US, and that LLMs aren't available all the time (like when you're outside interacting with real people).
The two things that really resonated with me were:
> LLMs are very good at producing output, but they are not a substitute for you.
and
> You are a rich soup of learnings and experiences, and you’ve been simmering for decades. Please let me try some of it.
Your piece reminds me of an old song by Supertramp called "Hide in your Shell"[0]. The speaker tries to figure out how he can lead the subject out of their futile refuge and the punch lines are "I wanna know you" "Please let me near you".
Fun fact, I spent about 20 minutes composing this comment. Still "I lacked the time to make it shorter." I too was cursed with top 2% language skills and composing (and re-formulating) language is such a part of my flesh-and-blood identity that I could not conceive of a life worth living without it. When I do prompt LLMs for software solutions, I spend at least 15-25 minutes crafting context MD files and/or prompts. But then again, I'm not asking them to churn out yet another 1-shot modern web stack app.
I agree the world is richer because of folks like us. Carry on, wayward son. There'll be peace when you are done.
[0] https://www.letras.com/supertramp/84400/ I chose this site because it had the fewest pop-ups (still has a lot). Obviously, just prompting the title and the group returns the lyrics instantly from any mainstream LLM.
At the same time, thanks to anti-AI snobs like this distinguished gentleman, I have started spending significant amounts of time deliberately trying not to sound like an LLM (I'm a foreign-language speaker, an analytical writer, and a typographical nerd who used em-dashes unironically). This doubles my work (and, ironically, makes the workflow of 'think yourself' -> 'rewrite to sound more human' more expensive than 'let the LLM do the thinking' -> 'rewrite to sound more human' in a business setting).
And yet - on the other hand - this absurdity has given rise to a strange, decadent joy: I've begun to write in florid, fanciful style simply to lampoon the process itself. A mockery wrapped in velvet. A jest dressed in brocade. If one must dance for the algorithmic (or anti-algorithmic) court, why not do so with powdered wig and fan in hand?
I sort of take this as a compliment. I've been writing like this my whole life, and if it reads like LLM slop, the implication is that all of OpenAI's A/B testing and post-training converges on something like my style, which at least means it gets people to engage!
Unfortunately that's rewarded a lot these days. It's basically the same thing as rage bait.
You can capitalize on fear of AI by throwing the word slop around, getting people angry, and getting those sweet clicks.
> Crowdbotics uses AI to analyze your code base across three layers of abstraction, giving developers, architects, and business stakeholders the insights they need to modernize and maintain legacy code.
FWIW the site will be undergoing major changes soon which should help more clearly explain it.
Not that you're asking, but what would really convince me would be being able to point a cut-down version of the tool, presumably in a BYOAPI fashion, at one of my own old stale repos like https://github.com/aaron-em/same-encoder, and see what kind of a modernization plan it comes up with and how that compares with my own. A toy problem to be sure, but what better field for evaluation? Think of it like a take-home interview, perhaps.
> If you are reading this, you were most likely sent here after you shared a document with me and I viewed it unfavorably.
"Did this peasant just talk to My Majesty? Ew, how dare they, I view their document unfavorably."
> Alternately, perhaps I really like you and think you have potential, so I am sending you this message prematurely so that you don’t accidentally trigger a peeve of mine.
"Yeah, I like that one, he's funny, I'll keep him as a pet. But don't you dare send me AI slop, you peasant!"
> My life is busy. I have children, a demanding job, and personal aspirations. When I want information to help you, enable you, or supervise you, I want it in the most efficient format possible.
"Look at me, I'm the busiest. Don't have time for nothing, can't you see? Ungrateful peasants, I enable them by the thousand, only to be sent AI slop."
> I’ll cut you a break. I will hold my nose despite all the text smells because I genuinely want to see your best work in the best light possible.
"Ugh, fine, I'll have a look at your slop, you filthy slave. But that's solely because I always see everyone in the best light possible."
The human reviewer asks you to proofread your output before submission, and claims to be able to detect any AI slop. I wonder whether that's true, and if so, for how long. Maybe the reviewer will be replaced by a GAN-style LLM discriminator, and then the loop will be closed.
As a developer when I first encountered auto-linters I felt threatened. When I saw coding agents my first reaction was to feel threatened. Maybe the author is going through something similar.
(Of course, you would be well advised to review my comment history on this website before deciding how much of a commendation that constitutes, versus how much a warning.)
I'm half in data and half in software engineering... and I have no clue how best to move forward, for what it's worth, haha. I just know I need to, in some direction, with how the tech is going. I definitely can't keep going like it's still the pre-AI days.