USB-C hubs and my slow descent into madness (2021)

https://overengineer.dev/blog/2021/04/25/usb-c-hub-madness/
56•pabs3•1h ago•30 comments

My favorite use-case for AI is writing logs

https://newsletter.vickiboykis.com/archive/my-favorite-use-case-for-ai-is-writing-logs/
89•todsacerdoti•3h ago•49 comments

NIH Is Far Cheaper Than the Wrong Dependency

https://lewiscampbell.tech/blog/250718.html
24•todsacerdoti•58m ago•6 comments

ChatGPT agent: bridging research and action

https://openai.com/index/introducing-chatgpt-agent/
499•Topfi•10h ago•360 comments

Mistral Releases Deep Research, Voice, Projects in Le Chat

https://mistral.ai/news/le-chat-dives-deep
443•pember•12h ago•97 comments

Perfume reviews

https://gwern.net/blog/2025/perfume
169•surprisetalk•1d ago•89 comments

Mammals Evolved into Ant Eaters 12 Times Since Dinosaur Age, Study Finds

https://news.njit.edu/mammals-evolved-ant-eaters-12-times-dinosaur-age-study-finds
44•zdw•4h ago•25 comments

A look at IBM's short-lived "butterfly" ThinkPad 701 of 1995

https://www.fastcompany.com/91356463/ibm-thinkpad-701-butterfly-keyboard
35•vontzy•2d ago•10 comments

My experience with Claude Code after two weeks of adventures

https://sankalp.bearblog.dev/my-claude-code-experience-after-2-weeks-of-usage/
150•dejavucoder•8h ago•127 comments

Hand: open-source Robot Hand

https://github.com/pollen-robotics/AmazingHand
347•vineethy•15h ago•95 comments

Extending That XOR Trick to Billions of Rows

https://nochlin.com/blog/extending-that-xor-trick
25•hundredwatt•3d ago•0 comments

Anthropic tightens usage limits for Claude Code without telling users

https://techcrunch.com/2025/07/17/anthropic-tightens-usage-limits-for-claude-code-without-telling-users/
241•mfiguiere•6h ago•139 comments

Astronomers Discover Rare Distant Object in Sync with Neptune

https://pweb.cfa.harvard.edu/news/astronomers-discover-rare-distant-object-sync-neptune
16•MaysonL•3h ago•1 comment

Self-taught engineers often outperform (2024)

https://michaelbastos.com/blog/why-self-taught-engineers-often-outperform
176•mbastos•12h ago•143 comments

All AI models might be the same

https://blog.jxmo.io/p/there-is-only-one-model
155•jxmorris12•9h ago•79 comments

Apple Intelligence Foundation Language Models Tech Report 2025

https://machinelearning.apple.com/research/apple-foundation-models-tech-report-2025
183•2bit•9h ago•136 comments

RisingWave: An Open‑Source Stream‑Processing and Management Platform

https://github.com/risingwavelabs/risingwave
9•Sheldon_fun•2d ago•1 comment

Show HN: PlutoFilter - A single-header, zero-allocation image filter library in C

https://github.com/sammycage/plutofilter
53•sammycage•3d ago•9 comments

Run TypeScript code without worrying about configuration

https://tsx.is/
56•nailer•9h ago•39 comments

23andMe is out of bankruptcy. You should still delete your DNA

https://www.washingtonpost.com/technology/2025/07/17/23andme-bankruptcy-privacy/
44•1vuio0pswjnm7•4h ago•14 comments

Archaeologists discover tomb of first king of Caracol

https://uh.edu/news-events/stories/2025/july/07102025-caracol-chase-discovery-maya-ruler.php
136•divbzero•3d ago•30 comments

Louisiana cancels $3B coastal repair funded by oil spill settlement

https://apnews.com/article/louisiana-coastal-restoration-gulf-oil-spill-affaae2877bf250f636a633a14fbd0c7
45•geox•2h ago•12 comments

Writing a competitive BZip2 encoder in Ada from scratch in a few days (2024)

https://gautiersblog.blogspot.com/2024/11/writing-bzip2-encoder-in-ada-from.html
96•etrez•4d ago•54 comments

Stone blocks from the Lighthouse of Alexandria recovered from seafloor

https://archaeologymag.com/2025/07/lighthouse-of-alexandria-rises-again/
88•gnabgib•4d ago•19 comments

Delaunay Mesh Generation (2012)

https://people.eecs.berkeley.edu/~jrs/meshbook.html
15•ibobev•3d ago•6 comments

On doing hard things

https://parv.bearblog.dev/kayaking/
238•speckx•3d ago•86 comments

People kept working, became healthier while on basic income: report (2020)

https://www.cbc.ca/news/canada/hamilton/basic-income-mcmaster-report-1.5485729
159•jszymborski•4h ago•156 comments

Game of trees hub

https://gothub.org/
23•todsacerdoti•2d ago•6 comments

Ask HN: What Pocket alternatives did you move to?

61•ahmedfromtunis•7h ago•81 comments

ICE's Supercharged Facial Recognition App of 200M Images

https://www.404media.co/inside-ices-supercharged-facial-recognition-app-of-200-million-images/
108•joker99•7h ago•63 comments

Don't Fall for AI: Reasons for Writers to Reject Slop

https://mythcreants.com/blog/dont-fall-for-ai-nine-reasons-for-writers-to-reject-slop/
69•BerislavLopac•5h ago

Comments

lexandstuff•4h ago
It used to be that if I saw a typo or grammatical error in someone's writing, I'd switch off, figuring the author didn't care enough about the text to proofread it. Now it's the complete opposite: leaving in typos and such is a clear signal that the author cared enough about what they're writing not to outsource it to AI.

Related to that, I saw a local band posting marketing material online: amateurish typography over a collage of photos decorated with coloured markers. Two years ago I'd have laughed at what a terrible job it was; today, it's a breath of fresh human air amid all the slop we're subjected to across the internet. It caught my attention, so much so that I'm going to see the band this weekend.

analog31•4h ago
I have a rule that if something seems more literate than the person who wrote it, they probably didn't write it.

Also, the vast majority of stuff ever written isn't worth reading, so filtering your feed for stuff that's worth reading isn't new to the AI age.

boznz•4h ago
Any spelling or grammar mistake is enough for me to re-publish one of my e-books (which usually takes about 20 minutes). The reason is that even a simple error may be enough to pull your reader out of their fantasy space and back into the real world, and as an avid reader myself I would prefer that did not happen.
dsign•4h ago
I remember with fondness the typos in one of Terry Pratchett's books, left there not by him but by one of his army of editors :-)
gerdesj•3h ago
"It used to be that if I saw a typo or grammatical error in someone's writing,"

It's not quite that simple. Many moons ago I taught RSA IT skills levels 2 and three. Hmmm I used a plural for levels and a literal 2 and spelt out three.

You are probably not 50+ years old and have not had to run anti-spam email systems for several decades! When you are deciding whether something was created by something other than what is claimed, you need way more "rules" than typos and the like.

Look at the language in use: a fair sign of AI is banality, verbosity, and obsequiousness.

Please don't look upon lazy spelling and grammar as a sign of authenticity: "Its how real people work" - it isn't. That will be mercilessly abused by the baddies. Unfortunately, we will all have to raise our game and be proactive in spotting baddies.

Also, please don't become too worried about all this stuff. The bubble will eventually burst.

You be you and look after yourself. Take care.

ben_w•3h ago
Zig when others zag. This too shall pass, and will be forgotten.

Flash websites no one watched. Carousels. Consultants saying "we need a viral". Every product needed a MySpace page, to be prefixed with an "i", or to have most vowels removed. Blue-and-orange film posters.

All those trends will be lost in time, like tears in rain.

barbarr•18m ago
Same, ever since AI got big I've started to see typos and signs of amateurism in a positive light. I wonder how long it will be before people start generating slop with typos on purpose so that we think a human wrote it. Maybe we're already there.
n42•4h ago
why do I get so annoyed every time I read the word "slop" used like this? I have the same reaction to "enshittification". am I just getting grumpy and old?

it triggers the same eye roll as the schoolyard-bully nicknames so popular in politics right now: bite-sized, zero-effort, fashionable takedowns that suffocate any attempt at genuine discourse.

but I am probably just grumpy and old.

throwawayoldie•4h ago
It's easy to be grumpy in a world full of enshittified slop.
Labov•4h ago
"Slop" is a bit snappier than "artificial cultural homogenization." Now that'd get some eye rolls.
maxbond•4h ago
I think these words are useful because they convey a feeling of disenchantment people are experiencing with technology. "You say this is progress, but the experience keeps getting shittier. You say this model's output is the next big thing, but my plate is filled with indistinguishable slop."

I would point out that what they're criticizing is also lazy and driven by trends: the reflexive acceptance that whatever is new is inevitable and must be embraced. To me, "slop" especially feels like splashing someone with a bucket of water to try and wake them from a stupor.

lexandstuff•4h ago
Personally, I think it's a perfect word for what it is: carelessly created content that no one wants.
charcircuit•4h ago
Except you also see people complain about how many likes or views they get on social media. There is signal that people like a subset of "slop".
JKCalhoun•3h ago
Worse(?), a lot of what actual humans write is slop.
the_af•2h ago
This doesn't mean we should make slop easier to create.

More slop is a bad thing, not a good one.

add-sub-mul-div•4h ago
New phenomena need new words.
JKCalhoun•3h ago
I feel as you do but I also recognize that I am a bit defensive with regard to LLMs.

And maybe I'm a little too optimistic? Because I see a world in a few years when AI is producing content good enough that those still calling it "slop" will come across as sounding a little shrill.

anton-c•2h ago
If it's AI art/video with an AI voice reading an AI script (as has become common on YouTube), it will always be slop, regardless of how high-quality the output is.

I want to know a person's ideas, not a computer's regurgitation of others'. It's low effort and usually lacks a point.

Now, it doesn't have zero usefulness in writing/the arts. Probably tons, tbh. For instance, someone using an AI voice because they aren't an English speaker and want to talk to that audience, or using it to clean up grainy film, is different (in my opinion) from genning the writing or art.

Things made without enough human in the loop - I've found - lack purpose and identity. I don't see AI changing that. If it wasn't a good idea from the start, AI isn't gonna fix it. No amount of awesome CGI or A-list actors saves a terrible script.

The only party I see pushing stuff like AI music is Spotify, so it doesn't have to pay royalties, but everyone I speak to hates it: the listeners, the artists, and the record labels those models stole from. Probably instrument and audio software makers too. When people figure out a pic is AI, they voice frustration and embarrassment.

There's more in the word "slop" than just bad content. Comments/posts on here or Reddit often get slaughtered solely because they were written by AI and the user wasn't skilled enough to hide it. Some people just don't like reading something written by a machine trying to sound like a person.

I don't doubt we will advance to the stage where it reaches the same level quality-wise, but I doubt most people will want AI content while human-made stuff is available. It will still be considered low-effort slop by many, I believe.

southernplaces7•2h ago
I've always preferred the word sludge. A mixed-up waste byproduct of something that was already made, accumulating in low places, ignored when possible but capable of being toxic and clogging up things that are supposed to work better.
Aeolun•4h ago
Hmm, I don't think using AI is all that different from using Grammarly. The big problem is that it won't make terrible stories good. It can still make good stories terrible, though.
abtinf•4h ago
It doesn't mention the single most important reason: the output is absolute garbage.

I can spot AI writing very quickly now, after just a few sentences or paragraphs. It became a lot easier to spot after I tried to use it in my own writing.

Calling it “slop” is far too generous.

If you know what you want to say, you might think to yourself “I’ll have this write an outline or a first draft that I will then thoroughly edit.”

And every time, what you'll find is that the LLM output is fundamentally unusable. Points are subtly missing. Points are subtly repeated. Points are miscategorized. Points don't make sense at all. Points don't flow in a logical order.

If you try to use an LLM and you don't know what you want to say, then it's hopeless. You absolutely will not see the defects. If anyone who knows the subject reads it, they will instantly know you are a lying piece of shit.

JulieHenne•4h ago
you can check this post: https://news.ycombinator.com/item?id=44598695
JKCalhoun•3h ago
> I can spot AI writing very quickly now, after just a few sentences or paragraphs.

Not denying this is true, but like a lot of what we've seen with AI, let's see how you feel in two years' time when the models have improved as much again.

I think it was actually Brian Eno that said it (essentially): whatever you laugh about with regard to LLMs today, watch out, because next year that funny thing they did will no longer be present.

bluefirebrand•3h ago
> like a lot of what we've seen with AI, lets see how you feel in two years time when the models have improved as much

People have been saying this for years now though

JKCalhoun•2h ago
And I think every year they have improved? I can say I am certainly much more impressed now than I was a year ago.
capnrefsmmat•3h ago
I don't think the AI companies are systematically working to make their models sound more human. They're working to make them better at specific tasks, but the writing styles are, if anything, even stranger as the models advance.

Comparing base and instruction-tuned models, the base models are vaguely human in style, while instruction-tuned models systematically prefer certain types of grammar and style features. (For example, GPT-4o loves participial clauses and nominalizations.) https://arxiv.org/abs/2410.16107

When I've looked at more recent models like o3, there are other style shifts. The newer OpenAI models increasingly use bold text, bulleted lists, and headings, much more than, say, GPT-3.5 did.

So you get what you optimize for. OpenAI wants short, punchy, bulleted answers that sound authoritative, and that's what they get. But that's not how humans write, and so it'll remain easy to spot AI writing.

JKCalhoun•2h ago
That's interesting. I had not heard that. I wonder, though, whether making them sound more human and making them better at specific tasks are mutually exclusive. (Or whether making them sound more human is in fact also a valid task.)
sandspar•3h ago
I like that Brian Eno quote. If I recall correctly, he was also referring to nostalgia. Like, once the technology improves, you begin to miss the old rough edges. I know that I love seeing old images of Google DeepDream, for example.* It's the same reason why young people miss PlayStation 2 blocky graphics, or why photographers sometimes edit their images for unreal Kodachrome color. The things that annoy us today are the very things that we'll miss the most.

* https://en.wikipedia.org/wiki/DeepDream

spijdar•4h ago
I find this sort of thing discouraging, as a guy who dreamt of being a novel writer long before becoming a “computer person”.

The last few days, I let the intrusive thoughts win, and I played around with automating the process of building themes, characters, outlining, drafting, and revising a novel with the Gemini API, pausing between steps to manually edit each document. It’s crude, but with enough cycles of “read the last draft, write instructions for improving it, redo everything with those instructions” the end result is shockingly not terrible.

It’s not great. Good might even be too far. It’s derivative, and still feels like the embodiment of all the negative connotations of the term “genre fiction”.

Yet, I can’t escape the fact that it’s better reading than what I write. It is objectively less intellectually “interesting”, and it doesn’t have my “voice”, my artistic fingerprint. But it’s entertaining enough that I could see myself reading it at bedtime for fun, a sentiment I’ve never felt for my own writing.

And all that for a fraction of the effort it takes to write a long story. I'm still not sure how to feel about it. It's sapping my willpower to continue writing "for real", in the face of being able to "give life" to the characters and story ideas I've had languishing for a decade. I know that it's not "real", that the stories are superficial, and that the existence of these models is at best ethically questionable.

But for stories that, either way, I’ll probably never share with anyone else, it’s hard to feel that principled about it, in the face of a miserable comparison between my prose and an LLM’s prose. I’m sure if I wrote fiction for a living, I’d feel as passionate as the article’s author, but in my case, it’s just the melancholy of mediocrity. Ah well :-)
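
For concreteness, a minimal sketch of the draft/critique/redraft cycle described above, assuming the google-generativeai Python client; the model name, prompts, and number of passes are illustrative guesses, not the commenter's actual setup:

    import google.generativeai as genai

    genai.configure(api_key="YOUR_API_KEY")
    model = genai.GenerativeModel("gemini-1.5-pro")

    def revise(draft, passes=3):
        for _ in range(passes):
            # Read the last draft and write instructions for improving it.
            notes = model.generate_content(
                "Read this chapter draft and list concrete ways to improve "
                "its pacing, characterization, and prose:\n\n" + draft
            ).text
            # (Pause here to hand-edit the notes, as described above.)
            # Redo the draft with those instructions.
            draft = model.generate_content(
                "Rewrite the chapter, applying these notes.\n\n"
                "NOTES:\n" + notes + "\n\nDRAFT:\n" + draft
            ).text
        return draft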

twilight-code•4h ago
AI can generate text, but it cannot truly 'create' - it remixes.
canpan•3h ago
> Yet, I can’t escape the fact that it’s better reading than what I write

I worry we will have far fewer good writers and artists in the future. Everyone starts out bad, without the skill. The hurdle is: why should I put in the effort to learn if AI is already somewhat good?

Someone mentioned here before that we learn to judge skill before we learn the skill itself. The drive to jump the gap is what creates the genius.

Please don't give up, you can do it!

mattigames•3h ago
I don't think AI being somewhat good is the main stopper; it's that we know for certain it will improve at a giant pace. The very same prompt that triggers a mediocre book today will trigger a good book tomorrow (as an aside, I firmly believe the word "trigger" is far more apt than "create" when talking about AI output). There are billions of dollars invested in making sure of that, and it's not like we haven't already seen unbelievable leaps in art generation.
blibble•3h ago
> it's that we know for certain that it will improve at a giant pace

how exactly?

it's already trained on the entire corpus of human-generated text and outputs garbage

there's not a second internet to plagiarise

mattigames•3h ago
There are two things that are gonna help a lot. One is better classification: advances there will help it purge itself. Just like our digestive system filters out what we don't need, models will do the same; e.g. scientific and factual models will have zero jokes in their corpus, and you will know that the response you received came from such a humourless model. The second thing is plain feedback: teachers will use AI and give it thumbs up or thumbs down, and the machine will weight their opinion over the opinion of most students. It will also favour the input of students with the most promise, those who win math championships and all that jazz.
blibble•2h ago
but there are two new problems they didn't have to deal with when they parasitised the entire internet:

1. everyone with any content of value is now blocking the AI crawlers, because they suck up content and resources whilst offering nothing in return

2. the only people not blocking them are websites that are entirely AI slop

the high point may have been in the past...

mattigames•2h ago
They're gonna focus on high-value targets: that means giving it away for free to colleges and others in exchange for their cooperation, and giving it for free to Hollywood, plus custom solutions for their needs, again in exchange for cooperation. The entire internet was just a great bootstrap plan, not their lifelong strategy.
the_af•2h ago
> the very same prompt that triggers a mediocre book today will trigger a good book tomorrow

"Good" in which sense? That people read it and/or pay for it? But people already did, before LLMs: they read and paid for the most terrible, cliched, trite stuff. I mean, there are whole genres that are basically trash, before anyone even dreamed of AI (I'm pretty sure 90% of mainstream Hollywood script writers can be replaced by an LLM; they already feel like they were written by one anyway. This is not praise of LLMs, it's criticism of Hollywood!).

Surely, then, a "good" book is not merely something people will read or pay for. So why would AI become "good" at it, in which sense?

Reading/writing is a human activity. If you cut humans from a big part of the loop, how can the result ever be good?

This isn't the same context as writing code or building apps.

andrei_says_•2h ago
Years ago I read Luis Buñuel's biography. When he visited Hollywood (in the '50s, or maybe the '60s) he made a small device out of paper which allowed him to predict the plot of the typical Hollywood movie.

It was made out of a few wheels, the outer one larger than the inner ones, all attached with a pin in the middle. He had written types and characters and events on the edges of the outer wheels.

He would ask you how the movie started and who the main characters were, adjust the wheels containing these items, and read off the rest of the plot with very high accuracy.

Hollywood is home to amazing, masterful artists, but the suits mostly bet on what has proven to work.

the_af•2h ago
Haha, I didn't know that anecdote! It totally tracks with reality.

I didn't mean to say the people who work in Hollywood don't know their craft; there are plenty of skilled people who I'm sure would produce wonderful work (and sometimes do!) if given the chance.

I meant Hollywood as this machinery of algorithmic clones, as described perfectly by your anecdote about Buñuel.

throwawayoldie•35m ago
> we know for certain that it will improve at a giant pace

Fun fact: we actually don't, that's a leap of faith on your part.

63•1h ago
"Why should I put in effort to learn when the world already has thousands of writers better than me?"

This problem isn't new. People don't create art based on supply and demand. I don't think there's a future where people stop making things just because computers can do it too. It may be impossible to make a living off of art, but we will keep making it. Ask artists today and many will tell you it never felt like a choice to begin with.

kace91•3h ago
Humans aren’t competitive at chess against computers and haven’t been for a long time. Yet the game is as popular as ever, and people watch human players rather than AIs.

We like playing. We like human touch. That’s still there.

bluefirebrand•3h ago
Just wait until you're actually watching AI-generated video of humans who don't even exist playing chess matches that were never played in real life

Isn't the future going to be so great?

threetonesun•2h ago
Why would I watch those? The obvious reaction to all of this is going to be people leaving the house and seeing things in real life.

Sure, some people won’t, but we’re already at the point where AI has ruined any sense of reality online.

Avshalom•3h ago
>Yet, I can’t escape the fact that it’s better reading than what I write.

I mean most obviously: that's because you didn't write it. It has a novelty to it that you don't experience when you write and re-write a story yourself.

More importantly though: as G.K. Chesterton supposedly said, "Anything worth doing is worth doing poorly". The idea that you shouldn't write because you're not as good as you could be, or that you shouldn't plink around on a keyboard because you can't play Bach, is an idea that destroys any human endeavor and all human joy.

If they are stories that you will never share, why care about quality? It should be pure self-exploration.

_def•3h ago
I think it is possible to create art with so-called AI. Just not in the way people usually tend to think or expect. Imitating what already exists gives you slop. But used as a tool, you can try to create pieces of art that are genuinely valuable. It's just not making it easier. It's more difficult, if anything. But not impossible. And some people trailblaze with it.
JKCalhoun•3h ago
I'm still haunted by the "AI slop" called "Jodorowsky's Tron" [1] that went viral a while ago. If an art director had taken those as concept sketches and then created costumes based on them, it would have made for a mind-blowing film.

(Come to think of it, sounds like a good cosplay opportunity. Go as "Jodorowsky's Tron AI Slop".)

[1] https://static.wixstatic.com/media/9414a3_977e028d2ca6472294...

cobbal•2h ago
Probably possible. But if I see something that is AI, I won't bother to engage with it. I'm happier to engage with a human-written piece of bad writing, because it's a meeting of two human minds. There's some innate value to that. Me trying to understand an AI's mind? Not a worthwhile endeavor.

It's complicated further by the fact that many people don't mark what's AI and what's not, and it gets harder to be certain every day. Many of the people putting out the slop have different priorities, and don't care if they're wasting my time.

Meandering back to your point, I would be happy to look at AI-co-created art as long as I knew in advance that it significantly expressed the mind of the human who created it.

Since I can't get people to mark what is AI, I've instead considered signing off all my writing with:

(The above was written by a human without assistance from AI)

roadside_picnic•3h ago
What's unfortunate is that there aren't enough people pushing LLMs to assist in writing in creative ways that really involve messing with the model.

I'm a huge William S. Burroughs fan, and, for those unfamiliar, he and a few others invented an algorithmic technique, the "Cut-up technique" [0], to basically remix their writing. It's a major part of the reason that much of Burroughs' work has a magically confusing aspect to it.

"Prompt and pasting" from LLMs is dull, but awhile back I was experimenting with token-explorer [1] to see what would happen if I started with a prompt and explored the "high-entropy" states of the LLM. By controlling the sample path to stay in a high-entropy state you start getting very different types of responses that feel like nothing that normally comes from an LLM. You could argue it's a form of "statistical automatic writing" [2]

There is tremendous potential for genuinely interesting writing to be created with an LLM, but it's going to require popping open the box and playing around. In the Stable Diffusion world there are lots of people trying all sorts of odd experiments and, while their work is not the mainstay of generative AI images, they are able to create really interesting things.

I would love to see more people ripping open local LLMs and seeing just what the real possibilities are.

0. https://en.wikipedia.org/wiki/Cut-up_technique

1. https://github.com/willkurt/token-explorer

2. https://en.wikipedia.org/wiki/Automatic_writing
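
As a rough illustration of both ideas, here is a toy sketch in Python: a Burroughs-style cut-up function, plus a greedy "stay in high-entropy states" sampler using a small local model through Hugging Face transformers. The one-step lookahead heuristic is an assumption for illustration, not how token-explorer itself is implemented:

    import random
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    def cut_up(text, n_words=4):
        # Burroughs-style cut-up: chop the text into short fragments
        # and rearrange them at random.
        words = text.split()
        chunks = [words[i:i + n_words] for i in range(0, len(words), n_words)]
        random.shuffle(chunks)
        return " ".join(" ".join(c) for c in chunks)

    tok = AutoTokenizer.from_pretrained("gpt2")  # any local causal LM works
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    def entropy(logits):
        # Shannon entropy (in nats) of the next-token distribution.
        probs = torch.softmax(logits, dim=-1)
        return -(probs * torch.log(probs + 1e-12)).sum()

    @torch.no_grad()
    def high_entropy_step(ids, k=10):
        # Among the top-k candidate tokens, append the one whose resulting
        # state leaves the model most uncertain about what comes next.
        logits = model(ids).logits[0, -1]
        candidates = torch.topk(logits, k).indices
        best, best_h = None, -1.0
        for t in candidates:
            cand = torch.cat([ids, t.view(1, 1)], dim=1)
            h = entropy(model(cand).logits[0, -1])
            if h > best_h:
                best, best_h = cand, h
        return best

    ids = tok(cut_up("The city dissolved into static and neon rain"),
              return_tensors="pt").input_ids
    for _ in range(30):
        ids = high_entropy_step(ids)
    print(tok.decode(ids[0]))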

JKCalhoun•3h ago
I have a set of "chord dice" that you roll and then write a song around those chords.

I have a set of "story telling dice" that you toss and use the result as a writing prompt.

High entropy has existed in other forms (as you point out) before LLMs.

viccis•3h ago
Aleatoric music is mostly just "neat". I've never encountered any of it that is interesting or moving, with the exception of situations in which it is very carefully integrated as one small element of a composition. That includes the likes of Xenakis, who I think writes music whose main form of enjoyment is reading how it was composed. After doing that, listening is an unnecessary and often off-putting step.
JKCalhoun•3h ago
I've never been convinced by point 2 (AI Outputs Are Stolen From People Like You).

Every artist has stolen. I mean, that's probably putting too fine a point on things, but you'd have to show me a painting someone has created where they never saw another artist's work before. Or a book written by someone who never read a book before.

I drew all the time as a kid — making a point at age 12 to learn to draw the human figure. I started with the standard proportions that every decent book on drawing the human figure puts forth. I started with shapes representing the hips, the rib cage, the skull — you sketch lines determined by muscles over those hard structures. You draw the clavicle and the divots defining the kneecaps, suggest the inverted triangle over the figure's backside, shoulder blades protruding....

And in time I started looking at how Mort Drucker drew mouths. How another MAD artist did pockets on short-sleeve shirts. How Angelo Torres draws the ears....

In time you become an amalgam of your favorite bits and pieces of your favorite artists.

(And then you find out that R. Crumb was lifting styles from Warner Brothers, etc. when he was ramping up his craft. But of course he did.)

nkrisc•3h ago
Conflating this with human learning misses the point. Everyone is very aware of what you're talking about, and no one cares; that's not the problem. Humans learning from art and copying it is not the problem. People care when it's AI, done by corporations at an industrial scale.

The fact that it's AI is the issue.

protocolture•3h ago
It's weird to me that I see so many of these posts that don't seem to make any reference to LLMs that have been purpose-built for narrative. I guess if you had the correct information, you wouldn't make proudly incorrect blog posts.
GMoromisato•1h ago
If it's true that "AI Doesn’t Understand Stories" and "People Don’t Want to Buy AI Writing", then there's no reason to worry: no one will buy AI stories, which means no one will want to sell AI stories, which means no one will want to write AI stories.

But the scary part is that maybe AI will be able to write stories people want to read. In that case, yes, writers will suffer, just as performing musicians suffered when records/radio appeared, and recording musicians suffered when MP3s/streaming appeared.

Even worse, we won't know which future will happen until it actually does. And by then, of course, it will be too late.