It's also fair to use it as a clever dictionary, to find the right expression, or to get grammar and spelling right. (This post could really use a round of corrections.)
But in the end, the message and the reasoning should be yours, and any facts that come from the LLM should be verified. Expecting people to read unverified machine output is rude.
I think we haven't realized yet that most of us don't really have original thoughts. Even in creative industries, the amount of plagiarism (or so-called inspiration) is at an all-time high (and that was true before LLMs were available).
Every time I come up with an algorithm idea, or a system idea, I'm always checking who has done it before, and I always find significant prior art.
Even for really niche things.
I think my name Aeonik Chaos might be one of the only original, never-before-done things. And even that was just an extension of established linguistic rules.
An author that does nothing but "plagiarize" and regurgitate the ideas of others is incredibly valuable... if they exercise their human judgement and only regurgitate the most interesting and useful ideas, saving the rest of us the trouble of sifting through their sources.
Quite. It's the attention economy: you've demanded people's attention, and then you shove in their faces crap that even you didn't spend time reading.
Even if you're using it as an editor... you know that editors vary in quality, right? You wouldn't accept a random editor just because they're cheap or free. Prose has a lot in it, not just syntax, spelling and semantics, but style, tone, depth... and you'd want competent feedback on all of that. Ideally insightful feedback. Unless you yourself don't care about your craft.
But perhaps you don't care about your craft. And if that's the case... why should anyone else care or waste their time on it?
That’s the rudeness. But this takes care of itself—we just adjust trust accordingly.
This should be viewed as an absolutely unacceptable outcome.
I want society to become higher trust, not even lower trust :(
If the alternative is no editor, then yeah, I would. Most of what I write receives no checks by anyone other than me. A very small percentage of my output gets a second set of eyes, and it is usually a coworker or a friend (depending on the context of what is being written). Their qualification is usually that they were available and amenable.
> Unless you yourself don't care about your craft.
This is a tad elitist. I care about my craft and would love it if a competent and insightful editor went over every piece of writing I put out for others to read. But it would cost too much and would be too hard to arrange; I simply can’t afford it. On the other hand, I can afford to run my writing through an LLM and improve it here and there occasionally. Not because I don’t care about my craft, but precisely because I do.
Sometimes we (I) might follow ideas over authority/authorship. E.g., I'll happily read AI-generated stuff all day long on topics I'm super into.
Do I have to be the instigator? Can someone else prompt/filter/etc. for me? I think so. They'll do it differently and perhaps better than me.
A lot of the structurally important knowledge the model has about your code gets lost whenever the context gets compressed.
Solving this problem will mark the next big leap in agentic coding, I think.
1. They deliberately chose to not take a few minutes to communicate with you, but expect something of you.
2. The hard part of writing is organizing thoughts into something coherent, not typing something out. If you don't understand something enough to write it in the first place, the LLM can't magically read your mind and understand what you want to say for you.
So just like any other tool really.
I have discovered this week that Claude is really good at red-teaming code (and specs, and ADRs, and test plans), much better than most human devs, who don’t like doing it because it’s thankless work and don’t want to be “mean” to colleagues by being overly critical.
I keep seeing people saying how amazing it is to code with these things, and I keep failing at it. I suspect that they're better at some kinds of codebases than others.
Downside: lots of Python, and Python indentation causes havoc with a lot of agentic coding tools. RooCode in particular seems to mangle diffs all the time, irrespective of model.
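For what it's worth, here's a minimal sketch of why that's nastier in Python than in brace-delimited languages (the function and test values are made up for illustration): a diff applied one indent level off can still parse cleanly, so the mangled edit becomes a silent logic bug instead of a syntax error the tool could catch and retry.

    # Intended: count how many values are negative.
    def count_negatives(values):
        count = 0
        for v in values:
            if v < 0:
                count += 1
        return count

    # The same function after a mangled diff re-indents `return count`
    # one level too deep, moving it inside the loop. This still parses,
    # but now it returns after examining only the first element.
    def count_negatives_mangled(values):
        count = 0
        for v in values:
            if v < 0:
                count += 1
            return count

    assert count_negatives([3, -1, -2]) == 2
    assert count_negatives_mangled([3, -1, -2]) == 0  # silently wrong

In a braced language a formatter can usually recover the intended structure from the braces; in Python the whitespace is the structure, so there's nothing to recover from.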
Probably. My work's custom dev agent poops the bed on our front-end monorepo unless you're very careful about context, but then being careful about context is sort of the name of the game anyway...
I'm using them, mainly for scaffolding out test boilerplate (but not actual tests, most of its output there is useless) and so on, or component code structure based on how our codebase works. Basically a way more powerful templating tool I guess.
It is certainly your prerogative to believe that, but know your opinion is far from universal. It is a widespread view that AI-written text is worse.
Why are you on hackernews and not talking to an LLM?
Large Language Models (LLMs), like GPT-4, offer numerous benefits for writing tasks across various domains. Here’s a breakdown of the key advantages:
1. Enhanced Productivity
Faster Drafting: Quickly generate drafts for essays, reports, emails, blog posts, and more.
24/7 Availability: Instant support with no downtime or fatigue.
Reduced Writer’s Block: Provides starting points and creative prompts to overcome mental blocks.
2. Improved Writing Quality
Grammar and Style: Corrects grammar, punctuation, and stylistic issues.
Tone Adjustment: Adapts tone to suit professional, casual, persuasive, or empathetic contexts.
Clarity and Conciseness: Helps simplify complex ideas and remove redundant language.
3. Creativity and Ideation
Brainstorming: Assists in generating titles, outlines, metaphors, and analogies.
Storytelling: Offers plot ideas, character development, and dialogue suggestions for creative writing.
Variations: Produces multiple versions of the same message (e.g., for A/B testing).
4. Language Versatility
Multilingual Support: Translates and writes in many languages.
Localization: Tailors content for different cultural contexts or regions.
5. Research Assistance
Summarization: Condenses large documents or articles into key points.
Information Retrieval: Provides background context on topics quickly (though should be fact-checked for critical work).
Citation Help: Assists in generating citations in formats like APA, MLA, or Chicago.
6. Editing and Rewriting
Paraphrasing: Rewrites text to avoid plagiarism or improve readability.
Consistency Checks: Maintains tone, terminology, and formatting across long documents.
Content Expansion: Adds detail to thin content or elaborates on underdeveloped points.
7. Customization and Integration
Prompt Engineering: Tailors responses for specific industries (e.g., legal, medical, technical).
API Integration: Can be embedded into writing tools, content platforms, or CMS systems.
8. Cost Efficiency
Reduces Need for Human Writers: Especially for repetitive or low-complexity tasks.
Scales Effortlessly: One model can serve multiple users or projects simultaneously.
Would you like a breakdown of how these benefits apply to a specific type of writing (e.g., academic, marketing, business)?
1. Enhanced Productivity
Yes, LLMs can produce text quickly, but speed is not synonymous with quality. Churning out a draft in seconds is only useful if that draft actually advances the writer’s ideas, rather than lulling them into outsourcing thought itself. What often happens is that people mistake “having words on a page” for “having meaningful ideas.” Productivity in writing is not about word count—it’s about clarity of thought, and clarity is something that an LLM cannot supply. It can rearrange existing patterns, but it cannot truly reason or generate original insight. A fast draft is worthless if it’s hollow.
2. Improved Writing Quality
This point assumes that grammar and surface-level polish are the essence of good writing. They are not. Good writing emerges from the writer’s voice, their personality, their quirks, even their mistakes. Grammar-correcting AI tends to standardize expression into a bland, middle-of-the-road prose style. The result is “correct,” but sterile. Moreover, “tone adjustment” and “clarity” are superficial facsimiles of understanding. Simplifying an idea is only valuable if you understand what makes it complex in the first place. AI doesn’t “understand” ideas—it flattens them into patterns of words that look simpler but may remove nuance in the process.
3. Creativity and Ideation
Here is where the hype is the most exaggerated. Brainstorming with an LLM often produces generic, cliché, or predictable results. If you ask for metaphors, you’ll get the most common ones floating around in its training data. If you ask for plots, you’ll get reheated versions of existing tropes. Calling this “creativity” misunderstands what creativity actually is: the human capacity to connect disparate, personal experiences into something novel. An LLM is bounded by statistical averages. It cannot be surprised by itself. Humans, on the other hand, can.
4. Language Versatility
Translation and localization are areas where LLMs seem promising, but again, nuance matters. Language is not merely about syntax or vocabulary; it is deeply cultural, contextual, and historically embedded. Machine translation may be “good enough” for casual use, but it consistently fails to capture subtext, irony, humor, idiom, or cultural resonance. Outsourcing too much of this to AI risks flattening linguistic richness into something utilitarian but impoverished.
5. Research Assistance
This one is especially dangerous. Yes, LLMs can summarize and generate context, but they are notorious for producing confident-sounding misinformation (“hallucinations”). Unless the user already has expertise in the topic, they will not know whether what they’re reading is accurate. This means that instead of empowering research, LLMs encourage intellectual laziness and misinformation at scale. The “citation help” is even worse: fabricated references, garbled bibliographic entries, and misleading formatting are common. Presenting this as a “benefit” is disingenuous without an equally strong warning.
6. Editing and Rewriting
Paraphrasing and consistency checks may sound helpful, but they too come at a cost. When you outsource the act of rewriting, you risk losing the friction that forces you to refine your own ideas. Struggling to find words is not a flaw—it’s part of thinking. Offloading that process to an algorithm encourages passivity. You end up with smoother sentences, but not sharper thoughts. “Consistency” is also a double-edged sword: AI can enforce bland uniformity where variation and individuality might have been more compelling.
7. Customization and Integration
This is just another way of saying “industrialization of writing.” The more writing is engineered through prompts and APIs, the more it shifts from being a human practice to being an automated pipeline. At that point, writing stops being about human connection or expression and becomes just another commodity optimized for scale. That’s fine for spam emails or ad copy, but disastrous if applied to domains where authenticity and trust actually matter (e.g., journalism, education, or literature).
8. Cost Efficiency
Framing this as a cost benefit—“reduces need for human writers”—is perhaps the most telling point in your list. This reduces writing to a purely economic function, ignoring its human and cultural value. The assumption here is that human writers are redundant unless they can outcompete machines on efficiency. That is not just shortsighted; it’s destructive. Human writers don’t merely “generate content”—they interpret, critique, and shape culture. Outsourcing all that to probabilistic models risks a future where the written word is abundant but devoid of depth.
The larger issue is that your entire framing assumes writing is merely a transactional process: input (ideas or tasks) → output (words on a page). But writing is not just about producing text. It is about thinking, communicating, and connecting. By presenting LLMs as a categorical improvement, you erase the most important part of the process: the human struggle to articulate meaning.
So yes, LLMs have uses, but they should be treated as narrow tools with serious limitations—not as the new standard for all writing. To present them otherwise is to flatten human expression into machine-mediated convenience, and to celebrate that flattening as “progress.”
I don’t know you, don’t trust you, and if you write with AI nobody else will get to know you or trust you, either, unless they fall for your false AI mask.
If you call yourself a writer, having tell-tale LLM signs is bad. But for people whose work doesn't involve having a personal voice in written language, it might help them express things better than before.
Can we please stop propagating this accusation? Alright, sure, maybe LLMs overuse the em-dash, but it is a valid typographical mark which was in use way before LLMs and is even auto-inserted by default by popular software on popular operating systems—it is never sufficient on its own to identify LLM use (and yes, I just used it—multiple times—on purpose in 100% human-written text).
Sincerely,
Someone who enjoys and would like to be able to continue to use correct punctuation, but doesn’t judge those who don’t.
Yes, it's not a guarantee, but it is a very good signal that something was at least partially LLM-written. It is also a very practical signal; there are a few other signs, but none of them are this obvious.
I believe you. But also be aware of the Frequency Illusion. The fact that someone mentions that as an LLM signal also makes you see it more.
https://en.wikipedia.org/wiki/Frequency_illusion
> Yes, it's not a guarantee, but it is a very good signal that something was at least partially LLM-written.
Which is perfectly congruent with what I said with emphasis:
> it is never sufficient on its own to identify LLM use
I have no quarrel with using it as one signal. My beef is when it’s used as the principal or sole signal.
Yeah, maybe that's the one thing people who didn't know how to do it before have learnt from "AI" output.
I feel the em-dash is a tell because you have to go out of your way to use it on a computer keyboard. Something anyone other than the most dedicated punctuation geeks won't do for a random message on the internet.
Things are different for typeset books.
There’s no incantation. On macOS it’s either ⌥- (option+hyphen) or ⇧⌥- (shift+option+hyphen) depending on keyboard layout. It’s no more effort than using ⇧ for an uppercase letter. On iOS I long-press the hyphen key. I do the same for the correct apostrophe (’). These are so ingrained in my muscle memory I can’t even tell you the exact keys I press without looking at the keyboard. For quotes I have an Alfred snippet which replaces "" with “” and places the cursor between them.
But here’s the thing: you don’t even have to do that, because Apple operating systems do it for you by default. Type -- and it converts to —; type ' in the middle of a word and it replaces it with ’; and for quotes it adds the correct opening and closing marks depending on where you type them.
The reason I type these myself instead of using the native system methods is that those work a bit too well. Sometimes I need to type code in non-code apps (such as in a textarea in a browser) and don’t want the replacements to happen.
> I feel the em-dash is a tell because you have to go out of your way to use it on a computer keyboard.
You do not. Again, on Apple operating systems these are trivial and on by default.
> Something anyone other than the most dedicated punctuation geeks won't do for a random message on the internet.
Even if that were true—which, as per above, it’s not, you don't have to be that dedicated to type two hyphens in a row—it makes no sense to conflate those who care enough about their writing to use correct punctuation and those who don’t even care enough to type the words themselves. They stand at opposite ends of the spectrum.
Again, using em-dashes as one signal is fine; using it as the principal or sole signal is not.
My keyboard has no keypad so I’m not sure there’s another way.
I also set things up such that hitting Caps Lock twice in a row sends an Escape character, which makes using Vim a tiny bit nicer.
I don't really understand how AI developed a bias towards doing it correctly rather than doing it the lazy way. But hearing so much about em-dashes qua LLM detection mechanism eventually just got me to decide that typing an ordinary hyphen really is just lazy. And then I ended up configuring my system to make it reasonably easy to type them.
LaTeX power users unite against the markdown monkey keyboard mashers!
So... sorry (not sorry!) that LLMs try to be like us and not the heathens.
I so wish people would stop spouting this bogus "sign" — but I know I'm going to be disappointed.
You know what people did before the AI fad? They read other people's books. They found and talked to interesting people. They found themselves in, or put themselves in, interesting situations. They spent a lot of time cogitating and ruminating before they decided they ought to write their ideas down. They put in a lot of effort.
Now the AI salesmen come and insist you don't need a wealth of experience and talent, you just need their thingy, price £29.99 from all good websites. Now you can be like a Replicant, with your factory-implanted memories instead of true experience.
That is both a false equivalence and a form of whataboutism.
https://en.wikipedia.org/wiki/False_equivalence
https://en.wikipedia.org/wiki/Whataboutism
It is a poor argument in general, and a sure-fire way to increase shittiness in the world: “Well, everyone else is doing this wrong thing, so I can too”. No. Whenever you mention the status quo as an excuse to justify your own behaviour, you should look inward and reflect on your actions. Do you really believe what you’re doing is the right thing? If it is, fine; but if it is not, either don’t mention it or (ideally) do something about it.
> why don’t we see people mentioning they used specific tools to proofread before AI apparition?
Whenever I see this argument, I have a hard time believing it is made in good faith. Can you truly not see the difference between using a tool to fix mistakes in your work and using one to do the work for you?
> It feels like an obligation we have to respect in a way.
This was obvious from the beginning of the post. Throughout I never got the feeling you were struggling with the question intrinsically, for yourself, but always in a sense of how others would judge your actions. You quote opinion after opinion and it felt you were in search of absolution—not truth—for something you had already decided you did not want to do.
But LLMs are training wheels being forced on everyone, including experienced developers, and we are being gaslit into believing that if we don't use them, we are falling behind. In reality, however, the only study to date shows a 19% decline in productivity for experienced devs using LLMs.
I don't mind folks using crutches if they help them. The effect of LLM use on cognition and reasoning skills is not yet well studied, but preliminary results show the decline is a thing. I gotta ask: why are you guys doing that to yourselves?
It's also fine to use tire chains when you're driving on icy roads, but you have to drive much slower and should take them off when it isn't icy. It's about knowing the environment and conditions. Maybe some people don't need chains in that environment because they have winter tires (experience, in our metaphor?). Sure, you can drive faster with chains on an icy road than you can without, but you still have to drive slower and be far more alert than you would on a summer road. It is all about context.
Like it or not, people are using LLMs a lot. The output isn’t universally good. It depends on what you ask for and how you criticize what comes back. But the simple reality is that the tools are pretty good these days. And not using them is a bit of a mistake.
You can use LLMs to fix simple grammar and style issues, to fact-check argumentation, and to criticize and identify weaknesses. You can also task LLMs with doing background research, double-checking sources, and more.
I’m not a fan of letting LLMs rewrite my text into something completely different. But when I'm in a hurry or in a business context, I sometimes let LLMs do the heavy lifting for my writing anyway.
Ironically, a good example is this article which makes a few nice points. But it’s also full of grammar and style issues that are easily remedied with LLMs without really affecting the tone or line of argumentation (though IMHO that needs work as well). Clearly, this is not a native speaker. But that’s no excuse these days to publish poorly written text. It's sloppy and doesn't look good. And we have tools that can fix it now.
And yes, LLMs were used to refine this comment. But I wrote the comment.
When a tool blurs the line between who performed the task, and you take full credit despite being assisted, that is deceitful.
Spell checking helps us all pretend we're better spellers than we are, but we've decided as a society that correct spelling is more important than proving one's knowledge of spelling.
But if you're purportedly a writer, and you're using a tool that writes for you, then I will absolutely discount your writing ability. Maybe one day we will decide that the output is more important than the connection to the person who generated it, but to me, that day has not arrived.
> When a tool blurs the line between who performed the task
Who saws the wood? He who operates the tool, or the tool performing its function? What is the value of agency in a business that, supposedly, sells product? Code authorship isn't like writing, is it? Should it be?
Or is the distinction not in the product, but in the practice? Is the difference in woodworking vs lumber processing?
Or is it about expectation? e.g. when we no longer expect a product to be made by hand due to strong automation in the industry, we prepend terms such as "hand-made" or "artisanal". Are we currently still in the expectation phase of "software is written by hand"?
I have no dog in this race, really. I like writing software, and I like exploring technology. But I'm very confused and have a lot of questions that I have trouble answering. Your comment resonated though, and I'm still curious about how to interpret it all.
That's the real question that people are trying to suss out.
Clearly a trucker does not "deliver goods" and a taxi driver is not in the business of ferrying passengers - the vehicle does all of that, right?
Writers these days rarely bother with the actual act of writing now that we have typing.
I've rarely heard a musician, but I've heard lots of CDs and they're really quite good - much cheaper than musicians, too.
Is my camera an artist, or is it just plagiarizing the landscape and architecture?
The distinction I pointed out, applied to people producing writing intended for other people to read, seems to give a really clear "line". With syntactic tools, you're still fully producing the writing; with semantic tools, you're not. You can find some small amount of blurriness if you really want (does using a thesaurus count as semantic?), but it seems disingenuous to pretend that has even close to the same impact on the authorship of the piece as using AI.
Those who don't compromise on understanding will benefit from an extra tool under their belt. Those who actively leverage the tool to improve their understanding will do even better.
Those who want shortcuts and can't be bothered to understand are like cheaters in school – not in a morally wrong way, but in a they-missed-the-entire-point way.
It's really a pretty straightforward proposition to understand, and disclosure is absolutely the key so that consumers, if they choose as I do to boycott such output, can make informed decisions.
My workflow right now is to use AI for the rough draft and developmental editing stages, then switch the AI from changing files to leaving comments on files suggesting I change something. It is slower than letting it line/copyedit itself, but models derp up too much, so letting them handle edits at this stage tends to be two steps forward, two steps back.
I recently had a colleague send me a link to a ChatGPT conversation instead of responding to me. Another colleague organised a quiz where the answers were hallucinated by Grok. In some Facebook groups I'm in where people are meant to help each other, people have started just pasting the questions into ChatGPT and responding with screenshots of the conversation. I use LLMs almost daily, but this is all incredibly depressing. The only time I want to interact with an LLM is when I choose to, not when it's forced on me without my consent or at least a disclaimer.
> In some Facebook groups I'm in where people are meant to help each other, people have started just pasting the questions into ChatGPT and responding with screenshots of the conversation
I get the feeling these AI tools will just alienate society even more...

My main problem with AI usage is that people use it and turn their brains off. This isn't a new problem, but it is a new scale. People mindlessly punch numbers into a formula, run software they don't understand, or read a summary of a complex topic and declare mastery. The problem is sloppiness and our human tendency to be lazy. Lazy in the sense of spending the least energy in the moment, not the least energy over time. That's the critical distinction. Slop is momentary laziness while thoughtfulness is amortized laziness.
The problem is, in a way, not the AI but us and the cultures we have created. At the end of the day, no one cares whether you wrote the code (or docs or whatever) with AI; they care about how well it was done. You want to do things fast, but speed is nothing if the quality suffers.
I really like how Mitchell put it in this Ghostty PR[0,1]. The disclosure is to help people know what to pay more attention to. It is a declaration of where you were lazy, or didn't have expertise, or took some shortcut. It tells us what the actual problem is: slop isn't always obvious.
A little slop generally doesn't do too much harm (unless it grows and compounds), but a lot of slop does. If you are concerned about slop and the rate of slop is increasing, then you must treat everything as potential slop. Because slop isn't easily recognized, it makes effort increase exponentially. So by producing AI slop (or any kind of slop) you aren't decreasing the workload, you're outsourcing it to someone else. Often, that outsourcing produces additional costs. It only creates the illusion of productivity.
It's not about the AI; it is about shoving your work onto others. It doesn't matter if you use a shovel or a bulldozer. But people are sure going to be louder (or cross that threshold where they'll actually speak up) if you start using a bulldozer to offload your work onto them. The problem is it forces others into System 2 thinking all the time. It is absolutely exhausting.
The main reason, however, that one shouldn't "write" with LLMs is because it's a waste of everyone's time. If they wanted to know what GPT-5 thinks, they can ask it themselves.
edit:
> The problem is not the use of AI but the people who think they can, arbitrarily, criticize the work of someone else because he did or did not use AI, in the name of “ethics”.
Ah, I didn't realize that the real problem is that people complain about it. If we can figure out a way to make those people shut up, then using LLMs to write for you would be perfectly fine.
My stance is that if you're about to ask Copilot, or whatever, to respond to me, then just send me the prompt you're about to enter, as that will probably answer the question!
Here’s a thought experiment: Imagine if I handed you a $100 bill and asked you to examine it carefully. Is it real money? Perhaps you immediately suspect it is counterfeit, and subject it to stringent tests. Let’s say all the tests pass. Okay, given that it is indistinguishable from a legit $100 bill, is it therefore correct and ethical for me to spend this money?
You know the answer: “not necessarily.”
This is because spending money is about more than a series of steps in a transaction. It is based on certain premises that, if false, represent a hazard to the social contract by which we all live in peace and security.
It seems to me that many AI fanboys are arguing that as long as their money passes your scrutiny, it doesn’t matter if it was stolen or counterfeit. In some narrow sense, it really doesn’t matter. But narrow senses are not the only ones that matter.
When I read writing that you give me and present it as your work, I am getting to know you. I am learning how I can trust you. I am building a simulation of you in my mind that I use to anticipate your ideas and deeds. All that is disrupted and tainted by AI.
It’s not comparable to a grammar checker, because grammar is like clothing. When an editor modifies my grammar, this does not change my message or prevent me from getting across my ideas. But AI is capable of completely altering your ideas. How do you know it didn’t?
You can only know through careful proofreading. Did you proofread carefully? Whether you did or not: I don’t believe that people who want AI to write for them are the kind of people who carefully proofread what comes out of AI. And of course, if you ask AI to come up with ideas by itself, for all we know that is plagiarism—stolen words.
Therefore: if you use AI in your writing, you had better hide that from me. And if I find out you are using it, I will never trust you again.
I'm a native English speaker and (I think) a decent writer. But if I had to write something in another language I was only marginally fluent in, I'd probably reach for an LLM pretty quickly.
"There are a lot of tools out there (Gramarly, Antidote for naming the most famous) and I did not see someone mentioning he used this or that."
I was criticized in another thread because I used a translation assistant to improve my text, a tool that, long before the current AI hype, everyone used to write more effectively.
People need to stop believing that the watchdogs of reason are the all-seeing eye (1989). Many people, in general, seek to be ethical and utilize tools to enhance their ideas (such as a text in a non-native language), and that's okay.
First, there is the question of the mythology of the author. Would Shakespeare be himself if he had an AI ghost write his books? Would we care as much?
Setting that aside, there's nothing to say that an AI will come up with something wholly novel that's not a pastiche of what's come before. Would it be able to come up with the next Dracula? Or the next meme genre of your particular favorite? What about writing style? It could mimic Clarice Lispector, but it couldn't create a new one of her. If it did, we wouldn't recognize it as something human that we would be forced to care about in some way. If an AI came up with a Lispector, and we hadn't seen her type before, perhaps we would think the machine was hallucinating.
More than that though, why should I buy a book that an AI wrote? I can just ask an AI to tell me a story. Or I can read all of the books that were written pre-2000 - there are more than enough to satisfy my curiosity and desire for enlightenment before machines were used to print money for those that have access to them.

For me that's the most galling: it shows that the people that have access to money and the means to make a machine do the thinking for them are unable to come up with an original idea, except insofar as they push a button or give a prompt. In a few years, when AI achieves consciousness, which I believe it will, we'll be able to have machines that can write their own novels, if we wish and they want to do so. Then we can judge them on their own merits. In the meantime, if the person writing the book doesn't have anything interesting to say, isn't an intelligent person, and wants to send me a dead tree with information inside it that a machine wrote, what's the value added, other than me taking a picture of the blurb on the back, feeding it into an AI, and having that AI recreate the book? The paper it's printed on?
EDIT - Where AI (not AGI) is important is in doing the sort of hard combinatorial analysis that is so difficult in diffuse systems, like traffic control and industrial control of city services, or in combining chemical and biological synthesis for drug research, such as protein folding. AI as a tool for art is one thing, but having an AI create your doctoral dissertation or come up with a book is another. If you can ask an AI to find a cure for a disease or a novel drug and it tells you how, step by step, then by all means do it, because it would be absurd not to. It doesn't prove how intelligent you are in that field, however; there probably should be altered qualifications for how we rank how useful people are in society given AI prompts, and there will be over time, unless society just devolves into a "whoever has the most compute wins" dystopia. In which case I'm going back to Plato and Jules Verne.
Definitely don't rely on AI to substitute for a lack of fluency... Or maybe do.
Or
Writing with an LLM is not a shame
Should be "Writing with a LLM is not a shame"; no reason to put an "an" here.
It's not about the letter, it's about practical pronunciation: it's "an R" but "a U", and "an M" or "an F".