Think text much more likely from robot than first thought
Grug say this change too big from just one em dash
This is a necessary nuance that I'll have to take into consideration. Thank you.
In the 20th century, the convention was two spaces after an end-of-sentence period. (I still do that.)
[1] https://en.m.wikipedia.org/wiki/Non-breaking_space (see specifically the example section)
Only if you used a typewriter. I was using (La)TeX in the twentieth (1990s), and it defaulted to a rough equivalent of 1.5 spaces (see \spacefactor).
Two ('full') space characters were added because of (tele)typewriters and their fixed-width fonts, and this was generally not done in 'properly' published works with proportional typefaces; see CMoS:
* https://www.chicagomanualofstyle.org/qanda/data/faq/topics/O...
Bringhurst's The Elements of Typographic Style (§2.1.4) concurs:
* https://readings.design/PDF/the_elements_of_typographic_styl...
* https://webtypography.net/2.1.4
* https://en.wikipedia.org/wiki/The_Elements_of_Typographic_St...
On HN, you can force two spaces between sentences, but only in code blocks.
I could never agree with this, because monospace fonts already add extra space around the period, which is much narrower in proportional fonts. That alone makes the visual gap about as wide as it would be in typeset proportional text. Adding a second space makes it much too wide visually (almost three positions wide). It looks like badly typeset justified text.
(I understand why people are doing it, I just don’t agree on aesthetic grounds.)
Sam Altman more than anyone else popularised this style, and for a while every third or fourth comment on any AI-related topic was all lowercase.
Eventually, as models and their users both improve, we'll collectively realize that trying to reliably discriminate between AI and human writing is no different than reading tea leaves. We should judge content based on its intrinsic value, not its provenance. We should call each other out for poor writing or inaccurate information — not because if we squint we can pick out some loose correlations with ChatGPT's default output style.
Consciously trying not to "sound like an LLM" while writing is like consciously trying not to think about the fact that you're currently breathing, or consciously trying to sound like a cool guy.
I don't use AI in my writing. If I were still in school would I be tempted? Probably. But in work and personal writing? Never crosses my mind.
The stakes are a bit different for students, who’ll have their writing passed through some snake-oil AI detector arbitrarily. This is unfortunate because “learning how not to trigger an AI detector” is a totally useless skill.
Generally, I don’t think we need AI detection. We need dumb bullshit detection. Humans and LLMs can both generate that. If people can use an LLM in a way that doesn’t generate dumb bullshit, I’m happy to read it.
There are zillions of words produced every second, your time is the most valuable resource you have, and actually existing LLM output (as opposed to some theoretical perfect future) is almost always not worth reading. Like it or not (and personally I hate it), the ability to dismiss things that are not worth reading like a chicken sexer who's picked up a male is now one of the most valuable life skills.
Of course there are cases where you can tell that some text is almost certainly LLM output, because it matches what ChatGPT might reply with to a basic prompt. You can also tell when a piece of writing is copied and pasted from Wikipedia, or a copy of a page of Google results. Would any of that somehow be more worth reading if the author posted a video of themselves carefully typing it up by hand?
1: You're assuming a specific type of output in a specific type of context. If LLM output were never worth reading, ChatGPT would have no users.
Having good heuristics to make quick judgements is a valuable life skill. If you don't, you're going to get swamped.
> Would any of that somehow be more worth reading if the author posted a video of themselves carefully typing it up by hand?
No, but the volume of carefully hand-typed junk is more manageable. Compare with spam: Individually written marketing emails might be just as worthless as machine-generated mass mailings, but the latter is what's going to fill up your inbox if you can't filter it out.
> If LLM output were never worth reading, ChatGPT would have no users.
Only if all potential users were wise. Plenty of people waste their time and money in all sorts of ways.
Why do you think it's not a good heuristic to be able to quickly spot the tell-tale signs of LLM involvement, before you've wasted time reading slop?
Yes, there will be false positives. It's a heuristic after all.
If anything, I'd rather that renderers like Markdown just all agree to change " - " to an en dash and " -- " to an em dash. Then we could put the matter to bed once and for all.
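For what it's worth, a conversion pass like that would be tiny to implement. Here's a rough sketch in Python (the function name and the choice to drop the surrounding spaces are my own, not part of any existing Markdown spec):

    def smarten_dashes(text: str) -> str:
        # Hypothetical pre-render pass: " -- " becomes an em dash, " - " an en dash.
        # Dropping the surrounding spaces is an assumption on my part; a real renderer
        # would also need to skip code spans so literal hyphens survive untouched.
        text = text.replace(" -- ", "\u2014")  # U+2014 EM DASH
        text = text.replace(" - ", "\u2013")   # U+2013 EN DASH
        return text

    print(smarten_dashes("pages 3 - 7 -- give or take"))
    # -> "pages 3–7—give or take"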
Citation needed.
> Who is it helping if we collectively bully ourselves into excising a perfectly good punctuation mark from human language?
Humans can adapt faster than LLM companies, at least for the moment. We need to be willing to play to our strengths.
Who is it helping if we bully ourselves into ignoring a simple, easy "tell"?
I think nobody is upset about reading an LLM's output when they are directly interacting with a tool that produces such output, such as ChatGPT or Copilot.
The problem is when they are reading/watching stuff in the wild and it suddenly becomes clear it was generated by AI rather than by another human being. Again, not in a context of "this pull request contains code generated by an LLM" (expected) but "this article or book was partly or completely generated by an LLM" (unexpected and likely unwanted).
1. In the context of research/querying, when unverified information from its output is falsely passed off as verified information curated by a human author. There's a big difference between "ChatGPT or some blog says X" and "the answer is X".
2. In the context of writing/communication, when it's used to stretch a small amount of information into a relatively large amount of text. There's a big difference between using an LLM to help revise or trim down your writing, or to have it put together a first draft based on a list of detailed bullet points, and expecting it to stretch one sentence into a whole essay of greater value than the original sentence.
Those are basic misuses of the tool. It's like watching an old person try to use Google 20 years ago and concluding that search engines are slop and the only reliable way to find information is through the index of Encyclopedia Britannica.
It seems like you’re just wrong here? Em dashes aside, the ‘style’ of LLM-generated text is pretty distinct, and is something many people are able to distinguish.
I like how this is presented as a given thing that will happen, that models are going to just improve forever. That there isn’t some plateau on “user skill with LLMs” like it’s fucking calculus mixed with rocket science that only the elite users will ever attain full fluency in using.
This is starting to read like religious cult propaganda, which is probably scarier than whatever else ends up happening with this shit.
But the people you will reach online will be online, and not some random person-off-the-street. The average person on the street will give the same blank stare on the topic of compilers, regular expressions, black-holes, or robotics, but I still want to read about those topics. And if I want an LLM's take on those topics, everyone knows where to turn to get that.
I think there is a very interesting discussion to be had over how LLMs are actively changing the way we write, or even speak.
"delve" was a red flag 650 years ago!
When Adam delved and Eve span, who was then the gentleman? — Fr John Ball's sermon addressing the rebels of the Peasants' Revolt, 1381
I don’t use the word “delve” anymore, however.
Compose --- should produce —
For en dash it's
Compose --. produces –
Not all fonts show the difference though.
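If the font doesn't make the difference obvious, one quick way to check which character you actually produced is to print the code points; a small sketch in Python:

    import unicodedata

    for ch in "-\u2013\u2014":  # hyphen-minus, en dash, em dash
        print(f"U+{ord(ch):04X}  {unicodedata.name(ch)}")
    # U+002D  HYPHEN-MINUS
    # U+2013  EN DASH
    # U+2014  EM DASH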
A lot of people are comfortable using the dot, the comma, and maybe exclamation marks.
AI-speech seems to strive for more formal writing by default.
I grew up online on teletypes and ADM5 terminals. To some extent, my sense of how text presents is dominated by monospace/fixed-width type, and em dashes just never worked in that 7-bit world.
Two hyphens are too many. One hyphen is not enough.
An em dash that’s not a sudden interruption shouldn’t have any spacing around it.
It is an interruption to me, and I think that little pause is intentional. If the author wants no pause, they should have used parentheses.
Maybe I’ll take a short pause in a sentence–or show a huge range 0 — 999.
(On the other hand, maybe it's just low-paid writers in South Africa: https://www.theguardian.com/technology/2024/apr/16/techscape... )
When you are talking, an aside can make a lot of sense because you are thinking and speaking in real time. When you write, you have the luxury of time to reformulate your words more precisely. Em dashes are best kept for prose that mimics speech rather than for constructing logical text.
It's no coincidence that em dashes are rare in legal texts: they are too imprecise. Semicolons, by contrast, are extremely common in legal texts.
The S in semicolon stands for S-Tier. Maybe the E in em dash stands for E-Tier?
lolz
I believe you meant "not the no-talent ass clown".
This video will be too voluminous or intrusive to be reviewed manually, so it will be analyzed by (you guessed it) AI to determine whether the work was authentic.
It will probably be developed and required by the corrupt education industry, but perhaps some writers will voluntarily use it to buy authenticity or stand out. Either way, the machine will once again find a way to take our agency and make our lives less enjoyable.
Remember, meaning is based on common usage, so now that the em dash is slop-nonymous, semicolons can take on a more casual vibe.
For example: I love pizza — it's my comfort food.
Can just become: I love pizza; it's my comfort food.
For asides: I love pizza — especially pepperoni.
Can just become: I love pizza (especially pepperoni).
> “This was not just X; it’s really Y”
Here are some real examples taken from various sources:
> "Regenerative businesses don't just minimise harm; they actively create positive change for the environment and people."
> "This milestone isn’t just about our growth. It’s about deepening our commitment to you…"
> "This wasn’t just a market rally. It was a real-time lesson in how quickly sentiment can fracture and recover when fundamentals remain intact."
Hard to say for certain that this is AI slop, but just like em dashes, I see it routinely pop up in LLM prose. And I feel like it’s infected nearly everything I’ve read that was written within the last year.
Other than splitting infinitives and ending sentences with a preposition, of course. They are a weighty burden no soul should have to ever put up with.
Getting your knickers in a twist over a minor typographical construction is a rather contrived way to identify a non-human author. It will do for now but won't tomorrow.
I can mostly spot LLM output on sight but I can be fooled. I never use silly rules like "em-dash => LLM". That's just silly.
The key to using it without the LLM stigma is to surround it with spaces, which still doesn't violate typical writing rules.
But I agree that triple em-dash for pause is not half bad either. I could see it becoming a thing, with how it goes the opposite direction and is so over the top :)