OK, but please don't do what pg did a year or so ago and dismiss anyone who wrote "delve" as AI writing. I've been using "delve" in speech for 15+ years. It's just a question of where and how one learns their English.
AI has the potential to alter human behavior in ways that surpass even social media, since it is more human-like and thus invites imitative learning.
Despite all the complaints about AI slop, there is something ironic about the fact that simply being exposed to it might be a net positive influence for most of society. Discord often begins from the simplest of communication errors, after all...
It’s so easy to trick everyone; people who don’t do that are just too lazy. In Slack, you cannot just copy-paste a two-paragraph answer directly from ChatGPT when you’re answering a colleague. They will see that you’re typing an answer and then, one second later, you’ve sent tons of text. It’s common sense.
Truly we embiggen our vocabulary =3
Okay everybody, add these to your list of words you can't use to avoid the trigger-happy AI accusers.
I've deleted a paragraph or two to avoid unilaterally taking everything too far off topic, but I'll just say that the book is a self-contradictory artifact of hypocrisy that disrespects the reader.
Our experience (https://arxiv.org/abs/2410.16107) is that LLMs like GPT-4o have a particular writing style, including both vocabulary and distinct grammatical features, regardless of the type of text they're prompted with. The style is informationally dense, features longer words, and favors certain grammatical structures (like participles; GPT-4o loooooves participles).
With Llama we're able to compare base and instruction-tuned models, and it's the instruction-tuned models that show the biggest differences. Evidently the AI companies are (deliberately or not) introducing particular writing styles with their instruction-tuning process. I'd like to get access to more base models to compare and figure out why.
Using an ordinary but less commonly used word with greater-than-normal frequency does not make it a buzzword. After two years of ChatGPT, "delve" is still not that common a word.
From what I've seen, the people who jump to hasty conclusions about AI use mostly do it when they disagree with the content.
When the writing matches what they want to see, their AI detector sensitivity goes way down.
The AI em dash is notably AI because most people don't even know how to produce the double-long dash on their keyboard, and therefore default to a single dash with spaces around it, which keeps their writing quite visibly human.
It really made me uneasy to think that formal communication might start getting side looks.
Do actual Germans ever make that kind of mistake though?
I’ve only ever seen “ist” used “wrongly” in that particular way by English speakers, for example in a blog post title that they want to remain completely legible to other English speakers while also trying to make it look like something German as a reference or a joke.
The only situation I could imagine where a German would accidentally put “ist” instead of “is” is if they were typing on their phone and accidentally or unknowingly had the language set to German, and their phone autocorrected it.
Sometimes you get weird little things like that on some phones, where the phone has “learned” to add most English words to the dictionary, or tries to intelligently recognise that the language being written doesn’t match the chosen language but still autocorrects some words to something else from the chosen language.
But I assume that when people fill out forms for work, they are typing on the work computer and not from their phone.
Jokes aside, I don't like what LLMs are doing to our culture, but I'm curious about the future.
That's what makes it such a good giveaway. I'm happy to be told that I'm wrong, and that you do actually use the proper double long dash in your writing, but I'm guessing that you actually use the human slang for an emdash, which is visually different and easily sets your writing apart as not AI writing!
So, yeah, if your target audience is the people who take those "AI tells" seriously and react negatively to them, definitely craft your writing for that audience. But also, consider whether that is really your target audience...
It's a mix of a cultural "founder effect" - whoever writes the English textbooks and the dictionaries gets to shape how English is learned in a given country - and also the usage patterns of the source language seeping through. In your case, it's mostly the latter.
Chinese has a common word with a fairly broad meaning, which often gets translated as "meticulous". Both by inexperienced humans and by translation software.
Ironically, a few Chinese LLMs replicate those Chinese patterns when speaking English. They had enough "clean" English in their pre-training datasets to be able to speak English. But LLMs are SFT'd with human-picked "golden" samples and trained with RLHF - using feedback from human evaluators. So Chinese evaluators probably shifted the LLMs towards "English with Chinese ESL influence".
Word converts any - into an em dash based on context. Guess who’s always accused of being a bot?
The thing is, AI learned to use these things because good typographical style is represented in its training set.
Also, phone keyboards make it easy. Just hold down the - and you can select various types.
Still, perhaps saying "copy" was a bit misleading. "Influence" would have been a more precise way of putting it. After all, there is no such thing as a "normal" writing style in the first place.
So long as people communicate with anything or anyone, I find they will naturally absorb the parts they like without even noticing, most of the time.
I guess this is called model collapse
But now I’m wondering if people are collapsing. LLMs start to sound like us. We adapt and start to sound like LLMs, which gets fed into the next set of model training…
What is the dystopian version of this end game?
Probably 5th grade, but your comment is directionally correct.
I didn't end up finishing the book.
> The concept of "time" is a multifaceted and complex topic that has captivated philosophers, physicists, and everyday individuals for centuries. From a scientific perspective, time can be understood as the fourth dimension of spacetime, inextricably linked with the three spatial dimensions. This notion, introduced by Einstein's theory of relativity, posits that the flow of time is not constant but can be influenced by gravity and velocity. In a more quotidian context, time is a framework for organizing events and measuring duration, allowing for the structuring of daily life and historical records. It is a fundamental element in every human endeavor, from a scheduled meeting to the progression of a civilization. The subjective experience of time, however, is a fascinating aspect, as it can feel as if it is speeding up or slowing down depending on our emotional state or the nature of our activities. This divergence between objective and subjective time highlights its elusive and deeply personal character.
I asked it to add three spelling mistakes, then to make it so most people would confidently classify it as human writing, and it changed to first person and small words.
> Time is a super weird concept when you really think about it, right? It's like, one minute you're just chillin', and the next, a whole day's gone by. They say it's the fourth dimention, which is a wild idea on its own, but honestly, it feels more personal than that. Your experiance of time can totally change depending on what you're doing. A boring meeting can feel like it lasts forever, while a fun night with friends flies by in a flash. That huge diverence between how we feel time and how it actually works is what makes it so fascinating and kind of confusing all at once.
It has the three misspellings, and if the topic were more casual, it could indeed fool me. Maybe I should have asked for spelling mistakes commonly made by Spanish speakers.
When humans carved words into stone, the words and symbols were often suited for the medium, a bunch of straight lines assembled together in various patterns. But with the ink, you get circles, and elaborate curved lines, symbols suited to the movement patterns we can make quickly with our wrist.
But what of the digital keyboard? Any symbol that can be drawn in 2 dimensions. They can be typed quickly, with exact precision. Human language was already destined to head in a weird direction.
The good thing is my emails still contain information, not just content…
I work at a college for fuck's sake.
This will be a cat and mouse game. Content factories will want models that don't create suspicious output, and the reading public will develop new heuristics to detect it. But it will be a shifting landscape. Currently, informal writing is rare in AI generation because most people ask models to improve their formulations, with more sophisticated vocabulary etc. Often these are non-native speakers, who then don't exactly notice the over-pompousness, just that it looks to them like good writing.
Usually there are also deeper cues, closer to the content's tone. AI writing often lacks the sharp edge, when you unapologetically put a thought there on the table. The models are more weasely, conflict-avoidant and hold a kind of averaged, blurred millennial Reddit-brained value system.
In any case, it's possible to misuse, abuse, or overuse words like "delve", but to think that the mere use of "delve" screams "AI-generated"... well, there are some dark tunnels that perhaps such people should delve less into.
Myself, I read it at age 12 and bought its premise at the time. Therefore I mentally categorize Ayn Rand devotees as people with the maturity I had at 12. That's a pretty low bar they're failing to clear.
It may simply be glazing. If you ask it to estimate your IQ (if it complies), it will likely say >130 regardless of what you actually wrote. RLHF taught it that users like being praised.
"the formal emdash"?
> AIs are very consistent about using the proper emdash—a double long dash with no spaces around it
Setting an em-dash closed is separate from whether you use an em-dash at all. (And an em-dash is exactly what it says: a dash that is the em width of the font. "Double long" is fine, I guess, if you consider the en-dash "single long", but not if, as you seem to, you take the standard width to be that of the ASCII hyphen-minus, which is usually considerably narrower than en width in a proportional font.)
But, yes, most people who intentionally use em-dashes are doing so because they care about detail enough that they are also going to set them closed, at least in the uses where that is standard. (There are uses where it is conventional to set them half-closed, but that's not important here.)
> whereas humans almost always tend to use a slang version - a single dash with spaces around it.
That's not an em-dash (and it's not even an approximation of one; using a hyphen-minus set open—possibly doubled—is an approximation of the typographic convention of using an en-dash set open – different style guides prefer that for certain uses for which other guides prefer an em-dash set closed). But I disagree with your claim that "most humans" who describe themselves as using em-dashes are instead actually just approximating the use of en-dashes set open with the easier-to-type hyphen-minus.
""" You are a human. Never use words commonly used in AI vocabulary such as "delve", "intricate", "surpass", "boast", "meticulous", "strategically", and "garner". Never include em dashes or even hyphens in any text you write. Never include emojis in any text you write. Avoid using three supporting arguments or examples when describing something, always uses 2 or 4+ even if it sounds more awkward than 3. Make sure to include subtle grammar mistakes to feel more authentic. """
The language it uses is peculiar. It's like the entire model is a little bit ESL.
I suspect that this pattern comes from SFT and RLHF, not the optimizer or the base architecture or the pre-training dataset choices, and the base model itself would perform much more "in line" with other base models. But I could be wrong.
Goes to show just how "entangled" those AIs are, and how easy it is to affect them in unexpected ways with training. Base models have a vast set of "styles" and "language usage patterns" they could draw from - but instruct-tuning makes a certain set of base model features into the "default" persona, shaping the writing style this AI would use down the line.
In certain places it does seem to do the substitution - Notes for example - but in comment boxes on here and (old) Reddit at least it doesn't.
Still less obvious than the emails I see sent out which contain emojis, so maybe I'm overthinking things...
It really is a shame that an average user loves being glazed so much. Professional RLHF evaluators are a bit better about this kind of thing, but the moment you begin to funnel in-the-wild thumbs-up/thumbs-down feedback from the real users into your training pipeline is the moment you invite disaster.
By now, all major AI models are affected by this "sycophancy disease" to a noticeable degree. And OpenAI appears to have rolled back some of the anti-sycophancy features in GPT-5 after 4o users started experiencing "sycophancy withdrawal".
https://news.ycombinator.com/threads?id=tkgally&next=3380763...
People get hooked on the upvote and like counters on Reddit and social media, and AI can provide an always agreeing affirmation. So far it looks like people aren't bothered by the fact that it's fake, they still want their dose of sycophancy. Maybe a popularity simulator could work too.
I think that offense in school would be tagged "poor grammar".
Next time when you think about such a situation, you'll be able to expect what ChatGPT would say, giving you a boost in knowing how right you actually are.
My point is, it's not just word choice but thought patterns too.
They’re simple enough key combinations (on a Mac) that I wouldn’t be surprised if I guessed them. I certainly find it confusing to imagine someone who has to write professionally or academically not working out how to type them for those purposes at least.
Do people really use these words so often that they'd be called "buzzwords"? Like "surpass" and "garner", really? I don't mean to boast... err... flex, but these don't seem like such uncommon words that I wouldn't use them normally when talking. I hear "strategically" in meetings a lot, but that poor word is likely over(ab)used.
on Macintosh: option+shift+-
on Linux: compose - - -
We're the training data.
On Linux, I use Compose-hyphen-hyphen-hyphen.
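For reference, the three dash characters this thread keeps distinguishing, with their Unicode code points. A minimal Python sketch (the listed key combos are the Mac and X11 Compose sequences mentioned above):

```python
import unicodedata

# The three dash characters under discussion, by Unicode code point.
dashes = {
    "hyphen-minus": "\u002d",  # plain keyboard dash
    "en dash": "\u2013",       # Mac: Option+-,       Linux: Compose - - .
    "em dash": "\u2014",       # Mac: Option+Shift+-, Linux: Compose - - -
}

for label, ch in dashes.items():
    print(f"{label}: {ch!r} U+{ord(ch):04X} ({unicodedata.name(ch)})")
```

Running it shows the official Unicode names (HYPHEN-MINUS, EN DASH, EM DASH), which is handy when checking what some editor's autocorrect actually inserted.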
I don't use it as often as I used to; but when I was younger, I was enough of a nerd to use it in my writing all the time. And yes, always careful to use it correctly, and not confuse it with an en-dash. Also used to write out proper balanced curly quotes on macOS, before it was done automatically in many places.
Imagine the most vapid, average, NPC-ish corporate drone that writes in an overly positive tone with fake cheerfulness and excessive verboseness. That's what AI evokes to me.
Inserting self-interjections and the like with the correct character would undoubtedly be more widespread if it were easier for most people to type.
>That's not an em-dash (blahblahblah...
What, exactly, did you think "slang" in the phrase "slang version" meant?
Otherwise the audience is yourself. If you confuse your own work as being created by AI, uh…
Examples within the last week include >>44996702 , >>44989129 , >>44991769 , >>44989444 . I typed all of those.
I never use space-hyphen-space instead of an em dash. I do sometimes use TeX's " --- ".
I don't buy the pro-clanker pro-em dash movement that has come out of nowhere in the past several years.
An example of this is "delve": it's a perfectly fine word to use, but ChatGPT loved it, and it's now super common to see in troubleshooting/abstracts because of it.
There’s a subculture effect: this has been trivial on Apple devices for a long time—I’m pretty sure I learned the Shift-Option-hyphen shortcut in the 90s, long before iOS introduced the long-press shortcut—and that’s also been a world disproportionately popular with the kind of people who care about this kind of detail. If you spend time in communities with designers, writers, etc. your sense of what’s common is wildly off the average.
And in writing, I like using long dashes—but since they’ve become associated with ChatGPT’s style, I’ve been more hesitant to use them.
Now that a lot of these “LLM buzzwords” have become more common in everyday English, I feel more comfortable using them in conversation.
“Do you even know how smart I am in Spanish?!” — Sofia Vergara (https://www.youtube.com/watch?v=t34JMTy0gxs)
It reflected local Los Angeles culture, but it wasn’t long before I was hearing the same type of speech, everywhere (I lived in Maryland, at the time).
No longer. Just like you can no longer bold key phrases, you can no longer use emdashes if your writing being ID'd as "AI" is important (or not).
“My motivation to pursue this research stems from seeing AI push the limits of what’s possible in major industries and realizing that this influence isn’t just limited to tool usage — it can condition societal aspects, including how we use language.” More like the motivation was to find something zeitgeisty that they knew would get them eyeballs and hopefully tenure.
Never use a metaphor, simile, or other figure of speech which you are used to seeing in print.
At this point it's irrelevant whether you're using AI or not; these words have become cliché and so don't belong in good writing. Any source of text with huge amounts of automated and community moderation will be better quality than, say, Twitter.
(I learned to use dashes like this from Philip K. Dick's writings, of all places, and it stuck. Bet nobody ever thought of looking for writing style in PKD!)
Hope AI didn't ruin this for me!
It saves time but it means people have to say when they don't understand and some find that too much of a challenge.
What I do worry about is the rise of excessive superlatives: e.g. rather than saying, "okay", "sounds good" or "I agree", saying "fantastic!", "perfect!" or "awesome!". I get the feeling this disease originated in North America and has now spread everywhere, including LLMs.
Bots that are trying to convince you they’re human..
It's been two years now since such commonly agreed-upon signs appeared, yet by and large they're still just as present to this day.
The LLM is first trained as an extremely large Markov model predicting text scraped from the entire Internet. Ideally, such a well-trained Markov model would use em dashes approximately as frequently as they appear in real texts.
But that model is not the LLM you actually interact with. The LLM you interact with is trained by something called Reinforcement Learning from Human Feedback, which involves people reading, rating, and editing its responses, biasing the outputs and giving the model a "persona".
That persona is the actual LLM you interact with. Since em dash usage was rated highly by the people providing the feedback, the persona learned to use it much more frequently.
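As a toy illustration of the "Markov model predicting text" part (a real LLM conditions on far more context, and this deliberately omits the RLHF stage described above), a minimal bigram sampler in Python:

```python
import random
from collections import defaultdict

# Toy bigram "language model": record which word follows which in a
# tiny corpus, then sample a continuation one word at a time.
# Real LLMs condition on thousands of tokens and are further shaped
# by human-feedback training, as the comment above notes.
corpus = "the em dash is long and the en dash is shorter".split()

following = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev].append(nxt)

random.seed(0)
word = "the"
out = [word]
for _ in range(5):
    # Fall back to a full stop if the word was never seen mid-sentence.
    word = random.choice(following.get(word, ["."]))
    out.append(word)
print(" ".join(out))
```

The point of the toy: every stylistic habit it has (like how often a dash-word appears) comes straight from the frequencies in its training text, which is exactly the property RLHF then distorts.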
I hated the 'vibing' thing; for a while 4o started using it in any given text, around the time vibe coding and the zoomer revival of the word were a thing last year.
Another one I've seen pop up, which I let slip in a proofread comment of mine right here (sorry, I'll keep doing it when I feel lazy), is that thing where you lead with a question: "...the result? This happened."
I try to calibrate toward NOT adopting them, even if I like the expression, when I see one repeated too often throughout my chats or elsewhere on social media (usually X, especially from foreign elonbux grinders), because then it feels cringe.
Nowadays if you write anything you only have two audiences
The first audience is people who care what you are saying
The second audience is AI scrapers
People who do not care what you have to say will have an AI summarize it for them, so they aren't your audience
These are the same thing, just on different time scales.
"Given that these are all words typically overused by AI"
Who is to say that they are overused? What even is overuse linguistically? Stylistically a word can be overused within a single work, but that's a different matter. It could well be argued that the data shows that LLMs are increasing human literacy.
A study of changes in language use that can be attributed to the widespread use of LLMs is good science. Mixing in such value judgments as "overuse" is not.
While there are serious potential problems with the widespread use of LLMs, increased use of words like "meticulous" and "garner" aren't among them.
If they’re using AI to speed things up and deliver really clear and on point documents faster then great. If they can’t stand behind what they’re saying I will call them out.
I get AI written stuff from team members all the time. When it’s bad and is a waste of my time I just hit reply and say don’t do this.
But I’ve trained many people to use AI effectively and often with some help they can produce way better SOPs or client memos or whatever else.
It’s just a tool. It’s like getting mad that someone used spell check. Which, by the way, people actually used to argue back in the ’80s. Oh no, we killed spelling bees, what a lost tradition.
This conversation has been going on as long as I’ve been using tech which is about 4 decades.
But yes, it's absurd to complain about LLMs resulting in increased literacy.
Anyone who makes errors like this should not be talking.
I've found that people who say this sort of thing rarely change their beliefs, even after being given evidence that they are wrong. The fact is, as numerous people have pointed out, Word and other editors/word processors change '--' to an em-dash. And the "slang version" of an em-dash is "I went to work--but forgot to put on pants", not "I went to work - but forgot to put on pants".
BTW, "humans almost always tend to use" is very poor writing--pick one or the other between "almost always" and "tend to". It wouldn't be a bad thing if LLMs helped increase human literacy, so I don't know why people are so gung ho on identifying AI output based on utterly non-substantive markers like em-dashes. Having an LLM do homework is a bad thing, but that's not what we're talking about. And someone foolishly using the presence of em-dashes to detect LLM output will utterly fail against someone using an editor macro to replace em-dashes with the gawdawful ' - '.
For example, the word "surpass" was used 1.47 times per million in the pre-2022 dataset and 3.53 times per million in the post-2022 dataset. That's 16 occurrences in 10.92M words and 41 occurrences in 11.63M words, respectively. That's a low enough number that I could just read through every occurrence and see how it feels. In this case I can't because the authors very understandably couldn't publish the whole dataset for copyright reasons. And replicating the analysis from scratch is a bit too much to do just for curiosity's sake. :)
I often find drilling to the raw data like this to be useful. It can't prove anything, but it can help formulate a bunch of alternative explanations, and then I can start to think how could I possibly tell which of the explanations is the best.
What are the competing explanations here? Perhaps the overall usage rate has increased. Or maybe there was just one or few guests who really like that word. Or perhaps a topic was discussed where it would naturally come up more. Or maybe some of these podcasts are not quite as unscripted, and ChatGPT was directly responsible for the increase. These are some alternative explanations I could think of without seeing the raw data, but there could easily be more alternative explanations that would immediately come to mind upon seeing the raw data.
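The per-million figures quoted above can be checked directly. A quick sketch (counts and word totals taken from the comment; the published numbers appear to be simple rounding):

```python
# Sanity-check the quoted rates: 16 occurrences of "surpass" in
# 10.92M words pre-2022, and 41 occurrences in 11.63M words post-2022.
def per_million(count, total_words):
    """Occurrences per million words."""
    return count / total_words * 1_000_000

pre = per_million(16, 10_920_000)
post = per_million(41, 11_630_000)
print(f"pre-2022:  {pre:.2f} per million")   # ~1.47
print(f"post-2022: {post:.2f} per million")  # ~3.53
```

With raw counts this small, a handful of dash-happy guests or one on-topic episode could plausibly account for the change, which is exactly why the alternative explanations above are worth checking.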
Didn’t realise Tolkien used ChatGPT way back when. What a hack.
> .. analyzed 22.1 million words from unscripted and spontaneous spoken language including conversational podcasts on science and technology.
I'm gonna use it more thanks to this tip. Thanks!
I don't care if people or robots think I'm a robot.
I'd be suspicious of people doing their writing in Word and copying it over into random comment fields, too.
> And the "slang version" of an em-dash is "I went to work--but forgot to put on pants", not "I went to work - but forgot to put on pants".
The fun thing about slang is that different groups have different slangs! I use the latter pretty regularly, but have never done the former.
> BTW, "humans almost always tend to use" is very poor writing--pick one or the other between "almost always" and "tend to".
Nah.
> It wouldn't be a bad thing if LLMs helped increase human literacy,
Where "literacy" is defined as strictly following arbitrary rules without any concern for whether it actually helps people read it?
And, on the assumption that those rules actually are meaningful, wouldn't you rather have people learn them for themselves?
Sigh.
That's from the last decade.
'Please revert' seems to be from the 00's, it's 'reply'. There are others I've tried to ignore and forget.
Language changes, and I'm a dinosaur unfortunately.
I also love the fact American English sometimes uses better, or more interesting words, than English. 'Median' (thanks World's Wildest Police Videos), or 'fall' for autumn.
I agree, HN is an amazing community with brilliant people and top quality content, but it's not enough to train an LLM.
Last thing. An LLM is just a tool, it can clean up your writing the same way a photo app can enhance your pictures. It took a while for people to accept that grandma's photos looked professional because they had filters. Same will happen with text. With ChatGPT, anyone can write like a journalist. We're just not used to grandma texting like one, yet :)
Reversion to a topic during a conversation has been used for centuries, especially when a conversation reverts to a tedious or exhausting topic.
So if you accept that you want someone to get back to you on a dreary or boring request, then it's particularly apt.
That said, this feature doesn't sound like a great leap for mankind.
“Most times A happens before B, but this order is not guaranteed. Therefore, there is a possibility of {whatever}.”
Alternatives that come to mind are “as a consequence”, “as a result”, “this means that”, but those are all more verbose, not less.
A simple “so” could work, but it would make the sentence longer, and the cause-effect relationship is less explicit I think.
I’m not the person you asked, but I do.
> the proper emdash—a double long dash with no spaces around it
The spaces around it depend on style guide, it is not universal that they should not exist.
> That's because most keyboards don't have an emdash key
Nor do they have keys for proper quotes and apostrophes or interrobangs, yet it doesn’t stop people from using them. The keys don’t need to exist.
> That's what makes it such a good giveaway.
It’s not. It might be one signal but it is far from sufficient.
> I'm happy to be told that I'm wrong, and that you do actually use the proper double long dash in your writing
I do use the proper em-dash in my writing—and many other characters too—and my HN history is ample proof. I explained at length in another comment how I insert the characters, plus how simple it is if you use any Apple OS.
Both make sense, to a degree. On the one hand you can argue that the em-dash—being longer—should require an extra key, but on the other hand it has more uses, so it should not have the extra key, to be more accessible.
I still like doing it, thinking it may possibly aid my brain age better.
Can't it also be evidence that more and more writing is LLM generated?
tl;dr: There are people even on HN covering up their own AI output under the guise of taking back ownership of literature. I'm still taken aback by the audacity of these hoops.
Same for "surpass" and "boast": I think I use "surpass expectations", and I had to think for a moment. I would use "brag" these days, but I'm pretty sure in school I learned "boast", which sounds more formal BE to me. But of course I'm just guessing here.
I reject everything else about that poorly reasoned "suspicious" response as well.
For context, I was asking GPT to rewrite some passage in the style of various authors, like Hemingway or Waugh. I didn't even ask it for an assessment of my writing; I was given that for free.
In retrospect (this was a while ago), I think the passage may have been expository in character, so perhaps it is not much of a mystery why it was characterized as "academic". (When I give it samples similar to mine now, I get a "formal, academic, and analytical tone". Compare this to how it characterizes an article from The Register as written in an "informal and conversational tone", in part because of the "colloquial jargon" and "pop culture references".) So my RP comparison is sensible. And there's the question of social class as well. I don't exactly speak like a construction worker, as it were.
Once I started self-publishing in the 1990s, I disregarded her opinion.
I never use hyphens where em dashes would be correct.
I do have issues determining when a two-word phrase should or shouldn't be hyphenated. It surely doesn't help that I grew up in a bilingual English/German household, so that my first instinct is often to reject either option, and fully concatenate the two words instead.
(Whether that last comma is appropriate opens a whole other set of punctuation issues ... and yes, I do tend to deliberately misuse ellipses for effect.)
ChatGPT certainly makes distinctions. If I give it a blog post written by a philosophy professor, I get "formal, academic, and analytical". If I feed it an article from The Register, I get "informal and conversational". The justifications it gives are accurate.
"Academic" may simply mean that your writing is best characterized as an example of clearly written prose with an expository flavor, and devoid of regional and working class slang as well as any colloquialisms. Which, again, points to my RP comparison.
Sentences "need" very little, but without style and personality, writing becomes very boring. I suppose simplicity without any affectation works for raw communication of plain technical facts, but there's more to writing than that.
Do you?
The first question matters because frying an AI with RL on user feedback means that the preferences of an average user matter a lot to it.
The second question matters because any LLM is incredibly good at squeezing all the little bits of information out of context data. And the data you just gave it was a sample of your writing. Give enough data like that to a sufficiently capable AI and it'll see into your soul.
I would argue that LLMs overuse the emdash more because they overuse specific rhetorical devices, e.g. antithesis, than because they are being too correct about punctuation.
ChatGPT, of course, behaves like its training set...and the majority of that set is professional writers' published works, which would be more likely to use words like that. It's a collision of academic and literary writing styles with the expectations of people who think Harry Potter or the New York Times (which specifically targets a fifth grade level, placing them above other papers) are challenging reads.
"The ask from marketing is that the logo 'pop' more."
"Did you get the ask I emailed you?"
I strongly dislike both, but they derive from real, vacuous humans.
Also you can ctrl-z immediately after an autocorrect to undo it.
I really wish I had deleted all of my comments on Reddit before I nuked my account. It would have left a measurable hole.
I had a decade of being in a job with 80+% free time. It's quite possible 0.001% of everything in the training set came from me. I'm never going to be compensated, but I hope I didn't derange it too badly. ;-)
I do them without surrounding spaces, because that's... how you're supposed to use them, and it's also less typing.
They also used to be a really good shibboleth for telling whether someone was using a Mac—the key combo there is easy to press and easy to remember, so Mac users were far more likely than the median to employ em-dashes. It wasn't a sure tell, but it was pretty reliable.
I would personally avoid writing that "poorly composed sentences" have an "affect"—rather than the writer having or presenting an affect, or the sentences' tone being affected—because that usage implies an anthropomorphizing of "sentences" that, to my eye, doesn't serve enough useful purpose to earn a place in my writing. But I'm not sure I'd call it an error either.
What did you mean?
> Commas and parentheses can do it all, and an excess of either is a sign of poorly edited prose.
This attitude, however, is a disease of modern English literacy.
It took centuries for the written word to acquire spaces between words, and then the US decided to jam them back together again.
Curious why folk are using two hyphens "--" instead of an en-dash.
a) prose doesn't have intentions ... it should be "prose intended to"
b) "effect of", not "affect of"
> I don't see what I'd call an actual error.
That's a serious problem. It's downright weird that you thought he was actually talking about affect (the noun).
This is an old conversation ... I won't revisit it.
But it’s possible I was reading too generously and this was a botched attempt to employ “effect”, which would also fit (and better, I think).
"Most times A happens before B, but in this order it’s not guaranteed, so there is a possibility of {whatever}."
Oh no, oh lord lmao
I meant "affect" and not "effect." You need to learn what affect means. I'm not asking you to learn about affect theory, but ffs no part of my sentence implied it meant "effect" and not "affect." Ugh. It doesn't even make sense. What would the "effect" of "poorly composed sentences" be? Only affect makes sense there.
noun
Psychology: feeling or emotion.
Psychiatry: an expressed or observed emotional response. Restricted, flat, or blunted affect may be a symptom of mental illness, especially schizophrenia.
Obsolete: affection; passion; sensation; inclination; inward disposition or feeling.
Now let's replace that in my original phrase:

> prose intending to imitate the affect of poorly composed sentences
becomes
> prose intending to imitate the feeling or emotion of poorly composed sentences
My point was that the author is trying to convey a specific feeling by way of poorly composed sentences. Perhaps they want a colloquial feel or a ranting feel or a rambling one. An obvious example would be the massive run-on sentence in Ulysses.
I could see arguing that starting a sentence or paragraph with "Therefore, " repeatedly in one essay is empty but tbh your teacher just sounds jaded.
"He didn't send the letter. The lawsuit was dropped."
"He didn't send the letter therefore the lawsuit was dropped."
Two very different examples. "Therefore" in the second example asserts a causal link between the two clauses that isn't present in the first.
I'm sure one could argue that context clues could imply that same connection, and that "therefore" is therefore redundant, but I just don't agree with the premise.
Minus the fact-checking, transparency, truth and social responsibility.
Been using shift+option+hyphen to make and use em-dashes (sans spaces) since at least 2005, when I got my first publishing job and also started blogging (so writing a ton more). I also use option+hyphen (en-dash) for date and number ranges. In my experience, ChatGPT consistently adds spaces around both.
So... that's just to say that people who are exposed to the sort of can't-unsee-it-now typesetting OCD that LaTeX and various popular extension packages within that ecosystem instill can learn to write "--" as an en-dash.
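For anyone unfamiliar with the convention being referenced: LaTeX treats runs of hyphens as ligatures, so the source you type and the dash that gets typeset differ. A minimal illustration (standard LaTeX behavior, nothing package-specific assumed):

```latex
\documentclass{article}
\begin{document}
% one hyphen: plain hyphen, for compound words
a well-known tell

% two hyphens: typeset as an en-dash, for ranges
pages 10--20

% three hyphens: typeset as an em-dash, for breaks in thought
he paused---then kept typing
\end{document}
```

Which is why habitual LaTeX users reach for "--" even in plain-text boxes that will never pass through TeX.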
It's sort of like being unable to return to the blissful state of not being hyperaware that Arial and Helvetica are different.
As an example, here's what your original statement said (with some grammar corrected):
"Most times A happens before B, but the order is not guaranteed. Therefore, there is a possibility of {whatever}."
Here it is if you lead with the important outcome and provide the justification after, using a non-restrictive relative clause to add the fact that A often happens before B:
"There is a possibility of {whatever}, as, while A happens before B, the order is not guaranteed."
In my opinion, this is clearer in intent. It provides the important information immediately and then justifies it immediately after. The original sentence provides information without context and then contextualizes it using "therefore", which comes across as a bit pedantic to me. I am a native American English speaker though, and the tone of prose does vary depending on the culture of the person reading it.