
unemployedprofessors OP t1_jc49jc6 wrote

I love this question.

I think of ChatGPT like a Fabergé egg (or maybe like the original Mechanical Turk). It looks very fancy and valuable, but it's hollow. There is no internal reasoning within it. It's auto-correct on steroids, and the things people say about it are frightening. It's not sitting there making algorithmic decisions based on questions like "What is the best way to write a five-paragraph essay on this topic?" or "As a large language model, what is my opinion?" or "What are the best resources to use to make this argument?" ChatGPT is just parsing through its memory of the corpus it's been exposed to and making calculations based on which words or patterns it has seen previously.
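
(If you want the "auto-correct on steroids" caricature in its most literal form, here's a toy sketch: a bigram model that picks the next word purely from word-pair counts in a tiny sample corpus. Everything in it, including the sample text and the `following` and `generate` names, is invented for illustration. ChatGPT is a transformer trained on next-token prediction, not a lookup table, but the "predict the next word from what it has seen before" framing is the part this crude version shares.)

```python
from collections import Counter, defaultdict

# Toy caricature of "auto-correct on steroids": a bigram model that "learns"
# nothing except how often each word follows each other word in its training
# text, then emits whatever continuation it saw most often.

corpus = (
    "in conclusion this essay has shown that the topic is important . "
    "in conclusion the evidence suggests that the argument is strong ."
).split()

# Count which words follow which.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def generate(start, length=10):
    """Greedily emit the most frequent next word, over and over."""
    word, output = start, [start]
    for _ in range(length):
        if word not in following:
            break
        word = following[word].most_common(1)[0][0]
        output.append(word)
    return " ".join(output)

print(generate("in"))
# Prints something like: "in conclusion this essay has shown that the topic is important"
```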

ChatGPT can generate text, and we can parse that text and think, "Yes, these are definitely words, sentences, and paragraphs." It glitters with banal transitional phrases and an absolute adherence to rubric-driven writing. But if you tap on that facade of error-free, style-free sentences, it falls apart. It's all just filler, as I'm sure you know.

The problematic Venn diagram is the intersection of expectations for algorithmic writing; underpaid, apathetic, or burned-out educators and readers who just skim, looking for writing with features that tick boxes; and people who either cannot or choose not to acknowledge the value of critical thinking and writing. That's where the real "magic" of ChatGPT lies: people who don't know good writing when they see it and perceive anything involving written language as a horrible burden, producing content for an audience that is mostly only able or willing to identify superficial stylistic deficits, all inside an underlying structure (school, algorithm, whatever) that has created this, I don't know, golden calf of what "good" writing is: something that can be crammed into five-interval rubrics and graded by AI, and that keeps dragging the common denominator down to another level of hell.

So if ChatGPT is a Fabergé egg, maybe we're the golden goose (or geese), because we write something that's more than just not-incorrect. It isn't algorithmic, because writing was never really supposed to be; writing is supposed to be for humans: messy, holistic, heuristic humans, not robots that are only capable of skimming for algorithms.

34

RamsesThePigeon t1_jc4bky8 wrote

That's a great response. Thank you!

As a writer who has encountered similar challenges, I've taken to making comparisons between fast food and chef-prepared meals: Yes, you can get something from McDonald's in as much time as it takes you to groan out an order and swerve past the pickup window, and yes, the FDA has reluctantly classified the menu options there as "probably food," but you won't get nearly as much enjoyment, nourishment, or satisfaction out of the experience as you would from eating a dish that was prepared by a devoted and attentive professional.

If I feel the need to be less snarky, I just say that it's "bespoke" writing.

That brings me to my follow-up question: You mentioned that you specialize in "fast, effective writing," but "effective" can mean very different things in the contexts of different projects. How do you guarantee (or prioritize, at least) speed when effectiveness requires a slower pace, as with – to quote you once more – "your wacky aunt's self-published book," for example?

6

unemployedprofessors OP t1_jc4t9gp wrote

That's another great question. Pragmatically speaking, as a business, we guarantee effectiveness by offering revisions and, in certain cases, refunds, and by working hard to understand what our clients want and need at a general level. Most of the time, they have pretty specific, transaction-oriented goals in mind, and that's where our domain expertise comes in handy.

From the writing perspective, a lot of effectiveness has to do with understanding goals and managing expectations. I don't do the wacky aunt crackpot ebooks anymore.

Apparently this post was deleted and we have no idea why :( so I'm not even sure this comment will go through, but I do appreciate your thoughtful questions.

1

Raescher t1_jc4e2bd wrote

"It's just ...". Humans describing any revolutionary change ever.

−2

Ashendarei t1_jc4p75b wrote

I mean sure, in this case it robs a fair bit of nuance from what the AI does, but it seems to get the point across well enough, particularly to people with a layman's understanding of the tech involved.

1

Raescher t1_jc603x9 wrote

It doesn't do it justice, though. A few years ago, plenty of people would have argued that AI could never emulate human speech properly. Now it does, and people who claim to understand it are like, "It's nothing special; I could have done it myself."

0