RamsesThePigeon

RamsesThePigeon t1_jds0kei wrote

> I'm actually a programmer and at least know the basics of how machine learning works

Then you know that I'm not just grasping at straws when I talk about the fundamental impossibility of building comprehension atop an architecture that's merely complicated instead of complex. Regardless of how much data we feed it or how many connections it calculates as being likely, it will still be algorithmic and linear at its core.

>It can extract themes from a set of poems I've written.

This statement perfectly represents the issue: No, it absolutely cannot extract themes from your poems; it can draw on an enormous database, compare your poems with things that have employed similar words, assess a web of associated terminology, then generate a response that has a high likelihood of resembling what you had primed yourself to see. The difference is enormous, even if the end result looks the same at first glance. There is no understanding or empathy, and the magic trick falls apart as soon as someone expects either of those.
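For what it's worth, the word-association matching I'm describing can be sketched in a few lines. This toy example is entirely my own invention – the theme lists, the function name, all of it – and it obviously isn't what any real chat-bot runs, but it shows the shape of the trick: count overlapping vocabulary, then call the biggest overlap a "theme."

```python
from collections import Counter

# Toy "theme extractor": no comprehension, just vocabulary overlap.
# These word lists are invented purely for illustration.
THEMES = {
    "loss": {"gone", "grief", "empty", "mourn", "shadow"},
    "love": {"heart", "warm", "embrace", "dear", "light"},
}

def guess_theme(poem: str) -> str:
    # Tally the poem's words (stripping trailing punctuation)...
    words = Counter(w.strip(".,;:!?") for w in poem.lower().split())
    # ...then score each theme by how often its associated words appear.
    scores = {
        theme: sum(words[w] for w in vocab)
        for theme, vocab in THEMES.items()
    }
    return max(scores, key=scores.get)

print(guess_theme("my heart grew warm in your embrace"))  # "love"
```

The output looks like insight, but nothing in there ever engaged with meaning; it's a frequency tally wearing a costume.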

>It wasn't long ago we said a computer could never win at Go, and it would make you a laughing stock if you ever claimed it could pass the Bar exam.

Experts predicted that computers would win at games like Go (or Chess, or whatever else) half a century ago. Authors of science fiction predicted it even earlier than that. Hell, we've been talking about "solved games" since at least 1907. All that victory requires is a large-enough set of data, the power to process said data in a reasonable span of time, and a little bit of luck. The same thing is true of passing the bar exam: A program looks at the questions, spits out answers that statistically and semantically match correct responses, then gets praised for its surface-level illusion.

>The goalposts just keep shifting.

No, they don't. What keeps shifting is the popular (and uninformed) perspective about where the goalposts were. Someone saying "Nobody ever thought this would be possible!" doesn't make it true, even if folks decide to believe it.

>You're going really against the grain if you think it's not doing anything impressive.

It's impressive in the same way that a big pile of sand is impressive. There's a lot of data and a lot of power, and if magnitude is all that someone cares about, then yes, it's incredible. That isn't how these programs are being presented, though; they're being touted as being able to write, reason, and design, but all they're actually doing is churning out averages and probabilities. Dig into that aforementioned pile even a little bit, and you won't find appreciation for your poetry; you'll just find a million tiny instances of "if X, then Y."
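Those "tiny instances of 'if X, then Y'" aren't just a metaphor, either. Strip away the scale, and the core mechanism is statistical next-word lookup. Here's a deliberately tiny sketch of my own (the training sentence and names are invented; a real model uses vastly more data and math, but the principle is the same):

```python
from collections import Counter, defaultdict

# Build a bigram table: for each word, count which words followed it.
corpus = "the cat sat on the mat the cat ate the fish".split()
table = defaultdict(Counter)
for x, y in zip(corpus, corpus[1:]):
    table[x][y] += 1  # literally "if X, then (probably) Y"

def next_word(x: str) -> str:
    # Pick the most frequent follower; no meaning involved, just counts.
    return table[x].most_common(1)[0][0]

print(next_word("the"))  # "cat" – seen twice, versus "mat" and "fish" once each
```

Scale that table up by a few hundred billion entries and smooth it with clever weighting, and you get something that resembles writing without ever once understanding a word of it.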

Anyone who believes that's even close to how a human thinks is saying more about themselves than they are about the glorified algorithm.

1

RamsesThePigeon t1_jdrcakx wrote

The comparison to neurons is flawed, and it’s one of the main reasons why this debate is even happening.

Chat-bots do not understand or comprehend. They are physically incapable of non-linear thinking, and increased amounts of data won’t change that; it’s a function of their underlying architecture. They don’t have neurons, nor do they have anything even functionally close. They absolutely do not innovate; they just iterate to a point that fools some humans.

If you consider yourself a writer, then you know that comprehension and empathy are vital to decent writing. Until such time as a computer can experience those (which – to be completely clear – is fundamentally impossible for as long as it’s being built according to modern computing principles), it won’t be able to match anything offered by someone who already does.

Put bluntly, it isn’t doing anything impressive; it’s revealing that the stuff being thrown at it is less complex or reason-based than we have assumed.

Edit: Here’s a great example.

1

RamsesThePigeon t1_jdq9vxh wrote

I must be incredibly dumb, then.

If you see something wrong with that clause, please feel free to point it out.

As for the “why,” I already covered that in my original comment. If you’d like, you can read it… or you can keep checking other people’s replies, which it looks like we’re both doing.

“People’s” needs an apostrophe there, by the way.

1

RamsesThePigeon t1_jdq8m72 wrote

There’s nothing wrong with the clause.

I was making an indirect joke about the fact that they couldn’t even read a full sentence before responding, which kind of undermines their ability to gauge the quality of someone else’s writing.

I’d still like to know what they thought was wrong with the clause, though!

1

RamsesThePigeon t1_jdq83xj wrote

You know, I’d be willing to take that bet.

I don’t think that a person needs to be an expert in order to tell if something has “life” in it; they just need to care enough to look. You suggested as much yourself: It isn’t a lack of experience (or even taste) that causes junk to become popular; it’s apathy, and that same apathy is being enabled by the incredible amount of “content” available to people nowadays.

Granted, you could make the claim that humans just default to gorging themselves on garbage, and it isn’t much of a step to go from there to the idea that predators – be they television executives or designers of Skinner boxes masquerading as games – will rush to exploit that… but even then, on some level, people tend to realize when they aren’t actually enjoying themselves. That realization might take a while to grow from a vague sense of boredom to a conscious conclusion (and a person could very well move on before the transition takes place), but I’m pretty confident that everyone is capable of experiencing it.

Back to my point, though, the fact that the accusations were downvoted isn’t really relevant. What saddens me is the fact that said accusations were made at all. Yes, you’re right, the accusers are just armchair experts who aren’t really qualified to discuss the topic… but aren’t you the least bit bothered by the fact that folks like them are currently shaping the narrative about ChatGPT and its ilk?

Maybe I’m just getting old, but as that same narrative gets increasing amounts of attention, all I can see is a growing audience that’s going to waste a lot of their limited time on feeling dissatisfied.

−1

RamsesThePigeon t1_jdq115l wrote

I wrote a couple of (not especially good) poems the other day.

There were a small number of users in the comments who immediately accused me of having tasked a glorified algorithm with drooling out what I’d written.

Now, had said accusations been meant as insults, that actually would have been less distressing than the truth, which is that people on the Internet are proving themselves to be even less literate than I’d assumed. I mean, fine, most folks don’t know how to use hyphens, and I’ve come to terms with that… but the idea that a person genuinely can’t tell the difference between human-written text and program-written text really, really saddens me. It’s the equivalent of being unable to distinguish between a chef-prepared meal and something with the Lunchables logo on it.

Chat-bots produce painfully average offerings: works that check all of the surface-level boxes, but that are completely flat. They still scare me, though, because they're revealing the fact that people who are willing and able to recognize as much are apparently in the minority. Put another way – and reusing the idea of flatness – it's starting to seem like much of the Web (and therefore the world) is populated by individuals who lack the ability to see in three dimensions... and as such, they're conflating actual structures with façades.

In short, I’m not worried about ChatGPT ever being better than a real writer; I’m worried about the humans who are letting themselves be convinced that it will be. The façades will eventually fall over, I’m sure – a chat-bot can only ever be as good as the best content fed into it, after all, and it can’t actually innovate – but until that happens, there are going to be millions of people consuming shallow, empty junk and not understanding why they hate reading.

1

RamsesThePigeon t1_jc5bdsk wrote

The "uncanny valley" feeling is still pretty damned profound there, don't you think?

For instance, have a look at the first paragraph:

>A ghostwriter is a professional writer who is hired to write a book, article, or other type of content on behalf of someone else, without receiving any public credit for the work. In other words, the ghostwriter's name does not appear on the work, and the person who hired the ghostwriter takes credit for the writing.

To my eye, that almost reads like an on-the-nose parody of AI writing: It says essentially the same thing twice, prefacing the second statement with the phrase "in other words." Had the piece been intentionally humorous, it might have continued with "to reiterate" and yet another repetition... but instead, it just went right on repeating itself in the second paragraph.

A human who was casually reading the above passage might think that it was decent enough, but the façade would crumble pretty quickly once that same human started to pay attention. Put in slightly harsh terms, AI output reads like what you'd expect from a supremely average tenth-grader following along with a book entitled "How To Write Your Term Paper In Ten Easy Steps." There's no motion or melody or meter to the words; no change-ups in tone or timbre that might match the meaning that's meant to emerge.

(What I just did there was pretty clunky, but I daresay you get the point.)

1

RamsesThePigeon t1_jc4bky8 wrote

That's a great response. Thank you!

As a writer who has encountered similar challenges, I've taken to making comparisons between fast food and chef-prepared meals: Yes, you can get something from McDonald's in as much time as it takes you to groan out an order and swerve past the pickup window, and yes, the FDA has reluctantly classified the menu options there as "probably food," but you won't get nearly as much enjoyment, nourishment, or satisfaction out of the experience as you would from eating a dish that was prepared by a devoted and attentive professional.

If I feel the need to be less snarky, I just say that it's "bespoke" writing.

That brings me to my follow-up question: You mentioned that you specialize in "fast, effective writing," but "effective" can mean very different things in the contexts of different projects. How do you guarantee (or prioritize, at least) speed when effectiveness requires a slower pace, as with – to quote you once more – "your wacky aunt's self-published book," for example?

6

RamsesThePigeon t1_jc42xl3 wrote

Programs like ChatGPT have mesmerized a growing number of people, many of whom have taken to praising said programs for their abilities, their versatility, and their speed. While writers, editors, and people who read for enjoyment typically have no trouble detecting (and criticizing) details that make these machine-produced offerings utterly atrocious – the stark mismatches between concepts, word-choices, and emotional tones, for instance – the "magic trick" has nonetheless resulted in a lot of hype, especially amongst individuals who only pay attention to surface-level elements.

How do you, as professional ghostwriters, explain to writing-averse (or reading-averse) clients that your work – which is slower and comes with a monetary cost – will likely always be superior to what a glorified algorithm can offer?

16

RamsesThePigeon t1_j5npjt8 wrote

There is not enough proof included in the post that connects your identity to the IAmA.

Unfortunately, the links or photos you've posted could have been posted by anyone, and they don't prove that you are the person doing the AMA. Your proof needs to be something that connects the fact that you're doing an AMA with your identity. This could be something like a photo of you in a work uniform or at a relevant location with a sign that has your username and the date. It could also be documents (partially redacted if desired) with a note that has your username and the date. We're happy for you to get creative with your proof as long as it makes it clear to a reasonable person that the person doing the AMA does meet the criteria laid out in the topic of the AMA.

If you can't think of a way to prove your claims publicly, you can also submit confidential proof to the moderators at this link, though bear in mind it may take some time to review.

Here's a link to the section of our wiki that discusses proof.

Please edit your post and add new proof, and reply here to let us know. If your post is more than a couple of hours old, it may be more effective to create a new post and include the proof from the start. Thanks!

1

RamsesThePigeon t1_iyszs99 wrote

"More literal" precedes a noun, meaning that it needs to be hyphenated if it's meant to be read as a standalone adjective. This would be true with or without the presence of the word "the." Since the hyphen was omitted, the other reading – "a greater number of literal versions of the Bible" – becomes the "correct" one, making the word "the" a grammatical mistake.

Remember, context – which so many people cite without really understanding – is derived from structure (punctuation, mainly) first, grammar second, and definition last.

The most compelling examples stand on their own.

The most-compelling examples stand on their own.

1

RamsesThePigeon t1_iysepim wrote

You’re missing the point.

Yes, a person can parse what was intended, but that wasn’t what was written. There are correct and incorrect ways to write in English, and knowledge of the former is altogether too rare.

Most people will even respond with some variation of “Who cares?” Frankly, that should be concerning, too.

0

RamsesThePigeon t1_iyrmv80 wrote

3

RamsesThePigeon t1_iyevx9k wrote

Back when I was in kindergarten, I was friends with a kid whom I'll refer to as "Stephen."

This will end up being relevant, I promise.

Stephen absolutely loved Cheetos. (There's some relevance already!) Whenever one of our classmates had a bag of the dust-encrusted snack in their possession, Stephen could be found hovering over that person's shoulder, usually offering hilariously unsubtle suggestions that he should assist with any eating-related tasks. This resulted in a quasi-serious practice of students actively hiding when they wanted to eat Cheetos in peace... but somehow, Stephen always found a way of sniffing them out.

One fateful morning, I was the one who had brought Cheetos to school. These weren't ordinary examples of orange-colored cornmeal, though: They were the "puffed" variety, and it turned out that Stephen – despite his purported connoisseurship – had never seen them before.

The following is as close to a verbatim account of our exchange as I can remember.

"What are those?!" Stephen asked me, equal parts awe and excitement in his voice.

I reluctantly responded with "They're Cheetos," then made a show of eating one.

"Why are they so big?!" The boy stepped closer, his eyes never leaving my hands. "Where did you get them?!"

Now, those questions were actually more reasonable than they might have seemed, because I wasn't eating from a branded bag: My mother had purchased a very large package of the chips (if you can even call them "chips") in question, and she had taken to doling them out by way of smaller, plastic baggies.

Had he been older, Stephen might have likened my parent to a drug-dealer.

Anyway, I replied by saying "I have a big bag of them at home." Somehow or other, though, Stephen interpreted this to mean that I was personally manufacturing the larger-than-normal Cheetos over which he was salivating.

"You... you..." he stammered. "You have a machine at home that makes them big?!"

Paragon of honesty that I was, I hurried to correct him: "Yes. Yes, I do."

Our exchange went on for a while longer (with most of that time being filled by Stephen trying to talk me into "sharing"), but toward the end of it, I was encouraged to "use my machine" to make Cheetos that were even bigger. This put me in something of a pickle... so when I got home that afternoon, I immediately went to the cupboard where my mother kept snacks, retrieved the bag that had prompted my lie, then spent the next half-hour or so picking out the largest pieces that I could find.

By the time that my mother caught me, I had amassed what looked like a neon pile of cat poop on the dining-room table... and let me tell you, I had a hell of a time explaining why I was sorting Cheetos by size.

See? I told you it was relevant.

TL;DR: Snack-stealer starts stupid sorting scheme.

20

RamsesThePigeon t1_iw1jaxl wrote

“Century’s” means “belonging to century.”

“Centuries” is the plural form of “century.”

Apostrophes do not pluralize. The only exceptions are in the cases of initialisms comprising both uppercase and lowercase letters (as with “RoUS’s”), and standalone letters (as with “Mind your P’s and Q’s”).

Dates, numbers, and other non-standard nouns are pluralized by appending the letter S to them. “1990s” means “the decade which includes the years 1990 to 1999,” for instance, whereas “90’s” means “belonging to the specific year or number 90.”

7