Submitted by Vucea t3_118gzrp in Futurology
Vucea OP t1_j9gzjyx wrote
One side effect of unlimited content-creation machines—generative AI—is unlimited content.
On Monday, the editor of the renowned sci-fi publication Clarkesworld Magazine announced that he had temporarily closed story submissions due to a massive increase in machine-generated stories sent to the publication.
In a graph shared on Twitter, Clarkesworld editor Neil Clarke tallied the number of banned writers submitting plagiarized or machine-generated stories.
The numbers totaled 500 in February, up from just over 100 in January and a low baseline of around 25 in October 2022.
The rise in banned submissions roughly coincides with the release of ChatGPT on November 30, 2022.
Ian_ronald_maiden t1_j9h3eon wrote
It’s shit content though. I’m hoping this whole AI thing actually reduces the amount of bad writing overall, by highlighting just how terrible so much of it is.
Things like ChatGPT are appallingly bland and dull wordsmiths.
amadmongoose t1_j9hhi0q wrote
It's shit now, but remember how long it took from when computers started playing chess until they could consistently beat all humans. You're right that it won't affect great authors -- yet.
Ian_ronald_maiden t1_j9hibpu wrote
I’m very interested in the concept of AI developing deep and unique insights into the human experience.
DomesticApe23 t1_j9hlhwf wrote
They don't need to. They just need to accurately mimic it.
Ian_ronald_maiden t1_j9hlsug wrote
That’s the same thing as just doing it. You can’t mimic this factor; you either do it well or you do it badly. That’s it.
DomesticApe23 t1_j9hmune wrote
It's not the same thing as doing it. Are you familiar with the concept of the Chinese Room?
Currently AI can trawl data and build human language into sensible sentences and paragraphs. It understands nothing. All it needs to do to mimic meaning, or to further expand on its 'creative' properties, is to keep on learning.
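To make that concrete, here is a minimal sketch (not ChatGPT's actual code) of what "building human language into sensible sentences" looks like mechanically: a model maps the text so far to a probability distribution over the next token and samples from it, with no notion of meaning anywhere in the loop. The toy probabilities below are made up for illustration.

```python
import random

# Toy illustration: a language model is, at its core, a function from the text
# so far to a probability distribution over the next token, plus a loop that
# samples from it. These probabilities are invented; a real model learns them
# from vast amounts of text.
toy_model = {
    ("the",): {"ship": 0.5, "story": 0.5},
    ("the", "ship"): {"drifted": 0.7, "sank": 0.3},
    ("the", "story"): {"began": 0.6, "ended": 0.4},
}

def next_token(context):
    dist = toy_model[tuple(context)]
    tokens, probs = zip(*dist.items())
    return random.choices(tokens, weights=probs)[0]

text = ["the"]
while tuple(text) in toy_model:
    text.append(next_token(text))
print(" ".join(text))  # e.g. "the ship drifted" -- fluent-looking, zero understanding
```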
SandAndAlum t1_j9ik8xx wrote
The Chinese Room is just an exercise in shuffling complexity around and an argument from incredulity. Nothing is proven other than that the human in the room isn't the person being spoken to, which we started with in the premise.
DomesticApe23 t1_j9ikugc wrote
I'm sure that sounded really good in your head, but it doesn't seem to mean anything. Perhaps try using simple language to convey your ideas.
SandAndAlum t1_j9ikzze wrote
It's perfectly coherent, unlike the Chinese Room arguments.
DomesticApe23 t1_j9il9uy wrote
It may be coherent but it doesn't say anything. What do you mean by 'shuffling complexity around'? How is it an argument from incredulity? Say something worthwhile.
SandAndAlum t1_j9ilsm5 wrote
All of Searle's no-simulation arguments consist of making an information processing machine out of silly parts, hiding how much information such a system would contain, and then saying 'look, those parts are silly! There can't be meaning here.' It's pointless and circular.
But neither you nor he has defined meaning, and you're saying nothing about whether or not meaning is an emergent property. Facile dismissals based on the presumption that it cannot emerge are what's hollow. Pointing out how tautological that argument is is not.
DomesticApe23 t1_j9im59f wrote
ChatGPT is literally a Chinese Room. It understands nothing, yet it delivers meaning well enough, just as the Chinese Room translates Chinese well enough. Your failure to understand the specifics of ChatGPT's software is exactly analogous to 'hiding how much information such a system would contain'.
SandAndAlum t1_j9imdzi wrote
I know what a transformer is. Define understanding and prove there isn't any in one.
It's also not a Chinese Room because it's not indistinguishable, so the argument is doubly stupid.
DomesticApe23 t1_j9imi1p wrote
Yeah I think I'll leave the sophomoric philosophy to you mate, you're obviously very enamoured of your own opinions.
SandAndAlum t1_j9iml4q wrote
And yet you're the one sophomorically insisting on a conclusion with no supporting logic or evidence.
DomesticApe23 t1_j9imwch wrote
What conclusion is that?
SandAndAlum t1_j9in9v7 wrote
Your presupposition that understanding cannot emerge from a table of numbers and some rules for multiplying and adding them is also your conclusion: that no understanding or new meaning can emerge.
Your conclusion is identical to your assumption, so you're just extremely arrogantly saying nothing, then even more arrogantly falling back to an argument from authority where someone else did the same thing.
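For anyone wondering what "a table of numbers and some rules for multiplying and adding them" refers to, here is a minimal sketch of a single self-attention step, the core arithmetic inside a transformer, with toy sizes and random weights. Whether understanding can emerge from operations like these is exactly what is in dispute.

```python
import numpy as np

# One self-attention head stripped to its arithmetic: tables of numbers
# (weight matrices) and rules for multiplying and adding them.
# Sizes and values are arbitrary toy choices.
rng = np.random.default_rng(0)
d = 4                                   # embedding width
x = rng.normal(size=(3, d))             # 3 tokens, each represented by d numbers

W_q, W_k, W_v = (rng.normal(size=(d, d)) for _ in range(3))

Q, K, V = x @ W_q, x @ W_k, x @ W_v     # multiply
scores = Q @ K.T / np.sqrt(d)           # multiply again
weights = np.exp(scores)
weights /= weights.sum(axis=1, keepdims=True)   # normalise (softmax)
out = weights @ V                       # weighted sum: multiply and add

print(out.shape)                        # (3, 4) -- this, stacked and repeated, is the model
```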
DomesticApe23 t1_j9inji2 wrote
I'm sure you can point out where I said that.
CaseyTS t1_j9ioa8y wrote
You're so aggressive for literally no reason at all.
CaseyTS t1_j9io8yn wrote
The thing you were talking about was developing deep and unique insights about the human experience, from the comment. Yes, you can do that with a generative model that does not have subjective experience. It can intelligently and creatively synthesize information from vast amounts of documented human experience. That is literally what generative LLMs are designed to do - learn from humans and talk about it.
Rofel_Wodring t1_j9m2k22 wrote
What SandAndAlum means is that the Chinese Room Experiment shuffles the responsibility for explaining humanity's (self-oriented and essentialist) viewpoint of consciousness onto the computer. It just takes human consciousness as a given that doesn't have to justify itself, and certainly not through reductionism.
Because if our mode of consciousness did have to justify itself by the same rules of the computer in the Chinese Room Experiment, we'd fail in the same way the computer would fail.
CaseyTS t1_j9inh7k wrote
I understood it. I think I get "incredulous," but I didn't google it.
Ian_ronald_maiden t1_j9hp0mm wrote
That’s not quite what I’m talking about.
The nature of actual artistic insight means it is impossible to mimic, by virtue of the fact that successful mimicry in this sense cannot exist: successful mimicry of insightful art would just be actual insightful art.
It’s not a question of “can a computer create something that looks like art”. We know it can. We already know that ChatGPT can produce good writing from the perspective of someone with no artistic understanding.
What’s fascinating here is the idea that AI could create actual art, because if a machine is able to create something from which people gain a new or unique perspective via whatever artistic medium, then we have machines that aren’t mimicking, they’re just doing.
DomesticApe23 t1_j9hr70b wrote
That's not even new. People have been finding meaning in sunsets and the sound of babbling brooks for millennia. People already assign meaning to nonsense, are unable to distinguish bullshit from meaning, and Rupi Kaur is a famous poet. You can generate trite verse with ChatGPT right now that is just as meaningful as her banal nonsense, and if you market it right people will lap it up. What's the difference?
It's not an intrinsic property of the work you're talking about, it's perceptions. Right now ChatGPT sucks at creating fiction, not because 'it still doesn't understand'. It will never understand. But all it has to do is complexify its model enough that it encompasses longer forms. All that takes is raw data.
I don't really know what you mean by 'actual art'.
Ian_ronald_maiden t1_j9hx5z0 wrote
I mean when ChatGPT has the flair and reasoning to know why “banal nonsense” was the best possible way to convey not only your idea but your tone and feeling.
DomesticApe23 t1_j9i25rb wrote
Write me a paragraph sarcastically mocking Avatar: The Last Airbender movie
Oh, where do I begin with Avatar: The Last Airbender movie? It's a masterpiece! An absolute cinematic treasure that completely did justice to the beloved animated series. Who needs compelling character development and a coherent storyline when you have a cringe-worthy script and laughable acting? And why bother staying true to the original source material when you can just completely butcher it beyond recognition? The stunning visual effects and impressive choreography are obviously enough to make up for any other shortcomings. Clearly, the filmmakers had a deep understanding of what made the animated series so iconic and chose to completely ignore it in favor of their own creative genius. Bravo, Avatar movie, bravo.
Ian_ronald_maiden t1_j9igc66 wrote
Exactly. A cliche machine as it stands.
DomesticApe23 t1_j9hxd9a wrote
It will never know anything. What are you not understanding here?
Ian_ronald_maiden t1_j9igda6 wrote
I think you’re failing to understand that I’m agreeing with you
CaseyTS t1_j9indh5 wrote
I'm under the impression that our own cognition is like a Chinese box. Sincerely, a physicalist.
SandAndAlum t1_j9io0d3 wrote
There is the kinda-open question of whether there are physical phenomena that cannot be modelled as an information process. True randomness would be one. Free will (insofar as the phrase is at all well defined) would potentially be another.
If so, then not all physical phenomena are reducible to information processes, and "meaning" could be one of them.
Rofel_Wodring t1_j9m1l2v wrote
>There is the kinda-open question of whether there are physical phenomena that cannot be modelled as an information process.
Spiritualists pretend like there is so they can have a scientific justification for crap like souls and telepathy, but from a materialist perspective: no, there isn't. If it can't be modelled as an information process, it doesn't fucking exist.
For example: randomness can be modelled as an information process. It's probably one of the easiest ones there is. It only seems complex because our brains are bad at handling iterative probability, or even non-linear change.
But that just means we're weak babies with simple minds, unable to comprehend the full consequences of our actions. It doesn't mean that it's actually a difficult thing to simulate in an information process, and it certainly doesn't mean that there exist physical phenomena that cannot be modelled as an information process. Because, again, such things don't and can't exist outside of spiritualists' imagination.
SandAndAlum t1_j9m3r1g wrote
> For example: randomness can be modelled as an information process. It's probably one of the easiest ones there is. It only seems complex because our brains are bad at handling iterative probability, or even non-linear change
You can model stochastic systems, but a Turing machine cannot produce a non-deterministic output. You can model the random system as a whole, but there is no rule saying when each particle will decay.
It could be some variant of superdeterminism/Bohmian nonsense, but that's even more mystical than souls. A block universe or many worlds doesn't tell you why you're the you experiencing one branch and not the you experiencing another.
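A minimal sketch of the distinction being drawn here: a deterministic program can model a stochastic process, but given the same seed it always produces the same output, so the machine itself contributes no genuine randomness. The decay example and its parameters are made up for illustration.

```python
import random

# A deterministic program *modelling* a stochastic process: exponential decay
# times for a batch of particles. Same seed, same output every time --
# pseudo-random, not random.
def simulate_decays(seed, n=5, half_life=1.0):
    rng = random.Random(seed)
    lam = 0.693 / half_life             # decay constant from the half-life
    return [rng.expovariate(lam) for _ in range(n)]

print(simulate_decays(42))
print(simulate_decays(42))              # identical list
```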
WetnessPensive t1_j9l30ph wrote
Can you elaborate on this? I'd never heard this before, and would like to know where to be pointed to know more.
FindorKotor93 t1_j9hnf62 wrote
Let's put it this way: you take someone else's insight on the human experience and change the words in a way that doesn't impact understanding. You now have an insight on the human experience that may resonate better with certain people purely based on the language used. If the AI figures out how to do that reliably, things are going to get very fucky for writers.
Ian_ronald_maiden t1_j9hodn4 wrote
That’s not insight though, it’s just plagiarism.
Life might get difficult for shitty writers, but it’s very interesting to think about AI perhaps communicating some beautifully crafted artistic truth that we’ve never considered.
FindorKotor93 t1_j9hprrt wrote
New Writers.* Pretty much anyone without an existing brand could be indistinguishable from AI at that point. Authors don't constantly generate new unique insights into the human experience; they put down a version that resonates with different people, and it is no more plagiarism than GRRM plagiarised Tolkien, or Tolkien plagiarised myth, or Lewis plagiarised the Bible.
If the AI can teach itself how to conserve meaning whilst rewriting, then the written word becomes a dangerous world indeed.
Ian_ronald_maiden t1_j9hqhl2 wrote
Wasn’t photography supposed to destroy painting as well though?
If new writers cannot provide a single original thought, then perhaps they don’t deserve to break in anyway. No one is actually owed a successful novel, and if an expert craftsman can’t produce something any better than a literary sausage maker, then, well… perhaps this can provide some impetus for a sorely needed new phase of creativity.
Because it is quite notable that no one has done anything truly new and game-changing since Tolkien, and he started writing more than a century ago.
FindorKotor93 t1_j9hqr1w wrote
Nobody has an original thought, all thoughts are informed by experience and understanding passed onto us. The same thing the AI will mimic. I don't know why you're so opposed to this logic beyond a need to feel right.
Ian_ronald_maiden t1_j9igsmd wrote
Determinist art is boring though
Rofel_Wodring t1_j9m3zf6 wrote
It's also the only kind of art that exists, will ever exist, or even can ever exist.
Unless you're one of those spiritualists who think artistic talent comes from ~the human spirit~ instead of something more mundane and deterministic such as 'the artist's wartime experiences as a child' or 'exposure to hundreds of other artists of that genre'.
Ian_ronald_maiden t1_j9mjbpf wrote
Determinism is junk.
Decisions and emotions exist, and it’s art’s ability to evoke and reflect them both that makes it interesting.
Being shaped by experiences is different from being a slave to them.
yaosio t1_j9ken3g wrote
Photography couldn't replace all forms of painting. It could only replace art that attempted to replicate real life as perfectly as possible.
KillianDrake t1_j9i0ysj wrote
And people don't always need new original thoughts. They just want to be entertained cheaply. If an AI can write a full 10-novel series in an hour that entertains people enough and only costs a quarter, then... so be it? Better than waiting for 20 years for a series that the author's gotten tired of (ahem, GRRM).
The real thing that is attempting to be protected here is the gate-keeping. We no longer need editors, we no longer need publishers, we no longer need bored millionaire authors...
Ian_ronald_maiden t1_j9igle0 wrote
Sure. But derivative cliche ridden re-hashes are no great loss to anyone.
The gatekeeping of reliable sources, however, as we’ve seen in recent years, is a critical function.
The refusal of Zuckerberg, Musk, etc. to accept editorial responsibility for their tech platforms has been disastrous.
Adorable-Ad-3223 t1_j9hu4bm wrote
From my perspective as a reader, it doesn't matter whether the content is written by a feeling human or a bot pretending to have feelings, as long as it's good.
Ian_ronald_maiden t1_j9hwxgs wrote
That’s kind of how I feel about it too. If you engage with the text on its own terms, then it’s either good or it’s not.
Trips-Over-Tail t1_j9ioges wrote
Oh god, they're stealing my techniques too?
jawshoeaw t1_j9ioh96 wrote
Chess is in some sense a solvable math problem. Writing is not
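A minimal sketch of why a game like chess is tractable as a math problem: discrete states, a fixed set of legal moves, and an objective win/loss outcome that can be searched exhaustively. A toy take-1-or-2 game stands in for chess below; the point is the shape of the problem, not the game. There is no analogous objective scoring function for a piece of writing.

```python
from functools import lru_cache

# Exhaustive game-tree search over a toy take-away game: each turn a player
# removes 1 or 2 objects from a pile, and whoever takes the last object wins.
# The same search shape applies to chess, just over vastly more states.
@lru_cache(maxsize=None)
def player_to_move_wins(pile):
    """True if the player about to move can force a win."""
    if pile == 0:
        return False                    # no move left: the previous player already won
    # A position is winning if some move leaves the opponent in a losing position.
    return any(not player_to_move_wins(pile - take)
               for take in (1, 2) if take <= pile)

print(player_to_move_wins(6))   # False: multiples of 3 are losing positions
print(player_to_move_wins(7))   # True
```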
amadmongoose t1_j9iop1y wrote
We thought that writing, art, and music would be the last bastions of human-only competencies, but if ChatGPT is already this good then I'm sorry, it's just a matter of time.
jawshoeaw t1_j9ip89n wrote
Maybe. I have been impressed with ChatGPT, but mostly in its ability to replicate the tedious and practical. The things so many of us must do for a paycheck. You know that feeling that you love a song and wonder, will there ever be another song this good? Or a book where you’re literally depressed that it’s over and want to cry that nothing written will ever make you feel that way again? I don’t believe that will be reproduced by an AI. If it is, I’m done.
hxckrt t1_j9iv5lh wrote
It's good at replicating text patterns, but it doesn't reason and can basically only copy humans chatting. Midjourney might have been a better example. Point is that those systems will fundamentally not surpass humans, just become better at copying us.
KillianDrake t1_j9kmqd9 wrote
what is "reason"? humans simply have more neurons firing in an insanely efficient manner.
when ML reaches the same number of "neurons" firing, it will produce the same kind of results. then it will be focusing on increasing the efficiency.
there is nothing special about humans
hxckrt t1_j9lvpfi wrote
When you make a chip with just as many transistors as a calculator, does it automagically become a calculator? No, it needs to be wired for the job and you need to program it. In the same way, neural networks need weights and biases, their "training".
You can get the calculations going, but where are you getting the training data to make art and music superhuman? Because that's what the argument is about. Are you going to model the subjective appreciation of it? That doesn't work, because you can't write a loss function for what "better" art is.
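A minimal sketch of what "writing a loss function" means, using made-up numbers: when there is an objective target, the error is just arithmetic; for "better art" there is no target array to plug in, short of human ratings.

```python
import numpy as np

# When there is an objective target (say, tomorrow's temperature), the loss is
# simple arithmetic over made-up example data: mean squared error.
predicted = np.array([21.0, 19.5, 23.2])
actual    = np.array([20.0, 18.0, 25.0])
mse = np.mean((predicted - actual) ** 2)
print(mse)

# For "is this story better art than that one?" there is no `actual` array to
# plug in -- the closest substitute is human ratings, which puts humans back
# in the loop as the yardstick.
```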
KillianDrake t1_j9ma7x7 wrote
adversarial networks, the same way they train AlphaGo - once you have something that can produce and understand stories, then it can rate them. It will generate and rate itself millions of times faster than the human race did, and just like AlphaGo became dominant enough to take down Go grandmasters, so will this.
No point fighting against it, learn to adapt, learn to adjust.
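A hypothetical sketch of the self-rating loop described above, with stub generate_story and rate_story functions standing in for real models (neither corresponds to any actual system mentioned here); the hard part is where a trustworthy rate_story signal would come from.

```python
import random

# Hypothetical self-play loop: a generator proposes stories, a critic scores
# them, and the highest-rated ones would become new training signal.
# Both functions below are placeholder stubs, not real models.
def generate_story(rng):
    words = ["the", "ship", "drifted", "home", "alone", "again"]
    return " ".join(rng.choice(words) for _ in range(6))

def rate_story(story):
    return random.random()      # stub: a real loop needs a learned critic here

def self_play_round(n_candidates=8, seed=0):
    rng = random.Random(seed)
    candidates = [generate_story(rng) for _ in range(n_candidates)]
    return max(candidates, key=rate_story)   # keep the "best" story

print(self_play_round())
```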
hxckrt t1_j9nq67q wrote
Ah so the answer is "yes, we're going to model subjective appreciation of art"?
Go has an objective score you can quickly calculate to get better than humans. Writing and art do not, so you're still stuck copying humans, because you need them to rate the output. You're confusing objective score (quantity) with subjective quality.
And "no point fighting against it"? You're starting to sound like the Borg . Try to understand how this works before you abandon all hope in favor of our robot overlords.
Vezeri t1_j9jtttl wrote
And it really still is, because they copy from existing material but don't think for themselves. Current AI is more capital-A Artificial, like cheese whiz, and not really capital-I Intelligent like humans are. It is impressive how well it imitates things, but the key thing is that it only imitates; it doesn't make anything that doesn't exist already. Maybe one day we will have true AI that will surpass humanity, but that really isn't ChatGPT or Midjourney lol.
KillianDrake t1_j9km4p9 wrote
how do you think humans learn? by being forced to read and learn from a ton of existing material... as a blubbering mass of baby fat, you don't know how to speak, write or do anything unless someone shows you from EXISTING MATERIAL.
DrinkBen1994 t1_j9jqlgb wrote
And just like how no one gives a crap about AI chess but still cares about human chess players, no one will care about AI writers.
ZealousidealBus9271 t1_j9ku24t wrote
Have one fill in the blanks for the Asoiaf books to complete the damn thing. George RR Martin be damned.
JoaoMXN t1_j9hmrkp wrote
Keep believing that. ChatGPT is a very simple AI; imagine the new ones being made with billions more data points and far more complexity. And given how trashy blogs and news sites already are with humans writing them, AIs will easily destroy those jobs in the future.
Ian_ronald_maiden t1_j9hp4ms wrote
You appear to completely agree with me
just-cuz-i t1_j9h750t wrote
It’s like talking to a 12 year old who happens to know how to search the internet really well.
pinkfootthegoose t1_j9i2dvj wrote
> Things like ChatGPT are appallingly bland and dull wordsmiths
have you seen corporate correspondence? it's perfect.
Ian_ronald_maiden t1_j9igfee wrote
Indeed. But corporate correspondence is far from good writing.
In fact, it’s one of the best examples of terrible writing you can find
yaosio t1_j9kdjhz wrote
Bing Chat uses a better model than ChatGPT, which results in better-written stories. The biggest improvement is that I don't have to tell Bing Chat not to start the story with "Once upon a time." It's now at the level of an intelligent 8-year-old fan fiction writer who needs to write their story as fast as possible because it's almost bedtime. https://pastebin.com/G8iTJmqk
Every time they improve the model it becomes a better writer. I remember when AI Dungeon had the original GPT-3 and it could not stay on topic, and that was fine tuned on stories.
Lagiacrus111 t1_j9mjcke wrote
In five years it'll be better...sadly
Ian_ronald_maiden t1_j9mk2vk wrote
Some impetus for human writers to stop rehashing and do better too, I hope.