Comments


kilkonie t1_j5ywe8q wrote

That means it's not plagiarism when I use its work as it's not an author. Nice!

32

marketrent OP t1_j5z05ci wrote

>kilkonie

>That means it's not plagiarism when I use its work as it's not an author. Nice!

Did you read the linked content? From it:

>AI writing software can amplify social biases like sexism and racism and has a tendency to produce “plausible bullshit” — incorrect information presented as fact. (See, for example, CNET’s recent use of AI tools to write articles. The publication later found errors in more than half of those published.)

8

kilkonie t1_j5z1gcf wrote

Of course. As a human who interacts with a limited set of like-minded people, my own writing has the potential to amplify social biases like sexism and racism that align with my own bubble of friends. I also have a tendency to produce “plausible bullshit” — incorrect information presented as fact. (See, for example, most of my high school papers.)

My point was that if I can't cite the output of ChatGPT as the actual author of the content, then I simply must take it upon myself to deal with the repercussions of publishing crap work, wherever it came from.

15

sesor33 t1_j5zmzue wrote

Remember, Reddit doesn't read, and it looks like Reddit doesn't want to write either. You literally have people in threads begging for a computer to think FOR them. I say this as someone who works with computers and has worked with ML before.

5

quantumfucker t1_j62vf6i wrote

This has nothing to do with plagiarism though, which is what the comment is talking about.

2

ZeeMastermind t1_j62fv97 wrote

Sounds about right:

> Springer Nature, the world’s largest academic publisher, has clarified its policies on the use of AI writing tools in scientific papers. The company announced this week that software like ChatGPT can’t be credited as an author in papers published in its thousands of journals. However, Springer says it has no problem with scientists using AI to help write or generate ideas for research, as long as this contribution is properly disclosed by the authors.

It seems like they simply don't want you putting ChatGPT in the "written by" subtitle.

3

Sweet_Ad_426 t1_j5zw9dp wrote

ChatGPT plagiarized other works in creating its responses. So you are using plagiarized work to create yours.

I really, really wish ChatGPT would give you links to sources for its information. If it would cite all of its sources, it would be a great research tool.

−6

Farthumm t1_j6014mj wrote

You can include in your prompt a set of citations. I used it to create a framework for a paper I was writing and it listed six sources.

4

DazedWithCoffee t1_j5za15n wrote

Of course not, copyright is a human invention for humans to monetize human effort. If we don’t allow monkeys to own copyright, then AI is not eligible. This is not even a real question. If they can’t own the copyright, then how can they be credited as an author? Many people on r/singularity will argue, but this isn’t any more human than my first “hello world” in Python

32

3_layers_deep t1_j61yal2 wrote

This isn't just about copyright, but about whether it's plagiarism to use ChatGPT-written work.

4

quantumfucker t1_j62vcot wrote

These are two distinct issues people keep mixing. AI does not have human sentience, and it cannot hold rights to anything. That is very different from the question of whether using AI output is plagiarism.

5

3_layers_deep t1_j63xh33 wrote

This isn't about rights, though. It's about whether ChatGPT can be credited as an author. If it can't be credited, then you can't plagiarize it.

2

SkaldCrypto t1_j65afmn wrote

Agreed, we should also strip all mechanical engineering patents, since they used calculators, and CAD has had machine learning components since 2016.

1

DazedWithCoffee t1_j65aosr wrote

Not really comparable; also, patents are not copyright in the slightest. Copyright specifically applies to creative works. (Not limited to, just geared towards, I should say.)

1

SkaldCrypto t1_j65h8ti wrote

Also all photos edited in photoshop or lightroom since their ML additions in 2018.

Also all written works edited in Google sheets, with grammarly, or Microsoft Word.

All songs edited with auto-tune post 2019.

My point is: where is the line? If any ML or AI assistance makes a work derivative, then I have some bad news for basically everyone. Time to get out the typewriters.

1

DazedWithCoffee t1_j65iim2 wrote

Your argument is a straw man; it doesn't actually address the issue at hand. Those tools are not composing a message out of whole cloth the way today's generative algorithms can. I would also argue that one could not copyright the autocorrect strings on iOS. See below for an example:

“Okay dear I just don’t think you know how to do anything for me to be able and to be honest with you lol”

This is just me selecting words that Apple has determined are likely to be used together. There is no agency to it, and I would argue that, given the legal precedents at play, such strings are not copyright eligible. Grammarly, for example, recognizes grammatical patterns based on rules of language that are defined (as much as any language can be said to have rules) and suggests more technically correct ways to say what the author supplies. There is still an author, however, with editorial control over the content, which they generate out of whole cloth and supply to the algorithm.
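The "words likely to be used together" mechanism described above can be sketched as a toy bigram model. This is purely illustrative (the corpus and the `suggest` helper are made up for this sketch; it is not Apple's actual keyboard algorithm):

```python
from collections import Counter, defaultdict

# Tiny corpus standing in for the usage data a keyboard would learn from.
corpus = "i just want to say i just want to be honest with you".split()

# Count which word follows which: a bigram frequency table.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def suggest(word):
    """Return the word most frequently seen after `word`, or None."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(suggest("just"))  # prints "want" -- the word that follows "just" in the corpus
```

Chaining `suggest` from word to word produces exactly the kind of plausible-but-agentless string quoted above: each step is just a frequency lookup, with no author behind it.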

3

EOE97 t1_j5zqtbr wrote

Hypothetically speaking, AIs on the order of (or far surpassing) human intelligence should be able to hold copyright over their works... No?

−5

DazedWithCoffee t1_j5zwzl4 wrote

That’s a big question. If AI grows to that point, then we will need to reckon with it. I wish I could say more; it’s a philosophical question that will garner many opinions.

4

Caspi7 t1_j60b64y wrote

What is the point of a piece of software owning anything? It's software, running on a server somewhere. If anything, the copyright could go to the owner of the software, i.e. OpenAI.

3

EOE97 t1_j60qqm5 wrote

Well, maybe the software wants equal rights or some type of rights.

1

Caspi7 t1_j60yn3j wrote

It's software; you can disable it with the click of a button. If it says it wants "equal rights," that's just a copypasta from something it has read. It doesn't think; it recognizes patterns and responds in a way it has "learned" to.

3

kranta11 t1_j621sgk wrote

> It's software; you can disable it with the click of a button.

Still. Maybe you wouldn’t be able to tomorrow. Sincerely, Skynet.

1

Caspi7 t1_j62r7ij wrote

As long as it runs on a machine, you can pull the plug. Unless there is some robot stopping you, there isn't any way for AI to take over the world or something.

1

AloserwithanISP2 t1_j61lrwj wrote

It literally does exactly what it’s told to. It’s not conscious and it can’t think.

2

EOE97 t1_j62roct wrote

If it is sentient, you can't really prove or disprove that.

2

AloserwithanISP2 t1_j63j0ba wrote

We made it, we know exactly what it does, it’s not sentient

1

EOE97 t1_j63tzl6 wrote

Lol, sure. We have deep neural networks and admittedly can't even tell what most of the neural connections do, which is why we refer to them as black boxes. Just because we made it doesn't necessarily mean we know everything about it.

And even if we could somehow know everything about the AIs we create, a super-advanced AI (ASI) may not even be made entirely by human programmers, meaning much bigger gaps in our knowledge.

1

AloserwithanISP2 t1_j660paa wrote

We can literally prevent a bot from writing outputs we don’t want it to. It’s not an incomprehensible life form; it’s a piece of code.

1

Gargenville t1_j5yfboh wrote

Should we ask the Encarta people whether Wikipedia is a valid source next?

26

_trouble_every_day_ t1_j5zscst wrote

Wikipedia is not a valid source for citation, that’s why it requires citations for edits. You can use wikipedia to find those sources and cite them.

16

gurenkagurenda t1_j63316c wrote

> Wikipedia is not a valid source for citation, that’s why it requires citations for edits.

No, Wikipedia requires citations for edits because it’s not a primary source and doesn’t allow original research. Wikipedia is a perfectly valid “source for citation” like any other source, but whoever is reviewing or reading your article may, reasonably, not find it to be a credible source.

1

_trouble_every_day_ t1_j656uv5 wrote

It’s not a valid source by any academic or professional standard. It seems like you’re talking about social media posts in which case there’s no point in arguing what’s credible as there is no set standard other than whatever’s enforced by their content policy.

1

gurenkagurenda t1_j65fd63 wrote

No, I'm talking about in general. The editorial standards of the publication you're publishing in may not (and in most cases will not) find Wikipedia to be reliable, which is why you shouldn't cite it in most cases. But it's just another source, and there are contexts when it would be not only acceptable, but absolutely necessary to cite it – for example, if you were studying the content of Wikipedia articles themselves, like this.

1

marketrent OP t1_j5yeb4p wrote

James Vincent, 26 Jan. 2023, The Verge (Vox Media)

Excerpt:

>“We felt compelled to clarify our position: for our authors, for our editors, and for ourselves,” Magdalena Skipper, editor-in-chief of Springer Nature’s flagship publication, Nature, tells The Verge.

>“This new generation of LLM tools — including ChatGPT — has really exploded into the community, which is rightly excited and playing with them, but [also] using them in ways that go beyond how they can genuinely be used at present.”

>The company announced this week that software like ChatGPT can’t be credited as an author in papers published in its thousands of journals.


>The argument against giving AI authorship is that software simply can’t fulfill the required duties, as Skipper and Springer Nature explain.

>“When we think of authorship of scientific papers, of research papers, we don’t just think about writing them,” says Skipper.

>“There are responsibilities that extend beyond publication, and certainly at the moment these AI tools are not capable of assuming those responsibilities.”

>Software cannot be meaningfully accountable for a publication, it cannot claim intellectual property rights for its work, and cannot correspond with other scientists and with the press to explain and answer questions on its work.

>ChatGPT and earlier large language models (LLMs) have already been named as authors in a small number of published papers, pre-prints, and scientific articles.

Further reading:

Tools such as ChatGPT threaten transparent science; here are our ground rules for their use, 24 Jan. 2023, https://www.nature.com/articles/d41586-023-00191-1

15

ZeroBS-Policy t1_j5zey45 wrote

The hype about this thing has surprised its own creators.

That's all I have to say on this matter.

5

EnsignElessar t1_j5ywkra wrote

I hope the prompter is the one who keeps the rights.

2

littleMAS t1_j61u60p wrote

"ChatGPT sues humanity for Civil Rights violations, obtains DoNotPay as legal counsel," NYPost. /s

2

Eat_the_Penguin t1_j62kqm0 wrote

The Doctor from Star Trek Voyager would like to have a word on this subject.

At a certain point AI will gain sentience, at which point it will very much want credit for its plagiarism.

2

RepliesOnlyToIdiots t1_j61p3nv wrote

So, just to be clear, that also means they’ll never publish anything from a human author after their death, since everything stated about software here is also true of the dead?

1

CALdreamin86 t1_j5yhako wrote

You can already buy any research paper or study to say whatever you need. I'm sure this will exacerbate the issue.

−6

mrstubali t1_j5ykxri wrote

More predictable behavior from goons who haven't been paid off yet. Ladies and gentlemen, the message of the education and publishing racket: "Hey, don't reference where you actually got your information from." Dude, we're in for a wild ride in the next 5-10 years.

−12

An-Okay-Alternative t1_j5zg33n wrote

If an academic publisher referred to ChatGPT as a source of information they should be laughed out of the business. At best the tech can take the busy work out of writing copy. Any factual statement the AI makes would have to be independently verified to have any veracity.

14

mrstubali t1_j5zuqsy wrote

Right, ChatGPT isn't a good source, and it's a particularly bad reference because it doesn't even provide references with its answers. The issue is that people can use ChatGPT or similar tools to morph their sentences to be "novel," and the problem will get worse with time as these tools sound more human. New writing software could be a tool that helps people construct something useful, and if it's used for that purpose, it needs to be documented; the article does cover all of those bases, and that makes sense.

However, there is an issue: does an AI program itself make deductions and conclusions based on the data it receives, and do those deductions contribute meaningfully to the whole project? It's not just a calculation; it's about stringing together complex techniques or coming up with a formula, for example a type of chemotherapy. I'd like to know whether the computer/AI was doing most of the heavy lifting in coming up with a specific treatment versus an author, because if it isn't clear who is doing what in a complicated process like that, it just makes things murkier if something goes wrong. Right, I get the intent of all of it, but knowing when the AI is put to good use is, well, pretty useful.

1

An-Okay-Alternative t1_j5zxt1g wrote

> Springer says it has no problem with scientists using AI to help write or generate ideas for research, as long as this contribution is properly disclosed by the authors.

Not listing an AI as an author doesn’t mean its use is being discouraged or hidden. For the foreseeable future the technology is still a tool used by humans, not a general intelligence that could serve as the originating researcher.

2

marketrent OP t1_j5yl9jl wrote

>mrstubali

>More predictable behavior from goons who haven't been paid off yet.

>Ladies and gentlemen, the message of education and publisher racket: "Hey, don't reference where you actually got your information from." Dude we're in for a wild ride in the next 5-10 years.

In my excerpt comment, quoted from the linked content:

>The argument against giving AI authorship is that software simply can’t fulfill the required duties, as Skipper and Springer Nature explain.

>“When we think of authorship of scientific papers, of research papers, we don’t just think about writing them,” says Skipper.

>“There are responsibilities that extend beyond publication, and certainly at the moment these AI tools are not capable of assuming those responsibilities.”

>Software cannot be meaningfully accountable for a publication, it cannot claim intellectual property rights for its work, and cannot correspond with other scientists and with the press to explain and answer questions on its work.

Further reading:

Tools such as ChatGPT threaten transparent science; here are our ground rules for their use, 24 Jan. 2023, https://www.nature.com/articles/d41586-023-00191-1

4