Comments

wockyman t1_j5s0s9z wrote

> Artificial intelligence is writing fiction

There's no difference between fiction and disinformation as far as the AI is concerned. It's not learning to lie; it's just proceeding from the premise it's given. If I tell ChatGPT to explain what it is in the style of a witch, it will. It's not lying to me about being a soothsayer; it's just doing what I asked. Also, pedantically, the AI is writing misinformation, not disinformation: disinformation requires intent.
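For example, here's a minimal sketch with the official `openai` Python client (the model name and prompt are just placeholders, nothing from the article):

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hand the model a premise ("you are a witch") and it simply runs with it.
# It isn't deceiving anyone; it's completing the fiction it was given.
response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # placeholder model name
    messages=[
        {"role": "user", "content": "Explain what you are, in the style of a witch."}
    ],
)
print(response.choices[0].message.content)
```

The output reads like a soothsayer because that's what was requested, not because the model formed any intent to deceive.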

3

SierraVictoriaCharli t1_j5s8mn5 wrote

Proving that an AI can create correct information borders ridiculously close to the fundamental halting problem. Any information produced by an AI needs to be parsed and curated, both on input and output, by a human who can determine whether the information provided is plausible. Trusting an AI to provide any real answer is rote incompetence with computation.

3

EmbarrassedHelp t1_j5s4bwa wrote

> OpenAI, the nonprofit that created ChatGPT,

The reporter either couldn't do the bare minimum of research to see that it's been a for-profit company since 2019, or they failed to double-check the output of the automatic news-writing bot (ironically spreading misinformation themselves). Why should I trust the article when it fails to get basic information right?

2

SierraVictoriaCharli t1_j5s8w7y wrote

People who are incompetent with computation think that an AI can 'provide answers.' Basic computational theory, starting with the halting problem, tells anyone competent that this is a non-starter as a concept. Incompetent people believe that an AI is somehow magically capable of coming up with answers that are either 'right' or 'wrong'. This is simply indicative of their fundamental lack of competence regarding computational theory.

1

gurenkagurenda t1_j5sgcdp wrote

Creating disinformation is low effort and low skill, and you can hire people to do it for very little money. Simply producing disinformation at scale is not worrying; bad actors already have all the scale they could ever want.

What would be worrying would be an AI that could craft especially viral disinformation. That is, an algorithm that could model what it is about Pizzagate, the vaccines-and-autism myth, etc. that makes them so contagious, and then design a campaign intended to achieve a specific goal rather than just sowing chaos. I don't think we're very close to that technology, and I don't know of any research that even hints in that direction.

1

SierraVictoriaCharli t1_j5rtjsk wrote

Disinformation is not inherently false; within its functional axiomatic ontology it is a true but incomplete solution. Having a computer produce a right-ish answer, then analyzing that to produce better queries and refine the solution, is one thing. Producing a computer that can give the right answer from the get-go borders on the fucking halting problem. Everything coming out of the AI is 'disinformation'; it is only by synthesizing its analysis with recorded data that one can produce not-disinformation.

If you cannot comprehend that, you are not competent to write sensationalist nonsense like this crap. Anyone who expects the AI to automagically provide the answer, pro or con, is doomed to failure, because they lack the competence to understand what it is that AIs provide.

−1