
likethatwhenigothere t1_j76c7nb wrote

But aren't people using it as a factual tool and not just getting it to write content that could be 'plausible'? There's been talk about this changing the world, how it passed medical and law exams -- which obviously requires being factual. Surely if there's a lack of trust in the information it's providing, people are going to be uncertain about using it. If you have to fact-check everything it provides, you might as well just do the research/work yourself, because you're effectively doubling up the work. You're checking all the work ChatGPT does and then having to fix any errors it's made.

Here's what I actually asked ChatGPT in regard to my previous comment.

I asked if the Borromean symbol (three interlinked rings) was popular in Japanese history. It stated it was, and gave me a little bit of history about how it became popular. I asked it to provide examples of where it can be seen. It came back with temple gates, family crests etc. But it also said it was still widely used today and could be seen in Japanese advertising, branding and product packaging. I asked for an example of branding where it's used. It responded...

"One example of modern usage of the Borromean rings is in the logo of the Japanese video game company, Nintendo. The three interlocking rings symbolize the company's commitment to producing quality video games that bring people together".

Now that is something that can be easily checked and confirmed or refuted. But what if it's providing a response that can't be?


Fake_William_Shatner t1_j77obea wrote

These people don't seem to know the distinctions you are bringing up. Basically, it's like expecting someone in the middle ages to tell you how a rocket works.

The comments are "evil" or "good" and don't get that "evil" and "good" are results based on the data, the algorithm employed, and how they were introduced to each other.

Chat GPT isn't just one thing. And if it's giving accurate or creative results, that's influenced by prompts, the dataset it is drawing from, and the vagaries of what set of algorithms they are using that day -- I'm sure it's constantly being tweaked.

And based on the tweaks, people have gotten wildly different results over time. It can be used to give accurate and useful code -- because they sourced that data from working code and set it to "not be creative" -- but its understanding of human language helps it do a much better job of searching for the right code to cut and paste. There's a difference between term papers, a legal document, and a fictional story.

The current AI systems have shown they can "seem to comprehend" what people are saying and give them a creative and/or useful response. So that, I think, proves it can do something easier like legal advice. A procedural body of rules with specific results and no fiction is ridiculously simple compared to creative writing or carrying on a conversation with people.

We THINK walking and talking are easy because almost everybody does them. However, for most people they're the most complicated things they've ever learned how to do. The hardest things have already been done quite well with AI -- so it's only a matter of time before it can do simpler things.

Getting a law degree does require SOME logic and creativity -- but it's mostly memorizing a lot of statutes, procedures, case law and rules. It's beyond ridiculous to think THIS is going to be that hard for AI if it can converse and make good art.
