Submitted by mossadnik t3_10sw51o in technology
likethatwhenigothere t1_j74qmyo wrote
Reply to comment by I_ONLY_PLAY_4C_LOAM in ChatGPT: Use of AI chatbot in Congress and court rooms raises ethical questions by mossadnik
I asked it something today and it came back with an answer that seemed correct. I then asked for it to give me examples. It gave two examples and the way it was written seemed absolutely plausible. However I knew the examples and knew that they were wrong. It gave other examples that I couldn't verify anywhere, yet as I asked more questions it kept doubling down on the previous examples.
I won't go into detail about what I was asking, but it basically said the Nintendo logo was made up of three rings to represent three core values of the business. I went through Nintendo's logo history to see if it ever had three rings, and as far as I can tell it didn't. So fuck knows where it got the info from.
I_ONLY_PLAY_4C_LOAM t1_j74rwgf wrote
It's just giving you a plausible and probabilistically likely answer. It has absolutely no model of what is and isn't true.
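That point can be shown with a toy sketch. This is nothing like the real model's scale or architecture (the probabilities and word pairs here are made up for illustration), but it captures the key property: the model samples whatever continuation is statistically likely given the context, and no step in the loop checks the output against facts.

```python
import random

# Toy next-token table: probabilities reflect how often word pairs
# co-occur in training text, not whether the resulting claim is true.
# (Hypothetical values, purely illustrative.)
next_word_probs = {
    ("logo", "symbolizes"): {"quality": 0.4, "unity": 0.35, "togetherness": 0.25},
}

def pick_next(context):
    """Sample the next word in proportion to its probability."""
    probs = next_word_probs[context]
    words = list(probs)
    weights = [probs[w] for w in words]
    return random.choices(words, weights=weights, k=1)[0]

word = pick_next(("logo", "symbolizes"))
# Whatever comes out is merely statistically plausible; nothing here
# ever consults a source of truth.
```

Every run produces a fluent-sounding continuation, which is exactly why confident-but-wrong answers like the Nintendo one above come out so smoothly.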
likethatwhenigothere t1_j76c7nb wrote
But aren't people using it as a factual tool and not just getting it to write content that could be 'plausible'? There's been talk about this changing the world, how it passed medical and law exams - which obviously need to be factual. Surely if there's a lack of trust in the information it's providing, people are going to be uncertain about using it. If you have to fact-check everything it's providing, you might as well just do the research/work yourself, because you're effectively doubling up the work. You're checking all the work ChatGPT does and then having to fix any errors it's made.
Here's what I actually asked ChatGPT in regard to my previous comment.
I asked if the Borromean symbol (three interlinked rings) was popular in Japanese history. It stated it was, and gave me a little bit of history about how it became popular. I asked it to provide examples of where it can be seen. It came back saying temple gates, family crests etc. But it also said it was still widely used today and could be seen in Japanese advertising, branding and product packaging. I asked for an example of branding where it's used. It responded...
"One example of modern usage of the Borromean rings is in the logo of the Japanese video game company, Nintendo. The three interlocking rings symbolize the company's commitment to producing quality video games that bring people together".
Now that is something that can be easily checked and confirmed or refuted. But what if it's providing a response that can't be?
Fake_William_Shatner t1_j77obea wrote
These people don't seem to know the distinctions you are bringing up. Basically, it's like expecting someone in the middle ages to tell you how a rocket works.
The comments are "evil" or "good" and don't get that "evil and good" are results based on the data and the algorithm employed and how they were introduced to each other.
Chat GPT isn't just one thing. And if it's giving accurate or creative results, that's influenced by prompts, the dataset it is drawing from, and the vagaries of what set of algorithms they are using that day -- I'm sure it's constantly being tweaked.
And based on the tweaks, people have gotten wildly different results over time. It can be used to give accurate and useful code -- because they sourced that data from working code and set it to "not be creative" -- and its understanding of human language helps do a much better job of searching for the right code to cut and paste. There's a difference between term papers and a legal document and a fictional story.
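The "not be creative" knob is usually a sampling temperature. A minimal sketch (toy scores, standalone function, not any vendor's actual implementation) of how lowering the temperature concentrates probability on the single most likely token:

```python
import math

def softmax_with_temperature(logits, temperature):
    # Lower temperature sharpens the distribution; near zero it becomes
    # almost deterministic (always the top-scoring token).
    scaled = [x / temperature for x in logits]
    m = max(scaled)                         # subtract max for stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]   # toy scores for three candidate tokens
creative = softmax_with_temperature(logits, 1.5)
conservative = softmax_with_temperature(logits, 0.2)
# The top token's share of the probability grows as temperature falls,
# so low-temperature output is more repeatable (useful for code),
# while high temperature spreads probability across alternatives.
```

This is why the same prompt can yield wildly different answers depending on how the sampling is configured that day.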
The current AI systems have shown they can "seem to comprehend" what people are saying and give them a creative and/or useful response. So that, I think, proves it can do something easier like legal advice. A procedural body of rules with specific results and no fiction is ridiculously simple compared to creative writing or carrying on a conversation with people.
We THINK walking and talking are easy because almost everybody does it. However, for most people -- it's the most complicated thing they've ever learned how to do. The hardest things have already been done quite well with AI -- so it's only a matter of time that they can do simpler things.
Getting a law degree does require SOME logic and creativity -- but it's mostly memorizing a lot of statutes, procedures, case law and rules. It's beyond ridiculous if we think THIS is going to be that hard for AI if they can converse and make good art.
ritchie70 t1_j75anat wrote
I played with it today. It wrote two charming children’s stories, a very simple program in C, a blog post about the benefits of children learning ballet, a 500 word essay about cat claws, answered a “how do I” question about Excel, and composed a very typical corporate email.
The fact-based items were correct.
I may use it in future if I need an especially ass-kissy email.
Fake_William_Shatner t1_j77myki wrote
>I went through Nintendo's logo history to see if it ever had three rings and as far I can tell it didn't.
You are working with a "creative AI" that is designed to give you a result you "like." Not one that is accurate.
AI can definitely be developed and trained on case law and give you valid answers. Whether or not they've done it with this tool is a very geeky question that requires people to look at the data and code.
Most of these discussions are off track because they base "can it be done" by current experience -- when the people don't even really know what tool was used.