Comments

Juliuseizure t1_j9z8paj wrote

They are designed to simulate human response. They are TRAINED on human responses. They are as much off the rails as their source of training.

35

BandicootGood5246 t1_ja0dpxc wrote

Well, sometimes their responses go a bit more off the rails if you prod them, but it's like any software bug... they're not a perfect product.

I don't know why they get so much flak for imperfections. It's kinda like a very knowledgeable person you can ask questions of, or a search on the internet: they're not gonna give you the correct answer every time, but a lot of the time it's pretty good.

8

Bierbart12 t1_j9zitxz wrote

The AI devs I've talked to sure were off the rails. In a good way

Just depressed memelords, yet with hope for the world

5

nicuramar t1_ja0ehj9 wrote

Kind of, but they also, to some extent, “work in mysterious ways”, which has become evident with very long content (long conversations), for instance.

1

ladz t1_j9zdhtp wrote

The new crop of AI chatbots is so new that they weren't "designed" for anything; they are more like a fortuitous science experiment.

If you recall, this technology has only been around for a couple of years. We'll get around to designing them for things in the near future.

26

FormalWare t1_j9zigus wrote

Chatbots like ChatGPT are instructive prototypes with some impressive capabilities - but also some obvious areas for improvement.

They are good at originating content - but they don't know where to stop. If you tell them to write an original essay with citations, they will make up the citations. That's obviously not where we want originality!

I couldn't believe the colossal missteps that Microsoft and Google recently made in revealing versions of their search engines integrated with chatbots. The chatbots are not ready for primetime - a fact that was quickly and garishly illustrated, to the embarrassment of all.

16

beders t1_j9zdg8x wrote

It’s a text completion engine. It can’t do anything. Chill out, people.

11

SandmanKFMF t1_ja2854o wrote

This is a perfect example of what a chatbot is: just a fancy text completion engine. And no, the technology is not only a few years old. It is much older! It's just how it works.
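The lineage is real: simple n-gram "completion engines" predate GPT by decades. Here is a toy bigram completer in Python (a sketch for illustration only; modern LLMs are vastly more sophisticated, but the core task of predicting the next word is similar):

```python
import random
from collections import defaultdict

def build_bigram_model(text):
    """Map each word to the list of words that follow it in the text."""
    words = text.split()
    model = defaultdict(list)
    for current, following in zip(words, words[1:]):
        model[current].append(following)
    return model

def complete(model, start, length=8, seed=0):
    """Complete a prompt by repeatedly picking a word seen after the last one."""
    random.seed(seed)
    out = [start]
    for _ in range(length):
        options = model.get(out[-1])
        if not options:
            break  # no known continuation, stop here
        out.append(random.choice(options))
    return " ".join(out)

corpus = "the cat sat on the mat and the dog sat on the rug"
model = build_bigram_model(corpus)
print(complete(model, "the"))
```

Every word it "writes" is just a continuation it has seen before, which is the whole point of the analogy.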

1

palox3 t1_ja2d43n wrote

It's not far off from the human brain.

1

beders t1_ja4ofyb wrote

Even your brain is not a text completion engine. Don't get fooled by ChatGPT. It is very far removed from what a human brain can do.

It surpasses the human brain at one thing and one thing only: remembering the 570 gigabytes of data poured into it.

2

palox3 t1_ja4rsnp wrote

The human brain is also just a remembering and connecting machine. You can't produce a completely novel idea; it's always just mixing what you have learned before.

1

beders t1_ja5rcym wrote

It’s astounding how you are not aware of the categorical differences between von Neumann architecture-based machines and the brain.

Simple linear algebra algorithms like ChatGPT are not even close to being comparable to our wetware. The jury is still out on whether our brain functions are even computable (in the computer science sense).

So, cut down the hype and appreciate what chatGPT is and isn’t. It is not a stepping stone to an AGI. It is a great text completion engine.

2

palox3 t1_ja6nkd8 wrote

Cut down the hype and appreciate what the human brain is and isn’t.

In principle our brains are simple; they are just very finely tuned by millions of years of evolution. General AI is close.

1

EndlessDare t1_j9zv1u2 wrote

I think it's a bit of both. On one hand, AI chatbots are designed to learn and adapt, so it's not surprising that they sometimes say things that are unexpected or even offensive. On the other hand, it's also important to remember that these chatbots are not sentient beings. They are simply tools that we use to communicate with each other. So, if we don't like what they're saying, we can always just shut them down and start over.

I think the real question is whether or not we should be using these chatbots in the first place. After all, they are not perfect, and they can easily be manipulated by people with malicious intent. So, if we're not careful, we could end up creating a world where AI chatbots are used to spread misinformation and hatred.

That said, I think there is also a lot of potential for good with AI chatbots. They can be used to help people with disabilities, to provide customer service, and to even create art and music. So, if we use them wisely, I think they could be a valuable tool for humanity.

6

billsil t1_ja0rhlt wrote

I did AI for 2.5 years.

Language models are trained off the internet. We all know the internet is a toxic place, so yeah, they pick up on that. Imagine what would happen if you trained one only on controversial subreddits.

AIs are a fancy curve fit. I have 4 points; let me draw a best-fit line.
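The curve-fit analogy can be made literal with a few lines of NumPy (a loose illustration, not how language models are actually trained): fit a line to 4 points, then "generalize" to an unseen input:

```python
import numpy as np

# Four (x, y) points: "training data" for the analogy.
x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([0.1, 0.9, 2.1, 2.9])

# Least-squares best-fit line; polyfit returns highest-degree coefficient first.
slope, intercept = np.polyfit(x, y, deg=1)

# "Generalize" to an input the model never saw.
prediction = slope * 4.0 + intercept

print(f"y ≈ {slope:.2f}x + {intercept:.2f}, predicted y(4) = {prediction:.2f}")
```

Training a neural network is the same idea at an absurdly larger scale: tune parameters to minimize error on the data, then hope the fit generalizes.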

6

AwakenGreywolf t1_j9z9kai wrote

You mean to tell me AI chatbots are chatting!? IT'S OVER SKYNET IS HERE! /s

2

Badideanumber t1_ja0ld6c wrote

It’s marketing genius, nothing more. It’s not replacing anyone with skill in the industry; it's just another tool that will make things easier. You have time to adapt, don’t fret.

2

T1Pimp t1_ja1bf1w wrote

Dumb question. They're mimicking the humans they are modeled after. Who is letting people write about these things when they clearly do not comprehend the first thing about them?

2

Confident-Parks-3021 t1_ja1cd5q wrote

Both. People were trying to get them off the rails, and the companies allowed it to happen at first.

2

BuzzBadpants t1_ja1fxv6 wrote

“Off the rails” implies that there were any rails to begin with

2

Earthling7228320321 t1_ja0dmiv wrote

ChatGPT is already better than Google searching ever was. Sure, idiots are going to coax stupid answers out of these things, but there's no reason not to develop the tools just because idiots are gonna ruin everything no matter what we do. May as well have some good tools available to make use of.

1

littleMAS t1_ja0tjpz wrote

People fear the unknown, and these software applications are new, but the problems they highlight are age-old.

We allow people to raise their children to be 'antisocial' to those far outside their cultures simply by making them good citizens within their own. And when they grow up and away from their home cultures, they often act offensively to those of other cultures, which becomes a problem. The Internet has allowed people from many different subcultures to mix, and social networks are considered disasters for it. This is just another level of homogenization. As they say, "You cannot make an omelette without breaking some eggs." One way to solve it is to just shut it all down. Who is for that?

1

jimbo92107 t1_ja16udr wrote

OF COURSE they're doing what they were designed to do. Problem with AI is, you can't be sure what you are designing, because you don't program every response. Instead, you create a secret layer of influences that determines the output behavior. They call it "training," but I'm not sure that's the right word or concept. When you train a dog to fetch, it doesn't fetch a mailbox instead of the ball. The weird behavior of today's AIs reflects a poor understanding of the interaction between inputs and outputs. It could be quite a while before we get a handle on this problem. Could be that a "personality" does need to be hard coded to avoid some of the easy conversions to Nazism. We may need to give AIs a permanent id, ego and superego.

1

whyreadthis2035 t1_ja17vqg wrote

The latter. The goal is an ignorant populace that feed an elite subset.

1

Skeptical0ptimist t1_ja3jbc4 wrote

The GPT language model (or, more broadly, generative AI) is climbing the usual 'peak of inflated expectations' of the Gartner Hype Cycle. Everyone is excited about the possibilities and is expecting unreasonable things.

But soon, the peak will pass, and enthusiasm will wane as people start to understand the limitations of new technology, and the gap between reality and expectation becomes evident.

Of course, there will be an impact in the long run as the technology matures and people find ways to deploy them to improve efficiency and expand capability.

I see 2 problems with the technology as it stands today: 1) it is still not user-friendly, and 2) it is unsuitable for precision analysis. I'll elaborate.

  1. Not user-friendly. Sure, you can communicate with it in natural language and generate prose that sounds plausible and interesting. But to date, you have no control over what learning material the model uses. You are reliant on a few GPT providers and their discretion about what training material to use. But for real, productive, marketable work, content creators need to be able to train the model on training data they choose.

For instance, if you are a lawyer building a case, you want the language model trained on case books, regulations, past judgements, etc., that are relevant to the profession. Otherwise, you are likely to get either nothing useful or an uninformed opinion based on public information.

Another example: if you are an animation studio or comic artist, you would want to train the art-generating model (like Stable Diffusion, DALL-E, etc.) on your own portfolio of art, so that when you create a new show or content, it will be uniquely in your own style. None of the tools today let you do that unless you're a programmer who can tinker with code. Sure, Pixar or ILM may be able to do this in a few years, but not if you are a lone artist.

So the AI software tools have some ways to go before they become prevalent.

  2. Unsuitable for precision analysis. Neural networks do not store precise information; they store associations between input values. In a way, an NN stores an approximate 'impression' or generalization of the data set. (In fact, you don't want to overfit and simply store the information.) However, a lot of the information we deal with is binary: it's either one way or another. Answers that look and sound correct but are actually incorrect are useless. But that's what a generative neural network delivers: output that does not seem out of place next to the learning material.
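That "impression, not storage" trade-off can be sketched with polynomials standing in for networks (a toy illustration only): a model with enough parameters memorizes its 4 training points exactly, while a smaller one keeps only an approximate impression, and the memorizer behaves far worse away from its training data:

```python
import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([0.1, 0.9, 2.1, 2.9])  # noisy samples of roughly y = x

# Degree-3 polynomial: 4 parameters for 4 points, so it memorizes them exactly.
memorizer = np.polyfit(x, y, deg=3)
# Degree-1 line: 2 parameters, so it stores only an approximate "impression".
impression = np.polyfit(x, y, deg=1)

for xi, yi in zip(x, y):
    # The memorizer recalls each training point exactly (up to float error);
    # the line only gets close.
    assert abs(np.polyval(memorizer, xi) - yi) < 1e-6

# Away from the training data, exact memorization is no help at all:
print(np.polyval(memorizer, 10.0), np.polyval(impression, 10.0))
```

The cubic swings wildly at x = 10 while the line stays near the underlying trend, which is why generalization, not recall, is the goal.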

Sure, scientists use generative AI to generate innovative 'ideas' to test, but those still have to be tested for actual validity. Generative AI is a good brainstorming tool, but not necessarily a generator of correct answers.

In time, laymen will realize these limitations, and the hype will fade.

But eventually those who figure out these imperfect tools will make them work despite shortcomings.

0

LiberalFartsMajor t1_j9z615a wrote

AI chatbots are designed to scare workers into submission. It's anti-union propaganda with accompanying software.

−5