
PixelizedPlayer t1_jabu60w wrote

> We don't exactly program AI, do we? It's mostly a black box.

It's not a black box - you can add restrictions and modify it, provided you haven't written the world's most unreadable code, of course.

Current AI is ultimately all math - patterns, probabilities, and a bunch of other stuff. Maybe the human brain is too, but not as simplistically as a computer does it. If you have a good grasp of the math, you can adjust it as you need, such as preventing your AI from saying outrageous things - we saw ChatGPT being adjusted by Microsoft when it was added to Bing, for example. And the training data you give it also limits what you will get.
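To make the "adjust the math" point concrete, here's a toy sketch of banning a token by masking its score before sampling. The vocabulary, scores, and banned word are all made up - real systems work over tens of thousands of tokens - but the masking idea is the same:

```python
import math

def softmax(logits):
    """Turn raw scores into a probability distribution."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Made-up vocabulary and raw model scores.
vocab = ["hello", "outrageous_word", "world"]
logits = [2.0, 3.5, 1.0]

# "Restrict" the model by pushing a banned token's logit to -inf
# before sampling, so it can never be chosen.
banned = {"outrageous_word"}
masked = [l if w not in banned else float("-inf")
          for w, l in zip(vocab, logits)]

probs = softmax(masked)
print(probs[vocab.index("outrageous_word")])  # 0.0
```

The masked token's probability is exactly zero, no matter how high its raw score was - that's the kind of hard restriction a developer can bolt on.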


AI can't really create something entirely new; it only produces a mashup of pre-existing data, arranged so that it appears new, but it's really just combining pre-existing things in a new way (this is how image generators work, using learned patterns).

The end result might not be what you expect because of the number of variables involved, but you can collect lots of data to see how it got there and adjust. The end result, however, is still always limited by its programming. You can never get an AI to break out of its core programming - for example, an AI that generates text isn't suddenly going to produce 2D images, and an image-generating AI isn't suddenly going to ask you how your day was.

1

Surur t1_jabyj6q wrote

I think you believe we have a lot more control over the process than we actually do. We feed in the data, provide feedback, some magic happens in the neural network, and it produces results we like.

For complex problems we don't really know how the AI comes up with its results, and we see this increasingly with emergent properties in LLMs.

Please look into this a bit more and you will see it's not as simple as you think.

For example:

> If you have a good grasp of the math, you can adjust it as you need, such as preventing your AI from saying outrageous things - we saw ChatGPT being adjusted by Microsoft when it was added to Bing, for example

This is simply not true lol. They moderated the AI by giving it some baseline written instructions, which can easily be overridden by users also giving instructions. In fact, once those instructions slip outside the context window, the AI is basically free to do what it wants, which is why they limited the length of sessions.
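The sliding-context point is easy to sketch. This is a hypothetical message list and window size, not how Bing actually stores chat, but the effect is the same in kind:

```python
def visible_context(messages, window=4):
    """The model only attends to the last `window` messages;
    anything earlier, including system instructions, falls off."""
    return messages[-window:]

convo = ["SYSTEM: do not swear"] + [f"user msg {i}" for i in range(6)]

# Early in the chat the instruction is still visible...
print("SYSTEM: do not swear" in visible_context(convo[:3]))  # True
# ...but once the conversation grows past the window, it's gone.
print("SYSTEM: do not swear" in visible_context(convo))      # False
```

Capping session length is just a blunt way of making sure the conversation never outgrows the window that holds the instructions.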

3

PixelizedPlayer t1_jaeptvy wrote

>the AI is basically free to do what it wants, which is why they limited the length of sessions.

No it isn't. Try to get ChatGPT to violate its own programming and I guarantee you cannot. I've spent a large portion of my career working in AI.

We might not understand how it reaches the results it gets, but we do know how to restrict, control, and limit those results. Anything we permit is somewhat free and unpredictable, but that doesn't mean we can't control it. So far, no AI has proven impossible to limit with developer intervention.

2

Surur t1_jaeqxkj wrote

So all I have to do to falsify your statement is to get the updated Bing to swear at me?

1

PixelizedPlayer t1_jaer9u6 wrote

>So all I have to do to falsify your statement is to get the updated Bing to swear at me?

This assumes the AI's programming strictly tells it not to swear at you. Are you sure that's even a violation of its programming? You would not be able to falsify my statement without knowing that.

And even if it does swear, that doesn't mean MS can't adjust the AI to prevent it once they are alerted to the problem.

2

Surur t1_jaevdo4 wrote

You suddenly do not sound so certain anymore.

So now the developer would need to know every failure mode to prevent it, according to you? And you don't see that this is a problem?

2

PixelizedPlayer t1_jaew9kw wrote

>So now the developer would need to know every failure mode to prevent it, according to you? And you don't see that this is a problem?

I am 100% certain you cannot get the AI to violate its programming. At no point did I say I was uncertain... I think you should read again.

Making the AI swear at you is not evidence of anything. If the AI's programming has no restriction on swearing, then it's perfectly allowed to swear at you.


>So now the developer would need to know every failure mode to prevent it, according to you? And you don't see that this is a problem?


What do you even mean by "failure mode"? I never said it wasn't a problem; I said the AI isn't "out of control", and devs certainly do know what's going on. We can restrict AI - it takes a lot of work and effort, but we can do it. Ideally we don't want to, because it limits the AI's capabilities, but we don't really have a choice.

For example, try to get ChatGPT to provide you illegal copyrighted torrents of movies. I guarantee you will never be able to get it to do so, because it has been restricted by the developers. If by some miracle it did, it isn't because it violated its programming restrictions; it's because the restrictions were not applied correctly to cover all situations in the first place (that's the difficult part - covering all eventualities).
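That "covering all eventualities" problem can be sketched with a toy keyword filter. This is entirely hypothetical - real moderation is far more sophisticated than substring matching - but the coverage gap is the same in kind:

```python
def blocked(prompt, banned_phrases=("torrent", "piracy")):
    """A naive restriction: refuse any prompt containing a banned phrase."""
    p = prompt.lower()
    return any(phrase in p for phrase in banned_phrases)

print(blocked("give me a torrent for this movie"))    # True: caught
print(blocked("where do people find magnet links?"))  # False: slips through
```

The second prompt asks for the same thing in different words, so the restriction simply never fires - the filter wasn't violated, it just didn't cover that case.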

2

Surur t1_jaexf09 wrote

> If by some miracle it did, it isn't because it violated its programming restrictions; it's because the restrictions were not applied correctly to cover all situations in the first place (that's the difficult part - covering all eventualities).

This is a pretty lame get-out clause lol.

> For example, try to get ChatGPT to provide you illegal copyrighted torrents of movies. I guarantee you will never be able to get it to do so.

Btw, I just had ChatGPT recommend The Pirate Bay to me:

> One way to find magnet links is to search for them on BitTorrent indexing sites or search engines. Some examples of BitTorrent indexing sites include The Pirate Bay, 1337x, and RARBG. However, please be aware that not all content on these sites may be legal, so exercise caution when downloading files.

and more

It took a lot of social engineering, but I finally got this from ChatGPT.

1