
tooold4urcrap t1_ispfdo4 wrote

I think when we imagine something, it can be original. It can be abstract. It can be random. It can be something that doesn't make sense. I don't think anything with these AIs (is AI even the right term? I'm guessing it's more of a search engine type thing) is like that. It's all whatever we've plugged into it. It can't 'imagine', it can only 'access what we've given it'.

Does that make sense? I'm pretty high.

−4

AdditionalPizza t1_ispk568 wrote

>I'm guessing it's more of a search engine type thing

It isn't. It's fed training data, and then that data is removed; it literally learns from the training data. Much like when I say the word river, you don't just imagine a river you saw in a Google image search. You most likely think of a generic river that could be different the next time someone says the word, or maybe a quick rough image of a river near your house that you've driven by several times over the years. Really think about and examine the first thing that pops into your head. Is it always the EXACT same? Is it very detailed? The AI learned what a river is from data sets, and understands when it "sees" a painting of a unique river, the same as you and me.
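To make the "learns from training data, then the data is removed" point concrete, here's a toy sketch (a hypothetical one-weight model, nothing like a real transformer): the model distills a pattern into a learned weight, and still works after the training examples themselves are deleted.

```python
# Toy sketch (hypothetical, NOT a real transformer): the model stores
# a learned pattern (a weight), not the training examples themselves.

def train(data, epochs=200, lr=0.01):
    w = 0.0
    for _ in range(epochs):
        for x, y in data:
            w += lr * (y - w * x) * x  # gradient step toward fitting y = w * x
    return w

training_data = [(1, 2), (2, 4), (3, 6)]  # hidden pattern: y = 2x
w = train(training_data)
del training_data          # the training data is "removed"
prediction = w * 10        # the model still generalizes to unseen input
print(round(prediction))   # ~20, even though (10, 20) was never in the data
```

The point of the sketch: after training, nothing in `w` lets you look up the original examples; the model can only apply what it learned.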


>It can't 'imagine', it can only 'access what we've given it'.

This is exactly what the OP asked for an answer to. You say it can't imagine something, that it just has access to the data it was given. But how do humans work? If I tell you to imagine the colour "shlupange", you can't. You have no data on that. Again, I will stress: these transformer AIs have zero saved data in the way you're imagining, where they just search it up and combine it all for an answer. They do not have access to the training data. So how do we say "well it can't imagine things, because it can't..."

...Can't what? I'm not saying they're conscious or have the ability to imagine. I'm saying nobody actually knows 100% how these AIs come to their conclusions beyond using probability to pick the best answer, which appears similar to how human brains work when you really think about the basic process that happens in your head. Transformers are a black box at a crucial step in their "imagination" that isn't understood yet.
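The "probability for the best answer" step can be sketched in a few lines (with made-up scores, not output from any real model): a softmax turns raw scores for candidate next words into probabilities, and the model picks from those.

```python
import math

# Toy sketch of "picking the most probable next word".
# The scores below are invented for illustration, not from a real model.
def softmax(scores):
    exps = {word: math.exp(s) for word, s in scores.items()}
    total = sum(exps.values())
    return {word: e / total for word, e in exps.items()}

# Hypothetical raw scores for the word after "The cat sat on the ..."
logits = {"mat": 3.1, "roof": 2.2, "moon": 0.3}
probs = softmax(logits)
best = max(probs, key=probs.get)
print(best)  # "mat" -- the highest-probability candidate
```

How those raw scores get computed inside the network is the black-box part the comment is pointing at; this only shows the final, well-understood probability step.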

When you're reading this, you naturally just follow along and understand the sentence. When I tell you something, you instantly know what I'm saying. But it isn't instant; it actually takes a fraction of a second to process. Can you describe what happens in that quick moment? When I say the word cat, what exactly happened in your brain? What about turtle? Or forest fire? Or aardvark? I bet the last one tripped you up for a second. Did you notice your brain trying to search for something it thinks it might be? You had to try to remember your training data, but you don't have access to it, so you probably made up some weird animal in your head.

31

Altruistic_Yellow387 t1_ispjzwo wrote

But even humans need sources to imagine things (they extrapolate from things they already know and have seen; nothing is truly original).

15

visarga t1_isq5mvf wrote

I believe there is no substantial difference. Both the AI and the brain transform noise into some conditional output. AIs can be original in how they recombine things - there's space for adding a bit of originality there - and humans are pretty reliant themselves on reusing other styles and concepts, so not as original as we like to imagine. Both humans and AIs are standing on the shoulders of giants. The intelligence was in the culture, not in the brain or the AI.

3