
SejaGentil OP t1_irk65sz wrote

That doesn't make sense to me; I don't think we're speaking the same language. I absolutely understand that's how it works, but why would it? Humans learn by continually adjusting their synaptic weights; that is fundamental to how we function as intelligent beings. It seems fundamentally impossible for an AGI to be just a static set of weights that is never updated, because then it won't learn. Humans don't need any labelling to learn, so why would deep neural networks?

0

[deleted] t1_irk6vyz wrote

[deleted]

1

SejaGentil OP t1_irkfnwg wrote

So that's where we disagree: I'd say humans learn a lot with no supervision. For example, we pick up our first language with no teaching whatsoever; we just do.

I don't have anything in mind, actually; I'm honestly just bothered that AI models like GPT-3 have static weights. It would make a lot more sense to me if they learned from their own prompts. Imagine, for example, if GPT-3 could remember who I am. I actually thought that was how LaMDA worked, i.e., that it retained memories of that Google developer. But yeah, I guess that's just how things are made.

1