
[deleted] t1_irjrp24 wrote

[deleted]

3

SejaGentil OP t1_irk0r98 wrote

Yes, I understand that. My point is that if it were continuously learning, the prompt would have no effective limit, because the model would learn from your previous prompts. You could teach it far more complex concepts than the prompt size allows: guide it through a domain-specific problem, then get it to help you with insights and answers. That's not possible right now, since you can't fit an entire field, or even one complex problem, in a single prompt.
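A minimal sketch of what that limit looks like in practice; the window size and whitespace tokenization below are made-up stand-ins, not GPT-3's real values:

```python
# Sketch: with a fixed context window, old prompts must eventually be
# dropped. MAX_TOKENS and the whitespace "tokenizer" are illustrative.

MAX_TOKENS = 2048

def build_prompt(history: list[str], new_message: str) -> str:
    """Keep as many recent messages as fit; everything older is lost."""
    kept, total = [], 0
    # Walk backwards so the most recent messages survive truncation.
    for msg in reversed(history + [new_message]):
        tokens = len(msg.split())  # crude token count for illustration
        if total + tokens > MAX_TOKENS:
            break  # anything you "taught" it before this point is gone
        kept.append(msg)
        total += tokens
    return "\n".join(reversed(kept))
```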

−1

[deleted] t1_irk31r2 wrote

[deleted]

−1

visarga t1_irloorr wrote

> You can just split a large text to parts and feed each one of them

This won't capture long-range interactions between passages, and it discards their ordering.
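A sketch of why: each chunk is processed in isolation, so one model call never sees what another chunk said (`run_model` is a hypothetical stand-in for a real API call):

```python
# Sketch of the chunk-splitting workaround: each piece is fed to the
# model independently, so cross-chunk references and ordering are lost.

def run_model(prompt: str) -> str:
    # Hypothetical stand-in for a real model API call.
    return f"(reply to {len(prompt)} chars)"

def process_in_chunks(text: str, chunk_size: int = 1000) -> list[str]:
    chunks = [text[i:i + chunk_size] for i in range(0, len(text), chunk_size)]
    # A pronoun in chunk 3 that refers to something introduced in chunk 1
    # cannot be resolved here, and nothing encodes which chunk came first.
    return [run_model(f"Summarize:\n{chunk}") for chunk in chunks]
```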

3

SejaGentil OP t1_irk65sz wrote

That doesn't make sense to me; I don't think we're speaking the same language. I absolutely understand that that's how it works today, but why should it be? Humans learn by adjusting their synaptic weights; that's fundamental to our functioning as intelligent beings. An AGI can't just be a static set of weights that never gets updated, because then it won't learn. And humans don't need any labelling to learn, so why should deep neural networks?
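The difference in one sketch, using a toy PyTorch model (illustrative only, nothing like GPT-3's actual architecture): today's deployment is the first half; continual learning would add the second.

```python
import torch
import torch.nn as nn

model = nn.Linear(10, 10)  # toy stand-in for a language model

# How models like GPT-3 are deployed: frozen weights, pure inference.
with torch.no_grad():
    reply = model(torch.randn(1, 10))

# What continual learning would add: a gradient step after each
# interaction, so the next call starts from updated weights.
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
x, target = torch.randn(1, 10), torch.randn(1, 10)
loss = nn.functional.mse_loss(model(x), target)
loss.backward()
optimizer.step()
```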

0

[deleted] t1_irk6vyz wrote

[deleted]

1

SejaGentil OP t1_irkfnwg wrote

So that's where we disagree: I'd say humans learn a lot with no supervision. We pick up our first language with no explicit teaching whatsoever; we just do.
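(For what it's worth, that's roughly how GPT-style models are pretrained too: the "label" at each position is just the next token of the raw text, so no human annotation is involved. Token IDs below are made up.)

```python
# Sketch of self-supervised next-token prediction: the targets are
# carved out of the raw text itself, with no human labelling.
tokens = [17, 4, 92, 8, 51]  # a sentence encoded as made-up token IDs

inputs = tokens[:-1]   # [17, 4, 92, 8]
targets = tokens[1:]   # [ 4, 92, 8, 51]

# Each position is trained to predict the token that follows it.
for x, y in zip(inputs, targets):
    print(f"given {x}, predict {y}")
```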

I don't have anything specific in mind, actually; I'm honestly just bothered that AI programs like GPT-3 have static weights. It would make a lot more sense to me if they learned from their own prompts. Imagine, for example, if GPT-3 could remember who I am. I actually thought that was how LaMDA worked, i.e., that it had memories of that Google developer. But yeah, I guess that's just how things are made.
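One way to fake that memory without touching the weights is to keep facts in ordinary storage and prepend them to every prompt; a hypothetical sketch (`run_model` again stands in for a real API call):

```python
# Sketch of prompt-based "memory": the model stays frozen, but stored
# facts are prepended to every request, so it appears to remember you.

memory: list[str] = []

def run_model(prompt: str) -> str:
    # Hypothetical stand-in for a real model API call.
    return f"(reply to {len(prompt)} chars)"

def chat(user_message: str) -> str:
    prompt = "Known facts about the user:\n" + "\n".join(memory)
    prompt += "\n\n" + user_message
    reply = run_model(prompt)
    memory.append(user_message)  # remembered for future turns
    return reply
```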

1