Submitted by Dan60093 t3_10came3 in singularity
Lawjarp2 t1_j4fatix wrote
Complex understanding of, and reasoning about, the world is necessary for that. ChatGPT will still mostly spit out whatever it judges to be the most probable code, not something insightful. It can produce insight occasionally, but that is an emergent property that is weak at best. With a big enough dataset and enough parameters it could identify relationships and become complex enough for recursive self-improvement, but that would take a lot of money.
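To make "spit out the most probable" concrete, here is a minimal sketch of greedy next-token generation. This is a toy bigram model over a made-up corpus, not anything from ChatGPT; it just shows what always choosing the highest-probability continuation looks like.

```python
# Toy greedy language model: always emit the most frequent next word
# observed in a tiny training corpus (illustrative only).
from collections import Counter, defaultdict

corpus = "the model picks the most probable token the model picks".split()

# Count bigram frequencies: which word most often follows each word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(start, steps):
    """Greedy decoding: repeatedly take the single most probable next word."""
    out = [start]
    for _ in range(steps):
        candidates = follows[out[-1]].most_common(1)
        if not candidates:
            break
        out.append(candidates[0][0])
    return " ".join(out)

print(generate("the", 4))  # -> "the model picks the model"
```

The point of the toy: greedy decoding reproduces the statistically dominant patterns of its training data, which is why output tends toward the probable rather than the novel.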
Somebody posted a while ago arguing that LLMs are animal intelligence, like us, rather than "true" intelligence. That seems right: we haven't truly cracked intelligence. The thing is, we may not need to. We will eventually be able to train bigger, better, and more complex models and reach human-level intelligence.