
alexmin93 t1_jdmocbw wrote

The problem is that LLMs aren't capable of making decisions. While GPT-4 can chat almost like a sentient being, it's not sentient at all. It can't comprehend the limitations of its own knowledge and capabilities. It's extremely hard to make it call an API to ask for more context. There's no way it will be good at using a computer like a user. It can predict what happens if you do something, but it won't be able to take the action itself. It's mostly a dataset limitation: language models are relatively easy to train because there's a nearly infinite amount of text on the Internet. But are there any condition-action kinds of datasets? You'd need to observe human behavior for millennia (or install some tracker software on thousands of workstations and observe users' behavior for years).
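To make the "condition-action dataset" idea concrete, here's a hypothetical sketch of what a single logged observation might look like — the field names and values are illustrative assumptions, not any real logging format:

```python
import json

# Hypothetical schema for one condition-action observation, as might be
# captured by tracker software watching a user's workstation.
record = {
    "condition": {                      # observable state before the action
        "active_window": "Inbox - Mail",
        "screen_text": "Meeting moved to 3pm, please confirm.",
    },
    "action": {                         # what the user actually did
        "type": "click",
        "target": "Reply button",
    },
}

# Serialized, millions of records like this would form the (state, action)
# pairs that supervised training of an "acting" model would require.
line = json.dumps(record)
print(line)
```

The point is the scale problem: unlike web text, pairs like this don't already exist in bulk and would have to be collected deliberately.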
