
Username912773 t1_jbz7y8x wrote

LLMs cannot be sentient: they require input to generate output and have no initiative of their own. They are essentially giant probabilistic networks that compute the probability of the next token or word.
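
A rough sketch of what "calculate the probability of the next token" looks like in practice, using GPT-2 via the Hugging Face transformers library purely as an illustration (any causal LM exposes the same interface):

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")
    model.eval()

    prompt = "The capital of France is"
    input_ids = tokenizer(prompt, return_tensors="pt").input_ids

    with torch.no_grad():
        logits = model(input_ids).logits  # shape: (batch, seq_len, vocab_size)

    # The distribution over the *next* token is the softmax of the logits
    # at the last position in the sequence.
    next_token_probs = torch.softmax(logits[0, -1], dim=-1)

    top = torch.topk(next_token_probs, k=5)
    for prob, token_id in zip(top.values, top.indices):
        print(f"{tokenizer.decode(token_id.item()):>10s}  p={prob:.3f}")

Everything the model does reduces to producing that distribution and sampling (or taking the argmax) from it.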

As you scale model size up, you need not only more compute to train it but also more time and data. So why would anyone just say "screw it" and spend potentially millions or billions of dollars on something that may or may not work and would almost certainly have little monetary return?

−2

TemperatureAmazing67 t1_jbzc8cc wrote

'require input to generate an output and do not have initiative' - you could just use random input, or another network's output.
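
Something like this closes the loop, as a rough sketch (GPT-2 via transformers is just a stand-in for any autoregressive LM; the seed here is random token ids rather than human input, and each round feeds the model's own output back in):

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")
    model.eval()

    # Seed with random token ids instead of a human-written prompt.
    context = torch.randint(0, tokenizer.vocab_size, (1, 8))

    for _ in range(3):  # each round uses the model's own output as the next input
        generated = model.generate(context, max_new_tokens=20, do_sample=True,
                                   pad_token_id=tokenizer.eos_token_id)
        print(tokenizer.decode(generated[0][-20:]))
        context = generated[:, -8:]  # keep a short window as the next "input"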

Also, the argument about the next token is screwed up. For a lot of tasks, a perfectly predicted next token is everything you need.

2

Username912773 t1_jbze0ug wrote

That’s not a solution. It doesn’t make LLMs sentient; it just makes them a cog in a larger machine.

Logic, task performance, and sentience are different things.

1