
dudaspl t1_je3zq9d wrote

Exactly. LLMs mimic intelligence by generating text, and since they are trained on civilization-scale knowledge and data, they do it very well and can seem as intelligent as humans.

The real test is to put them in novel scenarios and see whether their intelligence can produce solutions, e.g. put them in some sort of escape room and see if they can escape.

1