3_Thumbs_Up t1_je8tvtc wrote

>We can test with things that are highly unlikely to be in the training data.

We can also test things where there's an infinite number of alternatives, so that memorization would be impossible.

If GPT could solve every arithmetic problem thrown at it, then it's obvious that it has developed some understanding of arithmetic, as it's simply impossible to memorize the answer for every possible problem.

However, the fact that it fails at arithmetic with large numbers could be an indication that it doesn't understand. Then again, the failure could also have other causes, such as insufficient working memory or something similar (humans would fail at multiplying large numbers in their head as well).
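As a rough sketch of what such a test could look like: generate random arithmetic problems across a range of operand sizes (so the answers can't plausibly have been memorized) and score the model's replies per size. `ask_model` here is a hypothetical placeholder for whatever model interface you're actually querying, not a real API.

```python
import random

def ask_model(prompt: str) -> str:
    """Hypothetical stand-in for a call to the model under test."""
    raise NotImplementedError("plug in your own model/API call here")

def run_arithmetic_probe(trials: int = 100, max_digits: int = 12) -> None:
    # Track accuracy per operand size to see where failures start.
    results = {}
    for _ in range(trials):
        digits = random.randint(1, max_digits)
        a = random.randint(10 ** (digits - 1), 10 ** digits - 1)
        b = random.randint(10 ** (digits - 1), 10 ** digits - 1)
        reply = ask_model(f"What is {a} * {b}? Answer with only the number.")
        correct = reply.strip().replace(",", "") == str(a * b)
        ok, total = results.get(digits, (0, 0))
        results[digits] = (ok + int(correct), total + 1)
    for digits in sorted(results):
        ok, total = results[digits]
        print(f"{digits}-digit operands: {ok}/{total} correct")
```

With enough digits the chance that a freshly generated problem appeared verbatim in the training data becomes vanishingly small, so consistently correct answers would be evidence of something algorithmic rather than recall. Breaking the results down by operand size also separates "can't do arithmetic at all" from "does fine until the numbers get too big to hold in working memory".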

So I think one could prove understanding, but proving lack of understanding seems harder.


3_Thumbs_Up t1_jadq63e wrote

There is an infinite multitude of ways history might play out, but they're not all equally probable.

The thing about the singularity is that its probability distribution of possible futures is much more polarized than humans are used to. Once you optimize hard enough for any utility function, you get either complete utopia or complete dystopia the vast majority of the time. That doesn't mean other futures aren't in the probability distribution.
