
jms4607 t1_j0d65c3 wrote

  1. Projecting can be interpolation, which these models are capable of. There are a handful of text-to-image models that can imagine/project an image of a puppy wearing a sailor hat (see the first sketch after this list for what latent interpolation looks like).

  2. All you need is continuous sensory input in your RL environment, or a cost or delay attached to "thought" actions, both of which have been implemented in research and resolve your f(x) = 2x issue (see the second sketch after this list).

  3. The Cat example is only ridiculous because it obviously isn't a cat. If we can't reasonably prove that it is or isn't a cat, then asking whether it is a cat is not a question worth considering. The same idea applies to the question "is ChatGPT capturing some aspect of human cognition": if we can't prove that our brains work in a functionally different way that can't be approximated to arbitrary precision by an ML model, then it isn't worth arguing about. I don't think we know enough about neuroscience to state that we aren't just doing latent interpolation to optimize some objective.

  4. The Simba is only cute because you think it is cute. If we trained an accompanying text model for the Simba function and gave it the training data "you are cute" in different forms, it would probably answer yes when asked whether it was cute. GPT-3 or ChatGPT can refer to and make statements about itself.
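On point 1, here is a minimal sketch of the latent interpolation being described. The random vectors, the names `z_puppy`/`z_sailor_hat`, and the 512-dim size are stand-ins for a real text encoder's embeddings, made up purely for illustration:

```python
import numpy as np

# Hypothetical embeddings for two concepts. In a real text-to-image model
# these would come from its text encoder; random vectors stand in here.
rng = np.random.default_rng(0)
z_puppy = rng.normal(size=512)
z_sailor_hat = rng.normal(size=512)

def interpolate(z_a, z_b, alpha):
    """Linearly blend two latent vectors; alpha in [0, 1]."""
    return (1 - alpha) * z_a + alpha * z_b

# Points along the path between the two concepts. Decoding latents like
# these is what yields "a puppy wearing a sailor hat" style blends.
for alpha in np.linspace(0, 1, 5):
    z = interpolate(z_puppy, z_sailor_hat, alpha)
    print(f"alpha={alpha:.2f}  first dims: {z[:3].round(2)}")
```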
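And on point 2, a toy sketch of what "cost or delay of thought" in an RL environment could look like. The environment, reward values, and policy below are invented for illustration, not taken from any specific paper:

```python
import random

class DelayedThoughtEnv:
    """Toy environment: sensory input keeps streaming whether or not the
    agent acts, and choosing to 'think' costs time and reward."""

    THINK, ACT = 0, 1

    def __init__(self, horizon=20):
        self.horizon = horizon

    def reset(self):
        self.t = 0
        self.obs = random.random()  # continuous sensory input
        return self.obs

    def step(self, action):
        self.t += 1
        # Thinking incurs a small penalty; acting collects the current signal.
        reward = -0.05 if action == self.THINK else self.obs
        self.obs = random.random()  # the world moves on regardless
        done = self.t >= self.horizon
        return self.obs, reward, done

env = DelayedThoughtEnv()
obs, total, done = env.reset(), 0.0, False
while not done:
    action = env.ACT if obs > 0.3 else env.THINK  # trivial hand-written policy
    obs, r, done = env.step(action)
    total += r
print(f"return: {total:.2f}")
```

The mechanism is that once time passes and observations keep arriving whether or not the agent acts, deliberation is no longer free, so an agent can't stall forever computing f(x) = 2x.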

At least agree that evolution on Earth and human action are nothing but a multi-agent RL (MARL) POMDP environment.
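For reference, that framing is usually written as a tuple. This is the standard textbook form of a multi-agent POMDP (a partially observable Markov game when rewards are per-agent, a Dec-POMDP when shared); nothing here is specific to this thread:

```latex
% Standard multi-agent POMDP tuple (textbook notation):
\[
\langle N,\; S,\; \{A_i\}_{i=1}^{N},\; T,\; \{R_i\}_{i=1}^{N},\;
\{\Omega_i\}_{i=1}^{N},\; O,\; \gamma \rangle
\]
% N        -- number of agents
% S        -- set of environment states
% A_i      -- action set of agent i
% T(s' \mid s, a_1, \dots, a_N) -- state transition function
% R_i      -- reward (fitness) function for agent i
% \Omega_i -- observation set of agent i
% O(o \mid s', a) -- observation function
% \gamma   -- discount factor
```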
