
Mortal-Region t1_j8j9mii wrote

Neural networks in general are basically gigantic, static formulas: get the input numbers, multiply & add a billion times, report the output numbers. What you're imagining is more like reinforcement learning, which is the kind of learning employed by intelligent agents. An intelligent agent acts within an environment, and actions that lead to good outcomes are reinforced. An agent balances exploration with exploitation; exploration means trying out new actions to see how well they work, while exploitation means performing actions that are known to work well.
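The exploration/exploitation trade-off described above can be sketched with a toy epsilon-greedy bandit agent — a minimal illustration, not any particular RL library's API (the function name and parameters here are made up for the example):

```python
import random

def epsilon_greedy_bandit(true_means, steps=10000, epsilon=0.1, seed=0):
    """Epsilon-greedy agent on a multi-armed bandit.

    With probability epsilon the agent explores (pulls a random arm);
    otherwise it exploits the arm with the highest estimated value.
    Actions that lead to good outcomes are reinforced via a running mean.
    """
    rng = random.Random(seed)
    n_arms = len(true_means)
    counts = [0] * n_arms        # how many times each arm was pulled
    estimates = [0.0] * n_arms   # running mean reward per arm

    for _ in range(steps):
        if rng.random() < epsilon:
            arm = rng.randrange(n_arms)                            # explore
        else:
            arm = max(range(n_arms), key=lambda a: estimates[a])   # exploit
        reward = true_means[arm] + rng.gauss(0, 1)  # noisy reward signal
        counts[arm] += 1
        # incremental mean update: good outcomes raise the arm's estimate
        estimates[arm] += (reward - estimates[arm]) / counts[arm]

    return counts, estimates
```

Run long enough, the arm with the highest true mean ends up pulled far more often than the others, while the epsilon fraction of random pulls keeps the estimates for the other arms from going stale.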

3

phloydde OP t1_j8l76wr wrote

There have been documented cases where a human suffering from short-term memory loss will repeat themselves verbatim in response to a given stimulus. This leads me to believe that one of the missing links is short-/long-term memory.

2

Mortal-Region t1_j8lcmqf wrote

Yeah, I think that's right. An agent should maintain a model of its environment in memory, and continually update the model according to its experiences. Then it should act to fill in incomplete portions of the model (explore unfamiliar terrain, research poorly understood topics, etc).

1