
red75prime t1_iwko8vv wrote

The article talks about continuous-time networks. Those networks model processes that are better approximated as smooth changes than as a sequence of discrete steps. Something like baseball vs chess.
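To make that concrete, here's a tiny sketch in Python (a generic leaky integrator I made up for illustration, not anything from the paper): the state is defined by an ODE and advanced with many small integration steps, rather than by one fixed discrete update per input.

```python
import numpy as np

def continuous_time_unit(x0, inputs, tau=1.0, dt=0.01):
    """Leaky integrator dx/dt = (-x + u) / tau, advanced with small
    Euler steps. A discrete-step network would apply one fixed update
    per input; here the trajectory is smooth, and dt can be made as
    small as the underlying process requires."""
    x = x0
    trajectory = []
    for u in inputs:
        # Many tiny steps per input sample approximate the smooth dynamics.
        for _ in range(int(1.0 / dt)):
            x += dt * (-x + u) / tau
        trajectory.append(x)
    return np.array(trajectory)

# A step input: the state relaxes smoothly toward the new value
# at a rate set by the time constant tau.
print(continuous_time_unit(0.0, [0.0, 1.0, 1.0, 1.0]))
```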

A liquid time-constant network is one possible implementation of a continuous time network.

As far as I understand, liquid time-constant networks can adjust their "jerkiness" (time constant) depending on circumstances. That is, they can adjust how fast they change their outputs in reaction to a sudden change in input. To be clear, this is not the reaction time (the time it takes for the network to begin changing its output).

For example, if you are driving on an icy road while it's snowing, you don't want to slam the brakes the moment you think, for a split second, that you noticed something ahead. But you might want to do exactly that in good visibility on a dry road.
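A rough sketch of that idea (my own illustrative toy, loosely shaped like the LTC equation dx/dt = -(1/tau + f)x + fA, not the actual equations from the paper): the effective time constant is 1/(1/tau + f), so when the input-dependent gate f is large the state moves quickly, and when it's small the response stays sluggish.

```python
import numpy as np

def liquid_unit(x0, inputs, tau=1.0, A=1.0, dt=0.01):
    """Illustrative liquid time-constant neuron:
        dx/dt = -(1/tau + f) * x + f * A,  with f a function of the input.
    The effective time constant 1 / (1/tau + f) shrinks when the input
    provides strong evidence (large f), so the output changes fast;
    weak or ambiguous input leaves the response slow and cautious."""
    x = x0
    out = []
    for u in inputs:
        f = 1.0 / (1.0 + np.exp(-u))  # input-dependent gate (sigmoid)
        for _ in range(int(1.0 / dt)):
            x += dt * (-(1.0 / tau + f) * x + f * A)
        out.append(x)
    return np.array(out)

# Ambiguous input (u near 0) -> small f -> slow, gentle output change;
# clear input (large u) -> large f -> fast, decisive output change.
print(liquid_unit(0.0, [0.2, 0.2, 5.0, 5.0]))
```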

19

matmanalog t1_iwmy9um wrote

I am studying some history of neural networks. Is this somehow related to the differing approaches of Rashevsky's group and the McCulloch-Pitts neuron? I know that both Pitts and McCulloch built on Rashevsky's research on the brain, but while Rashevsky used differential equations, the great innovation of the McCulloch-Pitts neuron was to work in discrete quanta of time. That simplification allowed logical formulas to be encoded into neurons, and from there came both the von Neumann computer and neural network theory as we know it.

Is this paper an attempt to revive Rashevsky's approach, i.e., to write continuous, time-dependent equations?

2

red75prime t1_iwpd9nj wrote

I have some background in math, but I don't know much about the history of computational neurobiology. Sorry.

2