Comments

ihateshadylandlords t1_iwjfm24 wrote

>"The new machine-learning models we call 'CfC's' replace the differential equation defining the computation of the neuron with a closed form approximation, preserving the beautiful properties of liquid networks without the need for numerical integration," says MIT Professor Daniela Rus, director of the Computer Science and Artificial Intelligence Laboratory (CSAIL) and senior author on the new paper. "CfC models are causal, compact, explainable, and efficient to train and predict. They open the way to trustworthy machine learning for safety-critical applications."
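For intuition, here's a toy sketch of what "replacing the differential equation with a closed-form approximation" buys you. This is my own simplification using a plain linear ODE, not the paper's actual CfC equations:

```python
import numpy as np

# ODE-style neuron: the state is *defined* by dx/dt = -x/tau + I,
# so a solver has to march through many small steps to get x(t).
def ode_neuron(x0, I, steps, tau=1.0, dt=0.01):
    x = x0
    for _ in range(steps):                 # Euler integration, step by step
        x = x + dt * (-x / tau + I)
    return x

# Closed-form counterpart of the same linear ODE (constant input I):
# x(t) = x0 * exp(-t/tau) + I * tau * (1 - exp(-t/tau)).
# No solver loop: x(t) is evaluated directly at any time t.
def closed_form_neuron(x0, I, t, tau=1.0):
    decay = np.exp(-t / tau)
    return x0 * decay + I * tau * (1.0 - decay)

print(ode_neuron(0.0, 0.5, steps=1000))      # ~0.49998, after 1000 solver steps
print(closed_form_neuron(0.0, 0.5, t=10.0))  # ~0.49998, from a single evaluation
```

The real CfC derivation is more involved (the time constant itself depends on state and input), but the computational win is the same: no numerical solver in the loop at training or inference time.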

Cool, excited to see what comes after this.

!RemindMe 3 years

68

Ezekiel_W t1_iwjzvtq wrote

Fantastic work, a major advance for AI.

15

red75prime t1_iwkjws4 wrote

> CfCs could bring value when: (1) data have limitations and irregularities (for example, medical data, financial time series, robotics and closed-loop control, and multi-agent autonomous systems in supervised and reinforcement learning schemes), (2) the training and inference efficiency of a model is important (for example, embedded applications) and (3) when interpretability matters.

Something akin to the cerebellum, it seems: better suited for continuous motor control (and some other tasks). Yet another component for human-level AI.

My 50%-probability AGI estimate went down from 2033 to 2030.

20

vhu9644 t1_iwkn78y wrote

I think I have the training to do this (math + BME undergrad, in grad school for comp bio), but I'm currently busy with some work. If nothing is posted in 2 days, send me a reminder and I'll try.

−2

red75prime t1_iwko8vv wrote

The article talks about continuous-time networks. Those networks deal with processes that are better approximated as smooth changes than as a sequence of discrete steps. Something like baseball vs. chess.

A liquid time-constant network is one possible implementation of a continuous-time network.

As far as I understand, liquid time-constant networks can adjust their "jerkiness" (time constant) depending on circumstances. That is, they can adjust how fast they change their outputs in reaction to a sudden change in input. To be clear, this is not reaction time (the time it takes for the network to begin changing its output).

For example, if you are driving on an icy road when it's snowing, you don't want to hit the brakes all the way down when you think for a split second that you noticed something ahead. But you may want to do it in good visibility conditions on a dry road.
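A toy sketch of that idea (my own illustration with a hand-picked formula, not the actual LTC equations): make the effective time constant a function of a "conditions" input, so the same neuron responds sluggishly or sharply depending on context:

```python
def ltc_like_step(x, target, conditions, dt=0.01):
    """One Euler step of a toy neuron whose time constant depends on the input.

    `conditions` stands in for something like visibility/road grip:
    good conditions -> small tau -> fast, decisive output changes;
    poor conditions -> large tau -> slow, cautious output changes.
    """
    tau = 1.0 / (0.1 + 2.0 * conditions)   # input-dependent time constant
    return x + dt * (target - x) / tau     # relax toward the target output

def react(conditions, steps=200):
    """Brake command after a sudden 'obstacle!' signal lasting `steps` steps."""
    brake = 0.0
    for _ in range(steps):
        brake = ltc_like_step(brake, 1.0, conditions)
    return brake

print(react(conditions=0.05))  # icy road, poor visibility: ~0.33, brakes eased in
print(react(conditions=1.0))   # dry road, clear view: ~0.99, brakes almost fully on
```

In the real models the time constant is a learned function of the state and input rather than a hand-written formula, but the behaviour is the same: the network modulates how quickly it commits to a change in its output.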

19

3pix t1_iwlunkd wrote

Hard

1

matmanalog t1_iwmy9um wrote

I am studying some history of neural networks. Is it related somehow to the differing approaches of Rashevsky's group and the McCulloch-Pitts neuron? I know that both Pitts and McCulloch built on Rashevsky's research on the brain, but while Rashevsky used differential equations, the great innovation of Pitts' neuron was to use discrete quanta of time. This simplified logic allowed logical formulas to be encoded into neurons, and from there led to both the von Neumann computer and neural network theory as we know it.

Is this paper an attempt to return to Rashevsky's approach of writing continuous, time-dependent equations?

2

94746382926 t1_iwool7i wrote

I mean, if I'm reading this right, this is potentially huge, right?

3

Danger-Dom t1_iwze8zr wrote

Yes, it opens up the possibility of large-scale networks that use this type of formulation. So its hugeness will depend on how useful larger versions of those networks turn out to be.

1