Comments
ihateshadylandlords t1_iwjfm24 wrote
>”The new machine-learning models we call 'CfC's' replace the differential equation defining the computation of the neuron with a closed form approximation, preserving the beautiful properties of liquid networks without the need for numerical integration," says MIT Professor Daniela Rus, director of the Computer Science and Artificial Intelligence Laboratory (CSAIL) and senior author on the new paper. "CfC models are causal, compact, explainable, and efficient to train and predict. They open the way to trustworthy machine learning for safety-critical applications."
Cool, excited to see what comes after this.
!RemindMe 3 years
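For anyone wondering what "replacing the differential equation with a closed-form approximation" buys in practice, here is a minimal Python sketch with a toy leaky-integrator neuron. This is not the CfC formulation from the paper; the function names and constants are made up for illustration. The point is that stepping an ODE forward numerically costs many small updates per time gap, while a closed-form solution gives the state after an arbitrary gap in one evaluation.

```python
import math

# Toy leaky-integrator neuron: dx/dt = -x/tau + I, with input I held constant
# over the step. Not the CfC model itself, just the general idea: numerical
# integration needs many substeps, the closed-form solution needs one call.

def euler_state(x0, I, tau, dt, n_substeps=1000):
    """Approximate the state after dt by explicit Euler with many small substeps."""
    x, h = x0, dt / n_substeps
    for _ in range(n_substeps):
        x += h * (-x / tau + I)
    return x

def closed_form_state(x0, I, tau, dt):
    """Exact solution of the same ODE, evaluated in one shot for any dt."""
    decay = math.exp(-dt / tau)
    return x0 * decay + I * tau * (1.0 - decay)

print(euler_state(0.0, 1.0, 0.5, 2.0))        # ~0.49, cost grows with n_substeps
print(closed_form_state(0.0, 1.0, 0.5, 2.0))  # ~0.49, constant cost per query
```

The closed-form call stays equally cheap whether the time gap is 0.01 seconds or 10 seconds, which is presumably why the quote stresses efficiency on irregularly sampled data.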
RemindMeBot t1_iwjfqs7 wrote
I will be messaging you in 3 years on 2025-11-16 01:59:14 UTC to remind you of this link
ReasonablyBadass t1_iwk1f0i wrote
Can someone ELI5 these liquid networks?
modestLife1 t1_iwk945n wrote
You're gonna be hit with a slew of reminders in the coming years.
night_dude t1_iwkhutj wrote
Please. We need a scientist in here. It sounds awesome - presumably emulating neuron function with AI is a huge deal?
red75prime t1_iwkjws4 wrote
> CfCs could bring value when: (1) data have limitations and irregularities (for example, medical data, financial time series, robotics and closed-loop control, and multi-agent autonomous systems in supervised and reinforcement learning schemes), (2) the training and inference efficiency of a model is important (for example, embedded applications) and (3) when interpretability matters.
Something akin to the cerebellum, it seems. It's better suited for continuous motor control (and some other tasks). Yet another component for human-level AI.
My 50% AGI estimate moved from 2033 down to 2030.
vhu9644 t1_iwkn78y wrote
I think I have the training to do this (math + BME undergrad, in grad school for comp bio), but I'm currently busy with some work. If nothing is posted in 2 days, send me a reminder and I'll try.
ssssssssssus t1_iwknpti wrote
!RemindMe 2 Years
red75prime t1_iwko8vv wrote
The article talks about continuous time networks. Those networks deal with processes that are better approximated as smooth changes than as a sequence of discrete steps. Something like baseball vs chess.
A liquid time-constant network is one possible implementation of a continuous time network.
As far as I understand, liquid time-constant networks can adjust their "jerkiness" (time constant) depending on circumstances. That is, they can adjust how fast they change their outputs in reaction to a sudden change in input. To be clear, that's not the same as reaction time (the time it takes for the network to begin changing its output).
For example, if you are driving on an icy road when it's snowing, you don't want to hit the brakes all the way down when you think for a split second that you noticed something ahead. But you may want to do it in good visibility conditions on a dry road.
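A rough way to picture that adjustable time constant, continuing the icy-road analogy, is the hand-rolled toy below. It is not the equations from the paper; `gate` stands in for what would actually be a learned function of the state and input. When the input suddenly jumps away from the state, the gate shrinks the effective time constant and the unit catches up quickly; when input and state agree, it moves sluggishly.

```python
import math

# Toy "liquid" update: the effective time constant shrinks when input and state
# disagree, so the unit tracks sudden input changes quickly and drifts slowly
# otherwise. The gate is a hand-written placeholder for a learned function, and
# the update is a simplification, not the paper's LTC/CfC equations.

def gate(x, u):
    """Placeholder 'learned' gate in (0, 1): larger when input u is far from state x."""
    return 1.0 / (1.0 + math.exp(-(4.0 * abs(u - x) - 2.0)))

def liquid_step(x, u, dt, base_tau=1.0):
    g = gate(x, u)
    tau_eff = base_tau / (1.0 + 4.0 * g)   # gate modulates the time constant
    decay = math.exp(-dt / tau_eff)
    return u + (x - u) * decay             # exact relaxation toward u over dt

x = 0.0
for t in range(10):
    u = 1.0 if t >= 5 else 0.0             # sudden step in the input at t = 5
    x = liquid_step(x, u, dt=0.1)
    print(t, round(x, 3))
```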
ihateshadylandlords t1_iwl9o98 wrote
drizel t1_iwmicnh wrote
Could this lead to more fluid motor control in robots?
matmanalog t1_iwmy9um wrote
I'm studying some history of neural networks. Is this related somehow to the different approaches of Rashevsky's group and the McCulloch-Pitts neuron? I know that both Pitts and McCulloch built on Rashevsky's research on the brain, but while the latter used differential equations, the great innovation of Pitts's neuron was to use discrete quanta of time. That simplified logic allowed logical formulas to be encoded into neurons, and from there came both the von Neumann computer and neural network theory as we know it.
Is this paper an attempt to revive Rashevsky's approach, i.e. to write continuous, time-dependent equations?
ThePerson654321 t1_iwn3fdj wrote
It really is amazing how naive you are.
[deleted] t1_iwnbifi wrote
That wasn't a very nice thing to say in this context.
94746382926 t1_iwool7i wrote
I mean, if I'm reading this right, this is potentially huge, right?
red75prime t1_iwpd9nj wrote
I have some background in math, but I don't know much about the history of computational neurobiology. Sorry.
Danger-Dom t1_iwze8zr wrote
Yes, it opens up the possibility of large-scale networks that use this type of formulation. So its hugeness will depend on how useful larger versions of those networks turn out to be.
blueSGL t1_iwj4xs8 wrote
direct link to the nature paper: https://www.nature.com/articles/s42256-022-00556-7