
chromoscience OP t1_ix5qptl wrote

Please support by following us. Thank you! Woot!

Spending time doing nothing helps artificial neural networks learn faster

Humans need 7 to 13 hours of sleep per night, depending on age. A great deal happens during this time: hormone levels shift, the body relaxes, and heart rate, breathing, and metabolism ebb and flow. The brain, however, does not simply switch off.

Maxim Bazhenov, PhD, professor of medicine and a sleep researcher at the University of California San Diego School of Medicine, said the brain is very busy while we sleep, replaying what we have learned during the day. Sleep helps reorganize memories and presents them in their most effective form.

In earlier published work, Bazhenov and colleagues described how sleep builds rational memory, the ability to remember arbitrary or indirect associations between objects, people, or events, and how it protects against forgetting old memories.

Artificial neural networks, whose architecture is loosely modeled on the human brain, underpin a wide range of technologies and systems, from fundamental research and medicine to finance and social media. In some respects, such as computational speed, they have surpassed human performance, but they fall short in one crucial area: when artificial neural networks learn sequentially, new information overwrites existing knowledge, a failure known as catastrophic forgetting.
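
To make the failure mode concrete, here is a toy sketch (not taken from the paper): a small linear model trained with plain gradient descent on one task and then on a second one loses the first task, because the second task's updates overwrite the shared weights.

```python
# Toy illustration of catastrophic forgetting (illustrative, not the paper's setup):
# a tiny linear model is trained on task A, then on task B alone, and its error on
# task A climbs back up because task B's updates overwrite the shared weights.
import numpy as np

rng = np.random.default_rng(0)
w = np.zeros(10)                              # weights shared by both tasks
target_a, target_b = rng.normal(size=10), rng.normal(size=10)

def train(w, target, steps=400, lr=0.05):
    for _ in range(steps):
        x = rng.normal(size=10)
        error = x @ w - x @ target            # prediction error on one sample
        w = w - lr * error * x                # plain stochastic gradient step
    return w

def loss(w, target):
    return float(np.mean((w - target) ** 2))

w = train(w, target_a)
print("after task A: loss A =", round(loss(w, target_a), 3))
w = train(w, target_b)                        # sequential training, no rehearsal
print("after task B: loss A =", round(loss(w, target_a), 3),
      "| loss B =", round(loss(w, target_b), 3))
```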

In contrast, Bazhenov said, the human brain learns continuously and incorporates new information into existing knowledge, and it typically learns best when new training is interleaved with periods of sleep for memory consolidation.

In the new study, senior author Bazhenov and colleagues describe how biologically inspired models can reduce the risk of catastrophic forgetting in artificial neural networks, increasing their usefulness across a spectrum of research interests. Their article was published in PLOS Computational Biology on November 18, 2022.

The researchers used spiking neural networks, which mimic natural neural systems by transmitting information as discrete events (spikes) at particular times rather than continuously.
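
As a rough illustration of what "spiking" means here, a minimal leaky integrate-and-fire neuron (a standard textbook model, not the network used in the study) integrates its input and emits a discrete spike whenever its membrane potential crosses a threshold:

```python
# Minimal leaky integrate-and-fire neuron, for illustration only; constants are
# arbitrary. The membrane potential integrates input, and a spike is the discrete
# event emitted when it crosses the threshold, after which the potential resets.
import numpy as np

def lif_spike_times(input_current, dt=1.0, tau=20.0, v_thresh=1.0, v_reset=0.0):
    """Return the time steps at which the neuron spikes."""
    v, spikes = 0.0, []
    for t, i_t in enumerate(input_current):
        v += dt / tau * (-v + i_t)        # leaky integration of the input
        if v >= v_thresh:                 # threshold crossing -> discrete spike
            spikes.append(t)
            v = v_reset                   # reset after the spike
    return spikes

current = np.concatenate([np.zeros(50), 1.5 * np.ones(150)])  # step input
print(lif_spike_times(current))           # spikes occur only while input is on
```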

They found that catastrophic forgetting was mitigated when the spiking networks were trained on a new task with periodic off-line intervals that mimicked sleep. Much like the human brain, the study's authors said, "sleep" allowed the networks to replay old memories without explicitly requiring the old training data.
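
A structural sketch of that training schedule might look like the following. The helper names (awake_phase, sleep_phase), the toy network, and the simple Hebbian-style rule are illustrative assumptions, not the paper's actual implementation; the point is only the alternation between supervised task training and unsupervised, self-driven "sleep."

```python
# Structural sketch (assumed, not the paper's code): supervised task training is
# interleaved with "sleep" periods in which the network is driven only by its own
# noisy activity and weights change via a local Hebbian-style rule.
import numpy as np

rng = np.random.default_rng(1)
W = rng.normal(scale=0.1, size=(20, 20))       # recurrent weights of a toy network

def awake_phase(W, task_data, lr=0.05):
    """Error-driven updates on labelled patterns from the current task."""
    for x, target in task_data:
        y = np.tanh(W @ x)
        W = W + lr * np.outer(target - y, x)   # supervised-style update
    return W

def sleep_phase(W, steps=100, lr=0.01):
    """Off-line period: no task data, only self-generated noisy activity."""
    x = rng.normal(size=W.shape[1])
    for _ in range(steps):
        y = np.tanh(W @ x) + 0.1 * rng.normal(size=W.shape[0])  # noisy replay
        W = W + lr * np.outer(y, x)            # local Hebbian strengthening
        W = W * 0.999                          # mild decay keeps weights bounded
        x = y
    return W

def make_task(n=10):
    return [(rng.normal(size=20), rng.normal(size=20)) for _ in range(n)]

task_a, task_b = make_task(), make_task()
W = awake_phase(W, task_a)
W = sleep_phase(W)          # sleep after task A
W = awake_phase(W, task_b)
W = sleep_phase(W)          # sleep after task B; earlier patterns can be replayed
```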

In the human brain, memories are represented by patterns of synaptic weight, the strength or amplitude of the connection between two neurons.

According to Bazhenov, when we learn new information, neurons fire in a specific order, and this strengthens the synapses between them. During sleep, the spiking patterns learned while awake are repeated spontaneously. This is called reactivation or replay.

Synaptic plasticity, the capacity to change or reshape synapses, is still present during sleep, and it can further strengthen the synaptic weight patterns that represent a memory, helping to prevent forgetting and enabling the transfer of knowledge from old tasks to new ones.
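
One widely used local rule of this kind is spike-timing-dependent plasticity (STDP), sketched below with illustrative constants; the paper's plasticity rules may differ, but the idea is that replaying spikes in the learned order strengthens the corresponding synapses, while the reversed order weakens them.

```python
# Minimal pairwise STDP rule (illustrative constants, not the paper's):
# the sign and size of the weight change depend on the relative timing
# of the presynaptic and postsynaptic spikes.
import numpy as np

def stdp_delta(t_pre, t_post, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Weight change for one pre/post spike pair as a function of timing (ms)."""
    dt = t_post - t_pre
    if dt > 0:    # pre fires before post -> potentiation
        return a_plus * np.exp(-dt / tau)
    else:         # post fires before (or with) pre -> depression
        return -a_minus * np.exp(dt / tau)

# Replaying the learned order (pre at 10 ms, post at 15 ms) strengthens the
# synapse; the reversed order would weaken it.
print(stdp_delta(10.0, 15.0))   # positive weight change
print(stdp_delta(15.0, 10.0))   # negative weight change
```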

Bazhenov and coworkers found that applying this approach to artificial neural networks helped protect them against catastrophic forgetting.

It meant that these networks could keep learning, much like humans or animals. A better understanding of how the brain processes information during sleep could also help improve memory in people, since improving sleep patterns can improve memory.

In other studies, Bazhenov said, his group uses computational tools to develop optimal strategies for applying stimulation during sleep, such as auditory tones that enhance sleep rhythms and improve learning. This may be especially important when memory is not at its best, for example when it declines with aging or in conditions such as Alzheimer's disease.

Sources:

Golden R, Delanois JE, Sanda P, Bazhenov M (2022) Sleep prevents catastrophic forgetting in spiking neural networks by forming a joint synaptic weight representation. PLoS Comput Biol 18(11): e1010628. https://doi.org/10.1371/journal.pcbi.1010628

https://today.ucsd.edu/story/artificial-neural-networks-learn-better-when-they-spend-time-not-learning-at-all

44

ShufflePlay t1_ix6bzhu wrote

So androids do dream of electric sheep. Very cool. Wow.

6

[deleted] t1_ix6fw84 wrote

I have a feeling this paper, or a follow-up paper closely related to this line of work, will become a seminal paper across domains. Absolutely fascinating

6

xeneks t1_ix6brry wrote

This is why the most intelligent people you get in science are kindergarten kids, where they actually get to rest on a daybed after a meal. :)

Definitely will follow, but also will encourage workplaces to have daybeds or places where you can take a power nap, so that quality, skill level, and competence continually increase rather than stagnate.

1