Comments



[deleted] t1_ix601on wrote

I just skimmed it a bit so this is what I grok of the big idea:

I'm cramming for a test on similar-ish material right now, so I won't go too deep into what they were trying to solve, which is this: when human minds, and neural networks of the sort they were working with, learn from data that is extremely similar to earlier data, the "solution" to the earlier "problem" gets overwritten to an extent.

In the case of this paper, problem_1 is to find a sequence of actions when the "objects" are presented horizontally, and problem_2 is the identical task with the objects presented vertically. Think of a maze rotated 90 degrees.

If you train this network on problem_1 for a while, then train it on problem_2 for a while, it forgets how to solve problem_1.

If you train it on sequences of problem_1, then 2, then 1, then 2, and so on, it learns a solution that applies to both. Essentially some of its "neurons" learn to rotate the maze depending on orientation. However, this isn't how the human mind solves the issue, and solving it that way is impractical, especially for a large number of tasks.

What they did instead was to "batch" that retraining: first learn problem_1 well, then start learning problem_2, but after, say, five passes through the "maze" of problem_2, the network "sleeps", and the firing patterns that were characteristic of solving problem_1 are replayed by a module playing the role of the hippocampus. So rather than replaying the problem (the raw data), they replay the solution to problem_1.

They show that the results are statistically very similar to the interleaved approach of 1,2,1,2,1,2, where the network learns that 1 and 2 are essentially the same task with a slight variation. It essentially learns an abstraction (rotate the maze) and does so without having to store the raw data, which interleaving requires.
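To make the three training schedules concrete, here is a rough Python sketch; every name in it (`train_on`, `sleep_replay_phase`, `task1_batches`, and so on) is a placeholder I'm making up, not anything from the paper:

```python
# Rough sketch of the three training schedules discussed above.
# All names are hypothetical placeholders, not the paper's code.

def train_on(net, batch):
    """Placeholder for one supervised update on a batch (hypothetical)."""
    pass

def sleep_replay_phase(net):
    """Placeholder for an offline 'sleep' phase in which the network
    spontaneously replays its own earlier firing patterns, with no
    stored task-1 data (hypothetical)."""
    pass

def sequential(net, task1_batches, task2_batches):
    # Learn task 1, then task 2: task 1 gets overwritten (catastrophic forgetting).
    for batch in task1_batches:
        train_on(net, batch)
    for batch in task2_batches:
        train_on(net, batch)

def interleaved(net, task1_batches, task2_batches):
    # Alternate 1, 2, 1, 2, ...: learns a joint solution, but requires
    # keeping the raw task-1 data around forever.
    for b1, b2 in zip(task1_batches, task2_batches):
        train_on(net, b1)
        train_on(net, b2)

def sleep_batched(net, task1_batches, task2_batches, sleep_every=5):
    # Learn task 1 well, then learn task 2, pausing every few batches for a
    # "sleep" that replays task-1-like activity instead of stored data.
    for batch in task1_batches:
        train_on(net, batch)
    for i, batch in enumerate(task2_batches):
        train_on(net, batch)
        if (i + 1) % sleep_every == 0:
            sleep_replay_phase(net)
```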


Edit: here's a very relevant paper on how our brains do it, and likely what they were trying to mimic: https://pubmed.ncbi.nlm.nih.gov/19709631/

34

DrXaos t1_ix66waj wrote

The introduction of the paper is explanatory and not particularly technical.

Conventionally, artificial neural networks learn well only when old and new data are shuffled together when presented for training. People don't learn like that; they can concentrate on and learn new skills without forgetting old ones, but conventional neural network algorithms fail to do that. This paper presents a biologically inspired neural network model with a sleep phase, and the sleep-phase algorithm overcomes the problem.

27

[deleted] t1_ix6v9dc wrote

We also have problems with overwriting data without intervening sleep. Breaks can help to an extent as well: when you space out, your brain is compressing ideas into abstractions the way it does when you sleep, but only in short, less effective bursts.

This is why they sought to replicate that mechanism in silico.

8

Patarokun t1_ix5ydv2 wrote

This certainly makes sense as a reason sleep is such a universal and critical trait. Without it you essentially run out of "RAM" and forget stuff that you need to survive. Sleep helps move that RAM into long-term memory without overwriting the important stuff.

*I know the computer analogy has major issues; I'm just trying to parse what the study said in the best framework I understand.

131

[deleted] t1_ix6x5gl wrote

Naw, it's fine haha. Modern neuro and cognitive research often develops ML models for hypothesis generation and testing... because it works. ML draws extremely heavily from psych/neuro. I lean a lot on my psych background, and more recent training, to understand machine learning concepts as I push to transition into data science.

I think the more accurate way to think about it, though, is as compression and refinement. The brain tries to train itself to find a model that best fits what you experienced; it wants to extract the essence. Compression looks for equations that can reconstruct something without representing every detail. Same idea.

When you sleep, in fact, random noise is injected, just as it is when training machine learning models, so that you learn a more general idea rather than latching onto specifics that may not be significant. Before it has the abstraction/compression figured out, it doesn't know where the big idea ends and the details begin.
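For the machine-learning half of that claim, the standard trick is adding small random noise to the training inputs so the model fits the general pattern rather than memorizing each example. A minimal, generic sketch (nothing here is from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_batches(X, y, batch_size=32, noise_std=0.1):
    """Yield shuffled training batches with Gaussian noise added to the inputs,
    a common way to push a model toward the general pattern instead of
    memorizing exact examples."""
    order = rng.permutation(len(X))
    for start in range(0, len(X), batch_size):
        idx = order[start:start + batch_size]
        yield X[idx] + rng.normal(0.0, noise_std, size=X[idx].shape), y[idx]
```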

RAM is more like not being able to recite a poem after the first time you hear it, or not being able to do complex math without pen and paper.

24

Patarokun t1_ix6z0dd wrote

Interesting. How does random noise help ML not become too narrowly focused? I'm trying to think of examples, like how they'll kick the Boston Dynamics robots to test their reactions in unstable environments, but that's not quite right.

4

[deleted] t1_ix72jka wrote

Great question! It's a philosophically beautiful answer (I think).

Essentially everything in the universe exists in some state which resulted from some finite set of possibilities.

Your height, for example, is a combination of genetic, nutritional, and possibly behavioral influences. Each of those influences has a different magnitude of effect. You personally might have been a few inches taller or shorter if some of those variables had different values. On the whole, though, people are more or less the same height, with some degree of variation.

That variation is driven by those differences in genes and so on, but there's only so much those variables can differ; they have a finite set of possibilities. So it turns out that most people will be close to some average height, with a predictable degree of variation to either side: a bell curve.

However, you need to be sure you have a representative sample of people to determine what that average is and how much variation to expect. If you only looked at NBA players, you would get a very different idea of what the average height is and how much it varies.

If you were then to use that expectation of how tall people are to inform other decisions, like the size of cars, the size of garages, how much gas people use, and so on, you could end up with a less-than-ideal model of the world.

So the best thing to do would be to get a really big sample of people in different situations, so you can figure out the true average. You want to average out problems with how you selected the people you measured (the basketball team).

That isn't always possible or practical, so what we, and our brains, do is assume that some variation exists and simulate it by taking an average and then adding randomness distributed around that average, because that's how pretty much everything works anyway. We generate a bigger dataset from a smaller one, assuming some degree of randomness that tends toward a central limit.
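In code, "generate a bigger dataset from a smaller one by adding randomness around the average" looks roughly like this (the numbers are made up):

```python
import numpy as np

rng = np.random.default_rng(1)

# A small (and possibly biased) sample of heights in cm -- made-up numbers.
sample = np.array([168.0, 175.0, 181.0, 172.0, 190.0])

# Simulate a much larger population by assuming values scatter around the
# sample mean with roughly the sample's spread (a normal distribution).
simulated = rng.normal(loc=sample.mean(), scale=sample.std(ddof=1), size=10_000)

print(round(sample.mean(), 1), round(simulated.mean(), 1))  # both near the same average
```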

Randomness is useful in other ways as well. If you want to better estimate the influence basketball has on height, delete that variable at random: pretend it doesn't exist, run a simulation, and get a better idea of how the other variables play out.
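That "delete the variable and rerun the simulation" idea is essentially what permutation-style importance does. A rough sketch, where the `model` callable is a stand-in for whatever predictor you have:

```python
import numpy as np

rng = np.random.default_rng(2)

def influence_of(model, X, y, column):
    """Estimate one variable's influence by scrambling it (effectively deleting
    the information it carries) and measuring how much the error grows."""
    base_error = np.mean((model(X) - y) ** 2)
    X_scrambled = X.copy()
    X_scrambled[:, column] = rng.permutation(X_scrambled[:, column])
    return np.mean((model(X_scrambled) - y) ** 2) - base_error
```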

Statistics is beautiful

9

Patarokun t1_ix863nv wrote

OK, so the randomness lets the brain/AI get a better sense of the normal distribution. That makes sense.

And in this case sleep almost helps lump things into the different sigmas, so it's easy to make decisions without getting lost in data points.

2

elanalion t1_ix63vz2 wrote

Thank you! That really helps me understand what they meant.

6

ToShrt t1_ix8lzlg wrote

I thought your analogy worked great. I've always tried to find a good way to compare computers to the human body, and your analogy helped me make a great connection.

2

chromoscience OP t1_ix5qptl wrote

Please support by following us. Thank you! Woot!

Spending time doing nothing helps artificial neural networks learn faster

Humans require seven to 13 hours of sleep every night, depending on their age. Numerous things take place during this period: hormone levels change, the body relaxes, and heart rate, respiration, and metabolism fluctuate. It might seem that not much happens in the brain, but the opposite is true.

Maxim Bazhenov, PhD, professor of medicine and a sleep researcher at the University of California San Diego School of Medicine, says the brain is quite busy while we sleep, replaying what we have learned during the day. Sleep helps reorganize memories and presents them in their most efficient form.

In earlier published work, Bazhenov and colleagues described how sleep strengthens rational memory, the capacity to retain arbitrary or indirect associations between objects, people, or events, and guards against forgetting old memories.

Artificial neural networks borrow the architecture of the human brain to improve a wide range of technologies and systems, from fundamental research and medicine to finance and social media. In some areas, such as computing speed, they have exceeded human performance, but they fall short in one crucial respect: when artificial neural networks learn sequentially, new information overwrites existing knowledge, a phenomenon known as catastrophic forgetting.
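Catastrophic forgetting can be caricatured with a deliberately tiny example that assumes nothing about the paper's actual networks: a single trainable weight, two conflicting toy tasks, and sequential training that simply overwrites the first fit.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two toy "tasks" that demand opposite answers from a single weight.
x = rng.normal(size=200)
task_a = (x, 2.0 * x)    # task A: y = 2x
task_b = (x, -2.0 * x)   # task B: y = -2x

def sgd(w, task, lr=0.01, epochs=20):
    xs, ys = task
    for _ in range(epochs):
        for xi, yi in zip(xs, ys):
            w -= lr * (w * xi - yi) * xi   # gradient of squared error
    return w

def error(w, task):
    xs, ys = task
    return np.mean((w * xs - ys) ** 2)

w = sgd(0.0, task_a)
print("task A error after learning A:", round(error(w, task_a), 3))   # near 0
w = sgd(w, task_b)
print("task A error after learning B:", round(error(w, task_a), 3))   # large: A was overwritten
```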

Conversely, according to Bazhenov, the human brain continually learns and integrates new information into existing knowledge, and it typically learns best when fresh instruction is interleaved with periods of sleep for memory consolidation.

In an article published in PLOS Computational Biology on November 18, 2022, senior author Bazhenov and colleagues describe how biologically inspired models can help reduce the risk of catastrophic forgetting in artificial neural networks, increasing their usefulness across a spectrum of research interests.

The researchers used spiking neural networks, which imitate natural brain systems by transmitting information as discrete events (spikes) at particular times, rather than continuously.
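For readers unfamiliar with the term, "spiking" here means something like the textbook leaky integrate-and-fire neuron sketched below; this is a generic illustration, not the paper's exact neuron model.

```python
import numpy as np

def lif_spike_times(input_current, dt=1.0, tau=20.0, v_rest=0.0, v_thresh=1.0, v_reset=0.0):
    """Leaky integrate-and-fire: the membrane voltage leaks toward rest,
    integrates its input, and emits a discrete spike whenever it crosses
    threshold -- so information is carried by *when* spikes occur."""
    v = v_rest
    spike_times = []
    for t, i_t in enumerate(input_current):
        v += (-(v - v_rest) + i_t) * dt / tau   # leaky integration (Euler step)
        if v >= v_thresh:
            spike_times.append(t)
            v = v_reset                          # reset after each spike
    return spike_times

print(lif_spike_times(np.full(200, 1.5)))  # constant drive -> regularly spaced spikes
```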

They found that catastrophic forgetting was reduced when the spiking networks were trained on a new task with periodic offline intervals that mimicked sleep. According to the study's authors, the networks could replay old memories while "sleeping", just like the human brain, without explicitly requiring the old training data.

In the human brain, memories are represented by patterns of synaptic weight, the strength or amplitude of a connection between two neurons.

According to Bazhenov, as we acquire new information, neurons fire in a specific order, and this strengthens the synapses between them. When we sleep, the spiking patterns learned while awake are automatically replayed. This is called reactivation, or replay.

Synaptic plasticity, the capacity to modify or shape synapses, is still present during sleep and can further reinforce the synaptic weight patterns that represent the memory, helping to prevent forgetting and enabling the transfer of knowledge from old tasks to new ones.
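A loose sketch of what a "sleep" phase with ongoing plasticity might look like follows; the paper uses spike-timing-dependent plasticity in a multi-layer spiking network, whereas this is only a schematic Hebbian nudge on a random weight matrix.

```python
import numpy as np

rng = np.random.default_rng(3)

def sleep_phase(weights, steps=1000, noise_rate=0.05, lr=0.001, thresh=1.0):
    """Schematic 'sleep': drive the network with random input spikes only,
    let the strongest learned pathways reactivate, and strengthen connections
    between co-active cells (a Hebbian stand-in for the paper's STDP rule)."""
    n_out, n_in = weights.shape
    for _ in range(steps):
        noise_spikes = (rng.random(n_in) < noise_rate).astype(float)   # random drive
        reactivated = (weights @ noise_spikes > thresh).astype(float)  # cells pushed over threshold
        weights += lr * np.outer(reactivated, noise_spikes)            # reinforce co-activity
    return weights

# Usage sketch: weights learned while "awake" would be passed in here.
w = sleep_phase(rng.random((50, 100)) * 0.5)
```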

Bazhenov and coworkers found that this approach could be used to prevent catastrophic forgetting in artificial neural networks.

This means such networks can keep learning, much like humans or animals. A better understanding of how the brain processes information during sleep could also help improve human memory; optimizing sleep patterns can boost memory further.

In other studies, we employ computational models to design optimal strategies for applying stimulation during sleep, such as audio tones that enhance sleep rhythms and improve learning. This may be crucial in situations where memory isn't functioning at its best, such as when it declines with age or in certain medical conditions like Alzheimer's disease.

Sources:

Golden R, Delanois JE, Sanda P, Bazhenov M (2022) Sleep prevents catastrophic forgetting in spiking neural networks by forming a joint synaptic weight representation. PLoS Comput Biol 18(11): e1010628. https://doi.org/10.1371/journal.pcbi.1010628

https://today.ucsd.edu/story/artificial-neural-networks-learn-better-when-they-spend-time-not-learning-at-all

44

ShufflePlay t1_ix6bzhu wrote

So androids do dream of electric sheep. Very cool. Wow.

6

[deleted] t1_ix6fw84 wrote

I have a feeling this paper, or a follow-up paper closely related to this line of work, will become a seminal paper across domains. Absolutely fascinating

6

xeneks t1_ix6brry wrote

This is why the most intelligent people you get in science are kindergarten kids, who actually get to rest on a daybed after a meal. :)

Definitely will follow, but I'll also encourage workplaces to have daybeds or places where you can take a power nap, so that quality, skill level, and competence keep increasing rather than stagnating.

1

SwimmingAd2294 t1_ix6jb6f wrote

This is what happens when you throw a dictionary in a blender.

11

[deleted] t1_ix9my0h wrote

I have to say that, as someone who straddles both domains and lacks a doctorate in either, I think the writing is absolutely outstanding. I would have gotten at least three times as much sleep over the last couple of years if most papers were this well written.

1

jabulaya t1_ix60x9o wrote

My neural networks spiked under the description of joint synaptic weight representation.

7

[deleted] t1_ix6l83u wrote

Starting to think these neuroscientists don't know (applied) neural networks as well as they think they do.

5

TheShardsOfNarsil t1_ix65fae wrote

Ah yes, I hate when my lunar wane shaft experiences bivalent deplenaration

3

xeneks t1_ix6b9wt wrote

-remindme every 10 minutes /remindme every 10 minutes

How does that command go?

3

bars2021 t1_ix6s5el wrote

Always thought it was the amyloid-beta protein being "cleaned up" off the neurons during stage 4 sleep.

2

MurkDiesel t1_ix9n9a4 wrote

due to ptsd etc, i haven't been able to sleep longer than 2 hours at a time for 10 years now

i forget things every day, all day

2
