samloveshummus t1_iwhvkln wrote
Reply to comment by fusionliberty796 in Scientists Taught an AI to ‘Sleep’ So That It Doesn't Forget What It Learned, Like a Person. Researchers say counting sleep may be the best way for AIs to exhibit life-long learning. by mossadnik
As weights in a neural network, which is a data structure but not a database.
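A minimal toy sketch of that distinction in Python (illustrative only; the layer sizes and names here are made up, not taken from the article):

```python
import numpy as np

rng = np.random.default_rng(0)

# A database stores facts as explicit, addressable records.
database = {"cat": "mammal", "sparrow": "bird"}
print(database["cat"])  # direct lookup

# A neural network stores what it has learned only as continuous weights,
# spread across every parameter; there is no record to look up.
W1 = rng.normal(scale=0.1, size=(4, 8))  # learned weights, layer 1
W2 = rng.normal(scale=0.1, size=(8, 2))  # learned weights, layer 2

def forward(x):
    # Knowledge is expressed by transforming inputs, not by retrieval.
    h = np.maximum(0.0, x @ W1)  # ReLU hidden layer
    return h @ W2

print(forward(rng.normal(size=(1, 4))))
```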
samloveshummus t1_iwhvgyc wrote
Reply to comment by ItsAConspiracy in Can post-quantum encryption save the internet? by gregnoone
You're assuming quantum measurements are random, but hidden variables à la Bohm may actually be true.
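For readers unfamiliar with the reference: in de Broglie-Bohm (pilot-wave) theory, particle positions evolve deterministically under a guidance equation, and the apparent randomness of measurement comes only from uncertainty in the initial position, distributed as |ψ|². A standard single-particle form (not spelled out in the comment) is:

```latex
\frac{d\mathbf{Q}}{dt}
  = \frac{\hbar}{m}\,\operatorname{Im}\!\left(\frac{\nabla\psi}{\psi}\right)\bigg|_{\mathbf{x}=\mathbf{Q}(t)},
\qquad
\mathbf{Q}(0) \sim |\psi(\mathbf{x},0)|^{2}.
```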
samloveshummus t1_iwhuw4c wrote
Reply to comment by djdefenda in Scientists Taught an AI to ‘Sleep’ So That It Doesn't Forget What It Learned, Like a Person. Researchers say counting sleep may be the best way for AIs to exhibit life-long learning. by mossadnik
That contradicts your point.
samloveshummus t1_iw9jkfs wrote
Reply to comment by ItsAConspiracy in Can post-quantum encryption save the internet? by gregnoone
The proofs against using quantum entanglement to send information assume that there are no closed time-like curves, but such curves are allowed by the equations of general relativity, so there's no unconditional proof.
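For context, the proofs being referred to are versions of the no-communication theorem. Its key step is that no local operation by Alice changes Bob's reduced state, sketched here for a local unitary and under the standard assumptions (fixed causal structure, no closed time-like curves):

```latex
\rho_B' \;=\; \operatorname{Tr}_A\!\left[(U_A \otimes I_B)\,\rho_{AB}\,(U_A^{\dagger} \otimes I_B)\right]
       \;=\; \operatorname{Tr}_A\!\left[\rho_{AB}\right]
       \;=\; \rho_B .
```

Since Bob's statistics depend only on ρ_B, nothing Alice does locally can carry a signal; the comment's point is that this argument is only as strong as those background assumptions.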
samloveshummus t1_iw9ig76 wrote
Reply to comment by djdefenda in Scientists Taught an AI to ‘Sleep’ So That It Doesn't Forget What It Learned, Like a Person. Researchers say counting sleep may be the best way for AIs to exhibit life-long learning. by mossadnik
What's the difference between defragmentation and sleep?
samloveshummus t1_iw9id9t wrote
Reply to comment by fusionliberty796 in Scientists Taught an AI to ‘Sleep’ So That It Doesn't Forget What It Learned, Like a Person. Researchers say counting sleep may be the best way for AIs to exhibit life-long learning. by mossadnik
Why do you think sleep is simply saving to a database?
samloveshummus t1_iw9i9t8 wrote
Reply to comment by Specialist-Car1860 in Scientists Taught an AI to ‘Sleep’ So That It Doesn't Forget What It Learned, Like a Person. Researchers say counting sleep may be the best way for AIs to exhibit life-long learning. by mossadnik
How is that "dumbing down"? Do you object to the term "neural network"? Do you object to the term "machine learning"? Can you explain the neurological importance of sleep and say why you think it should not be directly relevant to artificial intelligence? Are you assuming that sleep is a biological artifact, not a matter of information processing?
samloveshummus t1_iw1qaft wrote
Reply to comment by martinkunev in [R] ZerO Initialization: Initializing Neural Networks with only Zeros and Ones by hardmaru
In addition to what the other commenters are saying, deeper work sometimes takes longer to have an impact. If you look through the history of science (and human endeavor more generally), there are many famous examples of people whose work revolutionized our modern world but who weren't recognized in their lifetimes; society needed time to catch up.
Now I think we can do a lot better than that. We're a global civilization that communicates at lightspeed. However, we're still big hairless apes with CPUs made of electric jelly, so we take a while to process things, and the more unexpected the idea, the more processing it needs.
samloveshummus t1_iw1oyhf wrote
Reply to comment by elcric_krej in [R] ZerO Initialization: Initializing Neural Networks with only Zeros and Ones by hardmaru
>I would love if it were picked up as a standard, it seems like the kind of thing that might get rid of a lot of the worst seed hacking out there.
I don't want to be facetious, but what's wrong with "seed hacking"? Maybe that's a fundamental part of making a good model.
If we took someone other than Albert Einstein and gave them the same education, the same career, the same influences and stresses, would that other person be equally likely to realise how to explain the photoelectric effect, Brownian motion, blackbody radiation, general relativity and E=mc^(2)? Or was there something special about Einstein's genes, meaning we needed both those initial conditions and that training schedule for it to work?
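As a rough illustration of what "seed hacking" means in practice (a hypothetical Python sketch; this is not the paper's actual ZerO scheme, just a contrast between random and deterministic initialization):

```python
import numpy as np

def random_init(shape, seed):
    # Different seeds give different starting weights, so results can be
    # cherry-picked by re-running with many seeds ("seed hacking").
    return np.random.default_rng(seed).normal(scale=0.02, size=shape)

def deterministic_init(shape):
    # A zeros-and-ones style init is identical on every run: nothing to tune.
    n, m = shape
    return np.eye(n, m)

print(np.allclose(random_init((4, 4), seed=0), random_init((4, 4), seed=1)))  # False
print(np.allclose(deterministic_init((4, 4)), deterministic_init((4, 4))))    # True
```

Whether removing that degree of freedom is a gain in reproducibility or, as the comment suggests, a loss of something that genuinely helps produce good models is exactly the point under debate.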
samloveshummus t1_iw1ofup wrote
Reply to comment by ThisIsMyStonerAcount in [R] ZerO Initialization: Initializing Neural Networks with only Zeros and Ones by hardmaru
Good point. But it's not the end of the world; those frameworks are open source, after all!
samloveshummus t1_iw1o1jg wrote
Reply to comment by maybelator in [R] ZerO Initialization: Initializing Neural Networks with only Zeros and Ones by hardmaru
This has to be one of the most useful comments I've read in nearly ten years on Reddit! You must be a gifted teacher.
samloveshummus t1_iwhwh8y wrote
Reply to comment by machinelearner77 in [R] ZerO Initialization: Initializing Neural Networks with only Zeros and Ones by hardmaru
Sure, but maybe it's inescapable.
When we recruit for a job, we first select a candidate from CVs and interviews, and only once we've chosen a candidate do we begin training them.
Do you think it makes sense to strive for a recruitment process that will get perfect results from any candidate, so we can stop wasting time on interviews and just hire whoever? Or is it inevitable that we have to select among candidates before we begin the training? Why should it be different for computers?