IntelArtiGen t1_ivfxox3 wrote

For many tasks you can't really compare, because we are continuously fed multiple types of raw data, while most models train on one specific type of data coming from one clean dataset.

>we can distinguish, say, dogs from cats given only a couple input examples.

Only after we've seen billions of images over months or years of life. We had a very large and very long "pretraining" before being able to perform "complex" tasks. So it depends on what you compare: most models need less data, but they train on a cleaner dataset, with architectures that are already optimized for that specific task.

103

billjames1685 OP t1_ivfyks8 wrote

I'm pretty sure it's been well established that we can learn from just a few images, even for things we haven't seen before; I remember reading a paper that tested this on obscure animals no one had seen before.

But yeah, we do have a lot of general pretraining; we have prior image classification experience, so there might be some transfer-learning effect endowing us with our few-shot capabilities. And part of the reason may be that our brains have way more compute than any model, so we can probably learn things better as well.

−39

ramblinginternetnerd t1_ivgawp3 wrote

The human brain was "pretrained" by around 500 million years of evolution (since the origin of the first vertebrates).


Fear of heights, for example, seems to be somewhat inborn.

75

blimpyway t1_iviwvnt wrote

Now let's figure out how to store such a pretrained model in ~700 MB of genetic code, without disturbing all the other information about how the non-brain enzymes, organs, tissues, etc. should be built and assembled.

8

uishax t1_ivixeq6 wrote

The Stable Diffusion model is only 4 GB, yet that's enough to describe almost anything visually. It's also an extremely size-unoptimized model.

Now, the ~800 MB is mostly spent on other things, but even 8 MB, if optimized enough, is plenty to encode a vast amount of visual information into our brains, including a hyper-efficient and accurate human face recognizer, a hatred of bugs/mice/spiders/snakes, a liking for fluffy and shiny stuff, etc.

9

KPTN25 t1_ivjf6z5 wrote

There are also epigenetics, the microbiome, and some other channels of inheritance.

2

IntelArtiGen t1_ivg33d5 wrote

>I’m pretty sure it’s been well established that we can learn after seeing a few images even for things we haven’t seen before

An 18-year-old can do that. Ask a 1-year-old to identify 50 different objects and it won't work, even though that 1-year-old has trained continuously on thousands of images during their first year of life. Of course you were talking about training an adult, not a 1-year-old, and that's why you can't really compare. To be an adult you first have to be a 1-year-old; you have to watch the world for thousands of days before you acquire the "pretraining" that lets adults handle all these tasks more easily than most models.

>our brains have way more compute than any model

That's not as well established as many people might think. We would want models to do what an 18-year-old can do, yet no deep learning model has been trained on real-world interactions for 18 years.

30

blimpyway t1_ivivcwr wrote

Tesla had collected 780M miles of driving data by 2016.

A human learning to drive for 16 h/day at an average speed of 30 mph for 18 years would have a dataset of ~3M miles.

So by those numbers humans are a couple of hundred times more sample-efficient than whatever Tesla and the other autonomous-driving companies are doing.
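
The back-of-the-envelope math (the 16 h/day and 30 mph figures are rough assumptions, and the ratio depends entirely on them):

```python
# Rough sample-efficiency comparison, using the loose assumptions above.
hours_per_day = 16     # waking hours of driving-relevant experience
avg_speed_mph = 30     # assumed average speed
years = 18

human_miles = hours_per_day * avg_speed_mph * 365 * years  # ~3.15M miles
tesla_miles = 780e6                                        # fleet miles by 2016

print(f"human: {human_miles / 1e6:.2f}M miles")    # human: 3.15M miles
print(f"ratio: {tesla_miles / human_miles:.0f}x")  # ratio: 247x
```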

−1

The_Real_RM t1_ivizjvu wrote

You are assuming Tesla actually needs all that data to train a competing model, and you're also ignoring all of the other training a human has before ever starting to drive. It's not at all clear who is more efficient.

I think a better way to compare is through the lens of energy: a human brain runs on about 40 W, while Tesla's models are trained on MW-scale computers. How do they compare in terms of total energy spent to achieve a given level of performance?
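
A rough version of that comparison (the cluster power and training duration below are hypothetical placeholders, not actual Tesla figures):

```python
# Back-of-the-envelope energy comparison. Cluster power and training
# duration are hypothetical placeholders, not actual Tesla figures.
HOURS_PER_YEAR = 365 * 24

brain_watts = 40
brain_years = 18
brain_mwh = brain_watts * brain_years * HOURS_PER_YEAR / 1e6  # ~6.3 MWh

cluster_mw = 1.0      # assume a 1 MW training cluster
training_days = 30    # assume a one-month training run
cluster_mwh = cluster_mw * training_days * 24                 # 720 MWh

print(f"brain:   {brain_mwh:.1f} MWh over {brain_years} years")
print(f"cluster: {cluster_mwh:.0f} MWh per training run")
```

Even with these made-up cluster numbers, the brain's entire 18-year energy budget comes out around two orders of magnitude below a single training run.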

5

IntelArtiGen t1_ivj6nih wrote

Probably not, because a 16-year-old human has 16 years of interactive, real-time navigation pretraining in a real-world environment before learning to drive. So it depends on how you count this pretraining.

And it also depends on the model's accuracy as a function of dataset size. Let's say Tesla's model is 80% accurate (a made-up number) while driving after training on 780M miles, a human is 75% accurate after 3M miles, and the Tesla model trained on only 3M miles instead of 780M is also 75% accurate. On those metrics alone, Tesla would be as efficient as a human.

No comparison is perfect, but we can't ignore that during the first years of our lives we train to understand the world while not being very efficient at performing tasks.

1

billjames1685 OP t1_ivg3dy8 wrote

Yeah, I addressed that in the second paragraph; we have been pretrained on enough image classification tasks that there are probably some transfer-learning-like effects behind our few-shot capabilities.

−13

IntelArtiGen t1_ivg4d74 wrote

I think it's not just "transfer learning" or "image classification"; it's also learning without explicitly using labels, like contrastive learning, self-supervised learning, reinforcement learning, etc.

12

billjames1685 OP t1_ivg5bij wrote

Yeah, I agree. Not sure if I'm misunderstanding you, but by "transfer learning" I basically mean that all of our pretraining (which occurred through a variety of methods, as you point out) has allowed us to richly understand images as a whole, so we can generalize well to semi-new tasks/domains.

−5

IntelArtiGen t1_ivg765c wrote

Ok, that's one way to put it, and I also agree. I tend not to use the concept of "transfer learning" for how we learn, because I think it's more appropriate for well-defined tasks, and we are rarely confronted with tasks as well-defined as the ones we give our models.

And transfer learning implies that you have to retrain part of the model on a new task, which is not exactly how I would define what we do. When I worked on reproducing how we learn words, I instead implemented the solution as a way to attach a new label to a representation we were already able to produce from our unsupervised pretraining. I don't know which way is the correct one; I just know that this works and that you can teach new words/labels to a model without retraining it.
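
A minimal sketch of that idea (the `encoder` here is a stand-in for whatever frozen pretrained model produces the representations, not the actual implementation):

```python
import numpy as np

class LabelAttacher:
    """Attach new labels to a frozen encoder's representations.

    No gradients, no retraining: "learning" a new word means storing the
    mean embedding of a few examples, and classifying means a
    nearest-centroid lookup in embedding space.
    """

    def __init__(self, encoder):
        self.encoder = encoder   # frozen pretrained model: input -> 1-D vector
        self.centroids = {}      # label -> mean embedding

    def teach(self, label, examples):
        embeddings = np.stack([self.encoder(x) for x in examples])
        self.centroids[label] = embeddings.mean(axis=0)

    def classify(self, x):
        z = self.encoder(x)
        return min(self.centroids,
                   key=lambda label: np.linalg.norm(z - self.centroids[label]))
```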

3

billjames1685 OP t1_ivh33jr wrote

That’s a fair point; I was kind of just using it as a general term.

2

OptimizedGarbage t1_ivh2jio wrote

There's a famous psych experiment where cats raised in a room containing only horizontal stripes never see vertical lines. After leaving the room, their brains simply haven't learned to recognize vertical lines, so they'll walk face-first into vertical bars without realizing they're there. A massive amount of data goes into learning the features needed to distinguish objects from each other and the basics of how objects in 3D space appear.

Similarly, if you pretrain a neural net on any random assortment of images, you can get very fast learning after that by fine-tuning on new classes. But the overwhelming majority of the data goes towards "how to interpret images in general", not "how to tell two novel object classes apart".
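
A sketch of that split in PyTorch (the backbone and class count are arbitrary placeholders):

```python
import torch
import torch.nn as nn
from torchvision import models

# Backbone pretrained on a large image corpus: this is the part that
# learned "how to interpret images in general".
model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
for p in model.parameters():
    p.requires_grad = False  # freeze the general-purpose features

# Only this new head is trained: "how to tell the novel classes apart".
num_novel_classes = 2
model.fc = nn.Linear(model.fc.in_features, num_novel_classes)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
# With good pretrained features, a handful of labeled examples per class
# is often enough to fit this linear head.
```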

6

billjames1685 OP t1_ivh2w9b wrote

Wow, that's fascinating. I think the studies I saw weren't quite saying what I thought they were, as I explained elsewhere; we have so much data and training just from being in the world and seeing stuff that we can richly classify images even in new domains, but yeah, it seems that pretraining is necessary. Thanks!

3

LordOfGalaxy t1_ivg17bi wrote

I don't think we have that much compute power. A human brain has about 100 billion neurons, and they can fire at about 100 Hz on average at best. Each neuron has about 1,000-10,000 synapses. If each firing counted as one operation for every synapse, that puts the compute power at an absolute maximum of about 100 POPS (peta-operations per second). A single graphics card can manage about 100 TFLOPS these days, so this is really only about a thousand graphics cards - nothing unachievable. And the human brain does a LOT more than any model we currently have. Something like a rat brain probably has less compute power than a single graphics card, and yet in many ways our models are incapable of what a rat can do. The problem is more fundamental than just "not enough compute", IMHO.
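
The upper bound, spelled out:

```python
# Upper-bound estimate from the figures above.
neurons = 1e11       # ~100 billion neurons
max_rate_hz = 100    # generous average firing rate
synapses = 1e4       # upper end of 1,000-10,000 synapses per neuron

brain_ops = neurons * max_rate_hz * synapses  # 1e17 ops/s = 100 POPS
gpu_flops = 100e12                            # ~100 TFLOPS per modern GPU

print(f"brain: {brain_ops:.0e} ops/s")              # brain: 1e+17 ops/s
print(f"GPUs needed: {brain_ops / gpu_flops:.0f}")  # GPUs needed: 1000
```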

5

billjames1685 OP t1_ivg2iio wrote

I am basing this on this blog post: https://timdettmers.com/2015/07/27/brain-vs-deep-learning-singularity/

Written in 2015, but the author has commented recently that he still holds the same opinion.

More recent (Jan 2022): https://www.scienceabc.com/humans/the-human-brain-vs-supercomputers-which-one-wins.html#evolution-of-computers

Generally, though, I don't think there is a consensus on this, because there are a lot of loosely defined terms and the brain is basically impossible to simulate.

I agree that the brain is just more optimized in general than NNs, but I'm pretty sure it's also just way more powerful.

The estimated computational capacity of the brain keeps increasing as we learn more about it.

5

LordOfGalaxy t1_ivg90on wrote

A lot of the author's estimates are on the higher side, which takes him to the ~10^21 number. Fair enough. But even then one must concede that, say, a rat brain, with 1,000 times fewer neurons, should still be within reach of modern supercomputers in terms of sheer processing power.

And even the authors of both those posts note that biological brains are VERY different from ANNs, which could confer significant advantages on them. That is my own view: the biological brain is just better at what it does, and our algorithms will require significant changes to match that level of efficiency. Of course, we still need significant advances at the hardware level as well (the human brain barely uses 30 W and still has some 3-6 orders of magnitude more computing power than the most powerful GPUs, which easily use ten times that much power), but even with such advances we may not be able to match the biological brain unless we make more fundamental changes to our methods.

8

billjames1685 OP t1_ivg9nxw wrote

Oh, absolutely. Our brain is absolutely insane: with 20-30 watts it possibly has more compute than supercomputers that run on several megawatts. The level of efficiency it displays is just ridiculous.

4

DarkCeldori t1_ivgnyd2 wrote

We also have to remember the brain has very sparse activity, IIRC on the order of 2%. Also, most of the neurons are in the cerebellum, and humans without a cerebellum still have general intelligence, albeit with some difficulty with precise motion. The neocortex only has about 16 billion neurons, and it is there that general intelligence occurs. That brings the 100 POPS down to 16 POPS, times 0.02 (the 2% activity) = 320 TOPS.

https://aiimpacts.org/rate-of-neuron-firing/
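
Continuing the same upper-bound arithmetic with those two corrections:

```python
# Restrict the earlier estimate to the neocortex and apply sparse firing.
cortex_neurons = 16e9   # ~16 billion neocortical neurons (vs ~1e11 total)
max_rate_hz = 100       # same generous firing-rate ceiling as before
synapses = 1e4          # same synapse count per neuron
sparsity = 0.02         # ~2% of neurons active at any time

cortex_ops = cortex_neurons * max_rate_hz * synapses * sparsity
print(f"{cortex_ops:.1e} ops/s")  # 3.2e+14 ops/s = 320 TOPS
```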

3

LordOfGalaxy t1_ivit0x2 wrote

True: not every neuron in the brain can be firing at the same time, and much of the brain is dedicated just to keeping us alive.

2

ginsunuva t1_iviwmie wrote

>some transfer learning

We know general physics, 3D projection, lighting, and biological concepts. So much transfer that it's always an entirely unfair comparison.

1