aggasalk t1_jea3rqy wrote

The same, I guess? When it comes down to it, binocular correspondence is as precise as the locations of photoreceptors in the retina - at least this is true for central (foveal) vision; it might be less precise in the periphery.

But... when it comes to binocular correspondence, the correspondence isn't really between receptors or pixels ("points") in the retina. Starting with the optic nerve, visual neurons have "receptive fields" that cover a fuzzy (but still clearly localized) region of the retina. So correspondence isn't technically between points but between areas.

But those areas come at many scales, and I tell you, it gets really complicated really fast when you look at it closely: pick a point in the binocular visual field (like, look at a single pixel on your screen). This point, if small enough, might fall on a single photoreceptor in each eye - photoreceptors at "corresponding positions". But the correspondence is being encoded, in the brain, by many, many neurons with receptive fields of different sizes, all of which overlap that point.

I guess this can suggest to you how to think about binocular correspondence. There is a tiny point of light shining out in space, and you look at it. Certain monocular neurons (in each eye, and downstream from there all the way to primary visual cortex) are excited by this point of light. Starting in primary visual cortex (and especially after that) there will be binocular neurons that are excited by that point, and that would be excited by it even if one eye were closed (meaning, they "want" a specific point in space, regardless of which eye it came from). That is, those binocular neurons are encoding the same point in space, and this is the basis of binocular correspondence.

If you move the point of light over so that it excites a different set of receptors, then the downstream activity will also shift, and some different neurons will be excited. But there will be overlap: some binocular neurons will be excited by both positions (they have "large receptive fields") but some will be more selective, excited only by one position or another. So not only is there binocular correspondence encoded, but it is multiscale - there is correspondence between points of many sizes.
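
A toy sketch of that multiscale encoding (invented numbers, nothing physiological): model each neuron's receptive field as a 1-D Gaussian over visual-field position, and ask which neurons in a small population respond above threshold to a point at two nearby positions.

```python
import math

def rf_response(center, sigma, point):
    """Gaussian receptive-field response to a point stimulus."""
    return math.exp(-0.5 * ((point - center) / sigma) ** 2)

# toy population: neurons centered on the same spot, with RF sizes at three scales
neurons = [("fine", 0.0, 0.2), ("medium", 0.0, 1.0), ("coarse", 0.0, 5.0)]

def responders(point, threshold=0.5):
    """Names of neurons driven above threshold by a point at this position."""
    return {name for name, c, s in neurons if rf_response(c, s, point) > threshold}

a = responders(0.0)   # point at the original position
b = responders(0.8)   # point shifted slightly
print(sorted(a & b))  # neurons excited by both positions
print(sorted(a - b))  # neurons excited only by the first
```

The coarse (and even medium) neurons respond at both positions, while the fine one distinguishes them - correspondence at several grains at once, in miniature.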


aggasalk t1_je2nbh0 wrote

It's ok to say it, but I think "same" might give the wrong sense, since it's not necessarily clear what "same" means here.

Correspondence is really the clearest concept - two retinal locations correspond in that they both respond to the same point in physical space, given certain optical & mechanical conditions. Those conditions are that the physical point is at the same distance as the vergence distance of the two eyes (in other words, where they are both 'pointing', taking the axis of an eye to be the line between the center of the pupil and the foveola of the retina).

Under those conditions, a point in physical space will be imaged on precisely corresponding positions in the two retinas, and then I suppose it's fine to think of those as "the same positions".

You get the finest depth information, about the smallest differences in depth, from slightly different inputs at the "same", i.e. precisely corresponding, positions. The coarser the spatial grain (i.e. the more spread out in space it is), the larger the depth difference it can signal. So coarser depth signals will be transmitted by neurons with larger receptive fields, and potentially also by neurons with looser or less precise binocular correspondence. But I think the general rule will be that binocular neurons are for corresponding positions, and lack of precision amounts to noise, not a special source of information in itself.


aggasalk t1_je0ov30 wrote

when you get to cortex, spatial tuning is rather precise, and binocular neurons are generally tuned for the same retinal position (this suggests another question of "what is retinal position anyway?" but I don't think that's actually too problematic). I'm sure if you looked at a large number of such neurons, you'd find that (like everything else) it's actually a random distribution, albeit very narrowly distributed.

the precision of this common input is the real basis of retinal correspondence (apart from the matter of the parenthetical question above). the more precise it is (the narrower that distribution), the more informative differences in input can be, and so the better for stereopsis.
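
that last point - the narrower the distribution of correspondence, the more informative input differences can be - can be sketched with a quick Monte Carlo (toy numbers throughout, not a model of real neurons):

```python
import random
import statistics

random.seed(0)

def disparity_estimates(true_disparity, jitter_sd, n=10000):
    """Each neuron's 'measured' disparity = true disparity plus noise
    from imprecise retinal correspondence (a toy model)."""
    return [true_disparity + random.gauss(0.0, jitter_sd) for _ in range(n)]

def discriminability(d1, d2, jitter_sd):
    """d-prime-like measure: separation of two disparities in noise units."""
    a = disparity_estimates(d1, jitter_sd)
    b = disparity_estimates(d2, jitter_sd)
    pooled_sd = statistics.stdev(a + b)
    return abs(statistics.mean(a) - statistics.mean(b)) / pooled_sd

precise = discriminability(0.0, 0.1, jitter_sd=0.05)
sloppy = discriminability(0.0, 0.1, jitter_sd=0.50)
print(precise, sloppy)  # tighter correspondence -> more discriminable disparities
```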


aggasalk t1_jdnfzaq wrote

yes, basically. there is a precise correspondence (the term for it is... "retinal correspondence") between positions in the two eyes.

deviations from this correspondence, within a limit (usually called Panum's area), allow for stereopsis, depth sensation from small differences in the retinal positions of features.

if a feature falls precisely on corresponding positions in the two eyes, it will feel like it's at the distance at which the two eyes are converging (the surface of such points is called the horopter). if the feature falls at slightly different positions, laterally displaced, this is "horizontal disparity", and then it feels like the feature is nearer or farther than the horopter (depending on the direction of the displacement).
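
the scale of this can be sketched with the textbook stereo-triangulation formula from machine vision - depth = focal length × baseline / disparity - which is an analogy to the two-eye geometry with made-up camera numbers, not a model of the retina:

```python
def depth_from_disparity(baseline_m, focal_px, disparity_px):
    """Classic two-camera stereo triangulation: depth = f * B / d."""
    return focal_px * baseline_m / disparity_px

# hypothetical numbers: a 6.5 cm "interocular" baseline, 1000 px focal length
for d_px in (50.0, 10.0, 2.0):
    print(d_px, "px disparity ->", depth_from_disparity(0.065, 1000.0, d_px), "m")
```

the inverse relationship is the key point: disparity shrinks rapidly with distance, which is part of why fine stereopsis is mainly useful at near distances.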

if the displacement is too large, it exceeds Panum's area and the feature cannot be fused between the two eyes, and you will see the feature twice, in two laterally displaced positions ("double vision").

this binocular correspondence begins as soon as the optic nerves enter the brain: the nerves partially cross at the optic chiasm, and the resulting tracts terminate in the thalamus (the lateral geniculate nuclei), where corresponding positions from the two eyes are brought into physical register - from there, still separated into eye-specific layers, the signals project to similar positions in the visual cortex, which is essentially a big map of visual field positions, and after a few synapses they are largely indistinguishable.


aggasalk t1_jceibkp wrote

all animal eyes use photopigments that are descended from a common ancestor - whether that ancestor (a basal eumetazoan, which would have looked more like a sponge larva than any animal you've ever seen) itself had eyes, I think probably not. but it seems that, having evolved these supremely useful molecules, evolution figured out pretty quickly the best way to make use of them ("build an eye").

(if there is something recent suggesting a common ancestor to all eyes, i'd really like to see it!)


aggasalk t1_j9uxqcp wrote

you can also very easily write a computer program to solve a maze, but most of us would be reluctant to attribute "thinking" or "intelligence" to the program, or the computer running it (except in a very casual sense of the word - like, the computer's taking a while to do something, we might say "it's thinking", but it's not really thinking).
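
for instance, a complete maze solver is a couple dozen lines of breadth-first search (a generic sketch, nothing special about this one):

```python
from collections import deque

def solve_maze(grid, start, goal):
    """Breadth-first search over a grid maze; '#' cells are walls.
    Returns a shortest path as a list of (row, col) cells, or None."""
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}  # doubles as the visited set
    queue = deque([start])
    while queue:
        r, c = queue.popleft()
        if (r, c) == goal:
            path = []
            cell = goal
            while cell is not None:  # walk predecessors back to the start
                path.append(cell)
                cell = prev[cell]
            return path[::-1]
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] != "#" and (nr, nc) not in prev:
                prev[(nr, nc)] = (r, c)
                queue.append((nr, nc))
    return None

maze = ["..#.",
        ".##.",
        "....",
        ".#.."]
path = solve_maze(maze, (0, 0), (3, 3))
print(path)
```

it reliably finds shortest paths, but there's nothing in there you'd be tempted to call thought - it's just exhaustive frontier expansion.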

in the case of computers, we're applying our own psychological concepts to phenomena where they're only appropriate from an extrinsic, third-person point of view - from the outside, what the machine is doing looks like what the intelligent organism does - while what's actually happening in the system is absolutely unlike the psychological phenomena on which those concepts are based.

by a classic analogy: just as a computer simulating a hurricane isn't windy or wet, a computer simulating a mind isn't thoughtful or intelligent.

i think the same applies to plants, slime molds, etc - they aren't simulating (that would imply some kind of intention), but what they're doing only appears like intelligence because it happens to resemble the behaviors that we associate with actual thought.


aggasalk t1_j8h60y6 wrote

> So, first answering your main question- elementary particles are all fungible. That means, they are truly identical, and they are impossible to label. So, if a photon is absorbed and then remitted, it doesn't really make sense to say "is it the same photon or a different one?" There aren't really "same" or "different" photons, there's just photons, unlabeled.

Isn't there any sense in which, say, a photon flying through space at time t and then a moment later at time t+1 is "the same photon", and in which two photons flying in opposite directions at the same moment and the same point in space (with different energies, even) are "different photons"?


aggasalk t1_j6h5wm4 wrote

right except for the reverse situation. though it is a more-or-less passive process, the rod/cone pigments are constantly regenerating at the same rate regardless of light level.

when you go into a dark room, the cones are stocked with pigment but it's useless - but it will take a few minutes for the rods to be fully stocked, since they were bleached by your earlier exposure to light.

when you step from darkness to light, there is a brief flash since suddenly all your rod pigments are isomerizing, but the cones are functioning from the get-go.


aggasalk t1_j6gyf6k wrote

The pupil's contribution to light adaptation is relatively minor.

The main action is in the retina. There are two types of photoreceptors in the retina, rods and cones. They work in similar ways: they are constantly producing substances called photopigments, and when the right kind of light hits a photopigment it transforms in such a way that it can activate the photoreceptor so that it sends a "light detected" signal.

Rods are extremely sensitive. A rod can potentially detect a single photon. So, you use your rods to see in very very low light conditions. But because of this great sensitivity, a moderate amount of light will 'bleach' the rods, destroying their photopigments and making them useless.

Cones are very insensitive. It takes hundreds or thousands of photons to trigger a cone. But this is fine, because most of the time you are in environments that are totally flooded with light, so there's almost always enough to trigger your cone detectors.

Since they are so insensitive, it's just about impossible to bleach the cones (a very bright flash, or glancing at direct sunlight can do it). So, they are always producing pigment that is available to detect photons and excite the cones.

Both systems are very homeostatic in their light regimes, producing pigment in decent amounts so that available light can be detected, without ever needing to know the current light levels. Luckily they overlap in their sensitivities, so there's no light level at which you have no functioning receptors.

Basically, long story short, when it's very dark your rods become useful because they're so sensitive to light and your cones become useless because they're so insensitive. When it's not very dark, your rods are bleached and useless, while your cones are now useful because there are enough photons to stimulate them.
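
As a cartoon of that long story (made-up constants, just to show the logic of constant regeneration vs. light- and sensitivity-dependent bleaching):

```python
def simulate_pigment(light, steps, p0=1.0, regen=0.02, sensitivity=0.5):
    """Toy photopigment model (illustrative constants, not measured values):
    pigment is bleached at a rate proportional to light level, receptor
    sensitivity, and remaining stock, while regenerating at a constant
    rate regardless of light, capped at a full stock of 1.0."""
    p = p0
    for _ in range(steps):
        p = p - light * sensitivity * p + regen
        p = max(0.0, min(1.0, p))
    return p

rods_in_daylight = simulate_pigment(light=1.0, steps=200, sensitivity=0.5)
cones_in_daylight = simulate_pigment(light=1.0, steps=200, sensitivity=0.001)
rods_dark_adapted = simulate_pigment(light=0.0, steps=200, p0=rods_in_daylight)
print(rods_in_daylight, cones_in_daylight, rods_dark_adapted)
```

With these toy numbers, the sensitive rods settle near an empty pigment stock in bright light while the insensitive cones stay full, and the rods only refill gradually once the light goes off - the same qualitative pattern as dark adaptation.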


aggasalk t1_j3dugka wrote

maybe a too-fine point to make here, but nothing is 'encoded' in DNA. DNA is the base level of the biological process - DNA is fed through molecular machines and the result is construction of various proteins and new molecular machines and etc, and you could see this as a process of "decoding" (stretching the information processing metaphor too far, imho). but nothing was ever "encoded" there.

DNA comes to be the way it is not by some kind of encoding process (it would if evolution were more like the Lamarckian idea), but by random mutation and natural selection, and is selected for the fact that, when it runs through that machinery, useful stuff is produced that supports the creation of more of that same DNA.
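
to be fair to the metaphor, the "decoding" direction is concrete enough to sketch: translation reads codons through a fixed lookup table. here's a toy fragment (the codon assignments are the real standard genetic code, but only a handful of them):

```python
# a small slice of the standard genetic code (DNA codons, coding strand)
CODON_TABLE = {
    "ATG": "Met", "TTT": "Phe", "AAA": "Lys", "GGC": "Gly",
    "GCT": "Ala", "TGG": "Trp", "TAA": "STOP",
}

def translate(dna):
    """Read a coding sequence three bases at a time, stopping at a stop codon."""
    protein = []
    for i in range(0, len(dna) - 2, 3):
        aa = CODON_TABLE[dna[i:i + 3]]
        if aa == "STOP":
            break
        protein.append(aa)
    return protein

print(translate("ATGTTTGGCTAA"))  # -> ['Met', 'Phe', 'Gly']
```

note there's no inverse process anywhere in the cell that writes a protein back into DNA - which is exactly the asymmetry the comment above is pointing at.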


aggasalk t1_iwwk6lt wrote

> Everything else outside of the fovea is "blurry" and much less colored. You mind fills in the blanks and you end up perceiving the world as clear an detailed.

I'm sorry, but you have hit a nerve... This stuff is not really true. It gets repeated over and over again (including, sadly, in undergrad psych and even perception classes...), but.. peripheral vision supports color vision just as well as foveal vision (in fact, better in some ways: there are no S-cones in the fovea!).

And "blur" is a very vague term here. Peripheral vision has lower resolution. But it has a precise resolution, and you see things at that resolution, exactly as you do foveally. But we don't say foveal vision is "blurry", even though it has limited resolution just as the periphery does.

You can see smaller details foveally than you can see peripherally, that's the right way to think of it. But things can appear sharp (or blurry) in fovea or periphery.



aggasalk t1_iwwiw29 wrote

We do not really know, most of the answers you're getting are very improvised..

Here's something we do know: the chameleon visual system is a bit unusual in that the optic nerves do not obviously lead to common targets in the chameleon brain.

In most animals there is some amount of binocular overlap between the two eyes - in humans, for example, this overlap is huge (the two eyes' views mostly overlap). If you look at where the neural pathways from the eyes lead in the brain, they quickly (after just a couple of synapses) converge on common targets, meaning that neurons at corresponding retinal positions lead to the same neurons in the brain.

This is the fundamental reason why what we see with our two eyes feels so integrated: it is more-or-less the same neurons in the brain handling inputs from both. It's not that the brain "does something" to integrate the inputs - it's simply that the inputs are largely handled by the same neural mechanisms. In fact, you don't even have conscious access to the individual eyes' views (if I poke a neuron in one of your eyes, you'll see a flash - but you'll have zero idea which eye it was in).

For the chameleon this isn't obviously true. Outputs from its two eyes stay segregated well into its brain: one optic nerve goes to one nucleus on one side of the brain, the other optic nerve goes to the other side. (Our optic nerves, by contrast, partially cross at the optic chiasm and re-sort into two "optic tracts" leading into the brain, each representing one side of the visual field, with half of each tract contributed by each eye.)

It's almost certainly true that if you follow the chameleon's visual pathways far enough, deep enough, they'll meet eventually, but just reading about the basic neuroanatomy, I'd guess that the chameleon's visual experience is of two largely-segregated eye-views, which sometimes include common content, which the chameleon "knows" (to whatever extent you can say a chameleon "knows" things) are the same, thanks to the fact it does have some neural convergence of those inputs, somewhere deep in its little lizard brain.


aggasalk t1_iuq4qul wrote

there's an old concept from cognitive psychology called the 'phonological loop', the idea is that this is a mechanism that we all use in memorizing things - something is put in phrase form of a certain (short) length, and just rotates through this audio-imagery buffer in order to force it into long-term memory (or, at least, to conserve it in short-term memory until we need it). sounds kind of like that to me..


aggasalk t1_iuntije wrote

Yes, now that I'm reading about it a bit, repetition (usually a critical part of what makes something a piece of music to begin with) is often cited as an important part of it.

there's an interesting book (by a psychologist who studies music perception) that seems to make this hypothesis very clearly: that music is essentially about repetition, and the occurrence of earworms is specifically related to this quality. (i just read the first few pages and skimmed through it, looks interesting though)


aggasalk t1_iunrxzf wrote

but then why don't we just as easily get spoken phrases as earworms? on your explanation, you'd think it would be just as common to have a line of Shakespeare or a piece of poetry lodged in your mind's ear, but it really just happens with music.

i think there's something special here that has to do with music specifically. dunno what that is.