Comments


testsubject_127 t1_iwev7ix wrote

Can somebody dumb this down for me, please? I'm interested, but there's a lot of jargon I don't understand.

9

antichain OP t1_iwgoc2g wrote

I'm not an expert (I came across this during a lit review on "emergence" in philosophy and found it refreshingly quantitative), but here's my take-away:

Say you're a scientist and you're trying to model some system. The standard reductionist assumption is that the best you can ever do is a complete model of the micro-scale. Sort of a Laplace's Demon kind of thing: if we had enough knowledge and computing power, we could solve biology by just reducing it to a bunch of quantum mechanics problems.

But the world doesn't seem to work that way. Macro-scale objects in biology seem to have a "causal power" of their own. When you say, idk, you got sick with the flu, it doesn't really "feel" right to say that your illness was "caused" by a bunch of interactions between atomic orbitals or whatever. The illness can be modeled pretty much perfectly in terms of "macro-scale" interactions between the bug and your biology. So even if Laplace's Demon could do it all based on particles, he apparently doesn't need to.

What (I think) the authors are arguing is that we can understand this in terms of redundancy. There's not really any point in modeling every atom in a flu virus because the atoms are, for the most part, totally redundant. If you wanted to predict the future of the illness by modeling every atom, you'd be doing something hugely wasteful, since two atoms in the shell (capsid) of the virus contribute basically the same thing.
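
To make that concrete, here's a minimal toy sketch of my own (not the paper's actual model; the variable names and numbers are made up for illustration): two "atoms" that are exact copies of one underlying state carry exactly the same information about the future, so measuring the second one adds nothing.

```python
# Toy illustration of redundancy: two "atoms" copy one underlying state,
# so the second atom adds no predictive information about the future.
import random
from collections import Counter
from math import log2

def mutual_information(pairs):
    """Empirical mutual information I(A; B), in bits, from (a, b) samples."""
    n = len(pairs)
    joint = Counter(pairs)
    pa = Counter(a for a, _ in pairs)
    pb = Counter(b for _, b in pairs)
    return sum((c / n) * log2((c / n) / ((pa[a] / n) * (pb[b] / n)))
               for (a, b), c in joint.items())

random.seed(0)
samples = []
for _ in range(100_000):
    state = random.randint(0, 1)                             # underlying micro-state
    x1, x2 = state, state                                    # two redundant "atoms"
    future = state if random.random() < 0.9 else 1 - state   # noisy future outcome
    samples.append((x1, x2, future))

print(mutual_information([(x1, y) for x1, _, y in samples]))         # I(X1; future)
print(mutual_information([((x1, x2), y) for x1, x2, y in samples]))  # I(X1, X2; future) -- identical
```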

So what the authors do is show (in a bunch of silly toy systems) that you can "coarse-grain" a system (sort of like lumping all the atoms together and saying "we don't care about individuals, just the structure you're part of"), and that in doing so, the redundant information about the future that was copied over many elements gets "converted" into "useful" information specific to the macro-scale elements.
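
And here's an equally hand-wavy sketch of the coarse-graining step itself, again my own toy rather than anything from the paper: lump five noisy micro-elements into a single majority-vote macro variable, and that one macro variable predicts the future almost as well as the full micro description, while any single micro element on its own does much worse.

```python
# Toy coarse-graining: a majority vote over noisy micro-elements captures
# essentially all of the predictive information they carry collectively.
import random
from collections import Counter
from math import log2

def mi(pairs):
    """Empirical mutual information, in bits, from (a, b) samples."""
    n = len(pairs)
    joint = Counter(pairs)
    pa = Counter(a for a, _ in pairs)
    pb = Counter(b for _, b in pairs)
    return sum((c / n) * log2((c / n) / ((pa[a] / n) * (pb[b] / n)))
               for (a, b), c in joint.items())

random.seed(0)
N = 5                                      # number of noisy micro-elements
samples = []
for _ in range(200_000):
    state = random.randint(0, 1)           # hidden variable the elements track
    micro = tuple(state if random.random() < 0.8 else 1 - state for _ in range(N))
    macro = int(sum(micro) > N / 2)        # coarse-graining: majority vote
    future = state if random.random() < 0.8 else 1 - state
    samples.append((micro, macro, future))

print(mi([(m[0], y) for m, _, y in samples]))   # one micro element: a weak predictor
print(mi([(mac, y) for _, mac, y in samples]))  # the macro variable: much stronger
print(mi([(m, y) for m, _, y in samples]))      # all 5 micro elements: barely better than the macro
```

The point being that the single macro number ends up carrying nearly all of the predictive information that was spread redundantly across the micro elements.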

I'm a bit fuzzy on this "synergy" construct - I'm not sure where that fits in.

5

uoahelperg t1_iwfqwj3 wrote

I’d also like a dumbed down version.

From what I read, it looks like the authors are saying that the information in a system can change as you move from a subset of its parts to the larger set, which makes it process inputs more or less efficiently.

If I understood the logic-gate portion correctly, it seems as simple as saying that 1+1+1+1+1 isn't quite the same as saying 5, because to do 5+1 you just do two steps, while to do 1+1+1+1+1+1 you do a bunch of steps, lol. But I am probably missing something.

Edit: also that when you add variability, for some things the smaller scale is not as consistent as the larger scale (or vice versa), and the idea is that there are different optimal scales at which to look at different things to get the most useful information.
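
Something like this toy example (mine, with made-up numbers) is how I'm picturing that last point: if a bunch of micro elements are noisy readouts of a slowly changing hidden state, each individual element looks erratic from one step to the next, while the coarse-grained majority variable is far more consistent, so the macro scale is the more useful one to model.

```python
# Toy version of "different optimal scales": 50 micro-elements are noisy
# readouts of a slowly changing hidden state. Each element is erratic step
# to step, but the coarse-grained majority variable is far more consistent.
import random

random.seed(1)
T, N, flip_p, noise = 10_000, 50, 0.01, 0.3

state = 0
micro_prev = macro_prev = None
micro_agree = macro_agree = steps = 0
for _ in range(T):
    if random.random() < flip_p:           # the hidden state flips only rarely
        state = 1 - state
    micro = [state if random.random() > noise else 1 - state for _ in range(N)]
    macro = int(sum(micro) > N / 2)        # coarse-grained (majority) description
    if micro_prev is not None:
        micro_agree += sum(m == p for m, p in zip(micro, micro_prev)) / N
        macro_agree += (macro == macro_prev)
        steps += 1
    micro_prev, macro_prev = micro, macro

print(f"average micro element matches its next step {micro_agree / steps:.1%} of the time")
print(f"macro (majority) variable matches its next step {macro_agree / steps:.1%} of the time")
```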

2

Ripheus23 t1_iwg14v3 wrote

Think of the Lévy hierarchy in set theory, or higher-order logic more generally. Arithmetic and proof structures built recursively over a first-order base can have surprisingly new characteristics: you can prove different things, in different ways, as you go up. E.g. the well-ordering theorem can be used to derive the axiom of choice but (over a weak enough base theory) not vice versa. Or in intuitionistic set theory the choice axiom lets you derive the law of excluded middle.

Subtly different structural inputs with substantially different outputs of content.
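
For what it's worth, the two derivations can be written down compactly; these are standard textbook facts rather than anything from this paper:

```latex
% Well-ordering gives choice directly: pick the least element of each set.
\[
\textsf{WO} \Rightarrow \textsf{AC}: \qquad
f(i) = \min\nolimits_{\prec} A_i
\quad \text{where } \prec \text{ well-orders } \textstyle\bigcup_{i \in I} A_i
\text{ and each } A_i \neq \emptyset .
\]
% In intuitionistic set theory, choice already implies excluded middle.
\[
\textsf{AC} \Rightarrow \textsf{LEM} \quad \text{(Diaconescu's theorem).}
\]
```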

1

woke_up_early t1_iwgzk5h wrote

add biology and chemistry to the mix

2