EyeSprout t1_j7sqjzc wrote

CNNs, and some very early optimizations for them that used to be useful but are no longer really needed now that our computers are faster (like Gabor functions), were partly inspired by neuroscience research. Attention mechanisms were also floating around in neuroscience for quite a while, in models of memory and retrieval, before they were streamlined and simplified into the form we see today.
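To make the Gabor point concrete, here's a rough sketch of what such a filter looks like in code. This is my own toy illustration with arbitrary parameter values, not something from any specific paper:

```python
import math

def gabor_kernel(size=9, sigma=2.0, theta=0.0, lam=4.0, psi=0.0):
    """A Gabor filter: a Gaussian envelope multiplied by a sinusoid.
    Similar-looking receptive fields were measured in visual cortex,
    which is where the neuroscience inspiration comes from."""
    half = size // 2
    kernel = []
    for y in range(-half, half + 1):
        row = []
        for x in range(-half, half + 1):
            # rotate the sinusoid's axis by theta
            x_t = x * math.cos(theta) + y * math.sin(theta)
            envelope = math.exp(-(x * x + y * y) / (2 * sigma ** 2))
            carrier = math.cos(2 * math.pi * x_t / lam + psi)
            row.append(envelope * carrier)
        kernel.append(row)
    return kernel

# A small bank of oriented edge detectors, the kind of thing early CNN
# work sometimes used for (or compared against) first-layer filters.
bank = [gabor_kernel(theta=k * math.pi / 4) for k in range(4)]
```

Modern CNNs just learn their first-layer filters from data, which is part of why hand-built banks like this fell out of use.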

In general, when things go from neuroscience to machine learning, it takes a lot of stripping down into the actually relevant and useful components before they become workable. Neuroscientists have a lot of ideas for mechanisms, but not all of them are useful...


EyeSprout t1_j4iav6e wrote

>I would say that economic development, higher literacy, better health outcomes, and robust human rights protections are inherently good.

I don't think most people would agree that economic development is inherently good, though.

From my point of view, it needs a bit of reframing. In the context of a discussion about politics, it can be useful to assume that the things you mentioned are inherently good, if only as an approximation. Otherwise discussions would take too long and delve into irrelevant topics; there's a practical limit there. But assuming democracy is an inherent good is a really bad approximation for many reasons.


EyeSprout t1_izv1sez wrote

>No one would want to bind their freedom to that specific reason.

By that, do you mean that the specific reason in question (assuming you're talking about the reddit-account-name condition) is easy enough to change (say, by someone hacking one's account), and no one is willing to lose their freedom over something like that (their account being hacked), so it's not a good condition?

Then is the condition just about how easy something is to change? I.e., the value of a person's freedom shouldn't change very easily under realistic circumstances? That does sound like a decent functional definition; it could work.

>If you value reason, then you can't deny that people's freedom are equal, since there is no basis for stating otherwise.

That paragraph is hard to understand, but at the end, do you just mean that qualitative/discrete properties of a person's freedom should be equal? A good argument for that is that there is a continuous spectrum of people, and any discrete cut we introduce into that continuity would necessarily be arbitrary.

So on one hand, it can make sense to restrict people's freedom of action in the sense of giving them varying amounts of income, because income is a continuous property, but it doesn't make sense to restrict people's freedom of action by allowing or disallowing specific actions, because that is a discrete property and would introduce an arbitrary cut?

i.e. your central argument is basically a topological one? That's an interesting idea and something I could get behind.

Edit: or more specifically, in the case of two continuous properties, any map/dependence would have some arbitrary parameters, so we can't really "reduce" it by making everyone equal. But when you map a continuous space to a discrete space, there's a clear preference there.


My own framework isn't really important to this conversation, but to explain some things:

>If morality is just rational interest, subject to game theoretic stability,

No, that's not quite what I mean. Morality has the property of (approximate) stability, but it is not uniquely defined by stability. There are many distinct systems with the property of stability and some of them can be called "morality" while calling others morality would be ridiculous.

>Why not be a free rider if there are no consequences to being so?

In any realistic situation, no one is able to tell ahead of time whether there are consequences or not, and simply assuming there are tends to lead to better results than constantly second-guessing whether there are.

But yeah, I get it, I tend to treat morality descriptively rather than prescriptively, which is a slightly different question. It's a matter of my interests; I always find the descriptive problem more interesting. Same thing happens when I talk about the problem of induction, it's more interesting to me to talk about when we can induct and not if we can induct.


EyeSprout t1_izup7d9 wrote

I don't think this answers my questions. I gave you a specific example, why is "in order to maximize the happiness of EyeSprout" not a good public justification? The above is an objective basis for differentiating my freedom from that of others; it's really a description of how some atoms in some server's memory are arranged. You claim that it's not reasonable, but why is it not reasonable?

The key point here is that people are not identical, and I can always define some set of properties that distinguishes me from other people, and hence value my freedom differently from theirs. There are more "common" ways to distinguish people, such as how much they contribute to society or how much money they make. Are you saying that no such set of conditions is "reasonable"? But you have been somehow restricting your moral system to only include humans. Why is including only humans a "reasonable" differentiation while other things are not? In general, why are some methods of differentiation "reasonable" and some not?

The reason I'm a stickler for this point is because there's an explanation I do accept for why people should follow morality, and the answer turns out to be "because morality is designed so that it's usually in their self-interest to follow morality", i.e. morality follows a game-theoretic stability principle.


EyeSprout t1_izulm25 wrote

Only if you know the initial state of the system and can describe the evolution of the system.

Deterministic systems are systems where any "future state" is a fixed function of the "initial state". If the observer knows both of these things, then it's predictable by definition. That doesn't mean that an observer actually knows the initial state or what the fixed function is. Things can get a little complicated if the system includes the observer itself.

There is also the question of whether it's even possible to know or approximate the initial state of a system. It's possible to have a system where there's a limit to how much information you can get about the initial state.
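As a toy illustration of that limit (my own, with arbitrary parameters): the logistic map is fully deterministic, yet an observer whose estimate of the initial state is off by even one part in ten billion loses all predictive power within a few dozen steps:

```python
def logistic_trajectory(x0, steps, r=4.0):
    """Iterate the logistic map x -> r*x*(1-x); deterministic and chaotic at r=4."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

exact = logistic_trajectory(0.3, 60)
approx = logistic_trajectory(0.3 + 1e-10, 60)  # observer's tiny measurement error
# the error roughly doubles each step, so the trajectories soon disagree completely
divergence = max(abs(p - q) for p, q in zip(exact, approx))
```

So "deterministic" and "predictable by an actual observer" really do come apart, even without bringing the observer into the system.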

(There's nothing special about the "math" part, "math" is really just any language that describes something precisely.)


EyeSprout t1_izujx2b wrote

The article doesn't really explain what "reason" is supposed to mean in this context, but the central argument is very much dependent on this one definition.

> Second, the value of reason is established by asking why. The question isn’t “who shall force me to be moral” or “what is moral,” both of which imply an outside force imposing morality through authority. But rather the question is like “what argument for morality can you provide that I can be reasonably expected to accept?” The skeptic will only accept a reason-based response.

What is a "reason-based response"? Obviously, "the happiness of people with reddit accounts named 'EyeSprout' should be maximized" is not what you would consider a "reason-based response", but on what grounds exactly? Usually by "reason" we mean a system of statements that can be derived from axioms... but every logical system depends on its axioms, so why can't I choose whatever I want as an axiom for my system?

What constraints are you putting on your allowed axioms?

>If the skeptic recognizes his own freedom, as well as that freedom being subject to reason, then he must accept the freedom of others. It cannot be reasonable that the skeptic’s own personal freedom is the only freedom worth valuing.

That requires a constraint on what "reason" is: whatever this "reason" means has the property that "it cannot be reasonable that the skeptic’s own personal freedom is the only freedom worth valuing". But why exactly would "reason" have that property?


EyeSprout t1_iyopym1 wrote

For example, in the iterated prisoner's dilemma, "always cooperate with your opponent" is not stable, because your opponent's optimal strategy against it is to defect every turn. The simulation I linked in my original comment shows a ton of strategies that are not stable, and shows quite directly how they would quickly get eliminated by evolution.
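Here's a quick sketch of that instability (the payoff numbers are the standard T=5, R=3, P=1, S=0 choice, but otherwise everything here is just my own illustration):

```python
# payoffs as (my score, their score) for each pair of moves
PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def play(strat_a, strat_b, rounds=100):
    """Iterated prisoner's dilemma; each strategy sees the opponent's history."""
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        a, b = strat_a(hist_b), strat_b(hist_a)
        pa, pb = PAYOFF[(a, b)]
        score_a += pa
        score_b += pb
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b

always_cooperate = lambda opp: 'C'
always_defect = lambda opp: 'D'

coop_score, defect_score = play(always_cooperate, always_defect)
# the defector strictly outscores the unconditional cooperator every round,
# so "always cooperate" is invadable and gets eliminated
```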

For a simple example in evolution: most mutations harm the organism and are unstable. If most organisms in a population had a very harmful mutation and a small subpopulation didn't, that small subpopulation would quickly take over the larger population. Hence, that mutation is unstable.

A slightly less trivial example would be blind altruism in a situation where your species is severely starved of resources. If most animals were blindly altruistic and a small number were not, taking advantage of the altruistic ones, then again, that small number would outcompete the larger population. So blind altruism isn't stable.
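A toy replicator-dynamics sketch of that takeover (all the numbers here are hypothetical, chosen only to make the point): exploiters receive the benefit the altruists provide but never pay the cost, so their fitness is strictly higher and the altruistic majority erodes:

```python
def step(p_altruist, benefit=2.0, cost=1.0, base=10.0):
    """One generation of discrete replicator dynamics.
    Everyone receives help in proportion to the altruist fraction;
    only altruists pay the cost of providing it."""
    f_alt = base + benefit * p_altruist - cost   # altruist fitness
    f_exp = base + benefit * p_altruist          # exploiter fitness (no cost)
    mean = p_altruist * f_alt + (1 - p_altruist) * f_exp
    return p_altruist * f_alt / mean

p = 0.99  # start with almost everyone blindly altruistic
for _ in range(200):
    p = step(p)
# p shrinks toward 0: the small exploiting minority takes over the population
```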

Of course we can't find many real-life examples; that is because they tend to be quickly eliminated by evolution. If they exist, it's usually only temporary.


EyeSprout t1_iyon8r9 wrote

The oxygen catastrophe is possibly the worst counterexample you could pick here. It happened slowly enough for all forms of life to settle into niches, enough for game theory to direct evolution and for a stability condition to apply. Those niches were approximately stable while they existed.

That's all that the stability condition needs to be applied. It's not some complicated concept.


EyeSprout t1_iyofnq7 wrote

The stability condition itself is an independent concept from "ideal" morality. I was using the idea of an "ideal" system of morality for reference because it's what people seem to be most familiar with, even if most people here probably don't believe in the existence of an ideal set of moral rules themselves.

As I said, the stability condition doesn't uniquely define a set of moral rules, it's possible that multiple different sets of moral rules can satisfy it at the same time. Different people with different values will still arrive at different sets of moral rules that all satisfy the stability condition.

A rationale behind caring about the stability condition in a system of morality is that actual systems of morality and ethics all tend to approximately follow the stability condition, due to evolutionary pressures. A moral system that is not (approximately) stable in practice won't persist very long and will be replaced by a different system. So the stability condition is "natural" and not arbitrarily decided by some individual values. Few conditions like that exist, so it's a valuable tool for analyzing problems of morality.


EyeSprout t1_iyldpwt wrote

I don't think this article sees or explains the full extent of how far math can go to describe morality. All it talks about are utility functions, but math can go so much further than that.

Many moral rules can arise naturally from social, iterated game theory. Some of you might know how the iterated prisoner's dilemma gives us the "golden rule" or "tit for tat" (for those of you who don't, look at this first before reading further: https://ncase.me/trust/), but stable strategies for more complex social games give rise to social punishment, and as a result, rules for deciding whom and what actions to punish.

Most people would believe that this merely explains how our moral rules became accepted and used in society, and doesn't really tell us what an "ideal" set of moral rules would be. But I think, even if it might not uniquely specify what morality is, it puts some strong constraints on what morality can be.

In particular, I think that morality should be (to some degree) stable/self-enforcing. By that, I mean that a set of moral rules should be chosen so that, if most of society is following it, then for most people, following the rules rather than discarding them is in their personal self-interest, in the same way that cooperation is in each player's self-interest in the iterated prisoner's dilemma under the "golden rule" or "tit for tat" strategy.
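To illustrate the self-enforcing part (again a sketch with the usual T=5, R=3, P=1, S=0 payoffs, nothing more): in a population playing tit for tat, a player who defects does strictly worse than one who follows the rule, so conforming is in each player's own interest:

```python
# payoffs as (my score, their score) for each pair of moves
PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def play(strat_a, strat_b, rounds=100):
    """Iterated prisoner's dilemma; each strategy sees the opponent's history."""
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        a, b = strat_a(hist_b), strat_b(hist_a)
        pa, pb = PAYOFF[(a, b)]
        score_a += pa
        score_b += pb
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b

# cooperate first, then mirror the opponent's last move
tit_for_tat = lambda opp: 'C' if not opp else opp[-1]
always_defect = lambda opp: 'D'

conformist, _ = play(tit_for_tat, tit_for_tat)   # follows the "moral rule"
deviant, _ = play(always_defect, tit_for_tat)    # tries to free-ride
# against a tit-for-tat society, deviating earns one exploitative payoff
# and then mutual punishment forever after, so it scores far worse
```

That's the sense in which tit for tat is self-enforcing: the rule itself supplies the incentive to keep following it.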