
TR_2016 t1_irzd8kc wrote

"the animals cannot know or care, they simply act in response to stimuli however their genetic programming causes them to, without thought or remorse."

"The real question is whether humans have anything other than instincts. Your ability to ask the question proves you have self-determination."


We are also acting based on our genetic programming and the input from the outside world. We are nothing more than an AI in a body, really; we just don't know our exact code. Our actions are nothing more than the output of a model. Randomness might be involved as well, but randomness is not free will.

"So you're saying that murdering every human in existence wouldn't be immoral? That seems like an odd, and very immoral, position to take."

It wouldn't be moral or immoral. Suppose you are testing out a short trial period of a "game", limited to 3 hours. You can do whatever you want; once the trial runs out, none of your actions matter anymore, you are locked out, and the game and your save files are erased forever.

Do your actions actually matter? No. Are we really in a different situation than an AI playing this game? The AI is coded to perform certain actions and preserve itself, while its actions can be heavily manipulated depending on the input presented by other players, and the other players are just different variations of the same AI.

Let's take up morality. The only basis for morality I can see is this: by indoctrinating society to believe it is "bad" to do certain things or to harm others, we reduce the likelihood that someone else will harm us. It can be argued that we are following the self-preservation task of our code by creating a "morality".

Let's say, however, that you have concluded that even if you harm others, you will still be safe from others inflicting the same harm on you. What incentive is there now to follow morality? There is another one: due to a combination of our code and our indoctrination, one might avoid harming others simply to avoid feeling "bad".

In the end it is not so different from a set of conditional jumps in assembly: basically a long series of risk/reward assessments, with everyone having a different threshold. Again, since we can't read our brains like computer code, we don't have the whole formula for the calculation.
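To make that analogy concrete, here is a toy sketch (entirely invented for illustration; the function and the numbers are made up, not anyone's actual "code"): a choice reduced to a single risk/reward comparison against a per-person threshold, where the `if` is exactly the kind of conditional jump I mean.

```python
# Toy model, invented for illustration: a "moral" choice as one
# risk/reward assessment compared against a per-person threshold.
# At the machine level, the `if` below is just a conditional jump.

def decide(reward: float, risk: float, threshold: float) -> str:
    """Act only if the expected payoff clears this person's threshold."""
    payoff = reward - risk      # crude risk/reward assessment
    if payoff > threshold:      # the "conditional jump"
        return "act"
    return "refrain"

# Same situation (same inputs), different "code" (different thresholds):
print(decide(reward=5.0, risk=3.0, threshold=1.0))  # -> act
print(decide(reward=5.0, risk=3.0, threshold=4.0))  # -> refrain
```

Identical inputs, two different outputs, and the only difference is the threshold baked into each "person".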

It might be impossible for some people to arrive at a certain conclusion no matter which inputs are presented to them, due to their "code"; or, if it is a critical function or a linear piece of code, it might execute regardless of which inputs we feed into the calculation.

Each person might come to a different conclusion on the same "moral" question, even if the inputs are identical, because their code might be different. Or you could take two people running the same or very similar code, indoctrinate them with different inputs during their developmental stage, and later they might each come to a different conclusion on the same moral question, as the sketch below illustrates.
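Again, a made-up cartoon of the idea (the class, the "bias" state, and the numbers are all invented): two agents running identical code, fed different inputs during their development stage, then asked the same moral question.

```python
# Invented illustration: identical code, different developmental inputs,
# different conclusions on the same question.

class Agent:
    def __init__(self) -> None:
        self.bias = 0.0  # internal state accumulated from developmental inputs

    def indoctrinate(self, inputs: list[float]) -> None:
        # Development stage: each input nudges the agent's internal state.
        self.bias += sum(inputs)

    def judge(self, severity: float) -> str:
        # The same "moral question" filtered through the learned bias.
        return "wrong" if severity + self.bias > 0 else "acceptable"

a, b = Agent(), Agent()        # identical code
a.indoctrinate([0.5, 0.5])     # one developmental history
b.indoctrinate([-0.5, -0.5])   # a different one
print(a.judge(0.0))            # -> wrong
print(b.judge(0.0))            # -> acceptable
```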

Since we don't know our code, our observation is mostly limited to checking which inputs produce different outcomes. There is no objectively correct or incorrect outcome.

It is entirely possible that if you could somehow modify Putin's brain the way we can modify an AI's code, you could easily make him declare peace or launch nuclear weapons, depending on your preference. So where is his free will? How are his actions anything but the output of a complicated model?


TMax01 t1_is0ygq2 wrote

>We are also acting based on our genetic programming and the input from the outside world.

That is merely the starting point of our behavior. Certainly, you can maintain your assumption as unfalsifiable, and believe that humans (including yourself) are nothing but biological robots like all other forms of life, with no conscious self-determination, by defining every act of art, poetry, philosophy, science, engineering, hope, or emotion as "genetic programming" or operant conditioning. But in doing so, you are, admittedly or not, proving the opposite, because animals have no need, desire, or capacity to do such a thing.

>So where is his free will?

This is the root of the problem, indeed. The conventional perspective you are regurgitating is the same dead end that philosophy has been mired in for thousands of years. But it is a false assumption that self-determination is, or requires, "free will".

All it would take for Putin to realize he made a mistake and call off his unjustified and illegal invasion of Ukraine would be recognizing that the teleology he used to justify it is both backwards and upside down. According to your worldview, and therefore as long as everyone else continues to agree with and promulgate the conventional philosophy underlying your worldview, this is effectively impossible: Putin believes he based his choice on logic, and so he will continue to see that decision as logical. In my philosophy, it is merely unlikely, because he has as much faith in the conventional philosophy as you do: he believes he is acting reasonably, despite the rather obvious fact that he is not. If everyone around Putin rejected the conventional belief that self-determination is free will, it wouldn't matter whether Putin did; he would still be much more likely to act reasonably rather than continue using false logic to remain convinced he is no different from an AI ape.

Thanks for your time. Hope it helps.


Krasmaniandevil t1_is8pwe5 wrote

Stop straw-manning; you can't characterize someone else's position with that much rhetorical flourish that early in your response.

Dirty pool.


TMax01 t1_isancbe wrote

Your contentiousness fails to provide a rebuttal to my comment.
