Xavion251

Xavion251 t1_je526zc wrote

>It fundamentally comes down to this: If your belief, or your logical conclusion, or our shared experience does not match reality, then which is correct?

How would you know if those things don't match reality? You can't observe reality independently of these methods (experience, belief, logical conclusion, science, etc.).

>Experiences are subjective, even if you and I agree on something that does not mean it is true. And it is absolutely not the same as logically proving that it is true.

Fundamentally, experience is all we have. Even in science, everything boils down to an experience: you experience the data and the experimentation through your senses. Experience is inescapable; it is all we have, all we are.

> I said that every logical conclusion is based on premises, and those premises are things that people take for granted, not things that are proven true with logic.

All premises eventually go back to experiences we can all agree on. Even things as basic as "the world exists", "humans exist", etc. You can't transcend/escape that, even with science.

You can't leave your own subjective experience; that's ultimately the only way anyone can know anything. You wouldn't be able to know that science works at all if you didn't experience it working.


Xavion251 t1_je0fq3x wrote

So...

"Not believing science is the be-all-end-all that the world should revolve around" = "anti-science stance"?

That's a very cult-like mindset.

Science is a very good methodology for gaining truths, particularly truths that lead to technological advancement. That doesn't mean it's a God we should all bow down to - which is how you seem to be treating it.


Xavion251 t1_jdzazg1 wrote

>The fact that even you admit this scenario is impossible is the exact problem. You cannot derive knowledge via an impossible process.

Taking real things to a hypothetical (but practically impossible) extreme is actually a very good way to logically work through something.

>Even the claim that all logical statements are true is not something that is proven, it is assumed to be true because it has not be demonstrated to be wrong.

It is impossible to understand anything (including science) if logic does not work. So we can't really even have a discussion on whether or not logic works; all conversation necessarily assumes that logic works.

>First you are assuming an infinite regress of provable premises,

No, I don't. Eventually all premises boil down to direct, shared experiences that everyone (or almost everyone) can agree on. So does science. So does everything, really.


Xavion251 t1_jdzafia wrote

>Aren’t they experiences?

All experiments are experiences, but not all experiences are experiments.

>Do you think science doesn’t involve reasoning?

All science involves reasoning, but not all reasoning is a part of science.

>Do you think mathematics is not reasoning?

Mathematics is logic, though not necessarily reasoning, depending on the definition. Even so, not all reasoning is math.

>Do you have knowledge of things you never experienced?

Nope. Everything must be experienced in some form. If I learn something from reading a book, I still had to experience the book in some way (usually reading via eyesight).


Xavion251 t1_jdy35dv wrote

Testing it is a way to confirm it, but ultimately not fundamentally necessary. Every logical deduction that turns out to be false is false because there was some error in the logic (either a false premise or a conclusion that doesn't follow).

That means that an (obviously purely hypothetical) person who is 100% perfect at understanding and applying logic could always deduce the truth with perfect accuracy - without testing anything.

While the above hypothetical example is of course impossible, it's simply taking what is real to an extreme - showing that logic does and must work regardless of whether science is involved.

Science is a good thing, it works very well - especially at obtaining knowledge that leads to technology. But that doesn't mean it's the be-all-end-all.

It's possible to be too "pro" a good thing and too "anti" a bad thing.


Xavion251 t1_jdsqvw6 wrote

>I actually differentiated science vs knowledge.

I didn't say you didn't. I said "You're still basically just re-defining "science" into "good way of gaining knowledge" ". In other words, you are essentially re-defining "science" to refer to all methods of gaining knowledge that work.

>I am claiming that science includes all the component steps of the process. You can't do science without deductive reasoning, so either we claim deductive reasoning is scientific, or we claim it is necessary but insufficient.

Just because deductive reasoning is part of science doesn't mean that all deductive reasoning is scientific. To make a crude analogy - that's like saying that wheels are fundamentally vehicular because they are a part of vehicles.

Describing every person's normal experiences, deductions, and intuitions as "scientific" simply because there are elements of overlap doesn't fit how the term is actually used.

>So why would I go with a more permissive definition? Because the alternative requires some arbitrary threshold, a point at which logic and observation and pattern recognition switches from "not science" to "science".

I don't think a definition of "following the scientific method" for "science" is an arbitrary threshold.

It is a "hard" line, but not all things are spectrums. Not all things that aren't spectrums are "arbitrary". Sometimes (admittedly somewhat rarely), there are just hard binaries.


Xavion251 t1_jdr0kku wrote

You're still basically just re-defining "science" into "good way of gaining knowledge". Saying a method is "more scientific" is then just a way of saying "better".

No offense, but this just seems like an ad hoc redefining of terms to preserve the idea that "science" is the be-all-end-all of everything. Why do that?


Xavion251 t1_jdnwmjd wrote

If you push the definition of science to such a degree - then yes, you can mostly justify "scientism" (although I still think there are a couple of exceptions involving experience).

However, if you push the definition that much - it becomes an almost meaningless term. Almost any form of learning and/or belief becomes "science".

This is not what people (like me) who oppose the mindset of scientism are targeting. And this is not what the people who promote scientism are saying.


Xavion251 t1_jac7mgs wrote

>You are literally a bunch of signals. So is an "AI" existing in a bunch of silicon.

Putting that (problematic IMO, as I'm a dualist) assumption aside and simply granting that it is true - human brains use different kinds of signals generated in different ways. Does that difference matter? Neither you nor I can prove it either way.

>As for your arguments, it seems that you argue that "since other humans look like you they must be conscious", and you then conclude that this implies that "entities that do not look human are not conscious.".

This is reductive. I'm not talking about superficial appearance. I wouldn't conclude that a picture of a human is conscious - for example.

But I would conclude that something that by all measures works, behaves, and looks (both inside and out, on every scale) like me probably is also conscious like me.

It would be rather contrived to suggest that, in a world of 7 billion creatures like me (and billions more creatures - animals - that are roughly like me), all of them except me in particular just look and act conscious while I alone am truly conscious.

>I may agree with the first, but that does not entail the opposite direction, and hence it can not be used here. It's like saying "if it rains the street is wet" and then concluding "if the street is wet it rains".

No, because we can observe the street being wet for other reasons. We can't observe consciousness at all (aside from our own).


Xavion251 t1_jabq7pa wrote

Well, also I share most of my DNA with other humans. They look roughly like me, act roughly like me, and biologically work the same as me.

So it's a far more reasonable, simple explanation that they are conscious just like I am. To a somewhat lesser degree, this can extend to higher animals as well.

But an AI that acts conscious still has some clear differences from me in how it works (and how it came to be). So I would place the odds significantly lower that it is really conscious and isn't just acting that way.

That said, I would still treat them as conscious to be on the safe side.


Xavion251 t1_jabpx2s wrote

Actually, bacteria only "make up most of our bodies" if you look at the raw cell-count.

Most bacterial cells are smaller than most human cells, so bacteria only make up about 1-3% of us by weight (a few pounds).

...But the "bacteria cells outnumber human cells" is a more provocative statement - so that's the one that gets spread around.


Xavion251 t1_jabpkv0 wrote

Except consciousness (fundamentally) cannot be measured externally, so how would you know if a machine is conscious?

You seem to be making a false equivalence between "conscious" and "acts conscious", which needn't be the case.

You cannot know if something that doesn't act conscious is actually experiencing things. Nor can you know that something that acts conscious (even if it says it is conscious) really is.


Xavion251 t1_jaboy0k wrote

Since my original comment got removed for not having enough arguments (fair enough, to be honest), I'll remix it with the comment I followed up with.

In short, this article is making a lot of completely unjustified assumptions.

Pretty much every proposition seems like a random, unjustifiable leap with no real logical flow.

"Pleasure/pain is required for consciousness"

"Only a biological nervous system could produce these feelings"

"AI does not have intent driving it"

"An AI has nothing to produce these feelings"

These are all just assumptions that can't be verified. Nor can they be logically deduced from any premises.

You could re-arrange the question "Is X conscious?" into "Does X have any subjective experience of anything?".

You cannot possibly know what an AI is or isn't experiencing (up to and including nothing at all i.e. no consciousness). Just as an AI could not possibly know that humans are conscious by studying our brains. To it, our nervous system would just be another "mechanism" for information processing.

How would you know whether a self-learning AI does or does not experience pleasure when it does what it's trained to do? How would you know whether it does or does not perceive its programming to do XYZ as an "intention" the same way we do?


Xavion251 t1_ito5y6g wrote

This article is naive as hell. This is the only way to actually solve the issue:

You can't make society be more moral. You can only incentivize moral behavior and disincentivize immoral behavior.

You can't change humanity. You can only change everything around us.

In short, the issue of "humanity's moral failure" cannot be addressed. Only its symptoms can.
