owlthatissuperb OP t1_it7iqkp wrote

Yup, this is 100% related to the Hard Problem.

> but as a coder, how could i write something that feels? It’s not possible.

What makes you so certain it's not possible?

We have a proof of concept that some configurations of matter feel--namely our brains. It's only a matter of time before we figure out how to reverse-engineer that system to create feeling machines. IMO, the question isn't if we will do this, but how will we know when we've done it?

3

wow_button t1_it7y573 wrote

If I write something that feels, it's not in the code. What you described - a 'configuration of matter' - yeah, maybe we could build that. But it's not just information. That's what I mean about it being 'artificial life'.

2

bread93096 t1_it88av1 wrote

Code itself may not be capable of feeling, but code in combination with a network of physical processors that mimic organic brain structures could conceivably feel.

1

wow_button t1_it8bnrn wrote

Agree - but that's my point: you're no longer creating artificial intelligence in the form of a program that can run on any computer, you're building artificial life - the 'network of physical processors' would be where feelings reside.

To grok my point about what a computer is, watch this thing: a computer made of people. Or this: https://xkcd.com/505/

How would you write a program that ran on that computer that had feelings? And before you object that it's too simple - all computers are just faster, more complicated versions of this.
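To make that concrete, here's a toy sketch (purely illustrative, in Python rather than rocks) of the kind of thing the rock computer is doing - binary addition as nothing but mechanical rule-following over patterns:

```python
# Illustrative only: a "rock" is just True/False, and addition is a fixed
# lookup applied over patterns of rocks, exactly as in the xkcd comic.
ROCK, NO_ROCK = True, False

def half_adder(a, b):
    """Add two one-bit 'rock patterns'; returns (sum_bit, carry_bit)."""
    return (a != b, a and b)  # XOR gives the sum bit, AND gives the carry

def add_bits(bits_a, bits_b):
    """Ripple-carry addition over equal-length lists of rocks (least significant bit first)."""
    result, carry = [], NO_ROCK
    for a, b in zip(bits_a, bits_b):
        s, c1 = half_adder(a, b)
        s, c2 = half_adder(s, carry)
        result.append(s)
        carry = c1 or c2
    result.append(carry)
    return result

# 2 + 3: patterns of rocks in, pattern of rocks out.
print(add_bits([NO_ROCK, ROCK], [ROCK, ROCK]))  # [True, False, True] == 5
```

Stack enough of these adders together and you get a CPU; nothing in the stack ever starts understanding arithmetic.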

1

bread93096 t1_it8h09l wrote

Computers in the future could become sufficiently advanced that the processors in an average home computer would be capable of running an AI program that is conscious. The only barrier is what hardware is widely available.

1

wow_button t1_it8sogx wrote

Right - but you're missing my point. That super-fast computer would be doing exactly the same thing that the XKCD comic does with rocks, just faster. It's Turing complete, so it can do everything that is possible to do with any conventional computer. But it's obvious that there is no consciousness or feeling in the pattern of rocks.

What I'm saying is that if we build AI, it will be because we created a certain configuration of matter that registers feelings, not because we've written code. Code could pretend to feel, but not feel.

1

bread93096 t1_it8uqtj wrote

Ah I see. Basically you're referring to the Chinese Room / black box problem. I'd argue that's more a problem with our perception than with consciousness itself. It is impossible for us to determine from the outside whether any system is conscious or not. This is true even of other human beings, as the p-zombie problem illustrates. But it would certainly be possible for an artificial system to be conscious in fact. We just wouldn't know about it.

2

wow_button t1_it98z33 wrote

Yeah, it's analogous to the black box problem, that's a good point. But what I'm saying is that computers are demonstrably a mechanistic black box. I get that maybe that's controversial? But that is literally what computers do. I've read arguments like Tononi's IIT, but the whole 'when it's complex and integrated, consciousness happens' does not convince me (though my understanding is admittedly shallow).

I can create a computer program that capitalizes all of the letters or words you type with a few lines of code. Does part of the computer understand what it's doing? No - the same way a see-saw does not understand what it's doing when you push down on the high end and the other side goes up. The computer is a mechanistic, deterministic machine that happens to be able to do some really cool and complicated stuff.
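For what it's worth, here's roughly what those few lines would look like (an illustrative Python sketch, nothing more):

```python
# Toy version of the capitalizing program described above: it maps each
# character to its uppercase form. Pure symbol manipulation, no comprehension.
import sys

def shout(text: str) -> str:
    return text.upper()

for line in sys.stdin:            # read whatever the user types
    print(shout(line), end="")    # echo it back in capital letters
```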

All other computer programs, including the most sophisticated current AI, are just more complicated versions of my simple program.

1

bread93096 t1_it9a9r9 wrote

The counter argument would be that the human brain is also an amalgam of relatively simple sub-processors, and consciousness is the result of these many sub-processors interacting. It’s supported by the fact that the parts of the brain that are associated with consciousness and sentience develop relatively late in the evolutionary timeline of most intelligent species. However until we can say conclusively how consciousness works in the human brain, we can’t say whether it is possible in an artificial system, and we are not at all close to solving that problem.

3

wow_button t1_it9joo8 wrote

Well said - my reasoning above is why I'm so drawn to Analytic Idealism. I can't get past my own experience with programming to make the leap that there is some magic number of logic gates, memory, and complex processing that emerges into consciousness. Materialism kind of dictates that that must be the case. Panpsychism also appealed to me (consciousness is fundamental to the material world), but Analytic Idealism scratches that itch in a much more satisfying way. Ultimately I guess I'm skeptical that a pure materialist perspective will grant us the insights into consciousness necessary to create a compelling AI. Thanks for the article and the convo!

1

iiioiia t1_itd1848 wrote

>It's only a matter of time before we figure out how to reverse-engineer that system to create feeling machines.

I think it is worth considering what we know about the origin of that fact.

1

owlthatissuperb OP t1_itldcoj wrote

Not sure I follow--what are you getting at?

1

iiioiia t1_itlel99 wrote

From what source did you acquire your knowledge, about the future?

1

owlthatissuperb OP t1_itlggli wrote

I mean, we already do it! We create feeling machines all the time. Sex is a pretty messy manufacturing process, but on the plus side it feels great.

Biotech has already started to encroach on this process. You don't need sex anymore, and fertilization can happen outside of the body. We still need a womb, but the road to an artificial womb seems pretty well-paved.

I do think there's an open question of how much we'll be able to wrap our arms around the process, and how fast we'll make progress. There's also a really interesting question around embodiment: do you have to make feeling machines out of meat? Or can you make them out of wires and metal?

Since we're in r/philosophy, I suppose I also have to admit the possibility that God endows each newborn with an immortal soul, and could choose not to ensoul children that were born of artificial processes. But barring a fairly extreme metaphysical scenario, it's only a matter of iteration.

2

iiioiia t1_itlh9zh wrote

> We create feeling machines all the time. Sex is a pretty messy manufacturing process, but on the plus side it feels great.

Is this a reference to sex toys? Those can certainly make a person feel things, but the toy itself is inanimate and non-conscious....at least I think so!

> Biotech has already started to encroach on this process. You don't need sex anymore, and fertilization can happen outside of the body. We still need a womb, but the road to an artificial womb seems pretty well-paved.

Sure, but also not feeling, or conscious.

> I do think there's an open question of how much we'll be able to wrap our arms around the process, and how fast we'll make progress. There's also a really interesting question around embodiment: do you have to make feeling machines out of meat? Or can you make them out of wires and metal?

Good questions. Some other good questions: Can it even be done? What even "is" "it" that we are creating? Has that question been worked through to conclusion yet? From my vantage point, science seems to be not so interested in those sorts of questions, if not even sometimes downright hostile to them! I am surely biased, but that doesn't nullify the question.

> Since we're in r/philosophy, I suppose I also have to admit the possibility that God endows each newborn with an immortal soul, and could choose not to ensoul children that were born of artificial processes.

A very popular just-so story, if you ask me.

> But barring a fairly extreme metaphysical scenario, it's only a matter of iteration.

I don't think I catch your meaning?

1

owlthatissuperb OP t1_itloe07 wrote

> Is this a reference to sex toys? Those can certainly make a person feel things, but the toy itself is inanimate and non-conscious....at least I think so!

No it's a reference to babies :) I realize calling a baby a "machine" is a little...odd. But I'm trying to point out that the line between artificial and natural life is a blurry one.

> Sure, but also not feeling, or conscious.

Are you saying that a baby created in an artificial womb wouldn't feel or be conscious?

> I don't think I catch your meaning?

I'm saying there are some very specific metaphysical scenarios (like a God who actively ensouls every new child) where my assumptions would break down. But under any kind of physicalist scenario (even weakly physicalist), there's a pretty clear (but long!) path to building an artificial brain.

The big question is, how will we know when we've done it? How will we be able to tell if that brain truly feels, even if it's functionally identical to a human brain? Can we rule out the possibility that God chose not to ensoul our artificial brain? Or that we've missed some crucial detail?

> From my vantage point, science seems to be not so interested in those sorts of questions, if not even sometimes downright hostile to them!

I agree. Most science-oriented people seem to think we'll have concrete answers to my questions above. I think we'll have concrete theories, but they'll rely on some big assumptions.

2

iiioiia t1_itlrwqe wrote

> No it's a reference to babies :) I realize calling a baby a "machine" is a little...odd. But I'm trying to point out that the line between artificial and natural life is a blurry one.

Aaaaahhhhlol, true dat.

Although, is "making babies" not a bit of a colloquialism? I mean, humans do play a crucially important role in the process, but is it objectively and precisely true that we actually make these babies, at least in the same way that we make a cake, a skyscraper, a B-52 bomber, etc.? Sometimes I wonder if the language we use (out of necessity, or so they say) might cast an illusion of sorts over that which lies underneath.

> Are you saying that a baby created in an artificial womb wouldn't feel or be conscious?

Sir, please use proper terminology: zygote.

As for the question itself: it's a good question! Unfortunately, I have no idea about what the truth of the matter is.

> I'm saying there are some very specific metaphysical scenarios (like a God who actively ensouls every new child) where my assumptions would break down.

Oh, I suspect the scenarios where your assumptions (or metaphysical framework) break down are far less specific than they may seem.

> But under any kind of physicalist scenario (even weakly physicalist), there's a pretty clear (but long!) path to building an artificial brain.

True. But then: is what is "Clear" necessarily what is True? Take that whole January 6 coup attempt as an example - "both sides" are "clear" on what happened there that day (and what led up to it, from a causality perspective), despite it being objectively unknown, and unknowable.

I am very wary of predictions (of the future, or otherwise) based on clarity.

> The big question is, how will we know when we've done it? How will we be able to tell if that brain truly feels, even if it's functionally identical to a human brain? Can we rule out the possibility that God chose not to ensoul our artificial brain? Or that we haven't missed some crucial detail?

A plausibly even bigger question: to what degree is it optimal that we are even pursuing this particular goal in the first place, all things considered? Or maybe an even more important question: have we even stopped to consider that question? Just how is it that "humanity" "decides" what it is that we should be doing, and what we should not be doing, anyways? I don't recall that topic being covered.

> I agree. Most science-oriented people seem to think we'll have concrete answers to my questions above. I think we'll have concrete theories, but they'll rely on some big assumptions.

Considering that, it kinda makes me wonder: how did it come about in the first place that The Science has seemingly ascended to The Throne of Authority (state-sanctioned, no less) on planet Earth? Was a vote of some sort held? Did I miss a news release? Because it sure seems to me that this is now considered A Fundamental Truth.

So many questions, so few answers.

1

owlthatissuperb OP t1_itqwpkm wrote

> Sir, please use proper terminology: zygote.

> As for the question itself: it's a good question! Unfortunately, I have no idea about what the truth of the matter is.

Sorry, I'm talking about after it's been born and raised. I think we should definitely assume that a grown adult born from an artificial womb has feelings. Though whether the zygote feels is definitely an interesting question too!

> Oh, I suspect the scenarios where your assumptions (or, metaphysical framework) break down are far less specific than it may seem.

Curious if you have any examples here. I have a pretty wide/open metaphysical view--I'm not even particularly committed to realism or physicalism. But a world with a hard wall against artificial consciousness would be especially weird. You'd need something along the lines of a divine decree to stop it, because you would somehow need to differentiate between brains grown in a human womb and brains grown in a laboratory. The lab can get arbitrarily close to recreating the human womb, up to and including cloning.

> True. But then: is what is "Clear" necessarily what is True? Take that whole January 6 coup attempt as an example - "both sides" are "clear" on what happened there that day (and what led up to it, from a causality perspective), despite it being objectively unknown, and unknowable.
>
> I am very wary of predictions (of the future, or otherwise) based on clarity.

This seems like a very nihilistic view of truth. You could use the same argument to deny pretty much any line of reasoning. It's pretty clear that the earth is not flat, but there's plenty of disagreement there too. Should that stop us from discussing geophysics?

Same with Jan 6. There are a lot of facts on the table, and conclusions that can be drawn from them. Some people--even a majority--might loudly disagree with those conclusions, but that doesn't make them false or "unknowable". (Note that I'm not including political narratives, like who deserves punishment or blame, as these are statements based on values, not facts--value statements are indeed unknowable).

> A plausibly even bigger question: to what degree is it optimal that we are even pursuing this particular goal in the first place, all things considered? Or maybe an even more important question: have we even stopped to consider that question? Just how is it that "humanity" "decides" what it is that we should be doing, and what we should not be doing, anyways? I don't recall that topic being covered.

I sympathize with this. But I tend much more towards descriptivism over prescriptivism. IMO these are things that will happen, no matter what you and I think should happen.

https://i.kym-cdn.com/entries/icons/original/000/040/653/goldblum-quote.jpeg

> Considering that, it kinda makes me wonder: how did it come about in the first place that The Science has seemingly ascended to The Throne of Authority (state-sanctioned, no less) on planet Earth? Was a vote of some sort held? Did I miss a news release? Because it sure seems to me that this is now considered A Fundamental Truth.

Very sympathetic to this feeling. We never vote on the Authority, but it does seem to be consensus-driven. Science is at least better than the Catholic Church, in that it doesn't physically torture dissenters. It just publicly ridicules them.

I'm hopeful a new Authority will emerge over the next century or so. One informed by science but not driven by it.

1

iiioiia t1_itr3gvz wrote

> Sorry I'm talking about after it's been born and raised. I think we should definitely assume that a grown adult born from an artificial womb has feelings. Though whether the zygote feels is definitely an interesting question too!

Ya I know, I was just teasing pro-choicers (no offence intended if you are one). :)

> Curious if you have any examples here. I have a pretty wide/open metaphysical view--I'm not even particularly committed to realism or physicalism.

As luck would have it, here's one:

> But a world with a hard wall against artificial consciousness would be especially weird

The world is as it is - you are referring to your perception/perspective upon it. If it so happens to be that there is, in fact, a "hard wall against artificial consciousness", that is normal, not weird - it only seems weird.

A common saying for this phenomenon is: "It's turtles all the way down!", but I propose this is also (slightly) incorrect - I would say: "There are turtles throughout the stack" - "is" implies something different, and introduces ambiguity that may not be noticed (people often conflate "is" and "equals").

> You'd need something along the lines of a divine decree to stop it...

That is only the case if it is in the state you think it is in and you wanted to change it to a different state. But you do not know if the state you think it is in is the state that it is in. Strangely, it may also be impossible for you to know this! But if you're the guy that writes the blog, I have a feeling you can pull it off! 😋

> ...because you would somehow need to differentiate between brains grown in a human womb and brains grown in a laboratory. The lab can get arbitrarily close to recreating the human womb, up to and including cloning.

Technically, it is not known how close they can get to anything in particular, particularly when it comes to (comprehensive) consciousness (which tends to not allow itself to be seen in a non-distorted manner - now that's weird, imho).

> This seems like a very nihilistic view of truth.

Epistemic strictness does seem to have that appearance, I hear it regularly (but not as often as solipsism).

> You could use the same argument to deny pretty much any line of reasoning.

Close, but not quite. You could use it to question the epistemic soundness of any line of reasoning, but if one was to use it to deny a claim (for which the truth is not known(!)), you would then be committing the very same error, except from the other side.

> It's pretty clear the the earth is not flat, but there's plenty of disagreement there too. Should that stop us from discussing geophysics?

I don't see why it should, and I certainly made no such recommendation.

> Same with Jan 6.

When you say "same", are you using the dictionary meaning ("identical; not different"), or the colloquial meaning ("it seems the same, according to my methodology: sub-perceptual heuristics")?

> There are a lot of facts on the table, and conclusions that can be drawn from them.

Are the conclusions epistemically sound?

Has a competent epistemic analysis of the various claims even been done? [As an aside: does it not seem more than a little strange to you that among all The Experts that guide us, nowhere are (genuine) philosophers to be found, particularly those who specialize in logic and epistemology?]

> Some people--even a majority--might loudly disagree with those conclusions, but that doesn't make them false or "unknowable".

Right: it is the fundamental falsehood (if that is the case) and unknowableness that makes it unknowable. And to make it even harder: consciousness often does not allow access to "it is unknown", presumably due to evolution (but I suspect culture and school curriculum might have more than a little to do with it).

> (Note that I'm not including political narratives, like who deserves punishment or blame, as these are statements based on values, not facts....

Mostly agree, except: your list is non-exhaustive, but you've made no explicit acknowledgement of that.

> ...value statements are indeed unknowable).

Perhaps, but I doubt as unknowable as it may seem!

>> A plausibly even bigger question: to what degree is it optimal that we are even pursuing this particular goal in the first place, all things considered? Or maybe an even more important question: have we even stopped to consider that question? Just how is it that "humanity" "decides" what it is that we should be doing, and what we should not be doing, anyways? I don't recall that topic being covered.

> I sympathize with this. But I tend much more towards descriptivism over prescriptivism.

Me too, hence my lack of prescription (innuendo is in the perceptual layer 😋).

> IMO these are things that will happen, no matter what you and I think should happen.

We shall see about that.

> Very sympathetic to this feeling. We never vote on the Authority, but it does seem to be consensus-driven.

I have no issues with (actual) democracy, but when it proceeds beyond that to redefining the nature of reality itself, as a "fact", with or without doing it under the guise of using the supreme methodology for truth discovery (doing it this way seems to be the choice...and strategically, it's a shrewd move)...well, this is where my patience runs out. (Actually: j/k - Luke 23:34 and all that).

> Science is at least better than the Catholic Church, in that it doesn't physically torture dissenters. It just publicly ridicules them.

I would say that depends on how one practices epistemology, and how deep one analyzes causality (if one is even aware of that phenomenon - once again: culture and school curriculum, a big part of the causality that underlies me being a conspiracy theorist).

> I'm hopeful a new Authority will emerge over the next century or so. One informed by science but not driven by it.

I am far more ambitious: a new methodology or norm emerges, but:

a) not based on authority

b) using whatever works, rather than artificially constraining oneself to a methodology like "science" that is known (by some) to be incredibly flawed and was not even designed for the problem space in the first place (and what we'd "probably" get IRL is not science, but The Science, like during COVID).

Also: I think hope is insufficient - someone has to make it happen.

It's certainly plausible that I am biased or have some error in my thinking, so I welcome and encourage you to point out any errors you see in my statements (while doing so: please distinguish between the statements themselves and your interpretation of them).

1

owlthatissuperb OP t1_itrgagw wrote

> Are the conclusions epistemically sound?
>
> Has a competent epistemic analysis of the various claims even been done? [As an aside: does it not seem more than a little strange to you that among all The Experts that guide us, nowhere are (genuine) philosophers to be found, particularly those who specialize in logic and epistemology?]

Have we done any analysis on our process for determining who is an expert on epistemology? Have we done analysis on that analysis?

It's epistemics all the way down!

(You might enjoy the short story No Particular Night or Morning by Ray Bradbury.)

2

iiioiia t1_itrhje2 wrote

> Have we done any analysis on our process for determining who is an expert on epistemology? Have we done analysis on that analysis?

Not that I know of!

See how bad of a spot we're in? And yet: no one does anything.

> It's epistemics all the way down!

True...but this does not mean the problem is intractable, or that the state of affairs cannot be improved upon immensely. On an absolute scale, we have no idea where we currently sit - in fact, what knowledge we do have indicates that things are very, very bad.

> (You might enjoy the short story No Particular Night or Morning by Ray Bradbury.)

Ah, thank you, will check it out!

1

iiioiia t1_itrjhfq wrote

https://thebestnotes.com/booknotes/illustrated_man_bradbury/Illustrated_Man_Study_Guide15.html

> On a rocket hurtling through outer space, Hitchcock and Clemens discuss Earth. Hitchcock no longer believes there is such thing as an Earth, and whatever evidence Clemens cites - dreams, memories, the sun - are dismissed as not being good enough. Hitchcock has determined to be practical and rely only on the evidence immediately available to him. Clemens ignores him and basks in his memories. Hitchcock warns that wallowing in memories will only hurt and he won't be hungry for lunch; later, he is correct and reminds Clemens of his prediction. Hitchcock then questions whether or not the stars are real, since no one has ever touched one.

Not to be pedantic, but both of these characters are shitty at logic & epistemology.

Humans seem unable to reliably distinguish between belief and knowledge, often including philosophers who have substantial academic knowledge (I know this from going to tons of philosophy meetups). It is a sad state of affairs...but then also: an extremely beneficial point to be starting from!

1