Galactus_Jones762

Galactus_Jones762 OP t1_jefvb0v wrote

Saying it “might” may seem irrelevant but it’s not technically wrong to claim something might have consciousness. It’s sort of a “definitely maybe” thing as opposed to a “definitely not, a priori impossible” type thing. That’s the tension I’m working between.

There’s probably a continuum of things between a rock and a person, along which the possibility of consciousness increases. I think AI is climbing this continuum as it begins to exhibit emergent properties and agentic behaviors, which it already has.

Also consider panpsychism. There is no good reason to be certain that something isn’t conscious. We just don’t know enough about consciousness to talk in certainties. I think doing so is a trap.

There is also no good reason to be certain something IS conscious, except perhaps ourselves, as we experience our subjective reality directly. We should live as if others are conscious but we have to fall short of certainty.

1

Galactus_Jones762 OP t1_jef6fps wrote

We mostly agree. It may be a wise default position to say it’s like Clippy till proven otherwise. But my claim, fwiw, is actually this: at some as-yet-unpinpointed level of complexity, consciousness may emerge. That level isn’t arbitrary; it’s unknown, but we know it exists, at least in brains. We don’t know whether it’s there in the most advanced AI systems. We also don’t know it’s not there. By “it,” I mean a level at which something we might usefully label “consciousness” could potentially emerge.

I’m also saying that even if we can define consciousness, we can’t prove it’s in our own brains merely by looking at them. What we can plausibly say is that there is “likely” a basic infrastructure in our brains that yields subjectivity, qualia, all that good stuff, and that infrastructure has been more or less alluded to in my article. Reductionist, sure, but I talked about the ion channels, firing rates, action potentials, number of synaptic connections, networks, and brain regions.

You are telling me that in order to say it is “likely” there’s a basic infrastructure yielding something beyond mere automaton behavior in AI, something like consciousness, I need more evidence. I agree.

Which is why I’m not saying it’s likely. I am merely offering that it is possible. And that’s plenty to divorce the Chinese Room from this conversation.

Searle devised his thought experiment when he had “eyes” on all the pieces of the system’s functionality. It’s a braindead obvious observation but also an important one.

His original conception of sequential token manipulation does not account for unexplained emergence in systems where we don’t have a model for how the data is structured at these massive scales.
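
To make that contrast concrete, here’s a minimal toy sketch (my own illustration, nothing from Searle or the article) of what having “eyes” on all the pieces looks like: a rulebook anyone can read line by line.

```python
# Hypothetical toy illustration (mine, not Searle's or the article's):
# a "Chinese Room" where every behavior is an explicit, inspectable rule.

RULEBOOK = {
    "你好": "你好，很高兴认识你。",      # symbol in -> symbol out, no understanding required
    "天气怎么样": "今天天气很好。",
}

def chinese_room(symbols: str) -> str:
    """Sequential token manipulation: look up the input, return the mapped output."""
    return RULEBOOK.get(symbols, "对不起，我不明白。")

# Searle could point at RULEBOOK and say: it's all right here, no understanding.
# With a large learned model there is no RULEBOOK to point at; the "rules" are
# smeared across billions of weights nobody can yet read off line by line.
print(chinese_room("你好"))
```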

Bottom line: one can’t argue that just because humans are made up of subatomic particles, they can never be conscious or are nothing but mechanical automatons.

We are entirely mechanical. Yet we don’t know how consciousness arises in us or any creature. We know it’s mechanical and have tracked the culprit to the brain but still don’t know precisely how it creates this subjective experience we have.

What’s new and different about AI, as opposed to older versions, is that we are starting to see edge-case activity where we really and truly no longer know why or how these things do what they do.

When emergence takes place in complex systems you have to put Searle’s experiment on notice—it’s in danger of being increasingly irrelevant as the behaviors become increasingly unexplainable by sequential token manipulation alone.

We should not blindly invoke history or rely too heavily on these prosaic “rules” handed down by great thinkers. It’s important that we understand their role but know when to come in out of the rain.

If any scholar invokes the Chinese Room as a one-and-done answer to smart laymen who are noticing something weird is happening in the AGI research field, that’s lazy.

It’s fine for a child who mistakes a chatbot for a true friend. It’s not adequate for those who are fully aware of Searle and what his dictum meant.

We have to be open to the possibility that AI is showing early signs of graduating from its Chinese room shackles. Maybe we need a new thought experiment.

0

Galactus_Jones762 OP t1_jedd3n0 wrote

I'm tired of hearing the Chinese Room thing. The thought experiment is quickly losing its relevance. Just look around; GPT and LaMDA are doing much more than Chinese Rooming. Before you object, READ. Then go ahead, object. I live for that shit.

(I am Galan.)

−7

Galactus_Jones762 t1_je8pmph wrote

People don’t know what the word “understand” means, because to define it you have to rely on other words that are ill-defined. We don’t fully know what it means to understand something because we don’t ourselves know how consciousness works. So to say, in a condescending, dismissive way, “LOOK…what you have to realize is it’s not understanding anything you idiots, no understanding is taking place,” is, aside from the annoying condescension, also a nonsense statement: “understand” is not well-defined, so saying it doesn’t understand is no more falsifiable than saying it does. Agreed that saying it doesn’t understand is irrelevant.

0

Galactus_Jones762 OP t1_je8lp5q wrote

Yeah, that sounds about right. Sometimes I think the difference between Kant and Nietzsche is the difference between what we want to be true and, sadly, what is actually true. Although…

If it’s true that we want something to be true, eventually it will be, even if we have to fucking tear into our brains or genes and tinker with them until we powerfully will the will to power to its knees.

Nietzsche can’t hide from Galactus. I wield the power cosmic.

1

Galactus_Jones762 OP t1_je6v4b0 wrote

Good points. Our primitive aspects are real. My hope is to get them out in the open to be examined. Just as man has always struggled, man also shows a consistent capacity to slowly improve morally. It doesn’t happen by itself. It happens through ideas and leadership that grow into movements that turn into change. There have always been the cynics and there have always been the idealists. There is massive futility in both, but less so for the idealists and dreamers, who rightfully own all the positive changes that have occurred in history, and when you add them up, you must admit there are MANY. Today’s idealists are no different. Most of us are spitting in the wind, but it is the combined efforts of millions that lead to the few whose names are associated with change. Which team are you on? That’s the only remaining question.

Your point about the mechanics of rhetoric and confirmation bias is astute. Articulating the problem is half of finding a solution, so you’re doing good work. Thank you.

1

Galactus_Jones762 OP t1_je6nei3 wrote

I think a lot of people don’t WANT certain outcomes that benefit a lot of humanity, but instead of saying so outright, they want to hide behind feasibility arguments or just have us “take a hint.”

Feasibility arguments and evidence are extremely important, and I’m not trying to dodge them. I’m trying to separate the wheat from the chaff.

My goal is to get out in the open what people actually want. To press the issue. Because if many people secretly or privately want 7 billion people to just die off or go away in the event AI automates all production, that’s sort of an important thing to know NOW.

While I believe in the value of all human life, I can perfectly see where someone is coming from if they say “once we no longer need a large consumer base or labor force, I can’t really say I’d want there to be 8 billion people. There’s no benefit in a large population at that point and I’d prefer the world was inherited by the producers and the power elite. We won’t need useless eaters, and deep down I measure the value of a life by the use value it creates for society, not by its inherent value.”

So again, I wouldn’t agree with that person at all, I’d be appalled, but I’d get where they were coming from. Basically a more Randian libertarian valuation of life versus a Kantian one. And that’s fine. But at the same time, since I’m Kantian, I worry that the Randian ideal is imminent. We are heading right for it, yet very very very few people seem to want to talk about the potential for genocide of the population by the power elite. This could take the form of a slow genocide, a choking off of resources and morale, or a fast one. Because it does sort of logically flow from the premise that life’s value is determined by use: why would the elite want to share a planet with so many people if they don’t see inherent value in all human life and that life is no longer creating value?

I’m not saying it’s plausible that they consciously have a plan, although I wouldn’t rule it out. Also likely is a subconscious belief system that will naturally play out really badly if we don’t talk openly about it.

As George Carlin pointed out, “THEY DON’T CARE ABOUT YOU.” But they do need you. What happens when they stop needing you?

1

Galactus_Jones762 OP t1_je3tyow wrote

Can you close out the email scraper? Actually it’s pretty simple.

Again, it’s just a hypothesis on the root causes of disagreement in debates about the future.

And I don't see the harm in understanding shared values and goals (or where they diverge) BEFORE diving into feasibility concerns.

In fact, while it’s not quite the same thing, it’s a lot like “BACKCASTING,” a method used in urban planning and other policy discussions. Participants first establish a shared vision of a desirable future. This takes place BEFORE getting into feasibility concerns.

Then, you work backward from a group-defined ideal outcome. This often leads to better conversation between people from different camps.

I would like to separate it into two discussions for more productive discourse: first, discuss desirability and get agreement; then discuss feasibility. I talk about this in my essay, but as usual, I wrote it around an example of someone not finding peace and prosperity desirable, which, the essay explains, is due to fear, selfishness, and ignorance. But regardless of why, it’s not about feasibility: he doesn’t WANT it, whether it’s feasible or not. But people have a hard time saying what they want, so it’s easier to just argue the feasibility, which is always safely speculative.

This might explain why so much ink is wasted on ENDLESS feasibility discussion and almost no time is spent on desirability among the people doing the discussing.

One example: a person who argues that UBI is infeasible, but who deep down just doesn’t like the IDEA of UBI, because change is scary and because he doesn’t like how it feels to imagine people getting money for nothing. But instead of saying all this, which is awkward, he rebuts with an endless maze of feasibility problems.

1

Galactus_Jones762 OP t1_je362no wrote

I agree with your sentiment. Honestly, my hypothesis is that some people jump into feasibility debates prematurely as a way to avoid discussing their true feelings about the desirability of an outcome. Just a guess! But if this is indeed happening, that's bad. I can't be sure it happens, or know when it's happening. But my essay offers a way to prevent it, by separating the discussion into desirability first, then feasibility.

I don't see the downside of this. But I realize it’s a contentious issue in some ways, as if I’m accusing ALL critics of UBI etc. of motivated reasoning. I’m not!

Have you ever sensed you were in a debate with someone who was arguing feasibility or unfeasibility simply because they "liked" or "disliked" the outcome? I think it’s a problem. Curious to hear your experience with this.

1

Galactus_Jones762 t1_j6gso3m wrote

UBI is about making access to basic money like access to public water. Just a constant. Not need-based, just there. Like the air. Because we can, and because we will eventually have to.

−1

Galactus_Jones762 t1_j6ff6sm wrote

AI will lower the headcount of the employed. Drastically. It’s amazing how many ways people have devised to talk around this. Yes, it probably won’t replace ALL the people who work in X right away. It may even augment people and make them more productive. But in the mid-term totality of its effect, we will not have 3 billion employed people. Jobs will become incredibly rare, which means we have to produce basic living goods and services and distribute them for free. Full stop. The end. With a cherry on top, because this should be seen as a good thing, but it isn’t, because people are terrified of breaking the no-free-lunch rule. It shatters their reality and they can’t tell up from down and they collapse into madness and ego-death.

2