PrivateFrank

PrivateFrank t1_jaeiese wrote

>And the final component is the UK grid system pays every electricity producer the price of the most expensive energy producer. If 1% of the grid is gas, 100% of the grid pays gas prices. Even on this one day, there was a gas power plant running as a back up (it just wasn't used).

>That last one is part of why very few UK homes have electricity based heating systems. There will never be a time when electricity costs less than gas, so gas has been the cheaper option.

IIRC the whole European energy market works like that.

And it wasn't 25 consecutive hours, either...
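For anyone curious, here's a rough toy sketch of how that pay-as-clear / marginal pricing works. All the plant names, capacities and prices are made up purely for illustration:

```python
# Toy sketch of pay-as-clear ("marginal") pricing, the mechanism used in
# UK/European wholesale electricity markets. All figures are made up.

def clearing_price(offers, demand_mwh):
    """Dispatch the cheapest offers first until demand is met; the most
    expensive unit actually needed sets the price paid to ALL of them."""
    dispatched = []
    remaining = demand_mwh
    for name, capacity_mwh, offer_gbp in sorted(offers, key=lambda o: o[2]):
        if remaining <= 0:
            break
        take = min(capacity_mwh, remaining)
        dispatched.append((name, take, offer_gbp))
        remaining -= take
    return dispatched[-1][2], dispatched  # price set by the marginal plant

# (plant, capacity in MWh, offer in £/MWh) - illustrative only
offers = [("wind", 900, 10), ("nuclear", 300, 30), ("gas", 500, 120)]

price, dispatched = clearing_price(offers, demand_mwh=1250)
print(price)       # 120 - gas sets the price despite supplying ~4% of demand
print(dispatched)  # [('wind', 900, 10), ('nuclear', 300, 30), ('gas', 50, 120)]
```

So even a sliver of gas on the margin drags the whole settlement price up to gas levels, which is the point being made above.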

6

PrivateFrank t1_ivplvi0 wrote

Hey I'm not an ML guy, just someone with an interest in philosophy of mind.

Intentionality and understanding are first-person (phenomenological) concepts, and I think that's enough to have the discussion. We know what it is like to understand something or have intentionality. Intentionality in particular is a word made up to capture a flavour of first-person experience of having thoughts which are about something.

I think that having "understanding" absolutely requires phenomenal consciousness. Otherwise the "understanding" an AI has could be the same as how much a piece of paper understands the words written upon it. At the same time, none of the ink on that page is about anything - it just is. There's no intentionality there.

It's important to acknowledge the context at the time: quite a few psychologists, philosophers and computer scientists really were suggesting that the human mind/brain was just passively transforming information like the man in the Chinese room. It's important not to let current ML theorists make the same mistake (IMO).

The difference between the CRA and what we can objectively observe about organic consciousness is informative about where the explanatory gaps are.

3

PrivateFrank t1_ivpaaxh wrote

>Yes by following more rules (rules of updating other rules).

But those rules are about improving the performance of the translation against some benchmark from outside the rule system.

Unless one of the Chinese symbols sent into the room means "well done, that last choice was good, do it again, maybe" and is understood to mean something like that, no useful learning or adaptation can happen.
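To make that concrete, here's a tiny made-up sketch (not anybody's actual model): a rule for updating other rules has nothing to act on until some signal from outside the room arrives and is treated as feedback.

```python
# Made-up illustration: "rules for updating rules" are inert until a signal
# from OUTSIDE the rule system arrives and is treated as feedback.
import random

rulebook = {"X": ["A", "B"]}                 # symbol -> candidate replies
scores = {("X", "A"): 0.0, ("X", "B"): 0.0}  # preferences the update rule adjusts

def respond(symbol):
    """Pick the currently best-scoring candidate (random tie-break)."""
    return max(rulebook[symbol], key=lambda c: (scores[(symbol, c)], random.random()))

def update(symbol, choice, feedback):
    """The update rule itself is trivial; the hard part is that `feedback`
    has to come from outside the room AND be recognised as feedback."""
    scores[(symbol, choice)] += feedback

choice = respond("X")
update("X", choice, feedback=1.0)   # without this external +1, nothing can improve,
                                    # no matter how many layers of rules we stack
```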

2

PrivateFrank t1_ivp8cak wrote

> The Chinese room is a boring flawed argument, that only is considered relevant by people who get tricked into confusing parts of the system with the whole thing.

Are your fingers part of the system, or your corneas? Once you claim the "whole system does X", you need to say what is and is not part of that system.

Chalmers' "extended mind" suggests that "the system of you" can also include your tools and technologies, and other people and entire societies.

1

PrivateFrank t1_ivp7i5c wrote

>Right from the start, it assumes that there is a difference between „merely following rules“ and „true intelligence“.

It depends on how flexible those rules are, right? Are the rules a one-to-one lookup, or are there branching paths with different outcomes?

If the man in the room sees an incoming symbol, looks it up in the book, finds only one possible output symbol, and sends that out, then he doesn't need to understand Chinese.

If he has more than one output option and needs to monitor the results of his choices, then he's no longer just a symbol translator. He's now an active participant in shaping the incoming information. To get better at choosing symbols, he's going to have to learn Chinese!
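A toy way to see the difference (invented symbols, nothing real): the first case is a plain dictionary lookup, the second forces a choice that can only get better if the outcome of each choice is tracked.

```python
# Case 1: one-to-one lookup - exactly one output per input, nothing to decide,
# so no understanding is required of whoever (or whatever) applies it.
lookup = {"早": "morning", "晚": "evening"}
reply = lookup["早"]

# Case 2: several candidate outputs per input - now a choice has to be made,
# and choosing better over time means tracking how each choice was received.
book = {"早": ["morning", "early", "soon"]}
history = []                          # (symbol, choice, how it landed)
choice = book["早"][0]                # some choosing policy is now unavoidable
history.append(("早", choice, None))  # ...and it only helps if that None gets filled
                                      # in by something that can judge the conversation
```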

1

PrivateFrank t1_iv07gpk wrote

Reply to comment by Aros5 in How to have better arguments by fchung

The first task is to ask them questions and let them answer. People need to feel like you understand their position, whether it's emotional or reasoned or a mixture of the two, before they will entertain the idea that your opinion is worth listening to.

2