
thecodethinker t1_jduvi9z wrote

That’s not even to mention that, as far as the teams behind these LLMs are concerned, appearing conscious is as good as being conscious.

There’s no practical difference.

bjj_starter t1_jduz6p7 wrote

I'm not sure most of them would agree, based on their actions and statements. They certainly think that AI is an existential risk, but that is a different thing from viewing it as conscious. You could definitely be right; I just haven't seen much from them that would indicate it.

That said, the extremely common-sense position you just outlined was mainstream among basically all respectable intellectuals who had any position on AI, right up until the rubber hit the road and it looked like AI might actually reach that point in the near future. The fact is that if something behaves like a conscious entity in all of the ways that matter, then it is conscious for the purposes of the social meaning of the term. Provenance shouldn't matter any more than gender.

thecodethinker t1_jdzvin6 wrote

LLMs are not social, not alive, and can’t act on their own.

“Social meaning” need not be applied to LLMs unless you’re trying to be pedantic.

bjj_starter t1_jdzymdg wrote

>not social

"needing companionship and therefore best suited to living in communities" is a fine descriptor of some of their peculiarities. More importantly, I was referring to how consciousness is socially defined, and it is absolutely the case that it is up to us to determine whether any given AI should be considered conscious. We do not have an even moderately objective test. We as a society should build one and agree to abide by what we find.

>not alive

That's the entire point under discussion. I didn't lead with "they're alive" because I recognise that is the central question we should be trying to address, as a society. I am arguing my point, not just stating it and expecting people to take it on faith, because I respect the people I'm talking to.

>can’t act on their own.

A limitation that can be convincingly solved in approximately an hour, using commonly available tools, isn't a fundamental limitation. A good LLM with a good LangChain set-up can act on its own, continuously, if it's set up to do so (rough sketch below). I require a mechanical aid to walk - requiring the aid doesn't make me any lesser.

I don't know whether an LLM with a good LangChain set-up should be considered conscious or a person - I suspect not, because it's not stable and decays rapidly (by human lifespan standards), and because it still fails several important tests we do have, such as novel Winograd schemas (the classic example: "The trophy didn't fit in the suitcase because it was too big" - what was too big?). But our intuition shouldn't be what we're relying on to make these determinations - we need a standardised test for new applicants to personhood. Make it as challenging as you like, as long as a significant number of humans can pass it (obviously, all humans will be grandfathered in). What's important is that we make the test, agree that anything which passes it is a person, and then stick to that when something new passes.
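To make "act on its own" concrete, here's a rough sketch of the kind of loop I mean. This assumes the classic LangChain agent API and an OpenAI key; the goal string, the search tool, and the hourly cadence are all just illustrative:

```python
# Minimal sketch: an LLM that acts continuously, with no human in the loop.
# Assumes OPENAI_API_KEY and SERPAPI_API_KEY are set in the environment.
import time

from langchain.llms import OpenAI
from langchain.agents import AgentType, initialize_agent, load_tools

llm = OpenAI(temperature=0)
tools = load_tools(["serpapi"], llm=llm)  # web search as the agent's "actuator"
agent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION)

# The loop is what turns a passive model into something that acts on its own:
# each pass, the model decides which tool to call, reads the result, and acts again.
while True:
    agent.run("Find today's AI news and summarise anything that changed.")
    time.sleep(3600)  # wake up and act once an hour, unprompted
```

The point isn't that this loop is conscious - it obviously isn't - it's that "can't act on its own" stops being true after an afternoon of glue code.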
