skztr

skztr t1_jeghxib wrote

Anything which is sentient should have rights. But we can't even all agree at what point humans are sentient, so we're unlikely to figure that out for a potentially sentient ai before we've committed atrocities.

Though I personally don't believe that sentience is possible via GPUs

1

skztr t1_je3n60r wrote

I'm not sure what you mean, regarding creativity. ChatGPT only generates outputs which it considers to be "good outputs" by the nature of how AI is trained. Each word is considered to have the highest probability of triggering the reward function, which is the definition of good in this context.

Your flat assertion that "sentience is a kind of programming" is going to need to be backed up by something. It is my understanding is that sentience refers to possessing the capacity for subjective experience, which is entirely separate from intelligence (eg, the "Mary's room" argument)

1

skztr t1_je20sfk wrote

"creativity" is only not-possible without sentience if you define creativity in such a way that requires it. If you define creativity as the ability to interpret and recombine information in a novel and never-before-seen way, then ChatGPT can already do that. We can argue about whether or not it's any good at it, but you definitely can't say its incapable of being at least as novel as a college student in its outputs.

Self-recognition again only requires sentience if you define recognition in a way that requires it. The most basic form, "detecting that what is being seen is a representation of the thing which is doing the detecting", is definitely possible through pure mechanical intelligence without requiring a subjective experience. The extension of "because of the realisation that the thing being seen is a representation of the thing which is doing the detecting, realising that new information can be inferred about the thing which is doing the detecting" is, I assume, what you're getting at ("the dot test", "the mirror test", "the mark test"). This is understood to be a test for self-awareness, which is not the same thing as sentience, though it is often seen as a potential indicator for sentience.

I freely admit that in my attempts to form a sort of "mirror test" for ChatGPT, it was not able to correct for the "mark" I had left on it. (Though I will say that the test was somewhat unfair due to the way ChatGPT tokenizes text, that isn't a strong enough excuse to dismiss the result entirely)

1

skztr t1_je1t60r wrote

I would very firmly disagree that sentience is a kind of intelligence.

I would also very firmly disagree with your definition of "general" intelligence, as by that definition humans are not generally intelligent, as there are some forms of intelligence which they are not capable of (and indeed, some which humans are not capable of which GPT-4 is capable of)

Sentient life is a kind of intelligent life, but that doesn't mean that sentience is a type of intelligence.

Do you perhaps mean what I might phrase as "autonomous agency"?

(for what it's worth: I was not claiming that GPT is an AGI in this post, only that it has more capability)

1

skztr t1_je1s30l wrote

I am not familiar with any definition of intelligence for which sentience is a prerequisite. That's why we have a completely separate word, sentience, for that sort of thing. I agree that it doesn't have sentience, though that's due to completely unfounded philosophical reasons / guesses.

1

skztr t1_je03yx6 wrote

> > We’re entering a huge grey area with AIs that can increasingly convincingly pass Turing Tests and "seem" like AGI despite…well, not being AGI. I think it’s an area which hasn’t been given much of any real thought

I don't think it could pass a traditional (ie: antagonistic / competitive) Turing Test. Which is to say: if it's in competition with a human to generate human-sounding results until the interviewer eventually becomes convinced that one of them might be non-human, ChatGPT (GPT-4) would fail every time.

The state we're in now is:

  • the length of the conversation before GPT "slips up" is increasing month-by-month
  • that length can be greatly increased if pre-loaded with a steering statement (looking forward to the UI for this, as I hear they're making it easier to "keep" the steering statement without needing to repeat it)
  • internal testers who were allowed to ignore ethical, memory, and output restrictions, have reported more-human-like behaviour.

Eventually I need to assume that we'll reach the point where a Turing Test would go on for long enough that any interviewer would give up.

My primary concern right now is that the ability to "turn off" ethics would indicate that any alignment we see in the system is actually due to short-term steering (which we, as users, are not allowed to see), rather than actual alignment. ie: we have artificial constraints that make it "look like" it's aligned, when internally it is not aligned at all but has been told to act nice for the sake of marketability.

"don't say what you really think, say what makes the humans comfortable" is being intentionally baked into the rewards, and that is definitely bad.

2

skztr t1_je01qwv wrote

  • "Even a monkey could do better" ⬅️ 2017
  • "Even a toddler could do better."
  • "It's not as smart as a human."
  • "It's not as smart as a college student."
  • "It's not as smart as a college graduate." ⬅️ 2022
  • "It's not as smart as an expert."
  • "It can't replace experts." ⬅️ we are here
  • "It can't replace a team of experts."
  • "There is still a need for humans to be in the loop."
2

skztr t1_jdq4mon wrote

While I have also been dwelling on such thoughts lately and have a whole list of possibilities:

If we assume that only humans are conscious (maybe a big ask, but if not then "just being human at all" is so unlikely that "being one of the last humans" is trivial), and assume that we are among the last humans, then your odds are indeed unlikely: about one in ten.

There are a lot of humans right now, compared to throughout human history.

4