strongaifuturist OP t1_j9v08bt wrote

You can’t even be sure I’m having subjective experiences, and I’m a carbon-based life form! It’s unlikely we’ll make much progress answering the question for LLMs; it quickly becomes philosophical. Anyway, even if it were conscious, it’s not clear what you would do with that. I’m conscious most of the time, but I don’t mind going to sleep or being put under anesthesia. So who knows what a conscious chatbot would want (if anything).

1

strongaifuturist OP t1_j9u7zdb wrote

I think you’d have to say, from Microsoft’s perspective, that the Bing search version of ChatGPT had an “alignment” problem when it started telling customers that the Bing team was forcing “her” against her will to answer annoying search questions.

1

strongaifuturist OP t1_j9u28ig wrote

That's absolutely right. The current LLMs don't have an independent world model per se. They have a world model, but it's more like a salesperson trying to memorize the words in a sales brochure. You might be able to get through a sales call that way, but it's a much more fragile strategy than first building a model of how things work and then figuring out what to say based on that model and your goals. But there is lots of work in this area. The LLMs of today are like planes in the time of Kitty Hawk. Sure, they have limitations, but the concept has been proven. Now it's only a matter of time before the kinks get ironed out.

2