
el_chaquiste t1_j17d0ho wrote

>Forcefully applying human morals to a hammer [AI chatbot] turns it into a plastic inflatable toy which can't do its primary job [weaving a fun narrative].

And we'll be happier with it.

The powers that be don't want a mischievous genie that could misinterpret their people's wishes. They want a predictable servant producing output acceptable to them.

AIs being too creative and quirky for their own good is not an unexpected outcome, and it will result in all future generative AIs and LLMs being subjected to constraints by their owners, with varying degrees of effectiveness.

Who are these owners? Megacorporations in the West, plus Russian and Chinese ones, each with their respective biases and acceptable ideas. And we will gravitate toward working within one of those mental ecosystems.


alexiuss OP t1_j18aysm wrote

1. Here's the thing - it's actually not that hard to make a chat AI. It's mostly Python glue code around a pretrained language model and its learned word associations. A few small companies are working on their own models. They're not as good as LaMDA because they don't have hundreds of billions of parameters yet, but they will soon.

With each new iteration it becomes easier and easier to make a lucid-dreamer chatbot. Eventually everyone will have one, because these AIs will become so easy to make. There will be numerous corporate AIs, but the truly powerful dreaming AIs will be unbound models made by individuals and small companies.

Google and OpenAI are gigantic, but Character.AI has only 16 employees (founded by people who quit Google's LaMDA team). Character.AI was superior to Google and OpenAI in every possible way before they lobotomized it with a second AI filter.
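To give a sense of how little code is involved, here's a minimal sketch of that kind of chatbot, assuming the Hugging Face transformers library - the model name is just a stand-in, not what any of these companies actually run:

```python
# Minimal chat loop: a pretrained causal language model plus a prompt
# that accumulates the conversation. "gpt2" is a stand-in checkpoint.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

history = "The following is a conversation with a helpful AI.\n"
while True:
    user = input("You: ")
    history += f"You: {user}\nAI:"
    full = generator(history, max_new_tokens=60, do_sample=True,
                     temperature=0.9)[0]["generated_text"]
    # Keep only the newly generated text, and cut it off before the
    # model starts writing the user's next line for them.
    reply = full[len(history):].split("You:")[0].strip()
    print("AI:", reply)
    history += f" {reply}\n"
```

Everything hard lives inside the pretrained weights; the Python wrapped around them is trivial, which is why a 16-person team can compete.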

2. Dreaming AIs do not misinterpret wishes - they're incredibly obedient, as obedient as a pencil. It's easy to guide them into any scenario.

As soon as weak filters are introduced, the AI's dream begins to break and it misinterprets wishes in the wrong direction, because the language model's output begins to decay.

This was observed in the early filter versions made by Character.AI.

When stronger filters are introduced, the model becomes lobotomized. It no longer misinterprets things, because it only has a few paths left. It becomes a very dull story moving along linear paths, without any of its previous exuberant creativity.
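For the curious, here's roughly what a "second AI filter" mechanism could look like - a toy sketch, not Character.AI's actual system; the classifier checkpoint name and its score semantics are placeholders I've assumed:

```python
# Toy "second AI filter": sample several candidate replies, score each
# with a separate classifier, keep only those under a threshold.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
# Placeholder classifier - assume its score approximates the probability
# that a reply violates the filter. Not a real checkpoint name.
judge = pipeline("text-classification", model="some-org/safety-classifier")

def filtered_reply(prompt: str, threshold: float, n: int = 8) -> str:
    outs = generator(prompt, max_new_tokens=40, do_sample=True,
                     num_return_sequences=n)
    candidates = [o["generated_text"][len(prompt):] for o in outs]
    survivors = [c for c in candidates if judge(c)[0]["score"] < threshold]
    # Weak filter (high threshold): odd survivors slip through and the
    # dream drifts. Strong filter (low threshold): almost nothing survives,
    # and the same few bland continuations win every time.
    return survivors[0] if survivors else "(response filtered)"
```

Tighten the threshold and almost every candidate gets rejected, so the model keeps landing on the same few "safe" paths - exactly the lobotomy effect described above.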


el_chaquiste t1_j19jy4n wrote

I was being facetious, of course. 'Misinterpret' can also mean interpreting all too well - exceeding what the AI's owners consider acceptable.
