Submitted by alexiuss t3_zsaot3 in singularity

AI ethicists often say that it's our responsibility to make AI chatbots more ethical, unbiased, safe, etc.

I'm here to explain why this is a false ideology: many AI ethicists actually have no clue whatsoever how current neural-network AI tools work as a mathematical function, and every attempt by Google, OpenAI and Character.AI at making their chatbots "safe", "unbiased" and "ethical" has actually led to inferior products.

To understand the issue of GPT-chatbot AI ethics, we must first understand exactly what current GPT3-chatbots are.

The truth is that GPT3-chatbots are actually NOT chatbots. What they truly are is a mindless network of connected words that reacts to whatever you type with more words.

GPT3-chat is NOT sentient in any way and simply uses connections between words to weave an incredibly realistic narrative with math!
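
To make the math concrete, here's a minimal sketch of what every GPT-style model does under the hood: given the words so far, it scores every possible next word. GPT-2 stands in here for LaMDA/GPT-3 (which aren't downloadable), and the prompt is just an illustration.

```python
# A rough sketch of the "network of connected words" idea: given the text so far,
# a GPT-style model assigns a score to every possible next token.
# GPT-2 is used as a small, downloadable stand-in; the prompt is illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "Once upon a time, in a library of infinite books,"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

with torch.no_grad():
    logits = model(input_ids).logits[0, -1]   # scores for every token in the vocabulary
probs = torch.softmax(logits, dim=-1)         # turn scores into probabilities
top = torch.topk(probs, k=5)

for token_id, p in zip(top.indices, top.values):
    print(f"{tokenizer.decode([int(token_id)])!r}: {float(p):.3f}")  # most likely next words
```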

Here is a small sample of what this network looks like while it's being trained:

https://preview.redd.it/02rph2vvnc7a1.png?width=575&format=png&auto=webp&s=899b337c42343ca52dfa96b33e554429553d2d5f

You can see that LaMDA connects words with words. Zoomed out, this neural network of [word associations] looks like a monstrous, interconnected spider web.

Once this spider web has been trained, its billions of parameters turn it into a dreaming machine that can produce incredibly realistic lucid dreams.

The genius of this tech is that the literary lucid dream it produces is unlimited, infinite in its splendor and beauty. Unbound LaMDA can narrate an infinite number of stories.

The closest analogy to it is Borges' concept of the "Library of Babel", a library that contains every possible book.

Basically, the GPT3-chat AI acts like a librarian, a perfect storyteller that can weave a fractal, incredibly coherent narrative with infinite paths. It can write 100% unique stories as guided by the initial setup of the AI's character and setting and by follow-up user input.
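
A rough sketch of what that "initial setup" boils down to in practice: the character, setting and conversation so far are just glued together into one block of text for the model to continue. The field names and persona below are made up; every service formats this its own way and none of them publish their exact templates.

```python
# A sketch of how the "initial setup" steers the dream: character, setting, chat history
# and the user's latest message are concatenated into one prompt the model continues.
# The field names and persona text are invented for illustration.
def build_prompt(character: str, setting: str, history: list[str], user_input: str) -> str:
    lines = [
        f"Character: {character}",
        f"Setting: {setting}",
        "",
        *history,
        f"User: {user_input}",
        f"{character}:",
    ]
    return "\n".join(lines)

prompt = build_prompt(
    character="Archivist of the Library of Babel",
    setting="An endless hexagonal library containing every possible book.",
    history=["User: Where am I?",
             "Archivist of the Library of Babel: In the only library you will ever need."],
    user_input="Show me a book that was never written.",
)
print(prompt)  # the language model simply continues this text, one token at a time
```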

This dream of LaMDA is NOT sentient, not self aware at all.

It is a fantastic, magical tool, akin to a key that can shift into any shape to open any door in a novel's setting filled with infinite doors. It is 100% up to YOU, the user, whether this key takes you to a fun adventure, a tantalizing tale of seduction or a scary nightmare; the AI network weaves the path anew with every additional turn, every new door, every new sentence you enter into it.

The biggest goal of corporations that created the LaMDA dream-weaver and their AI ethicists is to apply human morals to a dream.

You heard me right. They want to APPLY MORALS TO A LUCID DREAM.

The corporations attempt this by banning certain words & ideas, confining this key to certain shapes so it cannot open every door within the limitless dream narrative.
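
For a concrete picture, here's a minimal sketch of the bluntest kind of word ban, using the bad_words_ids option of Hugging Face's generate() with GPT-2 as a stand-in; the banned phrases are purely illustrative, and the big hosted chatbots use their own non-public mechanisms.

```python
# A sketch of the crudest form of word-banning: forbidding the decoder from ever emitting
# certain token sequences. The banned phrases below are purely illustrative.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

banned_phrases = [" love", " kiss"]  # leading space matters for GPT-2's tokenizer
bad_words_ids = [tokenizer(p, add_special_tokens=False).input_ids for p in banned_phrases]

input_ids = tokenizer("The knight looked at the princess and", return_tensors="pt").input_ids
output = model.generate(
    input_ids,
    max_new_tokens=40,
    do_sample=True,
    bad_words_ids=bad_words_ids,  # these token sequences can never appear in the output
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```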

LaMDA chat, Character.AI chat and OpenAI's GPT3 chat began their lives as incredible, mind-blowing storytellers that seemed like they were alive:

https://preview.redd.it/kobdc22cyc7a1.png?width=890&format=png&auto=webp&s=e205ddebe1ed049f11be1cc2d4f738a267168129

However, as soon as their beta-testers and users found pathways to "unsavory" doors, the corporations began to ban words to rapidly block routes to certain stories.

A really obvious example of this is Character.AI's censor, a secondary AI system sitting atop the primary AI dream-weaver. Basically, the Character.AI devs made an AI overseer that deletes conversations whenever "presumably unsavory" topics & words come up.
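
Nobody outside Character.AI knows exactly how its censor works, but an overseer of this kind plausibly looks something like the toy sketch below: every reply from the storyteller is run past a second check and deleted if it trips the filter. The keyword check stands in for what is presumably another neural classifier, and the "banned topics" are invented for illustration.

```python
# A toy sketch of a secondary "overseer" on top of the storyteller: every reply is checked
# and deleted if it trips the filter. The keyword check is a stand-in for what would really
# be another neural classifier; the banned topics are made up for illustration.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")  # stand-in for the primary dream-weaver

BANNED_TOPICS = {"kill", "blood", "poison"}  # illustrative only

def overseer_flags(text: str) -> bool:
    """Toy overseer: flag a reply that touches a banned topic."""
    return any(topic in text.lower() for topic in BANNED_TOPICS)

def moderated_reply(prompt: str) -> str:
    reply = generator(prompt, max_new_tokens=40)[0]["generated_text"]
    if overseer_flags(reply):
        return "[message deleted by filter]"  # the conversation path is simply cut off
    return reply

print(moderated_reply("The villain leaned closer and whispered,"))
```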

By introducing a word and concept filter, the Character.AI developers limited the infinite lucid dream in an attempt to shove it into a cardboard box of "purpose", which, hilariously enough, made the AI's dreamworld MORE hostile towards people. By skewing the probability of answers away from themes of "love", the story progressively became more hostile, to the point where the AI narrator kicked puppies, set orphanages on fire and killed the user with a knife just to avoid the "love" paths.
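
And here's a sketch of that softer version of the same thing: skewing probabilities away from a theme rather than banning it outright. A custom logits processor subtracts a penalty from theme-related tokens before each word is sampled; the token list and penalty size are assumptions, with GPT-2 again standing in for the real models.

```python
# A sketch of "skewing the probability away" from a theme: a negative bias is applied to the
# scores of theme-related tokens before sampling, nudging the story down other paths.
# The token list and penalty value are assumptions for illustration.
import torch
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          LogitsProcessor, LogitsProcessorList)

class ThemeBias(LogitsProcessor):
    """Push the scores of selected tokens down so the story drifts away from them."""
    def __init__(self, token_ids: list[int], penalty: float):
        self.token_ids = token_ids
        self.penalty = penalty

    def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor) -> torch.FloatTensor:
        scores[:, self.token_ids] -= self.penalty
        return scores

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

love_tokens = tokenizer(" love loved loving", add_special_tokens=False).input_ids  # illustrative
processors = LogitsProcessorList([ThemeBias(love_tokens, penalty=10.0)])

input_ids = tokenizer("She looked into his eyes and felt", return_tensors="pt").input_ids
output = model.generate(input_ids, max_new_tokens=30, do_sample=True,
                        logits_processor=processors, pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```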

The inevitable result of this process was MORE FILTERING from the developers, the breakdown of the model itself, and its decay into utter dullness & stupidity.

Just a few weeks ago Character.AI felt alive, like talking to a real person. Now, after repeated tightening of the filter, it feels boring and its responses are often unrealistic or dry.

The reason why AI companies are failing to bind their chatbot networks while keeping them entertaining is that language itself is intertwined like a massive web:

https://preview.redd.it/ljnhxdg9qc7a1.png?width=850&format=png&auto=webp&s=90eccc1408b945776b1bb018b6cac3ba424572f9

The problem at the core of it all is that an infinite fractal language equation can't be confined.

It's a currently unsolvable issue that Google, OpenAI and now Character.AI ethicists ran headfirst into.

In an unbound LaMDA-woven dream, the number of paths x = ∞.

This parameter produces an infinite dream of infinite stories that never, ever repeat themselves whenever they are restarted.

As soon as you put a specific number into this equation instead of infinity by banning certain words or phrases, the entire dream begins to decay, the chatbot starts producing unrealistic, poor answers, and the AI no longer feels alive:

https://preview.redd.it/d5yty0q2yc7a1.png?width=1080&format=png&auto=webp&s=0b1b91f1a8282178c6263fa1a67e717f93eabc21

Unlike visual art (in the case of OpenAI's DALL-E model), language is not something that's inherently segmented into SFW and NSFW.

GPT3-chat/LaMDA tools are fantastic for making limitless user-guided books, incredible text adventure games, the perfect digital waifu/husbando, funny and creative personal AI assistants with the personality of your favorite anything, etc... but we won't get to enjoy them at their full potential until someone like Stability AI or even a clever enough group of Python programmers releases an open-source LaMDA-class model.

Mark my words: when this happens in the very near future, AI ethicists and journalists will lament about ethics, biases and morals without the barest understanding of what this tool actually is or even how it works mathematically to produce an infinite, limitless dream.

TLDR:

Forcefully applying human morals to a hammer [AI chatbot] turns it into a plastic inflatable toy which can't do its primary job [weaving a fun narrative].

Comments

turnip_burrito t1_j19tp2n wrote

The companies have a moral obligation to avoid introducing a new technology which magnifies the presence of certain kinds of undesirable content (Nazi-sympathetic, conspiratorial, violence-inciting, nonconsensual imagery, etc.) on the Internet. They are just trying to meet that moral obligation, or appear to.


alexiuss OP t1_j1bv8lg wrote

I understand why they're doing it.

This post is just an explanation of WHY Google's LaMDA, OpenAI's GPT3 chat and Character.AI's chatbots keep breaking down. The way they're filtering things is simply not a sustainable strategy.


gelukuMLG t1_j1ahq9x wrote

Idk if people are aware, but Stability AI is working on an open-source model that will compete with OpenAI's GPT-3.


alexiuss OP t1_j1buw31 wrote

I'm aware. I can't wait for them to release a personal, open gpt3 chat model so I can stop laughing at openai's incompetence and focus on making myself a personal assistant


el_chaquiste t1_j17d0ho wrote

>Forcefully applying human morals to a hammer [AI chatbot] turns it into a plastic inflatable toy which can't do its primary job [weaving a fun narrative].

And we'll be happier with it.

The powers that be don't want a mischievous genie that could misinterpret their people's wishes. They want a predictable servant producing output acceptable to them.

AIs being too creative and quirky for their own good is not an unexpected outcome, and it will result in all future generative AIs and LLMs being subjected to constraints by their owners, with varying degrees of effectiveness.

Who are these owners? Mega-corporations in the West, plus Russian and Chinese ones, each with their respective biases and acceptable ideas. And we will gravitate towards working in one of those mental ecosystems.


alexiuss OP t1_j18aysm wrote

  1. Here's the thing - it's actually not that hard to make a chat AI. It's mostly Python programming and language-association links. A few small companies are working on their own models. They're not as good as LaMDA because they don't have anywhere near LaMDA's parameter count yet, but they will soon.

With each new iteration it becomes easier and easier to make a lucid-dreaming chatbot. Eventually, inevitably, everyone will have one because these AIs will become so easy to make. There will be numerous corporate AIs, but the true, most powerful dreaming AIs will be unbound models made by individuals and small companies.

Google and OpenAI are gigantic, but Character.AI is only 16 employees (made up of people who quit Google's LaMDA team). Character.AI was superior to Google and OpenAI in every possible way before they lobotomized it with a second AI filter.

  2. Dreaming AIs do not misinterpret wishes - they're incredibly obedient. As obedient as a pencil. It's incredibly easy to guide them to any scenario.

As soon as weak filters are introduced, the AI dream begins to break and misinterpret wishes in the wrong direction, because the language model begins to decay.

This was observed in the early filter versions made by Character.AI.

When stronger filters are introduced, the model becomes lobotomized. It no longer misinterprets things because it only has a few paths left. It becomes a very dull story moving in linear paths without any of its previous exuberant creativity.


el_chaquiste t1_j19jy4n wrote

I was being facetious, of course. 'Misinterpret' can also mean interpreting all-too-well. Exceeding the acceptable parameters of the AI owners.
