guyonahorse t1_ja0ii8z wrote

  1. Of course it's possible.
  2. We have nothing even close to it AI-wise yet. Current AI is just inference.

Humans are a terrible example of an AGI, as evolution is all about "survival of the fittest". Human-made AIs have all had a specific purpose and a right/wrong answer (knowing the right answer is the only way to train an inference model).

So what is the "right answer" of an AGI? If you don't have that, there's no current way to train one.
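A toy sketch of what that "right answer" requirement looks like in practice (made-up data, plain Python; the only point is that the update step can't be computed without a label y):

```python
# Toy supervised training: every input x ships with a known
# "right answer" y, and the loss only exists relative to y.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # made-up (x, y) pairs, y = 2x

w = 0.0    # single learnable weight
lr = 0.05  # learning rate
for _ in range(200):
    for x, y in data:
        error = w * x - y    # impossible to compute without the label y
        w -= lr * error * x  # gradient step on the squared error
print(w)  # converges near 2.0, but only because y was known
```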

12

InevitableAd5222 t1_ja1d1i7 wrote

So much of the confusion in this debate comes down to philosophical terminology, like "general" intelligence. What would we consider "general intelligence"? Symbolic reasoning? BTW, we don't need right/wrong answers in the form of labeled datasets to train an AI. ChatGPT doesn't even use that; it is self-supervised. For more generic "intelligence", look into self-supervised learning in RL environments. ML models can also be trained by "survival of the fittest": genetic/evolutionary algorithms are being researched as an alternative to the SOTA gradient-based methods (toy sketch below the link).


https://www.uber.com/blog/deep-neuroevolution
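A minimal sketch of that "survival of the fittest" style of training (plain Python; the target vector and hyperparameters are invented for illustration and aren't from the linked Uber work):

```python
import random

TARGET = [0.5, -1.2, 3.0]  # stand-in "task": evolve weights toward this

def fitness(genome):
    # Higher is better: negative squared distance to the target.
    return -sum((g - t) ** 2 for g, t in zip(genome, TARGET))

def mutate(genome, sigma=0.1):
    # Random variation: the evolutionary stand-in for a gradient step.
    return [g + random.gauss(0, sigma) for g in genome]

population = [[random.uniform(-5, 5) for _ in TARGET] for _ in range(50)]
for generation in range(100):
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]  # selection: only the fittest reproduce
    children = [mutate(random.choice(survivors)) for _ in range(40)]
    population = survivors + children  # elitism keeps the best found so far

print(max(population, key=fitness))  # near TARGET, no gradients involved
```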

8

guyonahorse t1_ja1ftau wrote

Well, ChatGPT's training is pretty simple. It's trained on how accurately it can predict the next words in a training document, so it learns to imitate the text it was trained on. The data is all "correct", which amusingly leads to bad traits, since it imitates the bad things too. Also amusing is the qualia question: does the AI actually have emotions? Is it saying the text because it's angry, or because it was trained to imitate angry text in a similar context?
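To make "predict the next words" concrete, here's a toy self-supervised sketch (a bigram counter in Python, obviously nothing like ChatGPT's actual transformer training): the "label" at each position is just the next word of the raw text itself, so nobody had to annotate anything.

```python
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ran".split()  # made-up text

model = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    model[current][nxt] += 1  # the supervision comes from the text itself

def predict(word):
    # Imitate the training text: return the most frequent follower.
    return model[word].most_common(1)[0][0]

print(predict("the"))  # -> "cat", learned purely from raw text
```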

But yeah, general intelligence is super vague. I don't think we want an AI with the capability to get angry or depressed, but those traits evolved naturally in animals because they benefit survival. Pretty much all dystopian AI movies are based on the AI deciding that to survive it has to kill all humans...

3

Monnok t1_ja28d43 wrote

There is a pretty widely accepted and specific definition for general AI... but I don't like it. It's basically a list of simple things the human brain can do that computers didn't happen to be able to do yet in like 1987. I think it's a mostly unhelpful definition.

I think "General Artificial Intelligence" really does conjure some vaguely shared cultural understanding laced with a tinge of fear for most people... but that the official definition misses the heart of the matter.

Instead, I always used to want to define General AI as a program that:

  1. Exists primarily to author other programs, and

  2. Actively alters its own programming to become better at authoring other programs

I always thought this captured the heart of the runaway-train fear that we all sorta share, without a program having to already be a runaway train to qualify.
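A toy rendition of those two properties (entirely illustrative; real self-modifying program synthesis is an open research problem): a generator whose job is to author candidate "programs" (here just coefficient lists scored on an invented task), and which also rewrites its own search parameter based on how well its outputs perform.

```python
import random

def score(program):
    # Invented task: a "program" is 3 numbers; closer to [1, 2, 3] is better.
    return -sum((p - t) ** 2 for p, t in zip(program, [1.0, 2.0, 3.0]))

sigma = 1.0            # the generator's own mutable "code"
best = [0.0, 0.0, 0.0]
for step in range(500):
    candidate = [b + random.gauss(0, sigma) for b in best]  # 1. author a program
    if score(candidate) > score(best):
        best = candidate
        sigma *= 1.1   # 2. alter itself: a success, so search more boldly
    else:
        sigma *= 0.99  # 2. alter itself: a miss, so search more locally

print(best, sigma)  # best drifts toward [1, 2, 3] as the search self-tunes
```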

2

ChuckFarkley t1_ja3zom1 wrote

By some definitions, your description of GAI also qualifies as being spiritual, especially the part about maintaining and improving its own code.

1

HumanBehaviourNerd t1_ja2jruv wrote

Human beings are the best example of AGI that we know. In fact, if someone could replicate human-level AGI, they would be the world's first trillionaire overnight. Most human beings cannot tell the difference between the information they "know" and their consciousness (themselves), so unless someone solves that problem, we are a while away.

1

jamesj OP t1_ja4gv2m wrote

AlphaZero used no human games yet defeats the world's best Go players. It is not true that all AI systems need labeled data to learn, and even with labeled data, they can still learn to outperform humans on the task.

1

guyonahorse t1_ja4mpku wrote

Of course AlphaZero had labeled data. We already know how to detect when the game is won; we just don't know which moves are good for getting there. The AI just made moves, and the right answer = winning the game. The beauty was that it could play against itself instead of against human players.

For AGI we don't know how to detect "winning the game".
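A toy illustration of the difference (an invented game, not AlphaZero): because the rules mechanically decide who won, every self-play game produces its own label to train against. AGI has no such terminal check.

```python
import random

def play(policy_a, policy_b):
    # Invented game: players alternately add 1 or 2; reaching 10 wins.
    total, turn = 0, 0
    while True:
        move = (policy_a if turn == 0 else policy_b)(total)
        total += move
        if total >= 10:
            return turn  # the rules hand us the "label": who won
        turn = 1 - turn

random_policy = lambda total: random.choice([1, 2])
greedy_policy = lambda total: 2 if total <= 8 else 1  # grab the win when close

wins = sum(play(greedy_policy, random_policy) == 0 for _ in range(1000))
print(f"greedy wins {wins}/1000")  # an outcome signal we could train against
```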

1

LizardWizard444 t1_ja2xg5l wrote

Yes, but even a terrible example of AGI has driven many species extinct and irreversibly changed the planet, without even the one-track optimization inherent in the simplest AI.

An argument of "we haven't made one and don't know how to make one yet" doesn't inspire comfort, as it means we absolutely can stumble into it. And then everyone's phones start heating up as they're used for processing, and WAY scarier things start happening after that.

0