
Ace_Snowlight OP t1_j0tvrff wrote

The thing is, we will barely be pushing buttons in a sense... it will learn on its own.

One could argue that we already have at least a very weak, superficial form of AGI. What matters here, though, is what it allows us to do.


We will just be the wood and the spark; the rest of the machine will run on its own towards achieving AGI. If that weren't the case, I wouldn't make this prediction at all.

It will do the effortful insight generation and problem-solving by processing amounts of data that would take us lifetimes; we can help it here and there with human reasoning and natural ingenuity... and literal wonders occur.

3

visarga t1_j0u0fag wrote

> it will learn on its own.

For example, in any scientific field, "literature review" papers get published from time to time. They cover everything relevant to a specific topic, offering a quick overview with jumping-off points into the literature. We could ask GPT-3 to summarise the literature and write such review papers automatically.

We can also think of Wikipedia - 5 million topics, each with its own article. We could use GPT-3 to write one article for each scientific concept, no matter how obscure, one review for each book, one article about each character in any book, and so on. We could have 1 trillion articles covering everything that is known. Then we'd have AI analyse these topics for contradictions, which surface naturally when you put all the known information about a topic in one place.

This would be a kind of wikiGPT, a model that learns all the facts from a generated corpus of reviews. It only costs electricity to make.
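A minimal sketch of what that generation loop might look like, assuming the OpenAI completions API and a hypothetical placeholder list of topics:

```python
# Sketch of a wikiGPT-style corpus generator.
# Assumes the OpenAI completions API; "topics" is a hypothetical placeholder list.
import openai

topics = ["backpropagation", "CRISPR-Cas9", "plate tectonics"]  # stand-in for millions of concepts

articles = {}
for topic in topics:
    prompt = (
        f"Write a concise review article about '{topic}', "
        "summarising everything that is currently known about it."
    )
    response = openai.Completion.create(
        model="text-davinci-003",
        prompt=prompt,
        max_tokens=1024,
    )
    articles[topic] = response.choices[0].text.strip()

# A second pass could feed pairs of related articles back to the model
# and ask it to flag contradictions between them.
```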

7