
User1539 t1_j5grqw0 wrote

> If we want effective automation or make general human tasks faster we certainly do not need AGI.

Agreed. We're very, very close to this now, and likely very far away from AGI.

> If we want inventions and technology which would be hard for humans to come up with in a reasonable time frame, we do need AGI.

This is where we disagree. I have many contacts at universities, and most of my friends have a PhD and participate in some kind of research.

In their work, they were evaluating Watson (IBM's LLM-style AI) years ago, and talking about how it would help them.

Having a PhD necessarily means having tunnel vision. You do research that makes you the single person on earth who knows about the one cell you study, or the one protein you've been working with.

Right now, the state of science is that we have all these researchers writing papers to give other scientists a wider knowledge of things they couldn't possibly dedicate time to.

It's still nowhere near wide enough. PhDs aren't able to easily work outside their field, and the result is that their research needs to go through several levels of simplification before someone can find a use for it, or see how it affects their own research.

A well-trained LLM can tear down those walls between different fields. Suddenly, you've got an infinitely patient, infinitely knowledgeable assistant. It can write code for you. You can ask it what effect your protein might have on a new material, without having to become, or know, a materials scientist.

Everyone having a 'smart' assistant that can offer an expert level understanding of EVERY FIELD will bridge the gaps between the highly specialized geniuses of our time.

Working with the sort of AI we have now will take us to an entirely new level.

9

Baturinsky t1_j5iq32y wrote

And how safe is it to put those tools into the hands of, among others, criminals and terrorists?

1

User1539 t1_j5jjimd wrote

The same argument has been made about Google, and it's a real concern. Some moron killed his wife a week or so ago, and the headline read 'Suspect's Google history included "How to hide a 140lb body"'.

So, yeah. It's already a problem.

Right now we deal with it by having Google keep records and hoping criminals who google shit like that are just too stupid to use a VPN or anonymize their internet access.

Again, we don't need AGI to have that problem. It's already here.

That's the whole point of my comment. We need to stop waiting for AGI before we start treating these systems as capable of existential change for the human race.

1

Baturinsky t1_j5jl5nt wrote

I agree, a human + AI working together is already an AGI, with the only limitation being that the human part is unscalable. And it can be extremely dangerous if the AI part is very powerful and both are non-aligned with fundamental human values.

1