civilrunner t1_j0ubwai wrote

I'd say it's still both a hardware and a software problem. We are still nowhere close to building a computational circuit that replicates the human brain, which uses complex 3D computational structures where connections can form between far-apart neurons, linking computational circuits in completely different ways than lithography-constructed computers do. While it's possible we'll achieve AGI through the raw power of miniaturized, lithography-built compute, that's a completely different structure from the brain's, so it's not a guarantee.

The difference between a true 3D compute architecture and a 2D, or even stacked-2D, one is enormous (it's like comparing x² to x³), as the toy sketch below illustrates.
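
One way to see that gap is a back-of-the-envelope grid model (purely hypothetical, not a real chip or brain simulation): count how many units sit within a fixed wiring distance of a given unit in a flat layout versus a volumetric one.

```python
# Toy illustration: how many compute units sit within "wiring distance" r
# of a given unit in a 2D layout vs. a 3D one (unit spacing = 1).
# A back-of-the-envelope model only, not a real chip or brain simulation.
import math

def units_in_reach_2d(r: float) -> int:
    # Area of a disc of radius r -> grows like r**2
    return int(math.pi * r**2)

def units_in_reach_3d(r: float) -> int:
    # Volume of a ball of radius r -> grows like r**3
    return int(4 / 3 * math.pi * r**3)

for r in (10, 100, 1000):
    flat, volumetric = units_in_reach_2d(r), units_in_reach_3d(r)
    print(f"r={r:>4}: 2D reach ~{flat:,} units, 3D reach ~{volumetric:,} "
          f"(~{volumetric / flat:.0f}x more)")
```

The ratio itself grows with distance (it works out to 4r/3 in this model), which is the point: volume keeps buying you reach that area never will.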

It's clearly a software problem as well, though I'm curious whether you need plasticity and massive connectivity between far-reaching compute sections to achieve AGI-level intelligence, for things like creativity, the way a human brain does.

1

Kaarssteun t1_j0ucilx wrote

It's not our goal to replicate a human brain; that's what making children is for. We are trying to replicate the brain's intelligence in a way such that enslaving it would still be ethical.

4

civilrunner t1_j0udhrm wrote

I agree, though it may not be nearly as efficient as a human brain at being intelligent. In my opinion, you only need to look at the gains from GPU-based vs. CPU-based AI training to see what scaling up local chip compute does for AI, and from there to see how much better a 3D human brain might be than even a wafer-scale stacked 2D chip. And the human brain doesn't just compute with 1s and 0s; as we learned recently, its chemical signals offer more options than simply on and off.
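
You can get a crude feel for that GPU-vs-CPU gain yourself (a minimal sketch assuming PyTorch and a CUDA GPU are available; the exact speedup varies enormously by hardware and matrix size):

```python
# Rough CPU-vs-GPU throughput comparison on one large matrix multiply.
# Assumes PyTorch with a CUDA device; a crude benchmark, not a rigorous one.
import time
import torch

n = 4096
a_cpu = torch.randn(n, n)
b_cpu = torch.randn(n, n)

t0 = time.perf_counter()
_ = a_cpu @ b_cpu
cpu_s = time.perf_counter() - t0

if torch.cuda.is_available():
    a_gpu, b_gpu = a_cpu.cuda(), b_cpu.cuda()
    torch.cuda.synchronize()          # make sure the copies have finished
    t0 = time.perf_counter()
    _ = a_gpu @ b_gpu
    torch.cuda.synchronize()          # wait for the kernel to complete
    gpu_s = time.perf_counter() - t0
    print(f"CPU: {cpu_s:.3f}s  GPU: {gpu_s:.3f}s  (~{cpu_s / gpu_s:.0f}x)")
else:
    print(f"CPU: {cpu_s:.3f}s (no CUDA device found)")
```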

There are advantages to a silicon electronic circuit as well, of course, the main one being speed, since electrical signals travel far, far faster than chemical ones.
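
That speed gap is easy to put rough numbers on (ballpark figures: fast myelinated axons conduct at roughly 100 m/s, while electrical signals in a conductor travel at a sizeable fraction of the speed of light):

```python
# Back-of-the-envelope comparison of signal propagation speeds.
# Ballpark figures: ~100 m/s for fast myelinated axons,
# ~0.7c for electrical signals in a typical conductor.
C = 3.0e8                 # speed of light, m/s
axon_speed = 100.0        # fast myelinated axon, m/s
wire_speed = 0.7 * C      # signal in copper interconnect, m/s

distance = 0.1            # 10 cm: roughly across a brain, or a large board
print(f"axon : {distance / axon_speed * 1e3:.2f} ms to cross {distance} m")
print(f"wire : {distance / wire_speed * 1e9:.3f} ns to cross {distance} m")
print(f"ratio: ~{wire_speed / axon_speed:,.0f}x faster")
```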

I am also personally unsure how "enslaving" a verified general intelligence would be ethical, regardless of its computational architecture. It's far better to ensure alignment, so that it isn't "enslaved" but instead wants to collaborate toward the same goals.

1

Kaarssteun t1_j0udpg9 wrote

Right, "enslaving" is subjective; but we want to make sure it enhances our lives rather than destroying them.

1

civilrunner t1_j0udzlb wrote

Sure, I just wouldn't call it "enslaving" them, seeing as that generally means forcing them to work against their will, which seems unlikely to be feasible if we build an AGI or an ASI. "Well aligned" is a far better term, and in my view the only thing that could work.

2

hydraofwar t1_j0up12g wrote

That's true, but replicating the brain's intelligence may require hardware built specifically for it. If I'm not mistaken, Google's PaLM was trained on Google's own latest-generation hardware built for the purpose (TPU v4 Pods).

1