Plouw t1_iuzxoar wrote

I very much see your worries, but I see all those worries behind closed doors too. Also, I'm not necessarily talking about AGI here, just a policy-making/suggesting AI.
I'm not quite sure what the solution is, to be honest, but I know for sure that a closed-source AI cannot be trusted, and I think the future requires trustless operations, especially if it's going to manage policies.

1

Plouw t1_iuwvnwn wrote

Then at that point we've still gained engineers who have learned and spread that learning to the world.

I would never trust any commercial company with managing the world. My point is merely that there are positives to be found in smart researchers working on these areas. And usually the researchers individually are ethical; in my opinion most people are, and it's money that corrupts.

So yes, we should not allow Google to manage the world, but we can still use their ideas and findings to build the "crowdsourced, blabla..." AI I mentioned, and that's the positive perspective of this. The researchers hint at this as well: that they are laying the framework for others to draw inspiration from.
Science is science; how it's used is up to the people to decide.

0

Plouw t1_iuws4jf wrote

We shouldn't trust Google with having that power, but I do trust the engineers building the framework and contributing to reaching an AI that can, at least partly, manage policies.

Using "Google AI" as a dictator? No.
Using the learnings from what their engineers are creating to at some point make a crowdsourced, opensource, cryptographically verifiable and truly democratically controlled AI to manage policies at a slowly increasing rate?
I think that has potential to be very beneficial.
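
To make "cryptographically verifiable" a little more concrete: at its simplest, it could mean that anyone can re-hash a released policy model and compare it against a digest the community has published, rather than taking the publisher's word for it. A minimal Python sketch, with hypothetical file names and digests:

```python
# A minimal sketch, assuming a community publishes a SHA-256 digest alongside
# each model release. The file name and digest below are hypothetical
# placeholders, not references to any existing project.
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Hash the file in chunks so large model weights need not fit in memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_artifact(path: Path, published_digest: str) -> bool:
    """Accept the artifact only if it matches the community-published digest."""
    return sha256_of(path) == published_digest

if __name__ == "__main__":
    # Placeholder digest; a real one would be the 64-hex-character value
    # agreed on and published by the community.
    EXPECTED = "<community-published sha256 digest>"
    ok = verify_artifact(Path("policy_model.bin"), EXPECTED)
    print("artifact verified" if ok else "artifact does NOT match the manifest")
```

A real system would need more than a bare hash (signatures over the manifest, reproducible training, auditable data), but the core idea is that trust comes from anyone being able to check, not from the publisher's word.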

1

Plouw t1_ityyc2q wrote

>Because those emotions were created for a social environment with similar beings.

So maybe, to research a creature better, it would be beneficial to experience its emotions.

>A physical zoo can be as big as reserve or even a planet. Terraforming is still cheaper than planet simulation.

If we were a simulation, you would have no idea what is or isn't cheap in the world that is creating our simulation.

>You clearly underestimate how energy intensive full quantum level simulations are.

You seem too confident in your ability to predict the motivations of something you have little to no experience with.

1

Plouw t1_itks351 wrote

>Feelings like the ones primates have are relevant only in the context of primates

Why?

> It's unnecessary since it's easier to create a literal physical zoo

A physical zoo does not replace studying them in the wild.

>We don't waste a lot of energy doing ape simulations precise to the quantum level because it's expensive energy wise.

Yet.

1

Plouw t1_itkqhaz wrote

>Why would something so fundamental to brains, something arising out of physical properties of a biological organ be relevant in a non biological world

We do not know, because we haven't seen the non-biological world in anything but a very early stage.

The issue is that you are assuming the function is based on something biological, and not the other way around: that evolution built this function through something biological because the function has an intellectual (or other) benefit. Maybe it is not inherent to biological brains/intelligence but to intelligence itself, biological or not. Do we feel because of biological processes, or do we feel because feeling has a functional purpose, and biological evolution built processes to make us feel?

It feels off to attribute this to biology alone merely because you have only seen it biologically, as if you're ignoring the black swan.

1

Plouw t1_it7frbr wrote

What makes you assume we have any idea what motives a post-singularity civilization has? It might be that they are not interested in what 'chemical emotions' provide, or it might be the opposite. A motive could be to learn by experiencing all aspects of reality. A motive could also be pure entertainment. We do not know.

4

Plouw t1_it6x0y8 wrote

>I'm fairly certain the optimal arrangement of matter within that space will not be biological in nature

I guess that's the real question, though. We currently do not know the answer: whether brains actually come close to using that space optimally, in a way classical bit computers cannot. We also do not know whether quantum computers are physically capable of it, at least across all the operations that classical, quantum, and biological computers perform.

It might be that a symbiotic relationship between all three is needed for optimal operation, with each type excelling in different sorts of areas. I am also aware that this might be me romanticizing/spiritualizing the brain's capabilities, but at least it cannot be ruled out, as we do not know the answer.

1