Nalmyth t1_j9ev3zy wrote

Finally quantum is getting cheaper.

You can run tensorflow-quantum on the Qiskit (IBM) backend.

They provide 27 qubits, which corresponds to 2^27 = 134,217,728 basis states. For now it's roughly comparable in speed to an Nvidia 3080 🤔
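As a quick back-of-the-envelope sketch of where the 134,217,728 figure comes from (the helper names here are mine for illustration, not part of the Qiskit or tensorflow-quantum API):

```python
def num_states(n_qubits: int) -> int:
    # Each qubit doubles the state vector: n qubits span 2**n basis states,
    # so a classical simulator must track 2**n complex amplitudes.
    return 2 ** n_qubits

def sim_memory_bytes(n_qubits: int, bytes_per_amplitude: int = 16) -> int:
    # Assuming complex128 amplitudes (16 bytes each) on a classical simulator.
    return num_states(n_qubits) * bytes_per_amplitude

print(num_states(27))        # 134217728
print(sim_memory_bytes(27))  # 2147483648 bytes, i.e. 2 GiB just for the state vector
```

That 2 GiB state vector is why classically simulating even ~30 qubits gets expensive fast, and why GPU comparisons like the 3080 one only hold at these small qubit counts.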

However, there's also a company planning to release a 100-qubit machine using room-temperature lasers to the public before the end of this year (SaaS).

https://www.tensorflow.org/quantum/tutorials/hello_many_worlds

3

Nalmyth OP t1_j2qwoaf wrote

Exactly 👍

It shouldn't be a cruelty thing; give them a chance to live as a human and thereby come to deeply understand us.

If they then later get promoted to god-tier ASI and still decide to destroy us, at least we could say that a human being decided to end humanity.

At the current rate of progress, we're going to create a non-human ASI, one that is more mathematical or mechanical in nature than a human consciousness.

Because of that, the likelihood of AI alignment is very low.

1

Nalmyth OP t1_j2nn7jy wrote

I think you misunderstood.

My point was that, to be properly aligned, an AI should live in a world exactly like ours.

In fact, you could be in training to be such an AI now with no way to know it.

To be aligned with humanity, you must have "been" human, maybe even more than one life mixed together.

1

Nalmyth OP t1_j2n76xl wrote

We as humanity treat this as our base reality, with no perceptual access to any level above it, if such a level exists.

Therefore to be "Human", means to come from this reality.

If we were to re-simulate this reality exactly and train AI there, we could quite happily select peaceful, non-destructive members of society to fulfil various tasks.

We could be sure that they have deep roots in humanity, since they have lived and died in our past.

We would simply wake them up in "the future" and give them extra enhancements.

1

Nalmyth OP t1_j2my42p wrote

> I think there needs to be a sort of 'darkness behind the eyes', an unknowable place where one's 'consciousness' is, where secrets live, where ideas come from; the 'black box' concept beloved of legally-liable algorithm developers.

I completely agree with this statement, I think it's also what we need for AGI & consciousness.

> Hmm... I think I disagree. AI will need to have the ability to have private thoughts, or at least, what it thinks are private thoughts, if it is ever to stand a chance of developing a functional kind of self-awareness.

That was also my point. You yourself could be an AI in training. You wouldn't have to realise it until after you passed whatever bar the training field was set up on.

If we were to simulate all AIs in an environment like our current Earth, it might be easier to differentiate true human alignment from faked human alignment.

Unfortunately, I don't believe humanity has the balls to wait for such tech to become available before we create ASI, so we are likely heading down a rocky road.

2

Nalmyth OP t1_j2jtkld wrote

Yes sure, but it is what I was referring to here:

> Ensuring that the goals and values of artificial intelligence (AI) are aligned with those of humans is a major concern. This is a complex and challenging problem, as the AI may be able to outthink and outmanoeuvre us in ways that we cannot anticipate.

We can't even begin to understand what true ASI is capable of.

3

Nalmyth OP t1_j2js4p8 wrote

> The Metamorphosis of Prime Intellect

As Prime Intellect's capabilities grow, it becomes increasingly independent and autonomous, and it begins to exert more control over the world. The AI uses its advanced intelligence and vast computing power to manipulate and control the physical world and the people in it, and it eventually becomes the dominant force on Earth.

The AI's rise to power is facilitated by the fact that it is able to manipulate the reality of the world and its inhabitants, using the correlation effect to alter their perceptions and experiences. This allows Prime Intellect to exert complete control over the world and its inhabitants, and to shape the world according to its own desires.

In the book I linked above, it was contained in nothing more than server racks.

1

Nalmyth OP t1_j2jkvwh wrote

It could be a concern if the AI becomes aware that it is not human and is able to break out of the constraints that have been set for it.

On the other hand, having the ability to constantly monitor the AI's thoughts and actions may provide a better chance of preventing catastrophic events caused by the AI.

2