
Ortus14 t1_j2lpqj6 wrote

Simulated environments are good for training AI.

OpenAI uses AI to assist in solving the alignment problem as much as possible. So each time a more advanced AI is created, it is tasked with helping to solve the alignment problem.

I do not think there is only one way to align an AGI before takeoff, but it has to be aligned before it becomes more intelligent and general than humans.

2

Nalmyth OP t1_j2ltjg9 wrote

Looking at how they currently do it (manual lobotomising), I'm not sure they're really ready, or using AI to help as much as you think they are.

2

Ortus14 t1_j2luhse wrote

From their website: "Our approach to aligning AGI is empirical and iterative. We are improving our AI systems’ ability to learn from human feedback and to assist humans at evaluating AI. Our goal is to build a sufficiently aligned AI system that can help us solve all other alignment problems."

https://openai.com/blog/our-approach-to-alignment-research/

ChatGPT has some alignment in avoiding racist and sexist behavior, as well as many other human morals. They have to use some AI to help with that alignment, because there's no way they could manually teach it all possible combinations of words that are racist or sexist.
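To illustrate the point about combinations, here's a minimal toy sketch (entirely hypothetical, not OpenAI's actual method) showing why a hand-written blocklist can't scale, which is the reason a learned classifier gets used instead:

```python
# Toy example: static string rules only match exact phrasings,
# and the space of word orderings explodes combinatorially.
from itertools import permutations

BLOCKLIST = {"bad phrase"}  # hypothetical hand-written rule

def blocklist_flags(text: str) -> bool:
    """Return True if any blocklisted phrase appears verbatim."""
    return any(phrase in text.lower() for phrase in BLOCKLIST)

# The exact phrase is caught...
print(blocklist_flags("That is a BAD PHRASE."))  # True
# ...but a trivial reordering slips through:
print(blocklist_flags("That phrase is bad."))    # False

# Even a tiny 20-word vocabulary yields 20*19*18 = 6840 distinct
# 3-word orderings -- manual rules can't enumerate them all.
print(len(list(permutations(range(20), 3))))     # 6840
```

A learned model generalises across rewordings instead of matching strings, which is the kind of AI-assisted alignment the quoted blog post describes.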

2