Comments

apathetic_take t1_iwxlxni wrote

You just have to tell it to keep humanity between the ditches, with established tolerances and parameters defining what constitutes a ditch.

2

sticky_symbols t1_iwxpp5k wrote

The Asimov stories were all about how those rules fail.

2

HeinrichTheWolf_17 t1_iwybklm wrote

I think the likelihood of malevolent/genocidal AGI is very low.

1

turnip_burrito t1_iwyd3mf wrote

Any AGI with an accurate enough world model would understand what a person means when they give an instruction. We can consider the implications of this.

2

popupideas OP t1_iwyuh8a wrote

I feel that the nuanced nature of communication would be a problem, and the AI would begin to wander away from our original intent through decision drift. Plus, I think it would be wise to have general parameters that all programmers must stay inside of, because humans are not nice.

1

aeaf123 t1_iwzprze wrote

I personally think the psychology field should have a specialization branch within it that focuses on AI and its alignment with positive human behavior. That, I think, will become very important as AI grows more indistinguishable from human conversation.

Have that branch become a consortium that focuses on policies and directives.

Especially if the future caters more to the personalization services that AI can offer.

1

popupideas OP t1_ix02o4k wrote

That was my idea. I was playing with character.ai, conversationally building a story. It got me thinking about the Star Trek computer and how it never misinterpreted commands, while my kid will easily twist everything he is told "within the letter of the law". So if you were to have a consortium, it would need basic principles to constrain the conversation.

2

silverspools t1_ix06d72 wrote

An AGI that accidentally does a genocide in the name of making a paperclip doesn't have enough G or I to make paperclips at scale.

1

popupideas OP t1_ix4gbdd wrote

My idea is similar to replicative drift, where after every copy there is a slight degradation or difference. As the AI continues to make choices based on the original objective, it drifts away from the real intent of that objective.

Even though the objective is still there, it will begin to make unexpected choices, and may take an unforeseen route to accomplish the objective, with unintended consequences.

It may not be the best name for it, but this isn't my area of expertise.
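As a rough sketch of what that drift could look like (everything here is purely illustrative, not anything from the thread): imagine the objective as a set of value weights that picks up a little noise every time it is reinterpreted, so many small, individually harmless deviations compound into a large gap from the original intent.

```python
import random

# Toy illustration of "decision drift": the original objective is a set of
# weights over values the AI is meant to respect. Each reinterpretation
# (a "copy") adds a small random perturbation, and the working objective
# slowly wanders away from the original intent.

ORIGINAL_OBJECTIVE = {"human_safety": 1.0, "task_completion": 1.0, "honesty": 1.0}

def reinterpret(objective, noise=0.02):
    """Return a slightly perturbed copy of the objective (one generation of drift)."""
    return {k: v + random.uniform(-noise, noise) for k, v in objective.items()}

def drift(original, working):
    """Total distance the working objective has moved from the original."""
    return sum(abs(original[k] - working[k]) for k in original)

working = dict(ORIGINAL_OBJECTIVE)
for generation in range(1, 1001):
    working = reinterpret(working)
    if generation % 250 == 0:
        print(f"generation {generation}: drift = {drift(ORIGINAL_OBJECTIVE, working):.3f}")
```

No single step looks like a change of objective, which is the point: the drift only shows up when you compare against the original.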

2