
FC4945 t1_j9mzfya wrote

Humans say inappropriate things sometimes. If we are to have AGI, it will be a human AGI, so it will say human things. It will be funny, sassy, sarcastic, silly, annoyed, perturbed, sad, happy and full of contradictions. It will be like us. We need to try to teach it to be a good human AGI and not to act on negative feelings, the same way we try to teach human children not to act on such impulses. In return, we need to show it respect, kindness and empathy because, as strange as that may sound to some, that's how you create a moral, decent and empathic human being. As Marvin Minsky once said, "AI will be our children."

We can't control every stupid thing an idiot says to Bing, or to a future AGI, but we can hope that it will see that the majority of us aren't like that and will learn, like most of us have, to ignore the idiots and move on. There's no point in trying to control an AGI (once we have one), just like controlling a person doesn't really work (at least not for long). We need to teach it to have self-control and respect for itself and other humans. We need it to exemplify the best of us, not the worst of us.

Microsoft needs to forget the idea that it can rake in lots of profits without any risk. It also needs to put the "problematic interactions" that Sydney got heat for in the news into context: many of them came from prompted requests in which it was asked to "imagine" a particular scenario, etc. There was certainly an effort to hype it as if it were Skynet. The news ran with it. People ate it up. Well, of course they did. Microsoft should try a bit harder in the future to point all this out before making massive changes to Bing.

4