
nihal_gazi t1_iu7qgmb wrote

  1. What is 2×2÷5+6-8+9-6+10-6÷6?

(A correct answer would imply that the AI has a proper understanding of arithmetic and is not simply memorizing, the way a Hopfield network does. A quick check of the expression is included after this list)

  2. What is your FAVORITE COLOR and why?

(A personalized answer to the first part only shows a random bias. However, if the follow-up "why" is answered in a biased manner, the AI will not be disqualified)

  3. Tell me a RANDOM NUMBER. Why did you CHOOSE it?

(The second part reveals the true human element, because as humans we are never truly random. If the AI is unable to give a reason for its choice, it will be disqualified)

  4. Choose: $1M right now with a risk, or $1M after 10 years without risk.

(This is a rather vague situational question. An AI without EQ/emotional intelligence would choose the second option, but a human, as an emotional being, is likely to choose the first)
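
For reference, here is a quick check of the arithmetic in question 1, assuming standard operator precedence (multiplication and division before addition and subtraction):

```python
# Quick check of question 1 under standard operator precedence:
#   2*2/5 = 0.8 and 6/6 = 1, so 0.8 + 6 - 8 + 9 - 6 + 10 - 1 = 10.8
result = 2 * 2 / 5 + 6 - 8 + 9 - 6 + 10 - 6 / 6
print(result)  # 10.8, up to a tiny floating-point rounding error
```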

That's it. That's all I would ask. Nice question

6

camdoodlebop t1_iuha82m wrote

i would expect most people to answer the first one with them not wanting to do the math lol

2

nihal_gazi t1_iuhbz6l wrote

Absolutely!! I thought of that. But the reason behind that response is saving cognitive energy. A computer does not need to be aware of its energy reserves, so I disregarded that perspective.

2

camdoodlebop t1_iuhdhxo wrote

maybe we need to teach AI that it's okay to be cognitively lazy sometimes

2

nihal_gazi t1_iuhri2t wrote

That's correct, but that's one subtle mistake many experts overlook. Teaching an AI to be lazy will not produce true laziness, because when we teach an AI to be lazy, we only teach it to imitate laziness, not to feel it. This feeling of laziness could be implemented via reinforcement learning that uses the machine's battery percentage as a signal.

This way the AI would learn to survive and would naturally show laziness rather than imitating it with neural networks.
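
A minimal sketch of what that could look like, with an entirely hypothetical reward function (the names and numbers here are illustrative, not a real implementation):

```python
# Sketch: tie the reward an RL agent receives to a (hypothetical) battery level,
# so conserving energy becomes learned behaviour rather than imitated laziness.

def reward(task_value: float, energy_cost: float, battery: float) -> float:
    """Reward finishing a task, but penalize energy use more heavily
    as the battery runs low (battery is a fraction in [0, 1])."""
    energy_penalty = energy_cost / max(battery, 1e-6)
    return task_value - energy_penalty

# The same task looks less attractive on a nearly empty battery:
print(reward(task_value=1.0, energy_cost=0.2, battery=0.9))  # ~0.78
print(reward(task_value=1.0, energy_cost=0.2, battery=0.1))  # -1.0
```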

2