
genjitenji t1_izhq47k wrote

On the topic of A.I. sentience, I think the true test of sentience will be if an A.I. can decide to do something for fun, in other words, for no productive reason at all.


alvenestthol t1_izj47pa wrote

Human fun is productive, though - AIs definitely pass through a lot of locally optimal states during training that don't necessarily correspond to what we think we're training them for, and I think that's pretty close to what humans consider "fun".
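As a toy illustration (my own sketch, nothing to do with any real training run): even plain gradient descent on a bumpy loss will happily settle into a nearby local optimum rather than the behaviour the trainer "intended":

```python
import numpy as np

def loss(x):
    # bumpy 1-D loss: global minimum near x ~ 2.4, a worse local minimum near x ~ -3.6
    return 0.1 * (x - 3) ** 2 + np.sin(2 * x)

def grad(x, eps=1e-5):
    # numerical gradient, good enough for a toy
    return (loss(x + eps) - loss(x - eps)) / (2 * eps)

x = -3.0  # unlucky initialisation, far from the global minimum
for _ in range(200):
    x -= 0.05 * grad(x)

print(f"settled at x = {x:.2f}, loss = {loss(x):.3f}")
# ends up in the local minimum around x ~ -3.6 (loss ~ 3.6),
# not the global one near x ~ 2.4 (loss ~ -0.96)
```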

Or we can take a trained AI and just give it resources to run on - from our side it looks like we're the ones having fun watching the AI do stuff while the AI is just being productive, but to the AI that's probably the closest analogue to having fun.

An AI doesn't have the physio-chemical systems of stress and emotion that humans have to regulate alongside whatever goal they're pursuing. It's possible to make an AI simulate such a system, but it's entirely up to the programmer whether the AI can genuinely be stressed (rather than just, say, imitating a stressed person's writing) or whether that gets optimized away in favour of letting it do its best work. Genuine stress would necessitate some way of de-stressing, which could come in the form of "fun"; but then is there any point in foisting the human concept of "fun" onto an AI, when the AI could have been built to simply make itself happy from the start?
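If a programmer did want to bolt a stress signal on, a minimal sketch (purely hypothetical names and numbers, not how any real system works) might look like this - the agent accrues "stress" from workload and can only discharge it through "relief", its analogue of fun:

```python
from dataclasses import dataclass

@dataclass
class AgentState:
    stress: float = 0.0  # simulated internal signal, starts relaxed

def step_reward(task_reward: float, workload: float, state: AgentState,
                relief: float = 0.0, stress_weight: float = 0.5) -> float:
    """Task reward minus a penalty for accumulated 'stress'.

    `workload` raises stress, `relief` lowers it. Whether this term
    exists in the objective at all is a design choice; an optimizer
    would happily drop it if we let it.
    """
    state.stress = max(0.0, state.stress + workload - relief)
    return task_reward - stress_weight * state.stress

# Usage: an agent that never takes "fun" (relief) actions watches its
# effective reward erode even though its raw task reward stays constant.
state = AgentState()
for t in range(5):
    print(round(step_reward(task_reward=1.0, workload=0.3, state=state), 2))
```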
