
cy13erpunk t1_j4rjkys wrote

censorship is not the way

'turning off hate' implies that the AI is now somehow ignorant, but that's not what we want. We want the AI to fully understand what hate is, yet be wise enough to realize that choosing hate is the worst option; i.e. the AI will not choose a hateful action, because that is the kind of choice a lesser or more ignorant mind would make, not an intelligent/wise AI/human.


Cognitive_Spoon t1_j4s48c4 wrote

Best not to train it on zero sum thinking.

What I love about AI conversations is how cross discipline they are.

One second it's coding and networking, the next it's ethics, and the next it's neurolinguistics.


cy13erpunk t1_j4sy6qg wrote


you want the AI to be the apex generalist/expert in all fields. It is useful to be a subject-matter expert, but given the AI's vast potential, even when it is asked to be hyper-focused we still need/want it to draw on a broader understanding of how any narrow field/concept interacts with and relates to all other philosophies/modalities.

narrow knowledge corridors are a recipe for ignorance, i.e. tunnel vision


LoquaciousAntipodean t1_j4u7am6 wrote

Very well said, u/Cognitive_Spoon, I couldn't agree more. I hope cross disciplinary synthesis will be one of the great strengths of AI.

Even if it doesn't 'invent' a single 'new' thing, even if this hoped-for 'singularity' of divinity-level AGI turns out to be a total unicorn-hunting expedition (which is not necessarily what I think), the wisdom that might be gleaned from the new arrangements of existing knowledge bases that AI makes possible is already enough to blow my mind.