Significant_Pea_9726 t1_j8s5odt wrote
Reply to comment by chrisjinna in Bingchat is a sign we are losing control early by Dawnof_thefaithful
It really doesn’t matter whether there is “actual thought” behind the scenes. If it can sufficiently imitate human behavior, then we may have a significant problem if/when a GPT model gains access to, and sufficient competency in, domains beyond chat and the other currently limited use cases.
Significant_Pea_9726 t1_j8rwpsa wrote
Reply to comment by Baturinsky in Bingchat is a sign we are losing control early by Dawnof_thefaithful
I don’t think that question affects OP’s point. Either way, an extremely powerful AI system that is unaligned would be problematic.
Significant_Pea_9726 t1_j9oyrvf wrote
Reply to comment by Coderules in How long do you estimate it's going to be until we can blindly trust answers from chatbots? by ChipsAhoiMcCoy
Right. There is no “100% accuracy”. And for every question that has an easy “correct” answer, there are a thousand that depend on at least a modicum of context and assumptions.

E.g., seemingly obvious geometry questions have different answers depending on whether we’re talking about Euclidean or non-Euclidean geometry.

And “is Taiwan its own country?” cannot, in principle, have a 100% “correct” answer.