
atremblein t1_j05ced8 wrote

Due to the reductionist nature of the binary operations through which most computation is done, it is simply not possible for an AI to be malevolent except insofar as it was trained to be so by its data. A fusion of analog and binary computation would be needed for an AI to have enough consciousness to be considered capable of malevolence.

As an example, I was talking to OpenAI's ChatGPT and could easily get it to contradict itself by constraining the amount of information it was considering. Otherwise, it likes to make everything sound like there is no truth, as if the nature of reality cannot be quantified. This is obviously problematic because it means humans have become too biased to discern the truth. This is why we have things like the evolutionary anomalies of the Omicron variants. These things evolved right in front of us to the point where our own vaccines don't even work. So, basically, humans are really stupid and don't understand anything, and all we can do is keep trying.