run_zeno_run t1_irsuwjb wrote

Because the dominant paradigm among professionals working in the brain-cognitive sciences rests on the presumption that what we call "consciousness" is just the first-person, attentive awareness of an emergent computational process arising out of sufficiently developed and tightly coupled, massively parallel, physically embedded information-processing systems. Given this definition, they assume it's only a matter of having enough computational capacity (e.g., memory/processing), proper perception/actuation/control modules (even if completely simulated/abstract), and a correctly programmed learning/cognitive algorithm (or set thereof), and you'll get some type of conscious agent (though not necessarily one similar to a human, or to any other biological conscious agent for that matter).

FWIW, and I do have what I at least think are well-thought-out (if speculative) reasons rather than just a gut reaction, I don't believe the consensus has a complete model. That doesn't mean it is completely wrong, but it is incomplete, and what it leaves out, though subtle from our current vantage point, I think involves hugely important aspects of reality we don't yet have any real understanding of. The really interesting thing about rejecting the consensus is that pretty much any alternative to it presupposes some radical modifications to our physicalist worldview, which I also support, though with the honesty to admit that thinking seriously in that direction is fraught with epistemic hazards.

Finally, a major consequence of rejecting the consensus in this way is recognizing the ultimate inadequacy of current computational approaches to AGI. Rejecting the belief that consciousness will eventually emerge from merely algorithmic processes, and also rejecting the belief that super-intelligence doesn't need consciousness at all (paperclip maximizers), puts a hard limit on what types of behavior can be exhibited by Turing machines as we conceive of them now, and foils the plans for any near-term singularity based on those ideas. I do think the current trajectory of AI will still be very disruptive, but mostly from a socioeconomic and political perspective, as the exponential increase in automation and autonomous systems will drastically increase power/wealth inequality and destabilize the social order in unthinkable ways... unless we change that order first, that is.
