
SpaceAgeGekko t1_jczi7wt wrote

When looking into the mid and far future, it is hard to think of anything a robot can’t do that a human can, without defining that thing as “human made”. But something else worth thinking about is that there is likely a point at which an AI stops being a machine/tool and starts being a person with self awareness and legal protections. If we define humans (technologically enhanced or not), digital sapiences, and anything else that can demonstrate self awareness and is granted legal protection as legal persons, separate from AIs/machines without personhood, then it is likely there are some tasks only a ‘person’ could do.

The caveats here are:

1: It is possible to engineer an AI that perfectly matches or exceeds a human mind in all respects without making it self aware (and we would want philosophical and scientific evidence to prove that the machine is not self aware). In that case the AI would be considered a tool, and could do everything a human can (except make explicitly defined ‘human made’ goods).

2: The society in question (there could be multiple societies, each treating this topic differently) chooses not to grant legal protection to provably self aware AIs, and uses them as tools. These AIs could do anything a human could. (Many sci fi fans, myself included, view this as extremely similar to, and as morally evil as, human slavery.)

TL;DR: Assuming a sufficiently advanced machine could be self aware, an advanced and ethical society would consider such a machine to be a person, entitled to the same or similar legal protections that human workers have. Therefore there are some tasks only a person could do and a non-sapient machine could not.
