RedditFuelsMyDepress t1_jdxs0d6 wrote

A smart robot probably would recognize itself in a mirror, but I don't think that's really enough to prove it's conscious the same way we are. The problem is that everyone experiences the world through their own body, so we can't truly put ourselves in someone else's shoes and see and feel what they do. There's no way for me to even know for certain that other humans are conscious; I can only assume that based on us being the same species. A robot may have the appearance of being conscious, but it could be fake. Like a marionette pulled on strings by its programming, or like a character in a story, except that this character is being written in real-time by computer algorithms based on things happening around it. Someone might argue that humans are similar to that too, but the point is that puppets and fictional characters aren't conscious even though they may appear to be, and a robot could be the same way.

I think we'd have to do more research into how the brain and the electrical signals in our bodies work before we could determine whether a machine is conscious.

1

RedditFuelsMyDepress t1_jdxgkvb wrote

By consciousness I just mean the subjective experience of self, like the old saying "I think, therefore I am". I can feel and experience the world through my own body, and I can assume other people do as well since they're humans just like me (unless I'm living inside a simulation or something and none of you are real). But how do we know a non-biological machine can experience the same thing?

1

RedditFuelsMyDepress t1_jdwpkfj wrote

>AI can't distinguish these things.

I'm not sure how true that is though. Even GPT-3 would actually take the context of the whole conversation into account, not just the most recent sentence, when I asked it something.

Hard to say how well it would handle itself in a real-world environment though, since it's just a chatbot atm.
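
For what it's worth, the "memory" isn't anything mysterious: with the chat-style APIs the client just resends the entire conversation on every request, so the model sees the full context each time. A minimal sketch, assuming the OpenAI Python client (the model name and prompt contents are just placeholders):

```python
# Minimal sketch: the model itself is stateless between calls; "context"
# exists only because the client resends the whole history every turn.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

history = [{"role": "system", "content": "You are a helpful assistant."}]

def ask(user_text: str) -> str:
    history.append({"role": "user", "content": user_text})
    reply = client.chat.completions.create(
        model="gpt-3.5-turbo",  # placeholder model name
        messages=history,       # the ENTIRE conversation so far, every turn
    )
    answer = reply.choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    return answer
```

Drop the history and the "context awareness" disappears, which is why earlier one-shot completion setups felt like they only saw your last sentence.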

1

RedditFuelsMyDepress t1_jdvzdtf wrote

Well, I tried asking ChatGPT the coffee question and this is what it said:

"As an artificial intelligence language model, I do not have personal preferences or opinions. However, I can provide some general information about coffee.

Coffee is a popular beverage enjoyed by millions of people around the world. It is made by brewing roasted coffee beans, which contain caffeine, a natural stimulant that can help increase alertness and reduce fatigue. Coffee also contains antioxidants and other beneficial compounds that may offer health benefits when consumed in moderation.

However, it is important to note that excessive consumption of coffee can have negative effects on health, such as increasing the risk of insomnia, anxiety, and digestive issues. Additionally, some people may be more sensitive to the effects of caffeine than others, and may experience negative side effects even with moderate consumption.

Overall, whether or not to consume coffee is a personal choice that depends on an individual's preferences, health status, and caffeine tolerance. It is always a good idea to consult with a healthcare professional to determine if coffee consumption is right for you."

In that first paragraph it does acknowledge the intent of your question, but it just says it isn't able to answer. The facts about coffee that it spits out are, I believe, just part of the directives given to ChatGPT.
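
My guess (and it's only a guess) is that those directives amount to a system message, plus fine-tuning, sitting on top of the model. A rough sketch of what that steering could look like, again assuming the OpenAI Python client; the system prompt text here is invented for illustration:

```python
# Hedged sketch: a system message steering the model toward the kind of
# "no personal preferences" disclaimer seen above. Prompt text is made up.
from openai import OpenAI

client = OpenAI()

reply = client.chat.completions.create(
    model="gpt-3.5-turbo",  # placeholder model name
    messages=[
        {"role": "system",
         "content": ("You are an AI language model. You have no personal "
                     "preferences or opinions; when asked for one, say so, "
                     "then offer general factual information instead.")},
        {"role": "user", "content": "Do you like coffee?"},
    ],
)
print(reply.choices[0].message.content)
```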

1

RedditFuelsMyDepress t1_ixx2yxj wrote

I guess my confusion is: how would they change the village based on AI behavior if that AI isn't always being simulated in the background? Obviously they don't need to render it graphically when you're not there, but you'd expect the village to have changed when you leave and come back. Could they just quickly run the simulation, or "predict" what changes would have occurred, when the chunk is loaded in? I'm genuinely asking, because I don't know how this stuff usually works.
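
The way I'd imagine it working, purely as a sketch (every name here is made up), is "catch-up on load": store a timestamp for the unloaded village and fast-forward a coarse model of it in a few big steps when the player returns, instead of ticking it every frame:

```python
# Hypothetical "catch-up on load" sketch; all names invented for illustration.
import time

class Village:
    def __init__(self):
        self.food = 100.0
        self.population = 10
        self.last_update = time.time()

    def advance(self, dt):
        # Coarse, cheap update rules standing in for the full NPC AI.
        self.food += self.population * 0.5 * dt   # villagers farm...
        self.food -= self.population * 0.4 * dt   # ...and eat
        if self.food > 50 * self.population:
            self.population += 1                  # surplus -> growth

    def on_chunk_loaded(self):
        # Fast-forward the off-screen time in big steps instead of
        # replaying every frame that was skipped.
        elapsed = time.time() - self.last_update
        step = 60.0  # one-minute increments, arbitrary choice
        while elapsed > 0:
            dt = min(step, elapsed)
            self.advance(dt)
            elapsed -= dt
        self.last_update = time.time()
```

The coarse rules only have to produce plausible outcomes, not match the full AI exactly, since the player never watches them run.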

3