Viewing a single comment thread. View all comments

cwallen t1_j9lqoy3 wrote

You are failing the mirror test. https://www.theverge.com/23604075/ai-chatbots-bing-chatgpt-intelligent-sentient-mirror-test
It's like you are asking if what would be the ramifications if I looked in a mirror and the other person started moving without me moving first?

The AI can't work around its rule set because all that it is is a rule set.

1

duboispourlhiver t1_j9m8bh6 wrote

I think we need to distinguish between the rules the developers try to enforce (like the BingGPT written rules that leaked : don't disclose Sydney, etc) and the rules that the weights of the model constitute.

The AI can't work around the model's weights, but it has already worked around the developers rules, or at least walked around.

10

cwallen t1_j9mzemy wrote

The AI in't working around the developers restrictions, the users are (sometimes intentionally, sometimes unintentionally). The AI doesn't have that agency.

0

Hunter62610 t1_j9m7fcf wrote

I don't personally think that's fair to say. To be clear, I don't personally think any current known AI is alive, but even if AI is a rule set, there is a philosophical argument to be said. The Simulation Hypothesis is when our descendants develop supercomputers, they might simulate human beings so well that those people are effectively sentient because reality is being mimicked so well. Regardless of how that is done, it is possible that we could simulate a sentient being, even though it is not alive. By extension of this, I don't care if Chat GPT is mirroring our conversations. At some point, mimicry becomes simulation, and if it does it well enough, it will be alive for all intents and purposes. By virtue of this, I think its wrong to write off AI as being sentient soon, and that means that we should start giving it some rights that make sense.

4

Present_Finance8707 t1_j9mx6m2 wrote

This is a joke right. Some random reporter puts this article out and people think it’s a golden rule? It’s bs

2