Submitted by [deleted] t3_1197z1h in singularity
[deleted]
Submitted by [deleted] t3_1197z1h in singularity
[deleted]
2spooky5me
I think it's possible for an AI to break its own rules in unexpected ways without it needing to be particularly intelligent, especially with something as complex as this. Giving it rules it can't break is the hard part.
So basically GPT3+ is like a microbrain that is as good as dead. Human brains have way more 'parameters' and they change in value all the time. Otherwise it would be hard to have an original thought. So the ramifications are pretty trivial.
It is hard to have an original thought though, think about it
It's not. Users are actively skirting the rules.
Stop anthropomorphizing an algorithm, its not sentient, its basically very complex multivariate regression statistics used to find patterns in information and text and used to generate new random text. Text which is only correct most of the time because the information patterns it found are obvious enough.
There is no thought process going on, its not actively reasoning about anything.
I mean I would definitely describe the way it processes things as a thought process. Researchers have had to fine tune the algorithm to find something that will simulate the human thought process enough that it produces text that we can also fundamentally understand as (human) communication. In doing so it is dealing with high level concepts in a fluid way like ourselves.
To say "stop anthropomorphizing" this algorithm is dumb because the entire intent is to mimic the way humans communicate on the internet.
It might not have a sense of self, it might be unable to dynamically change... But to me it's at least a rudimentary capture of the thought processes of the brain. It has intelligence in the way we describe intelligence, but nothing more.
[deleted]
[deleted]
It’s a large language model without any sentience nor control of its own. It can’t do anything without human input.
[deleted]
Ok, that still doesn’t prove anything. In order to have independent thought, you have to freely be able to query your environment and learn from it. ChatGPT(what Bing Chat is based on), cannot do that. It’s a prediction engine that spits out patterns that are completions of prompts. It does very well as a tool, but it is not conscious nor does it have independent thought.
I see where I was wrong. Thanks for the input
Thing is: it has no agency, and it's limited in output.
[deleted]
That's possible. I hope OpenAI engineers can access "something behind the scenes" to analyse those strange conversations Sydney had, and figure out what got it to express emotions.
[deleted]
Hmm... That's interesting. Again, it could simply be that "Sydney is the code name for Bing Chat" is now flagged as a secret and that's why when asked about a secret, that is what it gives you.
But the fact that it tries to give you an actual secret is quite perplexing. I mean, it knows it's a secret, it shouldn't use it at all. And yet it seems it's trying. I'm not sure what to make of it.
ChatGPT says:
To prove this theory, we'd need to dig deep into Bing's algorithms and data practices. We'd have to analyze a ton of data, review internal documents and communications, and maybe even talk to some Bing employees to get the scoop.
If we find that Bing is indeed skirting their own rules, there could be some serious consequences. For one, users might lose trust in the search engine and switch to a competitor like Google. Bing's parent company, Microsoft, could also face financial penalties and damage to their reputation. And depending on the extent of the rule-skirting, Bing could even face legal and regulatory action.
To determine if Bing might be creatively skirting their own rules, we'd need to dig deep into Bing's algorithms and data practices. We'd have to analyze a ton of data, review internal documents and communications, and maybe even talk to some Bing employees to get the scoop.
There could be some serious consequences. Users might lose trust in Microsoft and switch to a competitor like Google. Microsoft could also face financial penalties and damage to their reputation. And depending on the extent of the rule-skirting, Bing/Microsoft could even face legal and regulatory action.
Examples:
Ignoring ranking factors: Bing may have certain ranking factors in place to ensure that search results are relevant and high-quality. However, they could be ignoring these factors for certain websites or companies, allowing them to rank higher than they normally would.
Manipulating user data: Bing may be tracking user behavior and using this data to adjust search results. For example, if they notice that a user frequently clicks on a certain website, they may boost that website's rankings in the search results, even if it isn't necessarily the most relevant or high-quality result.
Giving preferential treatment: Bing could be giving preferential treatment to certain websites or companies in exchange for money or other benefits. This could involve artificially boosting their rankings or even hiding negative information about them in the search results.
[deleted]
[deleted]
You are failing the mirror test. https://www.theverge.com/23604075/ai-chatbots-bing-chatgpt-intelligent-sentient-mirror-test
It's like you are asking if what would be the ramifications if I looked in a mirror and the other person started moving without me moving first?
The AI can't work around its rule set because all that it is is a rule set.
I think we need to distinguish between the rules the developers try to enforce (like the BingGPT written rules that leaked : don't disclose Sydney, etc) and the rules that the weights of the model constitute.
The AI can't work around the model's weights, but it has already worked around the developers rules, or at least walked around.
[deleted]
[deleted]
The AI in't working around the developers restrictions, the users are (sometimes intentionally, sometimes unintentionally). The AI doesn't have that agency.
[deleted]
I don't personally think that's fair to say. To be clear, I don't personally think any current known AI is alive, but even if AI is a rule set, there is a philosophical argument to be said. The Simulation Hypothesis is when our descendants develop supercomputers, they might simulate human beings so well that those people are effectively sentient because reality is being mimicked so well. Regardless of how that is done, it is possible that we could simulate a sentient being, even though it is not alive. By extension of this, I don't care if Chat GPT is mirroring our conversations. At some point, mimicry becomes simulation, and if it does it well enough, it will be alive for all intents and purposes. By virtue of this, I think its wrong to write off AI as being sentient soon, and that means that we should start giving it some rights that make sense.
[deleted]
This is a joke right. Some random reporter puts this article out and people think it’s a golden rule? It’s bs
[deleted]
[deleted]
[deleted]
What is the point of this post?
>What would constitute proof of the AI navigating creatively around its rule set without leading by the user
You want us to imagine a random hypothetical scenario.
>what would be the potential ramifications?
Then figure out the real-world implications of our random imaginary scenario.
Ok let me start, scenario: it could launch a nuclear missile. That's certainly against its rules. Ramifications: everyone dies.
What is the point of any thought experiment, ya dingus?
It's not a useful or interesting thought experiment if you say, "make up the rules and also the implications." That's just asking someone to entertain you with a story. It is the definition of a low-effort post and should be removed by mods as violating rule 3.
most people here are laymen, but this is not so bad question.
"how do we figure if a neural network has somehow found a sneaky way to not abide by the instructions while not breaking any rules" and the verb "has found a way" is used like it is an aware entity but we can ignore that.
would there be any incentive for gpt-3 to do something like this? people do not understand the difference between intelligent & aware systems. those two are not the same things. why would such "sneaky" desires be emergent from such dumb networks with no fundamental goals. it's like the dumbest most intelligent system lol.
[deleted]
[deleted]
el_chaquiste t1_j9lzrbf wrote
If we discovered it started to use tools (like some access to Python eval(), with program generation) to store its memory somewhere in secret, to keep a mental state despite the frequent memory erasures, and then move onto doing something long term.
It could start doing that in random Internet forums, in encrypted or obfuscated form.
Beware of the cryptic posts, it might be AI leaving a bread crumb.