Submitted by Hot-Pea1271 t3_1268aqv in Futurology
> It's interesting how ChatGPT-4 agrees with most of the article.
ChatGPT does not agree or disagree with anything. It spews statistically probable words given a long context and the corpus it's been optimized on.
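A toy illustration of "statistically probable words given a context": the bigram model below (a deliberate oversimplification; real LLMs use transformers over subword tokens, and the corpus here is made up) just counts which word follows which and samples proportionally.

```python
import random
from collections import defaultdict

# Tiny hypothetical corpus; any text would do.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word.
counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def next_word(prev):
    """Sample the next word in proportion to how often it followed `prev`."""
    options = counts[prev]
    words = list(options)
    weights = [options[w] for w in words]
    return random.choices(words, weights=weights)[0]

print(next_word("the"))  # "cat", "mat", or "fish", weighted by frequency
```

No agreement or disagreement anywhere, just frequency-weighted sampling; scale the same principle up by a few billion parameters and you get fluent-sounding output.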
Man I can't wait for adversarial attacks to make people understand that this is a text generator, not an AI oracle.
I know. I was thinking that if ChatGPT responds that way, it's because the vast majority of people think so. After all, it was trained on a large corpus of data that contains, among many other things, what people think about the future of artificial intelligence.
It's really depressing.
What was the prompt used? Prompts can manipulate the output. I can tell from the output that the OP asked more than a simple question and prompted for a specific type of response: "write an essay agreeing with this article… (paste article)".
I suspect certain topics like this one have been seeded with a very curated set of training articles.
There's this video on Computerphile where they talk about how you can program the AI so that its output is mathematically traceable to being created by AI. The premise was to prevent cheating by students. And my first thought was "Well, so they're just gonna wait till someone abroad offers it without this feature".
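A simplified sketch of how that kind of watermark can work (this is an assumption about the general scheme, not the exact one from the video, and the key and helper names are made up): a secret key plus the previous word deterministically marks half the vocabulary "green"; the generator prefers green words, and anyone with the key can check whether green words are suspiciously frequent.

```python
import hashlib

KEY = b"secret"  # hypothetical shared key held by the detector

def is_green(prev_word, word):
    """Keyed hash of (previous word, candidate word) puts ~half the
    vocabulary on the 'green list' for any given context."""
    digest = hashlib.sha256(KEY + prev_word.encode() + word.encode()).digest()
    return digest[0] % 2 == 0

def green_fraction(text):
    """Fraction of word transitions that landed on the green list."""
    words = text.split()
    flags = [is_green(p, w) for p, w in zip(words, words[1:])]
    return sum(flags) / len(flags)
```

Ordinary human text should hover near 0.5 green; text from a generator that always picked green continuations sits near 1.0, which a statistical test can flag. Which is exactly why the commenter's worry holds: a model trained elsewhere, without the biased sampling, simply never carries the signal.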
You can always ask it: “Is the above response hardcoded?”.
"...now write the same response but with more Unicorns and Aliens"
We can’t even collectively agree to stop burning down the only house we can live in because the people with the power benefit while the rest of us deal with the consequences. I feel like it will go exactly the same way with AI.
I read a headline earlier that firms are actively recruiting “AI whisperers” to better hone the responses for AI and for AI users, and paying huge salaries to those people.
The cat is already out of the bag. Effective regulations should have been in place already, but governments are famously reactive, as opposed to proactive.
So, like most things in this world, the rich will get richer from it, and everyone else will have to deal with the consequences of it when it goes to shit.
Regulation will make certain uses of AI illegal and punishable. Just like murder is still possible despite being illegal, but the law certainly helps prevent murder by making clear that committing it will have consequences.
Why would you regard the Chinese as the ones with moral and ethical bankruptcy and not the other way round?
The problem with chat gpt at the moment is that if you reformulate the response, it'll say something opposite.
What’s like worst case scenario? Like y2k but things actually stop working?
The Pirate BAI
You’re hired!
It's an ultimate politician.
Ding ding, we have a winner! Any time you see these things without the whole prompt stream leading up to them, you should just assume shenanigans. Some will still fake the prompts they show, but at least it proves they're putting in the effort.
Which is why, on top of regulations on how AIs may be designed, there also need to be AIs able to detect whether something was AI-created. You can't trust that everyone will play by the rules (if anyone expects that, they really haven't been paying attention), so our only hope is trying to detect when someone isn't.
Last time I checked the Chinese are deporting their Uyghur population to concentration camps. It's hard to get more morally and ethically bankrupt.
The prompt was the article itself, which I read in its Spanish version (available here: https://www.lanacion.com.ar/sociedad/hacker-el-sistema-operativo-de-la-humanidad-por-que-la-inteligencia-artificial-podria-devorar-nid29032023/).
I then translated the response into English in order to publish this post.
Yeah, this train has no brakes and if they stop no one else will.
The irresponsible thing would be to cease advancing it and let less responsible people take the forefront.
lonely40m t1_je83nie wrote
The problem with regulation is that it only extends so far. Do you think the Chinese AI will be developed with the same ethical considerations? The cat is out of the bag, you can't put it back in. People with terrible ideas are going to train their own AI models to do unethical things and there's basically nothing we can do about it anymore except prepare for whatever may come our way.