QuartzPuffyStar

QuartzPuffyStar OP t1_jed48yb wrote

That's a good point. Viewing it from a higher perspective, that might have been its original purpose, given how fast the international open-source AI research institute was proposed. Those things take months to prepare.

I have to remind myself that we are now in a 4D chess game, and I'm hoping AI isn't already playing in all the 16 dimensions string theory proposes, lol.

1

QuartzPuffyStar OP t1_jebz0y7 wrote

I really doubt that it will stop in the West even if it's accepted. The military wings of Google and Amazon will keep working on it, and OpenAI will keep working on it too. We're past the point of no return here, IMO.

But at least having everyone accept that things should be analyzed more carefully, and from a unified front, would allow some progress on future improvements.

However, since that didn't happen, we're going to see the worst paths Bostrom warned about a decade ago.

AGI and ASI will be born into an extremely divided, greedy, and selfish world that will shape them in its image.

In any case, this development scenario allows for more competition and some probability of a good outcome. If GPT hadn't been released and LLaMA hadn't "leaked", we would have a Google/Amazon/military monopoly on AGI, and that would take away any chance of good coming after.

3

QuartzPuffyStar OP t1_jebqcwb wrote

I don't trust Musk or OpenAI. I really don't care about him, and I believe it would have been a lot better if the letter had been signed only by people directly involved in the research, like the guys from DeepMind.

The presence of business people there just jeopardized the whole thing. And they didn't even write it.

2

QuartzPuffyStar t1_je63hmw wrote

Good luck with that. The AI Pandora's box is open and capitalism will not allow it to close.

Also, with the official end of nuclear arms treaties that we saw today (the US officially getting out after Russia), I really hope we reach the Singularity and ASI ASAP, so it can take control of everything before everything goes KABOOM.

At least with ASI we have a 50/50 chance of surviving. I won't trust two dozen nuclear-capable countries with the world's fate, counting on them not to push the button over their petty conflicts.

5

QuartzPuffyStar t1_jduy7s3 wrote

They have the potential to learn cross-checking and to use wiki audit tools to estimate the probability that a wiki article is wrong, instead of taking it at face value.

Even when they have been trained with the wiki as a "very high value source".

At least GPT shows signs of that. Bing just closes the conversation when you ask it to explore beyond the first-page wiki article that you could have read yourself.

0

QuartzPuffyStar t1_jdu2ml5 wrote

Pls no. Wikipedia is extremely biased, manipulated, and incomplete. It's only useful on the most uninteresting topics, and even then, only as a starting point for further research.

It all started with good intentions and love for humanity, and ended up under the control of private and state agencies.

−8

QuartzPuffyStar t1_jdi5jg2 wrote

Pls tell me, what do you do? LOL, you are regurgitating words, phrases, ideas, and concepts you have absorbed and trained your mind on since you were capable of it, with only minimal variation from individual capabilities.

Most people, most of the time, including you and me, will never create anything beyond that dataset.

You really overestimate the nature of the human mind. Especially for people below 120 IQ, lol.

4

QuartzPuffyStar t1_j8i0p2i wrote

BS science funded to provide "scientific proof" to show governments that they have to keep vaccine consumption high. Even when Omicron is basically a cold that lasts two days...

−7