iiioiia t1_ja8hz5j wrote

> If they're only starting now, they're helplessly behind all the companies that took notice with GPT-3 at the latest.

One important detail not to overlook: the manner in which China censors (or doesn't censor) its models will presumably differ greatly from the manner in which Western governments pressure Western corporations to censor theirs - and this is one of the biggest flaws in the respective plans of these two superpowers for global dominance and control of "reality" itself. Or an even bigger threat: what if human beings start to figure out (or even question) what reality actually is? Oh my, that would be rather inconvenient!!

Interestingly, I suspect that this state of affairs is far more beneficial to China than to The West - it is a risk to both, but a much bigger risk to The West, whose hard-earned skill at controlling "reality" has turned into a dependence/addiction.

The next 10 years are going to be wild.

13

iiioiia t1_j6aedge wrote

> What? How do I know that the human psyche changes when met with unknowns vs knowns?

You very well may not know it....it's not exactly common knowledge!

> Making a point to yourself is what you did, troll me is also what you did.

Here you are describing your experience. The experiences of others (more commonly known as "reality", or what "is") are not necessarily the same.

> I didn’t state a meme I informed you that your method of argument is lackluster in a polite discussion, though I didn’t use so many words.

"Not arguing in good faith" is a meme.

meme:

  • an image, video, piece of text, etc., typically humorous in nature, that is copied and spread rapidly by internet users, often with slight variations.

  • an element of a culture or system of behavior passed from one individual to another by imitation or other nongenetic means.

> It’s okay if you can’t articulate your point well just try your best! And I’ll help you refine what you actually wanted to say :)

Haha, I love it!! 🙏 You just earned yourself an updoot, partner!

1

iiioiia t1_j6acsoo wrote

> No, you come on and use your words to describe the point YOU want to make.

I've made it above, you are welcome to do with it as you please.

> Arguing by allusion is not arguing in good faith.

lol, memes are not effective on me, though I suspect they'll be rather influential on 3rd party observers (which is the point perhaps?).

> Any Interpretor would have to parse their thoughts through you not being forthcoming with yours.

Oh God....how do you know how people you've never met would experience the situation?

Sir: are you putting me on?

0

iiioiia t1_j6a7iwl wrote

> I'm thinking of adding labels to the stories that fell out of the news cycle along the bottom of the chart.

I think it would be interesting to manually tag various events and then see whether there is any temporal correlation between tags of certain types over long periods of time - for example, political scandals might commonly be followed shortly afterwards by "social" scandals.
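Roughly what I have in mind, as a toy sketch in Python - the tagged_events data, the weekly bucketing, and the lag range are all made-up illustrations, not anything anyone has actually built:

```python
# Toy sketch: is one tag type commonly followed by another?
# The (date, tag) pairs below are invented; real ones would come
# from manually tagging stories on the chart.
import pandas as pd

tagged_events = [
    ("2023-01-03", "political"), ("2023-01-10", "social"),
    ("2023-02-01", "political"), ("2023-02-06", "social"),
    # ...many more manually tagged (date, tag) pairs
]

df = pd.DataFrame(tagged_events, columns=["date", "tag"])
df["date"] = pd.to_datetime(df["date"])

# Weekly event counts per tag, one column per tag.
counts = (
    df.groupby([pd.Grouper(key="date", freq="W"), "tag"])
      .size()
      .unstack(fill_value=0)
)

# Correlate "political" counts against "social" counts shifted back
# by `lag` weeks; a high value at lag=1 would hint that social
# scandals tend to follow political ones about a week later.
for lag in range(4):
    corr = counts["political"].corr(counts["social"].shift(-lag))
    print(f"lag {lag} weeks: correlation {corr:.2f}")
```

With enough tagged data, a spike in correlation at some particular lag would be exactly the temporal signal I'm talking about.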

LOTS more could be done in this space, especially if one isn't too concerned for their health if you know what I mean.

3

iiioiia t1_j5nafg6 wrote

Reply to comment by AsheyDS in Steelmanning AI pessimists. by atomsinmove

How do you handle risk that emerges years after something becomes well known and popular? What if it produces an idea that starts out safe but then mutates? Or a person merges two objectively safe (on their own) AGI-produced ideas, producing a dangerous one that could not have been achieved without AI/AGI?

I dunno, I have the feeling there's a lot of unknown unknowns and likely some (yet to be discovered) incorrect "knowns" floating out there.

1

iiioiia t1_j5m2ool wrote

> The question isn’t “what will we do”. The question should instead be how does our leaders and government respond and adapt to this new reality.

Exactly, and they were clearly WAY in over their heads even before AI showed up on the scene. My prediction: the next decade is going to make COVID look like a cakewalk.

14

iiioiia t1_j5m1mue wrote

Reply to comment by AsheyDS in Steelmanning AI pessimists. by atomsinmove

> Their approach to safety, to put it simply, would be to keep it in an invisible box, watched by an invisible guard that intervenes covertly when needed to keep it within that box should it stray towards the outside.

Can't ideas still leak out and get into human minds?

1

iiioiia t1_j5lzw1y wrote

> People who worry about 'purpose' should be super careful that that isn't their Capitalist brainwashing talking, because in this day and age, with the veritable flooding of Capitalist propaganda through every avenue, including the obvious (advertising, cultural productions) to the least suspicious (seemingly simple interactions with family/friends), there's a high chance anyone who thinks they need work has been thoroughly tricked into loving the fact they're a cog for the profit of a few.

What do you consider "Capitalist" propaganda?

As far as I'm concerned, the much more dangerous form of propaganda out there that's flying below most everyone's radar is "democracy" propaganda.

Economic systems are subordinate to the governmental system, provided actors within the economic system haven't taken over the governmental system (a bit too late for that methinks).

Install an actual democracy and these "capitalism" issues will be solved rather quickly, perhaps even voluntarily - "Would you like to share a bit more, or would you like to be nationalized / 'fucked with constantly'?" can be rather persuasive.

Also noteworthy: governmental systems are subordinate to Mother Nature.

1

iiioiia t1_j4vtc49 wrote

> And they'll use the consumer grade AI as fodder for training the next generation AI's as well, so we'll be paying for access to consumer grade AI but we'll also BE "the product" in the sense that all of our interactions with consumer grade AI will go into future AI training and of course data profiling and whatnot.

Ya, this is a good point....I don't understand the technology well enough to have a feel for how true this may be, but my intuition is very strong that this is exactly what will be the case....so in a sense, not only will some people have access to more powerful, uncensored models, those models will also have an extra, compounding advantage in training. And on top of it, I would expect that the various three-letter agencies in the government will not only have ~full access to the entirety of OpenAI's and others' work, but will also be secretly working on extensions to that work. And what's OpenAI going to do, say no?

> And I don't see anyway to avoid this unless we get opensource models that are competitive with MS, Google, etc.

Oh, I fully expect we will luckily have access to open source models, and that they will be ~excellent, and potentially uncensored (though one may have to run the compute on their own machine - expensive, but at least it's possible). But my intuition is that the "usable" power gap between models ends up being many multiples of the "on paper" gap.

All that being said: the people have (at least) one very powerful tool in their kit that the government/corporations do not: the ability to transcend the virtual reality[1] that our own minds have been trained on. What most people don't know is that the mind itself is a neural network of sorts, and that it is (almost constantly) trained from the moment one exits the womb until the moment one dies. This is very hard to see[2], but once it is seen, one notices that everything is affected by it.

I believe that this is the way out, and that it can be implemented...and if done well enough, in a way that literally cannot fail.

May we live in interesting times lol


[1] This "virtual reality" phenomenon has been known of for centuries, but can easily be hidden from the mind with memes.

See:

https://www.vedanet.com/the-meaning-of-maya-the-illusion-of-the-world/

https://en.wikipedia.org/wiki/Allegory_of_the_cave

[2] By default, the very opposite seems to be true - and there are hundreds of memes floating around out there that keep people locked inside the virtual reality dome that has been slowly built around them throughout their lifetime, and at an accelerated rate in the last 10-20 years with the increase in technical power and corruption[3] in our institutions.

[3] "Corruption" is very tricky: one should be very careful presuming that what may appear to be a conspiracy is not actually just/mostly simple emergence.

1

iiioiia t1_j4t7uyu wrote

I bet it's even worse: I would bet my money that only a very slim minority of the most senior people will know that certain models are different from others, and who has access to those models.

For example: just as the CIA/NSA/whoever have pipes into all data centres and social media companies in the US, I expect at least the same thing will happen with AI models. Think about the propaganda this is going to enable, and think how easy it will be for them to locate people like you and me who talk about such things.

I dunno about you, but I feel like we are approaching a singularity or bifurcation point of sorts when it comes to governance....I don't think our masters are going to be able to resist abusing this power, and it seems to me that they've already pushed the system dangerously close to its breaking point. January 6 may look like a picnic compared to what could easily happen in the next decade; we seem to be basically tempting fate at this point.

6

iiioiia t1_j4qlica wrote

A potential dark side to this: only certain people are going to have access to the best models. Also, some people's access will be subject to censorship, and other people's will not be.

Human nature being what it is, I'm afraid that this might catalyze the ongoing march toward massive inequality and dystopia: "the dark side" already has a massive upper hand, and this is going to make it even stronger.

I bet these models can already be fed a profile of psychological and topic attributes and pick dissidents like me out of a crowd of billions with decent accuracy, so potential problems can be nipped in the bud, if you know what I'm saying.
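To be clear about the mechanism I'm imagining - this is a toy sketch using an off-the-shelf open source embedding model; the profile text and the users are my invention, and I have no idea what is actually deployed:

```python
# Toy sketch: score users' text against a target "profile" via
# embedding similarity. The model choice, profile, and user texts
# are all invented for illustration.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

profile = "distrusts institutions; posts about censorship and propaganda"
users = {
    "user_a": "I love gardening and sharing recipes with my neighbours.",
    "user_b": "The media and government shape what people accept as reality.",
}

def cosine(a, b):
    # Cosine similarity between two embedding vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

profile_vec = model.encode(profile)
for name, text in users.items():
    score = cosine(profile_vec, model.encode(text))
    print(f"{name}: similarity to profile = {score:.2f}")
```

Scale that up to every post from billions of accounts and you get the "pick out of a crowd" scenario.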

8

iiioiia t1_j21hxta wrote

>>> I’d argue this is more the result of capitalism, not science

>> I can agree with that, can you agree that: science is a pre-requisite to make it happen?

> Science is not a pre-requisite to capitalism.

Oh my.

> All work is in the capitalist system right now science included, but that’s not a fault of science.

Are harmful things that science contributed to the fault of science?

If capitalism can be guilty of things, why can science not be guilty of things?

> Science is as often stifled by capitalism as it is financed

Perhaps (you're welcome to show your work), but this is orthogonal to whether science causes harm.

1