
Standard_Ad_2238 t1_j9kzqrb wrote

What really is funny in this whole "controversy" regarding AI is that what you have just said applies to EVERY new technology. Every one of them also brings a bad side that we have to deal with, from the advent of cars (which brought a lot of accidents with them) to guns, Uber, even the Internet itself. Why the hell are people treating AI differently?

21

EndTimer t1_j9l48jm wrote

Because people doing bad things on the internet is a half-solved problem. If you're a user on a major internet service, you vote down bad things or report them. If you're the service, you cut them off.

Now we're looking at a service generating the bad things itself if given the right prompt. And it's a force multiplier. You can say something bad a thousand ways, or create fake threads to gently nudge readers toward the views you want. And if you're getting buried by the platform, you can ask the AI to make things slightly more subtle until you find the perfect way to fly beneath the radar.

You can take up vastly more human moderator time. Sure, we could let AI take up moderation, but first, is anyone comfortable with that, and second, how much electricity are we willing to burn on bots talking to each other and moderating each other and trying to subvert each other?
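
For what that second question would even mean in practice, here's a minimal sketch of AI-in-the-loop moderation, assuming OpenAI's moderation endpoint and their Python SDK; the function name and action strings are made up for illustration, not a real deployment:

```python
# Minimal sketch of AI-in-the-loop moderation, assuming the openai
# Python SDK (v1+) with OPENAI_API_KEY set in the environment.
# handle_comment() and its returned action strings are hypothetical.
from openai import OpenAI

client = OpenAI()

def handle_comment(text: str) -> str:
    """Run a comment through the moderation model and pick an action."""
    result = client.moderations.create(input=text).results[0]
    if result.flagged:
        # Collect the categories that tripped, e.g. "hate", "harassment".
        tripped = [name for name, hit in result.categories.model_dump().items() if hit]
        return f"remove ({', '.join(tripped)})"
    return "allow"

print(handle_comment("Have a nice day!"))  # expected: "allow"
```

And every one of those calls burns compute, which is exactly the electricity problem: generation bots screened by moderation bots multiplies the inference bill.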

IF you could properly, unrealistically, perfectly align these LLMs, you would sidestep the entire problem.

That's why they want to try.

15

Artanthos t1_j9m7xn2 wrote

Except the internet, including Reddit, frequently treats unpopular opinions as bad, even when they're perfectly valid.

It also treats agreeing with the hive mind as good, even when it's blatantly wrong.

7

NoidoDev t1_j9nhll0 wrote

All the platforms are pretty much biased against conservatives and against anyone who isn't anti-national and anti-men, but they allow anti-capitalist propaganda and claims about what kinds of things are "racist". People can claim that others are incels, that certain opinions are the ones incels hold, and that incels are misogynists and terrorists. The same goes for any propaganda in favor of any especially protected (= privileged) victim group. Now they use this dialog data to train AI while raging about dangerous extremist speech online. Now we know why.

0

ebolathrowawayy t1_j9petdx wrote

> All the platforms are pretty much biased against conservatives

It may be the case that conservative viewpoints are unpopular. Vitriolic and uninformed opinions about non-white people, LGBTQ people, and women's reproductive rights aren't popular, and I'm glad they're not.

0

Artanthos t1_j9pmx6h wrote

And anyone who disagrees with the hive mind, provides real data that contradicts it, or offers a neutral position accepting that more than one point of view may be valid is placed under this label.

1

ebolathrowawayy t1_j9pubqd wrote

Some things are cut and dried, like women's rights and treating people with respect. If someone holds conservative views, they'll be labeled a conservative. The hivemind isn't out to get anyone; it's just that conservative views aren't as popular as non-conservative ones. Clown enthusiasts aren't very popular either, but they don't feel attacked all the time, probably because they don't hold positions of power that affect everyone.

1

Artanthos t1_j9r61ls wrote

If it were just women’s rights, racism, or LGBTQ issues, we wouldn’t be having this conversation.

It’s economics, ageism, blatant misinformation, eat-the-rich, and whatever random topic the hive mind takes a position on that day.

1

ebolathrowawayy t1_j9u9uci wrote

I don't know who you're kidding, maybe yourself? The conservative platform is about 95% of the issues I named and gun control. That's all they talk about and all they care about. They have basically never been fiscally conservative, and they prefer to strangle the middle and lower classes instead of taxing corporations. They love to ram through unpopular legislation by portraying it as religiously correct, pandering to their aging voters. Republicans just want control, mostly control of women. That, and lining their pockets through corruption (Dems do this too, but not as much).

1

Artanthos t1_j9vphbo wrote

>I don't know who you're kidding, maybe yourself? The conservative platform is about 95% of the issues I named and gun control

I'm not talking about the conservative platform, and I've tried to make this very clear.

I'm talking about the hive mind classifying anything and everything that disagrees with it as conservative and downvoting it, while ignoring its own issues and any real data that contradicts its biases.

1

Standard_Ad_2238 t1_j9l84z2 wrote

Correct me if I got it wrong, but you are talking about bot engagement or fake news, right? In that case, if anything, AI would at least be indirectly increasing the number of moderation jobs ^^

1

EndTimer t1_j9lcc8k wrote

I'm talking about everything from fake news to promoting white supremacy on social networks.

I'm thinking about what it's going to be like when 15 users on a popular Discord server are actually OCR + GPT-3.5 (or newer) + malicious prompting + typed output.

AI services and their critics have to try to limit this and even worse possibilities, or else everything is going to get overrun.

3

Standard_Ad_2238 t1_j9lkvrf wrote

People always find a way to talk about what they want. Let's say Reddit, for some reason, adds a ninth rule: "Any content related to AI is prohibited." Would you simply stop talking about AI altogether? What the majority of us would do is find another website where we could talk, and even if that one started to prohibit AI content too, we would keep looking until we found a new one. This behavior applies to everything.

There are already some examples of how trying to limit a specific topic in an AI can cripple several other aspects of it, as you can clearly see in: a) CharacterAI's filter, which prevented NSFW talk at the cost of a HUGE decrease in overall coherence; b) the noticeable drop in SD 2.0's ability to generate images of humans, since a lot of its understanding of anatomy came from the NSFW images removed from that model's training; and c) Bing, which I don't think I have to explain given how recent it is.

On top of that, I'm utterly against censorship (not that it matters for our talk), so I'm very excited to see the rise of open-source AI tools for everything, which is going to make it much harder to limit how AI is used.

5

berdiekin t1_j9lky82 wrote

>Why the hell are people treating AI differently?

I don't think we are; like you said, it's something that seems to occur with every major new technology.

Seems to me that this is just history repeating itself.

4

Standard_Ad_2238 t1_j9lmxm2 wrote

I think most people who are into this field are, but it seems to me that every company is walking on eggshells, afraid of a big viral tweet or of appearing on a well-known news site as "the company whose AI did/let its users do [ ]" (insert something bad there), just like Microsoft with Bing.

I could train a dog to attack people on the street and say "hey, dogs are dangerous", or buy a car and run over a crowd just to say "hey, cars are dangerous too". What some people don't seem to realize is that everything could be dangerous. Everything can, and at some point WILL, be used by a malicious person to do something evil; it's simply inevitable.

Recently I started hearing a lot of "imagine how dangerous those generative image AIs are, someone could ruin a lot of people's lives by creating fake photos of them!" Yeah, because we didn't have Photoshop until this year.

4

HeinrichTheWolf_17 t1_j9lmo94 wrote

Yeah, people shat themselves over the railroad too. It’s always the end of the world.

1

UltraMegaMegaMan t1_j9md5w8 wrote

I agree there's a parallel with other technologies: guns, the internet, publishing, flight, nuclear technology, fire. The difference is scope and scale. ChatGPT is not actual A.I.; it does not "think" or attempt to in any way. It's not sentient, sapient, or intelligent. It just predicts which words should be used in what order based on what humans have written.

But once you get to something that even resembles humans or A.I., something able to put out content that could pass for human, that's an order-of-magnitude increase in the technology.

Guns can't pass the Turing test. ChatGPT can. Video evidence, as a reliable object in society, has less than 5 years to live. That will have ramifications in media, culture, law, and politics that are inconceivable to us today. Think about the difference between a Star Trek communicator in the 1960s TV show and a smartphone of today.

To be clear, I'm not advocating that we go ahead and deploy this technology; that's not my point. I'm saying you can't use it without accepting the downsides, and we don't know what those downsides are. We're still not past racism. Or killing people over racism. It's the 21st century and we still don't give everyone food or shelter. And both of those things are policy decisions that are 100% a choice, not an economic or physical constraint.

We are not mature enough to handle this technology responsibly. But we've got it. And it doesn't go back in the bottle. It will be deployed, regardless of whether it should be or not. I'm just pointing out that the angst, the wringing of hands, is performative and futile.

Instead of trying to make the most robust technology we've ever known the first perfect one, one that does no harm, we should spend our effort researching what those harms will be and educating people about them. Because it will be upon us all in 5 years or less, and that's not a lot of time.

4