
BitterAd9531 t1_j5fal5s wrote

>no one seems to be even considering dealing with it in a serious way

Everyone has considered dealing with it, but everyone who understands the technology behind these models also knows that it's futile in the long term. The whole point of these LLMs is to mimic human writing as closely as possible, and the more they succeed, the harder their output becomes to detect. They can be used to produce text that is both more precise and more varied.

Countermeasures like watermarks will be trivial to circumvent while simultaneously restricting the capabilities and performance of these models. And that's ignoring the elephant in the room: once open-source models come out, none of it will matter.
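For context, the watermarking schemes being discussed are usually statistical: bias generation toward a pseudo-random "green" subset of the vocabulary, then detect by counting how often tokens land in that subset. This is a minimal toy sketch of that idea, not any real API; the function names, the 50% green fraction, and the hash-seeding scheme are all illustrative assumptions:

```python
import hashlib
import random

def green_set(prev_token: str, vocab: list[str], fraction: float = 0.5) -> set[str]:
    # Deterministically derive a "green" subset of the vocabulary from the previous
    # token. A watermarking sampler would boost these tokens' probabilities at
    # generation time; the detector recomputes the same subset later.
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16)
    rng = random.Random(seed)
    return set(rng.sample(vocab, int(len(vocab) * fraction)))

def green_fraction(tokens: list[str], vocab: list[str]) -> float:
    # Detector side: what fraction of tokens landed in their predecessor's green
    # set? Unwatermarked text hovers near `fraction`; watermarked text sits well above.
    pairs = list(zip(tokens, tokens[1:]))
    hits = sum(tok in green_set(prev, vocab) for prev, tok in pairs)
    return hits / len(pairs)
```

The fragility the comment points at is visible in the sketch itself: any paraphrase or re-tokenization re-rolls each (previous token, token) pair, pulling the green fraction back toward chance without degrading the text.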

>this is the most pressing ethical issue in AI safety today

Why? It's long been known that the gap between AI and human capabilities will shrink over time. This is simply the direction we're heading. Maybe it's time to adapt instead of fighting the inevitable. Fighting technological progress has never worked before.

People banking on being able to distinguish AI from humans are in for a bad time over the coming few years.

42

Historical-Coat5318 t1_j5fbhj5 wrote

If by fighting technological progress you mean controlling it to make sure it serves humanity in the safest, most optimal way, then yes, we've been doing that forever. When cars were first introduced, traffic police didn't exist. There is nothing retrograde or Luddite in thinking this way; it's what we've always done.

Obviously watermarking is futile, but there are other methods that need to be considered and that no one even entertains, for example the ones I mentioned in my first comment.

Also it should be trivially obvious that AI should never be open-source. That's the worst possible idea.

−5

TonyTalksBackPodcast t1_j5iblmx wrote

I think the worst possible idea is allowing a single person or handful of people to have near-total control over the future of AI, which will be the future of humanity. The process should be democratized as much as can be. Open source is one way to accomplish that, though it brings its own dangers as well

11

KvanteKat t1_j5j3ewk wrote

>I think the worst possible idea is allowing a single person or handful of people to have near-total control over the future of AI

I'm not sure regulation is the biggest threat to the field of AI being open. We already live in a world where a small handful of people (i.e. decision-makers at Alphabet, OpenAI, etc.) have an outsized influence on the development of the field, because training large models is so capital-intensive that very few organizations can compete with them (researchers at universities sure as hell can't). Neither compute (on the scale necessary to train a state-of-the-art model) nor well-curated large training datasets are cheap.

Since it is in the business interest of incumbents in this space to minimize competition (nobody likes to be disrupted), and since those incumbents already have an outsized influence, some degree of regulation to keep them in check may well be beneficial rather than detrimental to the development of AI and derived technologies, and to their integration into wider society (at least I believe so, though I'm open to other perspectives on this).

2

Historical-Coat5318 t1_j5jw8o8 wrote

I just can't even begin to comprehend this view. Of course democratizing something sounds good, but if AI has mass-destructive potential, it is obviously safer for a handful of people to have that power than for eight billion to have it. Even if AI isn't mass-destructive, which it obviously isn't yet, it is already extremely socially disruptive, and if every person has that power, our governing bodies have basically no hope of steering it in the right direction through regulation (which they would try to do, since it would serve their own interests as individuals). The common person would still have a say in those regulations through the vote.

−1

GinoAcknowledges t1_j5kb95p wrote

A vast amount of technological knowledge (e.g. how to create poisons or manufacture bombs) has mass-destructive potential if it can be scaled. The difficulty, just as with AI, is scaling, and this mostly self-regulates (with help from the government).

For example, you can build dangerous explosive devices in your garage. That knowledge is widely available (google "Anarchists Handbook"). If you try to build thousands of them (enough to cause mass destruction), the government will notice, and most likely you won't have enough money and time to do it anyway.

The exact same thing will happen with "dangerous uses of AI". The only actors with the hardware and capital to cause mass destruction with AI are the big tech firms developing it. Try running inference at scale on even a 30B-parameter model right now: it's extremely difficult unless you have access to multiple server-grade GPUs, which are expensive and hard to get hold of even if you have the money.
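The scale argument is easy to make concrete with back-of-the-envelope numbers. This sketch assumes fp16 weights (2 bytes per parameter) and a rough 20% allowance for activations and KV cache; both figures are illustrative assumptions, not measurements:

```python
def inference_vram_gib(params_billion: float,
                       bytes_per_param: int = 2,
                       overhead: float = 1.2) -> float:
    # Weights dominate inference memory: parameters * bytes-per-parameter,
    # scaled by a rough allowance for activations and the KV cache
    # (the 1.2 factor is a guess, not a spec).
    return params_billion * 1e9 * bytes_per_param * overhead / 2**30

# A 30B model at fp16 comes out around ~67 GiB -- more than a single
# high-end consumer GPU (typically 24 GiB), hence the multiple server-grade GPUs.
```

Quantization to 8-bit or 4-bit shrinks these numbers considerably, which is part of why the "self-regulation by cost" argument may weaken over time.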

3

BitterAd9531 t1_j5idapl wrote

>trivially obvious that AI should never be open-source

Wow. Trivially obvious? I'd very much like to know how that statement is trivially obvious, because it goes against what pretty much every single expert in this field advocates.

Obviously open-source AI brings problems, but what is the alternative? A single entity controlling one of the most disruptive technologies ever? And ignoring for a second the obvious problems with that, how would you enforce it? Criminalize open-sourcing software? Can't say I'm a fan of that line of thinking.

5

Historical-Coat5318 t1_j5juhb7 wrote

AI, in my view, should be controlled by very few institutions, and those institutions should be carefully managed by experts and very intelligent people, which is the case at companies like Google and OpenAI. If AI must exist, and it must, I would much rather it be in the hands of people like Sam Altman and Scott Aaronson than of literally everyone with an internet connection.

Obviously terms like "open-source" and "democratised" sound good, but if you think about the repercussions you will surely realise it would be totally disastrous for society. Looking back at history, nuclear weapons were actually managed quite judiciously considering all the economic and political tensions of the time; now imagine if anyone could have bought a nuke at Walmart. Human extinction would have been assured. Open-source AI is basically democratized mass destruction, and if weapons of mass destruction must exist (and that includes AI), they should be in as few hands as possible.

Even ignoring existential risk, which is still very speculative, LLMs in particular should never be open-source, because that makes any regulation impossible. In that world, evidence (video, images and text), not to mention human creativity, would cease to mean anything, and the internet would become basically unnavigable as the chasm between people's political conception of the world and the world itself only widens. Only a few companies should be allowed to have this technology, and they should be heavily regulated. I admit I don't know how this could be implemented; I just know that it should be.

This is basically Nick Bostrom's Vulnerable World Hypothesis. Bostrom should be required reading for everyone involved in AI, in my opinion.

−2