Comments

po-handz t1_ja92mix wrote

That's the dumbest take I've ever heard

This prof probably thinks the war on drugs has been successful

43

VirtualHat t1_jaa4ueu wrote

A better analogy would be: This professor thinks the implementation of driver's licences has reduced traffic accidents.

−1

bitemenow999 t1_ja9dl6k wrote

The problem is that the AI ethics debate is driven by people who don't directly develop or work with ML models (like Gary Marcus) and who take a very broad view of the subject, often steering the debate into science fiction.

Anyone who says ChatGPT or DALL-E models are dangerous needs to take an ML 101 class.

AI ethics at this point is nothing but a balloon of hot gas... The only AI ethics issue with any substance is data bias.

Making laws to limit AI/ML use, or keeping it closed-source, is going to kill the field. Not to mention that the resources required to train a decent model are already prohibitive for many academic labs.

EDIT: The idea of a "license" for AI models is stupid unless they plan to enforce license requirements on people buying graphics cards too.

31

admirelurk t1_ja9wy95 wrote

I counter that many ML developers have too narrow a definition of what constitutes danger. Sure, ChatGPT will not go rogue and start killing people, but the technology affects society in much subtler ways that are hard to predict.

6

OpeningVariable t1_jaa3ldd wrote

This is not about academic labs, but about industry, governments, and startups. It is one thing that Microsoft doesn't mind rolling out a half-assed BingChat that can end up telling you ANYTHING at all, but should they be allowed to? What about Tesla? Should they be allowed to launch an unreliable piece of software that they know cannot be trusted and do not fully understand, and call it "Autopilot"? I think not.

3

bitemenow999 t1_jaa5b9n wrote

What are you saying, mate? You can't sue Google or Microsoft because they gave you wrong information... all software services come with limited or no warranty...

As for Tesla, the FMVSS and other existing regulations already cover that... AI ethics is BS, a buzzword for people to make themselves feel important...

AI/ML is a software tool, just like Python or C++... do you want to regulate Python too, on the off chance someone might hack you or commit some crime with it?

>This is not about academic labs, but about industry, governments, and startups.

Most of these startups are offshoots of academic labs.

0

OpeningVariable t1_jaa8zp8 wrote

BingChat is generating information, not retrieving it, and I'm quite sure we will see lawsuits as soon as this feature goes public and some teenager commits suicide over BS it spat out, or something like that.

Re the tool part: yes, exactly, and we should understand what that tool is good for, or more specifically, what it is NOT good for. No one writes an airplane's mission-critical software in Python; they use formally verifiable languages and algorithms, because that is the right tool for the amount of risk involved. AI is being thrown at everything, but it isn't a good tool for everything. Depending on the risk and exposure of each application, there should be different regulations and requirements.

>Most of the startups are off shoots of academic labs.

That was a really bad joke. First of all, why would anyone care about offshoots of academic labs? They are no longer academics; they are in business and can fend for themselves. Second, there is no way most startups are offshoots of academic labs; most startups are chasing easy money and throw in "AI" just to sound cooler and attract investors.

0

VirtualHat t1_jaa4jwx wrote

An increasing number of academics are identifying significant potential risks associated with future developments in AI. Because regulatory frameworks take time to develop, it is prudent to start considering them now.

While it is currently evident that AI systems do not pose an existential threat, this does not necessarily apply to future systems. It is important to remember that regulations are commonly put in place and rarely result in the suppression of an entire field. For instance, despite the existence of traffic regulations, we continue to use cars.

3

PacmanIncarnate t1_jaafjl5 wrote

Don't regulate tools; regulate their products and the oversight of them in decision-making. Don't let any person, institution, or corporation use AI as an excuse for committing a crime or unethical behavior. The law should take it as a given that a human was responsible for the decisions, regardless of whether the organization actually functioned that way, because the danger of AI is that it is left to make decisions and those decisions cause harm.

1

lukasz_lew t1_ja9jsrf wrote

Exactly.
Requiring a licence for "chatting with GPT-3" is silly.

It would be like requiring a licence to talk to a child (albeit a very knowledgeable child with a tendency to make stuff up). You wouldn't let such a kid write your homework or thesis, would you?

Maybe a required warning, akin to "careful, the cup is hot", would make more sense for this use case.

1

enryu42 t1_jaa1lru wrote

> The only AI ethics that has any substance is data bias

While the take in the tweet is ridiculous (but alas common among the "AI Ethics" people), I'd disagree with your statement.

There are many other concerns besides bias in the static data. For example, the feedback loops ML models induce once they're deployed in real-life systems. One can argue that causality for decision-making models also falls into this category. But ironically, the field itself is too biased to do productive research in these directions...
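To make the feedback-loop point concrete, here is a minimal sketch (all names and numbers are made up for illustration): a naive recommender that refits its popularity estimate on the clicks it logged, without correcting for the exposure it created.

```python
import numpy as np

# Toy feedback loop: a recommender shows items in proportion to its
# popularity estimate, then re-estimates popularity from the clicks it
# observes, without correcting for the exposure it itself created.
true_appeal = np.array([0.45, 0.55])  # item 1 is only slightly more appealing
estimate = np.array([0.50, 0.50])     # the model starts out unbiased

for step in range(20):
    exposure = estimate               # items are shown in proportion to the estimate
    clicks = exposure * true_appeal   # expected clicks per impression
    estimate = clicks / clicks.sum()  # naive refit on the logged clicks
    print(step, estimate.round(3))

# The estimate collapses toward [0, 1]: a 10-point appeal gap turns into
# "nobody wants item 0", because item 0 stops being shown at all.
```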

1

WarmSignificance1 t1_ja9jnft wrote

You don’t have to understand the physics behind nuclear weapons to argue that they’re dangerous. Indeed, the people in the weeds are not always the best at taking a step back and surveying the big picture.

Of course making AI development closed source is ridiculous, though.

−1

bitemenow999 t1_ja9p3n9 wrote

That is a very bad argument... I would suggest you read up on Oppenheimer's quote after the first nuclear test; meanwhile, it was the people surveying the "big picture" who decided to bomb Hiroshima...

3

JiraSuxx2 t1_ja932j8 wrote

AI is a technology so powerful that countries that ‘pause’ it will be at a disadvantage quickly. Not likely to happen.

A driver’s license to use it? A pretty vague suggestion if you ask me. How would that work exactly?

15

Ramdogger t1_ja97dxi wrote

Use AI-powered software, of course, to determine the legitimacy of the ID. /s

3

ton4eg t1_ja9aokt wrote

After spending some time exploring AI ethics, I found it rather useless. The ethical problems are real, but the discipline has failed to provide any meaningful answers.

8

yaosio t1_ja9rvvw wrote

It's only considered dangerous because individuals can now do what companies and governments have done for a long time. Creating plausible lies once took teams of people; now one person can do it. When somebody says AI is dangerous, all I hear is that they want to keep the power to lie in the hands of the powerful.

5

PacmanIncarnate t1_jaaghjo wrote

Exactly. Any ethicist worried about how Joe Public will use AI is missing the big picture: the real ethical violations are going to come from governments and corporations.

2

vhu9644 t1_ja9cw0v wrote

Laws have to be pragmatic.

It's like making encryption illegal. Anyone with the know-how can do it, and you can't detect an air-gapped model being trained.
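(To illustrate how low the bar is, here's a toy one-time pad, a cipher that is provably secure when the key is truly random, as long as the message, and never reused; it fits in a few lines of Python:)

```python
import secrets

def xor_bytes(msg: bytes, key: bytes) -> bytes:
    # One-time pad: XOR each message byte with the matching key byte.
    # Encrypting and decrypting are the exact same operation.
    return bytes(m ^ k for m, k in zip(msg, key))

message = b"meet at noon"
key = secrets.token_bytes(len(message))       # random key, as long as the message
ciphertext = xor_bytes(message, key)
assert xor_bytes(ciphertext, key) == message  # XORing again recovers the plaintext
```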

We, as a society, shed data more than we shed skin cells. Restricting dataset access wouldn't really be that much of a deterrent either.

2

quisatz_haderah t1_ja9j2ib wrote

>It's like making encryption illegal.

Yet they keep pushing this agenda. They have no clue how the Internet works.

1

walk-the-rock t1_ja9m7dr wrote

> requirement of a license to use AI like chatGPT since it's "potentially dangerous"

guess we need a license to use sophisticated technology like Python, C++, Java, shell scripts, Excel... anything that executes code and makes machines do stuff.

You could implement the math for a ResNet in an Excel spreadsheet (I'm not recommending this).
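Here's roughly what I mean: a minimal residual block in plain numpy (the shapes and names are made up for illustration). It's nothing a spreadsheet couldn't do cell by cell.

```python
import numpy as np

def relu(x):
    return np.maximum(0, x)

def residual_block(x, w1, w2):
    # The core ResNet idea: output = input + f(input).
    # Here f is two toy linear layers with a ReLU in between; real ResNets
    # use convolutions, but it's still just multiplies and adds.
    return x + w2 @ relu(w1 @ x)

rng = np.random.default_rng(0)
x = rng.normal(size=4)            # a tiny 4-dimensional "feature vector"
w1 = rng.normal(size=(4, 4))      # weights of the first layer
w2 = rng.normal(size=(4, 4))      # weights of the second layer
print(residual_block(x, w1, w2))  # same shape as x, thanks to the skip connection
```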

2

daidoji70 t1_ja9xchz wrote

If the Internet has taught me anything, it's that for whatever ridiculous, 100% dumbest take you can imagine, you can definitely find a credentialed professional who holds that opinion. It's often unclear whether they hold it for attention, for notoriety, or just out of character defects.

2

_poisonedrationality t1_ja9xdek wrote

I hardly ever see AI ethicists say anything useful. I feel like they're more motivated by making hot takes than by contributing a helpful perspective.

1

andreichiffa t1_jaa3v5s wrote

Based on some of the comments over on /r/ChatGPT asking to remove the disclaimers while people teach themselves plumbing, HVAC, and electrical work with ChatGPT, we are a couple of lawsuits away from OpenAI and MS actually creating a GPT certification, with workplaces requiring it to interact with LLMs and insurers refusing claims that result from uncertified ChatGPT use.

1

leondz t1_ja9dk7x wrote

Depends on who and what you're using it on, doesn't it? Just like a driver's license. Do what you like on your own private property; but if you want it to be critical in decision-making that affects others, some rudimentary training makes a ton of sense.

0

OpeningVariable t1_ja9xo2h wrote

I think requiring an audit of models and data before a model can be used commercially is not such a bad thing. For example, auditing ChatGPT and granting permission for specific kinds of commercial use, once we figure out what those are and what tools we can use to audit the models.

0

Big_Reserve7529 t1_ja98huf wrote

Idk if a license is the way to go, but I do agree that certain regulations need to be put in place for safety. We were really late when it came to data safety and digital identity, and a lot of countries still don't have tight data laws about this. Sadly, I think that if people don't call out the possible dangers of fast-growing technology, we will feel the consequences of it later on.

−1

currentscurrents t1_ja94y4s wrote

"AI ethics professor" isn't a real thing.

Ethics isn't even the kind of thing you can be an expert in; anybody calling themselves an ethics expert has declared themselves the arbiter of right and wrong.

−6

redflexer t1_ja97eug wrote

This specific take is naive, but ethics is a very rigorous discipline and is also different from moral codes, which are subjective.

7

currentscurrents t1_ja99uud wrote

I'm not talking about philosophers debating the nature of moral actions. Ethics "experts" and ethics boards make a stronger claim: that they can actually determine what is moral and ethical. That truly is subjective.

At best they're a way for people making tricky decisions to cover their legal liability. Hospitals don't consult ethics boards before unplugging patients because they think the ethicists will have some useful insight; they just want their approval because it will help their defense if they get sued.

3

quisatz_haderah t1_ja9j8xr wrote

I think you should add this to your original comment, because it deserves to be heard more.

3

redflexer t1_ja9locb wrote

This is not at all how ethics boards operate. They very rarely make decisions themselves; rather, they define the parameters within which an ethical decision can be made (e.g., which aspects need to be considered and weighed against each other, who needs to be heard, etc.). If you have had other experiences, they are not representative of the majority of boards.

2