
bitemenow999 t1_jaa5b9n wrote

What are you saying, mate? You can't sue Google or Microsoft because they gave you wrong information... all software services come with limited or no warranty...

As for Tesla, there are the FMVSS and other regulatory frameworks that already cover it... AI ethics is BS, a buzzword for people to make themselves feel important...

AI/ML is a software tool, just like Python or C++... do you want to regulate Python too, on the off chance someone might use it to hack you or commit some crime?


>This is not about academic labs, but about industry, governments, and startups.

Most of the startups are offshoots of academic labs.


OpeningVariable t1_jaa8zp8 wrote

BingChat is generating information, not retrieving it, and I'm quite sure we will see lawsuits as soon as this feature goes public and some teenager commits suicide over BS it spat out, or something like that.

Re the tool part: yes, exactly, and we should understand what that tool is good for, or more specifically, what it is NOT good for. No one writes an airplane's mission-critical software in Python; they use formally verifiable languages and algorithms, because that is the right tool for the level of risk involved. AI is being thrown at everything, but it isn't a good tool for everything. Depending on the risk and exposure of each application, there should be different regulations and requirements.


>Most of the startups are offshoots of academic labs.

This was a really bad joke. First of all, why would anyone care about offshoots of academic labs? They are no longer academics; they are in business and can fend for themselves. Second, there is no way most startups are offshoots of academic labs; most startups are chasing easy money and throw in AI just to sound cooler and attract more investors.
