bizarre_coincidence t1_j6w9qyl wrote

There is also the issue of accuracy. When they point you to a webpage, the page may or may not be relevant or accurate, but it is someone else's content, and there isn't any responsibility on their part to be correct. But if they are generating the answers themselves, suddenly they are responsible for their validity. They might avoid legal culpability (maybe), but enough bad stories of people relying on the bot, not double-checking the info, and getting screwed could tarnish Google's reputation. If ChatGPT weren't being treated as a toy, dismissed as "just a language model" whenever it generates harmful bullshit that is blatantly false, it could do a ton of harm. If Google had a chatbot used in any official capacity, people would either take it much more seriously or they would have to essentially ignore it.

People rely on Google, so its AI assistant has to be much more accurate than ChatGPT to keep from jeopardizing that trust.

16

bizarre_coincidence t1_iw7v8jz wrote

Unless you are witnessing events firsthand, you have to trust someone to tell you what the facts are. If two information sources disagree about the facts, you either don't know what to believe or you come up with your own process for deciding which source to trust.

Facts may be objective, but we very rarely come up against facts themselves. Rather, we come up against claims of fact, and we cannot independently assess whether those claims are true. We can only ask whether they are consistent with other things we believe to be true, or with other sources we trust, and that is an imperfect strategy.

Even scientific facts that are verifiable in principle may not be verifiable in practice. And since science makes plenty of counterintuitive claims, people can feel they have legitimate reason to be skeptical even of things that are known to be factual.

The point is that it isn't that simple. We take for granted that we know what the facts are and that they are self-evident, but the truth is much more complicated.

5