bizarre_coincidence t1_iw7v8jz wrote
Reply to comment by CosmicDave in People who post stories marked false by Snopes on Reddit fall into five groups: Reason to Disagree, Changed Belief, Steadfast Non-Standard Belief, Sharing to Debunk, and Sharing for Humor. Reducing misinformation requires different approaches for each group. by asbruckman
Unless you are witnessing events firsthand, you have to trust someone to tell you what the facts are. If two information sources disagree on what the facts are, you either don’t know what to believe or you come up with your own process to decide which source to believe.
Facts may be objective, but we very rarely come up against facts. Rather, we come up against claims of facts, and we cannot independently assess whether these claims are true. We can only ask if they are consistent with other things we believe are true, or are consistent with other sources that we trust, and this is an imperfect strategy.
Even scientific facts that are verifiable in principle might not be verifiable in practice. And since science makes plenty of counterintuitive claims, there is legitimate reason to be skeptical even of things that are known to be factual.
The point is that it isn’t that simple. We take for granted that we know what the facts are and that they are self-evident. The truth is much more complicated.
bizarre_coincidence t1_isoq2ey wrote
Reply to TIL Liquid Helium is the perfect element to keep the superconductive magnets in MRI machines cold by Alternative-Leg1095
And if you think it is funny when you breathe in helium, you should see what happens when you drink it!
bizarre_coincidence t1_j6w9qyl wrote
Reply to comment by [deleted] in Google is asking employees to test potential ChatGPT competitors, including a chatbot called 'Apprentice Bard' by No-Drawing-6975
There is also the issue of accuracy. When they point you to a webpage, the page may or may not be relevant or accurate, but it is someone else’s content. There isn’t any responsibility on their part to be correct. But if they are generating the answers themselves, suddenly they are responsible for their validity. They might avoid legal culpability (maybe), but enough bad stories of people relying on the bot, not double-checking the info, and getting screwed could tarnish Google’s reputation. If ChatGPT weren’t being treated as a toy, dismissed as “just a language model” whenever it generates harmful bullshit that is blatantly false, it could do a ton of harm. If Google had a chatbot used in any official capacity, people would either take it much more seriously or they would have to essentially ignore it.
People rely on Google; their AI assistant has to be much more accurate than ChatGPT for it not to jeopardize that trust.