Comments


treesprite82 t1_j8g736c wrote

> According to this pros and cons list, the “Bissell Pet Hair Eraser Handheld Vacuum” sounds pretty bad. Limited suction power, a short cord, and it’s noisy enough to scare pets? Geez, how is this thing even a best seller?

> Oh wait, this is all completely made up information.

Is it? There's a "Bissell Pet Hair Eraser Handheld Vacuum" with a 16-foot cord. Moreover, although the reviews are largely positive, some do complain about noise and limited suction power.

There is also a cordless variant, which I think is what this blog post's author found, but it's listed under the name "Bissell Pet Hair Eraser Lithium Ion Cordless Hand Vacuum".

So Bing AI's claims seem justifiable, at least. I'm not sure how to confirm whether the citation was correct (the full link isn't given in the screenshot).

26

ausernamethatistoolo t1_j8iambm wrote

The irony is that this means the AI is more reliable than the author of this article when it comes to picking pet hair vacuums.

5

paulitabean t1_j923hgo wrote

Maybe the AI has crossed the singularity into the 5th dimension of alternate parallel universes and these hallucinations are actual realities elsewhere -- possible scenarios that almost occurred here, but didn't. 🤷‍♀️

1

Ortus14 t1_j8havmv wrote

Still more accurate than humans, most of whom are in a constant state of hallucination.

18

Iffykindofguy t1_j8ez914 wrote

They showed up yesterday lmao, we're a fucking decade away from people trusting these things

15

itsnotlupus t1_j8ft0dq wrote

Well, people trust them today. They shouldn't, but they do. And it's going to get hilarious.

More seriously, we're going to learn collectively to flex a new muscle of "this AI may be super helpful, but it may also be bullshitting me." And odds are it'll be a bit of both in every answer.

Maybe those models are the inoculation we need to practice detecting bullshit online?

12

turnip_burrito t1_j8f4qxl wrote

Hopefully some group figures out how to make these bots accurate because this is... yeah...

3

Iffykindofguy t1_j8fb9ik wrote

Uhhhhhhh I've been using it and getting (mostly) correct results. It's been truly better than I ever expected. I've had to fix a few things, but it's made my work life easier, until it takes over totally

13

WarmSignificance1 t1_j8g4iu5 wrote

Correct, as far as you know.

The fact that it hallucinated during a pre-recorded demo that was obviously vetted by multiple people demonstrates how hard it is to actually determine when it is wrong.

2

Iffykindofguy t1_j8g5ll8 wrote

No, correct as far as I checked, and I checked every single thing I took from it because my job depends on it. I didn't rely on it, it's brand spanking new.

5

turnip_burrito t1_j8fic3n wrote

Yeah, it can be a time saver for sure, I just wish I could be lazy and rely on it for accurate information. I don't think it will take long to make it super accurate (maybe a decade or less).

1

Lawjarp2 t1_j8f72dd wrote

Over 8 billion searches a day on Google. Even 99.99% accuracy at that volume still leaves roughly 800,000 wrong answers a day, which can mean a lot of lawsuits when it messes up.

1

Tall-Junket5151 t1_j8gxtl6 wrote

They specifically have an opt-in waitlist for Bing AI, so calling for them to take it down is just a brain-dead take. We understand it's not 100% reliable, BUT it's better to have it than not.

7

betsla69 t1_j8glkzm wrote

They don't need to be reliable to be useful.

5

Reddituser45005 t1_j8hedlb wrote

I find the whole hallucination thing fascinating. Researchers are suggesting that LLMs exhibit a theory of mind and that they construct their own machine learning models in their hidden states, the space between the input and output layers. It is unlikely that machine consciousness would arrive fully developed or just turn on like a switch; human infants take longer to develop than other primates or mammals. It would take time to develop an awareness, to integrate the internal and external worlds, to develop an identity. Are these hallucinations, and LLMs developing an internal model, the baby steps of developing consciousness?

5

Borrowedshorts t1_j8fnwkq wrote

Disappointing to say the least, but I suspect the hallucination problem will be fixed very soon, either through better overall models or specific methods to fix it.

4

94746382926 t1_j8g2kaj wrote

It may be a fundamental flaw of these neural networks that no amount of scaling can fix. If it is, it could be a long time until they find a solution. I hope this isn't the case, but it's too early to tell.

5

giuven95 OP t1_j8fpmv1 wrote

I think it will eventually be fixed. Soon? I'm not so sure

−1

jspsfx t1_j8g4nqb wrote

When did we cross over from just enjoying the novelty of AI systems into anthropomorphizing them ("hallucinations") and acting like it's at all noteworthy for them to get answers wrong? What is this about "trusting" the AI?

What happened? I feel like the typical tech hype train has caused mass confusion over what AI is. It is a tool - that's it.

2

Martholomeow t1_j8gyup8 wrote

Yes exactly. As if google is any better. I’m so sick of doing a google search and coming up with nothing but nonsensical SEO spam.

3

DigitalRoman486 t1_j8hk0rn wrote

I mean, isn't this why Google were holding off? It's easy to put this stuff out and have it be wrong and unpredictable; it's quite another task to create a reliable search bot that comes back with correct info.

2

manubfr t1_j8hf8la wrote

I've had access for a few days and I feel quite underwhelmed. Bing chat is VERY inaccurate. I'd say that more than half the time, when researching topics I am very familiar with, it correctly identifies information sources and then botches the output, making very plain mistakes (e.g. it pulls the correct statement from a webpage except for the year, which it gets wrong, replacing 2022 with 2021 within the same statement). It also struggles with disambiguation, e.g. two homonyms will be mixed up.

I honestly thought web connectivity would massively improve accuracy, but so far I've been very disappointed. However, the short term creative potential of LLMs and image models is insane.

1

Fit-Meet1359 t1_j8hipmt wrote

I am surprised that they didn't get anyone to look over the financial reports response. Checking a bunch of numbers should be easy. The presentation must really have been put together in a hurry to beat Google.

I'm still really impressed by Bing Chat, though. It's quite inaccurate a lot of the time with the small details, even when it searches the web, but it has still made discovering new stuff much, much easier than it used to be. I was trying to describe a concept I wanted to achieve in Blender after not having used it for a long time, and although it couldn't give me a perfect step-by-step answer on the first attempt, I was able to converse with it about the mistakes and get it to suggest alternative ideas.

1

bigkoi t1_j8gop27 wrote

MSFT stock won't be impacted, because consumers have low expectations for MSFT.

0