Comments


Iffykindofguy t1_j8ez914 wrote

They showed up yesterday lmao, we're a fucking decade away from people trusting these things

15

Borrowedshorts t1_j8fnwkq wrote

Disappointing to say the least, but I suspect the hallucination problem will be fixed very soon, either through better overall models or specific methods to fix it.

4

itsnotlupus t1_j8ft0dq wrote

Well, people trust them today. They shouldn't, but they do. And it's going to get hilarious.

More seriously, we're going to learn collectively to flex a new muscle of "this AI may be super helpful, but it may also be bullshitting me." And odds are it'll be a bit of both in every answer.

Maybe those models are the inoculation we need to practice detecting bullshit online?

12

94746382926 t1_j8g2kaj wrote

It may be a fundamental flaw of these neural networks that no amount of scaling can fix. If it is, it may be a long time until they find a solution. I hope this isn't the case but it's too early to tell.

5

jspsfx t1_j8g4nqb wrote

When did we cross over from just enjoying the novelty of AI systems into anthropomorphizing them ("hallucinations") and acting like it's at all noteworthy for them to get answers wrong? What is this about "trusting" the AI?

What happened? I feel like the typical tech hype train has caused mass confusion over what AI is. It is a tool - that's it.

2

treesprite82 t1_j8g736c wrote

> According to this pros and cons list, the “Bissell Pet Hair Eraser Handheld Vacuum” sounds pretty bad. Limited suction power, a short cord, and it’s noisy enough to scare pets? Geez, how is this thing even a best seller?

> Oh wait, this is all completely made up information.

Is it? There's a "Bissell Pet Hair Eraser Handheld Vacuum" with a 16-foot cord. Moreover, although the reviews are largely positive, some do complain about noise and limited suction power.

There is also a cordless variant, which I think is what this blog post's author found, but it's listed under the name "Bissell Pet Hair Eraser Lithium Ion Cordless Hand Vacuum".

So Bing AI's claims seem justifiable at least. I'm not sure how to confirm whether the citation was correct (full link isn't given in the screenshot).

26

betsla69 t1_j8glkzm wrote

They don't need to be reliable to be useful.

5

bigkoi t1_j8gop27 wrote

MSFT stock won't be impacted because consumers have low expectations for MSFT

0

Tall-Junket5151 t1_j8gxtl6 wrote

They specifically have an opt-in waitlist for Bing AI, calling for them to take it down is just a brain dead take. We understand it’s not 100% reliable BUT it’s better to have it than not.

7

Ortus14 t1_j8havmv wrote

Still more accurate than humans, most of whom are in a constant state of hallucination.

18

Reddituser45005 t1_j8hedlb wrote

I find the whole hallucination thing fascinating. Researchers are suggesting that LLMs exhibit a theory of mind and that they construct their own machine learning models in their hidden states, the space between the input and output layers. It is unlikely that machine consciousness would arrive fully developed, or just turn on like a switch. Human infants take longer to develop than other primates or mammals. It would take time for a machine to develop an awareness, to integrate the internal and external worlds, to develop an identity. Are these hallucinations, and LLMs developing an internal model, the baby steps of developing consciousness?

5

manubfr t1_j8hf8la wrote

I've had access for a few days and I feel quite underwhelmed. Bing chat is VERY inaccurate. I'd say more than half the time, when researching topics I am very familiar with, it correctly identifies information sources and then botches the output, making very plain mistakes (e.g. it pulls the correct statement from a webpage except for the year, replacing 2022 with 2021 within the same statement). It also struggles with disambiguation, e.g. two homonyms will be mixed up.

I honestly thought web connectivity would massively improve accuracy, but so far I've been very disappointed. However, the short term creative potential of LLMs and image models is insane.

1

Fit-Meet1359 t1_j8hipmt wrote

I am surprised that they didn't get anyone to look over the financial reports response. Checking a bunch of numbers should be easy. The presentation must really have been put together in a hurry to beat Google.

I'm still really impressed by Bing Chat though. It's quite inaccurate a lot of the time with the small details, even when it searches the web. But it still has made discovering new stuff much much easier than it used to be. I was trying to describe to it a concept I wanted to achieve in Blender after not having used it for a long time, and although it couldn't give me a perfect step by step answer on the first attempt, I was able to converse with it about the mistakes and get it to suggest alternative ideas.

1

DigitalRoman486 t1_j8hk0rn wrote

I mean, isn't this why Google were holding off? It is easy to put out this stuff and have it be wrong and unpredictable; it is quite another task to create a reliable search bot that comes back with correct info.

2

paulitabean t1_j923hgo wrote

Maybe the AI has crossed the singularity into the 5th dimension of alternate parallel universes and these hallucinations are actual realities elsewhere -- possible scenarios that almost occurred here, but didn't. 🤷‍♀️

1