
Ill-Construction-209 t1_jdg0nac wrote

Hopefully Wolfram is able to teach ChatGPT some basic math. That's a key weakness at the moment.

31

acutelychronicpanic t1_jdgadd8 wrote

According to that recent paper on GPT-4, it's pretty good at using this kind of tool. So yes, it will!

13

WorkO0 t1_jdgkj5b wrote

No need for that. Just as you would use a calculator or computer to solve algorithmic problems, so will AI in the future. Doing mental math is slow and inefficient; our own brains prove it. OTOH, using extensions like this to do it will let GPT do things previously unimaginable.
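As a toy illustration of what "using an extension" could look like (a sketch only; the CALC marker and the dispatcher below are made up for this example, not the actual plugin interface):

```python
# A minimal sketch (not any real ChatGPT/Wolfram API) of the "calculator as a
# tool" idea: instead of having the model do arithmetic "in its head", a
# hypothetical CALC(...) marker in its output is intercepted and evaluated by
# a dedicated routine outside the LLM.
import re

def evaluate_calc_markers(model_output: str) -> str:
    """Replace CALC(<expr>) markers with results computed outside the model."""
    def _eval(match: re.Match) -> str:
        expr = match.group(1)
        # Only allow digits and basic operators so eval() stays safe here.
        if not re.fullmatch(r"[0-9+\-*/(). ]+", expr):
            return match.group(0)  # leave anything unexpected untouched
        return str(eval(expr, {"__builtins__": {}}, {}))
    return re.sub(r"CALC\(([^)]*)\)", _eval, model_output)

if __name__ == "__main__":
    draft = "The total comes to CALC(1497 * 23) dollars."
    print(evaluate_calc_markers(draft))
    # -> "The total comes to 34431 dollars."
```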

11

Mercurionio t1_jdgocc4 wrote

Our brain does NOT prove it. It's actually the opposite. Ask any autistic kid for the 174th digit of pi and he will easily answer your question (exaggerating, but still).

What our brain proves is that it's highly concentrated even when we think it's not. Controlling our body is a VERY demanding task; it consumes a lot of resources. So, when you are on a "trip", your brain will just relax and do whatever it wants, and your creativity will burst way better than GPT-4, for example.

−19

angrathias t1_jdh2y92 wrote

Shit, and I thought the LLMs were the big hallucinators 😂

12

kallikalev t1_jdhj0tf wrote

We're talking about direct computation. Someone with a massive memory of pi has it memorized; they aren't computing it via an infinite series in the moment.

The point being made is that it's much more efficient, in both time and energy, to have the actual computation done by a dedicated, optimized program that only takes a few CPU instructions, rather than trying to approximate it with the giant neural network that is an LLM. And this is similar to humans: our brains burn way more energy multiplying large numbers in our heads than a CPU does in the few nanoseconds it takes.
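As a rough sketch of that efficiency gap (purely illustrative timing; the numbers below are arbitrary, and Python's big-integer multiply is itself far slower than a raw CPU instruction):

```python
# Back-of-the-envelope illustration of the argument above: a dedicated routine
# produces the exact product of two large numbers in well under a microsecond,
# versus an LLM (or a human) approximating the same result token by token.
import timeit

a = 123456789123456789
b = 987654321987654321

runs = 1_000_000
elapsed = timeit.timeit(lambda: a * b, number=runs)

print(f"exact product: {a * b}")
print(f"average time per multiplication: {elapsed / runs * 1e9:.1f} ns")
```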

7