Comments

Kolinnor t1_iww46to wrote

I agree with the article about Galactica: it was utter trash (EDIT: apparently you can still do some nice stuff with it) and excessively arrogant. I'm glad this terrible project got shut down.

However, I strongly disagree with the conclusion. I have no doubt this is the right direction: GPT-3 has helped me when studying math (for example, today I explained that I wanted to know whether a certain type of function had a name, since I couldn't find anything on Google, and it correctly understood my vague description), and it's just pretty good in general with "well-known" knowledge. Its naive way of explaining things has sometimes helped me build intuition. Of course, these are still baby steps, but the potential is big.

The article kinda downplays how good LLMs are in general, dismissing them as nonsense generators. Gary Marcus being cited in the article is a big red flag for me as well.

19

TopicRepulsive7936 t1_iwwdd1u wrote

Somebody should research his background a little. Why is he so determined, and why is he everywhere?

9

Kolinnor t1_iwyuae2 wrote

What do you mean? I know he's popular for being highly controversial, but I wonder if there's more to the story that I don't know.

1

TopicRepulsive7936 t1_ix08pz5 wrote

Maybe it's nothing. But for decades he was a small-time psychology professor, until he burst onto the AI scene and, like many, assumed he knew the business. Maybe he just likes attention, or is a bit insecure. But you never know if he gets extra money from somewhere.

4

Veneck t1_ix2b5ot wrote

We seem to be in a moment where being critical is socially over-rewarded compared to actually building things.

1

visarga t1_ix0ji8d wrote

> it was utter trash and excessively arrogant

Galactica is a great model for citation retrieval: it introduces innovations in citation learning and beats other systems. Finding good citations is a time-consuming task when writing papers.

It also has a so-called <work> token that triggers additional resources, such as a calculator or a Python interpreter. This is potentially very powerful, combining neural and symbolic reasoning.
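Roughly, the idea is that the model emits a special span and an outer loop hands its contents to an external tool. Here's a toy sketch of that pattern; this is hypothetical illustration on my part, not Galactica's actual code, and the `<work>...</work>` span format is an assumption:

```python
import re

def eval_work_blocks(generated_text):
    """Toy post-processor: find <work>...</work> spans in model output
    and evaluate the enclosed arithmetic with an external tool (here,
    a restricted eval) instead of trusting the LLM's own arithmetic.
    Hypothetical sketch; Galactica's real mechanism differs."""
    def run(match):
        expr = match.group(1)
        try:
            # Only bare arithmetic is allowed: no builtins, no names.
            return str(eval(expr, {"__builtins__": {}}, {}))
        except Exception:
            return match.group(0)  # on failure, leave the span untouched
    return re.sub(r"<work>(.*?)</work>", run, generated_text, flags=re.DOTALL)

print(eval_work_blocks("37 * 12 = <work>37 * 12</work>"))
# -> 37 * 12 = 444
```

The real model interleaves this with generation, but the point is the same: offload exact computation to a symbolic tool rather than hoping the network gets the digits right.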

Another interesting finding from the paper is that a smaller, very high-quality dataset can replace a much larger, noisy one. So there's a trade-off between quality and quantity, and it's not clear which direction has the most payoff.

I'd say the paper was targeted for critique because it comes from Yann LeCun's AI institute. Yann has had enemies on Twitter for a few years now, and they don't forget or forgive. There's a good video on this topic by Yannic Kilcher.

And by the way, the demo still lives on HuggingFace: https://huggingface.co/spaces/lewtun/galactica-demo

5

Martholomeow t1_iwwdj27 wrote

Some of these problems already exist with Alexa and Siri. I very often ask them simple questions and get false answers. One example that comes to mind: when I asked Siri how many novels Stephen King had written, her answer was "Stephen King has written 10 novels."

I just checked again and this time she was correct, but there are still many cases where they're wrong, in part because they pull their answers from questionable sources.

6

jloverich t1_iwx0k74 wrote

They sound very confident when they are wrong.

4

ftc1234 t1_iwwxrcd wrote

Isn’t this like, Duh?!

All of deep learning, including LLMs, is about fitting a nonlinear model that best captures the input data. Does that guarantee that: a) any output it generates is consistent with the actual input data (I don't mean the input distribution here), and b) it knows what's not in the input data (e.g., that it doesn't have enough knowledge or training to answer the prompt accurately)?

At a high level, all that LLMs do is model an input distribution, which you can then sample for interesting images and text. There are no guarantees that the output makes sense, and the AI community is not even close to developing techniques that limit generated output to sensible answers (or raise an error when there is no good answer).
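To make the point concrete, here's a minimal sketch of sampling from a learned next-token distribution; the tiny vocabulary and the stand-in model are mine, purely for illustration:

```python
import numpy as np

# An LLM is, at bottom, a learned distribution over next tokens.
# Sampling from it produces fluent-looking sequences with no built-in
# check that what comes out is true or sensible.
vocab = ["cats", "dogs", "fly", "sleep", "."]
rng = np.random.default_rng(0)

def next_token_probs(context):
    # Stand-in for a trained model: some distribution over the vocab.
    logits = rng.normal(size=len(vocab))
    return np.exp(logits) / np.exp(logits).sum()  # softmax

tokens = ["cats"]
for _ in range(4):
    tokens.append(rng.choice(vocab, p=next_token_probs(tokens)))
print(" ".join(tokens))  # plausible-looking output, no truth check anywhere
```

Nothing in that loop knows or cares whether "cats fly" is true, which is exactly the gap between modeling a distribution and making sense.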

And more importantly, given how easy it is to generate output, the real challenge is to not get lost in a world of simulation and to keep it real.

3

Veneck t1_ix2avtt wrote

This is a trash-quality article. There's nothing in it besides stating the obvious (that it wasn't a good product) and a bunch of naive, almost satirical comments from a supposed expert.

1

[deleted] t1_iwvtrfz wrote

AI designed by autistic data nerds, employed by psychopathic corporations to generate profit, monopoly, and above all, control.

... What could go wrong?

"Your Scientists Were So Preoccupied With Whether Or Not They Could, They Didn’t Stop To Think If They Should" -- Ian Malcolm, Jurassic Park.

−17

Think_Olive_1000 t1_iwwf1w9 wrote

Unfortunately, all the people sitting around singing kumbaya aren't building anything but mud huts. The chads and jocks are too busy shaking us down for lunch money and stealing our b*tches, and the philosopher is busy navel-gazing. Imagine lumping people together like you're still in high school.

10

Kaarssteun t1_iww60jv wrote

Take a hike. Just because there's a corporate entity behind an AI division doesn't mean it's garbage. Galactica was a very cool experiment that produced SOTA performance on a ton of use cases; it just succumbed to internet trolls and bogus use cases. Meta AI is cutting edge.

6