Neurogence t1_j9jef7k wrote

54

WithoutReason1729 t1_j9jmd05 wrote

The catch is that it only outperforms large models in a narrow domain of study. It's not a general-purpose tool like the really large models. That's still impressive, though.

111

Ken_Sanne t1_j9jxg68 wrote

Can it be fine-tuned?

7

WithoutReason1729 t1_j9jxy78 wrote

You can fine-tune it on another dataset and probably get good results, but you need a nice, high-quality dataset to work with.
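For reference, a minimal fine-tuning sketch with Hugging Face Transformers looks roughly like this (the model name and data file are placeholders, not anything from the paper):

```python
# Minimal causal-LM fine-tuning sketch with Hugging Face Transformers.
# "gpt2" and "my_corpus.txt" are placeholders -- swap in your own model/data.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

model_name = "gpt2"  # placeholder small model
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # gpt2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained(model_name)

# Expects a plain-text file with one training example per line.
dataset = load_dataset("text", data_files={"train": "my_corpus.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset.map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", num_train_epochs=3,
                           per_device_train_batch_size=4),
    train_dataset=tokenized["train"],
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

The data quality matters far more than the training loop itself; garbage in, garbage out.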

18

Ago0330 t1_j9lm5ty wrote

I’m working on one that’s trained on JFK speeches and Bachelorette data to help people with conversation skills.

21

Gynophile t1_j9msb3s wrote

I can't tell if this is a joke or real

7

Ago0330 t1_j9msg1r wrote

It’s real. Gonna launch after GME moons

10

ihopeshelovedme t1_j9npl0j wrote

Sounds like a viable AI implementation to me. I'll be your angel investor and throw some Doge your way or something.

2

Borrowedshorts t1_j9ka0ta wrote

I don't think that's true, but I do believe it was fine-tuned on the specific dataset to achieve the SOTA result they reported.

5

InterestingFinish932 t1_j9m2xhe wrote

It chooses the correct answer from multiple choices; it isn't actually comparable to ChatGPT.
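For context, multiple-choice benchmarks are typically scored by computing each option's likelihood under the model and picking the highest; a rough sketch (model name is a placeholder, not the actual setup from the paper):

```python
# Sketch of multiple-choice evaluation: score each candidate answer by its
# average log-likelihood under the model and pick the best one.
# "gpt2" is a placeholder; assumes the question tokens are a prefix of the
# question+option tokenization, which holds for typical English text.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

def option_score(question: str, option: str) -> float:
    """Average log-probability of the option tokens given the question."""
    q_ids = tokenizer(question, return_tensors="pt").input_ids
    full_ids = tokenizer(question + " " + option, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(full_ids).logits
    # Log-probs over next-token predictions for positions 1..T-1.
    log_probs = torch.log_softmax(logits[0, :-1], dim=-1)
    targets = full_ids[0, 1:]
    n_q = q_ids.shape[1]
    # Keep only the option tokens (everything after the question prefix).
    opt_lp = log_probs[n_q - 1:, :].gather(1, targets[n_q - 1:, None])
    return opt_lp.mean().item()

question = "What is the powerhouse of the cell?"
options = ["The mitochondria", "The ribosome", "The nucleus"]
print(max(options, key=lambda o: option_score(question, o)))
```

There's no free-form generation involved, which is why the comparison to a chat model doesn't really hold.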

4