ArcticWinterZzZ t1_jdtn3hl wrote

Or, you know, you can just download Wikipedia locally

23

1II1I11II1I1I111I1 t1_jdtncfb wrote

GPT-4 is far, far smarter than Wikipedia.

35

ArcticWinterZzZ t1_jdtps8v wrote

Of course. Wikipedia cannot think. But what I mean is that if you just want to preserve information, you should preserve an archive and not an AI that can sometimes hallucinate information.

25

1II1I11II1I1I111I1 t1_jdtpyeu wrote

Wikipedia is 1% the archive that GPT-4 is though. Hallucinations will likely be solved soon according to Ilya Sutskever, keep up!

13

Cryptizard t1_jdur2sy wrote

This is completely wrong. Wikipedia has a lot more in depth information than GPT does. Try asking GPT about obscure facts some time.

5

1II1I11II1I1I111I1 t1_jdur9s4 wrote

Give me 3 facts and I'll ask GPT4 to check them now

3

Cryptizard t1_jduumtg wrote

I have access to GPT4, I'm not making this stuff up. Here are three from poking around, but keep in mind it will pretend to know the answer to anything; it is just wrong when you ask it to explain the details. It will not match actual fact, i.e. what is in Wikipedia.

What is an oblivious tree?

What is the population of Goleh-ye Cheshmeh?

Where was the 65th governor of Delaware born?

15

CancerPiss t1_jdujefs wrote

GPT cannot think either

−3

daou0782 t1_jduxcxr wrote

What is thinking? Do submarines swim?

9

CancerPiss t1_jduxuwl wrote

Is my microwave thinking? I mean, I click a few buttons and it starts doing things.

0

Ambiwlans t1_jdvjxkg wrote

While the GPT authors would agree with you, reddit knows better! GPT clearly thinks and has a soul and is basically agi!

4

CancerPiss t1_jdvlcz1 wrote

That's why you need to stay BASED😩 when dealing with redditors

1

Anjz OP t1_jdtnx32 wrote

Wikipedia will tell you the history of fishing, but it won't tell you how to fish.

For example, GPT-4 has knowledge of the fishing subreddit, fishing forums, Stack Exchange, etc., and even Wikipedia. So it infers based on the knowledge and data from those websites. You can ask it for the best spots to fish, what lures to use, how to tell if a fish is edible, and how to cook a fish like a 5-star restaurant.

Imagine that localized. It's beyond a copy of Wikipedia. Collective intelligence.

Right now our capabilities to run AI locally are limited to something like Alpaca 7B/13B for the most legible AI, but this won't be the case for long. We might have something similar to GPT-4 running locally in the near future.

13

ArcticWinterZzZ t1_jdtpq0u wrote

Of course, and I understand what you're talking about, I just mean that if you were interested in preserving human knowledge, an LLM would not be a great way to do it. It hallucinates information.

5

Puzzleheaded_Acadia1 t1_jdvpzmk wrote

Is GPT-4 really that much better than GPT-3? I don't have access to it, but if you've tried it, is it that good?

1

Anjz OP t1_jdtqjm4 wrote

I think past a certain point, hallucinations will be so rare that they won't matter.

Obviously in the current generation it's still quite noticeable, especially with GPT-3, but think 5 or 10 years down the line. The margin of error would be negligible. Even the recent implementation of the 'Reflection' technique greatly cuts down on hallucination for a lot of queries. And if you've used it, GPT-4 is much better at inferring truthful responses. It comes down to usability when shit hits the fan: you're not going to be searching Wikipedia for how to get clean drinking water.
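The 'Reflection' idea mentioned above can be sketched as a simple draft-critique-revise loop. This is a minimal illustration, not any particular paper's implementation; the `llm` function is a hypothetical stub standing in for a real model call:

```python
# Minimal sketch of a "Reflection"-style loop: draft an answer, have the
# model critique its own draft, then revise using that critique.
# `llm` is a hypothetical stub; a real system would call an actual model.
def llm(prompt: str) -> str:
    # Stubbed responses so the control flow can be demonstrated offline.
    if prompt.startswith("CRITIQUE"):
        return "The draft omits a caveat about source reliability."
    if prompt.startswith("REVISE"):
        return "Revised answer incorporating the critique."
    return "First-draft answer."

def answer_with_reflection(question: str) -> str:
    draft = llm(question)
    critique = llm(f"CRITIQUE the following answer for factual errors:\n{draft}")
    # The second pass gives the model a chance to catch its own hallucinations.
    return llm(f"REVISE the answer using this critique:\n{critique}\nDraft: {draft}")

print(answer_with_reflection("How do I get clean drinking water?"))
```

The point is only that a second self-checking pass sits between the question and the final answer; the quality gain depends entirely on the underlying model.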

I think it's a great way of information retrieval without the usage of networks.

0

ArcticWinterZzZ t1_jdtqupy wrote

Maybe, but it can't enumerate all of its knowledge for you, and it'd be better to reduce the actual network just to the reasoning component, and have "facts" stored in a database. That way its knowledge can be updated and we can make sure it doesn't learn the wrong thing.
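The split described above, reasoning in the network and facts in an updatable store, is roughly what retrieval-based setups do. A toy sketch, with a plain dict standing in for a real search index and an illustrative (not authoritative) fact:

```python
# Sketch of keeping "facts" outside the model so knowledge can be updated
# without retraining. FACTS and retrieve() are hypothetical stand-ins for
# a real search index or vector database.
FACTS = {
    "oblivious tree": "A tree data structure whose shape reveals nothing "
                      "about the sequence of operations that built it.",
}

def retrieve(query: str) -> str:
    # Naive keyword lookup standing in for real retrieval.
    for key, fact in FACTS.items():
        if key in query.lower():
            return fact
    return "No stored fact found."

def grounded_prompt(question: str) -> str:
    # The reasoning model receives the retrieved fact as context instead of
    # relying on memorized (and possibly hallucinated) knowledge.
    return f"Context: {retrieve(question)}\nQuestion: {question}"

print(grounded_prompt("What is an oblivious tree?"))
```

Updating the store updates the system's "knowledge" immediately, which is exactly the property the comment above is asking for.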

2

DaffyDuck t1_jdtz90r wrote

Can you not essentially prevent hallucinations by instructing it to tell you something, like a fact, only if it is 100% confident? Anyway, interesting topic! I'm also wondering if it could spit out all of its knowledge in a structured way to essentially rebuild human knowledge.

1

qepdibpbfessttrud t1_jduoaem wrote

Sure, but LLMs will compress it and even improve upon it.

If an LLM is trained well enough, you can ask it to write a Wiki article on any subject.

1

Ambiwlans t1_jdvk5zy wrote

Currently you can store an uncompressed copy of wikipedia in your pocket, so there isn't much advantage to compressing it.

I'd have both. And use GPT to interface with wikipedia if needed.

4

qepdibpbfessttrud t1_jdzuceg wrote

Maybe. I wouldn't be surprised if 90%+ of Wiki users were satisfied with just an AI chat trained on it. At ~21 GB, you can't yet run the whole thing in RAM cheaply.

I'm not advocating for getting rid of Wiki; amassing and preserving training data will likely be important for whole generations of AI. But I also wouldn't be surprised if some version of GPT could generate a version of Wiki better in every possible way than what all of mankind has managed so far.

1

QuartzPuffyStar t1_jdu2ml5 wrote

Pls no. Wikipedia is extremely biased, manipulated, and incomplete. It's only useful on the most uninteresting topics, and even then only as a starting point for further research.

It all started with good intentions and love for humanity, and ended up in control of private and state agencies.

−8

y___o___y___o t1_jdudulu wrote

Sounds like you've been brainwashed by charlatans.

11

QuartzPuffyStar t1_jduxrv5 wrote

Lol ok, keep taking Wikipedia as the cradle of human knowledge. I'm not even going to argue with you.

−2

EvilKatta t1_jdumi7w wrote

Well, so are LLMs.

2

QuartzPuffyStar t1_jduy7s3 wrote

They have the potential to learn cross-checking and to use wiki audit tools to estimate the probability of a wiki article being wrong, rather than taking it at face value.

Even though they were trained with wiki as a "very high value source".

At least GPT shows signs of that. Bing just closes the conversation when you ask it to explore beyond the first-page wiki article you could have read yourself.

0