Anjz

Anjz t1_jedk8g5 wrote

I gave GPT-4 a task to name 5 novel camelids and it gave me:

Guanacorn - A playful name that combines "guanaco" with "unicorn," giving it a mythical and fantastical appeal.

Vicunimbus - A name inspired by "vicuña" and "nimbus," invoking the image of a camelid that roams among the clouds.

Zephyrella - A name that combines "zephyr" (a light wind) with "llama," suggesting a graceful and ethereal creature.

Dromedazzle - A name that blends "dromedary" with "dazzle," evoking a sense of wonder and awe.

Moonvicuna - A name inspired by the moon, symbolizing the beauty and serenity of the night.

Alpacalypse - A playful and imaginative name that combines "alpaca" with "apocalypse," giving it a bold and adventurous vibe.

Personally, I like Alpacalypse.

Also, I think this will replace whoever makes Pokémon names.

15

Anjz OP t1_jdxlw8f wrote

No, ChatGPT is closed source and we don't have the weights for it. Plus, it's probably too big to run on consumer GPUs.

Stanford University came up with Alpaca, a lighter-weight model fine-tuned from Facebook's LLaMA that still works about as well as earlier iterations of GPT. This one you can run locally, given some know-how.
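As a rough back-of-the-envelope sketch of why a 7B model runs locally while a GPT-3-class model doesn't (the quantization scheme, the 175B comparison point, and ignoring activation memory are all my assumptions, not figures from this thread):

```python
# Rough memory-footprint estimate for running a model locally.
# Assumptions: 4-bit quantized weights for the small model,
# 16-bit weights for the large one; activation memory ignored.

def weight_gb(params_billions: float, bits_per_param: float) -> float:
    """Approximate weight storage in gigabytes."""
    return params_billions * 1e9 * bits_per_param / 8 / 1e9

llama_7b = weight_gb(7, 4)      # ~3.5 GB: fits in laptop RAM
gpt3_class = weight_gb(175, 16) # ~350 GB: far beyond consumer GPUs

print(f"7B model @ 4-bit:    ~{llama_7b:.1f} GB")
print(f"175B model @ 16-bit: ~{gpt3_class:.0f} GB")
```

The two-orders-of-magnitude gap is the whole story: quantized 7B weights fit in ordinary RAM, while a GPT-3-scale model needs a rack of datacenter GPUs just to hold the weights.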

1

Anjz OP t1_jdwb284 wrote

While it is true that you can download the entire English Wikipedia in a relatively small size, it does not diminish the potential of AI and LLMs. Wikipedia is a static collection of human-generated knowledge, while AI, such as LLMs, can actively synthesize, analyze, and generate new insights based on available information. AI has the potential to connect disparate pieces of knowledge, create context, and provide personalized assistance. Thus, the comparison between Wikipedia and AI should not be based on size alone, but also on the dynamic capabilities and potential applications that AI offers.

For example, can you ask Wikipedia to walk you through distilling water step by step given only certain tools, or tell you how to disinfect a wound when no medication is available? Sure, you can find information on it, but a lot of people won't know what to do with that information.

That is the difference between Knowledge and Wisdom.

1

Anjz OP t1_jdw5teo wrote

While I somewhat agree with your statement, and it is true that the internet contains noise, it also offers unprecedented access to information and diverse perspectives.

The key is to develop critical thinking and discernment, which can transform data into meaningful understanding. Technology, such as AI, can help us navigate, process, and synthesize vast amounts of information. We should not view AI as replacing human wisdom, but as a tool that can complement and enhance our collective knowledge, while still valuing experience and human insight.

Granted, that assumes it's put in the hands of the right individuals. Some people will take a stick and see only a stick for what it is: a collection of biological matter prone to rotting. Others will see it as a transformative tool that could amount to much more than face value, like a fishing rod or a handle for a hammer.

Given this context, at what point can you infer true wisdom? Does a three-year-old child reflect true wisdom? Is there a certain point where you could legitimately say that an AI now fully understands context and infers true wisdom? Or is this subjective?

Just my two cents, but embracing technology does not necessitate abandoning true wisdom; it can assist in our quest for it.

1

Anjz OP t1_jdvkutn wrote

It's not as good as ChatGPT, but it's much lighter. Granted, it's just a small model fine-tuned on output from the GPT-3 API; fine-tuned against GPT-4 with a bigger instruction set, it would probably be a whole different beast. It was trained on something like 10x less data, if not more. We're fully capable of creating something much better; it's just a matter of open source figuring out how and catching up to these companies keeping the Krabby Patty secret formula. Turns out for-profit companies don't like divulging world-changing information. Who woulda thought?

If you take a look at YouTube, there are a couple of demos from people running it on a Raspberry Pi. Granted, at the moment it's at a snail's pace, but this could be a different story a year or so from now. It works decently well on a laptop.

1

Anjz OP t1_jdvj7ry wrote

With it running on phones, laptops, and Raspberry Pis, a solar panel would be sufficient to power small devices.

If you've tried GPT-4, its propensity to hallucinate is so much lower than previous iterations that errors would be negligible. We have Alpaca now, but we could very well have something like GPT-4 running locally in the near future, given the pace at which things are improving.
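A rough power-budget sketch of the solar claim (every wattage figure here is my ballpark assumption, not a measured value from the thread):

```python
# Back-of-the-envelope solar budget for a small model on a Raspberry Pi.
# Assumed figures: the Pi draws ~6 W under load; a small 20 W panel
# gets ~5 effective sun-hours per day; battery/charging losses ignored.

PI_WATTS = 6       # assumed average draw under load
PANEL_WATTS = 20   # assumed small panel rating
SUN_HOURS = 5      # assumed effective sun-hours per day

daily_yield_wh = PANEL_WATTS * SUN_HOURS   # ~100 Wh harvested per day
runtime_hours = daily_yield_wh / PI_WATTS  # ~16.7 h of operation per day

print(f"Harvest ~{daily_yield_wh} Wh/day -> ~{runtime_hours:.1f} h of runtime")
```

Even with conservative numbers, a panel you can carry covers most of a day of inference, and queries are bursty rather than round-the-clock anyway.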

1

Anjz OP t1_jdtycqs wrote

I think very soon there will be low-powered ASIC (application-specific integrated circuit) devices that can run powerful language models locally.

It's within our grasp. It might be integrated into our smartphones sooner rather than later, actually.

10

Anjz OP t1_jdtva5c wrote

That's pretty crazy now that you got me thinking deeper.

Future civilizations could send out cryostatic human embryo pods to suitable 'host' planets billions of light-years away, along with an AI carrying the collective knowledge of humanity as we know it, which would teach them from birth and restart civilization.

Or maybe we don't even need biological bodies at that point.

Fuck that would be a killer movie plot.

I'm thinking way too ahead, but I love sci-fi concepts like this.

37

Anjz OP t1_jdtth3u wrote

In a similar line of thought to what you've just said: we've always had robotic-sounding text-to-speech, but imagine applying current machine learning foundations and training it on huge amounts of audio data of how people actually talk…

That will be a bit freaky, I would think. I would be perplexed and amazed.

12

Anjz OP t1_jdtqjm4 wrote

I think past a certain point, hallucinations will be so rare that they won't matter.

Obviously, in the current generation it's still quite noticeable, especially with GPT-3, but think 5 or 10 years down the line. The margin of error would be negligible. Even the recent 'Reflection' technique greatly cuts down on hallucination for a lot of queries. And if you've used it, GPT-4 is much better at giving truthful responses. It comes down to usability when shit hits the fan: you're not going to be searching Wikipedia for how to get clean drinking water.

I think it's a great way of retrieving information without network access.

0

Anjz OP t1_jdtpqcq wrote

It is, and I'd imagine other companies hiring devs from OpenAI, or even OpenAI devs divulging information to open source, to create something as good as GPT-4.

Even the instruction data extracted from GPT-3, like what Stanford used for training, was hugely useful.

16

Anjz OP t1_jdtnx32 wrote

Wikipedia will tell you the history of fishing, but it won't tell you how to fish.

For example, GPT-4 was trained on public data from the fishing subreddit, fishing forums, Stack Exchange, and even Wikipedia, so it infers based on the knowledge on those websites. You can ask it for the best spots to fish, what lures to use, how to tell if a fish is edible, or how to cook a fish like a 5-star restaurant.

Imagine that localized. It's beyond a copy of Wikipedia. Collective intelligence.

Right now our capability to run AI locally is limited to something like Alpaca 7B/13B for the most coherent results, but in the near future this won't be the case. We might soon have something similar to GPT-4 running locally.

13

Anjz t1_jc758w9 wrote

Blows my mind that they used a large language model to train a small one.

>Fine-tuning a 7B LLaMA model took 3 hours on 8 80GB A100s, which costs less than $100 on most cloud compute providers.

Now imagine what's possible with GPT-4 training a smaller language model on a bigger instruction sample, with corporate backing to use hundreds of A100s at a time for days on end.
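The quoted cost checks out with simple arithmetic (the per-GPU hourly rate is my assumption; cloud prices vary):

```python
# Sanity-check the quoted fine-tuning cost: 3 hours on 8x A100 80GB.
# Assumes roughly $4 per A100-hour, a typical on-demand cloud rate.

HOURS = 3
GPUS = 8
RATE_PER_GPU_HOUR = 4.00  # USD, assumed

gpu_hours = HOURS * GPUS              # 24 A100-hours total
cost = gpu_hours * RATE_PER_GPU_HOUR  # ~$96, under the quoted $100

print(f"{gpu_hours} GPU-hours -> ~${cost:.0f}")
```

Scaling the same arithmetic to hundreds of GPUs for days is still only tens of thousands of dollars, which is pocket change for a funded lab.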

We're already within reach of exponential growth for low-powered devices; it's not going to take years like people have predicted.

3