Scarlet_pot2

Scarlet_pot2 OP t1_jedrsmn wrote

You are such a sad person. Your life is so sad you have to insult strangers on the internet to make yourself feel better. And you're so low-IQ you can't even form a coherent argument. Shut up and go back to work at your 9-5 restaurant job. Reddit loser.

Also: anyone can link a few irrelevant articles. You linked ones that have no relation to the topic at hand, but you are too brain-dead to actually comprehend that.

Take your sausage fingers off the keyboard and go learn common sense.

And lose some weight while you're at it.

1

Scarlet_pot2 OP t1_jedadfu wrote

Talking to you is like talking to a brick wall. I'm done. Keep idolizing rich people with your false narratives.

Yeah, I'm sure the first person to learn how to raise crops was drowning in wealth. I'm sure the first person to make a bow was somehow wealthy, lmao. I'm sure the wealthy king walked into the blacksmith's place one day and just figured out how to build chainmail. The person who invented the wheel had so much wealth he didn't even need to get up if he didn't want to. All sarcasm. This belief you have is illogical.

In reality, most advancements were made by regular people, very poor by modern standards, who were just trying to improve their lives, stumbled onto something by accident, or got there some other way.

−2

Scarlet_pot2 OP t1_jed9mxw wrote

I see your point about tailoring foundational models. The problem is: do you think companies like OpenAI and Google are going to allow regular people to tailor-train their models however they want? It's debatable. Even in the best case, the corps will still put some restrictions on what and how the models are tailor-trained.

The best way to get around this is to have open-source foundational models. To do that you need available compute (people donating compute over the internet) and free training (free resources and groups to learn together). I'm sure tailoring corporate models will play a role, but if we want true decentralization we should approach it from all angles.

1

Scarlet_pot2 OP t1_jed8dir wrote

These articles are talking about our modern society. Our technology is at the point where it takes a lot of effort to make even modest improvements (in most areas). For most of history, the innovations made didn't cost much, like how to make a bow or how to smith metal. If you think all inventions were made by wealthy people, you are delusional. It wasn't the king who learned how to make chainmail armor, and it wasn't the noble who learned how to raise bigger crops.

P.S. Your insults don't help your point at all.

−1

Scarlet_pot2 OP t1_jed7tts wrote

Fine-tuning isn't the problem. If you look at the Alpaca paper, they fine-tuned the LLaMA 7B model on GPT-3 outputs and got results comparable to GPT-3 for only a few hundred dollars. The real cost is the base training of the model, which can be very expensive. Having enough compute to run it afterwards is an issue too.

Both problems could be helped if there were a free online system to donate compute that anyone was allowed to use.
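
To make the cheap side of that concrete, here's a minimal supervised fine-tuning sketch using the Hugging Face transformers and datasets libraries. This is not the actual Alpaca recipe: the checkpoint name, the instructions.json file, and the hyperparameters are placeholders I'm assuming for illustration.

```python
# Minimal supervised fine-tuning sketch (not the exact Alpaca recipe).
# Assumes: pip install transformers datasets accelerate, access to a
# LLaMA-style checkpoint, and your own instruction/response JSON file.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          Trainer, TrainingArguments)

base_model = "huggyllama/llama-7b"          # placeholder checkpoint name
tokenizer = AutoTokenizer.from_pretrained(base_model)
tokenizer.pad_token = tokenizer.eos_token   # LLaMA has no pad token by default
model = AutoModelForCausalLM.from_pretrained(base_model)

# Each record: {"instruction": ..., "response": ...} -- a hypothetical file.
data = load_dataset("json", data_files="instructions.json")["train"]

def to_tokens(example):
    text = (f"### Instruction:\n{example['instruction']}\n"
            f"### Response:\n{example['response']}")
    toks = tokenizer(text, truncation=True, max_length=512, padding="max_length")
    # Causal LM objective: predict the next token. A real recipe would also
    # mask out the padding and prompt tokens in the labels.
    toks["labels"] = toks["input_ids"].copy()
    return toks

data = data.map(to_tokens, remove_columns=data.column_names)

args = TrainingArguments(output_dir="finetuned", per_device_train_batch_size=4,
                         num_train_epochs=3, learning_rate=2e-5, fp16=True)
Trainer(model=model, args=args, train_dataset=data).train()
```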

1

Scarlet_pot2 OP t1_jed747y wrote

Okay, now that's just incorrect. Most human innovations were made by small groups or even a single person, without much capital. Think of the wheel, agriculture, electricity, the light bulb, the first planes, Windows OS. The list goes on and on.

It's only recently that innovations take super-teams and large amounts of capital. I'm saying we should crowdsource funds, share free resources to learn from together, donate compute, etc. It's totally possible, but modern people aren't very good at forming groups. Maybe it's because people are too tired from work, or have become much less social. Whatever the reason, we could still speed up AI progress and decentralize AI if people learned to talk and collaborate again.

0

Scarlet_pot2 OP t1_jed67k5 wrote

True, Alpaca is competent, but we need more models, and better and larger ones. A distributed system where people donate compute could also be used to let people run larger models. Maybe not 175 billion parameters, but maybe 50-100B, as long as everyone donating compute isn't using it at the same time.

That being said, more small models like Alpaca / LLaMA are needed too. If we made sufficient resources and training available to anyone, models like that could be created and released more often.
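
The basic idea behind running a big model on donated compute is to split its layers across the donors and pass activations down the chain. Here's a toy, local-only sketch of that partitioning; real volunteer-compute projects (Petals is one example) add the actual networking, fault tolerance, and scheduling on top. The node names and stand-in "layers" are made up for illustration.

```python
# Toy sketch of volunteer-hosted inference: each donor node holds a slice of
# the model's layers and activations hop from node to node. Everything here
# is local and simplified -- no networking, no real transformer blocks.
from dataclasses import dataclass

@dataclass
class VolunteerNode:
    name: str
    layers: list          # the contiguous slice of layers this donor hosts

    def forward(self, activations):
        for layer in self.layers:
            activations = layer(activations)
        return activations

def partition_layers(layers, donors):
    """Split the model's layer list into one contiguous slice per donor."""
    chunk = -(-len(layers) // len(donors))          # ceiling division
    return [VolunteerNode(name, layers[i:i + chunk])
            for name, i in zip(donors, range(0, len(layers), chunk))]

# Stand-in "layers": simple callables instead of transformer blocks.
model_layers = [lambda x, k=k: x + k for k in range(8)]
nodes = partition_layers(model_layers, ["alice", "bob", "carol"])

x = 0.0
for node in nodes:                                   # pipeline: hop node to node
    x = node.forward(x)
print(x)                                             # 28.0 = 0 + (0 + 1 + ... + 7)
```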

1

Scarlet_pot2 OP t1_jed1kio wrote

That's definitely a positive move. The only issue is that people at LAION will probably decide who gets access and when. Still much better than corps or gov though, but more projects would be good. Maybe a distributed training network where people could contribute compute over the internet, along with a push to give anyone who wants it free training in ML / AI? Those two things would help decentralize AI.

3

Scarlet_pot2 t1_je937zq wrote

Going from scratch to having a model is 6 steps. The first step is data gathering - there are huge open-source datasets available, such as "The Pile" by EleutherAI. The second step is data cleaning, which is basically preparing the data to be trained on. The third step is designing the architecture - the advanced AI models we know of are all based on the transformer architecture, which is a type of neural network. The paper "Attention Is All You Need" explains how to design a basic transformer. There have been improvements since, so more papers would need to be read if you want a very good model.
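
For a feel of step three, here's a minimal sketch of a single transformer block in PyTorch, roughly following the encoder layer from "Attention Is All You Need". It's deliberately simplified (no positional encoding, no full encoder/decoder stack), and the sizes are just illustrative defaults.

```python
# One simplified transformer block: self-attention + feed-forward, each with
# a residual connection and layer norm, as in "Attention Is All You Need".
import torch
import torch.nn as nn

class TransformerBlock(nn.Module):
    def __init__(self, d_model=512, n_heads=8, d_ff=2048, dropout=0.1):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads,
                                          dropout=dropout, batch_first=True)
        self.ff = nn.Sequential(nn.Linear(d_model, d_ff), nn.ReLU(),
                                nn.Linear(d_ff, d_model))
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)
        self.drop = nn.Dropout(dropout)

    def forward(self, x, attn_mask=None):
        # Self-attention sub-layer with residual connection and layer norm.
        attn_out, _ = self.attn(x, x, x, attn_mask=attn_mask)
        x = self.norm1(x + self.drop(attn_out))
        # Position-wise feed-forward sub-layer, same residual pattern.
        x = self.norm2(x + self.drop(self.ff(x)))
        return x

block = TransformerBlock()
tokens = torch.randn(2, 16, 512)        # (batch, sequence length, embedding dim)
print(block(tokens).shape)              # torch.Size([2, 16, 512])
```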

The fourth step is to train the model. The architecture developed in step three is trained on the data from steps 1 and 2. You need GPUs to do this. This is automatic once you start it; just wait until it's done.
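
The training step boils down to a loop like the one below: show the model batches of token ids and train it to predict each next token. This is only a toy sketch; the tiny stand-in model and the random "data" are there just so the loop runs end to end.

```python
# Toy next-token-prediction training loop. The model and data are stand-ins;
# a real run would use a full transformer, a real tokenized corpus, and GPUs.
import torch
import torch.nn as nn

vocab_size, d_model, seq_len = 1000, 128, 32

model = nn.Sequential(                      # stand-in for a real transformer
    nn.Embedding(vocab_size, d_model),
    nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True),
    nn.Linear(d_model, vocab_size),
)
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-4)
loss_fn = nn.CrossEntropyLoss()

for step in range(100):
    batch = torch.randint(0, vocab_size, (8, seq_len))    # fake token ids
    inputs, targets = batch[:, :-1], batch[:, 1:]          # shift by one token
    logits = model(inputs)                                  # (8, seq_len-1, vocab)
    loss = loss_fn(logits.reshape(-1, vocab_size), targets.reshape(-1))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    if step % 20 == 0:
        print(f"step {step}: loss {loss.item():.3f}")
```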

Now you have a baseline AI. The fifth step is fine-tuning the model. You can fine-tune your model on the outputs of a more advanced model to improve it; this was shown by the Alpaca paper a few weeks ago. After that, the sixth step is RLHF. This can be done by people without technical knowledge. The model is asked a question (by the user, or auto-generated), it produces multiple answers, and the user ranks them from worst to best. This teaches the model which answers are good and which aren't. This is basically aligning the model.
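
To show how those rankings get used, here's a tiny sketch of the preference-learning half of RLHF: turn the rankings into (preferred, rejected) pairs and train a reward model so the preferred answer scores higher (a Bradley-Terry style loss). The "features" are random stand-ins for real answer embeddings, and the RL step against this reward model (e.g. PPO) isn't shown.

```python
# Train a toy reward model from pairwise human preferences.
import torch
import torch.nn as nn
import torch.nn.functional as F

reward_model = nn.Sequential(nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 1))
optimizer = torch.optim.Adam(reward_model.parameters(), lr=1e-3)

for step in range(200):
    preferred = torch.randn(16, 64)      # embeddings of answers ranked higher
    rejected = torch.randn(16, 64)       # embeddings of answers ranked lower
    margin = reward_model(preferred) - reward_model(rejected)
    # Bradley-Terry style loss: push the preferred answer's reward above
    # the rejected answer's reward.
    loss = -F.logsigmoid(margin).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```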

After those 6 steps you have a finished AI model

1

Scarlet_pot2 t1_je92iud wrote

Most of this is precise and correct, but it seems like you're saying the transformer architecture is the GPUs? The transformer architecture is the neural network and how it is structured. It's code. The paper "Attention Is All You Need" describes how the transformer architecture is built.

After you have the transformer written out, you train it on GPUs using the data you gathered. Large free datasets such as "The Pile" by EleutherAI can be used to train on. This part is automatic.

The human-involved parts are the data gathering, data cleaning, and designing the architecture before the training. Afterwards, humans do the fine-tuning / RLHF (reinforcement learning from human feedback).
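
The data-cleaning part in particular is very approachable. Here's a rough sketch of the simplest version of it: normalize whitespace, drop documents that are too short, and remove exact duplicates. Real pipelines (language filtering, fuzzy dedup, quality scoring) go much further than this.

```python
# Basic text cleaning: collapse whitespace, drop tiny docs, drop exact dupes.
import re

def clean_corpus(documents, min_chars=200):
    seen = set()
    cleaned = []
    for doc in documents:
        doc = re.sub(r"\s+", " ", doc).strip()   # collapse messy whitespace
        if len(doc) < min_chars:                  # drop near-empty documents
            continue
        if doc in seen:                           # drop exact duplicates
            continue
        seen.add(doc)
        cleaned.append(doc)
    return cleaned

docs = ["some scraped   page text ... " * 20,
        "too short",
        "some scraped   page text ... " * 20]
print(len(clean_corpus(docs)))   # 1: the short doc and the duplicate are dropped
```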

Those are the 6 steps. Making an AI model can seem hard, like magic, but it can be broken down into manageable steps. It's doable, especially if you have a group of people who specialize in the different steps: maybe someone who's good with the data aspects, someone good at writing the architecture, some good at fine-tuning, and some people to do RLHF.

2