
ohmsalad t1_j920dtk wrote

ChatGPT says it wrote the above. While this may be technically possible in principle, in reality there are many things that still have to be figured out. P2P and distributed training systems need roughly homogeneous processing power, which means mixed GPUs/CPUs and varied PC configurations won't work well together, if at all, and a DAO can't solve that yet. What about their training sets? What about latency and bandwidth? With current blockchain speeds and confirmation times, training would take centuries. We are not there yet; when we figure out how to P2P-train an LLM, we are going to do it without a blockchain. This looks to me like an overambitious project by ill-informed people, or a scam.
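To put a rough number on the "centuries" point, here is a back-of-envelope sketch assuming, hypothetically, that every optimizer step had to wait for on-chain finality. The figures (one million optimizer steps, ~60 minutes to finality) are illustrative assumptions, not numbers from the original comment or from any specific project:

```python
# Back-of-envelope: per-step blockchain confirmation vs. LLM training.
# All numbers are illustrative assumptions.

optimizer_steps = 1_000_000      # rough order of magnitude for LLM pretraining
finality_minutes = 60            # assumed time to on-chain finality per sync

total_minutes = optimizer_steps * finality_minutes
years = total_minutes / 60 / 24 / 365
print(f"~{years:.0f} years spent just waiting on confirmations")  # ~114 years
```

Even with generous assumptions, the waiting time alone is on the order of a century, before counting compute or bandwidth.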

0

onil_gova t1_ja0tlsm wrote

I agree. I don't think our current methods of training models, mainly backpropagation, can be distributed across heterogeneous machines with varying latencies; it just seems impractical and unlikely to scale. I can't imagine what would happen if a node goes down. Do you just lose those neurons? Is there a self-correcting mechanism? Are all the other nodes left waiting? We don't currently have methods for training a partial model and scaling it up and down as neurons are added or removed. And no, dropout is not doing this. The models are usually static in structure from creation to fully trained.
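A minimal toy sketch of why a dropped node is a problem in the standard way backprop is distributed today (synchronous data-parallel SGD with gradient averaging). This is not the project's actual design; the single-weight-vector "model", the `worker_gradient` helper, and the simulated failure are all hypothetical stand-ins:

```python
# Toy synchronous data-parallel SGD: each worker computes a gradient on its
# data shard, gradients are averaged (all-reduce), then everyone applies the
# same update. If one worker is gone, the averaged gradient cannot be formed.
import numpy as np

def worker_gradient(worker_id, weights, alive=True):
    """Stand-in for one worker's backprop pass over its local shard."""
    if not alive:
        return None  # node went down mid-step
    rng = np.random.default_rng(worker_id)
    return rng.normal(size=weights.shape)  # fake gradient for illustration

weights = np.zeros(4)
lr = 0.01

for step in range(3):
    # Simulate worker 2 dying after the first step.
    grads = [worker_gradient(i, weights, alive=(i != 2 or step < 1)) for i in range(4)]
    if any(g is None for g in grads):
        # Synchronous semantics: the step needs every shard's gradient,
        # so the surviving workers can only wait or abort the step.
        print(f"step {step}: a worker is down, step cannot complete")
        continue
    avg_grad = np.mean(grads, axis=0)  # the "all-reduce" average
    weights -= lr * avg_grad
    print(f"step {step}: weights updated to {weights}")
```

In practice, frameworks handle this by checkpointing and restarting the job rather than by dropping or regrowing neurons, which is exactly the missing machinery being pointed at here.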

Another thing I'm not clear about is whether you would be contributing to training a model or contributing an already-trained model. I don't see how having a collection of trained models would lead to AGI. I also have a lot of doubts, since it seems like we need to solve a lot of problems before something like this is practical or possible.

2