
HeinrichTheWolf_17 t1_ja3qorz wrote

Open-source options are a great remedy for this issue (see Stable Diffusion). Now we just need the same thing for LLMs.
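
For anyone who hasn't tried it, this is roughly what open source buys you in practice: Stable Diffusion runs locally in a few lines with Hugging Face's `diffusers` library. A minimal sketch, assuming a CUDA GPU and the `runwayml/stable-diffusion-v1-5` checkpoint (any compatible open checkpoint can be swapped in):

```python
# Minimal local Stable Diffusion run via Hugging Face diffusers.
# Assumes a CUDA GPU; the model ID is one common open checkpoint.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,  # halves VRAM use on GPU
)
pipe = pipe.to("cuda")

image = pipe("an astronaut riding a horse, oil painting").images[0]
image.save("astronaut.png")
```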

66

el_chaquiste t1_ja48owc wrote

Meta's LLaMA seems to be a step in that direction, even if people like E. Yudkowsky don't believe it's any good (he has basically called Meta's engineers incompetent).

11

rainy_moon_bear t1_ja7lsd5 wrote

It's not open source, and it isn't QAT (quantization-aware trained), so it's behind open-source alternatives for instruction tuning or RLHF.
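
To unpack the QAT part: quantization-aware training fine-tunes a model with fake-quantization ops inserted into the graph, so the weights survive conversion to int8. A toy PyTorch sketch of the prepare/convert flow (nothing LLaMA-specific, just the mechanics on a tiny model):

```python
# Toy sketch of quantization-aware training (QAT) in PyTorch eager mode.
# A real LLM pipeline would differ; this only shows the prepare/convert flow.
import torch
import torch.nn as nn
import torch.ao.quantization as tq

model = nn.Sequential(
    tq.QuantStub(),    # marks where tensors enter the quantized region
    nn.Linear(16, 32),
    nn.ReLU(),
    nn.Linear(32, 4),
    tq.DeQuantStub(),  # marks where tensors leave it
)

model.qconfig = tq.get_default_qat_qconfig("fbgemm")  # x86 CPU backend
model_prepared = tq.prepare_qat(model.train())

# Fine-tune with fake-quant ops in place (one dummy step shown).
opt = torch.optim.SGD(model_prepared.parameters(), lr=1e-3)
x, y = torch.randn(8, 16), torch.randn(8, 4)
loss = nn.functional.mse_loss(model_prepared(x), y)
loss.backward()
opt.step()

# Convert to a real int8 model for inference.
model_int8 = tq.convert(model_prepared.eval())
print(model_int8(x).shape)  # torch.Size([8, 4])
```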

2

Spire_Citron t1_ja56hz2 wrote

Yup. As long as there are open-source models, things may stop getting better, but they can't get worse. And they probably will get better, because a lot of the improvements come from the community, not from people trying to make money.

3