[P] RWKV 14B Language Model & ChatRWKV : pure RNN (attention-free), scalable and parallelizable like Transformers
Submitted by bo_peng on January 17, 2023 at 4:54 PM in MachineLearning (19 comments, 110 points)
LetterRip wrote on January 18, 2023 at 12:42 AM, replying to limpbizkit4prez (5 points):
RWKV stands for Receptance Weighted Key Value.
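To make the name concrete: the receptance (R), key (K), and value (V) channels drive RWKV's attention-free time-mixing recurrence, which replaces softmax attention with an exponentially decaying running average of past values, gated by a sigmoid of the receptance. Below is a minimal single-channel sketch of that WKV recurrence in the spirit of the RWKV formulation; the function name, scalar-channel simplification, and omission of the numerical stabilization used in the real CUDA kernel are all simplifications for illustration, not the project's actual implementation.

```python
import math

def rwkv_wkv_sketch(r, k, v, w, u):
    """Toy single-channel RWKV WKV recurrence (illustrative only).

    r, k, v -- per-token receptance/key/value scalars (length-T lists)
    w       -- positive decay rate for past tokens
    u       -- "bonus" added to the current token's key

    Keeps a running exp-weighted numerator/denominator over past
    values, so each step is O(1) -- this is what makes the model an
    RNN at inference time while staying parallelizable in training.
    """
    num, den = 0.0, 0.0  # running weighted sum of values / weights
    out = []
    for rt, kt, vt in zip(r, k, v):
        # current token is weighted with the bonus u instead of decay
        cur = math.exp(u + kt)
        wkv = (num + cur * vt) / (den + cur)
        # receptance gates how much of the mix is "accepted"
        out.append(wkv / (1.0 + math.exp(-rt)))
        # decay the past state, then absorb the current token
        decay = math.exp(-w)
        num = decay * (num + math.exp(kt) * vt)
        den = decay * (den + math.exp(kt))
    return out
```

Because the state is just two scalars per channel, generation needs no growing key/value cache, unlike a Transformer.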