[R] Is there any work being done on reducing training weight vector size without reducing computational overhead (e.g. pruning)? Submitted by Moose_a_Lini (t3_yjwvav) on November 2, 2022 at 5:48 AM in MachineLearning (23 comments)
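For context, a minimal sketch of the situation the question seems to describe (assuming PyTorch and its torch.nn.utils.prune utilities): unstructured magnitude pruning zeroes out a fraction of a layer's weights, shrinking the effective parameter count, but the forward pass still runs the same dense matmul, so the compute cost is unchanged unless sparse kernels are used.

```python
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

layer = nn.Linear(1024, 1024)

# Zero out the 50% of weights with the smallest L1 magnitude.
prune.l1_unstructured(layer, name="weight", amount=0.5)

# Make the pruning permanent: the zeros are baked into layer.weight.
prune.remove(layer, "weight")

sparsity = (layer.weight == 0).float().mean().item()
print(f"weight sparsity: {sparsity:.2f}")  # roughly 0.50

# The layer still multiplies a dense 1024x1024 matrix, so the FLOPs per
# forward pass are the same; only the number of nonzero weights dropped.
x = torch.randn(8, 1024)
y = layer(x)
```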
dI-_-I t1_iur5z7j wrote on November 2, 2022 at 12:55 PM No, because it makes more sense to start with an overly large network and then reduce compute to a reasonable level than to keep compute constant.