Rephil1
Rephil1 t1_iu64uyw wrote
Reply to comment by GPUaccelerated in Do companies actually care about their model's training/inference speed? by GPUaccelerated
For live video at 30 fps you get ~33 ms per frame to read the frame, run inference on it, and draw the detection boxes, plus whatever overhead you have on top.
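To make that budget concrete, here's a minimal sketch of the loop (assuming OpenCV and a hypothetical `detect()` stand-in for whatever model you actually run) that flags frames going over the ~33 ms budget:

```python
import time
import cv2

FRAME_BUDGET_S = 1.0 / 30  # ~33 ms per frame at 30 fps

def detect(frame):
    # Hypothetical stand-in for your model's inference call;
    # returns a list of (x, y, w, h) boxes.
    return [(50, 50, 100, 100)]

cap = cv2.VideoCapture(0)  # live camera feed
while cap.isOpened():
    start = time.perf_counter()

    ok, frame = cap.read()            # 1) read the frame
    if not ok:
        break
    boxes = detect(frame)             # 2) run inference
    for (x, y, w, h) in boxes:        # 3) draw detection boxes
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("live", frame)

    elapsed = time.perf_counter() - start
    if elapsed > FRAME_BUDGET_S:
        print(f"over budget: {elapsed * 1000:.1f} ms")
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```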
Rephil1 t1_ixsa6jj wrote
Reply to comment by jazzzzzzzzzzzzzzzy in Is Linux still vastly preferred for deep learning over Windows? by moekou
Standard driver + CUDA. Get yourself a Docker container and you're good to go 😊 this is probably the easiest way to do it
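Once you're inside a CUDA-enabled container (e.g. one of the NVIDIA PyTorch images run with GPU access), a quick sanity check like this confirms the GPU is actually visible through the host driver — assuming PyTorch is installed in the image:

```python
import torch

# Sanity check that the container can see the GPU via the host driver.
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("Device:", torch.cuda.get_device_name(0))
    print("CUDA runtime version:", torch.version.cuda)
```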