yannbouteiller t1_j70o6y3 wrote

FPGAs are theoretically better than GPUs for deploying Deep Learning models simply because they are theoretically better than anything at doing anything. In practice, though, you never have enough circuitry on an FPGA to efficiently deploy a large model, and they are not targeted by the main Deep Learning libraries, so you have to do the whole thing by hand: quantizing your model, extracting its weights, coding each layer in embedded C/VHDL/etc., and doing most of the hardware optimization yourself. It is tedious enough that plug-and-play solutions like GPUs/TPUs are preferable in most cases, including embedded systems.
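To give a feel for the manual quantization step mentioned above, here is a minimal sketch of symmetric per-tensor int8 quantization (function names and the error check are illustrative, not from any particular library):

```python
import numpy as np

def quantize_int8(weights):
    """Symmetric per-tensor int8 quantization of a float weight array."""
    scale = np.max(np.abs(weights)) / 127.0  # map the largest magnitude to 127
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights to check the quantization error."""
    return q.astype(np.float32) * scale

# Quantize a tiny weight matrix and measure the worst-case round-trip error
w = np.array([[0.5, -1.2], [0.03, 0.9]], dtype=np.float32)
q, scale = quantize_int8(w)
err = np.max(np.abs(dequantize(q, scale) - w))
```

The int8 values and the single float scale are what you would then bake into the FPGA design; on a GPU/TPU, toolchains do this step (and the per-layer kernels) for you.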

4

Open-Dragonfly6825 OP t1_j72qtao wrote

That actually makes sense. FPGAs are very complex to program, even though the gap between software and hardware programming has been narrowed by High-Level Synthesis (e.g., OpenCL). I can see how it is just easier to use a GPU, which is simpler to program, or a TPU, which already has compatible libraries that abstract away the low-level details.

However, FPGAs have been gaining die area and available resources in recent years. Is that still not enough circuitry?

1