Submitted by JanBitesTheDust t3_11ddohk in MachineLearning

Hi,

I've been reading up on the backpropagation algorithm used in artificial neural nets. After finding out about automatic differentiation, I wanted to implement it myself. The implementation is fairly simple in Python (which allows operator overloading and has a garbage collector), but I wanted to see how different it would be in C. I wrote up a general overview of autodiff in the readme of the repo.
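
For anyone unfamiliar, the operator-overloading version is roughly this kind of thing (a stripped-down reverse-mode sketch for illustration; names like Var and backward are not the actual code from the repo):

```python
class Var:
    def __init__(self, value, parents=()):
        self.value = value        # forward value
        self.grad = 0.0           # accumulated derivative of the output w.r.t. this node
        self._parents = parents   # pairs of (parent Var, local partial derivative)

    def __add__(self, other):
        other = other if isinstance(other, Var) else Var(other)
        return Var(self.value + other.value, ((self, 1.0), (other, 1.0)))

    def __mul__(self, other):
        other = other if isinstance(other, Var) else Var(other)
        return Var(self.value * other.value,
                   ((self, other.value), (other, self.value)))

    def backward(self):
        # topological order so each node's gradient is complete
        # before it gets pushed to its parents
        order, seen = [], set()
        def visit(node):
            if node not in seen:
                seen.add(node)
                for parent, _ in node._parents:
                    visit(parent)
                order.append(node)
        visit(self)
        self.grad = 1.0
        for node in reversed(order):
            for parent, local in node._parents:
                parent.grad += local * node.grad

x, y = Var(2.0), Var(3.0)
z = x * y + x            # dz/dx = y + 1, dz/dy = x
z.backward()
print(x.grad, y.grad)    # 4.0 2.0
```

In C you obviously don't get the overloaded operators or the garbage collection for free, which is exactly the part I wanted to explore.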

If there are any remarks/feedback, let me know :)

Here is the resulting repo: Autodiff


Comments


ch9ki7 t1_ja86mi0 wrote

Now you might want to provide a Python wrapper for this. It would be pretty interesting for smaller, simpler optimization cases like arbitrary curve fitting.


JanBitesTheDust OP t1_ja8avd4 wrote

I actually have never wrapped C code for Python hahaha. I wonder how difficult it is?


Jonas_SV t1_ja8tpxw wrote

Not too bad, check out Python's ctypes API.
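
The gist is roughly this (library and function names here are made up; use whatever your C code actually exports):

```python
import ctypes

# load the compiled shared library, e.g. built with:
#   gcc -shared -fPIC autodiff.c -o libautodiff.so
lib = ctypes.CDLL("./libautodiff.so")

# declare the C signature, say: double autodiff_grad(double x);
lib.autodiff_grad.argtypes = [ctypes.c_double]
lib.autodiff_grad.restype = ctypes.c_double

print(lib.autodiff_grad(2.0))
```

Arrays are a bit more work (you end up passing pointers), but for scalar functions it's just a few lines.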


CireNeikual t1_ja93iae wrote

I would actually recommend Cython over ctypes; it's nicer, especially when it comes to handling numpy arrays.


halffloat t1_ja8o7s0 wrote

Cool work! You might also enjoy Tapenade (tapenade.inria.fr:8080), which takes a C file and produces a new C source file containing the derivative program. It's useful when you want compiled derivatives. I use it a lot for optimization problems; never tried it for NNs, though.


JanBitesTheDust OP t1_ja8xmsf wrote

Wow, that is impressive. I skimmed through their paper and did not really understand how they implemented it. Nonetheless, it would be cool to try and implement it myself.
