
suflaj t1_j0zq31h wrote

What if I told you that even if you were using C/C++, you'd still need to be using library functions? The code ultimately doesn't run natively anyway; it calls into Fortran, Assembly, and CUDA libraries.

You cannot directly program in whatever CUDA compiles to, because it's proprietary and GPU-model-specific, so why bother? Researchers didn't choose Python because they like snakey-boys or enjoy British comedy; they chose it because it is adequate to do research in, unlike C or C++, which are horrible to work with and hideous to read and understand even when a pro writes them, let alone a researcher.

Ultimately, Python code is easier to maintain and work on, and there are far more Python developers than C/C++ developers, so of course companies will use it over whatever C++ API exists for DL libraries.

As for your Rust/Go question: although Go has some potential, it has no community to work with, and it is also harder to use than Python. There is almost no benefit to using Go over Python even if the decision were being made now, let alone as a migration, other than Go's nice concurrency model. And why would you use that when `from joblib import delayed, Parallel` does the trick? So far, the biggest problem Python has with concurrency is its lack of a good shared-memory API, which will probably be fixed in a year or so now that it is part of Python. But this missing API does not significantly hurt Python, because you'd do that kind of work via a C module anyway.
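A minimal sketch of the joblib pattern mentioned above (the worker function and inputs are made up for illustration):

```python
from joblib import Parallel, delayed

def square(x):
    # stand-in for any CPU-bound work
    return x * x

# fan the calls out across 4 worker processes
results = Parallel(n_jobs=4)(delayed(square)(i) for i in range(100))
```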

As for Rust, it will probably never become a serious contender for research and development, because it is even more hideous and complex than C/C++. It is also slower, so what's the point? Unless you want to double the average wages of people in DL and kill 90% of the jobs, since barely anyone can use Rust effectively.

15

vprokopev OP t1_j0zqzxm wrote

Again, in C++ I am not as constrained to use only PyTorch functions. I can use other libraries and native language features.

In Python, I basically must express any algorithm I have in my head in terms of vectorized PyTorch operations with broadcasting. That's not the case in C++. Am I wrong here?
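As an illustration of that style, a rough sketch of a two-loop computation rewritten as broadcasted PyTorch ops (the shapes and task are made up):

```python
import torch

a = torch.randn(100, 3)  # 100 points in 3-D
b = torch.randn(50, 3)   # 50 points in 3-D

# pairwise squared distances, written the "vectorized + broadcasting" way
# instead of two explicit Python loops
diff = a[:, None, :] - b[None, :, :]  # broadcasts to shape (100, 50, 3)
dist2 = (diff ** 2).sum(dim=-1)       # shape (100, 50)
```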

I am not talking about researchers, I am talking more about businesses. No problem with researchers using Python.

−6

suflaj t1_j0ztusm wrote

> Not the case in C++. Am I wrong here?

Probably. It seems you "have" to do these things because you want speed. But if you want speed, then you'll have to do them in C++ as well.

> I am not talking about researchers, I am talking more about businesses.

This applies to businesses more than anything. Your manager does not give a single fuck about the purity or performance of your code before it's deployed. Until then, the only thing that matters is that the product is developed before your competitors get the contracts, at as low a cost as possible.

And when code is deployed, it will often not even be in C++. A lot of the time you have to port it to C because there is no C++ compiler for the target platform, or you keep the model in ONNX format and then deploy it on some runtime to keep maintenance easy.
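For context, a minimal sketch of that ONNX route with PyTorch and ONNX Runtime (the tiny model and file name are placeholders):

```python
import numpy as np
import torch
import onnxruntime as ort

# placeholder model standing in for a real trained network
model = torch.nn.Linear(4, 2).eval()
dummy = torch.randn(1, 4)

# export to the framework-neutral ONNX format
torch.onnx.export(model, dummy, "model.onnx",
                  input_names=["x"], output_names=["y"])

# later, serve it from a runtime instead of the training framework
sess = ort.InferenceSession("model.onnx")
out = sess.run(None, {"x": np.random.randn(1, 4).astype(np.float32)})
```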

8

RaptorDotCpp t1_j0zttgc wrote

You'd still use vectorized functions in C++ though, just because they're faster for doing algebra.
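To make that concrete, a rough sketch in Python comparing an element-by-element loop with the equivalent vectorized call; the same principle applies to hand-rolled loops versus BLAS-backed calls in C++:

```python
import time
import torch

x = torch.randn(100_000)

# element-by-element Python loop
t0 = time.perf_counter()
total = 0.0
for v in x:
    total += v.item() ** 2
t_loop = time.perf_counter() - t0

# one vectorized call dispatching to optimized native kernels
t0 = time.perf_counter()
total_vec = torch.sum(x * x).item()
t_vec = time.perf_counter() - t0

print(f"loop: {t_loop:.3f}s  vectorized: {t_vec:.6f}s")
```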

6

[deleted] t1_j0zrakb wrote

[removed]

−12

suflaj t1_j0zthsd wrote

Looking at your post history, there are plenty of things I could make fun of. Dehumanize you even.

But instead of stooping to your level, all I will say is - I frequently program in C and (x86 & aarch64) Assembly, but I recognise that many of my R&D colleagues are not engineers, and that their strengths can be better utilised if they focus on the things they are good at.

2