CactusOnFire t1_j3dhxfk wrote
Reply to comment by brucebay in [Discussion] Is there any alternative of deep learning ? by sidney_lumet
I work in the financial world- and to elaborate on your comment:
Speaking strictly for myself, there are a few reasons I rarely use neural networks at my day job:
-Model explainability: Stakeholders often care about being able to explicitly label the "rules" that lead to a specific outcome. Neural networks can perform well, but it is harder to communicate why they produced a given result than it is with simpler models (a minimal sketch of this follows the list).
-Negligible performance gains: While I am working on multi-million row datasets, the number of features I am working with is often small. The performance I get from a tensorflow/pytorch model is nearly on par with what I get from an sklearn model, so deep learning is overkill for most of my tasks (see the second sketch after the list).
-Developer Time & Speed: It is much quicker and easier to make an effective model in sklearn than it is in tensorflow/pytorch. This is another reason Neural Networks are not my default solution.
There are some out-of-the-box solutions in tools like Sagemaker or Transformers, but finding and implementing one of them still takes longer than whipping up a random forest.
-Legacy processes: There's a mentality of "if it ain't broke, don't fix it". Even though I am considered a subject matter expert in data science, the finance people don't like me tweaking the way things work without lengthy consultations.
As a result, I am often asked to 'recreate' instead of 'innovate'. That means replacing a linear regression with another linear regression that uses slightly different hyperparameters.
-Maintainability: There are significantly more vanilla software engineers than data scientists/ML Engineers at my company. In the event I bugger off, it's going to be easier for the next person to maintain my code if the models are simple/not Neural Networks.
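To make the explainability point concrete, here is a minimal sketch using sklearn's DecisionTreeClassifier on synthetic data (every name here is illustrative, not from any real pipeline): a shallow tree can be dumped as plain if/else rules a stakeholder can read directly, and the whole thing is a handful of lines.

```python
# Minimal sketch: an interpretable model whose "rules" can be printed.
# Synthetic data; feature names are illustrative only.
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=10_000, n_features=5, random_state=0)

# A shallow tree keeps the rule set short enough to walk through with a stakeholder.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# export_text prints the learned splits as nested if/else rules.
print(export_text(tree, feature_names=[f"feature_{i}" for i in range(5)]))
```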
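And a rough sketch of the "negligible gains" point, using sklearn's MLPClassifier as a stand-in for a small tensorflow/pytorch network (synthetic data again, so the exact scores are illustrative, but on many-row, low-feature tabular problems the two often land close together):

```python
# Rough sketch: on a many-row, low-feature tabular problem, a plain
# ensemble and a small neural net often score within a point or two
# of each other.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=100_000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for model in (RandomForestClassifier(random_state=0),
              MLPClassifier(hidden_layer_sizes=(64, 64), random_state=0)):
    score = model.fit(X_train, y_train).score(X_test, y_test)
    print(f"{type(model).__name__}: test accuracy {score:.3f}")
```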
CactusOnFire t1_j9s4b0f wrote
Reply to comment by shoegraze in [D] To the ML researchers and practitioners here, do you worry about AI safety/alignment of the type Eliezer Yudkowsky describes? by SchmidhuberDidIt
>We can say goodbye soon to a usable internet because power seeking people with startup founder envy are going to just keep ramping these things up.
This might seem like paranoia on my part, but I am worried about AI being leveraged to "drive engagement" by stalking and harassing people.
"But why would anyone consider this a salable business model?" It's no secret that the entertainment on a lot of different websites are fueled by drama. If someone were so inclined, AI and user metrics could be harvested specifically to find creative and new ways to sow social distrust, and in turn drive "drama" related content.
I.e., building recommendation engines specifically to surface things predicted to make people angry, so that they engage in great detail and produce a larger corpus of words that can be mined for advertising analytics.