
L43 t1_j45wbf1 wrote

Reply to comment by chimp73 in [D] Bitter lesson 2.0? by Tea_Pearce

Yeah I have a pretty dystopian outlook on the future because of this.

10

thedabking123 t1_j46pulo wrote

The one thing that could blow all this up is requirements for explainability, which could push the industry toward low-cost (but maybe lower-performance) methods like neurosymbolic computing, whose predictions are much more understandable and explainable.

I can see something happening with self-driving cars (or LegalTech, or HealthTech) that results in a terrible prediction with real consequences. That would drive public backlash against unexplainable models, and maybe laws against them too.

Lastly, this would make deep learning models and LLMs less attractive if they fall under new regulatory regimes.

5

DisWastingMyTime t1_j47ans8 wrote

In vision/robotics this is already the case; low-hardware/low-cost requirements are an incredible seller for the automotive industry, so large, disgusting models are out.

But we still use deep learning. If anything, it's pretty surprising how much is possible with "shallow" models in specialized domains, but that's still very far from explainable models.

7

fullouterjoin t1_j4vbawe wrote

> requirements for explainability

We have to start pushing for this legislation now. If you leave it up to the market, Equifax will just make a magic Credit Score model that will be like huffing tea leaves.

2