
currentscurrents OP t1_j2g9mvy wrote

Thanks, that's the question I'm trying to ask! I know explainability is a bit of a dead-end field right now, so it's a hard problem.

An approximate or incomprehensible algorithm could still be useful if it's faster or uses less memory. But I think to accomplish that you'd need to convert the network's computation into higher-level ideas; otherwise you're just emulating the network.

Luckily, converting things into higher-level ideas is exactly what neural networks are good at, so it doesn't seem fundamentally impossible.

3

Dylan_TMB t1_j2gc9va wrote

I actually think you are looking for this:

https://arxiv.org/abs/2210.05189

It's a proof that any neural network can be represented as a decision tree. Navigating a decision tree is an algorithm, so this would be a representation of the "algorithm".
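To see the construction on a toy case, here's a rough sketch (all weights made up, not taken from the paper): for a one-hidden-layer ReLU net, each hidden unit's sign test acts as a tree split, and every leaf computes a fixed affine function.

```python
import numpy as np

# Made-up weights for a 2-input, 2-hidden-unit, 1-output ReLU net.
W1 = np.array([[1.0, -1.0],
               [0.5,  2.0]])
b1 = np.array([0.0, -1.0])
W2 = np.array([[1.0, -2.0]])
b2 = np.array([0.5])

def net(x):
    # Ordinary forward pass.
    return W2 @ np.maximum(W1 @ x + b1, 0.0) + b2

def tree(x):
    # "Navigate the tree": one binary split per hidden unit's sign.
    leaf = (W1 @ x + b1 > 0).astype(float)
    # Inside that leaf, the network is exactly an affine function.
    W_eff = W2 @ (leaf[:, None] * W1)
    b_eff = W2 @ (leaf * b1) + b2
    return W_eff @ x + b_eff

x = np.array([0.3, -0.7])
print(net(x), tree(x))  # same output either way
```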

So a question to ask would be whether it's the minimal decision tree.

2

currentscurrents OP t1_j2gctk4 wrote

Interesting!

This feels like it falls under emulating the neural network, since you're doing equivalent computations, just in a different form.

I wonder if you could train a neural network with the objective of creating the minimal decision tree.
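You could approximate that objective with plain distillation: fit a size-capped tree to the network's own predictions and treat the leaf budget as the "minimal" part. A rough sketch (`model_predict`, `X`, and the budget are stand-ins, not from the paper):

```python
from sklearn.tree import DecisionTreeClassifier

def distill_to_tree(model_predict, X, max_leaf_nodes=8):
    """Fit a small tree to a network's predictions on inputs X."""
    y_net = model_predict(X)            # labels produced by the network
    tree = DecisionTreeClassifier(max_leaf_nodes=max_leaf_nodes)
    tree.fit(X, y_net)                  # leaf cap ~ "minimality" pressure
    return tree

# Usage, assuming some trained `net` with a predict method:
# small_tree = distill_to_tree(net.predict, X)
# print(small_tree.get_n_leaves())
```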

1

Dylan_TMB t1_j2gcz2v wrote

Or just learn to minimize a tree that's given as input.
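A non-learned stand-in for that would be classic cost-complexity pruning, which scikit-learn exposes directly (sketch with toy data):

```python
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, random_state=0)

# Unconstrained tree vs. the same tree refit with a pruning penalty:
# larger ccp_alpha trades a little accuracy for fewer leaves.
big   = DecisionTreeClassifier(random_state=0).fit(X, y)
small = DecisionTreeClassifier(ccp_alpha=0.01, random_state=0).fit(X, y)
print(big.get_n_leaves(), "->", small.get_n_leaves())
```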

4