Submitted by olegranmo t3_10kw6ob in MachineLearning

Tsetlin machine interpretability vs deep learning attention.

Researchers at West China Hospital, Sichuan University, NORCE, and UiA have developed a Tsetlin machine-based architecture for identifying premature ventricular contractions in long-term ECG signals. The experiments show that the Tsetlin machine produces human-interpretable rules consistent with clinical standards and medical knowledge, while achieving accuracy comparable to deep CNN-based models.
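
To make "human-interpretable rules" concrete, here is a minimal sketch of how a learned Tsetlin machine clause classifies: each clause is a conjunction of Boolean literals, positive clauses vote for the class and negated clauses against it, and the sign of the summed votes gives the prediction. The ECG feature names and rules below are hypothetical illustrations, not the clauses learned in the paper.

```python
# Toy illustration of Tsetlin machine clause voting (not the paper's model).

def clause(literals):
    """Build a clause: logical AND over (feature, expected_value) literals."""
    return lambda x: all(x[f] == v for f, v in literals)

# Hypothetical human-readable rules for premature ventricular contraction (PVC)
positive_clauses = [
    clause([("wide_qrs", 1), ("absent_p_wave", 1)]),
    clause([("early_beat", 1), ("compensatory_pause", 1)]),
]
negative_clauses = [
    clause([("wide_qrs", 0), ("regular_rhythm", 1)]),
]

def predict(beat):
    # Sum votes: positive clauses add evidence, negative clauses subtract it.
    votes = sum(c(beat) for c in positive_clauses) \
          - sum(c(beat) for c in negative_clauses)
    return 1 if votes > 0 else 0

beat = {"wide_qrs": 1, "absent_p_wave": 1, "early_beat": 0,
        "compensatory_pause": 0, "regular_rhythm": 0}
print(predict(beat))  # 1 -> flagged as PVC under these toy rules
```

Because each clause is just a readable conjunction of conditions, a clinician can inspect the rule set directly instead of probing attention weights.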

Paper: https://arxiv.org/abs/2301.10181


Comments


DogeMD t1_j5uvxiw wrote

Ole, I haven’t heard about the Tsetlin machine before. My group is doing some ML research using CNN architectures to predict myocardial infarctions. We would love to explore the use of Tsetlin machines for showing ECG signs of infarction to users (doctors), since EU legislation mandates explainability. Have you tried anything like this before, and if so, do you think the Tsetlin machine would be a good candidate? We are based in Lund, southern Sweden.


olegranmo OP t1_j5v1xsq wrote

Hi DogeMD,

Thanks for the questions! I introduced the Tsetlin machine in 2018 as an interpretable and transparent alternative to deep learning, and it is getting increasingly popular, showing promising results in several domains. The paper reports the first approach to using Tsetlin machines for ECG classification, and it is fantastic that you see potential opportunities in myocardial infarction prediction. If you like, I can do an online tutorial on Tsetlin machines with you and your team to give you a head start?


deeceeo t1_j5wnrf1 wrote

How would you compare Tsetlin machines to other intrinsically interpretable models, like the sparse decision trees that Cynthia Rudin's group works on? Both in terms of capacity/expressiveness and interpretability.


olegranmo OP t1_j5xpnj2 wrote

Great question! Rudin et al.’s approach elegantly builds an optimal decision tree through search. The TM, by contrast, learns online, processing one example at a time, like a neural network. Also, like logistic regression, the TM adds up evidence from different features; however, it builds non-linear logical rules instead of operating on single features. The TM also supports convolution for image processing and time series. It can further learn from penalties and rewards, addressing the contextual bandit problem. Finally, TMs allow self-supervised learning by means of an auto-encoder. So, quite different from decision trees.
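
The contrast with logistic regression above can be sketched in a few lines: a linear model sums weighted evidence from single features, while a Tsetlin machine sums votes from non-linear conjunctive clauses. Everything below (features, weights, clauses) is an illustrative assumption, not learned from data.

```python
# Contrast of the two evidence-summation schemes described in the comment.

x = {"f1": 1, "f2": 0, "f3": 1}  # Boolean input features (made up)

# Logistic regression: weighted sum over single features.
weights = {"f1": 0.8, "f2": -0.5, "f3": 0.3}
linear_score = sum(w * x[f] for f, w in weights.items())

# Tsetlin machine: sum of +/-1 votes from conjunctive clauses,
# each a non-linear logical rule over several features.
clauses = [
    (+1, lambda x: x["f1"] == 1 and x["f2"] == 0),  # positive clause
    (+1, lambda x: x["f3"] == 1),                   # positive clause
    (-1, lambda x: x["f2"] == 1),                   # negated clause
]
tm_score = sum(sign for sign, c in clauses if c(x))

print(linear_score, tm_score)  # tm_score is 2 here
```

In both cases the final score is a sum of evidence, which is what keeps the model decomposable; the difference is that each TM summand is a readable rule rather than a single weighted feature.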
