Submitted by olegranmo t3_10holgp in MachineLearning


Fine-grained control of the number and size of clauses.

Paper: https://arxiv.org/abs/2301.08190

Code: https://github.com/cair/tmu

The Tsetlin machine (TM) is a logic-based machine learning approach with the crucial advantages of being transparent and hardware-friendly. While TMs match or surpass deep learning accuracy for a growing number of applications, large clause pools tend to produce clauses with many literals (long clauses), which makes them less interpretable. Longer clauses also increase the switching activity of the clause logic in hardware, consuming more power. This paper introduces a novel variant of TM learning, Clause Size Constrained TMs (CSC-TMs), in which one can set a soft constraint on the clause size. As soon as a clause includes more literals than the constraint allows, it starts expelling literals, so oversized clauses only appear transiently. To evaluate CSC-TM, we conduct classification, clustering, and regression experiments on tabular data, natural language text, images, and board games. Our results show that CSC-TM maintains accuracy with up to 80 times fewer literals. Indeed, accuracy increases with shorter clauses for TREC, IMDb, and BBC Sports. After the accuracy peaks, it drops gracefully as the clause size approaches a single literal. We finally analyze CSC-TM power consumption and derive new convergence properties.
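To make the soft constraint concrete, here is a minimal sketch of the expulsion mechanism in plain Python. The class and method names are illustrative assumptions, not the tmu API, and the full TM feedback loop around it is omitted:

```python
# Illustrative sketch of the CSC-TM soft clause-size constraint (not the tmu API).
# A clause includes a literal when that literal's Tsetlin automaton state is in
# the upper half of the state space. When more literals are included than the
# budget allows, included literals receive extra exclusion feedback.
import random

class ConstrainedClause:
    def __init__(self, num_literals, max_literals, num_states=100):
        self.max_literals = max_literals      # soft size budget
        self.num_states = num_states          # Tsetlin automaton depth
        # One automaton per literal; states above num_states // 2 mean "include".
        self.state = [num_states // 2] * num_literals

    def included(self):
        return [i for i, s in enumerate(self.state) if s > self.num_states // 2]

    def reward_include(self, i):
        self.state[i] = min(self.state[i] + 1, self.num_states)

    def penalize_include(self, i):
        self.state[i] = max(self.state[i] - 1, 1)

    def enforce_budget(self):
        # The soft constraint: once the clause exceeds its budget, randomly
        # chosen included literals are nudged toward exclusion.
        over = len(self.included()) - self.max_literals
        if over > 0:
            for i in random.sample(self.included(), over):
                self.penalize_include(i)
```

Because the budget acts only through extra exclusion feedback rather than a hard cap, a clause can exceed it temporarily before shrinking back, which matches the transiently oversized clauses described in the abstract.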

235

Comments


SilentHaawk t1_j5abu8l wrote

Looks interesting, and I might be able to use this for something. I have some data where I know the pattern is generated by some relatively simple rules plus some that isn't, but it is difficult to just see the pattern, and I haven't found a way to use machine learning to learn it. It could be that TMs could solve it.

Also, have you done any work on unsupervised learning?

34

olegranmo OP t1_j5ad5x8 wrote

The autoencoder can be used for self-supervised learning: https://arxiv.org/abs/2301.00709. Sounds like you are working on an interesting problem!

6

SilentHaawk t1_j5ag6gq wrote

Thank you for the response. It could be; I haven't been able to fully flesh out the idea, or exactly which problem I am trying to solve, yet. But I do see a potential application/service I could make for our customers if I solve it.

5

currentscurrents t1_j5b6jf2 wrote

Interesting! I think it's good to remember that the important part of neural networks is the optimization-based learning process: you can run optimization on things other than neural networks. Like how Plenoxels got a 100x speedup over NeRF by running optimization on a structure more naturally suited to 3D voxel data.
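As a toy illustration of that point (the setup here is invented for illustration, not taken from the Plenoxels paper): gradient descent applied to a plain lookup-table "grid" instead of a neural network.

```python
# Fit sin(2*pi*x) by directly optimizing the cells of a 1-D grid with
# gradient descent on the mean squared error. No network involved: the
# "model" is just 16 numbers, one per cell.
import numpy as np

rng = np.random.default_rng(0)
xs = rng.uniform(0, 1, 200)                  # training inputs
ys = np.sin(2 * np.pi * xs)                  # targets

grid = np.zeros(16)                          # directly optimized parameters
lr = 0.5

idx = np.minimum((xs * len(grid)).astype(int), len(grid) - 1)
for step in range(500):
    err = grid[idx] - ys                     # nearest-cell prediction error
    grad = np.zeros_like(grid)
    np.add.at(grad, idx, 2 * err / len(xs))  # d(MSE)/d(cell value)
    grid -= lr * grad

print(f"final MSE: {np.mean((grid[idx] - ys) ** 2):.4f}")
```

Each cell converges to the mean of the targets it covers, which is the direct analogue of fitting voxel values without a network in between.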

I do wonder how well TMs scale to less toy tasks, though. MNIST is pretty easy in 2023, and I think you could solve the BBC Sports dataset just by looking for keywords.
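For what it's worth, here is a sketch of that keyword baseline; the keyword lists below are guesses, not tuned on the actual BBC Sports corpus:

```python
# Naive keyword-counting classifier for the five BBC Sports categories.
# Predicts the class whose keywords occur most often in the text.
KEYWORDS = {
    "athletics": ["sprint", "marathon", "iaaf", "metres"],
    "cricket":   ["wicket", "innings", "batsman", "bowler"],
    "football":  ["goal", "midfielder", "striker", "premiership"],
    "rugby":     ["scrum", "lineout", "fly-half", "six nations"],
    "tennis":    ["wimbledon", "serve", "baseline", "grand slam"],
}

def classify(text: str) -> str:
    text = text.lower()
    scores = {label: sum(text.count(w) for w in words)
              for label, words in KEYWORDS.items()}
    return max(scores, key=scores.get)

print(classify("He bowled a maiden over before taking his fifth wicket."))
# -> cricket
```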

23

knestleknox t1_j5d0852 wrote

Oh wow, this looks super interesting. I had no idea what Tsetlin machines were until today. It's actually something I've basically tried to emulate with standard ML approaches.

I have an unsolved mathematics problem that I've been working on for almost a decade, since my professor showed it to me in undergrad. It's a very specific problem that maybe 10-20 combinatorialists are working on or aware of, and it's still unsolved to this day. One of the biggest parts of the problem is finding a bijection between two infinite classes of integer partitions. Finding a rules-based bijection would prove a large part of the overall problem.

My idea was to model the bijection as a supervised learning problem and feed it into various ML models. I've tried standard feed-forward networks, autoencoders, CNNs, and many more, but it never worked because of the rules-based nature of the problem. I suspect the rules that govern the bijection are too complicated to be captured by the approximation methods in standard models. But this looks very promising, or at least something to play around with. I'm going to try it out this weekend. Thanks!
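If it helps anyone attempting something similar: one possible way to feed partitions to a clause-based learner like a TM (the encoding below is an assumption, not something from the thread or the paper) is to booleanize each partition, e.g. by thermometer-coding the multiplicity of each part:

```python
# Map an integer partition to a flat 0/1 feature vector by thermometer-coding
# the multiplicity of each part size, capped at max_part and max_mult.
from collections import Counter

def encode_partition(partition, max_part=10, max_mult=5):
    """Encode a partition (list of parts) as max_part * max_mult bits."""
    mult = Counter(partition)
    bits = []
    for part in range(1, max_part + 1):
        m = min(mult.get(part, 0), max_mult)
        # Thermometer code: the first m of max_mult slots are set.
        bits.extend([1] * m + [0] * (max_mult - m))
    return bits

print(encode_partition([4, 2, 2, 1]))  # a partition of 9 -> 50 bits
```

A pair of such vectors (source partition, candidate image) could then serve as one supervised training example.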

8

Cherubin0 t1_j5e3xrr wrote

So, time to dump deep learning for TMs?

−1