Submitted by tekktokk t3_11w4kqd in MachineLearning

Came across this concept, Meta-Interpretive Learning (MIL), developed by Muggleton, Patsantzis, et al.

From what I understand, this is a relatively new approach to ML? Has anyone heard of it? I was hoping to get a general feel for how people in the industry view the prospects of this approach. If you're curious, here's an implementation of MIL.


Comments


UnusualClimberBear t1_jd2myu1 wrote

Sounds like a rebranding of Inductive Logic Programming. It does not scale, while all recent advances are about scaling simple systems. Consider that for a vanilla transformer, the bottleneck is often the attention itself, because it is O(N^2) in sequence length, and people are switching to linear attention.
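
To make the scaling point concrete, here is a minimal NumPy sketch (mine, not from the thread). Vanilla attention materializes an N x N score matrix, while kernelized "linear attention" in the style of Katharopoulos et al. reassociates the matrix product so the cost is linear in N. The feature map `phi` below is an illustrative stand-in, not a claim about any particular implementation.

```python
import numpy as np

def vanilla_attention(Q, K, V):
    # The score matrix is N x N: memory and compute grow quadratically
    # with sequence length N -- the bottleneck mentioned above.
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                    # (N, N)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V                               # (N, d_v)

def linear_attention(Q, K, V, phi=lambda x: np.maximum(x, 0) + 1e-6):
    # Kernelized attention: apply a positive feature map phi, then
    # reassociate (Qp Kp^T) V as Qp (Kp^T V) so the N x N matrix is
    # never formed; cost is O(N * d^2) instead of O(N^2 * d).
    Qp, Kp = phi(Q), phi(K)
    KV = Kp.T @ V                                    # (d, d_v)
    Z = Qp @ Kp.sum(axis=0)                          # (N,) normalizer
    return (Qp @ KV) / Z[:, None]

N, d = 512, 64
rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((N, d)) for _ in range(3))
out_quadratic = vanilla_attention(Q, K, V)   # builds a 512 x 512 matrix
out_linear = linear_attention(Q, K, V)       # never builds it
```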


UnusualClimberBear t1_jd3gqap wrote

Usually, the problem is the combinatorial number of rules that could apply. Here they seem to be able to find a subset of possible rules with polynomial complexity, but since Table 7 of the second paper contains tiny (w.r.t. ML/RL data) problem instances, I would answer yes to your questions. ILP comes with strong guarantees, while ML comes with a statistical risk. These guarantees aren't free.
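
To illustrate how MIL tames that combinatorial rule space, here is a toy Python sketch of the metarule idea (the predicates, examples, and single "chain" metarule are invented for illustration; real systems like Metagol and Louise are Prolog meta-interpreters). The learner only searches over predicate substitutions into fixed metarules, so the candidate set stays polynomial rather than ranging over arbitrary clauses.

```python
from itertools import product

# Toy MIL-flavored search: candidate clauses must instantiate the
# fixed "chain" metarule
#     P(X,Y) :- Q(X,Z), R(Z,Y)
# so the search space is predicate symbols, not arbitrary clauses.

background = {
    "parent": {("alice", "bob"), ("bob", "carol"), ("bob", "dan")},
    "married": {("alice", "ed")},
}

positive = {("alice", "carol"), ("alice", "dan")}   # grandparent examples
negative = {("alice", "bob"), ("bob", "alice")}

def chain_extension(q, r):
    """All (X, Y) provable by P(X,Y) :- q(X,Z), r(Z,Y)."""
    return {(x, y)
            for (x, z1) in background[q]
            for (z2, y) in background[r]
            if z1 == z2}

# Only |preds|^2 instantiations of this one metarule to check --
# polynomial, versus the combinatorial space of unrestricted rules.
preds = list(background)
for q, r in product(preds, repeat=2):
    ext = chain_extension(q, r)
    if positive <= ext and not (negative & ext):
        print(f"grandparent(X,Y) :- {q}(X,Z), {r}(Z,Y).")
```

Running this prints the single consistent hypothesis, `grandparent(X,Y) :- parent(X,Z), parent(Z,Y).`, which is the "strong guarantee" flavor of the approach: every returned rule provably covers all positives and no negatives on the given data.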


tekktokk OP t1_jd3l4vl wrote

Alright, thank you. Then I guess one last question, if you happen to know: what is the current state of ILP in the ML/AI industry? Is it pretty much dead? Is it merely an interesting theory that hasn't found much application in the market? Does anyone see a bright future for it?
