Ulfgardleo t1_ir72pix wrote

Why is this a nature paper?

  1. Strassen is already known not to be the fastest algorithm in terms of floating-point multiplications: https://en.wikipedia.org/wiki/Computational_complexity_of_matrix_multiplication

  2. Already, Strassen is barely used in practice because its implementation is inefficient except for the largest of matrices. Indeed, Strassen is typically implemented with a standard MatMul as the base case for the smallest blocks and only applied to very large matrices (a sketch of this cutoff pattern follows the list).

  3. Measuring implementation complexity in floating-point multiplications is a fairly meaningless metric if you pay for each saved multiplication with a multiple of floating-point additions (see 2.).
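
To illustrate point 2, here is a minimal sketch (Python/NumPy, with a hypothetical cutoff value) of how Strassen is usually deployed: recurse only while the matrices are large, and fall back to the standard BLAS-backed matmul below a crossover size.

```python
import numpy as np

CUTOFF = 512  # hypothetical crossover size; in practice it is tuned per machine

def strassen(A, B):
    """Strassen multiplication, assuming square matrices with power-of-two size."""
    n = A.shape[0]
    if n <= CUTOFF:
        return A @ B  # standard matmul as the base case

    h = n // 2
    A11, A12, A21, A22 = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
    B11, B12, B21, B22 = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]

    # 7 recursive multiplications instead of 8...
    M1 = strassen(A11 + A22, B11 + B22)
    M2 = strassen(A21 + A22, B11)
    M3 = strassen(A11, B12 - B22)
    M4 = strassen(A22, B21 - B11)
    M5 = strassen(A11 + A12, B22)
    M6 = strassen(A21 - A11, B11 + B12)
    M7 = strassen(A12 - A22, B21 + B22)

    # ...at the price of many extra additions/subtractions.
    C = np.empty_like(A)
    C[:h, :h] = M1 + M4 - M5 + M7
    C[:h, h:] = M3 + M5
    C[h:, :h] = M2 + M4
    C[h:, h:] = M1 - M2 + M3 + M6
    return C
```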

54

neanderthal_math t1_ir7k9jl wrote

I’m a little confused by the purpose of this paper too. If the point is to show that an RL algorithm found better bounds than Strassen, then that’s cool. But are they claiming that this is something a compiler would use in practice? How does this work with fixed SIMD sizes?

18

Lairv t1_ir7p0xt wrote

In the article they try 2 types of reward: minimizing the rank of the tensor decomposition (i.e. minimizing the total number of multiplications), and minimizing the runtime of the algorithm on given hardware (they tried an Nvidia V100 and a TPUv2).

The latter could actually be useful, since their graphs show that the discovered algorithms reach better performance than cuBLAS (Fig. 5)
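
For readers unfamiliar with the rank framing: a rank-r decomposition of the matrix-multiplication tensor is three factor matrices (U, V, W) that turn matmul into r scalar products plus linear combinations, and the reward pushes r down. Below is a minimal sketch (Python/NumPy) that checks this for Strassen's rank-7 factors for 2×2 matrices; it is an illustration of the concept, not AlphaTensor's actual output.

```python
import numpy as np

# Strassen's rank-7 decomposition of the 2x2 matmul tensor:
# m = (U @ vec(A)) * (V @ vec(B)) elementwise, then vec(C) = W @ m.
U = np.array([[1, 0, 0, 1],
              [0, 0, 1, 1],
              [1, 0, 0, 0],
              [0, 0, 0, 1],
              [1, 1, 0, 0],
              [-1, 0, 1, 0],
              [0, 1, 0, -1]])
V = np.array([[1, 0, 0, 1],
              [1, 0, 0, 0],
              [0, 1, 0, -1],
              [-1, 0, 1, 0],
              [0, 0, 0, 1],
              [1, 1, 0, 0],
              [0, 0, 1, 1]])
W = np.array([[1, 0, 0, 1, -1, 0, 1],
              [0, 0, 1, 0, 1, 0, 0],
              [0, 1, 0, 1, 0, 0, 0],
              [1, -1, 1, 0, 0, 1, 0]])

rng = np.random.default_rng(0)
A, B = rng.standard_normal((2, 2)), rng.standard_normal((2, 2))

m = (U @ A.ravel()) * (V @ B.ravel())   # 7 multiplications (the rank)
C = (W @ m).reshape(2, 2)

assert np.allclose(C, A @ B)            # matches the ordinary product
```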

37

AssumptionIcy5363 t1_irbuwfh wrote

What I would do is train the model for many small and many useful combinations of matrix sizes.

Then I would use the normal algorithm for every other combination of sizes.
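
A minimal sketch of that dispatch idea (Python/NumPy; the kernel table and its entries are hypothetical placeholders):

```python
import numpy as np

# Hypothetical table of learned kernels, keyed by (m, k, n) problem shape.
# Each entry would implement a discovered low-multiplication algorithm.
SPECIALIZED_KERNELS = {
    # (4, 4, 4): learned_4x4x4_kernel,  # placeholder, not a real function
}

def matmul(A, B):
    """Use a specialized kernel when one exists for this shape, else fall back."""
    shape = (A.shape[0], A.shape[1], B.shape[1])
    kernel = SPECIALIZED_KERNELS.get(shape)
    if kernel is not None:
        return kernel(A, B)
    return A @ B  # standard algorithm for every other size combination
```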

1

golfstreamer t1_ir7cdtu wrote

> Measuring implementation complexity in floating-point multiplications is a fairly meaningless metric if you pay for each saved multiplication with a multiple of floating-point additions (see 2.).

I was confused about this for a bit, but I think it makes sense. When you use the algorithm for block matrix multiplication, the multiplication operations far outweigh the addition operations, because each block multiplication is itself a full matrix product while the additions cost only O(n^2). If applied recursively, the new method should give lower asymptotic complexity than Strassen's (e.g. O(n^2.78) vs O(n^2.81)).
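
A quick back-of-the-envelope check of those exponents (assuming the 47-multiplication result for 4×4 matrices mod 2 reported in the paper):

```python
from math import log

# Recursing on 2x2 blocks with 7 multiplications (Strassen):
print(log(7, 2))    # ~2.807
# Recursing on 4x4 blocks with 49 multiplications (Strassen applied twice):
print(log(49, 4))   # ~2.807 (same exponent, as expected)
# Recursing on 4x4 blocks with 47 multiplications (AlphaTensor, mod 2):
print(log(47, 4))   # ~2.777
```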

9

master3243 t1_ir94pxk wrote

I don't think you're right, unless DeepMind is lying in the abstract of a Nature paper, which I highly doubt.

> Particularly relevant is the case of 4 × 4 matrices in a finite field, where AlphaTensor’s algorithm improves on Strassen’s two-level algorithm for the first time, to our knowledge, since its discovery 50 years ago

4

Ulfgardleo t1_ir95y3t wrote

Yeah, they are not right. SOTA is the laser method.

They even missed the huge improvement from 1981:

https://ieeexplore.ieee.org/document/4568320

It is all covered by the wiki link above, by the way.

3

Ulfgardleo t1_ir997hv wrote

The worst thing, however, is that they do not even cite the practically relevant memory-efficient implementation of Strassen (https://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.39.6887). One can argue that all matmul algorithms with better asymptotic complexity than Strassen are irrelevant due to their constants, but not even comparing against the best memory-efficient implementation is odd, especially since they don't show an improvement in asymptotic complexity.

5

victotronics t1_ir9fc44 wrote

Yes, yes, and yes. Can half a dozen authors really be so ignorant that they don't know about all the work that's been done after Strassen? And how did this pass review?

To add to point 2: the numerical stability of Strassen is doubtful too.

4

emotionalfool123 t1_irdvwr8 wrote

Can you explain what numerical stability means in this case?

1

victotronics t1_ire6ha5 wrote

Behavior under roundoff. Floating-point numbers are not actually mathematical real numbers, so all algorithms are inexact. You want them to be not too inexact: small perturbations should give only small errors. The fact that Strassen (and other fast algorithms) subtract intermediate quantities means you can get numerical cancellation.
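
A tiny illustration of the cancellation issue (plain Python, not specific to Strassen): subtracting two nearly equal floating-point numbers wipes out most of the significant digits.

```python
a = 1.0 + 1e-12
b = 1.0

# The exact difference is 1e-12, but only a few significant digits survive
# because the leading digits of a and b cancel.
diff = a - b
print(diff)                       # ~1.0000889005823411e-12
print(abs(diff - 1e-12) / 1e-12)  # relative error around 1e-4
```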

2

Thorusss t1_ir9or2m wrote

The algorithm is plain faster on the most advanced hardware. For such an already heavily optimized area, that is very impressive.

>Leveraging this diversity, we adapted AlphaTensor to specifically find algorithms that are fast on a given hardware, such as Nvidia V100 GPU, and Google TPU v2. These algorithms multiply large matrices 10-20% faster than the commonly used algorithms on the same hardware, which showcases AlphaTensor’s flexibility in optimising arbitrary objectives.

https://www.deepmind.com/blog/discovering-novel-algorithms-with-alphatensor

2

Capable-Speed5915 t1_irb8idy wrote

As a way to demonstrate using AI for fundamental research? I mean, they did find algorithms with fewer multiplications.

It may be harder to apply or benefit from them in practice (numerical stability), but it is still a good scientific achievement and pushes the field forward towards exploring where else such techniques can be applied for algorithm discovery.

They also speculate about adding numerical stability as a metric, so maybe that's future work at some point.

Either way, the paper does do something new, and it opens up new research avenues to explore.

1