Submitted by Balance- t3_11ksa12 in MachineLearning

tinygrad, a deep learning framework that aims to sit in complexity between PyTorch and karpathy/micrograd, just tagged its 0.5.0 release.
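
To give a feel for where it sits on that spectrum: the API reads like PyTorch, while the autodiff core stays micrograd-small. A minimal sketch, based on the example in tinygrad's README:

```python
from tinygrad.tensor import Tensor

x = Tensor.eye(3, requires_grad=True)
y = Tensor([[2.0, 0, -2.0]], requires_grad=True)
z = y.matmul(x).sum()
z.backward()

print(x.grad.numpy())  # dz/dx
print(y.grad.numpy())  # dz/dy
```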

Release notes

An upsetting 2223 lines of code, but so much great stuff!

  • 7 backends: CLANG, CPU, CUDA, GPU, LLVM, METAL, and TORCH
  • A TinyJit for speed (decorate your GPU function today; see the first sketch after this list)
  • Support for a lot of ONNX, including all the models in the backend tests (second sketch below)
  • No more MLOP convs, all HLOP (autodiff for convs)
  • Improvements to shapetracker and symbolic engine
  • 15% faster at running the openpilot model
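
A minimal sketch of the TinyJit decorator, assuming the `tinygrad.jit` import path from the 0.5.0-era tree (the JIT captures the kernels launched during the first couple of calls and replays them afterwards; it expects a GPU-class backend and fixed input shapes):

```python
from tinygrad.tensor import Tensor
from tinygrad.jit import TinyJit  # import path assumed from the 0.5.0-era source

@TinyJit
def forward(x: Tensor) -> Tensor:
  # a jitted function should return realized tensors so the kernel graph gets captured
  return x.relu().sum().realize()

# after the warm-up calls, the cached kernels are replayed directly
for _ in range(5):
    print(forward(Tensor.randn(64, 64)).numpy())
```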
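
For the ONNX path, a hedged sketch: `get_run_onnx` and its call signature are assumptions based on the repo's extra/onnx.py (which lives in the source tree, not the pip package), and the model file and input name here are hypothetical:

```python
import onnx
import numpy as np
from extra.onnx import get_run_onnx  # assumed helper from the tinygrad repo

model = onnx.load("model.onnx")  # hypothetical ONNX file
run_onnx = get_run_onnx(model)   # builds a callable over tinygrad Tensors
# inputs are passed by name; outputs are assumed to come back as a dict of name -> Tensor
outputs = run_onnx({"input": np.random.randn(1, 3, 224, 224).astype(np.float32)})
```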
2

Comments

etesian_dusk t1_jb8yzec wrote

Why would I start using this today?

2

nucLeaRStarcraft t1_jb9289f wrote

They claim it's fast on Apple M1 and some embedded ARM devices, but I have no idea how easy it is to use out of the box.

2

etesian_dusk t1_jb94rak wrote

OK, that doesn't sound like much. I don't understand why I should abandon standard, proven tools for this.

On top of that, the whole "George Hotz Twitter internship" thing was just embarrassing. I trust him to jailbreak PlayStations, but that's the end of it.

7

chris_myzel t1_jb9bbqz wrote

PyTorch installations typically run into the gigabytes, while tinygrad keeps its core at <1000 lines.

1

etesian_dusk t1_jbioscf wrote

Comparing package size to "core source code" size is kind of misleading. The PyTorch codebase by itself isn't 1 GB.

Also, in most use cases, I'd rather have PyTorch's versatility than be able to brag about <1000 lines.

1