G0ldenSp00n
Been working through Andrej Karpathy’s ML lectures in Rust. The backprop one went pretty well, but I had to learn how to do type indirection and interior mutability because of the backprop graph structure. I’m now on the makemore lecture, but having a lot of trouble building the bigram model in Burn (the Rust-native ML framework), because directly incrementing the tensor values is insanely slow. His example that takes about 10 seconds to run in Python takes two and a half minutes in Rust with Burn, so I’m trying to figure out how to optimize or speed that up.
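A rough sketch of one way around the per-element tensor updates (not tested; the 27-symbol '.' + a–z vocabulary matches the lecture, and the final tensor construction is hand-waved since I’m not sure of the exact Burn call): accumulate the counts in a plain Rust array first, then build the tensor once at the end.

```rust
// Sketch: count bigrams in a plain array instead of mutating a tensor
// element by element inside the loop.
fn bigram_counts(words: &[String]) -> [[u32; 27]; 27] {
    // Index 0 is the '.' start/end token, 1..=26 map to 'a'..='z'.
    let idx = |c: char| -> usize {
        if c == '.' { 0 } else { (c as usize) - ('a' as usize) + 1 }
    };

    let mut counts = [[0u32; 27]; 27];
    for word in words {
        // Wrap each word in the '.' start/end token, as in the lecture.
        let chars: Vec<char> = std::iter::once('.')
            .chain(word.chars())
            .chain(std::iter::once('.'))
            .collect();
        for pair in chars.windows(2) {
            counts[idx(pair[0])][idx(pair[1])] += 1;
        }
    }
    counts
}
```

From there the tensor only needs to be constructed once from the flattened counts (something like Tensor::from_data, depending on the Burn version) rather than written to thousands of times.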
Are all the commenters paid by the video game industry? Seriously, what is going on there?
Abrash is a snake
Wow, didn’t expect all the winter drop features already.
I’m currently compiling Arch for ARM manually, using some third-party packages, to run in distrobox on Asahi Linux (the Fedora-based distro for M1 Macs). Hopefully this means I won’t need the manual compile step anymore, and that we’ll start seeing broader ARM support across Linux apps.