JFranek
Automattic… that’s why there are two t’s!? Jesus Christ.
Just something I found in the wild (r/machine learning): Please point me in the right direction for further exploring my line of thinking in AI alignment
I’m not a researcher or working in AI or anything, but …
you don’t say
Then there is John Michael Greer…
Wow, that’s a name I haven’t heard in a long time.
A regular contributor at UnHerd…
I did not know that, and I hate that it doesn’t surprise me. I tended to dismiss his peak oil doomerism as wishing for some imagined “harmony with nature”. This doesn’t help with that bias.
Yeah, neural network training is notoriously easy to reproduce /s.
Just a few things can affect results: source data, data labels, network structure, training parameters, the version of the training script, the versions of libraries, the random number generator seed, the hardware, the operating system.
Also, deployment is another can of worms.
Also, even if the script, data, and labels are open source, there’s no guarantee you’ll have useful documentation for any of them.
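To be fair, you can pin at least the RNG part of that list. A minimal sketch (the `seed_everything` helper name is made up, and the numpy/torch lines are commented-out assumptions for when those libraries are around), covering Python’s own randomness sources:

```python
import os
import random

def seed_everything(seed: int = 42) -> None:
    """Pin the randomness sources we control from Python."""
    random.seed(seed)                         # Python's built-in RNG
    os.environ["PYTHONHASHSEED"] = str(seed)  # hash randomization (only fully
                                              # effective if set before startup)
    # If numpy/torch are installed (assumption, not shown here):
    # import numpy as np; np.random.seed(seed)
    # import torch; torch.manual_seed(seed)
    # torch.use_deterministic_algorithms(True)  # fail loudly on nondeterministic ops

seed_everything(0)
a = [random.random() for _ in range(3)]
seed_everything(0)
b = [random.random() for _ in range(3)]
print(a == b)  # same seed, same draws
```

Of course this does nothing for the rest of the list: different hardware, library versions, or nondeterministic GPU kernels can still change results with identical seeds.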