AI researchers are still stumbling in the dark: machine learning remains a black box. This is the interpretability problem — we cannot see how a given AI arrived at its conclusions.
Ali Rahimi, an AI researcher in California, finds company in François Chollet, a computer scientist at Google in Mountain View. Both worry about AI's reproducibility problem: thanks to inconsistencies, AI innovators still falter in learning from one another and in breaking down what is going on under the hood.
For instance, consider 'stochastic gradient descent': after thousands of academic papers and countless ways of applying it, we still tiptoe forward by trial and error. Yes, it is tempting to embrace deep learning and everything adjacent to it, but watch for wasted effort and suboptimal performance. Instead, try testing algorithms across varied scenarios, running ablation studies, or, as computer scientist Ben Recht suggests, shrinking the problem down to a 'toy problem'.
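To make the trial-and-error point concrete, here is a minimal sketch of stochastic gradient descent on exactly the kind of toy problem Recht advocates: recovering the slope of a noiseless line y = 3x. Everything here — the data, the learning rate, the parameter — is an illustrative assumption, not something from the researchers quoted above; note how the learning rate `lr` is itself a hand-tuned guess, which is the article's complaint in miniature.

```python
import random

# Hypothetical toy problem: learn w such that y = w * x fits data drawn
# from y = 3x, using plain stochastic gradient descent on squared error.
random.seed(0)
data = [(x, 3.0 * x) for x in (random.uniform(-1, 1) for _ in range(200))]

w = 0.0    # single parameter to learn
lr = 0.1   # learning rate: picked by trial and error, as the text laments

for epoch in range(20):
    random.shuffle(data)          # "stochastic": visit samples in random order
    for x, y in data:
        grad = 2 * (w * x - y) * x   # gradient of the loss (w*x - y)^2 w.r.t. w
        w -= lr * grad               # step against the gradient

print(round(w, 2))  # converges toward 3.0
```

Shrinking the problem this far makes every design choice inspectable: change `lr` to 2.0 and the iteration diverges, which is far easier to diagnose here than inside a deep network.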
Ask yourself whether you are petting a Schrödinger's cat. It may stink.