Sparse Neural Networks Improve AI Model Interpretability Research

Most neural networks today are dense and highly entangled, making it difficult to understand what each part is doing. In our new research, we train "sparse" models—with fewer, simpler connections between neurons—to see whether their computations become easier to understand.
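To make the idea of "fewer, simpler connections" concrete: one common way to impose sparsity on a weight matrix (not necessarily the method used in this research) is magnitude pruning, where the smallest-magnitude weights are zeroed out. The sketch below, with the hypothetical helper `magnitude_prune`, shows this on a random matrix using NumPy:

```python
import numpy as np

def magnitude_prune(weights, sparsity=0.9):
    """Zero out the smallest-magnitude entries.

    Keeps roughly a (1 - sparsity) fraction of the weights;
    everything at or below the magnitude threshold is set to zero.
    """
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)  # number of entries to zero out
    if k == 0:
        return weights.copy()
    # k-th smallest magnitude serves as the pruning threshold
    threshold = np.partition(flat, k - 1)[k - 1]
    mask = np.abs(weights) > threshold
    return weights * mask

# Example: prune a dense 16x16 weight matrix to ~90% sparsity
rng = np.random.default_rng(0)
w = rng.normal(size=(16, 16))
w_sparse = magnitude_prune(w, sparsity=0.9)
frac_zero = float(np.mean(w_sparse == 0))
```

After pruning, only the largest-magnitude connections survive, so each remaining weight carries more of the layer's computation — which is one intuition for why sparse circuits may be easier to inspect.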