The phones we hold can translate from one language to another. Yet the engineers who programmed them typically do not know how this translation happens. This is because machine learning specifies a target macroscopic behavior, together with a set of microscopic learning rules that enable networks to achieve it.
In a sense, this is like saying that we (roughly) understand the laws of evolution, but not the biology of any specific organism.
I focus on recurrent neural networks, and develop tools to “open the black box” and understand the dynamics these networks arrive at through learning.
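To make this concrete, below is a minimal sketch of the kind of analysis behind “Opening the black box”: treat the network’s speed of motion as an objective, search for points in state space where it (nearly) vanishes, and linearize the dynamics there. The vanilla tanh network and the random weight matrix `J` are illustrative assumptions, not the setup of any specific paper listed here.

```python
import numpy as np
from scipy.optimize import minimize

# Minimal sketch: fixed-point analysis of a vanilla continuous-time RNN,
#   dx/dt = -x + J @ tanh(x).
# Fixed points are minima of the "speed" q(x) = 0.5 * ||dx/dt||^2.

rng = np.random.default_rng(0)
N = 50                                        # network size (illustrative)
J = rng.normal(0, 1.5 / np.sqrt(N), (N, N))   # random recurrent weights (assumed)

def speed(x):
    """Objective q(x); fixed points of the dynamics satisfy q = 0."""
    dxdt = -x + J @ np.tanh(x)
    return 0.5 * dxdt @ dxdt

def speed_grad(x):
    """Analytic gradient of q(x): (d(dx/dt)/dx)^T @ dx/dt."""
    dxdt = -x + J @ np.tanh(x)
    jac = -np.eye(N) + J * (1 - np.tanh(x) ** 2)   # Jacobian of dx/dt
    return jac.T @ dxdt

# In practice, searches start from states the trained network actually visits;
# here a random initial condition stands in for such a state.
x0 = rng.normal(0, 1, N)
res = minimize(speed, x0, jac=speed_grad, method="L-BFGS-B")
print("residual speed at candidate fixed point:", res.fun)

# Linearizing around the candidate fixed point reveals the local structure:
# eigenvalues of the Jacobian show stable, unstable, and slow directions.
x_star = res.x
jac = -np.eye(N) + J * (1 - np.tanh(x_star) ** 2)
eigs = np.linalg.eigvals(jac)
print("max real part of Jacobian eigenvalues:", eigs.real.max())
```

Collecting many such fixed and slow points, and the eigenvectors around them, is what exposes the low-dimensional structure that trained networks use to perform their tasks.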
Recurrent neural networks as versatile tools of neuroscience research
Opening the black box: low-dimensional dynamics in high-dimensional recurrent neural networks
The interplay between randomness and structure during learning in RNNs
Charting and navigating the space of solutions for recurrent neural networks
One Step Back, Two Steps Forward: Interference and Learning in Recurrent Neural Networks