The conventional wisdom about artificial intelligence is that bigger is better.
Consider neural networks, the software at the heart of machine learning, which can discover patterns in data that humans might miss. Researchers keep building ever-larger versions capable of analyzing massive amounts of data, allowing them to do things like generate more realistic text in response to a query.
But these enormous neural networks come at a cost: their scale makes it difficult for researchers to understand how and why the software arrives at its predictions and decisions. When a network grows that large, researchers can get lost trying to make sense of the billions of interconnected calculations it performs.