There are indications that neural networks may operate at criticality in order to optimize neural computation. Previous approaches have relied on distinct fingerprints of criticality, leaving open the question of whether these different notions necessarily reflect different aspects of one and the same instance of criticality, or whether they could refer to distinct instances of criticality. In this work, we focus on avalanche criticality and edge-of-chaos criticality and demonstrate, for a recurrent spiking neural network, that avalanche criticality does not necessarily entail dynamical edge-of-chaos criticality. This suggests that the different fingerprints may pertain to distinct phenomena.
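The avalanche fingerprint mentioned above is typically assessed by binning population spiking activity, delimiting avalanches by silent bins, and checking whether avalanche sizes follow a power law. The following is a minimal illustrative sketch of that procedure, not the analysis pipeline of this study; the synthetic Poisson spike counts and the crude maximum-likelihood exponent fit are stand-ins for real recorded activity and for more careful fitting (e.g., with dedicated power-law fitting tools).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical binned population spike counts; in practice these would
# come from the recorded network activity (spikes per time bin, summed
# over all neurons).
counts = rng.poisson(0.8, size=10_000)

# An avalanche is a maximal run of nonzero bins bounded by silent bins;
# its size is the total number of spikes in that run.
sizes = []
current = 0
for c in counts:
    if c > 0:
        current += c
    elif current > 0:
        sizes.append(current)
        current = 0
if current > 0:
    sizes.append(current)
sizes = np.array(sizes)

# Crude discrete maximum-likelihood estimate of a power-law exponent
# for sizes >= s_min. At avalanche criticality, the size distribution
# is expected to follow a power law with exponent near 3/2.
s_min = 1
s = sizes[sizes >= s_min]
alpha = 1.0 + len(s) / np.sum(np.log(s / (s_min - 0.5)))
print(f"{len(sizes)} avalanches, fitted size exponent ~ {alpha:.2f}")
```

For the synthetic Poisson counts used here, no genuine power law is expected; the sketch only shows the mechanics of the avalanche-based fingerprint.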
In biological neural networks, scale-free avalanches of neuronal firing events have suggested that such networks might preferentially operate at criticality, in particular since theoretical studies of artificial neural networks and of cellular automata have highlighted potential computational benefits of such a state. These studies adhered to notions of either edge-of-chaos criticality or avalanche criticality. Here, using a recurrent neural network of more realistic neurons than those considered previously, we scrutinize whether these two manifestations of criticality necessarily co-occur. For this realistic network paradigm, we show that a positive largest Lyapunov exponent, indicating chaotic dynamics of the network, persists as we tune the network from subcritical through critical to supercritical avalanche behavior. This demonstrates that avalanche criticality does not necessarily co-occur with edge-of-chaos criticality.
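The dynamical fingerprint, a positive largest Lyapunov exponent, can be estimated by tracking the divergence of two initially close trajectories with repeated renormalization (the Benettin method). The sketch below applies this method to a generic random recurrent rate network rather than to the spiking model of this study; the network size, gain, and iteration counts are illustrative choices, and a gain above the known chaos threshold is used so that a positive exponent is expected.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in dynamics: a random recurrent rate network x -> tanh(g W x).
# This is NOT the spiking model of the study, only a minimal system on
# which the same Lyapunov-exponent estimate can be demonstrated. A gain
# g > 1 places the large random network in its chaotic regime.
N, g = 200, 2.0
W = rng.normal(0.0, 1.0 / np.sqrt(N), size=(N, N))

def step(x):
    return np.tanh(g * W @ x)

# Benettin method: evolve a reference trajectory and a perturbed copy,
# renormalize their separation to d0 after each step, and average the
# logarithmic stretching factor.
x = rng.normal(size=N)
d0 = 1e-8
y = x + d0 * rng.normal(size=N) / np.sqrt(N)

# Discard an initial transient before accumulating statistics.
for _ in range(500):
    x, y = step(x), step(y)
    d = np.linalg.norm(y - x)
    y = x + (y - x) * (d0 / d)

logs = []
for _ in range(2000):
    x, y = step(x), step(y)
    d = np.linalg.norm(y - x)
    logs.append(np.log(d / d0))
    y = x + (y - x) * (d0 / d)

lle = np.mean(logs)  # positive value indicates chaotic dynamics
print(f"estimated largest Lyapunov exponent: {lle:.3f}")
```

In the study's setting, this kind of estimate would be repeated as the network is tuned from subcritical to supercritical avalanche behavior, checking whether the exponent crosses zero.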