8th and 9th of July, 2025
Florence, Italy
The following are invited and contributing speakers for the workshop.
Giovanni Petri - "Balancing Learning, Generalization, and Processing Efficiency"
The human brain's remarkable ability to learn, generalize, and multitask reflects a fundamental tradeoff between interactive parallelism—shared representations that enhance learning—and independent parallelism—efficient concurrent processing.
In this talk, we synthesize our recent findings to explore this balance.
We first reveal how even minimal reliance on shared neural representations can significantly limit parallel task processing, highlighting inherent constraints on multitasking.
Next, leveraging network control theory, we illustrate how cognitive control and processing costs emerge from the interplay of control and learning under multitasking pressures.
Finally, we connect channel capacity and coding capacity through information theory to explain how the brain optimizes information processing within its capacity constraints.
By integrating these studies, we provide insights into how the brain navigates the interplay between learning, generalization, and processing efficiency, offering implications for developing AI systems that effectively manage these tradeoffs.
Marilyn Gatica - "Charting Higher-Order Models of Brain Function"
In this talk, I will discuss the relevance of understanding high-order interactions in computational neuroscience using fMRI human data, highlighting their significance in resting-state and task conditions, as well as their potential therapeutic applications, such as transcranial ultrasound stimulation.
Laura Sparacino - "Measuring Directed and Undirected High-Order Interactions in Dynamic Physiological Networks through Information Rate Decomposition"
Information Decomposition tools are emerging as principled and flexible frameworks to unveil complex high-order interactions in a wide range of network systems in computational neuroscience and physiology. Two of the most popular tools in this context are the O-information (OI), an undirected metric quantifying the balance between redundant and synergistic high-order interactions among three or more random variables, and the partial information decomposition (PID), a directed approach capable of dissecting the information shared between an assigned target variable and two or more sources into separate redundant and synergistic contributions. In spite of being defined specifically for random variables, these tools are ubiquitously applied to multivariate neurophysiological time series interpreted as realizations of random processes with temporal statistical structure. In this work, to overcome the incorrect depiction of high-order effects by OI and PID when applied to dynamic networks, we introduce the frameworks of O-information rate (OIR) and partial information rate decomposition (PIRD). By leveraging the mutual information rate (MIR) instead of the mutual information (MI), our approach decomposes the dynamic information shared by multivariate random processes into unique, redundant, and synergistic contributions, obtained by aggregating information rate atoms in a principled manner. To solve PIRD, we define a pointwise redundancy rate function based on the minimum MI principle applied locally in the frequency-domain representation of the processes. The framework is validated in benchmark simulations of dynamic Gaussian systems, demonstrating its advantages over traditional OI and PID schemes and showing how the spectral representation may reveal scale-specific higher-order interactions that are obscured in the time domain.
OIR and PIRD are then applied to exemplary physiological network systems whose dynamic behavior is probed by measuring multivariate neurophysiological time series.
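The static O-information that the abstract contrasts with its rate-based extension can be sketched in a few lines. The following is a toy illustration under a Gaussian approximation, not an implementation of the OIR/PIRD framework itself; the function names and test signals are our own:

```python
import numpy as np

def gaussian_entropy(cov):
    """Differential entropy (nats) of a Gaussian with covariance `cov`."""
    cov = np.atleast_2d(cov)
    n = cov.shape[0]
    return 0.5 * np.log((2 * np.pi * np.e) ** n * np.linalg.det(cov))

def o_information(X):
    """Static O-information of the columns of X (samples x variables) under a
    Gaussian approximation: positive values indicate redundancy-dominated,
    negative values synergy-dominated interactions."""
    n = X.shape[1]
    cov = np.cov(X, rowvar=False)
    oi = (n - 2) * gaussian_entropy(cov)
    for i in range(n):
        rest = np.delete(np.arange(n), i)
        oi += gaussian_entropy(cov[i, i]) - gaussian_entropy(cov[np.ix_(rest, rest)])
    return oi

rng = np.random.default_rng(0)
z = rng.normal(size=20000)
# Three noisy copies of one latent source: redundancy-dominated (OI > 0)
X_red = np.column_stack([z + 0.3 * rng.normal(size=20000) for _ in range(3)])
# A variable equal to the sum of two independent ones: synergy-dominated (OI < 0)
a, b = rng.normal(size=20000), rng.normal(size=20000)
X_syn = np.column_stack([a, b, a + b + 0.1 * rng.normal(size=20000)])
print(o_information(X_red) > 0, o_information(X_syn) < 0)
```

Because this estimator treats samples as i.i.d. draws, it exhibits exactly the blindness to temporal structure that motivates the rate-based OIR/PIRD frameworks described above.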
Maria Pope - "Time-varying synergy/redundancy dominance in the human cerebral cortex"
Recent work has emphasized the ubiquity of higher-order interactions in brain function. These interactions can be characterized as either redundancy- or synergy-dominated by applying tools from multivariate information theory. Though recent work has shown the importance of both synergistic and redundant interactions to brain function, their dynamic structure is still unknown. Here we analyze the moment-to-moment synergy and redundancy dominance of the fMRI BOLD signal during rest for 95 unrelated subjects to show that redundant and synergistic interactions have highly structured dynamics across many interaction sizes. The whole brain is strongly redundancy-dominated, with some subjects never experiencing a whole-brain synergistic moment. In small sets of brain regions, our analyses reveal that subsets which are redundancy-dominated on average exhibit the most complex dynamic behavior as well as the most synergistic and most redundant time points. In accord with previous work, these regions frequently belong to a single coherent functional system, and our analysis reveals that they become synergistic when that functional system becomes momentarily disintegrated. Although larger subsets cannot be contained in a single functional network, similar patterns of instantaneous disintegration mark when they become synergistic. At all sizes of interaction, we find notable temporal structure in both synergy- and redundancy-dominated interactions. We show that the interacting nodes change smoothly in time and have significant recurrence. Both of these properties make time-localized measures of synergy and redundancy highly relevant to future studies of behavior or cognition as time-resolved phenomena.
Patricio Orio - "Synergistic high-order statistics in a neural network is related to task complexity and attractor characteristics"
Emergence of collective functions in the brain is an open question in neuroscience, as they are believed to underlie consciousness, behavioral outputs, and brain disorders. It is unknown how measurable emergent behaviors can originate and be sustained, contributing to information processing. We studied the self-emergence of high-order interactions (HOIs) in RNNs that undergo plasticity to learn to perform cognitive tasks of different complexity. HOIs are statistical structures that are present in a group of variables but not in pairwise interactions, and are characterized by tools from information theory. We trained continuous-time RNNs to perform one of the following tasks: Go/NoGo, negative patterning, temporal discrimination, and context-dependent decision making. Then, the dynamics of the hidden layer were evaluated. HOIs were characterized using the O-info and S-info metrics at different orders of interaction, taking all combinations from 3 to 11 nodes. The dimension of the trajectory was assessed by the amount of variance explained by the first 5 PCA components. In our results, training causes the dynamics of the hidden layer to show HOIs with high redundancy at higher orders of interaction and synergistic interactions at lower orders. More synergy is observed after training with the context-dependent task, while more redundancy arises from the simpler Go/NoGo. Synergistic interactions are correlated with more complex dynamics, as suggested by a trajectory of higher dimension. Finally, we tested different pruning procedures to obtain sparser weight matrices, without observing an effect on the measured HOIs. Our results suggest that complex tasks promote the emergence of synergistic interactions.
Andrea Brovelli - "Higher-order and distributed synergistic functional interactions encode information gain in goal-directed learning"
The ability to create beliefs about the consequences of our actions, also known as goal-directed learning, is a key facet of cognition and it provides the basis for rational decision-making. Goal-directed learning arises from a distributed neural circuit including the prefrontal, posterior parietal and temporal cortices. However, the role of cortico-cortical functional interactions remains unclear. To address this question, we integrated the information decomposition framework with human magnetoencephalography (MEG) to investigate whether and how learning signals are encoded through neural interactions. Our findings revealed that information gain (the reduction in uncertainty about the causal relationship between an action and its consequence) is represented within a distributed cortical network, incorporating the visual, parietal, lateral prefrontal, and ventromedial/orbital prefrontal cortices. Remarkably, cortico-cortical interactions encoded information gain in a synergistic manner, beyond what individual regions represented alone. Synergistic interactions between cortical regions encoded information gain at the level of pairwise and higher-order relations, such as triplets and quadruplets. Higher-order synergistic interactions were characterized by long-range relationships centered in the ventromedial and orbitofrontal cortices, which served as key receivers in the broadcast of information gain across cortical circuits. Overall, this study provides evidence that information gain is encoded through both synergistic and higher-order functional interactions and is broadcast to the prefrontal reward circuitry. Furthermore, our findings offer a new perspective on how cognitively relevant information is encoded and transmitted within distributed cortical networks and brain-wide dynamics.
Jesús Cortes - "Brain structure and functional high-order interactions in the human brain"
The brain’s modular organization, ranging from microcircuits to large-scale networks, has been extensively studied in terms of its structural and functional properties. Particularly insightful has been the investigation of the coupling between structural connectivity (SC) and functional connectivity (FC), whose analysis has revealed important insights into the brain’s efficiency and adaptability related to various cognitive functions. Interestingly, links in SC are intrinsically pairwise, but this is not the case for FC; and while recent work demonstrates the relevance of the brain’s high-order interactions (HOI), the coupling between SC and functional HOI remains unexplored. To address this gap, this study leverages functional MRI and diffusion-weighted imaging to delineate the brain’s modular structure by investigating the coupling between SC and functional HOI. Our results demonstrate that structural networks can be associated with both redundant and synergistic functional interactions. In particular, while SC exhibits both positive and negative correlations with redundancy, it shows consistent positive correlations with synergy, indicating that a higher density of structural connections is linked to increased synergistic interactions. These findings advance our understanding of the complex relationship between structural and high-order functional properties, shedding light on the brain’s architecture underlying its modular organization.
Bastian Wiederhold - "The limits of human information processing"
In a recent study, Zheng and Meister (Neuron, 2025) found that humans can process only around 10 bit/s of high-level information. In this talk, I will show that information processing reaches 42 bit/s in memory competitions and 160 bit/s in mental calculation competitions. A majority of the time during memorization seems to be spent on reading, suggesting that associations in memory form even faster. In mental calculation, for a short period, visual processing reaches 90 bit/s and algorithmic processing 70 bit/s.
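To make the units concrete: a single shuffled 52-card deck, a standard memory-competition discipline, carries log2(52!) ≈ 226 bits of information, so any claimed bit rate translates directly into a memorization time. The timings below are hypothetical, chosen only to show the arithmetic, not figures from the talk:

```python
import math

# Information content of one shuffled 52-card deck: log2(52!) bits,
# computed via the log-gamma function (lgamma(53) = ln(52!)).
deck_bits = math.lgamma(53) / math.log(2)
print(f"one deck = {deck_bits:.1f} bits")  # ~225.6 bits

# Implied processing rate for hypothetical memorization times:
for t_seconds in (60.0, 30.0):
    print(f"memorized in {t_seconds:.0f} s -> {deck_bits / t_seconds:.1f} bit/s")
```

The same arithmetic works in reverse: dividing the deck's information content by a claimed rate gives the memorization time that rate would imply.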
Martin Breyton - "Spatiotemporal brain complexity quantifies consciousness outside of perturbation paradigms"
Signatures of consciousness are found in spectral and temporal properties of neuronal activity. Among these, spatiotemporal complexity after a perturbation has recently emerged as a robust metric to infer levels of consciousness. Perturbation paradigms remain, however, difficult to perform routinely. To discover alternative paradigms and metrics, we systematically explore brain stimulation and resting-state activity in a whole-brain model. We find that perturbational complexity only occurs when the brain model operates within a specific dynamical regime, in which spontaneous activity produces a large degree of functional network reorganization, a property referred to as fluidity. The regime of high brain fluidity is characterized by a small battery of metrics drawn from dynamical systems theory and predicts the impact of consciousness-altering drugs (xenon, propofol, and ketamine). We validate the predictions in a cohort of 15 subjects at various stages of consciousness and demonstrate their agreement with previously reported perturbational complexity, but in a more accessible paradigm. Beyond facilitating clinical use, these metrics highlight complexity properties of brain dynamics that support the emergence of consciousness.
Simone Poetto - "Topological Fingerprinting: Mesoscale Signatures and Information Dynamics of Individual Brain Identity"
Identifying individuals by their unique brain functional architecture is a key goal in neuroscience. However, standard functional connectivity (FC) methods for "brain fingerprinting" often overlook higher-order network topology. This study uses homological scaffolds, via topological data analysis (TDA) of resting-state fMRI, as a superior brain fingerprint. Analyzing Human Connectome Project data (n=100), topological fingerprints achieved 100% identification accuracy, outperforming FC-based methods (~90%) robustly across varying preprocessing, atlases, and scan durations. Notably, scaffold-based discriminative power stems from connections between networks, unlike FC's reliance on within-network features, highlighting a mesoscale signature of identity. Crucially, an information-theoretic analysis using integrated information decomposition (IID) reveals a novel link between the topological structure captured by scaffolds and the underlying information dynamics. The scaffold construction inherently identifies topological "holes" (1-dimensional cycles, or H1 generators) in the functional connectome. We found that the "border edges," which define the boundaries of these topological cycles and typically exhibit high FC, are characterized by high informational redundancy. This aligns with the known strong correlation between FC and redundancy, where border edges represent stable, potentially well-trodden communication pathways. In contrast, the "internal edges" – those connections that span across these topological voids – exhibit significantly higher synergistic information. These internal scaffold edges displayed higher synergy compared to border edges and also higher than randomly selected edges, even when controlling for FC strength, or randomly selecting other cycle structures. 
This suggests that the topological voids identified by persistent homology are not merely areas of weak pairwise correlation but are structured regions characterized by heightened synergistic interactions among the connections spanning them. We propose that the unique, individual-specific signature of brain organization captured by scaffolds is encoded in this balance between structured, redundant information flow along cycle boundaries and flexible, synergistic integration within the topological holes. This mesoscale organization, characterized by the specific arrangement of redundant and synergistic pathways, offers a richer, more nuanced view of brain function than pairwise FC alone. These findings establish topological scaffolds not only as a powerful tool for capturing individual variability but also as a framework for understanding how unique signatures of brain organization are encoded in the interplay between mesoscale network integration and information dynamics. This approach holds promise for biomarker discovery and for a deeper understanding of individual differences in cognition and behavior.
Demian Battaglia - "Coordinated Multi-frequency Oscillatory Bursts Enable Time-structured Dynamic Information Transfer"
Slower (e.g., beta) and faster (e.g., gamma) oscillatory bursts have been linked to multiplexed neural communication, respectively relaying top-down expectations and bottom-up prediction errors. These signals target distinct cortical layers with different dominant frequencies. However, this theory faces challenges: multiplexed routing might not require distinct frequencies, and phasic enhancement from slow oscillations may be too sluggish to modulate faster oscillatory processes. What fundamental functional advantage, then, could multi-frequency oscillatory bursting offer? We investigate information transfer between two neural circuits (e.g., different cortical layers or regions) generating sparsely synchronized, transient oscillatory bursts with distinct intrinsic frequencies in spiking neural networks. Through a systematic parameter space exploration, guided by unsupervised classification, we uncover a diverse range of Multi-Frequency Oscillatory Patterns (MFOPs). These include configurations in which the populations emit bursts at their natural frequencies, deviating from them, or even at more than one frequency simultaneously or sequentially. We then use transfer entropy between simulated multi-unit activity and analyses of single unit spike transmission to assess functional interactions.
We demonstrate that distinct MFOPs correspond to different Information Routing Patterns (IRPs), dynamically boosting or suppressing transfer in different directions at precise times, thus forming specific temporal graph motifs. Notably, the “slow” population can send information with latencies shorter than a fast oscillation period and can also affect multiple faster cycles within a single slow cycle. Supported by precise analyses of the spiking dynamics of synaptically coupled single neurons, we propose that MFOPs act as complex "attention mechanisms" (in the sense of ANNs), as they provide a controllable way to selectively weight the relevance of different incoming inputs as a function of their latencies relative to currently emitted spikes.
Our findings show that the coexistence and coordination of oscillatory bursts at different frequencies enables rich, temporally-structured choreographies of information exchange, moving well beyond simple multiplexing (one direction = one frequency). The presence of multiple frequencies considerably expands the repertoire of possible space-time information transfer patterns, providing a resource that could be harnessed to support distinct functional computations. Notably, multi-frequency oscillatory bursting could provide a self-organized manner to tag spiking activity with sequential context information, reminiscent of attention masks in transformers or other ANNs.
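The transfer entropy used above to map directed functional interactions can be illustrated with a minimal plug-in estimator on binarized activity. This is a toy sketch with history length 1, not the authors' analysis pipeline; the driven pair of sequences is our own construction:

```python
import numpy as np
from collections import Counter

def transfer_entropy(x, y):
    """Plug-in transfer entropy TE(X -> Y) in bits for binary sequences,
    with history length 1: TE = I(Y_{t+1}; X_t | Y_t)."""
    trip = list(zip(y[1:], y[:-1], x[:-1]))        # (y_next, y_past, x_past)
    n = len(trip)
    c_abc = Counter(trip)
    c_ab = Counter((a, b) for a, b, _ in trip)     # (y_next, y_past)
    c_b = Counter(b for _, b, _ in trip)           # y_past
    c_bc = Counter((b, c) for _, b, c in trip)     # (y_past, x_past)
    te = 0.0
    for (a, b, c), k in c_abc.items():
        # p(a,b,c) * log2[ p(a,b,c) p(b) / (p(a,b) p(b,c)) ], in raw counts
        te += (k / n) * np.log2(k * c_b[b] / (c_ab[(a, b)] * c_bc[(b, c)]))
    return te

rng = np.random.default_rng(1)
x = rng.integers(0, 2, 5000)
y = np.empty_like(x)
y[0], y[1:] = 0, x[:-1]            # y copies x with a one-step lag
print(transfer_entropy(x, y))      # close to 1 bit: X drives Y
print(transfer_entropy(y, x))      # close to 0: no reverse influence
```

The asymmetry between the two directions is what lets time-resolved variants of this quantity trace the directed, temporally precise routing patterns described in the abstract.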
Clélia De Mulatier - "Uncovering high-order patterns of data with spin models: simplicity matters"
Finding the model that best captures patterns hidden within noisy data is a central problem in science. In this context, the Ising model has been widely used to infer pairwise patterns in binary data. In recent years, attention has been brought to high-order patterns of data and the question of how to detect them. We will discuss the use of (classical) spin models with interactions of order higher than two to extract such patterns in binary data, and why this problem is challenging. By analyzing the information-theoretic complexity of spin models, we will see that, despite their appearance, models with high-order interactions are not necessarily more complex than pairwise models.
We will then focus on a sub-family of spin models with minimal information-theoretic complexity, which we call Minimally Complex Models (MCMs). These models have interactions organized in a community-like manner, and we will see how they can be used to identify groups of highly correlated variables in binary data. Uncovering such community structures in noisy data is crucial to understanding emergent phenomena in many complex systems, such as the brain, health systems, or social systems. So far, existing techniques for community detection in data rely mostly on the pairwise correlation patterns of the data; in contrast, our approach takes into account all higher-order data patterns. We will demonstrate the capabilities of our approach against pairwise community detection on artificial data with built-in high-order community structures, and discuss the consequences of using a pairwise approach when the data is inherently high-order. Finally, we will discuss possible applications of MCMs in the context of neuroscience.
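A minimal example of why pairwise statistics can miss higher-order structure: a three-spin model with a single triplet interaction, E(s) = -J s1 s2 s3, has zero magnetizations and zero pairwise correlations, yet a nonzero third-order correlation equal to tanh(J). A sketch by exact enumeration (the coupling value J is chosen arbitrarily for illustration):

```python
import itertools
import math
import numpy as np

J = 1.0  # triplet coupling strength (arbitrary illustrative value)
states = np.array(list(itertools.product([-1, 1], repeat=3)))
weights = np.exp(J * states.prod(axis=1))       # Boltzmann weights e^{J s1 s2 s3}
p = weights / weights.sum()                     # normalized probabilities

means = p @ states                              # <s_i>: all zero
pair = p @ (states[:, 0] * states[:, 1])        # <s1 s2>: zero by symmetry
triplet = p @ states.prod(axis=1)               # <s1 s2 s3>: equals tanh(J)
print(means, pair, triplet)
```

A purely pairwise (Ising) fit to samples from this distribution would see no structure at all, whereas a spin model allowing a third-order term captures the triplet statistic exactly, which is the motivation for the higher-order models discussed in the talk.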
Marius Pille - "The Virtual Brain reveals sweet dynamics of deep brain stimulation in Parkinson’s disease"
Deep Brain Stimulation (DBS) is a successful symptom-relieving treatment for Parkinson's disease (PD). Our recently developed DBS model using The Virtual Brain (TVB) simulation tool reproduced multiple biologically plausible effects of DBS in PD reported in the literature (Meier et al., 2022). In the current work, we extend the virtual DBS model towards higher resolution, now sensitive to the exact 3D location of the electrode, the fiber activations, the activated contact, and the electric field size.
We simulate DBS in N=14 PD patients with available empirical data on monopolar ring and directional contact activations and corresponding motor task outcomes. A linear model based on the principal component involvement of the simulated dynamics demonstrated a correlation between predicted and empirically observed motor task improvements due to DBS of r = 0.386 (p < 10^-4) in a leave-one-out cross-validation across a total of N=392 different electrode settings. Benchmarking revealed a trend toward better predictions with our "sweet dynamics" than with imaging-based static methods such as the sweet spot (r = 0.16, p < 0.05) and sweet streamline (r = 0.26, p < 10^-4) approaches (Hollunder et al., 2024). Furthermore, our model outperforms the traditional trial-and-error method in predicting optimal clinical settings for individual patients, e.g., achieving an over 60% likelihood of identifying the optimal contact within the first two suggested contacts.
In the future, the identified sweet dynamics can be used to optimize the electrode placement and settings in silico in individual patients, showcasing the potential benefit of whole-brain simulations for improving clinical routine.
This workshop has been run at CNS for over two decades now -- links to the websites for the previous workshops in this series are below:
Image modified from an original credited to dow_at_uoregon.edu, obtained here (distributed without restrictions); modified image available here under CC-BY-3.0