HBP Education Poster Section
Here you can find posters from HBP students and HBP partners who have agreed to share their work publicly.
Click on the images to download the posters.
If you would like us to display your poster here as well, please contact us.
Deep-sleep-inspired activity induces density-based-clustering on memories and entropic, energetic and classification gains
Chiara De Luca1,2*, Cristiano Capone1, Pier Stanislao Paolucci1
1Istituto Nazionale di Fisica Nucleare (INFN)
2 PhD Behavioural Neuroscience, La Sapienza University
Sleep is known to be essential for awake performance, but the mechanisms underlying its cognitive functions remain to be clarified. Here we investigate the effect of deep-sleep-like activity on the internal representation of memories. Relying on minimal assumptions, namely the thalamo-cortical structure and the presence of cortically generated cortico-thalamic slow oscillations, we formally show that sleep may naturally perform a "density-based clustering" of the thalamo-cortical connections. We demonstrate that this process improves performance on visual classification tasks (e.g. MNIST) in both rate and spiking networks. Finally, we define entropic and energetic measures of the effects of sleep that can be applied to experimental data to verify our theoretical predictions.
We modelled a thalamo-cortical system as a two-layer network (a thalamic layer, where the input is encoded, and a cortical layer, reciprocally connected) and defined a sleep-inspired activity that reshapes the structure of both synapses and stored memories (as depicted in Figures 1A and B). We implemented a rate-based model of a network completely disconnected from external stimuli and assumed that the populations of neurons are capable of sustaining an Up-state, a short (a few hundred ms) period of sustained activity that is a hallmark of cortical activity during deep sleep. When an Up-state occurs, other populations are activated, generating a Hebbian association. We performed similar experiments, with similar results, for a spike-based model (not shown in this document).
We also considered the natural scenario in which training and testing examples are not equally represented: each example is associated with a mass value that encodes the strength of the expression of that example in the sleeping dynamics. The probability of an Up-state occurring in a population is chosen proportionally to its mass. Consistent with experimental observations, we implemented a homeostatic effect of sleep that progressively lowers the masses of the examples. Our framework also accounts for the effect of plasticity on cortico-cortical recurrent connections during sleep-like dynamics. Finally, we performed entropy-based measurements of the network state at different sleep stages in two ways: first, a microscopic network-state analysis and, second, an approximated macroscopic network-state analysis yielding measures that can be experimentally verified (Figure 2). We applied this analysis to both a 2-class toy dataset and the more complex MNIST dataset, in both rate and spiking simulations (as a refinement of previously described network models).
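The core of the sleep-like dynamics described above can be sketched in a few lines. The toy implementation below (all parameter values, the 2D memory vectors and the Gaussian pull are illustrative assumptions, not the authors' actual model) shows how repeated Up-states, selected with mass-proportional probability, attract nearby memories more strongly than distant ones and so contract them into density-based clusters while the masses decay homeostatically:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical internal representations: one weight vector per stored memory.
memories = rng.normal(size=(40, 2))
masses = np.ones(40)            # expression strength of each memory
start = memories.copy()

def sleep_step(memories, masses, lr=0.05, sigma=0.5, decay=0.98):
    """One deep-sleep-like iteration (illustrative, not the authors' code).
    A population k enters an Up-state with probability proportional to its
    mass; nearby memories are attracted to it more strongly than distant
    ones (a distance-weighted Hebbian pull), and all masses decay
    homeostatically."""
    k = rng.choice(len(memories), p=masses / masses.sum())
    d = np.linalg.norm(memories - memories[k], axis=1)
    pull = np.exp(-d**2 / (2 * sigma**2))[:, None]   # closer -> stronger pull
    memories = memories + lr * pull * (memories[k] - memories)
    return memories, masses * decay

for _ in range(200):
    memories, masses = sleep_step(memories, masses)
```

After a few hundred iterations the spread of the representations shrinks as close memories group together, which is the qualitative effect reported in Figures 1C and D.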
RESULTS AND DISCUSSION
In this document, we present results obtained for the rate-based model trained on a 2-class dataset, the Crescent Full Moon dataset. We found analogous results for the spiking model trained on more complex datasets (i.e. the MNIST dataset, not shown). In this work, we found theoretical and experimental evidence that stored memories are reorganized following a density-based clustering: close memories group together (see Figures 1C and D). Specifically, we infer the internal representation of the memories from the synaptic weight distributions: similar synaptic weight patterns, which also lead to similar activation patterns in the cortical layer, are associated with closer internal representations. We then compute the separability of these memories with a linear separator, since we aim to study and model the intrinsic learning capabilities of the network. The main advantages of such clustering are that it does not require a predefined number of clusters, it finds arbitrarily shaped clusters, and it is independent of the number of elements. This improves the linear separability of memories belonging to different classes, improving accuracy in classification tasks (see Figure 1E). We also show that the presence of heterogeneous masses (which can be biologically interpreted as heterogeneity in population recurrent connectivity) speeds up the gain in separability for plastic feed-forward connections and improves the separability for plastic recurrent connections. Finally, we propose entropy-based measures that can be applied to experimental data to verify our theoretical predictions (Figure 2). Specifically, to evaluate the quantity of information stored in the network, we measure the entropy associated with the internal representation of memories (see Figure 2A) through both the synaptic weight distribution and the output-layer activity during retrieval; we then measure the cross-entropy to evaluate the mutual information associated with the two classes (see Figure 2B).
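As a concrete illustration of the separability measure, one can train a simple linear separator on internal representations and use its accuracy as a separability score. The synthetic 2-class data and the plain perceptron below are stand-ins for the representations inferred from the synaptic weights, not the study's actual classifier:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical internal representations of memories from the two classes.
class0 = rng.normal(loc=[-2, 0], scale=0.5, size=(50, 2))
class1 = rng.normal(loc=[+2, 0], scale=0.5, size=(50, 2))
X = np.vstack([class0, class1])
y = np.r_[np.zeros(50), np.ones(50)]

def linear_separability(X, y, epochs=100, lr=0.1):
    """Accuracy of a simple perceptron, used as a proxy for how
    linearly separable the stored memories are (illustrative)."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            pred = float(w @ xi + b > 0)
            w += lr * (yi - pred) * xi
            b += lr * (yi - pred)
    return np.mean((X @ w + b > 0) == y.astype(bool))

acc = linear_separability(X, y)
```

Tracking `acc` across sleep iterations reproduces the kind of curve shown in Figure 1E: as clusters tighten, the linear separator's accuracy rises.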
Finally, we also evaluated the energy used by the network at each sleep step (see Figure 2C). We show that a reduction in both entropy and energy consumption, together with an increase in cross-entropy, is associated with an improvement in classification performance (as depicted in Figure 2D).
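The entropy-based measures can be sketched as follows, assuming binned synaptic-weight distributions; the sampled distributions and bin edges are illustrative, not taken from the model:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical synaptic-weight samples associated with the two classes.
w_class_a = rng.normal(0.0, 1.0, size=5000)
w_class_b = rng.normal(1.5, 1.0, size=5000)

bins = np.linspace(-5, 7, 60)

def hist_prob(w, bins):
    """Bin weight samples into a normalized probability vector."""
    p, _ = np.histogram(w, bins=bins)
    return p / p.sum()

def entropy(p):
    """Shannon entropy (bits) of a binned weight distribution."""
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def cross_entropy(p, q, eps=1e-12):
    """Cross-entropy H(p, q) in bits: larger when the two class
    distributions are more distinct (illustrative measure)."""
    return float(-(p * np.log2(q + eps)).sum())

pa = hist_prob(w_class_a, bins)
pb = hist_prob(w_class_b, bins)
H_a = entropy(pa)
H_ab = cross_entropy(pa, pb)
```

Since H(p, q) = H(p) + KL(p‖q), the cross-entropy between the two classes grows as their weight distributions separate, while the per-class entropy can shrink as representations compress, matching the trends sketched in Figures 2A and B.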
To sum up, in this work we demonstrate the beneficial effect of memory association during deep-sleep-like activity in a simple thalamo-cortical network, in both rate and spiking simulations.
This opens the way to future work extending this analysis to more complex datasets and to other stages of sleep.
Figure 1: Simulated effects of sleep on stored memories with plastic feedforward connections. A) Network diagram: only thalamo-cortical connections are present; they are plastic during the deep-sleep-like activity. B) Graphic representation of the internal "density-based clustering" dynamics generated by sleep. When population k produces an Up-state, vectors close to it are attracted more strongly than distant ones and tend to form clusters. C) and D) Internal representation of stored memories at the beginning and end of the sleep phase, respectively. Black solid lines depict the optimal separator identified by the classifier. E) Linear separability of the stored memories as a function of sleep iterations.
Figure 2: Entropy, cross-entropy and energy measures of the network state during sleep: an increase in performance is associated with a reduction in entropy and energy consumption and an increase in cross-entropy. A) Entropy of the system at each sleep step. B) Cross-entropy between the two classes, computed at each sleep iteration to evaluate the separability of memories. C) Synaptic energy consumption of the network at each iteration step. D) Classification accuracy of the network on new, unseen examples at each sleep step. It is worth noting that after a given number of sleep iterations, all internal representations of memories start to collapse into the same representation, with a consequent reduction in classification performance.
Keywords: deep-sleep, synaptic plasticity, learning
 Walker, M. P. & Stickgold, R. Sleep, memory, and plasticity. Annu. Rev. Psychol. 57, 139–166 (2006).
 Jadhav, S. P., Kemere, C., German, P. W. & Frank, L. M. Awake hippocampal sharp-wave ripples support spatial memory. Science 336,1454–1458 (2012).
B. Golosio, C. De Luca, C. Capone, E. Pastorelli, G. Stegel, G. Tiddia, G. De Bonis and P.S. Paolucci, “Thalamo-cortical spiking model of incremental learning combining perception, context and NREM-sleep-mediated noise-resilience”, 2020. arXiv:2003.11859
 Capone, C., Pastorelli, E., Golosio, B., Paolucci, P.S. Sleep-like slow oscillations improve visual classification through synaptic homeostasis and memory association in a thalamo-cortical model. Sci Rep 9, 8990 (2019). https://doi.org/10.1038/s41598-019-45525-0
Modelling cerebellar nuclei and NucleoCortical pathways
Massimo Grillo1*, Alice Geminiani1,2, Alberto Antonietti1,2, Egidio D’Angelo2,3, Alessandra Pedrocchi1
1 NEARLab, Department of Electronics, Information and Bioengineering, Politecnico di Milano, Milan, Italy
2 Department of Brain and Behavioral Sciences, University of Pavia, Pavia, Italy
3 IRCCS Mondino Foundation, Pavia, Italy
The cerebellum is a subcortical structure whose main role in motor control is learning and coordination. From an anatomical point of view, it can be subdivided into two major layers: cortex and deep nuclei. The cerebellar cortex processes input sensorimotor information and then conveys signals toward the Deep Cerebellar Nuclei (DCN), which constitute the sole output stage of the cerebellum. However, information does not flow only from the cerebellar cortex to the cerebellar nuclei: some DCN neurons have been found to project back to the cerebellar cortex, forming the so-called NucleoCortical (NC) pathways.
In order to investigate the behaviour of DCN neurons during the execution of sensorimotor tasks, in-silico models of the cerebellum are often exploited. Available models of the cerebellar circuit include Spiking Neural Networks inspired by anatomical and functional features of the mouse cerebellum, such as cell morphologies, anatomical connections, and the electrophysiological behaviour of neurons and synapses. The most recent models of the DCN include only 2 of the 6 neuron types identified in the literature, and projections from the input stage of the cerebellum towards the nuclei, without any backward connection (NC pathways). Since a detailed implementation of the cerebellar nuclei is crucial when simulating complex sensorimotor tasks, the aim of this work was to update the current in-silico DCN layer by including the NC pathways, which required the implementation of a new neural population in the DCN layer: the Glycinergic-Inactive (Gly-I) neurons.
In order to replicate the functional behaviour of neurons in an in-silico model, Spiking Neural Networks can be exploited. The neural simulator NEST, developed within the Human Brain Project, provides a shared platform for brain simulation. For our purpose, the electroresponsive properties of Gly-I neurons were replicated with a single-point neuron model, the Extended-Generalized Leaky Integrate and Fire (E-GLIF) model. This model can reproduce complex electrophysiological behaviours at limited computational cost, thanks to a system of ordinary differential equations describing the dynamics of the membrane potential and two intrinsic currents of the neuron. The model includes some electrophysiological parameters extracted from the literature and other optimizable parameters, tuned through an optimization algorithm implemented in Matlab. Specifically, in order to implement Gly-I neurons, a specific stimulation protocol with current steps was designed before running the optimization algorithm, allowing their main electroresponsive features to be evaluated: no autorhythm, a linear current-frequency relationship and high spike-frequency adaptation. Several optimizations were then launched to find the set of parameters that minimized the difference between the desired spike times and the actual times at which the E-GLIF membrane potential reached threshold; in this way, the target firing rates for the different input current values in the stimulation protocol were fitted. Finally, the parameters were verified in NEST through single-neuron simulations, in which one Gly-I neuron was stimulated with the same protocol used during optimization and its firing rates were computed after the stimulation onset (instantaneous firing rate) and near the end of each stimulation step (steady-state firing rate).
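The distinction between instantaneous and steady-state firing rates, and the spike-frequency adaptation it reveals, can be illustrated with a much simpler adaptive leaky integrate-and-fire neuron than E-GLIF: one adaptation variable that jumps at each spike and decays slowly, with dimensionless illustrative parameters rather than the optimized Gly-I values:

```python
import numpy as np

def simulate_adaptive_lif(I_ext, T=2.0, dt=1e-4):
    """Toy adaptive LIF neuron (a simplified stand-in for E-GLIF).
    The adaptation variable w increments at every spike and decays
    slowly, giving spike-frequency adaptation; with zero input the
    neuron is silent (no autorhythm). All parameters are illustrative."""
    tau_m, tau_w = 0.02, 0.30        # membrane / adaptation time constants (s)
    v_rest, v_th, v_reset = 0.0, 1.0, 0.0
    b = 0.05                         # adaptation jump per spike
    v, w = v_rest, 0.0
    spike_times = []
    for step in range(int(T / dt)):
        v += dt * (-(v - v_rest) + I_ext - w) / tau_m
        w += dt * (-w / tau_w)
        if v >= v_th:
            v = v_reset
            w += b
            spike_times.append(step * dt)
    return np.array(spike_times)

def firing_rates(spike_times):
    """Instantaneous rate (first ISI) vs steady-state rate (last ISIs)."""
    if len(spike_times) < 4:
        return 0.0, 0.0
    isi = np.diff(spike_times)
    return 1.0 / isi[0], 1.0 / isi[-3:].mean()

inst, steady = firing_rates(simulate_adaptive_lif(I_ext=2.0))
```

Under a depolarizing step the first-interspike-interval rate exceeds the late-interval rate, which is exactly the adaptation signature that the current-step protocol quantifies for Gly-I neurons.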
At this point, by exploiting the cerebellar scaffold tool (https://github.com/dbbs-lab/bsb), Gly-I neurons were placed inside the DCN layer of the scaffold model by defining their layer position, neuron density, and morphological properties. Considering the convergence and divergence ratios, they were connected to other neurons inside the scaffold: Purkinje Cells and Mossy fibers, pre-synaptic populations that provide inhibitory and excitatory inputs respectively, and Golgi cells (GoCs) in the cerebellar cortex, a post-synaptic population inhibited by the activity of Gly-I neurons. The synaptic parameters of the NC pathway were set considering that a Gly-I burst lasting around 50 ms causes suppression in 25% of GoCs.
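A scaffold-style connection step of this kind can be sketched as a random divergence rule. The population sizes and divergence value below are hypothetical, chosen only so that roughly a quarter of the GoCs end up innervated, echoing the 25% figure; the actual scaffold tool also uses morphology and spatial position:

```python
import numpy as np

rng = np.random.default_rng(3)

n_glyi, n_goc = 20, 200
divergence = 3               # hypothetical GoC targets per Gly-I neuron

# Build a boolean connectivity matrix the way a scaffold connection
# strategy might: each Gly-I neuron picks `divergence` distinct GoC
# targets at random.
conn = np.zeros((n_glyi, n_goc), dtype=bool)
for i in range(n_glyi):
    targets = rng.choice(n_goc, size=divergence, replace=False)
    conn[i, targets] = True

fraction_goc_innervated = conn.any(axis=0).mean()   # share of GoCs reached
convergence = conn.sum(axis=0)                      # NC inputs per GoC
```

Inspecting `fraction_goc_innervated` and `convergence` is how one would verify that the divergence/convergence ratios match the anatomical constraint before tuning synaptic weights.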
RESULTS AND DISCUSSION
Figure 1 shows the behaviour of the Gly-I model at rest and when stimulated by depolarizing currents of increasing amplitude.
Figure 1: Current-frequency relationship of the Gly-I neuron during NEST simulations in two conditions: at rest and depolarized by currents of increasing amplitude. Both instantaneous (dots) and steady-state (squares) firing rates are reported as mean ± standard deviation over 50 simulations. Black symbols refer to model outcomes, whereas green symbols represent the experimental behaviour. The model values have been linearly interpolated (black lines).
At rest, no spiking activity was generated, while, when depolarized, the instantaneous firing rate increased almost linearly from around 20 Hz to 135 Hz. The steady-state firing rate is much lower than the instantaneous firing rate, showing high spike-frequency adaptation: the firing rate strongly decreases from stimulation onset to the end of the stimulation phase.
After their implementation as E-GLIF models, these neurons were placed inside the cerebellum scaffold and connected with other neurons. In particular, Gly-I neurons received input signals from Mossy fibers and Purkinje cells and projected outside the DCN layer toward the cerebellar cortex, reaching 25% of GoCs (see Figure 2A). The synaptic strength of the NC pathway formed by Gly-I neurons was tuned to replicate the suppression effect on GoCs: the population firing rate of GoCs decreases from 8.52 ± 0.82 Hz to 1.30 ± 0.71 Hz (see Figure 2B).
Figure 2: (A) Representation of the cerebellar cortex (pink), DCN layer (green) and NC pathways (red lines) connecting one Gly-I neuron (black dot) to several GoCs (blue dots) in the cerebellar cortex. (B) Scatter plots of GoCs and Gly-I neurons, showing the inhibitory effect of a glycinergic burst on GoC activity.
To sum up, we have enhanced previous models of the cerebellar DCN layer by adding a new neural population that projects back to the cerebellar cortex. The model will be exploited for more accurate simulations of complex cerebellum-driven tasks, where the integration of input sensorimotor signals and cerebellar output feedback occurring in the cerebellar cortex is supposed to play a key role. In addition, the model will be further updated following the experimental characterization of the other neuron types in the DCN.
Keywords: Computational Neuroscience, Spiking Neural Networks, Cerebellum, NucleoCortical pathways
S. G. Lisberger, “The neural basis for learning of simple motor skills,” Science, 1988.
 J. F. Medina, W. L. Nores, T. Ohyama, and M. D. Mauk, “Mechanisms of cerebellar learning suggested by eyelid conditioning,” Current Opinion in Neurobiology. 2000.
 B. D. Houck and A. L. Person, “Cerebellar loops: A review of the nucleocortical pathway,” Cerebellum. 2014.
 L. Ankri et al., “A novel inhibitory nucleo-cortical circuit controls cerebellar Golgi cell activity,” Elife, 2015.
 Z. Gao et al., “Excitatory Cerebellar Nucleocortical Circuit Provides Internal Amplification during Associative Conditioning,” Neuron, 2016.
 A. Geminiani, A. Pedrocchi, E. D’Angelo, and C. Casellato, “Response Dynamics in an Olivocerebellar Spiking Neural Network With Non-linear Neuron Properties,” Front. Comput. Neurosci., 2019.
 S. Casali, E. Marenzi, C. Medini, C. Casellato, and E. D’Angelo, “Reconstruction and simulation of a scaffold model of the cerebellar network,” Front. Neuroinform., 2019.
 M. Uusisaari and T. Knöpfel, “Functional classification of neurons in the mouse lateral Cerebellar Nuclei,” Cerebellum, 2011.
 J. M. Eppler, M. Helias, E. Muller, M. Diesmann, and M. O. Gewaltig, “PyNEST: A convenient interface to the NEST simulator,” Front. Neuroinform., 2009.
 A. Geminiani, C. Casellato, F. Locatelli, F. Prestori, A. Pedrocchi, and E. D’Angelo, “Complex dynamics in simplified neuronal models: Reproducing golgi cell electroresponsiveness,” Front. Neuroinform., 2018.
 A. Geminiani, C. Casellato, E. D’Angelo, and A. Pedrocchi, “Complex electroresponsive dynamics in olivocerebellar neurons represented with extended-generalized leaky integrate and fire models,” Front. Comput. Neurosci., 2019.
 M. Uusisaari and T. Knöpfel, “GlyT2+ neurons in the lateral cerebellar nucleus,” Cerebellum, 2010.
 R. De Schepper et al., “The Brain Scaffold Builder, a package for structural and functional modelling of brain circuits: the cerebellar use case.”
 B. D. Houck and A. L. Person, “Cerebellar premotor output neurons collateralize to innervate the cerebellar cortex,” J. Comp. Neurol., 2015.
Highway to a biologically-grounded neural field model of cerebellum
Roberta Maria Lorenzi1*, Alice Geminiani1, Claudia A.M. Gandini Wheeler-Kingshott1,2,3, Fulvia Palesi1, Claudia Casellato1, Egidio D'Angelo1
1 Department of Brain and Behavioral Sciences, University of Pavia, Via Forlanini 6, Pavia, Italy
2 Queen Square MS Centre, Department of Neuroinflammation, UCL Institute of Neurology, Russell Square House, Russell Square, WC1B 5EH, London, United Kingdom
3 IRCCS Mondino Foundation, Via Mondino 2, Pavia, Italy
The brain is made of interconnected networks that generate its global activity. Several modelling approaches are used to investigate the contribution of local networks to global brain dynamics. While biophysically detailed implementations make it possible to distinguish the contribution of single neurons to brain dynamics, they are usually too complex for large-scale simulations and need to be condensed into simpler, more abstract models. Typically, these take the form of neural masses or mean fields, which oversimplify the physiological properties of an entire neuronal circuit1,2,3. What we propose here is an advanced mean-field model of the cerebellar cortex that maintains a fundamental set of physiological and structural properties of this specific circuit.
The cerebellum contains about 50% of all brain neurons, is deeply interconnected with the rest of the brain and contributes remarkably to generating ensemble brain dynamics4; yet the mean-field models developed so far are tailored to the cerebral cortex and may not be effective at capturing the properties of the cerebellar cortex. Indeed, the cerebellar circuit organization is peculiar and markedly differs from that of the cerebral cortex. A mean-field model of the cerebellar circuit should therefore account for its complex neuronal features, multi-layer organization, quasi-crystalline geometry and local connectivity.
This work aims to develop a mean-field model retaining the salient properties of the cerebellar circuit. The model will be used not just to provide theoretical insight into cerebellar network functioning, but also to simulate the impact of cerebro-cerebellar interactions on whole-brain dynamics in the framework of The Virtual Brain (TVB).
The design of the cerebellar mean-field model starts from an accurate and extensive knowledge of cerebellar anatomy and physiology. The model includes the main populations of the cerebellar cortex, namely Granular Cells (GrC), Golgi Cells (GoC), Molecular Layer Interneurons (MLI) and Purkinje Cells (PC), and their connections. The connections among these neuronal populations include different excitatory, inhibitory and self-inhibitory synapses (Figure 1). Furthermore, we account for the multi-layer organization of the cerebellar cortex, where the GrC layer and PC layer are the input and output layers, respectively5. The functional reference is the spiking activity of neurons modelled as E-GLIF single-point neurons6,7 optimized for each population. Population-specific firing frequencies and synaptic connections are used to calculate the conductance of each cell population (equations in Figure 1). A Transfer Function (TF) mathematical formalism2 is used to transform population-specific spiking patterns into a time-continuous global output. This approach is inspired by the one already validated for mean-field models of isocortical circuits made of excitatory and inhibitory neurons9–11. Here, the mean-field TF formalism also accounts for the physiological diversification of cerebellar neuronal populations and for their topological organization. GrCs and GoCs receive, through mossy fibers (mf), external synaptic input (𝜈input) coming from other brain areas5. The detailed placement and connectivity generated by scaffold model approaches8,12 are used to set the connection probability (K). Detailed synaptic models (tuned and validated for each pairwise connection type) are used to set the quantal synaptic conductances and synaptic decay times (Q, 𝜏). Six different stationary 𝜈input values (10, 20, 40, 60, 80, 100 Hz) are used to generate reference spiking activity in NEST for fitting the TFs.
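The conductance equation of Figure 1 and the TF evaluation at the six stationary inputs can be sketched as follows. The K, Q and 𝜏 values and the sigmoid form of the transfer function are placeholders, not the tuned cerebellar parameters; in the actual pipeline the TF parameters are fitted against NEST reference simulations:

```python
import numpy as np

def mean_conductance(K, Q_nS, tau_ms, nu_hz):
    """Mean synaptic conductance of a connection, as in Figure 1:
    mu_g = K * Q * tau * nu (connections * quantal conductance *
    decay time * presynaptic rate). Returns nS; the parameter values
    used below are placeholders, not the tuned cerebellar values."""
    return K * Q_nS * (tau_ms * 1e-3) * nu_hz

nu_inputs = np.array([10.0, 20.0, 40.0, 60.0, 80.0, 100.0])  # Hz, as in the text

# Hypothetical mf->GrC parameters for illustration.
g_mf_grc = mean_conductance(K=4, Q_nS=0.23, tau_ms=1.9, nu_hz=nu_inputs)

def transfer_function(nu, nu_half=50.0, slope=0.1, nu_max=60.0):
    """Sigmoidal TF mapping input rate to population output rate;
    in the pipeline its parameters are fitted to NEST activity at
    the six stationary inputs."""
    return nu_max / (1.0 + np.exp(-slope * (nu - nu_half)))

nu_out = transfer_function(nu_inputs)
```

The fitted TF then replaces the spiking population with a time-continuous rate description, which is what allows the mean field to be embedded as a TVB node later on.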
PRELIMINARY RESULTS AND DISCUSSION
Figure 1 shows the model structure and the conductance equation for each population, with explicit dependencies on the synaptic parameters K, Q and 𝜏 (values in Figure 1, right). Figure 2 shows our pipeline for PC. The colour map shows the behaviour of the parameters used for TF fitting for both excitatory (𝜈GrC) and inhibitory (𝜈MLI) input activity to PC. The pipeline is extended to the other populations. The final cerebellar mean-field formalism is reported in Figure 2, right.
Compared with existing mean-field models, our cerebellar network is built with a bottom-up approach tailored to the specific structural and functional interactions of the neuronal populations in the cerebellum.
This work aims, for the first time, to implement a mean-field model of the cerebellum able to reproduce its population dynamics and to investigate specific cerebellar alterations in pathologies such as ataxia and autism.
This design will make it possible to investigate, for each population, how its activity affects the cerebellar cortex output, enabling a theoretical interpretation of network functions.
The spatial localization of input stimuli from the extended brain connectome will be fundamental to understanding the topological organization of cerebellar signal processing. This will require multiple modules of this cerebellar mean field, each receiving mfs from specific source regions. The differentiated descriptions of the synapses connecting neuronal populations will make it possible to account for critical factors controlling local circuit dynamics (e.g. the differentiation between AMPA and NMDA receptors at excitatory synapses, or between parallel fibers and ascending axons at GrC-GoC and GrC-PC synapses). Finally, to extend the model toward the mesoscale, the TF will be extended beyond the cerebellar cortex to include the Deep Cerebellar Nuclei (DCN).
Once an interconnected set of cerebellar mean fields has been built and validated, it will be mapped onto brain atlases and integrated into TVB, in order to exploit the long-range connectome. A recent work13 demonstrated the importance of the cerebellum in whole-brain dynamics by focusing on the cerebro-cerebellar loop. Those results will be compared with those generated with the present ad-hoc mean field. TVB-NEST co-simulations are being developed to reproduce brain activity at different levels of granularity (e.g. at single-neuron or population resolution). It will be interesting to compare how brain dynamics change when the cerebellum is modelled as a NEST spiking node in the full TVB versus as the mean-field node described here.
 Pinotsis D, Robinson P, Graben PB, Friston K. Neural masses and fields: Modeling the dynamics of brain activity. Front. Comput. Neurosci. 2014;8(NOV):2013–2015.
 Boustani S El, Destexhe A. A master equation formalism for macroscopic modeling of asynchronous irregular activity states. Neural Comput. 2009;21(1):46–100.
DiVolo M, Romagnoni A, Capone C, Destexhe A. Biologically Realistic Mean-Field Models of Conductance-Based Networks of Spiking Neurons with Adaptation. Neural Comput. 2019;31:653–680.
 D’Angelo E. Physiology of the cerebellum [Internet]. 1st ed. Elsevier B.V.; 2018.Available from: http://dx.doi.org/10.1016/B978-0-444-63956-1.00006-0
 Gandolfi D, Pozzi P, Tognolina M, et al. The spatiotemporal organization of cerebellar network activity resolved by two-photon imaging of multiple single neurons. Front. Behav. Neurosci. 2014;8(APR):1–16.
 Geminiani A, Casellato C, Locatelli F, et al. Complex dynamics in simplified neuronal models: Reproducing golgi cell electroresponsiveness. Front. Neuroinform. 2018;12(December):1–19.
 Geminiani A, Casellato C, D’Angelo E, Pedrocchi A. Complex electroresponsive dynamics in olivocerebellar neurons represented with extended-generalized leaky integrate and fire models. Front. Comput. Neurosci. 2019;13(June):1–12.
De Schepper R, et al. The Brain Scaffold Builder, a package for structural and functional modelling of brain circuits: the cerebellar use case. In preparation;
 Kuhn A, Aertsen A, Rotter S. Neuronal Integration of Synaptic Input in the Fluctuation-Driven Regime. J. Neurosci. 2004;24(10):2345–2356.
 Zerlaut Y, Chemla S, Chavane F, Destexhe A. Modeling mesoscopic cortical dynamics using a mean-field model of conductance-based networks of adaptive exponential integrate-and-fire neurons. J. Comput. Neurosci. 2018;44(1):45–61.
 Zerlaut Y, Destexhe A. Enhanced Responsiveness and Low-Level Awareness in Stochastic Network States [Internet]. Neuron 2017;94(5):1002–1009.Available from: http://dx.doi.org/10.1016/j.neuron.2017.04.001
 Casali S, Marenzi E, Medini C, et al. Reconstruction and simulation of a scaffold model of the cerebellar network. Front. Neuroinform. 2019;13(May):1–19.
 Palesi F, Lorenzi RM, Casellato C, et al. The Importance of Cerebellar Connectivity on Simulated Brain Dynamics. Front. Cell. Neurosci. 2020;14(July):1–11.
Figure 1) Cerebellar cortex microcircuit model. The external input 𝜈input is relayed by mf to GrC and GoC. The output activity 𝜈PC projects to the DCN. 𝜈input=external input [Hz]; for each population p or connection c: mp=conductance of each population [nS]; 𝜈p=firing rate of each population [Hz]; Kc=connections probability*cells numbers; Qc = quantal synaptic conductance [nS], 𝜏c=synaptic time decay [ms]. p = GrC, GoC, MLI, PC. c = mf-GrC, mf-GoC, GrC-GoC, GoC-GrC, GoC-GoC, GrC-MLI, MLI-MLI, GrC-PC, MLI-PC.
Figure 2) Pipeline to compute the transfer function. Left: the colour map shows, from yellow to blue, the parameters used for transfer-function fitting. Population conductances assume high values for high excitation combined with low inhibition (bottom-left corner of the map). The PC μV follows the same trend as the conductances, while σV is higher for low excitation and high inhibition. 𝜏V is also higher for low excitation and high inhibition. Right: cerebellar mean-field equations.
Morphological alterations of pyramidal neurons from the contralesional hemisphere after ischemic stroke
Sergio Plaza-Alonso1,2, Asta Kastanauskaite1,2, Susana Valero-Freitag3, Nikolaus Plesnila3, Farida Hellal3, Javier DeFelipe1,2,4 and Paula Merino-Serrais1,2
1 Laboratorio Cajal de Circuitos Corticales, Centro de Tecnología Biomédica, Universidad Politécnica de Madrid, Madrid 28223, Spain.
2 Instituto Cajal, Consejo Superior de Investigaciones Científicas (CSIC), Madrid 28002, Spain.
3 Experimental Stroke Research, Institute for Stroke and Dementia Research, Cluster for Systems Neurology, University of Munich Medical Center, Munich, Germany.
4 Centro de Investigación Biomédica en Red sobre Enfermedades Neurodegenerativas (CIBERNED), ISCII, Madrid, Spain.
Stroke is one of the major causes of death and disability worldwide. With over 80 million prevalent cases globally, the world is facing a modern epidemic.
Much emphasis has been placed on clarifying the pathological aspects and consequences of the focal lesion, the infarct core. However, studies indicate that, after stroke, remote regions connected to the infarcted area are also affected. This process, known as diaschisis, occurs, among other regions, in the contralateral hemisphere; it affects the performance of the whole brain and may be implicated in the suppression of functional recovery after stroke[3–5].
Given this, the main goal of the present study was to analyze possible microanatomical alterations of layer III pyramidal neurons in the contralesional somatosensory cortex-barrel field (BF) in the tMCAo mouse model of ischemic stroke. These alterations could provide new insights into the understanding of the pathology and lead to new therapeutic approaches.
The transient middle cerebral artery occlusion (tMCAo) mouse stroke model and the corresponding SHAM-control mice were used (20 weeks old, 18 male animals). 594 pyramidal neurons located in the contralesional somatosensory cortex-barrel field (layer III) were individually injected with Lucifer Yellow (LY). LY-injected cells were then 3D-reconstructed using confocal microscopy, and morphological parameters were analysed with the Neurolucida 360 software (Figure 1).
Several morphometric parameters were analysed in the apical and basal dendritic trees. First, we evaluated the complexity of the dendritic arborization in both the apical and basal dendritic trees by measuring dendritic length, dendritic volume, number of intersections, dendritic surface, number of nodes and dendritic diameter as a function of the distance from the soma (Sholl analysis; 30 cells per group). This analysis creates a 3D scaffold of concentric spheres that normalizes the measurements, thus allowing a reliable comparison between groups (Figure 1C).
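The Sholl procedure itself reduces to counting dendritic crossings of concentric spheres centred on the soma. A minimal sketch on synthetic segment data (the geometry below is random, not a reconstructed neuron, and the crossing criterion is simplified):

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical reconstructed dendrite: segments given as (start, end)
# point pairs in 3D, with the soma at the origin; units ~ micrometres.
n_seg = 300
starts = rng.normal(scale=40.0, size=(n_seg, 3))
ends = starts + rng.normal(scale=10.0, size=(n_seg, 3))

def sholl_intersections(starts, ends, radii):
    """Count dendritic intersections with concentric spheres: a segment
    crosses the sphere of radius r when its endpoints lie at distances
    straddling r (simplified criterion ignoring curved segments)."""
    d0 = np.linalg.norm(starts, axis=1)
    d1 = np.linalg.norm(ends, axis=1)
    lo, hi = np.minimum(d0, d1), np.maximum(d0, d1)
    return np.array([int(((lo < r) & (hi >= r)).sum()) for r in radii])

radii = np.arange(10, 150, 10)        # sphere radii (micrometres)
counts = sholl_intersections(starts, ends, radii)
```

Plotting `counts` against `radii`, per group, yields the intersection profiles that the two-way ANOVA then compares as a function of distance from the soma.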
Then, spine morphology was analysed through different morphometric parameters, such as dendritic spine density, spine length and spine volume, in the apical and basal dendritic trees. These data were studied as a function of the distance from the soma (Sholl analysis), as an average per dendrite, and as a frequency distribution (minimum of 21 dendrites per group).
Statistical analysis: the Mann-Whitney test was used to compare averages (mean ± SEM); the Kolmogorov-Smirnov test was used for the frequency distribution analysis; two-way ANOVA followed by post-hoc Bonferroni comparison was used to compare values presented as a function of the distance from the soma (Sholl analysis).
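For instance, the Mann-Whitney comparison of per-dendrite averages amounts to a rank-sum statistic. The sketch below is a numpy-only illustration on synthetic group data with no tie correction; in practice one would use a statistics package such as scipy, which also provides the p-value:

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical per-dendrite averages for the two groups (21 per group,
# matching the minimum sample size stated above).
sham = rng.normal(10.0, 1.0, size=21)
tmcao = rng.normal(8.5, 1.0, size=21)

def mann_whitney_u(x, y):
    """Mann-Whitney U statistic from rank sums (no tie correction;
    adequate for continuous morphometric measurements)."""
    data = np.concatenate([x, y])
    ranks = np.empty(len(data))
    ranks[data.argsort()] = np.arange(1, len(data) + 1)
    r_x = ranks[: len(x)].sum()
    return r_x - len(x) * (len(x) + 1) / 2.0

U = mann_whitney_u(sham, tmcao)
# Under H0, E[U] = n_x * n_y / 2; values far from it indicate a shift
# between the groups.
```

Because the test uses ranks rather than raw values, it makes no normality assumption, which is why it suits small per-dendrite samples like these.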
RESULTS AND DISCUSSION
The apical dendritic tree shows reduced neuronal complexity in tMCAo animals: significant decreases in the dendritic length (Figure 2A), number of intersections (Figure 2C), and dendritic surface (Figure 2D) of the apical dendritic tree were found in tMCAo compared with SHAM mice. No changes were found in any parameter of the basal dendritic tree.
Apical and basal dendrites show alterations in spine morphology: frequency distribution analysis reveals significant changes in spine length and volume in both the apical and basal dendritic trees between groups. No differences were found in the Sholl and average analyses.
The complexity of the neuronal dendritic structure determines their biophysical properties, thus influencing their functional capacity and potential for plastic change. In addition, dendritic spines, being the main post-synaptic element, are targets of most excitatory synapses in the cerebral cortex and their morphology could determine synaptic strength and learning rules[7–9]. Therefore, alterations of spine morphology and neuronal complexity could affect neuronal function. Furthermore, the morphological alterations found in this study could play an important role in the recovery of the patient, since most of the post-ictus rehabilitation therapies rely on the neuronal and circuit potential for plasticity of the areas both close to the infarct core and symmetrically located in the contralateral hemisphere[10,11]. Thus, the alterations found in this stroke model are of great interest and should be further analysed. In fact, we are currently extending the areas of study to perform these analyses. Specifically, we are planning to include the contralateral hippocampus and the secondary somatosensory cortex, both areas in which the ischemic lesion may have a different impact.
Figure 1. A) Panoramic overview of LY-injected pyramidal neurons (in green) from a SHAM somatosensory cortex transversal section (right hemisphere). B) Detail of an LY-injected neuron from Fig. 1A (indicated with a white arrow). Neuronal and spine structures were 3D-reconstructed using confocal microscopy and visualized with Neurolucida 360 software. C) 3D tracing model of the pyramidal neuron from Fig. 1B. Concentric spheres represent the segments used by the Sholl analysis to estimate the data. D) Detail of an LY-injected apical dendrite from a SHAM mouse. E-F) 3D trace of the dendrite branch (in red) and dendritic spines from Fig. 1D.
Figure 2. Reduction in apical dendritic tree complexity. Results from the apical dendritic tree morphological Sholl analysis. Significant decreases were found in several parameters (dendritic length, number of intersections, dendritic surface and dendritic volume) in the tMCAo mouse model compared with SHAM. Two-way ANOVA (repeated measures) and post-hoc Bonferroni tests were used to analyze the data. N = 30 cells per group (tMCAo, SHAM). Data are shown as mean ± SEM. *p<0.05; **p<0.01.
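Conceptually, the Sholl analysis behind Figure 2 counts how many traced dendritic segments cross each of a series of concentric spheres centred on the soma. A minimal sketch of that counting step follows; the function name and the example segment data are hypothetical, not taken from the actual Neurolucida pipeline.

```python
def sholl_intersections(segments, radii):
    """Count dendritic crossings of concentric spheres (Sholl analysis).

    segments: (start, end) radial distances from the soma of each traced segment.
    radii:    radii of the concentric sampling spheres.
    Returns one crossing count per sphere.
    """
    return [
        sum(1 for a, b in segments if min(a, b) <= r < max(a, b))
        for r in radii
    ]

# Hypothetical traced dendrite: two segments, spheres at 20 and 40 µm.
profile = sholl_intersections([(0, 30), (10, 50)], [20, 40])  # → [2, 1]
```

Plotting such a count profile against sphere radius gives the intersections-versus-distance curves that the two-way ANOVA in the caption compares between groups.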
 Donnan GA, Fisher M, Macleod M, Davis SM. Stroke. Lancet (London, England). 2008 May;371(9624):1612–23.
 Global, regional, and national burden of neurological disorders, 1990-2016: a systematic analysis for the Global Burden of Disease Study 2016. Lancet Neurol. 2019 May;18(5):459–80.
 Buetefisch CM. Role of the Contralesional Hemisphere in Post-Stroke Recovery of Upper Extremity Motor Function. Front Neurol. 2015;6:214.
 Duering M, Righart R, Csanadi E, Jouvent E, Hervé D, Chabriat H, et al. Incident subcortical infarcts induce focal thinning in connected cortical regions. Neurology. 2012 Nov;79(20):2025–8.
 Silasi G, Murphy TH. Stroke and the connectome: how connectivity guides therapeutic intervention. Neuron. 2014 Sep;83(6):1354–68.
 Benavides-Piccione R, Regalado-Reyes M, Fernaud-Espinosa I, Kastanauskaite A, Tapia-González S, León-Espinosa G, et al. Differential Structure of Hippocampal CA1 Pyramidal Neurons in the Human and Mouse. Cereb Cortex. 2019 Jul 2.
 Benavides-Piccione R, Fernaud-Espinosa I, Robles V, Yuste R, DeFelipe J. Age-based comparison of human dendritic spine structure using complete three-dimensional reconstructions. Cereb Cortex. 2013 Aug;23(8):1798–810.
 Stuart GJ, Spruston N. Dendritic integration: 60 years of progress. Nat Neurosci. 2015 Dec;18(12):1713–21.
 Carmichael ST. Plasticity of cortical projections after stroke. Neuroscientist. 2003 Feb;9(1):64–75.
 Gerloff C, Bushara K, Sailer A, Wassermann EM, Chen R, Matsuoka T, et al. Multimodal imaging of brain reorganization in motor areas of the contralesional hemisphere of well recovered patients after capsular stroke. Brain. 2005 Dec 19;129(3):791–808.
Cerebro-cerebellar interactions: in vivo and in silico tools co-design
Francesco Jamal Sheiban1, Alessandra Maria Trapani1, Alice Geminiani1,2, Egidio Ugo D’Angelo2,3, Alessandra Pedrocchi1
1 NEARLab, Department of Electronics, Information and Bioengineering, Politecnico di Milano, Milan, Italy;
2 Department of Brain and Behavioural Sciences, University of Pavia, Pavia, Italy;
3 IRCCS Mondino Foundation, Pavia, Italy.
A major goal of contemporary neuroscientific research is to design experimental protocols for animal studies coupled with in vivo neural recordings, gathering data to investigate the brain mechanisms underlying behaviour. Moreover, replicating those experiments in silico lets researchers develop and test computational models of the brain areas involved and validate them against the collected data. By faithfully reproducing biological features, such models can be used to test and/or suggest neurophysiological/pathological hypotheses and eventually be translated to clinical scenarios.
Recent findings ascribe cognitive functions to the cerebellum that might be linked to cerebro-cerebellar interactions, yet experimental tools and complementary computational models that investigate these connections exhaustively are missing from the literature. The aim of this study is therefore the co-design of in vivo and in silico behavioural protocols and tools to be used to investigate the functional role of cerebro-cerebellar interactions in motor learning tasks.
Following this approach, the work presented here involves: (i) the implementation of a custom experimental setup for "reach-to-grasp" in vivo experiments on adult mice; (ii) the design of a neurorobotic setup to replicate the protocol in silico, including a virtual environment and a robotic subject embodying a functional spiking neural network; (iii) the simulation of a behavioural task execution using the neurorobot.
The target behavioural protocol of this study is a "reach-to-grasp" movement to collect water droplets at two possible locations, marked by an anticipatory directional (left/right) cue. In the in vivo experimental procedure, water-deprived mice are trained to reach the reward from a fixed starting point after a go-cue (i.e., a sound) delivered with a time-varying delay, associating the direction of an early stimulus with the reward location.
Thus, a sensorized cage was designed to house the animal during task execution and automatically deliver the protocol stimuli with precise timing via integrated sensors and actuators. The overall system was controlled by an Arduino® Mega 2560 board communicating with a user-friendly graphical interface designed for setting protocol parameters and monitoring task execution in real time.
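The stimulus sequencing such a controller must deliver can be sketched as a simple per-trial schedule. Everything below (function name, phase labels, timings) is an illustrative assumption for exposition, not the actual firmware or GUI code:

```python
import random

def build_trial(direction, rng=None):
    """Return the ordered (phase, payload, duration_ms) schedule of one trial.

    direction: "left" or "right", the anticipatory cue and reward location.
    """
    rng = rng or random.Random()
    delay_ms = rng.uniform(500, 1500)  # time-varying delay before the go-cue
    return [
        ("directional_cue", direction, 200),  # early left/right stimulus
        ("delay", None, delay_ms),            # mouse waits at the start point
        ("go_cue", "sound", 100),             # auditory go signal
        ("reward_window", direction, 3000),   # droplet at the cued location
    ]

# One left-cued trial, seeded for reproducibility.
trial = build_trial("left", random.Random(42))
```

On the real set-up the board would step through such a schedule, driving the actuators for each phase and logging sensor events; randomising the delay prevents the animal from simply timing the go-cue.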
Along with the implementation of the in vivo apparatus, the behavioural protocol was reconstructed on the NeuroRobotics Platform (NRP). The in silico task was designed to closely mimic its in vivo counterpart by employing the iCub humanoid robot and providing the same environmental stimuli via custom-designed transfer functions.
This virtual implementation also required the design of a brain network model of integrate-and-fire neurons driving the neurorobot's movements, with constant synaptic weights tuned via a trial-and-error procedure. The reconstructed model, designed following biological findings, included two identical modules (to discriminate the directional stimuli) and feedback loops between the premotor and frontal cortices, motor thalamus and cerebellum to reproduce short-term memory and temporal decision mechanisms. More specifically, the cortical areas consisted of the medial prefrontal cortex (mPFC), the secondary (ALM) and primary (CFA, RFA) motor cortices and the primary somatosensory cortex (vS1). The thalamic areas comprised nuclei from the ventral lateral (VL, VAL) and posterior (VPM) regions (Figure 1).
RESULTS AND DISCUSSION
As a result of the in vivo and in silico co-design, we released and tested the first prototype of the experimental set-up, adjusting the component design to adapt it to real-case scenarios and completing the hardware circuitry along with the real-time communication between the software and the microcontroller. The entire experimental protocol was then correctly executed in silico by the neurorobot using a scaled-down neural network of the designed brain model (Figure 1).
Figure 1: In vivo and in silico protocol co-execution. The iCub robot, when provided with an SNN composed of two identical modules, whose structure is schematically reported in the figure [left], successfully performed the behavioural task as it was designed to be executed in vivo [right]. The virtual subject waited in a standing position with its hand on a fixed starting point (orange box), receiving a somatosensory stimulus on its shoulder (red box) simulating the in vivo directional stimulus (a whisker touch). The robot then waited motionless until a go-cue (green box) signalled reward availability, upon which it performed the grasping movement (blue box) mirroring the somatosensory stimulus direction.
Concerning the in silico protocol reconstruction, we provided the same in vivo experimental stimuli to the neurorobot, implementing different transfer functions to carry signals from the environmental and robotic sensors to the spiking neural network.
Monitoring the activity of the tuned spiking neural network, we evaluated its capability to reproduce the spiking responses expected at the beginning and end of cerebellum-driven learning. Specifically, we verified that the premotor cortex/thalamus loop, the medial prefrontal cortex and the motor cortex were able to sustain preparatory activity, block impulsive actions and drive movement execution, respectively, once provided with the expected end-of-learning cerebellar activity, as opposed to being disconnected from the cerebellar module (Figure 2).
Figure 2: Sensorimotor signal encoding in the network. Spike activity recorded without [left] and with [right] the supposed post-learning cerebellar input dynamics to the cerebral nodes. Without these inputs, the ALM/VL-VLA loop (c-f) and the mPFC area (h) are not silenced upon go-cue presentation (l-m), so their sustained activity persists after the screen turns green. Conversely, with the proper cortico-cerebellar connections, the go signal causes the inhibitory VL-VLA areas (e-g) to stop the mPFC, and the subsequent ALM-premotor (d) and CFA-RFA (i) activation drives the neurorobot to correctly perform the movement.
These results represent a solid basis on which to continue the co-design of in vivo and in silico protocols, improving the prototype of the experimental set-up, acquiring animal data, refining the neural network model (e.g., scaling up the network and embedding distributed plasticity) and simulating the full learning protocol, eventually exploiting high-performance computing resources.
Acknowledgments: This work is supported by CerebNEST and RisingNet projects within the European Union’s Horizon 2020 Framework Programme for Research and Innovation under the Specific Grant Agreement No. 945539 (Human Brain Project SGA3). We thank Ileana Montagna, PhD, Department of Brain and Behavioral Sciences, University of Pavia, for the experimental setup design support.
 J. N. Crawley and A. Holmes, “Behavioral Neuroscience,” Current Protocols in Neuroscience, no. April, pp. 1–2, 2011.
 E. D'Angelo and S. Casali, "Seeking a unified framework for cerebellar function and dysfunction: from circuit operations to cognition," Frontiers in Neural Circuits, vol. 6, pp. 1–23, 2013.
 M. J. Wagner and L. Luo, “Neocortex – Cerebellum Circuits for Cognitive Processing,” Trends in Neurosciences, vol. 43, no. 1, pp. 42–54, 2020.
 E. Falotico, L. Vannucci, A. Ambrosano, and U. Albanese, “Connecting Artificial Brains to Robots in a Comprehensive Simulation Framework: The Neurorobotics Platform,” Frontiers in Neurorobotics, vol. 11, no. January, pp. 1–19, 2017.
 A. Saiki, R. Kimura, T. Samura, Y. Fujiwara-Tsukamoto, and Y. Sakai, “Different Modulation of Common Motor Information in Rat Primary and Secondary Motor Cortices,” PLOS ONE, vol. 9, no. 6, pp. 1–13, 2014.
 M. Murakami, H. Shteingart, Y. Loewenstein, Z. F. Mainen, “Distinct Sources of Deterministic and Stochastic Components of Action Timing Decisions in Rodent Frontal Cortex,” Neuron, vol. 94, no. 4, pp. 908–919., 2017.
 Z. V. Guo, H. K. Inagaki, K. Daie, S. Druckmann, C. R. Gerfen, and K. Svoboda, “Maintenance of persistent activity in a frontal thalamocortical loop,” Nature, 2017.
 N. Sakayori, S. Kato, M. Sugawara, S. Setogawa, H. Fukushima, R. Ishikawa, S. Kida, and K. Kobayashi, “Motor skills mediated through cerebellothalamic tracts projecting to the central lateral nucleus,” Molecular Brain, pp. 1–12, 2019
 F. P. Chabrol, A. Blot, T. D. Mrsic-Flogel, “Cerebellar Contribution to Preparatory Activity in Motor Neocortex,” Neuron, vol. 103, no. 3, pp. 506–519, 2019.