Monday, July 22
 

8:30am PDT

Registration
Monday July 22, 2024 8:30am - 8:30am PDT

9:10am PDT

Announcements and Keynote #3
Monday July 22, 2024 9:10am - 10:10am PDT

10:10am PDT

Coffee Break
Monday July 22, 2024 10:10am - 10:40am PDT

10:40am PDT

Oral Session 3: From cells to circuits
Monday July 22, 2024 10:40am - 12:30pm PDT
Jacarandá

10:41am PDT

FO3: Neural Heterogeneity Controls the Computational Properties of Spiking Neural Networks
Richard Gast, Sara A. Solla, Ann Kennedy

Monday July 22, 2024 10:41am - 11:10am PDT
Jacarandá

11:30am PDT

O10: Functional Connectivity and Complex Network Dynamics of In-Vitro Neuronal Spiking Activity During Rest and Gameplay
Moein Khajehnejad, Forough Habibollahi Saatlou, Alon Loeffler, Brett J. Kagan, Adeel Razi

Monday July 22, 2024 11:30am - 11:50am PDT
Jacarandá

12:30pm PDT

Lunch
Monday July 22, 2024 12:30pm - 2:00pm PDT

12:30pm PDT

OCNS Board Meeting
Monday July 22, 2024 12:30pm - 2:00pm PDT

2:01pm PDT

FO4: Backwards and forwards, hot or cold: robust and flexible rhythms in a neural network model
Lindsay Stolting, Joshua Nunley, Eduardo Izquierdo

Monday July 22, 2024 2:01pm - 2:30pm PDT
Jacarandá

2:50pm PDT

O14: Forecasting Seizure Duration from Neural Connectivity Patterns
Parvin Zarei Eskikand, Mark Cook, Anthony Burkitt, David Grayden

Monday July 22, 2024 2:50pm - 3:10pm PDT
Jacarandá

3:10pm PDT

O15: A computational model to help in understanding the impact of a 3D organization on cortical dynamics
Francesca Callegari, Martina Brofiga, Paolo Massobrio

Monday July 22, 2024 3:10pm - 3:30pm PDT
Jacarandá

3:30pm PDT

O16: Self-organized emergence of multi-areal information processing in a non human primate connectome-based model
Vinicius Lima Cordeiro, Nicole Voges, Andrea Brovelli, Demian Battaglia

Monday July 22, 2024 3:30pm - 3:50pm PDT
Jacarandá

3:50pm PDT

Coffee Break
Monday July 22, 2024 3:50pm - 4:20pm PDT

4:20pm PDT

P042 Optimal coding and information processing due to firing threshold adaptation near criticality
The brain encodes information through neuronal populations' output firing rates [1] or spike patterns [2]. However, weak inputs have limited impact on output rates, so rate coding alone cannot explain the full behavioral performance of sensory systems. Spike patterns, which are implicated in perception and memory, can generate sparse and combinatorial codes, enhancing memory capacity, robust signal encoding, information transmission, and energy efficiency [2, 3].
This study investigates input-output (I/O) relations in a recurrent excitatory network, describing the effect of spike threshold adaptation on both rate and pattern coding. We compare networks with adaptive and constant firing thresholds, showing that adaptive networks exhibit both optimal pattern coding capacity and optimal I/O mutual information for weak inputs. Our model allows us to reveal the underlying mechanism of the optimization – a partial Self-Organized quasi-Critical (SOqC) dynamics [4]. The adaptation enables a smooth transition from pattern coding to rate coding as input rates increase, with a threshold recovery timescale of ~100 ms. This holds around the critical point, while constant-threshold networks only perform pattern coding in the supercritical state and for stronger inputs, and are thus not capable of discriminating weak stimuli. However, the adaptive network's rate coding capacity (as described by its dynamic range) is equivalent to that of constant-threshold networks in the subcritical regime.
The identified threshold timescale aligns with that of various cell types in the brain, including those of the mammalian cortex and hippocampus (e.g., [5]), the teleost pallial region, and sensory neurons. Our findings lead to the hypothesis that threshold adaptation – one of the ingredients of spike frequency modulation – is exploited by these systems to generate sensitivity to weak and strong stimuli alike through pattern and rate coding, respectively. For instance, threshold changes have been observed in the hippocampus, enhancing factors such as information transmission, feature selectivity (e.g., [6]), neural code precision, and synchrony detection. These brain regions, critical for discriminating sensory inputs and for memory tasks, stand to benefit from improved pattern coding.
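The threshold-adaptation mechanism described above can be sketched with a leaky integrate-and-fire unit whose threshold jumps at each spike and recovers with a ~100 ms time constant. All parameter values below are illustrative assumptions, not those of the study's network model.

```python
import numpy as np

def lif_adaptive_threshold(I, dt=1.0, tau_m=20.0, tau_theta=100.0,
                           v_rest=0.0, theta0=1.0, d_theta=0.5):
    """Leaky integrate-and-fire unit with a spike-triggered adaptive
    threshold that relaxes back to baseline with time constant tau_theta
    (~100 ms, matching the recovery timescale in the abstract).
    Parameter values are illustrative, not fitted."""
    v, theta = v_rest, theta0
    spikes = []
    for t, i_t in enumerate(I):
        v += dt * (-(v - v_rest) + i_t) / tau_m      # membrane dynamics
        theta += dt * (theta0 - theta) / tau_theta   # threshold recovery
        if v >= theta:                               # spike condition
            spikes.append(t * dt)
            v = v_rest                               # reset potential
            theta += d_theta                         # raise threshold
    return spikes

# Sustained weak vs. strong drive: adaptation limits the rate for both,
# but the strong input still produces far more spikes.
weak = lif_adaptive_threshold(np.full(1000, 1.05))
strong = lif_adaptive_threshold(np.full(1000, 2.0))
```

With constant thresholds the weak input here would spike at a much higher sustained rate; the adaptive threshold spaces the spikes out, which is the rate-limiting side of the mechanism the abstract describes.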
References:
1. W Gerstner et al. (2014): Neuronal Dynamics. Cambridge University Press.
2. BA Olshausen, DJ Field (2004): Sparse coding of sensory inputs. Curr Opin Neurobiol 14: 481-487.
3. V Itskov et al. (2011): Cell Assembly Sequences Arising from Spike Threshold Adaptation Keep Track of Time in the Hippocampus. J Neurosci 31: 2828-2834.
4. G Menesse et al. (2022): Homeostatic Criticality in Neuronal Networks. Chaos Solitons Fractals 156: 111877.
5. A-T Trinh et al. (2023): Adaptive spike threshold dynamics associated with sparse spiking of hilar mossy cells are captured by a simple model. J Physiol 601: 4397-4422.
6. WB Wilent, D Contreras (2005): Stimulus-Dependent Changes in Spike Threshold Enhance Feature Selectivity in Rat Barrel Cortex Neurons. J Neurosci 25: 2983-2991.
 



Monday July 22, 2024 4:20pm - 6:20pm PDT
TBA

4:20pm PDT

P043 Diving into space: emerging and disappearing shared dimensions in neuronal activity under the influence of psychedelics
Classical psychedelics (5-HT2A agonists) wield profound influence over the orchestrated interplay of billions of neurons in the cortex. Dimensionality reduction techniques like Principal Component Analysis (PCA) reveal that lower dimensional structures of spontaneous brain activity are persistent across time and neurons [1]. However, when classical psychedelics are introduced, do new dimensions emerge or vanish within neuronal population activity? Moreover, what biological mechanisms underlie emerging new dimensions? We set out to answer these questions by identifying contrastive principal components between different brain states. 
Therefore, we analysed Neuropixels recordings of spontaneous brain activity in rodents before vs. after psychedelic drug administration (TCB-2, psilocybin, LSD, DMT), during wakefulness vs. non-REM sleep [2], and during low vs. high arousal [3]. We utilized contrastive Principal Component Analysis (cPCA) to identify dimensions that either appear or disappear [4], i.e. directions that are present in the target dataset (e.g. before drug) but not in the background dataset (e.g. after drug). Contrastive components were identified by eigendecomposition of the difference between the target covariance matrix and the background covariance matrix scaled by the contrastive parameter alpha (Fig. 1). We methodologically extended cPCA by analysing the position of contrastive components on the alpha spectrum.
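The cPCA step described above reduces to an eigendecomposition of the difference of covariance matrices. A minimal sketch (function name and toy data are illustrative):

```python
import numpy as np

def contrastive_pca(target, background, alpha, n_components=2):
    """Contrastive PCA (Abid et al. 2018): eigendecomposition of
    C_target - alpha * C_background. Directions with large positive
    eigenvalues carry variance present in the target but not in the
    background dataset."""
    c_t = np.cov(target, rowvar=False)
    c_b = np.cov(background, rowvar=False)
    evals, evecs = np.linalg.eigh(c_t - alpha * c_b)
    order = np.argsort(evals)[::-1]        # largest contrastive variance first
    return evals[order][:n_components], evecs[:, order][:, :n_components]

# Toy example: a direction with extra variance only in `target`
# dominates the first contrastive component.
rng = np.random.default_rng(0)
background = rng.normal(size=(500, 3))
target = rng.normal(size=(500, 3))
target[:, 0] *= 3.0                        # inflate variance along axis 0
top_evals, comps = contrastive_pca(target, background, alpha=1.0)
```

Sweeping `alpha`, as the abstract describes, traces how each contrastive component's eigenvalue moves along the alpha spectrum.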
Preliminary results indicate that classical psychedelics consistently caused dimensions to disappear, which was particularly prominent on slow timescales. This trend persists even after excluding neurons whose firing rates decreased after drug administration. Contrastive dimensions were also rarely unique to one dataset, but rather were shared across target and background datasets to varying extents. This holds true for the psychedelic (before vs. after drug) and sleep (non-REM vs. wakefulness) datasets. To quantify and compare contrastive dimensions, we measured them either towards the left end of the alpha spectrum, where the background variance equals the random variance of a single neuron, or towards the centre of the alpha spectrum, where the principal component captures half of the variance of principal component 1 (Fig. 1).

References
[1] Stringer, C., Pachitariu, M., Steinmetz, N., Reddy, C. B., Carandini, M., & Harris, K. D. (2019). Spontaneous behaviors drive multidimensional, brainwide activity. Science, 364(6437), eaav7893.
[2] Senzai, Y., Fernandez-Ruiz, A., & Buzsáki, G. (2019). Layer-specific physiological features and interlaminar interactions in the primary visual cortex of the mouse. Neuron, 101(3), 500-513.
[3] Stringer, C., Pachitariu, M., Reddy, C., Carandini, M., & Harris, K. D. (2018). Recordings of ten thousand neurons in visual cortex during spontaneous behaviors (Version 4). Janelia Research Campus. https://doi.org/10.25378/janelia.6163622.v4
[4] Abid, A., Zhang, M. J., Bagaria, V. K., & Zou, J. (2018). Exploring patterns enriched in a dataset with contrastive principal component analysis. Nature Communications, 9(1), 2134.


Monday July 22, 2024 4:20pm - 6:20pm PDT
TBA

4:20pm PDT

P044 How to measure the dynamic range of complex response functions?
The neuronal response function delineates the interplay between external stimuli and neural activity, serving as a pivotal tool for unraveling how neurons encode and process information. A fundamental aspect of response functions is their dynamic range, which quantifies the range of input levels that yield distinguishable neuronal responses. Many response functions exhibit a simple sigmoidal profile, featuring subtle firing rate changes at low and high inputs and more pronounced changes for intermediate input levels. For these typical cases, the conventional dynamic range definition, which assumes that the entire input range comprised between 10% and 90% of the maximum output contributes to the dynamic range (whilst the rest is discarded), proves successful. However, growing evidence also indicates the presence of more complex response functions with double- or multiple-sigmoid curves, often featuring plateaus within the customary 10%–90% response range. For complex response functions, the conventional dynamic range definition often generates inflated results, as indistinguishable inputs (plateaus) may improperly contribute to the measured dynamic range. To better understand complex response functions, we study a set of complex response functions from previously published empirical and modeling studies, and a neuronal model of a mouse retinal ganglion cell with detailed dendritic structure capable of exhibiting both simple-sigmoid and complex response functions. The model incorporates two dynamical elements that reduce or increase the energy consumption of the neuron, and both alterations can yield double-sigmoid response functions. We introduce a novel way of classifying response functions based on their complexity. To estimate the dynamic range of only the discernible responses in both simple and complex response functions, we propose alternative definitions of dynamic range.
These alternative approaches match the measured dynamic range of the conventional definition for simple response functions and generalize the measure for complex response functions. We discuss the advantages and limitations of each proposal, highlighting that all of them have fewer arbitrary choices than the conventional definition of dynamic range. These newly developed methods are general, and adaptable to various research fields.
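For a simple sigmoidal curve, the conventional 10%–90% definition the abstract critiques reduces to a few lines. A minimal sketch (function name and Hill-type example are illustrative):

```python
import numpy as np

def dynamic_range(stimulus, response):
    """Conventional dynamic range: the stimulus interval [S10, S90]
    spanning 10%-90% of the maximum response, expressed in dB as
    10*log10(S90/S10). Assumes a monotonic response curve; as the
    abstract notes, interior plateaus of complex (e.g. double-sigmoid)
    response functions improperly inflate this measure."""
    r = (response - response.min()) / (response.max() - response.min())
    s10 = np.interp(0.1, r, stimulus)   # stimulus level at 10% response
    s90 = np.interp(0.9, r, stimulus)   # stimulus level at 90% response
    return 10.0 * np.log10(s90 / s10)

# Simple sigmoid in log-stimulus space: a Hill-type response function.
s = np.logspace(-3, 3, 1000)
f = s / (s + 1.0)
dr = dynamic_range(s, f)                # close to 10*log10(81) ≈ 19 dB
```

For the Hill curve above the result is near the textbook value of ~19 dB; for a double-sigmoid curve, the same formula would count the plateau between the two sigmoids as discernible range, which motivates the alternative definitions proposed.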


Monday July 22, 2024 4:20pm - 6:20pm PDT
TBA

4:20pm PDT

P045 Hierarchical Brain Dynamics: Insights from Multicompartmental Neuronal Modeling
Cytoarchitectonic studies have uncovered a correlation between higher levels of cortical hierarchy and reduced dendritic size [1]. This hierarchical organization extends to the brain's timescales, revealing longer intrinsic timescales at higher hierarchical levels [2-4]. However, the contribution of single-neuron morphology to this hierarchy of timescales, which is typically characterized at the whole-brain level, remains unclear. We employ a multicompartmental neuronal modeling approach from digitally reconstructed neurons [5], which has previously enabled the classification of neurons based on their dynamical features [6], and the study of aging effects on neuronal structure and dynamics [7]. This flexible approach has provided valuable insights into dendritic computation and the intricate interplay between age-related changes and neuronal behavior. Here we establish a significant correlation: neurons with larger dendritic structures exhibit shorter intrinsic timescales. Furthermore, we investigate the influence of inhomogeneous propagation of dendritic activity and synaptic input on neuronal energy consumption, which is also heterogeneously distributed across brain regions [8]. Our results reveal mechanisms underlying complex neuronal response functions characterized by plateaus and a double-sigmoid shape, which are akin to patterns observed in retinal ganglion cells [9]. Our findings highlight the crucial role of single-neuron structure in contributing to a hierarchy of intrinsic timescales in the brain, aligning with observations from electrophysiology experiments [10] and whole-brain resting-state functional magnetic resonance imaging [11]. This study advances our understanding of neuronal dynamics and sheds light on the intricate relationship between neuronal structure, hierarchy of timescales, and energy consumption in the brain.

[1] Hilgetag, C.C. and Goulas, A., Philos Trans R Soc Lond B Biol Sci, 2020, 375(1796), p.20190319.

[2] Kiebel, S.J., Daunizeau, J. and Friston, K.J., PLoS Comput. Biol., 2008, 4(11), p.e1000209.

[3] Chaudhuri, R., Knoblauch, K., Gariel, M.A., Kennedy, H. and Wang, X.J., Neuron, 2015, 88(2), pp.419-431.

[4] Gollo, L.L., Zalesky, A., Hutchison, R.M., Van Den Heuvel, M. and Breakspear, M., Philos Trans R Soc Lond B Biol Sci, 2015, 370(1668), p.20140165.

[5] Ascoli, G.A., Donohue, D.E. and Halavi, M., J. Neurosci., 2007, 27(35), pp.9247-9251.

[6] Kirch, C. and Gollo, L.L., PeerJ, 2020, 8, p.e10250.

[7] Kirch, C. and Gollo, L.L., Sci. Rep., 2021, 11(1), p.1309.

[8] Shokri-Kojori, E., Tomasi, D., Alipanahi, B., Wiers, C.E., Wang, G.J. and Volkow, N.D., Nat. Commun., 2019, 10(1), p.690.

[9] Deans, M.R., Volgyi, B., Goodenough, D.A., Bloomfield, S.A. and Paul, D.L., Neuron, 2002, 36(4), pp.703-712.

[10] Murray, J.D., Bernacchia, A., Freedman, D.J., Romo, R., Wallis, J.D., Cai, X., Padoa-Schioppa, C., Pasternak, T., Seo, H., Lee, D. and Wang, X.J., Nat. Neuroscience, 2014, 17(12), pp.1661-1663.

[11] Raut, R.V., Snyder, A.Z. and Raichle, M.E., PNAS, 2020, 117(34), pp.20890-20897.


Monday July 22, 2024 4:20pm - 6:20pm PDT
TBA

4:20pm PDT

P046 Basic biophysical models of short-term presynaptic plasticity
Communication between neurons via chemical synapses may generate different postsynaptic responses for consecutive activations. The differences can be explained by physiological phenomena that occur pre- and postsynaptically, possibly evolving on different time scales ranging from tens or hundreds of milliseconds to hundreds of seconds or even more. Based on previous experimental work (Barroso-Flores et al., 2015), we developed two biophysical, yet simple models of presynaptic neurotransmitter release that can be used to fit and explain existing electrophysiological data. The first model is based on a continuous, deterministic, 3D dynamical system that captures the dynamics of presynaptic calcium, the activation of the presynaptic release machinery, and the neurotransmitter available from the readily releasable pool of vesicles. This model captures the antagonistic dynamics of the accumulation of residual calcium and the depletion of vesicles from the readily releasable pool. The model also predicts whether synaptic release will display facilitation, depression, or a biphasic release profile as a function of the characteristic rates for calcium accumulation and vesicle replenishment, and the presynaptic firing rate, among other parameters. Examination of the biochemistry of the release machinery provides grounds for a quasi-steady-state reduction that yields a 2D version of the first model. A second model that is closer to the biology of a chemical synapse is derived from the first model by replacing the neurotransmitter available for release with an integer random variable representing the number of vesicles in the readily releasable pool. The geometry and topology of both models can be studied with analytical expressions using standard dynamical systems techniques. Further, these models can also be coupled to continuous models of membrane potential.
Examples of how these models can be used to explain changes in network dynamics induced by short-term presynaptic plasticity will be presented.
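The antagonism between residual-calcium accumulation (facilitation) and vesicle depletion (depression) can be illustrated with a generic Tsodyks-Markram-style two-variable model. This is a simplified stand-in, not the authors' 3D calcium-based model, and all parameter values are assumptions:

```python
import numpy as np

def short_term_plasticity(spike_times, tau_d=200.0, tau_f=600.0, U=0.1):
    """Generic two-variable short-term plasticity model (Tsodyks-Markram
    style), illustrating the facilitation/depression interplay only.
    x: fraction of the readily releasable pool (RRP);
    u: release probability, a proxy for residual calcium.
    Times in ms; tau_d, tau_f, U are illustrative."""
    x, u, t_prev = 1.0, U, None
    releases = []
    for t in spike_times:
        if t_prev is not None:
            dt = t - t_prev
            x = 1.0 - (1.0 - x) * np.exp(-dt / tau_d)   # vesicle replenishment
            u = U + (u - U) * np.exp(-dt / tau_f)       # residual-calcium decay
        u = u + U * (1.0 - u)                           # facilitation at spike
        releases.append(u * x)                          # released fraction
        x = x * (1.0 - u)                               # RRP depletion
        t_prev = t
    return releases

# 50 Hz train: early facilitation, then depression takes over (biphasic).
r = short_term_plasticity(np.arange(0.0, 500.0, 20.0))
```

Changing `U`, `tau_d`, and `tau_f` relative to the firing rate moves the synapse between facilitating, depressing, and biphasic regimes, which is the kind of prediction the abstract's first model makes from its characteristic rates.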


Barroso-Flores, Janet, Marco A. Herrera-Valdez, Violeta Gisselle Lopez-Huerta, Elvira Galarraga, and José Bargas. "Diverse short-term dynamics of inhibitory synapses converging on striatal projection neurons: differential changes in a rodent model of Parkinson's disease." Neural Plasticity, 2015 (2015).



Monday July 22, 2024 4:20pm - 6:20pm PDT
TBA

4:20pm PDT

P047 On EEG microstates and linear dynamics
This study delves into EEG microstates [1], which are enduring patterns of brain activity associated with cognitive and clinical phenomena. Despite their significance, there is a lack of consensus on how to analyze microstates. To address this gap, we apply various state-of-the-art microstate algorithms to a substantial EEG dataset, aiming to elucidate their relationships and dynamics. We propose that the properties of microstates are heavily influenced by the linear characteristics of EEG signals.
We conducted our research using the Max Planck Institute Leipzig Mind-Brain-Body Dataset. Among other measurements, participants completed a 62-channel EEG experiment at rest using two paradigms: eyes open and eyes closed. We used the preprocessed EEG data (total N = 204) provided as EEGLAB .set and .fdt files; the data have a sampling frequency of 250 Hz, are low-pass-filtered at 125 Hz, and are ~8 min long. The complete description can be found in [2].
We compared the performance of six different clustering algorithms: (Topographic) atomize and agglomerate hierarchical clustering, Modified K-means, Principal component analysis, Independent component analysis, and Hidden Markov model. These algorithms were assessed based on microstate measures such as lifespan, coverage, and occurrence, as well as dynamic statistics like mixing time, entropy, entropy rate, and the first peak of the auto-mutual information function, see [3] for detailed methods and results.
We found that microstate statistics derived from real EEG data closely resembled those obtained from Fourier surrogates, suggesting a strong dependence on the linear covariance and autocorrelation structure of the underlying EEG data. Moreover, when employing a linear vector autoregression (VAR) model, we observed highly comparable microstates to those estimated from actual EEG data. This indicates that linear VAR models could potentially provide more reliable estimates of microstate repertoire and dynamics due to their robustness [3,4].
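Fourier surrogates of the kind used in this comparison preserve a signal's amplitude spectrum (and hence its linear autocorrelation) while randomizing the phases, destroying any nonlinear structure. A minimal single-channel sketch:

```python
import numpy as np

def fourier_surrogate(x, rng=None):
    """Phase-randomized (Fourier) surrogate of a 1D signal: keep the
    amplitude spectrum, scramble the phases. The DC and (for even
    length) Nyquist components must stay real-valued for the inverse
    transform to return a real signal with the same spectrum."""
    rng = np.random.default_rng() if rng is None else rng
    X = np.fft.rfft(x)
    phases = rng.uniform(0.0, 2.0 * np.pi, len(X))
    phases[0] = np.angle(X[0])           # preserve the mean (DC component)
    if len(x) % 2 == 0:
        phases[-1] = np.angle(X[-1])     # preserve the Nyquist component
    return np.fft.irfft(np.abs(X) * np.exp(1j * phases), n=len(x))

rng = np.random.default_rng(1)
x = np.cumsum(rng.normal(size=1024))     # autocorrelated test signal
s = fourier_surrogate(x, rng)            # same spectrum, scrambled phases
```

For multichannel EEG, the same random phases are typically applied to every channel so that the cross-spectral (linear covariance) structure is preserved as well, which is the property the microstate comparison in the abstract relies on.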

Our findings underscore the significance of linear EEG models in comprehending both the static and dynamic properties of human brain microstates. By demonstrating high reproducibility of microstate properties from linear models, particularly Fourier surrogates and VAR models, we contribute to advancing the methodological and clinical interpretation of EEG data, and EEG microstates in particular, paving the way for a deeper understanding of brain dynamics and its links to function and pathology.

Acknowledgments

The publication was supported by ERDF-Project Brain dynamics, No. CZ.02.01.01/00/22_008/0004643 and the Czech Science Foundation project No. 21-32608S.


References
1. Pascual-Marqui RD. Segmentation of brain electrical activity into microstates: Model estimation and validation. IEEE Trans Biomed Eng. 1995;42(7):658-665.
2. Babayan A. Data descriptor: A mind-brain-body dataset of MRI, EEG, cognition, emotion, and peripheral physiology in young and old adults. Sci Data. 2019;6:1-21.

3. Jajcay N, Hlinka J. Towards a dynamical understanding of microstate analysis of M/EEG data. NeuroImage. 2023; 281:120371.
4. Pascual-Marqui RD. On the relation between EEG microstates and cross-spectra. 2022:1-15. Available from: http://arxiv.org/abs/2208.02540




Monday July 22, 2024 4:20pm - 6:20pm PDT
TBA

4:20pm PDT

P048 Formation of artificial neural assemblies by the E%Max-winners-take-all process
The concept of neural assembly is not new and dates back to the times of psychologist Donald O. Hebb. According to his hypothesis, assemblies are sets of strongly connected neurons responsible for representing cognitive information. He believed that through the activity of these assemblies, combined with internal biological mechanisms that facilitate the formation and maintenance of these sets, more complex cognitive functions could emerge, such as language and reasoning. Recently, a framework called "Assembly Calculus" [1], describing possible operations involving neural assemblies, was proposed to tackle high-order neural computations, including those necessary for language processing. The proposed neural model is a discrete-time dynamic system equipped with some operations responsible for the formation and maintenance of assemblies. Its structure consists of a set of brain areas, each containing a finite number of excitatory neurons with synapses randomly formed within and between areas. In each area, inhibition is modeled by the k-winners-take-all process, allowing only the k neurons with the highest synaptic inputs to fire in each iteration. In the present work, we explore the properties of the model when neural competition due to inhibition is instead implemented by a more biologically plausible mechanism called E%Max-winners-take-all [2]. In it, the number of neurons firing in each iteration is variable and depends on the distribution of synaptic inputs across the network. Therefore, unlike the original model, neural synchronization and brain rhythms play important roles in assembly formation, recall, and information transfer among areas. We present a computational study where we describe the distribution of assembly sizes and the retrieval capabilities of the model network as functions of connectivity and plasticity.
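The two inhibition mechanisms contrasted above can be sketched side by side. The E%-max rule below is one simplified reading (winners are the units whose input lies within E% of the maximum); parameter values are illustrative:

```python
import numpy as np

def k_wta(inputs, k):
    """k-winners-take-all: exactly the k units with the largest synaptic
    input fire (the inhibition model of the original Assembly Calculus)."""
    return np.argsort(inputs)[::-1][:k]

def e_max_wta(inputs, e=0.1):
    """E%-max-winners-take-all (after de Almeida et al. 2009, simplified):
    every unit whose input lies within E% of the maximal input fires, so
    the number of winners depends on the input distribution rather than
    being fixed in advance."""
    threshold = (1.0 - e) * inputs.max()
    return np.flatnonzero(inputs >= threshold)

rng = np.random.default_rng(2)
x = rng.uniform(size=1000)               # synaptic inputs of one area
winners_k = k_wta(x, k=50)               # always exactly 50 winners
winners_e = e_max_wta(x, e=0.1)          # variable count, distribution-dependent
```

The variable winner count is what lets assembly size fluctuate in the modified model, coupling assembly formation to the shape of the synaptic-input distribution (and, in the biological reading, to gamma-cycle timing).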


References
  1. Papadimitriou C. H., et al. Brain computation by assemblies of neurons. Proceedings of the National Academy of Sciences, 2020, 14464-14472.
  2. de Almeida L., et al. A Second Function of Gamma Frequency Oscillations: An E%-Max Winner-Take-All Mechanism Selects Which Cells Fire. Journal of Neuroscience, 2009, 7497-7503.


Monday July 22, 2024 4:20pm - 6:20pm PDT
TBA

4:20pm PDT

P049 The Virtual Brain links neurotransmitter changes and TMS-evoked potential alterations in major depressive disorder
Transcranial magnetic stimulation (TMS) has emerged as a promising therapeutic approach for major depressive disorder (MDD). TMS-evoked potentials (TEPs, Fig.1A) in EEG contain specific peaks, such as N45 and N100, which can be linked to GABAergic neurotransmission. MDD patients show higher amplitudes of the whole TEP and of single peaks, while their cerebral GABA levels are lowered. This complex and poorly understood interplay of GABA, TEPs and MDD presents a compelling case for the use of brain network models. A recent study achieved high levels of fit between empirical and simulated TEPs of healthy controls based on mean-field modeling [1]. However, a modeling perspective of MDD pathology related to TEPs is missing so far. Therefore, we aim to demonstrate in silico how inhibitory neurotransmitter changes, similar to MDD pathology, affect TEP amplitudes.


We created a brain network model using the open-source whole-brain simulator ‘The Virtual Brain’ (TVB, thevirtualbrain.org). The activity of brain regions was simulated with the Jansen & Rit (JR) mean-field model (Fig.1D). The electrical field of the TMS stimulation was estimated with the software package SimNIBS (Fig.1B, simnibs.github.io/simnibs/, [2]). In line with the previous study, we applied ADAM-based gradient-descent optimization to fit whole-brain simulations (Fig.1C), i.e. the JR parameters (Fig.1D) and the effective connectivity (Fig.1E), to empirical TEPs of healthy individuals (n=20, 14 females, 24.5±4.9 years, Fig.1A). After fitting, two inhibitory JR parameters (the inhibitory time constant b and the number of inhibitory synapses C4, Fig.1H) were altered to mimic GABA-related MDD pathophysiology in TVB. The effect of these parameter alterations on the TEP amplitude was analyzed (Fig.1J).
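The regional dynamics can be sketched with a single Jansen-Rit node under Euler integration. This uses the standard JR parameter set with constant drive, not the study's fitted whole-brain network with TMS input, and exposes the two inhibitory parameters (rate b and synapse count C4) that the study perturbs:

```python
import numpy as np

def jansen_rit(T=2.0, dt=1e-4, A=3.25, B=22.0, a=100.0, b=50.0,
               C=135.0, C4_scale=1.0, p=220.0):
    """One Jansen-Rit neural-mass node, Euler-integrated. b is the
    inverse inhibitory time constant (1/s) and C4 the number of
    inhibitory synapses; both are the GABA-related parameters varied
    in the abstract. Constant drive p replaces the usual noisy input."""
    C1, C2, C3, C4 = C, 0.8 * C, 0.25 * C, 0.25 * C * C4_scale
    sigm = lambda v: 2.0 * 2.5 / (1.0 + np.exp(0.56 * (6.0 - v)))
    y = np.zeros(6)                      # y0..y2 and their derivatives y3..y5
    out = []
    for _ in range(int(round(T / dt))):
        y0, y1, y2, y3, y4, y5 = y
        dy = np.array([
            y3, y4, y5,
            A * a * sigm(y1 - y2) - 2 * a * y3 - a * a * y0,
            A * a * (p + C2 * sigm(C1 * y0)) - 2 * a * y4 - a * a * y1,
            B * b * C4 * sigm(C3 * y0) - 2 * b * y5 - b * b * y2,
        ])
        y = y + dt * dy
        out.append(y[1] - y[2])          # EEG-like pyramidal potential
    return np.array(out)

v = jansen_rit()                         # alpha-band-like oscillation
```

Re-running with, e.g., `C4_scale=0.8` or a smaller `b` mimics the direction of the inhibitory alterations tested in the study; the amplitude changes of the resulting trace are the single-node analogue of the TEP amplitude effects reported.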


We achieved high mean fits between TVB simulations and empirical TEPs (r=0.696, p<0.001), reproducing with TVB the results of the previous study. Alterations of the inhibitory JR parameters had statistically significant impacts on the amplitudes of the whole TEP and of all peaks. Both C4 (r=-0.48, p<0.001) and b (r=-0.37, p<0.001) correlated negatively with the global TEP amplitude. This negative correlation was also observed between C4 and all single peaks (N45: r=-0.43, p<0.001, P60: r=-0.64, p<0.001, N100: r=-0.24, p<0.001, P185: r=-0.38, p<0.001), as well as between b and three peaks (N45: r=-0.20, p<0.001, N100: r=-0.30, p<0.001, P185: r=-0.23, p<0.001), while one peak correlated positively with b (P60: r=0.10, p<0.001).


Lowering GABAergic inhibitory synaptic transmission in our model led to alterations in simulated TEPs comparable to those observed empirically in MDD patients. Thus, we successfully simulated MDD pathology with TVB, offering a modeling perspective on MDD TEPs. Through our computational virtual TMS framework, we provided a mechanistic explanation for the relationship between GABAergic inhibitory synaptic transmission and pathological TEP amplitude observations in MDD. Our work represents a stepping stone towards understanding and linking MDD pathology across different hierarchical levels of the brain.


References

  1. Momi D, Wang Z, Griffiths JD. TMS-evoked responses are driven by recurrent large-scale network dynamics. Elife, 2023. 12.
  2. Thielscher A, Antunes A, Saturnino GB. Field modeling for transcranial magnetic stimulation: A useful tool to understand the physiological effects of TMS?, 2015 37th Conf of EMBC. 2015.


Monday July 22, 2024 4:20pm - 6:20pm PDT
TBA

4:20pm PDT

P050 Visualizing information in Deep Neural Networks receiving competitive stimuli
Deep Neural Networks (DNNs) exhibit significant parallels with the hierarchical organization of representations in the primate visual system. However, their feed-forward architecture, where all information in a scene is processed simultaneously, is unlikely to accurately reflect reality [1]. Typically, when an animal or a human observes a scene, even without saccadic movements, covert attention enables the shifting of focus and the processing of an image as a collection of recognizable items, whose identities are then transferred to working memory. Consequently, the static image is transformed into a temporal sequence. Such temporal dynamics have been overlooked by DNN models. To push towards more realistic models, we designed an experiment where a DNN is presented with two competing items, one more salient than the other, placed in different areas of the visual field represented by non-overlapping receptive fields. We utilize the MNIST digit dataset to illustrate the model. The network's task is to identify the more salient (or target) item. However, we devised a training strategy so that the network is able to recognize the identity of the less salient (or background) item, even though this information is not explicitly required in the output layer. We subsequently developed visualization tools capable of tracking the flow of information through the layers of the network. Of particular interest to us is understanding how latent information about the background item is retained within the network. In this study, we introduce our novel tool designed for visualizing information flow within networks. Additionally, we present results obtained from networks with different architectures, subjected to the training strategy that allows maintenance of background information.

[1] Katharina Duecker, Marco Idiart, Marcel AJ van Gerven, and Ole Jensen. Oscillations in an artificial neural network convert competing inputs into a temporal code. bioRxiv, 2023. 


Monday July 22, 2024 4:20pm - 6:20pm PDT
TBA

4:20pm PDT

P051 Astrocytic modulation of brain oscillations in a network model with neurons-astrocytes interactions in epilepsy
Epilepsy is a chronic syndrome characterized by a predisposition to generate excessive or hypersynchronous neuronal activity in the brain, known as seizures, with neurobiological, cognitive, psychological, and social consequences. Under normal physiological conditions, astrocytes participate in the regulation of neuronal excitability, transmission, and synaptic connectivity [1]. In patients with epilepsy, astrocytes show morphological and functional alterations, known as reactive astrogliosis, and empirical evidence suggests that neuron-astrocyte interactions are key to the development and progression of epileptogenesis and ictogenesis [2]. However, the relationship between reactive astrogliosis and epileptiform activity in brain circuits is not fully understood.
 
In this work, we develop a computational theoretical framework to understand how structural and physiological changes in astrocytes modulate epileptiform activity in the local field potential (LFP) of a brain circuit. We simulate a volume of human cortical tissue with a balanced network composed of 10000 neurons, of which 8000 are excitatory and 2000 are inhibitory, and a variable number of astrocytes able to interact with synapses. Structural connectivity between neurons was random, with a 5% probability, and connectivity of synapses with astrocytes was limited to a variable distance depending on the overlap of astrocytic territories. Network dynamics were modeled with adaptive exponential integrate-and-fire spiking neurons, and astrocytic dynamics with leaky integrate-and-fire astrocytes [3]. Synapses between neurons were conductance-based, and neuron-astrocyte interactions were bidirectional, with activation of astrocytes based on intracellular calcium concentration driven by synaptic stimulation, and gliotransmission from astrocytes to neurons. The simulated LFP was based on the synaptic currents and the distance between the recording point and the synapses.
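The distance-weighted LFP proxy described above can be sketched as a sum of current point sources in a homogeneous volume conductor. This is a common simplification with illustrative units, not necessarily the authors' exact forward model:

```python
import numpy as np

def lfp_point_sources(currents, positions, electrode, sigma=0.3):
    """Point-source volume-conductor LFP proxy: each synaptic current
    contributes I / (4*pi*sigma*r) at the electrode, where r is the
    synapse-electrode distance and sigma the extracellular conductivity.
    currents: (n_syn, n_t) array; positions: (n_syn, 3); electrode: (3,).
    Units here are illustrative only."""
    r = np.linalg.norm(positions - electrode, axis=1)
    r = np.maximum(r, 1e-6)                  # avoid the r -> 0 singularity
    weights = 1.0 / (4.0 * np.pi * sigma * r)
    return weights @ currents                # (n_t,) summed potential

rng = np.random.default_rng(5)
I = rng.normal(size=(100, 1000)) * 1e-9      # 100 synapses, toy currents
pos = rng.uniform(-0.5, 0.5, size=(100, 3))  # synapse positions around origin
lfp = lfp_point_sources(I, pos, electrode=np.array([0.0, 0.0, 0.0]))
```

The spectral analysis described below (PSD, periodic vs. aperiodic components) would then be computed on the resulting `lfp` trace.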
 
We explored the biophysical parameter space of gliotransmission, neuronal adaptation, and the structural connectivity of astrocytic interactions, and we analyzed the power spectral density (PSD) of simulated LFPs to detect the emergence of brain oscillations and characterize their periodic and aperiodic components. Preliminary results suggest that morphological and physiological changes in neuron-astrocyte interactions in the context of reactive astrogliosis modulate the occurrence of the high-frequency oscillations present in epilepsy.
 
Acknowledgments
 
We are grateful to National Agency for Research and Development of the Government of Chile for supporting P Illescas.
 
References


1. Santello, M; Toni, N; Volterra, A. Astrocyte function from information processing to cognition and cognitive impairment. Nature neuroscience, 2019, vol. 22, no 2, p. 154-166.


2. Verhoog, Q. P., et al. Astrocytes as guardians of neuronal excitability: mechanisms underlying epileptogenesis. Frontiers in Neurology, 2020, vol. 11, p. 591690.

3. De Pitta, M; Brunel, N. Multiple forms of working memory emerge from synapse–astrocyte interactions in a neuron–glia network model. Proceedings of the National Academy of Sciences, 2022, vol. 119, no 43, p. e2207912119.


Monday July 22, 2024 4:20pm - 6:20pm PDT
TBA

4:20pm PDT

P052 piNET: A Neural Network Architecture Design to Maximize Decoding Accuracy Using Minimal Training Data
The multilayer perceptron (MLP) network is an essential and widely used architecture in artificial neural networks. By utilizing multiple layers of interconnected binary classifiers, MLPs are capable of modeling intricate nonlinear relationships between inputs and outputs. However, this comes at the cost of increased computational complexity and susceptibility to overfitting. The removal (or pruning) of unnecessary nodes and/or connections, using e.g. magnitude-based pruning or structured pruning, might lead to improved computational efficiency, reduced overfitting, and enhanced decoding performance. However, sparse neural networks can be difficult to initialize, as the weights of the connections between neurons need to be carefully chosen to ensure that the network learns efficiently. Here we introduce a pre-initialized network architecture (piNET) that is based on co-transcriptional gene regulation networks in the somatosensory cortex. We identify the molecular network architecture and weight distributions using information-theoretic calculations. This approach results in an extremely sparse network with only 1.53% of all possible edges. We conducted a performance evaluation of the network by decoding random sequences of data with high dimensionality and sequence length, and compared the results against different network structures. The results showed that the pre-initialized neural network architecture recovered input with ≈94% accuracy after training the network with as few as 25% of the data, even at small sample sizes of 1000. Computationally efficient artificial network architectures that perform with greater accuracy despite the limited availability of training data offer exciting opportunities for embodied computing.
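For contrast with pre-initialization, the magnitude-based pruning mentioned above can be sketched in a few lines. The target sparsity of 1.53% matches the edge fraction reported for piNET; the function itself is illustrative, not the piNET construction:

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Magnitude-based pruning: zero out the smallest-magnitude weights
    so that only a `sparsity` fraction of edges remains. piNET instead
    *initializes* sparse (keeping ~1.53% of possible edges), avoiding
    the train-then-prune cycle this function represents."""
    flat = np.abs(weights).ravel()
    k = int(round(sparsity * flat.size))      # number of edges to keep
    if k == 0:
        return np.zeros_like(weights)
    threshold = np.partition(flat, -k)[-k]    # k-th largest magnitude
    mask = np.abs(weights) >= threshold
    return weights * mask

rng = np.random.default_rng(3)
w = rng.normal(size=(100, 100))               # dense toy weight matrix
w_sparse = magnitude_prune(w, sparsity=0.0153)
```

Pruning a trained dense network and pre-initializing a sparse one arrive at similar edge counts by very different routes; the abstract's argument is that a biologically derived sparse initialization sidesteps the difficult problem of training and then pruning.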



Monday July 22, 2024 4:20pm - 6:20pm PDT
TBA

4:20pm PDT

P053 A Supercomputing Simulation of Serotonergic Densities in a Shark Brain: Reflected Fractional Brownian Motion in Expanding Shapes
The axons of serotonergic neurons have strongly stochastic trajectories that support their dispersal throughout the entire brain in a diffusion-like process. Single-cell transcriptomics analyses suggest that serotonergic axons (fibers) may be partially guided to specific brain regions (depending on the neuron’s transcriptome program) [1], but their overall behavior differs substantially from that of “strongly deterministic” axons (which often fasciculate and connect specific neuroanatomical regions). Like other “strongly stochastic” axons, serotonergic fibers form meshworks whose density shows substantial regional variability. We have previously shown with supercomputing simulations that serotonergic fiber densities in the mouse brain can be partially predicted by modeling individual serotonergic fibers as paths of a superdiffusive, reflected fractional Brownian motion (FBM) [2]. FBM is a continuous-time stochastic process that generalizes Brownian motion and is parametrized by the Hurst index H (0 < H < 1); the superdiffusive regime corresponds to H > ½, which produces “persistent” paths with positively correlated increments. Questions posed by this research have also stimulated our theoretical work on reflected FBM in shapes of various spatial dimensions [3] and on “continuous-memory” FBM with a time-dependent Hurst index [4].
In this project, we investigate whether the FBM properties of serotonergic axons generalize across the vertebrate clade, and we further study the properties of reflected FBM. First, we use a supercomputing simulation to predict regional serotonergic densities in a shark brain. Cartilaginous fish brains share the same Bauplan as other vertebrates, but they have highly diverse shapes and can continue to grow in adulthood (their neural tissue is also much less differentiated). Second, we investigate the accumulation patterns of reflected-FBM trajectories in linearly and non-linearly expanding shapes. We present the results of these analyses.
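For readers unfamiliar with reflected FBM, a minimal numpy sketch (not the authors' supercomputing code) is given below. It generates fractional Gaussian noise by Cholesky factorization of the exact increment covariance and approximates reflecting boundaries by folding the path into a 1D box; the cited studies simulate true reflected FBM in 3D brain shapes, where the reflection rule interacts with the process memory:

```python
import numpy as np

def fgn_cholesky(n, H, rng):
    """Fractional Gaussian noise of length n via Cholesky factorization
    of the exact increment covariance (fine for modest n)."""
    k = np.arange(n)
    gamma = 0.5 * ((k + 1.0)**(2*H) - 2.0 * k**(2*H) + np.abs(k - 1.0)**(2*H))
    cov = gamma[np.abs(k[:, None] - k[None, :])]
    L = np.linalg.cholesky(cov + 1e-10 * np.eye(n))
    return L @ rng.standard_normal(n)

def reflected_fbm(n, H, box, rng):
    """FBM path folded back into [0, box] (a simple reflecting-boundary proxy)."""
    x = np.cumsum(fgn_cholesky(n, H, rng))
    period = x % (2 * box)
    return np.where(period > box, 2 * box - period, period)

rng = np.random.default_rng(1)
path = reflected_fbm(512, H=0.8, box=10.0, rng=rng)   # superdiffusive, H > 1/2
```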



Acknowledgements

This research was funded by an NSF-BMBF CRCNS grant (NSF #2112862 to SJ & TV; BMBF #STAXS to RM).


References


1. Okaty BW, Sturrock N, Escobedo Lozoya Y, Chang Y, Senft RA, Lyon KA, Alekseyenko OV, Dymecki SM. A single-cell transcriptomic and anatomic atlas of mouse dorsal raphe Pet1 neurons. eLife. 2020, 9, e55523.
2. Janušonis S, Haiman JH, Metzler R, Vojta T. Predicting the distribution of serotonergic axons: a supercomputing simulation of reflected fractional Brownian motion in a 3D-mouse brain model. Front Comput Neurosci. 2023, 17, 1189853.
3. Vojta T, Halladay S, Skinner S, Janušonis S, Guggenberger T, Metzler R. Reflected fractional Brownian motion in one and higher dimensions. Phys Rev E. 2020, 102, 032108.
4. Wang W, Balcerek M, Burnecki K, Chechkin AV, Janušonis S, Ślęzak J, Vojta T, Wyłomańska A, Metzler R. Memory-multi-fractional Brownian motion with continuous correlations. Phys Rev Res. 2023, 5: L032025.


Monday July 22, 2024 4:20pm - 6:20pm PDT
TBA

4:20pm PDT

P054 A novel method to predict subject phenotypes from EEG Spectral Signatures
The prediction of subject traits from brain data is an important goal in neuroscience, with relevant applications in clinical research as well as in the study of differential psychology and cognition. While previous research has primarily focused on neuroimaging data, our focus is on the prediction of subject traits from electroencephalography (EEG), a relatively inexpensive, widely available and non-invasive data modality. However, EEG data is complex and needs some form of feature extraction for subsequent prediction. This process is almost always done manually, risking biases and suboptimal decisions. Here, we propose a largely data-driven use of the EEG spectrogram, which reflects macro-scale neural oscillations in the brain. Specifically, the key idea is to use the full spectrogram, reinterpret it as a probability distribution, and then leverage advanced machine learning techniques that can handle probability distributions with mathematical rigour and without the need for manual feature extraction [1,2,3]. The resulting techniques, Kernel Ridge Regression (KRR) and Kernel Mean Embedding Regression (KMER), show superior performance to alternative methods thanks to their capacity to handle nonlinearities in the relation between the EEG spectrogram and the trait of interest. We leveraged this method to predict biological age in a multinational EEG data set, HarMNqEEG [4], showing the method's capacity to generalise across experiments and acquisition setups.
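The key idea (reinterpreting each spectrum as a probability distribution, embedding it in a reproducing-kernel Hilbert space, and regressing on the embeddings) can be sketched on synthetic data as follows. The "alpha peak slows with age" toy spectra, the kernel widths, and the ridge penalty are all illustrative assumptions, not the authors' pipeline:

```python
import numpy as np

rng = np.random.default_rng(2)
freqs = np.linspace(1.0, 40.0, 64)                  # toy frequency grid (Hz)

def toy_spectrum(age):
    """Synthetic 'alpha peak slows with age' spectrum, normalized to a distribution."""
    peak = 11.0 - 0.03 * age
    s = np.exp(-0.5 * ((freqs - peak) / 1.0) ** 2) + 0.05
    return s / s.sum()

ages = rng.uniform(20.0, 80.0, size=80)
P = np.array([toy_spectrum(a) for a in ages])       # one distribution per subject

# Kernel mean embedding: mu_i = sum_f p_i(f) k(f, .), with an RBF kernel on the
# frequency axis; G[i, j] = <mu_i, mu_j> is then available in closed form.
Kf = np.exp(-0.5 * (freqs[:, None] - freqs[None, :]) ** 2 / 1.0 ** 2)
G = P @ Kf @ P.T

# Kernel ridge regression on the embeddings (trait = age).
lam = 1e-4
y = ages - ages.mean()
alpha = np.linalg.solve(G + lam * np.eye(len(ages)), y)
pred = G @ alpha + ages.mean()
```

On this noiseless toy problem the in-sample fit is nearly perfect; the interesting question in the abstract is out-of-sample generalization across acquisition setups.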


Acknowledgements:


D. Vidaurre is supported by a Novo Nordisk Foundation Emerging Investigator Fellowship (NNF19OC-0054895), an ERC Starting Grant (ERC-StG-2019-850404), and a DFF Project 1 grant from the Independent Research Fund of Denmark (2034-00054B). This research was funded in part by the Wellcome Trust (215573/Z/19/Z). We acknowledge support from PICT 2020-01413.


References:


[1] Franke, K. and Gaser, C. (2019). Ten years of brain age as a neuroimaging biomarker of brain ageing: What insights have we gained? Frontiers in Neurology, 10(JUL).

[2] Smith, S. M., Vidaurre, D., Alfaro-Almagro, F., Nichols, T. E., and Miller, K. L. (2019). Estimation of brain age delta from brain imaging. NeuroImage, 200:528–539

[3] Smola, A., Gretton, A., Song, L., and Schölkopf, B. (2007). A Hilbert space embedding for distributions. In Hutter, M., Servedio, R. A., and Takimoto, E., editors, Algorithmic Learning Theory, pages 13–31, Berlin, Heidelberg. Springer Berlin Heidelberg.

[4] Li, M., Wang, Y., Lopez-Naranjo, C., Hu, S., Reyes, R. C. G., Paz-Linares, D., Areces-Gonzalez, A., Hamid, A. I. A., Evans, A. C., Savostyanov, A. N., Calzada-Reyes, A., Villringer, A., Tobon-Quintero, C. A., Garcia-Agustin, D., Yao, D., Dong, L., Aubert-Vazquez, E., Reza, F., Razzaq, F. A., Omar, H., Abdullah, J. M., Galler, J. R., Ochoa-Gomez, J. F., Prichep, L. S., Galan-Garcia, L., Morales-Chacon, L., Valdes-Sosa, M. J., Tröndle, M., Zulkifly, M. F. M., Abdul Rahman, M. R. B., Milakhina, N. S., Langer, N., Rudych, P., Koenig, T., Virues-Alba, T. A., Lei, X., Bringas-Vega, M. L., Bosch-Bayard, J. F., and Valdes-Sosa, P. A. (2022). Harmonized-multinational qEEG norms (HarMNqEEG). NeuroImage, 256:119190.





Monday July 22, 2024 4:20pm - 6:20pm PDT
TBA

4:20pm PDT

P055 Mesoscopic and microscopic information and its energy cost during synaptic plasticity
The relationship between long-term information encoding in synapses (synaptic learning and memory) and its associated metabolic cost is important for neuroscience, but is not well understood [1,2]. Recently, we showed that real synapses in different parts of the brain across different mammalian species nearly maximize their information content, given their mean synaptic weights [3]. This empirical observation is an example of mesoscopic information, based on synaptic sizes, and does not take into account many internal microscopic synaptic degrees of freedom. In this work, the focus is on the information content and energy cost associated with these microscopic (mostly hidden) synaptic processes, i.e., receptors (AMPA, NMDA) and proteins (PSD). This is studied using recent advances in the physics of stochastic thermodynamics and informatics [4], which are universal and applicable to micro-scale objects such as synapses. This relatively new approach to modeling synaptic plasticity has large methodological potential, but is still virtually unknown in computational neuroscience. We initiated this interdisciplinary approach to synaptic plasticity in a series of papers [2,5], but here we present a more unifying picture, based on the relevant microscopic dynamics, using a multidimensional probabilistic master equation. We find that under quite general conditions, PSD proteins can encode huge amounts of information, much more than the membrane receptors related to synaptic weight, thus dramatically increasing the information capacity of a synapse. Moreover, this information capacity can be retained for a long time in an energy-efficient way, suggesting thermodynamic stability of synaptic memory.
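To illustrate the master-equation approach on a toy scale (the actual model is multidimensional), one can compute the steady state of a small continuous-time master equation for hidden synaptic states and its Shannon information; the rate matrix below is arbitrary:

```python
import numpy as np

# Toy continuous-time master equation dp/dt = W p for three hidden synaptic
# states (e.g., PSD protein configurations); W[i, j] is the j -> i rate,
# so each column of W sums to zero. The rates are arbitrary illustrations.
W = np.array([[-0.20,  0.05,  0.00],
              [ 0.20, -0.15,  0.10],
              [ 0.00,  0.10, -0.10]])

# Steady state: solve W p = 0 subject to sum(p) = 1.
A = np.vstack([W, np.ones((1, 3))])
b = np.array([0.0, 0.0, 0.0, 1.0])
p, *_ = np.linalg.lstsq(A, b, rcond=None)

info_bits = -np.sum(p * np.log2(p))   # Shannon information held in the hidden states
```

The thermodynamic side of the argument then attaches an energy cost to maintaining such a nonequilibrium distribution against relaxation.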
References:
1) Kandel ER, et al. Cell 157, 163-186 (2014); Benna MK, Fusi S. Nature Neurosci. 19, 1697-1706 (2016); Chaudhuri R, Fiete I. Nature Neurosci. 19, 394-403 (2016).
2) Karbowski J. J. Neurophysiol. 122, 1473-1490 (2019); Karbowski J. J. Comput. Neurosci. 49, 71-106 (2021).
3) Karbowski J, Urban P. Scientific Rep. 13, 22207 (2023).
4) Parrondo JMR, et al. Nature Phys. 11, 131-139 (2015).
5) Karbowski J, Urban P. Neural Comput. 36, 271-311 (2024).



Monday July 22, 2024 4:20pm - 6:20pm PDT
TBA

4:20pm PDT

P056 Homeostatic self-organization towards the edge of neuronal synchronization
Transient or partial synchronization can be used to perform computations, whereas a fully synchronized network is frequently associated with epileptic seizures. Here, we propose a homeostatic mechanism that is capable of maintaining a neuronal network at the edge of a synchronization transition, thereby avoiding the harmful consequences of full synchronization. We model neurons by maps, since they are dynamically richer than integrate-and-fire models and more computationally efficient than conductance-based approaches. We first describe the synchronization phase transition of a dense network of neurons with different tonic spiking frequencies coupled by gap junctions. Then, we introduce a local homeostatic dynamics in the synaptic coupling and show that it produces robust tuning towards the edge of this phase transition. We discuss the potential biological consequences of this self-organization process, such as its relation to the brain criticality hypothesis, its input-processing capacity, and how its malfunction could lead to pathological synchronization.
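The homeostatic idea can be illustrated with a toy sketch; note that this substitutes mean-field Kuramoto phase oscillators for the map-based neurons of the abstract, and a global (rather than local) homeostatic rule, purely for brevity:

```python
import numpy as np

rng = np.random.default_rng(3)
N = 100
theta = rng.uniform(0, 2 * np.pi, N)
omega = rng.normal(1.0, 0.1, N)        # heterogeneous tonic frequencies
K, target = 0.5, 0.6                   # coupling strength; desired synchrony level
eps, dt = 0.2, 0.05                    # homeostatic rate; time step

for _ in range(20000):
    z = np.exp(1j * theta).mean()      # Kuramoto order parameter r * exp(i*psi)
    r, psi = np.abs(z), np.angle(z)
    theta += dt * (omega + K * r * np.sin(psi - theta))
    K += eps * dt * (target - r)       # homeostasis: nudge coupling so synchrony
    K = max(K, 0.0)                    # settles near `target`, i.e., near the edge
```

Because the feedback reduces coupling when synchrony overshoots and increases it when synchrony decays, the system parks itself near the transition rather than in the fully synchronized state.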


Monday July 22, 2024 4:20pm - 6:20pm PDT
TBA

4:20pm PDT

P057 Delineating roles of TRP channels in Drosophila larva cold nociception
In Drosophila larvae, noxious cold temperature is detected by Class III (CIII) primary sensory neurons lining the inside of the body wall. Transient receptor potential (TRP) channels such as TRPA1 and PKD2 are implicated in cold sensitivity [1,2]. To distinguish the roles of these TRP channels and their signal transduction mechanisms in cold sensitivity, we conducted a series of experiments, including electrophysiological recordings and Ca2+ imaging using CIII-specific expression of GCaMP6m, comparing responses after gene-specific RNAi knockdown (KD) of each TRP channel, and we constructed biophysical models to compare the roles of these channels.
When subjected to a rapid temperature drop from 24°C to 10°C at a rate of 2-6°C/s, CIII neurons responded with a characteristic peak in spiking rate [3]. Half of these neurons showed transient bursts during the peak. After the peak, the spike rate settled to a low steady-state level. Compared to the control group, TRPA1-KD exhibited a decreased spiking rate and fewer bursts during the rapid temperature decrease. Conversely, PKD2-KD maintained the transient bursting but significantly attenuated tonic spike activity at the steady low temperature.
We constructed multi-compartmental models in NEURON [4] representing the TRPA1- and PKD2-KD cases. These models comprised a branching dendrite, a soma, and an axon. They inherited the spike generation mechanisms from previous single-compartment models [3] and included TRPA1 and PKD2 channels implemented as adjusted "two-state" class models [5], parameterized to recapitulate the electrophysiological responses of CIII neurons in the wild type and the knockdowns. Asymmetric distributions of TRPA1 and PKD2 channels among sister dendritic branches allowed the CIII models to generate cold-induced bursts followed by steady tonic spiking. Importantly, under PKD2-KD, the TRPA1 channels ensured transient burst activity encoding the rate of temperature change, while in the TRPA1-KD case, the PKD2 channels enabled the model to generate continuous spiking without bursts, suggesting a role in representing temperature magnitude. These findings shed light on the complex mechanisms underlying cold sensation in Drosophila larvae and highlight the roles of TRP channels: TRPA1 in coding the rate of temperature change and PKD2 in coding the magnitude of the steady temperature.
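A hedged sketch of the "two-state" channel class [5]: the steady-state open probability of a C <-> O scheme with Arrhenius-type rates. The parameter values are purely illustrative, chosen only so that the larger activation energy of closing makes the channel cold-activated; they are not the fitted CIII values:

```python
import numpy as np

R = 8.314  # gas constant, J/(mol K)

def rate(T, A, Ea):
    """Arrhenius-type transition rate with activation energy Ea (J/mol)."""
    return A * np.exp(-Ea / (R * T))

def p_open(T, A_open=1e14, Ea_open=1.0e5, A_close=1e50, Ea_close=3.0e5):
    """Steady-state open probability of a two-state C <-> O channel.
    Illustrative parameters only: closing has the larger activation energy,
    so cooling shifts the equilibrium toward the open state (cold activation)."""
    a, b = rate(T, A_open, Ea_open), rate(T, A_close, Ea_close)
    return a / (a + b)

warm, cold = p_open(297.15), p_open(283.15)   # 24 degC vs 10 degC
```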
Acknowledgements
This work was supported by NIH grant 5R01NS115209 to DNC and GSC.
References
1. Turner HN, et al. The TRP Channels Pkd2, NompC, and Trpm Act in Cold-Sensing Neurons to Mediate Unique Aversive Behaviors to Noxious Cold in Drosophila. Curr Biol, 2016. 26(23): p. 3116-3128.
2. Letcher JM, et al. TrpA1 mediates cold nociception in Drosophila melanogaster. In preparation.
3. Maksymchuk NV, et al. Transient and Steady-State Properties of Drosophila Sensory Neurons Coding Noxious Cold Temperature. Front Cell Neurosci, 2022. 16: p. 831803.
4. Carnevale NT and Hines ML. The NEURON Book. Cambridge, UK: Cambridge University Press, 2006.
5. Voets T. Quantifying and Modeling the Temperature-Dependent Gating of TRP Channels. In: Reviews of Physiology, Biochemistry and Pharmacology, Volume 162. Springer, Berlin, Heidelberg, 2012.


Monday July 22, 2024 4:20pm - 6:20pm PDT
TBA

4:20pm PDT

P058 Beyond the Connectome: Divisive Normalization Processors in the Drosophila Early Olfactory and Vision Systems
The Drosophila brain has only a fraction of the number of neurons of higher organisms such as mice and humans. Yet the sheer complexity of its neural circuits recently revealed by large connectomics datasets [1] suggests that computationally modeling the functional logic of fruit fly brain circuits at this scale poses significant challenges. In principle, a whole brain simulation could be instantiated by modeling all the neurons and synapses of the connectome/synaptome with the simple dynamics of integrate-and-fire neurons and synapses, with parameters tuned according to certain criteria. Such an effort, however, would fall short of revealing the fundamental computational units necessary for understanding the true functional logic of the brain, as the complexity of the different computational units becomes lost with a single uniform treatment of such a vast number of neurons and their connection patterns. It is, therefore, imperative to develop a formal reasoning framework of the functional logic of brain circuits that goes beyond simple instantiations of flows on graphs generated from the connectome [2].
To address these challenges, we present here a framework for building functional brain circuits from components whose functional logic can be readily evaluated, and for determining the canonical computational principles underlying these components using available data. Our focus is on modeling the neural circuits of odor signal processing in the early olfactory system, and of motion detection in the early vision system, of the fruit fly, using divisive normalization [3] building blocks.

We developed a model of local neuron pathways in the Antennal Lobe (AL) termed the differential Divisive Normalization Processors (DNPs) [4], which robustly extract the semantics (the identity of the odorant object) and the ON/OFF semantic timing events indicating the presence/absence of an odorant object. For real-time processing with spiking projection neuron (PN) models, we showed that the phase-space of the biological spike generator of the PN offers an intuitive perspective for the representation of recovered odorant semantics. The dynamics induced by the odorant semantic timing events were explored as well. Finally, we provide theoretical and computational evidence for the functional logic of the AL as a robust ON-OFF odorant object identity recovery processor across odorant identities, concentration amplitudes and waveform profiles.

We demonstrate that three key processing steps in the motion detection pathway, including the elementary motion detector and the intensity and contrast gain control mechanisms, can be effectively modeled with DNPs [5]. Three cascaded DNPs, implementing intensity gain control, contrast gain control, and elementary motion detection, respectively, effectively model the robust motion detection realized by the early visual system of the fruit fly brain. This suggests that, despite its nonlinearity, the differential class of DNPs can be used as canonical computational building blocks in early sensory processing.
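The canonical divisive normalization operation [3] underlying these processors fits in a few lines; this is the generic static form, whereas the differential DNPs of [4,5] normalize by temporally processed versions of the inputs:

```python
import numpy as np

def dnp(x, sigma=0.1, n=2.0):
    """Divisive normalization: each channel's response is divided by a pooled
    signal from the whole population [3]. Generic static form only; the
    differential DNPs normalize by temporally processed inputs."""
    xn = np.clip(x, 0.0, None) ** n
    return xn / (sigma ** n + xn.sum())

weak = dnp(np.array([0.2, 0.1, 0.1]))
strong = dnp(np.array([2.0, 1.0, 1.0]))   # 10x the input intensity
# Relative responses are preserved while the overall gain is controlled,
# which is the essence of intensity and contrast gain control.
```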
Acknowledgments
The research reported here was supported, in part, by the National Science Foundation under grant #2024607.
References
[1] Lazar et al., eLife, 2021.
[2] Lazar et al., Frontiers in Neuroinformatics, 2022.
[3] Carandini et al., Nature Reviews Neuroscience, 2012.
[4] Lazar et al., PLOS Computational Biology, 2023.
[5] Lazar et al., Biological Cybernetics, 2023.



Monday July 22, 2024 4:20pm - 6:20pm PDT
TBA

4:20pm PDT

P059 Unveiling the Impact of Brain's Scale-Free Topology on Information Processing
The human brain operates as a sophisticated modular system, with interconnected modules playing pivotal roles in orchestrating and defining its mode of information processing, thereby giving rise to its diverse functions. For instance, when processing visual information, each component undergoes independent processing within discrete brain regions before converging and integrating as it progresses to higher brain areas, ultimately culminating in a comprehensive interpretation of the visual input. A wealth of biological experiments and computer simulations has shed light on the intricate dynamics of information flow, elucidating how it is integrated and segregated through various mechanisms, including top-down and bottom-up processes, as well as the unique connection properties of interneurons within specific brain regions. However, despite significant advancements, there remains a notable gap in our understanding of how the network topology within these modules, along with the way the modules are connected to each other, influences information processing.
The anatomical connections in the brain can exhibit various network topology characteristics, such as small-world or scale-free features. In this study, we simulated neuronal dynamics on various structural connectivities to investigate how topology shapes functional networks and influences the fluctuations of brain dynamics. To efficiently simulate the activity of thousands of neurons, we developed parallel GPU-based code using the Izhikevich neuron model for large-scale spiking neural network simulations. We also used public calcium imaging data from zebrafish, which allow easy genetic manipulation and real-time tracking of individual neurons, and thus provide single-neuron activity for thousands of neurons. We created and connected modules with different topology characteristics, such as a random network, a scale-free network, or a small-world network, and observed how spiking patterns were segregated and integrated under the various topologies. To measure segregation, we computed the coherence of spikes within each module; to measure integration, we computed the entropy between modules. The results revealed that in small-world and random networks, coherence within modules was low and entropy values were not particularly high, whereas in the scale-free network both coherence and entropy remained high across coupling constants. These results were consistently confirmed through mathematical stability analysis. Our findings demonstrated that functional networks within different brain systems, including data from mice and zebrafish, displayed characteristics consistent with scale-free network topology and exhibited dynamic fluctuations in brain activity. Moreover, our simulations of brain dynamics using zebrafish structural connection data, which incorporated scale-free network properties, exhibited the closest resemblance between empirical and simulated functional networks.
In conclusion, our study highlights that connectivity properties at the individual neuron level, characterized by scale-free topology, play a significant role in shaping brain information processing.
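As a rough, CPU-scale illustration of this simulation setup (not the authors' GPU code), a vectorized Izhikevich network with a pluggable adjacency matrix can be sketched as follows; swapping the random adjacency for a scale-free or small-world one is the topology comparison described above:

```python
import numpy as np

rng = np.random.default_rng(4)
N, T = 200, 1000                          # neurons; simulated milliseconds (dt = 1 ms)
a, b, c, d = 0.02, 0.2, -65.0, 8.0        # regular-spiking Izhikevich parameters
v = np.full(N, -65.0)
u = b * v
# Toy random adjacency; substituting a scale-free or small-world adjacency here
# implements the comparison in the abstract.
W = (rng.random((N, N)) < 0.1) * 0.5
spikes = np.zeros((T, N), dtype=bool)

for t in range(T):
    I = 5.0 + rng.normal(0.0, 2.0, N) + W @ spikes[t - 1]   # drive + synaptic input
    fired = v >= 30.0
    spikes[t] = fired
    v[fired] = c
    u[fired] += d
    v += 0.04 * v * v + 5 * v + 140 - u + I                  # Izhikevich update, dt = 1
    u += a * (b * v - u)

rate_hz = 1000.0 * spikes.mean()          # mean firing rate per neuron
```

Within-module spike coherence and between-module entropy would then be computed from the `spikes` raster, per module.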

This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. 2023R1A2C20062171, 2022M3E5E8081199).





Monday July 22, 2024 4:20pm - 6:20pm PDT
TBA

4:20pm PDT

P060 Neural Modeling of Channelopathies to Elucidate Neural Mechanism of Neurodevelopmental Disorders
Neurodevelopmental disorders (NDDs) such as epilepsy, autism spectrum disorder, and developmental delays vary greatly clinically and affect a large portion of the population [1]. Despite this variability, NDDs share common pathophysiological characteristics: the hallmark of these disorders is an imbalance of excitatory and inhibitory (E/I) input, which during development leads to dysfunction in neuronal circuits [2,8]. Multiple factors, such as genetic expression, environmental factors, and complex compensatory mechanisms, influence the E/I balance [2]. Brain channelopathies are particularly useful in studying the E/I imbalance mechanism because their function can be linked to neuronal excitability [3]. Neuronal ion channels are important in generating electrical activity in all neurons, and disruption of this activity has been strongly associated with NDDs [4]. Channelopathies can cause an increase or decrease in the excitability of neurons, and these changes can result from a change in the number of functional channels or from a change in channel biophysics [2,9]. Using a previously published primary motor cortex (M1) model [5], built with NetPyNE and the NEURON simulator, we employ a large-scale, highly detailed biophysical neuronal simulation to investigate how channel mutations affect individual and network neuronal activity. The simulations provide a detailed mechanistic understanding of the role channelopathies play in the E/I imbalance and will allow us to better identify therapeutics that specifically target disease symptoms. Pyramidal tract projecting (PT) neurons forward motor commands to the lower motor neurons and sit strategically in layer 5B of the cortex, a known output of the cortical circuit [6]. Layer 5 pyramidal neurons (L5PNs) are the main output of cortical networks and have a high expression of NDD-associated genes [7].
L5PN dendrites receive inputs from all cortical layers and long-range projections from distal brain regions [7]. These connections make L5PNs particularly sensitive to E/I imbalance. Additionally, previous studies have shown that the excitability of L5PNs is a reliable marker for the behavior of the whole circuit [8]. Using the M1 cortical column simulation, we can measure how biophysical changes to channels affect the overall excitability of the network. Specifically, we can observe how L5PNs change their firing patterns to better understand the pathophysiology of the simulated channelopathy. Our M1 model is based on the Hodgkin-Huxley (HH) formalism; however, HH channels cannot capture the biophysical properties that more complex models offer. We will replace HH channels with hidden Markov models (HMMs) to capture the biophysical properties involved in channelopathies and better replicate empirical data. This model will allow us to realistically examine how NDDs alter the intrinsic excitability of each neuron and of the network as a whole. It will provide a tool to investigate the underlying neuronal mechanisms of NDDs affecting many children worldwide and will allow us to simulate how novel therapeutics can return excitability to neurotypical levels and ultimately be translated clinically.
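The distinction between HH gates and Markov channel models can be made concrete with the simplest possible Markov scheme; multi-state HMMs extend this with additional states (inactivated, modal gating) fitted to data. A channelopathy then appears as a changed rate constant, as in this hypothetical gain-of-function example:

```python
def markov_open_fraction(alpha, beta, T=200.0, dt=0.01, p0=0.0):
    """Deterministic two-state Markov scheme C <-> O: dp/dt = alpha*(1-p) - beta*p.
    A minimal stand-in; multi-state HMMs add the inactivation and modal gating
    that a single HH gate cannot capture."""
    p = p0
    for _ in range(int(T / dt)):
        p += dt * (alpha * (1 - p) - beta * p)
    return p

wild_type = markov_open_fraction(alpha=0.5, beta=1.5)   # -> alpha/(alpha+beta) = 0.25
mutant = markov_open_fraction(alpha=0.5, beta=0.5)      # slowed closing: a toy
                                                        # gain-of-function mutation
```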
Acknowledgements: This work was supported by the Hartwell Foundation through an Individual Biomedical Research Award. 


Monday July 22, 2024 4:20pm - 6:20pm PDT
TBA

4:20pm PDT

P061 Brain network flexibility as a marker of early adaptation between humans and intelligent wearable machines
Merging human and biological systems with augmentation devices could substantially modify human capabilities, but there are still significant challenges in integrating these technologies with the human body. Monitoring neural behavior could provide a key non-invasive biomedical strategy for mutually adaptive human augmentation systems, where neural flexibility may be harnessed to predict and monitor human adaptation to the system. Here, we investigate whether neural flexibility correlates with the human ability to adapt to assistive technologies by monitoring the brains of individuals who used an "intelligent" exoskeleton boot (ExoBoot), designed to augment walking efficiency by applying bilateral torque at the ankles. We analyzed the resting-state activity of 20 individuals using 5 minutes of electroencephalography (EEG) recordings collected before the ExoBoots were utilized. First, we estimated the dynamic synchronization between brain regions (electrodes) with the weighted phase lag index (wPLI) and then distilled the dynamic connectivity patterns into network modules with a generalized Louvain algorithm [1] and automated parameter search [2]. We next estimated flexibility (the propensity of sensors to change their affiliation to network modules) and correlated this resting-state metric with an adaptation metric derived from electromyography (EMG) during ExoBoot use. We use EMG-derived metrics as an objective measurement of adaptation, where less muscular effort corresponds to better adaptation to the ExoBoot. We found a strong positive correlation between individual adaptation during initial exposure to the device and neural flexibility, particularly within the posterior and central areas of the scalp, which are known to be crucial for motor and visual processing.
Our findings also suggest temporal alterations in the adaptation process, demonstrating that while individuals with high neural flexibility exhibit rapid adaptation early on, all participants eventually reach a proficient level of device integration, suggesting a benefit from the ExoBoot's assistance over time. This distinction between short-term and long-term adaptation adds to our understanding of the human-machine adaptation loop, particularly in the context of wearable technology. By identifying a neural marker of adaptation, our study not only advances the theoretical foundation of how humans integrate with assistive devices but also opens new avenues for the development of adaptive technologies in which assistive devices are fine-tuned to individual neural profiles, contributing to future personalized, adaptive technologies that enhance user experience and efficiency in real time.
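The wPLI estimation step can be sketched in numpy as follows; this is a simplified stand-in that uses the instantaneous cross-spectrum of broadband analytic signals, whereas real pipelines estimate wPLI per frequency band across epochs:

```python
import numpy as np

def analytic(x):
    """Analytic signal via FFT; a numpy-only stand-in for scipy.signal.hilbert."""
    n = len(x)
    spec = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:(n + 1) // 2] = 2.0
    if n % 2 == 0:
        h[n // 2] = 1.0
    return np.fft.ifft(spec * h)

def wpli(x, y):
    """Weighted phase lag index: |E[Im Sxy]| / E[|Im Sxy|], which discounts
    zero-lag (volume-conduction-like) coupling."""
    im = np.imag(analytic(x) * np.conj(analytic(y)))
    return np.abs(im.mean()) / (np.abs(im).mean() + 1e-12)

rng = np.random.default_rng(5)
t = np.linspace(0.0, 10.0, 2000)
x = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)
y = np.sin(2 * np.pi * 10 * t - np.pi / 4) + 0.5 * rng.standard_normal(t.size)
```

A consistently lagged pair like `x` and `y` yields a high wPLI, while a signal paired with itself (zero lag) yields zero, which is the property that makes wPLI attractive for scalp EEG.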


[1] Peter J. Mucha et al., Community Structure in Time-Dependent, Multiscale, and Multiplex Networks. Science 328, 876-878 (2010). https://doi.org/10.1126/science.1184819

[2] Italo'Ivo Lima Dias Pinto, Javier Omar Garcia, Kanika Bansal; Optimizing parameter search for community detection in time-evolving networks of complex systems. Chaos 1 February 2024; 34 (2): 023133. https://doi.org/10.1063/5.0168783



Monday July 22, 2024 4:20pm - 6:20pm PDT
TBA

4:20pm PDT

P062 Training history determines shortcut usage in artificial agent navigation
We trained artificial agents with deep reinforcement learning (RL) on navigation tasks analogous to ones performed with humans and mice (for related earlier work see [2]). We looked at different training environments, learning rules, and developed behaviors, and drew correlations with the internal representations that these RL agents developed. We used our analyses to predict the kinds of neural activations that might exist in real brains during navigation tasks and suggest experiments that might help to uncover them.
We were inspired by studies of humans navigating in virtual environments, which showed [1] that people who grew up in cities with grid-like streets were, in general, weaker navigators than people raised among more organic, irregular streets. More specifically, the use of shortcuts through a novel environment was significantly higher in the latter population. We wanted to explore what navigational strategies underlie this difference in behavior between the two groups and, further, to generate hypotheses about possible corresponding brain activities.
To achieve this goal, we developed a navigation environment in which an agent needs to reach a goal that is hidden behind a barrier. The agent needs to go around the barrier or, on some trials, use a shortcut in the barrier that is open and allows it to reach the goal faster. Each agent is trained in a modification of this environment with a certain frequency of shortcut availability. All agents are later tested on a fixed set of trials, some with the shortcut open and some with it closed. We find that the overall navigational strategies are similar in agents with different training histories, even though shortcut usage is much higher in those with more experience of it. However, the internal representations and the temporal dynamics of their development were quite different in the two classes of agents. They differed both in the sensitivities of individual nodes to environmental landmarks and in the global features of the nodes' population activity.
These results led us to predict that humans who grow up in places with fewer available directional cues may develop an increased awareness of and ability to navigate using global landmarks. This finding is consistent with existing literature on navigational skills [3]. Our results also suggest the existence of landmark-sensitive neurons in skilled navigators. If such navigators are placed in one environment where a global cue is available and an identical one without the global cue, these neurons should display differential activation. Further, based on our results, we hypothesize that successful navigation decisions are based on population-level code involving not only spatially-sensitive, but also landmark-sensitive neurons, and that the latter have an outsized representation in the population code.
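A minimal tabular stand-in for this training setup (the study itself uses deep RL) is sketched below; the grid size, reward values, and learning rates are illustrative, and the state includes whether the shortcut is currently open:

```python
import numpy as np

rng = np.random.default_rng(6)
H = Wd = 5
GOAL, START = (0, 2), (4, 2)
WALL_ROW, DOOR, SHORTCUT = 2, 0, 2       # wall row with a far door; central shortcut
MOVES = [(-1, 0), (1, 0), (0, -1), (0, 1)]

def step(pos, a, shortcut_open):
    """One environment step; walking into walls or the closed shortcut is a no-op."""
    r, c = pos[0] + MOVES[a][0], pos[1] + MOVES[a][1]
    open_cols = (DOOR, SHORTCUT) if shortcut_open else (DOOR,)
    if not (0 <= r < H and 0 <= c < Wd) or (r == WALL_ROW and c not in open_cols):
        r, c = pos
    return (r, c), (1.0 if (r, c) == GOAL else -0.01)

def train(p_shortcut, episodes=2000):
    """Tabular Q-learning; `p_shortcut` controls the training history."""
    Q = np.zeros((H, Wd, 2, 4))
    for _ in range(episodes):
        is_open = bool(rng.random() < p_shortcut)
        pos = START
        for _ in range(100):
            greedy = int(np.argmax(Q[pos[0], pos[1], int(is_open)]))
            a = int(rng.integers(4)) if rng.random() < 0.1 else greedy
            nxt, rew = step(pos, a, is_open)
            td = (rew + 0.95 * Q[nxt[0], nxt[1], int(is_open)].max()
                  - Q[pos[0], pos[1], int(is_open), a])
            Q[pos[0], pos[1], int(is_open), a] += 0.2 * td
            pos = nxt
            if pos == GOAL:
                break
    return Q

def uses_shortcut(Q):
    """Greedy rollout with the shortcut open: does the agent pass through it?"""
    pos = START
    for _ in range(40):
        a = int(np.argmax(Q[pos[0], pos[1], 1]))
        pos, _ = step(pos, a, True)
        if pos == (WALL_ROW, SHORTCUT):
            return True
        if pos == GOAL:
            return False
    return False

frequent = train(p_shortcut=0.8)          # rich shortcut experience during training
rare = train(p_shortcut=0.05)             # limited shortcut experience
```

In the deep RL setting of the abstract, the analogue of inspecting `Q` is analyzing the hidden-unit representations that develop under each training history.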
References
1. Barhorst-Cates, E. M., Meneghetti, C., Zhao, Y., Pazzaglia, F., & Creem-Regehr, S. H. (2021). Journal of Environmental Psychology, 74, 101580. 
2. A. Liu, A. Borisyuk. Investigating navigation strategies in the Morris Water Maze through deep reinforcement learning. Neural Networks. (2024) Apr:172:106050. 
3. Padilla, L. M., Creem-Regehr, S. H., Stefanucci, J. K., & Cashdan, E. A. (2017). Sex differences in virtual navigation influenced by scale and navigation experience. Psychonomic Bulletin & Review, 24(2), 582–590. 



Monday July 22, 2024 4:20pm - 6:20pm PDT
TBA

4:20pm PDT

P063 Disentangling circuit mechanisms of how prior expectations affect decision making across the mouse brain
Biases stemming from prior knowledge or expectations have long been known to influence sensory processing and decision-making. However, the loci and mechanisms of such modulation remain unclear. Empowered by brain-wide recordings during a sensory decision-making task that spans the arc from sensory processing to action [1] and the discovery that prior expectations can be decoded widely across the brain [2], we seek to more precisely identify the areas and circuit mechanisms of modulation by prior expectations. We first disentangle neural representations of the highly correlated prior expectation, sensory input, and choice variables by balancing conditions for all 3-way dichotomies (stimulus side, choice side, and prior side) to find that sparse sets of brain regions act as stimulus responders, stimulus integrators, and choice/action generators. We next evaluate five hypotheses for how and where in these defined regions the prior knowledge exerts its bias: 1. In the activity of stimulus responders; 2. In weights from stimulus responders to stimulus integrators; 3. In the activity of integrators; 4. In weights from stimulus integrators to choice/action generators; 5. In the activity of choice/action generators. We identify predicted neural signatures of these hypotheses through models that implement the different mechanisms. Comparing predictions with the brainwide recordings, we find no significant prior encoding effects on the stimulus responders, but significant modulations in the activity of stimulus integrators and choice/action generators. Further, we find that these effects take the form of a gain modulation rather than an initial activity bias. Collectively, our results only support hypotheses 2 and 5, suggesting that prior expectations about sensory inputs influence decision making in the brain through a multiplicative gain on stimulus integration and choice/action generation, but not directly on low-level stimulus representation.
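The behavioral signatures that separate these hypothesis classes can be illustrated with a toy one-step integrator (our sketch, not the authors' models): a prior acting as a multiplicative gain steepens the psychometric curve around an unchanged midpoint, while an additive activity bias shifts the curve laterally:

```python
import numpy as np

rng = np.random.default_rng(7)
contrasts = np.linspace(-1.0, 1.0, 9)

def p_right(gain=1.0, offset=0.0, noise=1.0, trials=4000):
    """Probability of a rightward choice from a noisy one-step integrator:
    choose 'right' if gain * stimulus + offset + noise > 0."""
    evidence = (gain * contrasts[:, None] + offset
                + noise * rng.standard_normal((contrasts.size, trials)))
    return (evidence > 0).mean(axis=1)

neutral = p_right()
gain_up = p_right(gain=2.0)     # steeper curve, midpoint still at zero contrast
biased = p_right(offset=0.5)    # curve shifted laterally, slope unchanged
```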


References:
  1. International Brain Laboratory et al. A Brain-Wide Map of Neural Activity during Complex Behaviour. bioRxiv, 2023.
  2. Findling, Hubert, International Brain Laboratory et al. Brain-wide representations of prior information in mouse decision-making. bioRxiv, 2023.


Monday July 22, 2024 4:20pm - 6:20pm PDT
TBA

4:20pm PDT

P064 Comparison of methods of functional connectivity estimation in investigation of diurnal changes in working memory performance

Correlation matrix estimation from functional magnetic resonance imaging (fMRI) data presents a major challenge for a multitude of reasons, including non-stationarity of the signal and low temporal resolution, which results in the number of variables (locations from which the signal is sampled) exceeding the number of time points. The Pearson correlation matrix is most commonly used, but it is likely a suboptimal choice, as in the typical fMRI setting it is strongly sensitive to any noise present in the signal. Hence, this contribution compares alternative methods of functional connectivity estimation. The methods compared include: the sample Pearson correlation, the detrended cross-correlation coefficient [1], and a symmetrized variant of a non-linear cross-correlation based on filtering high-amplitude events (rBeta) [2]. Additionally, Ledoit-Wolf shrinkage was applied to each method for noise reduction.
The methods were compared in their ability to detect statistically significant differences between experimental conditions using data obtained in an fMRI experiment investigating the effects of diurnal changes on memory performance [3]. Comparisons were conducted between resting-state and task-performance data, between experimental phases (information encoding and retrieval), and between tasks based on the Deese-Roediger-McDermott paradigm: involving either linguistic processing of semantically and phonetically related words, or visual processing of images with global or local similarity. The comparison focused on the eigenvalues of the correlation matrices. To match eigenvalues with their corresponding eigenvectors across conditions and subjects, agglomerative hierarchical clustering of the eigenvectors was performed.
All correlation matrix estimation methods besides the rBeta-based one detected statistically significant differences between experimental conditions. All methods led to the detection of differences between experimental tasks, but these differences were not consistent across estimation methods. Application of Ledoit-Wolf shrinkage led to a more consistent detection of condition differences. Several aspects of this investigation merit further attention, particularly the impact of the details of the data-analysis pipeline on the results, including the choice of eigenvector clustering algorithm.
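The p > n regime and the stabilizing effect of shrinkage described above can be sketched numerically (a minimal illustration with a fixed shrinkage intensity rather than the analytically optimal Ledoit-Wolf estimate; all data are synthetic):

```python
import numpy as np

rng = np.random.default_rng(1)
# Typical fMRI regime: more regions (p) than time points (n).
n_t, n_r = 150, 200
X = rng.normal(size=(n_t, n_r))

S = np.corrcoef(X, rowvar=False)  # sample Pearson correlation

# Linear shrinkage toward the identity (the Ledoit-Wolf target for
# correlation matrices); alpha = 0.1 is an illustrative intensity,
# not the data-driven Ledoit-Wolf value.
alpha = 0.1
S_shrunk = (1 - alpha) * S + alpha * np.eye(n_r)

eig_sample = np.linalg.eigvalsh(S)
eig_shrunk = np.linalg.eigvalsh(S_shrunk)
print(np.sum(eig_sample > 1e-8))  # rank-deficient: at most n_t - 1 nonzero
print(eig_shrunk.min())           # full rank: bounded below by ~alpha
```

Because the sample correlation matrix is rank-deficient when p > n, any eigenvalue-based comparison of conditions rests on a degenerate spectrum; shrinkage removes that degeneracy.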
References
1.      Kwapień J, Oświęcimka P, Dróżdż S. Detrended fluctuation analysis made flexible to detect range of cross-correlated fluctuations. Phys. Rev. E. 2015, 92, 052815.
2.      Cifre I, Miller Flores MT, Penalba L, Ochab JK, Chialvo DR. Revisiting Nonlinear Functional Brain Co-activations: Directed, Dynamic, and Delayed. Front. Neurosci. 2021, 15, 1194.
3.      Lewandowska K, Wachowicz B, Marek T, et al. Would you say “yes” in the evening? Time-of-day effect on response bias in four types of working memory recognition tasks. Chronobiol. Int. 2018, 35(1), 80-89.


Monday July 22, 2024 4:20pm - 6:20pm PDT
TBA

4:20pm PDT

P065 Idiom-independent reduced social references in Alzheimer's disease evoked speech
Alzheimer's disease (AD) is a neurodegenerative disease that affects millions of people with multiple cognitive dysfunctions, including a decline in language production. However, a complete description of the linguistic aspects of AD is still lacking. For example, we do not know whether the changes are idiom-specific or invariant, or which elements of the context of the communication process are affected by Alzheimer's disease. Here, we used a novel linguistic accountability methodology to evaluate how evoked speech differs between AD patients and healthy volunteers in two different idioms: English and Brazilian Portuguese. We fine-tuned the Bidirectional Encoder Representations from Transformers (BERT) large cased model and its Portuguese counterpart, BERTimbau, and tested them on labeled datasets designed to diagnose Alzheimer's disease. The English dataset consisted of audio recordings and transcripts from the Cookie Theft picture description task, while the Portuguese dataset consisted of audio recordings and transcripts from the Dog Story description task. We evaluated the performance of the models using a 5-fold cross-validation procedure, which resulted in an accuracy of 87% for the English dataset and 80% for the Portuguese dataset. Our results indicate that BERT and BERTimbau capture social references when classifying AD subjects in English and Portuguese. The models identified reduced social references in the subjects' communication as the pathology progressed, providing valuable insights into LLMs' linguistic and psychological patterns for text classification. Our study contributes to understanding the linguistic and psychological features that drive the models' classification decisions.
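The 5-fold cross-validation protocol can be sketched independently of the transformer models (a placeholder majority-class classifier stands in for the fine-tuned BERT/BERTimbau, and the labels are synthetic):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100
labels = rng.integers(0, 2, size=n)  # 0 = control, 1 = AD (hypothetical)

def classify(train_idx, test_idx):
    # Placeholder model: predict the majority class of the training fold.
    # In the study this is a fine-tuned BERT / BERTimbau classifier.
    majority = np.bincount(labels[train_idx]).argmax()
    return np.full(len(test_idx), majority)

# Manual 5-fold split: each fold serves once as the held-out test set.
folds = np.array_split(rng.permutation(n), 5)
accs = []
for k in range(5):
    test_idx = folds[k]
    train_idx = np.concatenate([folds[j] for j in range(5) if j != k])
    preds = classify(train_idx, test_idx)
    accs.append(float(np.mean(preds == labels[test_idx])))

print(round(float(np.mean(accs)), 3))  # mean cross-validated accuracy
```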



Monday July 22, 2024 4:20pm - 6:20pm PDT
TBA

4:20pm PDT

P066 Complexity is maximized close to the criticality between ordered and disordered cortical states
Complex systems are typically characterized as an intermediate situation between a completely regular structure and a totally random system. Brain signals can be studied as a striking example of such systems: cortical states can range from highly synchronized and ordered neuronal activity (with higher spiking variability) to desynchronized and disordered regimes (with lower spiking variability). It has recently been shown, by testing independent signatures of criticality, that a phase transition occurs in a cortical state of intermediate spiking variability. Here we use a symbolic information approach to show that, despite the monotonic increase of the Shannon entropy between ordered and disordered regimes, we can determine an intermediate state of maximum complexity based on the Jensen disequilibrium measure. More specifically, we show that the statistical complexity is maximized close to criticality for the analyzed data of urethane-anesthetized rats, as well as for a network model of excitable elements that presents a critical point at a non-equilibrium phase transition.
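The entropy-complexity construction described above can be sketched with Bandt-Pompe ordinal patterns (a standard formulation of statistical complexity as normalized entropy times Jensen-Shannon disequilibrium; the data and embedding dimension here are illustrative, not the article's recordings):

```python
import itertools
import math
import numpy as np

def ordinal_distribution(x, d=3):
    """Bandt-Pompe symbolic distribution over ordinal patterns of length d."""
    patterns = {p: 0 for p in itertools.permutations(range(d))}
    for i in range(len(x) - d + 1):
        patterns[tuple(np.argsort(x[i:i + d]))] += 1
    p = np.array(list(patterns.values()), dtype=float)
    return p / p.sum()

def shannon(p):
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def complexity(p):
    """Statistical complexity C = H_norm * Q_J (Jensen-Shannon disequilibrium)."""
    n = len(p)
    u = np.full(n, 1.0 / n)  # uniform (equilibrium) reference
    h_norm = shannon(p) / math.log(n)
    js = shannon((p + u) / 2) - shannon(p) / 2 - shannon(u) / 2
    # Normalization so that the disequilibrium term lies in [0, 1]
    js_max = -0.5 * ((n + 1) / n * math.log(n + 1) - 2 * math.log(2 * n) + math.log(n))
    return h_norm * js / js_max

rng = np.random.default_rng(2)
ramp = np.arange(5000.0)       # fully ordered: H = 0, so C = 0
noise = rng.normal(size=5000)  # disordered: H near maximum, C near 0
print(complexity(ordinal_distribution(ramp)))   # ~0
print(complexity(ordinal_distribution(noise)))  # ~0
```

Both extremes yield near-zero complexity; C peaks only at intermediate states, which is why it can single out the near-critical regime that entropy alone (monotonic across the regimes) cannot.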


Monday July 22, 2024 4:20pm - 6:20pm PDT
TBA

4:20pm PDT

P067 Phase relations diversity between cortical populations: anticipated synchronization and phase bistability
Two spiking neuron populations unidirectionally connected in a sender-receiver configuration can exhibit anticipated synchronization (AS), which is characterized by a negative phase lag. This phenomenon has been reported in electrophysiological data from non-human primates and in human EEG during a visual discrimination cognitive task [1]. In experiments, the unidirectional coupling can be assessed by Granger causality and can be accompanied by either a positive (the usual delayed synchronization, DS) or a negative (characterizing AS) phase difference between cortical areas [1]. Here we present a model of two coupled populations [2,3] in which neuronal heterogeneity and external noise can determine the dynamical relation between the sender and the receiver and can reproduce the diversity of phase relations reported in experiments. We show that, depending on the relation between excitatory and inhibitory synaptic conductances, the system can also exhibit phase bistability between anticipated and delayed synchronization. Recently, it has been reported that bistable phase differences in magnetoencephalography (MEG) recordings appear when participants listen to bistable speech sequences that can be perceived as two distinct word sequences repeated over time [4]. This result suggests that phase bistability in cortical regions could be related to bistable perception [3].
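The phase-lag sign convention can be made concrete with a toy sender-receiver pair (synthetic sinusoids and an FFT-based Hilbert transform; this illustrates the measurement only, not the spiking-population model itself):

```python
import numpy as np

def analytic(x):
    """Analytic signal via FFT (a numpy-only Hilbert transform)."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1
    h[1:n // 2] = 2
    if n % 2 == 0:
        h[n // 2] = 1
    return np.fft.ifft(X * h)

fs, f = 1000, 10  # sampling rate (Hz), oscillation frequency (Hz)
t = np.arange(0, 2, 1 / fs)
sender = np.sin(2 * np.pi * f * t)
# Receiver leading the sender by 5 ms: the receiver's activity precedes
# the sender's, the signature of anticipated synchronization (AS).
receiver = np.sin(2 * np.pi * f * (t + 0.005))

# Wrapped instantaneous phase difference, sender minus receiver.
dphi = np.angle(np.exp(1j * (np.angle(analytic(sender)) - np.angle(analytic(receiver)))))
lag_ms = np.median(dphi) / (2 * np.pi * f) * 1000
print(round(float(lag_ms), 2))  # -5.0: negative phase lag (AS-like)
```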



Acknowledgments The authors thank CNPq (grants 402359/2022-4, 314092/2021-8), FAPEAL (grant SEI n.º E:60030.0000002401/2022), UFAL, and CAPES for financial support.

References
[1] Matias, F. S., Gollo, L. L., Carelli, P. V., Bressler, S. L., Copelli, M., & Mirasso, C. R. Modeling positive Granger causality and negative phase lag between cortical areas. NeuroImage. 2014, 99, 411-418.
[2] Brito, K. V., & Matias, F. S. Neuronal heterogeneity modulates phase synchronization between unidirectionally coupled populations with excitation-inhibition balance. Physical Review E. 2021, 103(3), 032415.
[3] Machado, J. N., & Matias, F. S. Phase bistability between anticipated and delayed synchronization in neuronal populations. Physical Review E 2020, 102(3), 032412.
[4] Kösem, A., Basirat, A., Azizi, L., & van Wassenhove, V. High-frequency neural activity predicts word parsing in ambiguous speech streams. Journal of neurophysiology. 2016, 116(6), 2497-2512.


Monday July 22, 2024 4:20pm - 6:20pm PDT
TBA

4:20pm PDT

P068 Mapping brain lesions to conduction delays: the next step for personalized brain models in Multiple Sclerosis
Multiple sclerosis (MS) is a clinically heterogeneous, multifactorial autoimmune disorder affecting the central nervous system (CNS). Structural damage to the myelin sheath, with the consequent slowing of conduction velocities, is a key pathophysiological mechanism. In fact, studies have shown that the conduction velocities of action potentials are closely related to the degree of myelination, with thicker myelin sheaths associated with higher conduction velocities. However, how the intensity of structural lesions of the myelin translates into slowed conduction is not known, and lesion volume alone is a poor predictor of clinical disability. In this work, we use large-scale brain models and Bayesian inversion to estimate how myelin lesions translate into longer conduction delays [1]. Each subject underwent MEG and MRI, with detailed white matter tractography analysis. We also derived a lesion matrix indicating the percentage of lesions for each edge in every patient. We utilized a large-scale brain model in which the neural activity of each region was represented as a Stuart-Landau oscillator in a regime with damped oscillations, and regions were coupled according to the empirical connectomes [2]. We proposed a mathematical function elucidating the relationship between the conduction delays and the structural damage percentages in each white matter tract. Using deep neural density estimators [3], we inferred the most likely relationship between lesions and conduction delays. MS patients consistently exhibited decreased power within the alpha frequency band compared to the healthy group. Dependent upon the parameter alpha, this function translates lesions into edge-specific conduction delays (leading to shifts in the power spectra). We found that the estimation of the alpha parameter showed a strong correlation with the alpha peak. The most probable inferred alpha for each subject is inversely proportional to the empirically observed peaks, while the power peaks themselves do not correlate with total lesion volume. This is the first study demonstrating the topography-specific effect of myelin lesions on conduction delays, adding one layer to the personalization of models in persons with multiple sclerosis.
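The damped-oscillation regime of a single Stuart-Landau node can be sketched as follows (illustrative parameters and a plain Euler integrator; the actual model couples many such nodes with delays derived from the empirical connectome):

```python
import numpy as np

# Single Stuart-Landau node below the Hopf bifurcation (a < 0):
#   dz/dt = (a + i*omega) z - |z|^2 z
# gives damped oscillations, the regime used for each brain region.
a, omega = -1.0, 2 * np.pi * 10  # damping, 10 Hz natural frequency
dt, steps = 1e-4, 20000          # Euler step, 2 s of simulated time
z = 1.0 + 0j
amps = []
for _ in range(steps):
    z += dt * ((a + 1j * omega) * z - (abs(z) ** 2) * z)
    amps.append(abs(z))

print(round(amps[0], 3), round(amps[-1], 6))  # amplitude decays toward 0
```

In this regime each region only oscillates when driven through the coupling, so lengthening the conduction delays on lesioned edges reshapes the network's power spectrum, which is what the inversion exploits.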


Monday July 22, 2024 4:20pm - 6:20pm PDT
TBA

4:20pm PDT

P069 Investigating the cellular and circuit mechanisms underlying schizophrenia-related EEG biomarkers using a multiscale model of auditory thalamocortical circuits
Individuals with schizophrenia exhibit a deficit in sensory processing, which researchers have extensively investigated in primary auditory cortex (A1) using electroencephalogram (EEG) techniques. These deficits manifest as abnormalities in event-related potentials and cortical oscillations. These alterations reflect a broader disturbance in the balance between excitation and inhibition (E/I balance) that characterizes cortical networks. We have extended our previously developed model of auditory thalamocortical circuits to better reproduce and investigate the biophysical source of these schizophrenia-related EEG biomarkers. The A1 model simulates a cortical column with a depth of 2000 μm and 200 μm diameter, containing over 12k neurons and 30M synapses. Neuron densities, laminar locations, classes, morphology and biophysics, and connectivity at the long-range, local, and dendritic scale were derived from published experimental data. Auditory stimulus-related inputs to the thalamus were simulated using phenomenological models of the cochlear/auditory nerve and the inferior colliculus. The model reproduced in vivo cell type and layer-specific firing rates, local field potentials (LFPs), and EEG signals consistent with healthy controls. We are now leveraging this validated A1 model to gain insights into mechanisms responsible for observed EEG changes in schizophrenia. Changes made to the model to reproduce schizophrenia patient EEG biomarkers were informed using data from positron emission tomography (PET) imaging, genetics, and transcriptomics specific to schizophrenia patients. Specifically, we are employing the model to explore three changes associated with schizophrenia: 1) Reduced inhibition through parvalbumin (PV) interneurons, 2) Reduced inhibition through somatostatin (SST) interneurons, and 3) N-methyl-D-aspartate receptor (NMDAR) hypofunction on PV cells. 
We found that all three molecular disturbances affected firing rates in a layer- and cell-type specific way, mostly leaving granular layer responses unperturbed but significantly altering superficial and deep layers. Furthermore, in EEG recordings, they altered the 1/f slope, with differential effects in lower frequencies (4-30Hz) compared to the higher frequencies (30-80Hz). PV and NMDAR reductions on both scales showed opposite effects compared to SST reductions. Next, we plan to characterize the impact of schizophrenia-specific cannabinoid and cholinergic pathway modifications on EEG biomarkers such as P300 peak and Auditory Steady State Response (ASSR), as well as extend the model to capture stimulus-specific adaptation (SSA) and mismatch negativity (MMN). This work aims to fill a critical gap in our understanding by elucidating how experimentally determined genetic changes associated with schizophrenia result in altered circuit and network behavior, leading to the emergence of robust EEG biomarkers of the disorder.
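The 1/f-slope measure mentioned above is typically obtained by a linear fit of log-power against log-frequency; a minimal sketch on a synthetic signal (not EEG output from the model):

```python
import numpy as np

rng = np.random.default_rng(3)
fs, n = 1000, 2 ** 16
# Synthetic 1/f ("pink-like") signal built in the frequency domain;
# a stand-in for an EEG trace, which is not reproduced here.
freqs = np.fft.rfftfreq(n, 1 / fs)
spectrum = np.zeros(len(freqs), dtype=complex)
spectrum[1:] = (freqs[1:] ** -0.5) * np.exp(2j * np.pi * rng.random(len(freqs) - 1))
x = np.fft.irfft(spectrum)

# Periodogram, then a linear fit of log-power vs log-frequency over the
# 4-30 Hz band gives the 1/f slope (expected near -1 for this signal).
psd = np.abs(np.fft.rfft(x)) ** 2
band = (freqs >= 4) & (freqs <= 30)
slope = np.polyfit(np.log(freqs[band]), np.log(psd[band]), 1)[0]
print(round(float(slope), 2))  # -1.0
```

Fitting the slope separately over low (4-30 Hz) and high (30-80 Hz) bands, as in the study, simply restricts the `band` mask to each range.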



Monday July 22, 2024 4:20pm - 6:20pm PDT
TBA

4:20pm PDT

P070 Neuron participation in temporal patterns forms cross-layer, non-random networks in rat motor cortex
Spatiotemporal patterns of neuronal activity are hypothesized to be essential for information processing in the brain. These patterns suggest the existence of cell assemblies, groups of co-active neurons that represent a distinct cognitive unit, potentially related to specific behaviors or representations [1, 2]. Conventionally, cell assemblies are thought to be defined by strong structural connections, such that the stimulation of a portion of its members should transiently activate the entire assembly. The exact composition and computational role of these assemblies, however, has yet to be fully clarified.

Here we present the detection of spike patterns in both superficial and deep layers of rat motor cortex during a voluntary forelimb-movement task. We build on a previous report of diverse activation of pyramidal neurons across sequential motor phases [3] and ask how patterns are organized beyond single-neuron activation. In our approach, snapshots of neuronal activity were compared with flexible temporal alignment, relying on an extension of the "edit similarity score", a metric originally introduced to compare strings [4]. We further investigated the participation of individual neurons in these flexible spike patterns, hereafter named "profiles", through graph analysis and visualization.

Across animals, profiles were largely composed of neurons from both layers, and occurred preferentially, but not exclusively, close to moments of reward. By connecting neurons in a weighted graph by their co-participation in profiles, we observed non-trivial structures with effective hubs that were not explained by shuffled models. Detected profiles were not representative of entire experimental sessions (~2 hours), but specific nodes (neurons) and edges (pairs of neurons who appear together in different profiles) were sustained. We argue that beyond synchronous activation, neurons that form patterns are organized in what we call a "profile space", in which profiles with strong overlap in neuron participation are grouped together. Individual profiles can therefore be understood as different realizations of an underlying functional community, an extension with temporal flexibility to the concept of cell assembly.
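The classic edit-similarity idea underlying the profile detection can be sketched as follows (the plain Levenshtein form, without the flexible temporal-alignment extension of [4]; the spike patterns are invented toy data):

```python
def edit_distance(a, b):
    """Classic Levenshtein distance by dynamic programming."""
    m, n = len(a), len(b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,      # deletion
                          d[i][j - 1] + 1,      # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[m][n]

def edit_similarity(a, b):
    """Similarity in [0, 1]: identical sequences score 1."""
    if not a and not b:
        return 1.0
    return 1.0 - edit_distance(a, b) / max(len(a), len(b))

# Spike patterns as sequences of neuron labels (toy data): the second
# pattern drops one neuron and swaps another in.
p1 = ["n3", "n7", "n1", "n9", "n4"]
p2 = ["n3", "n1", "n9", "n2", "n4"]
print(edit_similarity(p1, p1))  # 1.0
print(edit_similarity(p1, p2))  # 0.6
```

Thresholding such pairwise similarities yields the weighted co-participation graph analyzed in the abstract.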



Acknowledgements
We thank Japan Society for the Promotion of Science (JSPS) for supporting T. F. with KAKENHI no. JP23H05476.



References
1. Hebb DO. The organisation of behaviour: a neuropsychological theory. New York: Science Editions; 1949.
2. Buzsáki G. Neural Syntax: Cell Assemblies, Synapsembles, and Readers. Neuron. 2010, 68(3), 362-385.
3. Isomura Y, Harukuni R, Takekawa T, et al. Microcircuitry coordination of cortical motor information in self-initiation of voluntary movements. Nat Neurosci. 2009, 12(12), 1586-1593.
4. Watanabe K, Haga T, Tatsuno M, et al. Unsupervised Detection of Cell-Assembly Sequences by Similarity-Based Clustering. Front Neuroinform. 2019, 13, 39.



Monday July 22, 2024 4:20pm - 6:20pm PDT
TBA

4:20pm PDT

P071 A Model of Activation of Cortical Cell Populations through TMS
Modeling non-invasive brain stimulation, particularly transcranial magnetic stimulation (TMS) of the primary motor cortex (M1), has previously been explored through simulation methods at different length and complexity scales. However, the coupling of TMS-induced electric fields to neural mass models remains largely unexplored, having previously been approximated as a current pulse of arbitrary width and height [1, 2]. Via multi-scale simulations at the subcellular and neural mass level, we study the underlying mechanisms of electromagnetic activation of cortical tissue by TMS. Validation of the coupling model is aided by measurements such as EMG of muscle activation [3], EEG [4], and invasive recordings of so-called D- and I-waves along the spinal cord following TMS [3]. The model architecture is defined by choosing the cell morphologies, electric fields, connectivity, and cortical circuitry that describe the desired system. The methods developed here lay the groundwork for studying the effects of electromagnetic stimulation on any circuit architecture and facilitate realistically motivated coupling between electric fields and mean-field state variables.

TMS stimulation of M1 is characterized by corticospinal pyramidal tract axons originating from deep layer 5 (L5) that carry direct (D-) and indirect (I-) waves following TMS. D-waves are believed to be generated by direct stimulation of L5 axons, while I-waves may stem from indirect activation of L5 cells by presynaptic cells [3]. This study focuses on the generation of I-waves from within the cortex, since D-wave generation in corticospinal tracts can be modeled separately. Using reconstructed compartment models of neuron morphologies, we simulate spatiotemporal dynamics on L23 and L4 axons in response to TMS-induced electric fields. Generated action potentials propagate through the axonal arbor to axon terminals, forming synapses onto other cells. In our model, L23 and L4 cells couple synaptically to L5. The postsynaptic potential, and thereby the intracellular current, is governed by synaptic and dendritic dynamics. The resulting current entering L5 somata, averaged over cells, defines the current input to a neural mass model governing the firing rate of an L5 population. The electric field induced by TMS is thus coupled to mean-field state variables that parameterize cortical activity. The L5 population's mean firing rate is proportional to the average cortical output that projects to the spinal cord and is qualitatively comparable to I-wave measurements. We validate the coupling model against the measured I-waves and explore directional sensitivity and dose dependence of cortical activation as driven by the underlying biophysics and stimulation paradigm.

1. Rusu, C. V., et al. (2014). A model of TMS-induced I-waves in motor cortex. Brain stimulation, 7(3), 401–414.

2. Wilson, M. T., et al. (2021). Modeling motor-evoked potentials from neural field simulations of transcranial magnetic stimulation. Clinical neurophysiology, 132(2), 412–428.

3. Di Lazzaro V, Rothwell J.C. Corticospinal activity evoked and modulated by non-invasive stimulation of the intact human motor cortex. J Physiol. 2014, 592: 4115-4128.

4. Gordon, P. C., et al. (2021). Recording brain responses to TMS of primary motor cortex by EEG - utility of an optimized sham procedure. NeuroImage, 245, 118708.



Monday July 22, 2024 4:20pm - 6:20pm PDT
TBA

4:20pm PDT

P072 Dendritic persistent calcium current amplifies low-frequency fluctuations in alpha motor neurons
Persistent inward currents (PICs), mediated by calcium and sodium permeable ion channels, promote the amplification of synaptic currents in alpha motor neurons (MNs). The role of self-sustained discharge, promoted by dendritic calcium persistent current, has been hypothesized as fundamental during postural and stabilization motor tasks. However, it is unclear how MN dendrite morphology, electrophysiological properties, and noradrenergic modulation of calcium persistent current shape the bandwidth of the synaptic input signal, thereby influencing the transmission of synaptic oscillations in the neural drive to the muscle. To investigate how persistent calcium current alters the MN frequency response, we conducted computer simulations with slow (S)-type and fast fatigable (FF)-type alpha MN models subjected to different noradrenergic conditions. The morphologies of models were based on detailed reconstructions of cat lumbar alpha MNs with 2,570 and 3,001 dendritic compartments for S- and FF-type models, respectively. Cable theory was adopted while modeling the propagation of signals across the dendritic membrane of the MN. The electrophysiological properties observed in vivo (cat) were reproduced by tuning the biophysical properties of ionic channels employed in the models. Independent and homogeneous Poisson stochastic point processes modeled the presynaptic commands to MNs. The mean value of the presynaptic commands was adjusted so that the discharge rate of MN models was 20 spikes/s (average). The conductance of the persistent calcium channel (gCa) was adjusted to reproduce the relationship between the amplitude of the injected current and the firing rate of MNs under the effect of noradrenergic agonist (active dendrite) and anesthetized MNs (passive dendrite, with gCa=0). Spectral analysis was employed to assess the models' frequency responses. 
For the models with a passive dendrite, the DC gain, cutoff frequency (CF), and CF delay were 0.9 (0.5), 82 Hz (60 Hz), and 2.3 ms (3.0 ms) for the S-type (FF-type) MN model, respectively. With active dendrites, the S- and FF-type models presented: i) a DC gain of 1.1 and 1.4 (increase of 22% and 180%, respectively); ii) a CF of 62 Hz and 11 Hz (reduction of 32% and 82%, respectively); and iii) a CF delay of 2.9 ms and 8.4 ms (increase of 26% and 180%, respectively). Therefore, activating the dendritic persistent calcium channel amplified the low-frequency components (<5 Hz) of the MN output, especially in the FF-type model. The results also suggest that the persistent dendritic calcium current in alpha MNs may shape the bandwidth of the motor commands that reach the muscles, and that the amplification of low-frequency fluctuations coincides with the frequency band associated with isometric muscle contractions and postural control.
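The reported DC gains and cutoff frequencies can be read as parameters of a low-pass frequency response; a first-order caricature (not the multi-compartment models themselves, whose responses are only summarized by these numbers) illustrates the trade-off:

```python
import numpy as np

# First-order low-pass caricature of a motor neuron's frequency response.
def response(f, dc_gain, f_cut):
    return dc_gain / (1 + 1j * f / f_cut)

f = np.linspace(0.1, 200, 4000)
passive = response(f, dc_gain=0.9, f_cut=82.0)  # S-type, passive dendrite
active = response(f, dc_gain=1.1, f_cut=62.0)   # S-type, active dendrite

# -3 dB cutoff: frequency where |H| drops to dc_gain / sqrt(2).
def cutoff(h, dc):
    return f[np.argmin(np.abs(np.abs(h) - dc / np.sqrt(2)))]

# Active dendrites raise the DC gain but lower the cutoff: low-frequency
# input is amplified at the cost of bandwidth.
print(round(float(cutoff(passive, 0.9)), 1), round(float(cutoff(active, 1.1)), 1))
```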



Monday July 22, 2024 4:20pm - 6:20pm PDT
TBA

4:20pm PDT

P073 Exploring Seizure Dynamics: A Computational Model of Epilepsy
This article encapsulates an exploration of seizure dynamics through the lens of computational modeling in epilepsy. Starting from a foundational understanding of the FitzHugh-Nagumo model, a simplified representation of neuronal activity, we probe the intricacies of epileptic seizures. We methodically demonstrate the transition from normal neuronal activity to a seizure-like state by altering key parameters in the model, such as external current and coupling strength. This is followed by an extension of the model to a network of neurons, simulating the complex interactions and synchronization patterns indicative of seizure propagation. Numerical simulations were conducted to visualize the impact of varying coupling strengths on network dynamics, offering insights into the mechanisms of seizure initiation and spread. The study is complemented by a discussion of the implications of these findings for understanding epilepsy, highlighting the bridge between theoretical models and clinical understanding. Our approach not only illuminates the potential of computational models in epilepsy research but also underscores the significance of interdisciplinary collaboration in advancing our comprehension of neurological disorders. Through this article, we aim to provide a nuanced perspective on the modeling of epileptic seizures, offering a valuable resource for researchers and clinicians in the field.
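A minimal version of the coupling-strength sweep described above (generic FitzHugh-Nagumo parameters with global diffusive coupling; the values are illustrative, not those of the article):

```python
import numpy as np

def fhn_step(v, w, I_ext, g, dt=0.05, a=0.7, b=0.8, tau=12.5):
    """One Euler step of FitzHugh-Nagumo units with mean-field diffusive coupling."""
    dv = v - v ** 3 / 3 - w + I_ext + g * (v.mean() - v)
    dw = (v + a - b * w) / tau
    return v + dt * dv, w + dt * dw

rng = np.random.default_rng(4)
n = 20
v0 = rng.normal(0.0, 0.5, n)
w0 = rng.normal(0.0, 0.5, n)

# Sweep coupling strength: stronger coupling synchronizes the network,
# a caricature of seizure-like hypersynchronization.
spread = {}
for g in (0.0, 1.0):
    v, w = v0.copy(), w0.copy()
    for _ in range(20000):
        v, w = fhn_step(v, w, I_ext=0.5, g=g)
    spread[g] = float(np.std(v))  # dispersion across neurons at the end

print(spread[1.0] < spread[0.0])  # True: coupled network is more synchronized
```

With I_ext = 0.5 and these standard parameters each unit sits on a limit cycle, so the uncoupled network retains the phase dispersion of its initial conditions, while strong coupling collapses all units onto one trajectory.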


Monday July 22, 2024 4:20pm - 6:20pm PDT
TBA

4:20pm PDT

P074 Computational Model of the Mouse Whisker Thalamocortical Pathway
Detailed reconstructions of neuronal projections and circuit mapping studies uncovered new cell-type-specific pathways of information flow and integration across cortical and thalamic regions [1]. This includes the existence of direct projections from thalamocortical (TC) neurons to layer 6 corticothalamic (L6 CT) neurons. This direct connection is more evident in the awake vs sleep state and enables a short-latency feedback pathway that bypasses the full loop in the cortical column, but its function remains poorly understood [2]. In the whisker pathway of rodents, this direct short-latency feedback could work as a mechanism to selectively increase the responsiveness of specific thalamic neurons to incoming streams of information while silencing others, contributing to the emergence of direction-selective angular tuning in the network. Selective silencing of the direct L6 CT inputs is not possible via experimentation, and computational models provide an alternative to do so without disrupting the system. We developed a detailed multiscale mechanistic model of the mouse whisker pathway in NetPyNE. Our goal is to study the overall effect of modulatory L6 CT projections [3] and the influence of this direct L6 CT feedback in regulating network excitability [2]. We will characterize the network based on the angular tuning response of thalamic neurons to different whisker deflection angles and evaluate the contribution of direct activation of L6 CT neurons by the thalamus in this process. The model comprises a thalamic barreloid, a portion of the thalamic reticular nucleus, and a full cortical infrabarrel from L6. It includes biophysically detailed neurons, a topological distribution of synaptic inputs, short-term plasticity properties, and detailed mapping of local and external projections based on the latest experimental data available [4]. 
We also developed a novel realistic model of whisker deflection responses in the brainstem based on different deflection angles, providing topological feedforward inputs to the thalamus. We validated the single cell and the network models based on membrane potentials and firing frequency for different cell types. Our current results show that the architecture of thalamic projections is crucial for preserving the angular tuning across the network and that CT feedback is essential to keep the balance of thalamic excitation. Next, we will test the influence of the timing of this CT feedback, which we believe is key to sharpening the angular tuning in the thalamic network to brainstem inputs. Ultimately, our model will provide insights into the mechanisms that regulate thalamocortical excitability and how interactions between L6 CT neurons and the thalamus can shape the information arriving at the cortex.
1. Shepherd GMG, Yamawaki N. Untangling the cortico-thalamo-cortical loop: cellular pieces of a knotty circuit puzzle. Nat Rev Neurosci. 2021;22: 389–406.
2. Hirai D, Nakamura KC, Shibata K-I, et al. Shaping somatosensory responses in awake rats: cortical modulation of thalamic neurons. Brain Struct Funct. 2018;223: 851–872.
3. Crandall SR, Cruikshank SJ, Connors BW. A corticothalamic switch: controlling the thalamus with dynamic synapses. Neuron. 2015;86: 768–782.
4. Iavarone E, Simko J, Shi Y, Bertschy M, et al. Thalamic control of sensory processing and spindles in a biophysical somatosensory thalamoreticular circuit model of wakefulness and sleep. Cell Rep. 2023;42. doi:10.1016/j.celrep.2023.112200



Monday July 22, 2024 4:20pm - 6:20pm PDT
TBA

4:20pm PDT

P075 Biologically Inspired Constraints are Compatible with Gradient-Descent-based Learning in Spiking Neural Networks
This study explores Spiking Neural Networks (SNNs), leveraging a unique combination of three algorithms to unravel the intricate dynamics within biological constraints. Our primary contributions lie in the integration of Dilated Convolutions with Learnable Spacings for delay learning [1], coupled with the fusion of two dynamic pruning methods: DeepR [2] for disconnecting and RigL [3] for reconnecting synaptic weights.
Dynamic pruning, a method derived from machine learning, operates akin to a structure-learning algorithm. We begin by initializing the neural network with sparse connectivity and maintain a constant number of active synapses throughout training. One of our innovations lies in the utilization of DeepR, which not only facilitates weight pruning but also makes it convenient to incorporate Dale's Principle by maintaining consistent weight column signs. This ensures the creation of exclusively excitatory and inhibitory neurons, further enriching the biological plausibility of our SNN model. While DeepR randomly reconnects weights, we instead utilize RigL, which reintroduces the synapses with the highest gradient magnitudes.
Synaptic delays denote the time required for a signal to propagate from one neuron to an adjacent neuron. These delays influence spike arrival times, which matter because spiking neurons respond more strongly to coincident input spikes. Dilated Convolutions with Learnable Spacings introduces a new approach to delay learning in deep SNNs that is compatible with typical gradient-based learning methods. The incorporation of learnable delays allows us to identify spatiotemporal "receptive fields", a structure of spatiotemporal groups that are purely excitatory or purely inhibitory. We found that this spatiotemporal grouping of excitation and inhibition not only arose in dense networks but also persisted, despite alterations, under enforced sparsity and Dale's Principle.
Comparing the classification performance of a dense non-Dalean and a sparse Dalean network on the Raw Heidelberg Digits [4] dataset shows that the latter achieves 89% test accuracy at 75% sparsity, slightly below the former, which reaches 94%. When comparing networks with a fixed number of active synapses, the sparse model surpasses the dense one at 87.5% sparsity (89% vs. 88% test accuracy), and this performance gap widens when the number of active synapses is decreased further.
This study provides new insights into the synergistic effects of sparsity, delays and Dale’s Principle in SNNs. Our findings advance the understanding of biologically-inspired computational principles in neural networks, laying a foundation for further exploration and application in the realm of neuro-inspired computing. 
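The interplay of a fixed synapse budget, sign-preserving pruning, and gradient-based regrowth can be sketched as follows (a NumPy toy in which random numbers stand in for gradients and a fixed threshold stands in for a gradient update; this is not the authors' training code):

```python
import numpy as np

rng = np.random.default_rng(5)
n_pre, n_post, budget = 8, 6, 12

# Dale's principle via per-row signs: each presynaptic neuron is either
# excitatory (+1) or inhibitory (-1) for all of its outgoing synapses.
signs = rng.choice([-1.0, 1.0], size=n_pre)

# Sparse initialization with a fixed budget of active synapses.
mask = np.zeros(n_pre * n_post, dtype=bool)
mask[rng.choice(n_pre * n_post, size=budget, replace=False)] = True
mag = rng.random(n_pre * n_post) * mask  # nonnegative magnitudes

# DeepR-style pruning: magnitudes that would cross zero are disconnected
# rather than allowed to flip sign (threshold 0.2 is purely illustrative).
mag[mag < 0.2] = 0.0
mask &= mag > 0

# RigL-style regrowth: reconnect the inactive sites with the largest
# "gradient" magnitudes, restoring the synapse budget.
grad = rng.random(n_pre * n_post)
grad[mask] = -np.inf  # never regrow an already-active synapse
for idx in np.argsort(grad)[::-1][: budget - mask.sum()]:
    mask[idx] = True
    mag[idx] = 0.01  # re-enabled at a small magnitude

W = (mag * np.repeat(signs, n_post)).reshape(n_pre, n_post)
row_ok = all((W[i][W[i] != 0] * signs[i] > 0).all() for i in range(n_pre))
print(row_ok, int((W != 0).sum()))  # True 12: Dale holds, budget restored
```

Keeping signs outside the learned magnitudes is what lets pruning and regrowth proceed freely without ever violating the excitatory/inhibitory identity of a neuron.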
[1] G. Bellec et al., 2018. arXiv: 1711.05136 [cs.NE].
[2] U. Evci et al., 2021. arXiv: 1911.11134 [cs.LG].
[3] I. Hammouamri et al., 2023. arXiv: 2306.17670 [cs.NE].
[4] B. Cramer et al. IEEE Transactions on Neural Networks and Learning Systems 33.7, 2744–2757, 2022.

Speakers

Thomas Nowotny

Professor of Informatics, University of Sussex, UK
I do research in computational neuroscience and bio-inspired AI. More details are on my home page http://users.sussex.ac.uk/~tn41/ and institutional homepage (link above).


Monday July 22, 2024 4:20pm - 6:20pm PDT
TBA

4:20pm PDT

P076 Decomposition of brain calcium signals in a Pavlovian learning task
We are submitting a paper / extended abstract in .pdf format.


Acknowledgments


The authors acknowledge support from the National Institutes of Health grants NIH MH060605, NIH MH115604, and NIH DA044761, and from the National Science Foundation grant NSF IOS-2002863.





Monday July 22, 2024 4:20pm - 6:20pm PDT
TBA

4:20pm PDT

P077 Quantifying the contribution of underlying physiological networks in functional brain connectivity through remnant functional maps
Contemporary cognitive neuroscience emphasizes that cognition does not occur in isolation within specific neural locales but rather emerges from the dynamic interplay of distributed areas across the brain. This interplay is often captured as functional connectivity networks, and the foundational principles governing the organization of these networks are related to a spectrum of cognitive functions, from processing stimuli to decision-making, cognitive control, and emotional regulation. The emergent properties of functional networks have been linked to various physiological factors such as structural connectivity (SC), distance-dependent connectivity (DC), similarity in gene expression (GC), and similarity in neuroreceptor composition (RC) across brain regions. However, it remains unknown what aspects of functional brain organization these underlying factors support. To address this unknown, we develop an analytical framework to evaluate the influence of SC, DC, GC, and RC on shaping the organization of functional brain networks and propose remnant functional maps (RFMs). We estimate RFMs by removing edges from the functional connectivity that represent direct links of an underlying network of interest (SC, DC, GC, or RC). We find that each of these underlying factors aids in shaping the organization of functional connectivity. Notably, similarity in neuroreceptor composition among brain regions is the primary factor shaping the organization of functional brain connectivity. The dominance of neuroreceptors was also observed when modeling functional connectivity from these physiological networks. We propose that this RFM-based framework provides a tool to quantify the contribution of underlying physiological networks in shaping brain functional organization and could also aid the identification of diverse physiological alterations due to task demands and disease onset and progression.
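The RFM construction described above can be sketched in a few lines. This toy example uses random matrices (not real connectomic data), and the helper name `remnant_functional_map` is illustrative: FC edges are zeroed wherever the underlying network (here a binary SC) has a direct link.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 8  # toy number of brain regions

# Symmetric functional connectivity (FC) with random weights.
fc = rng.random((n, n))
fc = (fc + fc.T) / 2.0
np.fill_diagonal(fc, 0.0)

# Binary underlying network, e.g. structural connectivity (SC):
# True where a region pair shares a direct link. Random here;
# the study uses measured physiological networks.
sc = rng.random((n, n)) < 0.3
sc = sc | sc.T
np.fill_diagonal(sc, False)

def remnant_functional_map(fc, underlying):
    """Zero out FC edges that coincide with direct links of the
    underlying network; the remaining edges form the RFM."""
    rfm = fc.copy()
    rfm[underlying] = 0.0
    return rfm

rfm = remnant_functional_map(fc, sc)
# Fraction of total FC edge weight removed with the underlying network.
explained = 1.0 - rfm.sum() / fc.sum()
```

Comparing `explained` (or the organization of the RFM itself) across SC, DC, GC, and RC is one simple way to rank how much each underlying factor accounts for.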


Monday July 22, 2024 4:20pm - 6:20pm PDT
TBA

4:20pm PDT

P078 Spreading depolarization in neocortical microcircuits
Spreading depolarization (SD) is characterized by a wave of depolarization preceded by a brief period of hyperexcitability that propagates through gray matter at 2-6 mm/min [1]. SD is accompanied by spreading depression, a prolonged neuronal silence caused by depolarization block, and disruption of ion homeostasis. SD is observed in neurological disorders, including migraine aura, epilepsy, traumatic brain injury, and ischemic stroke. Blood vessels contribute to SD as a source of oxygen and nutrients to the affected tissue. Understanding these mechanisms is essential for targeted interventions in conditions like ischemic stroke.



We used the NEURON and NetPyNE simulation platforms to investigate ion homeostasis at the tissue scale. We developed an in vivo network model based on an established cortical microcircuit model [2,3] and our previous in vitro model [4]. Point neurons with Hodgkin-Huxley style ion channels were augmented with additional homeostatic mechanisms, including Na+/K+-ATPase, NKCC1, KCC2, and dynamic volume changes. We simulate the intracellular and extracellular concentrations of Na+, K+, Cl-, and O2 using NEURON/RxD [5]. The contribution of astrocytes is modeled as the O2-dependent clearance of K+. NetPyNE with the evolutionary optimization tool Opuntia was used to find appropriate parameters for the model [6]. Around 13,000 neurons were simulated in 1 mm3 of cortex (layers 2-6). We used histologic images to determine the locations of oxygen sources in the model. A 2.0 x 2.3 cm cross-section of the human cortical plate in V1, immunostained for CD34, was used to determine the locations of 918 capillaries (mean capillary density: 199.6/cm2; mean±SD capillary cross-sectional area: 16.7±11.9μm2). A biased random walk was used to generate a 3-dimensional distribution of capillaries from this 2D cross-section.
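One way to realize such a biased random walk could look like the following sketch; the step size, depth bias, capillary count, and seed coordinates are illustrative stand-ins rather than the study's parameters.

```python
import numpy as np

rng = np.random.default_rng(2)

def grow_capillary(seed_xy, depth, step=1.0, bias=0.9):
    """Grow one capillary as a biased random walk: starting from a 2D
    seed point (a capillary location in the cross-section), each step
    advances a fixed distance along the depth axis (the bias) with
    small random lateral jitter, yielding a 3D path."""
    pos = np.array([seed_xy[0], seed_xy[1], 0.0])
    path = [pos.copy()]
    while pos[2] < depth:
        lateral = rng.standard_normal(2) * (1.0 - bias) * step
        pos += np.array([lateral[0], lateral[1], bias * step])
        path.append(pos.copy())
    return np.array(path)

# Random stand-ins (in microns) for the CD34-derived 2D capillary positions.
seeds = rng.random((5, 2)) * 100.0
paths = [grow_capillary(s, depth=50.0) for s in seeds]
```

A larger bias keeps capillaries nearly vertical to the cross-section; a smaller one lets them wander laterally, and the trade-off would be tuned against histology.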



SD was reliably triggered in this model by a bolus of extracellular K+ applied to layer 4. Our model predicts that a neuron's ability to maintain a physiological firing rate is influenced by its proximity to an oxygen source. We also found that neuronal depolarization occurred in all cortical layers, with pathological activity spreading through extracellular K+ diffusion and network connectivity.


Acknowledgments
Research supported by NIH grant R01MH086638.


References
1. Dreier JP. The role of spreading depression, spreading depolarization and spreading ischemia in neurological disease. Nat Med. 2011;17: 439–447.
2. Potjans TC, Diesmann M. The Cell-Type Specific Cortical Microcircuit: Relating Structure and Activity in a Full-Scale Spiking Network Model. Cereb Cortex. 2012;24: 785–806.
3. Romaro C, Najman FA, Lytton WW, Roque AC, Dura-Bernal S. NetPyNE Implementation and Scaling of the Potjans-Diesmann Cortical Microcircuit Model. Neural Comput. 2021;33: 1993–2032.
4. Kelley C, Newton AJH, Hrabetova S, McDougal RA, Lytton WW. Multiscale Computer Modeling of Spreading Depolarization in Brain Slices. eNeuro. 2022;9. doi:10.1523/ENEURO.0082-22.2022
5. Newton AJH, McDougal RA, Hines ML, Lytton WW. Using NEURON for Reaction-Diffusion Modeling of Extracellular Dynamics. Front Neuroinform. 2018;12: 41.
6. Dura-Bernal S, Suter BA, Gleeson P, Cantarelli M, Quintana A, Rodriguez F, et al. NetPyNE, a tool for data-driven multiscale modeling of brain circuits. Elife. 2019;8. doi:10.7554/eLife.44494


Monday July 22, 2024 4:20pm - 6:20pm PDT
TBA

4:20pm PDT

P079 Reinforcement and evolutionary learning of spatial navigation using models of hippocampal and entorhinal circuits
Deep learning models that succeed on visual processing tasks were loosely inspired by simplified visual-system neuroanatomy. These architectures and their neuronal receptive fields (RFs) are not ideal for complex spatial reasoning and navigation tasks in dynamic environments. By using the architecture and RFs of mammalian hippocampal (HPC) spatial navigation circuits, we hope first to understand and then to improve the performance and efficiency of models trained on navigation tasks. Here, we develop detailed circuit models containing entorhinal grid cells, place cells, and motor output circuits, interfaced with agents learning to navigate in simulated environments. 


Our grid cells (GCs) have varied spatial scales, allowing multi-resolution agent localization. Convergence of GCs onto place neurons creates the place cells' irregular RFs and allows enhanced localization of agents. Navigation goals are encoded within a target area whose neurons have topographic RFs. Target and place cells project to an association area that integrates information about agent and goal location. This area projects to a motor output area that generates movements based on the maximally firing motor sub-population. Each area has excitatory (E) and inhibitory (I) interneurons modeled as event-based integrate-and-fire neurons that synapse using standard AMPA (GABA) time constants. 


We trained the models to perform navigation tasks to encoded target locations using a set of biologically inspired learning rules, including spike-timing dependent reinforcement learning (STDP/RL), evolutionary strategy (ES), and hybrid algorithms that incorporate the strengths of each individual algorithm [1,2]. Fitness functions integrated total moves towards a target and penalized moves away from the target. Extra reward was given for reaching a target. After training, we analyzed emergent structure in the circuits, and the dynamics enabling navigation. 
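A minimal sketch of the fitness function just described, assuming unit reward/penalty per step and an arbitrary reach bonus (the weights actually used in the study are not specified here, and the helper name is illustrative):

```python
import numpy as np

def navigation_fitness(agent_path, target, reach_bonus=10.0, reach_radius=1.0):
    """Toy fitness in the spirit described above: +1 for each step that
    reduces the distance to the target, -1 for each step that increases
    it, plus a bonus if the agent ends within reach_radius of the target."""
    path = np.asarray(agent_path, dtype=float)
    d = np.linalg.norm(path - np.asarray(target, dtype=float), axis=1)
    steps = np.sign(d[:-1] - d[1:])  # +1 toward target, -1 away
    fitness = steps.sum()
    if d[-1] <= reach_radius:
        fitness += reach_bonus
    return fitness

# An agent moving straight to the target scores every step plus the bonus.
good = navigation_fitness([(0, 0), (1, 0), (2, 0), (3, 0)], target=(3, 0))
# An agent moving away accumulates penalties.
bad = navigation_fitness([(0, 0), (-1, 0), (-2, 0)], target=(3, 0))
```

Both STDP/RL and ES can optimize a scalar score of this shape; they differ in whether the credit is assigned per spike-timing event or per whole-episode evaluation.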


Each algorithm trained models to navigate agents to targets. STDP/RL (ES) used short (long) time-scales for weight adjustment; consequently, post-learning dynamics in the circuit differed: STDP/RL enhanced synchronized neuronal firing and coding, whereas ES created diffuse neuronal firing and coding. Overall, ES may produce better fitness due to fewer constraints, but since STDP/RL uses the extra information of neuron-to-neuron communication, it can reach optimal performance more quickly. Learning redistributed synaptic weights: many synapses had extremely low weights, while a few had very high weights and contributed in an outsized fashion to output.


By implementing representations and computations performed within mammalian entorhinal, hippocampal, and motor circuits, we aim to set groundwork for developing next-generation algorithms that support spatial navigation. Our modeling allows generating data that could be analyzed and compared to neurophysiology data, offering improved interpretability of neurophysiological signals, and predictions on the function of specific cell classes and their dynamics. Overall, this could eventually lead to improved teaming and communication between models, agents, and humans. 


References


[1] Training spiking neuronal networks to perform motor control using reinforcement and evolutionary learning Front. Comput Neurosci 2022


[2] Training a spiking neuronal network model of visual-motor cortex to play a virtual racket-ball game using reinforcement learning PLoS One 2022




Monday July 22, 2024 4:20pm - 6:20pm PDT
TBA

4:20pm PDT

4:20pm PDT

P081 Behavior-dependent layer-specific oscillations, phase-amplitude coupling and spike-to-LFP coupling in a data-driven model of motor cortex circuits
Exploring the primary motor cortex (M1) is crucial for understanding motor functions in both health and disease, as well as for developing new treatments for motor disorders. Neural oscillations, a universal hallmark of brain activity, exhibit specific patterns within M1 related to motor control. During movement, gamma activity increases and beta activity decreases, reflecting active motor engagement. During immobility (including rest and isometric contraction), the opposite pattern is observed: gamma decreases and beta increases. Theta and delta oscillations orchestrate higher-frequency activities across the cortex, which manifests as cross-frequency coupling between theta/delta phase and beta/gamma amplitude. 
Previously, we built a biophysically detailed computational model of the M1 circuit validated against in vivo experimental data. The model spontaneously generated delta, beta, and gamma oscillations, with gamma increase and delta decrease during the movement state. Interestingly, beta and gamma were both locked to the delta cycle and occurred at opposite delta phases.
To further test our modeling results, we analyzed multi-layer LFP data recorded from the M1 of mice engaged in a reaching task, where they had to move a joystick following an auditory cue and maintain its position for a certain time period. Following the cue, we observed an overall low-frequency power decrease (below 25 Hz) in deep layers, except for the theta activity, which remained unchanged. High-frequency power increased in superficial and middle layers, with stronger gamma during the initial ballistic movement phase and stronger high-beta during the subsequent maintenance of joystick position. The amplitudes of gamma and beta were locked to the theta cycle, with varying depth profiles and preferred theta phases. Despite the discrepancies in the frequency bands between the model and the experiment, a common pattern was observed: movement-related gamma, holding-related beta, and low-frequency activity that modulates both of them in a phase-dependent manner.
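Phase-amplitude coupling of this kind is commonly quantified with a Tort-style modulation index; the sketch below computes it for a synthetic theta-modulated gamma envelope with invented parameters, not for the study's recordings.

```python
import numpy as np

fs, dur = 1000.0, 10.0          # sampling rate (Hz), duration (s)
t = np.arange(0.0, dur, 1.0 / fs)

# Synthetic ingredients: a 6 Hz theta phase and a gamma-band amplitude
# envelope locked to it (strongest at the theta trough). All parameters
# are invented for the illustration.
theta_phase = 2.0 * np.pi * 6.0 * t
gamma_amp = 1.0 + 0.8 * np.cos(theta_phase - np.pi)

def modulation_index(phase, amp, n_bins=18):
    """Tort-style modulation index: KL divergence of the phase-binned
    amplitude distribution from uniform, normalized by log(n_bins).
    0 means no coupling; larger values mean stronger coupling."""
    bins = np.floor((phase % (2 * np.pi)) / (2 * np.pi) * n_bins).astype(int)
    mean_amp = np.array([amp[bins == b].mean() for b in range(n_bins)])
    p = mean_amp / mean_amp.sum()
    return (np.log(n_bins) + np.sum(p * np.log(p))) / np.log(n_bins)

mi_coupled = modulation_index(theta_phase, gamma_amp)
mi_flat = modulation_index(theta_phase, np.ones_like(gamma_amp))
```

On real LFPs the phase and amplitude would first be extracted by band-pass filtering and a Hilbert transform; here they are constructed directly so the measure itself is easy to check.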
Moreover, we explored the interaction between spikes and local field potentials in these experiments. Specifically, we examined associations between the spikes emitted by cells (loosely identified by their extracellular waveform shape) located at different depths in the motor cortex and the phases of oscillations filtered in different frequency bands, recorded from both M1 and the VL thalamus. We found strong modulation across different frequency ranges, which was further used to constrain our detailed model. In addition, for some cells, slight but significant changes in spike-to-phase coupling were observed according to cognitive demand (rest, expectation, execution of the motor plan), which could be instrumental in further tuning movement-dependent actions associated with M1.


Acknowledgments: The work is supported by NIBIB U24EB028998 and NYS DOH01-C32250GG-3450000 grants


Monday July 22, 2024 4:20pm - 6:20pm PDT
TBA

4:20pm PDT

P082 Oscillation-Induced Firing Rate Shift in a Working Memory Model
Neural oscillations are ubiquitous in the brain and associated with various cognitive functions, including working memory (WM). Gamma oscillations are linked to selective activation and information transfer, while alpha/beta is associated with inhibitory control and status quo maintenance [1]. In WM tasks, gamma is involved in stimulus loading and selective retention, and alpha/beta in distractor filtering and the erasure of irrelevant information [2]. Despite a theoretical understanding of how neural activity level affects oscillations, the reciprocal effect, an oscillation-induced firing rate shift, remains underexplored. In the context of WM, it is of particular interest, as WM functions are often explained in terms of average neural activity levels [3].
In this study, we examine how input oscillations affect the time-averaged activity in a firing rate model with a rectified quadratic gain function, consisting of an excitatory and an inhibitory population. We introduce a method for estimating time-averaged firing rates in the presence of input oscillations without directly simulating the system. Utilizing harmonic balance, we decompose the dynamic variables into Fourier series, forming algebraic equations that self-consistently relate the various harmonic amplitudes, including the time-averaged activity represented by the 0-th harmonic. While the system must be solved numerically, this is faster than simulating the original model and offers insight into the system's time-averaged equilibria through graphical phase-plane analysis. By eliminating one harmonic balance equation, we derive curves analogous to nullclines on the phase plane, whose intersections indicate time-averaged equilibria, aiding in understanding how oscillations influence time-averaged activity and potentially alter activity regimes via bifurcation.
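The direction of the effect for a convex gain can be checked by direct averaging. The sketch below, with arbitrary input values, shows that a sinusoidal input raises the time-averaged output of a rectified quadratic gain above its static value; this is the same 0-th-harmonic shift that the harmonic balance method recovers without simulating the system.

```python
import numpy as np

def gain(x):
    """Rectified quadratic gain, as in the rate model above."""
    return np.maximum(x, 0.0) ** 2

# Arbitrary illustrative values: a steady input x0 plus a sinusoid of
# amplitude a, kept small enough that the input never crosses zero.
x0, a = 1.0, 0.5
t = np.linspace(0.0, 2.0 * np.pi, 10001)

rate_static = gain(x0)
rate_oscillating = np.mean(gain(x0 + a * np.sin(t)))
# While the rectifier stays inactive, the time average exceeds the
# static rate by a^2/2: <(x0 + a sin t)^2> = x0^2 + a^2/2.
```

When the oscillation is large enough to drive the input below the rectification threshold, the shift is no longer this simple quadratic correction, which is where the full harmonic balance treatment becomes necessary.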
We applied our method to a WM model with a potentiating E-E connection and demonstrated several effects of input oscillations on its functioning. Gamma input excited the system in the active state and increased the inter-state difference; alpha/beta input inhibited the active state and decreased the inter-state difference. These effects can be interpreted as an oscillation-induced increase and decrease, respectively, of information about WM content relative to the background. Strong alpha/beta destroyed the active state, erasing WM content. Gamma input decreased the critical stimulus amplitude required for loading into WM; alpha/beta increased this amplitude, protecting WM from overwriting. Finally, we showed that gamma input can support WM retention in a metastable system. All results were confirmed by direct numerical simulations of the model.
References
1. Engel AK, Fries P: Beta-band oscillations - signalling the status quo? Curr Opin Neurobiol 2010, 20(2):156-165.
2. Lundqvist M, Rose J, Herman P, Brincat SL, Buschman TJ, Miller EK: Gamma and Beta Bursts Underlie Working Memory. Neuron 2016, 90(1):152-164.
3. Goldman-Rakic PS: Cellular basis of working memory. Neuron 1995, 14(3):477-485.


Monday July 22, 2024 4:20pm - 6:20pm PDT
TBA

7:10pm PDT

Banquet Dinner
Monday July 22, 2024 7:10pm - 9:40pm PDT
 