Saturday, July 20
 

8:00am PDT

Registration
Saturday July 20, 2024 8:00am - 8:00am PDT

9:00am PDT

T01: Building mechanistic multiscale models using NEURON and NetPyNE to study brain function and disease
Understanding the brain requires studying its multiscale interactions, from molecules to cells to circuits and networks. Although vast experimental datasets are being generated across scales and modalities, integrating and interpreting this data remains a daunting challenge. This tutorial will highlight recent advances in mechanistic multiscale modeling and how it offers an unparalleled approach to integrate these data and provide insights into brain function and disease. Multiscale models facilitate the interpretation of experimental findings across different brain regions, brain scales (molecular, cellular, circuit, system), brain function (sensory perception, motor behavior, learning, etc), recording/imaging modalities (intracellular voltage, LFP, EEG, fMRI, etc) and disease/disorders (e.g., schizophrenia, epilepsy, ischemia, Parkinson's, etc). As such, it has a broad appeal to experimental, clinical, and computational neuroscientists, students, and educators.

This tutorial will introduce multiscale modeling using two NIH-funded tools: the NEURON 9.0 simulator (https://neuron.yale.edu/neuron/), including the Reaction-Diffusion (RxD) module, and the NetPyNE tool (http://netpyne.org). The tutorial will combine background, examples, and hands-on exercises covering the implementation of models at four key scales: (1) intracellular dynamics (e.g., calcium buffering, protein interactions), (2) single neuron electrophysiology (e.g., action potential propagation), (3) neurons in extracellular space (e.g., spreading depression), and (4) neuronal circuits, including dynamics such as oscillations and simulation of recordings such as local field potentials (LFP) and electroencephalography (EEG). For circuit simulations, we will use NetPyNE, a high-level interface to NEURON supporting programmatic and GUI specifications that facilitate the development, parallel simulation, and analysis of biophysically detailed neuronal circuits. We conclude with an example combining all three tools that links intracellular/extracellular molecular dynamics with network spiking activity and LFP/EEG. The tutorial will incorporate recent developments and new features in the NEURON and NetPyNE tools.
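To give a flavor of what the hands-on NetPyNE exercises involve, the sketch below specifies and simulates a tiny network through NetPyNE's declarative Python API. It is a minimal illustration assuming NetPyNE and NEURON are installed; the cell rule, population size, and parameter values are placeholders, not the tutorial's actual model.

```python
# Minimal NetPyNE sketch, assuming NetPyNE and NEURON are installed
# (pip install neuron netpyne). All labels and parameter values below are
# illustrative placeholders, not the tutorial's actual model.
from netpyne import specs, sim

netParams = specs.NetParams()

# One-compartment pyramidal cell with Hodgkin-Huxley channels
netParams.cellParams['PYR'] = {
    'secs': {'soma': {
        'geom': {'diam': 18.8, 'L': 18.8, 'Ra': 123.0},
        'mechs': {'hh': {'gnabar': 0.12, 'gkbar': 0.036, 'gl': 0.003, 'el': -70}}}}}

# Excitatory population, a synapse model, and background Poisson-like drive
netParams.popParams['E'] = {'cellType': 'PYR', 'numCells': 20}
netParams.synMechParams['exc'] = {'mod': 'Exp2Syn', 'tau1': 0.1, 'tau2': 5.0, 'e': 0}
netParams.stimSourceParams['bkg'] = {'type': 'NetStim', 'rate': 10, 'noise': 0.5}
netParams.stimTargetParams['bkg->E'] = {
    'source': 'bkg', 'conds': {'pop': 'E'}, 'weight': 0.01, 'delay': 5, 'synMech': 'exc'}

# Sparse recurrent excitatory connectivity
netParams.connParams['E->E'] = {
    'preConds': {'pop': 'E'}, 'postConds': {'pop': 'E'},
    'probability': 0.1, 'weight': 0.005, 'delay': 2, 'synMech': 'exc'}

cfg = specs.SimConfig()
cfg.duration = 1000  # ms
cfg.recordTraces = {'V_soma': {'sec': 'soma', 'loc': 0.5, 'var': 'v'}}
cfg.analysis['plotRaster'] = True

sim.createSimulateAnalyze(netParams=netParams, simConfig=cfg)
```

The same separation between network parameters (netParams) and simulation configuration (simConfig) scales up to the biophysically detailed multiscale circuits covered in the tutorial.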

Speakers (in alphabetical order):

Valery Bragin, NetPyNE circuit modeling
Charité – Berlin University Medicine / State University of New York (SUNY) Downstate Health Sciences University

Salvador Dura-Bernal, NetPyNE circuit modeling
State University of New York (SUNY) Downstate Health Sciences University

William W Lytton, Multiscale Modeling Overview
State University of New York (SUNY) Downstate Health Sciences University

Robert A McDougal, NEURON single cells
Yale University

Adam Newton, NEURON Reaction-Diffusion
State University of New York (SUNY) Downstate Health Sciences University



Saturday July 20, 2024 9:00am - 10:15am PDT
Cedro I

9:00am PDT

T02: From single-cell modeling to large-scale network dynamics with NEST Simulator
NEST is an established community code for simulating spiking neuronal network models that capture the full details of the structure of biological networks [1]. The simulator runs efficiently on various architectures, from laptops to supercomputers [2]. Over the years, a large body of peer-reviewed neuroscientific studies has been carried out with NEST, and it has become the reference code for research on neuromorphic hardware systems.

This tutorial provides hands-on experience with recent NEST feature additions. First, we explore how an astrocyte-mediated slow inward current impacts typical neural network simulations. Here, we introduce how astrocytes are implemented in NEST and investigate their dynamical behavior. Then, we create small neuron-astrocyte networks and explore their interactions before adding more complexity to the network structure. Second, we develop a functional network that can be trained to solve various tasks using a three-factor learning rule that approximates backpropagation through time: eligibility propagation (e-prop). Specifically, we use e-prop to train a network to solve a supervised regression task to generate temporal patterns and a supervised classification task to accumulate evidence. Third, we investigate how dendritic properties of neurons can be captured by constructing compartmental models in NEST. We import dendritic models from an existing repository and embed them in a network simulation. Finally, we learn to use NESTML, a domain-specific modeling language for neuron and synapse models. We implement a neuron model with an active dendritic compartment and a third-factor STDP synapse defined in NESTML. These models are then used in a network to perform learning, prediction, and replay of sequences of items, such as letters, images, or sounds [3].

[1] Gewaltig M-O & Diesmann M (2007) NEST (Neural Simulation Tool) Scholarpedia 2(4):1430.
[2] Jordan J., Ippen T., Helias M., Kitayama I., Sato M., Igarashi J., Diesmann M., Kunkel S. (2018). Extremely Scalable Spiking Neuronal Network Simulation Code: From Laptops to Exascale Computers. Frontiers in Neuroinformatics 12: 2
[3] Bouhadjar Y, Wouters DJ, Diesmann M, Tetzlaff T (2022) Sequence learning, prediction, and replay in networks of spiking neurons. PLoS Comput Biol 18(6): e1010233.
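All of the exercises above build on NEST's Python interface (PyNEST). As an orienting example, the sketch below wires up a small excitatory/inhibitory network using standard NEST 3.x models; it is a minimal, assumption-laden starting point (model names and parameters are generic NEST usage) and does not show the tutorial's astrocyte, e-prop, or NESTML models.

```python
# Minimal PyNEST sketch using standard NEST 3.x models; parameters are
# generic examples, not the tutorial's astrocyte/e-prop/NESTML material.
import nest

nest.ResetKernel()
nest.resolution = 0.1  # simulation step (ms)

# Excitatory and inhibitory populations of leaky integrate-and-fire neurons
exc = nest.Create('iaf_psc_alpha', 80)
inh = nest.Create('iaf_psc_alpha', 20)

# Poisson background drive and a spike recorder
noise = nest.Create('poisson_generator', params={'rate': 8000.0})
rec = nest.Create('spike_recorder')

# Sparse random connectivity; weights in pA, delays in ms
conn = {'rule': 'pairwise_bernoulli', 'p': 0.1}
nest.Connect(exc, exc + inh, conn, {'weight': 20.0, 'delay': 1.5})
nest.Connect(inh, exc + inh, conn, {'weight': -80.0, 'delay': 1.5})
nest.Connect(noise, exc + inh, syn_spec={'weight': 10.0})
nest.Connect(exc, rec)

nest.Simulate(1000.0)  # ms
print(f"Recorded {rec.n_events} excitatory spikes")
```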

Speakers (in alphabetical order):

Iiro Ahokainen, Astrocytes in NEST
Tampere University, Finland

Jasper Albers, E-prop in NEST
Jülich Research Centre, Germany

Joshua Boettcher, Compartmental models in NEST; NESTML
Jülich Research Centre, Germany



Saturday July 20, 2024 9:00am - 10:15am PDT
Cedro II

9:00am PDT

T03: Modeling cortical network dynamics
This tutorial aims to provide an essential introduction to modeling biophysically realistic neuronal networks, emphasizing the key circuit components underpinning asynchronous vs. synchronous dynamics. The morning session will introduce classic models of balanced excitatory/inhibitory (E–I) networks, with analytical insights into some mechanisms for the emergence of asynchronous and irregular firing. The afternoon session will shift focus towards network models displaying synchronous dynamics, with hands-on interactive Jupyter sessions and practical numerical simulations delving into fundamental theory and clinical applications, including models of epileptiform activity.

MORNING SESSION: Asynchronous dynamics
Emergence of irregular activity in networks of strongly coupled spiking neurons
Alessandro Sanzeni, Bocconi University, Milan, Italy

Introducing glia into cortical network models and the emergence of glial attractors
Maurizio De Pitta, Krembil Research Institute, Toronto, Canada

AFTERNOON SESSION: Synchronous Activity
Oscillations in networks of excitatory and inhibitory neurons: the PING framework
Scott Rich, University of Connecticut, CT, USA

Simulating network models of reactive astrogliosis underpinning epilepsy
Pamela Illescas-Maldonaldo and Vicente Medel, University of Valparaiso, Valparaiso, Chile


Saturday July 20, 2024 9:00am - 10:15am PDT
Cedro III

9:00am PDT

T04: Standardised, data-driven computational modelling with NeuroML using the Open Source Brain
Data-driven models of neurons and circuits are important for understanding how the properties of membrane conductances, synapses, dendrites, and the anatomical connectivity between neurons generate the complex dynamical behaviors of brain circuits in health and disease. However, even though data and models have been made publicly available in recent years, and the use of standards such as Neurodata Without Borders (NWB) (https://nwb.org) and NeuroML (https://neuroml.org) to promote FAIR (Findable, Accessible, Interoperable, and Reusable) neuroscience is on the rise, the development of data-driven models remains hampered by the difficulty of finding appropriate data and the inherent complexity involved in their construction.

The Open Source Brain web platform (OSB) (https://opensourcebrain.org) combines data, accompanying analysis tools, and computational models in a scalable resource. It indexes repositories from established sources such as the DANDI data archive (https://dandiarchive.org), the ModelDB model sharing archive (https://modeldb.science), and GitHub to provide easy access to a plethora of experimental data and models, including a large number standardized in NWB and NeuroML formats. OSB also incorporates the NeuroML software ecosystem. NeuroML is an established community standard and software ecosystem that enables the development of detailed biophysical models using a declarative, simulator-independent description. The software ecosystem supports all steps of the model lifecycle and allows users to automatically generate code and run their NeuroML models using well-established simulation engines (NEURON/NetPyNE).
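As a taste of NeuroML's declarative style, the sketch below uses libNeuroML and pyNeuroML to describe and validate a single Izhikevich cell. It assumes pyNeuroML is installed (pip install pyneuroml), and the ids and parameter values are illustrative rather than taken from the tutorial.

```python
# Minimal libNeuroML/pyNeuroML sketch (pip install pyneuroml); ids and
# parameter values are illustrative, not from the tutorial.
from neuroml import NeuroMLDocument, Izhikevich2007Cell
from neuroml.writers import NeuroMLWriter
from pyneuroml import pynml

doc = NeuroMLDocument(id="example_izh")

# Declarative cell description: parameters carry physical units as strings
izh = Izhikevich2007Cell(
    id="izh2007RS", v0="-60mV", C="100pF", k="0.7nS_per_mV",
    vr="-60mV", vt="-40mV", vpeak="35mV",
    a="0.03per_ms", b="-2nS", c="-50mV", d="100pA")
doc.izhikevich2007_cells.append(izh)

# Write the simulator-independent model file and validate it against the schema
NeuroMLWriter.write(doc, "example_izh.nml")
pynml.validate_neuroml2("example_izh.nml")
```

Because the description is declarative, the same file can then be run on the supported simulation engines without rewriting the model for each one.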

In this tutorial, attendees will learn about:
  • Finding data and models on OSB
  • NeuroML and its software ecosystem
  • Using NeuroML models on OSB
  • Building and simulating new NeuroML models constrained by the data on OSB

We will also assist with advanced tasks and discuss new features to aid researchers further.

Speakers (in alphabetical order):
Padraig Gleeson, University College London, London, UK
Boris Marin, Universidade Federal do ABC, Brazil
Angus Silver, University College London, London, UK
Ankur Sinha, University College London, London, UK


Saturday July 20, 2024 9:00am - 10:15am PDT
Cedro IV

9:00am PDT

T05: Understanding motor control through multiscale modeling of spinal cord neuronal circuits
Circuits of neurons in the spinal cord process sensory and descending information to produce the neural drive to the muscle and the ensuing movement. Dysfunctions in these circuits can generate abnormal movements, such as tremors. Models of spinal cord circuits have been used for several years to advance our understanding of basic principles of motor control, namely the recruitment and rate coding of motor units that explain force gradation and movement smoothness. More recently, computer simulations of biophysical models of the neuromuscular system have provided information on 1) how interneuron circuits could attenuate/cancel tremor signals, 2) how modulating sensory signals could produce intermittent recruitment of motor units in a posture control task, and 3) how axon demyelination could impact force and position control.

This tutorial will present an overview of neuromusculoskeletal models and several application examples. Two hands-on sessions will follow the introductory talk. The first session will use a web-based neuromuscular simulator (ReMoto – http://remoto.leb.usp.br) that can be easily configured (without coding) to study several aspects of force generation and control. In the second session, attendees will design a spinal cord circuit from scratch using general-purpose simulators of neurons and neuronal networks (NEURON and NetPyNE). There, a model including a pool of motor neurons with stochastic synaptic inputs will be used to show how the motor pool could reduce the independent synaptic noise and transmit the common input to the motor output.
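As a preview of the simulator-based session, the sketch below drives a single motoneuron-like NEURON cell with stochastic synaptic input through NEURON's Python interface. It is a minimal illustration: the morphology, channel choice, and input statistics are placeholders, not ReMoto's or the tutorial's actual model.

```python
# Minimal NEURON sketch of a motoneuron-like cell with stochastic synaptic
# drive (pip install neuron); morphology, channels, and input statistics are
# illustrative placeholders, not ReMoto's or the tutorial's model.
from neuron import h
from neuron.units import ms, mV

h.load_file('stdrun.hoc')

soma = h.Section(name='soma')
soma.L = soma.diam = 50   # um; large soma, motoneuron-like
soma.insert('hh')         # standard Hodgkin-Huxley channels

# Stochastic excitatory drive: a noisy NetStim onto a two-exponential synapse
syn = h.Exp2Syn(soma(0.5))
syn.tau1, syn.tau2, syn.e = 0.5, 5.0, 0
stim = h.NetStim()
stim.number, stim.interval, stim.noise, stim.start = 1000, 10, 1, 0
nc = h.NetCon(stim, syn)
nc.weight[0] = 0.05       # uS
nc.delay = 1 * ms

# Record somatic voltage and time, then run
v = h.Vector().record(soma(0.5)._ref_v)
t = h.Vector().record(h._ref_t)
h.finitialize(-65 * mV)
h.continuerun(500 * ms)
print(f"Peak somatic voltage: {max(v):.1f} mV")
```

Scaling this single cell to a pool of motor neurons with shared and independent inputs, as the session does with NetPyNE, is what allows studying how the pool filters noise while transmitting the common drive.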

Program:
(* denotes talk given remotely)

Neuromusculoskeletal models and their applications*
André Fabio Kohn, Biomedical Engineering Lab, University of Sao Paulo, Brazil

Hands-on session using ReMoto simulator
Renato Naville Watanabe, Biomechanics Laboratory, Federal University of ABC, Brazil

Hands-on session on designing spinal cord circuits using general-purpose simulators of neurons and neuronal networks (NEURON and NetPyNE)
Leonardo Abdala Elias, Neural Engineering Research Laboratory, University of Campinas, Brazil


Saturday July 20, 2024 9:00am - 10:15am PDT
Cedro V

9:00am PDT

T06: Implementing the Gaussian-Linear Hidden Markov model (GLHMM) with a package in Python for brain data analysis
Hidden Markov Models (HMMs) are a type of statistical model used to model data sequences where the system's underlying state is not directly observable. They are a powerful tool used in several applications, including speech recognition, natural language processing, and bioinformatics, mainly because of their data-driven approach. For this tutorial, we introduce the GLHMM model and Python package (https://github.com/vidaurre/glhmm). In short, the GLHMM is a general framework in which linear regression is used to parameterise a Gaussian state distribution, so it can accommodate a wide range of uses, including unsupervised, encoding, and decoding models. GLHMM is implemented as a Python toolbox emphasizing statistical testing and out-of-sample prediction, aimed at finding and characterizing brain-behaviour associations. The toolbox uses a stochastic variational inference approach, enabling it to handle large data sets in reasonable computational time. It can be applied to several data modalities, including animal recordings and even non-brain data, across a broad range of experimental paradigms.

For demonstration in this tutorial, we will show examples with fMRI data. The goal of this tutorial is to provide a step-by-step guide to using the toolbox. It is aimed at Master's and PhD students (and postdocs) interested in learning how to implement HMMs, mainly but not exclusively for brain data analysis.
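A minimal fitting loop with the glhmm package might look like the sketch below, based on the interface documented in the package repository; exact signatures and return values should be checked against the README, and the synthetic data stand in for real recordings.

```python
# Sketch of fitting a GLHMM with the glhmm package (pip install glhmm),
# based on the interface in the repository README; check exact signatures
# and return values against your installed version. The data are synthetic
# stand-ins for real recordings.
import numpy as np
from glhmm import glhmm

# Two "sessions" of 500 timepoints x 10 channels, stacked along time
Y = np.random.randn(1000, 10)
indices = np.array([[0, 500], [500, 1000]])  # start/end row of each session

# Gaussian HMM with K=4 states; no regressors X, hence model_beta='no'
hmm = glhmm.glhmm(model_beta='no', K=4, covtype='full')
Gamma, Xi, FE = hmm.train(X=None, Y=Y, indices=indices)

# Gamma: per-timepoint state probabilities; FE: free energy across iterations
print(Gamma.shape)  # (1000, 4)
```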

Objectives:
  1. Implement the algorithm with a Python package.
  2. Use the algorithm to estimate the relevant parameters of an HMM in a practical example on Human Connectome Project (HCP) data (https://www.humanconnectome.org/).

Prerequisites:
To complete this tutorial, participants will need to have the following knowledge and skills:
  1. Basic knowledge of programming in Python.
  2. Basic knowledge of probability and statistics.

Materials:
The tutorial will be available online as a Colab notebook. The source code for the algorithm will also be available online on GitHub (https://github.com/vidaurre/glhmm).

References:
  1. Diego Vidaurre, Nick Y. Larsen, Laura Masaracchia, Lenno R.P.T. Ruijters, Sonsoles Alonso, Christine Ahrends, Mark W. Woolrich. The Gaussian-Linear Hidden Markov model: a Python package. 2023. https://arxiv.org/abs/2312.07151
  2. Diego Vidaurre, Stephen M. Smith, and Mark W. Woolrich. Brain network dynamics are hierarchically organized in time. 2017 https://www.pnas.org/doi/full/10.1073/pnas.1705120114
  3. Diego Vidaurre, Romesh Abeysuriya, Robert Becker, Andrew J. Quinn, Fidel Alfaro-Almagro, Stephen M. Smith, Mark W. Woolrich, Discovering dynamic brain networks from big data in rest and task, NeuroImage, https://doi.org/10.1016/j.neuroimage.2017.06.077
  4. Diego Vidaurre, A. Llera, S.M. Smith, M.W. Woolrich, Behavioral relevance of spontaneous, transient brain network interactions in fMRI, NeuroImage, https://doi.org/10.1016/j.neuroimage.2020.117713


Saturday July 20, 2024 9:00am - 10:15am PDT
Jacarandá

9:00am PDT

T07: Single cell signal processing and data analysis in Matlab
Matlab (Mathworks, Natick, MA) is a popular computing environment that offers a simpler alternative to more advanced environments, especially for those less computationally inclined or for collaborating with experimentalists. In this tutorial, we will focus on the following tasks in Matlab: (1) signal processing of recorded or simulated traces (e.g., filtering noise, spike and burst finding in single-unit intracellular electrophysiology data in current-clamp, and extracting numerical characteristics); (2) analyzing tabular data (e.g., obtained from Excel or the result of other analyses); and (3) plotting and visualization. For all of these, we will take advantage of the PANDORA toolbox, an open-source project developed for analysis and visualization (RRID: SCR_001831, [1]).

PANDORA was initially developed to manage and analyze brute-force neuronal parameter search databases. However, it has proven helpful for various other types of simulation or experimental data analysis [2-7]. PANDORA’s original motivation was to offer an object-oriented program for analyzing neuronal data inside the Matlab environment, in particular with a database table-like object, similar to the “dataframe” object offered in the R ecosystem and the pandas Python module. PANDORA offers a similarly convenient syntax for a powerful database querying system. A typical workflow would consist of generating parameter sets for simulations, analyzing the resulting simulation output and other recorded data to find spikes, measuring additional characteristics to construct databases, and finally analyzing and visualizing these database contents. PANDORA provides objects for loading datasets, controlling simulations, importing/exporting data, and visualization. This tutorial uses the toolbox’s standard features and shows how to customize them for a given project.
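To make the dataframe analogy concrete (the tutorial itself works in Matlab with PANDORA's own syntax), the sketch below shows the equivalent query-and-summarize workflow using pandas in Python; column names and values are invented for illustration.

```python
# The tutorial itself uses PANDORA's Matlab syntax; this Python/pandas sketch
# only illustrates the analogous query-and-summarize workflow the text
# describes. Column names and values are invented for illustration.
import pandas as pd

# Each row: one simulated cell with its parameter set and measured features
db = pd.DataFrame({
    'gNa':        [100, 100, 120, 120],       # channel conductance parameter
    'gK':         [20, 40, 20, 40],
    'spike_rate': [12.5, 8.1, 19.3, 11.0],    # measured characteristic (Hz)
    'spike_amp':  [78.0, 75.2, 81.4, 79.9],   # measured characteristic (mV)
})

# Query the database, then summarize a characteristic per parameter value
high_rate = db[db['spike_rate'] > 10]
print(high_rate.groupby('gNa')['spike_amp'].mean())
```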

Curator:
Cengiz Gunay
Department of Information Technology, School of Science and Technology
Georgia Gwinnett College, Lawrenceville, GA, USA

References:
  1. Günay et al. 2009 Neuroinformatics, 7(2):93-111. doi: 10.1007/s12021-009-9048-z
  2. Doloc-Mihu et al. 2011 Journal of biological physics, 37(3), 263–283. doi:10.1007/s10867-011-9215-y;
  3. Lin et al. 2012 J Neurosci 32(21): 7267–77;
  4. Wolfram et al. 2014 J Neurosci, 34(7): 2538–2543; doi: 10.1523/JNEUROSCI.4511-13.2014;
  5. Günay et al. 2015 PLoS Comp Bio. doi: 10.1371/journal.pcbi.1004189;
  6. Wenning et al. 2018 eLife 2018;7:e31123 doi: 10.7554/eLife.31123;
  7. Günay et al. 2019 eNeuro, 6(4), ENEURO.0417-18.2019. doi:10.1523/ENEURO.0417-18.2019


Saturday July 20, 2024 9:00am - 10:15am PDT
Cedro VI

10:15am PDT

Coffee break
Saturday July 20, 2024 10:15am - 10:45am PDT

10:45am PDT

T01: Building mechanistic multiscale models using NEURON and NetPyNE to study brain function and disease
Understanding the brain requires studying its multiscale interactions, from molecules to cells to circuits and networks. Although vast experimental datasets are being generated across scales and modalities, integrating and interpreting this data remains a daunting challenge. This tutorial will highlight recent advances in mechanistic multiscale modeling and how it offers an unparalleled approach to integrate these data and provide insights into brain function and disease. Multiscale models facilitate the interpretation of experimental findings across different brain regions, brain scales (molecular, cellular, circuit, system), brain function (sensory perception, motor behavior, learning, etc), recording/imaging modalities (intracellular voltage, LFP, EEG, fMRI, etc) and disease/disorders (e.g., schizophrenia, epilepsy, ischemia, Parkinson's, etc). As such, it broadly appeals to experimental, clinical, and computational neuroscientists, students, and educators.

This tutorial will introduce multiscale modeling using two NIH-funded tools: the NEURON 9.0 simulator (https://neuron.yale.edu/neuron/), including the Reaction-Diffusion (RxD) module and the NetPyNE tool (http://netpyne.org). The tutorial will combine background, examples, and hands-on exercises covering the implementation of models at four key scales: (1) intracellular dynamics (e.g., calcium buffering, protein interactions), (2) single neuron electrophysiology (e.g., action potential propagation), (3) neurons in extracellular space (e.g., spreading depression), and (4) neuronal circuits, including dynamics such as oscillations and simulation of recordings such as local field potentials (LFP) and electroencephalography (EEG). For circuit simulations, we will use NetPyNE, a high-level interface to NEURON supporting programmatic and GUI specifications that facilitate the development, parallel simulation, and analysis of biophysically detailed neuronal circuits. We conclude with an example combining all three tools that link intracellular/extracellular molecular dynamics with network spiking activity and LFP/EEG. The tutorial will incorporate recent developments and new features in the NEURON and NetPyNE tools.

Speakers (in alphabetical order):

Valery Bragin, NetPyNE circuit modeling
Charité – Berlin University Medicine / State University of New York (SUNY) Downstate Health Sciences University

Salvador Dura-Bernal, NetPyNE circuit modeling
State University of New York (SUNY) Downstate Health Sciences University

William W Lytton, Multiscale Modeling Overview
State University of New York (SUNY) Downstate Health Sciences University

Robert A McDougal, NEURON single cells
Yale University

Adam Newton, NEURON Reaction-Diffusion
State University of New York (SUNY) Downstate Health Sciences University



Saturday July 20, 2024 10:45am - 12:15pm PDT
Cedro I

10:45am PDT

T02: From single-cell modeling to large-scale network dynamics with NEST Simulator
NEST is an established community code for simulating spiking neuronal network models that capture the full details of the structure of biological networks [1]. The simulator runs efficiently on various architectures, from laptops to supercomputers [2]. Over the years, a large body of peer-reviewed neuroscientific studies has been carried out with NEST, and it has become the reference code for research on neuromorphic hardware systems.

This tutorial provides hands-on experience with recent NEST feature additions. First, we explore how an astrocyte-mediated slow inward current impacts typical neural network simulations. Here, we introduce how astrocytes are implemented in NEST and investigate their dynamical behavior. Then, we create small neuron-astrocyte networks and explore their interactions before adding more complexity to the network structure. Second, we develop a functional network that can be trained to solve various tasks using a three-factor learning rule that approximates backpropagation through time: eligibility propagation (e-prop). Specifically, we use e-prop to train a network to solve a supervised regression task to generate temporal patterns and a supervised classification task to accumulate evidence. Third, we investigate how dendritic properties of neurons can be captured by constructing compartmental models in NEST. We import dendritic models from an existing repository and embed them in a network simulation. Finally, we learn to use NESTML, a domain-specific modeling language for neuron and synapse models. We implement a neuron model with an active dendritic compartment and a third-factor STDP synapse defined in NESTML. These models are then used in a network to perform learning, prediction, and replay of sequences of items, such as letters, images, or sounds [3].

[1] Gewaltig M-O & Diesmann M (2007) NEST (Neural Simulation Tool) Scholarpedia 2(4):1430.
[2] Jordan J., Ippen T., Helias M., Kitayama I., Sato M., Igarashi J., Diesmann M., Kunkel S. (2018). Extremely Scalable Spiking Neuronal Network Simulation Code: From Laptops to Exascale Computers. Frontiers in Neuroinformatics 12: 2
[3] Bouhadjar Y, Wouters DJ, Diesmann M, Tetzlaff T (2022) Sequence learning, prediction, and replay in networks of spiking neurons. PLoS Comput Biol 18(6): e1010233.

Speakers (in alphabetical order):

Iiro Ahokainen, Astrocytes in NEST
Tampere University, Finland

Jasper Albers, E-prop in NEST
Jülich Research Centre, Germany

Joshua Boettcher, Compartmental models in NEST; NESTML
Jülich Research Centre, Germany



Saturday July 20, 2024 10:45am - 12:15pm PDT
Cedro II

10:45am PDT

T03: Modeling cortical network dynamics
This tutorial aims to provide an essential introduction to modeling biophysically realistic neuronal networks, emphasizing the key circuit components underpinning asynchronous vs. synchronous dynamics. The morning session will introduce classic models of balanced excitatory/inhibitory (E–I) networks, with analytical insights into some mechanisms for the emergence of asynchronous and irregular firing. The afternoon session will shift focus towards network models displaying synchronous dynamics, with hands-on interactive Jupyter sessions and practical numerical simulations delving into fundamental theory and clinical applications, including models of epileptiform activity.

MORNING SESSION: Asynchronous dynamics
Emergence of irregular activity in networks of strongly coupled spiking neurons
Alessandro Sanzeni, Bocconi University, Milan, Italy

Introducing glia into cortical network models and the emergence of glial attractors
Maurizio De Pitta, Krembil Research Institute, Toronto, Canada

AFTERNOON SESSION: Synchronous Activity
Oscillations in networks of excitatory and inhibitory neurons: the PING framework
Scott Rich, University of Connecticut, CT, USA

Simulating network models of reactive astrogliosis underpinning epilepsy
Pamela Illescas-Maldonaldo and Vicente Medel, University of Valparaiso, Valparaiso, Chile


Saturday July 20, 2024 10:45am - 12:15pm PDT
Cedro III

10:45am PDT

T04: Standardised, data-driven computational modelling with NeuroML using the Open Source Brain
Data-driven models of neurons and circuits are important for understanding how the properties of membrane conductances, synapses, dendrites, and the anatomical connectivity between neurons generate the complex dynamical behaviors of brain circuits in health and disease. However, even though data and models have been made publicly available in recent years, and the use of standards such as Neurodata Without Borders (NWB) (https://nwb.org) and NeuroML (https://neuroml.org) to promote FAIR (Findable, Accessible, Interoperable, and Reusable) neuroscience is on the rise, the development of data-driven models remains hampered by the difficulty of finding appropriate data and the inherent complexity involved in their construction.

The Open Source Brain web platform (OSB) (https://opensourcebrain.org) combines data, accompanying analysis tools, and computational models in a scalable resource. It indexes repositories from established sources such as the DANDI data archive (https://dandiarchive.org), the ModelDB model sharing archive (https://modeldb.science), and GitHub to provide easy access to a plethora of experimental data and models, including a large number standardized in NWB and NeuroML formats. OSB also incorporates the NeuroML software ecosystem. NeuroML is an established community standard and software ecosystem that enables the development of detailed biophysical models using a declarative, simulator-independent description. The software ecosystem supports all steps of the model lifecycle and allows users to automatically generate code and run their NeuroML models using well-established simulation engines (NEURON/NetPyNE).

In this tutorial, attendees will learn about:
  • Finding data and models on OSB
  • NeuroML and its software ecosystem
  • Using NeuroML models on OSB
  • Building and simulating new NeuroML models constrained by the data on OSB

We will also assist with advanced tasks and discuss new features to aid researchers further.

Speakers (in alphabetical order):
Padraig Gleeson, University College London, London, UK
Boris Marin, Universidade Federal do ABC, Brazil
Angus Silver, University College London, London, UK
Ankur Sinha, University College London, London, UK


Saturday July 20, 2024 10:45am - 12:15pm PDT
Cedro IV

10:45am PDT

T05: Understanding motor control through multiscale modeling of spinal cord neuronal circuits
Circuits of neurons in the spinal cord process sensory and descending information to produce the neural drive to the muscle and the ensuing movement. Dysfunctions in these circuits can generate abnormal movements, such as tremors. Models of spinal cord circuits have been used for several years to advance our understanding of basic principles of motor control, namely the recruitment and rate coding of motor units that explain force gradation and movement smoothness. More recently, computer simulations of biophysical models of the neuromuscular system have provided information on 1) how interneuron circuits could attenuate/cancel tremor signals, 2) how modulating sensory signals could produce intermittent recruitment of motor units in a posture control task, and 3) how axon demyelination could impact force and position control.

This tutorial will present an overview of neuromusculoskeletal models and several application examples. Two hands-on sessions will follow the introductory talk. The first session will use a web-based neuromuscular simulator (ReMoto – http://remoto.leb.usp.br) that can be easily configured (without coding) to study several aspects of force generation and control. In the second session, attendees will design a spinal cord circuit from scratch using general-purpose simulators of neurons and neuronal networks (NEURON and NetPyNE). There, a model including a pool of motor neurons with stochastic synaptic inputs will be used to show how the motor pool could reduce the independent synaptic noise and transmit the common input to the motor output.

Program:
(* denotes talk given remotely)

Neuromusculoskeletal models and their applications*
André Fabio Kohn, Biomedical Engineering Lab, University of Sao Paulo, Brazil

Hands-on session using ReMoto simulator
Renato Naville Watanabe, Biomechanics Laboratory, Federal University of ABC, Brazil

Hands-on session on designing spinal cord circuits using general-purpose simulators of neurons and neuronal networks (NEURON and NetPyNE)
Leonardo Abdala Elias, Neural Engineering Research Laboratory, University of Campinas, Brazil


Saturday July 20, 2024 10:45am - 12:15pm PDT
Cedro V

10:45am PDT

T06: Implementing the Gaussian-Linear Hidden Markov model (GLHMM) with a package in Python for brain data analysis
Hidden Markov Models (HMMs) are a type of statistical model used to model data sequences where the system's underlying state is not directly observable. They are a powerful tool used in several applications, including speech recognition, natural language processing, and bioinformatics, mainly because of their data-driven approach. For this tutorial, we introduce the GLHMM model and Python package (https://github.com/vidaurre/glhmm). In short, the GLHMM is a general framework in which linear regression is used to parameterise a Gaussian state distribution, so it can accommodate a wide range of uses, including unsupervised, encoding, and decoding models. GLHMM is implemented as a Python toolbox emphasizing statistical testing and out-of-sample prediction, aimed at finding and characterizing brain-behaviour associations. The toolbox uses a stochastic variational inference approach, enabling it to handle large data sets in reasonable computational time. It can be applied to several data modalities, including animal recordings and even non-brain data, across a broad range of experimental paradigms.

For demonstration in this tutorial, we will show examples with fMRI data. The goal of this tutorial is to provide a step-by-step guide to using the toolbox. It is aimed at Master's and PhD students (and postdocs) interested in learning how to implement HMMs, mainly but not exclusively for brain data analysis.

Objectives:
  1. Implement the algorithm with a Python package.
  2. Use the algorithm to estimate the relevant parameters of an HMM in a practical example on Human Connectome Project (HCP) data (https://www.humanconnectome.org/).

Prerequisites:
To complete this tutorial, participants will need to have the following knowledge and skills:
  1. Basic knowledge of programming in Python.
  2. Basic knowledge of probability and statistics.

Materials:
The tutorial will be available online as a Colab notebook. The source code for the algorithm will also be available online on GitHub (https://github.com/vidaurre/glhmm).

References:
  1. Diego Vidaurre, Nick Y. Larsen, Laura Masaracchia, Lenno R.P.T. Ruijters, Sonsoles Alonso, Christine Ahrends, Mark W. Woolrich. The Gaussian-Linear Hidden Markov model: a Python package. 2023. https://arxiv.org/abs/2312.07151
  2. Diego Vidaurre, Stephen M. Smith, and Mark W. Woolrich. Brain network dynamics are hierarchically organized in time. 2017 https://www.pnas.org/doi/full/10.1073/pnas.1705120114
  3. Diego Vidaurre, Romesh Abeysuriya, Robert Becker, Andrew J. Quinn, Fidel Alfaro-Almagro, Stephen M. Smith, Mark W. Woolrich, Discovering dynamic brain networks from big data in rest and task, NeuroImage, https://doi.org/10.1016/j.neuroimage.2017.06.077
  4. Diego Vidaurre, A. Llera, S.M. Smith, M.W. Woolrich, Behavioral relevance of spontaneous, transient brain network interactions in fMRI, NeuroImage, https://doi.org/10.1016/j.neuroimage.2020.117713


Saturday July 20, 2024 10:45am - 12:15pm PDT
Jacarandá

10:45am PDT

T07: Single cell signal processing and data analysis in Matlab
Matlab (Mathworks, Natick, MA) is a popular computing environment that offers a simpler alternative to more advanced environments, especially for those less computationally inclined or for collaborating with experimentalists. In this tutorial, we will focus on the following tasks in Matlab: (1) signal processing of recorded or simulated traces (e.g., filtering noise, spike and burst finding in single-unit intracellular electrophysiology data in current-clamp, and extracting numerical characteristics); (2) analyzing tabular data (e.g., obtained from Excel or the result of other analyses); and (3) plotting and visualization. For all of these, we will take advantage of the PANDORA toolbox, an open-source project developed for analysis and visualization (RRID: SCR_001831, [1]).

PANDORA was initially developed to manage and analyze brute-force neuronal parameter search databases. However, it has proven helpful for various other types of simulation or experimental data analysis [2-7]. PANDORA’s original motivation was to offer an object-oriented program for analyzing neuronal data inside the Matlab environment, in particular with a database table-like object, similar to the “dataframe” object offered in the R ecosystem and the pandas Python module. PANDORA offers a similarly convenient syntax for a powerful database querying system. A typical workflow would consist of generating parameter sets for simulations, analyzing the resulting simulation output and other recorded data to find spikes, measuring additional characteristics to construct databases, and finally analyzing and visualizing these database contents. PANDORA provides objects for loading datasets, controlling simulations, importing/exporting data, and visualization. This tutorial uses the toolbox’s standard features and shows how to customize them for a given project.

Curator:
Cengiz Gunay
Department of Information Technology, School of Science and Technology
Georgia Gwinnett College, Lawrenceville, GA, USA

References:
  1. Günay et al. 2009 Neuroinformatics, 7(2):93-111. doi: 10.1007/s12021-009-9048-z
  2. Doloc-Mihu et al. 2011 Journal of biological physics, 37(3), 263–283. doi:10.1007/s10867-011-9215-y;
  3. Lin et al. 2012 J Neurosci 32(21): 7267–77;
  4. Wolfram et al. 2014 J Neurosci, 34(7): 2538–2543; doi: 10.1523/JNEUROSCI.4511-13.2014;
  5. Günay et al. 2015 PLoS Comp Bio. doi: 10.1371/journal.pcbi.1004189;
  6. Wenning et al. 2018 eLife 2018;7:e31123 doi: 10.7554/eLife.31123;
  7. Günay et al. 2019 eNeuro, 6(4), ENEURO.0417-18.2019. doi:10.1523/ENEURO.0417-18.2019


Saturday July 20, 2024 10:45am - 12:15pm PDT
Cedro VI

12:15pm PDT

Lunch
Saturday July 20, 2024 12:15pm - 2:00pm PDT

2:00pm PDT

T01: Building mechanistic multiscale models using NEURON and NetPyNE to study brain function and disease
Understanding the brain requires studying its multiscale interactions, from molecules to cells to circuits and networks. Although vast experimental datasets are being generated across scales and modalities, integrating and interpreting this data remains a daunting challenge. This tutorial will highlight recent advances in mechanistic multiscale modeling and how it offers an unparalleled approach to integrate these data and provide insights into brain function and disease. Multiscale models facilitate the interpretation of experimental findings across different brain regions, brain scales (molecular, cellular, circuit, system), brain function (sensory perception, motor behavior, learning, etc), recording/imaging modalities (intracellular voltage, LFP, EEG, fMRI, etc) and disease/disorders (e.g., schizophrenia, epilepsy, ischemia, Parkinson's, etc). As such, it has a broad appeal to experimental, clinical, and computational neuroscientists, students, and educators.

This tutorial will introduce multiscale modeling using two NIH-funded tools: the NEURON 9.0 simulator (https://neuron.yale.edu/neuron/), including the Reaction-Diffusion (RxD) module and the NetPyNE tool (http://netpyne.org). The tutorial will combine background, examples, and hands-on exercises covering the implementation of models at four key scales: (1) intracellular dynamics (e.g., calcium buffering, protein interactions), (2) single neuron electrophysiology (e.g., action potential propagation), (3) neurons in extracellular space (e.g., spreading depression), and (4) neuronal circuits, including dynamics such as oscillations and simulation of recordings such as local field potentials (LFP) and electroencephalography (EEG). For circuit simulations, we will use NetPyNE, a high-level interface to NEURON supporting programmatic and GUI specifications that facilitate the development, parallel simulation, and analysis of biophysically detailed neuronal circuits. We conclude with an example combining all three tools that link intracellular/extracellular molecular dynamics with network spiking activity and LFP/EEG. The tutorial will incorporate recent developments and new features in the NEURON and NetPyNE tools.

Speakers (in alphabetical order):

Valery Bragin, NetPyNE circuit modeling
Charité – Berlin University Medicine / State University of New York (SUNY) Downstate Health Sciences University

Salvador Dura-Bernal, NetPyNE circuit modeling
State University of New York (SUNY) Downstate Health Sciences University

William W Lytton, Multiscale Modeling Overview
State University of New York (SUNY) Downstate Health Sciences University

Robert A McDougal, NEURON single cells
Yale University

Adam Newton, NEURON Reaction-Diffusion
State University of New York (SUNY) Downstate Health Sciences University



Saturday July 20, 2024 2:00pm - 3:30pm PDT
Cedro I

2:00pm PDT

T02: From single-cell modeling to large-scale network dynamics with NEST Simulator
NEST is an established community code for simulating spiking neuronal network models that capture the full details of the structure of biological networks [1]. The simulator runs efficiently on various architectures, from laptops to supercomputers [2]. Over the years, a large body of peer-reviewed neuroscientific studies has been carried out with NEST, and it has become the reference code for research on neuromorphic hardware systems.

This tutorial provides hands-on experience with recent NEST feature additions. First, we explore how an astrocyte-mediated slow inward current impacts typical neural network simulations. Here, we introduce how astrocytes are implemented in NEST and investigate their dynamical behavior. Then, we create small neuron-astrocyte networks and explore their interactions before adding more complexity to the network structure. Second, we develop a functional network that can be trained to solve various tasks using a three-factor learning rule that approximates backpropagation through time: eligibility propagation (e-prop). Specifically, we use e-prop to train a network to solve a supervised regression task to generate temporal patterns and a supervised classification task to accumulate evidence. Third, we investigate how dendritic properties of neurons can be captured by constructing compartmental models in NEST. We import dendritic models from an existing repository and embed them in a network simulation. Finally, we learn to use NESTML, a domain-specific modeling language for neuron and synapse models. We implement a neuron model with an active dendritic compartment and a third-factor STDP synapse defined in NESTML. These models are then used in a network to perform learning, prediction, and replay of sequences of items, such as letters, images, or sounds [3].

[1] Gewaltig M-O & Diesmann M (2007) NEST (Neural Simulation Tool) Scholarpedia 2(4):1430.
[2] Jordan J., Ippen T., Helias M., Kitayama I., Sato M., Igarashi J., Diesmann M., Kunkel S. (2018). Extremely Scalable Spiking Neuronal Network Simulation Code: From Laptops to Exascale Computers. Frontiers in Neuroinformatics 12: 2
[3] Bouhadjar Y, Wouters DJ, Diesmann M, Tetzlaff T (2022) Sequence learning, prediction, and replay in networks of spiking neurons. PLoS Comput Biol 18(6): e1010233.

Speakers (in alphabetical order):

Iiro Ahokainen, Astrocytes in NEST
Tampere University, Finland

Jasper Albers, E-prop in NEST
Jülich Research Centre, Germany

Joshua Boettcher, Compartmental models in NEST; NESTML
Jülich Research Centre, Germany



Saturday July 20, 2024 2:00pm - 3:30pm PDT
Cedro II

2:00pm PDT

T03: Modeling cortical network dynamics
This tutorial aims to provide an essential introduction to modeling biophysically realistic neuronal networks, emphasizing the key circuit components underpinning asynchronous vs. synchronous dynamics. The morning session will introduce classic models of balanced excitatory/inhibitory (E–I) networks, with analytical insights into some mechanisms for the emergence of asynchronous and irregular firing. The afternoon session will shift focus towards network models displaying synchronous dynamics, with hands-on interactive Jupyter sessions and practical numerical simulations delving into fundamental theory and clinical applications, including models of epileptiform activity.

MORNING SESSION: Asynchronous dynamics
Emergence of irregular activity in networks of strongly coupled spiking neurons
Alessandro Sanzeni, Bocconi University, Milan, Italy

Introducing glia into cortical network models and the emergence of glial attractors
Maurizio De Pitta, Krembil Research Institute, Toronto, Canada

AFTERNOON SESSION: Synchronous Activity
Oscillations in networks of excitatory and inhibitory neurons: the PING framework
Scott Rich, University of Connecticut, CT, USA

Simulating network models of reactive astrogliosis underpinning epilepsy
Pamela Illescas-Maldonaldo and Vicente Medel, University of Valparaiso, Valparaiso, Chile


Saturday July 20, 2024 2:00pm - 3:30pm PDT
Cedro III

2:00pm PDT

T04: Standardised, data-driven computational modelling with NeuroML using the Open Source Brain
Data-driven models of neurons and circuits are important for understanding how the properties of membrane conductances, synapses, dendrites, and the anatomical connectivity between neurons generate the complex dynamical behaviors of brain circuits in health and disease. However, even though data and models have been made publicly available in recent years, and the use of standards such as Neurodata Without Borders (NWB) (https://nwb.org) and NeuroML (https://neuroml.org) to promote FAIR (Findable, Accessible, Interoperable, and Reusable) neuroscience is on the rise, the development of data-driven models remains hampered by the difficulty of finding appropriate data and the inherent complexity involved in their construction.

The Open Source Brain web platform (OSB) (https://opensourcebrain.org) combines data, accompanying analysis tools, and computational models in a scalable resource. It indexes repositories from established sources such as the DANDI data archive (https://dandiarchive.org), the ModelDB model sharing archive (https://modeldb.science), and GitHub to provide easy access to a plethora of experimental data and models, including a large number standardized in NWB and NeuroML formats. OSB also incorporates the NeuroML software ecosystem. NeuroML is an established community standard and software ecosystem that enables the development of detailed biophysical models using a declarative, simulator-independent description. The software ecosystem supports all steps of the model lifecycle and allows users to automatically generate code and run their NeuroML models using well-established simulation engines (NEURON/NetPyNE).

In this tutorial, attendees will learn about:
  • Finding data and models on OSB
  • NeuroML and its software ecosystem
  • Using NeuroML models on OSB
  • Building and simulating new NeuroML models constrained by the data on OSB

We will also assist with advanced tasks and discuss new features to aid researchers further.

Speakers (in alphabetical order):
Padraig Gleeson, University College London, London, UK
Boris Marin, Universidade Federal do ABC, Brazil
Angus Silver, University College London, London, UK
Ankur Sinha, University College London, London, UK


Saturday July 20, 2024 2:00pm - 3:30pm PDT
Cedro IV

2:00pm PDT

T08: Unraveling dynamics and connectivity from spiking time series of in-vitro neuronal cultures
This tutorial will equip participants with comprehensive skills for analyzing spiking time series from DishBrain, a pioneering system demonstrating rudimentary biological intelligence by leveraging the adaptive properties of neurons [1,2]. DishBrain integrates in-vitro neuronal networks with in-silico computing through high-density multi-electrode arrays (HD-MEAs). These cultured neuronal networks exhibit biologically-based adaptive intelligence within a simulated gameplay environment of ‘Pong’ in real-time, facilitated by closed-loop stimulation and recordings [1]. We will introduce this unique dataset and provide samples of these time series data during gameplay and spontaneous activity, their behavioral labels in the game environment, and custom Python scripts in Jupyter Notebook to analyze spiking data from 1024 channels on the HD-MEA.

The tutorial will emphasize methods for extracting meaningful insights from a time series of spiking data. We will address the challenges of sparse, high-dimensional data and introduce custom-designed pipelines. These include lower-dimensional embedding algorithms such as UMAP [3], t-SNE [4], and CEBRA [5]. Additionally, we will demonstrate the application of Gaussian kernels for data smoothing, followed by community detection techniques to identify influential neurons and reduce computational complexity. These techniques will enable visualization and interpretation of complex neural activity patterns in lower-dimensional spaces, facilitating insights into network dynamics.
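As a concrete example of the smoothing-then-embedding step, the sketch below applies a Gaussian kernel to a synthetic spike raster and embeds the population activity with UMAP; array sizes and kernel width are illustrative, not the DishBrain dataset's actual parameters.

```python
# Sketch of Gaussian-kernel smoothing followed by UMAP embedding, assuming
# numpy, scipy, and umap-learn are installed; array sizes and kernel width
# are illustrative, not the DishBrain dataset's actual parameters.
import numpy as np
from scipy.ndimage import gaussian_filter1d
import umap

# Synthetic stand-in: binary spike raster, 256 channels x 2000 time bins
rng = np.random.default_rng(0)
spikes = (rng.random((256, 2000)) < 0.01).astype(float)

# Smoothing along time converts sparse spikes to continuous firing rates
rates = gaussian_filter1d(spikes, sigma=20, axis=1)

# Embed time points into 2D: each sample is a population activity vector
embedding = umap.UMAP(n_components=2, n_neighbors=30).fit_transform(rates.T)
print(embedding.shape)  # (2000, 2)
```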

Finally, we will introduce methods to quantify statistical dependence and infer connectivity networks from time series, with a particular focus on transfer entropy analysis. Transfer entropy is a powerful tool for quantifying the directed information flow in complex systems [6]. Through hands-on exercises and practical demonstrations, participants will learn how to apply transfer entropy analysis using spiking or continuous data to unravel patterns of information flow in neuronal ensembles.
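For intuition about transfer entropy, the sketch below computes a plug-in estimate for two binary spike trains with history length one, written directly from the definition; practical analyses use dedicated estimators with bias correction (see [6]), so treat this as a conceptual illustration with hypothetical data.

```python
# Plug-in transfer entropy estimate for two binary spike trains with history
# length k = l = 1, written directly from the definition; real analyses use
# dedicated estimators with bias correction [6]. Purely conceptual.
import numpy as np

def transfer_entropy(source, target):
    """TE(source -> target) in bits for binary sequences, history length 1."""
    x = np.asarray(source)[:-1]        # source past
    y = np.asarray(target)[:-1]        # target past
    y_next = np.asarray(target)[1:]    # target future
    te = 0.0
    for yn in (0, 1):
        for yp in (0, 1):
            for xp in (0, 1):
                p_xyz = np.mean((y_next == yn) & (y == yp) & (x == xp))
                if p_xyz == 0:
                    continue
                p_xz = np.mean((y == yp) & (x == xp))
                p_z = np.mean(y == yp)
                p_yz = np.mean((y_next == yn) & (y == yp))
                te += p_xyz * np.log2(p_xyz * p_z / (p_xz * p_yz))
    return te

rng = np.random.default_rng(1)
src = rng.integers(0, 2, 5000)
tgt = np.roll(src, 1)  # target copies the source with a one-step lag
print(f"TE source->target: {transfer_entropy(src, tgt):.2f} bits")  # ~1 bit
```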

By the end of the tutorial, participants will have a robust toolkit for analyzing and interpreting high-dimensional, sparse spiking time series and extracting insights into neural network dynamics.

Program:
Introduction to DishBrain: In-vitro spiking neuronal cultures in an embodied game environment
Forough Habibollahi, Cortical Labs Pty Ltd, Melbourne, Australia

Lower dimensional embedding approaches to study complex network dynamics of neuronal systems
Moein Khajehnejad, Turner Institute for Brain and Mental Health, Monash University, Clayton, Australia

Inferring directed statistical dependence using Transfer Entropy
Leonardo Novelli, Turner Institute for Brain and Mental Health, Monash University, Clayton, Australia

References
  1. Kagan BJ, Kitchen AC, Tran NT, Habibollahi F, Khajehnejad M, Parker BJ, Bhat A, Rollo B, Razi A, Friston KJ. (2022) In vitro neurons learn and exhibit sentience when embodied in a simulated game world. Neuron 110(23):3952-69.
  2. Cortical Labs: https://corticallabs.com/
  3. McInnes, L., Healy, J., Saul, N. & Großberger, L. (2018) UMAP: Uniform Manifold Approximation and Projection for dimension reduction. J. Open Source Softw. 3, 86.
  4. Maaten, L. V., Postma, E. O. & Herik, J. V. (2009) Dimensionality reduction: a comparative review. J. Mach. Learn. Res. 10, 13.
  5. Schneider S, Lee JH, Mathis MW. (2023) Learnable latent embeddings for joint behavioral and neural analysis. Nature 617, 360–368.
  6. Bossomaier, T., Barnett, L., Harré, M., & Lizier, J. T. (2016). An Introduction to Transfer Entropy. Springer International Publishing.


Saturday July 20, 2024 2:00pm - 3:30pm PDT
Cedro V

2:00pm PDT

T09: Interactive data visualization techniques
The Web has become a dominant computing platform, and web standards and programming tools have improved tremendously, with many becoming available at low or no cost; traditional desktop applications are steadily moving to web versions. For computational neuroscience, Jupyter notebooks already provide a way to share research results on the web. However, Jupyter still requires a backend server running Python, and its visualizations are static rather than dynamic like those built with web-native languages. Some Python plugins can generate interactive visualizations, but the real power comes from using JavaScript directly via several extensive visualization libraries (Plotly, Vega, D3, Three.js, etc). This tutorial will focus on D3.js to create interactive visualizations for computational neuroscience applications running directly in the browser. We will review the basics of using online JavaScript notebooks (ObservableHQ) and then show more advanced examples in a hands-on tutorial format.

Speakers (in alphabetical order):
Anca Doloc-Mihu
Dept. Information Technology, School of Science and Technology,
Georgia Gwinnett College, Lawrenceville, GA, USA

Cengiz Gunay
Dept. Information Technology, School of Science and Technology,
Georgia Gwinnett College, Lawrenceville, GA, USA



Saturday July 20, 2024 2:00pm - 3:30pm PDT
Cedro VI

2:00pm PDT

T10: Training recurrent spiking neural networks to generate experimentally recorded neural activities
Recent advances in machine learning methods enable training recurrent neural networks (RNNs) to perform highly complex and sophisticated tasks. One task of particular interest to neuroscientists is to generate experimentally recorded neural activities in recurrent neural networks and study the dynamics of the trained networks to investigate the underlying neural mechanisms. Here, we showcase how a widely used training method, known as recursive least squares (or FORCE), can be adopted to train spiking RNNs to reproduce spike recordings of cortical neurons. First, we give an overview of the original FORCE learning, which trains the outputs of rate-based RNNs to perform tasks, and we show how it can be modified to generate arbitrarily complex activity patterns in spiking RNNs. Using this method, we show that only a subset of neurons embedded in a network of randomly connected excitatory and inhibitory spiking neurons can be trained to produce cortical activities. We demonstrate a GPU implementation of the training algorithm, which enables fast training of large-scale networks, and show that the spiking activities of >60k neurons recorded with Neuropixels probes can be reproduced by spiking RNNs.
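To illustrate the core recursive least squares update in FORCE learning, the sketch below trains the readout of a small rate-based RNN to produce a sine wave, following the setup of Sussillo & Abbott (2009) [4]; network size and constants are illustrative, and the tutorial's spiking, GPU-scale version differs substantially.

```python
# Sketch of FORCE learning: recursive least squares (RLS) trains the linear
# readout of a rate-based RNN to produce a sine wave, following Sussillo &
# Abbott (2009) [4]. Sizes and constants are illustrative; the tutorial's
# spiking, GPU-scale version differs substantially.
import numpy as np

rng = np.random.default_rng(0)
N, dt, tau, g = 500, 0.1, 1.0, 1.5                 # size, step, time constant, gain
J = g * rng.standard_normal((N, N)) / np.sqrt(N)   # fixed random recurrent weights
wf = rng.uniform(-1, 1, N)                         # fixed feedback weights
w = np.zeros(N)                                    # trained readout weights
P = np.eye(N)                                      # running inverse correlation matrix

T = 5000
target = np.sin(2 * np.pi * np.arange(T) * dt / 10)

x = 0.5 * rng.standard_normal(N)
for i in range(T):
    r = np.tanh(x)                                 # firing rates
    z = w @ r                                      # network output
    x += dt / tau * (-x + J @ r + wf * z)          # leaky dynamics with feedback
    if i % 2 == 0:                                 # RLS update of the readout
        Pr = P @ r
        k = Pr / (1.0 + r @ Pr)
        P -= np.outer(k, Pr)
        w -= (z - target[i]) * k

print(f"Final output error: {abs(w @ np.tanh(x) - target[-1]):.3f}")
```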

Presenter:
Christopher Kim
Staff Scientist
Laboratory of Biological Modeling, NIDDK, National Institutes of Health, Bethesda, MD, USA

References
  1. Kim, C. M., & Chow, C. C. (2018). Learning recurrent dynamics in spiking networks. Elife, 7, e37124.
  2. Kim, C. M., Finkelstein, A., Chow, C. C., Svoboda, K., & Darshan, R. (2023). Distributing task-related neural activity across a cortical network through task-independent connections. Nature Communications, 14(1), 2851.
  3. Arthur, B. J., Kim, C. M., Chen, S., Preibisch, S., & Darshan, R. (2023). A scalable implementation of the recursive least-squares algorithm for training spiking neural networks. Frontiers in Neuroinformatics, 17.
  4. Sussillo, D., & Abbott, L. F. (2009). Generating coherent patterns of activity from chaotic neural networks. Neuron, 63(4), 544-557.


Saturday July 20, 2024 2:00pm - 3:30pm PDT
Jacarandá

3:30pm PDT

Coffee break
Saturday July 20, 2024 3:30pm - 4:00pm PDT

4:00pm PDT

T01: Building mechanistic multiscale models using NEURON and NetPyNE to study brain function and disease
Understanding the brain requires studying its multiscale interactions, from molecules to cells to circuits and networks. Although vast experimental datasets are being generated across scales and modalities, integrating and interpreting this data remains a daunting challenge. This tutorial will highlight recent advances in mechanistic multiscale modeling and how it offers an unparalleled approach to integrate these data and provide insights into brain function and disease. Multiscale models facilitate the interpretation of experimental findings across different brain regions, brain scales (molecular, cellular, circuit, system), brain function (sensory perception, motor behavior, learning, etc), recording/imaging modalities (intracellular voltage, LFP, EEG, fMRI, etc) and disease/disorders (e.g., schizophrenia, epilepsy, ischemia, Parkinson's, etc). As such, it has a broad appeal to experimental, clinical, and computational neuroscientists, students, and educators.

This tutorial will introduce multiscale modeling using two NIH-funded tools: the NEURON 9.0 simulator (https://neuron.yale.edu/neuron/), including the Reaction-Diffusion (RxD) module and the NetPyNE tool (http://netpyne.org). The tutorial will combine background, examples, and hands-on exercises covering the implementation of models at four key scales: (1) intracellular dynamics (e.g., calcium buffering, protein interactions), (2) single neuron electrophysiology (e.g., action potential propagation), (3) neurons in extracellular space (e.g., spreading depression), and (4) neuronal circuits, including dynamics such as oscillations and simulation of recordings such as local field potentials (LFP) and electroencephalography (EEG). For circuit simulations, we will use NetPyNE, a high-level interface to NEURON supporting programmatic and GUI specifications that facilitate the development, parallel simulation, and analysis of biophysically detailed neuronal circuits. We conclude with an example combining all three tools that link intracellular/extracellular molecular dynamics with network spiking activity and LFP/EEG. The tutorial will incorporate recent developments and new features in the NEURON and NetPyNE tools.

Speakers (in alphabetical order):

Valery Bragin, NetPyNE circuit modeling
Charité – Berlin University Medicine / State University of New York (SUNY) Downstate Health Sciences University

Salvador Dura-Bernal, NetPyNE circuit modeling
State University of New York (SUNY) Downstate Health Sciences University

William W Lytton, Multiscale Modeling Overview
State University of New York (SUNY) Downstate Health Sciences University

Robert A McDougal, NEURON single cells
Yale University

Adam Newton, NEURON Reaction-Diffusion
State University of New York (SUNY) Downstate Health Sciences University



Saturday July 20, 2024 4:00pm - 5:15pm PDT
Cedro I

4:00pm PDT

T02: From single-cell modeling to large-scale network dynamics with NEST Simulator
NEST is an established community code for simulating spiking neuronal network models that capture the full details of the structure of biological networks [1]. The simulator runs efficiently on various architectures, from laptops to supercomputers [2]. Over the years, a large body of peer-reviewed neuroscientific studies has been carried out with NEST, and it has become the reference code for research on neuromorphic hardware systems.

This tutorial provides hands-on experience with recent NEST feature additions. First, we explore how an astrocyte-mediated slow inward current impacts typical neural network simulations. Here, we introduce how astrocytes are implemented in NEST and investigate their dynamical behavior. Then, we create small neuron-astrocyte networks and explore their interactions before adding more complexity to the network structure. Second, we develop a functional network that can be trained to solve various tasks using a three-factor learning rule that approximates backpropagation through time: eligibility propagation (e-prop). Specifically, we use e-prop to train a network to solve a supervised regression task to generate temporal patterns and a supervised classification task to accumulate evidence. Third, we investigate how dendritic properties of neurons can be captured by constructing compartmental models in NEST. We import dendritic models from an existing repository and embed them in a network simulation. Finally, we learn to use NESTML, a domain-specific modeling language for neuron and synapse models. We implement a neuron model with an active dendritic compartment and a third-factor STDP synapse defined in NESTML. These models are then used in a network to perform learning, prediction, and replay of sequences of items, such as letters, images, or sounds [3].
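For readers new to PyNEST, a minimal sketch of the Create/Connect/Simulate workflow that the tutorial builds on (generic NEST 3 calls; the network, model names, and parameter values here are illustrative, not the tutorial's actual material):

    import nest

    nest.ResetKernel()

    # Toy network: 100 LIF neurons driven by Poisson noise, spikes recorded
    neurons = nest.Create("iaf_psc_alpha", 100)
    noise = nest.Create("poisson_generator", params={"rate": 8000.0})
    recorder = nest.Create("spike_recorder")

    nest.Connect(noise, neurons, syn_spec={"weight": 10.0})
    nest.Connect(neurons, neurons,
                 conn_spec={"rule": "fixed_indegree", "indegree": 10},
                 syn_spec={"weight": 1.0, "delay": 1.5})
    nest.Connect(neurons, recorder)

    nest.Simulate(1000.0)  # simulation time in ms
    print(recorder.n_events, "spikes recorded")

The astrocyte, e-prop, compartmental, and NESTML features covered in the tutorial are layered on top of this same workflow.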

[1] Gewaltig M-O & Diesmann M (2007) NEST (Neural Simulation Tool) Scholarpedia 2(4):1430.
[2] Jordan J., Ippen T., Helias M., Kitayama I., Sato M., Igarashi J., Diesmann M., Kunkel S. (2018). Extremely Scalable Spiking Neuronal Network Simulation Code: From Laptops to Exascale Computers. Frontiers in Neuroinformatics 12: 2
[3] Bouhadjar Y, Wouters DJ, Diesmann M, Tetzlaff T (2022) Sequence learning, prediction, and replay in networks of spiking neurons. PLoS Comput Biol 18(6): e1010233.

Speakers (in alphabetical order):

Iiro Ahokainen, Astrocytes in NEST
Tampere University, Finland

Jasper Albers, E-prop in NEST
Jülich Research Centre, Germany

Joshua Boettcher, Compartmental models in NEST; NESTML
Jülich Research Centre, Germany



Saturday July 20, 2024 4:00pm - 5:15pm PDT
Cedro II

4:00pm PDT

T03: Modeling cortical network dynamics
This tutorial provides an essential introduction to modeling biophysically realistic neuronal networks, emphasizing the circuit components underpinning asynchronous vs. synchronous dynamics. The morning session will introduce classic models of balanced excitatory/inhibitory (E–I) networks, with analytical insights into some of the mechanisms for the emergence of asynchronous and irregular firing. The afternoon session will shift focus to network models displaying synchronous dynamics, with hands-on interactive Jupyter sessions and practical numerical simulations delving into fundamental theory and clinical applications, including models of epileptiform activity.

MORNING SESSION: Asynchronous dynamics
Emergence of irregular activity in networks of strongly coupled spiking neurons
Alessandro Sanzeni, Bocconi University, Milan, Italy

Introducing glia into cortical network models and the emergence of glial attractors
Maurizio De Pitta, Krembil Research Institute, Toronto, Canada

AFTERNOON SESSION: Synchronous Activity
Oscillations in networks of excitatory and inhibitory neurons: the PING framework
Scott Rich, University of Connecticut, CT, USA

Simulating network models of reactive astrogliosis underpinning epilepsy
Pamela Illescas-Maldonaldo, and Vicente Medel, University of Valparaiso, Valparaiso, Chile


Saturday July 20, 2024 4:00pm - 5:15pm PDT
Cedro III

4:00pm PDT

T04: Standardised, data-driven computational modelling with NeuroML using the Open Source Brain
Data-driven models of neurons and circuits are important for understanding how the properties of membrane conductances, synapses, dendrites, and the anatomical connectivity between neurons generate the complex dynamical behaviors of brain circuits in health and disease. However, even though data and models have been made publicly available in recent years and the use of standards such as Neurodata Without Borders (NWB) (https://nwb.org) and NeuroML (https://neuroml.org) to promote FAIR (Findable, Accessible, Interoperable, and Reusable) neuroscience is on the rise, the development of data-driven models remains hampered by the difficulty of finding appropriate data and the inherent complexity involved in their construction.

The Open Source Brain web platform (OSB) (https://opensourcebrain.org) combines data, accompanying analysis tools, and computational models in a scalable resource. It indexes repositories from established sources such as the DANDI data archive (https://dandiarchive.org), the ModelDB model sharing archive (https://modeldb.science), and GitHub to provide easy access to a plethora of experimental data and models, including a large number standardized in the NWB and NeuroML formats. OSB also incorporates the NeuroML software ecosystem. NeuroML is an established community standard and software ecosystem that enables the development of detailed biophysical models using a declarative, simulator-independent description. The software ecosystem supports all steps of the model lifecycle and allows users to automatically generate code and run their NeuroML models using well-established simulation engines (NEURON/NetPyNE).
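As a flavor of the declarative workflow, a minimal sketch using the libNeuroML Python API (assuming the standard NeuroMLDocument/Population classes; the cell parameters are illustrative, not taken from the tutorial material):

    from neuroml import NeuroMLDocument, IzhikevichCell, Network, Population
    import neuroml.writers as writers

    # Declare a document with one Izhikevich cell type and a population of it
    doc = NeuroMLDocument(id="IzNet")
    iz0 = IzhikevichCell(id="iz0", v0="-70mV", thresh="30mV",
                         a="0.02", b="0.2", c="-65.0", d="6")
    doc.izhikevich_cells.append(iz0)

    net = Network(id="net0")
    doc.networks.append(net)
    net.populations.append(Population(id="pop0", component=iz0.id, size=5))

    # Write simulator-independent XML that downstream tools can validate
    # and translate into NEURON/NetPyNE code
    writers.NeuroMLWriter.write(doc, "iznet.nml")

The resulting .nml file is the declarative description that the ecosystem's tools validate, visualize, and convert into simulator code.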

In this tutorial, attendees will learn about:
  • Finding data and models on OSB
  • NeuroML and its software ecosystem
  • Using NeuroML models on OSB
  • Building and simulating new NeuroML models constrained by the data on OSB

We will also assist with advanced tasks and discuss new features to aid researchers further.

Speakers (in alphabetical order):
Padraig Gleeson, University College London, London, UK
Boris Marin, Universidade Federal do ABC, Brazil
Angus Silver, University College London, London, UK
Ankur Sinha, University College London, London, UK


Saturday July 20, 2024 4:00pm - 5:15pm PDT
Cedro IV

4:00pm PDT

T08: Unraveling dynamics and connectivity from spiking time series of in-vitro neuronal cultures
This tutorial will equip participants with comprehensive skills for analyzing spiking time series from DishBrain, a pioneering system demonstrating rudimentary biological intelligence by leveraging the adaptive properties of neurons [1,2]. DishBrain integrates in-vitro neuronal networks with in-silico computing through high-density multi-electrode arrays (HD-MEAs). These cultured neuronal networks exhibit biologically-based adaptive intelligence within a simulated gameplay environment of ‘Pong’ in real-time, facilitated by closed-loop stimulation and recordings [1]. We will introduce this unique dataset and provide samples of these time series data during gameplay and spontaneous activity, their behavioral labels in the game environment, and custom Python scripts in Jupyter Notebook to analyze spiking data from 1024 channels on the HD-MEA.

The tutorial will emphasize methods for extracting meaningful insights from a time series of spiking data. We will address the challenges of sparse, high-dimensional data and introduce custom-designed pipelines. These include lower-dimensional embedding algorithms such as UMAP [3], t-SNE [4], and CEBRA [5]. Additionally, we will demonstrate the application of Gaussian kernels for data smoothing, followed by community detection techniques to identify influential neurons and reduce computational complexity. These techniques will enable visualization and interpretation of complex neural activity patterns in lower-dimensional spaces, facilitating insights into network dynamics.
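A minimal sketch of the smoothing-plus-embedding step described above (random stand-in data; the actual DishBrain loaders and custom pipelines are not reproduced here):

    import numpy as np
    from scipy.ndimage import gaussian_filter1d
    import umap  # umap-learn package

    # Stand-in spike-count matrix: 1024 channels x 5000 time bins
    rng = np.random.default_rng(0)
    counts = rng.poisson(0.2, size=(1024, 5000)).astype(float)

    # Gaussian-kernel smoothing along time, then embed time points in 2D
    smoothed = gaussian_filter1d(counts, sigma=5, axis=1)
    embedding = umap.UMAP(n_components=2, random_state=0).fit_transform(smoothed.T)
    print(embedding.shape)  # (5000, 2): one 2D point per time bin

t-SNE (scikit-learn) or CEBRA can be swapped in at the embedding step.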

Finally, we will introduce methods to quantify statistical dependence and infer connectivity networks from time series, with a particular focus on transfer entropy analysis. Transfer entropy is a powerful tool for quantifying the directed information flow in complex systems [6]. Through hands-on exercises and practical demonstrations, participants will learn how to apply transfer entropy analysis using spiking or continuous data to unravel patterns of information flow in neuronal ensembles.
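To make the quantity concrete, here is a toy plug-in estimator of transfer entropy for binary spike trains with history length 1 (a didactic sketch, not the tutorial's pipeline; dedicated toolboxes handle longer histories and bias correction):

    import numpy as np

    def transfer_entropy(x, y):
        """TE x->y in bits for binary series, history length 1."""
        x, y = np.asarray(x, int), np.asarray(y, int)
        yf, yp, xp = y[1:], y[:-1], x[:-1]  # target future/past, source past
        te = 0.0
        for a in (0, 1):
            for b in (0, 1):
                for c in (0, 1):
                    p_abc = np.mean((yf == a) & (yp == b) & (xp == c))
                    if p_abc == 0:
                        continue
                    p_bc = np.mean((yp == b) & (xp == c))
                    p_ab = np.mean((yf == a) & (yp == b))
                    p_b = np.mean(yp == b)
                    # p(yf|yp,xp) / p(yf|yp), weighted by the joint probability
                    te += p_abc * np.log2((p_abc / p_bc) / (p_ab / p_b))
        return te

    rng = np.random.default_rng(1)
    x = rng.integers(0, 2, 10000)
    y = np.roll(x, 1)                 # y copies x with a one-step delay
    print(transfer_entropy(x, y))     # close to 1 bit: strong x->y flow
    print(transfer_entropy(y, x))     # close to 0: no flow in reverse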

By the end of the tutorial, participants will have a robust toolkit for analyzing and interpreting high-dimensional, sparse spiking time series and extracting insights into neural network dynamics.

Program:
Introduction to DishBrain: In-vitro spiking neuronal cultures in an embodied game environment
Forough Habibollahi, Cortical Labs Pty Ltd, Melbourne, Australia

Lower dimensional embedding approaches to study complex network dynamics of neuronal systems
Moein Khajehnejad, Turner Institute for Brain and Mental Health, Monash University, Clayton, Australia

Inferring directed statistical dependence using Transfer Entropy
Leonardo Novelli, Turner Institute for Brain and Mental Health, Monash University, Clayton, Australia

References
  1. Kagan BJ, Kitchen AC, Tran NT, Habibollahi F, Khajehnejad M, Parker BJ, Bhat A, Rollo B, Razi A, Friston KJ. (2022) In vitro neurons learn and exhibit sentience when embodied in a simulated game world. Neuron 110(23):3952-69.
  2. Cortical Labs: https://corticallabs.com/
  3. McInnes, L., Healy, J., Saul, N. & Großberger, L. (2018) UMAP: Uniform Manifold Approximation and Projection for dimension reduction. J. Open Source Softw. 3, 86.
  4. Maaten, L. V., Postma, E. O. & Herik, J. V. (2009) Dimensionality reduction: a comparative review. J. Mach. Learn. Res. 10, 13.
  5. Schneider S, Lee JH, Mathis MW. (2023) Learnable latent embeddings for joint behavioral and neural analysis. Nature 617, 360–368.
  6. Bossomaier, T., Barnett, L., Harré, M., & Lizier, J. T. (2016). An Introduction to Transfer Entropy. Springer International Publishing.


Saturday July 20, 2024 4:00pm - 5:15pm PDT
Cedro V

4:00pm PDT

T09: Interactive data visualization techniques
The Web has become very prominent in computing. As a result, web standards and programming have seen tremendous improvement, with many tools becoming available at low or no cost. The trend is visible as traditional desktop applications move, one by one, to web versions. For computational neuroscience, Jupyter notebooks already provide a way to share research results on the web. However, Jupyter still requires a backend server running Python, and its visualizations are static rather than dynamic like those built with web-native languages. It is possible to use Python plugins to generate interactive visualizations, but the real power is achieved when using JavaScript directly via one of several extensive visualization libraries (Plotly, Vega, D3, Three.js, etc.). This tutorial will focus on D3.js to create interactive visualizations for computational neuroscience applications that run directly in the browser. We will review the basics of using online JavaScript notebooks (ObservableHQ) and then show more advanced examples in a hands-on tutorial format.
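The tutorial itself works in JavaScript (D3.js on ObservableHQ); for contrast, the Python-plugin route mentioned above looks roughly like this with Plotly (illustrative data, not tutorial material):

    import numpy as np
    import plotly.express as px

    # Stand-in membrane-potential trace to explore interactively
    t = np.linspace(0, 1000, 10001)
    rng = np.random.default_rng(0)
    v = -65 + 10 * np.sin(2 * np.pi * t / 100) + rng.normal(0, 0.5, t.size)

    fig = px.line(x=t, y=v, labels={"x": "time (ms)", "y": "Vm (mV)"},
                  title="Zoom, pan, and hover in the browser")
    fig.show()

D3.js offers finer control over the same interactions, at the cost of writing the rendering logic yourself.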

Speakers (in alphabetical order):
Anca Doloc-Mihu
Dept. Information Technology, School of Science and Technology,
Georgia Gwinnett College, Lawrenceville, GA, USA

Cengiz Gunay
Dept. Information Technology, School of Science and Technology,
Georgia Gwinnett College, Lawrenceville, GA, USA



Saturday July 20, 2024 4:00pm - 5:15pm PDT
Cedro VI

4:00pm PDT

T10: Training recurrent spiking neural networks to generate experimentally recorded neural activities
Recent advances in machine learning methods enable training recurrent neural networks (RNNs) to perform highly complex and sophisticated tasks. One task of particular interest to neuroscientists is to generate experimentally recorded neural activities in recurrent neural networks and study the dynamics of the trained networks to investigate the underlying neural mechanisms. Here, we showcase how a widely used training method, known as recursive least squares (or FORCE), can be adopted to train spiking RNNs to reproduce spike recordings of cortical neurons. First, we give an overview of the original FORCE learning, which trains the outputs of rate-based RNNs to perform tasks, and we show how it can be modified to generate arbitrarily complex activity patterns in spiking RNNs. Using this method, we show that only a subset of neurons embedded in a network of randomly connected excitatory and inhibitory spiking neurons can be trained to produce cortical activities. We demonstrate a GPU implementation of the training algorithm, which enables fast training of large-scale networks, and show that the spiking activities of > 60k neurons recorded with Neuropixels probes can be reproduced by spiking RNNs.
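A compact rate-network sketch of the FORCE/RLS idea from Sussillo & Abbott [4] (the presenter's spiking and GPU implementations are in refs. [1-3]; parameters here are illustrative):

    import numpy as np

    rng = np.random.default_rng(0)
    N, steps, dt, tau = 300, 2000, 1e-3, 0.1
    J = 1.5 * rng.normal(0, 1 / np.sqrt(N), (N, N))  # chaotic recurrent weights
    wf = rng.uniform(-1, 1, N)                        # feedback weights
    w = np.zeros(N)                                   # trained readout
    P = np.eye(N)                                     # inverse-correlation estimate
    x = 0.5 * rng.normal(size=N)
    target = lambda k: np.sin(2 * np.pi * k * dt)     # pattern to generate

    for k in range(steps):
        r = np.tanh(x)
        z = w @ r
        x += (dt / tau) * (-x + J @ r + wf * z)       # leaky rate dynamics
        Pr = P @ r                                    # RLS update each step:
        gain = Pr / (1.0 + r @ Pr)                    #   P <- P - P r r'P / (1 + r'P r)
        P -= np.outer(gain, Pr)
        w -= (z - target(k)) * gain                   #   w <- w - e * P r / (1 + r'P r)

After training, the readout z tracks the target while the recurrent dynamics supply the basis functions.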

Presenter:
Christopher Kim
Staff Scientist
Laboratory of Biological Modeling, NIDDK, National Institutes of Health, Bethesda, MD, USA

References
  1. Kim, C. M., & Chow, C. C. (2018). Learning recurrent dynamics in spiking networks. Elife, 7, e37124.
  2. Kim, C. M., Finkelstein, A., Chow, C. C., Svoboda, K., & Darshan, R. (2023). Distributing taskrelated neural activity across a cortical network through task-independent connections. Nature Communications, 14(1), 2851.
  3. Arthur, B. J., Kim, C. M., Chen, S., Preibisch, S., & Darshan, R. (2023). A scalable implementation of the recursive least-squares algorithm for training spiking neural networks. Frontiers in Neuroinformatics, 17.
  4. Sussillo, D., & Abbott, L. F. (2009). Generating coherent patterns of activity from chaotic neural networks. Neuron, 63(4), 544-557.


Saturday July 20, 2024 4:00pm - 5:15pm PDT
Jacarandá

5:15pm PDT

Break
End of tutorials and preparation for the inaugural Keynote Lecture

Saturday July 20, 2024 5:15pm - 5:30pm PDT

5:30pm PDT

Welcome and Keynote #1
Saturday July 20, 2024 5:30pm - 6:30pm PDT

6:30pm PDT

Welcome reception
Saturday July 20, 2024 6:30pm - 6:30pm PDT
 
Sunday, July 21
 

8:30am PDT

Registration
Sunday July 21, 2024 8:30am - 8:30am PDT

9:10am PDT

Announcements and Keynote #2
Sunday July 21, 2024 9:10am - 10:10am PDT

10:10am PDT

Coffee Break
Sunday July 21, 2024 10:10am - 10:40am PDT

10:40am PDT

Oral Session 1: Sensory processing
Sunday July 21, 2024 10:40am - 12:30pm PDT

10:41am PDT

FO1: From Population to Place Coding: Mechanistic Insights into the transformation of ITD representation along the auditory pathway
Lavínia Mitiko Takarabe, Bóris Marin, Rodrigo Pavão

Sunday July 21, 2024 10:41am - 11:10am PDT

11:10am PDT

O1: Touch stimulation to enhance separation of sound sources
Farzaneh Darki, Piotr Slowinski, Marc Goodfellow, James Rankin

Sunday July 21, 2024 11:10am - 11:30am PDT

11:30am PDT

O2: Coherent Motion Detection Facilitated by Surround Suppression
Elnaz Nemati, Anthony Burkitt, David Grayden, Parvin Zarei Eskikand

Sunday July 21, 2024 11:30am - 11:50am PDT

11:50am PDT

O3: Extracting regularities embedded within stochastic sequences of sensorimotor events.
Claudia D Vargas, Antonio Galves, Jesus E Garcia, Noslen Hernández, Paulo Roberto Cabral-Passos

Sunday July 21, 2024 11:50am - 12:10pm PDT

12:10pm PDT

O4: Recurrent neural networks outperform canonical computational models at fitting auditory brain responses
Ulysse Rancon, Timothee Masquelier, Benoit Cottereau

Sunday July 21, 2024 12:10pm - 12:30pm PDT

12:30pm PDT

Lunch
Sunday July 21, 2024 12:30pm - 2:00pm PDT

12:30pm PDT

Program Committee Meeting
Sunday July 21, 2024 12:30pm - 2:00pm PDT

2:00pm PDT

Oral Session 2: Navigation
Sunday July 21, 2024 2:00pm - 3:30pm PDT
Jacarandá

2:01pm PDT

FO2: Learning egocentric spatial cells in the postrhinal cortex
Yanbo Lian, Patrick LaChance, Samantha Malmberg, Michael Hasselmo, Anthony Burkitt

Sunday July 21, 2024 2:01pm - 2:30pm PDT
Jacarandá

2:30pm PDT

2:50pm PDT

O6: Distributed engrams constitute flexible and versatile neural representations
Douglas Feitosa Tomé, Tim P. Vogels

Sunday July 21, 2024 2:50pm - 3:10pm PDT
Jacarandá

3:10pm PDT

O7: Unraveling the brain circuits underlying target pursuit in the hoverfly
Anindya Ghosh, Sarah Nicholas, Karin Nordström, Thomas Nowotny, James Knight


Sunday July 21, 2024 3:10pm - 3:30pm PDT
Jacarandá

3:30pm PDT

O8: Effect of Focused Ultrasonic Stimulation via Intramembrane Cavitation in the Squid Giant Axon
Mithun Padmakumar, Divya Rajan, John Eric Steephen

Sunday July 21, 2024 3:30pm - 3:50pm PDT
Jacarandá

3:50pm PDT

Coffee Break
Sunday July 21, 2024 3:50pm - 4:20pm PDT

4:20pm PDT

P001 Recurrent models optimized for face recognition exhibit representational dynamics resembling the primate brain
Neurons with selectivity for different categories are found in higher areas of the primate ventral visual pathway. A well-characterized network of ventral cortical areas is the face patches, where neurons respond more to faces than non-faces. The face patches form a coarse hierarchy of areas that become progressively more identity-selective and view-invariant. The face system therefore provides an excellent testbed for studying the mechanisms of category and identity recognition in the primate brain. A fundamental ongoing debate in the field is whether (a) category-selective cells are specialized for detecting features associated with a specific category or (b) features for all categories span a more domain-general multivariate representational space, with different categories forming clusters in that space. (Note that (a) and (b) are not mutually exclusive, but (a) makes the strong claim that certain neurons contribute exclusively or primarily to the representation of certain categories.)

Recent work using deep neural networks (DNNs) has shown that models trained to identify faces, but not models trained for general object recognition, are able to predict behavioral signatures of face processing. Feedforward DNNs correspond in their feature selectivity across layers to the brain areas in the hierarchy of the ventral pathway (V1->V2->V4->IT). However, these models cannot capture the dynamics of the neural response within each layer due to the lack of recurrent connections. In this work, we examine recurrent neural networks (RNNs) trained with different objectives to better understand these dynamics. We used CORnet-RT, a four-layer convolutional network corresponding to the four stages in the ventral pathway mentioned above. Each network layer goes through 8 steps of recurrence. The computational graph follows the principle of biological unrolling, so the input reaches the final layer after 3 steps of processing. We trained the model both for face recognition (using the VGGFace2 dataset) and for object recognition (using the ImageNet dataset), matching the number of training images across the two datasets.

First, using representational similarity analysis, we found that models trained on face recognition, but not models trained on general object recognition, show face-identity selectivity resembling primate face patch AM when tested on a held-out set (Fig. 1). Visualization of the representational space using multidimensional scaling (MDS, metric stress objective) further showed the emergence of identity clusters over timesteps. Next, we tested the same models on a dataset including human faces, monkey faces, and non-face objects. The representational dissimilarity matrices (RDMs, Euclidean distance) showed distinctions among human face identities emerging only late in the process. In earlier steps, the representational geometry separates the objects from the faces, and in later steps the differences among face identities come to be prevalently represented.

It has been reported that face-selective cells can exhibit a dynamic signature of changing from face detection (separating faces from non-faces) to face identification (separating different face identities). That this dynamic signature emerges in recurrent models trained on face recognition suggests a synthesis of hypotheses (a) and (b), in which face-selective cells are both part of a domain-general process and also more specifically differentiate faces.



Sunday July 21, 2024 4:20pm - 6:20pm PDT
TBA

4:20pm PDT

P002 Theoretical considerations of spiking neural networks with STDP and homeostatic balancing
In computational neuroscience, one intriguing question is how to enhance the ability of learning models by regulating their activity through homeostatic mechanisms and functions. Spiking neural networks (SNNs) are a commonly used tool in computational neuroscience to model cortical and other networks. SNNs also hold significant promise for artificial intelligence, for example in unsupervised learning. The synergistic utilization of homeostatic balancing with Spike-Time-Dependent Plasticity (STDP) and the Winner-Take-All (WTA) architecture enables learning outcomes to emerge in SNNs without the need for global learning rules. STDP, a synaptic learning rule, strengthens or weakens connections between neurons based on the relative timing of their spikes, mimicking fundamental mechanisms in biological neural networks. The WTA circuit architecture, on the other hand, is a commonly used hypothesis for human decision making. The WTA design forces a competition between decision neurons, and homeostatic balancing covers methods for enhancing this competition throughout the neuronal specialization process.

In this study, we examine the theoretical foundations, mechanisms, and potential challenges of the STDP-WTA paradigm. Most recent research (see, for example, Diehl and Cook, Frontiers in Computational Neuroscience, 2015; Dong et al., Neural Networks, 2023) has focused on an adaptive firing threshold as the adaptation method and adaptive input scaling for the WTA circuit. We propose an alternative approach based on an adaptation current and an analytic initialization scheme. The analytical initialization scheme helps to find a suitable balance between the inputs and the decision layer, and the role of inhibition in the WTA circuit can also be tuned analytically. The adaptation current is an alternative to an adaptive firing threshold, as it provides more versatile (and biologically plausible) options for homeostatic balancing. In addition, we develop a new form of learning-rate scheduling that acts locally for each excitatory neuron in the network. This scheduling is especially useful in large networks for enforcing competition between the excitatory neurons. The systems are simulated with NEST (NEural Simulation Tool), which provides easy-to-add components and convenient parallel computing for larger network sizes.
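For orientation, a minimal PyNEST wiring sketch of an STDP-plus-lateral-inhibition motif of the kind discussed here (generic NEST 3 calls; the adaptation current, analytic initialization, and local learning-rate scheduling proposed in this abstract are not part of the sketch):

    import nest

    nest.ResetKernel()

    inputs = nest.Create("poisson_generator", 10, params={"rate": 20.0})
    exc = nest.Create("iaf_psc_alpha", 5)   # decision layer
    inh = nest.Create("iaf_psc_alpha", 5)   # mediates the WTA competition

    # Plastic feedforward synapses
    nest.Connect(inputs, exc, syn_spec={"synapse_model": "stdp_synapse",
                                        "weight": 50.0})
    # Each excitatory neuron drives an inhibitory partner, which inhibits
    # the whole decision layer (a crude all-to-all WTA approximation)
    nest.Connect(exc, inh, "one_to_one", syn_spec={"weight": 100.0})
    nest.Connect(inh, exc, syn_spec={"weight": -100.0})

    nest.Simulate(10000.0)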

Through a rigorous exploration, we identify a set of homeostatic balancing methods that yield efficient learning outcomes when combined with the STDP-WTA system. The here discussed STDP-WTA paradigm has potential in testing various theoretical brain models and hypotheses, especially in the realm of decision making. In addition, by empowering networks to uncover patterns and features without external supervision, the STDP-WTA paradigm unlocks exciting prospects for unsupervised learning in future applications, including energy efficient neuromorphic chips. Our approach bridges the gap between neurobiology, computational neuroscience, and machine learning, paving the way for innovative advancements at the intersection of these disciplines.



Sunday July 21, 2024 4:20pm - 6:20pm PDT
TBA

4:20pm PDT

P003 Quantifying structural similarity between real matrices with arbitrary shape
Quantifying the similarity of matrices is valuable for analyzing common features of data sets in tasks such as data clustering, dimensionality reduction, pattern recognition, group comparisons, and graph analysis. Methods for comparing vectors, such as the cosine similarity or Euclidean distance, can be readily generalized to matrices. However, these approaches usually neglect the inherently two-dimensional structure of matrices. Existing methods that take this structure into account [1] are only well-defined on square, symmetric, positive-definite matrices, limiting the range of applicability.
In this work, we propose Singular Angle Similarity (SAS), a measure for quantifying the structural similarity between two arbitrary, real matrices of the same shape. By explicitly taking the two-dimensional structure of matrices into account, SAS captures structural features that cannot be identified by traditional measures such as Euclidean distance or cosine similarity.
After introducing and characterizing the measure, we apply SAS to two neuroscientific use cases: network connectivity described by probabilistic adjacency matrices, and neural brain activity represented by state evolution matrices (Fig. 1). First, we demonstrate that SAS can distinguish between probabilistic network models based on their sampled adjacency matrices. Second, we show that SAS captures differences in high-dimensional responses to different stimuli in macaque V1 as characterized by the multi-unit activity envelope [2]. Moreover, the differences identified by SAS can be meaningfully related to the underlying response properties of the neurons. Thus, SAS allows for quantifying the closeness of related response patterns in a network of neurons in a meaningful way. We conclude that SAS is suitable for quantifying the shared structure of matrices with arbitrary shape in neuroscience and beyond.
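One plausible reading of such a singular-vector-angle measure, sketched for intuition only (this is not the authors' definition or code; the weighting and normalization are assumptions):

    import numpy as np

    def singular_angle_similarity(A, B):
        """Compare two equal-shape real matrices via angles between
        corresponding singular vectors, weighted by singular values."""
        Ua, Sa, Vta = np.linalg.svd(A, full_matrices=False)
        Ub, Sb, Vtb = np.linalg.svd(B, full_matrices=False)
        # |cos| handles the joint sign ambiguity of singular vector pairs
        ang_u = np.arccos(np.clip(np.abs(np.sum(Ua * Ub, axis=0)), 0, 1))
        ang_v = np.arccos(np.clip(np.abs(np.sum(Vta * Vtb, axis=1)), 0, 1))
        ang = 0.5 * (ang_u + ang_v)
        weights = (Sa + Sb) / np.sum(Sa + Sb)
        return float(np.sum(weights * (1 - ang / (np.pi / 2))))  # 1 = same structure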


Acknowledgements
This work has been supported by NeuroSys as part of the initiative “Clusters4Future” by the Federal Ministry of Education and Research BMBF (03ZU1106CB); the DFG Priority Program (SPP 2041 "Computational Connectomics"); the EU's Horizon 2020 Framework Grant Agreement No. 945539 (Human Brain Project SGA3); the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) – 491111487; the Ministry of Culture and Science of the State of North Rhine-Westphalia, Germany (NRW-network "iBehave", grant number: NW21-049); the Joint Lab "Supercomputing and Modeling for the Human Brain".


References
[1] R. Gutzen, S. Grün, and M. Denker. “Evaluating the statistical similarity of neural network activity and connectivity via eigenvector angles”. BioSystems 223 (2023), p. 104813
[2] X. Chen et al. “1024-Channel Electrophysiological Recordings in Macaque V1 and V4 during Resting State”. Scientific Data 9.1 (2022), p. 77.


Sunday July 21, 2024 4:20pm - 6:20pm PDT
TBA

4:20pm PDT

P004 An Adaptive Robot Controller Based on Distributed Synaptic Plasticity in a Cerebellar Network Model
Mahsa Ali Akbarzadeh, Frank Foerster, Volker Steuber, Nada Yousif
Biocomputation Research Group, University of Hertfordshire, Hatfield, United Kingdom
Email: ma21ahg@herts.ac.uk
Human-robot interaction benefits from robots that are autonomous and compliant [1]. A compliant robot requires flexible and soft materials, which necessitates an adaptive controller. Inspired by the ability of the cerebellum to perform adaptive motor control, an adaptive cerebellar controller was previously designed and shown to be able to control a Baxter robot [2].
This cerebellar controller consists of 60,000 granule cells (GCs), 600 Purkinje cells (PCs) and 600 deep cerebellar nucleus (DCN) neurons, implemented as LIF neurons, to control six robotic joints. Mossy fibers convey information about the desired and actual position and velocity, whilst the climbing fiber input represents an error signal. The DCN output reaches the Baxter robot after the spike output is converted into an analogue command. The cerebellar controller is simulated in the real-time SNN simulator EDLUT [3]. Synaptic plasticity between the parallel fibers (PFs) and PCs leads to motor learning, enabling the robot to learn and follow smooth trajectories characterized by sinusoidal-like profiles.
In the present work, we implemented and replicated Abadía et al.'s cerebellar controller both in a Gazebo simulation of the Baxter robot and on a real Baxter robot. The compliance of the robot was also tested by attaching different masses and an elastic band to the end-effector.
In Abadía et al.'s cerebellar controller, the adaptive robot control relies entirely on synaptic plasticity between PFs and PCs. Experimental studies, however, indicate multiple roles of different types of distributed synaptic plasticity in the cerebellum, and little work has been done to understand the role of distributed synaptic plasticity in motor learning in a robotic control system [4,5,6]. We are currently investigating the potential of synaptic plasticity in the granular layer, molecular layer and the DCN for enhancing adaptive robot control by modifying the current cerebellar controller.


References
 
1. Oertel C, Castellano G, Chetouani M, et al. Engagement in Human-Agent Interaction: An Overview. Front Robot AI. 2020, 4, 92.
2. Abadia I, Naveros F, Garrido JA, et al. On robot compliance, A cerebellar control approach. IEEE Trans Cybern. 2021, 51(5), 2476-2489.
3. Naveros F,  Luque NR, Garrido JA, et al. A spiking neural simulator integrating event-driven and time-driven computation schemes using parallel CPU-GPU co-processing: a case study. IEEE Trans Neural Netw Learn Syst. 2015, 26(7), 1567-74.
4. D’Angelo E, Mapelli L, Casellato C, et al. Distributed Circuit Plasticity: New Clues for the Cerebellar Mechanisms of Learning. Cerebellum. 2016, 15(2), 139-51.
5. Garrido JA, Luque NR, D'Angelo E, et al. Distributed cerebellar plasticity implements adaptable gain control in a manipulation task: a closed-loop robotic simulation. Front Neural Circuits. 2013, 9, 159.

6. Gao Z, van Beugen BJ, De Zeeuw CI. Distributed synergistic plasticity and cerebellar learning. Nat Rev Neurosci. 2012, 13(9), 619-35.

Speakers:
Volker Steuber, Professor, Centre for Computer Science and Informatics Research, University of Hertfordshire


Sunday July 21, 2024 4:20pm - 6:20pm PDT
TBA

4:20pm PDT

P005 Ratio of excitatory to inhibitory neurons shapes computational properties in cortex
The cerebral cortex exhibits a sophisticated neural architecture across its six layers. A recent study reported that these layers exhibit distinct ratios of excitatory to inhibitory (EI) neurons [1]. This ratio is a key property for achieving the often-reported balance between excitation and inhibition. However, neither previous theoretical nor simulation studies have addressed how these differences in EI composition will affect layer-specific dynamics and computational properties.
We investigate how the EI ratio influences dynamics and computation by varying this ratio in a randomly connected network. We consider the prototypical 'balanced' network of leaky integrate-and-fire neurons [2]. The simplified features of this unstructured, sparsely connected network lead to a small number of free parameters and a detailed understanding of the dynamics. We consider different EI ratios in the balanced network by varying the number of inhibitory neurons. To keep the network in a physiological operating range in terms of firing rate (i.e., to keep the inhibitory drive constant), we either varied the firing threshold of the inhibitory neurons or the synaptic strength between inhibitory and excitatory neurons. The influence of these three network parameters (the EI ratio, the inhibitory-to-excitatory synaptic strength, and the inhibitory firing threshold) on the network dynamics is examined by measuring the network firing rate, the coefficient of variation of the inter-spike interval distribution, the network synchrony and the dimensionality of the network activity (the participation ratio). We assess the dimensionality of the network response to two different time-dependent inputs by looking at the distribution of explained variance over the principal components [3].
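The participation ratio used above has a compact closed form over the PCA spectrum; a minimal sketch:

    import numpy as np

    def participation_ratio(activity):
        """PR = (sum_i l_i)^2 / sum_i l_i^2, with l_i the eigenvalues of
        the covariance of `activity` (shape: time x neurons) [3]."""
        lam = np.linalg.eigvalsh(np.cov(activity, rowvar=False))
        lam = np.clip(lam, 0, None)  # guard against tiny negative eigenvalues
        return lam.sum() ** 2 / np.sum(lam ** 2)

PR approaches the number of neurons when variance is spread evenly over components, and 1 when a single component dominates.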
By varying the EI ratio while keeping the network in the balanced state (balancing the inhibitory drive with either the inhibitory spike threshold or the synaptic strength), we can vary the dimensionality of the network activity. This means that within the balanced state there are states with a higher or lower dimensionality, depending on the EI ratio, which influences the type of computation that the network can perform [4]. Thus, by varying the EI ratio over layers and networks, the cortex increases its dynamic repertoire and can tune different subnetworks for different tasks.
This finding is consistent with the hypothesis that in superficial layers 2/3, a lower EI ratio increases variability in neuronal firing patterns. This heightens the dimensionality of the layer's activity, enabling it to handle complex tasks like distinguishing between stimuli. Conversely, layer 4 exhibits a higher EI ratio, resulting in reduced variability of neuronal firing patterns, which makes the layer more efficient at transmitting information to layer 2/3 [5,6].
References
[1] Huang C, et al. Neuroinformatics. 2022, 20(4), 1013-1039.
[2] Brunel N.  Journal of computational neuroscience. 2000, 8, 183-208.
[3] Gao P, et al. BioRxiv. 2017, 214262. https://doi.org/10.1101/214262
[4] Gast R, et al. Proceedings of the National Academy of Sciences. 2024, 121(3), e2311885121.
[5] Moerel M, et al. Scientific reports. 2019, 9(1), 5502.
[6] Ostojic S. Nature neuroscience. 2014, 17(4), 594-600.

 


Sunday July 21, 2024 4:20pm - 6:20pm PDT
TBA

4:20pm PDT

P006 From ion channel dynamics to chaotic neuronal population effects: Analysis of chaotic oscillator model with applications to mouse data
Ion channels represent a crucial link between genetic information and the electrical activity of the brain. While their activity can be studied at the cellular and molecular levels, connecting these observations to whole-brain dynamics remains a challenge. Large-scale brain models allow for the simulation of brain network activity at the macroscopic level. By incorporating ion channel surrogates into these models, it becomes possible to simulate the impact of changes in ion channel parameters on emergent dynamics at the level of neural populations.

In this study, we used the Larter-Breakspear model implemented in The Virtual Brain (TVB) software, a Python-based open-source brain simulation platform, to model whole-brain dynamics. The Larter-Breakspear model is a conductance-based neural mass model with chaotic oscillator dynamics that includes ion gradient dynamics and allows for the manipulation of ion channel properties in simulations. We present analyses using both single-node and large-scale networks, the latter using the tracer-based mouse structural connectome from the Allen Institute. In addition, we performed fitting using stochastic grid optimization with mouse local field potential (LFP) data from the Allen Institute and analysed the best-fitting simulation dynamics. The different chaotic dynamics were analysed using Poincaré maps, which also served to determine the chaoticity of the dynamical regimes, and Lyapunov spectra were computed from the simulated time series as an indicator of chaotic dynamics. We explored the role of conduction speed, global coupling, and ion channel parameters on patterns of synchronization and metastability.

We successfully reproduced the oscillatory, chaotic, and fixed-point regimes observed in previous works, now in our more biophysically inspired virtual mouse brain, and our detailed analysis of Poincaré maps and Lyapunov spectra provided a deeper understanding of the different chaotic dynamical regimes, elucidating methods to measure and distinguish chaotic regimes within the neural mass model. Fitting simulated functional connectivity (FC) to empirical FC derived from electrophysiological data yielded a high fit, with correlations as high as 0.9. Previous studies using the Larter-Breakspear model mostly used binary structural connectivity and either zero-lag or constant conduction delays, whereas the current study employs weighted structural connectivity and distance-dependent conduction delays, emphasizing increased biophysical realism.

Neural mass models are a useful tool to bridge between the microscopic activity of individual neurons and macroscopic brain dynamics, such as firing rates, synchronization, and oscillatory patterns. The combination of the conductance-based neural mass model and the analysis methods implemented here allows us to simulate altered ion channel gradients and to observe their effects on whole-brain dynamics. This provides the potential to link molecular, cellular, and network-level mechanisms that underlie neural function and dysfunction. Beyond theoretical insights, our results suggest the potential to elucidate mechanisms underlying brain disorders that involve genes implicated in ion channel activity, such as epilepsy or several neuropsychiatric disorders.



Sunday July 21, 2024 4:20pm - 6:20pm PDT
TBA

4:20pm PDT

P007 Whole-brain connectome-based simulation in Parkinson’s disease
Expanding from single-node and small-network studies of the basal ganglia, we extend our work to whole-brain simulation utilizing the Human Connectome. The Virtual Brain (TVB), known for its computational prowess in modeling brain dynamics, has shown promise in personalized brain network modeling for epilepsy. In our current endeavor, we adapt TVB methodologies to investigate Parkinson's disease. Our focus lies in integrating features specific to Parkinson's disease within the TVB simulation framework, particularly addressing the influence of neuromodulators on brain activity patterns and, in particular, avalanches. To achieve this, we augment a neural mass model based on a mean-field approach to encompass the dynamics of dopamine, central to Parkinson's pathology. We use clinical data from patients with deep implanted electrodes recorded simultaneously with scalp electrodes under on- and off-medication (levodopa) conditions. Under the different conditions, we observed specific patterns of avalanches. These features, present in the avalanche transition matrices, are captured by our whole-brain model. Presenting results from whole-brain simulations based on the human connectome, we studied how impaired dopamine dynamics in the basal ganglia affect the dynamics of the entire brain.




Sunday July 21, 2024 4:20pm - 6:20pm PDT
TBA

4:20pm PDT

P008 Attractor-based neuromimetic models of mammalian spatial navigation circuits learn to navigate agents in simulated environments
Animals use an internal neural representation of the external world to navigate complex and dynamic environments. Grid and place cells, together with head-direction and boundary cells, enable robust navigation in the real world. Grid cells display hexagonally arranged spatial receptive fields with different scales and orientations, whereas place cells are activated at irregularly arranged locations. Theoretical models based on biological descriptions of grid and place cells [3] generally lack learning and function. In contrast, artificial neural networks are inefficient for dynamic, complex spatial reasoning tasks. By building circuit models of mammalian spatial navigation structures, we aim to increase understanding of the neural basis of navigation in animals, and use it to improve fully autonomous or hybrid artificial systems with humans in the loop.
We designed spiking neuronal network models of navigation by incorporating well-established theoretical models [1-4] of grid and place cells based on the proposed architecture of ring attractor models. We further included head-direction and conjunctive cells (encoding grid position and head direction) in our model. We used 4 grid modules (scales: 1x, 2x, 4x and 8x), each with 64 grid cells wired in a toroidal configuration capturing a unique spatial scale. Grid cells in each module concurrently received direct inputs from a simulated environment and drove place cells (6400) in the network model. Each grid cell mapped onto 4 conjunctive cells associating grid positions with 4 head directions (North, South, East, West). Head-direction cells received input from motor areas, which generated moves randomly during training. Synaptic weights between conjunctive and grid cells were changed during training using reinforcement-based spike-timing-dependent plasticity to store the heading directions leading to goals. This strategy embedded directional pathways across grid cell-conjunctive cell modules. After training, the agent only needed visual information about the starting position, from which the network led the agent to the target location by sequentially activating grid and conjunctive cells. Motor actions were generated via serial activation of conjunctive and place cells dictating head rotation direction and move/stay actions. Place cell activity in our model successfully reconstructed traversed paths in the external world, affirming the ability of the grid cell-based architecture used in our model to internally generate learned navigation routes.
Our modeling results affirm that grid cells are required to internally encode and decode the spatial representation of the external world, and predict that grid cells play a role in visually deprived navigation. In the future, we aim to incorporate additional details of the neurobiology to improve understanding of how animals solve spatial navigation tasks, and eventually improve the performance of hybrid artificial and human systems solving similar tasks.
References
[1] McNaughton B, Battaglia F, Jensen O, Moser EI, Moser M-B (2006) Path integration and the neural basis of the 'cognitive map'. Nat Rev Neurosci 7, 663–678.
[2] Burak Y, Fiete IR (2009) Accurate path integration in continuous attractor network models of grid cells. PLoS Comp Biol 5(2): e1000291.
[3] Giocono LM, Moser M-B, Moser EI (2011) Computational models of grid cells. Neuron 71, 589-603.
[4] Bush D, Barry C, Manson D, Burgess N (2015) Using grid cells for navigation. Neuron 87, 507-520.



Sunday July 21, 2024 4:20pm - 6:20pm PDT
TBA

4:20pm PDT

P009 Diagnosis of Multimodal Neuroimaging Data for Amyotrophic Lateral Sclerosis
Amyotrophic Lateral Sclerosis (ALS) is a progressive neurodegenerative disorder affecting the upper and lower motor neurons. Studies indicate remarkable heterogeneity of ALS in patients, with wide variability in the age and site of onset, observable manifestations and the rate of progression [1, 2]. Also, importantly, the life expectancy of the patients can vary from just a few months to several years [2]. There is no cure for ALS currently, with medical treatment largely aimed at managing the disease and its effects, to be less debilitating for the patients. Diagnostic delays are often up to a year after onset of ALS, thereby hampering early identification and management of the disease [1]. It is crucial to diagnose ALS much earlier, and also to predict its expected progression, so that treatments can be tailored based on the current, as well as expected course of the disease [3].

To this end, we are developing a multimodal prediction pipeline that harnesses the strengths of machine learning to analyze neuroimaging data. The data are obtained via advanced magnetic resonance imaging (MRI) techniques such as structural, functional, diffusion and sodium imaging. By integrating these various MRI data modalities through the use of graph fusion techniques, we aim to create a comprehensive, multi-modal representation of the brain networks involved in ALS. Such models can help identify biological markers for the disease and thereby assist in clinical predictions. Sodium MRI, in particular, holds immense promise in view of the insight it can offer with regard to metabolic dysfunction as a precursor to cell death [4].
Our preliminary studies have demonstrated the promise of such an approach for identifying ALS and classifying patients based on their expected rate of progression. Each data modality was found to endow the model with distinct prediction capabilities. We are working towards enhancing the pipeline by integrating modalities that deliver synergistic effects, increasing the overall predictive power.
Our study extends beyond traditional MRI analysis methods by exploring biomarkers represented as graph data structures. This holds the potential to uncover signatures and associations that may otherwise be indiscernible. It aims to go beyond traditional architectures, like convolutional neural networks (CNNs), that can extract effective biomarkers but may lack interpretability and fail to take the inherent structural relationships within the brain into account. By utilizing graph neural networks (GNNs) to analyze MRI data for ALS, we aim to specifically focus on graph-based biomarkers that can capture these relationships [5].
 
Acknowledgements
This project has received funding from the Excellence Initiative of Aix-Marseille Université - AMidex, a French “Investissements d’Avenir programme” AMX-21-IET-017 (via the institutes NeuroMarseille and Laënnec), and from the Association pour la Recherche sur la Sclerose Laterale Amyotrophique et Autres Maladies du Motoneurone (ARSLA), grant ARSLA-2023-ac-05.
 
References
  1. Masrori, P., & Van Damme, P. (2020). https://doi.org/10.1111/ene.14393
  2. Tan, H. H., et al. (2022). https://doi.org/10.1002/ana.26488
  3. Westeneng, H. J., et al. (2018). https://doi.org/10.1016/S1474-4422(18)30089-9
  4. Zaaraoui, W., et al. (2012). https://doi.org/10.1148/radiol.12112680
  5. Li, X., et al. (2021). https://doi.org/10.1016/j.media.2021.102233


Sunday July 21, 2024 4:20pm - 6:20pm PDT
TBA

4:20pm PDT

P010 Natural Language Processing for Early Detection of Psychological Disorders in Brazilian Portuguese
Anxiety and depression are the most prevalent mental disorders, affecting approximately 581 million individuals globally, and negatively impacting their quality of life. These disorders have a multifactorial etiology and a silent manifestation, making early diagnosis challenging. Technological advances in Computational Linguistics and the use of Large Language Models for Natural Language Processing (NLP) have shown relevant results in detecting early stages of Schizophrenia and Dementia. The present study aims to adapt NLP methodologies used for Schizophrenia and Dementia, with the goal of facilitating the differential diagnosis and detection of early signs of anxiety and depression in written Brazilian Portuguese. To do so, we analyzed data from a sample of 476 individuals (mean age 33.9) from all Brazilian regions, who completed an online questionnaire about risk and protective factors for mental illness, as approved by the local ethics committee (CAAE: 54127221.0.0000.5537). The questions covered sociodemographic aspects, lifestyle, general health, written reports, COVID-19 impact, occupational and social contexts. Psychometric screening instruments for anxiety (GAD-7: mean=11.9, SD=6.23), depression (PHQ-9: mean=13.8, SD=7.21), sleep disorders (PSQI: mean=8.41, SD=3.85), and psychosocial safety climate (PSC-12: mean=35, SD=18) were also applied. According to the GAD-7 results, 40% of this sample were classified with severe symptoms of anxiety. PHQ-9 estimated that 26.1% had highly severe symptoms of depression. PSQI revealed that 75.6% had poor sleep quality in the previous month. To compare the psychometric screening results with the individual clinic history, participants were asked about previous anxiety and depression diagnoses. 51.3% did not have any psychiatric diagnosis, 13.9% had been previously diagnosed with both anxiety and depression, 7.1% had depression, and 18.3% had anxiety. When asked about the frequency of suicidal ideation, 25.4% of this sample reported some occurrence of death thoughts in the previous weeks, and 7.8% had death thoughts almost every day. Among those who reported a daily frequency of thoughts about death, the number of participants without any previous psychiatric diagnosis was similar to those with depression and/or anxiety. This is a concerning finding that prompts discussion about the lack of access to mental health professionals, proper diagnoses, and treatment. Combined information about previous diagnosis, current presence of symptoms, and suicidal ideation provides a rich database for developing and testing NLP applications for mental health. Preliminary NLP analysis of written self-reports on the current state of health identified topics associated with anxiety, eating disorders, and prevalence of negative emotions, like sadness and anger. Such results may be useful to outline paths that enable faster and more efficient strategies for the detection of early signs of anxiety, depression, and suicide risk, as well as fostering a deeper and more comprehensive understanding of the cognitive mechanisms underlying these disorders.
Acknowledgments: This study was financed by the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - Brasil (CAPES)




Sunday July 21, 2024 4:20pm - 6:20pm PDT
TBA

4:20pm PDT

P011 Modeling the connectivity of excitatory neuronal networks derived from human iPSCs
Animal-derived neuronal cultures have been the gold standard for in vitro models. However, outcomes from animal models are not always relevant to the human brain, especially in the framework of personalized medicine. Human-induced pluripotent stem cells (hiPSCs) offer a promising alternative to overcome this barrier. However, hiPSC cultures still require extensive characterization, and in vitro cultures are not sufficient for a comprehensive analysis, as certain parameters may be inaccessible, especially at the level of network dynamics. Computational modeling is a powerful tool that can help investigate these elusive aspects and provide insight into the mechanisms behind peculiar electrophysiological activities or pathological conditions. Additionally, computational approaches can direct the design of experiments to investigate the results obtained in silico.

The focus of this work is to reproduce excitatory hiPSC neuronal networks coupled to Micro-Electrode Arrays (MEAs) with a computational model, to infer which parameters and features are most involved in the genesis of spontaneous network activity. We developed an in silico neuronal network of 100 neurons, modeled according to the Hodgkin-Huxley formalism. Each cell features an after-hyperpolarization (AHP) current to simulate neuronal adaptation, and each synapse features both facilitation and depression short-term plasticity (STP). As in vitro neuronal networks manifest self-sustained spontaneous activity, we introduced white noise to all cells and a subset (5%) of tonic spiking neurons (pacemakers) to sustain random spiking in the network. Pacemakers were also included because they have been identified as a better trigger than random noise for the genesis of network bursts, a well-known characteristic of in vitro cultures. To reproduce the main characteristics of both firing and bursting activity (i.e., rate, duration, instantaneous firing profile), we focused on two fundamental features of the model: (a) adaptive mechanisms and (b) connectivity rules. For (a), we investigated the strength and time constant of both AHP and STP. For (b), we focused on mean network connectivity and connectivity distribution, testing all possible combinations of the following connectivity rules: random (RND), scale-free (SF), and small-world (SW).

Regarding the mean network connectivity, we assessed the range that allowed for physiological firing and bursting rates, and identified the best value (in terms of yielded results) to be around 10% (i.e., 10 connections per neuron). Secondly, we focused on tuning both AHP and STP to reproduce physiological burst duration, as burst rise and decay phases have been associated with synaptic facilitation and depression mechanisms, respectively. To adjust and smoothen the decay phase of the instantaneous firing profile during population events, we introduced SF connectivity, for incoming and outgoing degrees, with slope 2. Other connectivity rules (RND and SW connectivity) and combinations thereof all produced a too-sharp decay in the profile. Finally, we varied the limits in connectivity degree, tuning once again the mean network connectivity for the SF distribution.

All simulations were performed with the Brian2 (2.5.1) simulator, in the Python language (3.10.11). Differential equations were integrated numerically with Euler or exponential Euler methods, with a time step of 0.1 ms.
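A minimal Brian2 sketch of synapses with both facilitation and depression STP, in the Tsodyks-Markram style (illustrative LIF stand-in and parameter values; the full model uses Hodgkin-Huxley dynamics plus an AHP current, which are not reproduced here):

    from brian2 import *

    tau_f, tau_d, U = 500*ms, 200*ms, 0.2    # illustrative STP parameters

    G = NeuronGroup(100, "dv/dt = -v / (10*ms) : 1",
                    threshold="v > 1", reset="v = 0", method="exact")
    S = Synapses(G, G,
                 model="""du/dt = (U - u) / tau_f : 1 (event-driven)
                          dR/dt = (1 - R) / tau_d : 1 (event-driven)""",
                 on_pre="""v_post += 0.5 * u * R
                           R -= u * R
                           u += U * (1 - u)""")
    S.connect(p=0.1)    # ~10 connections per neuron, as in the abstract
    S.u = U
    S.R = 1

The (event-driven) flags let Brian2 update the facilitation variable u and the depression variable R analytically only at spike times, keeping the simulation cheap.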


Sunday July 21, 2024 4:20pm - 6:20pm PDT
TBA

4:20pm PDT

P012 Exploring the Role of Plasticity in Modulating Hippocampal Replays
Current understanding of hippocampal replay involves sequential activation of place-cell ensembles linked through plasticity. However, the plasticity rules that give rise to hippocampal (HpC) replay are poorly understood, particularly for bi-directional replays. Further, our understanding of HpC plasticity and its variation across online/offline (wake/sleep) states is limited, being largely based on in-vitro studies. A novel model by Ecker et al. (2022) [1] successfully produced spontaneous bi-directional replays in CA3 during offline states by incorporating a symmetric spike-timing-dependent plasticity (STDP) rule [2] during exploration. We used this model to test the effects of different plasticity rules in the offline state on the structure and speed of replay. Our model shows that when long-term potentiation (LTP) or long-term depression (LTD) is applied to synapses with retrograde directionality, counter to the direction of replay, replays respectively slow down or accelerate. Further, our model demonstrates that maintaining symmetric STDP leads to hyperactivity resembling epilepsy (Fig 1). Interestingly, asymmetric Hebbian STDP (H-STDP) [3], especially when modulated by acetylcholine during offline states [4], biases the direction of replays, accelerates them, and ultimately eliminates them via decoupling through synchrony [5]. In contrast, asymmetric anti-Hebbian STDP (AH-STDP) [6] preserves replays in both directions, decreases their speed to a stable bound and decreases mean network connectivity ("synaptic compression"). Our research thus clarifies the surprising correlation between extended exploration and slower replays identified by Berners-Lee et al. (2022) [7] and indicates a role for H-STDP in the degradation of sequences generated by the CA3 HpC region. Additionally, our results propose a significant role for AH-STDP in the formation of memory traces and synaptic efficiency, a relationship that has not been extensively corroborated by experimental data to date.
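For concreteness, the three STDP windows compared in this work can be sketched as functions of the spike-time difference (illustrative amplitudes and a 20 ms time constant, not the study's fitted values):

    import numpy as np

    dt = np.linspace(-100, 100, 401)          # t_post - t_pre (ms)
    sym = 0.01 * np.exp(-np.abs(dt) / 20)     # symmetric STDP: potentiation
                                              #   for near-coincident spikes [2]
    hebb = np.where(dt > 0,                   # asymmetric Hebbian STDP [3]:
                    0.010 * np.exp(-dt / 20), #   pre-before-post -> LTP
                    -0.012 * np.exp(dt / 20)) #   post-before-pre -> LTD
    anti_hebb = -hebb                         # asymmetric anti-Hebbian STDP [6]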
 
1.   Ecker et al. Elife 11 (2022): e71850.
2.   Mishra et al. Nature communications 7.1 (2016): 11552.
3.   Bi and Poo. Journal of neuroscience 18.24 (1998): 10464-10472.
4.   Sugisaki et al. Neuroscience 192 (2011): 91-101.
5.   Lubenov, E. V., & Siapas, A. G. (2008). Decoupling through synchrony in neuronal circuits with propagation delays. Neuron, 58(1), 118-131.
6.   Pandey and Sikdar. The Journal of Physiology 592.16 (2014): 3537-3557.

7.   Berners-Lee et al. Neuron 110.11 (2022): 1829-1842.


Sunday July 21, 2024 4:20pm - 6:20pm PDT
TBA

4:20pm PDT

P013 Evolutionarily Conserved Circuit Motifs in Drosophila and C. elegans
Using comparative connectomics, we analyze the published connectome of C. elegans [1,2,3] in light of the antennal lobe and mushroom body neural circuit architecture of the larval Drosophila [4,5], the phylogenetically closest reconstructed connectome.  We find the typical configuration of an antennal lobe, complete with inhibitory local neurons and projection neurons.  The projection neurons synapse onto the dendrites of neurons with a morphology and connectivity consistent with that of the parallel fibers in a mushroom body.  Dopaminergic and octopaminergic neuromodulatory neurons tile the axons of these parallel fibers, providing further evidence for the analogy.  Our findings are consistent with the preservation of a common ancestral neural architecture in nematodes, ecdysozoans, and lophotrochozoans.  We, therefore, suggest that the worm's nerve ring is, after all, a brain.  
References
1. White JG, Southgate E, Thomson JN, and Brenner S. The structure of the nervous system of the nematode Caenorhabditis elegans. Philosophical Transactions of the Royal Society of London - Series B: Biological Sciences, 314:1–340, 1986.
2. Varshney LR, Chen BL, Paniagua E, Hall DH, and Chklovskii DB. Structural properties of the Caenorhabditis elegans neuronal network. PLoS Comput Biol, 7(2):e1001066, 02 2011.
3. Witvliet D, Mulcahy B, Mitchell JK, Meirovitch Y, Berger DK, Wu Y, Liu Y, Koh WX, Parvathala R, Holmyard D, Schalek RL, Shavit N, Chisholm AD, Lichtman JW, Samuel ADT, and Zhen M. Connectomes across development reveal principles of brain maturation in c. elegans. bioRxiv, 2020.
4. Eichler K, Li F, Kumar AL, Park Y, Andrade I, Schneider-Mizell C, Saumweber T, Huser A, Bonnery D, Gerber B, Fetter RD, Truman JW, Priebe CE, Abbott LF, Thum A, Zlatic M, and Cardona A. The complete connectome of a learning and memory centre in an insect brain. Nature, 548(7666):175–82, 2017.
5. Berck ME, Khandelwal A, Claus L, Nunez LH, Si G, Tabone CJ, Li F, Truman JW, Fetter RD, Louis M, Samuel AD, and Cardona A. The wiring diagram of a glomerular olfactory system. eLife, page e14859, 2016.




Sunday July 21, 2024 4:20pm - 6:20pm PDT
TBA

4:20pm PDT

P014 Information integration (ϕID) and high order interactions in Caenorhabditis elegans sleep-wakefulness neural dynamics
Sleep is ubiquitous within Metazoa, but consciousness is traditionally attributed to few animal lineages. Caenorhabditis elegans, a 302-neuron nematode, displays spontaneous bouts of locomotor quiescence as well as developmentally-timed, stress-induced, and hypoxia-induced quiescence. Given that they fulfill all behavioral criteria of sleep (a reversible quiescent behavior, increased arousal threshold, a stereotypical posture, and, in the case of developmentally-timed sleep, homeostatic rebound), 'developmental', 'satiety-induced', and 'stress-induced' quiescence are all regarded as true sleep states. Given the relationship between sleep and shifts in consciousness, measuring dynamic variations in neural activity during sleep-wake dynamics is a promising paradigm for assessing consciousness in distant branches of the phylogenetic tree. Here, using data from [1], different informational metrics were implemented in a hypoxia-induced quiescence experiment with npr-1 C. elegans mutants expressing a genetically encoded calcium sensor (NLS-GCaMP5K) that allows the simultaneous recording of calcium dynamics from several individual neurons. Functional connectivity was characterized using graph topological metrics, as well as high-order interdependences, such as O-information and S-information, to unveil synergistic and redundant interactions. In addition, we measured Phi Information Decomposition (ϕ-ID), a version of the information integration measure ϕ that comes from the Integrated Information Theory of consciousness (IIT 2.0) and is based on time-dependent mutual information (TDMI). It showed higher average values of ϕR during wakefulness, meaning that pairwise information integration is higher while awake. The network analysis of functional connectivity did not show differences in segregation or integration measures between asleep and awake nematodes. On the other hand, the O-information, which measures the balance between synergy and redundancy in multivariate data, is on average higher for triplets of neurons in asleep nematodes. We conclude that sleep and wake states can be characterized by different informational measures of their neural dynamics, and that quiescence coincides with a diminished ability of the network to integrate information, hinting at a possible loss of consciousness during quiescence. The discrepancy between ϕ-ID and O-information might be due to the different ways in which they treat the time-dependency of the data. Different hubs of highly synergistic neurons are identified for sleep and wakefulness.
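A toy plug-in estimator of the O-information for discretized activity, following the standard formula Ω(X) = (n-2)H(X) + Σ_i [H(X_i) - H(X_{-i})] (a didactic sketch; the study's actual estimation pipeline is not reproduced here):

    import numpy as np

    def entropy(states):
        """Plug-in Shannon entropy (bits); rows = samples, cols = variables."""
        _, counts = np.unique(states, axis=0, return_counts=True)
        p = counts / counts.sum()
        return -np.sum(p * np.log2(p))

    def o_information(X):
        """Positive = redundancy-dominated, negative = synergy-dominated."""
        n = X.shape[1]
        o = (n - 2) * entropy(X)
        for i in range(n):
            o += entropy(X[:, [i]]) - entropy(np.delete(X, i, axis=1))
        return o

Applied to all triplets of neurons, this yields the redundancy/synergy comparison between asleep and awake states described above.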


Acknowledgments

Fondo Nacional de Desarrollo Científico y Tecnológico (FONDECYT): Patricio A Orio, grant number 1211750; ANID-Basal: Patricio A Orio, grant number FB0008; ANID-Doctoral Fellowship: Diego Becerra, 21210914.

References

[1] Nichols ALA, Eichler T, Latham R, Zimmer M. A global brain state underlies C. elegans sleep behavior. Science. 2017, 356(6344):eaam6851.




Sunday July 21, 2024 4:20pm - 6:20pm PDT
TBA

4:20pm PDT

P015 Predicting eye movements with a detailed network model of the cerebellar cortex
Many models of transcranial stimulation have been developed to study its mechanisms of action. However, one of the main limitations of these models is their lack of validation methods [1], [2]. Our goal is to develop a validation method for non-invasive stimulation models. We connected a detailed model of the cerebellar cortex to a control theory model of the vertical oculomotor system [3]. The output of our cerebellar model predicts a measurable behaviour - gaze holding and smooth pursuit movements.
To predict eye movements from our cerebellar network, we constructed the activation function used in the oculomotor system model from the population response of 51 simulated Purkinje cells. We ran 94 simulations of the granular layer, with the mossy fibre input increasing from 0 to 186 Hz in 2 Hz steps. 
We simulated the population of Purkinje cells by providing the Purkinje cell model with 51 different background inputs (Poisson spike trains with a mean frequency between 0 and 50 Hz in 1 Hz steps) and generated the population response by averaging the mean firing rate of the 51 simulated Purkinje cells. To normalise the population response, we:
  1. Fitted the curve with three straight lines;
  2. Determined the line with the maximum slope;
  3. Normalised the slope of that line to 1 and rescaled the slopes of the other lines accordingly;
  4. Normalised the amplitude of the midpoint of the line with slope 1 to 0.5 and shifted that midpoint to zero (see the sketch below).
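A minimal numerical sketch of these four steps follows, assuming the population response is a 1-D curve of mean Purkinje rate versus input and that the three segments come from a fixed three-way split of the input axis; the breakpoints actually used in the study are not specified here, and the toy curve is illustrative.

# Sketch of the four-step normalisation, with assumed fixed breakpoints.
import numpy as np

x = np.linspace(0, 50, 51)                       # background input rates (Hz)
rate = 20 / (1 + np.exp(-(x - 25) / 5))          # toy sigmoidal population response

# 1. Fit three straight lines on consecutive thirds of the curve.
segments = np.array_split(np.arange(x.size), 3)
fits = [np.polyfit(x[idx], rate[idx], 1) for idx in segments]  # (slope, intercept)

# 2. Take the segment with the maximum slope.
k = int(np.argmax([f[0] for f in fits]))
slope, intercept = fits[k]

# 3. Normalise so the steepest slope becomes 1 (all rates rescaled together).
norm_rate = rate / slope

# 4. Shift so the midpoint of the steepest segment sits at (0, 0.5).
x_mid = x[segments[k]].mean()
y_mid = (slope * x_mid + intercept) / slope
activation = norm_rate - y_mid + 0.5             # normalised activation function
print(activation.min(), activation.max())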
We replicated the results corresponding to a healthy participant in the original study [3] by simulating the oculomotor system model with our activation function.
In conclusion, we connected the output of our cerebellar network to a measurable behaviour. We are currently stimulating our cerebellar network with a simulated electric field estimated by an MRI-generated head model. We will validate the non-invasive stimulation model by comparing the eye movements predicted for a baseline condition with the eye movements predicted from the stimulated network.
References
  1. Seo H, Jun S C. Multi-Scale Computational Models for Electrical Brain Stimulation. Front Hum Neurosci. 2017, 11, 515.
  2. Shahid S. S, Bikson M, Salman H, et al. The value and cost of complexity in predictive modelling: role of tissue anisotropic conductivity and fibre tracts in neuromodulation. J Neural Eng. 2014, 11, 3.
  3. Glasauer S, Rössert C. Modelling drug modulation of nystagmus. Prog Brain Res. 2008, 171, 527–534.

Speakers

Volker Steuber

Professor, Centre for Computer Science and Informatics Research, University of Hertfordshire


Sunday July 21, 2024 4:20pm - 6:20pm PDT
TBA

4:20pm PDT

P016 Self-sustained activity and intermittent synchronization in balanced networks
Self-sustained activity in the brain is observed in the absence of external stimuli and contributes to signal propagation and cognitive processes. In this work, using intracellular recordings from CA1 neurons and networks of adaptive exponential integrate-and-fire (AdEx) neurons, we demonstrate that self-sustained activity presents highly variable patterns with low neural firing rates and small bursts in distinct neurons. We show that both connection probability and network size are fundamental properties that give rise to self-sustained activity, in qualitative agreement with our experimental results [1]. Moreover, we provide a more detailed description of self-sustained activity in terms of lifetime distributions, synaptic conductances, and synaptic currents. We then considered synaptic modifications that can be related to the activity-regulated cytoskeleton-associated (ARC) protein; in particular, we included the connectivity alterations in intense ARC immunoreactive neurons (IAINs) observed in a rodent epileptic model [2]. These alterations contributed to the appearance of epileptic seizure activity and of intermittent up and down states associated with synchronous bursts and asynchronous spikes, respectively. We characterized this intermittent activity and applied optogenetic control: synchronized burst patterns are controlled when IAINs are chosen as the photosensitive population, but control is not effective when non-IAINs are targeted, showing that IAINs play a pivotal role in both the generation and suppression of highly synchronized activities.

[1] Borges F.S., Protachevicz P.R., Pena R.F.O., et al. Self-sustained activity of low firing rate in balanced networks. Physica A. 2020, 537, 122671.

[2] Borges F.S., Gabrick E.C., Protachevicz P.R., et al. Intermittency properties in a temporal lobe epilepsy model. Epilepsy & Behavior. 2023, 139, 109072.



Sunday July 21, 2024 4:20pm - 6:20pm PDT
TBA

4:20pm PDT

P017 Reproducing behavior-related neural manifolds in a detailed model of motor cortex circuits
Several studies over the past decade strongly suggest the existence of low-dimensional latent dynamics in the primary motor cortex (M1) responsible for generating motor behaviors. Researchers have shown that these latent dynamics are surprisingly consistent across individuals performing the same motor behavior. Beyond the computational role of these underlying latent dynamics, the findings have important implications for the development of stable and easy-to-train brain-machine interfaces (BMIs) for spinal cord injury. These latent dynamics result from the combined activity of individual neurons in M1; however, the specific cell types, cortical layers, and biophysical mechanisms underlying them remain largely unknown. We previously developed a realistic computational model of M1 circuits, including highly detailed corticospinal neuron models, which send motor commands to the spinal cord. We validated the M1 model against in vivo spiking and local field potential experimental data and demonstrated that it can generate accurate predictions and help us understand brain disease. Using the model, we generated low-dimensional manifolds of neural activity across different behaviors (e.g., quiet vs. movement) and experimental manipulations (e.g., inactivation of noradrenergic and/or thalamic inputs). Low-dimensional representations of network activity exposed clear clusters related to behavior and experimental manipulations. Attempts to reconstruct high-dimensional activity from the low-dimensional embedding were remarkably successful (66% and 97% correlation for cell and population firing rates, respectively), suggesting that latent dynamics may underlie model neuronal activity despite not being built in. The similarity of movement dynamics after lesions to the control quiet-state dynamics is consistent with behavioral deficits associated with these lesions.
In this work we aim to tune the M1 model to reproduce the specific neural manifolds associated with mouse in vivo recordings during a motor task. For this, we analyzed associations between low-dimensional embeddings of spiking patterns in M1 and behavioral outcomes in experiments on mice performing a single-target joystick-reaching task. In this experiment, Neuropixels probes were implanted in mouse M1 and ventrolateral (VL) thalamus. Spiking patterns and trajectories were jointly analyzed to study whether the embeddings share commonalities, and a decoder was built. In this preliminary work, we evaluated different approaches to adapt the model output responses (firing patterns of selected subpopulations) to reproduce the experimental manifolds, including varying long-range inputs to a specific network instantiation, modifying circuit connectivity via global optimization algorithms, and using biological synaptic plasticity learning rules. By investigating the contribution of different cells (layer, class) to the decoder, we analyzed this scheme in the in silico model and revealed the putative latent dynamics of the output responses.
Reproducing experimental behavior-related neural manifolds in large-scale detailed cortical models can serve to 1) link circuit dynamics across scales (membrane voltages, spikes, LFPs, and EEG) to behavior, manipulations, and disease, 2) further constrain the model, and 3) characterize the relation between low-dimensional latent dynamics and the activity of specific cell types. This may lead to a better understanding of how brain circuits generate motor behavior.
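As a rough illustration of the manifold analysis described above (not the authors' code), the sketch below embeds population firing rates in a low-dimensional space with PCA, reconstructs the full-dimensional activity from the embedding, and scores the reconstruction by per-neuron correlation. Data dimensions and the synthetic latent structure are assumptions for the example.

# Sketch: low-dimensional embedding and reconstruction of population rates.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
latent = rng.standard_normal((2000, 5))            # hypothetical latent dynamics
rates = latent @ rng.standard_normal((5, 300))     # 300 model neurons
rates += 0.5 * rng.standard_normal(rates.shape)    # private noise

pca = PCA(n_components=5).fit(rates)
embedding = pca.transform(rates)                   # low-dimensional manifold
reconstructed = pca.inverse_transform(embedding)   # back to neuron space

# Per-neuron correlation between original and reconstructed firing rates
r = [np.corrcoef(rates[:, i], reconstructed[:, i])[0, 1] for i in range(300)]
print(f"mean cell-level correlation: {np.mean(r):.2f}")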


Sunday July 21, 2024 4:20pm - 6:20pm PDT
TBA

4:20pm PDT

P018 The NetPyNE multiscale modeling tool: latest features and models
NetPyNE is an NIH-funded tool for data-driven multiscale modeling of brain circuits. It enables users to consolidate complex experimental data from different brain scales into a unified mechanistic computational model. NetPyNE builds on top of NEURON, one of the most widely used neural simulation engines. NetPyNE uniquely integrates all major steps of the modeling workflow under a single framework. The core of NetPyNE consists of a standardized JSON-like declarative language that allows the user to define the model across scales, from molecules to neurons to circuits, and which has been officially endorsed by The International Neuroinformatics Coordinating Facility (INCF) as an INCF standard. The NetPyNE API can then be used to generate the NEURON network, run parallel simulations, optimize and explore parameters, visualize and analyze the results. NetPyNE facilitates model sharing by exporting/importing to the NeuroML and SONATA formats.
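For readers unfamiliar with the declarative language, a minimal network in the style of the public NetPyNE tutorials is sketched below; it requires NEURON installed, and all parameter values are illustrative rather than taken from any published model.

# Minimal NetPyNE network in the declarative style (illustrative parameters).
from netpyne import specs, sim

netParams = specs.NetParams()
netParams.popParams['E'] = {'cellType': 'PYR', 'numCells': 40}
netParams.cellParams['PYR'] = {
    'secs': {'soma': {'geom': {'diam': 18.8, 'L': 18.8, 'Ra': 123.0},
                      'mechs': {'hh': {'gnabar': 0.12, 'gkbar': 0.036}}}}}
netParams.synMechParams['exc'] = {'mod': 'Exp2Syn', 'tau1': 0.1, 'tau2': 5.0, 'e': 0}
netParams.stimSourceParams['bkg'] = {'type': 'NetStim', 'rate': 10, 'noise': 0.5}
netParams.stimTargetParams['bkg->E'] = {'source': 'bkg', 'conds': {'pop': 'E'},
                                        'weight': 0.01, 'delay': 5, 'synMech': 'exc'}
netParams.connParams['E->E'] = {'preConds': {'pop': 'E'}, 'postConds': {'pop': 'E'},
                                'probability': 0.1, 'weight': 0.005, 'delay': 5,
                                'synMech': 'exc'}

simConfig = specs.SimConfig()
simConfig.duration = 1000          # ms
simConfig.recordTraces = {'V_soma': {'sec': 'soma', 'loc': 0.5, 'var': 'v'}}
simConfig.analysis = {'plotRaster': True}

sim.createSimulateAnalyze(netParams=netParams, simConfig=simConfig)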
All functionality is also available via a state-of-the-art graphical web application, which now includes automated parameter exploration, the ability to specify function-based parameter values, and complex stimulation patterns (rhythmic, Poisson, Gaussian, etc.). To enhance new users' experience, the graphical tool is equipped with step-by-step tutorials. The web app is fully integrated with the Open Source Brain (OSB) platform, providing users with an online persistent workspace, file management, access to online resources, and interactive Jupyter notebooks.
NetPyNE has been interfaced with CoreNEURON, and several large-scale models were benchmarked on GPUs for the first time, obtaining an impressive 40x speedup. The interface with the LFPykit tool allows NetPyNE to generate current dipole moments and to simulate EEG signals at electrodes placed along a head volume-conduction model. The new co-simulation interface between NetPyNE and The Virtual Brain (TVB) achieves a new milestone for multiscale modeling: linking molecular chemical signaling (via RxD) to whole-brain network dynamics. Both NetPyNE and TVB-NetPyNE are now accessible through the HBP EBRAINS platform, including example use cases.
The functionality of NetPyNE, its robustness, unit-test coverage, and source-code quality are constantly improving. The most recent features include automated validation of user-provided network specifications, an API for selective loading of NEURON mechanisms, and a universal way to describe gap junctions, graded synapses or, more generally, any mechanism of continuous information transmission between pre- and postsynaptic variables. Ongoing projects aim to enhance NetPyNE with extended capabilities for external stimulation (tACS, MEG).
The batch simulations and parameter optimization functionality has been refactored for maintainability and reliability, as well as expanded to utilize Ray Tune's optimization tools and data reporting.
To ensure a consistent user experience across NetPyNE models, we have devised a standardized way of organizing model files, which also simplifies loading and saving. In addition, it sets the basis for the "netpyne" command-line application.
At least 25 publications describe models or tools that have made use of NetPyNE, including our recent detailed models of the motor, auditory and somatosensory thalamocortical circuits, and of spinal cord circuits. Others have developed NetPyNE models to study Parkinson's disease, schizophrenia, ischemic stroke and epilepsy.


Sunday July 21, 2024 4:20pm - 6:20pm PDT
TBA

4:20pm PDT

P019 Reorganisation of modular activity in cortical circuits
Early in development, spontaneous cortical activity in ferrets is organised into modular, low-dimensional patterns, and this organisation is highly similar across multiple cortical areas [1]. Currently, it is unclear how cortical activity changes in development, whether these changes reflect the different functional specialisations across cortical areas, and what circuit changes underlie these changes.
Here, we investigated how the structure of spontaneous cortical activity changes over the course of development by recording it with two-photon calcium imaging before (P21-24), around (P27-32), and after (P39-43) eye opening in five cortical areas (V1, A1, S1, PPC, PFC). Surprisingly, activity patterns in all five areas followed a similar developmental trend: a pronounced decrease in the correlation of nearby neurons and a strong increase in dimensionality, indicating a transition from a modular to a more fine-scaled organisation.
To explore a possible circuit mechanism underlying these common developmental changes, we studied a linear recurrent neural network model. Assuming recurrent interactions (RI) follow a local-excitation/lateral-inhibition profile, the network reproduces the modular structure of spontaneous activity observed in the young cortex [2]. We then considered three possible changes of RI and their effect on dimensionality and correlation structure: 1) a transition towards local inhibition and lateral excitation, 2) an increase in their heterogeneity, and 3) a decrease in their effective strength. We found that scenario 3) agrees best with the experimentally observed changes in activity, suggesting that a mild effective weakening of RI during development could underlie the reorganisation from modular to fine-scaled activity observed in diverse cortical areas.
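A toy version of scenario 3) can be written down directly: a linear recurrent network on a ring with a difference-of-Gaussians interaction kernel, driven by white noise, where scaling the interaction strength g moves activity from modular (high nearby correlations, low dimensionality) to fine-scaled. Kernel widths, network size, and the values of g are illustrative assumptions, not the study's parameters.

# Sketch: weakening recurrent interactions in a linear ring network.
import numpy as np

N = 200
pos = np.arange(N)
d = np.minimum(np.abs(pos[:, None] - pos[None, :]),
               N - np.abs(pos[:, None] - pos[None, :]))
W = np.exp(-d**2 / (2 * 5**2)) - 0.5 * np.exp(-d**2 / (2 * 20**2))  # DoG kernel
W /= np.abs(np.linalg.eigvalsh(W)).max()          # unit spectral radius

def simulate(g, steps=5000, dt=0.1, seed=2):
    rng = np.random.default_rng(seed)
    x = np.zeros(N)
    trace = np.empty((steps, N))
    for t in range(steps):
        x += dt * (-x + g * (W @ x)) + np.sqrt(dt) * rng.standard_normal(N)
        trace[t] = x
    return trace

for g in (0.9, 0.3):                               # strong vs. weakened RI
    tr = simulate(g)
    c = np.corrcoef(tr.T)
    nearby = np.mean([c[i, (i + 3) % N] for i in range(N)])
    ev = np.linalg.eigvalsh(np.cov(tr.T))
    dim = ev.sum()**2 / (ev**2).sum()              # participation-ratio dimensionality
    print(f"g={g}: nearby corr {nearby:.2f}, dimensionality {dim:.1f}")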
References:
[1] N Powell, B Hein, D Kong, J Elpelt, H Mulholland, M Kaschube, G Smith. Common modular architecture across diverse cortical areas in early development. Proceedings of the National Academy of Sciences. In press.
[2] GB Smith, B Hein, DE Whitney, D Fitzpatrick, M Kaschube. Distributed network interactions and their emergence in developing neocortex. Nature Neuroscience 21(11), 1600-1608 (2018); https://doi.org/10.1038/s41593-018-0247-5


Sunday July 21, 2024 4:20pm - 6:20pm PDT
TBA

4:20pm PDT

P020 Preserving network responses under dendritic simplification with the NEST::multiscale toolchain
Synaptic transmission is crucial for information exchange between neurons, but voltage dynamics within neurons, influenced by dendritic currents, further shape network responses. These dynamics alter neuronal output based on the spatial arrangement of synaptic inputs on the dendritic tree. However, the spatial scale at which these dynamics affect brain circuit computations remains unclear, in large part due to a lack of modelling tools that allow for coarse-grained descriptions of the intra-neural dynamics, and the inclusion thereof at the network level.
On the one hand, simulators like NEURON and Arbor focus on modeling highly detailed neuron models; on the other hand, simulators like NEST focus on efficiently simulating large-scale networks of abstract spiking neurons. This leads to the situation where the Blue Brain Project (BBP) and Allen Brain models are implemented in NEURON, incorporating highly detailed dendritic dynamics, while their abstract counterparts are implemented in NEST, thus omitting dendritic dynamics altogether. To bridge this knowledge gap, three key components are essential. Firstly, detailed neuron model databases, such as those of the Allen Brain Project and the BBP, are needed as a source of 'ground truth' models. Secondly, a systematic approach for deriving simplified models, such as the methodology proposed by Wybo et al. [1], is crucial for creating increasingly coarse-grained descriptions of the neural dynamics. Thirdly, a simulation tool capable of accommodating both detailed and simplified models, optimized for efficient inter-neuron communication at the network level, is essential for efficient network simulations. While NEURON and Arbor can simulate complex networks, they are not primarily designed to do so. Conversely, the NEST simulator excels at routing spikes efficiently in large-scale distributed networks, but lacks functionality for simulating compartmental dynamics. Here we integrate the BBP neuron model collection in its entirety into the NEural Analysis Toolkit (NEAT, [1]), thereby allowing each neuron model to be simplified into any desired coarse-grained description. Additionally, we introduce a compartmental modeling framework within the NEST simulator using the NESTML model-description language, enabling seamless simulation of both full and reduced NEAT models. This combined approach forms the NEST::multiscale toolchain. Using this toolchain, we developed a network model of layer 5 of the visual cortex, leveraging state-of-the-art connectomics data [2]. By simulating our network model at full spatial complexity, as well as at various coarse-grained levels, we elucidate the effective spatial resolution needed to model brain circuits. Our results thus shed light on the effective spatial complexity of brain circuits and, consequently, provide insight into the effective 'minimal' model of a neuron, i.e., the simplest possible neuron model that is still able to reproduce all the computational functions performed by real cortical neurons.
[1] Wybo et al., Data-driven reduction of dendritic morphologies with preserved dendro-somatic responses, eLife, 2021, e60936.
[2] Jiang et al., Principles of connectivity among morphologically defined cell types in adult neocortex, Science, 2015, 350(6264).


Sunday July 21, 2024 4:20pm - 6:20pm PDT
TBA

4:20pm PDT

P021 Extracellular ionic modulation: computational investigation of a new neuromodulatory tool
Epilepsy is a debilitating disease affecting millions worldwide, with about one third of cases being resistant to existing pharmacological interventions [1].
Alternative approaches to the problem include the surgical removal of affected tissue, or invasive neuromodulatory techniques such as electrical deep brain stimulation. However, these methods still present major limitations due to the unphysiological, broadly affecting nature of the stimulation. In this work, we demonstrate how an emerging technology, the modulation of extracellular ionic concentrations, can affect the properties of a simulated hippocampal epileptic network.
We studied the modulation of the firing rates of both excitatory and inhibitory neurons, and demonstrated the potential efficacy of this technology in dampening epileptic-like activity as it propagates across neighboring hippocampal regions.
 
To achieve this, we modified a simulation of an epileptic human hippocampus based on the work of Aussel et al. (2022) [2].
Since a change in extracellular potassium concentration effectively alters E_K for neighboring neurons, according to the Nernst equation [3], we considered recent estimates of the change in potassium concentration (and therefore in E_K) that such a device might induce in a neighboring neuron (10-20 mV) [4], and how such a shift could modulate neuronal activity. To simulate the effect of an implanted device capable of altering the extracellular concentration of potassium ions and thereby modulating the activity of the human hippocampus, we considered 90% of both inhibitory and excitatory neurons within a radius of 1.2 mm in the entorhinal cortex (EC) as being modulated by the change in E_K induced by the device (Figure 1A).
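A back-of-the-envelope check of the quoted 10-20 mV range follows: the Nernst potential for K+ as a function of extracellular concentration. The baseline concentrations are typical textbook values assumed for illustration, not the study's exact numbers.

# Nernst potential for K+ vs. extracellular concentration (illustrative values).
import numpy as np

R, T, F, z = 8.314, 310.0, 96485.0, 1          # J/(mol K), K (37 C), C/mol
K_in = 140.0                                   # mM, intracellular

def E_K(K_out):                                # Nernst potential in mV
    return 1e3 * (R * T) / (z * F) * np.log(K_out / K_in)

for K_out in (3.0, 4.5, 6.8):                  # rest vs. device-elevated [K+]_out
    print(f"[K+]_out = {K_out:4.1f} mM -> E_K = {E_K(K_out):6.1f} mV")
# Raising [K+]_out from 3 to ~6.8 mM depolarises E_K by roughly 20 mV, the
# order of magnitude of the 10-20 mV modulation considered above.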
To measure the theoretical effect of this device in stopping epileptic-like activity (Figure 1C), we assessed how the modulation of E_K can dampen, by up to 25%, the amplitude of interictal-like signals that propagate to other neighboring areas of the hippocampus (CA1, CA3, dentate gyrus), despite every single neuron in the EC being externally activated by the same strong electrical stimulus (Figure 1B).

Finally, we demonstrated how this modulatory effect could be used as an online method to fine-tune neurons in specific areas that are showing signs of abnormal activity, within a different, more generic inhibitory and excitatory circuit.




Acknowledgments


We acknowledge funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 862882 (IN-FET project) and University Fund Limburg.


References

[1] Fisher, Robert S., et al. "Operational classification of seizure types by the International League Against Epilepsy: Position Paper of the ILAE Commission for Classification and Terminology." Epilepsia 58.4 (2017): 522-530.
[2] Aussel, Amélie, et al. "Cell to network computational model of the epileptic human hippocampus suggests specific roles of network and channel dysfunctions in the ictal and interictal oscillations." Journal of Computational Neuroscience 50.4 (2022): 519-535.
[3] Pabst, M., et al. "Solution of the Poisson-Nernst-Planck equations in the cell-substrate interface." The European Physical Journal E 24 (2007): 1-8.
[4] Verardo, Claudio, et al. "Bidirectional modulation of neuronal excitability via ionic actuation of potassium." bioRxiv (2022): 2022-04.



Sunday July 21, 2024 4:20pm - 6:20pm PDT
TBA

4:20pm PDT

P022 Building in 3D: how different scaffold approaches influence neuronal activity in in vitro modelling
It has been proven that three-dimensionality is essential for developing reliable models of different anatomical compartments and diseases. We can currently produce implantable structures that help regenerate tissues such as bone and heart, but this is not yet the case for the neuronal compartment: because it is still challenging to understand how the brain computes information, a comprehensive and practical model of neuronal tissue has not yet been found. The present work was conceived in this framework: we aimed to contribute to what must be a collective effort by filling in some information on possible 3D strategies to pursue. We developed and directly compared different kinds of scaffolds (i.e., PDMS sponges, thermally crosslinked hydrogels, glass microbeads) in their effect on the electrophysiological activity of neuronal networks recorded using Micro-Electrode Arrays. We comment on the reproducibility, efficacy, and scalability of the methods, where the classic beads still offer superior performance. Glass microbeads require a longer cell-seeding process, whereas thermogels offer simplicity but pose challenges in yield. PDMS sponges, while requiring customization, offer multiple scaffolds per preparation and efficient sterilization.

Despite scaffold variations, all allowed the recording of typical neuronal features, with beads showing superior electrode coupling. While the overall rate of spiking activity remained consistent, the type of scaffold had a notable impact on bursting dynamics: the frequency, density, and occurrence of random spikes were all affected. Specifically, networks created with sponge scaffolds exhibited a higher burst frequency but lower density, beads showed less frequent bursts with more spikes, and the thermogels had intermediate behavior. Examination of inter-burst intervals revealed distinct burst-generation patterns unique to each scaffold type, with sponge and bead configurations exhibiting well-separated bursts and Geltrex displaying greater temporal complexity. These results suggest that scaffold regularity impacts the richness and regularity of electrophysiological activity. Notably, the propagation of network collective activity showed the most differences among configurations, underlining that functional variations may arise from differences in spatial organization within the 3D constructs. This evidence suggests that not all 3D neuronal constructs can sustain the same level of richness of activity. Beads, with their precise shape and geometrical arrangement, form networks where the bursting rate is very regular and population events involve most of the network. By contrast, network burst spread in sponges is more saltatory, with varying numbers of electrodes involved. The bursting variability and propagation parameters in the two thermogels lie in between the other conditions, which is consistent with their connections being free to grow unbounded.

By comparing different 3D scaffolds for creating neuronal constructs, our results move towards understanding the best strategy for developing functional 3D neuronal units for reliable pre-clinical studies.

Acknowledgments

This work was supported by #NEXTGENERATIONEU (NGEU) and funded by the Ministry of University and Research (MUR), National Recovery and Resilience Plan (NRRP), Project MNESYS (PE0000006), "A Multiscale integrated approach to the study of the nervous system in health and disease" (DN. 1553 11.10.2022).


Sunday July 21, 2024 4:20pm - 6:20pm PDT
TBA

4:20pm PDT

P023 Accessing signatures of criticality in neuronal data using maximum entropy models
Since the pioneering work of Beggs and Plenz [1] reporting neuronal avalanches in cultured cortical slices, the idea that the brain operates in a critical state has gained traction. Most studies since then obtain signatures of criticality from the exponents of power-law distributions of neuronal avalanche sizes and durations [2]. Here we use a completely different and independent approach [3], employing a maximum entropy model to test whether signatures of criticality appear in urethane-anesthetized rats [4].
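To make the approach concrete, the sketch below fits a pairwise maximum-entropy (Ising) model so that the model matches the data's means and pairwise correlations, by exact enumeration (feasible for roughly ten neurons), and then scans the specific heat around the operating point, a peak in which is one commonly used signature of criticality. The toy data, learning rates, and population size are assumptions, not the recordings or estimator used in the study.

# Sketch: pairwise maximum-entropy fit and specific-heat scan (illustrative).
import numpy as np
from itertools import product

rng = np.random.default_rng(3)
data = (rng.random((20000, 8)) < 0.15).astype(float)   # toy binarized spikes (T x N)
N = data.shape[1]
states = np.array(list(product([0, 1], repeat=N)), float)

def model_stats(h, J, beta=1.0):
    E = -(states @ h) - 0.5 * np.einsum('si,ij,sj->s', states, J, states)
    p = np.exp(-beta * E); p /= p.sum()
    return p, states.T @ p, states.T @ (states * p[:, None])

m_data = data.mean(0)
C_data = data.T @ data / len(data)
h = np.log(m_data / (1 - m_data)); J = np.zeros((N, N))
for _ in range(2000):                                   # gradient ascent on likelihood
    _, m, C = model_stats(h, J)
    h += 0.5 * (m_data - m)
    dJ = C_data - C; np.fill_diagonal(dJ, 0)
    J += 0.5 * dJ

E = -(states @ h) - 0.5 * np.einsum('si,ij,sj->s', states, J, states)
for beta in (0.8, 1.0, 1.2):                            # scan around operating point
    p = np.exp(-beta * E); p /= p.sum()
    heat = beta**2 * (p @ E**2 - (p @ E)**2)            # specific heat = beta^2 Var(E)
    print(f"beta={beta}: specific heat {heat:.3f}")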


Sunday July 21, 2024 4:20pm - 6:20pm PDT
TBA

4:20pm PDT

P024 Benchmarking Deep Learning Architectures for Predicting Visual Stimuli Given Single Neuron Spike Patterns
Drawing insights from neuronal processes is integral to understanding the neural mechanisms underlying cognition, to providing higher-definition recordings for brain-computer interfaces, and to developing advanced neurorehabilitation strategies. Our study surveyed machine-learning models and deep-learning architectures capable of predicting visual stimuli from the spike patterns of single neurons. We worked with Neuropixels data from the Allen Brain Observatory [1,2], consisting of the firing rates of single neurons (units) from the visual cortex, thalamus, and hippocampus of several male mice. Each recording involved around 2,000 separate units. The mice were shown 118 different natural images of predators, foliage, and other scenes from their natural habitat, presented in random order, in repetition, for 250 ms each. The firing rates of the separate units were then used as predictors of the shown images.

We assessed prediction performance on held-out test data for various machine- and deep-learning architectures built on training data. A random guess corresponds to a baseline test accuracy of 1/118 = 0.85%. Support Vector Machines and Principal Component Regression had minimal success. A single-layer neural network (NN) on firing rates aggregated over the length of a visual stimulus achieved a test accuracy of 93%; the test accuracy of multi-layer NNs diminished with each added layer. To test the utility of spatial modeling, a single-layer Graph Neural Network (GNN), a simple graph convolution, and a graph attention (GAT) network were tried, with 48%, 68.7%, and 89.8% accuracy, respectively. An LSTM was tested to exploit the temporal aspect of the data, with the firing rates presented as a sequence over time bins during a visual stimulus; this produced the highest test accuracy, 96.6%. A spatio-temporal GAT (ST-GAT) network reached 88.3% accuracy, and also allowed an adjacency matrix to be learned through backpropagation that may represent functional connections between single neurons [3]. We also implemented a Transformer network, which provided a test accuracy of 93%. See the figure for a visualization of the key steps of the study and the results.

Our results provide key insights into the compatibility of different learning architectures for drawing valid conclusions from neural data at the micro level. We found that architectures with fewer layers, including NNs, LSTMs, and Transformers, consistently demonstrated higher test accuracies than their multi-layered counterparts, which had many more parameters and likely overfitted by capturing noise in the data. Additionally, the success of the LSTM and Transformers suggests that including a temporal component allows models to handle the sequential nature of neural data, increasing prediction accuracy. Results were consistent across several mice.

References
1. Allen Brain Observatory. Neuropixels Visual Coding. https://portal.brain-map.org/circuits-behavior/visual-coding-neuropixels
2. Paulk AC, Kfir Y, Khanna AR, et al. Large-scale recording with single neuron resolution using Neuropixels probes in human cortex. Nature Neuroscience, 2022, 25:252-263.
3. Wein S, Schuller A, Tome AM, et al. Forecasting brain activity based on models of spatiotemporal brain dynamics: A comparison of graph neural network architectures. Network Neuroscience, 2022, 6(3):665-701.
4. Ray Tune: Hyperparameter Tuning. https://docs.ray.io/
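The best-performing setup above is easy to sketch: an LSTM reads per-unit firing rates across time bins of a 250 ms presentation and classifies which of 118 images was shown. The shapes, hidden size, and learning rate below are placeholders, not the tuned values from the study.

# Minimal PyTorch sketch of an LSTM classifier over binned firing rates.
import torch
import torch.nn as nn

n_units, n_bins, n_images = 2000, 10, 118

class SpikeLSTM(nn.Module):
    def __init__(self, hidden=256):
        super().__init__()
        self.lstm = nn.LSTM(input_size=n_units, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_images)

    def forward(self, x):              # x: (batch, n_bins, n_units)
        _, (h, _) = self.lstm(x)       # h: (1, batch, hidden), final hidden state
        return self.head(h[-1])        # logits over the 118 images

model = SpikeLSTM()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(32, n_bins, n_units)             # toy batch of binned rates
y = torch.randint(0, n_images, (32,))            # image labels
loss = loss_fn(model(x), y)
opt.zero_grad(); loss.backward(); opt.step()
print(loss.item())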



Sunday July 21, 2024 4:20pm - 6:20pm PDT
TBA

4:20pm PDT

P025 Harmonic oscillator RNNs: single node dynamics, resonance and the role of feedback connections.
Cortical activity is characterized by oscillatory dynamics at many scales. Inspired by experimental findings, and to elucidate potential functional roles of such oscillatory dynamics, we study recurrent networks composed of damped harmonic oscillator (DHO) nodes. Training such harmonic oscillator networks (HORNs) on standard pattern recognition tasks allowed us to give functional interpretations of oscillatory dynamics in cortical networks [1]. In HORNs, each DHO node has a state variable x that represents the aggregate activity of a spatially confined, recurrently connected E-I circuit such as a cortical column. The other DHO parameters are the excitability α, the damping factor γ, and the natural frequency ω. Inspired by balanced-state networks, HORN nodes are additively coupled on the velocity term. This mesoscale approach to modeling cortical activity creates two spatial scales: a local scale (dynamics abstracted into the activity of a single DHO node) and a global scale (capturing connections between DHO nodes, i.e., local circuits).

We model properties of the dynamics at the local circuit level by introducing feedback connections to the DHO nodes. In particular, feedback parameters v and w control the amount of feedback on each oscillator's amplitude and velocity, respectively. In this model, the time-varying total input I(t) to each node is given by the sum of the external input and the recurrent input, superposed with the feedback input (see figure, left). Here, we analyze the dynamics of an isolated single DHO node subject to such feedback connections under different external inputs. In particular, we show how the feedback parameters v and w influence the frequency-dependent gain curve G of a node (the ratio of the node's stationary amplitude to the input amplitude; G > 1 indicates resonance; see figure, right, a,b). This allows DHO nodes to tune their gain curves and act as feature detectors by adjusting their feedback parameters v and w when learning a stimulus classification task (figure, right, c,d).

Additionally, we conducted a bifurcation analysis of nodal dynamics within the parameter space (v, w), revealing a Z2-symmetric Takens-Bogdanov bifurcation (figure, center). This bifurcation allows the emergence of bistability, limit cycles, and their interplay within a global bifurcation structure. It endows nodes with dynamical mechanisms to increase the complexity of their responses to inputs, thereby improving the discernibility of stimuli with differing temporal patterns. Moreover, we observe features of the nodes' dynamics that resemble some of those observed in biological neuronal systems, e.g., up-down states, bursting activity, and periodic activity. Overall, our analysis shows how feedback connections enable DHO nodes to express dynamics beyond those of a standard damped harmonic oscillator without feedback, how this allows units to shift their gain curves and change their feature tuning in HORNs, and how feedback mechanisms control node dynamics. Such enriched dynamics strengthen the ability of nodes to separate temporally organized inputs; in networks, nodes with heterogeneous feedback properties can even combine different dynamics, improving the overall computational performance of a HORN.
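The gain-curve computation can be sketched numerically under one plausible reading of the node equations, x'' = -γx' - ω₀²x + αI(t) with I(t) = I_ext(t) + v·x + w·x'; the exact form used in [1] may differ, and all parameter values below are illustrative.

# Numerical gain curve of a single DHO node with amplitude/velocity feedback.
import numpy as np

def gain(f_drive, alpha=1.0, gamma=0.5, omega0=2*np.pi, v=0.0, w=0.0,
         dt=1e-3, T=60.0):
    x, xd = 0.0, 0.0
    amp = 0.0
    for t in np.arange(0.0, T, dt):
        I = np.sin(2*np.pi*f_drive*t) + v*x + w*xd   # external + feedback input
        xdd = -gamma*xd - omega0**2 * x + alpha*I
        xd += dt*xdd
        x += dt*xd                       # semi-implicit Euler step
        if t > T/2:                      # track steady-state amplitude only
            amp = max(amp, abs(x))
    return amp                           # drive amplitude is 1, so gain = amp

for f in (0.5, 1.0, 2.0):                # natural frequency is 1 Hz here
    print(f"f={f} Hz: G={gain(f):.2f}  G_fb={gain(f, v=5.0, w=0.2):.2f}")

# Positive v lowers the effective stiffness (shifting the resonance), while
# positive w reduces the effective damping (sharpening it), illustrating how
# (v, w) reshape a node's frequency tuning.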
References
1. Effenberger et al. The functional role of oscillatory dynamics in neocortical circuits: a computational perspective. bioRxiv (2023).


Sunday July 21, 2024 4:20pm - 6:20pm PDT
TBA

4:20pm PDT

P026 Role of information flow dynamics (top-down or bottom-up) in the gamma frequency band (≈40 Hz) of the EEG in cognitive functions and consciousness
Cognitive processes and consciousness rely on extensive thalamocortical and corticocortical recurrent interactions at a large scale. It has been hypothesized that oscillations within the gamma frequency band (30 to 45 Hz) of the electroencephalogram (EEG) are generated as a result of these interactions and play a role in cognitive functions. These oscillations have been implicated in the integration of spatially separated yet temporally correlated neural events, leading to a cohesive perceptual experience. It is widely recognized that top-down processing refers to the brain's ability to utilize our expectations, attentional focus, and other cognitive factors to dynamically influence bottom-up sensory processing. However, the precise directionality of the information flow encoded by gamma band oscillations remains unknown. Therefore, the primary objective of our study is to investigate the specific information flow patterns within the gamma band during both wakefulness and sleep.
To achieve this, five cats were chronically prepared for polysomnographic recordings, with electrodes placed in various cortical regions. To investigate the directionality of gamma-band information flow during wakefulness and sleep, we quantified the phase shifts of the amplitude envelopes of the filtered gamma oscillations and employed Granger causality analysis, a statistical test that determines whether one time series can predict another.
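A minimal sketch of this pipeline is shown below: band-pass two channels in the gamma band, extract amplitude envelopes via the Hilbert transform, and test directionality with Granger causality. The toy signals, window lengths, and model order are placeholders, not the study's recordings or settings; the sketch follows the statsmodels convention that the second column is tested as a Granger cause of the first.

# Sketch: Granger causality between gamma amplitude envelopes (illustrative).
import numpy as np
from scipy.signal import butter, filtfilt, hilbert
from statsmodels.tsa.stattools import grangercausalitytests

fs = 512.0
t = np.arange(0, 500, 1/fs)                        # one 500 s analysis window
rng = np.random.default_rng(4)
pfdl = rng.standard_normal(t.size)                 # stand-in for Pfdl
pp = np.roll(pfdl, 26) + 0.5*rng.standard_normal(t.size)  # Pp lags Pfdl ~50 ms

b, a = butter(4, [30/(fs/2), 45/(fs/2)], btype='band')
env = lambda x: np.abs(hilbert(filtfilt(b, a, x)))  # gamma amplitude envelope

data = np.column_stack([env(pp), env(pfdl)])        # test Pfdl -> Pp
res = grangercausalitytests(data, maxlag=10, verbose=False)
print(res[10][0]['ssr_ftest'])                      # F statistic and p-value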
In the baseline condition, analyzing 500-second windows, the results revealed that during wakefulness the primary direction of information flow in the gamma band was from the dorsolateral prefrontal cortex (Pfdl) to the posterior parietal cortex (Pp), as well as from Pfdl to the primary somatosensory cortex (S1), primary visual cortex (V1), and primary auditory cortex (A1). Additionally, there was a predominance of directionality from Pp to the primary cortices (S1, A1, V1). This indicates a significant influence of top-down information processing. However, this top-down flow of information was not observed during either NREM or REM sleep.
Furthermore, when investigating late gamma oscillations induced by click stimuli (analyzed within 1-second windows starting 0.5 seconds after the stimulus), we found a predominant bottom-up directionality from Pp to Pfdl, as well as from A1 to Pfdl. In contrast, late gamma oscillations induced by more complex stimuli, such as 0.2-second variable sounds, demonstrated a predominant top-down directionality from Pfdl to the Pp, A1, and V1 cortices. Notably, these specific directionalities were not observed during sleep.
The data indicate that during wakefulness, different patterns of information flow emerge depending on the nature of the stimuli. Specifically, bottom-up processing was found to predominate in the case of simple repetitive sound stimuli, while top-down processing prevailed for complex and variable sounds, as well as in the baseline condition (without stimuli). In addition, no specific directionality of information flow was observed during NREM and REM sleep, suggesting a different mode of cognitive processing during sleep.

This research was supported by CSIC-I+D grupos 2022 and CSIC-I+D-2020-393.


Sunday July 21, 2024 4:20pm - 6:20pm PDT
TBA

4:20pm PDT

P027 A biologically constrained model displaying gamma-to-theta cross-frequency directionality in the CA3 hippocampal circuit
In the hippocampus, coordinated transmembrane dendritic currents give rise to oscillating local field potentials (LFPs) in the theta (4-12 Hz) and gamma (30-100 Hz) frequency bands. These rhythmic patterns are tightly coupled, particularly during learning, such that the amplitude of the faster gamma oscillations is synchronized to the phase of the slower theta activity. Contrary to the prevailing intuition that the theta phase dictates the gamma amplitude, estimation of the cross-frequency directionality (CFD) index applied to hippocampal electrophysiological recordings has revealed that gamma amplitude fixes theta phase (Lopez-Madrona et al., 2020).
To investigate this mechanism, we utilized a simplified yet biologically plausible circuit model of the hippocampal CA3 area. This model includes dendritic compartments in pyramidal cells (adapted from Neymotin et al., 2011), an inhibitory interneuron population (basket cells), and an external theta driver (input from layer II of the medial entorhinal cortex). The theta driver induces gamma activity by exciting the interneuron population, which in turn inhibits pyramidal cells in theta cycles. The theta driver also excites the distal dendrites of the pyramidal cells, thus creating an excitatory theta activity that propagates through the dendrite to the soma. When we impose different synaptic delays from the driver to the interneurons and to the pyramidal cells, we find both positive and negative CFD, depending on whether the pyramidal cells or the interneurons are excited first. Interestingly, in the most biologically plausible case, where inputs from a common driver arrive simultaneously at both populations, the CFD obtained from the model is negative (see Fig. 1), in good agreement with the experimental data. Negative CFD in the model is the consequence of fast feedforward gamma inhibition reaching the pyramidal cell soma before the peak of the excitatory theta activity.
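One common way to estimate cross-frequency directionality is via the phase-slope index (PSI) between the theta-band signal and the gamma amplitude envelope, in the general spirit of Jiang et al. (2015); the sketch below assumes that formulation, and the band edges, segment sizes, and toy data are placeholders rather than the study's settings.

# Sketch: phase-slope index between theta and the gamma envelope (illustrative).
import numpy as np
from scipy.signal import butter, filtfilt, hilbert, csd, welch

fs = 1000.0
rng = np.random.default_rng(5)
lfp = rng.standard_normal(int(200*fs))             # toy LFP, 200 s

bp = lambda x, lo, hi: filtfilt(*butter(4, [lo/(fs/2), hi/(fs/2)], btype='band'), x)
theta = bp(lfp, 4, 12)
gamma_env = np.abs(hilbert(bp(lfp, 30, 100)))

f, Sxy = csd(theta, gamma_env, fs=fs, nperseg=4096)
_, Sxx = welch(theta, fs=fs, nperseg=4096)
_, Syy = welch(gamma_env, fs=fs, nperseg=4096)
C = Sxy / np.sqrt(Sxx * Syy)                       # complex coherency

idx = np.flatnonzero((f >= 4) & (f <= 12))         # PSI over the theta band
psi = np.imag(np.sum(np.conj(C[idx[:-1]]) * C[idx[1:]]))
print(f"PSI = {psi:+.4f}")                         # negative: gamma envelope leads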
In summary, we introduced a computational model to investigate theta-gamma CFD. As the model is mechanistic, experimental manipulations could validate its predictions: for instance, interfering with the entorhinal cortex drive to basket cells should reduce theta-gamma coupling and shift the interaction to a theta-driven gamma mode (positive CFD).


Sunday July 21, 2024 4:20pm - 6:20pm PDT
TBA

4:20pm PDT

P028 State Transitions of Neural Populations Underlying the Alpha and Gamma Rhythms in EEG/LFP
EEG rhythms in the alpha (8-12 Hz) and gamma (>30 Hz) frequency ranges exhibit distinct spatiotemporal patterns of brain states during tasks or at rest. These distinct patterns may partly result from the various types of GABAergic interneurons that differentially govern cortical function. In previous work, we showed a bistable switch between neural states built on mutually inhibitory connectivity motifs among parvalbumin (PV), somatostatin (SOM), and vasoactive intestinal polypeptide (VIP) interneurons, where the two states are characterized by low- and high-frequency oscillations, respectively [1]. In this study, we investigate the neural dynamics of a cortical column model that encompasses excitatory (E) and inhibitory (PV, SOM, and VIP) populations across cortical layers (L2/3, L4, and L5/6), with a focus on biologically realistic configurations of inter-layer connectivity as well as cell-type-specific sigmoid functions. We analyze the power spectrum of the noise-driven fluctuations in the simulated EEG signals under combinations of thalamic, feedback, and modulatory input configurations. The column model exhibits the switch between PV-dominant and SOM-dominant states governed by the VIP interneurons, which aligns with our previous finding [1]. The preliminary simulation results also show that the thalamic input (targeting E and PV neurons) and the modulatory input (targeting VIP neurons) push the fluctuations into the gamma range, while the feedback input (targeting E and SOM neurons) brings the fluctuations back into the alpha range, resembling the EEG observation of higher alpha power in the visual cortex during eyes-closed than during eyes-open resting states. We identify factors that give rise to the observed EEG phenomena by systematically examining model configurations across the parameter space. The nuanced interplay of these factors provides valuable insights for future investigations into the underlying dynamic columnar states in EEG/LFP studies.
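The state switch can be illustrated with a schematic rate model of a single layer's E-PV-SOM-VIP motif (not the full multilayer column): dr/dt = (-r + relu(W r + I)) / tau. The sign pattern follows the standard motif (VIP inhibits SOM, SOM and PV inhibit each other and E, E excites everyone), but the weights and inputs below are illustrative assumptions, not the column model's parameters.

# Schematic E-PV-SOM-VIP rate model: VIP drive vs. SOM drive select the state.
import numpy as np

labels = ['E', 'PV', 'SOM', 'VIP']
W = np.array([[0.8, -1.0, -0.8,  0.0],    # onto E
              [1.0, -0.8, -0.6,  0.0],    # onto PV
              [0.8, -0.4,  0.0, -1.0],    # onto SOM (inhibited by VIP)
              [0.6,  0.0, -0.3,  0.0]])   # onto VIP
tau = np.array([0.02, 0.01, 0.02, 0.02])  # s

def run(I_ext, T=2.0, dt=1e-4):
    r = np.zeros(4)
    for _ in range(int(T / dt)):
        r += dt / tau * (-r + np.maximum(W @ r + I_ext, 0.0))
    return r

# Modulatory drive to VIP silences SOM and hands control to PV (gamma-like
# state); drive to E and SOM does the reverse (alpha-like state).
for name, I in [('VIP-driven', np.array([1.0, 0.5, 0.5, 2.0])),
                ('SOM-driven', np.array([1.5, 0.5, 1.2, 0.0]))]:
    print(name, dict(zip(labels, np.round(run(I), 2))))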
References: 
[1] Hahn G, Kumar A, Schmidt H, et al. Rate and oscillatory switching dynamics of a multilayer visual microcircuit model. eLife. 2022,11:e77594.



Sunday July 21, 2024 4:20pm - 6:20pm PDT
TBA

4:20pm PDT

P029 A metric to evaluate the spatial tuning of hippocampal place fields that is reliable at low firing rates and short observation times
Studying place cells over short timescales can give insight into short-term changes to place field tuning, including the encoding of new place fields and experience-dependent changes to existing place fields. When analysing changes to place field shape on short timescales, robust inclusion criteria are necessary to ensure that results are not dominated by spatially uncorrelated firing. Skaggs' spatial information score [1] and mutual-information-based criteria are commonly employed to identify cells with spatial tuning, but tend to be unreliable when applied to low numbers of spikes [2].

The objective of this study is to create a metric to identify and validate the presence of spatial tuning in potential place cells, that remains reliable when relatively few spikes are recorded (either from low firing rates, or short observation times). The metric should not be sensitive to place field size or small lap to lap inconsistencies in place field shape or firing rate, but rather only to whether a place field is present. The metric should yield scores that are interpretable, and that are comparable between different cells, and different recording lengths.

The proposed method compares the firing rate on one pass near a location (within 10 cm) to the average firing rate from other visits to that location. Rather than collapsing all laps together, each distinct visit to a location is treated as a separate point in a sequence of places visited. The question of whether the cell is spatially tuned is then reformulated as a hypothesis test of conditional independence between the firing rate on any given pass and the firing rate averaged over all other passes; the alternative hypothesis is that the firing rate on a single pass is positively correlated with the firing-rate average at that location. Independence is tested using Kendall's tau statistic, which considers the proportion of pairs of points in the position sequence that have concordant ranks. Pairwise comparisons are made only between visits to locations inside the place field (average firing rate > 75% of peak) and outside of the place field (average firing rate < 25% of peak). This allows the metric to judge only the degree to which a cell can differentiate between 'in' and 'out' of the place field. The metric yields a score between -1 and 1, where positive scores represent a degree of spatial tuning.
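A compact sketch of this metric follows, using a leave-one-out average per visited location and Kendall's tau from SciPy. The binning, thresholds applied via the mean rate map, and the synthetic data are illustrative assumptions, not the authors' implementation.

# Sketch: leave-one-out Kendall-tau spatial tuning score (illustrative).
import numpy as np
from scipy.stats import kendalltau

def tuning_score(rates, bins):
    """rates[i]: firing rate on pass i; bins[i]: spatial bin of pass i."""
    rates, bins = np.asarray(rates, float), np.asarray(bins)
    mean_map = np.zeros(bins.max() + 1)
    for b in np.unique(bins):
        mean_map[b] = rates[bins == b].mean()
    loo = np.empty_like(rates)                     # leave-one-out location averages
    for i in range(len(rates)):
        others = (bins == bins[i]); others[i] = False
        loo[i] = rates[others].mean() if others.any() else np.nan
    peak = np.nanmax(mean_map)
    keep = (mean_map[bins] > 0.75 * peak) | (mean_map[bins] < 0.25 * peak)
    keep &= ~np.isnan(loo)                         # only clearly in/out-field passes
    return kendalltau(rates[keep], loo[keep])

rng = np.random.default_rng(6)
bins = rng.integers(0, 20, 400)                    # 400 passes over 20 locations
field = np.exp(-(np.arange(20) - 10)**2 / 8.0)     # synthetic place field
rates = rng.poisson(5 * field[bins]).astype(float) # Poisson spiking per pass
print(tuning_score(rates, bins))                   # positive tau -> spatially tuned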

Our method was validated using synthetic place field recordings, where spikes are generated using an inhomogeneous Poisson process on tuning curve functions of position. To validate that objectives are met, the tuning curves were varied in stability of shape, firing rate stability, place field size, average firing rate, and time observed. On synthetic place fields, the metric can identify the presence of a place field within approximately 5 laps and scores remain consistent with additional laps. The scores are invariant to place field size (up to 70% of track) and consistent across place fields with the same tuning characteristics but different firing rates.

References
[1] Skaggs W, McNaughton B, Gothard K, Markus E. An Information-Theoretic Approach to Deciphering the Hippocampal Code. Advances in Neural Information Processing Systems (1992).
[2] Souza, B. C., Pavão, R., Belchior, H., & Tort, A. B. L. (2018). On Information Metrics for Spatial Coding. Neuroscience, 375, 62–73.


Sunday July 21, 2024 4:20pm - 6:20pm PDT
TBA

4:20pm PDT

P030 Biophysical modeling to inform performance in motor imagery-based Brain-Computer Interfaces
Brain-Computer Interfaces (BCIs), by translating brain activity into commands for communication or control, are a promising tool for patients who suffer from neuromuscular pathologies or lesions. Nevertheless, intent detection fails in 15-30% of BCI users, notably due to a poor understanding of the mechanisms underlying BCI performance. Here, we aim to use a biophysically interpretable, analytical model to identify biophysical changes occurring while controlling a BCI. We hypothesized that model parameters of excitatory and inhibitory neuronal populations would differ between the tasks performed in a BCI setting.
We used source-reconstructed magnetoencephalography signals in a BCI framework where 19 subjects were instructed to modulate their brain activity to control the position of a cursor displayed on a screen by either performing a motor imagery (MI) task or remaining at rest [1]. We divided the cohort into two subgroups, G1 and G2, comprising subjects who performed better or worse than the average, respectively.
We employed a linearized neural mass model to infer four biophysically realistic parameters from the estimation of the power spectra: two neural gains capturing overall synaptic strength between excitatory and inhibitory neuronal populations (g_ei) and among inhibitory neuronal populations (g_ii), time constant of the excitatory neuronal population (tau_e), and time constant of the inhibitory neuronal population (tau_i) [2]. We inferred the optimal model parameters to match the shape of the modeled power spectra with the empirical power spectra for each subject during both rest and MI. We then compared the model parameters between rest and MI.
To check that spectral power in the alpha frequency band carried relevant information, we performed statistical tests between the Rest and MI conditions on data from G1 and G2. Whereas no significant difference between Rest and MI was found in G2, significant condition effects were observed in G1 in associative and sensorimotor regions (Fig 1A).
Then, we studied to which extent the excitatory/inhibitory neuronal population parameters could differ depending on the performed task. The neural gain g_ei shows a significant condition effect in regions involved in visual motion processing in G1 and in regions involved in the default mode network in G2. The neural gain g_ii significantly differs between Rest and MI in regions involved in decision-making processes in G1 and in areas involved in attention processes in G2. The time constant tau_e shows no significant condition effect in G1 whereas in G2, such an effect was observed in areas involved in visual recognition. 
Lastly, the time constant tau_i shows a significant condition effect in regions involved in motor imagery performance and in decision making processes in G1 (Fig 1B) and in areas involved in attention processes in G2. 
These results indicate changes in the excitatory and inhibitory neuronal populations between conditions, with an alteration of inhibitory neuronal population activity over sensorimotor areas in the most responsive subjects only. These could potentially be used as biophysically realistic markers of BCI performance.

[1] Corsi, M-C, et al. Functional disconnection of associative cortical areas predicts performance during BCI training.


Sunday July 21, 2024 4:20pm - 6:20pm PDT
TBA

4:20pm PDT

P031 Investigating the Role of Astrocytes in Neural Networks Activity
Recent advances have challenged the traditional view of glial cells as 'supporting cells,' instead suggesting they play a more active role in neuronal activity. Among the diversity of glial cell subtypes, astrocytes are the most abundant in the brain, and several studies have shown astrocyte dysfunction in pathological states such as Alzheimer's disease [1]. We sought to identify interactions between neurons and astrocytes and how they impact brain activity in different scenarios, thereby elucidating potential targets and improving the utilization of conventional artificial intelligence tools.
Our computational network model is composed of randomly connected excitatory and inhibitory conductance-based leaky integrate-and-fire neurons, together with astrocyte models that follow closed-loop gliotransmission [2,3]. Based on the firing-rate power spectrum, we discern distinct patterns of astrocyte activity in networks exhibiting asynchronous versus synchronous activity. We systematically varied the number of astrocytes, the synaptic parameters corresponding to maximum conductance, and the network stimulation, chosen to be either constant or a frequency-dependent sinusoidal function.
We observed that astrocytes have a more profound impact in cases of synchronous activity: while their inclusion does not significantly change the firing rate, the CV and the response to oscillatory stimulation are altered. Astrocytes can amplify firing-rate resonance when synchronous activity is present. Upon a systematic investigation of pairwise coherence, we found that the spike trains of pairs of individual neurons tend to work more coherently in the presence of astrocytes in these synchronized states. The coherence improves with the number of astrocytes and creates a network that is more susceptible to external stimulation.
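A sketch of the pairwise-coherence analysis is shown below: bin each neuron's spike train, compute magnitude-squared coherence for every pair with Welch's method, and average over pairs. The toy spike trains, bin size, and segment length are placeholders for the values used in the study.

# Sketch: mean pairwise spike-train coherence (illustrative data and settings).
import numpy as np
from scipy.signal import coherence
from itertools import combinations

fs, T = 1000.0, 60.0                               # 1 ms bins, 60 s
rng = np.random.default_rng(7)
drive = 0.5 * (1 + np.sin(2*np.pi*8*np.arange(int(T*fs))/fs))  # shared 8 Hz drive
spikes = (rng.random((20, int(T*fs))) < 0.02 * drive).astype(float)

pair_coh = []
for i, j in combinations(range(20), 2):
    f, Cxy = coherence(spikes[i], spikes[j], fs=fs, nperseg=2048)
    pair_coh.append(Cxy)
mean_coh = np.mean(pair_coh, axis=0)
print(f"peak mean coherence {mean_coh.max():.2f} at {f[mean_coh.argmax()]:.1f} Hz")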
Furthermore, we explored the use of artificial neural networks to recognize these astrocytic signatures in the network activity. While the presence of astrocytes is clearly identified in synchronous activity due to the changes they promote, the algorithm has difficulty identifying astrocytes in networks with asynchronous activity. This study lays the foundation for the identification of abnormal glial activity and has the potential to facilitate early recognition of pathological network states.
[1] Cai, Z., Wan, C.-Q., Liu, Z.: Astrocyte and Alzheimer’s disease. Journal of Neurology 264, 2068–2074 (2017)
[2] Stimberg, M., Goodman, D.F., Brette, R., Pittà, M.D.: Modeling neuron–glia interactions with the brian 2 simulator. Computational Glioscience, 471–505 (2019)
[3] De Pittà, M., Brunel, N.: Multiple forms of working memory emerge from synapse-astrocyte interactions in a neuron-glia network model. Proceedings of the National Academy of Sciences 119, e2207912119 (2022)


Sunday July 21, 2024 4:20pm - 6:20pm PDT
TBA

4:20pm PDT

P032 Robust bistability in spinal pain processing neurons
At the spinal cord level, dorsal horn projection neurons are a key relay for pain processing: in its early stages, afferent sensory inputs are encoded by a complex network of interneurons and transmitted to the brain by projection neurons. Studies have shown that these projection neurons display different types of firing patterns, associated with different types of signal transmission; two of them are tonic firing and plateau potentials [1]. For the same input pulse of current, tonic neurons spike during the pulse only, while plateau neurons can spike during and after the pulse. Therefore, an important feature, called bistability, emerges in neuronal excitability after a switch from tonic to plateau. By nature, a bistable neuron exhibits either resting or spiking for the same input current value, depending on the initial conditions and the input signal history. However, how such robust bistability arises at the cellular level is unclear. Conductance-based modeling revealed that bistability arises with calcium channels, but at unrealistically low resting equilibrium potentials [2]. These resting potentials can be raised by a complementary increase in the conductance of potassium channels, but at the cost of reducing, or even destroying, bistability [3].
In this work, we show that robust bistability can be achieved by combining calcium channels with inward-rectifier potassium (Kir) channels (see Fig. 1). For this purpose, we studied the behavior of a conductance-based model including these ion channels. At steady state, Kir channels singularly combine a strong inward current with a small outward current. In silico blocking of one of these components at a time reveals two different contributions of the Kir current to resting equilibria and bistability. Additionally, we show that the Kir-channel and calcium-channel current-voltage curves both display nearly co-localized regions of negative differential conductance (positive feedback). This intrinsic property is responsible for maintaining, and enhancing, bistability. Indeed, after replacing Kir channels with conventionally shaped potassium (KM) channels, which open on the same timescale but lack a negative-slope region, bistability is strongly weakened. Combining calcium channels with KM channels also switched the excitability from plateau-type to tonic-type, with low levels of bistability. This comparison allowed us to contrast the distortions of the bifurcation diagrams corresponding to either plateau-type or tonic-type excitability. With sets of calcium and Kir or KM conductances leading to comparable levels of bistability, we show that the dynamical mechanisms and the types of bifurcations involved are very different. Finally, we tested the robustness to noise of plateau-type and tonic-type neurons with comparable levels of bistability: spiking in tonic-type bistable neurons is more robust than resting within the bistability window, whereas resting is more robust than spiking in plateau-type bistable neurons.
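The negative-differential-conductance argument can be illustrated with a generic steady-state Kir current, I = g·m_inf(V)·(V - E_K), where the gating function m_inf closes steeply with depolarisation, producing a voltage range where dI/dV < 0. The gating parameters below are textbook-style values for illustration, not those of the model.

# Illustration: negative-slope region of a steady-state Kir I-V curve.
import numpy as np

E_K, g = -90.0, 1.0                                # mV, arbitrary conductance units
m_inf = lambda V: 1.0 / (1.0 + np.exp((V + 80.0) / 12.0))  # closes on depolarisation

V = np.linspace(-120, -20, 500)
I = g * m_inf(V) * (V - E_K)
dIdV = np.gradient(I, V)

neg = V[dIdV < 0]
print(f"negative-slope region: {neg.min():.0f} to {neg.max():.0f} mV")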
This work highlights that the switch from tonic firing to plateau potentials must rely on synergistic increases in both calcium and Kir conductances to achieve robust bistability. As different types of excitability respond differently to noise, channel expression must balance the sign of the differential conductances depending on the functional state needed at the membrane level.


Sunday July 21, 2024 4:20pm - 6:20pm PDT
TBA

4:20pm PDT

P033 Implementation and Validation of a Balanced Excitatory-Inhibitory Network in Loihi
We implement a balanced excitatory/inhibitory (EI) network in Intel's neuromorphic SDK Lava and on the Loihi hardware. The original network was introduced as a framework to study the dynamics of biological neural networks, and a version has been used by researchers as a benchmark and validation for simulators in various software and hardware environments [1]. The implementation here has the same LIF neurons, but exponential-decay synapses, which accommodate current software and hardware limitations of Lava/Loihi. We implement the same network in NEST for validation.
Loihi is a highly capable and scalable neuromorphic hardware system: in principle, a single chip can simulate up to 1M neurons with up to 120M synapses, and we have access to boards with up to 8 chips. Due to current constraints of Intel's neuromorphic compilers, we could only build EI networks of moderate size, ~15K neurons and 20M synapses, some of which still required the use of up to 6 chips. Nevertheless, such networks are comparable in size to the single-HPC-compute-node benchmark models used in [1], which allows for direct performance comparisons to HPC implementations of the same network. We anticipate that ongoing improvements in the Loihi compiler will allow us to simulate larger networks and report those results at the conference. In this work we only report runtime comparisons between the different architectures, as the Lava compiler is not yet optimized to generate competitive build times. We also briefly note the respective power consumptions.
The Loihi implementation was designed as in [2], by scaling and shifting the LIF models and their parameters to fit within Loihi's integer state arithmetic and limited-precision parameter storage. As noted in [2], the individual neurons match their standard CPU implementations well. For the network implementations here, we observe visually very similar firing patterns in the NEST and neuromorphic implementations. Network rate and correlation comparisons yield similar numeric results, although not at the level of agreement seen between standard simulations on varied architectures.
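A minimal sketch of this kind of rescaling, under the assumption of signed 8-bit weights and a shared scale factor (the actual bit widths and procedure are those of [2] and the Loihi compiler, not reproduced here):

```python
import numpy as np

# Sketch: map floating-point LIF weights and threshold to integers while
# preserving their ratio, which is what determines firing. The 8-bit width
# and the common scale factor are assumptions for illustration only.
def to_fixed_point(v_thresh, weights, w_bits=8):
    q_max = 2 ** (w_bits - 1) - 1              # largest signed integer weight
    scale = q_max / np.max(np.abs(weights))    # one scale for weights and threshold
    w_int = np.round(weights * scale).astype(int)
    v_int = int(round(v_thresh * scale))
    return v_int, w_int

v_int, w_int = to_fixed_point(v_thresh=15.0, weights=np.array([0.10, -0.45, 0.30]))
print(v_int, w_int)                            # e.g. 4233 [ 28 -127  85]
```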
Within the current limitations, the NEST implementation of the EI network runs slower than real time, at ~20 s per second of biological time. It scales weakly with network size, from ~10 s/s for 10K neurons to ~20 s/s for 15K neurons, as in [1]. The Loihi simulation performs much faster, as expected for specialized hardware. It exhibits similar weak scaling, running from ~20 ms/s for 1,000 units to ~40 ms/s for 15K units, hence being about 500 times faster than a standard CPU implementation and 25-50 times faster than biological real time. Power consumption was also much lower, at an approximately constant 2 W/chip, with up to 6 chips used for these simulations. We do not have exact measurements of CPU power consumption, but it is approximately 100-200 W/chip for modern server-class CPUs, with typically 2 chips per compute node.
In conclusion, Loihi shows promise as an accelerator for biological neural network simulations. However, more studies are needed to fully qualify the benefits and trade-offs of this platform.

[1] J. Jordan et al., Front. Neuroinf., 12(2), 2018
[2] S. Dey and A. Dimitrov, Front. Neurosci., 16, 883360, 2022


Sunday July 21, 2024 4:20pm - 6:20pm PDT
TBA

4:20pm PDT

P034 Alpha oscillations as the basis for erasing in a mechanistic model of working memory
Working memory is a fundamental cognitive faculty characterized by the short-term storage and manipulation of information without the need for lasting physical or biochemical changes. Computational neuroscientists have employed two distinct models to explain this phenomenon: attractor networks, whose stable states could account for the persistent activity observed in stimulated neurons and networks; and single-cell transient excitability coupled with external oscillations, such as theta (~8 Hz) and gamma (~40 Hz) oscillations, which could enable multiple-item working memory through the reactivation of tagged cells. In the present work, we adopt the latter perspective. Beyond theta and gamma oscillations, alpha oscillations (~10 Hz) have been linked to declines in working memory performance, yet without clear mechanistic explanations. A recent approach suggests that neural oscillations orchestrate a series of critical working memory operations, including the initial loading of information into the network, its maintenance over time, and subsequent erasure [1]. In our recent study, we proposed a multi-item working memory model based on theta-gamma coupled oscillations, and we integrated alpha oscillations as a mechanistic component for erasure [2]. We tested the robustness of this mechanism by examining the probability of successfully executing the erase operation. The model network consists of spiking neurons distributed across four modules, driven by an external theta oscillation. Excitatory neurons display a membrane phenomenon known as the afterdepolarization (ADP) current, and when the peaks of this internally generated depolarizing current align temporally with those of the ongoing theta rhythm, the result is a cyclic reactivation of the spike patterns. Inhibitory neurons provide feedback inhibition and create the gamma oscillations in the model. Here, we present the results of a computational analysis of the viability of alpha oscillations as the basis for an erasure operation. Our findings indicate that the success of this operation is not contingent upon the onset of alpha or its amplitude relative to theta. Rather, it hinges on its frequency relative to theta. Consequently, we conclude that the mechanism by which alpha oscillations interfere with theta oscillations, generating a beat pattern that induces a temporary pause in the oscillation coinciding with the ADP peak, could be a robust mechanism for halting the cyclic reactivation of spike patterns and, thus, erasing memory.
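A minimal numerical illustration of the beat mechanism described above; the frequencies and amplitudes are illustrative, not the model's fitted values.

```python
import numpy as np

# Superposing a ~8 Hz theta drive and a ~10 Hz alpha drive produces a beat
# at |10 - 8| = 2 Hz; around each beat minimum the summed drive is weak for
# a few theta cycles, which is when ADP-timed reactivation can fail.
t = np.arange(0.0, 2.0, 1e-3)                  # 2 s sampled at 1 kHz
theta = np.sin(2 * np.pi * 8.0 * t)
alpha = 0.8 * np.sin(2 * np.pi * 10.0 * t)     # amplitude is illustrative
drive = theta + alpha
print("beat period:", 1.0 / abs(10.0 - 8.0), "s")  # 0.5 s between pauses
```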




Acknowledgements
G.S. and M.I. acknowledge funding from the Brazilian agencies CAPES (88887.583995/2020-00) and CNPq (311497/2021-7). A.V. received funding from EPSRC (EP/T02450X/1).


References
1. Dipoppa, M.; Gutkin, B. S. Flexible frequency control of cortical oscillations enables computations required for working memory. Proceedings of the National Academy of Sciences, v. 110, n. 31, p. 12828-12833, 2013.
2. Soroka, G.; Idiart, M.; Villavicencio, A. Mechanistic role of alpha oscillations in a computational model of working memory. PLOS ONE, v. 19, n. 2, p. 1-21, 2024.


Sunday July 21, 2024 4:20pm - 6:20pm PDT
TBA

4:20pm PDT

P035 Using linear-look ahead modules of grid cells to navigate agents through reinforcement learning
Machine learning (ML) has been heavily influenced by biology and neuroscience. Visual learning models like the convolutional neural network are inspired by the biological circuitry of the visual system and are still at the core of image AI tools thirty years later. Research on spatial navigation and reasoning using artificial neural networks (ANNs) is ongoing, but there are still few generalizable architectures for navigation tasks. The mammalian hippocampus and entorhinal cortex contain well-researched cell types known to contribute to spatial representations and navigation [1]. To improve spatial navigation capabilities in ANNs, we aimed to engineer these known cell types in spiking neuronal networks (SNNs) to derive practical and generalizable architectures for neural navigation.

This study uses the BindsNET platform [2] to model two interconnected networks: 1) a spiking neural network (SNN) representing grid cells (GCs), and 2) an artificial neural network (ANN) for high-level navigation. Each GC is simulated as an integrate-and-fire neuron with random excitatory synaptic connections to other GCs. The SNN is connected to the ANN, which learns to predict safe paths in a hazardous environment represented by a 2D map with a goal and obstacles. The ANN processes environmental data to determine heading, curvature, and distance for navigation. These data are converted into a spike train and fed into the GC module to guide the agent along the predicted path. The training algorithm comprises two stages: 1) unsupervised training within the GC module using spike-timing-dependent plasticity (STDP) to adjust synapses, and 2) reinforcement learning (RL) at the SNN/ANN interface, in which reward signals indicating the agent's proximity to the goal are sent back to the ANN [3,4]. Based on the synaptic structure of the GCs after STDP, we determined that GCs could transform simpler navigation concepts (including heading direction, curvature, and travel distance) into sequences of physical movements, enabling the integrated model to produce safe paths towards the goal [1].
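The GC-module plasticity can be illustrated with a generic pair-based STDP rule of the kind BindsNET implements; the function below is a from-scratch sketch with illustrative time constants, not BindsNET's API.

```python
import numpy as np

# Pair-based STDP: pre-before-post potentiates, post-before-pre depresses,
# with exponentially decaying dependence on the spike-time difference.
def stdp_dw(dt_ms, a_plus=0.01, a_minus=0.012, tau_plus=20.0, tau_minus=20.0):
    """Weight change for a spike pair with dt = t_post - t_pre (ms)."""
    if dt_ms >= 0:
        return a_plus * np.exp(-dt_ms / tau_plus)
    return -a_minus * np.exp(dt_ms / tau_minus)

print(stdp_dw(+5.0), stdp_dw(-5.0))   # potentiation vs. depression
```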

We developed a novel hybrid architecture merging details of GC neuroanatomy with established neurobiological learning rules to form an SNN circuit for environment path prediction. Mimicking mammalian navigation, our GC module replicated spiking behavior using a biologically inspired neural network and learning principles. Although our model is a simplified version of entorhinal cortex architecture, it suggests that animals may employ similar computational functions in navigation. The SNN predictions were effectively utilized by a simpler ANN in real-world scenarios. Our future focus is on incorporating additional microcircuit architectural details and mammalian hippocampal and entorhinal cortex encodings to enhance learning for complex spatial navigation and reasoning tasks.

References

[1] Linear look-ahead in conjunctive cells: an entorhinal mechanism for vector-based navigation. Front. Neural Circuits, 2012

[2] BindsNET: A Machine Learning-Oriented Spiking Neural Networks Library in Python. Front. Neuroinform., 2018

[3] Training spiking neuronal networks to perform motor control using reinforcement and evolutionary learning. Front. Comput. Neurosci., 2022

[4] Training a spiking neuronal network model of visual-motor cortex to play a virtual racket-ball game using reinforcement learning. PLoS One, 2022


Sunday July 21, 2024 4:20pm - 6:20pm PDT
TBA

4:20pm PDT

P036 Biology-Inspired Oscillator Networks and the Functional Role of Oscillatory Dynamics in Neocortical Circuits
Biological neuronal networks exhibit hallmark features such as oscillatory dynamics, heterogeneity, modularity, and conduction delays [1]. However, it has remained unclear to what extent these features serve computational purposes [2].
Inspired by physiological findings [3], we simulate recurrent networks (RNNs) of damped harmonic oscillators (DHOs) and evaluate their performance on benchmark pattern recognition tasks. The DHO nodes in our model represent the aggregate activity of an underlying E-I circuit such as a cortical column. By enforcing nodal activity to be oscillatory, we enable such Harmonic Oscillator RNNs (HORNs) (Fig. 1) to code and process information not only in nodal amplitudes but also in their phases. This further allows the networks to systematically exploit dynamical properties such as resonance, synchronization, and desynchronization for computation, properties that are not readily available to non-oscillating architectures. Importantly, while non-oscillating architectures can generate oscillatory activity at the network level, these dynamics and the resulting computational primitives often cannot be readily exploited by gradient-based learning schemes due to their transient nature. Thus, enforcing oscillatory activity in each network node is essential for the systematic evaluation of the functional role of oscillatory dynamics presented here.
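A single HORN node can be sketched as a driven damped harmonic oscillator; the parameters below are illustrative (the actual formulation and coupling are given in [4]).

```python
import numpy as np

# One node: x'' = -2*gamma*x' - omega**2 * x + I(t), integrated with
# semi-implicit Euler. Amplitude and phase of x carry the node's code.
def simulate_dho(drive, omega=2 * np.pi * 10.0, gamma=5.0, dt=1e-3):
    x, v = 0.0, 0.0
    out = np.empty(len(drive))
    for k, I in enumerate(drive):
        v += dt * (-2.0 * gamma * v - omega**2 * x + I)
        x += dt * v
        out[k] = x
    return out

# Resonance: a 10 Hz input excites this node far more than a 3 Hz input.
t = np.arange(0.0, 1.0, 1e-3)
on_res = simulate_dho(np.sin(2 * np.pi * 10.0 * t))
off_res = simulate_dho(np.sin(2 * np.pi * 3.0 * t))
print(np.abs(on_res).max() / np.abs(off_res).max())   # >> 1
```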


We find that backpropagation learning is able to capitalize on the enhanced dynamical repertoire of our coupled oscillator networks. Consequently, HORNs outperform non-oscillating RNN architectures in learning speed, task performance, parameter efficiency, and noise tolerance, in some cases by orders of magnitude. To elucidate possible functional roles of other characteristic features of biological neuronal networks, such as heterogeneity and modularity, we successively endow HORNs with heterogeneous node frequencies, scattered conduction delays, and multilayer architectures [4]. We find that these features further increase task performance without increasing model size. This increase in task performance can be explained by the inductive biases introduced by the oscillatory activity and by the effect that additional features such as heterogeneity have on the fading-memory properties of the networks, bringing their dynamics closer to the critical point [5].


Despite their conceptual simplicity, HORNs are able to reproduce a surprising number of fundamental neurophysiological findings. Our analyses uncover the powerful computational principles realized in such networks (such as feature detection and coding through resonance and stimulus coding by means of waves and their interference patterns) and allow us to give plausible a posteriori functional interpretations for many fundamental anatomical and physiological features of cortical networks such as the dynamics of synchronization and desynchronization, the heterogeneity of preferred oscillation frequencies and conduction delays, the context dependence of receptive fields, and multilayer hierarchies.


Lastly, we show how our model enables biologically plausible unsupervised Hebbian learning.


References
[1] W. Singer, PNAS, 118.33, 2021
[2] T. J. Sejnowski and O. Paulsen, J Neurosci, 26.6, 2006
[3] G. Spyropoulos et al. Nat Comm 13.1, 2022
[4] F. Effenberger et al., bioRxiv, 2022.11.29.518360, 2023
[5] I. Dubinin and F. Effenberger, Neural Netw, 106179, 2024


Sunday July 21, 2024 4:20pm - 6:20pm PDT
TBA

4:20pm PDT

P037 Forgetting impairs reversal learning in mice and artificial neuronal networks
Learning is an essential mechanism in many neuronal systems, and while it has been extensively studied, less emphasis has been put on forgetting. One possible factor in passive forgetting could be the spontaneous remodeling of neuronal circuitry. Here, we investigate the role of forgetting in mice undergoing a reversal learning task, in which we invert the initial stimulus-to-outcome reward associations, and we interpret our findings by modeling the impact of synaptic remodeling on reversal learning with a recurrent neural network (Fig. 1). Previous studies present conflicting views on the role initial learning plays during reversal learning: some suggest that prior learning of a related task can facilitate adaptation to new tasks [1], while others argue it may hinder the process [2]. Our research aims to clarify these ambiguities by combining experimental findings and a computational model.
We trained mice in an operant learning task to discriminate two auditory stimuli in a go/no-go paradigm. After learning, animals underwent a passive forgetting interval of 2 or 16 days without further training. Next, we probed memory retention, followed by a reversal learning task with inverted reinforcement contingencies for both stimuli. Initial learning curves exhibit a sigmoidal shape, signifying a delayed transition from chance-level performance to near perfection after 1000-3000 trials. Reversal learning also follows a sigmoidal trend, with the delay of the rapid learning transition depending on the forgetting interval: animals with a shorter pause showed higher performance during the memory test and faster reversal learning, suggesting retention of the task structure in memory. Mice with less forgetting were able to adapt flexibly to the reversed task more quickly. This finding indicates that greater forgetting of previously learned associations impairs the behavioral adaptability of the brain.
To contextualize these findings, we employed a single-layer recurrent neural network trained to perform a binary classification analogous to the experiment above. The network faithfully replicated the delayed learning during both initial and reversed conditions when its connectivity was constrained to an inhibition-dominated regime. As a simple model of passive forgetting, we emulated increasing forgetting intervals by shuffling increasing fractions of the synaptic weights of the network's recurrent connectivity, thereby mimicking spontaneous synaptic remodeling. In accordance with our experimental observations, the fraction of shuffled synaptic weights exhibited an inverse relationship with the speed of reversal learning.
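A minimal sketch of the forgetting manipulation described above, with the shuffled fraction p standing in for the length of the pause; details of the actual network and training are in the study.

```python
import numpy as np

# Passive forgetting as synaptic remodeling: permute a random fraction p of
# the recurrent weights, preserving the weight distribution while degrading
# the learned structure.
def shuffle_fraction(W, p, seed=0):
    rng = np.random.default_rng(seed)
    W = W.copy()
    flat = W.ravel()
    idx = rng.choice(flat.size, size=int(p * flat.size), replace=False)
    flat[idx] = rng.permutation(flat[idx])
    return W

W = np.random.default_rng(1).normal(size=(100, 100))
W_short_pause = shuffle_fraction(W, p=0.1)    # shorter interval: small p
W_long_pause = shuffle_fraction(W, p=0.5)     # longer interval: larger p
```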
In conclusion, our study sheds light on the interplay of forgetting, synaptic remodeling and reversal learning: Delayed reversal learning after higher rates of forgetting suggests a decline of task-relevant memory structures over time. These findings provide valuable insights into memory dynamics and may have implications for understanding cognitive flexibility and adaptation in both biological and artificial neural systems on a network level.


References
1. Gonzales et al., Reversal learning and forgetting in bird and fish. Science. 1967, 158(3800), 519-521.
2. Bouton et al., Stimulus Generalization, Context Change, and Forgetting. Psychological Bulletin. 1999, 125(2), 171-186.


Sunday July 21, 2024 4:20pm - 6:20pm PDT
TBA

4:20pm PDT

P038 Metabolism and Electrophysiology: Investigating the Crosstalk in a Reconstructed Rat Neocortical Circuit
The brain requires a considerable amount of energy for effective neuronal communication. This energy is primarily sourced from the breakdown of glucose, which is taken up from blood vessels and processed by astrocytes and neurons, leading to the production of ATP. It is the collaborative effort of neurons, glial cells, and vasculature that ensures the production of energy and thus sustains the brain. Indeed, impairment of energy metabolism is often associated with neurodegenerative diseases, leading to neuronal death.
Mathematical models focusing on a unitary neuro-glia-vasculature framework have been proposed to understand how energy production and signalling processes interact. Despite these advances, understanding how the brain consumes energy efficiently, and what happens during neurodegeneration, remains a significant challenge. To this end, we propose a comprehensive model describing in detail the metabolism, neuronal electrophysiology, blood flow, and the processes occurring in the extracellular space. In our study, we consider an in silico reconstruction of the rat somatosensory cortex, including neurons, astrocytes, and blood vessels. The metabolic and electrophysiological models are coupled through ATP and solved at large scale across the whole circuit, allowing us to explore energy demands across different layers and morpho-electric neuron types.
Our simulations are consistent with experimental findings in which neuronal stimulation consumes ATP, increases intracellular sodium, and decreases potassium. Finally, the results highlight the crucial role of the energy supply in keeping neuronal sodium and potassium within their physiological ranges. Our framework offers valuable insights into the metabolic dynamics underlying neuronal activation.
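A toy version of the ATP coupling described above: spiking loads the neuron with sodium, the Na/K pump clears it at an ATP-dependent rate, and pumping consumes ATP. All constants are illustrative, not the model's.

```python
import numpy as np

# Minimal activity-metabolism loop: Na influx from spiking, ATP-dependent
# Na/K pump extrusion (3 Na+ per cycle), and ATP production/consumption.
def step(na, atp, rate_hz, dt=1e-3, na_per_spike=0.5, pump_max=5.0,
         k_na=10.0, k_atp=0.5, atp_prod=1.0, atp_rest=2.0, atp_cost=0.4):
    pump = pump_max * na / (na + k_na) * atp / (atp + k_atp)
    na += dt * (na_per_spike * rate_hz - 3.0 * pump)
    atp += dt * (atp_prod * (atp_rest - atp) - atp_cost * pump)
    return na, atp

na, atp = 5.0, 2.0
for _ in range(5000):                  # 5 s of 10 Hz firing
    na, atp = step(na, atp, rate_hz=10.0)
print(na, atp)                         # Na settles higher, ATP lower than rest
```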
Acknowledgements
This study was supported by funding to the Blue Brain Project, a research center of the École polytechnique fédérale de Lausanne (EPFL), from the Swiss government’s ETH Board of the Swiss Federal Institutes of Technology.


Sunday July 21, 2024 4:20pm - 6:20pm PDT
TBA

4:20pm PDT

P039 Implications of the Volume Transmission of Information on Learning and Prediction: An Analysis from Grossberg's Learning Theory
It is currently accepted that the links between nerve cells are established not only through synaptic connections, but also through the confluence of various cellular signals that affect global brain activity. These include cell signaling whose underlying mechanism is the diffusion of neuroactive substances into the extracellular space. An example of these substances is the free-radical gas nitric oxide (NO), which in turn gives rise to a distinct type of information transmission: Volume Transmission (VT). VT supports communication over both short and long distances, and the extracellular space (ECS) acts not only as a micro-environment separating nerve cells, but also as an information channel [1, 2]. Likewise, the characteristics of VT do not follow Cajal's law of dynamic polarization, because the transfer of information is not unidirectional across synapses but volumetric, spreading through brain tissue.
In this work we study the implications of VT for learning theory or, alternatively, for prediction theory, whose goal is the prediction of individual events, in a fixed order and at prescribed times, as analyzed by Grossberg [3]. In these studies, Grossberg presents and analyzes a set of systems of nonlinear differential equations describing the dynamics of machines capable of embodying the aforementioned learning or prediction theory.
Such nonlinear systems constitute the mathematical formalization of what Stephen Grossberg later called Non-Stationary Prediction Theory, or the Mathematical Theory of Learning [4, 5, 6].
In our study we propose a new artificial computation scheme (ACS), extending the one proposed by Grossberg by incorporating VT through our multi-compartmental model of NO diffusion [7]. The scheme is defined by a system of first-order differential equations based on multi-compartmental systems and transport phenomena. It captures real features of the ECS, such as its inhomogeneity and anisotropy.
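A minimal compartmental sketch in the spirit of the NO diffusion model [7]: a chain of ECS compartments with interface-specific exchange coefficients (allowing inhomogeneity and anisotropy), first-order NO degradation, and a source compartment. All coefficients are illustrative.

```python
import numpy as np

# du_i/dt = neighbor exchange - lambda*u_i + source_i, with a distinct
# coefficient per interface so the medium need not be homogeneous.
def no_step(u, D_iface, lam, source, dt=1e-3):
    flux = D_iface * np.diff(u)       # flow from compartment i+1 into i
    du = -lam * u + source
    du[:-1] += flux
    du[1:] -= flux
    return u + dt * du

n = 50
u = np.zeros(n)
D_iface = np.full(n - 1, 5.0)
D_iface[20:30] = 1.0                  # a less permeable stretch of ECS
source = np.zeros(n); source[10] = 10.0   # NO-releasing compartment
for _ in range(5000):
    u = no_step(u, D_iface, lam=0.5, source=source)
print(u.argmax(), u.max())            # steady NO profile around the source
```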
We will analyze the relevance of this new model for learning theory by studying its dynamics when subjected to constant inputs and when in a resting state. We will also analyze the behavior of this new ACS in changing environments. These studies will allow us to determine whether VT plays a relevant role in learning and to further explore what kind of role this might be. The development of this new ACS is situated within the Global Study Framework of the NO Retrograde Messenger and can be considered a significant advance in the theoretical area of this Framework [8].
Finally, our work will analyze the implications of VT in a more complex ACS, namely an adaptive neural architecture that embodies aspects of Grossberg's learning and prediction theory. An example of such architectures is Adaptive Resonance Theory 2 (ART 2), proposed by Carpenter and Grossberg [9]. We approach this development by modifying the fundamental learning-equation scheme of the ART models to incorporate the effect of VT due to NO gas diffusion as a modulatory influence on learning.
We present and compare results from both ACSs, the original ART 2 and ART 2 with VT, which allows us to propose new advances on the brain's stability-plasticity dilemma and on the influence of NO-mediated VT on this dilemma. This will lead us towards improved learning mechanisms embodied in neural architectures closer to the function and structure of the brain.



Sunday July 21, 2024 4:20pm - 6:20pm PDT
TBA

4:20pm PDT

P040 Study of Dynamic Behavior in Neuron-Astrocyte Networks
The exploration of dynamic phenomena in neuronal networks, including chimera states and synchronization, has attracted significant interest across diverse disciplines, from neuroscience to physics, cybernetics, and mechanics. Alterations in the topology and coupling of complex systems profoundly impact the spatial and temporal distribution of synchronized regions. Despite existing techniques for controlling these regions, significant theoretical and practical limitations persist.
This work investigates the relationship between synaptic pruning, external electric fields, and neuronal synchronization within the context of chimera states. Two models are studied: a coupled FitzHugh-Nagumo (FHN) system with unidirectional couplings, and a modified FHN model incorporating astrocytic modulation. Computational simulations explore how synaptic pruning influences neuronal synchronization dynamics and the distribution of chimera patterns.
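A minimal sketch of this model class: a ring of FHN units coupled to their R nearest neighbors on each side, the standard setting in which chimera states are studied. Parameters are illustrative; the study's unidirectional and astrocyte-modulated couplings are not reproduced here.

```python
import numpy as np

# Ring of FitzHugh-Nagumo units with symmetric nearest-neighbor coupling.
def fhn_ring(n=100, R=10, sigma=0.1, eps=0.2, a=0.5, steps=20000, dt=0.01):
    rng = np.random.default_rng(0)
    u = rng.uniform(-1.0, 1.0, n)
    v = rng.uniform(-1.0, 1.0, n)
    for _ in range(steps):
        nb_mean = sum(np.roll(u, k) for k in range(-R, R + 1) if k) / (2 * R)
        du = (u - u**3 / 3.0 - v + sigma * (nb_mean - u)) / eps
        dv = u + a
        u, v = u + dt * du, v + dt * dv
    return u, v   # snapshots like these feed strength-of-incoherence measures

u, v = fhn_ring()
```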
Astrocytes, known as the support cells of the brain, play a key role in regulating neuronal activity and synaptic transmission. Their interaction with neurons has implications for various brain functions and disorders. Astrocytes are modeled as a glutamate reservoir, with linear modulation of glutamate concentration and synaptic strength. Computational simulations investigate how the presence of astrocytes affects the emergent properties of neuronal networks.
The dynamics of single neurons are analyzed through bifurcation diagrams and interspike interval coefficient of variation, demonstrating the significant impact of external stimulus current and electric field on neuronal activity. To study network behavior, neurons are coupled with nearest neighbors under various setups. Moreover, the collective behaviors of the network under the influence of an electric field are analyzed using measures such as strength of incoherence and coefficient of variation.
One crucial aspect explored is the role of the number of neighbors and the impact of astrocyte interaction on the emergence and dynamics of synchronized states and chimera patterns in neuronal networks. By systematically analyzing interactions between neurons and astrocytes under different neighbor configurations, valuable insights into the complex interplay between network structure and astrocytic regulation of neuronal activity are gained.
In summary, this study aims to provide a comprehensive understanding of the interaction between astrocytes and neurons in regulating brain function. By integrating astrocytic modulation into FHN models and examining its effects on network dynamics, novel insights into the complex interactions in the brain are expected.


Sunday July 21, 2024 4:20pm - 6:20pm PDT
TBA

4:20pm PDT

P041 Dynamic Effects of Electric Fields on Thermosensitive Neuronal Networks
* Ediline Laurence Fouelifack Nguessap, Fernando Fagundes Ferreira, Antonio C. Roque da Silva Filho
Department of Physics, FFCLRP/University of Sao Paulo, Brazil
* fonela@usp.br
 
 
The FitzHugh-Nagumo neuronal model is used to explore the influence of the electric field on the dynamics of thermosensitive neurons. By adding the electric field as a new variable, this study investigates how it affects the modulation of polarization in the cell medium induced by changes in ion charge density. Driven by a voltage source acting as an external stimulus current, the different firing-mode responses of the proposed model are analyzed when an external electric field is applied. Through computational analysis, the study evaluates the impact of parameters such as cell radius, stimulus voltage-source amplitude and frequency, as well as the presence of an external electric field. The dynamics of the network are studied through non-local coupling of neurons via electrical and chemical synapse functions, and the strength of incoherence and the discontinuity measure are computed to characterize different collective behaviors. The results demonstrate distinct mode transitions of isolated neurons, ranging from spiking to bursting, regular oscillation, and chaotic dynamics. Numerical simulations reveal the emergence of traveling chimera states within the network in the absence of an external electric field, while the application of such a field gives rise to chimera and multichimera states. These findings suggest that the firing mode is controlled by periodic external electric fields and cell radius, with the electric field acting to regulate neuron activity and control network dynamics, potentially inducing artificial chimera or synchronization states. External electric fields and stimuli play a crucial role in neuronal firing dynamics, affecting the transition between different firing modes and influencing collective behaviors within neuronal networks. Understanding these effects contributes to the comprehension of neural processes and the potential manipulation of neural activity for various applications in neuroscience and biophysics.
 


Sunday July 21, 2024 4:20pm - 6:20pm PDT
TBA
 
Monday, July 22
 

8:30am PDT

Registration
Monday July 22, 2024 8:30am - 8:30am PDT

9:10am PDT

Announcements and Keynote #3
Monday July 22, 2024 9:10am - 10:10am PDT

10:10am PDT

Coffee Break
Monday July 22, 2024 10:10am - 10:40am PDT

10:40am PDT

Oral Session 3: From cells to circuits
Monday July 22, 2024 10:40am - 12:30pm PDT
Jacarandá

10:41am PDT

FO3: Neural Heterogeneity Controls the Computational Properties of Spiking Neural Networks
Richard Gast, Sara A. Solla, Ann Kennedy

Monday July 22, 2024 10:41am - 11:10am PDT
Jacarandá

11:30am PDT

O10: Functional Connectivity and Complex Network Dynamics of In-Vitro Neuronal Spiking Activity During Rest and Gameplay
Moein Khajehnejad, Forough Habibollahi Saatlou, Alon Loeffler, Brett J. Kagan, Adeel Razi

Monday July 22, 2024 11:30am - 11:50am PDT
Jacarandá

12:30pm PDT

Lunch
Monday July 22, 2024 12:30pm - 2:00pm PDT

12:30pm PDT

OCNS Board Meeting
Monday July 22, 2024 12:30pm - 2:00pm PDT

2:01pm PDT

FO4: Backwards and forwards, hot or cold: robust and flexible rhythms in a neural network model
Lindsay Stolting, Joshua Nunley, Eduardo Izquierdo

Monday July 22, 2024 2:01pm - 2:30pm PDT
Jacarandá

2:50pm PDT

O14: Forecasting Seizure Duration from Neural Connectivity Patterns
Parvin Zarei Eskikand, Mark Cook, Anthony Burkitt, David Grayden

Monday July 22, 2024 2:50pm - 3:10pm PDT
Jacarandá

3:10pm PDT

O15: A computational model to help in understanding the impact of a 3D organization on cortical dynamics
Francesca Callegari, Martina Brofiga, Paolo Massobrio

Monday July 22, 2024 3:10pm - 3:30pm PDT
Jacarandá

3:30pm PDT

O16: Self-organized emergence of multi-areal information processing in a non human primate connectome-based model
Vinicius Lima Cordeiro, Nicole Voges, Andrea Brovelli, Demian Battaglia

Monday July 22, 2024 3:30pm - 3:50pm PDT
Jacarandá

3:50pm PDT

Coffee Break
Monday July 22, 2024 3:50pm - 4:20pm PDT

4:20pm PDT

P042 Optimal coding and information processing due to firing threshold adaptation near criticality
The brain encodes information through neuronal populations' output firing rates [1] or spike patterns [2]. However, weak inputs have limited impact on output rates, so rate coding alone cannot explain behavioral performance across sensory systems. Spike patterns, which are implicated in perception and memory, can generate sparse and combinatorial codes, enhancing memory capacity, robust signal encoding, information transmission, and energy efficiency [2, 3].
This study investigates input-output (I/O) relations in a recurrent excitatory network, describing the effect of spike-threshold adaptation on both rate and pattern coding. We compare networks with adaptive and constant firing thresholds, showing that adaptive networks exhibit both optimal pattern-coding capacity and optimal I/O mutual information for weak inputs. Our model allows us to reveal the underlying mechanism of this optimization: partial self-organized quasi-critical (SOqC) dynamics [4]. The adaptation enables a smooth transition from pattern coding to rate coding as input rates increase, with a threshold-recovery timescale of ~100 ms. This holds around the critical point, whereas constant-threshold networks only perform pattern coding in the supercritical state and for stronger inputs, and are thus not capable of discriminating weak stimuli. However, the adaptive network's rate-coding capacity (as described by its dynamic range) is equivalent to that of constant-threshold networks in the subcritical regime.
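A minimal sketch of the adaptation mechanism: each spike raises the firing threshold, which recovers with a ~100 ms timescale. Parameters are illustrative, not the study's.

```python
import numpy as np

# LIF unit with a spike-triggered, exponentially recovering threshold.
def firing_rate(rate_hz, T=2.0, dt=1e-3, tau_m=0.02, tau_th=0.1,
                th0=1.0, d_th=0.5, w=0.3, seed=0):
    rng = np.random.default_rng(seed)
    v, th, n_spikes = 0.0, th0, 0
    for _ in range(int(T / dt)):
        v += -dt * v / tau_m + w * rng.poisson(rate_hz * dt)
        th += dt * (th0 - th) / tau_th       # ~100 ms threshold recovery
        if v >= th:
            n_spikes += 1
            v = 0.0
            th += d_th                       # adaptation step
    return n_spikes / T

print(firing_rate(100.0), firing_rate(1000.0))  # adaptation compresses strong input
```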
The identified threshold timescale aligns with that of various cells in the brain, including the mammalian cortex and hippocampus (e.g., [5]), the teleost pallial region, and sensory neurons. Our findings lead to the hypothesis that threshold adaptation, one of the ingredients of spike-frequency modulation, is exploited by these systems to generate sensitivity to weak and strong stimuli alike, through pattern and rate coding respectively. For instance, threshold changes have been observed in the hippocampus, enhancing factors such as information transmission, feature selectivity (e.g., [6]), neural-code precision, and synchrony detection. These brain regions, critical for discriminating sensory inputs and for memory tasks, stand to benefit from improved pattern coding.
References:
1. W Gerstner et al. (2014): Neuronal Dynamics. Cambridge University Press.
2. BA Olshausen, DJ Field (2004): Sparse coding of sensory inputs. Curr Opin Neurobiol 14: 481-487.
3. V Itskov et al. (2011): Cell Assembly Sequences Arising from Spike Threshold Adaptation Keep Track of Time in the Hippocampus. J Neurosci 31: 2828-2834.
4. G Menesse et al. (2022): Homeostatic Criticality in Neuronal Networks. Chaos Solitons Fractals 156: 111877.
5. A-T Trinh et al. (2023): Adaptive spike threshold dynamics associated with sparse spiking of hilar mossy cells are captured by a simple model. J Physiol 601: 4397-4422.
6. WB Wilent, D Contreras (2005): Stimulus-Dependent Changes in Spike Threshold Enhance Feature Selectivity in Rat Barrel Cortex Neurons. J Neurosci 25: 2983-2991.
 



Monday July 22, 2024 4:20pm - 6:20pm PDT
TBA

4:20pm PDT

P043 Diving into space: emerging and disappearing shared dimensions in neuronal activity under the influence of psychedelics
Classical psychedelics (5-HT2A agonists) wield profound influence over the orchestrated interplay of billions of neurons in the cortex. Dimensionality-reduction techniques like principal component analysis (PCA) reveal that the lower-dimensional structure of spontaneous brain activity is persistent across time and neurons [1]. However, when classical psychedelics are introduced, do new dimensions emerge or vanish within neuronal population activity? Moreover, what biological mechanisms underlie the emergence of new dimensions? We set out to answer these questions by identifying contrastive principal components between different brain states.
To this end, we analysed Neuropixels recordings of spontaneous brain activity in rodents before vs. after psychedelic drug administration (TCB-2, psilocybin, LSD, DMT), during wakefulness vs. non-REM sleep [2], and during low vs. high arousal [3]. We utilized contrastive principal component analysis (cPCA) to identify dimensions that either appear or disappear [4], i.e. directions that are present in the target dataset (e.g. before drug) but not in the background dataset (e.g. after drug). Contrastive components were identified by eigendecomposition of the target covariance matrix minus the background covariance matrix scaled by the contrastive parameter alpha (Fig. 1). We extended cPCA methodologically by analysing the position of contrastive components along the alpha spectrum.
Preliminary results indicate that classical psychedelics consistently caused dimensions to disappear, an effect particularly prominent on slow timescales. This trend persists even after excluding neurons whose firing rates decreased from before to after drug administration. Contrastive dimensions were also rarely unique to one dataset; rather, they were shared across target and background datasets to varying extents. This holds true for the psychedelic (before vs. after drug) and sleep (non-REM vs. wakefulness) datasets. To quantify and compare contrastive dimensions, we measured them either towards the left end of the alpha spectrum, where the background variance equals the random variance of a single neuron, or towards the centre of the alpha spectrum, where the principal component captures half of the variance of principal component 1 (Fig. 1).
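The core cPCA computation can be sketched as follows, with random data standing in for binned population activity (following [4]); the alpha values are illustrative.

```python
import numpy as np

# cPCA: eigendecompose C_target - alpha * C_background and read the leading
# eigenvectors as directions enriched in the target condition.
def cpca(target, background, alpha):
    c_t = np.cov(target, rowvar=False)
    c_b = np.cov(background, rowvar=False)
    evals, evecs = np.linalg.eigh(c_t - alpha * c_b)
    order = np.argsort(evals)[::-1]
    return evals[order], evecs[:, order]

rng = np.random.default_rng(0)
before = rng.normal(size=(5000, 40))     # e.g. activity before the drug
after = rng.normal(size=(5000, 40))      # e.g. activity after the drug
for alpha in (0.5, 1.0, 2.0):            # sweep along the alpha spectrum
    evals, _ = cpca(before, after, alpha)
    print(alpha, evals[:3])
```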

References
[1] Stringer, C., Pachitariu, M., Steinmetz, N., Reddy, C. B., Carandini, M., & Harris, K. D. (2019). Spontaneous behaviors drive multidimensional, brainwide activity. Science, 364(6437), eaav7893.
[2] Senzai, Y., Fernandez-Ruiz, A., & Buzsáki, G. (2019). Layer-specific physiological features and interlaminar interactions in the primary visual cortex of the mouse. Neuron, 101(3), 500-513.
[3] Stringer, C., Pachitariu, M., Reddy, C., Carandini, M., & Harris, K. D. (2018). Recordings of ten thousand neurons in visual cortex during spontaneous behaviors (Version 4). Janelia Research Campus. https://doi.org/10.25378/janelia.6163622.v4
[4] Abid, A., Zhang, M. J., Bagaria, V. K., & Zou, J. (2018). Exploring patterns enriched in a dataset with contrastive principal component analysis. Nature Communications, 9(1), 2134.


Monday July 22, 2024 4:20pm - 6:20pm PDT
TBA

4:20pm PDT

P044 How to measure the dynamic range of complex response functions?
The neuronal response function delineates the interplay between external stimuli and neural activity, serving as a pivotal tool for unraveling how neurons encode and process information. A fundamental aspect of response functions is their dynamic range, which quantifies the range of input levels that yield distinguishable neuronal responses. Many response functions exhibit a simple sigmoidal profile, featuring subtle firing-rate changes at low and high inputs and more pronounced changes at intermediate input levels. For these typical cases, the conventional dynamic range definition, which assumes that the entire input range comprised between 10% and 90% of the maximum output contributes to the dynamic range (whilst the rest is discarded), proves successful. However, growing evidence also indicates the presence of more complex response functions with double-sigmoid or multiple-sigmoid curves, often with plateaus within the customary 10%-90% response range. For complex response functions, the conventional dynamic range definition often generates inflated results, as indistinguishable inputs (plateaus) may improperly contribute to the measured dynamic range. To better understand complex response functions, we study a set of such functions from previously published empirical and modeling studies, together with a neuronal model of a mouse retinal ganglion cell with detailed dendritic structure capable of exhibiting both simple-sigmoid and complex response functions. The model incorporates two dynamical elements that reduce or increase the energy consumption of the neuron, and both alterations can yield double-sigmoid response functions. We introduce a novel way of classifying response functions based on their complexity. To estimate the dynamic range of only the discernible responses in both simple and complex response functions, we propose alternative definitions of dynamic range. These alternative approaches match the conventional definition for simple response functions and generalize the measure for complex response functions. We discuss the advantages and limitations of each proposal, highlighting that all of them involve fewer arbitrary choices than the conventional definition of dynamic range. These newly developed methods are general and adaptable to various research fields.
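For reference, the conventional definition discussed above can be written as follows (the standard 10%-90% convention; F is the response function, F_0 its baseline, and S_x the input at which F reaches x% of its output range):

```latex
% Conventional dynamic range: the decibel width of the input interval whose
% responses span 10% to 90% of the output range of the response function F.
\begin{equation}
  F_x = F_0 + \frac{x}{100}\,\left(F_{\max} - F_0\right), \qquad
  S_x = F^{-1}(F_x), \qquad
  \Delta = 10\,\log_{10}\!\left(\frac{S_{90}}{S_{10}}\right)
\end{equation}
```

When F contains a plateau between S_10 and S_90, that interval still contributes to Δ even though the corresponding inputs are indistinguishable from the output, which is the inflation the alternative definitions are designed to remove.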


Monday July 22, 2024 4:20pm - 6:20pm PDT
TBA

4:20pm PDT

P045 Hierarchical Brain Dynamics: Insights from Multicompartmental Neuronal Modeling
Cytoarchitectonic studies have uncovered a correlation between higher levels of cortical hierarchy and reduced dendritic size [1]. This hierarchical organization extends to the brain's timescales, with longer intrinsic timescales at higher hierarchical levels [2-4]. However, the contribution of single-neuron morphology to this hierarchy of timescales, which is typically characterized at the whole-brain level, remains unclear. We employ a multicompartmental neuronal modeling approach based on digitally reconstructed neurons [5], which has previously enabled the classification of neurons based on their dynamical features [6] and the study of aging effects on neuronal structure and dynamics [7]. This flexible approach has provided valuable insights into dendritic computation and the intricate interplay between age-related changes and neuronal behavior. Here we establish a significant correlation: neurons with larger dendritic structures exhibit shorter intrinsic timescales. Furthermore, we investigate the influence of inhomogeneous propagation of dendritic activity and synaptic input on neuronal energy consumption, which is also heterogeneously distributed across brain regions [8]. Our results reveal mechanisms underlying complex neuronal response functions characterized by plateaus and a double-sigmoid shape, akin to patterns observed in retinal ganglion cells [9]. Our findings highlight the crucial role of single-neuron structure in contributing to a hierarchy of intrinsic timescales in the brain, aligning with observations from electrophysiology experiments [10] and whole-brain resting-state functional magnetic resonance imaging [11]. This study advances our understanding of neuronal dynamics and sheds light on the intricate relationship between neuronal structure, the hierarchy of timescales, and energy consumption in the brain.

[1] Hilgetag, C.C. and Goulas, A., Philos Trans R Soc Lond B Biol Sci, 2020, 375(1796), p.20190319.

[2] Kiebel, S.J., Daunizeau, J. and Friston, K.J., PLoS Comput. Biol., 2008, 4(11), p.e1000209.

[3] Chaudhuri, R., Knoblauch, K., Gariel, M.A., Kennedy, H. and Wang, X.J., Neuron, 2015, 88(2), pp.419-431.

[4] Gollo, L.L., Zalesky, A., Hutchison, R.M., Van Den Heuvel, M. and Breakspear, M., Philos Trans R Soc Lond B Biol Sci, 2015, 370(1668), p.20140165.

[5] Ascoli, G.A., Donohue, D.E. and Halavi, M., J. Neurosci., 2007, 27(35), pp.9247-9251.

[6] Kirch, C. and Gollo, L.L., PeerJ, 2020, 8, p.e10250.

[7] Kirch, C. and Gollo, L.L., Sci. Rep., 2021, 11(1), p.1309.

[8] Shokri-Kojori, E., Tomasi, D., Alipanahi, B., Wiers, C.E., Wang, G.J. and Volkow, N.D., Nat. Commun., 2019, 10(1), p.690.

[9] Deans, M.R., Volgyi, B., Goodenough, D.A., Bloomfield, S.A. and Paul, D.L., Neuron, 2002, 36(4), pp.703-712.

[10] Murray, J.D., Bernacchia, A., Freedman, D.J., Romo, R., Wallis, J.D., Cai, X., Padoa-Schioppa, C., Pasternak, T., Seo, H., Lee, D. and Wang, X.J., Nat. Neuroscience, 2014, 17(12), pp.1661-1663.

[11] Raut, R.V., Snyder, A.Z. and Raichle, M.E., PNAS, 2020, 117(34), pp.20890-20897.


Monday July 22, 2024 4:20pm - 6:20pm PDT
TBA

4:20pm PDT

P046 Basic biophysical models of short-term presynaptic plasticity
Communication between neurons via chemical synapses may generate different postsynaptic responses for consecutive activations. The differences can be explained by physiological phenomena that occur pre- and postsynaptically, possibly evolving on different time scales, ranging from tens of milliseconds to hundreds of seconds or more. Based on previous experimental work (Barroso-Flores et al., 2015), we developed two biophysical, yet simple, models of presynaptic neurotransmitter release that can be used to fit and explain existing electrophysiological data. The first model is based on a continuous, deterministic, 3D dynamical system that captures the dynamics of presynaptic calcium, the activation of the presynaptic release machinery, and the neurotransmitter available from the readily releasable pool of vesicles. This model captures the antagonistic dynamics of residual calcium accumulation and the depletion of vesicles from the readily releasable pool. The model also predicts whether synaptic release will display facilitation, depression, or a biphasic release profile as a function of the characteristic rates for calcium accumulation and vesicle replenishment, and the presynaptic firing rate, among other parameters. Examination of the biochemistry of the release machinery provides grounds for a quasi-steady-state reduction that yields a 2D version of the first model. A second model, closer to the biology of a chemical synapse, is derived from the first model by replacing the neurotransmitter available for release with an integer random variable representing the number of vesicles in the readily releasable pool. The geometry and topology of both models can be studied with analytical expressions using standard dynamical systems techniques. Further, these models can be coupled to continuous models of the membrane potential. Examples of how these models can be used to explain changes in network dynamics induced by short-term presynaptic plasticity will be presented.
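A minimal sketch of the deterministic 3D model class described above, with calcium accumulation, release-machinery activation, and vesicle depletion/replenishment; the functional forms and rates are illustrative, not the fitted model.

```python
import numpy as np

# State: residual calcium c, release-machinery activation a, and the
# readily releasable pool n. A 10 Hz train first facilitates (a builds up
# with c) and then depresses (n depletes faster than it replenishes).
def release_train(spike_times, T=2.0, dt=1e-4, c_in=0.2, tau_c=0.1,
                  k_on=30.0, k_off=10.0, tau_n=0.5, u=0.4):
    spikes = np.zeros(int(T / dt))
    for ts in spike_times:
        spikes[int(ts / dt)] = 1.0
    c, a, n, released = 0.0, 0.0, 1.0, []
    for s in spikes:
        c += -dt * c / tau_c + c_in * s              # residual Ca2+
        a += dt * (k_on * c * (1.0 - a) - k_off * a) # machinery activation
        n += dt * (1.0 - n) / tau_n                  # RRP replenishment
        if s:
            r = u * a * n
            released.append(r)
            n -= r                                    # depletion
    return released

print(release_train(np.arange(0.1, 1.05, 0.1)))       # biphasic profile
```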


Barroso-Flores, Janet, Marco A. Herrera-Valdez, Violeta Gisselle Lopez-Huerta, Elvira Galarraga, and José Bargas. "Diverse short-term dynamics of inhibitory synapses converging on striatal projection neurons: differential changes in a rodent model of parkinson’s disease." Neural plasticity, 2015 (2015).



Monday July 22, 2024 4:20pm - 6:20pm PDT
TBA

4:20pm PDT

P047 On EEG microstates and linear dynamics
This study delves into EEG microstates [1], quasi-stable patterns of brain activity associated with cognitive and clinical phenomena. Despite their significance, there is a lack of consensus on how to analyze microstates. To address this gap, we apply various state-of-the-art microstate algorithms to a substantial EEG dataset, aiming to elucidate their relationships and dynamics. We propose that the properties of microstates are heavily influenced by the linear characteristics of EEG signals.
We conducted our research using the Max Planck Institut Leipzig Mind-Brain-Body Dataset. Among other tasks, participants completed a 62-channel resting-state EEG experiment with two paradigms: eyes open and eyes closed. We used the preprocessed EEG data (total N = 204) provided as EEGLAB .set and .fdt files; the data have a sampling frequency of 250 Hz, are low-pass filtered at 125 Hz, and are ~8 min long. The complete description can be found in [2].
We compared the performance of six different clustering algorithms: (topographic) atomize and agglomerate hierarchical clustering, modified K-means, principal component analysis, independent component analysis, and hidden Markov models. These algorithms were assessed based on microstate measures such as lifespan, coverage, and occurrence, as well as dynamic statistics such as mixing time, entropy, entropy rate, and the first peak of the auto-mutual information function; see [3] for detailed methods and results.
We found that microstate statistics derived from real EEG data closely resembled those obtained from Fourier surrogates, suggesting a strong dependence on the linear covariance and autocorrelation structure of the underlying EEG data. Moreover, when employing a linear vector autoregression (VAR) model, we observed microstates highly comparable to those estimated from actual EEG data. This indicates that linear VAR models could potentially provide more reliable estimates of microstate repertoire and dynamics due to their robustness [3,4].
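The surrogate comparison can be sketched as follows: multivariate Fourier surrogates randomize phases while preserving each channel's spectrum and, by applying the same phases to all channels, the cross-covariance structure as well.

```python
import numpy as np

# Multivariate (phase-randomized) Fourier surrogate of an EEG array.
def fourier_surrogate(x, seed=0):
    """x: (samples, channels); returns a surrogate with identical linear
    auto- and cross-correlation structure but randomized phases."""
    rng = np.random.default_rng(seed)
    n = x.shape[0]
    xf = np.fft.rfft(x, axis=0)
    phases = rng.uniform(0.0, 2.0 * np.pi, size=(xf.shape[0], 1))
    phases[0] = 0.0                    # keep the DC component real
    if n % 2 == 0:
        phases[-1] = 0.0               # keep the Nyquist bin real
    return np.fft.irfft(xf * np.exp(1j * phases), n=n, axis=0)

eeg = np.random.default_rng(1).normal(size=(2000, 62))   # stand-in data
surrogate = fourier_surrogate(eeg)     # feed to the same microstate pipeline
```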

Our findings underscore the significance of linear EEG models in comprehending both the static and dynamic properties of human brain microstates. By demonstrating high reproducibility of microstate properties from linear models, particularly Fourier surrogates and VAR models, we contribute to advancing the methodological and clinical interpretation of EEG data, and EEG microstates in particular, paving the way for a deeper understanding of brain dynamics and its links to function and pathology.

Acknowledgments

The publication was supported by ERDF-Project Brain dynamics, No. CZ.02.01.01/00/22_008/0004643 and the Czech Science Foundation project No. 21-32608S.


References
1. Pascual-Marqui RD. Segmentation of brain electrical activity into microstates: Model estimation and validation. IEEE Trans Biomed Eng. 1995;42(7):658-665.
2. Babayan A. Data descriptor: A mind-brain-body dataset of MRI, EEG, cognition, emotion, and peripheral physiology in young and old adults. Sci Data. 2019;6:1-21.

3. Jajcay N, Hlinka J. Towards a dynamical understanding of microstate analysis of M/EEG data. NeuroImage. 2023; 281:120371.
4. Pascual-Marqui RD. On the relation between EEG microstates and cross-spectra. 2022:1-15. Available from: http://arxiv.org/abs/2208.02540




Monday July 22, 2024 4:20pm - 6:20pm PDT
TBA

4:20pm PDT

P048 Formation of artificial neural assemblies by the E%Max-winners-take-all process
The concept of the neural assembly is not new and dates back to the psychologist Donald O. Hebb. According to his hypothesis, assemblies are sets of strongly connected neurons responsible for representing cognitive information. He believed that through the activity of these assemblies, combined with internal biological mechanisms that facilitate their formation and maintenance, more complex cognitive functions could emerge, such as language and reasoning. Recently, a framework called "Assembly Calculus" [1], describing possible operations involving neural assemblies, was proposed to tackle high-order neural computations, including those necessary for language processing. The proposed neural model is a discrete-time dynamical system equipped with operations responsible for the formation and maintenance of assemblies. Its structure consists of a set of brain areas, each containing a finite number of excitatory neurons, with synapses randomly formed within and between areas. In each area, inhibition is modeled by the k-winners-take-all process, allowing only the k neurons with the highest synaptic inputs to fire in each iteration. In the present work, we explore the properties of the model when neural competition due to inhibition is instead implemented by a more biologically plausible mechanism called E%Max-winners-take-all [2]. Under this mechanism, the number of neurons firing in each iteration is variable and depends on the distribution of synaptic inputs across the network. Therefore, unlike in the original model, neural synchronization and brain rhythms play important roles in assembly formation, recall, and information transfer among areas. We present a computational study in which we describe the distribution of assembly sizes and the retrieval capabilities of the model network as functions of connectivity and plasticity.
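The two competition rules can be contrasted in a few lines; the value E = 10 below is illustrative.

```python
import numpy as np

# k-WTA fires a fixed number of winners; E%Max-WTA [2] fires every neuron
# whose input is within E% of the momentary maximum, so the number of
# winners depends on the input distribution.
def k_wta(inputs, k):
    return np.argsort(inputs)[-k:]

def e_max_wta(inputs, e_percent):
    cutoff = (1.0 - e_percent / 100.0) * inputs.max()
    return np.flatnonzero(inputs >= cutoff)

x = np.random.default_rng(0).gamma(2.0, size=1000)   # synaptic inputs
print(len(k_wta(x, 50)))        # always 50
print(len(e_max_wta(x, 10.0)))  # varies with the distribution of inputs
```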


References
  1. Papadimitriou C. H., et al. Brain computation by assemblies of neurons. Proceedings of the National Academy of Sciences, 2020, 14464-14472
  2. de Almeida, L., et al. A Second Function of Gamma Frequency Oscillations: An E%-Max Winner-Take-All Mechanism Selects Which Cells Fire. Journal of Neuroscience, 2009, 7497-7503


Monday July 22, 2024 4:20pm - 6:20pm PDT
TBA

4:20pm PDT

P049 The Virtual Brain links neurotransmitter changes and TMS-evoked potential alterations in major depressive disorder
Transcranial magnetic stimulation (TMS) has emerged as a promising therapeutic approach for major depressive disorder (MDD). TMS-evoked potentials (TEPs, Fig. 1A) in EEG contain specific peaks, such as the N45 and N100, which can be linked to GABAergic neurotransmission. MDD patients show higher amplitudes of the whole TEP and of single peaks, while their cerebral GABA levels are lowered. This complex and poorly understood interplay of GABA, TEPs, and MDD presents a compelling case for the use of brain network models. A recent study achieved high levels of fit between empirical and simulated TEPs of healthy controls based on mean-field modeling [1]. However, a modeling perspective on TEP-related MDD pathology has so far been missing. Therefore, we aim to demonstrate in silico how inhibitory neurotransmitter changes, similar to MDD pathology, affect TEP amplitudes.


We created a brain network model using the open-source whole-brain simulator 'The Virtual Brain' (TVB, thevirtualbrain.org). The activity of brain regions was simulated with the Jansen & Rit (JR) mean-field model (Fig. 1D). The electric field of the TMS stimulation was estimated with the software package SimNIBS (Fig. 1B, simnibs.github.io/simnibs/, [2]). In line with the previous study, we applied Adam-based gradient-descent optimization to fit whole-brain simulations (Fig. 1C), i.e. the JR parameters (Fig. 1D) and the effective connectivity (Fig. 1E), to empirical TEPs of healthy individuals (n=20, 14 females, 24.5±4.9 years, Fig. 1A). After fitting, two inhibitory JR parameters (the inhibitory time constant b and the number of inhibitory synapses C4, Fig. 1H) were altered to mimic GABA-related MDD pathophysiology in TVB. The effect of these parameter alterations on the TEP amplitude was analyzed (Fig. 1J).
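The two altered parameters enter the inhibitory feedback loop through the standard JR impulse response, reproduced below for reference; the study's specific fitted values are not given here.

```latex
% Jansen-Rit inhibitory impulse response: B is the maximal inhibitory PSP
% amplitude and b the inhibitory rate constant; C4 scales the number of
% inhibitory synapses in the feedback loop.
\begin{equation}
  h_i(t) = B\, b\, t\, e^{-b t}, \quad t \ge 0
\end{equation}
```

Decreasing b or C4 therefore weakens or slows the inhibitory feedback onto the pyramidal population, which is the manipulation used above to mimic lowered GABAergic transmission.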


We achieved high mean fits between TVB simulations and empirical TEPs (r=0.696, p<0.001), reproducing with TVB the results of the previous study. Alterations of the inhibitory JR parameters had statistically significant impacts on the amplitudes of the whole TEP and of all peaks. Both C4 (r=-0.48, p<0.001) and b (r=-0.37, p<0.001) correlated negatively with the global TEP amplitude. This negative correlation was also observed between C4 and all single peaks (N45: r=-0.43, p<0.001; P60: r=-0.64, p<0.001; N100: r=-0.24, p<0.001; P185: r=-0.38, p<0.001), as well as between b and three peaks (N45: r=-0.20, p<0.001; N100: r=-0.30, p<0.001; P185: r=-0.23, p<0.001), while for one peak a positive correlation with b was detected (P60: r=0.10, p<0.001).


Lowering GABAergic inhibitory synaptic transmission in our model led to alterations in simulated TEPs comparable to those observed empirically in MDD patients. Thus, we successfully simulated MDD pathology with TVB, offering a modeling perspective on MDD TEPs. Through our computational virtual-TMS framework, we provide a mechanistic explanation for the relationship between GABAergic inhibitory synaptic transmission and the pathological TEP amplitudes observed in MDD. Our work represents a stepping stone towards understanding and linking MDD pathology across different hierarchical levels of the brain.


References

  1. Momi D, Wang Z, Griffiths JD. TMS-evoked responses are driven by recurrent large-scale network dynamics. Elife, 2023. 12.
  2. Thielscher A, Antunes A, Saturnino GB. Field modeling for transcranial magnetic stimulation: A useful tool to understand the physiological effects of TMS?, 2015 37th Conf of EMBC. 2015.


Monday July 22, 2024 4:20pm - 6:20pm PDT
TBA

4:20pm PDT

P050 Visualizing information in Deep Neural Networks receiving competitive stimuli
Deep Neural Networks (DNNs) exhibit significant parallels with the hierarchical organization of representations in the primate visual system. However, their feed-forward architecture, in which all information in a scene is processed simultaneously, is unlikely to accurately reflect reality [1]. Typically, when an animal or a human observes a scene, even without saccadic movements, covert attention enables the shifting of focus and the processing of an image as a collection of recognizable items, whose identities are then transferred to working memory. Consequently, the static image is transformed into a temporal sequence. Such temporal dynamics have been overlooked by DNN models. To push towards more realistic models, we designed an experiment in which a DNN is presented with two competing items, one more salient than the other, placed in different areas of the visual field represented by non-overlapping receptive fields. We utilize the MNIST digit dataset to illustrate the model. The network's task is to identify the more salient (or target) item. However, we devised a training strategy through which the network also learns to recognize the identity of the less salient (or background) item, even though this information is not explicitly required in the output layer. We subsequently developed visualization tools capable of tracking the flow of information through the layers of the network. Of particular interest to us is understanding how latent information about the background item is retained within the network. In this study, we introduce this novel visualization tool and present results obtained from networks with different architectures subjected to the training strategy that allows maintenance of background information.

[1] Katharina Duecker, Marco Idiart, Marcel AJ van Gerven, and Ole Jensen. Oscillations in an artificial neural network convert competing inputs into a temporal code. bioRxiv, 2023. 


Monday July 22, 2024 4:20pm - 6:20pm PDT
TBA

4:20pm PDT

P051 Astrocytic modulation of brain oscillations in a network model with neurons-astrocytes interactions in epilepsy
Epilepsy is a chronic syndrome characterized by a predisposition to generate excessive or hypersynchronous neuronal activity in the brain, known as seizures, with neurobiological, cognitive, psychological, and social consequences. Under normal physiological conditions, astrocytes participate in the regulation of neuronal excitability, transmission, and synaptic connectivity [1]. In patients with epilepsy, astrocytes show morphological and functional alterations, known as reactive astrogliosis, and empirical evidence suggests that neuron-astrocyte interactions are key to the development and progression of epileptogenesis and ictogenesis [2]. However, the relationship between reactive astrogliosis and epileptiform activity in brain circuits is not fully understood.
 
In this work, we develop a computational theoretical framework to understand how structural and physiological changes in astrocytes modulate epileptiform activity in the local field potential (LFP) of a brain circuit. We simulate a volume of human cortical tissue with a balanced network composed of 10,000 neurons, of which 8,000 are excitatory and 2,000 inhibitory, and a variable number of astrocytes able to interact with synapses. Structural connectivity between neurons is random, with a 5% connection probability, and the connectivity of synapses with astrocytes is limited to a variable distance depending on the overlap of astrocytic territories. The network dynamics are modeled with adaptive exponential integrate-and-fire spiking neurons, and the astrocytic dynamics with leaky integrate-and-fire astrocytes [3]. Synapses between neurons are conductance-based, and neuron-astrocyte interactions are bidirectional, with astrocyte activation based on the intracellular calcium concentration driven by synaptic stimulation, and gliotransmission from astrocytes back to neurons. The simulated LFP is computed from the synaptic currents and the distance between the recording point and the synapses.
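The LFP forward model described in the last sentence can be sketched with the usual point-source approximation; the conductivity and geometry below are illustrative.

```python
import numpy as np

# Point-source LFP: each synaptic current contributes I / (4*pi*sigma*r)
# at the electrode, with sigma the extracellular conductivity.
SIGMA = 0.3                                    # S/m, a typical cortical value

def lfp(i_syn, syn_pos, electrode):
    """i_syn: (n_syn, n_t) currents in A; syn_pos: (n_syn, 3) positions in m."""
    r = np.linalg.norm(syn_pos - electrode, axis=1)
    r = np.maximum(r, 1e-5)                    # guard against r = 0
    return (i_syn / (4.0 * np.pi * SIGMA * r[:, None])).sum(axis=0)

rng = np.random.default_rng(0)
currents = 1e-9 * rng.normal(size=(200, 1000))            # nA-scale currents
positions = rng.uniform(-1e-4, 1e-4, size=(200, 3))       # 100-um neighborhood
signal = lfp(currents, positions, electrode=np.zeros(3))  # simulated LFP trace
```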
 
We explored the biophysical parameter space of gliotransmission, neuronal adaptation, and the structural connectivity of astrocytic interactions, and we analyzed the power spectral density (PSD) of simulated LFPs to detect the emergence of brain oscillations and characterize their periodic and aperiodic components. Preliminary results suggest that morphological and physiological changes in neuron–astrocyte interactions, in the context of reactive astrogliosis, modulate the occurrence of the high-frequency oscillations present in epilepsy.
 
Acknowledgments
 
We are grateful to the National Agency for Research and Development of the Government of Chile for supporting P. Illescas.
 
References


1. Santello M, Toni N, Volterra A. Astrocyte function from information processing to cognition and cognitive impairment. Nature Neuroscience. 2019, 22(2), 154-166.

2. Verhoog QP, et al. Astrocytes as guardians of neuronal excitability: mechanisms underlying epileptogenesis. Frontiers in Neurology. 2020, 11, 591690.

3. De Pittà M, Brunel N. Multiple forms of working memory emerge from synapse–astrocyte interactions in a neuron–glia network model. Proceedings of the National Academy of Sciences. 2022, 119(43), e2207912119.
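A minimal sketch of the kind of LFP proxy described above, assuming a point-source approximation in which each synaptic current is weighted by the inverse distance to the recording point; the conductivity value and toy data are illustrative, not the authors' implementation.

```python
import numpy as np

def lfp_proxy(syn_currents, syn_positions, electrode_pos, sigma=0.3):
    """Point-source LFP estimate from synaptic currents.

    syn_currents: (n_synapses, n_timesteps) array of currents
    syn_positions: (n_synapses, 3) coordinates (mm)
    electrode_pos: (3,) recording point (mm)
    sigma: extracellular conductivity; 0.3 S/m is a common textbook value
    """
    dists = np.linalg.norm(syn_positions - electrode_pos, axis=1)
    dists = np.maximum(dists, 0.01)           # avoid the singularity at d = 0
    weights = 1.0 / (4.0 * np.pi * sigma * dists)
    return weights @ syn_currents             # (n_timesteps,)

rng = np.random.default_rng(1)
I = rng.normal(size=(100, 2000))              # toy synaptic currents
pos = rng.uniform(-1, 1, size=(100, 3))
lfp = lfp_proxy(I, pos, electrode_pos=np.array([0.0, 0.0, 0.0]))
```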


Monday July 22, 2024 4:20pm - 6:20pm PDT
TBA

4:20pm PDT

P052 piNET: A Neural Network Architecture Design to Maximize Decoding Accuracy Using Minimal Training Data
The multilayer perceptron (MLP) network is an essential and widely used architecture in artificial neural networks. By utilizing multiple layers of interconnected binary classifiers, MLPs are capable of modeling intricate nonlinear relationships between inputs and outputs. However, this comes at the cost of increased computational complexity and susceptibility to overfitting. The removal (or pruning) of unnecessary nodes and/or connections, using e.g. magnitude-based or structured pruning, can lead to improved computational efficiency, reduced overfitting, and enhanced decoding performance. However, sparse neural networks can be difficult to initialize, as the weights of the connections between neurons need to be carefully chosen to ensure that the network learns efficiently. Here we introduce a pre-initialized network architecture (piNET) that is based on co-transcriptional gene regulation networks in the somatosensory cortex. We identify the molecular network architecture and weight distributions using information-theoretic calculations. This approach results in an extremely sparse network with only 1.53% of all possible edges. We evaluated the performance of the network by decoding random sequences of data with high dimensionality and sequence length and compared the results against different network structures. The pre-initialized neural network architecture recovered input with approximately 94% accuracy after training the network with as little as 25% of the data, even at small sample sizes of 1000. Computationally efficient artificial network architectures that perform with greater accuracy despite the limited availability of training data offer exciting opportunities for embodied computing.
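A minimal sketch of a pre-initialized sparse architecture in the spirit of piNET, assuming a random binary mask at the 1.53% edge density quoted above; the actual piNET mask and weights are derived from gene-regulation data, so everything below is a structural stand-in.

```python
import numpy as np

rng = np.random.default_rng(2)

def sparse_layer(n_in, n_out, density=0.0153):
    """A weight matrix with only a small fraction of edges retained,
    mimicking a pre-initialized sparse architecture."""
    w = rng.normal(scale=1.0 / np.sqrt(n_in * density), size=(n_in, n_out))
    mask = rng.random((n_in, n_out)) < density
    return w * mask, mask

def forward(x, layers):
    for w, _ in layers[:-1]:
        x = np.maximum(x @ w, 0.0)   # ReLU hidden layers
    w, _ = layers[-1]
    return x @ w                     # linear readout

layers = [sparse_layer(784, 256), sparse_layer(256, 10)]
x = rng.random((32, 784))
logits = forward(x, layers)          # (32, 10)
```

During training, gradients would be multiplied by the same masks so that pruned edges stay at zero and only the pre-selected sparse topology is learned.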



Monday July 22, 2024 4:20pm - 6:20pm PDT
TBA

4:20pm PDT

P053 A Supercomputing Simulation of Serotonergic Densities in a Shark Brain: Reflected Fractional Brownian Motion in Expanding Shapes
The axons of serotonergic neurons have strongly stochastic trajectories that support their dispersal throughout the entire brain in a diffusion-like process. Single-cell transcriptomics analyses suggest that serotonergic axons (fibers) may be partially guided to specific brain regions (depending on the neuron's transcriptome program) [1], but their overall behavior differs substantially from that of "strongly deterministic" axons (which often fasciculate and connect specific neuroanatomical regions). Like other "strongly stochastic" axons, serotonergic fibers form meshworks whose density shows substantial regional variability. We have previously shown with supercomputing simulations that serotonergic fiber densities in the mouse brain can be partially predicted by modeling individual serotonergic fibers as paths of a superdiffusive, reflected fractional Brownian motion (FBM) [2]. FBM is a continuous-time stochastic process that generalizes Brownian motion and is parametrized by the Hurst index H (0 < H < 1; the superdiffusive regime corresponds to H > ½, which produces "persistent" paths with positively correlated increments). Questions posed by this research have also stimulated our theoretical work on reflected FBM in shapes of various spatial dimensions [3] and on the "continuous memory" FBM with a time-dependent Hurst index [4].
In this project, we investigate whether the FBM properties of serotonergic axons generalize across the vertebrate clade, and we further study the properties of reflected FBM. First, we use a supercomputing simulation to predict the regional serotonergic densities in a shark brain. Cartilaginous fish brains share the same Bauplan with other vertebrates, but they have highly diverse shapes and can continue to grow in adulthood (also, their neural tissue is much less differentiated). Second, we investigate the accumulation patterns of reflected-FBM trajectories in linearly and non-linearly expanding shapes. We present the results of these analyses.



Acknowledgements

This research was funded by an NSF-BMBF CRCNS grant (NSF #2112862 to SJ & TV; BMBF #STAXS to RM).


References


1. Okaty BW, Sturrock N, Escobedo Lozoya Y, Chang Y, Senft RA, Lyon KA, Alekseyenko OV, Dymecki SM. A single-cell transcriptomic and anatomic atlas of mouse dorsal raphe Pet1 neurons. eLife. 2020, 9, e55523.
2. Janušonis S, Haiman JH, Metzler R, Vojta T. Predicting the distribution of serotonergic axons: a supercomputing simulation of reflected fractional Brownian motion in a 3D-mouse brain model. Front Comput Neurosci. 2023, 17, 1189853.
3. Vojta T, Halladay S, Skinner S, Janušonis S, Guggenberger T, Metzler R. Reflected fractional Brownian motion in one and higher dimensions. Phys Rev E. 2020, 102, 032108.
4. Wang W, Balcerek M, Burnecki K, Chechkin AV, Janušonis S, Ślęzak J, Vojta T, Wyłomańska A, Metzler R. Memory-multi-fractional Brownian motion with continuous correlations. Phys Rev Res. 2023, 5, L032025.
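For readers unfamiliar with reflected FBM, a small 1D sketch: exact fractional Gaussian noise generated via Cholesky factorization of its covariance, with the path folded back at the boundaries of an interval. The actual simulations run in 3D brain shapes on supercomputers; this toy only illustrates the process and the role of the Hurst index H.

```python
import numpy as np

def fgn_cholesky(n, hurst, rng):
    """Exact fractional Gaussian noise via Cholesky factorization
    (O(n^2) memory, fine for short paths)."""
    k = np.arange(n)
    lag = np.abs(k[:, None] - k[None, :])
    cov = 0.5 * ((lag + 1.0) ** (2 * hurst) - 2.0 * lag ** (2 * hurst)
                 + np.abs(lag - 1.0) ** (2 * hurst))
    L = np.linalg.cholesky(cov + 1e-10 * np.eye(n))
    return L @ rng.standard_normal(n)

def reflected_fbm(n, hurst, lower, upper, x0, rng):
    """FBM path reflected at the boundaries of the interval [lower, upper];
    an expanding shape would make these bounds time-dependent."""
    x = np.empty(n + 1)
    x[0] = x0
    steps = fgn_cholesky(n, hurst, rng)
    for i, dx in enumerate(steps):
        y = x[i] + dx
        while y < lower or y > upper:       # fold the step back inside
            y = 2 * lower - y if y < lower else 2 * upper - y
        x[i + 1] = y
    return x

rng = np.random.default_rng(3)
path = reflected_fbm(2000, hurst=0.8, lower=0.0, upper=1.0, x0=0.5, rng=rng)
```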


Monday July 22, 2024 4:20pm - 6:20pm PDT
TBA

4:20pm PDT

P054 A novel method to predict subject phenotypes from EEG spectral signatures
The prediction of subject traits using brain data is an important goal in neuroscience, with relevant applications in clinical research, as well as in the study of differential psychology and cognition. While previous research has primarily focused on neuroimaging data, our focus is on the prediction of subject traits from electroencephalography (EEG), a relatively inexpensive, widely available and non-invasive data modality. However, EEG data is complex and needs some form of feature extraction for subsequent prediction. This process is almost always done manually, risking biases and suboptimal decisions. Here, we propose a largely data-driven use of the EEG spectrogram, which reflects macro-scale neural oscillations in the brain. Specifically, the key idea is to use the full spectrogram, reinterpret it as a probability distribution, and then leverage advanced machine learning techniques that can handle probability distributions with mathematical rigour and without the need for manual feature extraction [1,2,3]. The resulting techniques, Kernel Ridge Regression (KRR) and Kernel Mean Embedding Regression (KMER), show superior performance to alternative methods thanks to their capacity to handle nonlinearities in the relation between the EEG spectrogram and the trait of interest. We leveraged this method to predict biological age in a multinational EEG dataset, HarMNqEEG [4], showing the method's capacity to generalise across experiments and acquisition setups.


Acknowledgements:


D. Vidaurre is supported by a Novo Nordisk Foundation Emerging Investigator Fellowship (NNF19OC-0054895), an ERC Starting Grant (ERC-StG-2019-850404), and a DFF Project 1 grant from the Independent Research Fund Denmark (2034-00054B). This research was funded in part by the Wellcome Trust (215573/Z/19/Z). We acknowledge support from PICT 2020-01413.


References:


[1] Franke, K. and Gaser, C. (2019). Ten years of brain age as a neuroimaging biomarker of brain ageing: What insights have we gained? Frontiers in Neurology, 10(JUL).

[2] Smith, S. M., Vidaurre, D., Alfaro-Almagro, F., Nichols, T. E., and Miller, K. L. (2019). Estimation of brain age delta from brain imaging. NeuroImage, 200:528–539

[3] Smola, A., Gretton, A., Song, L., and Schölkopf, B. (2007). A Hilbert space embedding for distributions. In Hutter, M., Servedio, R. A., and Takimoto, E., editors, Algorithmic Learning Theory, pages 13–31, Berlin, Heidelberg. Springer Berlin Heidelberg.

[4] Li, M., Wang, Y., Lopez-Naranjo, C., Hu, S., Reyes, R. C. G., Paz-Linares, D., Areces-Gonzalez, A., Hamid, A. I. A., Evans, A. C., Savostyanov, A. N., Calzada-Reyes, A., Villringer, A., Tobon-Quintero, C. A., Garcia-Agustin, D., Yao, D., Dong, L., Aubert-Vazquez, E., Reza, F., Razzaq, F. A., Omar, H., Abdullah, J. M., Galler, J. R., Ochoa-Gomez, J. F., Prichep, L. S., Galan-Garcia, L., Morales-Chacon, L., Valdes-Sosa, M. J., Tröndle, M. Zulkifly, M. F. M., Abdul Rahman, M. R. B., Milakhina, N. S., Langer, N., Rudych, P., Koenig, T., Virues-Alba, T. A., Lei, X., Bringas-Vega, M. L., Bosch-Bayard, J. F., and Valdes-Sosa, P. A. (2022). Harmonized-multinational qeeg norms (harmnqeeg). NeuroImage, 256:119190.
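A compact sketch of kernel ridge regression on spectra reinterpreted as probability distributions, with a plain RBF kernel standing in for the kernel mean embedding machinery of [3]; data and hyperparameters below are toy values, not the study's settings.

```python
import numpy as np

def rbf_kernel(A, B, gamma):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def krr_fit_predict(X_train, y_train, X_test, gamma=1.0, lam=1e-3):
    """Kernel ridge regression: alpha = (K + lam*I)^{-1} y."""
    K = rbf_kernel(X_train, X_train, gamma)
    alpha = np.linalg.solve(K + lam * np.eye(len(K)), y_train)
    return rbf_kernel(X_test, X_train, gamma) @ alpha

rng = np.random.default_rng(4)
spectra = rng.random((60, 40))                     # toy per-subject spectra
P = spectra / spectra.sum(axis=1, keepdims=True)   # reinterpret as distributions
age = rng.uniform(20, 80, size=60)                 # toy target trait
pred = krr_fit_predict(P[:50], age[:50], P[50:])   # predict held-out subjects
```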





Monday July 22, 2024 4:20pm - 6:20pm PDT
TBA

4:20pm PDT

P055 Mesoscopic and microscopic information and its energy cost during synaptic plasticity
The relationship between long-term information encoding in synapses (synaptic learning and memory) and its associated metabolic cost is important for neuroscience but is not well understood [1,2]. Recently, we showed that real synapses in different parts of the brain, across different mammalian species, nearly maximize their information content given their mean synaptic weights [3]. This empirical observation is an example of mesoscopic information, based on synaptic sizes, and does not take into account many internal microscopic synaptic degrees of freedom. In this work, the focus is on the information content and energy cost associated with these microscopic (mostly hidden) synaptic processes, i.e., receptors (AMPA, NMDA) and PSD proteins. This is studied using recent advances in the physics of stochastic thermodynamics and informatics [4], which are universal and applicable to micro-scale objects such as synapses. This relatively new approach to modeling synaptic plasticity has large methodological potential but is still virtually unknown in computational neuroscience. We initiated this interdisciplinary approach to synaptic plasticity in a series of papers [2,5]; here we present a more unifying picture, based on the relevant microscopic dynamics, using a multidimensional probabilistic master equation. We find that, under quite general conditions, PSD proteins can encode huge amounts of information, much more than the membrane receptors related to synaptic weight, thus dramatically increasing the information capacity of a synapse. Moreover, this information capacity can be retained for a long time in an energy-efficient way, suggesting thermodynamic stability of synaptic memory.
References:
1) Kandel ER, et al. Cell. 2014, 157, 163-186; Benna MK, Fusi S. Nature Neuroscience. 2016, 19, 1697-1706; Chaudhuri R, Fiete I. Nature Neuroscience. 2016, 19, 394-403.
2) Karbowski J. J. Neurophysiol. 2019, 122, 1473-1490; Karbowski J. J. Comput. Neurosci. 2021, 49, 71-106.
3) Karbowski J, Urban P. Scientific Reports. 2023, 13, 22207.
4) Parrondo JMR, et al. Nature Physics. 2015, 11, 131-139.
5) Karbowski J, Urban P. Neural Computation. 2024, 36, 271-311.
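A minimal illustration of the master-equation formalism mentioned above: a toy three-state synaptic variable evolved under dp/dt = Q^T p, with the Shannon information of the resulting distribution; the rates are placeholders, not fitted synaptic parameters.

```python
import numpy as np

def evolve_master_equation(Q, p0, dt, n_steps):
    """Evolve dp/dt = Q^T p for a continuous-time Markov chain.

    Q: (n, n) rate matrix with Q[i, j] = rate i -> j and rows summing
    to zero (Q[i, i] = minus the total escape rate from state i).
    """
    p = p0.copy()
    traj = [p.copy()]
    for _ in range(n_steps):
        p = p + dt * (Q.T @ p)
        traj.append(p.copy())
    return np.array(traj)

def shannon_info(p, eps=1e-12):
    p = np.clip(p, eps, 1.0)
    return -(p * np.log2(p)).sum()

# toy 3-state synaptic variable (e.g., receptor occupancy levels)
Q = np.array([[-0.2, 0.2, 0.0],
              [0.1, -0.3, 0.2],
              [0.0, 0.1, -0.1]])
p0 = np.array([1.0, 0.0, 0.0])
traj = evolve_master_equation(Q, p0, dt=0.01, n_steps=5000)
info = shannon_info(traj[-1])   # entropy of the late-time distribution (bits)
```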



Monday July 22, 2024 4:20pm - 6:20pm PDT
TBA

4:20pm PDT

P056 Homeostatic self-organization towards the edge of neuronal synchronization
Transient or partial synchronization can be used to perform computations, whereas a fully synchronized network is frequently associated with epileptic seizures. Here, we propose a homeostatic mechanism that is capable of maintaining a neuronal network at the edge of a synchronization transition, thereby avoiding the harmful consequences of full synchronization. We model neurons by maps, since maps are dynamically richer than integrate-and-fire models and more computationally efficient than conductance-based approaches. We first describe the synchronization phase transition of a dense network of neurons with different tonic spiking frequencies coupled by gap junctions. Then, we introduce a local homeostatic dynamics in the synaptic coupling and show that it produces robust tuning towards the edge of this phase transition. We discuss the potential biological consequences of this self-organization process, such as its relation to the brain criticality hypothesis, its input-processing capacity, and how its malfunction could lead to pathological synchronization.
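A toy sketch of the ingredients described above, assuming Rulkov map neurons with heterogeneous parameters, diffusive (gap-junction-like) coupling to the mean field, and a heuristic local homeostatic rule; the rule and its set point below are illustrative, not the authors' exact dynamics.

```python
import numpy as np

rng = np.random.default_rng(5)
N, T = 200, 20000
alpha = rng.uniform(4.1, 4.4, N)        # heterogeneous tonic spiking regimes
x = rng.uniform(-1, 1, N)               # fast (voltage-like) variable
y = np.full(N, -2.8)                    # slow variable
g = np.full(N, 0.05)                    # per-neuron coupling strength
mu, sigma_r = 0.001, 0.1
g_rate, r_target = 1e-4, 0.5            # homeostatic rate and synchrony set point

for t in range(T):
    mean_x = x.mean()
    # Rulkov map with diffusive coupling to the mean field
    x_new = alpha / (1.0 + x ** 2) + y + g * (mean_x - x)
    y = y - mu * (x - sigma_r)
    x = x_new
    # local homeostatic rule: weaken coupling when locally too synchronized,
    # strengthen it when too desynchronized (heuristic synchrony proxy)
    local_sync = np.exp(-np.abs(x - mean_x))
    g += g_rate * (r_target - local_sync)
    g = np.clip(g, 0.0, 0.5)
```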


Monday July 22, 2024 4:20pm - 6:20pm PDT
TBA

4:20pm PDT

P057 Delineating roles of TRP channels in Drosophila larva cold nociception
In Drosophila larvae, noxious cold temperatures are detected by Class III (CIII) primary sensory neurons lining the inside of the body wall. Transient receptor potential (TRP) channels such as TRPA1 and PKD2 are implicated in cold sensitivity [1,2]. To distinguish the roles of these TRP channels and their signal transduction mechanisms in cold sensitivity, we conducted a series of experiments, including electrophysiological recordings and Ca2+ imaging using CIII-specific expression of GCaMP6m, comparing responses under gene-specific RNAi knockdown (KD) of each TRP channel, and we constructed biophysical models to compare the roles of these channels.
When subjected to a rapid temperature drop from 24°C to 10°C at a rate of 2-6°C/s, CIII neurons responded with a characteristic peak in spiking rate [3]. Half of these neurons showed transient bursts during the peak. After the peak, the spike rate settled to a low steady-state level. Compared to the control group, TRPA1-KD exhibited a decreased spiking rate and fewer bursts during the rapid temperature decrease. Conversely, PKD2-KD maintained the transient bursting but significantly attenuated tonic spike activity at the steady low temperature.
We constructed multi-compartmental models in NEURON [4] representing the TRPA1- and PKD2-KD cases. These models structurally comprised a branching dendrite, soma, and axon. They inherited the spike generation mechanisms from previous single-compartment models [3] and included TRPA1 and PKD2 channels implemented as adjusted "two-state" class models [5]. The latter were parameterized to recapitulate the electrophysiological responses of CIII neurons in the wild type and the knockdowns. Asymmetric distributions of TRPA1 and PKD2 channels among sister dendritic branches allowed the CIII models to generate cold-induced bursts followed by steady tonic spiking. Importantly, under PKD2-KD, the TRPA1 channels ensured transient burst activity encoding the rate of temperature change, while in the TRPA1-KD case, the PKD2 channels enabled the model to generate continuous spiking without bursts, suggesting a role in representing temperature magnitude. These findings shed light on the complex mechanisms underlying cold sensation in Drosophila larvae and highlight the role of TRP channels, with TRPA1 coding the rate of temperature change and PKD2 coding the magnitude of the steady temperature.
Acknowledgements
This work was supported by NIH grant 5R01NS115209 to DNC and GSC.
References
1. Turner HN, et al. The TRP Channels Pkd2, NompC, and Trpm Act in Cold-Sensing Neurons to Mediate Unique Aversive Behaviors to Noxious Cold in Drosophila. Curr Biol. 2016, 26(23), 3116-3128.
2. Letcher JM, et al. TrpA1 mediates cold nociception in Drosophila melanogaster. In preparation.
3. Maksymchuk NV, et al. Transient and Steady-State Properties of Drosophila Sensory Neurons Coding Noxious Cold Temperature. Front Cell Neurosci. 2022, 16, 831803.
4. Carnevale NT, Hines ML. The NEURON Book. Cambridge, UK: Cambridge University Press, 2006.
5. Voets T. Quantifying and Modeling the Temperature-Dependent Gating of TRP Channels. In: Reviews of Physiology, Biochemistry and Pharmacology, Volume 162. Springer, Berlin, Heidelberg, 2012.
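To make the "two-state" channel class concrete, here is a hedged sketch in the spirit of [5]: Arrhenius-type opening and closing rates in which the closing enthalpy exceeds the opening enthalpy, so cooling shifts the equilibrium toward the open state. All rate parameters are placeholders chosen for illustration, not fitted TRPA1 or PKD2 values.

```python
import numpy as np

R = 8.314  # gas constant, J/(mol K)

def rate(T, A, dH):
    """Arrhenius transition rate with activation enthalpy dH (J/mol)."""
    return A * np.exp(-dH / (R * T))

def open_prob_trace(temps_c, dt=1e-4,
                    A_open=1e10, dH_open=5e4,       # placeholder parameters,
                    A_close=1.2e38, dH_close=2e5):  # not fitted channel values
    """Integrate dPo/dt = a(T)(1 - Po) - b(T)Po along a temperature trace.

    Because dH_close > dH_open, cooling shifts the open/closed equilibrium
    toward the open state, as expected for a cold-activated channel.
    """
    po = np.empty(len(temps_c))
    p = 0.0
    for i, tc in enumerate(temps_c):
        T = tc + 273.15
        a, b = rate(T, A_open, dH_open), rate(T, A_close, dH_close)
        p += dt * (a * (1.0 - p) - b * p)
        po[i] = p
    return po

# 24 C -> 10 C at ~4 C/s, then a steady hold at 10 C (dt = 0.1 ms)
ramp = np.linspace(24.0, 10.0, 35000)
hold = np.full(65000, 10.0)
po = open_prob_trace(np.concatenate([ramp, hold]))
```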


Monday July 22, 2024 4:20pm - 6:20pm PDT
TBA

4:20pm PDT

P058 Beyond the Connectome: Divisive Normalization Processors in the Drosophila Early Olfactory and Vision Systems
The Drosophila brain has only a fraction of the number of neurons of higher organisms such as mice and humans. Yet the sheer complexity of its neural circuits, recently revealed by large connectomics datasets [1], suggests that computationally modeling the functional logic of fruit fly brain circuits at this scale poses significant challenges. In principle, a whole-brain simulation could be instantiated by modeling all the neurons and synapses of the connectome/synaptome with the simple dynamics of integrate-and-fire neurons and synapses, with parameters tuned according to certain criteria. Such an effort, however, would fall short of revealing the fundamental computational units necessary for understanding the true functional logic of the brain, as the complexity of the different computational units becomes lost in a single uniform treatment of such a vast number of neurons and their connection patterns. It is, therefore, imperative to develop a formal reasoning framework for the functional logic of brain circuits that goes beyond simple instantiations of flows on graphs generated from the connectome [2].
To address these challenges, we present here a framework for building functional brain circuits from components whose functional logic can be readily evaluated, and for determining the canonical computational principles underlying these components using available data. Our focus is on modeling the neural circuits of odor signal processing in the early olfactory system and of motion detection in the early vision system of the fruit fly, using divisive normalization [3] building blocks.

We developed a model of local neuron pathways in the Antennal Lobe (AL), termed differential Divisive Normalization Processors (DNPs) [4], which robustly extracts the semantics (the identity of the odorant object) and the ON/OFF semantic timing events indicating the presence/absence of an odorant object. For real-time processing with spiking projection neuron (PN) models, we showed that the phase space of the biological spike generator of the PN offers an intuitive perspective for the representation of recovered odorant semantics. The dynamics induced by the odorant semantic timing events were explored as well. Finally, we provide theoretical and computational evidence for the functional logic of the AL as a robust ON-OFF odorant object identity recovery processor across odorant identities, concentration amplitudes, and waveform profiles.

We demonstrate that three key processing steps in the motion detection pathway, namely the elementary motion detector and the intensity and contrast gain control mechanisms, can be effectively modeled with DNPs [5]. Three cascaded DNPs, implementing intensity gain control, contrast gain control, and elementary motion detection, respectively, effectively model the robust motion detection realized by the early visual system of the fruit fly brain. This suggests that, despite its nonlinearity, the differential class of DNPs can serve as canonical computational building blocks in early sensory processing.
Acknowledgments
The research reported here was supported, in part, by the National Science Foundation under grant #2024607.
References
[1] Lazar et al., eLife, 2021.
[2] Lazar et al., Frontiers in Neuroinformatics, 2022.
[3] Carandini et al., Nature Reviews Neuroscience, 2012.
[4] Lazar et al., PLOS Computational Biology, 2023.
[5] Lazar et al., Biological Cybernetics, 2023.
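The canonical divisive normalization computation at the heart of a DNP can be written in a few lines; the exponent and semi-saturation constant below are generic textbook values in the spirit of [3], and the two-stage cascade only gestures at the intensity/contrast gain-control pipeline (the differential DNPs of [4,5] additionally operate on temporal derivatives).

```python
import numpy as np

def dnp(x, sigma=1.0, n=2.0):
    """Divisive normalization: each channel is divided by a pooled signal.

    y_i = x_i^n / (sigma^n + sum_j x_j^n)
    """
    xn = np.power(np.maximum(x, 0.0), n)
    return xn / (sigma ** n + xn.sum(axis=-1, keepdims=True))

# cascading DNPs, loosely mirroring intensity and contrast gain control
rng = np.random.default_rng(6)
frame = rng.random((8, 16))          # toy photoreceptor inputs
stage1 = dnp(frame, sigma=0.5)       # intensity gain control
stage2 = dnp(stage1, sigma=0.1)      # contrast gain control
```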



Monday July 22, 2024 4:20pm - 6:20pm PDT
TBA

4:20pm PDT

P059 Unveiling the Impact of Brain's Scale-Free Topology on Information Processing
The human brain operates as a sophisticated modular system, with interconnected modules playing pivotal roles in orchestrating and defining its method of information processing, thereby giving rise to its diverse functions. For instance, when processing visual information, each component undergoes independent processing within discrete brain regions before converging and integrating as it progresses to higher brain areas, ultimately culminating in the comprehensive interpretation of the visual input. A wealth of biological experiments and computer simulations shed light on the intricate dynamics of information flow, elucidating how it is integrated and segregated through various mechanisms, including top-down and bottom-up processes, as well as the unique connection properties of interneurons within specific brain regions. However, despite significant advancements, there remains a notable gap in our understanding of how the characteristics of network topology within these modules, along with the methods of connection between them, influence the intricate process of information processing.
The anatomical connections of the brain can exhibit various network topology characteristics, such as small-world or scale-free features. In this study, we simulated neuronal dynamics on various structural connectivities to investigate how topological characteristics shape functional networks and influence fluctuations in brain dynamics. To efficiently simulate the activity of thousands of neurons, we developed parallel GPU-based code using the Izhikevich neuron model for large-scale spiking neural network simulations. We also used public calcium imaging data: zebrafish, known for easy genetic manipulation and real-time tracking of individual neuron activity, offer single-neuron-level recordings for thousands of neurons. We created various modules and connected them, each with different topological characteristics, such as a random, scale-free, or small-world network. We observed how spiking patterns were segregated and integrated under these topologies: segregation was quantified by the coherence of spikes within each module, and integration by the entropy between modules. In small-world and random networks, coherence within modules was low and entropy values were not particularly high; in the scale-free network, both coherence and entropy remained high across coupling constants. These results were consistently confirmed through mathematical stability analysis. Our findings showed that functional networks within different brain systems, including data from mice and zebrafish, displayed characteristics consistent with scale-free network topology and exhibited dynamic fluctuations in brain activity. Moreover, simulations of brain dynamics using zebrafish structural connection data, which incorporate scale-free network properties, showed the closest resemblance between empirical and simulated functional networks.
In conclusion, our study highlights that connectivity properties at the individual-neuron level, characterized by scale-free topology, play a significant role in shaping brain information processing.

This work was supported by a National Research Foundation of Korea (NRF) grant funded by the Korean government (MSIT) (No. 2023R1A2C20062171, 2022M3E5E8081199).
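A CPU sketch of what a GPU spiking-network kernel of this kind computes each millisecond, using the standard Izhikevich model; the random connectivity below would be replaced by the scale-free or small-world module topologies described above.

```python
import numpy as np

rng = np.random.default_rng(7)
N = 1000
# regular-spiking parameters with mild heterogeneity (Izhikevich, 2003)
a, b = 0.02, 0.2
c = -65.0 + 15 * rng.random(N) ** 2
d = 8.0 - 6 * rng.random(N) ** 2
v = np.full(N, -65.0)
u = b * v
# toy random connectivity; swap in a scale-free adjacency for the experiments
W = (rng.random((N, N)) < 0.02) * rng.normal(0.0, 5.0, (N, N))

spikes = []
for t in range(1000):                      # 1 ms per step
    fired = v >= 30.0
    spikes.append(fired)
    v[fired] = c[fired]                    # reset fired neurons
    u[fired] += d[fired]
    I = 5.0 * rng.standard_normal(N) + W[:, fired].sum(axis=1)
    v += 0.5 * (0.04 * v ** 2 + 5 * v + 140 - u + I)   # two half-steps
    v += 0.5 * (0.04 * v ** 2 + 5 * v + 140 - u + I)   # for numerical stability
    u += a * (b * v - u)
```

From the resulting spike rasters, within-module coherence and between-module entropy can then be computed to quantify segregation and integration, as described above.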





Monday July 22, 2024 4:20pm - 6:20pm PDT
TBA

4:20pm PDT

P060 Neural Modeling of Channelopathies to Elucidate Neural Mechanism of Neurodevelopmental Disorders
Neurodevelopmental disorders (NDDs) such as epilepsy, autism spectrum disorder, and developmental delays vary greatly clinically and affect a large portion of the population [1]. Despite this variability, NDDs share common pathophysiological characteristics: the hallmark of these disorders is an imbalance between excitatory and inhibitory input (E/I balance), which during development leads to dysfunction in neuronal circuits [2,8]. Multiple factors, such as genetic expression, environmental factors, and complex compensatory mechanisms, influence the E/I balance [2]. Brain channelopathies are particularly useful in studying the E/I imbalance mechanism because their function can be linked directly to neuronal excitability [3]. Neuronal ion channels are essential for generating electrical activity in all neurons, and disruption of this activity is highly associated with NDDs [4]. Channelopathies can increase or decrease the excitability of neurons, and these changes can result from a change in the number of functional channels or from a change in channel biophysics [2,9].
Using a previously published primary motor cortex (M1) model [5], built with NetPyNE and the NEURON simulator, we employ a large-scale, highly detailed biophysical neuronal simulation to investigate how channel mutations affect individual and network neuronal activity. These simulations provide a detailed mechanistic understanding of the role channelopathies play in the E/I imbalance and will allow us to better identify therapeutic targets that specifically address disease symptoms. Pyramidal tract projecting (PT) neurons forward motor commands to the lower motor neurons and sit strategically in layer 5B of the cortex, a known output of the cortical circuit [6]. Layer 5 pyramidal neurons (L5PNs) are the main output of cortical networks and have a high expression of NDD-associated genes [7]. L5PN dendrites receive inputs from all cortical layers and long-range projections from distal brain regions [7]. These connections make L5PNs particularly sensitive to E/I imbalance. Additionally, previous studies have shown that the excitability of L5PNs is a reliable marker for the behavior of the whole circuit [8].
Using the M1 cortical column simulation, we can measure how changes in channel biophysics affect the overall excitability of the network. Specifically, we can observe how L5PNs change their firing patterns to better understand the pathophysiology of the simulated channelopathy. Our M1 model is based on the Hodgkin-Huxley (HH) formalism; however, HH channels cannot capture the range of biophysical properties that more complex models offer. We will therefore replace HH channels with hidden Markov models (HMMs) to capture the biophysical properties involved in channelopathies and better replicate empirical data. This model will allow us to realistically examine how NDDs alter the intrinsic excitability of each neuron and of the network as a whole. It will provide a tool to investigate the underlying neuronal mechanisms of NDDs affecting many children worldwide and will allow us to simulate how novel therapeutics can return excitability to neurotypical levels and ultimately be translated clinically.
Acknowledgements: This work was supported by the Hartwell Foundation through an Individual Biomedical Research Award. 
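A minimal illustration of the Markov-channel idea described above: a toy three-state chain simulated stochastically, returning the open fraction over time. The transition matrix is a placeholder, not a fitted channelopathy model.

```python
import numpy as np

def simulate_markov_channel(P, n_channels, n_steps, open_state, rng):
    """Stochastic simulation of an ion channel as a discrete-time Markov chain.

    P: (n_states, n_states) row-stochastic transition matrix per time step.
    Returns the fraction of channels in the open state over time.
    """
    states = np.zeros(n_channels, dtype=int)
    cdf = P.cumsum(axis=1)
    open_frac = np.empty(n_steps)
    for t in range(n_steps):
        u = rng.random(n_channels)
        # index of the first cumulative bin exceeding u = next state
        states = (u[:, None] > cdf[states]).sum(axis=1)
        open_frac[t] = (states == open_state).mean()
    return open_frac

# toy 3-state scheme: closed <-> inactivated <-> open (placeholder rates)
P = np.array([[0.98, 0.01, 0.01],
              [0.02, 0.95, 0.03],
              [0.01, 0.04, 0.95]])
rng = np.random.default_rng(14)
frac_open = simulate_markov_channel(P, n_channels=500, n_steps=2000,
                                    open_state=2, rng=rng)
```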


Monday July 22, 2024 4:20pm - 6:20pm PDT
TBA

4:20pm PDT

P061 Brain network flexibility as a marker of early adaptation between humans and intelligent wearable machines
Merging human biological systems with augmentation devices could substantially modify human capabilities, but there are still significant challenges in integrating these technologies with the human body. Monitoring neural behavior could provide a key non-invasive biomedical strategy for mutually adaptive human augmentation systems, where neural flexibility may be harnessed to predict and monitor human adaptation to the system. Here, we investigate whether neural flexibility correlates with the human ability to adapt to assistive technologies by monitoring the brains of individuals who used an "intelligent" exoskeleton boot (ExoBoot) designed to augment walking efficiency by applying bilateral torque at the ankles. We analyzed the resting-state activity of 20 individuals using electroencephalography (EEG) recordings collected for 5 minutes before the ExoBoots were utilized. First, we estimated the dynamic synchronization between brain regions (electrodes) with the weighted phase lag index (wPLI) and then distilled the dynamic connectivity patterns into network modules with a generalized Louvain algorithm [1] and automated parameter search [2]. We next estimated flexibility, the propensity of sensors to change their affiliation to network modules, and correlated this resting-state metric with an adaptation metric derived from electromyography (EMG) during ExoBoot use. We use EMG-derived metrics as an objective measurement of adaptation, where less muscular effort corresponds to better adaptation to the ExoBoot. We found a strong positive correlation between individual adaptation during the initial exposure to the device and neural flexibility, particularly within the posterior and central areas of the scalp, which are known to be crucial for motor and visual processing. Our findings also suggest temporal alterations in the adaptation process: while individuals with high neural flexibility exhibit rapid adaptation early on, all participants eventually reach a proficient level of device integration, suggesting a benefit from the ExoBoot's assistance over time. This distinction between short-term and long-term adaptation adds to our understanding of the human-machine adaptation loop, particularly within the context of wearable technology. By identifying a neural marker of adaptation, our study not only advances the theoretical foundation of how humans integrate with assistive devices but also opens new avenues for the development of adaptive technologies in which assistive devices are fine-tuned to individual neural profiles, contributing to future personalized, adaptive technologies that enhance user experience and efficiency in real time.


[1] Mucha PJ, et al. Community structure in time-dependent, multiscale, and multiplex networks. Science. 2010, 328, 876-878. https://doi.org/10.1126/science.1184819

[2] Lima Dias Pinto I, Garcia JO, Bansal K. Optimizing parameter search for community detection in time-evolving networks of complex systems. Chaos. 2024, 34(2), 023133. https://doi.org/10.1063/5.0168783
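A small sketch of the wPLI connectivity kernel underlying the dynamic networks described above, estimated from analytic signals in consecutive windows; the windowing scheme here is simplified relative to a full time-resolved pipeline.

```python
import numpy as np
from scipy.signal import hilbert

def wpli(x, y, n_win=50):
    """Weighted phase lag index between two channels.

    The cross-spectrum is estimated from the analytic signals in
    consecutive windows; wPLI = |E[Im S]| / E[|Im S|].
    """
    s = hilbert(x) * np.conj(hilbert(y))
    windows = np.array_split(s, n_win)
    im = np.array([np.imag(w).mean() for w in windows])
    return np.abs(im.mean()) / (np.abs(im).mean() + 1e-12)

rng = np.random.default_rng(8)
t = np.arange(0, 60.0, 1 / 250)                    # 60 s at 250 Hz
x = np.sin(2 * np.pi * 10 * t) + rng.normal(0, 1, t.size)
y = np.sin(2 * np.pi * 10 * t - 0.8) + rng.normal(0, 1, t.size)
print(wpli(x, y))    # high for the consistently lagged pair above
```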



Monday July 22, 2024 4:20pm - 6:20pm PDT
TBA

4:20pm PDT

P062 Training history determines shortcut usage in artificial agent navigation
We trained artificial agents with deep reinforcement learning (RL) on navigation tasks analogous to ones performed with humans and mice (for related earlier work, see [2]). We examined different training environments, learning rules, and developed behaviors, and drew correlations with the internal representations that these RL agents developed. We used our analyses to predict the kinds of neural activations that might exist in real brains during navigation tasks and to suggest experiments that might help uncover them.
We were inspired by studies of humans navigating virtual environments showing [1] that people who grew up in cities with grid-like streets were, in general, weaker navigators than people raised among more organic, irregular streets. More specifically, the use of shortcuts through the novel environment was significantly higher in the latter population. We wanted to explore which navigational strategies underlie this difference in behavior between the two groups and, further, to generate hypotheses about possible corresponding brain activities.
To achieve this goal, we developed a navigation environment in which an agent needs to reach a goal hidden behind a barrier. To reach it, the agent must go around the barrier or, on some trials, use a shortcut that opens in the barrier to reach the goal faster (a toy version is sketched after the references below). Each agent is trained in a modification of this environment with a certain frequency of shortcut availability. All agents are later tested on a fixed set of trials, some with the shortcut open and some with it closed. We find that the overall navigational strategies are similar in agents with different training histories, even though shortcut usage is much higher in those with more experience of it. However, the internal representations and the temporal dynamics of their development were quite different in the two classes of agents, differing both in the sensitivities of individual nodes to environmental landmarks and in the global features of the nodes' population activity.
These results led us to predict that humans who grow up in places with fewer available directional cues may develop an increased awareness of, and ability to navigate using, global landmarks. This finding is consistent with existing literature on navigational skills [3]. Our results also suggest the existence of landmark-sensitive neurons in skilled navigators: if such navigators are placed in one environment where a global cue is available and in an identical one without the global cue, these neurons should display differential activation. Further, based on our results, we hypothesize that successful navigation decisions are based on a population-level code involving not only spatially sensitive but also landmark-sensitive neurons, and that the latter have an outsized representation in the population code.
References
1. Barhorst-Cates, E. M., Meneghetti, C., Zhao, Y., Pazzaglia, F., & Creem-Regehr, S. H. (2021). Journal of Environmental Psychology, 74, 101580. 
2. A. Liu, A. Borisyuk. Investigating navigation strategies in the Morris Water Maze through deep reinforcement learning. Neural Networks. (2024) Apr:172:106050. 
3. Padilla, L. M., Creem-Regehr, S. H., Stefanucci, J. K., & Cashdan, E. A. (2017). Sex differences in virtual navigation influenced by scale and navigation experience. Psychonomic Bulletin & Review, 24(2), 582–590. 
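A toy version of the task environment described above (not the authors' code): a gridworld whose wall can be rounded at either end, with a central shortcut open on a configurable fraction of trials. Layout and reward values are illustrative only.

```python
import numpy as np

class ShortcutGrid:
    """Minimal gridworld: a goal hidden behind a wall, with a gap in the
    wall that is open on a given fraction of trials."""

    def __init__(self, size=11, p_shortcut=0.5, rng=None):
        self.size, self.p_shortcut = size, p_shortcut
        self.rng = rng or np.random.default_rng(0)

    def reset(self):
        self.agent = np.array([self.size - 1, self.size // 2])  # start below wall
        self.goal = np.array([0, self.size // 2])                # goal above wall
        self.wall_row = self.size // 2
        self.gap_open = self.rng.random() < self.p_shortcut
        return self.agent.copy()

    def step(self, action):
        moves = {0: (-1, 0), 1: (1, 0), 2: (0, -1), 3: (0, 1)}
        nxt = np.clip(self.agent + moves[action], 0, self.size - 1)
        # the wall spans interior columns; both ends stay open (the detour),
        # and the central column opens only when the shortcut is available
        blocked = (nxt[0] == self.wall_row
                   and 1 <= nxt[1] <= self.size - 2
                   and not (self.gap_open and nxt[1] == self.size // 2))
        if not blocked:
            self.agent = nxt
        done = np.array_equal(self.agent, self.goal)
        return self.agent.copy(), (1.0 if done else -0.01), done
```

Training the same RL agent with different `p_shortcut` values then emulates the different "street grid" histories discussed above.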



Monday July 22, 2024 4:20pm - 6:20pm PDT
TBA

4:20pm PDT

P063 Disentangling circuit mechanisms of how prior expectations affect decision making across the mouse brain
Biases stemming from prior knowledge or expectations have long been known to influence sensory processing and decision-making. However, the loci and mechanisms of such modulation remain unclear. Empowered by brain-wide recordings during a sensory decision-making task that spans the arc from sensory processing to action [1], and by the discovery that prior expectations can be decoded widely across the brain [2], we seek to more precisely identify the areas and circuit mechanisms of modulation by prior expectations. We first disentangle neural representations of the highly correlated prior-expectation, sensory-input, and choice variables by balancing conditions for all 3-way dichotomies (stimulus side, choice side, and prior side), finding that sparse sets of brain regions act as stimulus responders, stimulus integrators, and choice/action generators. We next evaluate five hypotheses for how and where in these regions prior knowledge exerts its bias: 1. in the activity of stimulus responders; 2. in the weights from stimulus responders to stimulus integrators; 3. in the activity of integrators; 4. in the weights from stimulus integrators to choice/action generators; 5. in the activity of choice/action generators. We identify predicted neural signatures of these hypotheses through models that implement the different mechanisms. Comparing predictions with the brain-wide recordings, we find no significant prior-encoding effects on the stimulus responders, but significant modulations in the activity of stimulus integrators and choice/action generators. Further, we find that these effects take the form of a gain modulation rather than an initial activity bias. Collectively, our results only support hypotheses 2 and 5, suggesting that prior expectations about sensory inputs influence decision-making in the brain through a multiplicative gain on stimulus integration and choice/action generation, but not directly on low-level stimulus representation.


References:
  1. International Brain Laboratory et al. A Brain-Wide Map of Neural Activity during Complex Behaviour. bioRxiv, 2023.
  2. Findling, Hubert, International Brain Laboratory et al. Brain-wide representations of prior information in mouse decision-making. bioRxiv, 2023.
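To make the contrast between these hypotheses concrete, a toy drift-diffusion sketch comparing a starting-point offset with a multiplicative gain on stimulus integration; parameters are illustrative and the model is far simpler than the ones fit to the recordings.

```python
import numpy as np

def ddm_trial(prior, stimulus_drift, rng, mode="gain",
              n=1000, dt=1e-3, noise=1.0):
    """Drift-diffusion accumulator with the prior applied in one of two ways.

    mode="offset": prior biases the starting point (initial-activity bias)
    mode="gain":   prior multiplicatively scales stimulus integration
                   (the gain-modulation signature supported above)
    """
    x = prior if mode == "offset" else 0.0
    gain = 1.0 + prior if mode == "gain" else 1.0
    for _ in range(n):
        x += gain * stimulus_drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        if abs(x) >= 1.0:            # decision bound reached
            break
    return np.sign(x)

rng = np.random.default_rng(9)
choices = [ddm_trial(prior=0.3, stimulus_drift=0.5, rng=rng) for _ in range(500)]
p_right = np.mean(np.array(choices) > 0)
```

The two modes produce distinct psychometric and chronometric signatures (offsets shift choices most at weak stimuli; gains scale with stimulus strength), which is the kind of prediction that can be compared against the recordings.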


Monday July 22, 2024 4:20pm - 6:20pm PDT
TBA

4:20pm PDT

P064 Comparison of methods of functional connectivity estimation in investigation of diurnal changes in working memory performance

Correlation matrix estimation from functional magnetic resonance imaging (fMRI) data presents a major challenge for a multitude of reasons, including non-stationarity of the signal and low temporal resolution, resulting in the number of variables (locations from which the signal is sampled) exceeding the number of time points. The Pearson correlation matrix is most commonly used, but likely constitutes a suboptimal choice, as in the typical fMRI setting it exhibits strong sensitivity to any noise present in the signal. Hence, a comparison of alternative methods of functional connectivity estimation is the subject of this contribution. The methods compared include: the sample Pearson correlation, the detrended cross-correlation coefficient [1], and a symmetrized variant of a non-linear cross-correlation based on filtering high-amplitude events (rBeta) [2]. Additionally, Ledoit-Wolf shrinkage was applied to each method for noise reduction.
The methods were compared in their ability to detect statistically significant differences between experimental conditions, using data obtained in an fMRI experiment investigating the effects of diurnal changes on memory performance [3]. Comparisons were conducted between resting-state and task-performance data, between experimental phases (information encoding and retrieval), and between tasks based on the Deese-Roediger-McDermott paradigm, involving either linguistic processing of semantically and phonetically related words or visual processing of images with global or local similarity. The comparison focused on the eigenvalues of the correlation matrices. To match eigenvalues to corresponding eigenvectors across conditions and subjects, agglomerative hierarchical clustering of the eigenvectors was performed.
All correlation matrix estimation methods besides the rBeta-based method detected statistically significant differences between experimental conditions. All methods detected differences between experimental tasks, but these were not consistent across estimation methods. Application of Ledoit-Wolf shrinkage led to more consistent detection of condition differences. Several aspects of this investigation merit further attention, particularly the impact of the details of the data-analysis pipeline on the results, including the choice of eigenvector clustering algorithm.
References
1. Kwapień J, Oświęcimka P, Dróżdż S. Detrended fluctuation analysis made flexible to detect range of cross-correlated fluctuations. Phys. Rev. E. 2015, 92, 052815.
2. Cifre I, Miller Flores MT, Penalba L, Ochab JK, Chialvo DR. Revisiting Nonlinear Functional Brain Co-activations: Directed, Dynamic, and Delayed. Front. Neurosci. 2021, 15, 1194.
3. Lewandowska K, Wachowicz B, Marek T, et al. Would you say "yes" in the evening? Time-of-day effect on response bias in four types of working memory recognition tasks. Chronobiol. Int. 2018, 35(1), 80-89.
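A small sketch of the shrinkage step, assuming scikit-learn's LedoitWolf estimator, in the typical fMRI regime where variables outnumber time points; the eigenvalue comparison mirrors the analysis described above.

```python
import numpy as np
from sklearn.covariance import LedoitWolf

rng = np.random.default_rng(10)
n_regions, n_timepoints = 200, 150          # more variables than samples
X = rng.standard_normal((n_timepoints, n_regions))

# sample Pearson correlation: rank-deficient and noisy in this regime
sample_corr = np.corrcoef(X, rowvar=False)

# Ledoit-Wolf shrinkage, then renormalize covariance to a correlation matrix
lw_cov = LedoitWolf().fit(X).covariance_
d = np.sqrt(np.diag(lw_cov))
lw_corr = lw_cov / np.outer(d, d)

eig_sample = np.linalg.eigvalsh(sample_corr)   # many eigenvalues collapse to ~0
eig_lw = np.linalg.eigvalsh(lw_corr)           # spectrum pulled toward 1
```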


Monday July 22, 2024 4:20pm - 6:20pm PDT
TBA

4:20pm PDT

P065 Idiom-independent reduced social references in Alzheimer's disease evoked speech
Alzheimer's disease (AD) is a neurodegenerative disease that affects millions of people with multiple cognitive dysfunctions, including a decline in language production. However, a complete description of the linguistic aspects of AD is still needed. For example, we do not know whether the changes are idiom-specific or idiom-invariant, nor which elements of the context of the communication process are affected by Alzheimer's disease. Here, we used a novel linguistic accountability methodology to evaluate how evoked speech differs between AD patients and healthy volunteers in two different idioms: English and Brazilian Portuguese. We fine-tuned the Bidirectional Encoder Representations from Transformers (BERT) large cased model and its Portuguese counterpart, BERTimbau, and tested them on labeled datasets designed for diagnosing Alzheimer's disease. The English dataset consisted of audio recordings and transcripts from the Cookie Theft picture description task, while the Portuguese dataset consisted of audio recordings and transcripts from the Dog Story description task. We evaluated the performance of the models using 5-fold cross-validation, which resulted in an accuracy of 87% on the English dataset and 80% on the Portuguese dataset. Our results indicate that BERT and BERTimbau capture social references when classifying AD subjects in English and Portuguese. The models identified reduced social references in the subjects' communication as the pathology progressed, providing valuable insights into the linguistic and psychological patterns LLMs use for text classification. Our study contributes to understanding the linguistic and psychological features that drive the models' classification decisions.



Monday July 22, 2024 4:20pm - 6:20pm PDT
TBA

4:20pm PDT

P066 Complexity is maximized close to the criticality between ordered and disordered cortical states
Complex systems are typically characterized as an intermediate situation between a completely regular structure and a totally random system. Brain signals can be studied as a striking example of such systems: cortical states can range from highly synchronized and ordered neuronal activity (with higher spiking variability) to desynchronized and disordered regimes (with lower spiking variability). It has recently been shown, by testing independent signatures of criticality, that a phase transition occurs in a cortical state of intermediate spiking variability. Here we use a symbolic information approach to show that, despite the monotonic increase of the Shannon entropy between ordered and disordered regimes, we can determine an intermediate state of maximum complexity based on the Jensen disequilibrium measure. More specifically, we show that the statistical complexity is maximized close to the criticality for the analyzed data of urethane-anesthetized rats, as well as for a network model of excitable elements that presents a critical point of a non-equilibrium phase transition.
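A compact implementation of the entropy-disequilibrium complexity measure described above (MPR-style statistical complexity over a distribution of symbolic patterns); the input distribution would come from, e.g., ordinal patterns of the spiking data.

```python
import numpy as np

def shannon(p, eps=1e-12):
    p = np.clip(p, eps, 1.0)
    return -(p * np.log(p)).sum()

def statistical_complexity(p):
    """Normalized entropy times normalized Jensen disequilibrium.

    p: probability distribution over symbolic patterns.
    """
    n = len(p)
    h = shannon(p) / np.log(n)                       # normalized entropy
    u = np.full(n, 1.0 / n)                          # uniform reference
    js = shannon(0.5 * (p + u)) - 0.5 * shannon(p) - 0.5 * shannon(u)
    # maximal Jensen-Shannon distance to the uniform distribution
    q0 = -0.5 * (((n + 1) / n) * np.log(n + 1) - 2 * np.log(2 * n) + np.log(n))
    return h * js / q0

# complexity vanishes at both the ordered and the fully random extremes
p_ordered = np.eye(6)[0]           # a single pattern dominates
p_random = np.full(6, 1 / 6)       # all patterns equally likely
print(statistical_complexity(p_ordered), statistical_complexity(p_random))
```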


Monday July 22, 2024 4:20pm - 6:20pm PDT
TBA

4:20pm PDT

P067 Phase relations diversity between cortical populations: anticipated synchronization and phase bistability
Two spiking neuron populations unidirectionally connected in a sender-receiver configuration can exhibit anticipated synchronization (AS), which is characterized by a negative phase lag. This phenomenon has been reported in electrophysiological data of non-human primates and in human EEG during a visual discrimination cognitive task [1]. In experiments, the unidirectional coupling can be assessed by Granger causality and can be accompanied by either a positive (the usual delayed synchronization, DS) or a negative (characterizing AS) phase difference between cortical areas [1]. Here we show a model of two coupled populations [2,3] in which neuronal heterogeneity and external noise can determine the dynamical relation between the sender and the receiver, reproducing the diversity of phase relations reported in experiments. We show that, depending on the relation between excitatory and inhibitory synaptic conductances, the system can also exhibit phase bistability between anticipated and delayed synchronization. Recently, it has been reported that bistable phase differences appear in magnetoencephalography (MEG) recordings when participants listen to bistable speech sequences that can be perceived as two distinct word sequences repeated over time [4]. This result suggests that phase bistability in cortical regions could be related to bistable perception [3].



Acknowledgments The authors thank CNPq (grants 402359/2022-4, 314092/2021-8), FAPEAL (grant SEI n.º E:60030.0000002401/2022), UFAL, and CAPES for financial support.

References
[1] Matias, F. S., Gollo, L. L., Carelli, P. V., Bressler, S. L., Copelli, M., & Mirasso, C. R. Modeling positive Granger causality and negative phase lag between cortical areas. NeuroImage. 2014, 99, 411-418.
[2] Brito, K. V., & Matias, F. S. Neuronal heterogeneity modulates phase synchronization between unidirectionally coupled populations with excitation-inhibition balance. Physical Review E. 2021, 103(3), 032415.
[3] Machado, J. N., & Matias, F. S. Phase bistability between anticipated and delayed synchronization in neuronal populations. Physical Review E 2020, 102(3), 032412.
[4] Kösem, A., Basirat, A., Azizi, L., & van Wassenhove, V. High-frequency neural activity predicts word parsing in ambiguous speech streams. Journal of neurophysiology. 2016, 116(6), 2497-2512.


Monday July 22, 2024 4:20pm - 6:20pm PDT
TBA

4:20pm PDT

P068 Mapping brain lesions to conduction delays: the next step for personalized brain models in Multiple Sclerosis
Multiple sclerosis (MS) is a clinically heterogeneous, multifactorial autoimmune disorder affecting the central nervous system (CNS). Structural damage to the myelin sheath, with the consequent slowing of conduction velocities, is a key pathophysiological mechanism. In fact, studies have shown that the conduction velocities of action potentials are closely related to the degree of myelination, with thicker myelin sheaths associated with higher conduction velocities. However, how the intensity of structural lesions of the myelin translates into slowed conduction delays is not known, and lesion volume alone is a poor predictor of clinical disability. In this work, we use large-scale brain models and Bayesian inversion to estimate how myelin lesions translate into longer conduction delays [1]. Each subject underwent MEG and MRI, with detailed white-matter tractography analysis. We also derived a lesion matrix indicating the percentage of lesions for each edge in every patient. We utilized a large-scale brain model in which the neural activity of each region was represented as a Stuart-Landau oscillator in a regime with damped oscillations, and regions were coupled according to the empirical connectomes [2]. We proposed a mathematical function elucidating the relationship between the conduction delays and the structural damage percentages in each white-matter tract. Using deep neural density estimators [3], we inferred the most likely relationship between lesions and conduction delays. MS patients consistently exhibited decreased power within the alpha frequency band compared to the healthy group. Depending on the parameter alpha, this function translates lesions into edge-specific conduction delays (leading to shifts in the power spectra). We found that the estimate of the alpha parameter correlated strongly with the alpha peak: the most probable inferred alpha for each subject is inversely proportional to the empirically observed peak, while the power peaks themselves do not correlate with total lesion volume. This is the first study demonstrating the topography-specific effect of myelin lesions on conduction delays, adding one more layer to the personalization of models in persons with multiple sclerosis.
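A hedged sketch of the forward model: Stuart-Landau oscillators in the damped regime coupled through a connectome with edge-specific integer delays. The connectome, delays, and parameters below are toy values; the actual study infers the lesion-to-delay mapping with Bayesian tools.

```python
import numpy as np

def simulate_stuart_landau(C, delays, a=-5.0, omega=2 * np.pi * 10, k=0.5,
                           dt=1e-3, T=5.0, noise=0.1, rng=None):
    """Network of Stuart-Landau oscillators in the damped regime (a < 0).

    dz_i/dt = (a + i*omega) z_i - |z_i|^2 z_i
              + k * sum_j C_ij z_j(t - tau_ij) + noise
    delays: (n, n) integer delays in time steps (lesions would lengthen them).
    """
    rng = rng or np.random.default_rng(0)
    n = C.shape[0]
    steps = int(T / dt)
    max_d = int(delays.max()) + 1
    z = np.zeros((steps + max_d, n), dtype=complex)
    z[:max_d] = 0.01 * (rng.standard_normal((max_d, n))
                        + 1j * rng.standard_normal((max_d, n)))
    cols = np.arange(n)[None, :]
    for t in range(max_d, steps + max_d):
        delayed = z[t - 1 - delays, cols]            # (n, n): z_j at t - tau_ij
        coupling = (C * delayed).sum(axis=1)
        zt = z[t - 1]
        dz = (a + 1j * omega) * zt - np.abs(zt) ** 2 * zt + k * coupling
        z[t] = zt + dt * dz + np.sqrt(dt) * noise * (
            rng.standard_normal(n) + 1j * rng.standard_normal(n))
    return z[max_d:]

rng = np.random.default_rng(11)
n = 20
C = rng.random((n, n)) * (rng.random((n, n)) < 0.2)
np.fill_diagonal(C, 0.0)
delays = rng.integers(1, 20, size=(n, n))
z = simulate_stuart_landau(C, delays, rng=rng)       # power spectra of Re(z)
```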


Monday July 22, 2024 4:20pm - 6:20pm PDT
TBA

4:20pm PDT

P069 Investigating the cellular and circuit mechanisms underlying schizophrenia-related EEG biomarkers using a multiscale model of auditory thalamocortical circuits
Individuals with schizophrenia exhibit deficits in sensory processing, which researchers have extensively investigated in primary auditory cortex (A1) using electroencephalography (EEG). These deficits manifest as abnormalities in event-related potentials and cortical oscillations, reflecting a broader disturbance in the balance between excitation and inhibition (E/I balance) that characterizes cortical networks. We have extended our previously developed model of auditory thalamocortical circuits to better reproduce and investigate the biophysical sources of these schizophrenia-related EEG biomarkers. The A1 model simulates a cortical column with a depth of 2000 μm and a diameter of 200 μm, containing over 12,000 neurons and 30 million synapses. Neuron densities, laminar locations, classes, morphology and biophysics, and connectivity at the long-range, local, and dendritic scales were derived from published experimental data. Auditory stimulus-related inputs to the thalamus were simulated using phenomenological models of the cochlea/auditory nerve and the inferior colliculus. The model reproduced in vivo cell-type- and layer-specific firing rates, local field potentials (LFPs), and EEG signals consistent with healthy controls.
We are now leveraging this validated A1 model to gain insights into the mechanisms responsible for the EEG changes observed in schizophrenia. Changes made to the model to reproduce patient EEG biomarkers were informed by positron emission tomography (PET) imaging, genetics, and transcriptomics data specific to schizophrenia patients. Specifically, we are employing the model to explore three changes associated with the disorder: 1) reduced inhibition through parvalbumin (PV) interneurons, 2) reduced inhibition through somatostatin (SST) interneurons, and 3) N-methyl-D-aspartate receptor (NMDAR) hypofunction on PV cells. We found that all three molecular disturbances affected firing rates in a layer- and cell-type-specific way, mostly leaving granular-layer responses unperturbed but significantly altering superficial and deep layers. Furthermore, in EEG recordings, they altered the 1/f slope, with differential effects at lower frequencies (4-30 Hz) compared to higher frequencies (30-80 Hz). PV and NMDAR reductions showed effects opposite to SST reductions on both scales.
Next, we plan to characterize the impact of schizophrenia-specific cannabinoid and cholinergic pathway modifications on EEG biomarkers such as the P300 peak and the auditory steady-state response (ASSR), and to extend the model to capture stimulus-specific adaptation (SSA) and mismatch negativity (MMN). This work aims to fill a critical gap in our understanding by elucidating how experimentally determined genetic changes associated with schizophrenia alter circuit and network behavior, leading to the emergence of robust EEG biomarkers of the disorder.
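As a small illustration of the aperiodic-slope analysis mentioned above, the following sketch fits the 1/f slope of a power spectrum in log-log coordinates over the two bands quoted (4-30 Hz and 30-80 Hz); a plain linear fit stands in here for dedicated spectral parameterization tools.

```python
import numpy as np
from scipy.signal import welch

def one_over_f_slope(sig, fs, f_range):
    """Slope of the power spectrum in log-log coordinates over f_range."""
    f, pxx = welch(sig, fs=fs, nperseg=4 * fs)
    sel = (f >= f_range[0]) & (f <= f_range[1])
    slope, _ = np.polyfit(np.log10(f[sel]), np.log10(pxx[sel]), 1)
    return slope

rng = np.random.default_rng(12)
fs = 1000
eeg = np.cumsum(rng.standard_normal(60 * fs))      # brown-noise toy "EEG"
low = one_over_f_slope(eeg, fs, (4, 30))           # low-frequency band slope
high = one_over_f_slope(eeg, fs, (30, 80))         # high-frequency band slope
```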



Monday July 22, 2024 4:20pm - 6:20pm PDT
TBA

4:20pm PDT

P070 Neuron participation in temporal patterns forms cross-layer, non-random networks in rat motor cortex
Spatiotemporal patterns of neuronal activity are hypothesized to be essential for information processing in the brain. These patterns suggest the existence of cell assemblies, groups of co-active neurons that represent a distinct cognitive unit, potentially related to specific behaviors or representations [1, 2]. Conventionally, cell assemblies are thought to be defined by strong structural connections, such that stimulation of a portion of the members transiently activates the entire assembly. The exact composition and computational role of these assemblies, however, have yet to be fully clarified.

Here we present the detection of spike patterns in both superficial and deep layers of rat motor cortex during a voluntary forelimb-movement task. We extend a previous report of the diverse activation of pyramidal neurons across sequential motor phases [3] and ask how patterns are organized beyond single-neuron activation. In our approach, snapshots of neuronal activity were compared with flexible temporal alignment, relying on an extension of the "edit similarity score", a metric originally introduced to compare strings [4]. We further investigated the participation of individual neurons in these flexible spike patterns, hereafter named "profiles", through graph analysis and visualization.

Across animals, profiles were largely composed of neurons from both layers and occurred preferentially, but not exclusively, close to moments of reward. By connecting neurons in a weighted graph according to their co-participation in profiles, we observed non-trivial structures with effective hubs that were not explained by shuffled models. Detected profiles were not representative of entire experimental sessions (~2 hours), but specific nodes (neurons) and edges (pairs of neurons appearing together in different profiles) were sustained. We argue that, beyond synchronous activation, neurons that form patterns are organized in what we call a "profile space", in which profiles with strong overlap in neuron participation are grouped together. Individual profiles can therefore be understood as different realizations of an underlying functional community, extending the concept of the cell assembly with temporal flexibility.



Acknowledgements
We thank Japan Society for the Promotion of Science (JSPS) for supporting T. F. with KAKENHI no. JP23H05476.



References
1. Hebb DO. The organisation of behaviour: a neuropsychological theory. New York: Science Editions; 1949.
2. Buzsáki G. Neural Syntax: Cell Assemblies, Synapsembles, and Readers. Neuron. 2010, 68(3), 362-385.
3. Isomura Y, Harukuni R, Takekawa T, et al. Microcircuitry coordination of cortical motor information in self-initiation of voluntary movements. Nat Neurosci. 2009, 12(12), 1586-1593.
4. Watanabe K, Haga T, Tatsuno M, et al. Unsupervised Detection of Cell-Assembly Sequences by Similarity-Based Clustering. Front Neuroinform. 2019, 13, 39.
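A toy dynamic-programming similarity in the spirit of the edit similarity score [4]: matches between neuron-ID sequences are rewarded while gaps are geometrically discounted, so patterns with interleaved extra spikes still score as similar. The exact metric in [4] differs in detail; this is only a sketch of the idea.

```python
import numpy as np

def edit_similarity(seq_a, seq_b, gap=0.9):
    """Similarity between two neuron-ID sequences with flexible alignment.

    Matched identities add 1 to the score; skipping an element multiplies
    the carried score by `gap`, discounting long insertions.
    """
    la, lb = len(seq_a), len(seq_b)
    dp = np.zeros((la + 1, lb + 1))
    for i in range(1, la + 1):
        for j in range(1, lb + 1):
            match = dp[i - 1][j - 1] + (1.0 if seq_a[i - 1] == seq_b[j - 1] else 0.0)
            dp[i][j] = max(match, gap * dp[i - 1][j], gap * dp[i][j - 1])
    return dp[la][lb] / max(la, lb)     # normalize to [0, 1]

# two snapshots of firing order (neuron IDs), one with inserted spikes
a = [3, 7, 1, 4, 9]
b = [3, 2, 7, 1, 8, 4, 9]
print(edit_similarity(a, b))
```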



Monday July 22, 2024 4:20pm - 6:20pm PDT
TBA

4:20pm PDT

P071 A Model of Activation of Cortical Cell Populations through TMS
Modeling non-invasive brain stimulation, particularly transcranial magnetic stimulation (TMS) of the primary motor cortex (M1), has previously been explored through simulation methods at different length and complexity scales. However, the coupling of TMS-induced electric fields to neural mass models is still largely unexplored, having previously been approximated as a current pulse with arbitrary width and height [1, 2]. Via multi-scale simulations at the subcellular and neural mass levels, we study the underlying mechanisms of electromagnetic activation of cortical tissue by TMS. Validation of the coupling model is aided by measurements such as EMG of muscle activation [3], EEG [4], and invasive recordings of so-called D- and I-waves in the spinal cord following TMS [3]. The model architecture is defined by choosing the cell morphologies, electric fields, connectivity, and cortical circuitry that describe the desired system. The methods developed here lay the groundwork for studying the effects of electromagnetic stimulation on any circuit architecture and facilitate realistically motivated coupling between electric fields and mean-field state variables.

TMS stimulation of M1 is characterized by corticospinal pyramidal-tract axons originating from deep layer 5 (L5) that carry direct (D-) and indirect (I-) waves following TMS. D-waves are believed to be generated by direct stimulation of L5 axons, while I-waves may stem from indirect activation of L5 cells by presynaptic cells [3]. This study focuses on the generation of I-waves from within the cortex, as a model of D-wave generation in corticospinal tracts can be treated separately. Using reconstructed compartmental models of neuron morphologies, we simulate spatiotemporal dynamics on L23 and L4 axons in response to TMS-induced electric fields. Generated action potentials propagate through the axonal arbor to axon terminals, forming synapses onto other cells. In our model, L23 and L4 cells couple synaptically to L5. The postsynaptic potential, and thereby the intracellular current, is governed by synaptic and dendritic dynamics. The resulting current entering L5 somata, averaged over cells, defines the current input to a neural mass model governing the firing rate of an L5 population. The electric field induced by TMS is thus coupled to mean-field state variables that parameterize cortical activity. The L5 population's mean firing rate is proportional to the average cortical output that projects to the spinal cord and is qualitatively comparable to I-wave measurements. We validate the coupling model against the measured I-waves and explore the directional sensitivity and dose dependence of cortical activation as driven by the underlying biophysics and stimulation paradigm.

1. Rusu CV, et al. A model of TMS-induced I-waves in motor cortex. Brain Stimulation. 2014, 7(3), 401-414.

2. Wilson MT, et al. Modeling motor-evoked potentials from neural field simulations of transcranial magnetic stimulation. Clinical Neurophysiology. 2021, 132(2), 412-428.

3. Di Lazzaro V, Rothwell JC. Corticospinal activity evoked and modulated by non-invasive stimulation of the intact human motor cortex. J Physiol. 2014, 592, 4115-4128.

4. Gordon PC, et al. Recording brain responses to TMS of primary motor cortex by EEG - utility of an optimized sham procedure. NeuroImage. 2021, 245, 118708.



Monday July 22, 2024 4:20pm - 6:20pm PDT
TBA

4:20pm PDT

P072 Dendritic persistent calcium current amplifies low-frequency fluctuations in alpha motor neurons
Persistent inward currents (PICs), mediated by calcium- and sodium-permeable ion channels, promote the amplification of synaptic currents in alpha motor neurons (MNs). The self-sustained discharge promoted by the dendritic persistent calcium current has been hypothesized to be fundamental during postural and stabilization motor tasks. However, it is unclear how MN dendrite morphology, electrophysiological properties, and noradrenergic modulation of the persistent calcium current shape the bandwidth of the synaptic input signal, thereby influencing the transmission of synaptic oscillations in the neural drive to the muscle. To investigate how the persistent calcium current alters the MN frequency response, we conducted computer simulations with slow (S)-type and fast fatigable (FF)-type alpha MN models subjected to different noradrenergic conditions.

The morphologies of the models were based on detailed reconstructions of cat lumbar alpha MNs, with 2,570 and 3,001 dendritic compartments for the S- and FF-type models, respectively. Cable theory was adopted to model the propagation of signals across the dendritic membrane of the MN. The electrophysiological properties observed in vivo (cat) were reproduced by tuning the biophysical properties of the ionic channels employed in the models. Independent and homogeneous Poisson stochastic point processes modeled the presynaptic commands to MNs. The mean value of the presynaptic commands was adjusted so that the discharge rate of the MN models was 20 spikes/s on average. The conductance of the persistent calcium channel (gCa) was adjusted to reproduce the relationship between the amplitude of the injected current and the firing rate of MNs under the effect of a noradrenergic agonist (active dendrite) and in anesthetized MNs (passive dendrite, with gCa = 0). Spectral analysis was employed to assess the models' frequency responses.

For the models with a passive dendrite, the DC gain, cutoff frequency (CF), and CF delay were 0.9 (0.5), 82 Hz (60 Hz), and 2.3 ms (3.0 ms) for the S-type (FF-type) MN model, respectively. With an active dendrite, the S- and FF-type models presented: i) a DC gain of 1.1 and 1.4 (increases of 22% and 180%, respectively); ii) a CF of 62 Hz and 11 Hz (reductions of 32% and 82%, respectively); and iii) a delay associated with the CF of 2.9 ms and 8.4 ms (increases of 26% and 180%, respectively). Therefore, activating the dendritic persistent calcium channel amplified the low-frequency components (<5 Hz) of the MN output, especially in the FF-type. The results also suggest that the dendritic persistent calcium current in alpha MNs may shape the bandwidth of the motor commands that reach the muscles, and that the amplification of low-frequency fluctuations coincides with the frequency band associated with isometric muscle contractions and postural control.
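As a hedged illustration of the spectral analysis step, the sketch below estimates a frequency response H(f) = Sxy(f)/Sxx(f) between an input signal and an output rate, then reads off the DC gain and -3 dB cutoff; both signals are synthetic stand-ins, and the first-order filter is only a placeholder for the MN model.

```python
import numpy as np
from scipy.signal import welch, csd, lfilter

# Estimate the frequency response H(f) = S_xy(f) / S_xx(f) between a
# broadband synaptic input proxy x(t) and an output rate proxy y(t).
fs = 1000.0
rng = np.random.default_rng(0)
x = rng.standard_normal(60 * int(fs))          # 60 s of broadband input
y = lfilter([0.1], [1, -0.9], x)               # placeholder "MN" response

f, Sxx = welch(x, fs=fs, nperseg=4096)
_, Sxy = csd(x, y, fs=fs, nperseg=4096)
H = np.abs(Sxy / Sxx)

dc_gain = H[f < 1.0].mean()                    # low-frequency gain
cutoff = f[np.argmin(np.abs(H - dc_gain / np.sqrt(2)))]  # -3 dB point
print(f"DC gain ~ {dc_gain:.2f}, cutoff ~ {cutoff:.0f} Hz")
```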



Monday July 22, 2024 4:20pm - 6:20pm PDT
TBA

4:20pm PDT

P073 Exploring Seizure Dynamics: A Computational Model of Epilepsy
This article encapsulates an exploration of seizure dynamics through the lens of computational modeling in epilepsy. Starting with a foundational understanding of the FitzHugh-Nagumo model, a simplified representation of neuronal activity, we try to understand the intricacies of epileptic seizures. We methodically demonstrated the transition from normal neuronal activity to a seizure-like state by altering key parameters in the model, such as external current and coupling strength. This was followed by an extension of the model to a network of neurons, simulating the complex interactions and synchronization patterns indicative of seizure propagation. Numerical simulations were conducted to visualize the impact of varying coupling strengths on network dynamics, offering insights into the mechanisms of seizure initiation and spread. The study was complemented by a discussion on the implications of these findings for understanding epilepsy, highlighting the bridging of theoretical models with clinical understanding. Our approach not only illuminates the potential of computational models in epilepsy research but also underscores the significance of interdisciplinary collaboration in advancing our comprehension of neurological disorders. Through this article, we aim to provide a nuanced perspective on the modeling of epileptic seizures, offering a valuable resource for researchers and clinicians in the field.
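A minimal sketch of the kind of simulation described, assuming illustrative parameter values: diffusively coupled FitzHugh-Nagumo units whose external current and coupling strength are the knobs that move the network toward a synchronized, seizure-like regime.

```python
import numpy as np

# Diffusively coupled FitzHugh-Nagumo network. Sweeping g (coupling) and
# I_ext (external current) moves the population between asynchronous and
# synchronized, seizure-like regimes. All values are illustrative.
N, g, I_ext = 50, 0.5, 0.5
a, b, eps, dt = 0.7, 0.8, 0.08, 0.05

rng = np.random.default_rng(1)
v = rng.uniform(-1, 1, N)                      # fast (voltage-like) variable
w = np.zeros(N)                                # slow recovery variable
A = (rng.random((N, N)) < 0.1).astype(float)   # random coupling graph
np.fill_diagonal(A, 0)

for step in range(20000):
    coupling = g * (A @ v - A.sum(1) * v) / N  # diffusive coupling term
    dv = v - v**3 / 3 - w + I_ext + coupling
    dw = eps * (v + a - b * w)
    v, w = v + dt * dv, w + dt * dw
# Increasing g (or I_ext) pushes the population toward the synchronous
# oscillations used here as a proxy for seizure propagation.
```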


Monday July 22, 2024 4:20pm - 6:20pm PDT
TBA

4:20pm PDT

P074 Computational Model of the Mouse Whisker Thalamocortical Pathway
Detailed reconstructions of neuronal projections and circuit mapping studies uncovered new cell-type-specific pathways of information flow and integration across cortical and thalamic regions [1]. This includes the existence of direct projections from thalamocortical (TC) neurons to layer 6 corticothalamic (L6 CT) neurons. This direct connection is more evident in the awake than in the sleep state and enables a short-latency feedback pathway that bypasses the full loop in the cortical column, but its function remains poorly understood [2]. In the whisker pathway of rodents, this direct short-latency feedback could work as a mechanism to selectively increase the responsiveness of specific thalamic neurons to incoming streams of information while silencing others, contributing to the emergence of direction-selective angular tuning in the network. Selective silencing of the direct L6 CT inputs is not possible experimentally; computational models provide a way to do so without disrupting the system. We developed a detailed multiscale mechanistic model of the mouse whisker pathway in NetPyNE. Our goal is to study the overall effect of modulatory L6 CT projections [3] and the influence of this direct L6 CT feedback in regulating network excitability [2]. We will characterize the network based on the angular tuning response of thalamic neurons to different whisker deflection angles and evaluate the contribution of direct activation of L6 CT neurons by the thalamus in this process. The model comprises a thalamic barreloid, a portion of the thalamic reticular nucleus, and a full cortical infrabarrel from L6. It includes biophysically detailed neurons, a topological distribution of synaptic inputs, short-term plasticity properties, and detailed mapping of local and external projections based on the latest experimental data available [4]. We also developed a novel realistic model of whisker deflection responses in the brainstem based on different deflection angles, providing topological feedforward inputs to the thalamus. We validated the single-cell and network models based on membrane potentials and firing frequencies for different cell types. Our current results show that the architecture of thalamic projections is crucial for preserving angular tuning across the network and that CT feedback is essential to keep the balance of thalamic excitation. Next, we will test the influence of the timing of this CT feedback, which we believe is key to sharpening the angular tuning of the thalamic network to brainstem inputs. Ultimately, our model will provide insights into the mechanisms that regulate thalamocortical excitability and how interactions between L6 CT neurons and the thalamus can shape the information arriving at the cortex.
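Purely as a hedged sketch of how such a circuit might be declared in NetPyNE (population names, sizes, weights, and delays below are invented placeholders, not the authors' model), the direct TC-to-L6 CT projection and the CT feedback loop could look like:

```python
from netpyne import specs

# Hypothetical NetPyNE declaration of the loop discussed above.
netParams = specs.NetParams()

netParams.popParams['TC'] = {'cellType': 'TC', 'numCells': 200}    # barreloid
netParams.popParams['TRN'] = {'cellType': 'RE', 'numCells': 100}   # reticular
netParams.popParams['L6CT'] = {'cellType': 'CT', 'numCells': 400}  # infrabarrel

netParams.synMechParams['AMPA'] = {
    'mod': 'Exp2Syn', 'tau1': 0.5, 'tau2': 2.0, 'e': 0}

# Direct short-latency thalamocortical drive onto L6 CT cells:
netParams.connParams['TC->L6CT'] = {
    'preConds': {'pop': 'TC'}, 'postConds': {'pop': 'L6CT'},
    'probability': 0.1, 'weight': 0.002, 'delay': 2, 'synMech': 'AMPA'}

# CT feedback closing the loop:
netParams.connParams['L6CT->TC'] = {
    'preConds': {'pop': 'L6CT'}, 'postConds': {'pop': 'TC'},
    'probability': 0.05, 'weight': 0.001, 'delay': 4, 'synMech': 'AMPA'}
# Deleting the 'L6CT->TC' entry implements the selective silencing that
# is not feasible experimentally.
```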
1. Shepherd GMG, Yamawaki N. Untangling the cortico-thalamo-cortical loop: cellular pieces of a knotty circuit puzzle. Nat Rev Neurosci. 2021;22: 389–406.
2. Hirai D, Nakamura KC, Shibata K-I, et al. Shaping somatosensory responses in awake rats: cortical modulation of thalamic neurons. Brain Struct Funct. 2018;223: 851–872.
3. Crandall SR, Cruikshank SJ, Connors BW. A corticothalamic switch: controlling the thalamus with dynamic synapses. Neuron. 2015;86: 768–782.
4. Iavarone E, Simko J, Shi Y, Bertschy M, et al. Thalamic control of sensory processing and spindles in a biophysical somatosensory thalamoreticular circuit model of wakefulness and sleep. Cell Rep. 2023;42. doi:10.1016/j.celrep.2023.112200



Monday July 22, 2024 4:20pm - 6:20pm PDT
TBA

4:20pm PDT

P075 Biologically Inspired Constraints are Compatible with Gradient-Descent-based Learning in Spiking Neural Networks
This study explores Spiking Neural Networks (SNNs), leveraging a unique combination of three algorithms to unravel the intricate dynamics within biological constraints. Our primary contributions lie in the integration of Dilated Convolutions with Learnable Spacings for delay learning [1], coupled with the fusion of two dynamic pruning methods: DeepR [2] for disconnecting and RigL [3] for reconnecting synaptic weights.
Dynamic pruning, a method derived from machine learning, operates akin to a structure-learning algorithm. We begin by initializing the neural network with sparse connectivity and maintain a constant number of active synapses throughout training. One of our innovations lies in the utilization of DeepR, which not only facilitates weight pruning but also makes it convenient to incorporate Dale's Principle by maintaining consistent weight column signs. This ensures the creation of exclusively excitatory and inhibitory neurons, further enriching the biological plausibility of our SNN model. While DeepR randomly reconnects weights, we instead utilize RigL, which reintroduces the synapses with the highest gradient magnitudes.
Synaptic delays denote the time required for a signal to propagate from one neuron to an adjacent neuron. These delays influence spike arrival times, which matter since spiking neurons respond more strongly to coincident input spikes. Dilated Convolutions with Learnable Spacings introduces a new approach to delay learning in deep SNNs that is compatible with typical gradient-based learning methods. The incorporation of learnable delays allows us to identify spatiotemporal “receptive fields”, a structure of spatiotemporal groups that are purely excitatory or purely inhibitory. We found that this spatiotemporal grouping of excitation and inhibition not only arose in dense networks but also persisted, despite alterations due to enforced sparsity and Dale’s Principle. 
Comparing the classification performance of a dense non-Dalean and a sparse Dalean network on the Raw Heidelberg Digits [4] dataset shows that the latter achieves 89% test accuracy at 75% sparsity, slightly below the former, which reaches 94%. When comparing networks with a fixed number of active synapses, the sparse model surpasses the dense one at 87.5% sparsity (89% vs. 88% test accuracy), and this performance gap widens when the number of active synapses is further decreased.
This study provides new insights into the synergistic effects of sparsity, delays and Dale’s Principle in SNNs. Our findings advance the understanding of biologically-inspired computational principles in neural networks, laying a foundation for further exploration and application in the realm of neuro-inspired computing. 
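The sketch below illustrates, under strong simplifications, two of the ingredients described above: Dale's Principle as fixed column signs and a DeepR/RigL-style rewiring step that keeps the number of active synapses constant; all sizes and constants are placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
n_pre, n_post, n_active = 100, 50, 500

# Dale's Principle: each presynaptic neuron has a fixed sign (80% E).
col_sign = np.where(rng.random(n_pre) < 0.8, 1.0, -1.0)

# Sparse initialization with a fixed number of active synapses:
W = np.zeros((n_post, n_pre))
idx = rng.choice(W.size, n_active, replace=False)
W.flat[idx] = 0.1 * np.abs(rng.standard_normal(n_active))

def rewire(W, grad, k=10):
    """Prune the k weakest active synapses (DeepR-style) and regrow the
    k inactive synapses with the largest gradient magnitude (RigL)."""
    active = np.flatnonzero(W)
    drop = active[np.argsort(np.abs(W.flat[active]))[:k]]
    W.flat[drop] = 0.0
    inactive = np.flatnonzero(W == 0)
    grow = inactive[np.argsort(-np.abs(grad.flat[inactive]))[:k]]
    W.flat[grow] = 0.01                    # small re-initialization
    return W

grad = rng.standard_normal(W.shape)        # stand-in gradient
W = rewire(W, grad)
# Signed weights used in the forward pass keep each presynaptic neuron
# purely excitatory or purely inhibitory:
W_eff = np.abs(W) * col_sign[None, :]
```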
 [1]  I. Hammouamri et al. 2023. arXiv: 2306.17670 [cs.NE]. 
 [2]  G. Bellec et al. 2018. arXiv: 1711.05136 [cs.NE]. 
 [3]  U. Evci et al. 2021. arXiv: 1911.11134 [cs.LG]. 
[4]  B. Cramer et al. IEEE Transactions on Neural Networks and Learning Systems 33.7, 2744–2757, 2022

Speakers

Thomas Nowotny

Professor of Informatics, University of Sussex, UK
I do research in computational neuroscience and bio-inspired AI. More details are on my home page http://users.sussex.ac.uk/~tn41/ and institutional homepage (link above).


Monday July 22, 2024 4:20pm - 6:20pm PDT
TBA

4:20pm PDT

P076 Decomposition of brain calcium signals in a Pavlovian learning task
We are submitting a paper / extended abstract in .pdf format.


Acknowledgments


The authors acknowledge support from the National Institutes of Health grants NIH MH060605, NIH MH115604 and NIH DA044761, and from the National Science Foundation grant NSF IOS-2002863.





Monday July 22, 2024 4:20pm - 6:20pm PDT
TBA

4:20pm PDT

P077 Quantifying the contribution of underlying physiological networks in functional brain connectivity through remnant functional maps
Contemporary cognitive neuroscience emphasizes that cognition does not occur in isolation within specific neural locales but rather emerges from the dynamic interplay of distributed areas across the brain. This interplay is often captured as functional connectivity networks, and the foundational principles governing the organization of these networks are related to a spectrum of cognitive functions, from processing stimuli to decision-making, cognitive control and emotional regulation. The emergent properties of functional networks have been linked to various physiological factors such as structural connectivity (SC), distance-dependent connectivity (DC), similarity in gene expression (GC), and similarity in neuroreceptor composition (RC) across brain regions. However, it remains unknown what aspects of functional brain organization these underlying factors support. To address this question, we develop an analytical framework to evaluate the influence of SC, DC, GC, and RC on shaping the organization of functional brain networks and propose remnant functional maps (RFMs). We estimate RFMs by removing edges from the functional connectivity that represent direct links of an underlying network of interest (SC, DC, GC, or RC). We find that each of these underlying factors aids in shaping the organization of functional connectivity. Notably, similarity in neuroreceptor composition among brain regions is the primary factor shaping the organization of functional brain connectivity. The dominance of neuroreceptors was also observed when modeling functional connectivity from these physiological networks. We propose that this RFM-based framework provides a tool to quantify the contribution of underlying physiological networks in shaping brain functional organization and could also aid the identification of diverse physiological alterations due to task demands and disease onset and progression.
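As a minimal sketch of the RFM construction (with random matrices standing in for imaging-derived FC and, here, SC as the underlying network):

```python
import numpy as np

# Remnant functional map (RFM) sketch: remove from the functional
# connectivity (FC) matrix the edges that have a direct link in an
# underlying network (here a placeholder structural connectivity, SC),
# then compare the organization of what remains.
rng = np.random.default_rng(0)
n = 90
FC = np.corrcoef(rng.standard_normal((n, 200)))   # stand-in FC
SC = rng.random((n, n)) < 0.15                    # stand-in binary SC
SC = np.triu(SC, 1); SC = SC + SC.T               # symmetric, no self-loops

RFM = FC.copy()
RFM[SC] = 0.0                                     # delete direct-link edges

# One simple comparison: how much edge-weight structure survives the
# removal of direct links for this underlying network.
mask = ~np.eye(n, dtype=bool)
print(FC[mask].var(), RFM[mask].var())
```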


Monday July 22, 2024 4:20pm - 6:20pm PDT
TBA

4:20pm PDT

P078 Spreading depolarization in neocortical microcircuits
Spreading depolarization (SD) is characterized by a wave of depolarization preceded by a brief period of hyperexcitability that propagates through gray matter at 2-6 mm/min [1]. SD is accompanied by spreading depression, a prolonged neuronal silence caused by depolarization block, and disruption of ion homeostasis. SD is observed in neurological disorders, including migraine aura, epilepsy, traumatic brain injury, and ischemic stroke. Blood vessels contribute to SD as a source of oxygen and nutrients to the affected tissue. Understanding these mechanisms is essential for targeted interventions in conditions like ischemic stroke.



We used the NEURON and NetPyNE simulation platforms to investigate ion homeostasis at the tissue scale. We developed an in vivo network model based on an established cortical microcircuit model [2,3] and our previous in vitro model [4]. Point neurons with Hodgkin-Huxley-style ion channels were augmented with additional homeostatic mechanisms, including Na+/K+-ATPase, NKCC1, KCC2, and dynamic volume changes. We simulate the intracellular and extracellular concentrations of Na+, K+, Cl-, and O2 using NEURON/RxD [5]. The contribution of astrocytes is modeled as the O2-dependent clearance of K+. NetPyNE with the evolutionary optimization Opuntia was used to find appropriate parameters for the model [6]. Around 13,000 neurons were simulated in 1 mm3 of cortex (layers 2-6). We used histologic images to determine the locations of oxygen sources in the model. A 2.0 x 2.3 cm cross-section of the human cortical plate in V1 with immunostaining for CD34 was used to determine the locations of 918 capillaries (mean capillary density: 199.6/cm2; mean±SD capillary cross-sectional area: 16.7±11.9 μm2). A biased random walk was used to generate a 3-dimensional distribution of capillaries from this 2D cross-section.



SD was reliably triggered in this model by a bolus of extracellular K+ applied to layer 4. Our model predicts that the ability of a neuron to maintain a physiological firing rate is influenced by its proximity to an oxygen source. We also found that neuronal depolarization occurred in all cortical layers, with pathological activity spreading through extracellular K+ diffusion and network connectivity.
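A heavily reduced sketch of the extracellular ion bookkeeping, using NEURON's rxd module; the geometry, diffusion constant, and first-order clearance rate below are illustrative placeholders rather than the model's actual values.

```python
from neuron import h, rxd

h.load_file('stdrun.hoc')
soma = h.Section(name='soma')   # minimal cell so the simulation can run

# Coarse extracellular space (um) with an extracellular K+ species:
ecs = rxd.Extracellular(-200, -200, -200, 200, 200, 200, dx=20)
k = rxd.Species(ecs, name='k', charge=1, d=2.62, initial=3.5)   # mM

kbath = 3.5                                   # baseline [K+]o (mM)
clearance = rxd.Rate(k, -0.05 * (k - kbath))  # stand-in astrocytic uptake

h.finitialize(-70)
# A layer-4 K+ bolus would be applied here by raising the concentration
# of the k nodes in a central subvolume (e.g., via k[ecs].states3d)
# before letting diffusion and clearance act:
h.continuerun(50)
```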


Acknowledgments
Research supported by NIH grant R01MH086638


References
1. Dreier JP. The role of spreading depression, spreading depolarization and spreading ischemia in neurological disease. Nat Med. 2011;17: 439–447.
2. Potjans TC, Diesmann M. The Cell-Type Specific Cortical Microcircuit: Relating Structure and Activity in a Full-Scale Spiking Network Model. Cereb Cortex. 2012;24: 785–806.
3. Romaro C, Najman FA, Lytton WW, Roque AC, Dura-Bernal S. NetPyNE Implementation and Scaling of the Potjans-Diesmann Cortical Microcircuit Model. Neural Comput. 2021;33: 1993–2032.
4. Kelley C, Newton AJH, Hrabetova S, McDougal RA, Lytton WW. Multiscale Computer Modeling of Spreading Depolarization in Brain Slices. eNeuro. 2022;9. doi:10.1523/ENEURO.0082-22.2022
5. Newton AJH, McDougal RA, Hines ML, Lytton WW. Using NEURON for Reaction-Diffusion Modeling of Extracellular Dynamics. Front Neuroinform. 2018;12: 41.
6. Dura-Bernal S, Suter BA, Gleeson P, Cantarelli M, Quintana A, Rodriguez F, et al. NetPyNE, a tool for data-driven multiscale modeling of brain circuits. Elife. 2019;8. doi:10.7554/eLife.44494


Monday July 22, 2024 4:20pm - 6:20pm PDT
TBA

4:20pm PDT

P079 Reinforcement and evolutionary learning of spatial navigation using models of hippocampal and entorhinal circuits
Deep learning models successful on visual processing tasks were loosely inspired by simplified visual system neuroanatomy. These architectures and neuronal receptive fields (RF) are not ideal for complex spatial reasoning and navigation tasks in dynamic environments. By using the architecture and RFs of mammalian hippocampal (HPC) spatial navigation circuits we hope to first understand then improve on performance and efficiency of models trained on navigation tasks. Here, we develop detailed circuit models containing entorhinal grid cells, place cells, and motor output circuits, interfaced with agents learning to navigate in simulated environments. 


Our grid cells (GCs) have varied spatial scales, allowing multi-resolution agent localization. Convergence of GCs onto place neurons creates place cells' irregular RFs and allows enhanced localization of agents. Navigation goals are encoded within a target area whose neurons have topographic RFs. Target and place cells project to an association area that integrates information about agent and goal location. This area projects to a motor output area that generates movements based on the maximally firing motor sub-population. Each area has excitatory (E) and inhibitory (I) interneurons modeled as event-based integrate-and-fire neurons that synapse using standard AMPA (GABA) time constants.


We trained the models to perform navigation tasks to encoded target locations using a set of biologically inspired learning rules including spike-timing dependent reinforcement learning (STDP/RL), evolutionary strategy (ES), and hybrid algorithms that incorporate the strengths of each individual algorithm [1,2]. Fitness functions integrated total moves towards a target, and penalized moves away from the target. Extra reward was given for reaching a target. After training, we analyzed emergent structure in the circuits, and the dynamics enabling navigation. 


Each algorithm trained models to navigate agents to targets. STDP/RL (ES) used short (long) time scales for weight adjustment. Therefore, post-learning dynamics in the circuit differed: STDP/RL enhanced synchronized neuronal firing and coding, while ES created diffuse neuronal firing and coding. Overall, ES may produce better fitness due to fewer constraints, but since STDP/RL uses extra information from neuron-to-neuron communication, it can reach optimal performance more quickly. Learning redistributed synaptic weights: many synapses had extremely low weights, and a few had very high weights, contributing in an outsized fashion to output.


By implementing representations and computations performed within mammalian entorhinal, hippocampal, and motor circuits, we aim to set groundwork for developing next-generation algorithms that support spatial navigation. Our modeling allows generating data that could be analyzed and compared to neurophysiology data, offering improved interpretability of neurophysiological signals, and predictions on the function of specific cell classes and their dynamics. Overall, this could eventually lead to improved teaming and communication between models, agents, and humans. 
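The flavor of the STDP/RL rule family can be conveyed by a toy sketch in which coincident pre/post spiking tags synapses with an eligibility trace and a delayed scalar reward converts traces into weight changes; all constants are illustrative assumptions, not the models' actual rules.

```python
import numpy as np

n_syn = 100
w = np.full(n_syn, 0.5)          # synaptic weights
elig = np.zeros(n_syn)           # eligibility traces
tau_e, lr = 50.0, 0.01           # trace decay (ms), learning rate
dt = 1.0

def step(pre_spiked, post_spiked, reward):
    global w, elig
    elig *= np.exp(-dt / tau_e)              # trace decay
    # Coincident pre/post activity tags the synapse as eligible:
    elig += 1.0 * (pre_spiked & post_spiked)
    w += lr * reward * elig                  # reward gates plasticity
    np.clip(w, 0.0, 1.0, out=w)

rng = np.random.default_rng(0)
for t in range(1000):
    pre = rng.random(n_syn) < 0.05           # placeholder spike trains
    post = rng.random(n_syn) < 0.05
    # Reward: +1 for moves toward the target, -1 for moves away:
    step(pre, post, reward=rng.choice([-1.0, 0.0, 1.0]))
```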


References


[1] Training spiking neuronal networks to perform motor control using reinforcement and evolutionary learning Front. Comput Neurosci 2022


[2] Training a spiking neuronal network model of visual-motor cortex to play a virtual racket-ball game using reinforcement learning PLoS One 2022




Monday July 22, 2024 4:20pm - 6:20pm PDT
TBA

4:20pm PDT

P081 Behavior-dependent layer-specific oscillations, phase-amplitude coupling and spike-to-LFP coupling in a data-driven model of motor cortex circuits
Exploring the primary motor cortex (M1) is crucial for understanding motor functions in both health and disease, as well as for developing new treatments for motor disorders. Neural oscillations, a universal hallmark of brain activity, exhibit specific patterns within M1, related to motor control. During movement, gamma activity increases and beta activity decreases, reflecting active motor engagement. During immobility (including rest and isometric contraction), an opposite pattern is observed – decrease of gamma and increase of beta activity. Theta and delta oscillations orchestrate higher-frequency activities along the cortex, which manifests as cross-frequency coupling between theta/delta phase and beta/gamma amplitude. 
Previously, we built a biophysically detailed computational model of the M1 circuit validated against in vivo experimental data. The model spontaneously generated delta, beta, and gamma oscillations, with gamma increase and delta decrease during the movement state. Interestingly, beta and gamma were both locked to the delta cycle and occurred at opposite delta phases.
To further test our modeling results, we analyzed multi-layer LFP data recorded from the M1 of mice engaged in a reaching task, where they had to move a joystick following an auditory cue and maintain its position for a certain time period. Following the cue, we observed an overall low-frequency power decrease (below 25 Hz) in deep layers, except for the theta activity, which remained unchanged. High-frequency power increased in superficial and middle layers, with stronger gamma during the initial ballistic movement phase and stronger high-beta during the subsequent maintenance of joystick position. The amplitudes of gamma and beta were locked to the theta cycle, with various depth profiles and preferred theta phases. Despite the discrepancies in the frequency bands between the model and the experiment, a common pattern was observed: movement-related gamma, holding-related beta, and low-frequency activity that modulates both of them in a phase-dependent manner.
Moreover, we explored the interaction between spikes and local field potentials in these experiments. Specifically, we examined associations between the spikes emitted by cells (loosely identified by their extracellular recording shape) located at different depths in the motor cortex and the phases of oscillations filtered in different frequency bands, both recorded from M1 and VL thalamic areas. We found strong modulation across different frequency ranges, which was further used to constrain our detailed model. In addition, for some cells slight but significant changes in the spike-to-phase coupling were observed depending on the cognitive demand (rest, expectation, execution of the motor plan), which could be instrumental in further tuning movement-dependent actions associated with M1.
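For reference, a minimal sketch of the phase-amplitude coupling computation (a Canolty-style mean-vector-length index on a synthetic LFP; the filter bands and constants are illustrative):

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

# Theta phase and gamma amplitude via band-pass filters and the Hilbert
# transform, combined into a mean-vector-length modulation index.
fs = 1000.0
t = np.arange(0, 30, 1 / fs)
rng = np.random.default_rng(0)
theta = np.sin(2 * np.pi * 6 * t)
lfp = theta + 0.3 * (1 + theta) * np.sin(2 * np.pi * 70 * t) \
      + 0.5 * rng.standard_normal(t.size)      # synthetic coupled LFP

def bandpass(x, lo, hi):
    b, a = butter(3, [lo / (fs / 2), hi / (fs / 2)], btype='band')
    return filtfilt(b, a, x)

phase = np.angle(hilbert(bandpass(lfp, 4, 8)))       # theta phase
amp = np.abs(hilbert(bandpass(lfp, 35, 100)))        # gamma amplitude
mi = np.abs(np.mean(amp * np.exp(1j * phase))) / amp.mean()
print(f"modulation index ~ {mi:.3f}")
```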


Acknowledgments: The work is supported by NIBIB U24EB028998 and NYS DOH01-C32250GG-3450000 grants


Monday July 22, 2024 4:20pm - 6:20pm PDT
TBA

4:20pm PDT

P082 Oscillation-Induced Firing Rate Shift in a Working Memory Model
Neural oscillations are ubiquitous in the brain and associated with various cognitive functions, including working memory (WM). Gamma oscillations are linked to selective activation and information transfer, while alpha/beta is associated with inhibitory control and status quo maintenance [1]. In WM tasks, gamma is involved in stimulus loading and selective retention, and alpha/beta – in distractor filtering and erasure of irrelevant information [2]. Despite theoretical understanding of how neural activity level affects oscillations, the reciprocal effect – oscillation-induced firing rate shift – remains underexplored. In the context of WM, it is of particular interest, as WM functions are often explained in terms of average neural activity levels [3].
In this study, we examine how input oscillations affect the time-averaged activity in a firing rate model with a rectified quadratic gain function, consisting of an excitatory and an inhibitory population. We introduce a method for estimating time-averaged firing rates in the presence of input oscillations without direct system simulation. Utilizing harmonic balance, we decompose the dynamic variables into Fourier series, forming algebraic equations that relate the various harmonic amplitudes self-consistently, including the time-averaged activity represented by the 0-th harmonic. While the resulting system must still be solved numerically, this is faster than simulating the original model and offers insight into the system's time-averaged equilibria through phase plane graphical analysis. By eliminating one harmonic balance equation, we derive curves analogous to nullclines on the phase plane, whose intersections indicate time-averaged equilibria, aiding in understanding how oscillations influence time-averaged activity and potentially alter activity regimes via bifurcation.
We applied our method to a WM model with a potentiating E-E connection and demonstrated several effects of input oscillations on its functioning. Gamma input excited the system in the active state and increased the inter-state difference; alpha/beta input inhibited the active state and decreased the inter-state difference. These effects can be interpreted as an oscillation-induced increase and decrease, respectively, of information about WM content relative to the background. Strong alpha/beta destroyed the active state, erasing WM content. Gamma input decreased the critical stimulus amplitude required for loading into WM; alpha/beta increased this amplitude, protecting WM from overwriting. Finally, we showed that gamma input can support WM retention in a metastable system. All the results were confirmed by direct numerical simulations of the model.
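As a hedged illustration of the harmonic balance step, written for a single population with an illustrative gain (rather than the study's two-population E-I system):

```latex
% Rate dynamics with a rectified-quadratic gain (illustrative form):
\tau \dot{r}(t) = -r(t) + \big[\, I_0 + I_1 \cos(\omega t) + J\, r(t) \,\big]_+^{2}

% Harmonic balance ansatz:
r(t) = r_0 + \sum_{k \ge 1} \big( a_k \cos(k \omega t) + b_k \sin(k \omega t) \big)

% Matching the k = 0 Fourier coefficient yields the time-averaged
% activity without simulating the full system:
r_0 = \frac{1}{T} \int_0^T \big[\, I_0 + I_1 \cos(\omega t) + J\, r(t) \,\big]_+^{2} \, dt,
\qquad T = \frac{2\pi}{\omega}
```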
References
1. Engel, AK, Fries, P: Beta-band oscillations - signalling the status quo? Curr Opin Neurobiol 2010, 20(2):156-165
2. Lundqvist, M, Rose, J, Herman, P, Brincat, SL, Buschman, TJ, Miller, EK: Gamma and Beta Bursts Underlie Working Memory. Neuron 2016, 90(1):152-164
3. Goldman-Rakic PS: Cellular basis of working memory. Neuron 1995, 14(3):477-485.


Monday July 22, 2024 4:20pm - 6:20pm PDT
TBA

7:10pm PDT

Banquet Dinner
Monday July 22, 2024 7:10pm - 9:40pm PDT
 
Tuesday, July 23
 

TBA

Party
Tuesday July 23, 2024 TBA

8:30am PDT

Registration
Tuesday July 23, 2024 8:30am - 8:30am PDT

9:00am PDT

Brain Modes: Uncovering fundamental dimensions of brain structure and function
Speakers

James Pang

Research Fellow, Monash University


Tuesday July 23, 2024 9:00am - 12:30pm PDT
Cedro II

9:00am PDT

The structure-function binomial of cortical circuits across multiple scales.
Speakers

Patricio Orio

Full Professor, Universidad de Valparaíso


Tuesday July 23, 2024 9:00am - Wednesday July 24, 2024 5:30pm PDT
Jacarandá

10:20am PDT

Coffee Break
Tuesday July 23, 2024 10:20am - 10:50am PDT

12:30pm PDT

Lunch
Tuesday July 23, 2024 12:30pm - 2:10pm PDT

2:00pm PDT

Keynote #4
Tuesday July 23, 2024 2:00pm - 3:20pm PDT

3:20pm PDT

Conference Photo
Tuesday July 23, 2024 3:20pm - 3:25pm PDT

3:20pm PDT

Coffee Break
Tuesday July 23, 2024 3:20pm - 3:50pm PDT

3:50pm PDT

Members' Meeting
Tuesday July 23, 2024 3:50pm - 4:50pm PDT

4:50pm PDT

P083 A simple computational model elucidates the origin of non-intuitive properties of ephaptic interactions between olfactory receptor neurons
Olfactory sensing in insects begins with the transduction of odors into receptor currents on the dendrites of olfactory receptor neurons (ORNs) housed in hair-like sensilla on the antenna. In Drosophila, and many other insects, ORNs are grouped stereotypically in identified sensillum types and ORNs in the same sensillum interact with each other through non-synaptic (“ephaptic”) interactions (NSIs) [1,2], leading to mutual inhibition. Zhang et al. [3] demonstrated that these interactions are electrical in nature. However, the magnitude of the effect does not seem to be predicted by the firing rates of the involved neurons alone, which seems to contradict the purely electrical nature of the inhibition.
Here we present experiments and a computational model which together resolve the apparent contradiction. We recorded spikes from ab3 sensilla in Drosophila melanogaster, using electrophoretically sharpened tungsten electrodes, and sorted them by size and shape (Fig A). We then stimulated the neurons with methyl hexanoate (selectively activating ab3A) as a background odour and a delayed 2-heptanone pulse (selectively activating ab3B). We observed the expected inhibition of ab3A by ab3B; intriguingly, the reduction of firing in ab3A was most pronounced when it was strongly adapted and weaker when it was not, even if it fired at the same rate (Fig B).
We designed a model of two 2-compartment Hodgkin-Huxley neurons with an M-type spike-rate adaptation current, whose dendrites share the same, confined extracellular space (Fig C), and were able to reproduce the observed phenomena with minimal model tuning (Fig D). We then used the model to dissect the separate contributions of (i) the change of reversal potential for the receptor currents of ab3A caused by ab3B activity, (ii) the knock-on change in the M current and (iii) the contribution of the nonlinear f-I curve of the Hodgkin-Huxley spike generator. 
In essence, NSIs have a strong effect when a neuron is adapted because adaptation occurs at the spike generator through the M current and the receptor current remains strong. Hence a change in its driving potential has a strong effect on the neuron and additionally, its adaptation means that it is in the steep part of its f-I curve. On the contrary, if an ORN is spiking little because the driving background odour is weak, then the receptor current is weak and a change of the driving potential has little effect, even if the neuron is in the steep part of its f-I curve. To make things even less intuitive, the M current additionally masks the effect of inhibition by dis-adaptation.
In summary, our simple phenomenological model reproduces the observed unintuitive phenomenology and examination of the model offered a straightforward explanation for the observations, a prime example of how computational modelling can aid neuroscientific understanding.
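The core of the mechanism can be caricatured in a few lines: two leaky units whose receptor currents share a confined extracellular space, so the local extracellular potential set by the total transduction current reduces each neuron's effective driving force. This is a toy sketch under strong assumptions, not the study's two-compartment Hodgkin-Huxley model with an M current.

```python
import numpy as np

dt, T = 0.01, 200.0                    # ms
n = int(T / dt)
R_e = 5.0                              # shared extracellular resistance (a.u.)
g_A = 0.8                              # receptor conductance of A (background odour)
E_rev, E_L, g_L, C = 10.0, -60.0, 0.1, 1.0

V = np.array([-60.0, -60.0])           # membrane potentials of A and B
for i in range(n):
    g_B = 0.8 if 80.0 < i * dt < 120.0 else 0.0   # delayed pulse on B
    g = np.array([g_A, g_B])
    # Self-consistent extracellular potential from the total receptor current:
    V_e = R_e * np.sum(g * (E_rev - V)) / (1 + R_e * np.sum(g))
    I_rec = g * (E_rev - V - V_e)      # driving force reduced by V_e
    V += dt * (g_L * (E_L - V) + I_rec) / C
# When B turns on, V_e rises and the receptor current into A drops:
# mutual inhibition without synapses.
```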


Acknowledgements
This work was funded by the Leverhulme Trust and the EPSRC (EP/S030964/1).


References
1. Su CY, Menuz K, Reisert J, Carlson JR. Non-synaptic inhibition between grouped neurons in an olfactory circuit. Nature (2012), 492(7427), 66-71.
2. Pannunzi M, Nowotny T. Non-synaptic interactions between olfactory receptor neurons, a possible key feature of odor processing in flies. PLoS Comp Biol (2021), 17(12), e1009583.
3. Zhang Y, Tsang TK, Bushong EA, et al. Asymmetric ephaptic inhibition between compartmentalized olfactory receptor neurons. Nature Comm (2019), 10(1), 1560.


Speakers

Thomas Nowotny

Professor of Informatics, University of Sussex, UK
I do research in computational neuroscience and bio-inspired AI. More details are on my home page http://users.sussex.ac.uk/~tn41/ and institutional homepage (link above).


Tuesday July 23, 2024 4:50pm - 6:40pm PDT
TBA

4:50pm PDT

P084 Astrocyte Morphology and Neurotransmitter Type Affect Intracellular Ca2+ and IP3 Dynamics
Astrocytes influence a variety of brain functions and behavior [1]. They also exhibit sharp increases in intracellular Ca2+ concentration (Ca2+ signals) in response to neurotransmitters. Ca2+ is released from stores in the endoplasmic reticulum (ER) through IP3 receptor channels on the ER membrane. The Ca2+ and IP3 dynamics in astrocytes depend on the cell morphology and the neurotransmitter type. However, it is not yet clear how these two factors interact and influence astrocyte activity [2].

Here we introduce a single-compartment, two-variable astrocyte model and study the effects of cell morphology and neurotransmitter type on the Ca2+ and IP3 dynamics. The model is a simplification of a biophysically detailed astrocyte model [3]. The two variables describe the intracellular Ca2+ and IP3 concentrations. We implemented the currents between the intracellular, extracellular, and intra-ER compartments, the main mechanisms generating Ca2+ signals. The model receives glutamatergic and dopaminergic inputs (simulated as Poisson processes) that promote IP3 synthesis and trigger Ca2+ signals. First, we simulated the single-compartment model with different radii and either glutamatergic or dopaminergic input. The compartment radius controls the ER volume. Next, we compared the results from the single-compartment tests to a simulation of a linear nine-compartment model. In this test, only the distal compartments received glutamatergic input, while all compartments received dopaminergic input.

The model reproduces the main characteristics of astrocyte activity, and a phase plane analysis shows that the compartment radius acts as an excitability threshold parameter. While thick compartments trigger Ca2+ signals, thinner compartments integrate inputs in terms of IP3 concentration but do not lead to Ca2+ signals. The model also shows different types of responses for glutamatergic and dopaminergic stimulations. The frequency of the response to glutamate increases linearly with the frequency of stimulation, while for dopaminergic input the response frequency saturates. When glutamatergic and dopaminergic stimulation were applied concurrently to the linear model, it triggered Ca2+ signals with a plateau in the distal compartments. This response depends on Ca2+ and IP3 diffusion between the distal compartments.

Results show that thinner astrocytic processes can integrate signals without amplification. After a Ca2+ signal is triggered in thicker astrocyte processes, it can travel back toward thinner regions. Regions with higher ER volume can both integrate and amplify signals. Finally, communication between different regions of an astrocyte process can alter the type of response generated and influence astrocyte computation.
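The qualitative radius effect can be caricatured with a highly simplified two-variable sketch; the functional forms and constants below are illustrative assumptions, not the model's actual equations.

```python
import numpy as np

def simulate(radius, stim_rate, T=300.0, dt=0.01, seed=0):
    """Toy Ca2+/IP3 compartment: Poisson input drives IP3; IP3- and
    Ca2+-gated ER release scales with radius (a proxy for ER volume)."""
    rng = np.random.default_rng(seed)
    ca, ip3 = 0.1, 0.1                          # uM
    er_scale = radius / (radius + 0.5)          # thicker => more ER
    trace = np.empty(int(T / dt))
    for i in range(trace.size):
        if rng.random() < stim_rate * dt:       # Poisson-like input
            ip3 += 0.4                          # IP3 pulse per event (uM)
        ip3 -= dt * 0.08 * ip3                  # IP3 degradation
        h_ip3 = ip3**2 / (ip3**2 + 0.3**2)
        h_ca = ca**2 / (ca**2 + 0.35**2)
        release = 2.0 * er_scale * h_ip3 * h_ca # CICR-like ER release
        ca += dt * (release - 0.5 * (ca - 0.05))
        trace[i] = ca
    return trace

thin, thick = simulate(0.1, 5.0), simulate(1.0, 5.0)
print(thin.max(), thick.max())   # thin stays near baseline; thick fires
```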

Acknowledgments

This work was produced as part of the activities of FAPESP Research, Innovation and Dissemination Center for Neuromathematics (grant 2013/07699-0). TOB is supported by a FAPESP PhD scholarship (grant 2021/12832-7). ACR is partially supported by a CNPq fellowship (grant 303359/2022-6).

References

1. Santello M, Toni N, Volterra A. Astrocyte function from information processing to cognition and cognitive impairment. Nat Neurosci. 2019, 22, 154–166.
2. Bazargani N, Attwell D. Astrocyte calcium signaling: the third wave. Nat. Neurosci. 2016, 19, 182–189.
3. Bezerra TO, Roque AC. Dopamine facilitates the response to glutamatergic inputs in a computational model of astrocytes. bioRxiv, 2022.


Tuesday July 23, 2024 4:50pm - 6:40pm PDT
TBA

4:50pm PDT

P085 Modelling astrocyte Ca2+ dynamics as an integrator of synaptic activity
Astrocytes respond rapidly to synaptic activity, with astrocyte Ca2+ promoting the release of neuroactive molecules that modulate neuronal signaling [1]. A recent study showed that radial astrocytes in zebrafish integrate signals of swimming failure and, upon sufficient astrocyte Ca2+ buildup, facilitate the activation of GABAergic neurons, which inhibit swimming [2]. This process is mediated by noradrenergic (NA) neurons, with the authors suggesting that the regulation of K+ could be the mechanism facilitating GABAergic neuron activation.

The objectives of this work were to implement the proposed circuit controlling motor behavior in zebrafish [2] and to test the K+ release hypothesis. To accomplish this, we developed a compartmental astrocyte model. The Ca2+ and IP3 dynamics of each compartment were described using a simplified version of a previous model [3]. Our model has a star-shaped morphology, with astrocytic processes comprised of three cylindrical compartments and a spherical soma (16 compartments). Glutamatergic input was applied to the tips of the astrocyte processes to simulate a local circuit, while all compartments received NA input. The post-astrocytic GABAergic neuron was modeled as a Hodgkin-Huxley neuron, wherein the extracellular K+ concentration is time- and Ca2+-dependent. To investigate the interplay between local and NA input, we randomly adjusted the strength of the local circuitry input and examined whether local circuitry activity was necessary to observe the rapid NA response. To test the K+ release hypothesis, we increased the extracellular K+ concentration during each Ca2+ signal.

Simulations revealed that both local circuitry and NA input were necessary for generating Ca2+ signals with temporal precision. NA input alone did not trigger any Ca2+ spikes, while local circuitry input triggered Ca2+ signals with low time precision. Local circuitry likely promoted Ca2+ accumulation prior to NA input, which then activated the astrocyte. Varying the timing of the NA input in relation to the end of the local circuit input indicated that the astrocyte response exhibited high sensitivity to the input timing of local circuitry and NA. Finally, K+ released from the astrocyte was sufficient to trigger action potentials in the GABAergic neuron.
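The quantitative heart of the K+ hypothesis is the Nernst relation; the sketch below shows how an assumed Ca2+-gated rise in extracellular K+ shifts E_K (the baseline and event concentrations are illustrative):

```python
import numpy as np

def nernst_EK(K_out, K_in=140.0, T=298.15):
    """Nernst potential for K+ (mV) from the concentration ratio."""
    R, F_const = 8.314, 96485.0            # J/(mol*K), C/mol
    return 1000.0 * (R * T / F_const) * np.log(K_out / K_in)

baseline, during_event = 3.0, 12.0         # mM, assumed Ca2+-gated release
print(nernst_EK(baseline), nernst_EK(during_event))
# E_K rises from about -99 mV to about -63 mV, enough to depolarize a
# neuron toward spike threshold, consistent with the simulated effect
# on the GABAergic neuron.
```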

Acknowledgments

This work started as a student project during the IX Latin American School on Computational Neuroscience (LASCON 2024), held in São Paulo, Brazil, January 8 - February 2, 2024, and supported by FAPESP (grant 2023/06880-4), CNPq (grant 445851/2023-6) and the IBRO-LARC Schools Funding Program. This work was produced as part of the activities of FAPESP Research, Innovation and Dissemination Center for Neuromathematics (grant 2013/07699-0). TOB is supported by a FAPESP PhD scholarship (grant 2021/12832-7). ACR is partially supported by a CNPq fellowship (grant 303359/2022-6). PR acknowledges travel support from the Florida Atlantic University Graduate Professional and Student Association as well as the Jupiter Life Science Initiative.

References

1. Covelo A, Araque A. Neuronal activity determines distinct gliotransmitter release from a single astrocyte. eLife. 2018, 7, e32237.
2. Mu Y, et al. Glia accumulate evidence that actions are futile and suppress unsuccessful behavior. Cell. 2019, 178.1, 27–43.
3. Bezerra TO, Roque AC. Dopamine facilitates the response to glutamatergic inputs in a computational model of astrocytes. bioRxiv, 2022.


Tuesday July 23, 2024 4:50pm - 6:40pm PDT
TBA

4:50pm PDT

P086 Integrating the reaction-diffusion NEURON module in a Purkinje cell model
A major challenge in understanding how neurons process synaptic input is to build detailed biophysical models of neuronal function that integrate morphology, electrophysiology, and biochemical reactions. The Purkinje cell is the principal neuron of the cerebellar cortex. Modeling of Purkinje cell electrophysiology spans more than 60 years of effort [1]. This neuron contains voltage-activated calcium conductances as well as calcium-activated potassium conductances. These conductances are distributed over the soma, axonal initial segment, and a complex dendritic tree. As such, there is a need to accurately and efficiently model intracellular calcium diffusion. Furthermore, calcium is a second messenger essential for the activation of biochemical reactions involved in the expression of long-term synaptic plasticity [2].
In all models of Purkinje cells, calcium diffusion is assumed to be exclusively a radial process. However, synaptic plasticity in the granule cell to Purkinje cell synapse requires calcium influx through membrane conductances in conjunction with calcium released from intracellular stores. The synapses are on passive dendritic spines. Since voltage activated calcium conductances are only on the dendrite, but not on the spine, there is a need to model axial diffusion between these compartments to study synaptic plasticity in full Purkinje cell models.
In this project we will describe our efforts to integrate the reaction-diffusion (RxD) module of NEURON [3] into a highly detailed model of a Purkinje cell. The first study aims to reproduce the normal excitability of the cell under current clamp conditions. The second implements axial diffusion in the spiny dendrites and 3D diffusion in the smooth dendrites and soma compartments for computational efficiency. The third study looks at axial diffusion between the dendrite and a passive spine head after activation of the climbing fiber input. Finally, we implement a reduced model where a dendritic segment has one spine. The spine contains the biochemical reactions involved in the expression of long-term depression (LTD) [4]. We will describe the technical challenges, advantages, and disadvantages of each implementation compared to traditional methods. Our study will be a platform for all those interested in using realistic reaction-diffusion models with morphologically complex neuronal models.
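A hedged sketch of the axial-diffusion configuration from the third study, with illustrative geometry and rates (not the model's actual parameters):

```python
from neuron import h, rxd

h.load_file('stdrun.hoc')

dend = h.Section(name='dend')
spine = h.Section(name='spine')
spine.connect(dend(0.5))          # passive spine attached mid-dendrite
dend.L, dend.diam = 20, 2         # um, illustrative geometry
spine.L, spine.diam = 1, 0.5

# Cytosolic region spanning both sections enables axial Ca2+ diffusion:
cyt = rxd.Region([dend, spine], name='cyt', nrn_region='i')
ca = rxd.Species(cyt, name='ca', charge=2, d=0.23, initial=50e-6)  # mM

# A simple first-order decay standing in for pumps and buffering:
decay = rxd.Rate(ca, -(ca - 50e-6) / 50.0)

h.finitialize(-65)
ca.nodes(dend(0.5))[0].concentration = 1e-3   # localized Ca2+ rise
h.continuerun(20)
print(ca.nodes(spine(0.5))[0].concentration)  # Ca2+ reaching the spine
```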


 
References
1. Bower JM. The 40-year history of modeling active dendrites in cerebellar Purkinje cells: emergence of the first single cell "community model". Front Comput Neurosci. 2015, 9, 129.
2. Zamora Chimal CG, De Schutter E. Ca(2+) Requirements for Long-Term Depression Are Frequency Sensitive in Purkinje Cells. Front Mol Neurosci. 2018, 11, 438.
3. Carnevale NT, Hines ML. The NEURON Book. Cambridge: Cambridge University Press; 2006.
4. Kuroda S, Schweighofer N, Kawato M. Exploration of signal transduction pathways in cerebellar long-term depression by kinetic simulation. J Neurosci. 2001, 21(15), 5693-5702.


Tuesday July 23, 2024 4:50pm - 6:40pm PDT
TBA

4:50pm PDT

P087 Dynamic range and pattern formation near transition points of networks of either map-based neurons or heart cells
Discrete-time recurrent equations (also known as maps) can describe the full dynamics of the action potential (AP) of both neurons and heart cells [1]. In particular, maps of the KTz family were successfully employed to unveil synchronization features of adaptive networks [2], and describe critical avalanche dynamics [3]. Here, we perform two case studies of networks of the logistic KTz map [1]: (a) we first look at the patterns emerging in a diffusive lattice of mixed healthy and unhealthy cardiomyocytes with plateau action potential (AP); and (b) we create a lattice of these maps with traditional (narrow) spikes and determine the magnitude of the dynamic range around a continuous phase transition, expecting it to be maximized at the critical point as predicted by branching models [4].
We identified three mechanisms that generated disrupted cardiomyocyte APs: (a) prolongation of the spike repolarization via infinite period bifurcation causes early afterdepolarization (EAD), linked to cardiac arrhythmias; (b) a multistable transition to bursting induces delayed afterdepolarization (DAD) with increasing fast sodium conductance; (c) a fast unstable spiral gives rise to a non-chaotic aperiodic cycle as the spiking slows down. We then generated diffusive networks with mixed healthy and unhealthy cell models. We study the conditions for the appearance of dynamical patterns, such as traveling pulses, synchronization, and spiral waves.
The map-based neuronal network is constructed in a square lattice with excitatory chemical synapses. The intrinsic dynamics of the network allows us to show that the synapses need noise to generate a critical point. But more importantly, the synaptic recovery time scales must be fast compared to the refractory periods of the neurons in order to dissipate excess spiking. Contrary to what is found for simple integrate-and-fire or branching models, we show that the stimulus-response curves of map-based networks, and consequently their defining dynamic range, show reentrant behavior as the phase transition occurs.
The study of maps opens the door to the identification of dynamical parameters influencing the collective behavior. This is because these mathematical structures have far fewer parameters and variables than conductance-based models, enabling a comprehensive exploration of the parameter space in simulations and bringing an integrative view of complex phenomena such as heart-cell and neuronal networks.
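For concreteness, a sketch of iterating a KTz-family map, assuming the logistic form with saturation f(u) = u/(1+|u|); the parameter values below are illustrative, not the ones used in the study.

```python
import numpy as np

def f(u):
    """Logistic saturation used by the logistic KTz map [1]."""
    return u / (1.0 + abs(u))

def ktz_step(x, y, z, K=0.6, T=0.35, delta=0.001, lam=0.001,
             xR=-0.5, I=0.0):
    """One iteration: x is the membrane potential, y a recovery
    variable, z a slow adaptation current."""
    x_new = f((x - K * y + z + I) / T)
    z_new = (1 - delta) * z - lam * (x - xR)
    return x_new, x, z_new

x, y, z = -0.5, -0.5, 0.0
trace = np.empty(5000)
for t in range(trace.size):
    x, y, z = ktz_step(x, y, z)
    trace[t] = x
# Sweeping (K, T, delta, lam, xR) moves the cell between excitable,
# spiking, bursting, and plateau (cardiac-like) regimes, which is what
# makes the map useful for mixed healthy/unhealthy lattices.
```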
Acknowledgements
B.L.P. acknowledges financial support from FAPESC.
References
1. Girardi-Schappo M, Bortolotto GS, Stenzinger RV, Gonsalves JJ and Tragtenberg MHR (2017): Phase diagrams and dynamics of a computationally efficient map-based neuron model. PLoS ONE, 12(3):e0174621.
2. Rhâmidda SL, Girardi-Schappo M, Kinouchi O (2024): Optimal input reverberation and homeostatic self-organization towards the edge of synchronization. arXiv: 2402.05032 [nlin.AO]
3. Girardi-Schappo M, Kinouchi O and Tragtenberg MHR (2013): Critical avalanches and subsampling in map-based neural networks coupled with noisy synapses. Phys Rev E, 88:024701.
4. Kinouchi O, Copelli M (2006): Optimal dynamical range of excitable networks at criticality. Nat Phys 2  348-351.


Tuesday July 23, 2024 4:50pm - 6:40pm PDT
TBA

4:50pm PDT

P088 Deep linear networks: a framework for understanding perceptual learning in natural and artificial networks
Perceptual learning, the process by which sensory experiences fine-tune our neural responses, is fundamental to both human and artificial intelligence systems. Despite significant advancements, understanding the mechanisms underlying these adaptations remains a challenge. Deep linear networks (DLNs), by mimicking the brain's hierarchical structure in a computationally and mathematically simple form, offer a promising framework for bridging this gap. This research investigates the capacity of DLNs to model and elucidate the dynamics of perceptual learning, leveraging their structural parallels with the visual cortical hierarchy.
Confronted with the complexity of neural changes during perceptual tasks, previous models have often been constrained to alterations within single layers of the cortex or to the re-weighting of connections to decision-making structures. However, these approaches fall short of capturing the holistic nature of learning across the brain's layered structure. Our study introduces the application of DLNs to address this limitation, presenting a comprehensive model that spans multiple levels of the cortical hierarchy. While simple, this model captures some of the fundamental aspects of learning across the visual stream and compares to learning in a complex Deep Neural Network (DNN).
Employing gradient descent learning dynamics, our model provides insights into the roles of deep neural structures and pre-training in learning processes. Specifically, we demonstrate how deep networks navigate the challenges of non-convex learning landscapes, revealing a strategic shift from targeting the most informative neurons in shallow architectures to optimising the least informative layers in deeper configurations. Additionally, our analyses reveal variations in the magnitude of changes across different layers. The changes in the decision layer, which is initially untuned, happen faster and have a greater magnitude compared to the highly tuned representation layer (Fig. 1). This is in line with the Reverse Hierarchy Theory. Finally, our analyses suggest that the neurons undergoing changes depend on the precision of the task executed by the network. In high-precision tasks, the most informative neurons change the most. Conversely, in low-precision tasks, the most active neurons exhibit greater changes (Fig. 1). These findings align with empirical results and DNN simulations, illustrating neural adaptation patterns that favour a hierarchical, depth-dependent modulation.
Deep linear networks emerge as a powerful tool for advancing our understanding of perceptual learning. Further expansions of these models include the implementation of a two-eye system to investigate the effect of incongruent inputs or the implementation of different learning protocols to unravel the role of curriculum learning on the learning outcomes. Our model's predictions about the size and timing of changes across the cortical hierarchy, as well as the differential impacts of task precision on learning outcomes, provide a robust framework for interpreting experimental data and designing future studies. By capturing the depth and fundamental complexity of the cortical hierarchy, deep linear networks offer new perspectives on the neural bases of learning and adaptation. This research not only captures previous experimental findings but, by utilising an analytically tractable model, it also deepens our theoretical understanding of perceptual learning.
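A minimal two-layer deep linear network makes the setup concrete: here the representation layer is initialized at a "tuned" scale and the readout near zero (both assumptions), and tracking the two gradient norms during training shows the readout changing earlier and more strongly.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid, n_out, n_samp = 20, 10, 1, 500
X = rng.standard_normal((n_samp, n_in))
Y = X @ rng.standard_normal((n_in, n_out))                # linear teacher

W1 = rng.standard_normal((n_in, n_hid)) / np.sqrt(n_in)   # "tuned" repr.
W2 = 0.01 * rng.standard_normal((n_hid, n_out))           # untuned readout
lr = 1e-3

for epoch in range(2000):
    err = X @ W1 @ W2 - Y                      # prediction error
    gW1 = X.T @ err @ W2.T / n_samp            # dL/dW1
    gW2 = (X @ W1).T @ err / n_samp            # dL/dW2
    W1 -= lr * gW1
    W2 -= lr * gW2
    if epoch % 500 == 0:
        print(epoch, np.linalg.norm(gW1), np.linalg.norm(gW2))
# Early on ||gW2|| >> ||gW1|| (the readout is untuned but sits downstream
# of a strong representation), mirroring the faster, larger
# decision-layer changes described above.
```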



Tuesday July 23, 2024 4:50pm - 6:40pm PDT
TBA

4:50pm PDT

P089 An extended and improved CCFv3 annotation and Nissl atlas of the mouse brain
Reference atlases play an essential role in understanding the complex nature of the brain. Conventional histology coupled with microscopic imaging persists as the prevailing method for exploring the rodent brain. The Allen Institute provides one of the most precise and widely used mouse brain atlases [1]. The recently released Common Coordinate Framework version 3 (CCFv3) delineates over 600 anatomical regions but excludes the rostral and caudal parts of the brain from the atlas. Another limitation arises because the reference Nissl-stained volume (Nissl volume) is not precisely aligned with the CCFv3 [2]. Furthermore, the CCFv3 does not include the delineation of the granular and molecular layers of the cerebellum. Here, we aim to overcome these limitations by creating an extended Nissl volume which is accurately aligned with the CCFv3 and includes the missing layers of the cerebellum, through several automated registration methods. First, the Nissl volume should be accurately registered to the template on which the CCFv3 was delineated [2]. To tackle the complex challenge of large-scale multimodal registration between the Nissl volume and the template, we introduced a method prioritizing a hierarchical region-by-region processing schema, incorporating widely recognized registration algorithms from the literature [3,4]. We used Normalized Mutual Information [5] to measure the improvement of the multimodal registration at each hierarchical level of the ontology. Second, we identified an appropriate dataset [2] that covers the rostral and caudal regions of the brain and filled the missing parts of the Nissl volume using automated registration techniques. Given this new Nissl-stained tissue, we extended the CCFv3 and validated it using image processing tools under expert supervision. We also automatically identified new cerebellar layers using the same dataset. The new atlas was processed at 25 and 10 micrometer isotropic resolutions. These improvements make the atlas more generic and accurate, paving the way for more in-depth studies in regions such as the olfactory bulb or the cerebellum. Moreover, the registered Nissl volume will enable precise quantitative analysis of cells in the new extended brain atlas.
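For reference, a small self-contained sketch of the Normalized Mutual Information score [5] used to quantify registration improvement (random arrays stand in for the Nissl volume and the template):

```python
import numpy as np

def nmi(a, b, bins=64):
    """Normalized Mutual Information, (H(X)+H(Y))/H(X,Y), from a joint
    intensity histogram (Studholme-style definition)."""
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = hist / hist.sum()
    px, py = pxy.sum(1), pxy.sum(0)
    def H(p):
        p = p[p > 0]
        return -(p * np.log(p)).sum()
    return (H(px) + H(py)) / H(pxy)

fixed = np.random.default_rng(0).random((64, 64, 64))
moving = fixed + 0.1 * np.random.default_rng(1).random((64, 64, 64))
print(nmi(fixed, moving))               # higher = better alignment
```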
Acknowledgments

This study was supported by funding to the Blue Brain Project, a research center of the École polytechnique fédérale de Lausanne (EPFL), from the Swiss government’s ETH Board of the Swiss Federal Institutes of Technology.
References



1. Wang Q, et al. The Allen mouse brain common coordinate framework: a 3D reference atlas. Cell. 2020, 181(4), 936-953.
2. Kuan L, et al. Neuroinformatics of the Allen mouse brain connectivity atlas. Methods. 2015, 73, 4-17.
3. Avants BB, et al. Symmetric diffeomorphic image registration with cross-correlation: evaluating automated labeling of elderly and neurodegenerative brain. Medical Image Analysis. 2008, 12(1), 26-41.
4. Modat M, et al. Global image registration using a symmetric block-matching approach. Journal of Medical Imaging. 2014, 1(2), 024003.
5. Studholme C, et al. Normalized entropy measure for multimodality image alignment. Medical Imaging 1998: Image Processing, SPIE. 1998, Vol. 3338, pp. 132-143.


Tuesday July 23, 2024 4:50pm - 6:40pm PDT
TBA

4:50pm PDT

    P090 Neuronal interactions described with a Game Theory-inspired network model
    Exploring the complexities of the brain from a computational standpoint involves navigating the intricate structure and functioning of this organ. This line of research relies heavily on simplified experimental models such as in vitro assemblies (slices, cultures, organoids, etc.). Nonetheless, the experimental findings obtained with these models alone are not enough to provide a comprehensive mathematical and biophysical representation of the observed phenomena. The advancement of techniques enabling the creation of in vitro neuronal models must therefore be supported by the development of in silico biophysical models capable of capturing the principles determining the experimental outcomes. From this perspective, two primary considerations emerge. Firstly, there is a need to mathematically model the electrophysiological dynamics of individual neurons, striking a balance between the simplicity of the mathematical implementation and the capacity to accurately emulate real neuronal dynamics from a biophysical point of view. Secondly, understanding and modeling the intricate connections and communication among diverse neurons, particularly within large-scale networks, poses a critical challenge. This work introduces a novel network model designed to replicate the electrophysiological dynamics observed in neuronal networks. The modeling approach employs the well-established Hindmarsh-Rose model to describe the dynamics of each neuron of the network. Briefly, the model consists of three differential equations: the first characterizes the electrophysiological behaviour of the membrane potential, while the remaining two describe the slow and fast ion flows. This formulation enables the Hindmarsh-Rose model to exhibit both spiking and bursting dynamics, capturing essential features of real neuronal behaviour. In this work, we integrated principles of Evolutionary Game Theory to model the communication among different neurons. Specifically, we showed that the "game" previously observed in the activation levels of brain regions, as measured with functional Magnetic Resonance Imaging (fMRI), scales down to the synaptic level. Remarkably, alongside the dynamics generated by the Hindmarsh-Rose neuron, our model embodies the main features of emulation or non-emulation among distinct nodes of the network, which are characteristic of Evolutionary Game Theory. Mathematically, the relationship between the membrane potentials of the i-th and j-th neurons is modulated by a parameter a_ij, where i and j range from 1 to the total number of neurons in the simulated network. A positive (negative) a_ij formalizes an emulative (non-emulative) behaviour between the i-th and j-th neurons. Collectively, the a_ij values are the entries of the adjacency matrix A, formulated within the framework of Evolutionary Game Theory. This matrix delineates the emulation or non-emulation relationships among all the nodes within the network, with the objective of furnishing a parameter set able to capture functional communication mechanisms different from those targeted by prevalent connectivity-estimation methods. Derived from a distinct paradigm, these parameters offer a novel formalization of neuronal network interaction, potentially yielding valuable insights into the experimental dynamics when the proposed model is fitted to data.
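
    A minimal sketch of the kind of network described above, assuming a standard Hindmarsh-Rose parameterization and a linear diffusive coupling weighted by the signed entries a_ij of the adjacency matrix A (the exact coupling form used by the authors is not specified here):

```python
import numpy as np

# Minimal sketch: N Hindmarsh-Rose neurons coupled through a signed
# adjacency matrix A, where A[i, j] > 0 (emulative) or < 0 (non-emulative).
# Coupling form and parameter values are illustrative assumptions.
N, T, dt = 10, 100000, 0.01
a, b, c, d, r, s, xR, I = 1.0, 3.0, 1.0, 5.0, 0.006, 4.0, -1.6, 3.0

rng = np.random.default_rng(0)
A = 0.05 * rng.uniform(-1.0, 1.0, size=(N, N))   # signed "game" couplings
np.fill_diagonal(A, 0.0)

x = rng.uniform(-1.5, 1.5, N)  # membrane potential
y = np.zeros(N)                # fast ion current
z = np.zeros(N)                # slow ion current
trace = np.empty((T, N))

for t in range(T):
    coupling = A @ x - A.sum(axis=1) * x       # sum_j A_ij * (x_j - x_i)
    dx = y - a * x**3 + b * x**2 - z + I + coupling
    dy = c - d * x**2 - y
    dz = r * (s * (x - xR) - z)
    x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
    trace[t] = x
```

    Positive entries of A pull neuron i toward neuron j's potential (emulation); negative entries push it away (non-emulation).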


    Tuesday July 23, 2024 4:50pm - 6:40pm PDT
    TBA

    4:50pm PDT

    P091 Wavelet-based detection algorithm for the electrophysiological characterization of neurospheroid signals
    Exploiting dissociated neurons coupled to a grid of microelectrodes is a fundamental in vitro approach for studying and characterizing network dynamics. However, oversimplification in these models has to be avoided, as it may lead to disregarding essential characteristics that distinguish the real brain, such as its three-dimensionality and the modular interactions between different cerebral regions. In this work, we explored a methodology for detecting and characterizing electrophysiological signals in in vitro neuronal spheroids derived from rat embryonic tissue. Unlike conventional two-dimensional models, neurospheroids mimic the three-dimensional structure of the brain, providing a unique platform for studying neural interactions. When two spheroids are coupled together, they create a structure called an 'assembloid', which allows for simplified modeling of modular 3D interactions among different brain regions when the interaction involves spheroids of distinct types (e.g., hippocampal versus cortical). This work focuses on the challenges that arise when transitioning from planar networks to a spheroid model, where conventional spike-detection methods fail more often because of the overlapping contributions of numerous active neurons within a compact spherical volume. The recorded signal is instead a wave with slower dynamics than the rapid spiking activity. Thus, to analyse the signal recorded from the neurospheroids, we devised an approach based on the wavelet transform, which is commonly utilized for wave-like signals. Specifically, we applied a thresholding technique to the spectrogram to identify spheroid-generated events and filter out noise. This method allows simultaneous detection in both the time and frequency domains, extracting the temporal intervals containing spheroid signals. It provides insights into the temporal dynamics and the specific frequency bands associated with each detected interval. To facilitate the comparison of time-frequency information across the different experimental configurations achievable with this model, we quantified the expression of the well-known brain waves, including delta (1-4 Hz), theta (4-8 Hz), alpha (8-12 Hz), beta (12-35 Hz), and gamma (35-100 Hz), for each experiment. This approach enables us to examine the impact of brain modularity and of the interaction between different cerebral regions on the appearance of brain rhythms in comparison to a control condition. In particular, we analyzed signals from assembloids consisting of two spheroids of the same neuronal type (testing brain modularity) and assembloids made of two spheroids of distinct neuronal types (testing the interaction between different cerebral regions). A control condition was established with a single spheroid composed of a single neuronal type. The reliability of the achieved results makes the proposed experimental model, as well as the analysis method, potentially adaptable and applicable to more intricate electrophysiological assessments, taking into account not only the spontaneous activity of the spheroid but also other dynamics that may be generated by chemical modulation with a drug or by spheroids affected by specific neurodegenerative diseases.
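
    A hedged sketch of the detection idea described above, using a continuous wavelet scalogram in place of the authors' exact spectrogram pipeline; the sampling rate, wavelet choice, and threshold rule are illustrative assumptions:

```python
import numpy as np
import pywt

fs = 1000.0                      # sampling rate (Hz), assumed
t = np.arange(0, 60, 1 / fs)
lfp = np.random.randn(t.size)    # stand-in for a neurospheroid recording

# Wavelet scalogram over a 1-100 Hz analysis range (complex Morlet wavelet).
freqs = np.linspace(1, 100, 120)
scales = pywt.central_frequency('cmor1.5-1.0') * fs / freqs
coef, f = pywt.cwt(lfp, scales, 'cmor1.5-1.0', sampling_period=1 / fs)
power = np.abs(coef) ** 2        # time-frequency power

# Detect events: time points whose total power exceeds a robust
# (median + 5*MAD) noise-based threshold; threshold rule is an assumption.
tot = power.sum(axis=0)
mad = np.median(np.abs(tot - np.median(tot)))
mask = tot > np.median(tot) + 5 * mad

# Band power within detected intervals, for the canonical bands above.
bands = {'delta': (1, 4), 'theta': (4, 8), 'alpha': (8, 12),
         'beta': (12, 35), 'gamma': (35, 100)}
for name, (lo, hi) in bands.items():
    sel = (f >= lo) & (f < hi)
    print(name, power[sel][:, mask].mean() if mask.any() else 0.0)
```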


    Tuesday July 23, 2024 4:50pm - 6:40pm PDT
    TBA

    4:50pm PDT

    P092 Exploring neurotransmitter release in a model of synaptic dynamics
    Synaptic dynamics are influenced by many factors, two of the main ones being the fraction of neurotransmitters released and the spike times of the firing pattern [1]. In this study, we obtain an approximate solution for the dynamics of a synaptic model, investigating the fraction of activated neurotransmitters, which is directly related to the intensity of the conductance transmitted by the synapse. To do so, we drive the presynaptic neurons with periodic and random spike trains with the same mean frequencies. Periodic and random spikes are the two extreme cases expected for neuronal spiking: regular spikes with low variability and irregular spikes with high variability, respectively [2,3]. We separate the synaptic dynamics into three regimes: facilitation, depression, and biphasic. For each of these regimes, we determine the final and maximal values of active neurotransmitters for different mean frequencies and fractions of neurotransmitters released. In this framework, we trace the correspondence between the periodic and random spike activities for the fraction of active neurotransmitters. The methodology and results provided in this study may inspire and open the avenue for obtaining solutions for more complex synaptic and neuron models in the future.
    [1] Tsodyks M., Uziel A., Markram H. Synchrony generation in recurrent networks with frequency-dependent synapses. The Journal of Neuroscience. 2000, 20, 1-5.

    [2] Mazzoni A., Broccard F.D., Garcia-Perez E., Bonifazi P., Ruaro M.E., Torre V. On the dynamics of the spontaneous activity in neuronal networks. PloS One. 2007, 2, e439.

    [3] Perucca P., Dubeau F., Gotman J. Intracranial electroencephalographic seizure-onset patterns: effect of underlying pathology. Brain. 2014, 137, 183-196.
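
    The synaptic model is not written out above; below is a minimal sketch in the spirit of the Tsodyks-Markram formulation of [1], with assumed parameters and a simple Euler scheme, comparing periodic and Poisson presynaptic trains of the same mean frequency:

```python
import numpy as np

# Resources cycle through recovered (x), active (y), and inactive (z)
# states; U is the fraction of recovered resources released per spike.
# Parameter values and the Euler scheme are illustrative assumptions.
def active_fraction(spike_times, U=0.5, tau_in=3.0, tau_rec=800.0,
                    dt=0.1, t_max=2000.0):
    x, y, z = 1.0, 0.0, 0.0
    spikes = set(np.round(np.asarray(spike_times) / dt).astype(int))
    ys = []
    for k in range(int(t_max / dt)):
        if k in spikes:              # release: U * x moves to the active state
            y += U * x
            x -= U * x
        dx = z / tau_rec
        dy = -y / tau_in
        dz = y / tau_in - z / tau_rec
        x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
        ys.append(y)
    return np.array(ys)

rate = 0.02                          # spikes/ms (20 Hz mean frequency)
periodic = np.arange(0, 2000.0, 1 / rate)
rng = np.random.default_rng(1)
random_train = np.cumsum(rng.exponential(1 / rate, 40))

y_per = active_fraction(periodic)
y_rnd = active_fraction(random_train)
print('max active fraction: periodic %.3f, random %.3f'
      % (y_per.max(), y_rnd.max()))
```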


    Tuesday July 23, 2024 4:50pm - 6:40pm PDT
    TBA

    4:50pm PDT

    P093 Spatiotemporal Variability in EEG Recordings Associated with Cognitive Impairments in Parkinson's Disease
    Electroencephalography (EEG) stands as a powerful and inexpensive diagnostic resource, but it is difficult to interpret. Established methods rely heavily on signal processing, statistical averaging, and data science techniques, often overlooking the physiological mechanisms of the neuronal activity recorded by EEG [1-3]. Understanding these mechanisms is nevertheless important for neuroscientists who investigate potential causes for brain activity change in patients with cognitive impairments as observed, for example, in Parkinson’s Disease (PD) [2-3].

    Numerical tools like the Human Neocortical Neurosolver (HNN) [4-5] provide an alternative, modeling-based approach to EEG data: HNN calculates predicted EEG readings by simulating the postsynaptic current flows in the dendritic trees of pyramidal neurons. However, scaling such models to multiple simultaneous EEG channels has proven to be computationally intractable. Modern theoretical developments in large-scale brain activity analysis have instead turned to neural field models, which treat the cortex as a continuum, rather than a discrete collection of neurons [6]. This approach is equivalent to projecting the problem onto a lower-dimensional system of integro-differential equations, significantly increasing computational efficiency.

    In this presentation we analyze EEG recordings from a single, midfrontal electrode (Cz) in a group of participants across a range of cognitive function: (1) PD patients with dementia, (2) PD patients with mild cognitive impairments, and (3) a control group of healthy individuals. The EEG data were collected while the participants performed an interval timing task in which they were asked to estimate an interval of several seconds by making a motor response [2-3]. We propose a two-population, second-order neural field model which encodes the behavior of the HNN model [4-5] into a spatiotemporal kernel. Through simulations, we demonstrate comparable dynamics between the two models in a single cortical column. We utilize both models to capture key features of the interval-timing EEG data at the Cz electrode, then compare them against previous results obtained using classic signal processing methods [2-3]. We show that our neural field model performs on par with the Jones et al. model [4] but at greatly reduced computational cost, while capturing differences in specific brain rhythms between PD participants and controls. Those differences are encoded in the model's parameterization which, in turn, suggests potential neurophysiological changes associated with the occurrence, or evolution, of PD.

    Our findings underscore the potential of neural field models in advancing EEG-based diagnostics, paving the way for enhanced understanding and treatment of neurological disorders. In future work, we aim to leverage the improved efficiency of our neural field model, by expanding it to the broader, multi-electrode case.

    References
    1. Delorme A, Makeig S. J Neurosci Methods. 2004, 134, 9-21.

    2. Singh A, et al. NPJ Parkinson's Disease. 2021, 7(14), 1-7.

    3. Singh A, et al. J Neurol Neurosurg Psychiatry. 2023, 94(11), 945-953.

    4. Jones SR, et al. J Neurophysiol. 2009, 102, 3554–3572.

    5. Neymotin SA, et al. Elife. 2020, 9, e51214.

    6. Cook BJ, Peterson ADH. Mathematical Neuroscience and Applications. 2022, 2(2), 1-67.
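
    A minimal sketch of a two-population, second-order rate model of a single cortical column, a common reduction in neural field theory; the connectivity, time constants, and input below are illustrative assumptions, not the authors' parameterization:

```python
import numpy as np

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

tau = np.array([10.0, 20.0])          # ms, per-population time constants
W = np.array([[1.6, -2.0],            # E->E, I->E
              [1.5, -0.5]])           # E->I, I->I
dt, T = 0.1, 5000
u = np.zeros(2)                       # population activity (E, I)
v = np.zeros(2)                       # du/dt
trace = np.empty((T, 2))

for k in range(T):
    drive = W @ sigmoid(u) + np.array([1.0, 0.0])   # external input to E
    # second-order dynamics: tau^2 u'' + 2 tau u' + u = drive
    acc = (drive - u - 2.0 * tau * v) / tau**2
    v += dt * acc
    u += dt * v
    trace[k] = u
```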



    Tuesday July 23, 2024 4:50pm - 6:40pm PDT
    TBA

    4:50pm PDT

    P094 KTH Model: Investigating single neuron functionality
    This work highlights the importance of computational simulations in understanding neuronal behaviors and the emergence of disease in the complex network of the brain, focusing on the critical behavior of neural networks near synchronization bifurcation points. We present a new biological neuronal network model using coupled maps characterized by variables that approximate important biological features with minimal computational cost and a straightforward mathematical formalism. Referred to as the KTH model, this map-based neuron exhibits a remarkable ability to replicate various biological behaviors, including fast and slow spikes, bursting phenomena, chaotic spikes, and cardiac pulses, thus offering valuable insights into the dynamics of neuronal systems.
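
    The KTH model's equations are not given in the abstract, so the sketch below uses the well-known Rulkov map, a comparable map-based neuron, purely to illustrate how a two-variable map produces spiking and bursting at negligible computational cost:

```python
import numpy as np

# Rulkov map (illustrative stand-in, not the KTH model): a fast variable x
# (~ membrane potential) and a slow variable y; alpha > 4 gives chaotic
# bursting. Parameter values are standard choices for this map.
alpha, mu, sigma = 4.5, 0.001, -1.1
x, y = -1.0, -3.5
xs = np.empty(20000)

for n in range(xs.size):
    x, y = alpha / (1.0 + x * x) + y, y - mu * (x - sigma)
    xs[n] = x
```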



    Tuesday July 23, 2024 4:50pm - 6:40pm PDT
    TBA

    4:50pm PDT

    P095 A Methodology for Explaining Computational Psychiatric Diagnoses with Large Language Models, Integrated Gradients, and Linguistic Analysis
    Brain disorders such as Alzheimer's Disease (AD) and schizophrenia impact the language output of those affected in numerous ways that are still not fully known. Recent advances in computational psychiatry, an emerging discipline within neuroinformatics, have approached this subject by providing multiple strategies for automatically identifying a potential disorder through analysis of discourse. Natural Language Processing (NLP) methods have achieved extraordinary accuracy in this task. However, for actual clinical suitability, there needs to be more clarity about which aspects of speech were decisive for the automatic diagnostic decision. Here, we describe a methodology for achieving syntax-related explainability of Large Language Models (LLMs) in text classification tasks of interest to the computational psychiatry community. The method uses Integrated Gradients (IG) attribution to identify the segments of the text most relevant to the decision process, and the Linguistic Inquiry and Word Count (LIWC) toolkit to annotate these segments with appropriate syntactic and linguistic descriptors. At the study level, the methodology can pinpoint which descriptors are statistically pertinent to the diagnoses, whereas at the individual level it can describe the segments relevant to the decision. We demonstrate the use of the method on an English dataset of audio recordings and transcripts from the Cookie Theft picture description task, with a fine-tuned BERT (Bidirectional Encoder Representations from Transformers) model that achieved an accuracy of 87% under 5-fold cross-validation. We discuss how to apply the methodology in scientific and clinical settings, and its limitations.
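
    A hedged sketch of the attribution step, assuming the Captum and Hugging Face Transformers APIs; the model checkpoint, target class, and baseline choice are illustrative, and the fine-tuned diagnostic model of the study is not reproduced here:

```python
import torch
from transformers import BertTokenizer, BertForSequenceClassification
from captum.attr import LayerIntegratedGradients

tok = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertForSequenceClassification.from_pretrained('bert-base-uncased')
model.eval()

text = "The boy is on the stool reaching for the cookie jar"
enc = tok(text, return_tensors='pt')
baseline = torch.full_like(enc['input_ids'], tok.pad_token_id)

def forward(input_ids):
    return model(input_ids).logits

# Integrated Gradients over the embedding layer: one relevance score
# per input token, for an assumed target class (1 = "disorder").
lig = LayerIntegratedGradients(forward, model.bert.embeddings)
attr = lig.attribute(enc['input_ids'], baselines=baseline, target=1)
scores = attr.sum(dim=-1).squeeze(0)

for token, s in zip(tok.convert_ids_to_tokens(enc['input_ids'][0]), scores):
    print(f'{token:>12s} {s:+.3f}')
# High-scoring segments would then be annotated with LIWC descriptors.
```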



    Tuesday July 23, 2024 4:50pm - 6:40pm PDT
    TBA

    4:50pm PDT

    P096 A Century of the Alpha Rhythm and Its Relatives: A Unified Theory via Eigenmodes
    Berger first recorded human EEG on 6 July 1924, the first noninvasive measurement of human brain activity; he noted the ~10 Hz alpha rhythm to be the most prominent activity [1]. Alpha is concentrated over visual cortex at the back of the head, sometimes displays a double peak, and is suppressed by visual inputs [2]; the beta rhythm occurs at its harmonic. Later, the ~10 Hz mu rhythm was discovered, concentrated over sensorimotor cortex near the crown of the head, suppressed by motor activity, and sometimes associated with ~20 Hz activity [2]. The ~10 Hz tau rhythm is concentrated over auditory cortex near the ears and is suppressed by sound. Early theories argued that separate groups of neurons fire at ~10 Hz or ~20 Hz at the relevant locations, but these were ad hoc and lacked explanatory power [3]. More recently, the alpha rhythm was argued to be a natural mode of activity of the cortex [3] or of the corticothalamic (CT) system [4,5], and was analyzed using neural field theory (NFT), which averages over the activity of large numbers of neurons in order to calculate the dynamics of activity fields. Here, we show that just 4 corticothalamic eigenmodes of activity can explain the key features of spontaneous alpha, mu, and tau rhythms, including their frequency structure and topography [5]. Splitting is due to eigenmodes having different frequencies, whereas CT loops account for the basic 10 Hz frequency and the correlations between alpha and beta, and between mu and its harmonic. Observed split-alpha, split-beta, and split-mu rhythms are explained, and it is predicted that split-tau and split second-harmonic mu and tau rhythms can occur. Spatial concentrations of activity are found to be due to constructive interference of modes in the relevant sensory regions, supported by enhanced CT gains, and are suppressed when those gains are reduced by attention [5]. Fits of theory to data will enable brain states to be probed in real time, as is already the case for spectra including the basic alpha rhythm [7]. Links to evoked responses and other phenomena can also be made via NFT using eigenmode analysis, thereby unifying many classes of observations and phenomena and providing a systematic means of calculation.


    Tuesday July 23, 2024 4:50pm - 6:40pm PDT
    TBA

    4:50pm PDT

    P097 Modeling of temporal variability of gamma oscillations synchrony
    Synchronization of oscillations implies repetitive temporal coordination of oscillatory signals. However, provided there is statistically significant synchrony overall, one can ask whether the signals are synchronized or not at each individual cycle of the oscillation. Techniques to analyze this temporal patterning of synchronization on very short time scales have been developed over the last decade [1]. They distinguish between many short desynchronizations, a few long desynchronizations, and a wide spectrum of possibilities in between, even when all cases have the same average synchronization strength. The latter situation - same average synchronization, but different temporal patterning of synchronized dynamics - can be observed experimentally [2,3].

    We used a medium-sized pyramidal-interneuron gamma rhythm (PING) network (similar to [4]) to investigate the effect of synaptic connection strength on the temporal patterning of partially synchronized gamma oscillations. We found that changing synaptic strength changes not only the average synchrony but also the temporal patterning of synchronization (and the two do not necessarily co-vary in the same way). The effect of synaptic strength on the temporal patterning of neural synchrony follows a general trend: stronger local connections tend to produce longer desynchronization dynamics, while stronger distant connections tend to produce shorter desynchronization dynamics, and this is true for both excitatory and inhibitory connections. Thus, local synapses and distant synapses have opposite effects.

    An earlier study suggested that different temporal patterns of synchronization may influence how neural circuits synchronize with input [5]. To show that the temporal patterning of synchronization may affect potential function, we considered how the system used here responds to an external synchronizing input signal. We found that model circuits with the same base level of synchrony may respond to synaptic input in different ways, depending on the temporal pattern of desynchronizations in the circuits without input. Therefore, the way synchronized and desynchronized intervals are distributed in time may make a difference for how a circuit responds to incoming signals.

    Thus, the synaptic changes which affect gamma oscillations (and potentially underlie abnormalities in several neurological and neuropsychiatric disorders) may mediate physiological properties of neural circuits not only via changes in the average synchrony level, but also via changes in how synchrony is patterned in time over very short time scales.
     
    1. Ahn S, Rubchinsky LL. Short desynchronization episodes prevail in synchronous dynamics of human brain rhythms. Chaos. 2013, 23, 013138.
     
    2. Ahn S, Rubchinsky LL, Lapish CC. Dynamical reorganization of synchronous activity patterns in prefrontal cortex - hippocampus networks during behavioral sensitization. Cereb Cortex. 2014, 24, 2553.
     
    3. Ahn S, Zauber SE, Witt T, Worth RM, Rubchinsky LL. Neural synchronization: Average strength vs. temporal patterning. Clin Neurophysiol. 2018, 129, 842.
     
    4. Nguyen QA, Rubchinsky LL. Temporal patterns of synchrony in a pyramidal-interneuron gamma (PING) network. Chaos. 2021, 31, 043133.
     
    5. Ahn S, Rubchinsky LL. Potential mechanisms and functions of intermittent neural synchronization. Front Comput Neurosci. 2017, 11, 44.
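
    A simplified sketch of the cycle-by-cycle synchrony analysis of [1]: phases are extracted with the Hilbert transform, the phase difference is checked once per cycle, and the durations of desynchronized episodes are collected. The synchronization criterion below is a simplification of the full first-return-map method:

```python
import numpy as np
from scipy.signal import hilbert

def desync_durations(x1, x2, tol=np.pi / 2):
    p1 = np.angle(hilbert(x1))
    p2 = np.angle(hilbert(x2))
    onsets = np.where(np.diff(p1) < -np.pi)[0]   # phase wraps = cycle starts
    dphi = np.angle(np.exp(1j * (p1[onsets] - p2[onsets])))
    sync = np.abs(dphi) < tol                    # "synchronized" cycles
    durations, run = [], 0
    for s in sync:                               # runs of desynchronized cycles
        if not s:
            run += 1
        elif run:
            durations.append(run)
            run = 0
    return np.array(durations)                   # in units of cycles

# Toy usage: two noisy 40 Hz signals with a fixed phase offset.
t = np.arange(0, 20, 0.001)
x1 = np.sin(2 * np.pi * 40 * t) + 0.5 * np.random.randn(t.size)
x2 = np.sin(2 * np.pi * 40 * t + 0.3) + 0.5 * np.random.randn(t.size)
print(desync_durations(x1, x2)[:10])
```

    The distribution of these episode durations (many short vs. few long) is the quantity that can differ between circuits with identical average synchrony.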


    Tuesday July 23, 2024 4:50pm - 6:40pm PDT
    TBA

    4:50pm PDT

    P098 Critical behavior in hierarchical modular networks of stochastic neurons with reversal membrane potential
    Over the years, many attempts have been made to create theoretical models of the brain to understand its macroscopic behavior. The critical brain hypothesis posits that the cortical network self-organizes into a critical state that optimizes information processing, memory, and sensitivity to stimuli [1]. A signature of criticality would be the occurrence of neuronal avalanches with power-law size and duration distributions. Cortical network models show that hierarchical modular (HM) networks exhibit self-organized sustained activity states with significantly longer durations than homogeneous networks [2]. This study examines self-organized critical (SOC) behavior in HM networks of stochastic leaky integrate-and-fire neurons. It extends previous works [3,4] by adding a reversal potential, which imposes a biologically realistic lower bound on the membrane potential, modulating network excitation and inhibition [5]. The network activity is characterized in terms of the coefficient of variation of the interspike interval and a synchrony index. We use a mean-field approximation to calculate the critical exponents of the network analytically. Moreover, we study the dynamics of depressing/recovering neuronal gain that leads to homeostatic criticality [4]. We found that neuronal networks regulated by homeostatic mechanisms have the potential to exhibit critical behavior.


    Keywords: self-organized criticality; hierarchical modular network; reversal potential; neuronal avalanches; stochastic neuron.


    Acknowledgements
    This work was produced as part of the activities of FAPESP Research, Innovation and Dissemination Center for Neuromathematics (FAPESP grant #2013/ 07699-0, São Paulo Research Foundation). FR is supported by a FAPESP postdoctoral fellowship (grant #2020/12121-0). ACR is partially supported by a research fellowship from CNPq (grant #303359/2022-6).


    References
    [1] Chialvo. Emergent complex neural dynamics. Nature Physics. (2010), 6, 744–750
    [2] Tomov, Zaks, Roque. Sustained oscillations, irregular firing, and chaotic dynamics in hierarchical modular networks with mixtures of electrophysiological cell types. Frontiers in Computational Neuroscience. (2014), 8, 103
    [3] Brochini, de Andrade Costa, Abadi, Roque, Stolfi, Kinouchi. Phase transitions and self-organized criticality in networks of stochastic spiking neurons. Scientific Reports. (2016), 6, 35831
    [4] Kinouchi, Brochini, Costa, Campos, Copelli. Stochastic oscillations and dragon king avalanches in self-organized quasicritical systems. Scientific Reports. (2019), 9, 3874
    [5] Burkitt. Balanced neurons: analysis of leaky integrate-and-fire neurons with reversal potentials. Biological Cybernetics. (2001), 85(4), 247–255
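
    A minimal sketch, with assumed parameters rather than the authors' exact model, of a discrete-time stochastic LIF network in the spirit of [3,4], where the synaptic drive is scaled by the distance to a reversal potential so the membrane potential stays bounded [5]:

```python
import numpy as np

N, steps = 1000, 2000
mu, gain, E_syn = 0.9, 1.2, 1.0        # leak factor, neuronal gain, reversal
rng = np.random.default_rng(0)
W = rng.exponential(0.02, size=(N, N)) / N   # excitatory weights (assumed)

def phi(v):
    return np.clip(gain * v, 0.0, 1.0)  # piecewise-linear firing probability

V = rng.uniform(0.0, 1.0, N)
activity = np.empty(steps)

for t in range(steps):
    X = rng.random(N) < phi(V)          # stochastic spikes this time step
    activity[t] = X.mean()
    drive = W @ X
    V = mu * V + drive * (E_syn - V)    # conductance-like, reversal-bounded
    V[X] = 0.0                          # reset spiking neurons

# Avalanche sizes/durations can then be read off as runs of nonzero activity.
```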


    Tuesday July 23, 2024 4:50pm - 6:40pm PDT
    TBA

    4:50pm PDT

    P099 From muscle spindle to spinal cord: A modelling approach of the hierarchical organization in motor control
    The muscle spindle is an essential proprioceptor, playing a crucial role in the sensation of limb position and movement. Despite its importance, most current multi-body models do not include explicit sensor dynamics such as spindles. As part of our goal of improving proprioceptor dynamics in biomechanical and neuroscience simulations, we recently developed a physiologically enhanced model of the muscle spindle that considers the individual characteristics of the involved tissue compartments and is easier to integrate into multi-body systems [1]. During my presentation, I will discuss ongoing research in which, as a further step, we aim to demonstrate that muscle spindle afferent firing can be processed by neuronal networks and is important for motor control.
    We integrated our spindle model with the muscle model from [2] inside the Demoa multi-body simulation framework [3]. This structure, composed of extrafusal (muscle) and intrafusal (spindle) fibers, was implemented as the muscle-tendon units (MTUs) of an arm model with two degrees of freedom and six MTUs [4] in the same simulation environment. Additionally, a spinal cord model, based on the work of [5], was implemented in the NEST spiking neural network simulator. Our spinal network has six neurons per muscle – alpha, gamma dynamic, and gamma static motoneurons, together with Ia, propriospinal, and Renshaw interneurons – and their respective physiological connections. The coupling between the Demoa and NEST simulators was implemented using a Cython interface. The synaptic weights of the spinal cord network in NEST were then optimized to perform a center-out reaching task with the musculoskeletal model implemented in Demoa, demonstrating motor control learning across the two simulators, from muscle spindle to spinal circuitry. Finally, we added perturbations to the reaching task in order to verify the meaningful role of the muscle spindle in an unstable environment.
    Acknowledgments
    The authors thank the International Max Planck Research School for Intelligent Systems (IMPRS-IS) for supporting Pablo F. S. Chacon.
    References
    1. Chacon, P. F. S.; Hammer, M.; Wochner, I.; Walter, J. R. & Schmitt, S. A physiologically enhanced muscle spindle model: using a Hill-type model for extrafusal fibers as template for intrafusal fibers. Computer Methods in Biomechanics and Biomedical Engineering, Informa UK Limited. 2023, 1-20.
    2. Haeufle, D.; Günther, M.; Bayer, A. & Schmitt, S. Hill-type muscle model with serial damping and eccentric force–velocity relation. Journal of Biomechanics, Elsevier BV. 2014, 47, 1531-1536.
    3. Schmitt, S. demoa-base: A Biophysics Simulator for Muscle driven Motion. DaRUS. 2022.
    4. Wochner, I. & Schmitt, S. arm26: A Human Arm Model. DaRUS. 2022.
    5. Tsianos, G. A.; Goodner, J. & Loeb, G. E. Useful properties of spinal circuits for learning and performing planar reaches. Journal of Neural Engineering, IOP Publishing. 2014, 11, 056006.


    Tuesday July 23, 2024 4:50pm - 6:40pm PDT
    TBA

    4:50pm PDT

    P100 Waveform-based classification of dentate spikes
    Synchronous excitatory discharges from the entorhinal cortex (EC) to the dentate gyrus (DG) generate fast and prominent patterns in the hilar local field potential (LFP), called dentate spikes (DSs). Like sharp-wave ripples in CA1, DSs are more likely to occur in quiet behavioral states, when memory consolidation is thought to take place. However, their functions in mnemonic processes are yet to be elucidated. The classification of DSs into types 1 or 2 is determined by their origin in the lateral or medial EC, as revealed by current source density (CSD) analysis, which requires recordings from linear probes with multiple electrodes spanning the DG layers. To allow the investigation of the functional role of each DS type in recordings obtained from single electrodes and tetrodes, which are abundant in the field, we developed an unsupervised method using Gaussian mixture models to classify such events based on their waveforms. Our classification approach achieved high accuracies (> 80%) when validated in 8 mice with DG laminar profiles. The average CSDs, waveforms, rates, and widths of the DS types obtained through our method closely resembled those derived from the CSD-based classification. As an example of application, we used the technique to analyze single-electrode LFPs from apolipoprotein (apo) E3 and apoE4 knock-in mice. We observed that the latter group, which is a model for Alzheimer's disease, exhibited wider DSs of both types from a young age, with a larger effect size for DS type 2, likely reflecting early pathophysiological alterations in the EC-DG network, such as hyperactivity. In addition to the applicability of the method in expanding the study of DS types, our results show that DS waveforms carry information about their origins, suggesting different underlying network dynamics and roles in memory processing.
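
    A hedged sketch of the unsupervised classification step, assuming peak-aligned waveforms stored in a hypothetical file; the PCA feature choice and the two-component mixture are the essential ingredients:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture

# Hypothetical input: peak-aligned DS waveforms, shape (n_events, n_samples).
waveforms = np.load('ds_waveforms.npy')

features = PCA(n_components=5).fit_transform(waveforms)
gmm = GaussianMixture(n_components=2, covariance_type='full',
                      random_state=0).fit(features)
labels = gmm.predict(features)                 # candidate DS type 1 / type 2

# Mean waveform per putative type, for comparison with CSD-based classes.
mean_wf = [waveforms[labels == k].mean(axis=0) for k in (0, 1)]
print('events per type:', np.bincount(labels))
```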


    Tuesday July 23, 2024 4:50pm - 6:40pm PDT
    TBA

    4:50pm PDT

    P101 AnalySim new features: Interactive notebooks and CSV browser
    In this poster, we present updates in the development of the AnalySim science gateway for data sharing and analysis. An alpha-testing version of the gateway is currently hosted at https://analysim.tech , supported by the NSF-funded ACCESS advanced computing and data resource. AnalySim is open-source software whose source code is hosted at https://github.com/soft-eng-practicum/AnalySim . AnalySim aims to help with data sharing, data hosting for publications, interactive visualizations, collaborative research, and crowdsourced analysis. Special support is planned for datasets with many changing parameters and recorded measurements, such as those produced by neuronal parameter search studies with large numbers of simulations. However, AnalySim is not limited to this type of data and allows running custom analysis code in interactive notebooks. Currently, it offers a proof-of-concept demonstration of its analysis capabilities by embedding JavaScript notebooks provided by ObservableHQ.com. Support for Jupyter notebooks using Python is in progress.
    AnalySim has been a participant in the International Neuroinformatics Coordinating Facility (INCF) Google Summer of Code (GSoC) program since 2021. Participation in GSoC 2023 both improved the user interface and added major new functionality. Parts of the user interface were given a more consistent visual style, and new pages and screens were added to support new functionality. In the backend, several changes were made: (1) improved security of the user registration system; (2) a feature to add multiple analysis notebooks and associate datasets with them; and (3) a CSV data file browsing and visualization component. We are currently looking for testers of the gateway and soliciting feedback on the design, current features, and the future vision. In this poster, we review existing features and introduce new ones from the ongoing development as part of GSoC 2023.
    AnalySim is developed with the vision of offering features on an interactive web platform that improve the visibility of one's research and help the paper review process by allowing others' analyses to be reproduced. In addition, it aims to foster collaborative research by providing access to others' public datasets and analyses, creating opportunities to ask novel questions, guide one's research, and start new collaborations or join existing teams. It aims to be a "social scientific environment" where one can fork or clone existing projects to customize them, and tag or follow researchers and projects. In addition, one can filter datasets, duplicate analyses and improve them, and then publish findings via interactive visualizations. In summary, AnalySim aims to be a GitHub-like tool specialized for scientific problems - especially when datasets are large and complex, as in parameter search.
    Acknowledgments
    We thank INCF and GSoC for supporting Analysim. This work used Jetstream2 at Indiana University through allocation BIO220033 from the Advanced Cyberinfrastructure Coordination Ecosystem: Services & Support (ACCESS) program, which is supported by National Science Foundation grants #2138259, #2138286, #2138307, #2137603, and #2138296.


    Tuesday July 23, 2024 4:50pm - 6:40pm PDT
    TBA

    4:50pm PDT

    P102 Kolmogorov-Smirnov Statistic Based Parameter Learning for Complexity Synchronization Analysis of Physiological Time-Series
    Complexity synchronization (CS) analysis facilitates the detection of interactions among complex organ networks, such as the brain, lungs, and heart, functioning at different time scales [1]. This analysis measures the statistics of events defined as the crossing times of the signal from one amplitude stripe (level) to another. If these events are uncorrelated and the time intervals between consecutive events follow an inverse power law (IPL) distribution with index μ (1 < μ < 3), they are crucial events (CEs). The diffusion time series generated by the extracted CEs has a scaling index δ, which can be measured via the modified diffusion entropy analysis (MDEA) [1]. This approach works well in uncovering CS when certain parameters, such as the stripe size and the length of the data, are properly selected by trial and error, which may be impractical. We design an automated stripe size estimator using the property that CE time intervals follow an IPL.



    We use the Kolmogorov-Smirnov (KS) statistic [2] to quantify how well the CE time intervals follow an IPL for a given stripe size. The KS statistic is the maximum absolute difference between the empirical CE time-interval distribution, which depends on the stripe size, and a candidate IPL. The IPL parameter μ and the stripe size are set to minimize the KS statistic. Fig. 1 (left) shows that the stripe size selection can drastically affect the distribution of CE time intervals. If the stripe size is not properly selected, the empirical distribution will not follow an IPL [Fig. 1 (top left)], while when it is set to minimize the KS statistic, the resulting empirical distribution is closely fitted by an IPL [Fig. 1 (bot. left)].



    To show the effectiveness of the proposed KS-based stripe size and μ estimator, we evaluate its bias and variance using synthetic data generated via the Mittag-Leffler waiting time distribution [2]. Fig. 1 (top right) shows the mean (blue) and variance (red), averaged over 100 independent Monte Carlo trials, of the KS-based (solid) and MDEA-based (dashed) μ estimators versus the number of data samples Nsamp. As Nsamp increases, both the bias and the variance of the KS-based estimator decrease; the same is true for the MDEA estimator when using the KS-based stripe size estimate. However, this is not the case for MDEA when the stripe size is not correctly selected, in which case both the estimator bias and variance deviate.


    To show the strength of the KS-based stripe size selection, we use the estimated stripe sizes in MDEA [1] to uncover CS among EEG, ECG, and respiratory (RESP) signals. Fig. 1 (bot. right) shows the complexity scaling factor δ returned by MDEA, using the KS-based estimated stripe sizes, versus time. The effectiveness of the KS-based stripe size selection combined with MDEA can be seen in the uncovered CS patterns in the EEG, ECG, and RESP signals. We are currently studying other crucial CS analysis parameters, such as the fit region and fit method for the IPL scaling index, and the data window length and overlap.


    Acknowledgement
    Work supported by US Army Research Lab CA W911NF-22-2-0097. 


    References
    [1] Mahmoodi, K., Kerick, S.E., Grigolini, P. et al. Complexity synchronization: a measure of interaction between the brain, heart and lungs. Sci Rep 13, 11433 (2023).

    [2] Corder, G. W.; Foreman, D. I. (2014). Nonparametric Statistics: A Step-by-Step Approach. Wiley.
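
    A hedged sketch of the stripe-size selection idea: for each candidate stripe size, event times are taken as stripe-crossing times, the IPL index μ is fitted to the inter-event intervals by maximum likelihood, and the stripe size minimizing the KS statistic is kept. The grid ranges and the crossing definition below are assumptions:

```python
import numpy as np
from scipy.stats import ks_1samp, pareto

def crossing_intervals(x, stripe):
    # event = time index at which the signal crosses into another stripe
    levels = np.floor(x / stripe)
    events = np.flatnonzero(np.diff(levels) != 0)
    iei = np.diff(events).astype(float)
    return iei[iei > 0]

def ks_for_stripe(x, stripe):
    tau = crossing_intervals(x, stripe)
    if tau.size < 100:
        return np.inf, np.nan
    tau = tau / tau.min()                       # rescale so tau_min = 1
    logs = np.log(tau)
    if logs.sum() == 0:
        return np.inf, np.nan
    mu = 1.0 + tau.size / logs.sum()            # MLE for the IPL index
    stat = ks_1samp(tau, pareto(mu - 1.0).cdf).statistic
    return stat, mu

x = np.cumsum(np.random.randn(200000))          # stand-in physiological signal
stripes = np.linspace(0.1, 5.0, 50) * x.std()
best = min(stripes, key=lambda s: ks_for_stripe(x, s)[0])
print('selected stripe size:', best, 'mu:', ks_for_stripe(x, best)[1])
```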


    Tuesday July 23, 2024 4:50pm - 6:40pm PDT
    TBA

    4:50pm PDT

    P103 The effect of nonlinear dynamics and axonal delays on the relationship between structural and functional brain connectivity
    Functional magnetic resonance imaging (fMRI) data are a standard experimental tool for investigating healthy and pathological brain activity, and are typically used to examine the functional connectivity (FC) between brain areas. In addition, MRI-based diffusion tractography can be used to uncover the structural connectivity (SC) between brain areas. It has been demonstrated experimentally and theoretically that FC correlates with SC, but their exact relationship is still unknown. Here, we focus on the effect of nonlinear dynamics and axonal delays on FC. We extend the multivariate Ornstein-Uhlenbeck process with weak nonlinearities and axonal delays to derive the FC matrix analytically for a given SC matrix.
    To investigate the effect of nonlinearities, we focus on the limit of weak nonlinearities, in which we can use scale separation to obtain analytical results for the FC matrix. The nonlinear term is cubic to ensure that the fixed-point structure and stability of the linear part of the system are preserved. The resulting FC is the sum of the linear FC and a perturbation term proportional to the product of the SC and the linear FC. This in turn allows us to derive an expression for the SC given the perturbed FC, with a parameter α indicating the relative contribution of the nonlinearity. We apply the latter to empirical FC to derive the "FC-based" SC (through model inversion) and compare it with the empirical SC obtained through tractography. By varying the level of nonlinearity through α and computing the correlation between "FC-based" SC and empirical SC, we find that nonlinear dynamics are present in the empirical data, but their relative contribution to the FC is low (Figure, left panel).
    Axonal delays are incorporated by dividing the tract lengths between brain areas by the axonal propagation velocity. In the limit of weak connectivity and fast delays, the delays effectively increase the structural connectivity between brain areas, and this increase is stronger when the axonal propagation velocity and the time constant of the Ornstein-Uhlenbeck process are fast. This allows us to estimate these parameters from empirical data, and we find that the highest correlation between "FC-based" SC and empirical SC occurs at physiologically plausible parameters (10 ms to 40 ms for the time constant, 1 m/s to 4 m/s for the axonal propagation velocity). Including axonal delays also significantly increases the match between "FC-based" SC and empirical SC. The results are consistent across different levels of preprocessing and whether or not low-pass filtering effects are taken into account (Figure, right panel).
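
    A minimal sketch of the linear backbone of the model: for a multivariate Ornstein-Uhlenbeck process dx = (−x + g·SC·x)dt + dW, the stationary covariance solves a Lyapunov equation, from which the model FC follows. The coupling gain g, noise covariance, and random SC below are illustrative assumptions:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

n = 68                                    # number of brain areas, assumed
rng = np.random.default_rng(0)
SC = rng.random((n, n)) * (rng.random((n, n)) < 0.2)
SC = (SC + SC.T) / 2                      # symmetric stand-in SC
np.fill_diagonal(SC, 0.0)

g = 0.5 / np.max(np.abs(np.linalg.eigvals(SC)))   # keep the system stable
A = -np.eye(n) + g * SC                           # Jacobian of the OU process
Q = np.eye(n)                                     # unit-variance noise input

# Stationary covariance: A Sigma + Sigma A^T = -Q
Sigma = solve_continuous_lyapunov(A, -Q)
sd = np.sqrt(np.diag(Sigma))
FC = Sigma / np.outer(sd, sd)                     # model-predicted (linear) FC
```

    The nonlinear and delayed corrections described above would then be added as perturbations on top of this linear FC.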



    Tuesday July 23, 2024 4:50pm - 6:40pm PDT
    TBA

    4:50pm PDT

    P104 The internal dynamics of the free-running receiver can mediate phase relations between two unidirectionally coupled neuron models
    Two identical autonomous dynamical systems unidirectionally coupled in a sender-receiver configuration can exhibit anticipated synchronization (AS) if the receiver also receives a delayed negative self-feedback [1]. In this regime, the receiver can anticipate the activity of the sender. This non-intuitive phenomenon has been experimentally verified in electronic circuits and semiconductor lasers. AS was also shown to occur in a three-neuron motif model with standard chemical synapses, where the delayed inhibition was provided by an interneuron [2]. Recently, it has been shown that a two-neuron model in the presence of an inhibitory autapse, a massive self-innervation present in the cortical architecture, can also present AS [3]. Both the GABAergic autapse and the interneuron regulate the internal dynamics of the receiver neuron and act as the negative delayed self-feedback required for dynamical systems to exhibit AS. In these biologically plausible scenarios, a smooth transition from the usual delayed synchronization (DS) to AS typically occurs when the inhibitory conductance is increased. The phenomenon was shown to be robust when model parameters are varied within a physiological range. Here we show, for different neuron models and parameter sets, that anticipated synchronization can be facilitated by faster internal dynamics of the free-running receiver (i.e., the receiver when uncoupled from the sender). This means that the internal dynamics of the receiver can influence the phase diversity between synchronized neurons. We thus provide further examples strengthening the hypothesis that faster internal dynamics of the receiver is the mechanism underlying anticipated synchronization [4] and the DS-AS transition via zero-lag synchronization.

    Acknowledgments

    The authors thank CNPq (grants 402359/2022-4, 314092/2021-8), FAPEAL (grant SEI n.º E:60030.0000002401/2022), UFAL, and CAPES for financial support.

    References


    1. Voss, H. U. Anticipating chaotic synchronization. Physical Review E. 2000, 61(5), 5115.

    2. Matias, F. S., Carelli, P. V., Mirasso, C. R., & Copelli, M. Anticipated synchronization in a biologically plausible model of neuronal motifs. Physical Review E. 2011, 84(2), 021922.

    3. Pinto, M. A., Rosso, O. A., & Matias, F. S. Inhibitory autapse mediates anticipated synchronization between coupled neurons. Physical Review E. 2019, 99(6), 062411.

    4. Dalla Porta, L., Matias, F. S., Dos Santos, A. J., Alonso, A., Carelli, P. V., Copelli, M., & Mirasso, C. R. Exploring the phase-locking mechanisms yielding delayed and anticipated synchronization in neuronal circuits. Frontiers in systems neuroscience. 2019 13, 41.
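
    A hedged sketch of the generic anticipated-synchronization scheme of [1]: sender x' = f(x) and receiver y' = f(y) + K(x − y(t−td)). Rössler dynamics stand in for the neuron models of the study, and K and td are illustrative values for which the receiver can lead the sender by roughly td:

```python
import numpy as np

def f(u, a=0.15, b=0.2, c=10.0):
    # Rossler vector field, an illustrative choice of identical dynamics
    return np.array([-u[1] - u[2], u[0] + a * u[1], b + u[2] * (u[0] - c)])

dt, steps, K, td = 0.01, 100000, 0.5, 0.5
d = int(td / dt)
x = np.array([1.0, 0.0, 0.0])          # sender
y = np.array([0.9, 0.1, 0.0])          # receiver
ybuf = np.tile(y, (d, 1))              # circular delay line for y(t - td)
xs, ys = np.empty(steps), np.empty(steps)

for t in range(steps):
    y_del = ybuf[t % d].copy()         # receiver state d steps in the past
    x_new = x + dt * f(x)
    y_new = y + dt * (f(y) + K * (x - y_del))
    ybuf[t % d] = y_new
    x, y = x_new, y_new
    xs[t], ys[t] = x[0], y[0]

# A cross-correlation of xs and ys peaking at negative lag indicates that
# the receiver leads (anticipates) the sender.
```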



    Tuesday July 23, 2024 4:50pm - 6:40pm PDT
    TBA

    4:50pm PDT

    P105 Applying information theory quantifiers to analyze motor cortex activity of a non-human primate during an instructed delayed reach-to-grasp task
    One of the greatest questions in neuroscience is to understand how the brain processes information. To address this question, we can analyze cortical time series recorded during different cognitive tasks to characterize the statistical properties of brain signals. Here we analyze an open dataset of electrophysiological recordings from the motor cortex of a non-human primate during an instructed delayed reach-to-grasp task in the light of information theory quantifiers. The monkey was instructed to perform four different types of trials: to grasp the object using either a side grip or a precision grip, and to pull the object against one of two possible loads requiring either a high or a low pulling force. The grip and force instructions for the requested trial type were provided independently, as two consecutive visual cues separated by a one-second delay. We employ a symbolic information approach, the Bandt-Pompe methodology, to associate a probability distribution function (PDF) with each time series, and then use this PDF to calculate time-causal quantifiers based on information theory. To characterize different features of the time series and each trial type, we employ two indices: the Shannon entropy and the corresponding statistical complexity, which is based on the disequilibrium between the PDF of the actual time series and a uniform PDF. Under this definition of complexity, both extremes of order and disorder present low complexity: a constant time series and a very noisy time series, for example, both have low complexity, just as neither a perfect crystal nor a random distribution of atoms is a complex system. This approach has been successfully applied to study different regimes of phase synchronization and criticality in neuronal models and animal data [1,2], as well as to estimate response-related differences between Go and No-Go trials using monkey local field potential data [3]. Here we use the multi-scale entropy-complexity plane to visualize the statistical properties of the different trial types in specific time windows. By using the multi-scale approach and embedding time delays to downsample the data, we can estimate the time scales at which the relevant information related to the four trial types is processed, and identify the brain regions most related to the task.




      Acknowledgments 
      The authors thank CNPq (grants 402359/2022-4, 314092/2021-8) , FAPEAL (grant SEI n.º E:60030.0000002401/2022), UFAL, and CAPES for financial support. 


      References 


      1. Montani, F., Rosso, O. A., Matias, F. S., Bressler, S. L., & Mirasso, C. R.. A symbolic information approach to determine anticipated and delayed synchronization in neuronal circuit models. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences. 2015, 373(2056), 20150110.
      2. Lotfi, N., Feliciano, T., Aguiar, L. A., Silva, T. P. L., Carvalho, T. T., Rosso, O. A., Copelli, M., Matias, F. S., Carelli, P. V. Statistical complexity is maximized close to criticality in cortical dynamics. Physical Review E, 2021, 103(1), 012415.
      3. de Lucas, H. B., Bressler, S. L., Matias, F. S., & Rosso, O. A. A symbolic information approach to characterize response-related differences in cortical activity during a Go/No-Go task. Nonlinear Dynamics. 2021,  104(4), 4401-4411.
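
      A minimal sketch of the Bandt-Pompe construction and of the entropy-complexity pair used above: the ordinal-pattern PDF of a series (order D, embedding delay tau) gives the normalized Shannon entropy H, and the Jensen-Shannon disequilibrium against the uniform PDF gives the statistical complexity C:

```python
import numpy as np
from itertools import permutations
from math import factorial, log

def ordinal_pdf(x, D=4, tau=1):
    # Bandt-Pompe: count ordinal patterns of length D at delay tau (ties ignored)
    patterns = {p: 0 for p in permutations(range(D))}
    for i in range(len(x) - (D - 1) * tau):
        window = x[i:i + D * tau:tau]
        patterns[tuple(np.argsort(window))] += 1
    p = np.array(list(patterns.values()), dtype=float)
    return p / p.sum()

def entropy(p):
    nz = p[p > 0]
    return -np.sum(nz * np.log(nz))

def complexity_entropy(x, D=4, tau=1):
    p = ordinal_pdf(x, D, tau)
    n = factorial(D)
    H = entropy(p) / log(n)                      # normalized Shannon entropy
    u = np.ones(n) / n                           # uniform reference PDF
    m = (p + u) / 2
    JS = entropy(m) - entropy(p) / 2 - entropy(u) / 2
    JSmax = -0.5 * ((n + 1) / n * log(n + 1) - 2 * log(2 * n) + log(n))
    return H, JS / JSmax * H                     # (entropy, complexity)

H, C = complexity_entropy(np.random.randn(5000))
print(f'white noise: H={H:.3f}, C={C:.3f}')      # high entropy, low complexity
```

      Sweeping the embedding delay tau yields the multi-scale entropy-complexity plane referred to above.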




      Tuesday July 23, 2024 4:50pm - 6:40pm PDT
      TBA

      4:50pm PDT

      P106 Characterizing a visual search task using a symbolic information approach applied to human intracranial data

      How the brain processes information during different cognitive processes is one of the greatest questions in neuroscience. Understanding the statistical properties of brain signals during different tasks is one promising way to address this question. Here we analyze freely available data from 67 implanted electrocorticographic (ECoG) in five human subjects in the light of information-theory quantifiers ideas. Our methodology involves employing symbolic information techniques to determine the probability distribution function associated with time series data from distinct cortical areas. Then we use these probabilities to calculate the Shannon entropy and a statistical complexity measure based on the disequilibrium between the actual time series and one with a uniform probability distribution function. This approach was originally introduced to distinguish chaotic from stochastic systems in time series analysis [1]. Recently, it has been successfully applied to study brain signals: to show that complexity is maximized close to criticality in cortical states [2], to distinguish cortical states using and to estimate time differences during phase synchronization in computational models and monkey data [3,4]. Here we define a Euclidian distance in the complexity-entropy plane to distinguish visual search tasks from blank screen intervals in specific brain regions and time scales. Therefore, we can estimate time intervals along the 2-second-long trials where these differences are more pronounced. We can also estimate differences in brain activity related to the direction of an arrow during the visual epochs.







        Acknowledgments 


          The authors thank CNPq (grants 402359/2022-4, 314092/2021-8) , FAPEAL (grant SEI n.º E:60030.0000002401/2022), UFAL, and CAPES for financial support.


          References 
          1. Zunino L, Soriano MC, Rosso OA. Distinguishing chaotic and stochastic dynamics from time series by using a multiscale symbolic approach. Phys. Rev. E. 2012, 86, 046210.
          2. Lotfi, N., Feliciano, T., Aguiar, L. A., Silva, T. P. L., Carvalho, T. T., Rosso, O. A., Copelli, M., Matias, F. S., Carelli, P. V. Statistical complexity is maximized close to criticality in cortical dynamics. Physical Review E, 2021, 103(1), 012415.
          3. Montani, F., Rosso, O. A., Matias, F. S., Bressler, S. L., & Mirasso, C. R.. A symbolic information approach to determine anticipated and delayed synchronization in neuronal circuit models. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences. 2015, 373(2056), 20150110.

          4. de Lucas, H. B., Bressler, S. L., Matias, F. S., & Rosso, O. A. A symbolic information approach to characterize response-related differences in cortical activity during a Go/No-Go task. Nonlinear Dynamics. 2021, 104(4), 4401-4411.



          Tuesday July 23, 2024 4:50pm - 6:40pm PDT
          TBA

          4:50pm PDT

          P107 A standardised pipeline for analysing neuronal models in NeuroML
          Experimental work continues to generate increasing amounts of data spanning multiple spatial and temporal scales of the brain. However, combining the knowledge distilled from experiments into unified theories of brain function, in health and disease, requires the use of theoretical and modelling approaches. Biophysically detailed modelling is an important tool that enables the mechanistic understanding of neuronal circuits, their components, and emergent behaviours.


          NeuroML[1] is an established community standard that provides a common language for the exchange of models and model components between the many computational modelling tools. It is modular, hierarchical, structured, fully machine readable and translatable, and supports a large ecosystem of software tools that address various steps of the model life cycle. The use of NeuroML, therefore, helps to make computational neuroscience more FAIR (Findable, Accessible, Interoperable, and Reusable)[2]. A large number of model components have been standardised in NeuroML and openly shared on general and NeuroML-specific platforms such as ModelDB[3], Open Source Brain[4], and NeuroML-DB[5]. These include ionic conductance models, morphologically detailed cell models, synapse models, and network models that use these basic building blocks.



          Although the aforementioned platforms include some analysis and simulation features, there is currently a lack of automated analysis pipelines for NeuroML model components. Given the high utility of biophysically detailed models and the need to make them more accessible to the research community, it is imperative that complete descriptions of these models and model components are readily available.



          In this work, we present an accessible, automated analysis pipeline for biophysically detailed cell models standardised in NeuroML. We demonstrate its usefulness by generating detailed descriptions of a number of well used NeuroML cortico-cerebellar cell models. In addition to NeuroML model descriptions, this includes information on the electrophysiological properties of the models under a number of standard experimental and realistic stimulus protocols. The pipeline is freely available to the research community, and can be used locally or on integrated research platforms such as Open Source Brain v2[6].


          References
          1. Gleeson, P. et al. NeuroML: A Language for Describing Data Driven Models of Neurons and Networks with a High Degree of Biological Detail.  PLoS Computational Biology 6 (ed Friston, K. J.) e1000815 (2010).
          2. Sinha, A. et al. The NeuroML ecosystem for standardized multi-scale modeling in neuroscience. bioRxiv (2023).

          3. Migliore, M. et al. ModelDB: making models publicly accessible to support computational neuroscience. 2003.

          4. Gleeson, P. et al. Open Source Brain: A Collaborative Resource for Visualizing, Analyzing, Simulating, and Developing Standardized Models of Neurons and Circuits. Neuron 103, 395-411 (2019).

          5. Birgiolas, J. et al. NeuroML-DB: Sharing and characterizing data-driven neuroscience models described in NeuroML. PLOS Computational Biology 19, 1-29 (Mar. 2023).

          6. Sinha, A. et al. Open Source Brain v2.0: Closing the loop between experimental neuroscience data and computational models. Journal of Computational Neuroscience 49, S75-S76 (2021).
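
          A hedged sketch of one step such a pipeline can automate, assuming pyNeuroML's analysis API; the model file name, cell id, and stimulation protocol are hypothetical:

```python
from pyneuroml import pynml
from pyneuroml.analysis import generate_current_vs_frequency_curve

# Load a standardized NeuroML cell model (hypothetical file name).
doc = pynml.read_neuroml2_file('L5PC.cell.nml')
cell = doc.cells[0]
print('Analyzing cell:', cell.id)

# Characterize its electrophysiology with an F-I curve under assumed
# step-current amplitudes, simulated via jNeuroML/NEURON.
generate_current_vs_frequency_curve(
    'L5PC.cell.nml', cell.id,
    start_amp_nA=0.0, end_amp_nA=0.5, step_nA=0.05,
    analysis_duration=1000, analysis_delay=100,
    simulator='jNeuroML_NEURON',
    plot_if=True)
```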



          Tuesday July 23, 2024 4:50pm - 6:40pm PDT
          TBA

          4:50pm PDT

          P108 Geometric eigenmodes as a means to constrain the source localisation problem of EEG and MEG
          Recent literature has suggested that the geometry of the brain offers a fundamental yet simple constraint on neuronal activity [1]. This is achieved through the construction of geometric eigenmodes (GEMs) of the brain's physical shape, offering a biologically plausible explanation of associated excitation between brain regions. Adjacent to this work relating structure to function is the source localisation problem of EEG/MEG, which aims to identify the underlying neuronal generators of EEG/MEG recordings. The underdetermined nature of the problem necessitates sufficient constraints to produce realistic and unique solutions for source activity [2]. GEMs offer a new means of constraining the source localisation problem of EEG and MEG in a neurologically feasible manner. In this work, we investigate the effectiveness of different methods of applying GEMs for source localisation.


          The methodology of Pang et al. [1] is applied to construct geometric eigenmodes of the cortical surface for use with EEG/MEG data. Eigenmodes are constructed directly on the source space instead of the triangular mesh of the cortical midthickness surface. An optimisation algorithm is used to find, at each time step, the eigenmode weights that are optimal in terms of reconstruction accuracy. The aggregate source activity is thus represented as a multiplication between the eigenmodes and the fitted weights. We examine the utility of this source localisation approach against two datasets: a simulated dataset of seizure activity and an empirical dataset of simultaneous sEEG and EEG recordings of cortico-cortical evoked potentials.


          Our results demonstrate a proof of concept that GEMs offer a computationally efficient means to compute biologically feasible sources that can reconstruct ~75% of the variance of an EEG signal with only 400 eigenmodes of a 20,484-dipole source space. The results are consistent with previous work using GEMs to reconstruct fMRI data with >80% accuracy with the same number of eigenmodes (200 per hemisphere) [1].

          This work supports the view that GEMs offer a lower-dimensional and more interpretable (relative to the entire source space) view of the brain's macroscale activity [1]. Ongoing studies are analysing the ideal implementation of GEMs for EEG/MEG source imaging. We plan to provide recommendations, such as whether to include medial white matter areas in the source space or to apply a cortical mask to limit the surface mesh.

          Advancements in source localisation have direct applications in improving patient outcomes in the diagnosis and treatment of epilepsy, such as the localisation of the epileptogenic zone [3]. This proof of concept demonstrates the potential of neural field theory and computational neuroscience principles for biologically constraining source localisation approaches.


          References
          1. Pang JC, et al. Geometric constraints on human brain function. Nature. 2023;618(7965):566-574. doi:10.1038/s41586-023-06098-1  
          2. He B, et al. Electrophysiological Source Imaging: A Noninvasive Window to Brain Dynamics. Annu Rev Biomed Eng. 2018;20:171-196. doi:10.1146/annurev-bioeng-062117-120853
          3. Makhalova J, et al. Virtual epileptic patient brain modeling: Relationships with seizure onset and surgical outcome. Epilepsia. 2022;63(8):1942-1955. doi:10.1111/epi.17310
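
          A hedged sketch of the eigenmode pipeline, assuming the LaPy library for the Laplace-Beltrami eigenproblem; the mesh and source-activity files are hypothetical, and the least-squares fit below stands in for the optimisation algorithm mentioned above:

```python
import numpy as np
from lapy import TriaMesh, Solver

verts = np.load('cortex_verts.npy')       # hypothetical (n_verts, 3)
faces = np.load('cortex_faces.npy')       # hypothetical (n_faces, 3)

# Laplace-Beltrami eigenmodes of the surface mesh (geometric eigenmodes).
solver = Solver(TriaMesh(verts, faces))
evals, emodes = solver.eigs(k=400)        # emodes: (n_verts, 400)

sources = np.load('source_activity.npy')  # hypothetical (n_verts, n_times)

# Per-time-step mode weights by least squares, then reconstruction.
weights, *_ = np.linalg.lstsq(emodes, sources, rcond=None)
recon = emodes @ weights

ss_res = np.sum((sources - recon) ** 2)
ss_tot = np.sum((sources - sources.mean()) ** 2)
print('variance explained:', 1 - ss_res / ss_tot)
```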



          Tuesday July 23, 2024 4:50pm - 6:40pm PDT
          TBA

          4:50pm PDT

          P109 Potassium and calcium currents trigger epileptic activity
          Each type of ionic channel present in the neuronal membrane contributes differently to the generation of action potentials, and their activity is crucial for the micro- and macro-scale activity observed in the brain. For instance, knockouts and alterations of specific ion channels are associated with brain disorders such as epilepsy and Parkinson's disease. Understanding the role of different ionic channels at various levels is therefore essential. In this work, we describe each neuron with a conductance-based model of cortical neurons, with parameters taken from whole-cell patch-clamp recordings of rodents. We first investigate the influence of the slow potassium and calcium channels in a single neuron. Secondly, we examine the effects of these ionic channels on synchronization. Our findings show that, in a single neuron, the spike-to-burst transition is only reached for high values of the high-threshold calcium conductance. At the network level, the absence of the slow potassium and calcium currents hinders bursting activity; however, the presence of the slow potassium current is enough to induce bursting synchronization. Moreover, we propose that the presence of the high-threshold calcium current facilitates bistable states. Our results shed light on the effects of specific ionic current types in single neurons and in networks. We highlight that understanding high synchronization and the spike-to-burst pattern transition can elucidate the appearance of the synchronization related to epileptic seizures. Based on our findings, we can predict which ionic channels should be targeted to suppress and treat high synchronization.


          Tuesday July 23, 2024 4:50pm - 6:40pm PDT
          TBA

          4:50pm PDT

          P110 Topological approaches to understanding multi-human team EEG state
          How can we capture structure in high-dimensional behavioral and physiological data obtained from human subjects performing a teaming task? For example, EEG data are often collected across n channels at a high sampling rate, resulting in large quantities of data points that live in n-dimensional (n-dim) space. One approach is to use topological data analysis to identify and study the underlying structure of the data. Topology is a field of mathematics concerned with the shape and connectivity of spaces. Mapper is a topological tool that allows the user to investigate a dataset's connectivity [1]. The Mapper algorithm takes in high-dimensional data, projects it down to a lower-dimensional space, groups it there, and then applies those groupings to the original data points, connecting two clusters whenever they share a point (see Fig. 1, top). Mapper allows for the observation of underlying patterns and structure that are impossible to visualize or understand in the original space.

          Mapper has been used previously by Saggar et al. [2] to analyze MRI data collected from five different tasks: instructions, working memory, video, math, and rest. The authors showed that Mapper could elucidate the relationship between behavior and the evolution of underlying brain states over time and across tasks, and represent it as a hub-like network. Building on this work, we use Mapper to provide insight into a multi-human finger-tapping teaming task that includes both behavioral and EEG data. Applying Mapper to EEG time-series data, as opposed to MRI data, will require overcoming signal noise, artifacts, and the signal's non-linear nature.

          We present results from data collected from a set of collaborative tapping tasks. Participants were paired off, isolated, and instructed to tap synchronously or in syncopation with a metronome's beat. Next, subjects had to self-pace, pace off of their partner, lead their partner, or pace off of each other, depending on the trial. We analyzed EEG data across 32 channels at a sampling rate of 2000 Hz, which yielded datasets of ~800,000 data points in 32-dim space for each of the 12 trials. For each dataset, we fed the 32-dim data into Mapper, which projected it down to 2-dim space using t-SNE. There, Mapper grouped the points in the 2-dim space and applied those groupings to the points in the 32-dim space. Finally, Mapper clustered the groups using DBSCAN and built a graph in which clusters containing the same points were connected (see Fig. 1, bottom).

          We observed structural differences between the Mapper graphs of the two tappers' brain states across all 12 tasks. Across time, one individual showed distinct partitioning between the self-led task and the other three tasks. Finally, the graphs exhibited hub-like behavior, similar to that reported by Saggar et al. [2].
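
          The pipeline described above can be sketched with the open-source KeplerMapper library; the t-SNE lens and DBSCAN clusterer match the description above, while the file name and cover parameters are illustrative assumptions.

              import numpy as np
              import kmapper as km
              from sklearn.manifold import TSNE
              from sklearn.cluster import DBSCAN

              X = np.load("eeg_trial.npy")   # hypothetical (n_samples, 32) channel array

              mapper = km.KeplerMapper(verbose=0)
              lens = mapper.fit_transform(X, projection=TSNE(n_components=2))   # 32-dim -> 2-dim
              graph = mapper.map(lens, X,
                                 cover=km.Cover(n_cubes=20, perc_overlap=0.3),  # illustrative cover
                                 clusterer=DBSCAN(eps=0.5, min_samples=5))      # cluster in 32-dim
              mapper.visualize(graph, path_html="mapper_graph.html")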


          Acknowledgements:
          We thank the US Army Research Lab for funding Ramesh Srinivasan and Zhibin Zhou under Contract No W911NF2420013.


          References:

          [1] Singh, G., Mémoli, F., & Carlsson, G. (2007). Topological Methods for the Analysis of High Dimensional Data Sets and 3D Object Recognition. Eurographics Symposium on Point-Based Graphics 2007, 91-100. 
          [2] Saggar, M., Sporns, O., Gonzalez-Castillo, J., et al. (2018). Towards a new approach to reveal dynamical organization of the brain using topological data analysis. Nature Communications, 9, 1399. 


          Tuesday July 23, 2024 4:50pm - 6:40pm PDT
          TBA

          4:50pm PDT

          P111 Using brain network modeling to understand the pharmacodynamics of ketamine
          Ketamine, a non-competitive N-methyl-D-aspartate (NMDA) receptor antagonist, is known as a psychotropic drug for modeling psychosis, but is also currently used in treatment-resistant depression and anesthesia. Ketamine shows diverging behavioral effects, ranging from the induction of dissociative states at low doses to anesthetic effects at higher doses. These distinct behavioral effects are associated with an antagonism of NMDA receptors expressed by various excitatory and inhibitory neuron types, and are accompanied by widespread changes in large-scale electrophysiological activity in the alpha and theta bands.
          On a molecular level, ketamine induces local alterations of NMDAergic transmission of pyramidal cells. At lower doses, this leads to an increase in the excitation of pyramidal cells, while at higher doses a decrease in excitation is observed. Given that ketamine impairs NMDAergic transmission, the increase of excitation at lower doses seems counterintuitive. One of the most prominent theories about ketamine's mechanism of action is the disinhibition theory, which states that low doses of ketamine cause cortical disinhibition by preferentially binding NMDA receptors on inhibitory interneurons. However, the exact mechanism of action is not yet fully understood and remains the subject of current research.
          In this work we show how computational modeling with The Virtual Brain (TVB) can help identify the causal mechanisms of drug effects, using ketamine as an example. The brain network model incorporates biologically interpretable parameters and is based on the human connectome. Each node in the network is represented by an abstracted cortical column, namely the Jansen-Rit neural mass model, comprising two excitatory sub-populations and one inhibitory sub-population that are coupled locally.
          To investigate ketamine's effects, we implemented two types of dose-dependent NMDA receptor antagonism, both manipulating the excito-excitatory and excito-inhibitory connections between the sub-populations of the model. First, we tested the impact of uniform antagonism, for which the coupling parameters were scaled down by an identical factor determined by the ketamine dose. Second, we implemented a selective antagonism, for which the coupling parameters change selectively through sigmoidal functions with distinct inflection points. We found that selective antagonism, but not uniform antagonism, could reproduce the changes found in excitation and in electrophysiological activity in the alpha and theta bands.
          Our computational results support the disinhibition theory: ketamine at low doses preferentially binds to NMDA receptors on inhibitory interneurons, predominantly impairing excitatory-to-inhibitory transmission. At anesthetic doses, on the other hand, antagonism takes place at all NMDA receptors, thereby also impairing excito-excitatory transmission. Here we exemplified the approach using the Jansen-Rit neural mass model; however, the relationship between selective NMDAergic transmission efficiencies and ketamine dose can be used as a framework to study ketamine's effects and other anti-NMDAergic phenomena in other computational models.
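
          The two antagonism schemes can be summarised in a short sketch. The sigmoidal form follows the description above, but the dose constants and inflection points below are illustrative placeholders, not the fitted model parameters.

              import numpy as np

              def uniform_antagonism(dose, k=1.0, d0=5.0):
                  # One sigmoidal factor scales the excito-excitatory (EE) and
                  # excito-inhibitory (EI) couplings identically with dose.
                  s = 1.0 / (1.0 + np.exp(k * (dose - d0)))
                  return s, s                      # (EE scaling, EI scaling)

              def selective_antagonism(dose, k=1.0, d0_ee=8.0, d0_ei=2.0):
                  # Distinct inflection points: EI coupling is impaired at lower
                  # doses than EE coupling, as in the disinhibition hypothesis.
                  s_ee = 1.0 / (1.0 + np.exp(k * (dose - d0_ee)))
                  s_ei = 1.0 / (1.0 + np.exp(k * (dose - d0_ei)))
                  return s_ee, s_ei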


          Tuesday July 23, 2024 4:50pm - 6:40pm PDT
          TBA

          4:50pm PDT

          P112 Is the cortical dynamics ergodic? A numerical study in partially-symmetric networks of spiking neurons
          Cortical neurons in vivo display significant temporal variability in their spike trains, even under virtually identical experimental conditions. This variability is only partly explained by the intrinsic stochasticity of the spike-generation mechanism; in fact, to account for the levels of variability observed experimentally, one needs to assume additional fluctuations in the level of activity over longer time scales [1]. But what is the origin of these fluctuations in the "firing rates"? Some theories explain them as a result of precise adjustments of the synaptic connectivity [2]. Others claim that slow fluctuations in the "rates" are instead a signature of non-ergodic network dynamics [3]. The non-ergodicity arises as a consequence of the partially symmetric synaptic connectivity, consistent with anatomical observations [4]. It is unclear, however, whether such ergodicity breaking occurs in networks of spiking neurons, due to the presence of fast temporal fluctuations in the synaptic inputs generated by the spiking variability [5].
          To address this question we study the dynamics of sparse networks of quadratic integrate-and-fire neurons [6] with arbitrary levels of symmetry, q, in the synaptic connectivity: the connectivity matrix is random for q=0 and fully symmetric for q=1. The neurons also receive an excitatory drive, dynamically balanced by the recurrent synaptic inputs [7]. This results in low and heterogeneous average levels of activity across neurons (Fig. 1a-b) and temporally irregular spike trains, mimicking features of cortical activity [8].
          We estimate the single-neuron "firing rates" over increasing time intervals, T, starting from different initial distributions of the membrane voltages. If the dynamics is ergodic, the difference D between the estimates obtained from different initial settings should go to zero for suitably long time windows. This is, in fact, what happens in random networks. In partially symmetric networks (q>0), the onset of the "ergodic" regime occurs at longer and longer times. The situation becomes dramatic for the fully symmetric network (Fig. 1c): the network dynamics is non-ergodic, at least in a weak sense. In this regime, the network activity is sparse, with a large fraction of almost-silent neurons, and the auto-correlation function of the spike trains exhibits long time scales (Fig. 1d). These features are also routinely observed in experimental recordings [9].
          Our results provide support to the idea that many features of cortical activity can be parsimoniously explained by the non-ergodicity of the network dynamics [4]. In particular, in this regime, the activity level of single neurons can change significantly depending on the "microscopic" initial conditions, providing a simple explanation for the large trial-to-trial fluctuations observed experimentally.
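
          A minimal sketch of the two ingredients described above: a coupling matrix with tunable symmetry q (shown here for the Gaussian case; the study itself uses sparse networks), and the divergence measure D(T) between rate estimates obtained from different initial conditions.

              import numpy as np

              def coupling_matrix(n, q, rng):
                  # Gaussian random coupling with corr(J_ij, J_ji) = q: mix each
                  # entry with its transpose. q=0 -> independent, q=1 -> symmetric.
                  X = rng.standard_normal((n, n))
                  a = (np.sqrt(1 + q) + np.sqrt(1 - q)) / 2
                  b = (np.sqrt(1 + q) - np.sqrt(1 - q)) / 2
                  return a * X + b * X.T

              def rate_divergence(spikes_a, spikes_b, T):
                  # spikes_*: boolean (time_steps, n_neurons) arrays from two runs
                  # differing only in the initial membrane voltages. D(T) should
                  # tend to zero for long T if the dynamics is ergodic.
                  r_a = spikes_a[:T].mean(axis=0)
                  r_b = spikes_b[:T].mean(axis=0)
                  return np.abs(r_a - r_b).mean()
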
          References
          1. A. K. Churchland, et al. Neuron 69, 818, 2011
          2. C. Huang & B. Doiron. Current Opinion in Neurobiology 46, 31, 2017
          3. K. Berlemont & G. Mongillo. bioRxiv, pages 2022–03, 2022
          4. L. Campagnola, et al. Science 375.6585, 2022: eabj5861
          5. S. Rao, D. Hansel & C. Van Vreeswijk, Sci Rep 9, 3334, 2019
          6. M. Monteforte & F. Wolf. PRL, 105(26):268104, 2010
          7. M. Di Volo & A. Torcini. PRL, 121(12):128301, 2018
          8. A. Roxin, et al. J Neurosci, 31.45, 2011: 16217-16226
          9. J. D. Murray et al., Nat Neurosci ,17, 1661, 2014


          Tuesday July 23, 2024 4:50pm - 6:40pm PDT
          TBA

          4:50pm PDT

          P113 Robustness of sparse cortical networks
          The primary sensory cortices perform the first stages of cortical information processing as the brain builds a representation of the world surrounding us. Sparse connectivity, low neuronal firing frequency and variable trial-to-trial responses are hallmarks of cortical network activity [1–3]. In vivo ablation of the touch-sensitive neurons in layer (L) 2/3 of the primary somatosensory (S1) cortex revealed population-wide modulation of stimulus-evoked firing rates [4], suggesting the susceptibility of cortical networks to sparsification. Here, we systematically investigate the robustness of sparse cortical networks to input-related perturbations, specifically to the loss of the thalamocortical (TC) projections which drive the network activity. We developed a biologically inspired multi-layer spiking neural network (SNN) with Hodgkin-Huxley neurons to mimic realistic interactions between cortical neurons in L4 and L2/3 of S1 and ventral posteromedial (VPM) thalamic neurons (Fig. 1a-c). We systematically removed (i.e., sparsified) TC projections from VPM to excitatory (E) and inhibitory (I) L4 neurons, while evaluating network robustness, defined as the population-specific firing rate (FR) stability. We sparsified TC projections either as a group (i.e., VPM-L4E and VPM-L4I simultaneously) or individually, targeting either the VPM-L4E or the VPM-L4I projections.
          The results show several effects. First, L4 is symmetrically sensitive to grouped TC projection sparsification, as both L4 E and I populations show significant FR modulation onset at identical sparsification levels (Fig. 1d, yellow). We observe an asymmetrical loss of robustness in L4 after targeted VPM-L4E projection sparsification (Fig. 1e, blue) and no effects in the VPM-L4I case (Fig. 1f). Second, the symmetry in L4 undergoes cross-laminar transformations: simultaneous removal of TC projections causes asymmetrical responses in L3 E and I populations (Fig. 1d, green), whereas the asymmetry in L4 (E neurons show FR instability before I) changes directionality in L3 (I neurons show effects before E) after targeted VPM-L4E sparsification (Fig. 1e, green). Finally, we identify a layer-specific tolerance threshold for the different sparsification cases, where the tolerance threshold is the sparsification-level boundary between stable and significantly modulated FR. Interestingly, the L2E neurons display the highest tolerance to TC projection loss across all sparsification cases (Fig. 1d-f, red).
          These results show that the cortical network is highly robust to input perturbations. The strongest loss of robustness results from the loss of input to L4E neurons (Fig. 1d-e); we speculate that this triggers the activation of long-term plasticity mechanisms at excitatory synapses, resulting in functional rewiring of the network, as shown in sensory deprivation studies [1,5]. Moreover, robustness is layer-specific and increases in layers further downstream of the thalamic input, suggesting a role for intracortical connectivity in robustness. The connectivity is potentially optimised for stable activity in L2E (Fig. 1d-f, red), which is the main source of cross-columnar connectivity in the cortex.
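
          The sparsification protocol can be sketched as follows; W_e and W_i stand for the VPM-to-L4E and VPM-to-L4I weight matrices, and the matrices and function names are illustrative, not taken from the authors' code.

              import numpy as np

              def sparsify(W, fraction, rng):
                  # Remove a given fraction of the existing (nonzero) thalamocortical
                  # projections at random; returns a sparsified copy of W.
                  W = W.copy()
                  idx = np.flatnonzero(W)
                  drop = rng.choice(idx, size=int(fraction * idx.size), replace=False)
                  W.flat[drop] = 0.0
                  return W

              rng = np.random.default_rng(0)
              W_e = (rng.random((200, 100)) < 0.1) * rng.random((200, 100))  # VPM->L4E (illustrative)
              W_i = (rng.random((50, 100)) < 0.1) * rng.random((50, 100))    # VPM->L4I (illustrative)

              # Grouped sparsification: both pathways pruned at the same level.
              W_e_g, W_i_g = sparsify(W_e, 0.3, rng), sparsify(W_i, 0.3, rng)
              # Targeted sparsification: prune VPM-L4E only, keep VPM-L4I intact.
              W_e_t, W_i_t = sparsify(W_e, 0.3, rng), W_i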


          References
          [1] T. Celikel et al., Nat. Neurosci. 7 (2004) 534–541.
          [2] H. Markram et al., Cell 163 (2015) 456–492.
          [3] B. Voelcker et al., Nat. Commun. 13 (2022) 5484.
          [4] S. Peron et al., Nature 579 (2020) 256–259.
          [5] C.B. Allen et al., Nat. Neurosci. 6 (2003) 291–299.


          Tuesday July 23, 2024 4:50pm - 6:40pm PDT
          TBA

          4:50pm PDT

          P114 OpenWorm updates: Consolidating connectomes, 2D worm body models, biophysically detailed neuron models and using LLMs to help build a worm

          The OpenWorm project (http://openworm.org) is a global, online collaboration of computational and experimental neuroscientists, software developers and interested volunteers with an ambitious long-term goal: creating a cell-by-cell computer model of the worm C. elegans which reproduces the behaviour of the real animal in as much detail as possible [1]. The project takes a unique Open Science approach to development, and provides a community resource which consolidates our anatomical and physiological knowledge of the worm, and will allow investigators to examine the mechanistic underpinnings of how behaviour is generated by a complete nervous system.

          We will provide an update on a number of interrelated activities within the project:

          1) While C. elegans is often said to have a completely mapped connectome, a number of datasets published over the past few decades have provided (sometimes overlapping, sometimes independent) information on synaptic connectivity in the worm [2–4]. We have systematically analysed the published datasets, applying uniform formatting to allow greater comparison and consolidation of these (chemical, electrical & extrasynaptic) connectomes for use in theoretical and modelling studies (a sketch of such a uniform format follows this list).

          2) OpenWorm contributors have already developed a 3D worm body model (Sibernetic) which incorporates a fluid-mechanics simulator for modelling the interactions between the worm body and the external environment. While this can be used for detailed simulations, it is computationally intensive and unsuitable for running large numbers of simulations for parameter estimation. We will report on our plans to adapt existing 2D worm body simulators [5] for use in the project as more efficient alternatives for testing the generation of behaviour in the body by the nervous system.
          3) We will describe biophysical cell models [6] which have been translated to standardised NeuroML format to ease incorporation into our simulations of the complete nervous system. 
          4) Large Language Models (LLMs) hold great promise for facilitating access to huge amounts of scientific literature across multiple domains. We will outline our developments to date in creating a corpus of literature related to C. elegans which can be used to fine-tune LLMs, allowing the extraction of scientific properties related to the worm and easing their use as a basis for building, and for validating, computational models of worm anatomy, physiology and behaviour.
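
          As an illustration of point 1, the sketch below consolidates heterogeneous connectome tables into one uniform schema; all file names and column mappings are hypothetical placeholders, not the project's actual data layout.

              import pandas as pd

              SCHEMA = ["pre", "post", "type", "weight", "dataset"]  # uniform columns

              def normalise(df, colmap, dataset):
                  # Rename dataset-specific columns onto the shared schema and keep
                  # provenance so the connectomes can be compared side by side.
                  out = df.rename(columns=colmap)[["pre", "post", "type", "weight"]]
                  out["dataset"] = dataset
                  return out

              frames = [
                  normalise(pd.read_csv("white_1986.csv"),          # hypothetical file
                            {"Neuron 1": "pre", "Neuron 2": "post",
                             "Type": "type", "Nbr": "weight"}, "White_1986"),
                  normalise(pd.read_csv("cook_2019.csv"),           # hypothetical file
                            {"Source": "pre", "Target": "post",
                             "Kind": "type", "N": "weight"}, "Cook_2019"),
              ]
              connectome = pd.concat(frames, ignore_index=True)[SCHEMA]
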
          References:

          1. Sarma GP, et al. OpenWorm: overview and recent advances in integrative biological simulation of Caenorhabditis elegans. Philos Trans R Soc Lond B Biol Sci. 2018; 373. 
          2. White JG, Southgate E, Thomson JN, Brenner S. The structure of the nervous system of the nematode Caenorhabditis elegans: the mind of a worm. Phil Trans R Soc Lond. 1986; 314:1–340.
          3. Cook SJ, et al. Whole-animal connectomes of both Caenorhabditis elegans sexes. Nature. 2019; 571:63–71.
          4. Varshney LR, Chen BL, Paniagua E, Hall DH, Chklovskii DB. Structural properties of the Caenorhabditis elegans neuronal network. PLoS Comput Biol. 2011; 7:e1001066.
          5. Boyle JH, Berri S, Cohen N. Gait Modulation in C. elegans: An Integrated Neuromechanical Model. Front Comput Neurosci. 2012; 6:10.
          6. Nicoletti M, Loppini A, Chiodo L, Folli V, Ruocco G, Filippi S. Biophysical modeling of C. elegans neurons: Single ion currents and whole-cell dynamics of AWCon and RMD. PLoS One. 2019; 14:e0218738.




          Tuesday July 23, 2024 4:50pm - 6:40pm PDT
          TBA

          4:50pm PDT

          P115 The dual nature of synaptic homeostasis: Interaction between fast and slow processes
          The brain stores information using Hebbian plasticity mechanisms that, however, require compensatory mechanisms to prevent pathological behaviour [1]. Synaptic scaling regulates synaptic efficacies to maintain the neuron in a fixed firing regime [2], but it might be too slow to counteract the excessive growth of synapses driven by Hebbian processes [1]. In contrast, heterosynaptic plasticity operates at timescales similar to Hebbian plasticity [3], but its ability to homeostatically stabilize neural activity remains unclear.
          Here, we introduce aggregate scaling, a simple, biologically plausible model for synaptic homeostasis based on the dynamics of synaptic building blocks, such as AMPA receptors (Fig. 1, left). Activity-dependent changes introduce synaptic competition over the available building blocks, giving rise to heterosynaptic changes [4]. The abundance of these building blocks is regulated based on persistent shifts of spiking activity [5]. In our model, we implement heterosynaptic plasticity as a multiplicative normalization that preserves the total sum of weights and ensures that synapses potentiate at the expense of others; a simple control law then scales the sum of all synaptic weights to regulate the firing activity of each neuron. Parameters are set to replicate experimental results, and consistent with those studies we find the timescale of synaptic scaling to be on the order of hours [6, 7].
          We implement aggregate scaling in a single neuron and in a cortical network model and compare it with conventional synaptic scaling and a simple normalization model. We demonstrate that, in contrast to the other models, aggregate scaling can regulate activity at both the neuronal and network levels in the face of ongoing plasticity (Fig. 1, right). We find that both rapid heterosynaptic changes and slow activity-dependent regulation are necessary to prevent pathological behaviour. Lastly, we analyze the stability of our model by considering a simplified non-linear system and identify the conditions necessary for the system to reach equilibrium and avoid damped oscillations. Overall, our results highlight the importance of multiple timescales of synaptic regulation.
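
          A caricature of the aggregate-scaling rule in Python: a fast heterosynaptic normalisation step followed by a slow homeostatic adjustment of the total weight budget. The control-law form and constants are illustrative assumptions, not the authors' fitted parameters.

              import numpy as np

              def aggregate_scaling_step(w, rate, target_rate, dw_hebb, eta=1e-4):
                  # Fast step: apply Hebbian changes, then renormalise multiplicatively
                  # so the total weight budget is conserved (synapses potentiate at
                  # the expense of others -- the heterosynaptic component).
                  total = w.sum()
                  w = np.clip(w + dw_hebb, 0.0, None)
                  w *= total / w.sum()
                  # Slow step: a simple control law adjusts the budget toward the
                  # target firing rate (an hours-long timescale for small eta).
                  w *= 1.0 + eta * (target_rate - rate)
                  return w
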
          Acknowledgements
          This work was supported by the German Research Foundation, DFG, SPP 2041, Project number 347573108: "The dynamic connectome: dynamics of learning" and the Johanna Quandt foundation (JT).
          References
          1. Zenke F, Gerstner W. Hebbian plasticity requires compensatory processes on multiple timescales. Philos Trans R Soc B Biol Sci, 2017, vol. 372, no. 1715, p. 20160259.
          2. Turrigiano G G. The self-tuning neuron: synaptic scaling of excitatory synapses. Cell. 2008, vol. 135, no. 3, pp. 422–435.
          3. Chistiakova M, Bannon N M, Bazhenov M, et al. Heterosynaptic plasticity: multiple mechanisms and multiple roles. The Neuroscientist, 2014, vol. 20, no. 5, pp. 483–498.
          4. Triesch J, Vo A D, Hafner A S. Competition for synaptic building blocks shapes synaptic plasticity. Elife, 2018, vol. 7, p. e37836.
          5. Ju W, Morishita W, Tsui J, et al. Activity-dependent regulation of dendritic synthesis and trafficking of AMPA receptors. Nat. Neurosci, 2004, vol. 7, no. 3, pp. 244–253.
          6. Turrigiano G G, Leslie K R, Desai N S, et al. Activity-dependent scaling of quantal amplitude in neocortical neurons. Nature, 1998, vol. 391, no. 6670, pp. 892–896.
          7. Ibata K, Sun Q, Turrigiano G G. Rapid synaptic scaling induced by changes in postsynaptic firing. Neuron, 2008, vol. 57, no. 6, pp. 819–826.


          Tuesday July 23, 2024 4:50pm - 6:40pm PDT
          TBA

          4:50pm PDT

          P116 Closed-loop neurostimulation for the treatment of pathological brain rhythms
          Mental disorders affect millions of people worldwide and are a leading cause of disability. While pharmacological treatments can be effective for some patients, they often have limited efficacy and significant side effects. In recent years, there has been growing interest in using neurostimulation as an alternative treatment for mental disorders. However, traditional neurostimulation and neurofeedback approaches have focused on delivering a fixed stimulation pattern [1, 2]. Adaptive closed-loop neurostimulation, on the other hand, offers a more targeted and personalized approach by adjusting the stimulation in real-time based on the patient's brain activity.

          In this work, we explore the potential of closed-loop neurostimulation for restoring physiological brain activity in mental disorders. We focus on the psychotic transition in schizophrenia, which has been shown to induce increased gamma activity and decreased alpha activity in electroencephalograms (EEG) [3]. We highlight the use of EEG to monitor ongoing brain activity and adjust the neurostimulation accordingly. This setup permits reformulating the problem in terms of a control circuit that includes a controller whose purpose is to convert the EEG signal into the neurostimulation signal in real time. Furthermore, under the assumption of small input currents, we can study the brain's EEG response with linear analysis tools [4], which allows us to study the effect of neurostimulation directly in the frequency domain.

          One major challenge is the development of reliable biomarkers that can be used to guide the neurostimulation. The most obvious biomarker choice is the pathological rhythms themselves; consequently, the closed-loop setup aims to control the EEG power spectral density distribution. Another challenge with real-time closed-loop control is the effect of the feedback delay on the stability and the total output of the system. To address this issue, we introduce a predictor filter that optimally compensates for the effect of the delay in the frequency ranges of interest. Moreover, the design of the controller involves a neurostimulation response model, which is extracted in situ from open-loop neurostimulation data of the patient. This model identification step also makes it possible to account for the interference created by direct current measurement while applying neurostimulation and measuring the brain activity simultaneously.

          The elements and evaluation of the proposed method are applied to a linear neural population for EEG under general anesthesia [5] and to a cortico-thalamic feedback model describing EEG under ketamine and transcranial neurostimulation [6].

          The control algorithm is both lightweight and efficient, which allowed it to be implemented entirely on an Arduino chip interfaced with the stimulation and EEG measurement devices.
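
          A minimal discrete-time sketch of such a controller, with the frequency-domain predictor filter reduced here to a linear extrapolation over the feedback delay; the gains, target and delay are illustrative placeholders, not the authors' design.

              class BandPowerController:
                  # Illustrative PI controller: converts the deviation of a
                  # pathological band's EEG power from its physiological target
                  # into a stimulation amplitude.
                  def __init__(self, target, kp=0.5, ki=0.05, delay_steps=3):
                      self.target, self.kp, self.ki = target, kp, ki
                      self.delay = delay_steps
                      self.integral = 0.0
                      self.history = []

                  def step(self, band_power):
                      self.history.append(band_power)
                      # Crude predictor: extrapolate power across the feedback delay.
                      trend = (self.history[-1] - self.history[-2]
                               if len(self.history) > 1 else 0.0)
                      predicted = band_power + self.delay * trend
                      error = self.target - predicted
                      self.integral += error
                      return self.kp * error + self.ki * self.integral  # stimulation amplitude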

          References
          1. Tortella G, et al. World Journal of Psychiatry. 2015;5(1):88.
          2. Ros T, et al. Frontiers in Human Neuroscience. 2014;8:1008.
          3. Schulman JJ, et al. Frontiers in Human Neuroscience. 2011;5:69.
          4. Liu Z, et al. NeuroImage. 2010;50(3):1054-1066.
          5. Hutt A. Frontiers in Computational Neuroscience. 2013;7:2.
          6. Riedinger J, Hutt A. Journal of Clinical Medicine. 2022;11(7):1845.


          Tuesday July 23, 2024 4:50pm - 6:40pm PDT
          TBA

          4:50pm PDT

          P117 Some further developments on a neurobiologically-based model for human color sensation
          I previously presented a neurobiologically-based model for human trichromatic color sensation [1]. Psychophysically, my model is based on a color-sensation model proposed by the British psychologist William McDougall in 1901 [2], which in turn is based on the Young-Maxwell-Helmholtz trichromatic theory of human color vision. Here I apply my model to three aspects of color vision: the 3D color solid, dichromatism, and ocular agnosticism. Fig. 1A illustrates the first three stages involved in my model, and Fig. 1B lists the main tenets of both McDougall's model and mine. A crucial point is the following: many color vision phenomena exhibit monocularity, i.e., they occur within the part of visual consciousness corresponding to one eye only (see [3], p. 499). McDougall characterized this feature as a cortical monocular stage for color sensation, and I mapped this stage to V1-L4 (i.e., layer 4 of the primary visual cortex, cortical area V1) based on currently known neuroanatomical and neurophysiological findings about the primate visual system [4]. Let us now proceed to the three aspects of color vision mentioned above.

          (1) 3D color solid: over history, many researchers have proposed various geometrical forms (the so-called "color solid" models) for all perceivable colors stacked together in some orderly manner [5]. V1-L4 is known to have a gradient of cell densities from the layer's top (its pial or outer side) to its bottom (its white-matter or inner side). Considered together with the proposition that the population size of a cell assembly directly corresponds to the magnitude of a color sensation, we can derive that the neuroanatomically-based color solid is a tilted cuboid (Fig. 1C), which is very close to the tilted double-pyramid color solid proposed by the German psychologist Ebbinghaus ([5], p. 81).

          (2) Color blindness, specifically dichromatism, in which the dimensionality of color space is two: Fig. 1D compares two cases of dichromatism with normal trichromatism. Taking deuteranopia as an example, at the retinal level M-cones are lost and replaced by L-cones, but at the cortical level deuteranopia shows as a fusion (or non-differentiation) between the two bottom layers of V1-L4.

          (3) Ocular agnosticism: even though color sensation is monocular, we are normally not aware of which eye we are seeing with. This phenomenon is known as "ocular agnosticism" (or, informally, "blindness to eye-of-origin") and has been described by many investigators (e.g., by Helmholtz in [3]). Fig. 1E presents an explanation for this phenomenon: within one eye's ocular dominance column (ODC) and its counterpart for the other eye, what matters for color sensation is the population size of neuronal activation; how this population is distributed between the two eyes' ODCs does not matter to color sensation under the condition of binocular color fusion.

          References
          1. Wu CQ. A neurobiologically based two-stage model for human color vision. Proceedings Human Vision and Electronic Imaging. 2012, XVII, 82911O.
          2. McDougall W. Some new observations in support of Thomas Young's theory of light- and colour-vision (II). Mind. 1901, 10, 210-245.
          3. Helmholtz H. Treatise on Physiological Optics. Vol. 3 (translated by J. P. C. Southall). New York: Optical Society of America; 1925.
          4. Horton JC. Ocular integration in the human visual cortex. Canadian Journal of Ophthalmology. 2006, 41, 584-593.
          5. Kuehni RG. Color Space and Its Divisions. Wiley; 2003.



          Tuesday July 23, 2024 4:50pm - 6:40pm PDT
          TBA

          4:50pm PDT

          P118 Shorter Intrinsic Timescales in Aging Brains: Insights from Spiking Neuron Networks
          Intrinsic timescales of brain regions quantify how long neural information is stored. They are heterogeneous across brain regions and grow with hierarchical level [1, 2]. Healthy ageing brings a shift in the local and global balance of information integration [3], and these dynamical changes may affect the timescales of information coding in the aged brain. To test this hypothesis, this work investigates the alteration of intrinsic timescales in functional regions with resting-state functional magnetic resonance imaging (fMRI) in a young cohort (18–32 years old, mean: 22.21, SD: 3.65) and an old cohort (61–80 years old, mean: 69.82, SD: 5.64). We found decreased intrinsic timescales across brain regions encompassing multiple large-scale functional networks in elderly subjects. Furthermore, to better understand the neuroanatomical bases underlying these alterations, we measured the grey-matter volumes (GMV) of the corresponding brain regions and inspected their association with intrinsic timescales. In agreement with the literature [4, 5], we found a significant positive association between intrinsic timescales and GMV across brain regions. To obtain a mechanistic explanation for this phenomenon, we modelled networks of spiking neurons [6]. The young brain network was modelled as a near-critical network; the elderly brain network was modelled with a reduced number of neurons and connections, representing the normal ageing process and the reduced GMV observed. The model reproduced the empirical results: the network representing the younger brains was closer to criticality and exhibited longer intrinsic timescales due to critical slowing down. As parts of the network were removed, the elderly network had fewer connections and more subcritical dynamics and, hence, shorter intrinsic timescales [7]. This research provides a novel mechanistic understanding of how structural brain changes may underpin dynamical alterations.
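
          A minimal sketch of one common way to estimate an intrinsic timescale: the exponential decay constant of a signal's autocorrelation function (the exact estimator used in the study may differ).

              import numpy as np
              from scipy.optimize import curve_fit

              def intrinsic_timescale(x, max_lag=50, dt=1.0):
                  # Fit an exponential decay to the autocorrelation of a regional
                  # activity/BOLD time series; the fitted constant is the timescale.
                  x = (x - x.mean()) / x.std()
                  n = len(x)
                  lags = np.arange(1, max_lag)
                  acf = np.array([np.dot(x[: n - k], x[k:]) / (n - k) for k in lags])
                  tau, _ = curve_fit(lambda t, tau: np.exp(-t / tau),
                                     lags * dt, acf, p0=[10.0 * dt])
                  return tau[0]
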
          References
          [1].  Golesorkhi M, Gomez-Pilar J, Zilio F, Berberian N, Wolff A, Yagoub MC, Northoff G. The brain and its time: intrinsic neural timescales are key for input processing. Communications biology. 2021 Aug 16;4(1):970.
          [2].  Gollo LL. Exploring atypical timescales in the brain. Elife. 2019, 8:e45089.
          [3].  McIntosh AR. Neurocognitive aging and brain signal complexity. InOxford research encyclopedia of psychology 2019 Feb 25.
          [4].  Kanai R, Rees G. The structural basis of inter-individual differences in human behaviour and cognition. Nature Reviews Neuroscience. 2011 Apr;12(4):231-42.
          [5].  Watanabe T, Rees G, Masuda N. Atypical intrinsic neural timescale in autism. eLife. 2019;8:e42256.
          [6].  Kinouchi O, Copelli M. Optimal dynamical range of excitable networks at criticality. Nature Physics. 2006 May;2(5):348-51.
          [7].  Cocchi L, Gollo LL, Zalesky A, Breakspear M. Criticality in the brain: A synthesis of neurobiology, models and cognition. Progress in Neurobiology. 2017 Nov 1;158:132-52.


          Tuesday July 23, 2024 4:50pm - 6:40pm PDT
          TBA

          4:50pm PDT

          P119 Dynamical remodeling of neural circuit architecture
          Visual cortex has been studied as a model system for plasticity for many years [1,2]. It is still controversial, however, which mechanisms guide the emergence and development of orientation preference maps (OPMs), a hallmark of the cortical functional architecture [3,4]. A common view is that molecular recognition governs initial map formation and helps build a basic topological structure, while at later stages of development retinal activity patterns are required to tune the maps to maturity [5,6]. We use a detailed input-driven model with a Hebbian learning mechanism (the Topographica model), examine different conditions and parameter regimes for OPM development, and characterize map layouts on long time scales [7]. Our results provide, for the first time, a comprehensive characterization of the temporal reorganization of orientation maps across all biologically relevant timescales, from the first emergence of orientation-selective cells to the final convergence of the entire circuit architecture. Among the conditions examined, pinwheel layouts under certain parameter regimes show statistical characteristics most closely matching the common statistical features of experimental data from six animal species [8,9].
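
          The Hebbian core of such input-driven map models can be caricatured in a few lines; the actual Topographica (GCAL) models additionally include structured lateral connectivity and homeostatic adaptation, so the sketch below is only illustrative.

              import numpy as np

              def hebbian_map_step(W, x, eta=0.05):
                  # W: afferent weights (n_units, n_inputs); x: one input pattern.
                  y = W @ x                                # feedforward activation
                  y = np.maximum(y - y.mean(), 0.0)        # crude lateral competition
                  W = W + eta * np.outer(y, x)             # Hebbian update
                  W = W / W.sum(axis=1, keepdims=True)     # divisive weight normalisation
                  return W, y
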
          References
          1. Hubel, D. H. and T. N. Wiesel (1962). “Receptive fields, binocular interaction and functional architecture in the cat’s visual cortex”. In: J Physiol. 160, pp. 106–154.
          2. Wandell, B. A. and S. M. Smirnakis (2009). “Plasticity and stability of visual field maps in adult primary visual cortex”. In: Nature Reviews Neuroscience 10, pp. 873–884.
          3. Ackman, J. B. and M. C. Crair (2014). “Role of emergent neural activity in visual map development”. In: Current Opinion in Neurobiology 24, pp. 166–175.
          4. Bosking, W. H., J. C. Crowley, and D. Fitzpatrick (2002). “Spatial coding of position and orientation in primary visual cortex”. In: Nature Neuroscience 5, pp. 874–882.
          5. White, L. and D. Fitzpatrick (2007). “Vision and cortical map development”. In: Neuron 56, pp. 327–338.
          6. Albert, M. V., A. Schnabel, and D. J. Field (2008). “Innate Visual Learning through Spontaneous Activity Patterns”. In: PLoS Computational Biology 4, e1000137.
          7. Stevens, J. et al. (2013). “Mechanisms for Stable, Robust, and Adaptive Development of Orientation Maps in the Primary Visual Cortex”. In: J Neurosci. 33, pp. 15747–15766.
          8. Ho, C. L. A. et al. (2020). “Orientation Preference Maps in Microcebus murinus Reveal Size-Invariant Design Principles in Primate Visual Cortex”. In: Current Biology 31, pp. 733–741.
          9. Kaschube, M. et al. (2010). “Universality in the Evolution of Orientation Columns in the Visual Cortex”. In: Science 330, pp. 1113–1116.


          Tuesday July 23, 2024 4:50pm - 6:40pm PDT
          TBA

          4:50pm PDT

          P120 A network model for flexible binding in working memory with dendritic bistability
          The binding problem concerns how our brain associates elements of information into separate items; we study it in the context of working memory. For example, how can we hold working memories of a red square and a blue circle without confusing them with a blue square and a red circle? Another example, where binding is unidirectional, is how, when playing chess, we can flexibly manipulate candidate next moves in our mind like a palimpsest. In general, there are combinatorially many ways elements can be bound, making exhaustive training unrealistic. The brain should therefore employ a flexible binding mechanism in working memory.


          Classically, two popular mechanisms have been proposed. One is ‘communication through coherence’, which binds elements, represented as oscillators, through relative phase locking; however, this can be fragile in a noisy environment like the brain. The other is fast Hebbian plasticity, but this has little experimental support and allows only one item to be held actively; other items need to be activity-silent to prevent detrimental interactions between items.


          In this study, we propose a basis for flexible binding in working memory, using a spiking network model with bistable dendrites. This simplified network consists of 40 identical neurons, all-to-all connected with a uniform weight. Each neuron has 40 separate dendrites, each of which receives a projection from a different neuron in the network. Dendrites follow a bistable input-output function, achieved by conductance-based dynamics dominated by NMDA receptors (Fig. 1A). Functionally, this makes each dendrite behave as though there were fast Hebbian learning, but the stored memory is based on dendritic activity, not synaptic strength.


          With this simple model, we tested various working memory situations. These include two separately bound items, ‘A+C’ and ‘B+D’; two partially overlapping items, ‘A+B’ and ‘A+C’; and unidirectionally bound relations, ‘A to C’ and ‘C to B’, as illustrated in Fig. 1B. Four features, A, B, C and D, are represented in the network, each by 10 neurons. We demonstrated active storage that is robust under noise and under a strong local inhibitory impulse, and the errors induced give testable predictions. Furthermore, we showed how information stored in the form of dendritic voltage can be read out in firing rate, either by cue-recall or by phase locking.
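
          A minimal sketch of the bistable dendritic input-output dynamics described above; the cubic nonlinearity stands in for the NMDA-dominated conductance dynamics, and all constants are illustrative.

              def dendrite_step(v, inp, dt=1.0, theta=0.5, tau=20.0, k=4.0):
                  # Bistable dendritic compartment: stable fixed points near v=0
                  # (down state) and v=1 (NMDA plateau / 'bound' state), separated
                  # by an unstable point at v=theta.
                  f = k * v * (v - theta) * (1.0 - v)
                  return v + dt * (f + inp) / tau

          A brief excitatory pulse that pushes v past theta latches the branch in the up state, where it persists after the input ends; a strong inhibitory pulse (inp < 0) resets it, mimicking the stored binding and its erasure.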


          Tuesday July 23, 2024 4:50pm - 6:40pm PDT
          TBA

          4:50pm PDT

          P121 Synaptic down-scaling during sleep emerges as a by-product of decoupled neuronal activities
          Sleep may induce specific alterations in synapses. In some brain regions, e.g., the rodent somatosensory and motor cortices [1] as well as the prefrontal cortex [2], synapses have been shown to weaken during sleep; in others, e.g., the rodent visual cortex [2], synapses have been shown to remain unaltered. The mechanisms underlying synapse modifications during sleep and, more importantly, why synapses in different brain regions undergo distinct types of plasticity are not well understood. We hypothesize that the same synaptic plasticity rule is at play during awake and sleep periods, but that distinct neuronal activity patterns determine whether synapses are weakened (during sleep) or strengthened (during awake periods).

          To test this hypothesis, we investigated how excitatory synapses are modified in response to ongoing changes in pre- and postsynaptic spike statistics designed to mimic sleep/awake patterns of neuronal activity. We simulated changes in excitatory synapses via a weight-dependent spike-timing-dependent plasticity (STDP) rule with additive long-term potentiation (LTP) and multiplicative long-term depression (LTD) [3,4]. For a fixed set of the synaptic plasticity model's parameters, the steady-state distribution of synaptic weights is approximately Gaussian, with mean and standard deviation well described by these parameters as well as by the correlation between pre- and postsynaptic activity. However, synaptic weight distributions reported experimentally are not Gaussian but heavy-tailed [1,2]. We thus further hypothesized that distinct synapses might be governed by different sets of the plasticity model's parameters. We confirmed this hypothesis by fitting the experimentally reported distribution of excitatory synapse strengths from motor and somatosensory cortices of mice [1] as a composite of multiple Gaussian-like functions. We found that the experimentally reported distribution is well captured by such a fit, and we explored how such diversity in parameters may naturally emerge by incorporating a metaplasticity-like dynamics [5] into the STDP model. In our metaplasticity implementation, the LTP amplitude of a given synapse integrates the changes of its synaptic weight. This naturally separated the plasticity parameters of each synapse into distinct groups, resulting in a heavy-tailed synaptic weight distribution as reported experimentally [2].

          Importantly, our findings confirmed our original hypothesis that excitatory synapses naturally strengthen during periods of correlated pre- and postsynaptic activity, reminiscent of awake states, and weaken during periods of uncorrelated activity, reminiscent of sleep [1]. Without changes in the correlation between pre- and postsynaptic activity, the synaptic weight distribution remained unchanged [2]. Metaplasticity, which may be linked to the consolidation of synaptic weights, was the key model component for the emergence of a heavy-tailed synaptic weight distribution. Our results suggest that the weakening of excitatory synapses may arise from uncorrelated synaptic inputs during sleep rather than from an active scaling-down process.
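
          A per-pairing sketch of a plasticity rule in the spirit of [3,4], with additive LTP and weight-proportional (multiplicative) LTD; the amplitudes and time constant are illustrative, not the fitted model parameters.

              import numpy as np

              def stdp_update(w, dt_spike, c_p=0.005, c_d=0.005, tau=20.0, w_max=1.0):
                  # dt_spike = t_post - t_pre for one spike pairing (ms).
                  if dt_spike > 0:                              # pre precedes post
                      w += c_p * np.exp(-dt_spike / tau)        # additive LTP
                  else:                                         # post precedes pre
                      w -= c_d * w * np.exp(dt_spike / tau)     # multiplicative LTD
                  return float(np.clip(w, 0.0, w_max))
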
          References
          [1] de Vivo, L, et al. (2017) Science 355: 507-510.
          [2] Cary, A and Turrigiano, G G (2021) eLife 10: e66304.
          [3] Bi, G-q and Poo, M-m (1998) Journal of Neuroscience 18: 10464-10472.
          [4] van Rossum, M C W et al. (2000) Journal of Neuroscience 20: 8812-8821.
          [5] Abraham, W C and Bear, M F (1996) Trends in neurosciences 19: 126-130.


          Tuesday July 23, 2024 4:50pm - 6:40pm PDT
          TBA

          4:50pm PDT

          P122 Efficient Algorithms for Extracting Higher-Order Geometric Information from Complex Networks and its Applications to Neuroscience
          Topological and geometrical data analysis has emerged in the last decade as a robust approach in data science, with applications in neuroscience for extracting non-trivial geometrical and topological patterns from neuronal data. These non-trivial patterns are typically studied via topological invariants (e.g., Betti numbers, Euler characteristics). Here, we explore novel directions connected with discrete differential geometry: we develop a computational framework based on the Forman-Ricci curvature (FRC), which is known for its high capacity for extracting geometric information from complex networks. This is motivated by the fact that extracting information from dense datasets (e.g., brain data) remains challenging due to the combinatorial explosion of high-order network structures. To meet this challenge, we develop a set-theoretic representation of high-order network cells and of the FRC, providing an alternative and efficient formulation for computing higher-order FRC in complex networks. We provide pseudo-code and a software implementation, coined FastForman, as well as a benchmark comparison with alternative implementations; our implementation outperforms the FRC computations reported in the literature. These findings open new research possibilities in complex systems where higher-order geometric computations are required. Finally, we demonstrate the validity of our framework by applying it to fMRI time series from human connectome data.
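
          For orientation, in its simplest (edge-level, unweighted, no higher-order cells) form the Forman-Ricci curvature reduces to a degree formula, sketched below; the FastForman implementation targets the harder higher-order case, which augments this with contributions from triangles and higher cliques.

              import networkx as nx

              def forman_ricci_edges(G):
                  # Baseline Forman-Ricci curvature of each edge in an unweighted,
                  # undirected graph: F(u, v) = 4 - deg(u) - deg(v).
                  return {(u, v): 4 - G.degree(u) - G.degree(v) for u, v in G.edges()}

              G = nx.erdos_renyi_graph(100, 0.05, seed=1)   # small example graph
              frc = forman_ricci_edges(G)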



          Tuesday July 23, 2024 4:50pm - 6:40pm PDT
          TBA

          4:50pm PDT

          P123 Tight E-I Balance, Feedback, and Efficient Coding in Hierarchical Spiking Neural Network: Modeling Simple and Complex Cell Dynamics
          Understanding of the primary visual cortex (V1) and its role in visual perception has continued to evolve since the pioneering work by Hubel and Wiesel, aided in part by increasingly accurate models of simple and complex cells. Nevertheless, how V1 processes visual information is not fully understood, calling for more accurate models to capture the nuances of brain visual processing. Two crucial aspects are the tight balance between excitatory and inhibitory processes and efficient/sparse brain processing [1].
          Rao's predictive coding model [2] describes a hierarchical sensory processing scheme in which the cortex constantly generates and updates predictions to reduce errors between expected and actual sensory inputs. It postulates that higher cortical areas in the hierarchy generate predictions about sensory information, compare these with incoming inputs, and feed forward only the discrepancies. Complementing this, sparse coding highlights the brain's efficiency in using the fewest active neurons for sensory representation, minimizing redundancy. Bastos and colleagues [3] extended this model by proposing a specialized microcircuit, replicated across cortical areas, to support this processing, in which each of the six distinct layers of the cortex is specialized for a different processing role. Layer 4 receives feedforward input from lower levels and sends this information to layers 2/3. Layer 2/3 pyramidal neurons then integrate this input with feedback received at their distal tufts, transmitting the integrated information both to layer 5 pyramidal neurons and to subsequent cortical areas in the hierarchy. Layer 5 neurons generate predictions, looping feedback to lower levels and completing the predictive feedback cycle.
          This study investigates the hierarchical structure of the V1 area, focusing on how simple cells in layer 4 interact with complex cells in layers 2/3. We adopt the balanced Denève model [1] within a hierarchical neural framework comprising two populations of excitatory and inhibitory neurons, with the latter tracking the activity of the former. Our approach positions the membrane potential as a representation of the prediction error, with spikes transmitting information. Our contribution lies in the integration of feedback information into layer 2/3's processing, enhancing the simulation of cortical activity and sensory processing.
          Analysis of our model trained on natural images shows the emergence of classical receptive fields in L4 and complex receptive fields in L2/3. This results from the tight balance of excitation and inhibition, together with pooling over the phases of the simple-cell input through the lateral connectivity. Through feedback analysis, we explore the wider impact of stimuli on neural structures, seeking to enrich our understanding of the visual cortex's roles in perception and interpretation.
          This study enhances understanding of visual processing and its use in improving artificial vision systems, showcasing predictive coding as an alternative to back-propagation for neuromorphic integration. It also impacts brain-computer interfaces (BCIs) by deepening insights into feedback's role in neural processing.
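
          A minimal sketch of the balanced spiking coder of Denève and colleagues [1] that the model builds on: membrane potentials track the coding error, and a neuron spikes only when doing so reduces that error, yielding tight E/I balance and sparse activity. Dimensions and constants are illustrative.

              import numpy as np

              def efficient_coding(D, x, dt=1e-3, lam=10.0, mu=1e-3):
                  # D: decoder (signal_dim, n_neurons); x: signal (signal_dim, T).
                  n, T = D.shape[1], x.shape[1]
                  Omega = D.T @ D + mu * np.eye(n)          # recurrent (prediction) weights
                  thresh = (np.sum(D ** 2, axis=0) + mu) / 2.0
                  V = np.zeros(n)
                  spikes = np.zeros((n, T))
                  c = np.gradient(x, axis=1) / dt + lam * x # effective feedforward input
                  for t in range(T):
                      V = V + dt * (-lam * V + D.T @ c[:, t])
                      i = int(np.argmax(V - thresh))
                      if V[i] > thresh[i]:                  # fire only if it reduces the error
                          V -= Omega[:, i]                  # recurrent balance/inhibition
                          spikes[i, t] = 1.0
                  return spikes
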
          References

          [1] Denève, et al. (2016) Nature Neuroscience, 19(3):375-382.
          [2] Rao, et al. (1999) Nature Neuroscience, 2(1):79-87.
          [3] Bastos, et al. (2012) Neuron, 76(4):695-711.



          Tuesday July 23, 2024 4:50pm - 6:40pm PDT
          TBA
           
          Wednesday, July 24
           

          8:30am PDT

          Registration
          Wednesday July 24, 2024 8:30am - 8:30am PDT

          9:00am PDT

          From Computational Neuroscience to Biomimetic Embodied AI
          Full program on the Workshop Website.

          In this workshop, we will explore how animal brains solve problems, and how AI can take inspiration from biological systems that have evolved specifically to solve problems flexibly and rapidly, and to adapt over the lifetime of an individual, whilst being computation- and energy-efficient. Topics will include computational models to explore how brain circuits can solve problems and how this creates new hypotheses to explore experimentally. We will also examine how this can contribute to solving problems in autonomous robotics and provide inspiration for other AI applications.

          Schedule:
          Daniel Yasumasa Takahashi, Federal University of Rio Grande do Norte
          Stochastic dynamical systems model of vocal turn-taking and its development in marmoset monkeys
          Thomas Nowotny, University of Sussex
          Training Spiking Neural Networks for keyword recognition with Eventprop in GeNN 
          Renan Moioli, Federal University of Rio Grande do Norte
          A Neurorobotics Model of the Cerebellar-Basal Ganglia Circuitry: decision making and motor control in healthy and diseased states
          Rachael Stentiford, University of Sussex 

          Marcelo Bussotti Reyes, Universidade Federal do ABC (UFABC)
          Temporal Decoding Dynamics: Insights from Prefrontal Cortex and Striatum During Rapid Learning



          Speakers

          Thomas Nowotny

          Professor of Informatics, University of Sussex, UK
          I do research in computational neuroscience and bio-inspired AI. More details are on my home page http://users.sussex.ac.uk/~tn41/ and institutional homepage (link above).


          Wednesday July 24, 2024 9:00am - 12:30pm PDT
          Cedro V

          9:00am PDT

          Cerebellar learning and models of learning involving the cerebellum
          Speakers

          Volker Steuber

          Professor, Centre for Computer Science and Informatics Research, University of Hertfordshire


          Wednesday July 24, 2024 9:00am - 5:00pm PDT
          Cedro II

          10:20am PDT

          Coffee Break
          Wednesday July 24, 2024 10:20am - 10:50am PDT

          12:30pm PDT

          Lunch
          Wednesday July 24, 2024 12:30pm - 2:10pm PDT

          2:00pm PDT

          Workshop on Methods of Information Theory in Computational Neuroscience
          https://kgatica.github.io/CNS2024-InfoTeory-W.io/

          Speakers

          Joseph T. Lizier

          Associate Professor, Centre for Complex Systems, The University of Sydney
          My research focusses on studying the dynamics of information processing in biological and bio-inspired complex systems and networks, using tools from information theory such as transfer entropy to reveal when and where in a complex system information is being stored, transferred and…

          Abdullah Makkeh

          Postdoc, University of Goettingen
          My research is mainly driven by the aim of enhancing the capability of information theory in studying complex systems. Currently, I'm focusing on introducing novel approaches to recently established areas of information theory such as partial information decomposition (PID). My work…

          Marilyn Gatica

          Postdoctoral Research Assistant, Northeastern University London


          Wednesday July 24, 2024 2:00pm - 5:30pm PDT
          Cedro I

          3:20pm PDT

          Coffee Break
          Wednesday July 24, 2024 3:20pm - 3:50pm PDT
           