33rd Annual Computational Neuroscience Meeting: CNS*2024

K1 Intelligence as flexibility: What life gains with a pallium

Suzana Herculano-Houzel *1

1Vanderbilt University, Department of Psychology and Department of Biological Sciences, Nashville, United States of America

*Email: suzana.herculano@vanderbilt.edu

Neurons are the computational units of brains, and in principle, the more neurons in a circuit, the more computationally capable that circuit will be. Indeed, that is the most obvious feature that sets the human brain apart from all others: it is the one with by far the most cortical, or pallial, neurons. Through its associative connectivity, the pallium endows the brain with flexible behavior, which, the speaker will argue, is the best current working definition of intelligence. Indeed, despite their entirely different brain sizes, corvids and great apes have comparable cognition and numbers of associative neurons in the pallium.

However, while it seems intuitive from the Darwinian adaptationist point of view that life should evolve towards ever more capable brains, with ever more neurons, the vast majority of vertebrate animals have very few pallial neurons, even fewer of which are purely associative – and yet, they do just fine, which is a strong argument against any need or requirement for more brain neurons in any species. Instead, this lecture will argue that brain evolution has been a matter of balance between opportunity and constraints, rather than of finding computational solutions to problems. Circuits, like brains, do not need to be optimized; they just have to work – and if opportunities and constraints allow for a pallium, then long, complex and flexible, intelligent life ensues, in proportion to the number of neurons afforded in the pallium.

1Imperial College London, Department of Bioengineering, London, United Kingdom

*Email: c.clopath@imperial.ac.uk

Episodic memories are encoded by experience-activated neuronal ensembles that remain necessary and sufficient for recall. However, the temporal evolution of memory engrams after initial encoding is unclear. In this study, we employed computational and experimental approaches to examine how the neural composition and selectivity of engrams change with memory consolidation. Our spiking neural network model yielded testable predictions: memories transition from unselective to selective as neurons drop out of and drop into engrams; inhibitory activity during recall is essential for memory selectivity; and inhibitory synaptic plasticity during memory consolidation is critical for engrams to become selective. Using activity-dependent labeling, longitudinal calcium imaging, and a combination of optogenetic and chemogenetic manipulations in mouse dentate gyrus, we conducted contextual fear conditioning experiments that supported our model’s predictions. Our results reveal that memory engrams are dynamic and that changes in engram composition mediated by inhibitory plasticity are crucial for the emergence of memory selectivity.
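The homeostatic role that inhibitory plasticity plays here can be conveyed with a minimal rate-based sketch in the spirit of standard inhibitory plasticity rules (e.g., Vogels et al., 2011). This is not the rule from the study's spiking model; the target rate `rho0` and all parameter values are illustrative assumptions.

```python
# Rate-based sketch of a homeostatic inhibitory plasticity rule: the
# inhibitory weight grows when the excitatory target fires above a target
# rate rho0 and shrinks when it fires below, driving the target toward rho0.
# Illustrative parameters only -- not those of the study's model.

def simulate(T=2000, dt=1e-3, eta=0.5, rho0=5.0, r_inh=10.0, drive=30.0):
    w = 0.0                                       # inhibitory weight
    rates = []
    for _ in range(T):
        r_post = max(drive - w * r_inh, 0.0)      # excitatory firing rate (Hz)
        w += eta * dt * r_inh * (r_post - rho0)   # homeostatic update
        w = max(w, 0.0)                           # weights stay non-negative
        rates.append(r_post)
    return w, rates

w, rates = simulate()
print(rates[0], round(rates[-1], 1))   # initial rate vs adapted rate near rho0
```

The fixed point of the update is `r_post = rho0`, so the excitatory rate relaxes from its unregulated value to the target, illustrating how inhibitory plasticity can sculpt which neurons remain responsive during consolidation.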

K3 Adaptive design in sensory and memory systems

1University of Ottawa, Department of Physics, Ottawa, Canada

*Email: alongtin@uottawa.ca

Neural fatigue, and recovery from this fatigue, is a negative feedback mechanism that endows an organism with a large array of information coding capabilities. This talk will present novel adaptive computations in the central and peripheral nervous system beyond the well-documented ability to signal input changes. I will first set the stage by contrasting current and threshold adaptation. Then I will show how adaptation causes spike train patterning that enables the detection of stimuli too weak to change the mean rate. It also leads to optimal transmission to downstream neurons with a similar synaptic time scale. Adaptation reduces noise in certain frequency bands and thus enhances information transmission for time-varying signals. The reason is the regularization of spike trains by adaptation via noise shaping. In its power-law variety, adaptation leads to a speed-invariant estimation of stimulus distance. I will then discuss the impact of adaptation on sensory system design, wherein supra-threshold encoding of small stimuli is followed by subthreshold encoding (stochastic resonance). I will also explain a machine-learning-and-dynamics-based inference method to infer adaptation and connectivity parameters from microscopic neural data. Adaptation also appears in more central circuits such as the hippocampus, where it has very long time scales. Such adaptation can be used to represent time intervals between events. Finally, in a recurrent network inspired by dentate gyrus circuitry, our theory reveals a state of quasi-criticality that enables a novel dual encoding strategy that is possibly relevant for pattern separation: rate coding for moderate-to-large stimuli and pattern coding for weak stimuli.
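The fatigue-and-recovery feedback at the core of the talk can be illustrated with a minimal leaky integrate-and-fire neuron carrying a spike-triggered adaptation current; all parameters are illustrative and not taken from the talk.

```python
import numpy as np

# LIF neuron with spike-triggered adaptation: a constant step input
# evokes a high initial rate that relaxes to a lower adapted rate.
# Illustrative parameters (dimensionless voltage, threshold = 1).

def lif_adapt(I=2.0, T=2.0, dt=1e-4, tau=0.02, tau_a=0.5, b=0.2):
    """Returns spike times (s) for a constant input I over T seconds."""
    v, a, spikes = 0.0, 0.0, []
    for step in range(int(T / dt)):
        v += dt / tau * (-v + I - a)   # membrane integrates input minus fatigue
        a += dt / tau_a * (-a)         # adaptation decays (recovery)
        if v >= 1.0:                   # threshold crossing
            spikes.append(step * dt)
            v = 0.0                    # reset
            a += b                     # each spike increments fatigue
    return np.array(spikes)

spikes = lif_adapt()
early = np.sum(spikes < 0.2) / 0.2    # firing rate in the first 200 ms (Hz)
late = np.sum(spikes >= 1.8) / 0.2    # firing rate in the last 200 ms (Hz)
print(early > late)                   # the step response adapts downward
```

The transient-versus-steady-state difference is the simplest signature of adaptation; the richer effects discussed in the talk (patterning, noise shaping, power-law kernels) build on the same negative feedback.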

K4 Brain criticality and cortical states

1Universidade Federal de Pernambuco, Departamento de Física, Recife, Brazil

*Email: mauro.copelli@ufpe.br

The presumed proximity to a critical point is believed to endow the brain with scale-invariant statistics, which are thought to confer various functional advantages in terms of information processing, storage, and transmission capabilities. In this talk, I will present a personal review of this idea, known as the critical brain hypothesis, which is now two decades old. The role of very simple and not-so-simple computational models in addressing pressing questions in the field will be discussed. Specifically, we will investigate to what extent these ideas can contribute to the understanding of cortical states by comparing model results with the analysis of spiking data from the urethane-anesthetized rat’s primary visual cortex.
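A toy example often used to convey the critical brain hypothesis (and not one of the models discussed in the talk) is a branching process: each active unit triggers a Poisson-distributed number of descendants with mean sigma, and avalanche sizes become heavy-tailed only at the critical point sigma = 1.

```python
import numpy as np

rng = np.random.default_rng(0)

def avalanche_size(sigma, cap=100_000):
    """Total activations triggered by one seed unit (capped for safety)."""
    active, size = 1, 1
    while active > 0 and size < cap:
        # offspring of all currently active units: Poisson(sigma * active)
        active = rng.poisson(sigma * active)
        size += active
    return size

critical = [avalanche_size(1.0) for _ in range(2000)]
subcritical = [avalanche_size(0.5) for _ in range(2000)]
# Heavy tail at criticality: large avalanches are far more common
print(np.mean(np.array(critical) > 100), np.mean(np.array(subcritical) > 100))
```

At sigma = 1 the avalanche size distribution follows the mean-field power law with exponent 3/2, the scale-invariant statistic most often sought in spiking data.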

FO1 From Population to Place Coding: Mechanistic Insights into the transformation of ITD representation along the auditory pathway

Lavínia Mitiko Takarabe *1, Bóris Marin1, Rodrigo Pavão1

1Federal University of ABC, Center for Mathematics, Computing and Cognition, São Bernardo do Campo, Brazil

*Email: lavinia.mitiko@aluno.ufabc.edu.br

The Interaural Time Difference (ITD) is one of the main acoustic cues employed by animals to locate sound sources in the environment. Over the years, multiple studies have demonstrated that numerous auditory neurons are sensitive to ITDs. However, experimental results reveal that different species employ distinct solutions to represent ITDs in subcortical areas and in the cortex. For instance, in the midbrain of owls, the strategy is place coding, characterized by narrow tuning curves with peaks that uniformly span ITD space. Conversely, recent work on the midbrain of gerbils shows a population coding strategy, in which broad tuning curves have peaks that over-represent ITDs corresponding to locations in the contralateral hemifield. This project is based on evidence that ITD representations undergo significant transformations between the midbrain and the auditory cortex. In owls, there is a shift from a place code in the midbrain to a population code in the forebrain, while in gerbils, this transformation occurs in the opposite direction [1].

Acknowledging that different sensory subsystems may employ similar computational strategies, we drew inspiration from the literature on orientation selectivity in the visual system to elucidate the mechanisms underlying this transformation of ITD representation. In [2], Hansel and van Vreeswijk showed that randomly connected balanced networks can considerably narrow neuronal spike responses, giving rise to strong selectivity from weakly modulated feedforward and recurrent inputs. Subsequent works [3] also demonstrated that networks in an inhibition-dominated regime could enhance orientation selectivity and observed a shift in the neurons' preferred orientations due to stronger recurrence in the network. In light of these results, we hypothesize that sharper and more uniformly distributed tuning curves may emerge due to a combination of balanced recurrent connectivity and feedforward feature mixing.

To test this hypothesis, we implemented a randomly connected spiking neural network with strong recurrence to explain the transformation of ITD representation between the inferior colliculus and the primary auditory cortex of gerbils. We show that, depending on the recurrence strength, inhibition-dominated networks sharpen the input tuning curves and uniformize the distribution of peaks, which serves as a possible mechanism for transforming a population code into a place code. However, as the recurrence strength increases, the network loses stability and becomes dominated by noise. We are currently investigating how different learning rules can balance the contributions of feedforward and recurrent input — as well as nonspecific noise — to match the tuning curve distributions observed along the auditory pathway, and how the function of each of these structures could be linked with the corresponding coding strategy.

1. Belliveau LA, Lyamzin DR, Lesica NA. The neural representation of interaural time differences in gerbils is transformed from midbrain to cortex. Journal of Neuroscience. 2014 Dec 10;34(50):16796-808.

2. Hansel D, van Vreeswijk C. The mechanism of orientation selectivity in primary visual cortex without a functional map. Journal of Neuroscience. 2012 Mar 21;32(12):4049-64.

3. Sadeh S, Cardanobile S, Rotter S. Mean-field analysis of orientation selectivity in inhibition-dominated networks of spiking neurons. SpringerPlus. 2014 Dec;3:1–35.

figure a

Figure 1. (A) Illustration of our hypothesis. (B) The distribution of selectivity in three different regimes of recurrence shows that sharpening may emerge in inhibition-dominated random balanced networks. The selectivity index is given by SI = 1 − circular variance. (C) The density of the distribution of peaks converges to a uniform distribution, as measured by the Kullback–Leibler divergence.
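The selectivity index defined in the caption (SI = 1 − circular variance) can be computed directly from any tuning curve. The sketch below uses synthetic von-Mises-like curves rather than the model's actual responses; curve shapes and widths are illustrative.

```python
import numpy as np

# SI = 1 - circular variance = magnitude of the mean resultant vector of
# the tuning curve treated as a distribution over a circularized axis.

def selectivity_index(angles, rates):
    """angles in radians spanning one full cycle; rates >= 0."""
    z = np.sum(rates * np.exp(1j * angles)) / np.sum(rates)
    return np.abs(z)              # equals 1 - circular variance

theta = np.linspace(0, 2 * np.pi, 100, endpoint=False)
narrow = np.exp(np.cos(theta) / 0.1)   # sharply tuned curve
broad = np.exp(np.cos(theta) / 2.0)    # broadly tuned curve
print(selectivity_index(theta, narrow) > selectivity_index(theta, broad))
```

Sharper tuning concentrates mass around the preferred angle, increasing the resultant length and hence the SI, which is why the index separates the three recurrence regimes in panel B.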

FO2 Learning egocentric spatial cells in the postrhinal cortex

Yanbo Lian *1, Patrick LaChance2, Samantha Malmberg2, Michael Hasselmo2, Anthony Burkitt1

1University of Melbourne, Department of Biomedical Engineering, Melbourne, Australia

2Boston University, Department of Psychological and Brain Sciences, Boston, United States of America

*Email: yanbo.lian@unimelb.edu.au

Animals perform very complex spatial navigation tasks, but how their brain’s navigational system does this is still unclear. In recent decades, experimental studies have identified numerous spatial cell types, including place cells, grid cells, head direction cells, boundary cells and speed cells [1]. Many of these cells code an allocentric spatial map, i.e., defined with respect to the environment. Egocentric spatial cells that code for space with respect to the observer, such as egocentric spatial cells in the postrhinal cortex (PoR) [2] and egocentric boundary cells in the retrosplenial cortex (RSC) [3], have recently been discovered. Animals use their sensory system, which is egocentric in nature, to explore their environment. Consequently, understanding how an egocentric spatial representation of the space arises from sensory input during learning is vital to understanding the brain’s navigational system. Our previous work shows that egocentric RSC cells can be learnt from visual input of the primary visual cortex [4]. In this study, a computational learning model is trained using sparse coding and learns various types of PoR egocentric spatial cells similar to those experimentally observed in rat brains (Fig. 1).

Spatial cells found in rat PoR can be characterized by three navigational variables: center bearing, center distance, and head direction [2]. Head direction is an allocentric measurement, center bearing is an egocentric measurement that is the angle between the current head direction and the center of the environment, and center distance is also an egocentric measurement that represents the distance between the animal and the center of the environment. Experimental data shows that PoR cells exhibit diverse spatial properties: some are selective to a preferred head direction, some have a preferred center bearing, some show tuning to center distance, and some have conjunctive encoding of more than one variable.

Based on the experimental evidence that PoR receives visual information via the superior colliculus (SC) [5] and the visual processing properties of SC [6], an SC-PoR model based on sparse coding was built. As a virtual rat runs in a simulated environment, its visual input is captured and used as the input to train the SC-PoR model. After learning, our model shows types of egocentric spatial cells that are similar to those found experimentally in PoR. This work explains how PoR spatial cells can arise from a learning process with visual input processed by SC, supporting the postulate that sparse coding is one of the underlying principles of the brain’s navigational system.
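As a generic illustration of the sparse coding principle invoked here (not the authors' SC-PoR implementation), the loop below alternates sparse inference (ISTA) with a Hebbian-like dictionary update on random stand-in data; dimensions, learning rate, and sparsity penalty are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(64, 500))       # stand-in for SC-processed visual input
D = rng.normal(size=(64, 32))
D /= np.linalg.norm(D, axis=0)       # dictionary columns = basis functions

def ista(D, x, lam=0.1, n_iter=50):
    """Infer a sparse code a minimizing ||x - Da||^2 / 2 + lam * ||a||_1."""
    a = np.zeros(D.shape[1])
    L = np.linalg.norm(D, 2) ** 2    # Lipschitz constant of the smooth part
    for _ in range(n_iter):
        a = a - D.T @ (D @ a - x) / L                        # gradient step
        a = np.sign(a) * np.maximum(np.abs(a) - lam / L, 0)  # soft threshold
    return a

for _ in range(20):                  # alternate inference and learning
    i = rng.integers(X.shape[1])
    a = ista(D, X[:, i])
    D += 0.01 * np.outer(X[:, i] - D @ a, a)   # Hebbian-like dictionary update
    D /= np.linalg.norm(D, axis=0)             # keep basis functions unit norm

a = ista(D, X[:, 0])
print(np.linalg.norm(X[:, 0] - D @ a) < np.linalg.norm(X[:, 0]))
```

The soft-thresholding step sets small coefficients exactly to zero, so each input is explained by a few active basis functions — the property that, trained on realistic visual input, yields the diverse PoR-like receptive fields.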

1. Moser EI, Moser MB, McNaughton BL. Spatial representation in the hippocampal formation: a history. Nature neuroscience. 2017 Nov 1;20(11):1448-64.

2. LaChance PA, Todd TP, Taube JS. A sense of space in postrhinal cortex. Science. 2019 Jul 12;365(6449):eaax4192.

3. Alexander AS, Carstensen LC, Hinman JR, Raudies F, Chapman GW, Hasselmo ME. Egocentric boundary vector tuning of the retrosplenial cortex. Science advances. 2020 Feb 21;6(8):eaaz2322.

4. Lian Y, Williams S, Alexander AS, Hasselmo ME, Burkitt AN. Learning the vector coding of egocentric boundary cells from visual data. Journal of Neuroscience. 2023 Jul 12;43(28):5180-90.

5. Brenner JM, Beltramo R, Gerfen CR, Ruediger S, Scanziani M. A genetically defined tecto-thalamic pathway drives a system of superior-colliculus-dependent visual cortices. Neuron. 2023 Jul 19;111(14):2247-57.

6. Li YT, Turan Z, Meister M. Functional architecture of motion direction in the mouse superior colliculus. Current Biology. 2020 Sep 7;30(17):3304-15.

figure b

Figure 1. SC-PoR model can learn diverse types of spatial cells similar to experimental data of PoR. For each of the three variables (head direction, center bearing, center distance) used to characterize PoR cells, two example cells of both model and experimental data are given. For either model or experimental data, the left plot shows a square arena with dot representing a spike of this cell and its colo

FO3 Neural Heterogeneity Controls the Computational Properties of Spiking Neural Networks

Richard Gast *1, Sara A. Solla1, Ann Kennedy1

1Northwestern University, Department of Neuroscience, Chicago, United States of America

*Email: richard.gast@northwestern.edu

Neurons have been shown to express substantial heterogeneity between as well as within genetically defined cell types [1]. How does this heterogeneity affect the dynamics and function of a neural network? Here, we address this question by studying networks of coupled, heterogeneous spiking neurons.

Applying the Ott-Antonsen ansatz [2], we derive sets of mean-field equations for networks of Izhikevich neurons with heterogeneous spike thresholds and provide detailed comparisons between the dynamics of the mean-field equation and the corresponding spiking neural network (see [3], for a detailed derivation and analysis of the mean-field equations).

We then leverage the mean-field equations to study how heterogeneity affects the macroscopic dynamic regimes of neural networks via bifurcation theory. In a single population of excitatory neurons, we find that the population dynamics become more linear as spike threshold heterogeneity increases. This holds for synchronous, asynchronous, monostable, and bistable regimes of the population dynamics (see Fig. 1). To study the implications of this macroscopic linearization on the network function, we examined the encoding and function generation capacities of the corresponding spiking neural networks in various dynamic regimes that we identified via the mean-field equations. We find that neural heterogeneity controls both the encoding and function generation capacities of populations of recurrently coupled excitatory neurons via its effect on the equilibrium solutions of the system. Whereas the encoding capacity is reduced by increasing heterogeneity, the function generation capacity is increased [4].
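How heterogeneity enters such mean-field equations can be made concrete with the closely related exact mean-field for quadratic integrate-and-fire networks (Montbrió, Pazó & Roxin, 2015), obtained from the same Lorentzian-ansatz reduction that [3] extends to Izhikevich neurons. There the half-width Δ of the excitability distribution appears explicitly in the rate equation; parameters below are illustrative and not those of the Izhikevich model.

```python
import numpy as np

def qif_mean_field(delta, eta=1.0, J=15.0, tau=1.0, T=100.0, dt=1e-3):
    """Euler integration of tau*r' = delta/(pi*tau) + 2*r*v and
    tau*v' = v**2 + eta + J*tau*r - (pi*tau*r)**2 (Montbrio et al. 2015)."""
    r, v = 0.1, -1.0
    rs = []
    for _ in range(int(T / dt)):
        dr = (delta / (np.pi * tau) + 2 * r * v) / tau
        dv = (v * v + eta + J * tau * r - (np.pi * tau * r) ** 2) / tau
        r, v = r + dt * dr, v + dt * dv
        rs.append(r)
    return np.array(rs)

low = qif_mean_field(delta=0.3)     # weak heterogeneity
high = qif_mean_field(delta=3.0)    # strong heterogeneity
print(round(low[-1], 2), round(high[-1], 2))   # steady-state rates
```

Δ acts as the heterogeneity "knob": it sets the damping of the macroscopic dynamics, which is the mechanism behind the linearization of the population response described above.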

In much of the brain, inhibitory interneurons act on neighboring excitatory neurons to modify their dynamics. We thus asked how heterogeneity in local inhibitory interneurons alters the responses of excitatory neural networks. To address this question, we analyzed how inhibitory interneurons modulate the bifurcation structure of excitatory neural networks and how this modulation depends on the degree of heterogeneity of the inhibitory population. When the heterogeneity of the inhibitory population is low, we find that the bistable asynchronous regime observed in the one-population model of purely excitatory neurons ceases to exist, and synchronous oscillations become the dominant dynamic regime. However, when the heterogeneity of the inhibitory population is increased, the behavior of the two-population model begins to revert to that of the one-population model (see Fig. 1). We conclude that heterogeneous inhibitory interneurons preserve the bifurcation structure of the excitatory population, whereas homogeneous inhibitory interneurons overwrite this bifurcation structure and move the system towards highly synchronized dynamics. This has important implications for attractor-based theories of brain function, in which the sole role of inhibitory interneurons is to gate excitatory neural dynamics [5], as it suggests that the level of heterogeneity of inhibitory neuron populations can modulate their gating functions.

Together, our results suggest that neural heterogeneity should be considered as a crucial "knob" that can be adjusted in neural circuits to tune their computational function.

1. Peng H, Xie P, Liu L, Kuang X, Wang Y, Qu L, Gong H, Jiang S, Li A, Ruan Z, Ding L. Morphological diversity of single neurons in molecularly defined cell types. Nature. 2021 Oct 7;598(7879):174-81.

2. Ott E, Antonsen TM. Low dimensional behavior of large systems of globally coupled oscillators. Chaos: An Interdisciplinary Journal of Nonlinear Science. 2008 Sep 1;18(3).

3. Gast R, Solla SA, Kennedy A. Macroscopic dynamics of neural networks with heterogeneous spiking thresholds. Physical Review E. 2023 Feb;107(2):024306.

4. Gast R, Solla SA, Kennedy A. Neural heterogeneity controls computations in spiking neural networks. Proceedings of the National Academy of Sciences. 2024 Jan 16;121(3):e2311885121.

5. Khona M, Fiete IR. Attractor and integrator networks in the brain. Nature Reviews Neuroscience. 2022 Dec;23(12):744-66.

figure c

Figure 1. Homogeneous inhibitory interneurons overwrite the bifurcation structure of excitatory neurons. 2D bifurcation diagrams: Regions colored in grey and green depict bistable and oscillatory regimes, respectively. The black and red stars indicate the value of the input current used during low input (white, t < 750 ms and t > 2000 ms) and high input (gray-blue, 750 ms < t < 2000 ms) regimes, respectively.

FO4 Backwards and forwards, hot or cold: robust and flexible rhythms in a neural network model

Lindsay Stolting *1, Joshua Nunley1, Eduardo Izquierdo2

1Indiana University, Cognitive Science Department, Bloomington, IN, United States of America

2Rose Hulman Institute of Technology, Computer Science & Software Engineering, Terre Haute, IN, United States of America

Neural circuits undergo constant perturbation to their ionic conductance values. Some perturbations, like temperature for the cold-blooded crab, are detrimental to circuit function. Others, like neuromodulation, play a functional role in the shaping and switching of circuit rhythms. Both kinds of perturbation have been studied separately in experimental and modeling contexts. However, real circuits deal with both simultaneously, and must manage their interaction to produce robust and flexible behavior. They must, for instance, be able to enact reliable neuromodulatory control across temperatures, despite temperature-dependent base circuit parameters. Several mechanisms have been proposed to explain this capacity, including that the neuromodulatory signals themselves must vary with temperature. A comprehensible and mathematically tractable model is needed to clarify such hypotheses and generate experimental predictions. Therefore, we propose a novel modeling framework which is sufficiently low-dimensional to afford deep understanding while including the richness of a brain-body-environment context. Using stochastic optimization, we generated an ensemble of neural circuits which, when placed in a simulated body, can employ neuromodulation to walk either forward or backward across different temperatures (Fig. 1). To our surprise, we found that many successful circuits do not need to tune their neuromodulatory signal based on temperature. Instead, they structure their responses to leverage convenient structures that exist within parameter space itself, such that one neuromodulatory signal is effective across all temperatures (Fig. 1, right). Perhaps due to its sheer simplicity and its dependence on parameter space visualization, this possibility had not been considered in prior literature. This finding demonstrates the broad potential of this framework for investigating the interplay of robustness and flexibility in neural systems.

figure d

Figure 1. Two evolved walkers use neuromodulation (NM) to switch walking directions across temperatures (T). One (top) exhibits the problem of state dependence. Applying the same NM is an effective switch at some T's, but not others. Many of our agents do not encounter this problem (bottom). They exploit conveniently aligned behavior manifolds such that the same NM switch is effective across T's.

O1 Touch stimulation to enhance separation of sound sources

Farzaneh Darki *1, Piotr Slowinski2, Marc Goodfellow3, James Rankin2

1University of Exeter, Mathematics and Statistics, Exeter, United Kingdom

2University of Exeter, Faculty of Environment, Science and Economy, Exeter, United Kingdom

3University of Exeter, Living Systems Institute, Exeter, United Kingdom

*Email: f.darki2@exeter.ac.uk

Multisensory integration has received significant interest in the neuroscience community in recent years. However, the number of mechanistic mathematical models of multisensory integration is limited. Here we propose a computational model of audio-tactile integration that could be generalised to other cross-sensory interactions. Recent studies have suggested that tactile inputs may modulate auditory processing, raising questions about the potential cross-modal interactions between touch and sound. It has been demonstrated that responses generated by tactile stimuli in auditory cortex implicate non-primary auditory areas for integrating auditory and tactile information [1,2] and that tactile-driven inputs to secondary auditory areas modulate their local activity [1]. However, the modulatory mechanism remains unknown. We use mathematical modelling to understand how tactile (touch) sensation affects perception of multiple sound sources and how these audio-tactile interactions result from specific neural computations.

Specifically, the proposed model includes interactions between primary (A1) and non-primary auditory cortex (Competition stage) and is an extension of a mathematical model of auditory streaming [3]. Crucially, we experimentally validated the model. To this end, we used the auditory streaming paradigm in which participants listen to a sequence of interleaved high and low frequency tones repeated in ABA- triplets (‘A’ and ‘B’ tones, ‘-’ silent gap) [4]. Participants can perceive these sequences either as a single integrated stream (ABA-ABA-) or as being segregated into two streams (concurrent: A-A-A-A- and -B---B--). We used tactile stimulation synchronised with a subset of the tones in an ABA- triplet (Fig. 1A). During an experimental trial, participants listen to five triplets and judge the final triplet as integrated or segregated. When tactile stimulation timing matches only the B tone sequence, the proportion of segregation increases. When the timing matches the A and B tones, a bias towards integration emerges. This effect was observed over a range of frequency difference values (Fig. 1B).

The proposed model supports the hypothesis that tactile inputs boost recurrent excitation in non-primary auditory areas (Fig. 1C); modelling results qualitatively account for the experimental data (Fig. 1D). We are using the proposed framework to investigate the robustness of audio-tactile interactions to variation in the latency of the tactile stimuli, with the aim of understanding the timescales of excitatory and inhibitory interactions between the two modalities. Further hypotheses about modulatory audio-tactile circuits can be explored, leading to testable predictions tied to the range of experimental data. Our approach improves fundamental knowledge about audio-tactile integration and other cross-sensory interactions.

1. Schürmann M, Caetano G, Hlushchuk Y, Jousmäki V, Hari R. Touch activates human auditory cortex. Neuroimage. 2006 May 1;30(4):1325-31.

2. Fu KM, Johnston TA, Shah AS, Arnold L, Smiley J, Hackett TA, Garraghty PE, Schroeder CE. Auditory cortical neurons respond to somatosensory stimulation. Journal of Neuroscience. 2003 Aug 20;23(20):7510-5.

3. Rankin J, Sussman E, Rinzel J. Neuromechanistic model of auditory bistability. PLoS computational biology. 2015 Nov 12;11(11):e1004555.

4. Bregman AS. Auditory streaming: Competition among alternative organizations. Perception & Psychophysics. 1978 Sep;23:391-8.

figure e

Figure 1. A: Tact AB, tactile stimulations during both A- and B-tone intervals; Tact B, stimulations during B-tone. B: Proportion segregated increases (decreases) in Tact B (Tact AB) relative to Tact Off. C: Computational model with competition between units driven by inputs from tonotopic A1. D: Modeling assumes tactile stimuli modulate recurrent excitation in B unit for Tact B and all units for Tact AB.

O2 Coherent Motion Detection Facilitated by Surround Suppression

Elnaz Nemati1, Anthony Burkitt2, David Grayden2, Parvin Zarei Eskikand*3

1The University of Melbourne, Biomedical Engineering, Melbourne, Australia

2University of Melbourne, Department of Biomedical Engineering, Melbourne, Australia

3University of Melbourne, Flemington, Australia

*Email: pzarei@unimelb.edu.au

Motion detection is a cognitive process that relies upon past motion data. In our daily experience, we continually interpret motion cues to anticipate object movements, such as predicting the trajectories of vehicles while navigating streets. In scenes involving multiple independently moving objects, our predictions are shaped by the motions of neighbouring objects in the visual field. For instance, when observing a flock of birds in flight, we perceive both the collective motion of the flock and the individual movements of each bird. Previous studies have revealed deficits in coherent motion perception among schizophrenia patients, particularly evident in tasks like random dot experiments. Our objective in this study is to pinpoint potential deficits in motion detection tasks within a predictive coding framework for coherent motion detection.

This study investigates how the brain extracts coherent motion from multiple moving objects within two-dimensional space. Our model is based on a classical predictive coding model [1], which additionally integrates a surround suppression mechanism. This mechanism dampens neuronal activity in response to motion within surrounding receptive fields. The suppression efficacy is further influenced by past neuronal activity within a temporal window. We assessed the model's performance using a random dot experiment in which dots moved coherently, but with added noise to individual dot motions. For instance, a noise level of 25% means that 25% of the dots move randomly, while the remaining dots move in a coherent direction. We examined the model's convergence time with increasing noise levels. Additionally, we investigated the influence of surround suppression on the detection of coherent motion. Specifically, we evaluated the effect of modifying the threshold in the surround suppression mechanism. A higher threshold value required the motions in the centre and surround to differ more before the surround suppression was activated.
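The random-dot stimulus described above can be sketched as follows: at a given noise level, that fraction of dots move in random directions while the rest share a coherent direction. Dot count and the coherent direction are illustrative choices, not the study's stimulus parameters.

```python
import numpy as np

# Assign a motion direction (radians) to each dot: a fraction `noise`
# of dots move randomly, the rest move in the coherent direction.

def random_dot_directions(n_dots, noise, coherent_dir=0.0, rng=None):
    rng = rng or np.random.default_rng()
    dirs = np.full(n_dots, coherent_dir)
    noisy = rng.random(n_dots) < noise              # which dots are noise dots
    dirs[noisy] = rng.uniform(0, 2 * np.pi, noisy.sum())
    return dirs

rng = np.random.default_rng(0)
dirs = random_dot_directions(1000, noise=0.25, rng=rng)
print(np.mean(dirs == 0.0))    # roughly 75% of dots move coherently
```

Sweeping `noise` from 0 to 0.6 reproduces the stimulus conditions under which the model's convergence time was measured.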

Results demonstrated an increasing convergence time from 100 ms to 450 ms as noise levels increased from 0% to 60%. We observed that increasing the threshold for surround suppression improved the performance of the model in detecting coherent motion. Decreasing the threshold for surround suppression suppressed the activity of neurons, consequently leading to a longer convergence time for the model.

In this study, results were presented of a biologically plausible model based on predictive coding that was capable of successfully detecting the coherent motion of stimuli. Our model emphasized the role of surround suppression in overall performance. Better understanding of motion detection may shed light on abnormalities in motion detection tasks in schizophrenia, which might result from malfunction of surround suppression in this disorder.

Acknowledgment This research was funded by the Australian Government under the Australian Research Council’s Training Centre in Cognitive Computing for Medical Technologies (IC170100030) and an Early Career Research Grant to PZ from The University of Melbourne.

1. Rao RP, Ballard DH. Predictive coding in the visual cortex: a functional interpretation of some extra-classical receptive-field effects. Nature neuroscience. 1999 Jan;2(1):79–87.

O3 Extracting regularities embedded within stochastic sequences of sensorimotor events

Claudia D Vargas *1, Antonio Galves2, Jesus E Garcia3, Noslen Hernández4, Paulo Roberto Cabral-Passos5

1Federal University of Rio de Janeiro, Institute of Biophysics Carlos Chagas Filho, Rio de Janeiro, Brazil

2Universidade de São Paulo, Instituto de Matemática e Estatística, São Paulo, Brazil

3Universidade Estadual de Campinas, Instituto de Matemática, Estatística e Computação Científica, Campinas, Brazil

4Université de Toulouse, INTHERES, Toulouse, France

5Universidade de São Paulo, Departamento de Física da Faculdade de Filosofia, Ciências e Letras de Ribeirão Preto, Ribeirão Preto, Brazil

*Email: cdvargas@biof.ufrj.br

This work is part of a multidisciplinary team effort in which electrophysiological and behavioral methods are combined to investigate neural signatures associated with the ability to extract regularities embedded within structured stochastic sequences of events [1,2,3]. For this purpose, we developed an electronic game called the Goalkeeper Game (https://game.numec.prp.usp.br/). Playing the role of a goalkeeper, participants were asked to predict, step by step, the successive directions (left, center or right) to where a penalty kicker would send the ball. An animation feedback then showed to which direction the ball was effectively sent. The sequence of kicks was driven by a stochastic chain with memory of variable length, introduced by Rissanen [4] as a universal model for data compression. Rissanen noted that very often when it comes to sequences of symbols, each new symbol appears to be randomly selected by taking into account the smallest sequence of past symbols (called context) required to generate the next symbol. The set of contexts can be represented by a rooted and labeled oriented tree, called context tree. The procedure to generate the sequence of symbols is defined by the context tree and an associated family of transition probabilities used to choose each next symbol, given the context associated to the sequence of past symbols at each time step. In [2] we addressed whether it was possible to extract the temporal structure used to generate the sequences of events from the goalkeepers’ responses. This would provide evidence that the goalkeeper is capable of learning the statistical regularities enclosed in those sequences. To achieve this goal, we have introduced a new statistical model selection method that allows estimating both the contexts and the associated family of distributions from the goalkeeper's responses. An open question is which of these parameters make certain context tree models more difficult to learn than others. 
To address this question, we fixed the structure of a given context tree and varied the associated family of transition probabilities. Results showed that three features make context tree models more difficult to predict: (1) the shape of the context tree summarizing the dependencies between present and past directions, (2) the entropy of the stochastic chain used to generate the sequence of events, and (3) the absence of a deterministic periodic sequence underlying the model. We also investigated, for a given context tree model, whether reaction times (RTs) varied as a function of the contexts and of their associated transition probabilities [3]. Results revealed that the distribution of RTs associated with a stochastic sequence of events depended both on the contexts and on the outcomes of previous predictions.

Acknowledgments This work is part of the FAPESP NeuroMat activities (grants #2013/07699-0 and #2022/00699-3) and was supported by FAPERJ (grants #E-26/010.002418/2019 and #E-26/202.785/2018) and CNPq (grants #310397/2021-9 and #407092/2023-4).

1. Hernández N, Duarte A, Ost G, Fraiman R, Galves A, Vargas CD. Retrieving the structure of probabilistic sequences of auditory stimuli from EEG data. Scientific Reports. 2021 Feb 10;11(1):3520.

2. Hernández N, Galves A, García JE, Gubitoso MD, Vargas CD. Probabilistic prediction and context tree identification in the Goalkeeper game. Scientific Reports. 2024 Jul 5;14(1).

3. Cabral-Passos PR, Galves A, Garcia JE, Vargas CD. What comes next? Response times are affected by mispredictions in a stochastic game. arXiv preprint arXiv:2309.09813. 2023 Sep 18.

4. Rissanen J. A universal data compression system. IEEE Transactions on Information Theory. 1983 Sep;29(5):656–64.

O4 Recurrent neural networks outperform canonical computational models at fitting auditory brain responses

Ulysse Rancon *1, Timothee Masquelier2, Benoit Cottereau2

1CerCo, CNRS, Université Toulouse III - Paul Sabatier, Toulouse, France

*Email: ulysse.rancon@protonmail.com

Computational models of auditory neural responses have originated from the ubiquitous Spectro-Temporal Receptive Field (STRF) or Linear (L) model, which states that neural activation is simply a direct weighting of the stimulus frequency-time bins over a past of arbitrarily defined length. However, despite successive additions of nonlinearities to account for specific properties observed in biological brains, such as adaptation, contrast gain control or ON-OFF response generation, the core philosophy when modelling auditory neural activity remains that of a stateless temporal convolution performed on the cochleagram.
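
The L model described above amounts to a stateless weighting of the cochleagram's time-frequency bins over a fixed past window. A toy implementation (not the authors' code; the window length and shapes are illustrative):

```python
import numpy as np

def strf_response(cochleagram, strf):
    """Linear (L) model: the predicted rate at time t is a direct weighting
    of the stimulus frequency-time bins over a past window of fixed length.

    cochleagram: (n_freq, n_time) array of the stimulus representation
    strf:        (n_freq, n_lags) array of weights (the receptive field)
    """
    n_freq, n_time = cochleagram.shape
    _, n_lags = strf.shape
    rates = np.zeros(n_time)
    for t in range(n_time):
        # stimulus history of up to n_lags bins ending at t
        start = max(0, t - n_lags + 1)
        window = cochleagram[:, start:t + 1]
        kernel = strf[:, -window.shape[1]:]  # align most recent lags
        rates[t] = np.sum(window * kernel)   # stateless: no carried state
    return rates
```

Note that the response at each time step is recomputed from scratch from the stimulus window; nothing is carried over between steps, which is exactly the statelessness criticized below.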

Although simple and convenient, this representation appears distant from the established reality of biological neurons as stateful units with an internal “memory” and short- to long-range dependencies on previous inputs and states. As an example, the membrane potential at a given instant depends not only on current and past inputs, but also on its own preceding value and, more generally, on its history. The corresponding class of autoregressive computational models, known as recurrent neural networks (RNNs) in the modern deep learning literature, is surprisingly absent from auditory computational neuroscience, even though sound is a purely temporal stimulus. Therefore, we propose a novel recurrent architecture backbone, which we call StateNet, capable of processing auditory signals and accurately fitting real brain responses by leveraging statefulness.
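
The stateful alternative can be illustrated by a leaky recurrent unit whose hidden state depends on its own previous value, loosely analogous to a membrane potential with a leak. This is a generic sketch, not StateNet's actual architecture; `alpha`, `w_in` and `w_rec` are assumed names:

```python
import numpy as np

def leaky_rnn(inputs, w_in, w_rec, alpha=0.1):
    """Minimal autoregressive (stateful) unit.

    inputs: (n_time, n_in) stimulus frames
    w_in:   (n_in, n_hidden) input weights
    w_rec:  (n_hidden, n_hidden) recurrent weights
    alpha:  integration rate; (1 - alpha) is the fraction of the previous
            state retained at each step (the unit's "memory")
    """
    n_time = inputs.shape[0]
    n_hidden = w_in.shape[1]
    h = np.zeros(n_hidden)
    states = np.zeros((n_time, n_hidden))
    for t in range(n_time):
        # the new state depends on the current input AND the previous state
        h = (1 - alpha) * h + alpha * np.tanh(inputs[t] @ w_in + h @ w_rec)
        states[t] = h
    return states
```

Unlike the stateless convolution, a brief input pulse here leaves a trace in the state that decays over many subsequent steps, giving the model an intrinsic dependence on its history.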

We train our model to reproduce single-unit electrophysiological data recorded in anesthetized animals given the spectrograms of experimental stimuli, and compare it against a broad gamut of traditional models. We find that RNNs systematically outperform stateless networks by a substantial margin; our results are robust and validated on a recent benchmark comprising 3 publicly available datasets obtained across 3 species (rat, mouse, ferret) and 3 areas (A1, AAF, PEG). Finally, we propose a reverse engineering method, inspired by the “deep dream” technique from the AI/DL community, that creates interpretable features relatable to nonlinear STRFs for stateful networks. Together, our findings help bring computational models of audition closer to biological neurons and contribute to a better understanding of their computations.

O5 Interaction of segregated resonant mechanisms along the dendritic axis in CA1 pyramidal cells: Interplay of cellular biophysics and spatial structure

Ulises Chialva1, Horacio Rotstein*2

1Universidad Nacional del Sur, Departmento de Matemática and CONICET, Bahía Blanca, Argentina

2New Jersey Institute of Technology, Federated Department of Biological Sciences, NJIT / Rutgers University, Newark, NJ, United States of America

The processing of synaptic information by neurons is shaped by several factors including the neuron’s intrinsic properties and the dendritic geometry. Under certain conditions, synaptic inputs are processed by neurons in a geometrically-distributed and frequency-dependent manner giving rise to resonance (maximal amplification of the response to oscillatory inputs at preferred frequencies, Fig. 1-A1) and phasonance (zero-phase response at preferred frequencies, Fig. 1-A2). Resonance has been proposed to play a key role in the frequency-specific information flow in neuronal networks and to contribute to the generation of brain rhythms, particularly the theta rhythm (4 - 12 Hz).
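
The resonance and phasonance notions of Fig. 1-A can be illustrated with the impedance of a linearized single-compartment membrane carrying one slow resonant current. The parameters below are illustrative only, not fitted to CA1 data:

```python
import numpy as np

# Linearized membrane with a slow resonant (h-type) current; illustrative
# parameters chosen so the resonance falls in the theta range.
C, g_L = 1.0, 0.1        # capacitance (uF/cm2) and leak conductance (mS/cm2)
g_h, tau_h = 0.3, 100.0  # resonant conductance (mS/cm2) and time constant (ms)

def impedance(f_hz):
    """Complex impedance Z(f): |Z| peaks at a nonzero frequency (resonance)
    and arg(Z) crosses zero near a preferred frequency (phasonance)."""
    w = 2 * np.pi * f_hz / 1000.0  # angular frequency in rad/ms
    return 1.0 / (1j * w * C + g_L + g_h / (1 + 1j * w * tau_h))

freqs = np.linspace(0.1, 30, 500)          # probe frequencies (Hz)
amp = np.abs(impedance(freqs))             # impedance amplitude profile
f_res = freqs[np.argmax(amp)]              # resonant frequency (Hz)
```

With these values the amplitude profile is band-pass, peaking at a few Hz well above its low-frequency value, as in the representative profile of Fig. 1-A1; the multicompartmental question addressed in the abstract is how several such mechanisms, segregated in space and voltage, interact along the cable.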

While the ionic and dynamic mechanisms generating resonance in single neurons are well understood, less attention has been paid to the dependence of resonant properties on the dendritic spatial structure of neurons, particularly in the presence of heterogeneous distributions of ionic currents, voltage heterogeneities along the cell, and spatially and voltage-segregated resonances generated by different ionic mechanisms [1].

We address these issues in the context of hippocampal CA1 pyramidal cells (PYR). We use biophysical modeling (a multicompartmental model using the Hodgkin-Huxley formalism, Figs. 1-B and -C), numerical simulations and analytical calculations. We investigate how the spatially segregated, qualitatively different subthreshold resonances (observed) and phasonances (hypothesized) in PYR interact along the dendritic cable, and the consequences of this interaction for in vivo activity, where PV+ and OLM interneurons target proximal and distal dendrites, respectively.

CA1 PYR minimal ball-and-stick models capture the coexistence of h- and M-current-based resonances observed in vitro [1]. However, two-compartment models fail to reproduce the segregation between the two mechanisms observed experimentally, due to the partial overlap of the activation/inactivation ranges of the participating resonant currents. Therefore, our model includes several dendritic compartments with a realistic distribution of active ionic currents (Fig. 1-C), while preserving the segregation between the two resonant mechanisms.

We describe conditions under which the two mechanisms interact, including strong voltage variations along the dendritic cable that differentially activate the different ionic channels distributed along the dendrite. Selective inhibition of the different currents and joint activation of the resonant mechanisms produces responses with different attributes than those produced in the classical scenarios. We find similar segregation and interaction mechanisms of phasonance. Finally, we show that the interplay of background noise and the resonant mechanisms generates sustained oscillations that are modulated by the cable’s ionic and geometric properties.

Our findings reveal a complex interplay between spatial structure and ionic mechanisms leading to a diversity of dendritic resonant responses. This interaction produces spatially-extended filtering properties, depends not only on currents and voltage but also crucially on the spatial structure, which allows for flexible filtering regimes. This has implications for individual neuron contributions to network rhythms. The interaction of resonances and phasonances, and the resulting diversity of responses suggest mechanisms by which the neuron regulates its activity during network activity.

Acknowledgments The authors acknowledge support from the National Science Foundation grant NSF IOS-2002863 (HGR)

1. Hu H, Vervaeke K, Graham LJ, Storm JF. Complementary theta resonance filtering by two spatially segregated mechanisms in CA1 hippocampal pyramidal neurons. Journal of Neuroscience. 2009 Nov 18;29(46):14472-83.


Figure 1. A. Representative impedance amplitude (A1, resonance) and phase (A2, phasonance) profiles. B. Schematic diagram of the multicompartmental model. The dendritic cable is divided into Nd compartments of equal length. The soma is modeled as a single compartment. C. Spatial distribution of the h- (Ih) and persistent sodium (INaP) currents (conductances) along the dendritic cable. The M-current IM is present only in the soma.

O6 Distributed engrams constitute flexible and versatile neural representations

Douglas Feitosa Tomé *1, Tim P. Vogels1

1Institute of Science and Technology Austria (ISTA), Klosterneuburg, Austria

*Email: douglas.feitosatome@ist.ac.at

It has been hypothesized that neuronal ensembles or engrams encoding a specific memory are distributed across multiple functionally-connected brain regions. Such a network of engram cell ensembles – a unified engram complex – has recently found comprehensive experimental support, enabled by technological breakthroughs. However, it is unknown whether there are computational principles behind the distributed nature of memory engrams. Here we propose that distributed engrams support functional flexibility and versatility. We examined the ability of distributed engrams to differentially regulate memory discrimination and generalization in a computational model. These opposing and complementary computations must be balanced for adaptive memory-guided behavior. For instance, while animals need to discriminate between threat-predictive and neutral stimuli, they also need to generalize threat-predictive cues to novel stimuli with shared features. By combining brain state-dependent and brain region-specific synaptic plasticity, our multi-region spiking neural network model could capture the emergence of synaptically-coupled and functionally-connected distributed engrams in line with previous experimental findings. Critically, our model generated testable predictions. First, our model predicted that while engrams in multiple brain regions promote memory generalization following initial encoding, a subset of regions switches to memory discrimination over the course of memory consolidation while the remaining regions continue to support memory generalization. Second, our model predicted that memory engrams in monosynaptically-connected brain regions are dynamic, allowing neurons to drop into and out of engrams in each region.
Taken together, our results suggest that distributed engrams collectively form a flexible and versatile unified engram complex that: i) supports switching from memory generalization to discrimination for behavioral memory expression as observed experimentally, and ii) enables parallel memory generalization and discrimination at the neural level in distinct brain regions. Thus, our work proposes a testable theory that uncovers functional flexibility and versatility as computational principles underlying the distributed organization of memory.
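
The predicted drop-in/drop-out dynamics can be caricatured as a turnover process on sets of neuron indices. This is a toy sketch only; `drop_p` and `add_p` are hypothetical parameters, not taken from the authors' model:

```python
import random

def consolidate(engram, pool, drop_p, add_p, rng):
    """One consolidation step: each engram neuron may drop out of the
    ensemble, and each non-engram neuron in the pool may drop in."""
    kept = {n for n in engram if rng.random() > drop_p}
    added = {n for n in pool - engram if rng.random() < add_p}
    return kept | added

def overlap(a, b):
    """Jaccard overlap between two engram ensembles (1.0 = identical)."""
    return len(a & b) / len(a | b)
```

Tracking `overlap` between the initially labeled ensemble and the ensemble after successive `consolidate` steps yields the gradual change in engram composition that longitudinal imaging would measure.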

O7 Unraveling the brain circuits underlying target pursuit in the hoverfly

Anindya Ghosh1, Sarah Nicholas2, Karin Nordström2, Thomas Nowotny*1, James Knight3

1University of Sussex, School of Engineering and Informatics, Brighton, United Kingdom

2Flinders University, Flinders Health and Medical Research Institute, Adelaide, Australia

3University of Sussex, Department of Informatics, Brighton, United Kingdom

*Email: t.nowotny@sussex.ac.uk

The small target motion detector (STMD) is an interneuron present in hoverflies that can robustly discern small targets, e.g., rival males or potential mates, moving at a range of speeds amidst visual clutter with no relative visual cues [1]. However, its descending neuron, the target selective descending neuron (TSDN), which synapses with motor neurons, exhibits suppressed responses when the target moves in the same direction as the background compared to when it moves across a stationary background, and amplified responses when the target moves in the opposite direction to the background [2]. While much modeling work has focused on the circuit driving the STMD [3], to our knowledge there have been no previous attempts to model the circuit driving the TSDN. Given that the TSDN receives input from the STMD and is sensitive to background movement, i.e., optic flow, we designed TSDN model circuits that include the STMD and lobula plate tangential cells (LPTCs) – cells known to respond to global optic flow [4]. In the candidate models, we combine the STMD and LPTC inputs in different configurations to arrive at several rate-based models of the TSDN circuit, which we then test across various stimulus conditions, largely moving targets over moving backgrounds of different types. We then compare the outputs of the various models to TSDN spike trains recorded extracellularly in the hoverfly Eristalis tenax while flies were exposed to similar stimuli in a virtual reality setup. This enables us to find the simplest candidate TSDN circuit configuration that can reasonably explain observed TSDN behaviour. Based on these results, we make testable predictions for TSDN responses to novel visual stimuli. Figs. 1A-C depict the experimental findings of our co-authors SN and KN of scenarios resulting in facilitation and suppression in the TSDN. Figs. 1D-E depict illustrations of the candidate models we believe can explain the aforementioned experimental findings.
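
As an illustration of what one such rate-based combination might look like: an STMD target signal modulated by an LPTC wide-field optic-flow signal, so that same-direction background motion suppresses the TSDN and opposite-direction motion facilitates it. This is a hypothetical sketch, not one of the authors' actual candidate circuits, and `w_inh` is an invented gain parameter:

```python
import numpy as np

def tsdn_response(stmd, lptc, w_inh=1.5):
    """Toy rate model of the TSDN.

    stmd:  target-motion signal (nonnegative rate)
    lptc:  wide-field optic-flow signal; positive = background moving in
           the same direction as the target, negative = opposite direction
    w_inh: gain of the LPTC modulation (hypothetical parameter)
    """
    # same-direction flow (lptc > 0) reduces the drive; opposite-direction
    # flow (lptc < 0) increases it, producing facilitation
    drive = stmd * (1.0 - w_inh * np.tanh(lptc))
    return np.maximum(drive, 0.0)  # firing rates cannot be negative
```

Under this scheme a target over a stationary background elicits the baseline response, a same-direction background suppresses it, and an opposite-direction background facilitates it, qualitatively matching the pattern in Figs. 1A-C.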

Acknowledgments AG is funded by the Leverhulme Trust, JK was funded by the EPSRC (grant EP/V052241/1), and TN by the EPSRC (grant EP/S030964/1) and the EU (grant no. 945539). SN and KN were funded by the US Air Force Office of Scientific Research (AFOSR, FA9550-23-1-0473), and the Australian Research Council (ARC, DP210100740 and DP230100006).

1. Nordström K, Barnett PD, O'Carroll DC. Insect detection of small targets moving in visual clutter. PLoS biology. 2006 Mar;4(3):e54.

2. Nicholas S, Nordström K. Facilitation of neural responses to targets moving against optic flow. Proceedings of the National Academy of Sciences. 2021 Sep 21;118(38):e2024966118.

3. Wiederman SD, Shoemaker PA, O'Carroll DC. A model for the detection of moving targets in visual clutter inspired by insect physiology. PloS one. 2008 Jul 30;3(7):e2784.

4. Nicholas S, Leibbrandt R, Nordström K. Visual motion sensitivity in descending neurons in the hoverfly. Journal of Comparative Physiology A. 2020 Mar;206(2):149-63.


Figure 1. (A) If a target (hoverfly – blue arrow) moves across a stationary background, the TSDN robustly detects it. (B) With background motion alone, the TSDN is silent. (C) With background motion in the same direction as the target, the TSDN response is suppressed, while opposite-direction movement is facilitated. Reproduced from [2]. (D), (E) Candidate models with different hypotheses on how suppression and facilitation arise.

O8 Effect of Focused Ultrasonic Stimulation via Intramembrane Cavitation in the Squid Giant Axon

Mithun Padmakumar *1, Divya Rajan2, John Eric Steephen2

1Digital University Kerala, Thiruvananthapuram, India

2Digital University Kerala, School of Digital Sciences, Thiruvananthapuram, India

*Email: mithun.padmakumar@duk.ac.in

Focused ultrasound stimulation (FUSS) is emerging as a promising modality for ne
