KEYNOTE SPEAKERS
Prof. Si Wu
(School of Psychological and Cognitive Sciences, Peking University, China)
Neural Information Processing with Adaptive Continuous Attractor Neural Networks
Attractor neural networks posit that neural information is stored as stationary states of a dynamical system formed by a large number of interconnected neurons. The attractor property enables a neural system to encode information robustly, but it also makes rapid state updating difficult, impairing information search in brain functions. A natural way to overcome this difficulty is to include adaptation in the attractor network dynamics, with adaptation serving as a slow negative-feedback mechanism that destabilizes states which would otherwise remain permanently stable. Through the interplay between attraction and adaptation, the neural system can, on one hand, represent information reliably using attractor states and, on the other hand, perform computations whenever rapid state updating is necessary. In this talk, I will introduce our recent work on continuous attractor neural networks with adaptation (A-CANNs), and show that A-CANNs exhibit rich dynamical behaviors that support a wide range of neural computations.
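As a rough illustration of these dynamics, the sketch below simulates a one-dimensional CANN on a ring with a slow adaptation variable acting as negative feedback. All parameter values (the network size N, time constants tau and tau_v, adaptation strength m, kernel width a, and inhibition strength k) are toy choices for illustration, not the speaker's model.

```python
# Minimal sketch of a 1-D continuous attractor network with adaptation
# (A-CANN); parameters are illustrative toy values.
import numpy as np

N, tau, tau_v, m = 256, 1.0, 50.0, 0.1      # size, fast/slow time constants, adaptation strength
x = np.linspace(-np.pi, np.pi, N, endpoint=False)
a, k, rho = 0.5, 0.05, N / (2 * np.pi)      # kernel width, divisive inhibition, neuron density

diff = x[:, None] - x[None, :]
d = np.minimum(np.abs(diff), 2 * np.pi - np.abs(diff))       # distance on the ring
J = np.exp(-d**2 / (2 * a**2)) / (np.sqrt(2 * np.pi) * a)    # translation-invariant kernel

u = np.exp(-x**2 / (2 * a**2))              # initial bump state
v = np.zeros(N)                             # adaptation variable
dt, dx = 0.05, 2 * np.pi / N
for _ in range(4000):
    r = np.maximum(u, 0.0) ** 2
    r /= 1.0 + k * rho * r.sum() * dx                 # divisive normalization
    u += dt * (-u + rho * (J @ r) * dx - v) / tau     # attractor dynamics
    v += dt * (-v + m * u) / tau_v                    # slow negative feedback
print("bump peak now at:", x[np.argmax(u)])
```

When the adaptation strength m exceeds a critical value (of order tau/tau_v), the bump destabilizes its own position and travels spontaneously, giving the kind of rapid state updating discussed in the talk.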
INVITED SPEAKERS
Prof. Se-Bum Paik
(Department of Brain and Cognitive Sciences, KAIST, South Korea)
Spontaneous Cognitive Functions in the Brain
The ability to recognize visual objects emerges in the early stages of development. However, the origins of these visual functions in the brain remain unclear: specifically, whether they require supervised or unsupervised learning, as in artificial neural networks, or whether they can arise spontaneously without prior training, as often observed in naïve animals. In this talk, I will present our recent findings indicating that primitive cognitive functions in the brain can indeed emerge spontaneously, entirely in the absence of training. Using a biologically inspired neural network model, I will first demonstrate how regularly structured functional circuits can arise from simple local interactions between individual cells. I will also discuss how evolutionary variations in physical parameters may facilitate the development of distinct functional circuitry within the brain. Additionally, I will illustrate that higher cognitive functions, such as quantity estimation and primitive object detection, can emerge spontaneously in untrained neural networks, and I will argue that random feedforward connections in early circuits may be sufficient to initialize functional circuits. These findings suggest that early cognitive functions can spontaneously emerge from the statistical properties of bottom-up projections in hierarchical neural networks, providing valuable insights into the origins of primitive functions in the brain.
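As a toy analogy to the claim that function can precede training, the sketch below probes a single untrained layer with random Gaussian weights using oriented gratings and measures an ad-hoc selectivity index. Everything here (patch size, spatial frequency, the index itself) is a hypothetical construction for illustration, not the biologically inspired model from the talk.

```python
# Orientation bias in an untrained random feedforward layer (toy sketch).
import numpy as np

rng = np.random.default_rng(0)
P = 16                                     # image patch size
W = rng.normal(size=(100, P * P))          # random, untrained feedforward weights

def grating(theta, freq=0.2):
    y, xg = np.mgrid[0:P, 0:P]
    return np.cos(2 * np.pi * freq * (xg * np.cos(theta) + y * np.sin(theta)))

thetas = np.linspace(0, np.pi, 12, endpoint=False)
resp = np.array([np.maximum(W @ grating(t).ravel(), 0) for t in thetas]).T
osi = (resp.max(1) - resp.min(1)) / (resp.max(1) + resp.min(1) + 1e-9)
print(f"median selectivity of untrained units: {np.median(osi):.2f}")
# Many units are orientation-biased purely from random wiring, echoing the
# idea that random bottom-up projections can initialize functional circuits.
```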
Prof. Li-An Chu
(Department of Biomedical Engineering and Environmental Sciences, National Tsing Hua University, Taiwan)
AI-empowered Multiscale Microscopy Imaging
Optical microscopes have been instrumental in both neuroscience and cancer research by allowing scientists to observe and study cellular structures and processes at a microscopic level. In neuroscience, optical microscopes enable researchers to visualize the intricate details of neurons, synapses, and neural circuits, leading to a better understanding of brain function and disorders. In cancer research, optical microscopes aid in the examination of cancer cells, tumor growth, and the effects of potential treatments, contributing to the development of diagnostic methods and therapeutic interventions. The combination of optical microscopes with deep learning-based analysis has further advanced neuroscience and cancer research by enhancing the capabilities of image processing, analysis, and interpretation. Deep learning algorithms can efficiently identify and classify complex patterns in microscopic images, allowing for more accurate and automated analysis of neural structures in neuroscience and cancerous cells in oncology. This integration enables researchers to extract meaningful insights from large datasets more quickly and effectively than traditional manual methods, leading to discoveries that may have been previously overlooked. Additionally, deep learning-based analysis can facilitate the development of novel diagnostic tools and treatment strategies by uncovering subtle correlations and biomarkers within microscopic images that may not be apparent to the human eye alone. Overall, the synergy between optical microscopes and deep learning-based analysis has significantly accelerated research progress in both fields.
Prof. Songting Li
(Institute of Natural Sciences and the School of Mathematical Sciences, Shanghai Jiao Tong University, China)
Timescale localization and signal propagation in the large-scale primate cortex
In the brain, early sensory areas encode and process external inputs rapidly, while higher association areas are endowed with slow dynamics that benefit the accumulation of information over time. This property raises the question of why these diverse timescales remain well localized rather than mixing across the cortex, despite the high connection density and abundance of feedback loops that support reliable signal propagation. In this talk, we address this question by developing and analyzing a large-scale network model of the primate cortex, and we identify a novel dynamical regime termed “interference-free propagation” (IFP). In the IFP regime, the mean component of the synaptic current into each downstream area is imbalanced, ensuring that signals propagate reliably, while the temporally fluctuating component of the synaptic inputs, which carries upstream areas’ timescales, is largely canceled out, so that each downstream area retains its own localized timescale. Our results provide new insight into the operational regime of the cortex that allows hierarchical timescale localization and reliable signal propagation to coexist.
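The IFP condition can be stated schematically as a decomposition of the synaptic current into area j; the notation below (inter-areal weights W_jk, upstream rates r_k) is assumed here for illustration and is not necessarily the speakers' formulation.

```latex
% Schematic decomposition behind interference-free propagation (IFP)
I_j(t)
  = \underbrace{\sum_k W_{jk}\,\bar{r}_k}_{\text{mean component: imbalanced, net positive}}
  + \underbrace{\sum_k W_{jk}\,\delta r_k(t)}_{\text{fluctuating component: } \approx\, 0}
```

A net-positive mean drive carries the signal downstream, while cancellation of the fluctuating term shields each area from upstream timescales, so that each area retains its intrinsic timescale.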
Prof. Ching-Lung Hsu
(Institute of Biomedical Sciences, Academia Sinica (NPAS), Taiwan)
A novel three-factor learning theory for one-shot hippocampal plasticity
What are the fundamental forms of circuit learning algorithms that best address the core computational problems faced by animals living with memory and goals? Error backpropagation for artificial neural networks was initially inspired by reinforcement learning in behaving animals, while Hebbian plasticity for biological neural networks is native to brain circuits solving “spatial intelligence” problems. The key challenge is to unite commonly used algorithms under one powerful and easily deployable framework. Hebbian plasticity such as spike-timing-dependent plasticity (STDP) has been taken as a central mechanism supporting flexible spatial selectivity in the hippocampus, a region critical for one-shot episodic encoding and best known for its place cells. Arguably, however, memory encoding cannot be fully explained by Hebbian rules, which depend on tight temporal correlation between pre- and postsynaptic activations, without a feedback control like that in reinforcement learning. In this talk, I will describe a recently discovered one-shot plasticity named behavioral time-scale synaptic plasticity (BTSP), in which potentiation is postsynaptically gated by feedback-like signals (dendritic Ca2+-based supralinear plateaus) that can arrive seconds after active synaptic input, well beyond canonical Hebbian time windows. We argue that BTSP can be viewed as a combination of putative Hebbian and instructed learning. Using ex-vivo, in-vivo, and in-silico intracellular physiology, together with mouse behavior in virtual reality (VR), our lab proposes that dendritic plateaus, unlike the postsynaptic signal in “two-factor” Hebbian algorithms, allow synapses to learn pre/post causality rather than correlation by incorporating a “third factor”. This idea is supported by evidence that the biophysics of plateaus may sense time-dependent circuit states, consistent with their conceptual role as third factors. We contemplate that formalizing this place-cell-forming plasticity within three-factor learning theory, grounded in a clear mechanistic understanding, will crystallize its behaviorally relevant functions and make it an immediately accessible algorithm for the neural network learning community.
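A minimal sketch of the three-factor logic, assuming illustrative time constants: presynaptic activity leaves a seconds-long eligibility trace (factors one and two), and weight change occurs only when a plateau-like third factor arrives later, outside any canonical Hebbian window. This is a toy rule, not the lab's fitted BTSP model.

```python
# Toy three-factor, BTSP-like update with a delayed plateau signal.
import numpy as np

dt, T = 0.01, 10.0                                 # seconds
t = np.arange(0.0, T, dt)
tau_e, eta = 2.0, 0.5                              # seconds-long trace, learning rate

pre = (np.abs(t - 3.0) < 0.05).astype(float)       # presynaptic burst at t = 3 s
plateau = (np.abs(t - 5.0) < 0.15).astype(float)   # dendritic plateau at t = 5 s

e, w = 0.0, 0.0
for i in range(len(t)):
    e += dt * (-e / tau_e + pre[i])                # eligibility trace of pre activity
    w += dt * eta * e * plateau[i]                 # potentiation gated by the third factor
print(f"weight change: {w:.4f}")                   # nonzero despite the 2 s pre-to-plateau gap
```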
Prof. Hiroshi Makino
(Department of Physiology, Keio University School of Medicine, Japan)
Learning in intelligent systems
Recent years have seen a resurgence of interactions between artificial intelligence (AI) and neuroscience. AI research has the potential to provide new theories and hypotheses about how the brain solves computational problems, while neuroscience could contribute new algorithms and neural network architectures that endow machines with cognitive abilities resembling those of humans and other animals. Despite this, direct comparisons between artificial and biological intelligent systems remain limited. Here, we explore behaviors and neural representations in both systems across a variety of behavioral paradigms, spanning different domains of intelligence. By training both mice and artificial deep reinforcement learning (RL) agents on the same tasks and analyzing the resulting task representations in their respective neural networks, we found that learning in the mouse cortex exhibits key features similar to those of deep RL algorithms. For instance, after conducting a systematic hyperparameter search and evaluating thousands of deep RL models, we found that AI models optimized for behavioral outcomes more closely recapitulated the neural representation patterns observed in biological systems. Moreover, by formulating AI-derived theoretical predictions and empirically testing them in mice, we discovered that the brain composes novel behaviors through a simple arithmetic combination of pre-acquired action-value representations and a stochastic policy. These results underscore the remarkable parallels in behavior and neural representations between the two systems and demonstrate the value of comparative approaches. Our interdisciplinary methodology may define new research trajectories for AI and neuroscience, deepening our understanding of the brain while enhancing machine intelligence through neuroscience-inspired algorithms.
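The compositional result can be illustrated with a small sketch, assuming addition as the arithmetic operation: action values learned separately for two base tasks are summed and passed through a softmax to yield a stochastic policy for the compound task. The numbers are invented for illustration.

```python
# Toy arithmetic composition of pre-acquired action values.
import numpy as np

def softmax(q, beta=2.0):
    z = np.exp(beta * (q - q.max()))
    return z / z.sum()

q_task_a = np.array([1.0, 0.2, 0.1])    # action values learned on task A
q_task_b = np.array([0.1, 0.3, 0.9])    # action values learned on task B

policy = softmax(q_task_a + q_task_b)   # additive composition + stochastic policy
print(policy)                           # favors actions that are good for both A and B
```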
YOUNG SCHOLARS
Shaoshi Zhang
(Centre for Sleep and Cognition, Centre for Translational Magnetic Resonance Research, Yong Loo Lin School of Medicine, National University of Singapore, Singapore)
In-vivo whole-cortex marker of neural excitation and inhibition
Normal brain function requires balanced neuronal excitation and inhibition. An imbalance in the excitation-inhibition ratio (E/I ratio) has been linked to an array of psychiatric disorders, including autism spectrum disorder and schizophrenia. Characterizing neuronal excitation and inhibition is therefore an important step toward understanding disease mechanisms and a potential gateway to future therapeutic interventions. However, precise measurements of excitation and inhibition require invasive approaches, which are infeasible in human participants. In this talk, I will present our latest work on large-scale biophysical models that estimate the E/I ratio from resting-state fMRI data. I will demonstrate that the estimated E/I-ratio marker is sensitive to neuropharmacological manipulation during fMRI. Finally, I will discuss how the E/I ratio shifts across neurodevelopment and how it relates to cognition during brain maturation.
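To make the modeling idea concrete, the sketch below shows a single rate-based node of the kind used in large-scale fMRI models, where the effective E/I ratio is set by recurrent excitatory and inhibitory weights (w_EE and w_EI here, toy parameters). Fitting such parameters region by region to resting-state fMRI is what yields the in-vivo marker; this is a schematic, not the actual fitted model.

```python
# One noisy mean-field node with an explicit E/I weight ratio (toy sketch).
import numpy as np

def simulate_node(w_EE=1.0, w_EI=0.8, steps=2000, dt=0.001, tau=0.1, seed=0):
    rng = np.random.default_rng(seed)
    S, trace = 0.1, []
    for _ in range(steps):
        I = w_EE * S - w_EI * S + 0.3     # recurrent excitation minus inhibition
        r = max(I, 0.0)                   # threshold-linear transfer
        S += dt * (-S / tau + r) + 0.01 * np.sqrt(dt) * rng.normal()
        trace.append(S)
    return np.array(trace)

# The effective E/I ratio (w_EE / w_EI) shapes the node's fluctuation dynamics.
print(simulate_node(w_EI=0.8).std(), simulate_node(w_EI=0.5).std())
```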
Kensuke Yoshida
(RIKEN Center for Brain Science, Japan)
A biological model of nonlinear dimensionality reduction
Animals make decisions based on high-dimensional sensory inputs. Obtaining low-dimensional representations of these inputs in an unsupervised manner is essential for straightforward downstream information processing. Although nonlinear dimensionality-reduction methods such as t-distributed stochastic neighbor embedding (t-SNE) have been developed, how they could be implemented in simple biological circuits remains unclear. Here, we develop a biologically plausible dimensionality-reduction algorithm compatible with t-SNE, which utilizes a simple three-layer feedforward network mimicking the Drosophila olfactory circuit. The synaptic weights from the middle to the output layer are updated according to a three-factor Hebbian learning rule, so that similar (high-dimensional) inputs evoke similar (low-dimensional) outputs and vice versa. The proposed learning rule performs well on datasets such as entangled rings and MNIST, comparably to t-SNE. We further show that the algorithm may be at work in the Drosophila olfactory circuit by analyzing multiple experimental datasets from previous studies. Finally, we suggest that the algorithm is also beneficial for learning associations between inputs and rewards, allowing these associations to generalize to other inputs not yet associated with rewards.
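A toy version of such a rule is sketched below: a fixed sparse random expansion (a Kenyon-cell-like middle layer) feeds a plastic two-unit output layer, and a third factor, set by whether an input pair is similar, switches the update between pulling the pair's outputs together and pushing them apart. All choices here (the similarity test, renormalization, learning rate) are illustrative assumptions, not the paper's rule.

```python
# Toy three-factor update for nonlinear dimensionality reduction.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 50))              # high-dimensional inputs
M = (rng.random((50, 400)) < 0.1) * 1.0     # fixed sparse random expansion
W = rng.normal(scale=0.1, size=(400, 2))    # plastic middle-to-output weights
eta = 1e-4

for _ in range(5000):
    i, j = rng.integers(200, size=2)
    hi, hj = np.maximum(X[i] @ M, 0), np.maximum(X[j] @ M, 0)
    yi, yj = hi @ W, hj @ W
    sim = 1.0 if X[i] @ X[j] > 0 else -1.0            # third factor: pair similarity
    W -= eta * sim * np.outer(hi - hj, yi - yj)       # pull similar pairs together, push others apart
    W *= min(1.0, 10.0 / (np.linalg.norm(W) + 1e-9))  # keep the embedding bounded
print("2-D embedding of first input:", np.maximum(X[0] @ M, 0) @ W)
```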
Toshitake Asabuki
(RIKEN Center for Brain Science, Japan)
Predictive Learning for Embedding Stochastic Dynamics of the Environment in Spontaneous Activity
While spontaneous activity in the brain is often regarded as mere background noise, recent work has hypothesized that it instead reflects the brain’s learned internal model of the statistical structure of the environment. Several computational studies have proposed synaptic plasticity rules that generate structured spontaneous activity, but the mechanism by which such statistical structure is learned and embedded in spontaneous activity remains unclear. Using a computational model, we investigate novel synaptic plasticity rules that learn structured spontaneous activity obeying appropriate probabilistic dynamics. The proposed plasticity rule for excitatory synapses seeks to minimize the discrepancy between stimulus-evoked and internally predicted activity, while inhibitory plasticity maintains the excitatory-inhibitory balance. We show that the spontaneous reactivation of cell assemblies follows the transition statistics of the model’s evoked dynamics. Furthermore, we demonstrate that simulations of our model can replicate recent experimental observations of spontaneous activity in songbirds, suggesting that the proposed plasticity rules might underlie the mechanism by which animals learn internal models of the environment. Our results shed light on the learning mechanisms behind the brain’s internal model, a crucial step towards a better understanding of the role of spontaneous activity as an internal generative model of stochastic processes in complex environments.
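The sketch below gives a toy version of the two rules in a small rate network: excitatory weights are updated to shrink the gap between evoked and internally predicted activity, while inhibitory weights grow whenever activity exceeds a set point, maintaining the balance. The equations and targets here are illustrative assumptions, not the paper's exact rules.

```python
# Toy predictive excitatory rule plus homeostatic inhibitory rule.
import numpy as np

rng = np.random.default_rng(0)
N, eta_E, eta_I, target = 50, 0.01, 0.02, 0.5
W_E = np.abs(rng.normal(scale=0.1, size=(N, N)))   # excitatory recurrent weights
W_I = np.zeros((N, N))                             # inhibitory recurrent weights

for step in range(5000):
    x = (rng.random(N) < (0.3 if step % 2 == 0 else 0.1)).astype(float)  # two stimulus "states"
    evoked = np.tanh(x + 0.1 * rng.normal(size=N))
    predicted = np.tanh(W_E @ evoked - W_I @ evoked)
    err = evoked - predicted
    W_E = np.clip(W_E + eta_E * np.outer(err, evoked), 0, None)                 # shrink prediction error
    W_I = np.clip(W_I + eta_I * np.outer(predicted - target, evoked), 0, None)  # keep E-I balance
# The aim of such rules is that the learned circuit's spontaneous (input-free)
# dynamics revisit evoked patterns with matching transition statistics.
```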
Yang Qi
(Institute of Science and Technology for Brain-inspired Intelligence, Fudan University, China)
Learning and inference with stochastic neural computing
Zijiao Chen
(The Multimodal Neuroimaging in Neuropsychiatric Disorders Laboratory, Yong Loo Lin School of Medicine, National University of Singapore, Singapore)
High-quality Perception Reconstruction from Brain Activities
Decoding diverse human percepts from functional magnetic resonance imaging (fMRI) data remains a significant challenge in cognitive neuroscience, hindered by the complexity of neural signals, limited decoding accuracy, and a lack of model interpretability. This talk presents four novel, interpretable deep learning approaches that address these challenges in fMRI-to-image, fMRI-to-video, fMRI-to-audio, and fMRI-to-text decoding, advancing our understanding of how the brain represents diverse external stimuli.
Sang Ho Lee
(School of Digital Humanities and Computational Social Sciences, KAIST, South Korea)
Inverse reinforcement learning captures value representations in the reward circuit in a real-time driving task
A challenge in using naturalistic tasks is to describe complex data beyond simple summaries of behavior. Lee et al. (2024) showed that an inverse reinforcement learning (IRL) algorithm combined with deep neural networks provides a practical framework for modeling real-time behavior in a naturalistic task. However, it remains unknown whether the reward function inferred by IRL reflects value representations in the reward circuit. To address this issue, we investigated the neural correlates of the IRL-inferred reward. Human participants were scanned with fMRI while performing a real-time driving task (the highway task). We show that the trajectory of the IRL reward during the task strongly correlates with the trajectory of BOLD signals in the reward circuit, including the prefrontal cortex, the striatum, and the insula. These results demonstrate the validity of IRL as a modeling framework that explains both behavior and brain activity in a real-time task.
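A sketch of the model-brain linkage step, on synthetic data: the IRL-inferred reward time course is convolved with a canonical-style hemodynamic response function (a toy double-gamma here, not SPM's exact parameterization) and correlated with a region's BOLD signal. All signals below are placeholders.

```python
# Correlating an IRL reward trajectory with (synthetic) BOLD.
import numpy as np
from math import gamma

def hrf(t, p1=6.0, p2=16.0):
    g = lambda tt, p: tt ** (p - 1) * np.exp(-tt) / gamma(p)
    return g(t, p1) - 0.35 * g(t, p2)       # toy double-gamma-like shape

tr, n = 2.0, 300
t = np.arange(0.0, 30.0, tr)
rng = np.random.default_rng(0)
reward = rng.normal(size=n)                          # placeholder IRL reward trajectory
pred = np.convolve(reward, hrf(t))[:n]               # expected BOLD given the reward
bold = pred + rng.normal(scale=2.0, size=n)          # synthetic "striatal" signal
print(f"model-brain correlation: r = {np.corrcoef(pred, bold)[0, 1]:.2f}")
```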
Yufan Dong
(Institute of Neuroscience, State Key Laboratory of Neuroscience, Key Laboratory of Primate Neurobiology, Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, China)
Untangling the mystery of REM sleep: Spontaneous cortical dynamics and their functions
Rapid eye movement (REM) sleep is a sleep state characterized by skeletal-muscle paralysis and cortical activation, accompanied by vivid dreaming. Yet global cortical dynamics during REM sleep, and how they relate to dreaming, remain unclear; one research bottleneck is that animals cannot give a ‘dream report’. Here we show that in mice, REM sleep has two distinct stages, characterized by contrasting facial movements, autonomic activity, and distinguishable electroencephalogram theta oscillations. During the later stage of REM sleep, mice exhibited enriched orofacial movements, including whisker, mouth, and eye movements. We characterized micro-states based on distinct orofacial movement patterns and found corresponding structured cortical activity. We also found that most patterned cortical activity waves initiated from the retrosplenial cortex (RSC). Two-photon imaging of layer 2/3 pyramidal neurons in the RSC revealed two distinct patterns of population activity during REM sleep, which encoded the two sequential REM sleep substages. Moreover, closed-loop optogenetic inactivation of the RSC during REM sleep altered cortical activity dynamics, shortened REM sleep duration, and blocked the orofacial movements. These results highlight an important role for the RSC in dictating cortical dynamics and regulating REM sleep progression, reveal the biological meaning of these enriched cortical dynamics, and pave the way for dreaming research in animals.
Jae-Joong Lee
(Center for Neuroscience Imaging Research, Institute for Basic Science Suwon, South Korea)
Personalized Brain Decoding of Spontaneous Pain in Individuals with Chronic Pain
Current assessment of chronic pain lacks objective biomarkers. Neuroimaging has shown potential as a surrogate for subjective pain reports in healthy populations, but whether it is valid for single individuals with chronic pain remains unclear. Here, we aimed to develop precise, person-specific brain markers of spontaneous pain through intensive longitudinal sampling of two individuals with chronic pain. The personalized decoding models predicted changes in spontaneous pain ratings across sessions, runs, and minutes (Participant 1: r = 0.40-0.61; Participant 2: r = 0.51-0.65) and discriminated between high-pain and low-pain states (Participant 1: AUC = 0.71-0.87; Participant 2: AUC = 0.76-0.93). These models relied on individually unique patterns of whole-brain interactions and did not predict the other participant’s pain. This study provides evidence that neuroimaging can decode spontaneous pain in single individuals with chronic pain, offering a potential pathway toward objective pain assessment.
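The two reported metrics can be illustrated on synthetic data: Pearson r between decoded and reported pain, and AUC for separating high- from low-pain states (computed below as the probability that a random high-pain prediction exceeds a random low-pain one). This is an evaluation sketch, not the study's pipeline.

```python
# Computing r and AUC for a (synthetic) pain decoder.
import numpy as np

rng = np.random.default_rng(0)
true_pain = rng.random(200)                               # synthetic pain ratings
pred_pain = true_pain + rng.normal(scale=0.3, size=200)   # synthetic decoder output

r = np.corrcoef(true_pain, pred_pain)[0, 1]
high = true_pain > np.median(true_pain)                   # high- vs. low-pain states
auc = (pred_pain[high][:, None] > pred_pain[~high][None, :]).mean()
print(f"r = {r:.2f}, AUC = {auc:.2f}")
```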
Richard Sebastian Eydam
(Neural Circuits and Computations Unit (Kang Lab), RIKEN Center for Brain Science, Japan)
Metabolic feedback and its potential role in epilepsy
Epilepsy, a widespread neurological disorder, affects a substantial portion of the global population, and approximately 30% of cases remain refractory to available therapies. This resistance arises from the heterogeneous etiologies of epilepsy, which make it a complex spectrum of disorders rather than a disease with a single root cause. An alternative approach to mitigating the effects of epilepsy lies in leveraging metabolic pathways activated during fasting, a practice reported to reduce seizures as early as antiquity. In the early 20th century, this concept evolved into the ketogenic diet (KD), a structured dietary regimen renowned for its efficacy in reducing epileptic seizures [1]. Nonetheless, with the emergence of modern antiepileptic drugs (AEDs), dietary interventions received limited research attention for several decades, experiencing a resurgence only in recent years. We introduce a model of quadratic integrate-and-fire neurons coupled to a global energy reservoir; varying this reservoir mimics the transition from a regular to a ketogenic diet. Our proposed mechanism for seizure control hinges on ATP-dependent potassium channels [2], whose activity, or lack thereof in the presence of ATP, results in neuronal hyperpolarization. Our findings support the viability of this mechanism for regulating epileptic activity, demonstrating that adherence to a KD can shift neuronal dynamics back to a regime of normal activity. We substantiate this relationship by investigating bifurcations in a corresponding mean-field model [3] and presenting simulations that explore three distinct ways of transitioning between normal and seizure-like activity: ATP-concentration shocks, parametric perturbations of ATP production rates, and external current stimulation.
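A toy version of the model is sketched below: quadratic integrate-and-fire neurons with an ATP-gated potassium current that opens as a shared energy reservoir is depleted by spiking. All parameters and the reservoir equation are illustrative assumptions, not those of the cited works.

```python
# QIF population coupled to a global ATP reservoir (toy sketch).
import numpy as np

rng = np.random.default_rng(0)
N, dt = 200, 0.01
v = rng.normal(size=N)                        # membrane potentials
eta = 0.1 * rng.standard_cauchy(N) + 0.5      # heterogeneous excitabilities
atp = 1.0                                     # shared energy reservoir

for _ in range(20000):
    g_katp = max(1.0 - atp, 0.0)                   # K-ATP channels open when ATP is low
    v += dt * (v**2 + eta - g_katp * (v + 2.0))    # QIF dynamics + hyperpolarizing K current
    spikes = v > 10.0
    v[spikes] = -10.0                              # spike-and-reset
    atp += dt * 0.5 * (1.0 - atp) - 0.05 * spikes.mean()   # production minus spiking cost
    atp = max(atp, 0.0)
print(f"final ATP: {atp:.2f}, mean v: {v.mean():.2f}")
# Perturbing the ATP level or its production rate, or injecting current, moves
# the network between normal and seizure-like regimes, as explored in the talk.
```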
Yi Ding
(College of Computing and Data Science, Nanyang Technological University (NTU), Singapore)
Neuropsychology-inspired Deep Neural Networks for Affective Brain-computer Interfaces
Emotion recognition from electroencephalogram (EEG) signals, the basis of affective brain-computer interfaces (aBCI), is a critical area of biomedical research, with applications ranging from mental health regulation to human-computer interaction. However, EEG-based emotion recognition presents significant challenges, primarily due to the low signal-to-noise ratio and the non-stationary nature of EEG signals, especially in generalized evaluation settings where models are not exposed to test data during training. To effectively capture the spatial, temporal, and contextual information in EEG signals, it is essential to incorporate prior knowledge about emotional EEG when designing deep learning architectures. In this talk, several approaches will be introduced, focusing on two key aspects of EEG emotion recognition: classification and regression of emotional states. The findings demonstrate that integrating neurophysiological knowledge into neural network design improves the performance of EEG emotion decoding.
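One common way to build such priors into an architecture, shown as a hedged sketch below, is to factor the first stages into per-channel temporal convolutions followed by a cross-channel spatial convolution, in the spirit of EEGNet-style designs. This mirrors the design principle discussed above but is not the speaker's exact model.

```python
# Temporal-then-spatial convolutional sketch for EEG emotion decoding.
import torch
import torch.nn as nn

class TemporalSpatialNet(nn.Module):
    def __init__(self, n_channels=32, n_classes=3):
        super().__init__()
        self.temporal = nn.Conv2d(1, 8, kernel_size=(1, 64), padding=(0, 32))  # per-channel temporal filters
        self.spatial = nn.Conv2d(8, 16, kernel_size=(n_channels, 1))           # cross-channel (spatial) patterns
        self.pool = nn.AvgPool2d((1, 8))
        self.head = nn.LazyLinear(n_classes)   # swap to 1 output for regression of emotional states

    def forward(self, x):                      # x: (batch, 1, channels, time)
        x = torch.relu(self.temporal(x))
        x = torch.relu(self.spatial(x))
        return self.head(self.pool(x).flatten(1))

net = TemporalSpatialNet()
print(net(torch.randn(4, 1, 32, 512)).shape)   # torch.Size([4, 3])
```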