A comprehensive list of my papers is here, and a more publicly accessible version is here.

My current research interests include
[1]. Sequence generation and timing signals involved in short-term memory in the brain
[2]. Developing dynamic mean-field methods to study stimulus-evoked responses in recurrent neural networks
[3]. Analyzing phase transitions in physical and neurobiological phenomena
[4]. Statistical physics and information theoretic approaches to sensory processing
[5]. Invariant representation of structured objects present in natural sensory data within network dynamics
You can look at an example research proposal here.

Here are a few neuroscience problems I have solved using tools from mathematics and physics: understanding the circuit mechanisms of sequence generation, characterizing neural responses to natural signals, studying chaotic and non-chaotic responses in recurrent networks, computing eigenvalue spectra of random matrices relevant to neural networks, determining stimulus selectivity from network dynamics, and modeling chemical reaction kinetics.
Read on.

Recurrent Network Models of Sequence Generation and Memory Download PDF

Sequential activation of neurons is a common feature of network activity during a variety of behaviors, including working memory and decision making. Previous network models for sequences and memory emphasized specialized architectures in which a principled mechanism is pre-wired into their connectivity. Here we demonstrate that, starting from random connectivity and modifying a small fraction of connections, a largely disordered recurrent network can produce sequences and implement working memory efficiently. We use this process, called Partial In-Network Training (PINning), to model and match cellular-resolution imaging data from the posterior parietal cortex during a virtual memory-guided two-alternative forced-choice task. Analysis of the connectivity reveals that sequences propagate through the cooperation between recurrent synaptic interactions and external inputs, rather than through feedforward or asymmetric connections. Together, our results suggest that neural sequences may emerge through learning from largely unstructured network architectures.

Neural Sequences and Short-Term Memory

A. Experimental setup for the virtual reality environment (left), a schematic showing the design for the memory-guided two-alternative forced-choice (2AFC) task (top right), and an example field of view from the mouse posterior parietal cortex (PPC) are shown here (bottom right). Individual PPC neurons, which fire in a temporally constrained manner only for specific cue-response combinations and specific task epochs, are intermixed anatomically. B. Schematic of the activity-based modification scheme we call Partial In-Network Training [4], or PINning, is shown here. Only the outgoing synapses (plastic synapses, in orange) from a small subset of the randomly connected (random synapses, in gray) network neurons are plastic, using as targets the firing rates extracted from Ca2+ imaging data from left-preferring PPC neurons (schematized in blue), right-preferring neurons (in red), and neurons with no choice preference (in green). Plastic synapses are modified at every time step by an amount proportional to the difference between the input to the respective neuron at that time step and its target, the presynaptic firing rate, and the firing rate fluctuations across the network. Each neuron in the PINned network therefore fires transiently with its peak at a time point staggered relative to the other neurons in the network, and collectively, the population produces a sequence spanning the entire duration of the task (about 10 s). C. Normalized firing rates extracted from trial-averaged Ca2+ imaging data collected in the PPC during the two-alternative forced-choice task [3] are shown here. Spike trains were extracted by deconvolution [63] from mean Ca2+ fluorescence traces for the 437 choice-specific and 132 non-specific, task-modulated PPC neurons (one per row) imaged on preferred and opposite trials. Firing rates, derived by smoothing the inferred spikes [4], were then used to extract the target functions for PINning. Traces were normalized to the peak of the mean firing rate of each neuron on preferred trials and sorted by their time of center-of-mass (tCOM). Vertical gray lines indicate the time points corresponding to the different epochs of the task – the cue period, ending at 4 s, the delay period, ending at 7 s, and the turn period, concluding at the end of the trial, ~10 s. D. The outputs of a 569-neuron recurrent network model with 16% plastic synapses, constructed by PINning, sorted by tCOM and normalized by the peak of the output of each model neuron, show an excellent match with the experimental data (C). The output of this minimally structured PINned network captures over 85% of the variability present in the experimental data. E. Logarithm of the probability density of the elements of the randomly initialized synaptic connectivity matrix (JRand, gray squares), the sparsely PINned matrix with 16% plastic synapses (JPINned, 16%, yellow squares) whose outputs are shown in (D), and, as a useful comparison, a fully PINned matrix (JPINned, 100%, red squares) are shown here. JRand is normally distributed. JPINned, 16% is skewed toward negative or inhibitory weights (mean = –0.1, variance = 2.2, and skewness = –2) with heavy tails (kurtosis = 30). JPINned, 16% corresponds to a network in which the large sequence-facilitating synaptic changes come from a small fraction of the weights, as suggested by experimental measurements. In JPINned, 16%, the ratio of the size of the largest synaptic weight to the size of a "typical" weight is ~20.
If we assume the typical synapse corresponds to a postsynaptic potential (PSP) of 0.05 mV, then the "large" synapses have a 1 mV PSP. This is within the range for which existing experimental data support the plausibility of the network. F. Capacity of multi-sequential memory networks (Ns) as a function of network size (N) for different fractions of plastic synapses is shown here. Ns is directly proportional to the PINning fraction times N and inversely proportional to temporal sparseness. The slope of the capacity-to-network-size relationship increases when non-specific neurons, such as those observed in the data and in (C), are included because they enable networks to propagate temporally sparser sequences.
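
To make the PINning scheme in panel B concrete, here is a minimal NumPy sketch. It is only an illustration: a plain error-driven (delta-rule) update on the plastic columns stands in for the recursive-least-squares-style modification described above, the units are tanh rate neurons, and the Gaussian-bump target sequence is a hypothetical stand-in for the Ca2+-derived target functions; the network size, gain, and learning rate are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
N, g = 200, 1.5                      # network size and gain (illustrative values)
dt, tau, eta = 1e-3, 0.01, 0.05      # time step (s), time constant (s), learning rate
p_plastic = 0.16                     # fraction of neurons whose outgoing synapses are plastic
plastic = rng.random(N) < p_plastic  # columns of J allowed to change

J = g * rng.standard_normal((N, N)) / np.sqrt(N)  # random initial connectivity

# Hypothetical targets: each neuron's recurrent input should peak at a staggered
# time, so the population traces out a sequence (stand-in for the Ca2+-derived
# target functions used in the paper).
T = 2000
t_grid = np.arange(T) * dt
peaks = np.linspace(0.1, T * dt - 0.1, N)
targets = np.exp(-((t_grid[None, :] - peaks[:, None]) ** 2) / (2 * 0.05**2))

x = 0.1 * rng.standard_normal(N)
for ti in range(T):
    r = np.tanh(x)                          # firing rates
    recurrent_input = J @ r
    err = recurrent_input - targets[:, ti]  # mismatch with each neuron's target input
    # delta-rule update restricted to the plastic columns
    # (outgoing synapses of the chosen subset of neurons)
    J[:, plastic] -= eta * np.outer(err, r[plastic])
    x += (dt / tau) * (-x + recurrent_input)
```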

Neural Responses to Natural Stimuli Download PDF

The concept of feature selectivity in sensory signal processing can be formalized as dimensionality reduction: in a stimulus space of very high dimensions, neurons respond only to variations within some smaller, relevant subspace. But if neural responses exhibit invariances, then the relevant subspace typically cannot be reached by a Euclidean projection of the original stimulus. We argue that, in several cases, we can make progress by appealing to the simplest nonlinear construction, identifying the relevant variables as quadratic forms, or “stimulus energies.” Natural examples include non-phase-locked cells in the auditory system, complex cells in visual cortex, and motion-sensitive neurons in the visual system.

A maximally informative stimulus energy for a non-phase-locked auditory neuron.

Analyzing the responses of the model auditory neuron to a bird song. (a) The sound pressure wave of a zebra finch song used as the stimulus to the model neuron is shown along with its spectrogram. The spectrogram of the song illustrates that s is highly structured, full of harmonic stacks and complex spectrotemporal motifs. (b) The equivalent matrix K, constructed from the two filters as described in Eq. (8), is 300 × 300 in size but has a relatively simple structure. (c) Taking a Fourier transform of K over t2 yields a spectrotemporal sensitivity matrix K̃ with a peak at approximately 1 kHz. (d) The initial guess for Q is the random symmetric matrix plotted here. (e) The optimal matrix Q that maximizes the mutual information between the spiking response of the model neuron and the 1D projection x = sᵀ · Q · s matches K well at the end of 100 learning steps. (f) The spectrotemporal sensitivity Q̃, corresponding to the maximally informative stimulus energy, has the same response preferences as K̃.

Generalizing the idea of maximally informative dimensions, we show that one can search for the kernels of the relevant quadratic forms by maximizing the mutual information between the stimulus energy and the arrival times of action potentials. Simple implementations of this idea successfully recover the underlying properties of model neurons even when the number of parameters in the kernel is comparable to the number of action potentials and stimuli are completely natural. We explore several generalizations that allow us to incorporate plausible structure into the kernel and thereby restrict the number of parameters. We hope that this approach will add significantly to the set of tools available for the analysis of neural responses to complex, naturalistic stimuli.
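
As a rough illustration of the quantity being maximized, the sketch below builds the stimulus energy x = sᵀ · Q · s from overlapping stimulus windows and evaluates the per-spike information carried by that one-dimensional projection. The white-noise stimulus, the model neuron, the filter shapes, and the histogram-based density estimates are all assumptions made for this example; in the paper, Q is optimized by ascending this information for fully natural stimuli.

```python
import numpy as np

rng = np.random.default_rng(1)
D, T = 30, 50_000                       # window length (samples) and stimulus duration
stim = rng.standard_normal(T)           # white-noise stand-in for a natural stimulus

# Ground-truth kernel K built from two quadrature filters (illustrative shapes)
t = np.arange(D)
f1, f2 = np.sin(2 * np.pi * t / D), np.cos(2 * np.pi * t / D)
K = np.outer(f1, f1) + np.outer(f2, f2)

def energies(Q):
    """Stimulus energy x_t = s_t^T Q s_t for every length-D stimulus window s_t."""
    windows = np.lib.stride_tricks.sliding_window_view(stim, D)
    return np.einsum('ti,ij,tj->t', windows, Q, windows)

# Model neuron: spike probability increases with the true stimulus energy
x_true = energies(K)
rate = np.exp((x_true - x_true.mean()) / x_true.std())
spikes = rng.random(rate.size) < 0.05 * rate / rate.mean()

def info_bits(Q, nbins=30):
    """Per-spike information between the energy x and spiking (histogram estimate)."""
    x = energies(Q)
    edges = np.quantile(x, np.linspace(0.0, 1.0, nbins + 1))
    p_x = np.histogram(x, edges)[0].astype(float)
    p_x_spike = np.histogram(x[spikes], edges)[0].astype(float)
    p_x /= p_x.sum()
    p_x_spike /= p_x_spike.sum()
    m = (p_x > 0) & (p_x_spike > 0)
    return np.sum(p_x_spike[m] * np.log2(p_x_spike[m] / p_x[m]))

Q_random = rng.standard_normal((D, D))
Q_random = 0.5 * (Q_random + Q_random.T)
print("info with true kernel K:  ", info_bits(K))
print("info with a random kernel:", info_bits(Q_random))
```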

Balanced Recurrent Networks and Chaos Download PDF

Neuronal activity arises from an interaction between ongoing firing generated spontaneously by neural circuits and responses driven by external stimuli. Using mean-field analysis, we ask how a neural network that intrinsically generates chaotic patterns of activity can remain sensitive to extrinsic input. We find that inputs not only drive network responses, but they also actively suppress ongoing activity, ultimately leading to a phase transition in which chaos is completely eliminated. The critical input intensity at the phase transition is a non-monotonic function of stimulus frequency, revealing a “resonant” frequency at which the input is most effective at suppressing chaos even though the power spectrum of the spontaneous activity peaks at zero and falls exponentially. A prediction of our analysis is that the variance of neural responses should be most strongly suppressed at frequencies matching the range over which many sensory systems operate.

Phase-transition curves showing the critical input amplitude that divides regions of periodic and chaotic activity as a function of input frequency. Transition curves are shown for g = 1.5 (dashed curve) and g = 1.8 (solid curve). The inset traces show representative single-unit firing rates for the regions indicated. Also shown is a comparison of the transition curve computed by mean-field theory (open circles and line) and by simulating a network (filled circles) for r0 = 1, g = 2 and, for the simulation, N = 10,000.
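
A minimal simulation sketch of the kind of driven rate network behind these transition curves is given below. The tanh rate units, the sinusoidal input delivered with a random phase to each unit, and the values of N, g, and the two input amplitudes are illustrative assumptions; the divergence or convergence of two nearby trajectories is used as a crude probe of chaotic versus entrained activity.

```python
import numpy as np

rng = np.random.default_rng(2)
N, g = 500, 1.8                         # network size and gain (illustrative)
tau, dt, f = 0.01, 1e-3, 5.0            # time constant (s), step (s), input frequency (Hz)
J = g * rng.standard_normal((N, N)) / np.sqrt(N)
phases = rng.uniform(0.0, 2.0 * np.pi, N)   # each unit receives the input at a random phase

def simulate(I_amp, x0, steps=4000):
    """Integrate dx/dt = -x + J tanh(x) + I cos(2*pi*f*t + phase)."""
    x = x0.copy()
    for ti in range(steps):
        drive = I_amp * np.cos(2.0 * np.pi * f * ti * dt + phases)
        x += (dt / tau) * (-x + J @ np.tanh(x) + drive)
    return np.tanh(x)

x0 = rng.standard_normal(N)
for I_amp in (0.1, 2.0):                # weak input vs. strong input
    a = simulate(I_amp, x0)
    b = simulate(I_amp, x0 + 1e-6 * rng.standard_normal(N))
    print(f"I = {I_amp}: final separation of nearby trajectories = {np.linalg.norm(a - b):.3g}")
    # chaotic regime: the tiny perturbation grows to order one;
    # entrained (non-chaotic) regime: the trajectories stay together
```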

Neuronal selectivity to stimulus features is typically studied by determining how the mean response across experimental trials depends on various stimulus parameters. The presence of nonlinear interactions between stimulus-evoked and spontaneously fluctuating activity indicates that response components that are not locked to the temporal modulation of the stimulus may also be sensitive to stimulus parameters. In general, our results suggest that experiments studying the stimulus dependence of the noise component of neural responses could provide important insights into the nature and origin of activity fluctuations in neuronal circuits, as well as their role in neuronal information processing.

Random Matrix Theory and Eigenvalue Spectra Download PDF

The dynamics of neural networks is influenced strongly by the spectrum of eigenvalues of the matrix describing their synaptic connectivity. In large networks, elements of the synaptic connectivity matrix can be chosen randomly from appropriate distributions, making results from random matrix theory highly relevant. Unfortunately, classic results on the eigenvalue spectra of random matrices do not apply to synaptic connectivity matrices because of the constraint that individual neurons are either excitatory or inhibitory. Therefore, we compute eigenvalue spectra of large random matrices with excitatory and inhibitory columns drawn from distributions with different means and equal or different variances.

Density ρ of eigenvalues as a function of |ω|, the distance from the origin of the complex plane, for N = 1000. The solid lines are the result of the analytic calculation and symbols are numerical results. (a) Results for different fractions f of excitatory and inhibitory elements, and α = 0.06. The inset shows eigenvalues in the complex plane computed numerically for f = 0.5 and α = 0.06. (b) Results for different excitatory variances 1/(Nα) and f = 0.5, with the inhibitory variance equal to 1/N.
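
The construction is easy to probe numerically. The sketch below draws a matrix with Gaussian excitatory and inhibitory columns, enforces the balance (zero row sum) constraint, and computes its eigenvalues and their radial density; the specific means and variances are illustrative stand-ins rather than the exact parameterization used in the paper.

```python
import numpy as np

rng = np.random.default_rng(3)
N, f = 1000, 0.5                        # matrix size and excitatory fraction
nE = int(f * N)
mu_E, mu_I = 1.0, 1.0                   # column means (balance needs f*mu_E = (1-f)*mu_I)
sigma_E, sigma_I = 1.5, 1.0             # unequal excitatory/inhibitory standard deviations

J = np.empty((N, N))
J[:, :nE] = rng.normal( mu_E / np.sqrt(N), sigma_E / np.sqrt(N), size=(N, nE))      # excitatory columns
J[:, nE:] = rng.normal(-mu_I / np.sqrt(N), sigma_I / np.sqrt(N), size=(N, N - nE))  # inhibitory columns
J -= J.mean(axis=1, keepdims=True)      # enforce zero net input to each neuron (balanced rows)

eig = np.linalg.eigvals(J)
print("largest |eigenvalue|:", np.abs(eig).max())

# radial density of eigenvalues, comparable to the plotted rho(|omega|)
counts, edges = np.histogram(np.abs(eig), bins=20)
radii = 0.5 * (edges[:-1] + edges[1:])
density = counts / (2.0 * np.pi * radii * np.diff(edges) * N)
print(np.round(density, 3))
```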

The eigenvalue distributions we have obtained have several implications for neural network dynamics. First, modifying the mean strengths of excitatory and inhibitory synapses has no effect on stability or small-fluctuation dynamics under balanced conditions. These can only be modified by changing the widths of the distributions of excitatory and inhibitory synaptic strengths. If these widths are different, fewer eigenvalues will appear at the edge of the eigenvalue circle, meaning that there will be fewer slowly oscillating and long-lasting modes in the network dynamics.

Stimulus Selectivity from Network Dynamics Download PDF

How are the spatial patterns of spontaneous and evoked population responses related? We study the impact of connectivity on the spatial pattern of fluctuations in the input-generated response, by comparing the distribution of evoked and intrinsically generated activity across the different units of a neural network. We develop a complementary approach to principal component analysis in which separate high-variance directions are derived for each input condition. We analyze subspace angles to compute the difference between the shapes of trajectories corresponding to different network states, and the orientation of the low-dimensional subspaces that driven trajectories occupy within the full space of neuronal activity. In addition to revealing how the spatiotemporal structure of spontaneous activity affects input-evoked responses, these methods can be used to infer input selectivity induced by network dynamics from experimentally accessible measures of spontaneous activity (e.g. from voltage- or calcium-sensitive optical imaging experiments). We conclude that the absence of a detailed spatial map of afferent inputs and cortical connectivity does not limit our ability to design spatially extended stimuli that evoke strong responses.

Spatial pattern of network responses. Top left panel: Schematic of the angle between the subspaces defined by the first 2 components of the chaotic activity (grey) and a 2D description of the periodic orbit (black curve). Top right panels: PCA of the chaotic spontaneous state and the non-chaotic driven state reached when an input of sufficiently high amplitude has suppressed the chaotic fluctuations. Percent variance accounted for by different PCs for chaotic spontaneous activity. Projections of the chaotic spontaneous activity onto PC vectors 1, 10 and 50 (in decreasing order of variance). For non-chaotic driven activity, projections of periodic driven activity are shown for PCs 1, 3, and 5. Projections onto components 2, 4, and 6 are identical but phase shifted by π/2. N = 1000, g = 1.5, f = 5 Hz and I/I1/2 = 0.7. Bottom left panel: Effect of input frequency on the orientation of the periodic orbit. Angle between the subspaces defined by the 2 leading PCs of non-chaotic driven activity at different frequencies and these two vectors for a 5 Hz input frequency. N = 1000 and I/I1/2 = 0.7, and f = 5 Hz, I/I1/2 = 1.0. Bottom right panel: Network selectivity to different spatial patterns of input. Signal and noise amplitudes in the input-evoked response aligned to the leading PCs of the spontaneous activity of the network. N = 1000, I/I1/2 = 0.2 and f = 2 Hz. Chaos is completely suppressed only when the input is aligned to the PC vectors with the 5 largest eigenvalues.
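
The subspace-angle comparison itself is simple to compute. The sketch below extracts the leading principal components of two activity matrices and computes the principal angles between the subspaces they span; the two surrogate data sets (spatially correlated noise standing in for chaotic spontaneous activity, and a noisy planar oscillation standing in for the periodic driven orbit) are invented purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)
T, N, k = 2000, 300, 2                  # time points, neurons, subspace dimension

def leading_pcs(activity, k):
    """Orthonormal basis (N x k) spanning the top-k principal components."""
    centered = activity - activity.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[:k].T

def principal_angles(A, B):
    """Principal angles (radians) between the column spaces of orthonormal A and B."""
    s = np.linalg.svd(A.T @ B, compute_uv=False)
    return np.arccos(np.clip(s, -1.0, 1.0))

# Surrogate "chaotic spontaneous" activity: temporally white, spatially correlated noise
spont = rng.standard_normal((T, N)) @ (rng.standard_normal((N, N)) / np.sqrt(N))

# Surrogate "driven" activity: a planar periodic orbit plus a little noise
t = np.arange(T)[:, None]
p1, p2 = rng.standard_normal((1, N)), rng.standard_normal((1, N))
driven = (np.sin(2 * np.pi * t / 200) @ p1
          + np.cos(2 * np.pi * t / 200) @ p2
          + 0.05 * rng.standard_normal((T, N)))

angles = principal_angles(leading_pcs(spont, k), leading_pcs(driven, k))
print("principal angles between the two 2D subspaces (deg):", np.degrees(angles))
```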

Our results show that experimentally accessible spatial patterns of spontaneous activity (e.g. from voltage- or calcium-sensitive optical imaging experiments) can be used to infer the stimulus selectivity induced by the network dynamics and to design spatially extended stimuli that evoke strong responses. This is particularly true when selectivity is measured in terms of the ability of a stimulus to entrain the neural dynamics. In general, our results indicate that the analysis of spontaneous activity can provide valuable information about the computational implications of neuronal circuitry.

Temperature Compensated Chemical Reactions Download PDF

Circadian rhythms are daily oscillations in behaviors that persist in constant light/dark conditions with periods close to 24 h. A striking feature of these rhythms is that their periods remain fairly constant over a wide range of physiological temperatures, a feature called temperature compensation. Although circadian rhythms have been associated with periodic oscillations in mRNA and protein levels, the question of how to construct a network of chemical reactions that is temperature compensated remains unanswered. We discuss a general framework for building such a network.
The generality and robustness of the temperature compensation mechanism we have presented have implications for the evolution of a temperature-compensated system such as the mechanism that produces circadian rhythms. Suppose that a biochemical network evolves but does not yet have accurate temperature compensation. Additional reactions can then be added to this network until a pathway develops that yields better compensation.
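
As a toy numerical illustration of the cancellation idea: if an oscillator's period depends on Arrhenius reaction rates only through a combination whose effective activation energies sum to zero, the period becomes temperature independent. The rates, activation energies, and the assumed form of the period below are invented for this example and are not taken from the paper.

```python
import numpy as np

R = 8.314e-3                               # gas constant, kJ/(mol*K)

def arrhenius(A, E, T):
    """Reaction rate k(T) = A * exp(-E / (R*T))."""
    return A * np.exp(-E / (R * T))

T_range = np.linspace(288.0, 308.0, 5)     # 15 C to 35 C
k1 = arrhenius(1e6, 60.0, T_range)         # activation energy 60 kJ/mol
k2 = arrhenius(1e4, 30.0, T_range)         # activation energy 30 kJ/mol

# Assume (for illustration) that the period scales as k2 / sqrt(k1).
# The effective activation energy is then 30 - 0.5*60 = 0: the temperature
# dependences of the two rates cancel and the period is compensated.
period = k2 / np.sqrt(k1)
print(period / period[0])                  # ~1.0 across the whole temperature range
```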