Inverse problems in computational neuroscience comprise the determination of synaptic weight matrices or kernels for neural networks or neural fields, respectively. Here, we reduce multi-dimensional inverse problems to inverse problems in lower dimensions which can be solved more easily, or even explicitly through kernel construction. In particular, we discuss a range of embedding techniques and analyze their properties. We study the Amari equation as a particular example of a neural field theory. We obtain a solution of the full 2D or 3D problem by embedding 0D or 1D kernels into the domain of the Amari equation using a suitable path parametrization and basis transformations. Pulses are interconnected at branching points via path gluing. As instructive examples we construct logical gates, such as the persistent XOR and binary addition in neural fields. In addition, we compare results of inversion by dimensional reduction with a recently proposed global inversion scheme for neural fields based on
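As a concrete anchor for the neural field setting, a minimal sketch of the 1D Amari equation du/dt = -u + w * f(u) follows; the Mexican-hat kernel, sigmoidal firing rate, and all parameters are illustrative assumptions, not the kernel constructions used in the paper.

import numpy as np

# 1D Amari neural field, du/dt = -u + (w * f(u)), integrated with forward Euler.
# Kernel, nonlinearity, and parameters are illustrative choices.
L, N, dt = 20.0, 256, 0.01
x = np.linspace(-L/2, L/2, N, endpoint=False)
dx = x[1] - x[0]
w = 1.0*np.exp(-x**2) - 0.5*np.exp(-x**2/4)          # Mexican hat: local excitation, lateral inhibition
f = lambda u: 1.0/(1.0 + np.exp(-10.0*(u - 0.1)))    # sigmoidal firing-rate function
w_hat = np.fft.fft(np.fft.ifftshift(w))              # center kernel for circular convolution
u = 0.5*np.exp(-x**2)                                # localized initial activity bump
for _ in range(2000):
    conv = np.real(np.fft.ifft(w_hat*np.fft.fft(f(u))))*dx
    u += dt*(-u + conv)
print(f"bump amplitude after integration: {u.max():.3f}")

Whether the bump persists or decays depends on the kernel balance; the same integration loop applies to any kernel produced by an inversion scheme.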
In vivo cortical recording reveals that indirectly driven neural assemblies can produce reliable and temporally precise spiking patterns in response to stereotyped stimulation. This suggests that despite being fundamentally noisy, the collective activity of neurons conveys information through temporal coding. Stochastic integrate-and-fire models delineate a natural theoretical framework to study the interplay of intrinsic neural noise and spike timing precision. However, there are inherent difficulties in simulating their network dynamics in silico with standard numerical discretization schemes. Indeed, the well-posedness of the evolution of such networks requires temporally ordering every neuronal interaction, whereas the order of interactions is highly sensitive to the random variability of spiking times. Here, we address these issues for perfect stochastic integrate-and-fire neurons by designing an exact event-driven algorithm for the simulation of recurrent networks, with delayed Dirac-like
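The core of an event-driven scheme can be illustrated for noiseless perfect integrate-and-fire neurons, where the next threshold crossing is analytically predictable and a priority queue keeps every delayed interaction temporally ordered. The sketch below is a simplified stand-in with invented parameters, not the paper's exact stochastic algorithm.

import heapq

# Event-driven sketch: perfect (non-leaky) IF neurons with constant drive mu
# and delayed Dirac synapses. Stale spike predictions are skipped via versions.
N, mu, V_th, w, delay, T = 3, 1.0, 1.0, 0.2, 0.1, 5.0
V = [0.0]*N; t_last = [0.0]*N; version = [0]*N
queue = []                                  # entries: (time, kind, neuron, version)

def predict(i, now):                        # analytic time of next threshold crossing
    return now + max(0.0, (V_th - V[i]) / mu)

for i in range(N):
    heapq.heappush(queue, (predict(i, 0.0), 'spike', i, version[i]))

while queue:
    t, kind, i, ver = heapq.heappop(queue)
    if t > T:
        break
    V[i] += mu * (t - t_last[i]); t_last[i] = t      # integrate drift up to t
    if kind == 'spike':
        if ver != version[i]:
            continue                                  # stale prediction, skip
        print(f"neuron {i} spikes at t={t:.3f}")
        V[i] = 0.0
        for j in range(N):                            # all-to-all delayed coupling
            if j != i:
                heapq.heappush(queue, (t + delay, 'psp', j, -1))
    else:
        V[i] += w                                     # delayed synaptic jump
    version[i] += 1                                   # invalidate old prediction
    heapq.heappush(queue, (predict(i, t), 'spike', i, version[i]))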
In this paper, we demonstrate for the first time an ultrafast fully functional photonic spiking neuron. Our experimental setup constitutes a complete all-optical implementation of a leaky integrate-and-fire neuron, a computational primitive that provides a basis for general purpose analog optical computation. Unlike purely analog computational models, spiking operation eliminates noise accumulation and results in robust and efficient processing. Operating at gigahertz speed, which corresponds to at least a 10^8 speed-up compared with biological neurons, the demonstrated neuron provides all functionality required by the spiking neuron model. The two demonstrated prototypes and a demonstrated feedback operation mode prove the feasibility and stability of our approach and show the obtained performance characteristics.
where y_i is the output of the i-th neuron, x_j is the j-th input signal, w_ij is the synaptic weight (or strength of connection) between neurons i and j, and φ is the activation function. While this model has seen success in machine-learning applications, it is a poor model for real (biological) neurons, because it lacks the time-dependence that real neuron spikes exhibit. Biological models of the "integrate-and-fire" type take essentially this form, but they have largely been superseded by kinetic models such as the Hodgkin-Huxley model. In the case of modelling a biological neuron, physical analogues are used in place of abstractions such as "weight" and "transfer function". A neuron is filled and surrounded with water containing ions, which carry electric charge. The neuron is bounded by an insulating cell membrane and can maintain a concentration of charged ions on either side that determines a capacitance Cm. The firing of a neuron involves the movement of ions into ...
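In code, the weighted-sum model above is a one-liner; the following sketch (toy weights and a logistic activation, both arbitrary) just makes the indices concrete.

import numpy as np

# The weighted-sum model neuron: y_i = phi(sum_j w_ij x_j).
def phi(s):
    return 1.0/(1.0 + np.exp(-s))    # logistic activation (one common choice)

W = np.array([[0.5, -0.2, 0.1],
              [0.3,  0.8, -0.5]])    # w_ij: 2 neurons, 3 inputs
x = np.array([1.0, 0.0, 1.0])        # input signals x_j
y = phi(W @ x)                       # outputs y_i
print(y)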
This is the readme for the two neural network models (mex files) associated with the following paper: S. Cavallari, S. Panzeri and A. Mazzoni (2014) Comparison of the dynamics of neural interactions between current-based and conductance-based integrate-and-fire recurrent networks, Frontiers in Neural Circuits 8:12. doi: 10.3389/fncir.2014.00012 Recurrent networks, each consisting of two populations (excitatory and inhibitory) of randomly connected Leaky Integrate-and-Fire (LIF) neurons with either conductance-based synapses (COBN) or current-based synapses (CUBN), were studied. The activity of the LIF COBN model was compared with the activity of the associated LIF CUBN model. Instructions are provided in the ReadMe files in each model's associated subfolder, LIF_COBN and LIF_CUBN. If you have any questions about the implementation of these MATLAB models, which require compilation with mex, contact: [email protected] Please cite the paper if you use the code. 20140709 Comments in LIF_COBN/code_COBN.c, ...
The Ebb and Flow of Deep Learning: a Theory of Local Learning In a physical neural system, where storage and processing are intertwined, the learning rules for adjusting synaptic weights can only depend on local variables, such as the activity of the pre- and post-synaptic neurons. Thus learning models must specify two things: (1) which variables are to be considered local; and (2) which kind of function combines these local variables into a learning rule. We consider polynomial learning rules and analyze their behavior and capabilities in both linear and non-linear networks. As a byproduct, this framework enables the discovery of new learning rules and important relationships between learning rules and group symmetries. Stacking local learning rules in deep feedforward networks leads to deep local learning. While deep local learning can learn interesting representations, it cannot learn complex input-output functions, even when targets are available for the top layer. Learning complex ...
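For concreteness, the simplest polynomial local rule is plain Hebb, dw_ij = eta * pre_j * post_i, a product of the two local variables. The toy sketch below (invented input statistics, with a norm clamp to keep weights bounded) shows such a rule aligning a linear unit with the dominant direction of its inputs.

import numpy as np

# Plain Hebbian learning: a degree-two polynomial rule in the local
# variables (presynaptic activity x, postsynaptic activity y = w.x).
rng = np.random.default_rng(0)
eta = 0.05
direction = np.array([1.0, 0.8, 0.2, 0.0])           # hidden input correlation axis
w = rng.normal(0, 0.1, size=4)
for _ in range(2000):
    x = rng.normal()*direction + 0.1*rng.normal(size=4)  # correlated inputs
    y = w @ x                                        # local postsynaptic variable
    w += eta * y * x                                 # update uses only x and y
    w /= max(1.0, np.linalg.norm(w))                 # clamp to avoid divergence
print(np.round(w / np.linalg.norm(w), 2))            # ~ +/- the dominant direction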
Our estimate of the average cortical spread of the LFP is similar to an estimate in layer 2/3 of cat V1 reported recently (Katzner et al., 2009) with a different approach, but is much smaller than many others. Our result seems to contradict a number of studies (Mitzdorf, 1987; Kruse and Eckhorn, 1996; Kreiman et al., 2006; Liu and Newsome, 2006; Berens et al., 2008) that concluded that the spread of LFP signals ranges from ∼500 μm to a few millimeters. However, all such estimates rely on a model associating the LFP with functional properties of cortical neurons. For most studies (Mitzdorf, 1987; Kruse and Eckhorn, 1996; Kreiman et al., 2006; Liu and Newsome, 2006; Berens et al., 2008; Katzner et al., 2009), estimates of the cortical spread of the LFP were indirect and were associated with columnar structures of certain feature selectivities, such as orientation, direction, or speed selectivity. Because the feature selectivity of the LFP is usually broader than that of single unit activity or ...
Figure 3: The effect of synaptic noise of various synaptic types. (a) Visualization of spike train groups analyzed under synaptic noise. The ordinate represents the rank of APs fired at the times shown on the abscissa. The braided lines in the main graph represent different spike trains of individual trials and demonstrate a notable tendency of the spiking mechanism to keep AP times well aligned despite different numbers of fired APs across trials. The inset focuses on the region of an AP cluster, demonstrating the cluster method used to analyze APs: the dotted blue rectangle in the inset represents the 10 ms time window moving along aligned spike trains and collecting only those AP clusters whose number of spikes within the 10 ms window exceeds 70% of the value of ...
We have shown that intensity-selective and intensity-nonselective neurons differ in excitatory tuning profile and absolute excitatory strength. Intuitively, the differential intensity tuning profile of excitation may fully account for the functional difference between intensity-selective and intensity-nonselective neurons. In the intensity-selective neuron, excitation is close to saturation at the intensity threshold (it reaches ~80% of the peak level), whereas at this intensity level inhibition is nearly at its lowest (Fig. 4A,B). This may lead to a strong output response at the intensity threshold. As intensity increases further, inhibition catches up, and the level of spike response may gradually fall. To demonstrate how much the observed excitatory and inhibitory tuning profiles can account for the intensity tuning profile of output responses in a quantitative manner, we used a conductance-based single-compartment integrate-and-fire neuron model (Liu et al., 2007; Zhou et al., 2012). ...
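A conductance-based single-compartment integrate-and-fire model of the kind cited boils down to one voltage equation driven by excitatory and inhibitory conductances; the sketch below uses invented parameter values and toy conductance traces purely to show the structure, not the fitted model of the study.

import numpy as np

# Conductance-based single-compartment IF: C dV/dt = -g_L(V-E_L) - g_e(V-E_e) - g_i(V-E_i)
dt, T = 0.1, 200.0                              # ms
C, g_L, E_L = 1.0, 0.05, -70.0                  # capacitance and leak (arbitrary units)
E_e, E_i, V_th, V_reset = 0.0, -80.0, -50.0, -60.0
V, spikes = E_L, []
for step in range(int(T/dt)):
    t = step*dt
    g_e = 0.02*(1 + np.sin(2*np.pi*t/50))       # toy excitatory conductance trace
    g_i = 0.015                                 # toy inhibitory conductance
    V += dt*(-g_L*(V-E_L) - g_e*(V-E_e) - g_i*(V-E_i))/C
    if V >= V_th:                               # threshold crossing: spike and reset
        spikes.append(t); V = V_reset
print(len(spikes), "spikes")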
Paolo Puggioni. Bayesian inference of synaptic inputs given an in vivo voltage clamp trace. Voltage clamp recordings in awake animals directly measure the excitatory (or inhibitory) synaptic current flowing in a single neuron. The current results from the summation of thousands of synaptic inputs of variable size per second. Recent methods based on Kalman filtering [Paninski2012, Lankarany2013] give a good estimation of the time-varying synaptic conductance, which is a function of input rate and size. However, inferring how the distribution of input size changes during behaviour can be important for understanding computational mechanisms used by our brain. Here I propose a simple hierarchical model that is able to estimate both the frequency and the probability distribution of the synaptic weights. First, I test the validity of my model on generated data, showing that it is robust to high-frequency noise and to the slow fluctuations in input rate typical of in vivo recordings. Then I apply it to ...
Many hormones are released in pulsatile patterns. This pattern can be modified, for instance by changing pulse frequency, to encode relevant physiological information. Often other properties of the pulse pattern will also change with frequency. How do signaling pathways of cells targeted by these hormones respond to different input patterns? In this study, we examine how a given dose of hormone can induce different outputs from the target system, depending on how this dose is distributed in time. We use simple mathematical models of feedforward signaling motifs to understand how the properties of the target system give rise to preferences in input pulse pattern. We frame these problems in terms of frequency responses to pulsatile inputs, where the amplitude or duration of the pulses is varied along with frequency to conserve input dose. We find that the form of the nonlinearity in the steady state input-output function of the system predicts the optimal input pattern. It does so by selecting an optimal
In our approach, we choose to approximate P from the contribution coming from the most probable trajectory for the potential for each cell i, referred to as V_i*(t). This approximation is exact when the amplitude σ of the noise is small. The determination of V_i*(t) was done numerically by Paninski for one cell in [4]. We have found a fast algorithm to determine V_i*(t) analytically in a time growing linearly with the number of spikes and quadratically with the number of neurons, which allows us to process recordings with tens of neurons easily. The algorithm is based on a detailed and analytical resolution of the coupled equations for the optimal potential V_i*(t) and the associated optimal noise η_i*(t) through (1), and is too complex to be explained in this abstract. ...
Part of an educational physics resource on aerosols, pressure, kinetic theory and electrostatics aimed at 16 to 18 year old students
Unsupervised Learning: Foundations of Neural Computation, by Terrence J. Sejnowski. ISBN/UPC: 9780262581684.
This archive instantiates the single-cell cortical models used in (Aberra et al. 2018) and sets up extracellular stimulation with either a point-current source, to simulate intracortical microstimulation (ICMS), or a uniform E-field distribution, with a monophasic, rectangular pulse waveform in both cases ...
Two limitations of dynamic clamp are entirely technical: it has to be fast and temporally consistent. The time required for reading the membrane potential and calculating the current to inject has to be shorter than the fastest time constant in the real neuron. This becomes problematic when the model is computationally intensive (as with stochastic Markov models). With ever increasing computer processor speeds, the speed limitation of dynamic clamp is constantly being pushed back. The update interval of the dynamic clamp has to be reliable and reproducible. Modern operating systems (OS) such as Linux, Mac OS, and Windows cannot guarantee that a program's requested operation will be performed precisely when it is requested. Operating systems that can guarantee a request is performed within a well-defined time window include Linux with the RTLinux kernel extension, LabVIEW with the real-time module, and DOS. A version of dynamic clamp developed by R. Pinto ...
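The real-time loop at the heart of dynamic clamp is small: sample Vm, evaluate the model current, inject it, and verify the cycle beat the deadline. In the sketch below, read_vm and write_current are invented placeholders for DAQ driver calls, and the conductance is an artificial linear one; real systems need the OS guarantees discussed above.

import time

g_max, E_rev = 10.0, -80.0          # nS, mV: artificial leak-like conductance

def read_vm():                      # placeholder for an analog-input read
    return -65.0

def write_current(i_pa):            # placeholder for an analog-output write
    pass

dt_target = 50e-6                   # 20 kHz update rate
worst = 0.0
for _ in range(1000):
    t0 = time.perf_counter()
    v = read_vm()                   # 1) sample membrane potential
    i = g_max * (v - E_rev)         # 2) compute model current (pA)
    write_current(-i)               # 3) inject (sign convention per amplifier)
    worst = max(worst, time.perf_counter() - t0)
print(f"worst cycle: {worst*1e6:.1f} us (must stay below ~{dt_target*1e6:.0f} us)")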
Figure: Gain modulation and the voltage dependence of channel activation. (A) The firing rate of the model neuron plotted against the average membrane potential.
The cerebral cortex is a highly complex network composed of billions of excitable nerve cells. The coordinated dynamic interactions of these cells underlie our thoughts, memories, and sensory perceptions. A healthy brain carefully regulates its neural excitability to optimize information processing and avoid brain disorders. If excitability is too low, neural interactions are too weak and signals fail to propagate through the brain network. On the other hand, high excitability can result in excessively strong interactions and, in some ca
It is used in synaptic interactions, and in integrate-and-fire models it also leads to the execution of one or more reset statements. Sometimes it can be useful to define additional events, e.g. when an ion concentration in the cell crosses a certain threshold. This can be done with the custom events system in Brian.
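A minimal Brian 2 sketch of such a custom event follows; the equations, the calcium-like variable, and the 0.5 threshold are invented for illustration, while the events=... argument and run_on_event are the documented Brian 2 custom-events API.

from brian2 import ms, NeuronGroup, EventMonitor, run

# A neuron whose 'Ca' variable is bumped at each spike; a custom event fires
# when Ca crosses 0.5, and its handler resets Ca (toy dynamics).
eqs = '''
dv/dt = (1.2 - v)/(10*ms) : 1
dCa/dt = -Ca/(50*ms) : 1
'''
G = NeuronGroup(1, eqs, threshold='v > 1', reset='v = 0; Ca += 0.3',
                events={'ca_high': 'Ca > 0.5'}, method='euler')
G.run_on_event('ca_high', 'Ca = 0')   # statements executed on the custom event
mon = EventMonitor(G, 'ca_high')
run(500*ms)
print(mon.t[:])                       # times at which the ion-threshold event fired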
Boussinot, Guillaume, et al. Achieving realistic interface kinetics in phase field models with a diffusional contrast. Journal article, published in print 2014-06-30.
G. Agarwal, I. H. Stevenson, A. Berényi, K. Mizuseki, G. Buzsáki, F. T. Sommer: Spatially distributed local fields in the hippocampus encode rat position. Science 344 (2014): 626-630. pdf Supplement J. P. Crutchfield, R. G. James, S. Marzen and D. P. Varn, "Understanding and Designing Complex Systems: Response to A framework for optimal high-level descriptions in science and engineering---preliminary report", (2014). arXiv link J. H. Engel, S. B. Eryilmaz, SangBum Kim, M. BrightSky, Chung Lam, H.-L. Lung, B. A. Olshausen, H.-S. P. Wong (2014) "Capacity optimization of emerging memory systems: A Shannon-inspired approach to device characterization," Electron Devices Meeting (IEDM), 2014 IEEE International, pp. 29.4.1-29.4.4, 15-17 Dec. 2014 link A. K. Fletcher and S. Rangan, Scalable Inference for Neuronal Connectivity from Calcium Imaging, Proc. 28th Ann. Conf. Neural Information Processing Systems, NIPS (2014). Harper NS, Scott BH, Semple MN, McAlpine D (2014) The neural code ...
On Thu, May 12, 2005 at 10:47:13AM +0200, Karim Belabas wrote:
> Indeed. As for the ridiculous accuracy of %3 above, we have conflicting
> specifications:
> 1) PARI functions give as precise a result as is possible from the input,
> 2) floating point computations are meant to foster speed by truncating
> operands.
>
> Only 1) is specified in the documentation, 2) is only a general understanding.
> And a rather misleading one as far as PARI is concerned; it is a common
> source of misapprehension to assume that
>
> * realprecision is the relative accuracy used to truncate operands in 2).
> Which it is not: it is used to convert exact objects to inexact ones.
>
> * operands with n digits of accuracy will yield a result with at most the
> same accuracy. Which is wrong: indeed 1 + 1e-50000 may be computed to
> more than 50000 digits of accuracy.

The IEEE754 specification and the MPFR extension say that the correct result is the representable number closest to the actual result, assuming the ...
Spike-timing dependent plasticity is a learning mechanism used extensively within neural modelling. The learning rule has been shown to allow a neuron to find the onset of a spatio-temporal pattern repeated among its afferents. In this thesis, the first question addressed is what does this neuron learn? With a spiking neuron model and linear prediction, evidence is adduced that the neuron learns two components: (1) the level of average background activity and (2) specific spike times of a pattern. Taking advantage of these findings, a network is developed that can train recognisers for longer spatio-temporal input signals using spike-timing dependent plasticity. Using a number of neurons that are mutually connected by plastic synapses and subject to a global winner-takes-all mechanism, chains of neurons can form where each neuron is selective to a different segment of a repeating input pattern, and the neurons are feedforwardly connected in such a way that both the correct stimulus and the ...
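The learning rule in question is typically modeled with the pair-based exponential STDP window; a minimal sketch in its standard textbook form follows, with illustrative amplitudes and time constants rather than the ones fitted in the thesis.

import numpy as np

# Pair-based STDP: potentiation when pre precedes post, depression otherwise.
tau_plus = tau_minus = 20.0        # ms
A_plus, A_minus = 0.01, 0.012      # slight LTD bias, a common stability choice

def stdp_dw(delta_t):
    """Weight change for one pre/post pair, delta_t = t_post - t_pre (ms)."""
    if delta_t > 0:
        return A_plus * np.exp(-delta_t / tau_plus)    # pre before post: LTP
    return -A_minus * np.exp(delta_t / tau_minus)      # post before pre: LTD

for d in (-20, -5, 5, 20):
    print(d, round(stdp_dw(d), 5))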
Information theory provides a natural set of statistics to quantify the amount of knowledge a neuron conveys about a stimulus. A related work (Kennel, Shlens, Abarbanel, & Chichilnisky, 2005) demonstrated how to reliably estimate, with a Bayesian confidence interval, the entropy rate from a discrete, observed time series. We extend this method to measure the rate of novel information that a neural spike train encodes about a stimulus-the average and specific mutual information rates. Our estimator makes few assumptions about the underlying neural dynamics, shows excellent performance in experimentally relevant regimes, and uniquely provides confidence intervals bounding the range of information rates compatible with the observed spike train. We validate this estimator with simulations of spike trains and highlight how stimulus parameters affect its convergence in bias and variance. Finally, we apply these ideas to a recording from a guinea pig retinal ganglion cell and compare results to a ...
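To make the quantity concrete, here is a naive plug-in estimate of the entropy rate of a binarized spike train from word frequencies; the cited Bayesian estimator adds bias correction and confidence intervals, which this sketch deliberately omits.

import numpy as np
from collections import Counter

# Plug-in entropy rate from length-L word frequencies (bits per time bin).
rng = np.random.default_rng(3)
spikes = (rng.random(100000) < 0.1).astype(int)   # toy Bernoulli spike train

def entropy_rate(seq, L=8):
    words = Counter(tuple(seq[i:i+L]) for i in range(len(seq) - L + 1))
    p = np.array(list(words.values()), dtype=float)
    p /= p.sum()
    return -(p * np.log2(p)).sum() / L

print(entropy_rate(spikes))   # ~0.47 bits/bin for an independent p=0.1 train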
This paper describes the performance of a neuronal network model of the input layer 4Cα of macaque V1. This model differs from others in the literature in several ways. (i) It is designed largely from data for the anatomy and physiology of layer 4Cα of macaque (i.e., length scales and patterning of connectivity, and pinwheel centers). (ii) It uses cortical coordinates rather than idealized coordinates as in "ring models" (7, 8, 28) or "near-ring models" (9), whose coordinate labels are angles of orientation preference, rather than cortical locations within the layer. (iii) It has only short-range local inhibition, which is consistent with anatomical data, rather than an inhibition which is explicitly long-range in orientation preference, as is standard for many models (7-9, 31). (iv) It uses membrane potential, driven by synaptic conductances, as the fundamental variables, rather than activities or mean firing rates (7, 8, 32), or a probabilistic "population-density" representation (31, 33, ...
The aim of this work is to introduce and study simple neuron models with a dynamic threshold: an integrate-and-fire model with dynamic threshold and a resonate-and-fire model with dynamic threshold. The latter is a new model that arose from an initial study of the integrate-and-fire model with dynamic threshold. The characteristics of the new model are studied from a mathematical point of view, especially using the dynamical systems theory of non-smooth systems. In this project we study, analytically and computationally, integrate-and-fire models with dynamic threshold that reproduce behaviors observed in real nervous systems, using techniques from non-smooth systems. Our goal is to describe the types of spiking patterns that they can generate in response to steady and periodic current ...
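One common formulation of integrate-and-fire with dynamic threshold (the project's exact equations may differ) raises the threshold at every spike and lets it relax back, producing spike-frequency adaptation, as in this sketch with invented parameters.

import numpy as np

# LIF with dynamic threshold: v integrates input; th jumps at spikes and decays.
dt, T = 0.1, 300.0                        # ms
tau_v, tau_th = 10.0, 80.0
v, th, th0, jump, I = 0.0, 1.0, 1.0, 0.5, 1.3
spikes = []
for k in range(int(T/dt)):
    v += dt * (-v + I) / tau_v
    th += dt * (th0 - th) / tau_th        # threshold relaxes to baseline
    if v >= th:                           # non-smooth event: spike
        spikes.append(k*dt)
        v = 0.0                           # reset potential
        th += jump                        # raise the threshold
print(np.diff(spikes))                    # ISIs lengthen: adaptation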
Find all books by Arathorn, David Y.; Arathorn, D. W. - Map-Seeking Circuits in Visual Cognition: A Computational Mechanism for Biological and Machine Vision. With the book search engine eurobuch.com you can compare antiquarian and new books and order immediately at the best price. 9780804742771
Background. Synaptic plasticity is thought to be the cellular correlate of the formation of memory traces in the brain. Recently, spike-timing dependent plasticity has gained increased interest as a plausible physiological mechanism for the activity-dependent modification of synaptic strength. It might be fundamental for circuit refinement, map plasticity and the explanation of higher brain functions. It is not clear if spike-timing dependent plasticity is a universal learning rule based on simple biophysical mechanisms. The molecular signalling pathways involved are quite diverse and apparently use-dependent. The fundamental question is what determines the molecular machinery at a synaptic contact that translates electrical activity into a change in synaptic strength. Specific Aims. (1) The influence of active dendritic properties, which can result in the generation of local dendritic spikes, on changes in synaptic strength will be studied. They will have an important impact on the local ...
WK7 - Hebbian Learning. CS 476: Networks of Neural Computation, WK7 - Hebbian Learning. Dr. Stathis Kasderidis, Dept. of Computer Science, University of Crete, Spring Semester 2009. Contents: Introduction to Hebbian Learning; Definitions on Pattern Association; Pattern Association Network.
To contribute, see :pencil2: code of contribution. Computational neuroscience is a multidisciplinary science that joins biology/neuroscience, medicine, biophysics, psychology, computer science, mathematics, and statistics to study the nervous system using computational approaches. This list of schools and researchers in computational neuroscience, theoretical neuroscience (and systems neuroscience) aims to give a global perspective of researchers in the field, make it easier to apply to the listed institutions, and also provide a reasonable way to find an advisor. In addition to names of PIs, excerpts of their academic biographies, and links to their publications, many of the researchers are qualified with a small-scale "+/=/- computational" rating. The metric is subjective to the editor of that material, but it generally breaks down as: (+) refers to a researcher the university identifies as a computational neuroscientist, whose bio consistently identifies a significant component of their research ...
The ability to learn and perform statistical inference with biologically plausible recurrent networks of spiking neurons is an important step towards understanding perception and reasoning. Here we derive and investigate a new learning rule for recurrent spiking networks with hidden neurons, combining principles from variational learning and reinforcement learning. Our network defines a generative model over spike train histories, and the derived learning rule has the form of a local Spike Timing Dependent Plasticity rule modulated by global factors (neuromodulators) conveying information about "novelty" on a statistically rigorous ground. Simulations show that our model is able to learn both stationary and non-stationary patterns of spike trains. We also propose one experiment that could potentially be performed with animals in order to test the dynamics of the predicted novelty signal.
Computational Neuroscience, by Jianfeng Feng. ISBN: 9780203494462. Publisher: CRC Press. How does the brain work? After a century of research, we still lack a coherent view of how neurons process signals and control our activities. But as the field of computational neuroscience continues
Many sounds of ecological importance, such as communication calls, are characterised by time-varying spectra. However, most neuromorphic auditory models to date have focused on distinguishing mainly static patterns, under the assumption that dynamic patterns can be learned as sequences of static ones. In contrast, the emergence of dynamic feature sensitivity through exposure to formative stimuli has been recently modeled in a network of spiking neurons based on the thalamocortical architecture. The proposed network models the effect of lateral and recurrent connections between cortical layers, distance-dependent axonal transmission delays, and learning in the form of Spike Timing Dependent Plasticity (STDP), which effects stimulus-driven changes in the pattern of network connectivity. In this paper we demonstrate how these principles can be efficiently implemented in neuromorphic hardware. In doing so we address two principal problems in the design of neuromorphic systems: real-time event-based ...
Here, based on our previous work on linear synaptic filtering [1-3], we build a general theory for the stationary firing response of integrate-and-fire (IF) neurons receiving stochastic inputs filtered by one, two or multiple synaptic channels, each characterized by an arbitrary timescale. The formalism applies to arbitrary IF model neurons, and to arbitrary forms of input noise (i.e., not required to be Gaussian or to have small amplitude), as well as to any form of synaptic filtering (linear or non-linear). The theory determines with exact analytical expressions the firing rate of an IF neuron for long synaptic time constants using the adiabatic approach. The correlated spiking (cross-correlation function) of two neurons receiving common as well as independent sources of noise is also described (see figure 1). The theory is exemplified using leaky, quadratic and noise thresholded IF neurons (LIF, QIF, NTIF). Although the adiabatic approach is exact when at least one of the synaptic timescales ...
Computational neuroscience is the theoretical study of the brain to uncover the principles and mechanisms that guide the development, organization, information processing, and mental functions of the nervous system. Although not a new area, it is only recently that enough knowledge has been gathered to establish computational neuroscience as a scientific discipline in its own right.
The brain orchestrates perceptions, thoughts, and actions through the ensemble spiking activity of its neurons. Biological processes underlying spike transformations across brain regions, including synaptic transmissions, dendritic integrations, and spike generations, are highly nonlinear dynamical processes and are often nonstationary. For example, it is well established that certain forms of plasticity, such as long-term potentiation (LTP) and long-term depression (LTD), occur in response to specific input patterns, and plasticity is manifested as a change in input-output function that can be viewed as a system nonstationarity. Quantitative studies of how such functions of information transmission across brain regions evolve during behavior are required in order to understand the brain. In this project, we developed a nonstationary modeling framework for the multiple-spike activity propagations between brain regions. In this study, in order to analyze the nonlinear dynamics underlying ...
Multistability remains an intriguing phenomenon for neuroscience. The functional roles of multistability in neural systems are not well understood. On the other hand, its pathological roles are widely discussed in recent studies. Knowledge of the dynamic mechanisms supporting multistability opens new horizons for medical applications. It has been intensively targeted in the search for new treatments of medical conditions caused by malfunction of the dynamics of the CNS. Sudden infant death syndrome, epilepsy and Parkinson's disease are examples of such conditions. Recent progress in the modern technology of computer-brain interfaces based on real-time systems makes it possible to use complex feedback stimulation algorithms to suppress a pathological regime co-existing with the normal one. It still remains a challenge to identify the scenarios leading to multistability in neuronal dynamics and to discuss the potential roles of multistability in the operation of the central nervous ...
The GL model was defined in 2013 by mathematicians Antonio Galves and Eva Löcherbach.[1] Its inspirations included Frank Spitzer's interacting particle system and Jorma Rissanen's notion of stochastic chains with memory of variable length. Another work that influenced this model was Bruno Cessac's study of the leaky integrate-and-fire model, which was itself influenced by Hédi Soula.[4] Galves and Löcherbach referred to the process that Cessac described as "a version in a finite dimension" of their own probabilistic model. Prior integrate-and-fire models with stochastic characteristics relied on adding a noise term to simulate stochasticity.[5] The Galves-Löcherbach model distinguishes itself because it is inherently stochastic, incorporating probabilistic measures directly in the calculation of spikes. It is also a model that may be applied relatively easily, from a computational standpoint, with a good ratio between cost and efficiency. It remains a non-Markovian model, since the probability ...
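A discrete-time caricature of a Galves-Löcherbach-style network (connectivity, the spiking probability phi, and all constants invented for illustration) shows the two distinctive ingredients: intrinsically probabilistic spikes drawn with probability phi(U), and a reset that erases the neuron's memory, making the memory length variable.

import numpy as np

# Each neuron spikes with probability phi(U_i); a spike resets U_i, so the
# process remembers its inputs only back to the neuron's last spike.
rng = np.random.default_rng(1)
N, T, beta = 5, 40, 0.8
W = rng.normal(0, 0.3, size=(N, N))
np.fill_diagonal(W, 0.0)
phi = lambda u: 1.0/(1.0 + np.exp(-(u - 1.0)))   # spike probability in (0, 1)
U = np.zeros(N)
for t in range(T):
    spiked = rng.random(N) < phi(U)              # stochasticity is intrinsic
    U = np.where(spiked, 0.0, beta*U)            # reset erases memory at spikes
    U = U + W @ spiked.astype(float) + 0.4       # leaked potential + inputs + drive
    if spiked.any():
        print(t, np.nonzero(spiked)[0])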
A system and method for detecting aberrant network behavior. One embodiment provides a system for detecting aberrant network behavior behind a network access gateway comprising a processor, a first network interface coupled to the processor, a second network interface coupled to the processor, a storage media accessible by the processor and a set of computer instructions executable by the processor. The computer instructions can be executable to observe network communications arriving at the first network interface from multiple clients and determine when the traffic of a particular client is indicative of malware infection or other hostile network activity. If the suspicious network communication is determined to be of a sufficient volume, type, or duration, the computer instructions can be executable to log such activity to the storage media, to notify an administrative entity via either the first network interface or the second network interface, or to perform
Correlated neuronal activity and the flow of neural information. Jaeseung Jeong, Ph.D., Department of Bio and Brain Engineering. Nonlinear information transmission of the cerebral cortex. Conventional measure: cross-correlation.
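The conventional cross-correlation measure mentioned here is easy to state in code: histogram the pairwise spike-time differences between two trains. In this toy sketch, train B is built to follow train A at roughly a 5 ms lag, and the correlogram peak recovers that.

import numpy as np

# Cross-correlogram of two spike trains: histogram of t_B - t_A differences.
rng = np.random.default_rng(2)
t_a = np.sort(rng.uniform(0, 10.0, 200))            # seconds: train A
mask = rng.random(t_a.size) < 0.5                   # half of A's spikes drive B...
t_b = np.sort(np.concatenate([
    t_a[mask] + 0.005 + 0.002*rng.normal(size=mask.sum()),  # ...at ~5 ms lag
    rng.uniform(0, 10.0, 100)]))                    # plus unrelated spikes
lags = (t_b[None, :] - t_a[:, None]).ravel()        # all pairwise differences
window = 0.05                                       # +/- 50 ms correlogram window
hist, edges = np.histogram(lags[np.abs(lags) <= window], bins=25)
centers = 0.5*(edges[:-1] + edges[1:])
print(f"correlogram peak at {centers[np.argmax(hist)]*1e3:+.1f} ms")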
We present a peripheral vision model inspired by the cortical architecture discovered by Hubel and Wiesel. As with existing cortical models, this model contains alternating layers of simple cells, which employ tuning functions to increase specificity, and complex cells, which pool over simple cells to increase invariance. To extend the traditional cortical model, we introduce the option of eccentricity-dependent pooling and tuning parameters within a given model layer. This peripheral vision system can be used to model physiological data where receptive field sizes change as a function of eccentricity. This gives the user flexibility to test different theories about filtering and pooling ranges in the periphery. In a specific instantiation of the model, pooling and tuning parameters can increase linearly with eccentricity to model physiological data found in different layers of the visual cortex. Additionally, it can be used to introduce pre-cortical model layers such as retina and LGN. We have ...
This 12-month course is designed to provide you with in-depth training in the core problems in systems neuroscience, and will develop your understanding of the disciplines and techniques used to address these problems, such as computer simulation modelling, data visualisation and neuroanatomy. In semester one you'll build on your existing knowledge, giving you a thorough understanding of the fundamentals of neuroscience, computational neuroscience and mathematical modelling. Once you've developed a solid foundation in these areas at the core of systems neuroscience, semester two will be devoted to advanced modules where you'll tailor your learning and choose to specialise in one of two distinct routes of study: pathway 1 or pathway 2. Pathway 1 will train you in methods for visualising brain networks, including electrophysiology, optical imaging, and functional magnetic resonance imaging, as well as experimental protocols. Pathway 2 will build on your theoretical skills in this area of ...
The totality of mechanisms involved in the process of information transmission is immensely complex, comprised as it is of biochemical, biophysical, and ultrastructural elements. Moreover, these are...
The study is designed to identify specific patterns of brain functional activity associated with chronic, moderate to severe tinnitus through the use of resting-state MEG scans. Robust patterns identified in this study will be used as a biomarker for subsequent clinical evaluation of experimental drug treatments for tinnitus. This study will conduct MEG scans on approximately 30 to 75 subjects with tinnitus and approximately 15 healthy control subjects. MEG scans will be obtained for each subject following screening, clinical and tinnitus evaluations. A subset of 6 subjects from the tinnitus cohort will be invited to undergo evoked auditory assessment during an extended MEG scan session to identify cortical regions that respond to the auditory stimulus. These six subjects also will be evaluated with a single structural MRI scan to support high-resolution mapping of the localized cortical regions. MEG data will be analyzed to identify patterns of brain activity that are specifically associated ...
Biological neurons are good examples of a threshold device; this is why neural systems are a natural focus when looking for realizations of Stochastic Resonance (SR) and Spatiotemporal Stochastic Resonance (STSR) phenomena. There are two different ways to simulate neural systems: one based on differential equations, the other based on a simple threshold model. In this talk the effect of noise on neural systems will be discussed using both ways of modelling. The results so far suggest that SR and STSR do occur in models of neural systems. However, how significant is the role played by these phenomena, and what implications they might have for neurobiology, is still a question.
It is our great pleasure to welcome you to the 11th International Conference on Neural Information Processing (ICONIP 2004) to be held in Calcutta. ICONIP 2004 is organized jointly by the Indian Stati
20) Spike-timing dependent plasticity: Getting the brain from correlation to causation (Levy - 1983, Sakmann - 1994, Bi & Poo - 1998, Dan - 2002). Hebb's original proposal was worded as such: "When an axon of cell A is near enough to excite a cell B and repeatedly or persistently takes part in firing it, some growth process or metabolic change takes place in one or both cells such that A's efficiency, as one of the cells firing B, is increased." [emphasis added]. The phrase "takes part in firing" implies causation of B's activity via A's activity, not simply a correlation of the two. There are several ways to go beyond correlation to infer causation. One method is to observe that one event (e.g., cell A's activity) comes just before the caused event (e.g., cell B's activity). In 1983 Levy showed with hippocampal slices that electrically stimulating cell A to fire before cell B will cause long-lasting strengthening of the synapse from cell A to cell B. However, when the opposite occurs, and ...
Scientists from the Max Planck Institute for Biological Cybernetics have now developed a novel multimodal methodology called "neural event-triggered functional magnetic resonance imaging" (NET-fMRI) and presented the very first results obtained using it in experiments with both anesthetized and awake, behaving monkeys. The new methodology uses multiple-contact electrodes in combination with functional magnetic resonance imaging (fMRI) of the entire brain to map widespread networks of neurons that are activated by local, structure-specific neural events. Many invasive studies in nonhuman primates and clinical investigations in human patients have demonstrated that the hippocampus, one of the oldest, most primitive brain structures, is largely responsible for the long term retention of information regarding places, specific events, and their contexts, that is, for the retention of so-called declarative memories. Without the hippocampus a person may be able to learn a manual task over a period of ...
RA1 will take a leading role in the development of the neural circuit model. This will be a spiking neural model that copies one-to-one the neurons in the olfactory learning pathway of Drosophila larvae. It will be interfaced to a behavioural simulation, and to a robot model, and will be used to test hypotheses about the key mechanisms and locations of synaptic modification, and how the acquisition of associations interacts with behavioural control. Predictions from the model will be tested by our project partners using neurogenetic methods to measure and alter neural function in freely behaving animals ...
In particular, we would like to understand how neurons are optimized for their computational task. Therefore, we investigate the relationship between intrinsic neuronal properties and the signaling mechanisms shaped by them, as well as their relevance for network behavior. The aim is to link the molecular level of ion channels to characteristics of neuronal firing, focusing on topics like information transfer, synchronization, energy efficiency, and temperature robustness. To identify computational principles, conductance-based and phenomenological models are used in combination with data evaluation techniques and analytical approaches. We closely cooperate with experimental groups in order to test theoretical predictions ...
schmitt at cs.unc.edu (Charles Schmitt) wrote:
> Can anyone suggest what would be good review articles or book (chapters)
> describing how measured postsynaptic spike trains have been observed
> to be dependent on presynaptic spike trains. What I'm searching for
> is to strengthen my knowledge of what statistical measures of spike trains,
> such as mean frequency, inter-spike interval, etc..., have been used
> to describe postsynaptic spike trains and how these statistics have any
> observed dependency on the same presynaptic statistics. I'm particularly
> interested in such observed dependencies in visual cortex.
> Thanks, Charlie.

The initial work here was done for the spinomotor system. A good paper is by Cope, Fetz, and Matsumura, "Cross-correlation assessment of synaptic strength of single Ia fibre connections with triceps surae motorneurons in cats," J. Physiol., 390:161-188, 1987. This is not an easy read, at first, but is worth it. (There is an older literature for this system: search Kirkwood, ...
In my ongoing Ph.D. project, titled "Exploring the effective communication of brain network dynamics under different states", we carried out the first research to demonstrate significant brain-wide integration changes during three cognitive tasks using intracranial EEG [1]. In a second study, we provided the first demonstration of a whole-brain model integrating anatomical connectivity, functional activity and neuromodulator receptor density from multimodal imaging of healthy human participants [2], and in a third study, we investigated the relevant timescale for understanding spatiotemporal dynamics across the whole brain. We introduced a novel way to generate whole-brain neural dynamical activity at the millisecond scale from fMRI signals, and using the independent measures of entropy and hierarchy to characterize the richness of the dynamical repertoire, we showed that both methods find a similar optimum at a timescale of around 200 ms in resting state and in task data [3]. 1. Cruzat, J., Deco, ...
When choosing between two options, correlates of their value are represented in neural activity throughout the brain. Whether these representations reflect activity that is fundamental to the computational process of value comparison, as opposed to other computations covarying with value, is unknown. We investigated activity in a biophysically plausible network model that transforms inputs relating to value into categorical choices. A set of characteristic time-varying signals emerged that reflect value comparison. We tested these model predictions using magnetoencephalography data recorded from human subjects performing value-guided decisions. Parietal and prefrontal signals matched closely with model predictions. These results provide a mechanistic explanation of neural signals recorded during value-guided choice and a means of distinguishing computational roles of different cortical regions whose activity covaries with value.
Retinal waves are bursts of activity occurring spontaneously in the developing retina of vertebrate species, contributing to the shaping of the visual system organization, and disappearing shortly after birth. Waves during development are a transient process which evolves dynamically, exhibiting different features. Based on our previous modelling work [1,2], we now propose a classification of stage II retinal wave patterns as a function of acetylcholine coupling strength, and a possible mechanism for wave generation. Our model predicts that spatiotemporal patterns evolve upon maturation or pharmacological manipulation and that waves emerge from a completely homogeneous state without the need for variability of a neural population. [Poster panels: Context & Motivation; SACs dynamically change their synaptic coupling upon maturation; coupled cholinergic SACs [3]; cholinergic current evolution upon maturation [4]; increase of calcium load during bursting; calcium controls the sAHP.]
Systems Neuroscience, edited by Albert C. H. Yu and Lina Li. This edition of Advances in Neurobiology brings together experts in the emerging field of Systems Neuroscience to present an overview of this area of research. Topics covered include: how different ...
The course will consist of lectures in the morning and matching exercises using Matlab and Mathematica. Experience with these software packages will be helpful but is not required for registration. The participants should have a basic understanding of scientific programming ...
A generalization of the complex numbers and the quaternions, Clifford algebras provide a powerful tool for a variety of fields, like geometric computing, physics, computer vision, and robotics. Given a quadratic space over a particular field, I will discuss the defining universal property for Clifford algebras and demonstrate how to construct such an algebra from that quadratic space. Clifford algebras have a useful application to neural networks and have inspired the spinor neuron model, which allows orthogonal transformations to be computed in a manner faster and more robust than with real neurons. In fact, some transformations (like Moebius transformations) not learnable by real neurons can be learned by Clifford neurons. During the talk, I'll cover this application in more detail as motivation for the "abstract nonsense." This talk is intended for a general math audience, and I will define several basic algebraic concepts which will be used again in the next two GSAC talks ...
By Benda, J. The single neuron is the elementary component of information processing in nervous systems. In this thesis several properties of the dynamics of spike generation are investigated theoretically as well as experimentally. Phase oscillators of different complexity are introduced as models to predict the timing of spikes. The neuron's intensity-response curve is used as a basic parameter in these models to make them easily applicable to real neurons. As a second important aspect of the spiking dynamics, the neuron's phase-resetting curve is used to extend the models. The phase oscillators turn out to be a good approximation of the spiking behavior of a neuron as long as it is stimulated in its super-threshold regime. However, comparison with conductance-based models shows that these models, as well as all other one-dimensional models including the common integrate-and-fire model, fail if the neuron is ...
Line attractors in neural networks have been suggested to be the basis of many brain functions, such as working memory, oculomotor control, head direction, locomotion, and sensory processing. I will discuss how, by incorporating pulse gating into feedforward neural networks, graded information may be propagated. This propagation can be viewed as a line attractor in the firing rate of transiently synchronous populations. I will show how pulse-gated graded information transfer persists in spiking neural networks and is robust to intrinsic and extrinsic noise. Then, using a Fokker-Planck approach, I will show that the gradedness of rate amplitude information transmission in pulse-gated networks is associated with the existence of a cusp catastrophe, and that the slow (ghost) dynamics near the fold of the cusp underlies the robustness of the line attractor. Understanding the dynamical aspects of this cusp catastrophe allows us to show how line attractors can persist in biologically realistic ...
The NetCon's weight should affect EPSP amplitude and nothing peculiar should happen. I just built a toy network that consisted of a presynaptic cell (a NetStim that generated a single event at 1 ms) and a postsynaptic cell (a single compartment with 100 um2 surface area and a pas mechanism with e_pas = -70, to which a single bg2pyr is attached). All parameters of the bg2pyr have the default values specified in the NMODL file that defines it. The NetStim sends events to the bg2pyr through a NetCon that has delay = 1 ms and weight = 0.001. A single event from the NetStim at t = 1 ms elicits an EPSP in the postsynaptic cell that depolarizes the cell to a maximum of about -68.3 mV at 11 ms (EPSP amplitude about 1.7 mV). Cutting the NetStim's weight by a factor of 2 reduces the maximum v to about -69.2 mV (EPSP amplitude about 0.8 mV), pretty much as expected ...
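For readers who want to reproduce the toy network, a sketch in NEURON's Python interface is below; a generic ExpSyn stands in for the model-specific bg2pyr mechanism, so the exact EPSP shape and peak time will differ, but the linear scaling of EPSP amplitude with NetCon weight is the point being checked.

from neuron import h

# Toy network: NetStim -> NetCon -> synapse on a passive single compartment.
h.load_file('stdrun.hoc')
soma = h.Section(name='soma')
soma.L = soma.diam = 5.64           # cylinder with ~100 um2 surface area
soma.insert('pas')
soma.e_pas = -70
syn = h.ExpSyn(soma(0.5))           # stand-in synapse; bg2pyr is model-specific
syn.e, syn.tau = 0, 2
stim = h.NetStim()
stim.number, stim.start = 1, 1      # a single presynaptic event at t = 1 ms
nc = h.NetCon(stim, syn)
nc.delay, nc.weight[0] = 1, 0.001
v = h.Vector().record(soma(0.5)._ref_v)
h.finitialize(-70)
h.continuerun(30)
print(f"peak EPSP: {max(v) + 70:.2f} mV")   # halving the weight should roughly halve this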
What is the primary factor enabling artificial intelligence to resemble the human brain? There are many possible answers. We show an artificial unit cell emulating the biological neuron, i.e. motion of ions in the membrane and integration of multiple input/output signals biased by external stimuli.
This course emphasizes practical issues that are key to the most productive use of NEURON, an advanced simulation environment for realistic modeling of biological neurons and neural circuits. Through lectures and live computer demonstrations, we will address topics that include the following: ...
Systematic and coordinated variations in morphology and connectivity can structurally tune a microcircuit's computation, but non-systematic variability also exists, imparting connection noise that potentially limits processing performance.
Spiking and bursting patterns of neurons are characterized by a high degree of variability. A single neuron can demonstrate endogenously various bursting patterns, changing in response to external dis
The field of Computational Neuroscience is best described as a mathematical approach to the study of neural systems, reducing them to a set of computational tasks. Computer models are an important part of this ...
Beginning in 1999, research began to emerge confirming Julian Jaynes's neurological model for the bicameral mind: fMRI studies showing a right/left temporal...
A highly accurate model of how neurons behave when performing complex movements could aid in the design of robotic limbs which behave more realistically.
Researchers at the California Institute of Technology have developed an autonomous molecular machine comprising a single strand of DNA that can move around, collect specific molecules and sort them into predefined locations. Furthermore, this nanobot has been designed in such a way that it requires no external source of energy or instruction.
An array of weighted summation circuits, N in number, each generate a weighted sum response to the same plurality of input signals, M in number. Each weighted summation circuit includes at least one corresponding capacitive element for determining the weighting of each of the input signals within that weighted summation circuit. At least one corresponding capacitive element is of a programmable type having its capacitance value determined in accordance with the bits of a digital word received at a control word port thereof. The array of weighted summation circuits are preferably constructed in integrated circuit form together with an interstitial memory having respective word storage elements for temporarily storing the digital words applied to the control word ports of nearby capacitive elements in the integrated circuitry.
Researchers from Japan's ATR Computational Neuroscience Laboratories have developed new brain analysis technology that can reconstruct the images inside a
This volume is a natural continuation of the book Algebraic Renormalization, Perturbative Renormalization, Symmetries and Anomalies, by O Piguet and S P Sorella, with the aim of applying the algebraic renormalization procedure to gauge field models quantized in nonstandard gauges. The main ingredient of the algebraic renormalization program is the quantum action principle, which allows one to control in a unique manner the breaking of a symmetry induced by a noninvariant subtraction scheme. In particular, the volume studies in-depth the following quantized gauge field models: QED, Yang-Mills theories and topological models (the Chern-Simons and the BF model) in the context of axial-like gauges.
Independent coding without synaptic coordination explains complex sequences of population activity observed during theta states and maximizes the number of distinct environments that can be encoded through population theta sequences.
I had a medical episode two weeks ago and we still don't know what happened. I'm currently 36 weeks pregnant, and one night very suddenly I developed an excruciating pain in my side, just under my left breast and wrapping ...
Frequency multipliers having corresponding methods and multifunction radios comprise: N multipliers, wherein N is an integer greater than one; wherein the multipliers are connected in series such that each of the multipliers, except for a first one of the multipliers, is configured to mix a periodic input signal with an output of another respective one of the multipliers; wherein the first one of the multipliers is configured to mix the periodic input signal with the periodic input signal.
New findings challenge the existing dogma that neurons release fixed amounts of chemical signal at any one time, and could have implications for brain disorders including Parkinson's disease and schizophrenia.
Walter Neumann [2005-05-12 07:31]:
> On Wed, 11 May 2005, Igor Schein wrote:
>> \\ ver 2.2.9
>> ? for(k=1,10,print(k precision(erfc(2^k)) precision(erfc(-2^k))))
>> ...
>> 10 455407 455446
>> \\ ver 2.2.10
>> ? for(k=1,10,print(k precision(erfc(2^k)) precision(erfc(-2^k))))
>> ...
>> 10 38 455446
>
> The second (2.2.10) looks better to me:
>
> GP/PARI CALCULATOR Version 2.2.11 (development CHANGES-1.1205)
>
> ? erfc(2^10)
> %1 = 9.342620665669385261706140592 E-455395
> ? precision(%)
> %2 = 28
> ? erfc(-2^10)
> %3 = 2.000000000000000000000000000
> ? precision(%)
> %4 = 455427
> ? 2-%3
> %5 = 9.342620665669385261706140592 E-455395
> ? precision(%)
> %6 = 38

Indeed. As for the ridiculous accuracy of %3 above, we have conflicting specifications: 1) PARI functions give as precise a result as is possible from the input, 2) floating point computations are meant to foster speed by truncating operands. Only 1) is specified in the documentation, 2) is only a general understanding. And a rather ...
Representative input-output curves between P1 and P9. A: P2 cell, example individual traces and input-output curve in the VCN-LSO pathway. ...
Now use the base circuits to do an add without carry, as sketched below: if neither input is on, the binary sum is zero and the LED should be off. If either input (but not both) is on, the sum is one and the LED should be lit. If both inputs are on, the sum is two; since two in binary is 10, the LED stays unlit, as we are throwing out the carry ...
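The behavior described here is the sum bit of a half adder: the LED computes A XOR B while the carry, A AND B, is discarded. A minimal truth-table sketch in Python (the function name sum_led is ours):

    def sum_led(a, b):
        """True if the LED should be lit: exactly one of the two inputs is on."""
        return bool(a) != bool(b)   # XOR

    for a in (0, 1):
        for b in (0, 1):
            total = a + b           # 0, 1, or 2 (binary 10)
            print(f"A={a} B={b}: sum bit={total % 2}, discarded carry={total // 2}, "
                  f"LED={'on' if sum_led(a, b) else 'off'}")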
Neurons in the dorsal nucleus of the lateral lemniscus (DNLL) receive excitatory and inhibitory inputs from the superior olivary complex (SOC) and convey GABAergic inhibition to the contralateral DNLL and the inferior colliculi. Unlike the fast glycinergic inhibition in the SOC, this GABAergic inhibition outlasts auditory stimulation by tens of milliseconds. Two mechanisms have been postulated to explain this persistent inhibition. One, an "integration-based" mechanism, suggests that postsynaptic excitatory integration in DNLL neurons generates prolonged activity; the other favors the synaptic time course of the DNLL output itself. The feasibility of the integration-based mechanism was tested in vitro in DNLL neurons of Mongolian gerbils by quantifying cellular excitability and synaptic input-output functions (IO-Fs). All neurons showed sustained firing and generated a near-monotonic IO-F in response to current injection. From synaptic stimulation, we estimate that activation of approximately five ...
NIDA T90 Grant: Project Information. This undergraduate and graduate training program in computational neuroscience draws on faculty mentors distributed across many departments and schools, including Physiology and Biophysics, Biological Structure, Computer Science and Engineering, Applied Math, Biology, Psychology, and Bioengineering. Support for undergraduate and graduate education and research will foster the ongoing growth of this area; enhance interaction between theorists and experimentalists; expand and integrate coursework in quantitative approaches to neuroscience; enhance interactions between undergraduate and graduate students; enhance opportunities for undergraduate research; and draw the community together across campus to strengthen existing interdisciplinary exchange and collaboration. The undergraduate training program will establish a two-year sequence in computational neuroscience for students from neurobiology or from a computational major, who will take a common core ...
Xu J, Clancy CE (2008). Ionic mechanisms of endogenous bursting in CA3 hippocampal pyramidal neurons: a model study. A critical property of some neurons is burst firing, which in the hippocampus plays a primary role in reliable transmission of electrical signals. However, bursting may also contribute to synchronization of electrical activity in networks of neurons, a hallmark of epilepsy. Understanding the ionic mechanisms of bursting in a single neuron, and how mutations associated with epilepsy modify these mechanisms, is an important building block for understanding the emergent network behaviors. We present a single-compartment model of a CA3 hippocampal pyramidal neuron based on recent experimental data. We then use the model to determine the roles of primary depolarizing currents in burst generation. The single-compartment model incorporates accurate representations of sodium (Na+) channels (Nav1.1) and T-type calcium ...
The reliability and temporal precision of signal propagation between neurons is a major constraint for different coding strategies in neuronal networks. In systems that rely on rate coding, input-output functions of neurons are classically described as ratios of mean firing rates, and the precise timing of individual action potentials is not considered a meaningful parameter (Shadlen and Newsome, 1994, 1998). In these systems, synchrony of presynaptic action potentials and reliable synaptic transmission have even been suggested to degrade the information content of the postsynaptic spike train (Zador, 1998). For the functioning of a temporal code in neuronal networks, on the other hand, the precision and reliability of synaptic integration are a prerequisite (Abeles, 1991; König et al., 1996; Mainen and Sejnowski, 1995; Nowak et al., 1997; Roy and Alloway, 2001), and without exact spike timing in the millisecond range, synchronous activity among neurons that putatively form a functional cell ...
Even single neurons have complex biophysical characteristics and can perform computations (e.g. [19]). Hodgkin and Huxley's original model employed only two voltage-sensitive currents (voltage-sensitive ion channels are glycoprotein molecules that extend through the lipid bilayer, allowing ions to traverse the axolemma under certain conditions): the fast-acting sodium current and the delayed-rectifier potassium current. Though successful in predicting the timing and qualitative features of the action potential, the model nevertheless failed to predict a number of important features such as adaptation and shunting. Scientists now believe that there is a wide variety of voltage-sensitive currents, and the implications of the differing dynamics, modulations, and sensitivities of these currents are an important topic of computational neuroscience.[20] The computational functions of complex dendrites are also under intense investigation. There is a large body of literature regarding how different currents interact ...
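For concreteness, here is a minimal forward-Euler sketch of the two-current Hodgkin-Huxley model with standard textbook parameters (voltages shifted so rest sits near -65 mV); it illustrates the model discussed above and is not anyone's published code.

    import numpy as np

    C, g_Na, g_K, g_L = 1.0, 120.0, 36.0, 0.3      # uF/cm^2, mS/cm^2
    E_Na, E_K, E_L = 50.0, -77.0, -54.387          # reversal potentials (mV)

    # Standard HH rate functions (1/ms); V in mV
    def a_m(V): return 0.1 * (V + 40) / (1 - np.exp(-(V + 40) / 10))
    def b_m(V): return 4.0 * np.exp(-(V + 65) / 18)
    def a_h(V): return 0.07 * np.exp(-(V + 65) / 20)
    def b_h(V): return 1.0 / (1 + np.exp(-(V + 35) / 10))
    def a_n(V): return 0.01 * (V + 55) / (1 - np.exp(-(V + 55) / 10))
    def b_n(V): return 0.125 * np.exp(-(V + 65) / 80)

    dt, T, I_ext = 0.01, 50.0, 10.0                # ms, ms, uA/cm^2
    V, m, h, n = -65.0, 0.05, 0.6, 0.32            # rest state, near steady gating
    for step in range(int(T / dt)):
        I_Na = g_Na * m**3 * h * (V - E_Na)        # fast sodium current
        I_K = g_K * n**4 * (V - E_K)               # delayed-rectifier potassium
        I_L = g_L * (V - E_L)                      # leak
        V += dt * (I_ext - I_Na - I_K - I_L) / C
        m += dt * (a_m(V) * (1 - m) - b_m(V) * m)
        h += dt * (a_h(V) * (1 - h) - b_h(V) * h)
        n += dt * (a_n(V) * (1 - n) - b_n(V) * n)
    print(f"final V = {V:.2f} mV")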
My original area of expertise is the theory of applied dynamical systems and global bifurcations. I study dynamics and their origins both in phenomenological systems of diverse kinds and in exact models from the life sciences. Of special interest to me is the newly emergent cross-disciplinary field known as mathematical neuroscience, whose scope includes nonlinear models of individual neurons and networks. In-depth analysis of such systems requires the development of advanced mathematical tools paired with sophisticated computations. I derive models and create bifurcation toolkits for studying a stunning array of complex activities, such as the multistability of individual neurons and the polyrhythmic bursting patterns discovered in multifunctional central pattern generators governing vital locomotor behaviors of animals and humans ...
Computational modeling is a useful method for generating hypotheses about the contributions of impaired neurobiological mechanisms, and their interactions, to psychopathology. Modeling is being increasingly used to further our understanding of schizophrenia, but to date, it has not been applied to questions regarding the common perceptual disturbances in the disorder. In this article, we model aspects of low-level visual processing and demonstrate how this can lead to testable hypotheses about both the nature of visual abnormalities in schizophrenia and the relationships between the mechanisms underlying these disturbances and psychotic symptoms. Using a model that incorporates retinal, lateral geniculate nucleus (LGN), and V1 activity, as well as gain control in the LGN, homeostatic adaptation in V1, lateral excitation and inhibition in V1, and self-organization of synaptic weights based on Hebbian learning and divisive normalization, we show that (a) prior data indicating ...
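Of the mechanisms listed, divisive normalization has a particularly compact form; the sketch below divides each unit's driven response by the pooled activity of the population (the exponent and semi-saturation constant are illustrative choices, not the paper's values).

    import numpy as np

    def divisive_normalization(drive, sigma=0.1, n=2.0):
        """drive: array of feedforward inputs; returns normalized responses.
        Each response is x_i^n / (sigma^n + sum_j x_j^n)."""
        powered = np.power(np.maximum(drive, 0.0), n)
        return powered / (sigma**n + powered.sum())

    print(divisive_normalization(np.array([0.2, 0.8, 0.4])))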
Authors: Vlad MO, Morán F, Popa VT, Szedlacsek SE, Ross J. Published in: Proc Natl Acad Sci U S A, 104(12), p.4798-803, 2007. Abstract: We give a functional generalization of fractal scaling laws applied to response problems as well as to probability distributions. We consider excitations and responses, which are functions of a given state vector. Based on scaling arguments, we derive a general nonlinear response functional scaling law, which expresses the logarithm of a response at a given state as a superposition of the values of the logarithms of the excitations at different states. Such a functional response law may result from the balance of different growth processes, characterized by variable growth rates, and it is the first-order approximation of a perturbation expansion similar to the phase expansion. Our response law is a generalization of the static fractal scaling law and can be applied to the study of various problems from physics, chemistry, and biology. We consider some ...
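The verbal statement of the response law admits a direct rendering in symbols; the kernel notation below is our illustration, not the paper's own. With excitation E and response R defined over states x,

    \ln R(\mathbf{x}) \;=\; \int K(\mathbf{x},\mathbf{x}')\,\ln E(\mathbf{x}')\,d\mathbf{x}'

which reduces to the static fractal scaling law R \propto E^{\alpha} in the special case K(x, x') = \alpha\,\delta(x - x').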
In this thesis I present novel mechanisms for certain computational capabilities of the cerebral cortex, building on the established notion of attractor memory. A sparse binary coding network for generating efficient representations of sensory input is presented. It is demonstrated that this network model closely reproduces receptive field shapes seen in primary visual cortex and that its representations are efficient with respect to storage in associative memory. I show how an autoassociative memory, augmented with dynamical synapses, can function as a general sequence-learning network. I demonstrate how an abstract attractor memory system may be realized on the microcircuit level, and how it may be analyzed using tools similar to those used experimentally. I derive some predictions from the hypothesis that the macroscopic connectivity of the cortex is optimized for attractor memory function. I also discuss methodological aspects of modelling in computational neuroscience.
In this study, we first derived a modified theta model that possesses voltage-dependent dynamics and appropriate forms and strengths of synaptic interactions. These properties are not incorporated into the conventional theta model, which is the normal form for the saddle-node-on-invariant-circle (SNIC) bifurcation [15]. Unlike related integrate-and-fire models with conductance-based synapses, the modified theta model is continuous, with no resetting. By letting the number of neurons tend to infinity, we derived a hybrid PDE/ODE system for the coupled neurons and their synaptic gates. The PDE for the population dynamics has simple periodic boundary conditions and, because it is continuous, requires no special methods to solve. Furthermore, solutions to the discretized PDE can be numerically continued, and bifurcations are easily detected using AUTO or other packages. Thus, we were able to find the parameter regions for macroscopic gamma oscillations and to show that these arise via a supercritical Hopf bifurcation. We found ...
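The conventional theta model mentioned above is simple enough to integrate in a few lines; this sketch shows only the single-neuron normal form dtheta/dt = (1 - cos theta) + (1 + cos theta) I, not the study's voltage-dependent modification, and the parameter values are illustrative.

    import numpy as np

    dt, T, I = 0.001, 20.0, 0.1     # time step, duration, drive (I > 0 -> tonic spiking)
    theta, spikes = -np.pi / 2, []
    for step in range(int(T / dt)):
        theta += dt * (1 - np.cos(theta) + (1 + np.cos(theta)) * I)
        if theta > np.pi:           # theta crossing pi marks a spike
            spikes.append(step * dt)
            theta -= 2 * np.pi      # wrap around the circle; no hard reset of state
    print("spike times:", np.round(spikes, 2))
    # Interspike interval approaches pi / sqrt(I) near the SNIC bifurcation.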
Synaptic noise due to intense network activity can have a significant impact on the electrophysiological properties of individual neurons. This is the case for the cerebral cortex, where ongoing activity leads to strong barrages of synaptic inputs, which act as the main source of synaptic noise affecting neuronal dynamics. Here, we characterize the subthreshold behavior of neuronal models in which synaptic noise is represented by either additive or multiplicative noise, described by Ornstein-Uhlenbeck processes. We derive and solve the Fokker-Planck equation for this system, which describes the time evolution of the probability density function of the membrane potential. We obtain an analytic expression for the membrane potential distribution at steady state and compare this expression with the subthreshold activity obtained in Hodgkin-Huxley-type models with stochastic synaptic inputs. The differences between multiplicative and additive noise models suggest that multiplicative noise is adequate to ...
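As a concrete companion to the additive-noise case, here is an Euler-Maruyama sketch of a passive membrane driven by an Ornstein-Uhlenbeck synaptic current. All parameter values are illustrative; at steady state the simulated membrane potential histogram should approach the Gaussian predicted by the Fokker-Planck analysis.

    import numpy as np

    rng = np.random.default_rng(0)
    dt, T = 0.05, 2000.0                  # time step and duration (ms)
    tau_m, E_L = 20.0, -65.0              # membrane time constant (ms), rest (mV)
    tau_s, I0, sigma = 5.0, 0.5, 0.3      # OU time constant (ms), mean and noise strength

    V, I, Vs = E_L, I0, []
    for _ in range(int(T / dt)):
        # OU current: mean-reverting drift plus scaled white-noise increment
        I += dt * (I0 - I) / tau_s + sigma * np.sqrt(2 * dt / tau_s) * rng.standard_normal()
        # passive membrane (unit capacitance assumed): dV/dt = -(V - E_L)/tau_m + I
        V += dt * (-(V - E_L) / tau_m + I)
        Vs.append(V)
    print(f"mean V = {np.mean(Vs):.2f} mV, std V = {np.std(Vs):.2f} mV")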