Okay, so we come to the third and last talk of today. In the previous talks we saw networks of neurons and then the mean field; in this talk we will focus on the microscopic scale, so we will look at just one neuron that is excited by some bounded noise. Experimentally, it has been observed in the lab that neurons with the same physiological features, driven by synaptic inputs that are either identical or at least similarly distributed, nevertheless encode completely different activities, and this question has been quite interesting in the experimental community.
So in this talk we are going to look at some ways in which neurons can encode information using noise. Several mechanisms have been described, such as stochastic resonance, coherence resonance, and so on, but here we will focus on just two of them: self-induced stochastic resonance and inverse stochastic resonance, two phenomena that encode, in a sense, opposite kinds of information; I will explain what they are. The goal of this talk is, first of all, to independently establish the mathematical conditions that are necessary for a neuron to encode information via self-induced stochastic resonance (SISR) and via inverse stochastic resonance (ISR), and then, with that understanding, to see how the neuron could switch from encoding the type of information due to SISR to the type due to ISR within the same noise limits, that is, with the same noise distribution.
Okay, so here is the outline of my talk. I will present the FitzHugh-Nagumo model that was used in the first talk, which comes up again here in its stochastic version; then we will look at self-induced stochastic resonance, what it is and what the necessary conditions are; then inverse stochastic resonance, for which we also have some results; and then some open problems and future research.

Okay, so here we have the stochastic FitzHugh-Nagumo model,
which is a slow-fast dynamical system. You can see that we have an epsilon parameter here, which is a very important parameter: it is what is called a singular parameter, and it is small. The system is written with some multiplicative noise and a Brownian motion on the slow timescale tau, and you can use the definition of epsilon to rescale it to the fast timescale t. These two dynamical systems are topologically equivalent: you can always find a homeomorphism that maps trajectories of the first dynamical system to trajectories of the second, so you could choose to study the first one or the second one depending on what you want. But we will see later on that the singular parameter plays a very important role in studying this type of behavior.
For the FitzHugh-Nagumo model, the vector fields in equation (1) are given by the expressions on the slide, which are a bit different from what we saw in the first talk. In the first talk the second vector field was just v plus k, but here we enrich the dynamics by adding more terms, which of course preserve the spiking activity of the FitzHugh-Nagumo model, and you will see why this is very important. Here v is the membrane potential, that is, the variable that encodes the information, and w is the recovery-current variable; I will not get into the biology of these, but I just want to point out that w is a slow variable, because epsilon here is a small parameter, so the vector field g generates the slow dynamics.
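The enriched vector fields themselves are on the slide; purely as an illustrative assumption, and not necessarily the speaker's exact choice, a classical possibility would be the cubic fast field together with a slow field that extends the earlier $v + k$ by a term linear in $w$:
$$
f(v,w) = v - \frac{v^{3}}{3} - w, \qquad g(v,w) = v + a - c\,w,
$$
in which case varying $c$ would play the role of the codimension-one bifurcation parameter mentioned next.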
Of course we have two other important parameters: a here, and then c here, which is the codimension-one bifurcation parameter. Then we have our multiplicative noise and our additive noise, which can be written either as this term on the slow timescale or just as sigma on the fast timescale; as you might know from stochastic processes, the Brownian motion rescales in time. In this talk, for simplicity, we will consider just additive noise, and the noise term is then just a stochastic integral with respect to our Brownian motion, defined on a certain probability space. And for simplicity again, we are going to use bounded noise, that is, noise that is built from the Brownian motion but stays bounded; that is what we are going to focus on here. There is current work that we are putting on the arXiv in which we use unbounded noise, which is quite interesting.
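As a minimal numerical sketch of how one could integrate such a model, here is an Euler-Maruyama scheme for the fast-timescale system. The concrete vector fields, the parameter values, and the use of plain Gaussian (rather than bounded) additive noise are all illustrative assumptions, not the exact setup from the slides.

```python
import numpy as np

# Euler-Maruyama integration of a stochastic FitzHugh-Nagumo model on the
# fast timescale t:  dv = f(v, w) dt + sigma dW_t,  dw = eps * g(v, w) dt.
# f, g, and all parameter values below are illustrative assumptions.

eps = 0.01       # singular (timescale-separation) parameter, small
a, c = 0.7, 0.8  # model parameters (c standing in for the bifurcation parameter)
sigma = 0.05     # additive noise amplitude
dt = 1e-3        # integration step
n = 200_000      # number of steps

def f(v, w):
    return v - v**3 / 3.0 - w   # assumed cubic fast vector field

def g(v, w):
    return v + a - c * w        # assumed "enriched" slow vector field

rng = np.random.default_rng(0)
v = np.empty(n)
w = np.empty(n)
v[0], w[0] = -1.0, -0.5         # start near the resting state

for k in range(n - 1):
    dW = rng.normal(0.0, np.sqrt(dt))                   # Brownian increment
    v[k + 1] = v[k] + f(v[k], w[k]) * dt + sigma * dW   # fast variable
    w[k + 1] = w[k] + eps * g(v[k], w[k]) * dt          # slow variable

# Spikes appear as large excursions of v; count upward threshold crossings.
spikes = int(np.sum((v[:-1] < 1.0) & (v[1:] >= 1.0)))
print(f"upward crossings of v = 1.0: {spikes}")
```

Sweeping sigma in a loop over this sketch is the kind of experiment that would expose the noise-dependent spiking behaviors (SISR, ISR) discussed in the rest of the talk.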