On Parameter Estimation for Neuron Models
Abhijit Biswas
Department of Mathematics, Temple University
November 30th, 2017
Outline
Introduction
Hodgkin-Huxley Model
Parameter Estimation
Voltage Clamp
Objective Function
Least Squares
Gradient Descent
Levenberg-Marquardt Algorithm
Interval Analysis
Conclusion
Introduction
Ion channel models of neurons, such as the one proposed by Hodgkin and Huxley, can be represented by a set of differential equations. Solving these differential equations involves finding optimal values for the parameters that define the Hodgkin-Huxley equations. The parameters are estimated using an optimization algorithm that takes as input ion channel current data recorded from a neuron with the voltage clamp technique. In this talk we will see how to estimate these parameters from such data using several optimization techniques.
Hodgkin-Huxley Model
A cell membrane can be modeled as a parallel combination of a capacitor and a conductance. The mechanism of the proteins inside an ion channel is not known, so the conductance is a lumped representation of the ion channels that form selective pores through the neural membrane. Hodgkin and Huxley introduced gating variables to model these ion channel currents in the giant squid axon. Each channel may have more than one gating variable: some control the activation of the ionic current (channel opening) and others control the inactivation of the same current (channel closing). These gating variables are time and voltage dependent and range between 0 and 1; they act as scale factors on the maximum conductance ḡ.
Hodgkin-Huxley Model (continued)
The Hodgkin-Huxley equation for a given ionic current is as follows:

I(t, V) = ḡ m^x(t, V) h^y(t, V) (V − E)   (1)

where ḡ is the maximum conductance, m and h are the activation and inactivation gating variables, x and y are the numbers of subunits, V is the membrane potential, and E is the equilibrium potential. An arbitrary gating variable z is described by the kinetic scheme

(1 − z) ⇌ z   (forward rate α_z, backward rate β_z)

where α_z and β_z are the activation and inactivation rates for z. This produces the equation

ż = α_z (1 − z) − β_z z   (2)
Hodgkin-Huxley Model (continued)
A more common form of this equation is

ż = (z_∞ − z) / τ_z   (3)

where the steady-state value z_∞ of z and the time constant τ_z are given by

z_∞ = α(V) / (α(V) + β(V))   (4)

τ_z = 1 / (α(V) + β(V))   (5)
Hodgkin-Huxley Model (continued)
The equation describing the time evolution of the voltage is

C_m V̇ = −i_m + I_e   (6)

where i_m is the membrane current and I_e is the electrode current. To describe the channel operation more realistically, the steady-state value z_∞ and the time constant τ_z are re-expressed using rate functions α and β of the following form, with parameters x_1, x_2, x_3, x_4 and x_5:

α = x_1 (x_2 − V) / (exp((x_2 − V)/x_3) − 1)   (7)

β = x_4 exp(−V/x_5)   (8)

The parameters x_1 through x_5 depend on the specific ion channel current and vary from cell to cell.
Model equations
The Hodgkin-Huxley model describes the Na⁺ and K⁺ conductances and the corresponding membrane currents. The equations of the model are

(a) i_m = ḡ_L (V − E_L) + ḡ_K n⁴ (V − E_K) + ḡ_Na m³ h (V − E_Na)

(b) C_m dV/dt = −i_m + I_e

(c) τ_m(V) dm/dt = m_∞(V) − m

(d) τ_n(V) dn/dt = n_∞(V) − n

(e) τ_h(V) dh/dt = h_∞(V) − h
Model equations
The rate functions of the model are

(a) α_n = 0.01(V + 55) / (1 − exp(−0.1(V + 55)))

(b) β_n = 0.125 exp(−0.0125(V + 65))

(c) α_m = 0.1(V + 40) / (1 − exp(−0.1(V + 40)))

(d) β_m = 4 exp(−0.0556(V + 65))

(e) α_h = 0.07 exp(−0.05(V + 65))

(f) β_h = 1 / (1 + exp(−0.1(V + 35)))
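These rate functions, together with the model equations above, are enough to simulate the model. Below is a minimal Python sketch using forward Euler integration. The maximum conductances, reversal potentials and capacitance (ḡ_Na = 120, ḡ_K = 36, ḡ_L = 0.3 mS/cm², E_Na = 50 mV, E_K = −77 mV, E_L = −54.4 mV, C_m = 1 µF/cm²) are standard textbook values, not taken from these slides.

```python
import math

# Rate functions from the model equations (V in mV, rates in 1/ms),
# with guards for the removable singularities of alpha_n and alpha_m.
def alpha_n(V):
    x = V + 55.0
    return 0.1 if abs(x) < 1e-7 else 0.01 * x / (1.0 - math.exp(-0.1 * x))

def beta_n(V):
    return 0.125 * math.exp(-0.0125 * (V + 65.0))

def alpha_m(V):
    x = V + 40.0
    return 1.0 if abs(x) < 1e-7 else 0.1 * x / (1.0 - math.exp(-0.1 * x))

def beta_m(V):
    return 4.0 * math.exp(-0.0556 * (V + 65.0))

def alpha_h(V):
    return 0.07 * math.exp(-0.05 * (V + 65.0))

def beta_h(V):
    return 1.0 / (1.0 + math.exp(-0.1 * (V + 35.0)))

def inf(a, b):
    # Steady-state value z_inf = alpha / (alpha + beta), equation (4)
    return a / (a + b)

# Standard textbook constants (assumed, not from the slides):
C_m, g_Na, g_K, g_L = 1.0, 120.0, 36.0, 0.3   # uF/cm^2, mS/cm^2
E_Na, E_K, E_L = 50.0, -77.0, -54.4           # mV

def simulate(I_e=10.0, T=30.0, dt=0.01):
    """Forward-Euler integration of the Hodgkin-Huxley equations."""
    V = -65.0
    n = inf(alpha_n(V), beta_n(V))
    m = inf(alpha_m(V), beta_m(V))
    h = inf(alpha_h(V), beta_h(V))
    trace = []
    for _ in range(int(T / dt)):
        i_m = (g_L * (V - E_L) + g_K * n**4 * (V - E_K)
               + g_Na * m**3 * h * (V - E_Na))
        V += dt * (-i_m + I_e) / C_m
        n += dt * (alpha_n(V) * (1 - n) - beta_n(V) * n)
        m += dt * (alpha_m(V) * (1 - m) - beta_m(V) * m)
        h += dt * (alpha_h(V) * (1 - h) - beta_h(V) * h)
        trace.append(V)
    return trace
```

With a constant injected current of 10 µA/cm² the simulated membrane potential produces action potentials; with no input it stays near the resting potential of about −65 mV. Forward Euler is a crude integrator chosen for brevity; a stiff or higher-order solver would be preferable in practice.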
Parameter Estimation
A single representative current, I_K, is given by

I_K = ḡ_K n (V − E_K)   (9)

where the gating variable n satisfies

ṅ = (n_∞ − n) / τ_n   (10)

Under a constant voltage condition (and with n(0) = 0), this has the closed-form solution

n = n_∞ (1 − e^{−t/τ_n})   (11)

Substituting this back into equation (9) gives us

I_K = ḡ_K (V − E_K) n_∞ (1 − e^{−t/τ_n})   (12)
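Equation (12) is easy to evaluate directly. The sketch below computes the voltage-clamp current, with n_∞ and τ_n obtained from the α_n and β_n rate functions; ḡ_K = 36 mS/cm² and E_K = −77 mV are assumed standard values, not given on this slide.

```python
import math

def alpha_n(V):
    # K+ activation rate; guarded at the removable singularity V = -55
    x = V + 55.0
    return 0.1 if abs(x) < 1e-7 else 0.01 * x / (1.0 - math.exp(-0.1 * x))

def beta_n(V):
    return 0.125 * math.exp(-0.0125 * (V + 65.0))

def I_K(t, V, g_K=36.0, E_K=-77.0):
    """Equation (12): K+ current under voltage clamp, assuming n(0) = 0."""
    a, b = alpha_n(V), beta_n(V)
    n_inf, tau_n = a / (a + b), 1.0 / (a + b)   # equations (4) and (5)
    return g_K * (V - E_K) * n_inf * (1.0 - math.exp(-t / tau_n))
```

At t = 0 the current is zero; as t → ∞ it rises monotonically to the plateau ḡ_K (V − E_K) n_∞, which is exactly the saturating shape seen in voltage-clamp recordings.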
Voltage Clamp
The cell membrane can be held at an approximately constant voltage using a technique known as voltage clamping. While the voltage is held constant, current data are recorded from the cell.
Voltage Clamp
Here is the recorded data.
Objective Function
In order to find the best estimates of the current parameters, an objective function is defined and minimized in the mean-square sense. The objective for the potassium current is

F = Σ_{i=1}^{k} Σ_{j=1}^{l} ( y_j − ḡ_K (V_i − E_K) n_∞(V_i) (1 − e^{−t_j/τ_n(V_i)}) )²   (13)

where k is the number of data sets (one for each of k different voltages), y_j is the value of the current at time t_j, and l is the number of data points in each data set. An alternative approach to the objective function in equation (13) is to fit each data set individually using the following objective function:

F = Σ_{i=1}^{j} ( y_i − ḡ_K (V − E_K) n_∞ (1 − e^{−t_i/τ_n}) )²   (14)

where y_i is the measured current at time t_i and j is the number of data points in the voltage clamp data at a constant value of voltage V.
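As a concrete check, the single-voltage objective (14) can be coded in a few lines. Here the data set is synthetic, generated from hypothetical "true" parameter values, so the objective should be essentially zero at those values and positive elsewhere; the numbers are illustrative only.

```python
import math

def model(t, g, n_inf, tau, V=-30.0, E_K=-77.0):
    # The fitted form of equation (14): g_K (V - E_K) n_inf (1 - e^{-t/tau})
    return g * (V - E_K) * n_inf * (1.0 - math.exp(-t / tau))

def objective(params, ts, ys):
    """Sum of squared residuals, equation (14), for one voltage-clamp set."""
    g, n_inf, tau = params
    return sum((y - model(t, g, n_inf, tau)) ** 2 for t, y in zip(ts, ys))

# Synthetic data set generated from hypothetical true parameters:
true = (36.0, 0.6, 2.0)
ts = [0.1 * i for i in range(1, 51)]
ys = [model(t, *true) for t in ts]
```

Any optimizer discussed below (least squares, gradient descent, Levenberg-Marquardt, interval analysis) is simply a strategy for driving this scalar objective to its minimum.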
Objective Function
In either case we can minimize the objective function by manipulating x_1 through x_5 directly, or by varying α and β. In the latter case, values of α and β are found for each value of V. These values are then used to find the final values of x_1 through x_5 by minimizing the following two functions in the mean-square sense:

F_α = Σ_{k} ( α_k − x_1 (x_2 − V_k) / (exp((x_2 − V_k)/x_3) − 1) )²   (15)

F_β = Σ_{k} ( β_k − x_4 exp(−V_k/x_5) )²   (16)
Linear Least Squares
From an experiment, four (x, y) data points were obtained: (1, 6), (2, 5), (3, 7), and (4, 10). We hope to find a line y = β_1 + β_2 x that best fits these four points, i.e.

β_1 + 1β_2 = 6
β_1 + 2β_2 = 5
β_1 + 3β_2 = 7
β_1 + 4β_2 = 10

We consider the following objective function:

S(β_1, β_2) = [6 − (β_1 + 1β_2)]² + [5 − (β_1 + 2β_2)]² + [7 − (β_1 + 3β_2)]² + [10 − (β_1 + 4β_2)]²
            = 4β_1² + 30β_2² + 20β_1β_2 − 56β_1 − 154β_2 + 210
Linear Least Squares

∂S/∂β_1 = 0 = 8β_1 + 20β_2 − 56
∂S/∂β_2 = 0 = 20β_1 + 60β_2 − 154

Solving, we get β_1 = 3.5 and β_2 = 1.4, so

y = 3.5 + 1.4x

One can think of this system as Ax = b. The least-squares solution x = [β_1, β_2]^T is given by

x = (A^T A)^{−1} A^T b

and Ax = A(A^T A)^{−1} A^T b is simply the projection of b onto the column space of the matrix A.
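The worked example can be verified numerically by solving the normal equations directly:

```python
import numpy as np

# Design matrix for y = b1 + b2*x at x = 1, 2, 3, 4
A = np.array([[1.0, 1.0], [1.0, 2.0], [1.0, 3.0], [1.0, 4.0]])
b = np.array([6.0, 5.0, 7.0, 10.0])

# Least-squares solution from the normal equations A^T A x = A^T b
x = np.linalg.solve(A.T @ A, A.T @ b)   # -> [3.5, 1.4]

# Projection of b onto the column space of A, and the residual
p = A @ x
residual = b - p
```

The residual is orthogonal to the columns of A (A^T r = 0), which is the geometric content of the projection statement above. In production code `np.linalg.lstsq` is preferred over forming A^T A explicitly, for numerical stability.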
Non-Linear Least Squares
Consider a set of m data points, (x_1, y_1), (x_2, y_2), ..., (x_m, y_m), and a curve (model function) y = f(x, β) that, in addition to the variable x, also depends on n parameters β = (β_1, β_2, ..., β_n), with m ≥ n. It is desired to find the vector β of parameters such that the curve best fits the given data in the least squares sense, that is, such that the sum of squares

S = Σ_{i=1}^{m} r_i²,   r_i = y_i − f(x_i, β),   for i = 1, 2, ..., m

is minimized. Setting the gradient to zero gives

∂S/∂β_j = 2 Σ_{i=1}^{m} r_i ∂r_i/∂β_j = 0,   j = 1, 2, ..., n   (17)

In a nonlinear system, the derivatives ∂r_i/∂β_j are functions of both the independent variable and the parameters, so these gradient equations do not have a closed-form solution.
Non-Linear Least Squares
We therefore solve iteratively, initializing with β⁰ = (β_1⁰, β_2⁰, ..., β_n⁰). A Taylor expansion around β^k = (β_1^k, β_2^k, ..., β_n^k) with increment vector Δβ = (Δβ_1, Δβ_2, ..., Δβ_n) gives

f(x_i, β) ≈ f(x_i, β^k) + Σ_{j=1}^{n} (β_j − β_j^k) ∂f(x_i, β^k)/∂β_j   (18)
          = f(x_i, β^k) + Σ_{j=1}^{n} J_ij Δβ_j

where J_ij = ∂f(x_i, β^k)/∂β_j.
Non-Linear Least Squares
Observe that

∂r_i/∂β_j = −∂f(x_i, β)/∂β_j = −J_ij

r_i = y_i − f(x_i, β) = (y_i − f(x_i, β^k)) + (f(x_i, β^k) − f(x_i, β))
    = (y_i − f(x_i, β^k)) − Σ_{s=1}^{n} J_is Δβ_s

Substituting these into equation (17) we get

−2 Σ_{i=1}^{m} J_ij [ (y_i − f(x_i, β^k)) − Σ_{s=1}^{n} J_is Δβ_s ] = 0

Simplifying and writing this in vector notation gives

(J^T J) Δβ = J^T (y − f(β^k))   (19)

where J = (J_ij)_{m×n}. This is known as the normal equations.
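Iterating equation (19) is the Gauss-Newton method. Below is a minimal sketch for a simple exponential model f(x, β) = β_1 e^{β_2 x}; the model, the true parameters and the starting guess are illustrative choices, not taken from the talk.

```python
import numpy as np

def f(x, beta):
    # Illustrative model: f(x, beta) = beta1 * exp(beta2 * x)
    return beta[0] * np.exp(beta[1] * x)

def jacobian(x, beta):
    # J_ij = df(x_i, beta)/dbeta_j for the exponential model
    e = np.exp(beta[1] * x)
    return np.column_stack([e, beta[0] * x * e])

def gauss_newton(x, y, beta0, iters=30):
    beta = np.array(beta0, dtype=float)
    for _ in range(iters):
        J = jacobian(x, beta)
        r = y - f(x, beta)                      # residuals y - f(beta^k)
        # Solve the normal equations (19): (J^T J) dbeta = J^T r
        delta = np.linalg.solve(J.T @ J, J.T @ r)
        beta += delta
    return beta

x = np.linspace(0.0, 2.0, 20)
y = f(x, [2.0, -1.0])                           # noiseless synthetic data
beta = gauss_newton(x, y, beta0=[1.8, -0.9])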
Example
Gradient Descent
We want to minimize S(β). The gradient descent method is based on the fact that S(β) decreases fastest if one moves from a point β^k in the direction of the negative gradient of S(β). It follows that if

β^{k+1} = β^k − γ ∇S(β^k)

then for sufficiently small γ, S(β^{k+1}) ≤ S(β^k). The convergence rate of this method is slow.
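The update rule can be demonstrated on the quadratic objective S(β_1, β_2) from the linear least squares slide, whose minimizer is known to be (3.5, 1.4). The step size γ and the iteration count are hand-chosen for illustration; note how many iterations are needed, compared with the two-line exact solve.

```python
def grad_S(b1, b2):
    # Gradient of S = 4 b1^2 + 30 b2^2 + 20 b1 b2 - 56 b1 - 154 b2 + 210
    return 8 * b1 + 20 * b2 - 56, 20 * b1 + 60 * b2 - 154

def gradient_descent(gamma=0.02, iters=5000):
    b1, b2 = 0.0, 0.0
    for _ in range(iters):
        g1, g2 = grad_S(b1, b2)
        b1 -= gamma * g1           # beta^{k+1} = beta^k - gamma * grad S
        b2 -= gamma * g2
    return b1, b2
```

The iteration must use γ smaller than 2 divided by the largest eigenvalue of the Hessian (here about 67) to remain stable, and convergence along the smallest-eigenvalue direction is what makes the method slow.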
Levenberg-Marquardt Algorithm
Levenberg's contribution is to replace the normal equations (19) by

(J^T J + λI) Δβ = J^T (y − f(β))   (20)

where λ is called the damping factor. λ is decreased when S is reduced rapidly and increased when a step gives an insufficient reduction of S. If λ is large, the method acts like gradient descent, which converges slowly along directions of small gradient. To avoid this, Marquardt modified equation (20) to

(J^T J + λ diag(J^T J)) Δβ = J^T (y − f(β))   (21)

which scales the damping per component, so that there is larger movement along the directions where the gradient is smaller. This is known as the Levenberg-Marquardt algorithm.
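A minimal sketch of Marquardt's update (21), with the standard accept/reject adaptation of λ, applied to fitting the rate function β(V) = x_4 e^{−V/x_5} of equation (8) to synthetic (V_k, β_k) pairs. The true values, clamp voltages, and starting guess are hypothetical, chosen only to exercise the algorithm.

```python
import numpy as np

def f(V, p):
    # beta(V) = x4 * exp(-V/x5), equation (8); p = [x4, x5]
    return p[0] * np.exp(-V / p[1])

def jac(V, p):
    e = np.exp(-V / p[1])
    return np.column_stack([e, p[0] * V / p[1] ** 2 * e])

def levenberg_marquardt(V, y, p0, iters=100, lam=1e-3):
    p = np.array(p0, dtype=float)
    sse = float(np.sum((y - f(V, p)) ** 2))
    for _ in range(iters):
        J = jac(V, p)
        r = y - f(V, p)
        # Marquardt's scaled damping: (J^T J + lam diag(J^T J)) dp = J^T r
        JTJ = J.T @ J
        step = np.linalg.solve(JTJ + lam * np.diag(np.diag(JTJ)), J.T @ r)
        new_sse = float(np.sum((y - f(V, p + step)) ** 2))
        if new_sse < sse:          # step accepted: reduce damping
            p, sse, lam = p + step, new_sse, lam / 10
        else:                      # step rejected: increase damping
            lam *= 10
    return p

V = np.linspace(-80.0, -40.0, 9)      # hypothetical clamp voltages (mV)
y = f(V, [0.125, 80.0])               # synthetic "measured" beta_k values
p = levenberg_marquardt(V, y, p0=[0.3, 50.0])
```

The accept/reject loop makes each iteration non-increasing in the sum of squares, which is what gives Levenberg-Marquardt its robustness relative to plain Gauss-Newton. In practice one would reach for a library implementation such as `scipy.optimize.least_squares`.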
Solving our problem
The Levenberg-Marquardt algorithm was used first to minimize equation (14):

F = Σ_{i=1}^{j} ( y_i − ḡ_K (V − E_K) n_∞ (1 − e^{−t_i/τ_n}) )²

Using this toolbox, only one set of data may be fit at a time. Minimizing, we get the following result.

Figure: current versus time in voltage clamp conditions with fitted data. The smooth curves represent the fitted data; the other curves are the original voltage clamp data. The current data sets have been fit individually using the Levenberg-Marquardt method.
Solving our problem
So for each of the k data sets we obtained α_k and β_k with the Levenberg-Marquardt algorithm. Using these values we want to minimize equations (15) and (16):

F_α = Σ_{k} ( α_k − x_1 (x_2 − V_k) / (exp((x_2 − V_k)/x_3) − 1) )²

F_β = Σ_{k} ( β_k − x_4 exp(−V_k/x_5) )²

It is during this step that the Levenberg-Marquardt algorithm fails.
Solving our problem
Result by Interval Analysis
An alternative method, known as interval analysis, is used instead. The current data sets have been fit individually using interval analysis.
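To give the flavor of the technique, here is a toy interval branch-and-bound minimizer for the one-dimensional polynomial f(x) = x⁴ − 4x² + x on [−3, 3], using a naive natural interval extension. This is only an illustrative sketch of the idea, not the solver used in the talk; real interval tools use directed rounding to make the bounds rigorous.

```python
# Toy interval branch-and-bound global minimization of
# f(x) = x^4 - 4x^2 + x on [-3, 3] (illustrative sketch only).

def sq(lo, hi):
    # Interval extension of x^2
    if lo <= 0.0 <= hi:
        return 0.0, max(lo * lo, hi * hi)
    a, b = lo * lo, hi * hi
    return min(a, b), max(a, b)

def F(lo, hi):
    # Natural interval extension of f(x) = x^4 - 4x^2 + x
    s_lo, s_hi = sq(lo, hi)
    q_lo, q_hi = s_lo * s_lo, s_hi * s_hi     # x^4 = (x^2)^2, s_lo >= 0
    return q_lo - 4.0 * s_hi + lo, q_hi - 4.0 * s_lo + hi

def f(x):
    return x ** 4 - 4.0 * x ** 2 + x

def interval_minimize(lo=-3.0, hi=3.0, tol=1e-4):
    best = f((lo + hi) / 2.0)      # an upper bound on the global minimum
    work = [(lo, hi)]
    final = []
    while work:
        a, b = work.pop()
        f_lo, _ = F(a, b)
        if f_lo > best:            # box provably cannot hold the minimum
            continue
        best = min(best, f((a + b) / 2.0))
        if b - a < tol:
            final.append(f_lo)     # small box: keep its lower bound
        else:
            m = (a + b) / 2.0
            work += [(a, m), (m, b)]
    return min(final), best        # enclosure [lower, upper] of the min
```

Unlike the local methods above, the pruning step discards whole regions only when the interval bound proves they cannot contain the minimum, so the returned enclosure is a verified bracket of the global minimum (here approximately −5.4442 near x ≈ −1.473). This verification is exactly why interval methods are slower than Levenberg-Marquardt.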
Result by Interval Analysis
The values of α_k and β_k previously obtained by interval analysis are used to calculate the unique values of x_i, and then, using equations (7), (8) and (12), the value of the current is calculated.
Result by Interval Analysis
Finally, using the global objective function (13)

F = Σ_{i=1}^{k} Σ_{j=1}^{l} ( y_j − ḡ_K (V_i − E_K) n_∞(V_i) (1 − e^{−t_j/τ_n(V_i)}) )²

and the equations (14) and (15) with interval analysis optimization, we get the following fit.
Conclusion
In general, finding an accurate model for an ionic current is difficult because of the non-linear nature of the parameter estimation. Interval-analysis-based methods can find verified global solutions for the parameters. The drawback of this approach is that it is not as computationally fast as more traditional methods.
Thank you! Any questions?