Optimal sensor location for distributed parameter system identification


1 Optimal sensor location for distributed parameter system identification (Part 1). Dariusz Uciński, Institute of Control and Computation Engineering, University of Zielona Góra

2 Structure of the minicourse Tuesday, 27 October, 13:30: introduction to OED; sensor location problem for parameter estimation of PDEs; adaptation of modern OED theory to the sensor location setting Friday, 30 October, 11:30: sensor selection for large-scale monitoring networks; overcoming the curse of dimensionality; design for scanning observations Friday, 6 November, 11:30: design of mobile sensor trajectories; robust design; design with correlated observations; Bayesian approach; design for ill-posed problems

3 Schedule of Part 1 — Introduction to optimum design: 1. Linear regression; 2. Optimum design

4 Linear regression. Example: For the following observations $(x_i, y_i)$ we wish to find an equation that best describes the linear trend. (Figure: scatter plot of $y$ versus $x$.)

5 Linear regression. We approximate the response $y$ by the linear model $\hat y = a_1 x + a_0$ so as to minimize the individual prediction errors (residuals) $e_i = y_i - \hat y_i = y_i - a_0 - a_1 x_i$.

6 Least-squares criterion. $S_r = \sum_{i=1}^n e_i^2 = \sum_{i=1}^n (y_i - a_0 - a_1 x_i)^2 \to \min_{a_0, a_1}$

7 Least-squares criterion. The optimality conditions $\partial S_r/\partial a_0 = -2 \sum_{i=1}^n (y_i - a_0 - a_1 x_i) = 0$ and $\partial S_r/\partial a_1 = -2 \sum_{i=1}^n (y_i - a_0 - a_1 x_i) x_i = 0$ yield the system of linear equations $0 = \sum y_i - \sum a_0 - \sum a_1 x_i$ and $0 = \sum y_i x_i - \sum a_0 x_i - \sum a_1 x_i^2$.

9 Normal equations. After rearranging, we get the so-called normal equations: $n a_0 + (\sum x_i) a_1 = \sum y_i$ and $(\sum x_i) a_0 + (\sum x_i^2) a_1 = \sum x_i y_i$, or, in matrix form, $F^T F \theta = F^T y$, where $F$ has rows $(1, x_i)$, $i = 1, \dots, n$, $\theta = (a_0, a_1)^T$ and $y = (y_1, \dots, y_n)^T$.

10 Linear regression. Its solution is $\hat\theta = (F^T F)^{-1} F^T y$. Specifically, $\hat a_1 = \big( n \sum x_i y_i - \sum x_i \sum y_i \big) / \big( n \sum x_i^2 - (\sum x_i)^2 \big)$ and $\hat a_0 = \bar y - \hat a_1 \bar x$. Example (ctd.): $\hat a_1 = \big( 4(12) - (16)(6) \big) / \big( 4(78) - (16)^2 \big) = -0.857$, $\hat a_0 = 1.5 - (-0.857)(4) = 4.93$, so $\hat y = -0.857\,x + 4.93$.

13 Polynomial regression. Given data, fit a parabola: $\hat y = a_0 + a_1 x + a_2 x^2$. Sum of squared residuals: $S_r(\theta) = \sum_{i=1}^n (y_i - a_0 - a_1 x_i - a_2 x_i^2)^2 = \| y - F\theta \|^2$, where $F$ has rows $(1, x_i, x_i^2)$, $\theta = (a_0, a_1, a_2)^T$ and $y = (y_1, \dots, y_n)^T$. The optimality conditions are the normal equations again: $(F^T F)\theta = F^T y$.

14 Polynomial regression. Explicitly, the normal equations are $n a_0 + (\sum x_i) a_1 + (\sum x_i^2) a_2 = \sum y_i$; $(\sum x_i) a_0 + (\sum x_i^2) a_1 + (\sum x_i^3) a_2 = \sum x_i y_i$; $(\sum x_i^2) a_0 + (\sum x_i^3) a_1 + (\sum x_i^4) a_2 = \sum x_i^2 y_i$. It is easy to generalize the approach to $\hat y = a_0 f_0(x) + a_1 f_1(x) + a_2 f_2(x) + \dots + a_m f_m(x)$, where $f_0, f_1, \dots, f_m$ are linearly independent functions (see the sketch below). Question: How much does measurement noise influence the accuracy of the estimates? Is it possible to reduce this influence?
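
The general basis-function fit reduces to one matrix factorization. A minimal sketch in Python/NumPy; the basis functions and sample data below are illustrative, not taken from the slides:

```python
# Sketch: generalized linear least squares y_hat = a0*f0(x) + ... + am*fm(x).
import numpy as np

def fit_linear_model(x, y, basis):
    """Build the design matrix F with F[i, j] = f_j(x_i) and solve the
    normal equations (F^T F) theta = F^T y (via the more stable lstsq)."""
    F = np.column_stack([f(x) for f in basis])
    theta, *_ = np.linalg.lstsq(F, y, rcond=None)
    return theta, F

# Example: fit a parabola y_hat = a0 + a1*x + a2*x^2 to noisy data.
rng = np.random.default_rng(0)
x = np.linspace(-1.0, 1.0, 20)
y = 1.0 + 2.0 * x - 0.5 * x**2 + 0.05 * rng.standard_normal(x.size)
basis = [lambda t: np.ones_like(t), lambda t: t, lambda t: t**2]
theta, F = fit_linear_model(x, y, basis)
print(theta)  # close to [1.0, 2.0, -0.5]
```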

15 Measurement accuracy: intro to optimal design. The weights of objects A and B are to be measured using a pan balance and a set of standard weights. Each weighing measures the weight difference between the objects placed in the left pan and those placed in the right pan, by adding calibrated weights to the lighter pan until the balance is in equilibrium.

16 Intro to optimal design: Weighing two objects. Each measurement has a random error. The average error is zero; the standard deviation of the error distribution is the same number $\sigma$ on different weighings; and errors on different weighings are independent. Denote by $m_A$ and $m_B$ the true weights.

17 Weighing two objects. First, we place object A in one pan, with the other pan empty.

18 Weighing two objects. Let $y_1$ be its measured weight. As the standard error of this result is $\sigma$, we have $m_A = y_1 \pm \sigma$. For object B, we perform the second measurement and get $m_B = y_2 \pm \sigma$. For example, if $y_1$ and $y_2$ are 10 and 25 grammes, respectively, and $\sigma$ is 0.1 gramme, we have $m_A = 10 \pm 0.1$ grammes and $m_B = 25 \pm 0.1$ grammes.

19 Weighing two objects. We can express these results using the matrix notation $y = F\theta + \epsilon$, where $y = (y_1, y_2)^T$, $F = \begin{pmatrix} +1 & 0 \\ 0 & +1 \end{pmatrix}$, $\theta = (m_A, m_B)^T$ and $\epsilon = (\epsilon_1, \epsilon_2)^T$. Note that we have $\det(F^T F) = 1$.

20 Weighing two objects. But consider another strategy with the same experimental effort (i.e., two measurements): (Figure: both objects in one pan, then the objects in opposite pans.)

22 Weighing two objects. Neglecting measurement errors, we get $+m_A + m_B = y_1$ and $-m_A + m_B = y_2$, whence $m_A = \tfrac12 (y_1 - y_2)$ and $m_B = \tfrac12 (y_1 + y_2)$. What is the error for these estimates? $\mathrm{var}(m_A) = \tfrac14 [\mathrm{var}(y_1) + \mathrm{var}(y_2)] = \tfrac14 [\sigma^2 + \sigma^2] = \tfrac12 \sigma^2$. It was reduced from $\sigma$ to $\sigma/\sqrt2$: $m_A = 10 \pm 0.07$ grammes, $m_B = 25 \pm 0.07$ grammes.

23 Weighing two objects. Additionally, observe that now $F = \begin{pmatrix} +1 & +1 \\ -1 & +1 \end{pmatrix}$ and $\det(F^T F) = 4$.

24 Weighing four objects: Strategy 1. Consider now weighing four objects A, B, C and D. First, we weigh each of them individually and get $m_A = y_1 \pm \sigma$, $m_B = y_2 \pm \sigma$, $m_C = y_3 \pm \sigma$, $m_D = y_4 \pm \sigma$. Suppose that $y_1 = 10$, $y_2 = 25$, $y_3 = 15$ and $y_4 = 30$. Observe that $F = I_4$ (the $4 \times 4$ identity matrix) and $\det(F^T F) = 1$.

25 Weighing four objects: Strategy 2. But consider another strategy: (Figure: pairwise weighings.)

26 Weighing four objects: Strategy 2. Neglecting measurement errors, we get $+m_A + m_B = y_1$, $-m_A + m_B = y_2$, $+m_C + m_D = y_3$, $-m_C + m_D = y_4$, whence $m_A = \tfrac12 (y_1 - y_2)$, $m_B = \tfrac12 (y_1 + y_2)$, $m_C = \tfrac12 (y_3 - y_4)$, $m_D = \tfrac12 (y_3 + y_4)$.

27 Weighing four objects: Strategy 2. What is the error for these estimates? $\mathrm{var}(m_A) = \tfrac14 [\mathrm{var}(y_1) + \mathrm{var}(y_2)] = \tfrac14 [\sigma^2 + \sigma^2] = \tfrac12 \sigma^2$. It was reduced from $\sigma$ to $\sigma/\sqrt2$: $m_A = 10 \pm 0.07$, $m_B = 25 \pm 0.07$, $m_C = 15 \pm 0.07$, $m_D = 30 \pm 0.07$ grammes.

28 Weighing four objects: Strategy 2. We have $y = F\theta + \epsilon$ with $\theta = (m_A, m_B, m_C, m_D)^T$ and the block-diagonal design matrix $F = \begin{pmatrix} +1 & +1 & 0 & 0 \\ -1 & +1 & 0 & 0 \\ 0 & 0 & +1 & +1 \\ 0 & 0 & -1 & +1 \end{pmatrix}$. Additionally, observe that $\det(F^T F) = 16$.

29 Weighing four objects: Strategy 3. But consider yet another strategy: (Figure: all four objects placed on the balance in every weighing.)

30 Weighing four objects: Strategy 3. Neglecting measurement errors, we get $+m_A + m_B + m_C + m_D = y_1$, $-m_A + m_B - m_C + m_D = y_2$, $-m_A - m_B + m_C + m_D = y_3$, $-m_A + m_B + m_C - m_D = y_4$. Consequently, $m_A = \tfrac14 (y_1 - y_2 - y_3 - y_4)$, $m_B = \tfrac14 (y_1 + y_2 - y_3 + y_4)$, $m_C = \tfrac14 (y_1 - y_2 + y_3 + y_4)$, $m_D = \tfrac14 (y_1 + y_2 + y_3 - y_4)$.

31 Weighing four objects: Strategy 3. What is the error for these estimates? $\mathrm{var}(m_A) = \tfrac1{16} [\mathrm{var}(y_1) + \mathrm{var}(y_2) + \mathrm{var}(y_3) + \mathrm{var}(y_4)] = \tfrac1{16} [\sigma^2 + \sigma^2 + \sigma^2 + \sigma^2] = \tfrac14 \sigma^2$. It was further reduced from $\sigma/\sqrt2$ to $\sigma/2$: $m_A = 10 \pm 0.05$, $m_B = 25 \pm 0.05$, $m_C = 15 \pm 0.05$, $m_D = 30 \pm 0.05$ grammes. It can be shown that this cannot be improved.

33 Weighing four objects: Strategy 3. We have $y = F\theta + \epsilon$ with $F = \begin{pmatrix} +1 & +1 & +1 & +1 \\ -1 & +1 & -1 & +1 \\ -1 & -1 & +1 & +1 \\ -1 & +1 & +1 & -1 \end{pmatrix}$ and, additionally, $\det(F^T F) = 256$. Better precision coincides with higher values of $\det(F^T F)$. The matrix $M = F^T F$ is called the Fisher information matrix, and experimental conditions maximizing its determinant are called D-optimal.
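
The three four-object strategies differ only in the design matrix $F$, so their quality can be compared directly through $\det(F^T F)$. A small sketch reproducing the determinants 1, 16 and 256 from the slides (NumPy assumed):

```python
# Sketch: compare det(F^T F) for the three four-object weighing strategies.
import numpy as np

F1 = np.eye(4)                               # Strategy 1: one object at a time
F2 = np.array([[ 1, 1, 0, 0],
               [-1, 1, 0, 0],
               [ 0, 0, 1, 1],
               [ 0, 0,-1, 1]], dtype=float)  # Strategy 2: pairwise weighings
F3 = np.array([[ 1, 1, 1, 1],
               [-1, 1,-1, 1],
               [-1,-1, 1, 1],
               [-1, 1, 1,-1]], dtype=float)  # Strategy 3: all four per weighing

for name, F in [("Strategy 1", F1), ("Strategy 2", F2), ("Strategy 3", F3)]:
    M = F.T @ F
    # var(theta_hat) = sigma^2 * diag(M^{-1}); larger det(M) means better precision
    print(name, "det(M) =", np.linalg.det(M),
          " variance factor =", np.diag(np.linalg.inv(M))[0])
```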

35 D-optimum designs for simple linear regression. Assume that the response is of the form $y = a_0 + a_1 x$. We would like to make two measurements to estimate $a_0$ and $a_1$, but we can select the corresponding settings $x = x_1$ and $x = x_2$ before the experiment. If they are to belong to the interval $[-1, 1]$, how can we choose them best? Answer: We are going to get observations $y_1 \approx a_0 + a_1 x_1$ and $y_2 \approx a_0 + a_1 x_2$.

37 D-optimum designs for simple linear regression. In matrix form, this is $y = F\theta + \epsilon$ with $F = \begin{pmatrix} 1 & x_1 \\ 1 & x_2 \end{pmatrix}$, $\theta = (a_0, a_1)^T$. Adopting the D-optimality criterion as in the weighing example, we get $\det(F^T F) = \det\begin{pmatrix} 2 & x_1 + x_2 \\ x_1 + x_2 & x_1^2 + x_2^2 \end{pmatrix} = (x_1 - x_2)^2$. It is maximized for $x_1 = -1$ and $x_2 = +1$ (or $x_1 = +1$ and $x_2 = -1$), i.e., when the measurements are at the opposite ends of the design interval.

38 D-optimum design to measure resistance. We wish to estimate the resistance $R$ based on 10 measurements of the current (it may vary between 10 A and 20 A) and the corresponding voltage. (Figure: comparison of an arbitrary design and the D-optimum design.)

40 Why is the information matrix so important? Assume that the observations are modelled by $y = F\theta + \varepsilon$, where $\varepsilon$ is measurement noise satisfying $E[\varepsilon] = 0$, $\mathrm{cov}(\varepsilon) = E[\varepsilon\varepsilon^T] = \sigma^2 I$, and $\sigma^2$ is a constant noise variance. The parameter estimate $\hat\theta = (F^T F)^{-1} F^T y$ is then a random vector and $E[\hat\theta] = \theta$ (unbiasedness), $\mathrm{cov}(\hat\theta) = \sigma^2 (F^T F)^{-1}$ (dispersion around $\theta$).
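
A quick Monte Carlo check of the two displayed properties; the design matrix, parameters and noise level below are illustrative only:

```python
# Sketch: the LS estimator is unbiased with covariance sigma^2 (F^T F)^{-1}.
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(-1, 1, 10)
F = np.column_stack([np.ones_like(x), x])    # simple linear regression
theta = np.array([1.0, 2.0])
sigma = 0.1

estimates = []
for _ in range(20000):
    y = F @ theta + sigma * rng.standard_normal(x.size)
    estimates.append(np.linalg.solve(F.T @ F, F.T @ y))
estimates = np.asarray(estimates)

print(estimates.mean(axis=0))                # close to theta (unbiasedness)
print(np.cov(estimates.T))                   # close to ...
print(sigma**2 * np.linalg.inv(F.T @ F))     # ... sigma^2 (F^T F)^{-1}
```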

41–46 Uncertainty ellipsoid. (Sequence of figures: confidence ellipsoids of the estimates in the plane with axes $\theta_1$ and $\theta_2$.)

49 Optimality criteria. Setting $M = F^T F$, we define: D-optimality criterion: $\Psi(M) = \log\det(M^{-1}) = -\log\det(M)$ (the volume of the uncertainty ellipsoid is minimized); A-optimality criterion: $\Psi(M) = \mathrm{trace}(M^{-1})$ (the mean axis length of the uncertainty ellipsoid is minimized); E-optimality criterion: $\Psi(M) = \lambda_{\max}(M^{-1})$ (the length of the largest axis of the uncertainty ellipsoid is minimized). Notation: $\lambda_{\max}(\cdot)$ is the maximum eigenvalue of its matrix argument. Most often, they yield different solutions!
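
A minimal sketch evaluating the three criteria for a given information matrix; the matrix $M$ below is an arbitrary example:

```python
# Sketch: D-, A- and E-optimality criteria for an information matrix M.
import numpy as np

def d_criterion(M):
    # Psi(M) = log det(M^{-1}) = -log det(M); slogdet is numerically safer
    return -np.linalg.slogdet(M)[1]

def a_criterion(M):
    return np.trace(np.linalg.inv(M))

def e_criterion(M):
    # lambda_max(M^{-1}) = 1 / lambda_min(M) for symmetric positive definite M
    return 1.0 / np.linalg.eigvalsh(M)[0]    # eigvalsh sorts ascending

M = np.array([[4.0, 1.0], [1.0, 2.0]])
print(d_criterion(M), a_criterion(M), e_criterion(M))
```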

50 Something about history. Development of factorial experiments and their applications in industry (mainly chemical engineering); 1950s: beginning of optimal experiment design (Kiefer–Wolfowitz theorem); first algorithms for numerical construction of optimum designs (V. Fedorov and H. Wynn); 1970 up to now: optimal input signals for parameter estimation in systems described by ODEs (R. Mehra); 1980s up to now: optimal sensor placement (1981) and input signal design (1983) for PDEs. All the above research directions are still far from being complete.

51 Workshops. Workshop on Experiments for Processes With Time or Space Dynamics, Isaac Newton Institute for Mathematical Sciences, Cambridge, UK, June 2011; organizers: Dariusz Uciński and Andrew Curtis (University of Edinburgh).

53 Conference mODa 10. (Photograph.)

54 Complements to this minicourse. (Covers of two companion monographs: CRC Press, 2005, and Springer-Verlag, 2012.)

55–64 Example: Heating a metal block with a crack. (Sequence of figures: the block geometry with the crack and successive snapshots of the evolving temperature field.)

65 Example (ctd.): Mathematical model. Let $y(x, t)$ be the temperature at spatial point $x$ and time $t$. Its evolution is governed by the partial differential equation (PDE) $\partial y/\partial t = \theta \, \Delta y$ for $x \in \Omega$ and $t \in [0, t_f]$, $\theta$ being a parameter, subject to the initial condition $y = 0$ in $\Omega$ at $t = 0$ and the boundary conditions: $y = 100$ on the left side of $\Omega$ (Dirichlet condition), $\partial y/\partial n = 10$ on the right side of $\Omega$ (Neumann condition), $\partial y/\partial n = 0$ on all other boundaries (Neumann condition).

67 Example (ctd.): Calibration and measurement design. If the thermal conductivity $\theta$ is unknown, which is rather typical, we could observe the temperature evolution at a fixed point and tune $\theta$ so that our model fits the data as well as it can. Problem: Where to place the pyrometer so as to get the most valuable information about $\theta$? Such situations (PDEs and related sensor location problems) are common in engineering practice.

68 Other motivating examples: environment observation and forecasting systems; air pollution monitoring; groundwater resources management and river monitoring; military applications (surveillance and inspection in hazardous environments); fault detection and isolation; emerging smart materials; intelligent building monitoring; habitat monitoring; intelligent traffic systems; computer-assisted tomography; recovery of valuable minerals and hydrocarbons from underground permeable reservoirs; and many more...

69 Spatiotemporal dynamics. Distributed parameter system: a dynamic system whose state depends on both time and space; its model (a PDE) is known up to a vector of unknown parameters $\theta$. Observations are made using sensors in order to estimate $\theta$.

70 Subject of this minicourse. (Figure: spatial area $\Omega$ with sensors at $x^1$, $x^2$, $x^3$.) Problem: How to choose optimal sensor locations?

71 Strategies of taking measurements (Option 1): Stationary sensors. (Figure: spatial area $\Omega$ with fixed sensors at $x^1$, $x^2$, $x^3$.) How to determine the best sensor configuration?

72 Strategies of taking measurements (Option 2): Scanning. (Figure: area $\Omega$ with sensors activated at time $t$ at $(x^1, t)$, $(x^2, t)$, $(x^3, t)$; the remaining sensors are dormant at $t$.) How to select optimal locations at given time instants?

73 Strategies of taking measurements (Option 3): Mobile sensors. (Figure: spatial area $\Omega$ with starting positions and trajectories $x^1(\cdot)$, $x^2(\cdot)$.) How to design optimal trajectories?

74 Enabling technology: Sensor Networks (SNs). A collection of spatially distributed, inexpensive, small devices equipped with sensors and interconnected by a communication network, exchanging and processing data to cooperatively monitor physical or environmental conditions (e.g., temperature, sound, vibration, pressure). (Figure: sensor nodes, a gateway and communication links between motes.)

75 SN nodes. Motes must be low cost, low power (for long-term operation), automated (maintenance free), robust (to withstand errors and failures), and non-intrusive. Hardware: a microprocessor, data storage, sensors, A/D converters, a data transceiver (Bluetooth), controllers, and an energy source. Software: TinyOS. They are already manufactured (Crossbow, Intel).

76 Applications: from the battlefield...

77 Applications: ... to our daily life

78 Existing approaches to optimal sensor location: conversion to a non-linear problem of state estimation (Malebranche, 1988; Korbicz et al., 1988); employing random fields analysis (Kazimierczyk, 1989; Sun, 1994); formulation in the spirit of optimum experimental design (Uspenskii and Fedorov, 1975; Quereshi et al., 1980; Rafajłowicz; Kammer, 1990; 1992; Sun, 1994; Point et al., 1996; Vande Wouwer et al., 1999; Uciński; Patan).

80 System description. Consider a spatial domain $\Omega \subset \mathbb{R}^2$ and the PDE $\partial y/\partial t = \mathcal{F}(x, t, y, \theta)$, $x \in \Omega$, $t \in T$, subject to the appropriate initial and boundary conditions. Notation: $x$ — spatial variable; $t$ — time, $T = [0, t_f]$; $y = y(x, t)$ — state vector; $\mathcal{F}$ — given operator including spatial derivatives; $\theta \in \mathbb{R}^m$ — vector of unknown (constant) parameters.

81 Example: Air pollution. Consider a domain $\Omega \subset \mathbb{R}^2$ with sufficiently smooth boundary $\Gamma$. Advection-diffusion equation: $\partial y(x,t)/\partial t = \nabla \cdot [\kappa(x, \theta) \nabla y(x,t)] - \nu(x) \cdot \nabla y(x,t) + f(x, t, \theta)$, $y(x, 0) = y_0(x)$, $y(x, t) = 0$ on $\Gamma$. Notation: $y(x,t)$ — contaminant concentration at point $x$ and time $t$; $\kappa(x,\theta)$ — diffusivity coefficient, parameterized by $\theta$; $f(x,t,\theta)$ — source term; $\nu(x)$ — wind velocity field.

82 Observation. For pointwise stationary sensors we have $z(t) = y_m(t) + \varepsilon_m(t)$, $t \in T$, where $y_m(t) = \mathrm{col}[y(x^1, t), \dots, y(x^N, t)]$, $\varepsilon_m(t) = \mathrm{col}[\varepsilon(x^1, t), \dots, \varepsilon(x^N, t)]$, $x^j \in \Omega$, $j = 1, \dots, N$ — sensor locations, and $\varepsilon(x, t)$ is Gaussian measurement noise such that $E\{\varepsilon(x, t)\} = 0$ and $E\{\varepsilon(x, t)\varepsilon(x', t')\} = q(x, x', t)\,\delta(t - t')$, $\delta$ being the Dirac delta function.

83 LS criterion. The least-squares estimates of $\theta$ are the ones which minimize $J(\theta) = \tfrac12 \int_T \| z(t) - \hat y_m(t; \theta) \|^2_{Q^{-1}(t)} \, dt$, where $Q(t) = [q(x^i, x^j, t)]_{i,j=1}^N \in \mathbb{R}^{N \times N}$ is positive-definite, $\|a\|^2_{Q^{-1}(t)} = a^T Q^{-1}(t) a$ for $a \in \mathbb{R}^N$, $\hat y_m(t; \theta) = \mathrm{col}[\hat y(x^1, t; \theta), \dots, \hat y(x^N, t; \theta)]$, and $\hat y(\cdot, \cdot; \theta)$ stands for the solution to the state equation corresponding to a given value of $\theta$.

85 Optimal measurement problem. Formulation: Determine $x^j$, $j = 1, \dots, N$, so as to maximize the identification accuracy in some sense. Problem: How to quantify the identification accuracy? Answer: Make use of the Cramér–Rao inequality: $\mathrm{cov}(\hat\theta) = E\{(\hat\theta - \theta)(\hat\theta - \theta)^T\} \succeq M^{-1}$. We have $\mathrm{cov}(\hat\theta) = M^{-1}$ provided that the estimator is efficient! But what is $M$?

86 Towards a criterion. Spatially independent measurements: $E\{\varepsilon(x^i, t)\varepsilon(x^j, t')\} = \sigma^2 \delta_{ij} \delta(t - t')$, where $\delta_{ij}$ is the Kronecker delta and $\sigma > 0$ signifies the standard deviation of the measurement noise. Average Fisher Information Matrix (FIM): $\bar M = \frac{1}{N t_f} \sum_{j=1}^N \int_0^{t_f} g(x^j, t)\, g^T(x^j, t) \, dt$, where $g(x, t) = \big( \partial y(x, t; \theta)/\partial \theta \big)^T \big|_{\theta = \theta^0}$ is the vector of sensitivity coefficients and $\theta^0$ stands for a preliminary estimate of $\theta$. $\bar M = M$ up to a constant multiplier.
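
Given sampled sensitivity vectors, the average FIM is just a sum of time-integrated outer products. A sketch with the time integral approximated by the trapezoidal rule; the sensitivity function g below is a stand-in, since in practice it comes from the sensitivity equations of the PDE model:

```python
# Sketch: M = 1/(N t_f) * sum_j integral_0^{t_f} g(x_j, t) g(x_j, t)^T dt.
import numpy as np

def average_fim(sensor_xs, g, t_grid):
    """Assemble the average FIM from the sensitivity vector g(x, t)."""
    N, t_f = len(sensor_xs), t_grid[-1]
    m = g(sensor_xs[0], t_grid[0]).size
    M = np.zeros((m, m))
    for xj in sensor_xs:
        G = np.array([g(xj, t) for t in t_grid])   # shape (nt, m)
        outer = G[:, :, None] * G[:, None, :]      # g g^T at each time sample
        M += np.trapz(outer, t_grid, axis=0)       # trapezoidal time integral
    return M / (N * t_f)

# Illustrative two-parameter sensitivity vector, not from the lecture:
g = lambda x, t: np.array([np.sin(np.pi * x) * np.exp(-t),
                           t * np.sin(2 * np.pi * x) * np.exp(-t)])
M = average_fim([0.3, 0.7], g, np.linspace(0.0, 1.0, 200))
print(np.linalg.det(M))
```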

87 Calculation of the sensitivity vector g. Options: (1) finite-difference method; (2) sensitivity-equation method (direct approach); (3) variational method (indirect approach).

88 Calculation of the sensitivity vector g. Example (direct approach): Consider $\partial y/\partial t = \nabla \cdot (\kappa \nabla y)$ in $\Omega \times T$, where $\Omega \subset \mathbb{R}^2$ is the spatial domain with boundary $\Gamma$ and $T = (0, t_f)$, subject to $\kappa(x) = a + b\psi(x)$, $\psi(x) = x_1 + x_2$, $y(x, 0) = 5$ in $\Omega$, $y(x, t) = 5(1 - t)$ on $\Gamma \times T$.

89 Calculation of the sensitivity vector g. Example (ctd.): The sensitivities to $a$ and $b$ satisfy $\partial y_a/\partial t = \nabla \cdot (\nabla y) + \nabla \cdot (\kappa \nabla y_a)$ and $\partial y_b/\partial t = \nabla \cdot (\psi \nabla y) + \nabla \cdot (\kappa \nabla y_b)$, supplemented with homogeneous initial and Dirichlet boundary conditions. We have to solve them simultaneously with the state equation and to store the solution in memory for further use in a sensor location algorithm.
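
For comparison, Option 1 from the list on slide 87 (finite differences) is the quickest to prototype, at the price of step-size sensitivity. A sketch in which solve_model is a placeholder for any solver returning the model outputs at the measurement points:

```python
# Sketch: finite-difference sensitivities g_k ~ dy/dtheta_k, one extra
# model solve per parameter (forward differences).
import numpy as np

def fd_sensitivities(solve_model, theta, rel_step=1e-6):
    """Return dy/dtheta with shape (output_size, len(theta)); accuracy
    hinges on the step size, which is why the direct sensitivity-equation
    approach is usually preferred."""
    y0 = solve_model(theta)
    G = np.empty((y0.size, theta.size))
    for k in range(theta.size):
        h = rel_step * max(abs(theta[k]), 1.0)
        theta_pert = theta.copy()
        theta_pert[k] += h
        G[:, k] = (solve_model(theta_pert) - y0) / h
    return G

# Toy usage with an algebraic "model" standing in for a PDE solve:
toy = lambda th: np.array([th[0] * np.exp(-th[1] * 0.5)])
print(fd_sensitivities(toy, np.array([2.0, 0.3])))
```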

90 Ultimate formulation. Reformulated sensor location problem: Choose $x^j$, $j = 1, \dots, N$, corresponding to an extremum of some scalar measure $\Psi$ of the FIM, e.g., minimize $\Psi_\gamma(M) = \big[ \tfrac{1}{m}\,\mathrm{trace}\big( (Q M^{-1} Q^T)^\gamma \big) \big]^{1/\gamma}$ if $\det(M) \neq 0$, and $\Psi_\gamma(M) = \infty$ if $\det(M) = 0$.

91 Most popular choices. $\gamma = 1$ and $Q = I_m$ — A-optimal design: $J_A(x^1, \dots, x^N) = \mathrm{trace}\, M^{-1}(x^1, \dots, x^N) \to \min$. $\gamma \to \infty$ and $Q = I_m$ — E-optimal design: $J_E(x^1, \dots, x^N) = \lambda_{\max}\big( M^{-1}(x^1, \dots, x^N) \big) \to \min$. $\gamma \to 0$ and $Q = I_m$ — D-optimal design: $J_D(x^1, \dots, x^N) = \det M(x^1, \dots, x^N) \to \max$. Sensitivity criterion (may yield very poor solutions!): $J_S(x^1, \dots, x^N) = \mathrm{trace}\, M(x^1, \dots, x^N) \to \max$.

92 Well, there are some drawbacks... The problem dimensionality may be high and many local minima interfere with the optimization process. When $t_f < \infty$, the estimator is neither unbiased nor characterized by a normal distribution. Sensors sometimes tend to cluster. The optimal solution usually depends on the parameters to be identified.

93 Example. Consider the heat equation $\frac{\partial y(x,t)}{\partial t} = \frac{\partial}{\partial x_1}\Big( \kappa(x) \frac{\partial y(x,t)}{\partial x_1} \Big) + \frac{\partial}{\partial x_2}\Big( \kappa(x) \frac{\partial y(x,t)}{\partial x_2} \Big)$ in $\Omega \times T$, supplemented with $y(x, 0) = 5$ in $\Omega$ and $y(x, t) = 5(1 - t)$ on $\partial\Omega \times T$, where $\Omega = [0, 1]^2$, $T = (0, 1)$, $\kappa(x) = \theta_1 + \theta_2 x_1 + \theta_3 x_2$, $\theta_1 = 0.1$, $\theta_2 = \theta_3 = 0.3$.

94 Results. (Figure: locating six sensors through the direct use of non-linear programming techniques: (a) independent measurements, (b) correlated measurements.)

95 Sensor clusterization. Example: Consider $\frac{\partial y(x,t)}{\partial t} = \theta_1 \frac{\partial^2 y(x,t)}{\partial x^2}$, $x \in (0, \pi)$, $t \in (0, 1)$, supplemented by $y(0, t) = y(\pi, t) = 0$, $t \in (0, 1)$, and $y(x, 0) = \theta_2 \sin(x)$, $x \in (0, \pi)$. Find D-optimal sensor locations for estimation of $\theta_1$ and $\theta_2$.

96 Sensor clusterization. Example (ctd.), solution: The closed-form solution is $y(x, t) = \theta_2 \exp(-\theta_1 t) \sin(x)$. Assuming $\sigma = 1$, after some easy calculations we get $\det(M(x^1, x^2)) = \frac{\theta_2^2 \big( 1 - 2\exp(-2\theta_1) - 4\theta_1^2 \exp(-2\theta_1) + \exp(-4\theta_1) \big)}{16 \theta_1^4} \cdot \big( 2 - \cos^2(x^1) - \cos^2(x^2) \big)^2$, the first factor being constant and the second dependent on $x^1$ and $x^2$; it is maximized for $x^1_\star = x^2_\star = \pi/2$, i.e., both sensors cluster at the same point.
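
The clusterization result can be confirmed numerically without the closed-form determinant: assemble the FIM from the sensitivity vector of the explicit solution and maximize det(M) over a grid (a sketch; the quadrature and search grids are illustrative):

```python
# Sketch: D-optimal two-sensor placement for y = theta2 exp(-theta1 t) sin x.
import numpy as np

theta1, theta2 = 0.1, 1.0
t = np.linspace(0.0, 1.0, 400)

def g(x):
    # dy/dtheta1 = -theta2 t e^{-theta1 t} sin x, dy/dtheta2 = e^{-theta1 t} sin x
    return np.array([-theta2 * t * np.exp(-theta1 * t) * np.sin(x),
                     np.exp(-theta1 * t) * np.sin(x)])

def det_fim(x1, x2):
    M = np.zeros((2, 2))
    for x in (x1, x2):
        G = g(x)                                           # shape (2, nt)
        M += np.trapz(G[:, None, :] * G[None, :, :], t, axis=-1)
    return np.linalg.det(M)

xs = np.linspace(0.0, np.pi, 101)
vals = np.array([[det_fim(a, b) for b in xs] for a in xs])
i, j = np.unravel_index(vals.argmax(), vals.shape)
print(xs[i], xs[j])   # both close to pi/2: the sensors cluster
```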

97 Sensor clusterization. Example (ctd.). (Figure 2.2: surface and contour plots of $\det(M(x^1, x^2))$ versus the sensor locations, $\theta_1 = 0.1$, $\theta_2 = 1$.)

98 Dependence of designs on estimated parameters. Example: Consider the heat process $\frac{\partial y(x,t)}{\partial t} = \theta \frac{\partial^2 y(x,t)}{\partial x^2}$, $x \in (0, \pi)$, $t \in (0, t_f)$, complemented with $y(0, t) = y(\pi, t) = 0$, $t \in (0, t_f)$, and $y(x, 0) = \sin(x) + \tfrac12 \sin(2x)$, $x \in (0, \pi)$. Its solution can easily be found in explicit form as $y(x, t) = \exp(-\theta t) \sin(x) + \tfrac12 \exp(-4\theta t) \sin(2x)$.

99 Dependence of designs on estimated parameters. Example: The FIM (here scalar) is $M(x^1) = \int_0^{t_f} \big( \partial y(x^1, t; \theta)/\partial \theta \big)^2 \, dt$, where $\partial y/\partial \theta = -t \exp(-\theta t) \sin(x^1) - 2t \exp(-4\theta t) \sin(2x^1)$. The integral can be evaluated in closed form (a lengthy expression in $\sin(x^1)$, $\cos(x^1)$, $\theta$ and $t_f$); the essential point is that the best measurement point depends on the unknown $\theta$.

100 Dependence of designs on estimated parameters. Example (ctd.). (Figure 2.3: surface and contour plots of $M(x^1)$ as a function of the parameter $\theta$; the dashed line represents the best measurement points.)

101 1st option: Adaptive design. Since $\theta_{\mathrm{true}}$ is unknown, a common approach is to use some prior estimate $\theta^0$ of $\theta_{\mathrm{true}}$ instead. But $\theta^0$ may be far from $\theta_{\mathrm{true}}$, and properties of locally optimal designs can be very sensitive to changes in $\theta$. Remedy: A first way to obtain a robust design procedure is to provide adaptivity through application of a sequential design.

102 General scheme of adaptive design. Experimentation and estimation steps are alternated in a design–experiment–analysis cycle. At each stage, a locally optimal sensor location is determined based on the current parameter estimates (nominal parameter values can be assumed as initial guesses for the first stage), measurements are taken at the newly calculated sensor positions, and the data obtained are then analysed and used to update the parameter estimates.

109 Locally optimal design for stationary sensors. The approach adopted here: adaptation of the notion of continuous designs (the approach set forth by Rafajłowicz in the 1980s) for a wide class of optimality criteria, as is common in modern optimum experimental design.

111 Observations. Given $N$ sensors and a finite set $X = \{x^1, \dots, x^l\} \subset \Omega$ of points at which they can be placed, consider their fixed configuration. (Figure: spatial domain $\Omega$ with $l$ potential locations and $r_i$ sensors at $x^i$.)

113 Output equation. For pointwise stationary sensors we have $z_{ij}(t) = y(x^i, t; \theta) + \varepsilon_{ij}(t)$, $t \in T$, for $i = 1, \dots, l$ and $j = 1, \dots, r_i$, where $\sum_{i=1}^l r_i = N$ and $\varepsilon_{ij}(\cdot)$ is white Gaussian measurement noise. Average Fisher Information Matrix (FIM): up to a constant multiplier, we have $M = \frac{1}{N t_f} \sum_{i=1}^l r_i \int_0^{t_f} g(x^i, t)\, g^T(x^i, t) \, dt$.

114 Problem of weight optimization. Minimize some measure $\Psi$ of the FIM, e.g., $\Psi(M) = \ln\det(M^{-1})$ or $\Psi(M) = \mathrm{trace}(M^{-1})$, by finding a solution (called a design) of the form $\xi = \{x^1, \dots, x^l;\ p_1, \dots, p_l\}$, where $p_i = r_i/N$, i.e., the weight $p_i$ defines the proportion of sensors placed at $x^i$, $i = 1, \dots, l$.

117 Problem relaxation. The problem still has a discrete nature, as the sought weights $p_i$ are integer multiples of $1/N$ which sum up to unity. Consequently, calculus techniques cannot be exploited in the solution. Thus we extend the definition of the design: when $N$ is large, the feasible $p_i$'s can be considered as arbitrary real numbers in the interval $[0, 1]$ which sum up to unity, not necessarily integer multiples of $1/N$, i.e., elements of the canonical simplex.

118 Ultimate formulation. Find $\xi = \{x^1, \dots, x^l;\ p_1, \dots, p_l\}$ such that $p^\star = \arg\min_{p \in S} \Psi[M(\xi)]$, where $S$ is the canonical simplex, subject to $M(\xi) = \sum_{i=1}^l p_i \Upsilon(x^i)$. Here $\Upsilon(x) = \frac{1}{t_f} \int_0^{t_f} g(x, t)\, g^T(x, t) \, dt$ is the fixed contribution of the point $x$ to the FIM.

122 Differentiability of the design criterion. Define $\overset{\circ}{\Psi}(\xi) = \frac{\partial \Psi(M)}{\partial M}\Big|_{M = M(\xi)} = \Big[ \frac{\partial \Psi(M)}{\partial M_{ij}} \Big]\Big|_{M = M(\xi)}$. Note that we have $\frac{\partial}{\partial M} \log\det(M^{-1}) = -(M^{-1})^T$, $\frac{\partial}{\partial M} \mathrm{trace}(M^{-1}) = -(M^{-2})^T$, $\frac{\partial}{\partial M} \mathrm{trace}(M) = I$.

123 Differentiability of the design criterion. We introduce the crucial functions $c(\xi) = -\mathrm{trace}\big[ \overset{\circ}{\Psi}(\xi) M(\xi) \big]$ and $\phi(x, \xi) = -\mathrm{trace}\big[ \overset{\circ}{\Psi}(\xi) \Upsilon(x) \big]$. For the common criteria: $\Psi[M(\xi)] = \ln\det(M^{-1}(\xi))$: $\phi(x, \xi) = \frac{1}{t_f} \int_0^{t_f} g^T(x,t)\, M^{-1}(\xi)\, g(x,t) \, dt$, $c(\xi) = m$; $\Psi[M(\xi)] = \mathrm{trace}(M^{-1}(\xi))$: $\phi(x, \xi) = \frac{1}{t_f} \int_0^{t_f} g^T(x,t)\, M^{-2}(\xi)\, g(x,t) \, dt$, $c(\xi) = \mathrm{trace}(M^{-1}(\xi))$; $\Psi[M(\xi)] = -\mathrm{trace}(M(\xi))$: $\phi(x, \xi) = \frac{1}{t_f} \int_0^{t_f} g^T(x,t)\, g(x,t) \, dt$, $c(\xi) = \mathrm{trace}(M(\xi))$.

125 Characterizations of solutions. Theorem (Existence of solution): An optimal design exists comprising no more than $m(m+1)/2$ support points. Moreover, the set of optimal designs is convex. Theorem (Equivalence theorem): The following characterizations of an optimal design $\xi^\star$ are equivalent: (1) the design $\xi^\star$ minimizes $\Psi[M(\xi)]$; (2) the design $\xi^\star$ minimizes $\max_{x \in X} \phi(x, \xi) - c(\xi)$; (3) $\max_{x \in X} \phi(x, \xi^\star) = c(\xi^\star)$. All the designs satisfying (1)–(3) and their convex combinations have the same $M(\xi^\star)$.

126 Convex combination of designs. For any designs $\xi_1 = \{x^1, \dots, x^l;\ p_1^{(1)}, \dots, p_l^{(1)}\}$ and $\xi_2 = \{x^1, \dots, x^l;\ p_1^{(2)}, \dots, p_l^{(2)}\}$ we define $\xi = (1 - \lambda)\xi_1 + \lambda\xi_2 = \{x^1, \dots, x^l;\ (1-\lambda)p_1^{(1)} + \lambda p_1^{(2)}, \dots, (1-\lambda)p_l^{(1)} + \lambda p_l^{(2)}\}$.

127 Numerical approach: A-optimality. For $\Psi[M(\xi)] = \mathrm{trace}(M^{-1}(\xi))$, we have to find $p$ and the additional variable $q \in \mathbb{R}^m$ to minimize $J(p, q) = 1^T q$ subject to $1^T p = 1$, $p \geq 0$, and the Linear Matrix Inequalities (LMIs) $\begin{pmatrix} M(\xi) & e_j \\ e_j^T & q_j \end{pmatrix} \succeq 0$, $j = 1, \dots, m$, where $e_j$ is the $j$-th coordinate versor.
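
A sketch of this weight problem in cvxpy (an assumption — any SDP-capable modeling tool works). By the Schur complement, the LMI says $q_j \geq e_j^T M^{-1} e_j$, which is exactly what cvxpy's matrix_frac atom encodes, so trace$(M^{-1})$ can be minimized directly:

```python
# Sketch: A-optimal design weights via convex optimization.
import cvxpy as cp
import numpy as np

def a_optimal_weights(Upsilon):
    """Upsilon: list of the l fixed m x m contribution matrices Upsilon(x^i)."""
    l, m = len(Upsilon), Upsilon[0].shape[0]
    p = cp.Variable(l, nonneg=True)
    M = sum(p[i] * Upsilon[i] for i in range(l))   # M(xi) = sum_i p_i Upsilon(x^i)
    objective = cp.matrix_frac(np.eye(m), M)       # equals trace(M^{-1})
    cp.Problem(cp.Minimize(objective), [cp.sum(p) == 1]).solve()
    return p.value

# Toy usage: A-optimal design for y = a0 + a1 x on a grid in [-1, 1];
# the mass concentrates at the endpoints (~0.5 at x = -1 and x = +1).
xs = np.linspace(-1.0, 1.0, 21)
print(np.round(a_optimal_weights([np.outer([1.0, x], [1.0, x]) for x in xs]), 3))
```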

129 Numerical approach: E-optimality. For $\Psi[M(\xi)] = \lambda_{\max}[M^{-1}(\xi)]$ we have to find $p$ and $q \in \mathbb{R}$ to maximize $q$ (equivalently, minimize $J(p, q) = -q$) subject to $1^T p = 1$, $p \geq 0$, and the LMI $M(\xi) \succeq qI$. Note that $\lambda_{\max}[M^{-1}(\xi)]$ is non-differentiable!

130 Numerical approach: Torsney's algorithm for D-optimality. For $\Psi[M(\xi)] = \ln\det[M^{-1}(\xi)]$ apply: (1) Guess $p_i^{(0)}$, $i = 1, \dots, l$; set $k = 0$. (2) If $\phi(x^i, \xi^{(k)}) = \mathrm{trace}\big( \Upsilon(x^i) M^{-1}(\xi^{(k)}) \big) \approx m$, $i = 1, \dots, l$, then STOP. (3) Evaluate $p_i^{(k+1)} = p_i^{(k)}\, \phi(x^i, \xi^{(k)})/m$, $i = 1, \dots, l$; form $\xi^{(k+1)}$, increment $k$ and go to Step 2.

133 Properties of this multiplicative algorithm. The idea is reminiscent of the EM algorithm used for maximum likelihood estimation. Theorem: Assume that $\{\xi^{(k)}\}$ is a sequence of iterates constructed by the outlined algorithm. Then the sequence $\{\det(M(\xi^{(k)}))\}$ is nondecreasing and $\lim_{k \to \infty} \det(M(\xi^{(k)})) = \max_\xi \det(M(\xi))$.
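
A sketch of the multiplicative algorithm in NumPy, with the equivalence-theorem condition of Step 2 used as the stopping rule; the toy candidate set below (simple linear regression on [-1, 1]) is illustrative:

```python
# Sketch: Torsney-type multiplicative algorithm for D-optimal weights.
import numpy as np

def d_optimal_weights(Upsilon, tol=1e-6, max_iter=10000):
    """Upsilon: list of the l fixed m x m contribution matrices Upsilon(x^i)."""
    l, m = len(Upsilon), Upsilon[0].shape[0]
    p = np.full(l, 1.0 / l)                      # uniform starting weights
    for _ in range(max_iter):
        M = sum(pi * U for pi, U in zip(p, Upsilon))
        Minv = np.linalg.inv(M)
        phi = np.array([np.trace(U @ Minv) for U in Upsilon])
        if np.max(phi) <= m * (1.0 + tol):       # equivalence-theorem check
            break
        p *= phi / m                             # multiplicative update
    return p

# Toy usage: D-optimal design for y = a0 + a1 x on a grid of candidate
# points in [-1, 1]; the mass ends up ~0.5 at x = -1 and x = +1.
xs = np.linspace(-1.0, 1.0, 21)
Upsilon = [np.outer([1.0, x], [1.0, x]) for x in xs]
print(np.round(d_optimal_weights(Upsilon), 3))
```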

134 Numerical example. Consider the nonlinear system $\partial y/\partial t = a\, \partial^2 y/\partial x^2 - b y^3$, $x \in (0, 1)$, $t \in (0, t_f)$, subject to the conditions $y(x, 0) = c\,\psi(x)$, $x \in (0, 1)$, and $y(0, t) = y(1, t) = 0$, $t \in (0, t_f)$, where $t_f = 0.8$ and $\psi(x) = \sin(\pi x) + \sin(3\pi x)$.

135 Adopted method. Determine a D-optimum design of the form $\xi^\star = \{x^1, \dots, x^l;\ p_1^\star, \dots, p_l^\star\}$ for recovering $\theta = (a, b, c)$, where the support points are $x^i = 0.05(i - 1)$, $i = 1, \dots, l = 21$. Apply the multiplicative algorithm just mentioned. (Matlab's routine pdepe comes in handy here.)

136 Results. (Figure: optimal weights $p_i^\star$ versus $x$.)

137 Results. (Figure: derivative function $\phi(x, \xi^\star)$ versus $x$.)

138 Results. (Figure: monotonic increase in $\det[M(\xi^{(k)})]$ over the iterations.)

139 Simulation example: material flaw detection. Consider a metallic core with a rectangular crack inside. (Figure: cross-section plane of reconstruction; domain $\Omega$ with boundary segments $\Gamma_1, \dots, \Gamma_5$.) The left and right boundaries $\Gamma_1$ and $\Gamma_2$ are subjected to constant voltages forcing the flow of a low-density steady current.

140 Computer-assisted impedance tomography. The electrical voltage distribution $y$ is described by the Laplace equation $\mathrm{div}\big( \gamma(x) \nabla y(x) \big) = 0$, $x \in \Omega$, with boundary conditions $y(x) = 10$ for $x \in \Gamma_1$, $y(x) = -10$ for $x \in \Gamma_2$, and $\partial y(x)/\partial n = 0$ for $x \in \Gamma_i$, $i = 3, 4, 5$. Conductivity coefficient: $\gamma(x) = \exp\big( -\theta_1 (x_1 - \theta_2)^2 - \theta_3 (x_2 - \theta_4)^2 \big)$, $\theta^0 = (30.0, 0.3, 80.0, 0.7)$.

141 Results. (Figure: D-optimal and A-optimal sensor configurations on the domain with boundaries $\Gamma_1, \dots, \Gamma_5$.)

146 Comments. The approach is useful as: it indicates spatial regions which provide the most important information about the parameters, while quantifying their influence; instead of several sensors placed at a point, we can use a more precise measurement device at that point; design weights can be interpreted as measurement frequencies; with a little tuning, the results can be employed when attacking the setting with purely binary weights ($r_i = 0$ or $r_i = 1$), see Part 2.

147 Further extensions? Question: So far, we have focused on optimizing weights assigned to spatial points fixed a priori. Is it possible to simultaneously optimize these spatial points and the weights, possibly also selecting their optimal number?

148 Passage from discrete to continuous designs. Discrete: $\xi_N = \{x^1, \dots, x^l;\ p_1, \dots, p_l\}$, $p_i = r_i/N$, $\sum_{i=1}^l p_i = 1$, with $M(\xi_N) = \sum_{i=1}^l p_i \Upsilon(x^i)$. Continuous: $\xi$ — any probability measure on $X$, $\int_X \xi(dx) = 1$, with $M(\xi) = \int_X \Upsilon(x)\, \xi(dx)$. Notation: $X$ is a compact set of feasible sensor locations, $\Upsilon(x) = \frac{1}{t_f} \int_0^{t_f} g(x, t)\, g^T(x, t) \, dt$.

150 Representation properties of FIMs. Let $\Xi(X)$ denote the set of all probability measures on $X$. Lemma (Symmetry and nonnegativity): For any $\xi \in \Xi(X)$ the information matrix $M(\xi)$ is symmetric and non-negative definite. Lemma (Set of all FIMs): $\mathcal{M}(X) = \{M(\xi) : \xi \in \Xi(X)\}$ is compact and convex. Lemma (Designs as probability mass functions): For any $M_0 \in \mathcal{M}(X)$ there always exists a purely discrete design $\xi$ with no more than $m(m+1)/2 + 1$ support points such that $M(\xi) = M_0$.

151 Representation properties of FIMs. Consequently, we may think of designs as discrete probability mass functions of the form $\xi = \{x^1, x^2, \dots, x^l;\ p_1, p_2, \dots, p_l\}$, $\sum_{i=1}^l p_i = 1$, where the number of support points $l$ is not fixed and constitutes an additional parameter to be determined, subject to the condition $l \leq m(m+1)/2 + 1$.

152 Differentiability of the design criterion. Our main assumption is that the directional derivative $\frac{\partial}{\partial \lambda} \Psi\big[ M(\xi) + \lambda\big( M(\tilde\xi) - M(\xi) \big) \big]\big|_{\lambda = 0^+}$ exists. But it is not particularly restrictive, since this derivative equals $\int_X \psi(x, \xi)\, \tilde\xi(dx)$, where $\psi(x, \xi) = c(\xi) - \phi(x, \xi)$, $c(\xi) = -\mathrm{trace}\big[ \overset{\circ}{\Psi}(\xi) M(\xi) \big]$, $\phi(x, \xi) = -\mathrm{trace}\big[ \overset{\circ}{\Psi}(\xi) \Upsilon(x) \big]$, and $\overset{\circ}{\Psi}(\xi) = \frac{\partial \Psi(M)}{\partial M}\big|_{M = M(\xi)}$.

153 Differentiability of the design criterion. Recall: for $\Psi[M(\xi)] = \ln\det(M^{-1}(\xi))$, $\phi(x, \xi) = \frac{1}{t_f} \int_0^{t_f} g^T(x,t)\, M^{-1}(\xi)\, g(x,t) \, dt$ and $c(\xi) = m$; for $\Psi[M(\xi)] = \mathrm{trace}(M^{-1}(\xi))$, $\phi(x, \xi) = \frac{1}{t_f} \int_0^{t_f} g^T(x,t)\, M^{-2}(\xi)\, g(x,t) \, dt$ and $c(\xi) = \mathrm{trace}(M^{-1}(\xi))$; for $\Psi[M(\xi)] = -\mathrm{trace}(M(\xi))$, $\phi(x, \xi) = \frac{1}{t_f} \int_0^{t_f} g^T(x,t)\, g(x,t) \, dt$ and $c(\xi) = \mathrm{trace}(M(\xi))$.

154 Characterization of optimal solutions. Theorem: Under some conditions (not particularly restrictive anyway; cf. the monograph) we have the following results: (1) an optimal design exists comprising not more than $m(m+1)/2$ points (i.e., one less than predicted previously); (2) the set of optimal designs is convex.

155 Convex combination of designs. Assume that the support points of the designs $\xi_1$ and $\xi_2$ coincide (this situation is easy to achieve, since if a support point is included in only one design, we can formally add it to the other design and assign it zero weight). Thus we have $\xi_1 = \{x^1, \dots, x^l;\ p_1^{(1)}, \dots, p_l^{(1)}\}$ and $\xi_2 = \{x^1, \dots, x^l;\ p_1^{(2)}, \dots, p_l^{(2)}\}$, and, in consequence, we define $\xi = (1 - \lambda)\xi_1 + \lambda\xi_2 = \{x^1, \dots, x^l;\ (1-\lambda)p_1^{(1)} + \lambda p_1^{(2)}, \dots, (1-\lambda)p_l^{(1)} + \lambda p_l^{(2)}\}$.

156 Main result: Equivalence Theorem. Theorem: The following characterizations of an optimal design $\xi^\star$ are equivalent in the sense that each implies the other two: (1) the design $\xi^\star$ minimizes $\Psi[M(\xi)]$; (2) the design $\xi^\star$ minimizes $\max_{x \in X} \phi(x, \xi) - c(\xi)$; (3) $\max_{x \in X} \phi(x, \xi^\star) = c(\xi^\star)$. All the designs satisfying (1)–(3) and their convex combinations have the same information matrix $M(\xi^\star)$.

157 Illustration of Equivalence Theorem. (Figure 3.1: the sensitivity function $\phi(x, \xi^\star)$ over $X$ for the D-optimal design of Example 3.1 (solid line); the same function for an exemplary nonoptimal design is also shown for comparison (dashed line).)

158 Illustration of Equivalence Theorem. Example: Consider $\frac{\partial y(x,t)}{\partial t} = \alpha \frac{\partial^2 y(x,t)}{\partial x^2} + \beta y(x,t)$, $x \in \Omega = (0, 1)$, $t \in T = (0, 1)$, subject to $y(x, 0) = \sin(\pi x) + \sin(2\pi x)$ in $\Omega$ and $y(0, t) = y(1, t) = 0$ on $T$. The purpose is to determine a D-optimum design for $\theta = (\alpha, \beta)$.

159 Illustration of Equivalence Theorem. Example (ctd.): Exploiting the closed-form solution $y(x, t) = e^{(\beta - \alpha\pi^2)t} \sin(\pi x) + e^{(\beta - 4\alpha\pi^2)t} \sin(2\pi x)$, we get $\xi^\star = \big\{ \tfrac{1}{\pi}\arctan(\sqrt{2}),\ 1 - \tfrac{1}{\pi}\arctan(\sqrt{2});\ \tfrac12,\ \tfrac12 \big\}$.

160 Illustration of Equivalence Theorem. Example (ctd.). (Figure 3.2: the sensitivity function $\phi(x, \xi^\star)$ for the D-optimal design of Example 3.2 (solid line), with the support points $x^1$ and $x^2$ marked.)

161 Towards numerical algorithms. Step 1: Guess a discrete non-degenerate starting design measure $\xi^{(0)}$ (we must have $\det(M(\xi^{(0)})) \neq 0$); choose some positive tolerance $\eta \ll 1$; set $k = 0$. Step 2: Determine $x_0^{(k)} = \arg\max_{x \in X} \phi(x, \xi^{(k)})$. If $\phi(x_0^{(k)}, \xi^{(k)}) < c(\xi^{(k)}) + \eta$, then STOP. Step 3: For an appropriate value of $0 < \lambda_k < 1$, set $\xi^{(k+1)} = (1 - \lambda_k)\xi^{(k)} + \lambda_k \delta_{x_0^{(k)}}$, where $\delta_x$ stands for the one-point (Dirac) design measure $\{x;\ 1\}$; increment $k$ by one and go to Step 2.
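
A sketch of this exchange-type scheme for the D-optimality criterion on a finite candidate set, with the classical diminishing step $\lambda_k = 1/(k+2)$ (the step-size rule is an assumption; the slides leave $\lambda_k$ unspecified):

```python
# Sketch: Wynn-Fedorov-type vertex-direction algorithm for D-optimality;
# Upsilon(x) is again the fixed contribution of point x to the FIM.
import numpy as np

def wynn_fedorov(Upsilon, eta=1e-4, max_iter=5000):
    l, m = len(Upsilon), Upsilon[0].shape[0]
    p = np.full(l, 1.0 / l)                      # non-degenerate starting design
    for k in range(max_iter):
        M = sum(pi * U for pi, U in zip(p, Upsilon))
        Minv = np.linalg.inv(M)
        phi = np.array([np.trace(U @ Minv) for U in Upsilon])
        i0 = int(np.argmax(phi))                 # most informative candidate point
        if phi[i0] < m + eta:                    # c(xi) = m for D-optimality
            break
        lam = 1.0 / (k + 2)                      # diminishing step size
        p *= (1.0 - lam)                         # shift mass towards delta at x0
        p[i0] += lam
    return p

xs = np.linspace(-1.0, 1.0, 21)
p = wynn_fedorov([np.outer([1.0, x], [1.0, x]) for x in xs])
print(np.round(p, 2))   # again concentrates near x = -1 and x = +1
```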

162 Example. Consider again the heat equation $\frac{\partial y(x,t)}{\partial t} = \frac{\partial}{\partial x_1}\Big( \kappa(x) \frac{\partial y(x,t)}{\partial x_1} \Big) + \frac{\partial}{\partial x_2}\Big( \kappa(x) \frac{\partial y(x,t)}{\partial x_2} \Big)$ in $\Omega \times T$, supplemented with $y(x, 0) = 5$ in $\Omega$ and $y(x, t) = 5(1 - t)$ on $\partial\Omega \times T$, where $\Omega = [0, 1]^2$, $T = (0, 1)$, $\kappa(x) = \theta_1 + \theta_2 x_1 + \theta_3 x_2$, $\theta_1 = 0.1$, $\theta_2 = \theta_3 = 0.3$.

163 Results. (Figure, repeated for comparison: locating six sensors through the direct use of non-linear programming techniques: (a) independent measurements, (b) correlated measurements.)

164 Results. (Figure: support points of an optimal continuous design with equal weights (left) and the result of a clusterization-free design for $N = 97$ (right).)

165 Conclusions. The following is a concise summary of the contributions provided by this work to the state of the art in optimal sensor location for parameter estimation in DPSs: (1) it dramatically reduces the problem dimensionality; (2) it provides characterizations of continuous designs for stationary sensors and a wide class of optimality criteria; (3) it clarifies how to adapt well-known algorithms of optimum experimental design for finding numerical approximations to the solutions.


Multiplicative Algorithm for computing Optimum Designs for computing s DAE Conference 2012 Roberto Dorta-Guerra, Raúl Martín Martín, Mercedes Fernández-Guerrero and Licesio J. Rodríguez-Aragón Optimum Experimental Design group October 20, 2012 http://areaestadistica.uclm.es/oed

More information

Stochastic Realization of Binary Exchangeable Processes

Stochastic Realization of Binary Exchangeable Processes Stochastic Realization of Binary Exchangeable Processes Lorenzo Finesso and Cecilia Prosdocimi Abstract A discrete time stochastic process is called exchangeable if its n-dimensional distributions are,

More information

Basic concepts in estimation

Basic concepts in estimation Basic concepts in estimation Random and nonrandom parameters Definitions of estimates ML Maimum Lielihood MAP Maimum A Posteriori LS Least Squares MMS Minimum Mean square rror Measures of quality of estimates

More information

DETERMINATION OF AN UNKNOWN SOURCE TERM IN A SPACE-TIME FRACTIONAL DIFFUSION EQUATION

DETERMINATION OF AN UNKNOWN SOURCE TERM IN A SPACE-TIME FRACTIONAL DIFFUSION EQUATION Journal of Fractional Calculus and Applications, Vol. 6(1) Jan. 2015, pp. 83-90. ISSN: 2090-5858. http://fcag-egypt.com/journals/jfca/ DETERMINATION OF AN UNKNOWN SOURCE TERM IN A SPACE-TIME FRACTIONAL

More information

SELECTIVE ANGLE MEASUREMENTS FOR A 3D-AOA INSTRUMENTAL VARIABLE TMA ALGORITHM

SELECTIVE ANGLE MEASUREMENTS FOR A 3D-AOA INSTRUMENTAL VARIABLE TMA ALGORITHM SELECTIVE ANGLE MEASUREMENTS FOR A 3D-AOA INSTRUMENTAL VARIABLE TMA ALGORITHM Kutluyıl Doğançay Reza Arablouei School of Engineering, University of South Australia, Mawson Lakes, SA 595, Australia ABSTRACT

More information

Value Iteration and Action ɛ-approximation of Optimal Policies in Discounted Markov Decision Processes

Value Iteration and Action ɛ-approximation of Optimal Policies in Discounted Markov Decision Processes Value Iteration and Action ɛ-approximation of Optimal Policies in Discounted Markov Decision Processes RAÚL MONTES-DE-OCA Departamento de Matemáticas Universidad Autónoma Metropolitana-Iztapalapa San Rafael

More information

Iterative regularization of nonlinear ill-posed problems in Banach space

Iterative regularization of nonlinear ill-posed problems in Banach space Iterative regularization of nonlinear ill-posed problems in Banach space Barbara Kaltenbacher, University of Klagenfurt joint work with Bernd Hofmann, Technical University of Chemnitz, Frank Schöpfer and

More information

Nonparametric Identification of a Binary Random Factor in Cross Section Data - Supplemental Appendix

Nonparametric Identification of a Binary Random Factor in Cross Section Data - Supplemental Appendix Nonparametric Identification of a Binary Random Factor in Cross Section Data - Supplemental Appendix Yingying Dong and Arthur Lewbel California State University Fullerton and Boston College July 2010 Abstract

More information

σ(a) = a N (x; 0, 1 2 ) dx. σ(a) = Φ(a) =

σ(a) = a N (x; 0, 1 2 ) dx. σ(a) = Φ(a) = Until now we have always worked with likelihoods and prior distributions that were conjugate to each other, allowing the computation of the posterior distribution to be done in closed form. Unfortunately,

More information

Near-Potential Games: Geometry and Dynamics

Near-Potential Games: Geometry and Dynamics Near-Potential Games: Geometry and Dynamics Ozan Candogan, Asuman Ozdaglar and Pablo A. Parrilo January 29, 2012 Abstract Potential games are a special class of games for which many adaptive user dynamics

More information

University of Houston, Department of Mathematics Numerical Analysis, Fall 2005

University of Houston, Department of Mathematics Numerical Analysis, Fall 2005 3 Numerical Solution of Nonlinear Equations and Systems 3.1 Fixed point iteration Reamrk 3.1 Problem Given a function F : lr n lr n, compute x lr n such that ( ) F(x ) = 0. In this chapter, we consider

More information

Final: Solutions Math 118A, Fall 2013

Final: Solutions Math 118A, Fall 2013 Final: Solutions Math 118A, Fall 2013 1. [20 pts] For each of the following PDEs for u(x, y), give their order and say if they are nonlinear or linear. If they are linear, say if they are homogeneous or

More information

Optimal scheduling of mobile sensor networks for detection and localization of stationary contamination sources

Optimal scheduling of mobile sensor networks for detection and localization of stationary contamination sources Optimal scheduling of mobile sensor networks for detection and localization of stationary contamination sources Maciej Patan Institute of Control and Computation Engineering, University of Zielona Góra,

More information

Interior-Point Methods for Linear Optimization

Interior-Point Methods for Linear Optimization Interior-Point Methods for Linear Optimization Robert M. Freund and Jorge Vera March, 204 c 204 Robert M. Freund and Jorge Vera. All rights reserved. Linear Optimization with a Logarithmic Barrier Function

More information

Lecture 8: Information Theory and Statistics

Lecture 8: Information Theory and Statistics Lecture 8: Information Theory and Statistics Part II: Hypothesis Testing and I-Hsiang Wang Department of Electrical Engineering National Taiwan University ihwang@ntu.edu.tw December 23, 2015 1 / 50 I-Hsiang

More information

Approximate Optimal Designs for Multivariate Polynomial Regression

Approximate Optimal Designs for Multivariate Polynomial Regression Approximate Optimal Designs for Multivariate Polynomial Regression Fabrice Gamboa Collaboration with: Yohan de Castro, Didier Henrion, Roxana Hess, Jean-Bernard Lasserre Universität Potsdam 16th of February

More information

Uncertainty Quantification for Inverse Problems. November 7, 2011

Uncertainty Quantification for Inverse Problems. November 7, 2011 Uncertainty Quantification for Inverse Problems November 7, 2011 Outline UQ and inverse problems Review: least-squares Review: Gaussian Bayesian linear model Parametric reductions for IP Bias, variance

More information

Numerical Methods I Solving Nonlinear Equations

Numerical Methods I Solving Nonlinear Equations Numerical Methods I Solving Nonlinear Equations Aleksandar Donev Courant Institute, NYU 1 donev@courant.nyu.edu 1 MATH-GA 2011.003 / CSCI-GA 2945.003, Fall 2014 October 16th, 2014 A. Donev (Courant Institute)

More information

Near-Potential Games: Geometry and Dynamics

Near-Potential Games: Geometry and Dynamics Near-Potential Games: Geometry and Dynamics Ozan Candogan, Asuman Ozdaglar and Pablo A. Parrilo September 6, 2011 Abstract Potential games are a special class of games for which many adaptive user dynamics

More information

Optimal Design for the Rasch Poisson-Gamma Model

Optimal Design for the Rasch Poisson-Gamma Model Optimal Design for the Rasch Poisson-Gamma Model Ulrike Graßhoff, Heinz Holling and Rainer Schwabe Abstract The Rasch Poisson counts model is an important model for analyzing mental speed, an fundamental

More information

1 Differentiable manifolds and smooth maps

1 Differentiable manifolds and smooth maps 1 Differentiable manifolds and smooth maps Last updated: April 14, 2011. 1.1 Examples and definitions Roughly, manifolds are sets where one can introduce coordinates. An n-dimensional manifold is a set

More information

Image restoration: numerical optimisation

Image restoration: numerical optimisation Image restoration: numerical optimisation Short and partial presentation Jean-François Giovannelli Groupe Signal Image Laboratoire de l Intégration du Matériau au Système Univ. Bordeaux CNRS BINP / 6 Context

More information

Asymptotics for posterior hazards

Asymptotics for posterior hazards Asymptotics for posterior hazards Pierpaolo De Blasi University of Turin 10th August 2007, BNR Workshop, Isaac Newton Intitute, Cambridge, UK Joint work with Giovanni Peccati (Université Paris VI) and

More information

Covariance function estimation in Gaussian process regression

Covariance function estimation in Gaussian process regression Covariance function estimation in Gaussian process regression François Bachoc Department of Statistics and Operations Research, University of Vienna WU Research Seminar - May 2015 François Bachoc Gaussian

More information

Lecture 3 September 1

Lecture 3 September 1 STAT 383C: Statistical Modeling I Fall 2016 Lecture 3 September 1 Lecturer: Purnamrita Sarkar Scribe: Giorgio Paulon, Carlos Zanini Disclaimer: These scribe notes have been slightly proofread and may have

More information

1 The linear algebra of linear programs (March 15 and 22, 2015)

1 The linear algebra of linear programs (March 15 and 22, 2015) 1 The linear algebra of linear programs (March 15 and 22, 2015) Many optimization problems can be formulated as linear programs. The main features of a linear program are the following: Variables are real

More information

RECENTLY, wireless sensor networks have been the object

RECENTLY, wireless sensor networks have been the object IEEE TRANSACTIONS ON SIGNAL PROCESSING, VOL. 55, NO. 4, APRIL 2007 1511 Distributed Sequential Bayesian Estimation of a Diffusive Source in Wireless Sensor Networks Tong Zhao, Student Member, IEEE, and

More information

Tutorial: Statistical distance and Fisher information

Tutorial: Statistical distance and Fisher information Tutorial: Statistical distance and Fisher information Pieter Kok Department of Materials, Oxford University, Parks Road, Oxford OX1 3PH, UK Statistical distance We wish to construct a space of probability

More information

Algorithms for Nonsmooth Optimization

Algorithms for Nonsmooth Optimization Algorithms for Nonsmooth Optimization Frank E. Curtis, Lehigh University presented at Center for Optimization and Statistical Learning, Northwestern University 2 March 2018 Algorithms for Nonsmooth Optimization

More information

minimize x subject to (x 2)(x 4) u,

minimize x subject to (x 2)(x 4) u, Math 6366/6367: Optimization and Variational Methods Sample Preliminary Exam Questions 1. Suppose that f : [, L] R is a C 2 -function with f () on (, L) and that you have explicit formulae for

More information

Estimation theory. Parametric estimation. Properties of estimators. Minimum variance estimator. Cramer-Rao bound. Maximum likelihood estimators

Estimation theory. Parametric estimation. Properties of estimators. Minimum variance estimator. Cramer-Rao bound. Maximum likelihood estimators Estimation theory Parametric estimation Properties of estimators Minimum variance estimator Cramer-Rao bound Maximum likelihood estimators Confidence intervals Bayesian estimation 1 Random Variables Let

More information

Differential equations, comprehensive exam topics and sample questions

Differential equations, comprehensive exam topics and sample questions Differential equations, comprehensive exam topics and sample questions Topics covered ODE s: Chapters -5, 7, from Elementary Differential Equations by Edwards and Penney, 6th edition.. Exact solutions

More information

A Multigrid Method for Two Dimensional Maxwell Interface Problems

A Multigrid Method for Two Dimensional Maxwell Interface Problems A Multigrid Method for Two Dimensional Maxwell Interface Problems Susanne C. Brenner Department of Mathematics and Center for Computation & Technology Louisiana State University USA JSA 2013 Outline A

More information

Vibrating-string problem

Vibrating-string problem EE-2020, Spring 2009 p. 1/30 Vibrating-string problem Newton s equation of motion, m u tt = applied forces to the segment (x, x, + x), Net force due to the tension of the string, T Sinθ 2 T Sinθ 1 T[u

More information

Linear & nonlinear classifiers

Linear & nonlinear classifiers Linear & nonlinear classifiers Machine Learning Hamid Beigy Sharif University of Technology Fall 1396 Hamid Beigy (Sharif University of Technology) Linear & nonlinear classifiers Fall 1396 1 / 44 Table

More information

L09. PARTICLE FILTERING. NA568 Mobile Robotics: Methods & Algorithms

L09. PARTICLE FILTERING. NA568 Mobile Robotics: Methods & Algorithms L09. PARTICLE FILTERING NA568 Mobile Robotics: Methods & Algorithms Particle Filters Different approach to state estimation Instead of parametric description of state (and uncertainty), use a set of state

More information

Advanced Signal Processing Introduction to Estimation Theory

Advanced Signal Processing Introduction to Estimation Theory Advanced Signal Processing Introduction to Estimation Theory Danilo Mandic, room 813, ext: 46271 Department of Electrical and Electronic Engineering Imperial College London, UK d.mandic@imperial.ac.uk,

More information

LARGE DEVIATIONS OF TYPICAL LINEAR FUNCTIONALS ON A CONVEX BODY WITH UNCONDITIONAL BASIS. S. G. Bobkov and F. L. Nazarov. September 25, 2011

LARGE DEVIATIONS OF TYPICAL LINEAR FUNCTIONALS ON A CONVEX BODY WITH UNCONDITIONAL BASIS. S. G. Bobkov and F. L. Nazarov. September 25, 2011 LARGE DEVIATIONS OF TYPICAL LINEAR FUNCTIONALS ON A CONVEX BODY WITH UNCONDITIONAL BASIS S. G. Bobkov and F. L. Nazarov September 25, 20 Abstract We study large deviations of linear functionals on an isotropic

More information

Advanced Statistics I : Gaussian Linear Model (and beyond)

Advanced Statistics I : Gaussian Linear Model (and beyond) Advanced Statistics I : Gaussian Linear Model (and beyond) Aurélien Garivier CNRS / Telecom ParisTech Centrale Outline One and Two-Sample Statistics Linear Gaussian Model Model Reduction and model Selection

More information

Linear model A linear model assumes Y X N(µ(X),σ 2 I), And IE(Y X) = µ(x) = X β, 2/52

Linear model A linear model assumes Y X N(µ(X),σ 2 I), And IE(Y X) = µ(x) = X β, 2/52 Statistics for Applications Chapter 10: Generalized Linear Models (GLMs) 1/52 Linear model A linear model assumes Y X N(µ(X),σ 2 I), And IE(Y X) = µ(x) = X β, 2/52 Components of a linear model The two

More information

min f(x). (2.1) Objectives consisting of a smooth convex term plus a nonconvex regularization term;

min f(x). (2.1) Objectives consisting of a smooth convex term plus a nonconvex regularization term; Chapter 2 Gradient Methods The gradient method forms the foundation of all of the schemes studied in this book. We will provide several complementary perspectives on this algorithm that highlight the many

More information

Optimization Problems

Optimization Problems Optimization Problems The goal in an optimization problem is to find the point at which the minimum (or maximum) of a real, scalar function f occurs and, usually, to find the value of the function at that

More information

Linear & nonlinear classifiers

Linear & nonlinear classifiers Linear & nonlinear classifiers Machine Learning Hamid Beigy Sharif University of Technology Fall 1394 Hamid Beigy (Sharif University of Technology) Linear & nonlinear classifiers Fall 1394 1 / 34 Table

More information

Lecture 5: Linear models for classification. Logistic regression. Gradient Descent. Second-order methods.

Lecture 5: Linear models for classification. Logistic regression. Gradient Descent. Second-order methods. Lecture 5: Linear models for classification. Logistic regression. Gradient Descent. Second-order methods. Linear models for classification Logistic regression Gradient descent and second-order methods

More information

LECTURE # 0 BASIC NOTATIONS AND CONCEPTS IN THE THEORY OF PARTIAL DIFFERENTIAL EQUATIONS (PDES)

LECTURE # 0 BASIC NOTATIONS AND CONCEPTS IN THE THEORY OF PARTIAL DIFFERENTIAL EQUATIONS (PDES) LECTURE # 0 BASIC NOTATIONS AND CONCEPTS IN THE THEORY OF PARTIAL DIFFERENTIAL EQUATIONS (PDES) RAYTCHO LAZAROV 1 Notations and Basic Functional Spaces Scalar function in R d, d 1 will be denoted by u,

More information

Ph.D. Qualifying Exam Friday Saturday, January 3 4, 2014

Ph.D. Qualifying Exam Friday Saturday, January 3 4, 2014 Ph.D. Qualifying Exam Friday Saturday, January 3 4, 2014 Put your solution to each problem on a separate sheet of paper. Problem 1. (5166) Assume that two random samples {x i } and {y i } are independently

More information

Kriging models with Gaussian processes - covariance function estimation and impact of spatial sampling

Kriging models with Gaussian processes - covariance function estimation and impact of spatial sampling Kriging models with Gaussian processes - covariance function estimation and impact of spatial sampling François Bachoc former PhD advisor: Josselin Garnier former CEA advisor: Jean-Marc Martinez Department

More information

Statistics 3858 : Maximum Likelihood Estimators

Statistics 3858 : Maximum Likelihood Estimators Statistics 3858 : Maximum Likelihood Estimators 1 Method of Maximum Likelihood In this method we construct the so called likelihood function, that is L(θ) = L(θ; X 1, X 2,..., X n ) = f n (X 1, X 2,...,

More information

Gradient Descent. Sargur Srihari

Gradient Descent. Sargur Srihari Gradient Descent Sargur srihari@cedar.buffalo.edu 1 Topics Simple Gradient Descent/Ascent Difficulties with Simple Gradient Descent Line Search Brent s Method Conjugate Gradient Descent Weight vectors

More information

A Generalization of Principal Component Analysis to the Exponential Family

A Generalization of Principal Component Analysis to the Exponential Family A Generalization of Principal Component Analysis to the Exponential Family Michael Collins Sanjoy Dasgupta Robert E. Schapire AT&T Labs Research 8 Park Avenue, Florham Park, NJ 7932 mcollins, dasgupta,

More information

Active Robot Calibration Algorithm

Active Robot Calibration Algorithm Active Robot Calibration Algorithm Yu Sun and John M. Hollerbach Abstract This paper presents a new updating algorithm to reduce the complexity of computing an observability index for kinematic calibration

More information

A First-Order Framework for Solving Ellipsoidal Inclusion and. Inclusion and Optimal Design Problems. Selin Damla Ahipa³ao lu.

A First-Order Framework for Solving Ellipsoidal Inclusion and. Inclusion and Optimal Design Problems. Selin Damla Ahipa³ao lu. A First-Order Framework for Solving Ellipsoidal Inclusion and Optimal Design Problems Singapore University of Technology and Design November 23, 2012 1. Statistical Motivation Given m regression vectors

More information

Multi-Robotic Systems

Multi-Robotic Systems CHAPTER 9 Multi-Robotic Systems The topic of multi-robotic systems is quite popular now. It is believed that such systems can have the following benefits: Improved performance ( winning by numbers ) Distributed

More information

Linear Regression and Its Applications

Linear Regression and Its Applications Linear Regression and Its Applications Predrag Radivojac October 13, 2014 Given a data set D = {(x i, y i )} n the objective is to learn the relationship between features and the target. We usually start

More information

STATISTICS 385: STOCHASTIC CALCULUS HOMEWORK ASSIGNMENT 4 DUE NOVEMBER 23, = (2n 1)(2n 3) 3 1.

STATISTICS 385: STOCHASTIC CALCULUS HOMEWORK ASSIGNMENT 4 DUE NOVEMBER 23, = (2n 1)(2n 3) 3 1. STATISTICS 385: STOCHASTIC CALCULUS HOMEWORK ASSIGNMENT 4 DUE NOVEMBER 23, 26 Problem Normal Moments (A) Use the Itô formula and Brownian scaling to check that the even moments of the normal distribution

More information