Stochastic Target Interception in Non-convex Domain Using MILP
Apoorva Shende, Matthew J. Bays, and Daniel J. Stilwell
Virginia Polytechnic Institute and State University, Blacksburg, VA
{apoorva, mjb222,

Abstract — In this paper we present a planning approach for the stochastic target interception problem, in which a team of mobile sensor agents is tasked with intercepting multiple targets. We extend our previous work on stochastic target interception to non-convex domains and propose a cost that addresses the minimum-time requirement for probabilistically intercepting all the targets, when possible, over a finite horizon. Notably, our optimization problem for the stochastic case has computational cost comparable to that of the optimization program for the corresponding deterministic case. Our solution presumes that the system can be approximated by linear dynamics and Gaussian noise, with Gaussian localization uncertainty.

I. INTRODUCTION

We present an algorithm for multi-agent motion planning in the presence of Gaussian uncertainties. The objective of the desired multi-agent motion is to intercept a set of moving targets as quickly as possible while conforming to a non-convex operational domain. We address this problem in a mixed integer linear programming (MILP) framework over a finite planning horizon. MILP has been used effectively in planning problems where event detections in the forward planning need to be accounted for in computing the agent control actions; examples of such planning problems addressed using MILP can be found in [1], [2], [3], [4], [5], etc. In [4] and [5], MILP has been used for efficient target interception: [4] addresses the case of deterministic target interception for the area protection problem, and to the best of our knowledge, our previous work [5] is the first effort at integrating uncertainty in localization and dynamics into the target interception problem in the MILP framework.
The work described herein extends our previous work [5] on incorporating Gaussian uncertainties in target interception in two ways. First, while [5] requires the sensor operational domain to be convex, non-convex domains often arise in realistic applications; hence we address non-convex operational domains using chance constraints in MILP. Second, we present an alternative to the planning cost in [5], which proposed an additive cost over the planning horizon. The novel cost that we present here explicitly addresses the time required to intercept all the targets in the finite-planning-horizon case, where interception is evaluated through the chance constraints. The issue of agent maneuvering in a non-convex domain under Gaussian uncertainty has been addressed in [6], [7] and [8] for goal-reaching problems using disjunctive linear programming (DLP). As is evident in these works, DLP can be used efficiently to integrate Gaussian uncertainty into non-convex operational domains for agent maneuvering. For planning problems in which the efficiency of trajectories depends on the occurrence of certain discrete events, it is possible to use DLP by representing the discrete events in the forward planning through disjunctions. However, the number of disjunctions in the constraint set of the resulting DLP is exponential in the number of possible events along the forward planning trajectories, making the disjunctive notation lengthy and cumbersome. MILP provides a notationally compact alternative to DLP for formulating discrete-event trajectory planning problems, with the events represented by binary variables instead of disjunctions. The target interception problem is an example of such a trajectory planning problem, in which the events of intercepting targets largely determine the cost associated with a multi-agent trajectory.
Due to this notational compactness, we formulate our target interception problem using MILP rather than DLP, consistent with our work in [5]. In [1] and [2], non-convex domain constraints are enforced on agent motion using MILP; however, [1] and [2] do not incorporate uncertainty in the constraints. Hence we specifically address the conformance of the agents to the non-convex operational domain in the MILP framework under Gaussian uncertainty in localization and dynamics. Many recent works, e.g., [5], [6], [7], [8], [9], [10], have shown that incorporating Gaussian uncertainty through closed-form chance constraints leads to computationally efficient and accurate solutions. Sampling-based approaches for chance constraints are also available in the literature, e.g., [11], [12], [13]. These approaches accommodate a more general form of uncertainty than assumed herein, but are more computationally intensive. This paper is organized as follows. In Section II, we describe the linear Gaussian discrete-time dynamical model of the system required for the proposed algorithm. In Section III, we describe the two discrete events dealt with in this paper, namely, non-convex domain conformance by the sensor agents and target interception. In Section IV, we propose our MILP formulation of the target interception planning problem. Simulation results for a novel coordinated autonomous riverine rescue team are described in Section V, and the paper is concluded in Section VI.
II. SYSTEM DYNAMICS

In this section we state the principal assumptions concerning the system dynamics used in our problem, and we briefly outline the stochastic RHC problem. Most of this section is reproduced from our work [5], as we use the same system dynamics here; we reproduce it for completeness. We denote the number of un-intercepted targets at time $t$ by $N(t)$. As new targets appear, $N(t)$ increases, and it decreases as un-intercepted targets are intercepted. We denote the state of target $i$, for $i \in \{1,\dots,N(t)\}$, at time $t$ by $x_i(t) \in \mathbb{R}^n$ and the state of sensor $j$, for $j \in \{1,\dots,M\}$, at time $t$ by $s_j(t) \in \mathbb{R}^m$. We assume that the target and sensor dynamics are described by discrete-time, time-invariant, linear state space equations,

$$x_i(t+1) = A_i x_i(t) + B_i \nu_i(t), \qquad (1)$$
$$s_j(t+1) = \bar{A}_j s_j(t) + C_j u_j(t) + \bar{B}_j \bar{\nu}_j(t). \qquad (2)$$

In the target state equation (1), $\nu_i(t) \in \mathbb{R}^{n_\nu}$ is zero-mean Gaussian noise distributed as $\mathcal{N}(0, Q_i(t))$. The sensor control signal is $u_j(t) \in \mathbb{R}^{m_u}$, and $\bar{\nu}_j(t) \in \mathbb{R}^{m_\nu}$ denotes the zero-mean Gaussian noise vector distributed as $\mathcal{N}(0, \bar{Q}_j(t))$. The 2-norm of the control vector $u_j(t)$ is bounded by a constant $u_{\max}$, i.e.,

$$\|u_j(t)\| \le u_{\max}. \qquad (3)$$

If $m_u = 2$, then $u_j(t) \in \mathbb{R}^2$, and the control constraint (3) represents the interior of a circle. As (3) is a nonlinear constraint, we conservatively approximate it using linear constraints that represent a regular polygon inscribed in the circle. These constraints can be incorporated in the MILP formulation. The constraints corresponding to an $N_U$-sided inscribed polygon are given by

$$\langle u_j(t), v_r \rangle \le u_{\max}\cos\!\left(\frac{\pi}{N_U}\right) \quad \text{for all } r \in \{1,\dots,N_U\}, \qquad (4)$$

where $\langle \cdot,\cdot \rangle$ denotes the inner product in $\mathbb{R}^2$ and $v_r = [\sin\theta_r,\ \cos\theta_r]^T$ for $\theta_r = \frac{2\pi r}{N_U}$.
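The inscribed-polygon relaxation (3)-(4) can be checked numerically. The following minimal sketch (plain Python; the values $N_U = 8$ and $u_{\max} = 1$ are illustrative, not from the paper) confirms that any control satisfying the polygon constraints also satisfies the original norm bound, while some norm-feasible controls are conservatively rejected:

```python
import math

def polygon_halfplanes(n_u, u_max):
    """Normals v_r = [sin(theta_r), cos(theta_r)] and right-hand side
    u_max*cos(pi/n_u) of the linear constraints (4) approximating ||u|| <= u_max."""
    rhs = u_max * math.cos(math.pi / n_u)
    normals = [(math.sin(2 * math.pi * r / n_u), math.cos(2 * math.pi * r / n_u))
               for r in range(1, n_u + 1)]
    return normals, rhs

def in_polygon(u, normals, rhs):
    """True if u satisfies every half-plane constraint <u, v_r> <= rhs."""
    return all(u[0] * vx + u[1] * vy <= rhs + 1e-12 for vx, vy in normals)

normals, rhs = polygon_halfplanes(8, 1.0)
# Polygon-feasible controls always satisfy the true norm bound (conservative):
assert in_polygon((0.5, 0.5), normals, rhs) and math.hypot(0.5, 0.5) <= 1.0
# ...but some norm-feasible controls are rejected; that is the price of linearity:
assert math.hypot(0.99, 0.0) <= 1.0 and not in_polygon((0.99, 0.0), normals, rhs)
```

Because the polygon's vertices lie on the circle of radius $u_{\max}$, its interior is a subset of the disc, so the linear constraints never admit a control that violates (3).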
An important aspect of our modeling is that the noise terms $\nu_i(t)$ and $\bar\nu_j(t)$ are distributed independently of each other and of the target and sensor states $x_i(t)$ and $s_j(t)$ for all $i \in \{1,\dots,N(t)\}$ and $j \in \{1,\dots,M\}$. As a consequence, for a given set of control actions the target and sensor trajectories are independent. We assume the target positions $\chi_i(t) \in \mathbb{R}^2$ and the sensor positions $\varsigma_j(t) \in \mathbb{R}^2$ are linear functions of the states $x_i(t)$ and $s_j(t)$, respectively:

$$\chi_i(t) = L x_i(t), \qquad (5)$$
$$\varsigma_j(t) = \Lambda s_j(t), \qquad (6)$$

where $L$ and $\Lambda$ are matrix operators. We require the posterior PDFs of the target and sensor states to be Gaussian or well approximated by a Gaussian. If the observation equations are linear, then, for linear dynamics, a Kalman filter can be used to compute the mean and covariance of the state distributions. If the state dynamics or observation equations are nonlinear, we presume that an adequate approximation of the state distribution can be computed using an extended Kalman filter, unscented filter, etc. Forward predictions are computed for the interval $[t, t+T]$. Since the system is time-invariant, and to simplify notation, we denote time in the planning interval by the variable $\tau \in [0, T]$. A fixed number $N_{RH} = N(t)$ of targets is considered during the planning interval. The initial target and sensor state PDFs at $\tau = 0$ of the planning interval correspond to the posterior PDFs at time $t$ in the real or simulated environment. At a time instant $\tau$ in the planning interval, we denote the target and sensor state means by $\hat{x}_i(\tau)$ and $\hat{s}_j(\tau)$, and the covariances by $P_i(\tau)$ and $\Pi_j(\tau)$, respectively.
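The open-loop propagation of these means and covariances can be sketched as code. The snippet below (plain Python; the $2 \times 2$ single-integrator-like matrices and noise covariance are hypothetical illustration values) advances a state mean and covariance with no measurement updates, mirroring what the planner does over the planning interval:

```python
def mat_mul(A, B):
    """Product of two matrices given as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def mat_add(A, B):
    return [[A[i][j] + B[i][j] for j in range(len(A[0]))] for i in range(len(A))]

def transpose(A):
    return [list(row) for row in zip(*A)]

def predict(A, B, Q, xhat, P):
    """One open-loop prediction step: mean -> A*xhat (cf. (7)),
    covariance -> A P A^T + B Q B^T (cf. (9)); no measurement update."""
    xh = [sum(A[i][j] * xhat[j] for j in range(len(xhat))) for i in range(len(A))]
    Pn = mat_add(mat_mul(mat_mul(A, P), transpose(A)),
                 mat_mul(mat_mul(B, Q), transpose(B)))
    return xh, Pn

# Hypothetical model with A = B = I2, so covariance grows by Q each step.
A = [[1.0, 0.0], [0.0, 1.0]]
B = [[1.0, 0.0], [0.0, 1.0]]
Q = [[0.02, 0.0], [0.0, 0.02]]
xhat, P = [0.0, 0.0], [[0.0, 0.0], [0.0, 0.0]]
for _ in range(3):
    xhat, P = predict(A, B, Q, xhat, P)
```

For this identity model the covariance after three steps is simply $3Q$; the growth of $P$ with $\tau$ is what tightens the chance constraints introduced in Section III.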
These are updated by the following open-loop predictor equations, as in [6]:

$$\hat{x}_i(\tau+1) = A_i \hat{x}_i(\tau), \qquad (7)$$
$$\hat{s}_j(\tau+1) = \bar{A}_j \hat{s}_j(\tau) + C_j u_j(\tau), \qquad (8)$$
$$P_i(\tau+1) = A_i P_i(\tau) A_i^T + B_i Q_i(\tau) B_i^T, \qquad (9)$$
$$\Pi_j(\tau+1) = \bar{A}_j \Pi_j(\tau) \bar{A}_j^T + \bar{B}_j \bar{Q}_j(\tau) \bar{B}_j^T. \qquad (10)$$

As the target and sensor positions at all times in the planning interval depend linearly on the states through (5) and (6), they also have Gaussian distributions. We denote the target and sensor position means by $\hat\chi_i(\tau)$ and $\hat\varsigma_j(\tau)$, and the position covariances by $\tilde P_i(\tau)$ and $\tilde\Pi_j(\tau)$, respectively, at time $\tau$ in the planning interval. From (5) and (6), these are related to the state estimates and covariances as follows:

$$\hat\chi_i(\tau) = L \hat x_i(\tau), \qquad (11)$$
$$\hat\varsigma_j(\tau) = \Lambda \hat s_j(\tau), \qquad (12)$$
$$\tilde P_i(\tau) = L P_i(\tau) L^T, \qquad (13)$$
$$\tilde\Pi_j(\tau) = \Lambda \Pi_j(\tau) \Lambda^T. \qquad (14)$$

III. DISCRETE EVENTS AND BINARY VARIABLES

A. Non-convex Sensor Domain

In most practical applications, the positions of the mobile sensor agents, $\varsigma_j(\tau)$, must be constrained within a domain $\Omega \subset \mathbb{R}^2$. The domain $\Omega$, which we refer to as the operational domain, is typically non-convex. We consider those non-convex domains $\Omega \subset \mathbb{R}^2$ that are connected and can be represented or conservatively approximated as a union of convex polygons $\Omega_k$,

$$\Omega \approx \bigcup_{k=1}^{N_D} \Omega_k, \qquad (15)$$

where $N_D$ is the number of convex domains $\Omega_k$. An approximation of a part of a riverine domain, where such autonomous sensor agents could operate, as a union of disjoint convex polygons is shown in Figure 1; the figure shows a part of Peek Creek, located in Pulaski County, VA. While it is possible to use DLP-based notation [6] to represent the non-convex domain constraint, due to its notational compactness for the target interception problem (as was noted in the introduction),
we choose to use MILP-based notation to model the non-convex domains.

Fig. 1. Riverine domain approximated by a union of disjoint convex polygons.

As the sets $\Omega_k$ for $k \in \{1,\dots,N_D\}$ are convex polygons, we can define each of them by a conjunction of $L_k$ linear constraints,

$$\langle \varsigma, u_{kl} \rangle \le Q_{kl} \quad \text{for all } l \in \{1,\dots,L_k\}, \qquad (16)$$

where $\varsigma \in \mathbb{R}^2$ is a variable point in $\Omega_k$, and $u_{kl} \in \mathbb{R}^2$ and $Q_{kl}$ are constants corresponding to the $l$-th constraint of $\Omega_k$. From (15), the constraint on sensor position $j$ can be written as a union of events,

$$\varsigma_j(\tau) \in \Omega \;\Leftrightarrow\; \bigcup_{k=1}^{N_D} \big(\varsigma_j(\tau) \in \Omega_k\big). \qquad (17)$$

As the sensor positions $\varsigma_j(\tau)$ are random vectors, we cannot assert with certainty that the event $\varsigma_j(\tau) \in \Omega$ will occur at every time instant in the planning horizon. Thus we define probabilistic occurrence of the event $\varsigma_j(\tau) \in \Omega$ by specifying a lower bound $\beta_\Omega$ on its probability,

$$P(\varsigma_j(\tau) \in \Omega) \ge \beta_\Omega. \qquad (18)$$

Due to (17) and the properties of a union of events, we have

$$P(\varsigma_j(\tau) \in \Omega) \ge P(\varsigma_j(\tau) \in \Omega_k) \quad \text{for all } k = 1,\dots,N_D. \qquad (19)$$

Thus, by imposing the chance constraint

$$\exists\, k \in \{1,\dots,N_D\} \ \text{s.t.}\ P(\varsigma_j(\tau) \in \Omega_k) \ge \beta_\Omega, \qquad (20)$$

the chance constraint (18) is guaranteed to be satisfied. From (16), $\varsigma_j(\tau) \in \Omega_k$ is a conjunction of the events $\langle \varsigma_j(\tau), u_{kl} \rangle \le Q_{kl}$ for all $l \in \{1,\dots,L_k\}$. Hence, using Boole's inequality, we can achieve the lower bound $\beta_\Omega$ in (20) by imposing a lower bound on the probability of satisfaction of each of these events,

$$P\big(\langle \varsigma_j(\tau), u_{kl} \rangle \le Q_{kl}\big) \ge \beta_k \quad \text{for all } l \in \{1,\dots,L_k\}, \qquad (21)$$

where

$$\beta_k = 1 - \frac{1-\beta_\Omega}{L_k}. \qquad (22)$$

Due to the Gaussianity of the sensor position, we can adopt an approach similar to that of Subsection III-B to obtain the following constraint on the sensor position mean,

$$\langle \hat\varsigma_j(\tau), u_{kl} \rangle \le \hat Q_{jkl}(\tau) \quad \text{for all } l \in \{1,\dots,L_k\}, \qquad (23)$$

where

$$\hat Q_{jkl}(\tau) = Q_{kl} - \Phi^{-1}(\beta_k)\sqrt{u_{kl}^T \tilde\Pi_j(\tau) u_{kl}}. \qquad (24)$$

We use binary indicator variables $d_{jk}(\tau)$ to denote that the event $\varsigma_j(\tau) \in \Omega_k$ occurs probabilistically, i.e., that (21), and hence (23), is satisfied. Thus,

$$d_{jk}(\tau) = 1 \;\Rightarrow\; \langle \hat\varsigma_j(\tau), u_{kl} \rangle \le \hat Q_{jkl}(\tau) \quad \text{for all } l \in \{1,\dots,L_k\}. \qquad (25)$$
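The risk allocation (22) and bound tightening (24) reduce to a few lines of arithmetic. The sketch below (plain Python with illustrative numbers; `Q_kl`, the unit normal, and the position covariance are hypothetical) computes $\beta_k$ and the tightened bound $\hat Q$ via the standard normal inverse CDF:

```python
import math
from statistics import NormalDist

def tightened_bound(Q_kl, u_kl, Pi, beta_omega, L_k):
    """Deterministic surrogate (24) for the chance constraint (21):
    beta_k = 1 - (1 - beta_omega)/L_k   (eq. (22), via Boole's inequality),
    Qhat   = Q_kl - Phi^{-1}(beta_k) * sqrt(u^T Pi u)."""
    beta_k = 1.0 - (1.0 - beta_omega) / L_k
    sigma = math.sqrt(u_kl[0] * (Pi[0][0] * u_kl[0] + Pi[0][1] * u_kl[1])
                      + u_kl[1] * (Pi[1][0] * u_kl[0] + Pi[1][1] * u_kl[1]))
    return Q_kl - NormalDist().inv_cdf(beta_k) * sigma

# Hypothetical half-plane <x, u> <= 10, unit normal, isotropic covariance 0.25*I:
Qhat = tightened_bound(10.0, (1.0, 0.0), [[0.25, 0.0], [0.0, 0.25]],
                       beta_omega=0.9, L_k=4)
# beta_k = 0.975, so the bound is pulled in by Phi^{-1}(0.975)*0.5, roughly 0.98.
assert Qhat < 10.0
```

Pulling the half-plane inward by a multiple of the projected standard deviation is what converts the probabilistic constraint into the linear, deterministic one used in the MILP.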
The logical relation (25) can be represented algebraically using the big-M notation [14] as follows:

$$\langle \hat\varsigma_j(\tau), u_{kl} \rangle \le \hat Q_{jkl}(\tau) + M_{big}\,(1 - d_{jk}(\tau)) \quad \text{for all } l \in \{1,\dots,L_k\}, \qquad (26)$$

where $M_{big}$ is a large positive number, greater than the largest possible value of the left-hand side of (26). As sensor $j$ has to be in at least one of the convex subdomains $\Omega_k$, we additionally require that

$$\sum_{k=1}^{N_D} d_{jk}(\tau) \ge 1. \qquad (27)$$

Thus, using constraints (26) and (27), we can guarantee that the chance constraint (18), which corresponds to the non-convex agent position constraint $\varsigma_j(\tau) \in \Omega$, is satisfied.

B. Target Interception

In this subsection we propose an approach to evaluate target interception in the planning interval. The constraints that define the target-sensor interception proximity are reposed as chance constraints; this permits their deterministic evaluation in the MILP framework. We require that a target be intercepted by a sensor only once to be considered intercepted for the rest of the planning interval. Most of the derivations in this subsection are reproduced from our work [5]; we present them here for completeness. We define target $i$ to be intercepted by sensor $j$ at time $\tau$ if it is within a distance $R$ of sensor $j$ at $\tau$. Thus, for this interception to occur we require

$$\|\chi_i(\tau) - \varsigma_j(\tau)\| \le R, \qquad (28)$$

where $\|\cdot\|$ is the Euclidean norm in $\mathbb{R}^2$. However, in order to formulate a MILP we require linear constraints. Thus we conservatively approximate the interception constraint (28), which represents the interior of a circle in $\mathbb{R}^2$, with the interior of an $N_I$-sided inscribed regular polygon through the following set of linear inequalities:

$$\langle \chi_i(\tau) - \varsigma_j(\tau), v_q \rangle \le R\cos\!\left(\frac{\pi}{N_I}\right) \quad \text{for all } q \in \{1,\dots,N_I\}, \qquad (29)$$
where $v_q = [\sin\theta_q,\ \cos\theta_q]^T$ for $\theta_q = \frac{2\pi q}{N_I}$. We denote by $I_{ij}(\tau)$ the discrete event of target $i$ being intercepted by sensor $j$ at time $\tau$. Occurrence of $I_{ij}(\tau)$ thus implies joint satisfaction of the linear constraints in (29). Since $\chi_i(\tau)$ and $\varsigma_j(\tau)$ are random vectors, satisfaction of the constraints (29) cannot be guaranteed deterministically. We therefore reformulate the constraints in probabilistic form and require a lower bound $\alpha_I$ such that

$$P(I_{ij}(\tau)) \ge \alpha_I \qquad (30)$$

implies that target $i$ has been probabilistically intercepted by sensor $j$ at time $\tau$. It should be noted that the interception event $I_{ij}(\tau)$ is a conjunction of the events corresponding to satisfaction of the linear constraints in (29). Thus, using Boole's inequality, we can achieve the lower bound $\alpha_I$ in (30) by imposing a lower bound $\alpha$ on the probability of satisfaction of each individual constraint in (29),

$$P\!\left(\langle \chi_i(\tau) - \varsigma_j(\tau), v_q \rangle \le R\cos\!\left(\tfrac{\pi}{N_I}\right)\right) \ge \alpha \quad \text{for all } q \in \{1,\dots,N_I\}, \qquad (31)$$

such that $\alpha_I = N_I(\alpha - 1) + 1$, which can be rewritten as

$$\alpha = 1 - \frac{1-\alpha_I}{N_I}. \qquad (32)$$

Thus (31) is the probabilistic reformulation of the target interception constraints (29), wherein we use (32) to obtain the $\alpha$ that yields the prescribed lower bound $\alpha_I$ in (30); satisfaction of (31) implies probabilistic interception (30). Since $\chi_i(\tau)$ and $\varsigma_j(\tau)$ are independent and Gaussian, $\langle \chi_i(\tau) - \varsigma_j(\tau), v_q \rangle$ is a Gaussian random variable with distribution $\mathcal{N}(\mu^q_{ij}(\tau), \sigma^q_{ij}(\tau))$, where

$$\mu^q_{ij}(\tau) = \langle \hat\chi_i(\tau) - \hat\varsigma_j(\tau), v_q \rangle, \qquad (33)$$
$$\sigma^q_{ij}(\tau) = \sqrt{v_q^T\big(\tilde\Pi_j(\tau) + \tilde P_i(\tau)\big)v_q}. \qquad (34)$$

The constraints (31) can thus be rewritten as

$$\Phi\!\left(\frac{R\cos(\pi/N_I) - \mu^q_{ij}(\tau)}{\sigma^q_{ij}(\tau)}\right) \ge \alpha \quad \text{for all } q \in \{1,\dots,N_I\}. \qquad (35)$$

Applying $\Phi^{-1}(\cdot)$ to both sides of (35) and rearranging, we get

$$\langle \hat\chi_i(\tau) - \hat\varsigma_j(\tau), v_q \rangle \le \hat R^q_{ij}(\tau)\cos\!\left(\frac{\pi}{N_I}\right) \quad \text{for all } q \in \{1,\dots,N_I\}, \qquad (36)$$

where

$$\hat R^q_{ij}(\tau) = R - \frac{\Phi^{-1}(\alpha)\,\sigma^q_{ij}(\tau)}{\cos(\pi/N_I)}. \qquad (37)$$

Thus joint satisfaction of the constraints (36) is equivalent to joint satisfaction of the constraints (31), implying probabilistic interception (30). Again due to Gaussianity, the structure of the probabilistically reformulated interception constraints (36) with respect to $\hat\varsigma_j(\tau)$ is the same as that of the original interception constraints (29) with respect to $\varsigma_j(\tau)$. As all quantities in (36) are deterministic, its satisfaction can be evaluated in the MILP framework, which is not the case with (29). We use binary variables $b_{ij}(\tau) \in \{0,1\}$, for $i \in \{1,\dots,N_{RH}\}$ and $j \in \{1,\dots,M\}$, to indicate probabilistic interception (30): if $b_{ij}(\tau) = 1$, then target $i$ will be probabilistically intercepted by mobile sensor $j$ at time $\tau$ (i.e., (30) will be satisfied). As satisfaction of (36) implies probabilistic interception (30), we require $b_{ij}(\tau) = 1$ to imply satisfaction of (36). Thus, for $i \in \{1,\dots,N_{RH}\}$ and $j \in \{1,\dots,M\}$, we have

$$b_{ij}(\tau) = 1 \;\Rightarrow\; \langle \hat\chi_i(\tau) - \hat\varsigma_j(\tau), v_q \rangle \le \hat R^q_{ij}(\tau)\cos\!\left(\frac{\pi}{N_I}\right) \quad \text{for all } q \in \{1,\dots,N_I\}. \qquad (38)$$

The logical relation (38) can be equivalently represented in terms of the following linear inequalities [14], for $i \in \{1,\dots,N_{RH}\}$ and $j \in \{1,\dots,M\}$:

$$\langle \hat\chi_i(\tau) - \hat\varsigma_j(\tau), v_q \rangle \le \hat R^q_{ij}(\tau)\cos\!\left(\frac{\pi}{N_I}\right) + M_{big}\,(1 - b_{ij}(\tau)) \quad \text{for all } q \in \{1,\dots,N_I\}, \qquad (39)$$

where the big-M constant $M_{big}$ should be greater than the largest possible value of the left-hand side of (39). We use the variables $\delta_i(\tau) \in \{0,1\}$ as indicator variables for target $i \in \{1,\dots,N_{RH}\}$ having been probabilistically intercepted at some time $\tau' \le \tau$:

$$\delta_i(\tau) = \begin{cases} 1 & \text{if target } i \text{ is probabilistically intercepted at some } \tau' \le \tau, \\ 0 & \text{if target } i \text{ is not probabilistically intercepted for any } \tau' \le \tau. \end{cases} \qquad (40)$$
From (38) and (40) we get

$$\sum_{\tau'=0}^{\tau}\sum_{j=1}^{M} b_{ij}(\tau') \ge 1 \;\Leftrightarrow\; \delta_i(\tau) = 1. \qquad (41)$$

As illustrated in [14], the logical relation (41) can be represented in terms of the linear inequalities

$$1 - \sum_{\tau'=0}^{\tau}\sum_{j=1}^{M} b_{ij}(\tau') \le 1 - \delta_i(\tau),$$
$$1 - \sum_{\tau'=0}^{\tau}\sum_{j=1}^{M} b_{ij}(\tau') \ge \epsilon + \big(1 - (\tau+1)M - \epsilon\big)\,\delta_i(\tau), \qquad (42)$$

where $\epsilon > 0$ is a small positive number, typically machine precision in practical implementations.

IV. MILP PROBLEM

In this section we formulate a MILP problem for target interception that incorporates the non-convex agent domain constraints (26) and (27), formulated in Subsection III-A, and the target interception constraints (39) and (42), formulated
in Subsection III-B. In addition, we have as constraints (8), the linear dynamic equation for the propagation of the sensor state mean, and (12), which linearly relates the sensor state mean to the sensor position mean. The stochastic optimization problem consists of the sensor control action variables $u_j(\tau) \in \mathbb{R}^{m_u}$, variables that depend on the sensor control actions, constraints that relate the variables, and the cost function. Thus, in addition to $u_j(\tau)$, the sensor state means $\hat s_j(\tau)$, the sensor position means $\hat\varsigma_j(\tau)$, the interception indicators $b_{ij}(\tau)$ and $\delta_i(\tau)$, and the sensor domain indicators $d_{jk}(\tau)$ are also variables of the optimization problem, which has the sensor motion, domain, and target interception constraints. The initial state means, however, are given parameters,

$$\hat s_j(0) = \hat s^0_j \quad \text{for } j \in \{1,\dots,M\}. \qquad (43)$$

The target state and position means, $\hat x_i(\tau)$ and $\hat\chi_i(\tau)$, and the sensor and target covariances evolve independently of the control variables and hence are not optimization variables. They are obtained using (7), (11), (9), (10), (13) and (14), and are supplied to the optimization problem as given parameters. A key aspect of our formulation is the linearity of the constraints and the cost, leading to a MILP-based framework. In order to efficiently intercept the targets, an additive cost over all targets and all planning time instants,

$$J = \sum_{i=1}^{N_{RH}} \sum_{\tau=0}^{T-1} E[d_i(x_i(\tau))]\,(1-\delta_i(\tau)) + \sum_{i=1}^{N_{RH}} E[D_i(x_i(T))]\,(1-\delta_i(T)), \qquad (44)$$

as proposed in [5], is a reasonable option, where $E[d_i(x_i(\tau))]$ and $E[D_i(x_i(T))]$ are the planning and terminal costs for target $i$. However, in this section we formulate another cost, one that cannot be represented in the form (44), and that addresses the issue of intercepting all the targets in minimum time for finite horizon planning.
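The interception-indicator linearization (42) listed among these constraints can be sanity-checked by brute force. The sketch below (assuming a small hypothetical instance with two sensors and two time steps) enumerates all binary assignments and confirms that the two inequalities admit exactly the $\delta$ value prescribed by the logical relation (41):

```python
from itertools import product

def delta_feasible(bs, delta, M, eps=1e-6):
    """Check the two linear inequalities (42) for given b_ij values
    (flattened over tau' = 0..tau and j = 1..M) and delta in {0, 1}."""
    f = sum(bs)  # total number of interception events up to tau
    c1 = 1 - f <= 1 - delta                      # delta = 1 forces f >= 1
    c2 = 1 - f >= eps + (1 - len(bs) - eps) * delta  # delta = 0 forces f < 1
    return c1 and c2

# Exhaustively confirm that (42) encodes: delta = 1  <=>  sum(b) >= 1.
M, tau = 2, 1  # two sensors, two time steps -> four binaries
for bs in product((0, 1), repeat=(tau + 1) * M):
    feasible = [d for d in (0, 1) if delta_feasible(bs, d, M)]
    assert feasible == ([1] if sum(bs) >= 1 else [0])
```

Because the $b_{ij}(\tau')$ are binary, the sum is integer-valued, so the small $\epsilon$ margin cleanly separates the "no interception" case from all others.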
In order to formulate such a cost, first note from the definition of $\delta_i(\tau)$ in (40) that the time to probabilistically intercept a target $i$ is given by $\tau_i = \sum_{\tau=0}^{T}(1-\delta_i(\tau))$, provided the target is probabilistically intercepted at or before the terminal planning instant $T$. Thus, to define a cost that minimizes the time required to intercept all the targets, we define the variable $T_{max}$ as an upper bound on all the target interception times,

$$\sum_{\tau=0}^{T}(1-\delta_i(\tau)) \le T_{max} \quad \text{for all } i \in \{1,\dots,N_{RH}\}. \qquad (45)$$

We define the planning cost of the receding horizon problem as

$$J_{plan} = T_{max}. \qquad (46)$$

Due to the finiteness of the planning horizon $T$, it is possible that some of the targets are not probabilistically interceptable at or before $T$ in a planning iteration. Hence a cost $J = J_{plan} = T_{max}$ for the receding horizon problem would be uninformative, as it would always equal $T+1$ whenever no feasible trajectories exist that can intercept all the targets within $T$. In such a case we would like the planning iteration to result in as many target interceptions as possible. Hence we add a terminal cost,

$$J_{term} = \sum_{i=1}^{N_{RH}}(1-\delta_i(T)), \qquad (47)$$

which is the number of un-intercepted targets at $T$. Thus the receding horizon cost that minimizes the time to probabilistically intercept all the targets, if there exist feasible trajectories that can do so, or otherwise minimizes the number of un-intercepted targets at the end of the planning horizon, is given by

$$J = J_{plan} + J_{term} = T_{max} + \sum_{i=1}^{N_{RH}}(1-\delta_i(T)). \qquad (48)$$

This follows by observing that if there exist sensor trajectories that can intercept all the targets within the finite horizon $T$, then minimizing the cost $J$ in (48) requires that all the targets be intercepted. Note that if at least one target is not intercepted, then minimizing $J$ in (48) gives $T_{max} = T+1$ (from (45)) and $\sum_{i=1}^{N_{RH}}(1-\delta_i(T)) \ge 1$. Thus $J \ge T+2$ in (48) for the scenario in which at least one target is not intercepted.
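This cost structure can be checked on small hand-built indicator trajectories (a sketch; the $\delta$ sequences below are hypothetical, and $T_{max}$ is evaluated at its tight value from (45) rather than as a solver variable):

```python
def cost_J(deltas, T):
    """Receding-horizon cost (48): J = T_max + (# targets un-intercepted at T),
    with T_max taken tight in (45) as max over targets of sum_tau (1 - delta);
    deltas[i][tau] is the interception indicator of target i at time tau."""
    t_max = max(sum(1 - d[tau] for tau in range(T + 1)) for d in deltas)
    j_term = sum(1 - d[T] for d in deltas)
    return t_max + j_term

T = 3
all_hit = [[0, 1, 1, 1], [0, 0, 1, 1]]     # both targets intercepted by T
one_missed = [[0, 1, 1, 1], [0, 0, 0, 0]]  # second target never intercepted
assert cost_J(all_hit, T) <= T       # cost equals the last interception time
assert cost_J(one_missed, T) >= T + 2  # the J >= T + 2 bound derived above
```

Any fully intercepting plan thus costs at most $T$, strictly below the $T+2$ floor of any plan that misses a target, which is why a single scalar objective suffices for both goals.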
In comparison, if all the targets are intercepted within $T$, then as a result of minimizing $J$ in (48), $T_{max} = \max_i\big(\sum_{\tau=0}^{T}(1-\delta_i(\tau))\big) \le T$ (from (45)) and $\sum_{i=1}^{N_{RH}}(1-\delta_i(T)) = 0$, resulting in $J = \max_i\big(\sum_{\tau=0}^{T}(1-\delta_i(\tau))\big) \le T$ in (48). Thus interception of all the targets, whenever possible, results in a consistently lower cost $J$ in (48) than any scenario in which at least one target is not intercepted. For the case in which all the targets are intercepted within the finite planning horizon, the time required to intercept all the targets is given by $\max_i\big(\sum_{\tau=0}^{T}(1-\delta_i(\tau))\big)$, which is also the cost $J$ for this case. Thus minimizing $J$ in (48) minimizes the time to intercept all the targets whenever this is possible within the finite horizon $T$. The corresponding planning optimization problem is the MILP

Minimize: $J = J_{plan} + J_{term} = T_{max} + \sum_{i=1}^{N_{RH}}(1-\delta_i(T))$
w.r.t.: $T_{max}$, $u_j(\tau)$, $\hat s_j(\tau)$, $\delta_i(\tau)$, $b_{ij}(\tau)$ and $d_{jk}(\tau)$, for all $\tau \in \{0,\dots,T\}$, $j \in \{1,\dots,M\}$, $k \in \{1,\dots,N_D\}$ and $i \in \{1,\dots,N_{RH}\}$
Subject to:
  Sensor motion constraints: (3), (8) and (12)
  Sensor domain constraints: (26) and (27)
  Target interception constraints: (39) and (42)
  Interception time upper bound constraints: (45)
(49)

We use the same argument as in [5] to justify that the MILP (49) has the same computational cost as the corresponding deterministic problem. If the initial covariances satisfy
$P_i(0) = \Pi_j(0) = 0$ and the state noise covariances satisfy $Q_i(\tau) = \bar Q_j(\tau) = 0$, then from (9) and (10) the predicted covariances for $\tau \in [0,T]$ in the planning interval are zero, i.e., $P_i(\tau) = \Pi_j(\tau) = 0$. Thus, for this case the MILP (49) is equivalent to the MILP for the deterministic case. This implies that the number and form of the optimization variables and constraints are the same in the MILP for the Gaussian case (49) and in the corresponding deterministic case. As a result, solving (49) has a computational cost similar to that of the deterministic MILP. The deterministic MILP can, however, be formulated directly in terms of the sensor state without requiring chance constraints, so there is a slight computational overhead in reformulating the constraints to incorporate the probabilistic information in the stochastic Gaussian case.

V. SIMULATION RESULTS

In this section we present simulation results for the proposed MILP-based stochastic RHC. Our application is a target interception problem that would arise in a potential novel autonomous multi-agent system for riverine rescue; the objective of such a system is to intercept objects drifting in the riverine flow. We use MATLAB for the simulations and solve the MILP with the optimization software GUROBI through a MATLAB MEX interface, on a Windows machine with a 2.53 GHz Intel Core 2 Duo processor and 4 GB of memory. We assume a double integrator model for the target motion and a single integrator model for the sensor motion. The target state $x_i \in \mathbb{R}^4$ has two position components (x and y) and two velocity components (x and y). The sensor state $s_j \in \mathbb{R}^2$ has two position components (x and y). The sensor velocity $u_j \in \mathbb{R}^2$ is the control variable.
Thus we have

$$A_i = \begin{bmatrix} 1 & 0 & \Delta T & 0 \\ 0 & 1 & 0 & \Delta T \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}, \quad B_i = I_4, \qquad (50)$$
$$\bar A_j = I_2, \quad \bar B_j = I_2, \quad C_j = \Delta T\, I_2, \qquad (51)$$

where $\Delta T$ is the time between two consecutive discrete time instants; we use $\Delta T = 5\,\mathrm{s}$ in our simulation. We use the following covariances for the zero-mean Gaussian process noise of the targets and sensors:

$$Q_i = E\big[\nu_i(t)\nu_i(t)^T\big] = \mathrm{diag}[\;] \times 10^{-3}, \quad \bar Q_j = E\big[\bar\nu_j(t)\bar\nu_j(t)^T\big] = \mathrm{diag}[2\;\;2] \times 10^{-2}. \qquad (52)$$

We also use $Q_i$ and $\bar Q_j$ as the covariances of the Gaussian initial states. We set a lower bound $\beta_\Omega$ on the probability of joint satisfaction of the sensor domain constraints, which results in the corresponding lower bound $\beta$ on the satisfaction of each individual linear domain constraint. We use $R = 12$ as the target-sensor interception distance and an $N_I = 10$-sided regular polygon to approximate the target-sensor interception proximity. For a target to be probabilistically intercepted by a sensor, we use a lower bound $\alpha_I = 0.9$ on the joint satisfaction of the linear constraints (the sides of the regular polygon) defining interception proximity. This translates, via (32), to a lower bound $\alpha = 0.99$ on the satisfaction of each individual linear constraint. The upper and lower bounds on the sensor control (velocity) vector are set to $(u_j)_{ub} = -(u_j)_{lb} = [3\;\;3]^T\,\mathrm{m/s}$. The velocities of the targets were randomly generated between 1 and 2.2 m/s for the simulation. Figure 2 shows a simulation consisting of 2 riverine rescue sensor agents, 4 targets, and a 12-step time horizon for the riverine rescue problem. The circles represent the sensor interception region at each time instant, corresponding to the sensor position indicated by the dot of the same color. Red lines indicate target paths, and initial target locations are highlighted by red circles. Target motion stops when a target comes within the interception radius $R$ of a rescue agent, signifying interception. As can be seen in the figure, the 2 sensors intercept all 4 targets.

Fig. 2. Riverine rescue simulation with 4 targets and 2 rescue agents.
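With the simulation parameters above, the constraint tightening of Subsection III-B can be evaluated directly. The sketch below reproduces $\alpha = 0.99$ from (32) and computes the effective radius $\hat R$ of (37) for an assumed face standard deviation $\sigma = 1$ (the actual $\sigma^q_{ij}(\tau)$ depends on the predicted covariances):

```python
import math
from statistics import NormalDist

def alpha_from(alpha_I, n_i):
    """Per-constraint probability bound (32): alpha = 1 - (1 - alpha_I)/N_I."""
    return 1.0 - (1.0 - alpha_I) / n_i

def shrunk_radius(R, alpha, n_i, sigma_q):
    """Effective interception radius (37):
    Rhat = R - Phi^{-1}(alpha) * sigma / cos(pi/N_I)."""
    return R - NormalDist().inv_cdf(alpha) * sigma_q / math.cos(math.pi / n_i)

alpha = alpha_from(0.9, 10)  # paper values: alpha_I = 0.9, N_I = 10
assert abs(alpha - 0.99) < 1e-9
rhat = shrunk_radius(12.0, alpha, 10, sigma_q=1.0)
assert rhat < 12.0  # position uncertainty shrinks the usable radius
```

As the predicted covariances grow along the horizon, $\hat R$ shrinks, so sensors must approach targets more closely later in the plan to certify interception at the same confidence.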
An average of 1.02 s and a maximum of 1.84 s were required to set up and solve the MILP at each iteration; the average and maximum of the solve times alone were 0.24 s and 0.42 s, respectively.

VI. CONCLUSIONS

The work presented in this paper extends our previous work on stochastic target interception [5]. While [5] required the sensor domain to be convex, we now have a MILP framework for target interception that is flexible enough to accommodate non-convex domains in the presence of Gaussian uncertainty. As most realistic scenarios for the operation of sensor agent teams involve non-convex domains, the reasons for such an extension are compelling. In addition, we have proposed a cost function that directly addresses the issue of intercepting all the targets in minimum time. We simulate the proposed algorithm on a non-convex polygonal
approximation of a riverine domain for a potential riverine rescue team of autonomous sensor agents. The computational times required for the two-sensor-agent, four-target scenario are encouraging.

VII. ACKNOWLEDGMENT

This work was funded by the Office of Naval Research via grant N , and by the DoD SMART fellowship program.

REFERENCES

[1] T. Schouwenaars, B. De Moor, E. Feron, and J. How. Mixed integer programming for safe multi-vehicle cooperative path planning. In European Control Conference (ECC).
[2] A. Richards and J. P. How. Aircraft trajectory planning with collision avoidance using mixed integer linear programming. In American Control Conference, 2002.
[3] N. K. Yilmaz, C. Evangelinos, P. Lermusiaux, and N. M. Patrikalakis. Path planning of autonomous underwater vehicles for adaptive sampling using mixed integer linear programming. IEEE Journal of Oceanic Engineering, 33(4).
[4] M. G. Earl and R. D'Andrea. Modeling and control of a multi-agent system using mixed integer linear programming. In Proceedings of the 41st IEEE Conference on Decision and Control, volume 1, 2002.
[5] A. Shende, M. Bays, and D. Stilwell. Multiple agent coordination for stochastic target interception using MILP. In IEEE/RSJ International Conference on Intelligent Robots and Systems, to appear.
[6] L. Blackmore, H. Li, and B. Williams. A probabilistic approach to optimal robust path planning with obstacles. In American Control Conference, 2006.
[7] M. Ono, L. Blackmore, and B. C. Williams. Chance constrained finite horizon optimal control with nonconvex constraints. In American Control Conference (ACC), 2010.
[8] L. Blackmore, M. Ono, and B. C. Williams. Chance-constrained optimal path planning with obstacles. IEEE Transactions on Robotics, 27(6).
[9] N. E. Du Toit and J. W. Burdick. Probabilistic collision checking with chance constraints. IEEE Transactions on Robotics, 27(4).
[10] J. Yan and R. R. Bitmead. Incorporating state estimation into model predictive control and its application to network traffic control. Automatica, 41(4).
[11] L. Blackmore. A probabilistic particle control approach to optimal, robust predictive control. In AIAA Guidance, Navigation and Control Conference, 2006.
[12] L. Blackmore, A. Bektassov, M. Ono, and B. C. Williams. Robust, optimal predictive control of jump Markov linear systems using particles. Lecture Notes in Computer Science, (4416).
[13] L. Blackmore, M. Ono, A. Bektassov, and B. C. Williams. A probabilistic particle-control approximation of chance-constrained stochastic predictive control. IEEE Transactions on Robotics, 26.
[14] A. Bemporad and M. Morari. Control of systems integrating logic, dynamics, and constraints. Automatica, 35(3), 1999.
More information