Adaptive servo visual robot control
Robotics and Autonomous Systems 43 (2003)

Adaptive servo visual robot control

Oscar Nasisi, Ricardo Carelli
Instituto de Automática, Universidad Nacional de San Juan, Av. San Martín (Oeste) 1109, 5400 San Juan, Argentina

Received 11 December 2001; received in revised form 27 November 2002

Abstract

Adaptive controllers for robot positioning and tracking using direct visual feedback with camera-in-hand configuration are proposed in this paper. The controllers are designed to compensate for the full robot dynamics. Adaptation is introduced to reduce the design sensitivity to uncertainties in the robot and payload dynamics. It is proved that the control system achieves the motion control objective in the image coordinate system. Simulations are carried out to evaluate the controller performance; discretization and measurement effects are also considered in the simulations. © 2003 Elsevier Science B.V. All rights reserved.

Keywords: Visual motion; Robots; Tracking systems; Non-linear control systems; Adaptive control

1. Introduction

The use of visual information in the feedback loop is an attractive solution for the motion control of autonomous manipulators evolving in unstructured environments. In this context, robot motion control uses direct visual sensory information to achieve a desired relative position between the robot and a possibly moving object in the robot environment; this is called visual servoing. When the object is static, the visual positioning problem arises; when the object is moving, the visual tracking problem is established instead. Visual servoing is treated in references such as [1–6]. Visual servoing can be achieved either with the so-called fixed-camera approach or with the camera-in-hand approach. With the former, cameras fixed in the world-coordinate frame capture images of both the robot and its environment.
The objective of this approach is to move the robot in such a way that its end-effector reaches some desired object visually captured by the cameras in the working space [7–10]. With the camera-in-hand configuration, a camera mounted on the robot moves rigidly attached to the robot hand. The objective of this approach is that the manipulator moves in such a way that the projection of a static or moving object is at a desired location in the image captured by the camera [11–17]. Most of the above-cited works, however, have not considered the non-linear robot dynamics in the controller design. The resulting controllers may perform unsatisfactorily under high performance requirements, such as high-speed tasks and direct-drive robot actuators. In such cases, the robot dynamics has to be considered in the controller design, as partially done in [18,19] or fully included in [10,20,21]. In visual servoing, uncertainties may arise in the camera parameters, the kinematics and the robot dynamics. Some authors have addressed the problem of camera uncertainties, e.g. in [22–25] for different camera configurations. Kinematics uncertainty is treated in [26].

(Corresponding author. E-mail addresses: onasisi@inaut.unsj.edu.ar (O. Nasisi), rcarelli@inaut.unsj.edu.ar (R. Carelli).)

With the ever-growing power of visual processing and a consequent increase in the frequency bandwidth of visual controllers, the issues of compensating the robot dynamics and of designing controllers with reduced sensitivity to dynamic uncertainties are becoming more important. As regards uncertainties in the robot dynamics, robust control solutions have been proposed in [10,27,28], and adaptive control solutions in [29,30], for the fixed-camera visual servoing configuration. This paper deals with the adaptive control of robot dynamics using the camera-in-hand visual servoing approach. In previous work [31,32], the authors have proposed adaptive controllers for the camera-in-hand configuration assuming uncertainties in the robot dynamics. The present paper proposes a positioning and a tracking adaptive controller using visual feedback for robots with camera-in-hand configuration. Feedback signals come directly from internal position and velocity sensors and from visual information. It is proved that the positioning control errors converge asymptotically to zero, and that the tracking errors for moving objects are ultimately bounded. The controllers are based on the robot's inverse dynamics, the definition of a manifold in the error space [33], an update law [34] and, for moving objects, the estimation of the target velocity. As far as the authors know, these are the first direct visual adaptive stable controllers which include the non-linear robot dynamics. Although the main contribution of the work is the development of these adaptive controllers with the corresponding stability proofs, the paper also includes simulation studies to show the performance of the proposed controllers.

The paper is organized as follows. Section 2 presents the robot and the camera models.
In Section 3, the adaptive controllers for the positioning and tracking control objectives are presented. Section 4 gives the stability analysis for both controllers. Section 5 describes the simulation studies for a two degree-of-freedom (DOF) direct-drive manipulator. Finally, Section 6 presents some concluding remarks.

2. Robot and camera models

2.1. Model of the robot

When neither friction nor any other disturbance is present, the joint-space dynamics of an n-link manipulator can be written as [35]:

H(q)q̈ + C(q, q̇)q̇ + g(q) = τ,   (1)

where q is the n×1 vector of joint displacements, τ the n×1 vector of applied joint torques, H(q) the n×n symmetric positive definite manipulator inertia matrix, C(q, q̇)q̇ the n×1 vector of centripetal and Coriolis torques, and g(q) the n×1 vector of gravitational torques. The robot model (1) has some fundamental properties that can be exploited in the controller design [36].

Skew-symmetry. With a proper definition of matrix C (only the vector C(q, q̇)q̇ is uniquely defined), the matrices H and C in Eq. (1) satisfy

xᵀ[dH(q)/dt − 2C(q, q̇)]x = 0 for all x ∈ Rⁿ.   (2)

Linearity. Part of the dynamic structure in Eq. (1) is linear in a suitably selected set of robot and payload parameters:

H(q)q̈ + C(q, q̇)q̇ + g(q) = Φ(q, q̇, q̈)θ,   (3)

where Φ(q, q̇, q̈) is an n×m matrix and θ is an m×1 vector containing the selected set of robot and payload parameters.
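The skew-symmetry property (2) can be checked numerically for a two-link arm of the kind used later in Section 5. The sketch below uses hypothetical link parameters (not the paper's Table 1 values) and the standard closed-form H and C matrices for a planar 2-DOF manipulator:

```python
import numpy as np

# Hypothetical link parameters for a planar 2-DOF arm (illustrative only)
m1, m2, l1, lc1, lc2 = 3.0, 2.0, 0.45, 0.2, 0.2
I1, I2 = 0.05, 0.02

def H(q):
    """Inertia matrix of the 2-DOF arm."""
    c2 = np.cos(q[1])
    h11 = m1*lc1**2 + m2*(l1**2 + lc2**2 + 2*l1*lc2*c2) + I1 + I2
    h12 = m2*(lc2**2 + l1*lc2*c2) + I2
    return np.array([[h11, h12], [h12, m2*lc2**2 + I2]])

def C(q, qd):
    """Centripetal/Coriolis matrix (Christoffel-symbol definition)."""
    h = m2*l1*lc2*np.sin(q[1])
    return np.array([[-h*qd[1], -h*(qd[0] + qd[1])],
                     [ h*qd[0], 0.0]])

def Hdot(q, qd):
    """Analytic time derivative of H along (q, qd)."""
    dh = -2*m2*l1*lc2*np.sin(q[1])*qd[1]
    return np.array([[dh, dh/2], [dh/2, 0.0]])

q, qd = np.array([0.3, -1.1]), np.array([0.7, 0.4])
N = Hdot(q, qd) - 2*C(q, qd)          # should be skew-symmetric
x = np.random.randn(2)
assert abs(x @ N @ x) < 1e-9          # x^T (Hdot - 2C) x = 0, Eq. (2)
```

With the Christoffel-symbol definition of C, the matrix Ḣ − 2C is exactly skew-symmetric, so the quadratic form vanishes for every x.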
2.2. Robot differential kinematics

The differential kinematics of a manipulator gives the relationship between the joint velocities q̇ and the corresponding end-effector translational velocity ^W v and angular velocity ^W ω. They are related through the geometric Jacobian J_g(q) [37]:

[^W v; ^W ω] = J_g(q)q̇.   (4)

If the end-effector pose (position and orientation) is expressed by a minimal representation in the operational space, it is possible to compute the Jacobian matrix through differentiation of the direct kinematics with respect to the joint positions. The resulting Jacobian, termed the analytical Jacobian J_A(q), is related to the geometric Jacobian through [37]:

J_g(q) = [I 0; 0 T(q)] J_A(q),   (5)

where T(q) is a transformation matrix that depends on the parameterization of the end-effector orientation.

2.3. Camera model

A TV camera is assumed to be mounted at the robot end-effector. Let the origin of the camera coordinate frame (end-effector frame) with respect to the robot coordinate frame be ^W p_C = ^W p_C(q) ∈ R^{m₀} with m₀ = 3. The orientation of the camera frame with respect to the robot frame is denoted ^W R_C = ^W R_C(q) ∈ SO(3). The image captured by the camera supplies a two-dimensional array of brightness values from a three-dimensional scene. This image may undergo various types of computer processing to enhance image properties and extract image features. It is assumed here that the image features are the projections onto the 2D image plane of 3D points in the scene space. A perspective projection with focal length λ is also assumed, as depicted in Fig. 1. An object (feature) point ^C p_O with coordinates [^C p_x ^C p_y ^C p_z]ᵀ ∈ R³ in the camera frame projects onto a point in the image plane with image coordinates [u v]ᵀ ∈ R². The position ξ = [u v]ᵀ ∈ R² of an object feature point in the image will be referred to as an image feature point [38].
In this paper, it is assumed that the object can be characterized by a set of feature points. For the sake of completeness, some preliminaries concerning single and multiple feature points are recalled below.

2.3.1. Single feature point

Following the notation of [20], let ^W p_O ∈ R^{m₀} be the position of an object feature point expressed in the robot coordinate frame. The relative position of this object feature point, with respect to the camera coordinate frame, is [^C p_x ^C p_y ^C p_z]ᵀ.

Fig. 1. Perspective projection.

According to the perspective projection [4], the image feature point depends uniquely on the object feature position ^W p_O and the camera position and orientation, and is expressed as

ξ = [u; v] = (αλ/^C p_z)[^C p_x; ^C p_y],   (6)

where α is the scaling factor in pixels/m due to camera sampling and ^C p_z < 0. This model is also called the imaging model [20]. Differentiation with respect to time yields

ξ̇ = (αλ/^C p_z)[1 0 −^C p_x/^C p_z; 0 1 −^C p_y/^C p_z][^C ṗ_x; ^C ṗ_y; ^C ṗ_z].   (7)

On the other hand, the position of the object feature point with respect to the camera frame is given by

[^C p_x; ^C p_y; ^C p_z] = ^C R_W(q)(^W p_O − ^W p_C(q)).   (8)

By invoking the general formula for the velocity of a moving point in a moving frame with respect to a fixed frame [39], and considering a fixed object point, the time derivative of (8) can be expressed in terms of the camera translational and angular velocities as [13]

[^C ṗ_x; ^C ṗ_y; ^C ṗ_z] = ^C R_W{−^W ω_C × (^W p_O − ^W p_C(q)) − ^W v_C}.   (9)

After operating, there results

[^C ṗ_x; ^C ṗ_y; ^C ṗ_z] = [−I S(^C p_O)][^C R_W(q) 0; 0 ^C R_W(q)][^W v_C; ^W ω_C],   (10)

where S(·) denotes the skew-symmetric cross-product matrix, and ^W v_C and ^W ω_C stand for the camera's translational and angular velocities with respect to the robot frame, respectively. The motion of the image feature point as a function of the camera velocity is obtained by substituting (10) into (7):

ξ̇ = (αλ/^C p_z)[1 0 −^C p_x/^C p_z; 0 1 −^C p_y/^C p_z][−I S(^C p_O)][^C R_W(q) 0; 0 ^C R_W(q)][^W v_C; ^W ω_C].   (11)

Instead of using the coordinates ^C p_x and ^C p_y of the object feature described in the camera coordinate frame, which are a priori unknown, it is usual to replace them by the coordinates u and v of the projection of such a feature point onto the image plane. Therefore, by using (7),

ξ̇ = J_image(ξ, ^C p_z)[^C R_W(q) 0; 0 ^C R_W(q)][^W v_C; ^W ω_C],   (12)

where J_image(ξ, ^C p_z) is the so-called image Jacobian defined by [4,13]:

J_image(ξ, ^C p_z) = [−αλ/^C p_z  0  u/^C p_z  uv/(αλ)  −(α²λ² + u²)/(αλ)  v;
                      0  −αλ/^C p_z  v/^C p_z  (α²λ² + v²)/(αλ)  −uv/(αλ)  −u].   (13)

Finally, by using (4) and (5), ξ̇ can be expressed in terms of the robot joint velocity q̇ as

ξ̇ = J_image(ξ, ^C p_z)[^C R_W(q) 0; 0 ^C R_W(q)] J_g(q)q̇
   = J_image(ξ, ^C p_z)[^C R_W(q) 0; 0 ^C R_W(q)][I 0; 0 T(q)] J_A(q)q̇.

2.3.2. Multiple feature points

In applications with objects located in three-dimensional space, three or more feature points are required to make the visual servo control solvable [17,21]. The above imaging model can be extended to a static object, located in the robot workspace, having p object feature points. In this case, ^W p_O ∈ R^{p m₀} is a constant vector which contains the p object feature points, and the feature image vector ξ ∈ R^{2p} is redefined as

ξ = [u₁; v₁; …; u_p; v_p] = αλ[^C p_{x1}/^C p_{z1}; ^C p_{y1}/^C p_{z1}; …; ^C p_{xp}/^C p_{zp}; ^C p_{yp}/^C p_{zp}] ∈ R^{2p}.

The extended image Jacobian J_image(ξ, ^C p_z) ∈ R^{2p×6} is given by

J_image(ξ, ^C p_z) = [J_image([u₁; v₁], ^C p_{z1}); …; J_image([u_p; v_p], ^C p_{zp})],   (14)

where ^C p_z = [^C p_{z1} ^C p_{z2} ⋯ ^C p_{zp}]ᵀ ∈ R^p.
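The single-point imaging model (6) and the image Jacobian (13) can be exercised numerically. The sketch below uses the standard interaction-matrix form corresponding to Eq. (13); the camera looks along +z (so p_z > 0, whereas the paper adopts p_z < 0), and the focal length, scale factor and point position are illustrative values only. It checks the analytic image-point velocity against a finite difference of the projection:

```python
import numpy as np

alpha, lam = 72000.0, 0.008          # pixels/m and m (hypothetical values)
a = alpha * lam                      # combined factor alpha*lambda, in pixels

def project(p):
    """Perspective projection, Eq. (6): xi = (a / p_z) * (p_x, p_y)."""
    return a * np.array([p[0], p[1]]) / p[2]

def image_jacobian(xi, pz):
    """2x6 image Jacobian of Eq. (13), acting on camera-frame (v, w)."""
    u, v = xi
    return np.array([
        [-a/pz, 0.0, u/pz, u*v/a, -(a**2 + u**2)/a,  v],
        [0.0, -a/pz, v/pz, (a**2 + v**2)/a, -u*v/a, -u]])

p = np.array([0.05, -0.02, 0.6])     # feature point in the camera frame
xi = project(p)
vc = np.array([0.01, 0.02, -0.03])   # camera translational velocity
wc = np.array([0.05, -0.04, 0.02])   # camera angular velocity

# A fixed world point seen from a moving camera: p_dot = -vc - wc x p
pdot = -vc - np.cross(wc, p)
xidot = image_jacobian(xi, p[2]) @ np.hstack([vc, wc])

# Compare with a finite difference of the projection
dt = 1e-7
xidot_num = (project(p + pdot*dt) - xi) / dt
assert np.allclose(xidot, xidot_num, rtol=1e-4)
```

The first three columns of the Jacobian scale with depth (translations matter less for far points), while the rotational columns depend only on the image coordinates, which is why the depth ^C p_z is the only 3D quantity the controller needs (Assumption 4 below).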
Using Eqs. (13) and (14), the time derivative of the image feature vector can be expressed as

ξ̇ = J(q, ξ, ^C p_z)q̇,   (15)

where

J(q, ξ, ^C p_z) = J_image(ξ, ^C p_z)[^C R_W(q) 0; 0 ^C R_W(q)][I 0; 0 T(q)] J_A(q)   (16)

will be called the Jacobian matrix hereafter in this paper.

2.3.3. Moving object

When the object moves in the robot frame, the time derivative of Eq. (8) can be expressed as

[^C ṗ_x; ^C ṗ_y; ^C ṗ_z] = ^C R_W{−^W ω_C × (^W p_O − ^W p_C(q)) + (^W ṗ_O − ^W v_C)}.   (17)

As both the camera-in-hand and the object are moving, there exists a relative velocity between them. Therefore, the object velocity in the camera frame can be calculated as

[^C ṗ_x; ^C ṗ_y; ^C ṗ_z] = [−I S(^C p_O)][^C R_W(q) 0; 0 ^C R_W(q)][^W v_C; ^W ω_C] + ^C R_W ^W ṗ_O.   (18)

The motion of the feature point in the image plane as a function of the object velocity and the camera velocity is obtained by substituting (18) into (7):

ξ̇ = (αλ/^C p_{zO})[m₁ᵀ; m₂ᵀ][^C R_W(q) 0; 0 ^C R_W(q)] J_g(q)q̇ + (αλ/^C p_{zO})[1 0 −^C p_{xO}/^C p_{zO}; 0 1 −^C p_{yO}/^C p_{zO}] ^C R_W ^W ṗ_O,   (19), (20)

where ^W v_C and ^W ω_C (contained in J_g(q)q̇) are the translational and angular velocities of the camera with respect to the robot frame, and

m₁ᵀ = [−1  0  ^C p_{xO}/^C p_{zO}  ^C p_{xO}^C p_{yO}/^C p_{zO}  −(^C p_{zO}² + ^C p_{xO}²)/^C p_{zO}  ^C p_{yO}],
m₂ᵀ = [0  −1  ^C p_{yO}/^C p_{zO}  (^C p_{zO}² + ^C p_{yO}²)/^C p_{zO}  −^C p_{xO}^C p_{yO}/^C p_{zO}  −^C p_{xO}].

By analysing (19) and (20), it can be directly concluded that

ξ̇ = J(q, ξ, ^C p_z)q̇ + J_O(q, ^C p_O)^W ṗ_O,   (21)

where

J_O(q, ^C p_O) = (αλ/^C p_{zO})[1 0 −^C p_{xO}/^C p_{zO}; 0 1 −^C p_{yO}/^C p_{zO}] ^C R_W.   (22)

A simple generalization to multiple feature points can be obtained as in Section 2.3.2.

3. Adaptive controller

3.1. Problem formulation

Two cases are considered: position control for a fixed object, and tracking control for a moving object.

Case (a). The object does not move and a desired trajectory is given for the image features in the image plane. The following assumptions are considered:

Assumption 1. The object is fixed: ^W ṗ_O(t) = ^W v_O(t) = 0.

Assumption 2. There exists a joint position vector q_d such that, for a fixed object, the desired features vector ξ_d can be reached.

Assumption 3. For a given object position ^W p_O, there exists a neighbourhood of q_d where J is invertible and, additionally, J and J⁻¹ are bounded.

Assumption 4. The depth ^C p_z, i.e. the distance from the camera to the object, is available to the controller. A practical way to obtain ^C p_z is by using external sensors such as ultrasound, or additional cameras in the so-called binocular stereo approach [9].

Assumption 1 reduces the control problem to a positioning one. Assumption 2 ensures that the control problem is solvable. Assumption 3 is required for technical reasons in the stability analysis. Now, the position adaptive servo visual control problem can be formulated.
Control problem. Considering Assumptions 1–4, the desired features vector ξ_d, the initial estimates of the dynamic parameters θ in Eq. (3) and a given object position ^C p_O, find a control law

τ = T(q, q̇, ξ, θ̂)   (23)

and a parameter update law

dθ̂/dt = Ψ(q, q̇, ξ, θ̂, t)   (24)

such that the control error in the image plane ξ̃(t) = ξ_d − ξ(t) → 0 as t → ∞.

Case (b). The object moves along an unknown path. The following assumptions are considered:

Assumption 1. The object moves along a smooth trajectory with bounded velocity ^W ṗ_O(t) = ^W v_O(t) and bounded acceleration d^W v_O(t)/dt = ^W a_O(t).

Assumption 2. There exists a trajectory in the joint space q_d(t) such that the vector of desired fixed features ξ_d is achievable: ξ_d = i(^W p_C(q_d(t)), ^W p_O(t)).

Assumption 3. For the target path ^W p_O(t), there exists a neighbourhood of q_d(t) where J is invertible and, additionally, J and J⁻¹ are bounded.

Assumption 4. The depth ^C p_z, i.e. the distance from the camera to the object, is available to the controller. A practical way to obtain ^C p_z is by using external sensors such as ultrasound, or additional cameras in the so-called binocular stereo approach [9].

Assumption 1 establishes a practical restriction on the object trajectory. Assumption 2 ensures that the control problem is solvable. Assumption 3 is required for technical reasons in the stability analysis. Now, the adaptive servo visual tracking control problem can be formulated.

Control problem. Considering Assumptions 1–4, the desired features vector ξ_d, the initial estimates of the dynamic parameters θ in (3), and the initial estimates of the target velocity ^W v̂_O(t) and its derivative d^W v̂_O(t)/dt, find a control law

τ = T(q, q̇, ξ, θ̂, ^W v̂_O, ^W â_O)   (25)

and a parameter update law

dθ̂/dt = Ψ(q, q̇, ξ, θ̂, ^W v̂_O, ^W â_O, t)   (26)

such that the control error in the image plane ξ̃(t) = ξ_d − ξ(t) is ultimately bounded by a sufficiently small ball B_r.

3.2. Control and update laws

Case (a). Let us define a signal υ in the image error space:

υ = dξ̃/dt + Λξ̃.   (27)
The following control law is considered:

τ = Kῡ + Φθ̂   (28)

with

ῡ = J⁻¹υ,   (29)

Φ(q, q̇, ξ, υ)θ̂ = Ĥ(q) d/dt[J⁻¹Λξ̃] + Ĉ(q, q̇)J⁻¹Λξ̃ + ĝ(q),   (30)

where K and Λ are positive definite gain matrices of dimensions n×n and 2p×2p, respectively, and Ĥ(q), Ĉ(q, q̇) and ĝ(q) are the estimates of H, C and g. The parameterization in (28) is possible due to the linearity property (3). To estimate θ, the following parameter update law of the gradient type [40] is used:

dθ̂/dt = ΓΦᵀ(q, q̇, υ, ξ)ῡ   (31)

with Γ a positive definite adaptation gain matrix of dimension m×m.

Case (b). Let us define the same signal υ in the image error space as for Case (a):

υ = dξ̃/dt + Λξ̃   (32)

with ξ̇ = Jq̇ + J_O ^W v_O. The target velocity ^W v_O and its time derivative d^W v_O/dt can be estimated through a second-order filter:

^W v̂_O = [b₀p/(p² + b₁p + b₀)] ^W p_O(t),   (33)

^W â_O = d^W v̂_O/dt = [b₀p²/(p² + b₁p + b₀)] ^W p_O(t).   (34)

Therefore

υ̂ = dξ̃̂/dt + Λξ̃   (35)

with

dξ̂/dt = Jq̇ + J_O ^W v̂_O.   (36)

Now, the following control law is proposed:

τ = Kῡ̂ + Φθ̂   (37)

with

ῡ̂ = J⁻¹υ̂ = −q̇ − J⁻¹J_O ^W v̂_O + J⁻¹Λξ̃,   (38)

Φ(q̇, υ̂, ^W v̂_O, ^W â_O)θ̂ = Ĥ(q) d/dt[J⁻¹(Λξ̃ − J_O ^W v̂_O)] + Ĉ(q, q̇)J⁻¹(Λξ̃ − J_O ^W v̂_O) + ĝ(q),   (39)

where expanding the time derivative in (39) produces the terms in J̇, J̇_O and ^W â_O. K and Λ are positive definite gain matrices of dimensions n×n and 2p×2p, and Ĥ(q), Ĉ(q, q̇) and ĝ(q) are the estimates of H(q), C(q, q̇) and g(q). The parameterization in (37) is possible due to the linearity property (3).
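One step of the Case (a) controller and gradient update can be sketched as follows, assuming a square invertible Jacobian. The constant Jacobian, the toy regressor values and all gains are illustrative placeholders, not the paper's design parameters:

```python
import numpy as np

def control_step(xi, xi_d, q_dot, J, Phi, theta_hat, K, Lam, Gamma, dt):
    """One sample of: ubar = J^{-1} v, tau = K ubar + Phi theta_hat,
    theta_hat update per the gradient law (31), Euler-integrated."""
    xi_err = xi_d - xi                    # image-space error (xi tilde)
    v = -J @ q_dot + Lam @ xi_err         # v = d(xi_err)/dt + Lam xi_err (fixed object)
    ubar = np.linalg.solve(J, v)          # ubar = J^{-1} v, Eq. (29)
    tau = K @ ubar + Phi @ theta_hat      # control law, Eq. (28)
    theta_hat = theta_hat + dt * Gamma @ Phi.T @ ubar   # gradient update, Eq. (31)
    return tau, theta_hat

# Toy 2-DOF numbers: constant image Jacobian and placeholder regressor
J = np.array([[200.0, 40.0], [-30.0, 180.0]])   # pixels/rad (hypothetical)
Phi = np.array([[1.0, 0.3], [0.2, 1.0]])        # hypothetical regressor values
K, Lam, Gamma = 5*np.eye(2), 2*np.eye(2), 0.1*np.eye(2)

tau, theta_hat = control_step(
    xi=np.array([10.0, -4.0]), xi_d=np.zeros(2),
    q_dot=np.array([0.1, -0.05]), J=J, Phi=Phi,
    theta_hat=np.zeros(2), K=K, Lam=Lam, Gamma=Gamma, dt=0.001)
```

Note that when the image error and the joint velocity are both zero, υ vanishes, so the torque reduces to the feedforward term Φθ̂ and the estimate stops adapting, consistent with V̇ ≤ 0 in the stability analysis below.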
To estimate θ, the following parameter update law is considered:

dθ̂/dt = ΓΦᵀ(q̇, υ̂, ^W v̂_O, ^W â_O)ῡ̂ − Lθ̂   (40)

with Γ and L positive definite adaptation gain matrices of dimension m×m.

4. Stability analysis

In this section, two propositions describe the stability properties of the adaptive controllers proposed in Section 3. First, the following technical lemma is considered.

Lemma 1. Let the transfer function H(s) ∈ R(s)^{n×n} be exponentially stable and strictly proper, and let u and y be its input and output, respectively. If u ∈ L₂ⁿ ∩ L∞ⁿ, then y, ẏ ∈ L₂ⁿ ∩ L∞ⁿ and y → 0 as t → ∞.

Lemma 1, shown in [33], implies that filtering a square-integrable and bounded function through an exponentially stable and strictly proper filter yields not only a square-integrable and bounded output, but also preserves this property for the output's time derivative. Moreover, the output converges to zero.

Proposition 1 (Case (a)). Let us consider the control law (28) and update law (31) in closed loop with the robot and camera models (1) and (6), as well as Assumptions 1–4 for Case (a). Then, there exists a neighbourhood of q_d such that:
(a) θ̃ = θ − θ̂ ∈ L∞^m.
(b) υ ∈ L∞^{2p} ∩ L₂^{2p}.
(c) ξ̃(t) = ξ_d − ξ → 0 as t → ∞.

Proof. The closed-loop system is obtained by combining (1) and (28):

KJ⁻¹υ + Φθ̂ = Hq̈ + Cq̇ + g.   (41)

By using θ̂(t) = θ − θ̃(t) and Eqs. (29) and (31), we obtain

Hdῡ/dt + Cῡ + Kῡ = Φθ̃.   (42)

Let us consider the local non-negative function of time

V = ½υᵀJ⁻ᵀHJ⁻¹υ + ½θ̃ᵀΓ⁻¹θ̃,   (43)

whose time derivative along the trajectories of (42) is

V̇ = ῡᵀH dῡ/dt + ½ῡᵀḢῡ + θ̃ᵀΓ⁻¹dθ̃/dt   (44)
  = ῡᵀ[−Kῡ + Φθ̃ − Cῡ] + ½ῡᵀḢῡ + θ̃ᵀΓ⁻¹dθ̃/dt.   (45)

By the skew-symmetry property and the parameter update law (31), it results that

V̇ = −υᵀJ⁻ᵀKJ⁻¹υ ≤ 0.   (46)

Eqs. (43) and (46) imply θ̃ ∈ L∞^m and υ ∈ L∞^{2p}. By time-integrating V̇, it can also be shown that υ ∈ L₂^{2p}. Finally, to prove (c), note that υ = dξ̃/dt + Λξ̃. Regarding ξ̃(t) as the output of an exponentially stable and strictly proper linear filter with input υ, Lemma 1 allows us to conclude that ξ̃(t) → 0 as t → ∞. □
Remark 1. If more features than the DOF of the robot are taken, a non-square Jacobian matrix is obtained. In this case, a redefinition of υ as

υ = d(Jᵀξ̃)/dt + Λ(Jᵀξ̃)   (47)

should be used. A reasoning similar to that of Proposition 1 leads to the same conclusions about the control system stability.

Proposition 2 (Case (b)). Let us consider the control law (37) and update law (40) in closed loop with the robot and camera models (1) and (6), as well as Assumptions 1–4 for Case (b). Then, there exists a neighbourhood of q_d(t) such that:
(a) θ̃ = θ − θ̂ ∈ L∞^m.
(b) ῡ̂ ∈ L∞ⁿ.
(c) ξ̃(t) = ξ_d − ξ is ultimately bounded.

Proof. The closed-loop system is obtained by combining (1) and (37):

Kῡ̂ + Φθ̂ = Hq̈ + Cq̇ + g.   (48)

Using θ̂ = θ − θ̃ and Eqs. (38) and (39), it is obtained that

Kῡ̂ + HD̂ῡ + Cῡ̂ = Φθ̃,   (49)

where D̂ῡ is the estimate of the time derivative of ῡ. Also, D̂ῡ = Dῡ̂ + ε, with ε = J⁻¹J_O(^C v_O − ^C v̂_O), where ε_O = ^C v_O − ^C v̂_O is the estimation error and Dῡ̂ is the time derivative of ῡ̂. Then

HDῡ̂ = −(K + C)ῡ̂ + Φθ̃ − ε̄,   (50)

where ε̄ = Hε. Let us consider the local non-negative function of time

V = ½ῡ̂ᵀHῡ̂ + ½θ̃ᵀΓ⁻¹θ̃,   (51)

whose time derivative along the trajectories of (50), considering as well the parameter update law (40), is

V̇ = ῡ̂ᵀ[−(K + C)ῡ̂ + Φθ̃ − ε̄] + ½ῡ̂ᵀḢῡ̂ + θ̃ᵀ[−Φᵀῡ̂ + Γ⁻¹Lθ̂].   (52)

By the skew-symmetry property, there results

V̇ = −ῡ̂ᵀKῡ̂ − θ̃ᵀΓ⁻¹Lθ̃ − ῡ̂ᵀε̄ + θ̃ᵀΓ⁻¹Lθ.   (53)

Defining

μ_K = σ_min(K), μ_{Γ⁻¹L} = σ_min(Γ⁻¹L), γ_{Γ⁻¹L} = σ_max(Γ⁻¹L),   (54)

where σ_min(A) and σ_max(A) denote the minimum and maximum singular values of A, it follows that

V̇ ≤ −μ_K‖ῡ̂‖² − μ_{Γ⁻¹L}‖θ̃‖² + ‖ῡ̂‖‖ε̄‖ + γ_{Γ⁻¹L}‖θ̃‖‖θ‖.   (55)

Using Young's inequality with ζ, η ∈ R⁺, the cross terms can be bounded as

‖θ̃‖‖θ‖ ≤ (1/(2ζ²))‖θ̃‖² + (ζ²/2)‖θ‖²,
‖ῡ̂‖‖ε̄‖ ≤ (1/(2η²))‖ῡ̂‖² + (η²/2)‖ε̄‖².   (56)

Going back to V̇:

V̇ ≤ −(μ_K − 1/(2η²))‖ῡ̂‖² − (μ_{Γ⁻¹L} − γ_{Γ⁻¹L}/(2ζ²))‖θ̃‖² + (ζ²γ_{Γ⁻¹L}/2)‖θ‖² + (η²/2)‖ε̄‖²,   (57)

which can be expressed as

V̇ ≤ −α₁‖ῡ̂‖² − α₂‖θ̃‖² + ρ,   (58)

where

α₁ = μ_K − 1/(2η²) > 0, α₂ = μ_{Γ⁻¹L} − γ_{Γ⁻¹L}/(2ζ²) > 0, ρ = (ζ²γ_{Γ⁻¹L}/2)‖θ‖² + (η²/2)‖ε̄‖².   (59)

On the other hand, (51) can be bounded as

V ≤ β₁‖ῡ̂‖² + β₂‖θ̃‖²,   (60)

where β₁ = ½γ_H, β₂ = ½γ_{Γ⁻¹}, γ_H = sup_q[σ_max(H)] and γ_{Γ⁻¹} = σ_max(Γ⁻¹). Then

V̇ ≤ −δV + ρ   (61)

with δ = min{α₁/β₁, α₂/β₂}. Since ρ is bounded, (61) implies that ῡ̂ ∈ L∞ⁿ, θ̃ ∈ L∞^m and x = (ῡ̂, θ̃)ᵀ is ultimately bounded inside a ball B, which proves (a) and (b). In addition, from (38), υ̂ = Jῡ̂ and, by recalling Assumption 3, υ̂ ∈ L∞^{2p}. Besides, υ̂ can be expressed in terms of υ as

υ̂ = dξ̃/dt + Λξ̃ + J_O(v_O − v̂_O) = υ + J_O ε_O.   (62)

Since J_O ε_O is bounded, υ = dξ̃/dt + Λξ̃ is ultimately bounded as well. From the last equation, ξ̃ = O(υ), where O is a linear operator with finite gain. Therefore ‖ξ̃‖ ≤ ‖O‖‖υ‖ and, since υ is ultimately bounded, ξ̃ is also ultimately bounded, which proves (c). □
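The ultimate-boundedness conclusion drawn from (61) follows from the standard comparison argument, sketched here:

```latex
% Comparison argument for \dot V \le -\delta V + \rho, with \delta > 0 and \rho bounded.
% Multiply by e^{\delta t} and integrate over [0, t]:
\frac{d}{dt}\!\left( e^{\delta t} V(t) \right)
  = e^{\delta t}\!\left( \dot V + \delta V \right)
  \le e^{\delta t} \rho
\quad\Longrightarrow\quad
V(t) \le e^{-\delta t} V(0) + \frac{\rho}{\delta}\left( 1 - e^{-\delta t} \right).
% Hence \limsup_{t \to \infty} V(t) \le \rho/\delta: the state (\bar\upsilon, \tilde\theta)
% enters and remains in a ball whose radius shrinks as \rho decreases
% (i.e. as the velocity-estimation error and the leakage term become small).
```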
Remark 2. If more features than the DOF of the robot are considered, a non-square Jacobian matrix is obtained. In this case, a redefinition of υ as

υ = d(Jᵀξ̃)/dt + Λ(Jᵀξ̃)   (63)

should be used. By reasoning as in Proposition 2, it is possible to reach the same conclusions on the control system behaviour.

5. Simulations

Computer simulations have been carried out to show the stability and performance of the proposed adaptive controllers. The robot used for the simulations is a two DOF manipulator, as shown in Fig. 2. The meaning and numerical values of the symbols in Fig. 2 are listed in Table 1. The elements H_ij(q) (i, j = 1, 2) of the inertia matrix H are

H₁₁(q) = m₁l²_c1 + m₂(l₁² + l²_c2 + 2l₁l_c2 cos(q₂)) + I₁ + I₂,
H₁₂(q) = H₂₁(q) = m₂(l²_c2 + l₁l_c2 cos(q₂)) + I₂,
H₂₂(q) = m₂l²_c2 + I₂.

Fig. 2. Two DOF manipulator scheme.

Table 1. Parameters of the manipulator

Description | Notation | Value
Length of link 1 (m) | l₁ |
Length of link 2 (m) | l₂ |
Centre of gravity of link 1 (m) | l_c1 |
Centre of gravity of link 2 (m) | l_c2 |
Mass of link 1 (kg) | m₁ |
Mass of link 2 + camera (kg) | m₂ |
Inertia of link 1 (kg m²) | I₁ |
Inertia of link 2 + camera (kg m²) | I₂ |
Acceleration of gravity (m/s²) | g | 9.8
The elements C_ij(q, q̇) (i, j = 1, 2) of the centrifugal and Coriolis matrix C are

C₁₁(q, q̇) = −m₂l₁l_c2 sin(q₂)q̇₂,
C₁₂(q, q̇) = −m₂l₁l_c2 sin(q₂)(q̇₁ + q̇₂),
C₂₁(q, q̇) = m₂l₁l_c2 sin(q₂)q̇₁,
C₂₂(q, q̇) = 0.

Table 2. Parameters of the camera

Description | Notation | Value
Focal length (m) | λ |
Scale factor (pixels/m) | α |

Fig. 3. Trajectory in the image plane.

Fig. 4. Trajectory in the robot workspace.
The entries of the gravitational torque vector g are given by

g₁(q) = (m₁l_c1 + m₂l₁)g sin(q₁) + m₂l_c2 g sin(q₁ + q₂),
g₂(q) = m₂l_c2 g sin(q₁ + q₂).

Numerical values for the camera model are listed in Table 2. All constants, design parameters and variables in the control system are expressed in the International System of Units (SI). The linear parameterization of Eqs. (31) and (39) leads to the parameter vector

θ = [m₁l²_c1  m₁l_c1  m₂l²_c2  m₂l_c2  m₂  I₁  I₂]ᵀ.

For the controller design, it is assumed that the parameters of link 1 (m₁l²_c1, m₁l_c1, I₁) are known with uncertainties of about 10%, and those of link 2 (m₂l²_c2, m₂l_c2, m₂, I₂) with uncertainties of about 20%.

Fig. 5. Evolution of control errors.
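The linearity property (3) for this manipulator, with the parameter vector θ given above, can be verified numerically. The sketch below builds the dynamics of the 2-DOF arm and a regressor matrix Φ(q, q̇, q̈) such that Φθ reproduces H(q)q̈ + C(q, q̇)q̇ + g(q); the link values are hypothetical (the paper's Table 1 values are not reproduced here):

```python
import numpy as np

# Hypothetical link parameters (illustrative only)
l1, lc1, lc2 = 0.45, 0.2, 0.25
m1, m2, I1, I2, g0 = 3.0, 2.0, 0.05, 0.03, 9.8
theta = np.array([m1*lc1**2, m1*lc1, m2*lc2**2, m2*lc2, m2, I1, I2])

def dynamics(q, qd, qdd):
    """Left-hand side of Eq. (1) for the 2-DOF arm of Section 5."""
    c2, s2 = np.cos(q[1]), np.sin(q[1])
    s1, s12 = np.sin(q[0]), np.sin(q[0] + q[1])
    H = np.array([[m1*lc1**2 + m2*(l1**2 + lc2**2 + 2*l1*lc2*c2) + I1 + I2,
                   m2*(lc2**2 + l1*lc2*c2) + I2],
                  [m2*(lc2**2 + l1*lc2*c2) + I2, m2*lc2**2 + I2]])
    C = m2*l1*lc2*s2 * np.array([[-qd[1], -(qd[0] + qd[1])], [qd[0], 0.0]])
    g = np.array([(m1*lc1 + m2*l1)*g0*s1 + m2*lc2*g0*s12, m2*lc2*g0*s12])
    return H @ qdd + C @ qd + g

def regressor(q, qd, qdd):
    """Regressor Phi(q, qd, qdd) such that Phi @ theta == dynamics()."""
    c2, s2 = np.cos(q[1]), np.sin(q[1])
    s1, s12 = np.sin(q[0]), np.sin(q[0] + q[1])
    r1 = [qdd[0], g0*s1, qdd[0] + qdd[1],
          l1*c2*(2*qdd[0] + qdd[1]) - l1*s2*(qd[0]*qd[1] + (qd[0] + qd[1])*qd[1]) + g0*s12,
          l1**2*qdd[0] + l1*g0*s1, qdd[0], qdd[0] + qdd[1]]
    r2 = [0.0, 0.0, qdd[0] + qdd[1],
          l1*c2*qdd[0] + l1*s2*qd[0]**2 + g0*s12,
          0.0, 0.0, qdd[0] + qdd[1]]
    return np.array([r1, r2])

q, qd, qdd = np.array([0.4, -0.7]), np.array([0.9, 0.3]), np.array([-0.2, 1.1])
assert np.allclose(dynamics(q, qd, qdd), regressor(q, qd, qdd) @ theta)
```

Only the kinematic constant l₁ appears inside the regressor; all the uncertain mass/inertia quantities are collected in θ, which is what makes the gradient update laws (31) and (40) implementable.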
5.1. Case (a): adaptive position control

Simulations were carried out using the following design parameters: Λ = diag{5, 1.8}, K = diag{50, 5}, Γ = diag{0.7, …, 0.7}. The robot initial conditions are q₁(0) = 30, q₂(0) = 45, q̇₁(0) = 0, q̇₂(0) = 0, and the initial estimates of the entries of θ are m₁l²_c1(0) = 0.264, m₁l_c1(0) = 2.632, m₂l²_c2(0) = , m₂l_c2(0) = 0.671, m̂₂(0) = 5.328, Î₁(0) = 1.397, Î₂(0) = .

Fig. 6. Evolution of parameter estimates.
Fig. 7. Trajectory in the image plane.

The object feature point was placed at ^W p_O = [ ]ᵀ. Simulations were carried out in two stages. In the first stage, we consider the adaptive controller with uncertainty in the robot dynamic parameters. The second stage presents the non-adaptive control with wrong estimates of the dynamic parameters, set at the same values as the initial estimates of the adaptive controller. Simulation results are shown in Figs. 3–6. Fig. 3 shows the image feature trajectories on the image plane for the adaptive and non-adaptive controllers. Fig. 4 represents the trajectory of the manipulator's end-effector, again for the adaptive and the non-adaptive cases. Fig. 5 presents the evolution of the control errors. It is clearly seen from these figures that the adaptive controller achieves better control performance than the non-adaptive one. For the adaptive case, the control errors tend to zero, while for the non-adaptive case the controller is unable to eliminate the steady-state errors. By analysing Fig. 6, which represents the evolution of the parameter estimates, it can be concluded that, for the involved signals, the proposed controller does not present parametric convergence, i.e. θ̃ does not converge to zero as t → ∞.

Fig. 8. Trajectory in the robot workspace.

Fig. 9. Coordinate ^W x in the work plane.

Fig. 10. Coordinate ^W y in the work plane.

5.2. Case (b): tracking adaptive control

The simulation conditions are the same as for Case (a). A point object is considered to move within the manipulator's environment, describing a circular trajectory of radius r = 0.2 m at angular speed ω = 1.57 rad/s. The parameters of the speed-estimating filter, Eqs. (33) and (34), are selected as b₀ = 10⁴ and b₁ = 200. Simulations were carried out considering the adaptive and non-adaptive cases to obtain comparative performance results, which are shown in Figs. 7–12. Fig. 7 shows the trajectory of the image features during the tracking process. Fig. 8, on the other hand, shows the trajectory of the manipulator's end-effector in the robot frame. For the adaptive case, the initial estimates θ̂(0) of the parameters are taken equal to the fixed wrong parameters of the non-adaptive case. For a better display of the adaptive controller's performance, Figs. 9 and 10 present the coordinates ^W x and ^W y of the manipulator and object trajectories. The good tracking performance for the simulated example is clearly seen there. The control errors are explicitly shown in Fig. 11. Finally, the evolution of the estimates of the dynamic parameters is presented in Fig. 12. From these figures, the improvement in the manipulator's performance when the adaptive controller is used, as compared to the fixed controller, can be noted. For the adaptive case, the control errors enter and remain in a smaller neighbourhood of the ideal zero control error.

Fig. 11. Evolution of control errors.

6. Discretization and measurement noise effects

In the previous section, a tracking adaptive servo-visual control algorithm in the continuous-time domain was proposed and its stability analysis was given. The feasibility of implementing the proposed algorithm on a computer system motivates its discretization.
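The behaviour of the speed-estimating filter (33) on the circular target of Section 5.2 can be sketched as follows. The filter v̂ = b₀s/(s² + b₁s + b₀)·p is realized as the ODE x_f″ + b₁x_f′ + b₀x_f = b₀p with v̂ = x_f′, using the gains quoted above; the forward-Euler integration step is an assumption of this sketch:

```python
import numpy as np

b0, b1 = 1.0e4, 200.0                 # filter gains from Section 5.2
dt, T = 1.0e-4, 2.0                   # integration step (assumed) and horizon
r, w = 0.2, 1.57                      # circular target: 0.2 m radius, 1.57 rad/s

xf = np.array([r, 0.0])               # filter position state (per axis)
vf = np.zeros(2)                      # filter velocity state = velocity estimate
for k in range(int(T/dt)):
    t = k*dt
    p = r*np.array([np.cos(w*t), np.sin(w*t)])   # measured target position
    a = b0*(p - xf) - b1*vf           # xf'' from the filter ODE
    xf, vf = xf + dt*vf, vf + dt*a    # forward-Euler integration

v_true = r*w*np.array([-np.sin(w*T), np.cos(w*T)])
assert np.linalg.norm(vf - v_true) < 0.02   # small lag error after the transient
```

With b₀ = 10⁴ and b₁ = 200 the filter poles sit at s = −100 (critically damped), so its 100 rad/s bandwidth is far above the 1.57 rad/s target motion: the estimate settles quickly and tracks the true velocity with only a small phase lag, which is the ε_O term bounded in the Proposition 2 analysis.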
Fig. 12. Evolution of parameter estimates.

In this section, the discretization of the control law and the update law is laid out for their digital implementation. In addition, the performance of the proposed control algorithm is evaluated through computer simulations for several sampling times and for measurement and discretization noise. The proposed scheme has two feedback loops with different sampling times. The first one (T₁) is the fast-dynamics loop, in charge of controlling the manipulator using the joint position and velocity measurements. The second loop (T₂), with slower dynamics, computes and estimates the velocity of the moving object based on the images from a video camera, setting the tracking references for the T₁ loop. The discretization is obtained through the backward-difference approximation

dx/dt ≈ (x_k − x_{k−1})/T,   (64)

where T is the sampling period.
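The backward difference (64) is first-order accurate, so its error shrinks roughly linearly with the sampling period. A minimal check on a known signal:

```python
import numpy as np

def backward_diff(x, T):
    """Backward-difference derivative of Eq. (64): (x_k - x_{k-1}) / T."""
    return (x[1:] - x[:-1]) / T

errs = []
for T in (1e-2, 1e-3, 1e-4):
    t = np.arange(0.0, 1.0, T)
    x = np.sin(2*np.pi*t)
    d_true = 2*np.pi*np.cos(2*np.pi*t[1:])
    errs.append(np.max(np.abs(backward_diff(x, T) - d_true)))

# First-order accuracy: a 10x smaller T gives roughly a 10x smaller error
assert errs[0] > errs[1] > errs[2]
assert 5 < errs[0]/errs[1] < 20
```

This first-order error is one of the discretization effects quantified in Table 3: the error indexes grow as T₁ and T₂ increase.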
Table 3. Performance for different sampling times

No. | T₁ (s) | T₂ (s) | ∫‖ξ̃‖ dt | ‖ξ̃‖_final | ∫‖θ̂‖ dt

Fig. 13. Trajectory in the robot workspace. Simulations 1, 3, 4 and 6 of Table 3.
The discrete equations of the control law and of the parameter update law for the adaptive controller are

τ_{kT₁} = Kῡ̂_{kT₁} + Φ_{kT₁}θ̂_{kT₁}   (65)

with

Φ_{kT₁}(q̇_{kT₁}, υ̂_{kT₁}, ^W v̂_{O,kT₂}, ^W â_{O,kT₂})θ̂_{kT₁} = Ĥ(q_{kT₁}) d/dt[J⁻¹(Λξ̃_{kT₂} − J_O ^W v̂_{O,kT₂})] + Ĉ(q_{kT₁}, q̇_{kT₁})J⁻¹(Λξ̃_{kT₂} − J_O ^W v̂_{O,kT₂}) + ĝ(q_{kT₁}),   (66)

where the time derivative is computed through the backward difference (64), and

θ̂_{kT₁} = (I − T₁L)θ̂_{(k−1)T₁} + T₁ΓΦᵀ_{kT₁}ῡ̂_{kT₁}.   (67)

Fig. 14. Norms of the control errors. Simulations 1, 3, 4 and 6 of Table 3.
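The discrete update law (67) can be sketched in isolation. The gains and the constant regressor below are illustrative assumptions; with frozen inputs the recursion converges to the fixed point θ* = L⁻¹ΓΦᵀν̄, which shows how the leakage term L keeps the estimate bounded under a persistent error signal:

```python
import numpy as np

T1 = 2.5e-3                           # fast-loop sampling time (assumed)
Gamma = 0.2*np.eye(3)                 # adaptation gain (Section 6 uses diag{0.2,...})
L = 0.08*np.eye(3)                    # leakage gain (Section 6 uses diag{0.08,...})
Phi = np.array([[1.0, 0.4, -0.2],
                [0.3, 1.0,  0.5]])    # hypothetical constant 2x3 regressor
nu = np.array([0.6, -0.4])            # frozen filtered error signal

theta = np.zeros(3)
for _ in range(100000):               # iterate Eq. (67) with frozen Phi and nu
    theta = (np.eye(3) - T1*L) @ theta + T1 * Gamma @ Phi.T @ nu

# Fixed point of the recursion: L theta* = Gamma Phi^T nu
theta_star = np.linalg.solve(L, Gamma @ Phi.T @ nu)
assert np.allclose(theta, theta_star, atol=1e-6)
```

In the actual controller Φ and ν̄ vary every sample, so θ̂ does not reach a fixed point; the computation above only isolates the stabilizing effect of the (I − T₁L) contraction.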
\hat{\nu}_{kT_1} = \dot{q}_{kT_1} - J^{-1}J_O\,{}^{W}\hat{v}_{O\,kT_2} + J^{-1}\Lambda\,\xi_{kT_2}. \qquad (68)

The filter equations for the estimates of the object velocity and acceleration are

{}^{W}\hat{v}_{O\,kT_2} = \left( \frac{I}{T_2^2} + \frac{b_1}{T_2} + b_0 \right)^{-1} \left[ \frac{b_0}{T_2} \left( {}^{W}p_{O\,kT_2} - {}^{W}p_{O\,(k-1)T_2} \right) + \left( \frac{2I}{T_2^2} + \frac{b_1}{T_2} \right) \hat{v}_{O\,(k-1)T_2} - \frac{I}{T_2^2}\, \hat{v}_{O\,(k-2)T_2} \right], \qquad (69)

{}^{W}\dot{\hat{v}}_{O\,kT_2} = \left( \frac{I}{T_2^2} + \frac{a_1}{T_2} + a_0 \right)^{-1} \left[ \frac{a_0}{T_2^2} \left( {}^{W}p_{O\,kT_2} - 2\,{}^{W}p_{O\,(k-1)T_2} + {}^{W}p_{O\,(k-2)T_2} \right) + \left( \frac{2I}{T_2^2} + \frac{a_1}{T_2} \right) \dot{\hat{v}}_{O\,(k-1)T_2} - \frac{I}{T_2^2}\, \dot{\hat{v}}_{O\,(k-2)T_2} \right]. \qquad (70)

Several simulations were carried out, considering different sampling times and measurement noises, to evaluate the performance of the developed discrete controller. The simulation conditions are the same as those of the continuous case (see Section 5.2), with different sampling times for both the faster and the slower dynamic loops. The following gain matrices were selected: Λ = diag{20, 20}, K = diag{40, 40}, Γ = diag{0.2, ..., 0.2}, L = diag{0.08, ..., 0.08}. As in the continuous case, the point object moves in the manipulator environment describing a circular trajectory, with the same velocity and trajectory radius. The filter parameters for the velocity and acceleration estimation were b_0 = 10^4, b_1 = 200, a_0 = 10^4 and a_1 = 200.

Table 3 shows the different sampling times used for the various simulation conditions and the results obtained, evaluated through three error indexes: the integral of the control error norm, the error norm once the stationary state is reached, and the integral of the norm of the manipulator dynamic parameter estimates. In Table 3, T_1 represents the sampling time of the faster dynamics loop and T_2 the vision loop sampling time. Figs. 13 and 14 show the trajectory of the robot and the norm of the control errors for some simulation conditions of Table 3.
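The velocity filter (69) can be implemented directly; the sketch below is our per-axis scalar illustration with the filter parameters b_0 = 10^4 and b_1 = 200 given in the text (the acceleration filter (70) follows the same pattern with a second difference of the position samples):

```python
class VelocityFilter:
    """Discrete second-order filter estimating object velocity from
    position samples, Eq. (69): the continuous filter b0*s/(s^2 + b1*s + b0)
    discretized with backward differences at period T2 (scalar, per axis)."""

    def __init__(self, T2, b0=1e4, b1=200.0):
        self.T2, self.b0, self.b1 = T2, b0, b1
        self.p_prev = 0.0    # previous position sample
        self.v_prev = 0.0    # previous velocity estimate
        self.v_prev2 = 0.0   # estimate two samples back

    def step(self, p):
        T2, b0, b1 = self.T2, self.b0, self.b1
        denom = 1.0 / T2**2 + b1 / T2 + b0
        v = ((b0 / T2) * (p - self.p_prev)
             + (2.0 / T2**2 + b1 / T2) * self.v_prev
             - (1.0 / T2**2) * self.v_prev2) / denom
        self.p_prev = p
        self.v_prev2, self.v_prev = self.v_prev, v
        return v

# Example: feed a ramp p(t) = 0.5*t at T2 = 0.05 s; the estimate
# converges to the true velocity 0.5 after a short transient.
f = VelocityFilter(T2=0.05)
for k in range(200):
    v = f.step(0.5 * (k * 0.05))
print(round(v, 3))  # prints 0.5
```

With b_0 = 10^4 and b_1 = 200 the continuous prototype has a critically damped double pole at s = -100 rad/s, i.e. a 10 ms time constant, fast relative to the 0.05 s vision sampling period.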
A second simulation was performed to determine the influence of perturbations in the closed-loop system due to measurement and sensing errors; non-modelled robot dynamics was also assumed. In this last experiment, the same simulation conditions as in the previous ones were considered, i.e. the initial and final manipulator positions and the initial parameter uncertainties, together with realistic sampling times for the evaluation (T_2 = 0.05 s for the vision loop).

Table 4
Performance for different measurement noises
Columns: Order; quantization Q (bits); encoder noise variances σ_1^2 (on q_1) and σ_2^2 (on q_2); tachometer noise mean m and variance σ^2 (on the joint velocities); indexes ∫‖ξ‖ dt, ‖ξ‖_final and ∫‖θ̂‖ dt
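The perturbation sources evaluated in Table 4 (image quantization, encoder noise, tachometer noise) can be mimicked with simple noise models such as the following; the bit depth, variances and bias values are illustrative placeholders, not the values used in the simulations:

```python
import numpy as np

rng = np.random.default_rng(0)

def quantize(u, n_bits, full_scale):
    """Quantization noise: a coordinate rounded to 2**n_bits levels,
    as introduced by the image discretization process."""
    step = full_scale / 2 ** n_bits
    return np.round(u / step) * step

def encoder_noise(q, sigma2):
    """Optical-encoder noise: zero-mean Gaussian, per-joint variance."""
    return q + rng.normal(0.0, np.sqrt(sigma2), size=np.shape(q))

def tachometer_noise(qdot, mean, sigma2):
    """Tachometer noise: Gaussian with a non-zero mean (bias); it is the
    growing bias that degrades the closed loop most severely."""
    return qdot + rng.normal(mean, np.sqrt(sigma2), size=np.shape(qdot))

# Illustrative use on a joint measurement and an image coordinate:
q_meas = encoder_noise(np.array([0.30, -0.70]), sigma2=1e-6)
x_img = quantize(0.123, n_bits=8, full_scale=1.0)
print(x_img)
```

Injecting such models at the corresponding measurement points of the simulated loop reproduces the qualitative trends reported below: quantization is benign, while a tachometer bias is destabilizing.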
Fig. 15. (a) Trajectory in the robot workspace; (b) error norm. Simulation 1 of Table 4.

The controller tuning used the same gain matrices of the previous sections, since they guarantee an acceptable performance for the given conditions. Various cases were considered regarding the control system performance under perturbations, as shown in Table 4:

Quantization noise. It arises when a certain number of bits is used in the image discretization process (see Table 4, rows 1-3). It can be concluded that the image discretization process has little influence on the system behaviour.

Measurement noise introduced by the optical encoders. In this case, a noise with zero mean and a different variance for each joint actuator is considered (see rows 1 and 4, Table 4). Realistic values of the noise introduced by optical encoders do not affect the system performance, but when the noise is high enough an important degradation of the control objective can be noted.

Velocity measurement noise due to the tachometer. This case is obtained by assuming Gaussian noise with different mean and variance values (rows 1 and 5-9, Table 4). The degradation in system behaviour is remarkable when the mean value of the noise increases; under these conditions, the system tends to become unstable.

Worst case. Finally, the last row of Table 4 (row 10) considers the worst case and, as expected, the system performance is poor.

Fig. 16. (a) Trajectory in the robot workspace; (b) error norm. Simulation 4 of Table 4.

Fig. 17. (a) Trajectory in the robot workspace; (b) error norm. Simulation 9 of Table 4.

Fig. 18. (a) Trajectory in the robot workspace; (b) error norm. Simulation 10 of Table 4.

Figs. 15-18 show simulation results for cases 1, 4, 9 and 10 of Table 4. In each figure, curve (a) represents the trajectory of the manipulator's end-effector and curve (b) the norm of the control error.

Fig. 19. (a) and (b) Norm of the parameter vector of the manipulator. Simulations 1, 4, 9 and 10 of Table 4.

Finally, Fig. 19 depicts the norm of the parameter vector estimate: curve (a) shows this norm for cases 1 and 4, and curve (b) for cases 9 and 10.

7. Conclusions

This paper has presented a positioning and a tracking adaptive controller for robots with camera-in-hand configuration using direct visual feedback. The full non-linear robot dynamics has been considered in the controller design. Control errors are proven to converge asymptotically to zero for the positioning controller and to be ultimately bounded for the tracking controller. The work has focused on the control problem, without considering the real-time image processing problem, which is assumed to be already solved. Simulations illustrate the capability of the proposed controllers to attain suitable control performance under robot dynamics uncertainties.

References

[1] K. Hashimoto, Visual servoing: real-time control of robot manipulators based on visual sensory feedback, in: K. Hashimoto (Ed.), Visual Servoing, World Scientific, Singapore.
[2] P.I. Corke, Visual Control of Robots, Research Studies Press Ltd.
[3] S. Hutchinson, G.D. Hager, P. Corke, A tutorial on visual servo control, IEEE Transactions on Robotics and Automation 12 (1996).
[4] P. Corke, M. Good, Dynamic effects in visual closed-loop systems, IEEE Transactions on Robotics and Automation 12 (1996).
[5] Special issue on visual servoing, IEEE Robotics and Automation Magazine 5 (1996).
[6] P. Corke, S. Hutchinson, Real-time vision, tracking and control, in: Proceedings of the IEEE International Conference on Robotics and Automation, San Francisco, CA, April 2000.
[7] P. Allen, A. Tomcenko, B. Yoshimi, P. Michelman, Automated tracking and grasping of a moving object with a robotic hand-eye system, IEEE Transactions on Robotics and Automation 9 (1993).
[8] G.D. Hager, W.C. Chang, A.S. Morse, Robot hand-eye coordination based on stereo vision, IEEE Control Systems Magazine 15 (1995).
[9] G.D. Hager, A modular system for robust positioning using feedback from stereo vision, IEEE Transactions on Robotics and Automation 13 (1997).
[10] R. Kelly, Robust asymptotically stable visual servoing of planar robots, IEEE Transactions on Robotics and Automation 12 (1996).
[11] L.E. Weiss, A.C. Sanderson, C.P. Neuman, Dynamic sensor-based control of robots with visual feedback, IEEE Journal of Robotics and Automation 3 (1987).
[12] F. Chaumette, P. Rives, B. Espiau, Positioning of a robot with respect to an object, tracking it and estimating its velocity by visual servoing, in: Proceedings of the IEEE International Conference on Robotics and Automation, Sacramento, CA, April 1991.
[13] K. Hashimoto, T. Kimoto, T. Ebine, H. Kimura, Manipulator control with image-based visual servoing, in: Proceedings of the IEEE International Conference on Robotics and Automation, Sacramento, CA, June 1991.
[14] W. Jang, Z. Bien, Feature-based visual servoing of an eye-in-hand robot with improved tracking performance, in: Proceedings of the IEEE International Conference on Robotics and Automation, Sacramento, CA, April 1991.
[15] B. Espiau, F. Chaumette, P. Rives, A new approach to visual servoing in robotics, IEEE Transactions on Robotics and Automation 8 (1992).
[16] H. Hashimoto, T. Kubota, M. Sato, F. Harashima, Visual control of robotic manipulator based on neural networks, IEEE Transactions on Industrial Electronics 9 (1992).
[17] F. Chaumette, A. Santos, Tracking a moving object by visual servoing, in: Proceedings of the IFAC World Congress, vol. 9, Sydney, 1993.
[18] N.P. Papanikolopoulos, P.K. Khosla, T. Kanade, Visual tracking of a moving target by a camera mounted on a robot: a combination of control and vision, IEEE Transactions on Robotics and Automation 9 (1993).
[19] N.P. Papanikolopoulos, P.K. Khosla, Adaptive robotic visual tracking: theory and experiments, IEEE Transactions on Automatic Control 38 (1993).
[20] K. Hashimoto, H. Kimura, Dynamic visual servoing with non-linear model-based control, in: Proceedings of the IFAC World Congress, vol. 9, Sydney, Australia, June 1993.
[21] K. Hashimoto, T. Ebine, H. Kimura, Visual servoing with hand-eye manipulator: optimal control approach, IEEE Transactions on Robotics and Automation 12 (1996).
[22] A. Astolfi, L. Hsu, M. Netto, R. Ortega, A solution to the adaptive visual servoing problem, in: Proceedings of the IEEE International Conference on Robotics and Automation, vol. 1, May 21-26, 2001.
[23] E. Malis, Visual servoing invariant to changes in camera intrinsic parameters, in: Proceedings of the Eighth IEEE International Conference on Computer Vision, vol. 1, July 7-14, 2001.
[24] M. Asada, T. Tanaka, K. Hosoda, Adaptive binocular visual servoing for independently moving target tracking, in: Proceedings of the IEEE International Conference on Robotics and Automation, San Francisco, CA, April 2000.
[25] E. Zergeroglu, D. Dawson, Y. Fang, A. Malatpure, Adaptive camera calibration control of planar robot: elimination of camera-space velocity measurements, in: Proceedings of the IEEE International Conference on Control Applications, September 25-27, 2000.
[26] C. Cheah, K. Lee, S. Kawamura, S. Arimoto, Asymptotic stability of robot control with approximate Jacobian matrix and its application to visual servoing, in: Proceedings of the 39th IEEE Conference on Decision and Control, vol. 4, December 12-15, 2000.
[27] E. Zergeroglu, D. Dawson, M. de Queiroz, S. Nagarkatti, Robust visual-servo control of robot manipulators in the presence of uncertainty, in: Proceedings of the 38th IEEE Conference on Decision and Control, vol. 4, December 7-10, 1999.
[28] A. Maruyama, M. Fujita, Robust visual servo control for planar manipulators with the eye-in-hand configurations, in: Proceedings of the 36th IEEE Conference on Decision and Control, vol. 3, December 10-12, 1997.
[29] L. Hsu, P. Aquino, Adaptive visual tracking with uncertain manipulator dynamics and uncalibrated camera, in: Proceedings of the 38th IEEE Conference on Decision and Control, vol. 2, December 7-10, 1999.
[30] L. Hsu, R. Costa, P. Aquino, Stable adaptive visual servoing for moving targets, in: Proceedings of the 2000 American Control Conference, vol. 3, June 28-30, 2000.
[31] R. Carelli, O. Nasisi, B. Kuchen, Adaptive robot control with visual feedback, in: Proceedings of the American Control Conference, Baltimore, MD, June.
[32] O. Nasisi, R. Carelli, B. Kuchen, Tracking adaptive control of robots with visual feedback, in: Proceedings of the 13th IFAC World Congress, San Francisco, USA, June 1996.
[33] J.-J.E. Slotine, W. Li, Adaptive manipulator control: a case study, in: Proceedings of the IEEE International Conference on Robotics and Automation, Raleigh, NC, April.
[34] K. Narendra, A. Annaswamy, Stable Adaptive Systems, Prentice-Hall, Englewood Cliffs, NJ.
[35] M. Spong, M. Vidyasagar, Robot Dynamics and Control, Wiley, New York.
[36] R. Ortega, M. Spong, Adaptive motion control of rigid robots: a tutorial, Automatica 25 (6) (1989).
[37] L. Sciavicco, B. Siciliano, Modeling and Control of Robot Manipulators, McGraw-Hill, New York.
[38] J. Feddema, C. Lee, O.R. Mitchell, Weighted selection of image features for resolved rate visual feedback control, IEEE Transactions on Robotics and Automation 7 (1991).
[39] J.J. Craig, Introduction to Robotics: Mechanics and Control, Addison-Wesley, Reading, MA.
[40] S. Sastry, M. Bodson, Adaptive Control: Stability, Convergence and Robustness, Prentice-Hall, New York, 1989.
Oscar Nasisi was born in San Luis, Argentina. He received the Electronics Engineering degree from the National University of San Juan, Argentina, the M.S. degree in Electronics Engineering from the National Universities Foundation for International Cooperation, Eindhoven, The Netherlands, and the Ph.D. degree from the National University of San Juan, in 1986, 1989 and 1998, respectively. Since 1986, he has been with the Instituto de Automática, National University of San Juan, where he is currently a Full Professor. His research interests are artificial vision, robotics and adaptive control.

Ricardo Carelli was born in San Juan, Argentina. He graduated in Engineering from the National University of San Juan, Argentina, and obtained the Ph.D. degree in Electrical Engineering from the National University of Mexico (UNAM). He is presently a Full Professor at the National University of San Juan and a Senior Researcher of the National Council for Scientific and Technical Research (CONICET, Argentina). He is Adjunct Director of the Instituto de Automática, National University of San Juan. His research interests are in robotics, manufacturing systems, adaptive control and artificial intelligence applied to automatic control. Prof. Carelli is a Senior Member of IEEE and a Member of AADECA-IFAC.
More informationPosition and orientation of rigid bodies
Robotics 1 Position and orientation of rigid bodies Prof. Alessandro De Luca Robotics 1 1 Position and orientation right-handed orthogonal Reference Frames RF A A p AB B RF B rigid body position: A p AB
More informationDifferential Kinematics
Differential Kinematics Relations between motion (velocity) in joint space and motion (linear/angular velocity) in task space (e.g., Cartesian space) Instantaneous velocity mappings can be obtained through
More informationRobust Control of a 3D Space Robot with an Initial Angular Momentum based on the Nonlinear Model Predictive Control Method
Vol. 9, No. 6, 8 Robust Control of a 3D Space Robot with an Initial Angular Momentum based on the Nonlinear Model Predictive Control Method Tatsuya Kai Department of Applied Electronics Faculty of Industrial
More informationADAPTIVE VISION-BASED PATH FOLLOWING CONTROL OF A WHEELED ROBOT
ADAPTIVE VISION-BASED PATH FOLLOWING CONTROL OF A WHEELED ROBOT L. LAPIERRE, D. SOETANTO, A. PASCOAL Institute for Systems and Robotics - IST, Torre Norte, Piso 8, Av. Rovisco Pais,, 49- Lisbon, Portugal.
More informationEXPERIMENTAL COMPARISON OF TRAJECTORY TRACKERS FOR A CAR WITH TRAILERS
1996 IFAC World Congress San Francisco, July 1996 EXPERIMENTAL COMPARISON OF TRAJECTORY TRACKERS FOR A CAR WITH TRAILERS Francesco Bullo Richard M. Murray Control and Dynamical Systems, California Institute
More informationA Novel Integral-Based Event Triggering Control for Linear Time-Invariant Systems
53rd IEEE Conference on Decision and Control December 15-17, 2014. Los Angeles, California, USA A Novel Integral-Based Event Triggering Control for Linear Time-Invariant Systems Seyed Hossein Mousavi 1,
More informationTracking Control of Robot Manipulators with Bounded Torque Inputs* W.E. Dixon, M.S. de Queiroz, F. Zhang and D.M. Dawson
Robotica (1999) volume 17, pp. 121 129. Printed in the United Kingdom 1999 Cambridge University Press Tracking Control of Robot Manipulators with Bounded Torque Inputs* W.E. Dixon, M.S. de Queiroz, F.
More informationMulti-Robotic Systems
CHAPTER 9 Multi-Robotic Systems The topic of multi-robotic systems is quite popular now. It is believed that such systems can have the following benefits: Improved performance ( winning by numbers ) Distributed
More informationAdaptive fuzzy observer and robust controller for a 2-DOF robot arm
Adaptive fuzzy observer and robust controller for a -DOF robot arm S. Bindiganavile Nagesh, Zs. Lendek, A.A. Khalate, R. Babuška Delft University of Technology, Mekelweg, 8 CD Delft, The Netherlands (email:
More informationTrajectory tracking & Path-following control
Cooperative Control of Multiple Robotic Vehicles: Theory and Practice Trajectory tracking & Path-following control EECI Graduate School on Control Supélec, Feb. 21-25, 2011 A word about T Tracking and
More informationRobust Control of Robot Manipulator by Model Based Disturbance Attenuation
IEEE/ASME Trans. Mechatronics, vol. 8, no. 4, pp. 511-513, Nov./Dec. 2003 obust Control of obot Manipulator by Model Based Disturbance Attenuation Keywords : obot manipulators, MBDA, position control,
More informationRobotics I. February 6, 2014
Robotics I February 6, 214 Exercise 1 A pan-tilt 1 camera sensor, such as the commercial webcams in Fig. 1, is mounted on the fixed base of a robot manipulator and is used for pointing at a (point-wise)
More informationAdaptive 3D Visual Servoing without Image Velocity Measurement for Uncertain Manipulators
Adaptive 3D Visual Servoing without Image Velocity Measurement for Uncertain Manipulators Antonio C. Leite, Alessandro R. L. Zachi, ernando Lizarralde and Liu Hsu Department of Electrical Engineering -
More informationStable Limit Cycle Generation for Underactuated Mechanical Systems, Application: Inertia Wheel Inverted Pendulum
Stable Limit Cycle Generation for Underactuated Mechanical Systems, Application: Inertia Wheel Inverted Pendulum Sébastien Andary Ahmed Chemori Sébastien Krut LIRMM, Univ. Montpellier - CNRS, 6, rue Ada
More informationGlobal robust output feedback tracking control of robot manipulators* W. E. Dixon, E. Zergeroglu and D. M. Dawson
Robotica 004) volume, pp. 35 357. 004 Cambridge University Press DOI: 0.07/S06357470400089 Printed in the United Kingdom Global robust output feedback tracking control of robot manipulators* W. E. Dixon,
More informationDynamics modeling of an electro-hydraulically actuated system
Dynamics modeling of an electro-hydraulically actuated system Pedro Miranda La Hera Dept. of Applied Physics and Electronics Umeå University xavier.lahera@tfe.umu.se Abstract This report presents a discussion
More informationDynamic Tracking Control of Uncertain Nonholonomic Mobile Robots
Dynamic Tracking Control of Uncertain Nonholonomic Mobile Robots Wenjie Dong and Yi Guo Department of Electrical and Computer Engineering University of Central Florida Orlando FL 3816 USA Abstract We consider
More informationSensor Localization and Target Estimation in Visual Sensor Networks
Annual Schedule of my Research Sensor Localization and Target Estimation in Visual Sensor Networks Survey and Problem Settings Presented in the FL seminar on May th First Trial and Evaluation of Proposed
More informationThe Jacobian. Jesse van den Kieboom
The Jacobian Jesse van den Kieboom jesse.vandenkieboom@epfl.ch 1 Introduction 1 1 Introduction The Jacobian is an important concept in robotics. Although the general concept of the Jacobian in robotics
More informationΜια προσπαθεια για την επιτευξη ανθρωπινης επιδοσης σε ρομποτικές εργασίες με νέες μεθόδους ελέγχου
Μια προσπαθεια για την επιτευξη ανθρωπινης επιδοσης σε ρομποτικές εργασίες με νέες μεθόδους ελέγχου Towards Achieving Human like Robotic Tasks via Novel Control Methods Zoe Doulgeri doulgeri@eng.auth.gr
More informationH-infinity Model Reference Controller Design for Magnetic Levitation System
H.I. Ali Control and Systems Engineering Department, University of Technology Baghdad, Iraq 6043@uotechnology.edu.iq H-infinity Model Reference Controller Design for Magnetic Levitation System Abstract-
More informationEnergy-based Swing-up of the Acrobot and Time-optimal Motion
Energy-based Swing-up of the Acrobot and Time-optimal Motion Ravi N. Banavar Systems and Control Engineering Indian Institute of Technology, Bombay Mumbai-476, India Email: banavar@ee.iitb.ac.in Telephone:(91)-(22)
More informationAdaptive set point control of robotic manipulators with amplitude limited control inputs* E. Zergeroglu, W. Dixon, A. Behal and D.
Robotica (2) volume 18, pp. 171 181. Printed in the United Kingdom 2 Cambridge University Press Adaptive set point control of robotic manipulators with amplitude limited control inputs* E. Zergeroglu,
More informationDecentralized PD Control for Non-uniform Motion of a Hamiltonian Hybrid System
International Journal of Automation and Computing 05(2), April 2008, 9-24 DOI: 0.007/s633-008-09-7 Decentralized PD Control for Non-uniform Motion of a Hamiltonian Hybrid System Mingcong Deng, Hongnian
More informationSliding Mode Control of Uncertain Multivariable Nonlinear Systems Applied to Uncalibrated Robotics Visual Servoing
2009 American Control Conference Hyatt Regency Riverfront, St. Louis, MO, USA June 10-12, 2009 WeA03.3 Sliding Mode Control of Uncertain Multivariable Nonlinear Systems Applied to Uncalibrated Robotics
More informationIMU-Camera Calibration: Observability Analysis
IMU-Camera Calibration: Observability Analysis Faraz M. Mirzaei and Stergios I. Roumeliotis {faraz stergios}@cs.umn.edu Dept. of Computer Science & Engineering University of Minnesota Minneapolis, MN 55455
More informationModelling and Simulation of a Wheeled Mobile Robot in Configuration Classical Tricycle
Modelling and Simulation of a Wheeled Mobile Robot in Configuration Classical Tricycle ISEA BONIA, FERNANDO REYES & MARCO MENDOZA Grupo de Robótica, Facultad de Ciencias de la Electrónica Benemérita Universidad
More informationDynamics. Basilio Bona. Semester 1, DAUIN Politecnico di Torino. B. Bona (DAUIN) Dynamics Semester 1, / 18
Dynamics Basilio Bona DAUIN Politecnico di Torino Semester 1, 2016-17 B. Bona (DAUIN) Dynamics Semester 1, 2016-17 1 / 18 Dynamics Dynamics studies the relations between the 3D space generalized forces
More informationPose estimation from point and line correspondences
Pose estimation from point and line correspondences Giorgio Panin October 17, 008 1 Problem formulation Estimate (in a LSE sense) the pose of an object from N correspondences between known object points
More informationRBF Neural Network Adaptive Control for Space Robots without Speed Feedback Signal
Trans. Japan Soc. Aero. Space Sci. Vol. 56, No. 6, pp. 37 3, 3 RBF Neural Network Adaptive Control for Space Robots without Speed Feedback Signal By Wenhui ZHANG, Xiaoping YE and Xiaoming JI Institute
More informationHIGHER ORDER SLIDING MODES AND ARBITRARY-ORDER EXACT ROBUST DIFFERENTIATION
HIGHER ORDER SLIDING MODES AND ARBITRARY-ORDER EXACT ROBUST DIFFERENTIATION A. Levant Institute for Industrial Mathematics, 4/24 Yehuda Ha-Nachtom St., Beer-Sheva 843, Israel Fax: +972-7-232 and E-mail:
More information