Shengli Xie Minyue Fu Derong Liu
3 F.L. Lewis, NAI, Moncrief-O'Donnell Chair, UTA Research Institute (UTARI), The University of Texas at Arlington, USA, and Guest Foreign Professor, Guangdong University of Technology, Guangzhou, China. Supported by: NSF NRI Initiative and ONR. Assistive Human-Robot Interaction (HRI), with Dan Popa, University of Louisville, and Isura Ranatunga, Reza Modares, Bakur AlQaudi. Supported by: China NNSF and China Project 111. Talk available online at
4 Multi-Modal Skin and Garments for Healthcare and Home Robots. Dan O. Popa 1, Frank L. Lewis 1, Nicoleta Bugnariu 2, Woo Ho Lee 1, and Muthu Wijesundara 3. 1 Department of Electrical Engineering, University of Texas at Arlington; 2 University of North Texas Health Science Center; 3 UT Arlington Research Institute. Partner companies: Advanced Arm Dynamics, Hanson Robotics, Inc., National Instruments.
1. System Design: where to place sensors on the robot? Novel algorithms and methods for optimal placement and data management of such devices on several co-robots. - Statistical adaptive sampling for sensor selection - Sensor fusion based on noise and sensor scaling models - Optimization algorithms for maximizing robot perception - New sensor simulation models and robot control algorithms. [Diagram of SkinSim, the multi-modal skin simulation environment: world meshes (SketchUp) and SDF models feed GAZEBO sensor-placement and output plugins for infrared, accelerometer, temperature, and tactile sensor models, with a thermal/temperature overlay, a user profile, and a C++/Python user application.]
4. Co-Robot Performance: how does this technology help humans? The impact of the new technology on humans will be assessed, including safety, level of assistance to several targeted user groups, ease of use, aesthetics, and therapeutic benefits.
- Clinical testing at UNTHSC and UTARI - Collaborative work with Advanced Arm Dynamics: assistance for upper-limb amputees - PR2 teach-by-demonstration. [Figures: PR2 robot, tactile sensor array, and the environment in Gazebo; initial sensor prototypes and robotic hardware.] Design flow: task requirements, human interaction data collection, measurement and simulation (robot, sensor, human), human character model, robot and skin simulation (pHRI), model-reference neuroadaptive controller (neuroadaptive impedance control), fabrication and integration of skin/garment hardware, then iterate designs and algorithms. Sensors and skin mounted on the PR2 and youBot robots at UTARI; microsensor packaging and interconnects; interaction learning; perceived impedance; electrohydrodynamic sensor printing (maskless lithography).
2. Control and Learning: both human and robot learn during interaction. Learning algorithms and adaptive impedance control for efficient use of multimodal sensors to sense human intent and improve the usability of co-robots. - Online reinforcement learning for pHRI with co-robots wearing skin and garments, given human-centric rewards and cost functions - Neuroadaptive impedance control with stability and performance guarantees. Here the unknown robot nonlinearity is \( f(x) = M(q)(\ddot q_m + \Lambda \dot e) + V_m(q,\dot q)(\dot q_m + \Lambda e) + F(\dot q) + G(q) \), approximated by \( \hat W^T \sigma(\hat V^T x) \) together with the feedback term \( K_v r \). The robustifying signal \( v(t) \) is \( v(t) = -K_z (\|\hat Z\|_F + Z_B)\, r \).
3. Devices: distributed skin sensors. Integration of multi-modal, multi-resolution MEMS skin sensors including tactile, thermal, pressure, acceleration, and distance (IR) sensing. - Sensor design tuned for pHRI - Fabrication on flexible substrates - Robust packaging in Frubber and laminates - Efficient wire interconnect schemes. [Figures: array of temperature sensors on Parylene (polymer substrate) at UTARI; top Kapton layer with electric traces and pads, pressure-sensitive adhesive tape (50 µm thick), bottom Kapton layer with electric traces and pads; concept microactuator array using piezo actuators.] University of Texas at Arlington, NRI Grant No. Program Manager: Dr. Paul Werbos, ECCS, ENG, NSF.
5 Fully Automated Robot vs. Assistive Robot
6 PR2 meets Isura
7 Standard Robot Trajectory Tracking Controller Where is the human?
8 Robot dynamics Impedance Control Prescribed Error system Control torque depends on Impedance model parameters
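As a concrete illustration of the prescribed impedance model idea (all parameter values below are invented for the sketch, not taken from the talk), the model \(M_m\ddot x + D_m\dot x + K_m x = f_h\) can be simulated directly; under a constant human force it settles at \(x = f_h/K_m\):

```python
import numpy as np

# Minimal sketch of a prescribed impedance model
#   M_m*xddot + D_m*xdot + K_m*x = f_h
# under a constant human force, integrated with explicit Euler.
# Parameter values are illustrative only.
def simulate_impedance(M_m=1.0, D_m=4.0, K_m=10.0, f_h=2.0,
                       dt=1e-3, steps=20_000):
    x, xdot = 0.0, 0.0
    for _ in range(steps):
        xddot = (f_h - D_m * xdot - K_m * x) / M_m
        xdot += dt * xddot
        x += dt * xdot
    return x

x_ss = simulate_impedance()
print(round(x_ss, 3))  # settles near f_h / K_m = 0.2
```

The steady-state value shows how the impedance parameters shape the robot's apparent compliance to the human force.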
9 Human Performance Factors Studies Human task learning has 2 components: 1. Human learns a robot dynamics model to compensate for robot nonlinearities 2. Human learns a task model to properly perform a task Inner Robot Specific Control Loop INDEPENDENT OF TASK Outer Task Specific Control Loop INDEPENDENT OF ROBOT DETAILS
10 Two-loop HRI Design- Robot Control versus Task Control 1. Inner Robot-Specific Control Loop 2. Outer Task-Specific Control Loop 2A. Adaptive Inverse Filter Task Design 2B. Model Reference Adaptive Control Task Design 2C. Reinforcement Learning Task Control for Minimum Human Effort 3. Experiments
11 Task control outer loop Robot control inner loop
12 1. Inner-Loop Robot-Specific Controller: Model-Reference Neuroadaptive Control. Make the robot behave like the prescribed model. There is NO prescribed trajectory in the robot control loop design.
13 A Novel Control Objective Using Neuroadaptive Control Techniques. F.L. Lewis, S. Jagannathan, and A. Yesildirek, Neural Network Control of Robot Manipulators and Nonlinear Systems, Taylor and Francis, London, 1999. F.L. Lewis, D.M. Dawson, and C.T. Abdallah, Robot Manipulator Control: Theory and Practice, 2nd edition, Revised and Expanded, CRC Press, Boca Raton, 2006.
14 Model following error formulation.
Robot dynamics: \( M(q)\ddot x + V(q,\dot q)\dot x + F(\dot q) + G(q) + f_d = f_c + f_h \)
Control torque: \( \tau = J^T(q)\, f_c \)
Prescribed robot impedance model: \( M_m \ddot x_m + D_m \dot x_m + K_m x_m = f_h \)
Model following error: \( e = x_m - x \)
Sliding mode error: \( r = \dot e + \Lambda e \)
There is NO task trajectory here.
Error dynamics: \( M(q)\dot r = -V(q,\dot q)\, r + f(x) + f_d - f_c - f_h \)
Unknown robot nonlinear function: \( f(x) = M(q)(\ddot x_m + \Lambda \dot e) + V(q,\dot q)(\dot x_m + \Lambda e) + F(\dot q) + G(q) \), where \( x = [\,e^T \; \dot e^T \; x_m^T \; \dot x_m^T \; \ddot x_m^T \; q^T \; \dot q^T\,]^T \).
15 Dynamics.
Robot: \( M(q)\ddot x + V(q,\dot q)\dot x + F(\dot q) + G(q) + f_d = f_c + f_h \)
Error: \( M(q)\dot r = -V(q,\dot q)\, r + f(x) + f_d - f_c - f_h \)
Unknown nonlinearities parameterized in terms of a function approximator: \( f(x) = W^T \sigma(V^T x) \), with \( W, V \) unknown parameters.
Approximation-based controller: \( f_c = \hat f(x) + K_v r - v(t) - f_h \)
Estimate for unknown nonlinearities: \( \hat f(x) = \hat W^T \sigma(\hat V^T x) \), with \( \hat W, \hat V \) estimated parameters.
Closed-loop error dynamics: \( M(q)\dot r = -V(q,\dot q)\, r - K_v r + \tilde f(x) + f_d + v(t) \)
Model following error driven by parameter estimation error: \( \tilde f(x) = f(x) - \hat f(x) = W^T\sigma(V^T x) - \hat W^T \sigma(\hat V^T x) \).
16 Adaptive control structure: \( f_c = \hat W^T \sigma(\hat V^T x) + K_v r - v(t) - f_h \)
Standard adaptive parameter tuning algorithms:
\( \dot{\hat W} = F \hat\sigma\, r^T - F \hat\sigma'\, \hat V^T x\, r^T - \kappa F \|r\|\, \hat W \)
\( \dot{\hat V} = G\, x\, (\hat\sigma'^{\,T} \hat W r)^T - \kappa G \|r\|\, \hat V \)
Model following error: \( e = x_m - x \), \( r = \dot e + \Lambda e \)
Robust control term: \( v(t) = -K_z (\|\hat Z\|_F + Z_B)\, r \), with \( x = [\,e^T \; \dot e^T \; x_m^T \; \dot x_m^T \; \ddot x_m^T \; q^T \; \dot q^T\,]^T \)
No task reference trajectory is used here. The robot controller makes the model following error \( e = x_m - x \) small. The parameters of the admittance model \( M_m \ddot x_m + D_m \dot x_m + K_m x_m = f_h \) are not needed.
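To make the filtered-error machinery concrete, here is a toy one-link arm with an unknown gravity/friction nonlinearity, the filtered error \(r = \dot e + \lambda e\), and a single-layer adaptive approximator tuned by a gradient law with leakage. This is a simplified stand-in for the two-layer NN controller of the slides: the gains, the hand-chosen regressor, and the plant are all invented for the sketch.

```python
import numpy as np

# Toy sketch (not the talk's controller): one-link arm
#   m*l^2 * qddot + d*qdot + m*g*l*cos(q) = tau
# regulated to q_d with tau = What . phi + Kv*r, where r = edot + lam*e
# and What is tuned by a gradient law with sigma-modification leakage.
m, l, g, d = 1.0, 1.0, 9.8, 1.0
q_d = 1.0
lam, Kv, F, kappa = 5.0, 20.0, 10.0, 0.01
dt, steps = 1e-3, 20_000

q, qdot = 0.0, 0.0
What = np.zeros(3)
for _ in range(steps):
    e, edot = q_d - q, -qdot
    r = edot + lam * e                        # filtered tracking error
    phi = np.array([np.cos(q), qdot, 1.0])    # hand-chosen regressor
    tau = What @ phi + Kv * r
    What += dt * (F * phi * r - kappa * F * abs(r) * What)  # tuning law
    qddot = (tau - d * qdot - m * g * l * np.cos(q)) / (m * l**2)
    qdot += dt * qddot
    q += dt * qdot

final_err = abs(q_d - q)
print(round(final_err, 3))
```

The adaptation supplies the unknown gravity torque online, so the residual regulation error shrinks well below what the PD-like term \(K_v r\) alone would leave.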
17 No task trajectory information is used in this inner-loop robot controller. The inner-loop robot controller makes the model following error small. The admittance model parameters are not needed. Only the admittance model trajectories \( x_m, \dot x_m, \ddot x_m \) are needed.
18 2. Outer Task-Specific Control Loop 2A. Adaptive Inverse Filter Task Design 2B. Model Reference Adaptive Control Task Design 2C. Reinforcement Learning Task Control for Minimum Human Effort
19 Task control outer loop Robot control inner loop
20 Three Outer Loop Designs To appear 2016
21 2A. Outer-Loop Task-Specific Design #1 (work of Isura Ranatunga): Adaptive Inverse Control and the Wiener Filter (B. Widrow). The adaptive inverse filter shapes the signal to the robot controller. Want to find M(s) so that \( D(s) = M(s)H(s) \), with H(s) and D(s) unknown. For a trajectory-following task, e.g., point-to-point motion control.
22 Want to find M(s) so that \( D(s) = M(s)H(s) \), with H(s) and D(s) unknown. Wiener filter solution in terms of power spectral densities: \( M(s) = \dfrac{D(s)}{H(s)} = \dfrac{\Phi_{f_h x_d}(s)}{\Phi_{f_h f_h}(s)} \).
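A minimal data-driven sketch of the same idea in discrete time, in the spirit of Widrow's adaptive inverse filtering: a made-up minimum-phase FIR "plant" stands in for H(s), and an LMS-trained FIR filter learns its approximate inverse so the cascade matches the desired (identity) response. The plant, step size, and filter length are all invented for the example.

```python
import numpy as np

# LMS adaptive inverse filtering, discrete-time sketch.
# Unknown plant H is the minimum-phase FIR [1, 0.5]; the adaptive FIR w
# is trained so that w * (H * u) ~= u, i.e. w approximates H^{-1}.
rng = np.random.default_rng(0)
N, taps, mu = 20_000, 8, 0.01
u = rng.standard_normal(N)                       # excitation signal
y = u + 0.5 * np.concatenate(([0.0], u[:-1]))    # plant output

w = np.zeros(taps)
err = np.zeros(N)
for n in range(taps, N):
    ybuf = y[n - taps + 1:n + 1][::-1]   # most recent sample first
    e = u[n] - w @ ybuf                  # desired cascade output is u[n]
    w += mu * e * ybuf                   # LMS weight update
    err[n] = e

mse_tail = float(np.mean(err[-2000:] ** 2))
print(round(mse_tail, 4))
```

After convergence the residual error is dominated by FIR truncation of the (infinite) exact inverse, which is tiny for this well-conditioned plant.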
23 Find the Wiener filter online using adaptive learning. [Diagram: adaptive filter realized with integrator chains \(1/s\) and tunable coefficients \(a_1, a_2, a_3\) and \(b_1, b_2, b_3\), driven by \(f_h(t)\) and its derivative \(\frac{d}{dt} f_h\), and matched against \(x_m(t)\) and its derivative \(\frac{d}{dt} x_m\).]
24 Ideal filter: \( x_d(t) = H^T(t)\,\varphi(t) \). Wiener filter solution: \( x_m(t) = \hat H^T(t)\,\varphi(t) \). Known regression vector \( \varphi(t) \), unknown coefficients \( H \). Kalman filter = continuous-time RLS.
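The "known regression vector, unknown coefficients" structure \(x_d = H^T\varphi\) is exactly the setting of recursive least squares, which the slide identifies with the Kalman filter. A tiny discrete-time sketch with synthetic data (the true coefficients and noise level are invented):

```python
import numpy as np

# Recursive least squares recovering unknown coefficients H from
# measurements x_d = H^T phi + noise.  Data are synthetic.
rng = np.random.default_rng(0)
H_true = np.array([2.0, -1.0, 0.5])

Hhat = np.zeros(3)
P = 100.0 * np.eye(3)                         # large initial covariance
for _ in range(500):
    phi = rng.standard_normal(3)              # known regression vector
    x_d = H_true @ phi + 0.01 * rng.standard_normal()
    k = P @ phi / (1.0 + phi @ P @ phi)       # RLS gain
    Hhat = Hhat + k * (x_d - Hhat @ phi)      # coefficient update
    P = P - np.outer(k, phi @ P)              # covariance update

print(np.round(Hhat, 2))
```

The estimate converges to the true coefficients up to the measurement noise floor.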
25 Combined stability analysis of the inner robot control loop and the outer task-following loop. Lyapunov function:
\( L = \tfrac{1}{2} r^T M(q)\, r + \tfrac{1}{2}\,\mathrm{tr}\{\tilde W^T F^{-1} \tilde W\} + \tfrac{1}{2}\,\mathrm{tr}\{\tilde V^T G^{-1} \tilde V\} + \tfrac{1}{2}\, \tilde H^T(t)\, P^{-1}(t)\, \tilde H(t) \)
Terms: robot model following error; NN weight estimation errors; outer-loop inverse adaptive filter error.
26 Shi nian shu mu, bai nian shu ren ("Ten years to grow trees, a hundred years to cultivate people"). Keshi: wu nian shu xuesheng ("But: five years to cultivate a student").
27 2B. Outer-Loop Task-Specific Design #2 (work of Bakur AlQaudi): Model Reference Adaptive Control (K. Astrom). BUT in standard MRAC the controller appears before the unknown plant; here the unknown plant (e.g., the human) is BEFORE the controller.
28 BUT in standard MRAC the controller appears before the unknown plant; here the unknown plant (e.g., the human) is BEFORE the controller. So we need to add a human dynamics identifier.
29 Unknown human model: \( H_h(s) = \dfrac{y}{u_c} = \dfrac{b}{s+a} \) (basic muscle response model). Nominal robot impedance model: \( H_p(s) = \dfrac{y_n}{u_n} = \dfrac{b_p}{s+a_p} \), used to generate the prescribed model trajectory \( x_m, \dot x_m, \ddot x_m \). Task reference model (first-order crossover model, ideal human + robot system): \( H_m(s) = \dfrac{y_m}{u_c} = \dfrac{b_m}{s+a_m} \). Human factors studies show that AFTER learning, the human-plus-robot system obeys a simple first-order roll-off with high-bandwidth dynamics.
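A quick numerical check of what such a first-order model implies (the gains \(a_m, b_m\) below are invented for the sketch): \(H_m(s) = b_m/(s+a_m)\) means \(\dot y = -a_m y + b_m u\), whose unit-step response is \(y(t) = (b_m/a_m)(1 - e^{-a_m t})\). A simple Euler simulation reproduces the analytic curve:

```python
import numpy as np

# First-order model H_m(s) = b_m / (s + a_m), i.e. ydot = -a_m*y + b_m*u.
# Unit-step response checked against the analytic solution.
# a_m and b_m are illustrative values, not from the talk.
a_m, b_m = 4.0, 8.0
dt = 1e-4
t = np.arange(0.0, 2.0, dt)
y = np.zeros_like(t)
for k in range(1, len(t)):           # explicit Euler integration, u = 1
    y[k] = y[k-1] + dt * (-a_m * y[k-1] + b_m * 1.0)

analytic = (b_m / a_m) * (1.0 - np.exp(-a_m * t))
max_err = float(np.max(np.abs(y - analytic)))
print(round(float(y[-1]), 3), round(max_err, 4))
```

The response rolls off with a single time constant \(1/a_m\) and settles at the DC gain \(b_m/a_m\), the "simple first-order" behavior the crossover model posits for the trained human-plus-robot system.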
30 Human response estimation error: \( \tilde y = y - \hat y \). Parameter estimation errors: \( \tilde a = a - \hat a \), \( \tilde b = b - \hat b \).
31 Human response estimation error: \( \tilde y = y - \hat y \).
32
33 Combined stability proof of the overall two-loop robot task system. Lyapunov function:
\( L = \tfrac{1}{2} r^T M(q)\, r + \tfrac{1}{2}\,\mathrm{tr}\{\tilde W^T F^{-1} \tilde W\} + \tfrac{1}{2}\,\mathrm{tr}\{\tilde V^T G^{-1} \tilde V\} + \text{adaptive tuning parameter error terms} \)
Terms: inner model tracking error; NN parameter estimation errors; adaptive tuning parameter errors.
34 PD controller like that provided by cerebellum Basic muscle response
35 2C. Outer-Loop Task-Specific Design #3 (work of Reza Modares): Reinforcement Learning for minimum human effort. The force exerted by the human indicates his discontent, a measure of human intent, and drives a feedforward assistive control term. [Block diagram: desired trajectory \(x_d\) and tracking error \(e_d\), shaped by \(K_p + K_d s\), drive the human (gain \(K_h\)); the human force \(f_h\) enters the prescribed impedance model \(1/(M s^2 + B s + K)\), producing \(x_m\).] Find the robot impedance model parameters \(M, B, K\) to minimize the human force effort \(f_h\) and the task trajectory following error \(e_d\).
36 \( (K_d s + K_p)\, f_h = k_e\, e_d \). Tracking error: \( e_d = x_d - x_m \in \mathbb{R}^n \), \( e_d = [\,e_d^T \; \dot e_d^T\,]^T = x_d - x_m \in \mathbb{R}^{2n} \), where \( x_m = [\,x_m^T \; \dot x_m^T\,]^T \in \mathbb{R}^{2n} \) and \( x_d = [\,x_d^T \; \dot x_d^T\,]^T \in \mathbb{R}^{2n} \).
37 Performance index: \( J = \int_t^{\infty} \big( e_d^T Q_e e_d + f_h^T Q_h f_h + u^T R u \big)\, d\tau \). Then the control is \( u_e = K_1 e_d + K_2 f_h \): minimize human effort and tracking error. How to get the human force into the PI? \( J = \int_t^{\infty} \big( X^T Q X + u_e^T R u_e \big)\, d\tau \).
38 Robot impedance model and unknown human model: \( (K_d s + K_p)\, f_h = k_e\, e_d \), i.e. \( K_d \dot f_h + K_p f_h = k_e e_d \), so \( \dot f_h = -K_d^{-1} K_p\, f_h + k_e K_d^{-1}\, e_d \equiv A_h f_h + E_h e_d \). Overall augmented dynamics; feedback linearization loop.
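With the augmented state \(X\) and the quadratic index \(J = \int (X^T Q X + u_e^T R u_e)\,dt\), the optimal gain comes from the algebraic Riccati equation. As a sanity-check sketch (a double integrator stands in for the augmented dynamics, and all matrices are invented), Kleinman's policy iteration solves the ARE using nothing but repeated Lyapunov solves; it is the model-based ancestor of the RL scheme on the following slides.

```python
import numpy as np

# Kleinman policy iteration for the continuous-time LQR:
# repeatedly evaluate a stabilizing gain via a Lyapunov equation,
# then improve it.  System and weights are illustrative.
A = np.array([[0.0, 1.0], [0.0, 0.0]])   # double integrator
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])

def lyap(Acl, W):
    # Solve Acl^T P + P Acl + W = 0 by Kronecker vectorization.
    n = Acl.shape[0]
    M = np.kron(Acl.T, np.eye(n)) + np.kron(np.eye(n), Acl.T)
    P = np.linalg.solve(M, -W.reshape(-1)).reshape(n, n)
    return 0.5 * (P + P.T)               # symmetrize

K = np.array([[1.0, 1.0]])               # any stabilizing initial gain
for _ in range(20):
    Acl = A - B @ K                      # closed loop under current gain
    P = lyap(Acl, Q + K.T @ R @ K)       # policy evaluation
    K = np.linalg.solve(R, B.T @ P)      # policy improvement

print(np.round(K, 4))
```

For this system the ARE has the closed-form solution \(K = [1, \sqrt{3}]\), which the iteration reaches to machine precision.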
39 Optimal control is an offline method, based on solving the ARE and knowing all the plant dynamics. We want an online method that learns the optimal control without knowing the system matrix A. Optimal design always admits reinforcement learning for real-time optimal adaptive control.
40 OFF-POLICY reinforcement learning needs NO knowledge of the system dynamics. Take enough data along the system trajectory to solve this equation using least squares.
41 Off-policy reinforcement learning needs NO knowledge of the system dynamics. Off-policy IRL Bellman equation:
\( X^T(t)\, P\, X(t) + \int_t^{t+\Delta t} 2\, X^T P B\, e\; d\tau = \int_t^{t+\Delta t} \big( X^T Q X + u_e^T R u_e \big)\, d\tau + X^T(t+\Delta t)\, P\, X(t+\Delta t) \)
The integral \( \int_t^{t+\Delta t} 2 X^T P B\, e\, d\tau \) is the off-policy term. This finds the optimal control gains without using ANY system dynamics.
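The "collect data along the trajectory, then solve by least squares" recipe can be shown end-to-end on a scalar discrete-time toy. This sketch uses a Q-learning / least-squares policy-iteration variant rather than the talk's continuous-time integral RL, and the system, exploration noise, and sweep counts are invented; the point is that the learner never uses the dynamics (a, b) except to generate data.

```python
import numpy as np

# Off-policy least-squares policy iteration on x+ = a*x + b*u with
# stage cost x^2 + u^2.  The Q-function of a linear policy u = -K*x is
# quadratic, Q(x,u) = Hxx*x^2 + 2*Hxu*x*u + Huu*u^2, so its
# coefficients are identified by least squares from off-policy data.
rng = np.random.default_rng(1)
a, b = 0.9, 0.5                           # used only to simulate data

def feats(x, u):
    return np.array([x * x, 2.0 * x * u, u * u])

K = 0.0                                   # initial stabilizing policy
for _ in range(6):                        # policy-iteration sweeps
    Phi, c = [], []
    for _ in range(200):
        x = rng.uniform(-2.0, 2.0)
        u = -K * x + rng.normal(scale=0.5)    # exploratory (off-policy) input
        xn = a * x + b * u
        # Bellman equation: Q(x,u) - Q(xn, -K*xn) = x^2 + u^2
        Phi.append(feats(x, u) - feats(xn, -K * xn))
        c.append(x * x + u * u)
    theta, *_ = np.linalg.lstsq(np.array(Phi), np.array(c), rcond=None)
    Hxx, Hxu, Huu = theta
    K = Hxu / Huu                         # greedy policy improvement

# Reference gain from iterating the exact Riccati recursion.
p = 1.0
for _ in range(1000):
    p = 1.0 + a * a * p - (a * b * p) ** 2 / (1.0 + b * b * p)
K_opt = a * b * p / (1.0 + b * b * p)
print(round(K, 3), round(K_opt, 3))
```

The learned gain matches the model-based Riccati gain, even though every update was computed purely from sampled transitions.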
42 3. Experimental Results on PR2
43 3. Experimental Results (work of Isura Ranatunga and Sven Cremer)
44
45
46 Point to point tracking error Human force effort
47 Future Work
48 Thanks!!
49
F.L. Lewis, NAI. Talk available online at Supported by : NSF AFOSR Europe ONR Marc Steinberg US TARDEC
F.L. Lewis, NAI Moncrief-O Donnell Chair, UTA Research Institute (UTARI) The University of Texas at Arlington, USA and Qian Ren Consulting Professor, State Key Laboratory of Synthetical Automation for
More informationLyapunov Design for Controls
F.L. Lewis, NAI Moncrief-O Donnell Chair, UTA Research Institute (UTARI) The University of Texas at Arlington, USA and Foreign Professor, Chongqing University, China Supported by : China Qian Ren Program,
More informationF.L. Lewis. New Developments in Integral Reinforcement Learning: Continuous-time Optimal Control and Games
F.L. Lewis National Academy of Inventors Moncrief-O Donnell Chair, UTA Research Institute (UTARI) The University of Texas at Arlington, USA and Qian Ren Consulting Professor, State Key Laboratory of Synthetical
More informationNeural Network Control of Robot Manipulators and Nonlinear Systems
Neural Network Control of Robot Manipulators and Nonlinear Systems F.L. LEWIS Automation and Robotics Research Institute The University of Texas at Arlington S. JAG ANNATHAN Systems and Controls Research
More informationDesign Artificial Nonlinear Controller Based on Computed Torque like Controller with Tunable Gain
World Applied Sciences Journal 14 (9): 1306-1312, 2011 ISSN 1818-4952 IDOSI Publications, 2011 Design Artificial Nonlinear Controller Based on Computed Torque like Controller with Tunable Gain Samira Soltani
More informationTrajectory Planning, Setpoint Generation and Feedforward for Motion Systems
2 Trajectory Planning, Setpoint Generation and Feedforward for Motion Systems Paul Lambrechts Digital Motion Control (4K4), 23 Faculty of Mechanical Engineering, Control Systems Technology Group /42 2
More informationNonlinear PD Controllers with Gravity Compensation for Robot Manipulators
BULGARIAN ACADEMY OF SCIENCES CYBERNETICS AND INFORMATION TECHNOLOGIES Volume 4, No Sofia 04 Print ISSN: 3-970; Online ISSN: 34-408 DOI: 0.478/cait-04-00 Nonlinear PD Controllers with Gravity Compensation
More informationAdaptive Robust Tracking Control of Robot Manipulators in the Task-space under Uncertainties
Australian Journal of Basic and Applied Sciences, 3(1): 308-322, 2009 ISSN 1991-8178 Adaptive Robust Tracking Control of Robot Manipulators in the Task-space under Uncertainties M.R.Soltanpour, M.M.Fateh
More informationZhongping Jiang Tengfei Liu
Zhongpng Jang Tengfe Lu 4 F.L. Lews, NAI Moncref-O Donnell Char, UTA Research Insttute (UTARI) The Unversty of Texas at Arlngton, USA and Qan Ren Consultng Professor, State Key Laboratory of Synthetcal
More informationOutput Adaptive Model Reference Control of Linear Continuous State-Delay Plant
Output Adaptive Model Reference Control of Linear Continuous State-Delay Plant Boris M. Mirkin and Per-Olof Gutman Faculty of Agricultural Engineering Technion Israel Institute of Technology Haifa 3, Israel
More informationLinear Quadratic Regulator (LQR) Design II
Lecture 8 Linear Quadratic Regulator LQR) Design II Dr. Radhakant Padhi Asst. Professor Dept. of Aerospace Engineering Indian Institute of Science - Bangalore Outline Stability and Robustness properties
More informationControl System Design
ELEC ENG 4CL4: Control System Design Notes for Lecture #36 Dr. Ian C. Bruce Room: CRL-229 Phone ext.: 26984 Email: ibruce@mail.ece.mcmaster.ca Friday, April 4, 2003 3. Cascade Control Next we turn to an
More informationNeural Network-Based Adaptive Control of Robotic Manipulator: Application to a Three Links Cylindrical Robot
Vol.3 No., 27 مجلد 3 العدد 27 Neural Network-Based Adaptive Control of Robotic Manipulator: Application to a Three Links Cylindrical Robot Abdul-Basset A. AL-Hussein Electrical Engineering Department Basrah
More informationH-infinity Model Reference Controller Design for Magnetic Levitation System
H.I. Ali Control and Systems Engineering Department, University of Technology Baghdad, Iraq 6043@uotechnology.edu.iq H-infinity Model Reference Controller Design for Magnetic Levitation System Abstract-
More informationVideo 5.1 Vijay Kumar and Ani Hsieh
Video 5.1 Vijay Kumar and Ani Hsieh Robo3x-1.1 1 The Purpose of Control Input/Stimulus/ Disturbance System or Plant Output/ Response Understand the Black Box Evaluate the Performance Change the Behavior
More informationNeural network based robust hybrid control for robotic system: an H approach
Nonlinear Dyn (211) 65:421 431 DOI 117/s1171-1-992-4 ORIGINAL PAPER Neural network based robust hybrid control for robotic system: an H approach Jinzhu Peng Jie Wang Yaonan Wang Received: 22 February 21
More informationLecture «Robot Dynamics»: Dynamics 2
Lecture «Robot Dynamics»: Dynamics 2 151-0851-00 V lecture: CAB G11 Tuesday 10:15 12:00, every week exercise: HG E1.2 Wednesday 8:15 10:00, according to schedule (about every 2nd week) office hour: LEE
More informationGAIN SCHEDULING CONTROL WITH MULTI-LOOP PID FOR 2- DOF ARM ROBOT TRAJECTORY CONTROL
GAIN SCHEDULING CONTROL WITH MULTI-LOOP PID FOR 2- DOF ARM ROBOT TRAJECTORY CONTROL 1 KHALED M. HELAL, 2 MOSTAFA R.A. ATIA, 3 MOHAMED I. ABU EL-SEBAH 1, 2 Mechanical Engineering Department ARAB ACADEMY
More informationRobot Manipulator Control. Hesheng Wang Dept. of Automation
Robot Manipulator Control Hesheng Wang Dept. of Automation Introduction Industrial robots work based on the teaching/playback scheme Operators teach the task procedure to a robot he robot plays back eecute
More informationOptimization of Model-Reference Variable-Structure Controller Parameters for Direct-Current Motor
Journal of Computations & Modelling, vol., no.3, 1, 67-88 ISSN: 179-765 (print), 179-885 (online) Scienpress Ltd, 1 Optimization of Model-Reference Variable-Structure Controller Parameters for Direct-Current
More informationINVERSE MODEL APPROACH TO DISTURBANCE REJECTION AND DECOUPLING CONTROLLER DESIGN. Leonid Lyubchyk
CINVESTAV Department of Automatic Control November 3, 20 INVERSE MODEL APPROACH TO DISTURBANCE REJECTION AND DECOUPLING CONTROLLER DESIGN Leonid Lyubchyk National Technical University of Ukraine Kharkov
More informationOn-line Learning of Robot Arm Impedance Using Neural Networks
On-line Learning of Robot Arm Impedance Using Neural Networks Yoshiyuki Tanaka Graduate School of Engineering, Hiroshima University, Higashi-hiroshima, 739-857, JAPAN Email: ytanaka@bsys.hiroshima-u.ac.jp
More informationMCE/EEC 647/747: Robot Dynamics and Control. Lecture 12: Multivariable Control of Robotic Manipulators Part II
MCE/EEC 647/747: Robot Dynamics and Control Lecture 12: Multivariable Control of Robotic Manipulators Part II Reading: SHV Ch.8 Mechanical Engineering Hanz Richter, PhD MCE647 p.1/14 Robust vs. Adaptive
More informationOn Practical Applications of Active Disturbance Rejection Control
2010 Chinese Control Conference On Practical Applications of Active Disturbance Rejection Control Qing Zheng Gannon University Zhiqiang Gao Cleveland State University Outline Ø Introduction Ø Active Disturbance
More informationFAULT-TOLERANT CONTROL OF CHEMICAL PROCESS SYSTEMS USING COMMUNICATION NETWORKS. Nael H. El-Farra, Adiwinata Gani & Panagiotis D.
FAULT-TOLERANT CONTROL OF CHEMICAL PROCESS SYSTEMS USING COMMUNICATION NETWORKS Nael H. El-Farra, Adiwinata Gani & Panagiotis D. Christofides Department of Chemical Engineering University of California,
More informationGain Scheduling Control with Multi-loop PID for 2-DOF Arm Robot Trajectory Control
Gain Scheduling Control with Multi-loop PID for 2-DOF Arm Robot Trajectory Control Khaled M. Helal, 2 Mostafa R.A. Atia, 3 Mohamed I. Abu El-Sebah, 2 Mechanical Engineering Department ARAB ACADEMY FOR
More informationPartially Observable Markov Decision Processes (POMDPs)
Partially Observable Markov Decision Processes (POMDPs) Sachin Patil Guest Lecture: CS287 Advanced Robotics Slides adapted from Pieter Abbeel, Alex Lee Outline Introduction to POMDPs Locally Optimal Solutions
More informationOverview of the Seminar Topic
Overview of the Seminar Topic Simo Särkkä Laboratory of Computational Engineering Helsinki University of Technology September 17, 2007 Contents 1 What is Control Theory? 2 History
More informationControl for. Maarten Steinbuch Dept. Mechanical Engineering Control Systems Technology Group TU/e
Control for Maarten Steinbuch Dept. Mechanical Engineering Control Systems Technology Group TU/e Motion Systems m F Introduction Timedomain tuning Frequency domain & stability Filters Feedforward Servo-oriented
More informationIntroduction to Reinforcement Learning. CMPT 882 Mar. 18
Introduction to Reinforcement Learning CMPT 882 Mar. 18 Outline for the week Basic ideas in RL Value functions and value iteration Policy evaluation and policy improvement Model-free RL Monte-Carlo and
More informationLecture 9. Introduction to Kalman Filtering. Linear Quadratic Gaussian Control (LQG) G. Hovland 2004
MER42 Advanced Control Lecture 9 Introduction to Kalman Filtering Linear Quadratic Gaussian Control (LQG) G. Hovland 24 Announcement No tutorials on hursday mornings 8-9am I will be present in all practical
More informationRBF Neural Network Adaptive Control for Space Robots without Speed Feedback Signal
Trans. Japan Soc. Aero. Space Sci. Vol. 56, No. 6, pp. 37 3, 3 RBF Neural Network Adaptive Control for Space Robots without Speed Feedback Signal By Wenhui ZHANG, Xiaoping YE and Xiaoming JI Institute
More informationIntroduction to System Identification and Adaptive Control
Introduction to System Identification and Adaptive Control A. Khaki Sedigh Control Systems Group Faculty of Electrical and Computer Engineering K. N. Toosi University of Technology May 2009 Introduction
More informationD(s) G(s) A control system design definition
R E Compensation D(s) U Plant G(s) Y Figure 7. A control system design definition x x x 2 x 2 U 2 s s 7 2 Y Figure 7.2 A block diagram representing Eq. (7.) in control form z U 2 s z Y 4 z 2 s z 2 3 Figure
More informationMATH4406 (Control Theory) Unit 1: Introduction Prepared by Yoni Nazarathy, July 21, 2012
MATH4406 (Control Theory) Unit 1: Introduction Prepared by Yoni Nazarathy, July 21, 2012 Unit Outline Introduction to the course: Course goals, assessment, etc... What is Control Theory A bit of jargon,
More informationCHAPTER 3 TUNING METHODS OF CONTROLLER
57 CHAPTER 3 TUNING METHODS OF CONTROLLER 3.1 INTRODUCTION This chapter deals with a simple method of designing PI and PID controllers for first order plus time delay with integrator systems (FOPTDI).
More informationDeadzone Compensation in Motion Control Systems Using Neural Networks
602 IEEE TRANSACTIONS ON AUTOMATIC CONTROL, VOL. 45, NO. 4, APRIL 2000 Deadzone Compensation in Motion Control Systems Using Neural Networks Rastko R. Selmić and Frank L. Lewis Abstract A compensation
More informationTwo-Link Flexible Manipulator Control Using Sliding Mode Control Based Linear Matrix Inequality
IOP Conference Series: Materials Science and Engineering PAPER OPEN ACCESS Two-Link Flexible Manipulator Control Using Sliding Mode Control Based Linear Matrix Inequality To cite this article: Zulfatman
More informationObserver Based Output Feedback Tracking Control of Robot Manipulators
1 IEEE International Conference on Control Applications Part of 1 IEEE Multi-Conference on Systems and Control Yokohama, Japan, September 8-1, 1 Observer Based Output Feedback Tracking Control of Robot
More informationSeul Jung, T. C. Hsia and R. G. Bonitz y. Robotics Research Laboratory. University of California, Davis. Davis, CA 95616
On Robust Impedance Force Control of Robot Manipulators Seul Jung, T C Hsia and R G Bonitz y Robotics Research Laboratory Department of Electrical and Computer Engineering University of California, Davis
More informationIntroduction to centralized control
ROBOTICS 01PEEQW Basilio Bona DAUIN Politecnico di Torino Control Part 2 Introduction to centralized control Independent joint decentralized control may prove inadequate when the user requires high task
More informationTrajectory planning and feedforward design for electromechanical motion systems version 2
2 Trajectory planning and feedforward design for electromechanical motion systems version 2 Report nr. DCT 2003-8 Paul Lambrechts Email: P.F.Lambrechts@tue.nl April, 2003 Abstract This report considers
More informationPath Integral Stochastic Optimal Control for Reinforcement Learning
Preprint August 3, 204 The st Multidisciplinary Conference on Reinforcement Learning and Decision Making RLDM203 Path Integral Stochastic Optimal Control for Reinforcement Learning Farbod Farshidian Institute
More informationOptimal Design of CMAC Neural-Network Controller for Robot Manipulators
22 IEEE TRANSACTIONS ON SYSTEMS, MAN, AND CYBERNETICS PART C: APPLICATIONS AND REVIEWS, VOL. 30, NO. 1, FEBUARY 2000 Optimal Design of CMAC Neural-Network Controller for Robot Manipulators Young H. Kim
More informationChapter 3. LQ, LQG and Control System Design. Dutch Institute of Systems and Control
Chapter 3 LQ, LQG and Control System H 2 Design Overview LQ optimization state feedback LQG optimization output feedback H 2 optimization non-stochastic version of LQG Application to feedback system design
More informationVideo 8.1 Vijay Kumar. Property of University of Pennsylvania, Vijay Kumar
Video 8.1 Vijay Kumar 1 Definitions State State equations Equilibrium 2 Stability Stable Unstable Neutrally (Critically) Stable 3 Stability Translate the origin to x e x(t) =0 is stable (Lyapunov stable)
More informationTemporal-Difference Q-learning in Active Fault Diagnosis
Temporal-Difference Q-learning in Active Fault Diagnosis Jan Škach 1 Ivo Punčochář 1 Frank L. Lewis 2 1 Identification and Decision Making Research Group (IDM) European Centre of Excellence - NTIS University
More informationNew Advances in Uncertainty Analysis and Estimation
New Advances in Uncertainty Analysis and Estimation Overview: Both sensor observation data and mathematical models are used to assist in the understanding of physical dynamic systems. However, observational
More informationDr Ian R. Manchester
Week Content Notes 1 Introduction 2 Frequency Domain Modelling 3 Transient Performance and the s-plane 4 Block Diagrams 5 Feedback System Characteristics Assign 1 Due 6 Root Locus 7 Root Locus 2 Assign
More informationInverse Optimality Design for Biological Movement Systems
Inverse Optimality Design for Biological Movement Systems Weiwei Li Emanuel Todorov Dan Liu Nordson Asymtek Carlsbad CA 921 USA e-mail: wwli@ieee.org. University of Washington Seattle WA 98195 USA Google
More informationAn Adaptive Full-State Feedback Controller for Bilateral Telerobotic Systems
21 American Control Conference Marriott Waterfront Baltimore MD USA June 3-July 2 21 FrB16.3 An Adaptive Full-State Feedback Controller for Bilateral Telerobotic Systems Ufuk Ozbay Erkan Zergeroglu and
More informationFormally Analyzing Adaptive Flight Control
Formally Analyzing Adaptive Flight Control Ashish Tiwari SRI International 333 Ravenswood Ave Menlo Park, CA 94025 Supported in part by NASA IRAC NRA grant number: NNX08AB95A Ashish Tiwari Symbolic Verification
More informationChapter 2. Classical Control System Design. Dutch Institute of Systems and Control
Chapter 2 Classical Control System Design Overview Ch. 2. 2. Classical control system design Introduction Introduction Steady-state Steady-state errors errors Type Type k k systems systems Integral Integral
More informationDISTURBANCE OBSERVER BASED CONTROL: CONCEPTS, METHODS AND CHALLENGES
DISTURBANCE OBSERVER BASED CONTROL: CONCEPTS, METHODS AND CHALLENGES Wen-Hua Chen Professor in Autonomous Vehicles Department of Aeronautical and Automotive Engineering Loughborough University 1 Outline
More informationChapter 2 Review of Linear and Nonlinear Controller Designs
Chapter 2 Review of Linear and Nonlinear Controller Designs This Chapter reviews several flight controller designs for unmanned rotorcraft. 1 Flight control systems have been proposed and tested on a wide
More informationPERIODIC signals are commonly experienced in industrial
IEEE TRANSACTIONS ON CONTROL SYSTEMS TECHNOLOGY, VOL. 15, NO. 2, MARCH 2007 369 Repetitive Learning Control of Nonlinear Continuous-Time Systems Using Quasi-Sliding Mode Xiao-Dong Li, Tommy W. S. Chow,
More informationA Tour of Reinforcement Learning The View from Continuous Control. Benjamin Recht University of California, Berkeley
A Tour of Reinforcement Learning The View from Continuous Control Benjamin Recht University of California, Berkeley trustable, scalable, predictable Control Theory! Reinforcement Learning is the study
More informationCDS 101/110a: Lecture 8-1 Frequency Domain Design
CDS 11/11a: Lecture 8-1 Frequency Domain Design Richard M. Murray 17 November 28 Goals: Describe canonical control design problem and standard performance measures Show how to use loop shaping to achieve
More informationLecture «Robot Dynamics»: Dynamics and Control
Lecture «Robot Dynamics»: Dynamics and Control 151-0851-00 V lecture: CAB G11 Tuesday 10:15 12:00, every week exercise: HG E1.2 Wednesday 8:15 10:00, according to schedule (about every 2nd week) Marco
More informationLINEAR QUADRATIC GAUSSIAN
ECE553: Multivariable Control Systems II. LINEAR QUADRATIC GAUSSIAN.: Deriving LQG via separation principle We will now start to look at the design of controllers for systems Px.t/ D A.t/x.t/ C B u.t/u.t/
More informationRobotics. Dynamics. University of Stuttgart Winter 2018/19
Robotics Dynamics 1D point mass, damping & oscillation, PID, dynamics of mechanical systems, Euler-Lagrange equation, Newton-Euler, joint space control, reference trajectory following, optimal operational
More informationREPETITIVE LEARNING OF BACKSTEPPING CONTROLLED NONLINEAR ELECTROHYDRAULIC MATERIAL TESTING SYSTEM 1. Seunghyeokk James Lee 2, Tsu-Chin Tsao
REPETITIVE LEARNING OF BACKSTEPPING CONTROLLED NONLINEAR ELECTROHYDRAULIC MATERIAL TESTING SYSTEM Seunghyeokk James Lee, Tsu-Chin Tsao Mechanical and Aerospace Engineering Department University of California
More informationTracking Control of an Ultrasonic Linear Motor Actuated Stage Using a Sliding-mode Controller with Friction Compensation
Vol. 3, No., pp. 3-39() http://dx.doi.org/.693/smartsci.. Tracking Control of an Ultrasonic Linear Motor Actuated Stage Using a Sliding-mode Controller with Friction Compensation Chih-Jer Lin,*, Ming-Jia
More informationOptimal Control with Learned Forward Models
Optimal Control with Learned Forward Models Pieter Abbeel UC Berkeley Jan Peters TU Darmstadt 1 Where we are? Reinforcement Learning Data = {(x i, u i, x i+1, r i )}} x u xx r u xx V (x) π (u x) Now V
More informationFall 線性系統 Linear Systems. Chapter 08 State Feedback & State Estimators (SISO) Feng-Li Lian. NTU-EE Sep07 Jan08
Fall 2007 線性系統 Linear Systems Chapter 08 State Feedback & State Estimators (SISO) Feng-Li Lian NTU-EE Sep07 Jan08 Materials used in these lecture notes are adopted from Linear System Theory & Design, 3rd.
More informationChapter 7 Control. Part State Space Control. Mobile Robotics - Prof Alonzo Kelly, CMU RI
Chapter 7 Control Part 2 7.2 State Space Control 1 7.2 State Space Control Outline 7.2.1 Introduction 7.2.2 State Space Feedback Control 7.2.3 Example: Robot Trajectory Following 7.2.4 Perception Based
More informationIntroduction to centralized control
Industrial Robots Control Part 2 Introduction to centralized control Independent joint decentralized control may prove inadequate when the user requires high task velocities structured disturbance torques
More informationStability Analysis of Optimal Adaptive Control under Value Iteration using a Stabilizing Initial Policy
Stability Analysis of Optimal Adaptive Control under Value Iteration using a Stabilizing Initial Policy Ali Heydari, Member, IEEE Abstract Adaptive optimal control using value iteration initiated from
More informationAlireza Mousavi Brunel University
Alireza Mousavi Brunel University 1 » Online Lecture Material at (www.brunel.ac.uk/~emstaam)» C. W. De Silva, Modelling and Control of Engineering Systems, CRC Press, Francis & Taylor, 2009.» M. P. Groover,
More informationGeneral procedure for formulation of robot dynamics STEP 1 STEP 3. Module 9 : Robot Dynamics & controls
Module 9 : Robot Dynamics & controls Lecture 32 : General procedure for dynamics equation forming and introduction to control Objectives In this course you will learn the following Lagrangian Formulation
Robotics Part II: From Learning Model-based Control to Model-free Reinforcement Learning
Robotics Part II: From Learning Model-based Control to Model-free Reinforcement Learning Stefan Schaal Max-Planck-Institute for Intelligent Systems Tübingen, Germany & Computer Science, Neuroscience, &
Learning to Control an Octopus Arm with Gaussian Process Temporal Difference Methods
Learning to Control an Octopus Arm with Gaussian Process Temporal Difference Methods Yaakov Engel Joint work with Peter Szabo and Dmitry Volkinshtein (ex. Technion) Why use GPs in RL? A Bayesian approach
Adaptive Robust Control
Adaptive Robust Control. Adaptive control modifies the control law to cope with the fact that the system and environment are uncertain. Robust control sacrifices some performance in exchange for guaranteed stability
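The distinction in the excerpt can be shown on a scalar plant. The following is a hypothetical sketch (not from the cited notes): the plant x' = a*x + u has an unknown gain a, the adaptive law estimates it online, while the robust law uses one fixed gain sized for the worst case; all constants are illustrative assumptions.

```python
# Hypothetical sketch: adaptive vs. robust stabilization of x' = a*x + u,
# where the true a is unknown to the controller.
def simulate(controller, a=1.5, dt=0.01, steps=2000):
    x, state = 1.0, {"a_hat": 0.0}
    for _ in range(steps):
        u = controller(x, state)
        x += dt * (a * x + u)          # true plant, forward Euler
    return abs(x)

def adaptive(x, state, gamma=5.0, k=2.0):
    # certainty equivalence: cancel the estimated dynamics, then stabilize
    u = -state["a_hat"] * x - k * x
    state["a_hat"] += 0.01 * gamma * x * x   # gradient-type adaptation, dt folded in
    return u

def robust(x, state, a_max=2.0, k=2.0):
    # fixed gain large enough for the worst case |a| <= a_max; no learning
    return -(a_max + k) * x

x_ad, x_rb = simulate(adaptive), simulate(robust)
# both drive |x| toward zero; the robust law spends extra gain it may not need
```

The trade-off the excerpt states is visible here: the robust controller guarantees stability for any |a| up to a_max at the cost of permanently high gain, while the adaptive controller tunes itself to the actual plant.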
Mechanical Engineering Department - University of São Paulo at São Carlos, São Carlos, SP, , Brazil
MIXED MODEL BASED/FUZZY ADAPTIVE ROBUST CONTROLLER WITH H∞ CRITERION APPLIED TO FREE-FLOATING SPACE MANIPULATORS Tatiana F.P.A.T. Pazelli, Roberto S. Inoue, Adriano A.G. Siqueira, Marco H. Terra Electrical Engineering
Control System Design
ELEC ENG 4CL4: Control System Design Notes for Lecture #24 Wednesday, March 10, 2004 Dr. Ian C. Bruce Room: CRL-229 Phone ext.: 26984 Email: ibruce@mail.ece.mcmaster.ca Remedies We next turn to the question
M. De La Sen, A. Almansa and J. C. Soto Instituto de Investigación y Desarrollo de Procesos, Leioa (Bizkaia). Aptdo. 644 de Bilbao, Spain
American Journal of Applied Sciences 4 (6): 346-353, 2007 ISSN 1546-9239 2007 Science Publications Adaptive Control of Robotic Manipulators with Improvement of the Transient Behavior Through an Intelligent Supervision
ADAPTIVE FORCE AND MOTION CONTROL OF ROBOT MANIPULATORS IN CONSTRAINED MOTION WITH DISTURBANCES
ADAPTIVE FORCE AND MOTION CONTROL OF ROBOT MANIPULATORS IN CONSTRAINED MOTION WITH DISTURBANCES By YUNG-SHENG CHANG A THESIS PRESENTED TO THE GRADUATE SCHOOL OF THE UNIVERSITY OF FLORIDA IN PARTIAL FULFILLMENT
Exam. 135 minutes, 15 minutes reading time
Exam August 6, 2018. Control Systems II (151-0590-00) Dr. Jacopo Tani. Exam Duration: 135 minutes, 15 minutes reading time. Number of Problems: 35 Number of Points: 47 Permitted aids: 10 pages (5 sheets) A4.
Stochastic and Adaptive Optimal Control
Stochastic and Adaptive Optimal Control Robert Stengel Optimal Control and Estimation, MAE 546 Princeton University, 2018! Nonlinear systems with random inputs and perfect measurements! Stochastic neighboring-optimal
Natural and artificial constraints
FORCE CONTROL Manipulator interaction with environment Compliance control Impedance control Force control Constrained motion Natural and artificial constraints Hybrid force/motion control MANIPULATOR INTERACTION
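Of the schemes the excerpt lists, impedance control is easy to show in one dimension. The following is a hypothetical sketch (not from the cited material): the controller makes the end-effector behave like a target mass-spring-damper around a reference position; M_d, B_d, K_d and the contact force are illustrative assumptions.

```python
# Hypothetical sketch: 1-DOF impedance control renders the end-effector as a
# target mass-spring-damper (Md, Bd, Kd) around x_ref, driven by the
# external contact force f_ext.
def impedance_step(x, v, f_ext, x_ref, dt=0.001, Md=1.0, Bd=20.0, Kd=100.0):
    # desired closed-loop dynamics: Md*a + Bd*v + Kd*(x - x_ref) = f_ext
    a = (f_ext - Bd * v - Kd * (x - x_ref)) / Md
    return x + dt * v, v + dt * a

x, v = 0.0, 0.0
for _ in range(5000):              # hold a constant 10 N contact force
    x, v = impedance_step(x, v, f_ext=10.0, x_ref=0.0)
# steady state: the surface deflects the "spring" by f_ext / Kd = 0.1 m
```

Stiff Kd gives position-control-like behavior, small Kd gives compliance; pure force control and hybrid force/motion control, also listed in the excerpt, instead regulate f_ext directly along constrained directions.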
A Sliding Mode Control based on Nonlinear Disturbance Observer for the Mobile Manipulator
International Core Journal of Engineering Vol.3 No.6 7 ISSN: 44-895 A Sliding Mode Control based on Nonlinear Disturbance Observer for the Mobile Manipulator Yanna Si Information Engineering College Henan
Robust Internal Model Control for Impulse Elimination of Singular Systems
International Journal of Control Science and Engineering ; (): -7 DOI:.59/j.control.. Robust Internal Model Control for Impulse Elimination of Singular Systems M. M. Share Pasand* and H. D. Taghirad Department
Output Feedback Bilateral Teleoperation with Force Estimation in the Presence of Time Delays
Output Feedback Bilateral Teleoperation with Force Estimation in the Presence of Time Delays by John M. Daly A thesis presented to the University of Waterloo in fulfilment of the thesis requirement for
Design On-Line Tunable Gain Artificial Nonlinear Controller
Journal of Computer Engineering 1 (2009) 3-11 Design On-Line Tunable Gain Artificial Nonlinear Controller Farzin Piltan, Nasri Sulaiman, M. H. Marhaban and R. Ramli Department of Electrical and Electronic
Research on State-of-Charge (SOC) estimation using current integration based on temperature compensation
IOP Conference Series: Earth and Environmental Science PAPER OPEN ACCESS Research on State-of-Charge (SOC) estimation using current integration based on temperature compensation To cite this article: J
Lecture 6: CS395T Numerical Optimization for Graphics and AI Line Search Applications
Lecture 6: CS395T Numerical Optimization for Graphics and AI Line Search Applications Qixing Huang The University of Texas at Austin huangqx@cs.utexas.edu 1 Disclaimer This note is adapted from Section
Neural Network Controller for Robotic Manipulator
MMAE54 Robotics - Class Project Paper Neural Network Controller for Robotic Manipulator Kai Qian Department of Biomedical Engineering, Illinois Institute of Technology, Chicago, IL 666 USA. Introduction Artificial
Adaptive Predictive Observer Design for Class of Uncertain Nonlinear Systems with Bounded Disturbance
International Journal of Control Science and Engineering 2018, 8(2): 31-35 DOI: 10.5923/j.control.20180802.01 Adaptive Predictive Observer Design for Class of Uncertain Nonlinear Systems with Bounded Disturbance Saeed Kashefi*, Majid Hajatipor Faculty of
Optimal Control. McGill COMP 765, Oct 3rd, 2017
Optimal Control McGill COMP 765 Oct 3rd, 2017 Classical Control Quiz Question 1: Can a PID controller be used to balance an inverted pendulum: A) That starts upright? B) That must be swung-up (perhaps
Dr Ian R. Manchester, AMME 3500: Review
Week Date Content Notes 1 6 Mar Introduction 2 13 Mar Frequency Domain Modelling 3 20 Mar Transient Performance and the s-plane 4 27 Mar Block Diagrams Assign 1 Due 5 3 Apr Feedback System Characteristics
Neural Network Sliding-Mode-PID Controller Design for Electrically Driven Robot Manipulators
Neural Network Sliding-Mode-PID Controller Design for Electrically Driven Robot Manipulators S. E. Shafiei 1, M. R. Soltanpour 2 1. Department of Electrical and Robotic Engineering, Shahrood University
Coordinated Tracking Control of Multiple Laboratory Helicopters: Centralized and De-Centralized Design Approaches
Coordinated Tracking Control of Multiple Laboratory Helicopters: Centralized and De-Centralized Design Approaches Hugh H. T. Liu University of Toronto, Toronto, Ontario, M3H 5T6, Canada Sebastian Nowotny
Prof. Dr. Ann Nowé
REINFORCEMENT LEARNING: AN INTRODUCTION Prof. Dr. Ann Nowé Artificial Intelligence Lab ai.vub.ac.be REINFORCEMENT LEARNING: WHAT IS IT? Learning from interaction. Learning about, from, and while
Feedback Control of Dynamic Bipedal Robot Locomotion
Feedback Control of Dynamic Bipedal Robot Locomotion Eric R. Westervelt, Jessy W. Grizzle, Christine Chevallereau, Jun Ho Choi, Benjamin Morris. CRC Press, Taylor & Francis Group, Boca Raton London New York CRC
Design and Control of Variable Stiffness Actuation Systems
Design and Control of Variable Stiffness Actuation Systems Gianluca Palli, Claudio Melchiorri, Giovanni Berselli and Gabriele Vassura DEIS - DIEM - Università di Bologna LAR - Laboratory of Automation
Tracking Control of Robot Manipulators with Bounded Torque Inputs* W.E. Dixon, M.S. de Queiroz, F. Zhang and D.M. Dawson
Robotica (1999) volume 17, pp. 121 129. Printed in the United Kingdom 1999 Cambridge University Press Tracking Control of Robot Manipulators with Bounded Torque Inputs* W.E. Dixon, M.S. de Queiroz, F.
Development of a Deep Recurrent Neural Network Controller for Flight Applications
Development of a Deep Recurrent Neural Network Controller for Flight Applications American Control Conference (ACC) May 26, 2017 Scott A. Nivison Pramod P. Khargonekar Department of Electrical and Computer
ACTIVE VIBRATION CONTROL PROTOTYPING IN ANSYS: A VERIFICATION EXPERIMENT
ACTIVE VIBRATION CONTROL PROTOTYPING IN ANSYS: A VERIFICATION EXPERIMENT Ing. Gergely TAKÁCS, PhD.* * Institute of Automation, Measurement and Applied Informatics Faculty of Mechanical Engineering Slovak
Reinforcement Learning. Yishay Mansour Tel-Aviv University
Reinforcement Learning Yishay Mansour Tel-Aviv University 1 Reinforcement Learning: Course Information Classes: Wednesday Lecture 10-13 Yishay Mansour Recitations:14-15/15-16 Eliya Nachmani Adam Polyak
Reinforcement Learning, Neural Networks and PI Control Applied to a Heating Coil
Reinforcement Learning, Neural Networks and PI Control Applied to a Heating Coil Charles W. Anderson 1, Douglas C. Hittle 2, Alon D. Katz 2, and R. Matt Kretchmar 1 1 Department of Computer Science Colorado