Chapter 4 Supervised learning: Multilayer Networks II
- Randall Allen
1 Chapter 4 Supervised learning: Multilayer Networks II
2 Other Feedforward Networks
- Madaline: multiple adalines (of a sort) as hidden nodes; weight change follows the minimum disturbance principle
- Adaptive multilayer networks: dynamically change the network size (# of hidden nodes)
- Prediction networks: BP nets for prediction; recurrent nets
- Networks of radial basis functions (RBF), e.g., the Gaussian function; perform better than the sigmoid function for some tasks, e.g., interpolation in function approximation
- Some other selected types of layered NN
3 Madaline
- Architectures: hidden layers of adaline nodes; output nodes differ
- Learning: error driven, but not by gradient descent
- Minimum disturbance: a smaller change of weights is preferred, provided it can reduce the error
- Three Madaline models, with different node functions and different learning rules: MR I, II, and III
- MR I and II were developed in the 60s, MR III much later (1988)
4 Madaline
- MRI net: output nodes with a logic function
- MRII net: output nodes are adalines
- MRIII net: same as MRII, except the nodes use the sigmoid function
5 Madaline: MR II rule
- Only change weights associated with nodes which have small net_j, where net_j = Σ_i w_{j,i} x_i
- Proceed bottom up, layer by layer
- Outline of the algorithm, at layer h:
  1. Sort all nodes in order of increasing |net| values; put those with |net| < θ in S.
  2. For each A_j in S: if reversing its output (changing x_j to −x_j) improves the output error, then change the weight vector leading into A_j by the LMS rule of Adaline (or in other ways).
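The trial-reversal step of the MR II rule can be sketched as follows. This is a minimal illustration, not the exact algorithm: the function name `mrii_step`, the `output_error` callback (which scores a candidate output vector for the whole network), and the LMS-style consolidation of an accepted reversal are all assumptions.

```python
import numpy as np

def mrii_step(W, x, output_error, theta=0.5, eta=0.1):
    """One MRII sweep over a layer of adalines (hedged sketch).

    W: (nodes x inputs) weights; x: bipolar input vector.
    output_error(outputs) -> scalar network error given this layer's outputs.
    Nodes with |net| < theta are sorted by increasing |net| (minimum
    disturbance first); a node's output is reversed only if that reduces
    the error, and the reversal is consolidated by an LMS-style change.
    """
    net = W @ x
    out = np.where(net >= 0, 1.0, -1.0)
    candidates = sorted(np.flatnonzero(np.abs(net) < theta),
                        key=lambda j: abs(net[j]))
    for j in candidates:
        trial = out.copy()
        trial[j] = -trial[j]                 # tentatively reverse node j
        if output_error(trial) < output_error(out):
            out = trial                      # keep the reversal: it helps
            W[j] += eta * (out[j] - net[j]) * x   # LMS-style consolidation
    return W, out
```

Only the node with the smallest |net| is tried first, so an accepted change disturbs the layer as little as possible.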
6 Madaline: MR III rule
- Even though the node function is sigmoid, do not use gradient descent; do not assume its derivative is known
- Use trial adaptation:
  - E: total square error at the output nodes
  - E_k: total square error at the output nodes if net_k at node k is increased by ε > 0
  - Change the weights leading to node k according to Δw_i = −η x_i (E_k − E)/ε, or Δw_i = −η x_i E (E_k − E)/ε
- Update the weights to one node at a time
- It can be shown to be equivalent to BP
- Since it does not explicitly depend on derivatives, this method can be used for hardware devices that inaccurately implement the sigmoid function
7 Adaptive Multilayer Networks
- Smaller nets are often preferred:
  - computing is faster
  - they generalize better
  - training is faster (fewer weights to be trained, smaller # of training samples needed)
- Heuristics for optimal net size:
  - Pruning: start with a large net, then prune it by removing unimportant nodes and associated connections/weights
  - Growing: start with a very small net, then continuously increase its size in small increments until the performance becomes satisfactory
  - Combining the two: a cycle of pruning and growing until performance is satisfactory and no more pruning is possible
8 Adaptive Multilayer Networks
- Pruning a network by removing:
  - weights with small magnitude (e.g., ≈ 0)
  - nodes with small incoming weights
  - weights whose existence does not significantly affect the network output (i.e., ∂o/∂w is negligible)
- By examining the second derivative:
  ΔE ≈ E′·Δw + (1/2)·E″·(Δw)², where E′ = ∂E/∂w and E″ = ∂²E/∂w²
  When E approaches a local minimum, ∂E/∂w ≈ 0, so ΔE ≈ (1/2)·E″·(Δw)²
  The effect of removing w is to change it to 0, i.e., Δw = −w, so ΔE ≈ (1/2)·E″·w²
  Whether to remove w depends on whether (1/2)·E″·w² is sufficiently small
- Input nodes can also be pruned if the resulting change of E is negligible
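The second-derivative criterion above can be sketched in a few lines. The function name `prune_by_saliency` and the fixed `threshold` are assumptions for illustration; the second derivatives are taken as given.

```python
import numpy as np

def prune_by_saliency(weights, second_derivs, threshold):
    """Rank weights by the estimated error increase from removing them.

    Near a local minimum E' ~ 0, so setting a weight w to zero changes
    the error by roughly 0.5 * E'' * w**2 (its 'saliency').  Weights
    whose saliency falls below `threshold` are marked for removal.
    """
    saliency = 0.5 * second_derivs * weights ** 2
    keep = saliency >= threshold
    return saliency, keep
```

A tiny weight with a flat error surface around it (small E″·w²) is the first candidate to go.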
9 Adaptive Multilayer Networks
- Cascade correlation: an example of growing net size
- Cascade architecture development:
  - start with a net without hidden nodes
  - each time, one hidden node is added between the output nodes and all other nodes
  - the new node is connected to the output nodes, and receives connections from all other nodes (input nodes and all existing hidden nodes)
- Not strictly feedforward
10 Adaptive Multilayer Networks
- Correlation learning: when a new node n is added,
  - first train all input weights to node n (from all nodes below it) to maximize the covariance of n's output with the current error of the output nodes, E
  - then train all weights to the output nodes to minimize E
  - quickprop is used
  - all other weights (to lower hidden nodes) are not changed, so training is fast
11 Adaptive Multilayer Networks
- Train w_new to maximize the covariance between x_new and the old errors:
  S(w_new) = Σ_{k=1..K} | Σ_{p=1..P} (x_new,p − x̄_new)(E_k,p − Ē_k) |
  where x_new,p is the output of the new node for the p-th sample, x̄_new is its mean value over all samples, E_k,p is the error on the k-th output node for the p-th sample with the old weights, and Ē_k is its mean value over all samples
- When S(w_new) is maximized, the variation of x_new,p around x̄_new mirrors that of the error E_k,p around Ē_k
- S(w_new) is maximized by gradient ascent:
  Δw_i = η ∂S/∂w_i = η Σ_{k=1..K} Σ_{p=1..P} S_k (E_k,p − Ē_k) f′_p I_{i,p}
  where S_k is the sign of the correlation between x_new and E_k, f′_p is the derivative of x_new's node function for the p-th sample, and I_{i,p} is the i-th input of the p-th sample
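The covariance score S that the candidate node is trained to maximize can be computed directly; the function name `candidate_score` and the array layout (samples in rows) are assumptions for illustration.

```python
import numpy as np

def candidate_score(x_new, E):
    """Cascade-correlation covariance score for a candidate hidden node.

    x_new: (P,) candidate-node outputs over the P training samples.
    E: (P, K) residual errors of the K output nodes under the old weights.
    Returns S = sum_k | sum_p (x_new,p - mean(x_new)) (E_k,p - mean(E_k)) |,
    the quantity the candidate's input weights are trained to maximize.
    """
    xc = x_new - x_new.mean()      # centered candidate outputs
    Ec = E - E.mean(axis=0)        # errors centered per output node
    return np.abs(xc @ Ec).sum()   # |covariance| summed over output nodes
```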
12 Adaptive Multilayer Networks
- Example: the corner isolation problem (X patterns at the four corners)
- Hidden nodes use a sigmoid function with range [−0.5, 0.5]
- When trained without a hidden node: 4 out of 12 patterns are misclassified
- After adding 1 hidden node, only 2 patterns are misclassified
- After adding the second hidden node, all 12 patterns are correctly classified
- At least 4 hidden nodes are required with BP learning
13 Prediction Networks
- Prediction: predict f(t) based on values of f(t−1), f(t−2), …
- Two NN models: feedforward and recurrent
- A simple example: forecasting the commodity price at month t based on its prices in previous months
- Using a BP net with a single hidden layer:
  - 1 output node: the forecasted price for month t
  - k input nodes, using the prices of the previous k months for prediction
  - k hidden nodes
- Training samples: for k = 2: {x_{t−2}, x_{t−1}} → x_t
- Raw data: flour prices for 100 consecutive months; 90 for training, 10 for cross-validation testing
- One-lag forecasting: predict x_t based on x_{t−2} and x_{t−1}; multilag: use predicted values for further forecasting
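Building the training samples for the price example is a simple sliding window; the helper name `make_samples` is an assumption for illustration.

```python
def make_samples(series, k):
    """Build (input, target) pairs for one-step-ahead prediction.

    With k input nodes, each sample uses the previous k values
    (x[t-k], ..., x[t-1]) to predict x[t], as in the price example.
    """
    return [(series[t - k:t], series[t]) for t in range(k, len(series))]
```

For multilag forecasting the same window would be fed the network's own predictions instead of observed values.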
14 Prediction Networks
- Training: 90 input data values; the last 10 prices for the validation test
- Three attempts: k = 2, 4, 6
- Learning rate = 0.3, with momentum; 50,000 epochs: the 2-2-1 net gave good prediction
- The two larger nets were over-trained, with larger prediction errors on the validation data
- Results: MSE reported for training, one-lag, and multilag forecasting for the 2-2-1, 4-4-1, and 6-6-1 networks
15 Prediction Networks
- Generic NN model for prediction:
  - a preprocessor prepares training samples x̄(t) from the time-series data x(t)
  - train the predictor using the samples x̄(t), e.g., by BP learning
- Preprocessor: in the previous example, let k = d + 1 (using the previous d + 1 data points for prediction); the input sample at time t is x̄(t) = (x(t−d), …, x(t−1), x(t)), and the desired output (e.g., the prediction) is x(t+1)
- More generally, a kernel function c_i defines the memory model (how previous data are remembered)
- Examples: exponential trace memory; gamma memory (see p. 141)
16 Prediction Networks
- Recurrent NN architecture: cycles in the net
  - output nodes with connections to hidden/input nodes
  - connections between nodes in the same layer
  - a node may connect to itself
- Each node receives external input as well as input from other nodes
- Each node may be affected by the output of every other node
- With a given external input vector, the net often converges to an equilibrium state after a number of iterations (the output of every node stops changing)
- An alternative NN model for function approximation: fewer nodes, but more flexible/complicated connections; the learning procedure is often more complicated
17 Prediction Networks
- Approach I: unfolding to a feedforward net
  - each layer represents one time delay of the network's evolution
  - weights in the different layers are identical
  - e.g., a fully connected net of 3 nodes unfolds to an equivalent FF net of k layers
- Cannot directly apply BP learning, because the weights in different layers are constrained to be identical
- How many layers to unfold to? Hard to determine
18 Prediction Networks
- Approach II: gradient descent, a more general approach
- Error driven: for a given external input,
  E(t) = Σ_k e_k(t)² = Σ_k (d_k(t) − o_k(t))²
  where k ranges over the output nodes, whose desired outputs are known
- Weight update: w_{i,j}(t+1) = w_{i,j}(t) + Δw_{i,j}(t), with
  Δw_{i,j}(t) = η Σ_k (d_k(t) − o_k(t)) ∂o_k(t)/∂w_{i,j}
- The sensitivities are computed recursively:
  ∂o_k(t+1)/∂w_{i,j} = f′(net_k(t)) [ Σ_l w_{k,l}(t) ∂z_l(t)/∂w_{i,j} + δ_{i,k} z_j(t) ]
  where δ_{i,k} = 1 if i = k and 0 otherwise, z_l(t) is the input to node k from either input nodes or other nodes, and ∂o_k(0)/∂w_{i,j} = 0
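One step of this gradient-descent scheme (real-time recurrent learning) can be sketched as follows. This is a hedged sketch, not a full trainer: the tanh node function, the dense all-to-all wiring, the name `rtrl_step`, and the use of NaN to mark unspecified desired outputs are all assumptions.

```python
import numpy as np

def rtrl_step(W, z, P, x_ext, d, eta=0.1):
    """One step of the recursive sensitivity computation and weight update.

    W: (n, n) recurrent weights; z: (n,) node outputs at time t;
    P[k, i, j] approximates d o_k / d w_{i,j}; x_ext: external input added
    to every node; d: desired outputs (NaN where unspecified).
    """
    f = np.tanh                              # node function (an assumption)
    df = lambda a: 1.0 - np.tanh(a) ** 2
    net = W @ z + x_ext
    z_new = f(net)
    n = len(z)
    P_new = np.zeros_like(P)
    for k in range(n):
        for i in range(n):
            for j in range(n):
                # recursive sensitivity: propagate the old sensitivities
                # through W, plus the direct contribution delta_{ik} * z_j
                P_new[k, i, j] = df(net[k]) * (W[k] @ P[:, i, j]
                                               + (1.0 if i == k else 0.0) * z[j])
    err = np.where(np.isnan(d), 0.0, d - z_new)
    dW = eta * np.einsum('k,kij->ij', err, P_new)   # sum over output nodes k
    return z_new, P_new, W + dW
```

The triple loop makes the O(n⁴) cost of maintaining all sensitivities explicit, which is why unfolding is sometimes preferred for small nets.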
19 NN of Radial Basis Functions
- Motivations: better performance than sigmoid functions
  - for some classification problems
  - for function interpolation
- Definition: a function is radially symmetric (is an RBF) if its output depends only on the distance between the input vector and a stored vector associated with that function:
  D = ‖x − μ‖, where x is the input vector and μ is the vector associated with the RBF
  Output: ρ(D_1) ≥ ρ(D_2) whenever D_1 < D_2
- NNs with RBF node functions are called RBF nets
20 NN of Radial Basis Functions
- The Gaussian function is the most widely used RBF:
  ρ(D) = e^{−D²/c}, a bell-shaped function centered at D = 0; continuous and differentiable
- Other RBFs: the inverse quadratic function, the hyperspheric function, etc.
  - Gaussian: ρ(D) = e^{−D²/c}
  - Inverse quadratic: ρ(D) = (c² + D²)^{−β} for β > 0
  - Hyperspheric: ρ(D) = 1 if D ≤ c, 0 if D > c
21 Consider the Gaussian function ρ(D) = e^{−D²/c} again:
- μ gives the center of the region for activating this unit
- ρ(0) = 1 gives the max output
- c determines the size of the region
- Example: the radius D at which e^{−D²/c} = 0.3 is D ≈ 0.35 for c = 0.1, D ≈ 1.10 for c = 1.0, and D ≈ 3.46 for c = 10 (a small c gives a narrow bump, a large c a wide one)
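The width example can be checked numerically: solving e^{−D²/c} = level for D gives D = sqrt(−c·ln(level)), and at level 0.3 with c = 10 this reproduces the D ≈ 3.46 quoted on the slide. The helper name `activation_radius` and the 0.3 activation level are assumptions for illustration.

```python
import math

def activation_radius(c, level):
    """Distance D at which the Gaussian RBF exp(-D**2 / c) falls to `level`.

    Illustrates how the width parameter c sets the size of the region
    that activates the unit: the radius grows like sqrt(c).
    """
    return math.sqrt(-c * math.log(level))
```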
22 NN of Radial Basis Functions
- Pattern classification:
  - 4 or 5 sigmoid hidden nodes are required for a good classification
  - only 1 RBF node is required if the function can approximate the circle separating the classes
23 NN of Radial Basis Functions
- The XOR problem with a 2-2-1 network; the two hidden nodes are RBFs:
  ρ_1(x(t)) = e^{−‖x(t) − t_1‖²}, with t_1 = (1, 1)
  ρ_2(x(t)) = e^{−‖x(t) − t_2‖²}, with t_2 = (0, 0)
- The output node can be a step or sigmoid unit
- When an input x is applied, each hidden node calculates the distance ‖x − t_j‖, then its output ρ_j(x)
- All weights to the hidden nodes are set to 1; the weights to the output node are trained by LMS
- t_1 and t_2 can also be trained
- Hidden-node outputs for the four patterns:
  x(t)    ρ_1     ρ_2
  (1,1)   1       0.135
  (0,1)   0.368   0.368
  (1,0)   0.368   0.368
  (0,0)   0.135   1
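The hidden-node activations for this RBF solution of XOR follow directly from the two Gaussians above; the helper name `rbf_outputs` is an assumption for illustration.

```python
import math

def rbf_outputs(x):
    """Hidden-node activations for the RBF solution of XOR.

    Two Gaussian hidden nodes centered at t1 = (1,1) and t2 = (0,0):
    rho_j(x) = exp(-||x - t_j||**2).
    """
    d1 = (x[0] - 1) ** 2 + (x[1] - 1) ** 2   # squared distance to t1
    d2 = x[0] ** 2 + x[1] ** 2               # squared distance to t2
    return math.exp(-d1), math.exp(-d2)
```

The mixed patterns (0,1) and (1,0) map to the same point in (ρ_1, ρ_2) space, which is what makes the classes linearly separable there.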
24 NN of Radial Basis Functions
- Function interpolation: suppose we know f(x_1) and f(x_2), and want to approximate f(x_0) for x_1 < x_0 < x_2 by linear interpolation:
  f(x_0) = f(x_1) + (f(x_2) − f(x_1)) (x_0 − x_1)/(x_2 − x_1)
- Let D_1 = |x_0 − x_1| and D_2 = |x_0 − x_2| be the distances of x_0 from x_1 and x_2; then
  f(x_0) = [f(x_1) D_2 + f(x_2) D_1]/(D_1 + D_2) = [f(x_1) D_1^{−1} + f(x_2) D_2^{−1}]/(D_1^{−1} + D_2^{−1})
  i.e., a sum of function values, weighted and normalized by distances
- Generalized to interpolating from more than two known f values:
  f(x_0) = [D_1^{−1} f(x_1) + D_2^{−1} f(x_2) + … + D_P^{−1} f(x_P)]/(D_1^{−1} + D_2^{−1} + … + D_P^{−1})
  where P is the number of neighbors of x_0
- Only those x_i with a small distance to x_0 are useful
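The distance-weighted interpolation above can be written in a few lines; the function name `idw` is an assumption, and 1D inputs are used for simplicity.

```python
def idw(x0, xs, fs):
    """Inverse-distance-weighted interpolation of f at x0.

    f(x0) = (sum_p f(x_p)/D_p) / (sum_p 1/D_p) with D_p = |x0 - x_p|,
    which for two points reduces to [f(x1) D2 + f(x2) D1] / (D1 + D2).
    """
    ws = [1.0 / abs(x0 - xp) for xp in xs]
    return sum(w * f for w, f in zip(ws, fs)) / sum(ws)
```

With two known points the result coincides with straight-line interpolation, as the slide's two-point formula shows.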
25 NN of Radial Basis Functions
- Example: 8 samples with known function values; f(x_0) can be interpolated using only its 4 nearest neighbors x_2, x_3, x_4, x_5:
  f(x_0) = [D_2^{−1} f(x_2) + D_3^{−1} f(x_3) + D_4^{−1} f(x_4) + D_5^{−1} f(x_5)]/(D_2^{−1} + D_3^{−1} + D_4^{−1} + D_5^{−1})
26 NN of Radial Basis Functions
- Using RBF nodes to achieve the neighborhood effect:
  - one hidden RBF node per sample x_p, with μ = x_p (larger ρ for smaller D)
  - the output node computes net = Σ_{p=1..P} w_p ρ(‖x − x_p‖)
  - with weights w_p = d_p/P, where d_p = f(x_p), the network output approximates f(x)
27 NN of Radial Basis Functions
- Clustering samples:
  - too many hidden nodes when the # of samples is large
  - group similar samples (those having similar inputs and similar desired outputs) into N clusters, each with:
    - a center: the vector μ_i
    - a mean desired output
- The network output is a weighted sum over the cluster nodes
- Suppose we know how to determine N and how to cluster all P samples (not an easy task in itself); μ_i and w_i can then be determined by learning
28 NN of Radial Basis Functions
- Learning in an RBF net:
  - objective: learn w_i and μ_i to minimize the error E
  - gradient descent approach (sequential mode), where the function R is defined by ρ(D) = R(D²)
  - one can also obtain μ_i by other clustering techniques, then use GD learning for w_i only
29 NN of Radial Basis Functions
- With D_i = ‖x_p − μ_i‖, o_p = Σ_{i=1..N} w_i ρ(D_i), and E = Σ_{p=1..P} E_p = Σ_{p=1..P} (d_p − o_p)²:
- Learning w_i:
  Δw_i = −η ∂E_p/∂w_i = 2η (d_p − o_p) ρ(‖x_p − μ_i‖)
- Learning μ_{i,j}: with ρ(D) = R(D²),
  Δμ_{i,j} = −η ∂E_p/∂μ_{i,j} = −4η w_i (d_p − o_p) R′(‖x_p − μ_i‖²)(x_{p,j} − μ_{i,j})
- For Gaussian functions: ρ(D) = exp(−D²/σ), so R(D) = exp(−D/σ) and R′(D) = −(1/σ) exp(−D/σ); hence
  Δw_i = 2η (d_p − o_p) exp(−‖x_p − μ_i‖²/σ)
  Δμ_{i,j} = (4η/σ) w_i (d_p − o_p)(x_{p,j} − μ_{i,j}) exp(−‖x_p − μ_i‖²/σ)
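A sequential-mode update for the Gaussian case can be sketched as follows. The function name `rbf_gd_step` is an assumption, and the constant factors from the chain rule are absorbed into the learning rate `eta`, so this is a sketch of the update directions rather than the exact coefficients.

```python
import numpy as np

def rbf_gd_step(x, d, W, mu, sigma=1.0, eta=0.05):
    """Sequential gradient-descent update of a Gaussian RBF net on one sample.

    rho_i(x) = exp(-||x - mu_i||**2 / sigma); output o = sum_i w_i rho_i(x).
    Chain-rule constants are absorbed into eta.
    """
    diff = x - mu                                   # (N, dim): x - mu_i per row
    rho = np.exp(-np.sum(diff ** 2, axis=1) / sigma)
    o = W @ rho
    err = d - o
    W_new = W + eta * err * rho                     # move each w_i along rho_i
    mu_new = mu + eta * err * (W * rho)[:, None] * diff  # pull centers toward x
    return W_new, mu_new, o
```

Note that a center only moves while its weight is nonzero, so in practice the weights start adapting first.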
30 NN of Radial Basis Functions
- A strategy for learning an RBF net:
  - start with a single RBF hidden node for a single cluster containing only the first training sample
  - for each new training sample x:
    - if it is close to any of the existing clusters, do gradient-descent updates of w and μ for all existing clusters/hidden nodes
    - otherwise, add a new hidden node for a cluster containing only x
- RBF networks are universal approximators (same representational power as BP networks)
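The cluster-allocation part of this strategy can be sketched as follows; the function name `grow_rbf`, the fixed closeness `radius`, and the omission of the gradient-descent updates (which would run on every "close" sample) are assumptions of this simplified sketch.

```python
import numpy as np

def grow_rbf(samples, radius=1.0):
    """Allocate RBF clusters incrementally (sketch of the growth strategy).

    Start with one cluster on the first sample; each later sample joins an
    existing cluster if it lies within `radius` of some center (the full
    method would then run GD updates), otherwise it founds a new hidden node.
    """
    centers = [np.asarray(samples[0], dtype=float)]
    for x in samples[1:]:
        x = np.asarray(x, dtype=float)
        if min(np.linalg.norm(x - c) for c in centers) > radius:
            centers.append(x)      # new hidden node for a new cluster
    return centers
```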
31 Polynomial Networks
- Polynomial networks: node functions allow direct computation of polynomials of the inputs
- Can approximate higher-order functions with fewer nodes, even without hidden nodes; each node has more connection weights
- Higher-order networks: for n inputs and order k, the # of weights per node grows rapidly with k (one weight for every product of up to k inputs)
- Can be trained by LMS
- A general function approximator
32 Polynomial Networks
- Sigma-pi networks:
  - do not allow terms with higher powers of the inputs, so they are not general function approximators
  - # of weights per node: one per product of up to k distinct inputs
  - can be trained by LMS
- Pi-sigma networks:
  - one hidden layer with sigma (summation) nodes; output nodes with a pi (product) function
- Product units:
  - each node computes a product Π_j x_j^{p_{j,i}}
  - the integer powers p_{j,i} can be learned
  - often mixed with other units (e.g., sigmoid)
More informationGraph-Modeled Data Clustering: Fixed-Parameter Algorithms for Clique Generation
Graph-Modeled Data Clstering: Fied-Parameter Algorithms for Cliqe Generation Jens Gramm Jiong Go Falk Hüffner Rolf Niedermeier Wilhelm-Schickard-Institt für Informatik, Universität Tübingen, Sand 13, D-72076
More informationMulti-Voltage Floorplan Design with Optimal Voltage Assignment
Mlti-Voltage Floorplan Design with Optimal Voltage Assignment ABSTRACT Qian Zaichen Department of CSE The Chinese University of Hong Kong Shatin,N.T., Hong Kong zcqian@cse.chk.ed.hk In this paper, we stdy
More informationTrace-class Monte Carlo Markov Chains for Bayesian Multivariate Linear Regression with Non-Gaussian Errors
Trace-class Monte Carlo Markov Chains for Bayesian Mltivariate Linear Regression with Non-Gassian Errors Qian Qin and James P. Hobert Department of Statistics University of Florida Janary 6 Abstract Let
More informationUncertainties of measurement
Uncertainties of measrement Laboratory tas A temperatre sensor is connected as a voltage divider according to the schematic diagram on Fig.. The temperatre sensor is a thermistor type B5764K [] with nominal
More informationA New Approach to Direct Sequential Simulation that Accounts for the Proportional Effect: Direct Lognormal Simulation
A ew Approach to Direct eqential imlation that Acconts for the Proportional ffect: Direct ognormal imlation John Manchk, Oy eangthong and Clayton Detsch Department of Civil & nvironmental ngineering University
More informationLinear System Theory (Fall 2011): Homework 1. Solutions
Linear System Theory (Fall 20): Homework Soltions De Sep. 29, 20 Exercise (C.T. Chen: Ex.3-8). Consider a linear system with inpt and otpt y. Three experiments are performed on this system sing the inpts
More informationStability of Model Predictive Control using Markov Chain Monte Carlo Optimisation
Stability of Model Predictive Control sing Markov Chain Monte Carlo Optimisation Elilini Siva, Pal Golart, Jan Maciejowski and Nikolas Kantas Abstract We apply stochastic Lyapnov theory to perform stability
More informationCollective Inference on Markov Models for Modeling Bird Migration
Collective Inference on Markov Models for Modeling Bird Migration Daniel Sheldon Cornell University dsheldon@cs.cornell.ed M. A. Saleh Elmohamed Cornell University saleh@cam.cornell.ed Dexter Kozen Cornell
More information3. Several Random Variables
. Several Random Variables. To Random Variables. Conditional Probabilit--Revisited. Statistical Independence.4 Correlation beteen Random Variables Standardied (or ero mean normalied) random variables.5
More informationRadial Basis Function (RBF) Networks
CSE 5526: Introduction to Neural Networks Radial Basis Function (RBF) Networks 1 Function approximation We have been using MLPs as pattern classifiers But in general, they are function approximators Depending
More informationCubic graphs have bounded slope parameter
Cbic graphs have bonded slope parameter B. Keszegh, J. Pach, D. Pálvölgyi, and G. Tóth Agst 25, 2009 Abstract We show that every finite connected graph G with maximm degree three and with at least one
More informationLinear and Nonlinear Model Predictive Control of Quadruple Tank Process
Linear and Nonlinear Model Predictive Control of Qadrple Tank Process P.Srinivasarao Research scholar Dr.M.G.R.University Chennai, India P.Sbbaiah, PhD. Prof of Dhanalaxmi college of Engineering Thambaram
More informationMATH2715: Statistical Methods
MATH275: Statistical Methods Exercises VI (based on lectre, work week 7, hand in lectre Mon 4 Nov) ALL qestions cont towards the continos assessment for this modle. Q. The random variable X has a discrete
More informationThe Cryptanalysis of a New Public-Key Cryptosystem based on Modular Knapsacks
The Cryptanalysis of a New Pblic-Key Cryptosystem based on Modlar Knapsacks Yeow Meng Chee Antoine Jox National Compter Systems DMI-GRECC Center for Information Technology 45 re d Ulm 73 Science Park Drive,
More informationPC1 PC4 PC2 PC3 PC5 PC6 PC7 PC8 PC
Accracy verss interpretability in exible modeling: implementing a tradeo sing Gassian process models Tony A. Plate (tap@mcs.vw.ac.nz) School of Mathematical and Compting Sciences Victoria University of
More informationFEA Solution Procedure
EA Soltion Procedre (demonstrated with a -D bar element problem) EA Procedre for Static Analysis. Prepare the E model a. discretize (mesh) the strctre b. prescribe loads c. prescribe spports. Perform calclations
More informationAN ALTERNATIVE DECOUPLED SINGLE-INPUT FUZZY SLIDING MODE CONTROL WITH APPLICATIONS
AN ALTERNATIVE DECOUPLED SINGLE-INPUT FUZZY SLIDING MODE CONTROL WITH APPLICATIONS Fang-Ming Y, Hng-Yan Chng* and Chen-Ning Hang Department of Electrical Engineering National Central University, Chngli,
More informationAn Investigation into Estimating Type B Degrees of Freedom
An Investigation into Estimating Type B Degrees of H. Castrp President, Integrated Sciences Grop Jne, 00 Backgrond The degrees of freedom associated with an ncertainty estimate qantifies the amont of information
More informationAN ALTERNATIVE DECOUPLED SINGLE-INPUT FUZZY SLIDING MODE CONTROL WITH APPLICATIONS
AN ALTERNATIVE DECOUPLED SINGLE-INPUT FUZZY SLIDING MODE CONTROL WITH APPLICATIONS Fang-Ming Y, Hng-Yan Chng* and Chen-Ning Hang Department of Electrical Engineering National Central University, Chngli,
More informationReflections on a mismatched transmission line Reflections.doc (4/1/00) Introduction The transmission line equations are given by
Reflections on a mismatched transmission line Reflections.doc (4/1/00) Introdction The transmission line eqations are given by, I z, t V z t l z t I z, t V z, t c z t (1) (2) Where, c is the per-nit-length
More informationChapter 2 Difficulties associated with corners
Chapter Difficlties associated with corners This chapter is aimed at resolving the problems revealed in Chapter, which are cased b corners and/or discontinos bondar conditions. The first section introdces
More informationPREDICTABILITY OF SOLID STATE ZENER REFERENCES
PREDICTABILITY OF SOLID STATE ZENER REFERENCES David Deaver Flke Corporation PO Box 99 Everett, WA 986 45-446-6434 David.Deaver@Flke.com Abstract - With the advent of ISO/IEC 175 and the growth in laboratory
More informationSensitivity Analysis in Bayesian Networks: From Single to Multiple Parameters
Sensitivity Analysis in Bayesian Networks: From Single to Mltiple Parameters Hei Chan and Adnan Darwiche Compter Science Department University of California, Los Angeles Los Angeles, CA 90095 {hei,darwiche}@cs.cla.ed
More informationEssentials of optimal control theory in ECON 4140
Essentials of optimal control theory in ECON 4140 Things yo need to know (and a detail yo need not care abot). A few words abot dynamic optimization in general. Dynamic optimization can be thoght of as
More information