Unsupervised Learning: Dimension Reduction
by Prof. Seungchul Lee, iSystems Design Lab, UNIST

Table of Contents

1. Principal Component Analysis (PCA)
2. PCA Algorithm
   2.1. Pre-processing
   2.2. Maximize variance
   2.3. Minimize the sum-of-squared error
   2.4. Dimension Reduction method (n → k)
3. Python Codes
4. PCA Example
5. PCA Visualization
   5.1. Eigenvalues
   5.2. Eigenvectors

1. Principal Component Analysis (PCA)

Motivation: can we describe high-dimensional data in a "simpler" way?
- Dimension reduction without losing too much information
- Find a low-dimensional, yet useful, representation of the data

Why dimensionality reduction?
- Insights into the low-dimensional structures in the data (visualization)
- Fewer dimensions → less chance of overfitting → better generalization
- Speeding up learning algorithms
  - Most algorithms scale badly with increasing data dimensionality
- Less storage requirements (data compression)

Note: dimensionality reduction is different from feature selection, although the goals are somewhat the same. Dimensionality reduction is more like feature extraction: constructing a small set of new features from the original features.
How? Idea: highly correlated data contains redundant features.

Case 1: the data spread is mostly along one axis
- Each example has 2 features $\{x_1, x_2\}$
- Consider ignoring the feature $x_2$ for each example
- Each 2-dimensional example $x$ now becomes 1-dimensional, $x = \{x_1\}$
- Are we losing much information by throwing away $x_2$?
- No. Most of the data spread is along $x_1$ (very little variance along $x_2$)

Case 2: the data spread is along both axes
- Each example has 2 features $\{x_1, x_2\}$
- Consider ignoring the feature $x_2$ for each example
- Each 2-dimensional example $x$ now becomes 1-dimensional, $x = \{x_1\}$
- Are we losing much information by throwing away $x_2$?
- Yes. The data has substantial variance along both features (i.e., both axes)
- Now consider a change of axes
- Each example $x$ has 2 features $\{u_1, u_2\}$
- Consider ignoring the feature $u_2$ for each example
- Each 2-dimensional example $x$ now becomes 1-dimensional, $x = \{u_1\}$
- Are we losing much information by throwing away $u_2$?
- No. Most of the data spread is along $u_1$ (very little variance along $u_2$)
Data projection onto a unit vector $\hat{u}$

- PCA is used when we want projections capturing the maximum-variance directions
- Principal Components (PC): directions of maximum variability in the data
- Roughly speaking, PCA does a change of axes that can represent the data in a succinct manner

HOW?
1. Maximize variance (most separable)
2. Minimize the sum-of-squares (minimum squared error)

2. PCA Algorithm
2.1. Pre-processing

Given data

$$x^{(i)} = \begin{bmatrix} x_1^{(i)} \\ \vdots \\ x_n^{(i)} \end{bmatrix}, \qquad X = \begin{bmatrix} \left(x^{(1)}\right)^T \\ \left(x^{(2)}\right)^T \\ \vdots \\ \left(x^{(m)}\right)^T \end{bmatrix}$$

apply shifting (zero mean) and rescaling (unit variance):

1. Shift to zero mean

$$\mu = \frac{1}{m}\sum_{i=1}^{m} x^{(i)}, \qquad x^{(i)} \leftarrow x^{(i)} - \mu \quad \text{(zero mean)}$$

2. [optional] Rescaling (unit variance)

$$\sigma_j^2 = \frac{1}{m}\sum_{i=1}^{m}\left(x_j^{(i)}\right)^2, \qquad x_j^{(i)} \leftarrow \frac{x_j^{(i)}}{\sigma_j}$$

2.2. Maximize variance

Find a unit vector $u$ that maximizes the variance of the projections:

$$\begin{aligned}
\text{variance of projected data} &= \frac{1}{m}\sum_{i=1}^{m}\left(u^T x^{(i)}\right)^2 = \frac{1}{m}\sum_{i=1}^{m}\left(x^{(i)T} u\right)^2 \\
&= \frac{1}{m}\sum_{i=1}^{m}\left(x^{(i)T} u\right)^T\left(x^{(i)T} u\right) = \frac{1}{m}\sum_{i=1}^{m} u^T x^{(i)} x^{(i)T} u \\
&= u^T\left(\frac{1}{m}\sum_{i=1}^{m} x^{(i)} x^{(i)T}\right)u \\
&= u^T S u \qquad \left(S = \frac{1}{m} X^T X: \text{sample covariance matrix}\right)
\end{aligned}$$

In an optimization form:

maximize $\;u^T S u\;$ subject to $\;u^T u = 1$

$$u^T S u = u^T \lambda u = \lambda\, u^T u = \lambda \qquad (\text{eigen-analysis: } Su = \lambda u)$$

- Pick the largest eigenvalue $\lambda_1$ of the covariance matrix $S$
- $u = u_1$ is $\lambda_1$'s corresponding eigenvector
- $u_1$ is the first principal component (the direction of highest variance in the data)
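As a quick sanity check, here is a minimal sketch on toy data (not part of the original notes; the variable names are illustrative): the unit vector that maximizes $u^T S u$ is the eigenvector of the largest eigenvalue of $S$, and the variance along it equals $\lambda_1$.

import numpy as np

# a minimal sketch on illustrative toy data
rng = np.random.default_rng(0)
X = rng.multivariate_normal([0, 0], [[3, 1.5], [1.5, 1]], size=1000)
X = X - X.mean(axis=0)           # pre-processing: shift to zero mean
S = X.T @ X / X.shape[0]         # sample covariance matrix
lam, U = np.linalg.eigh(S)       # eigh: symmetric S, eigenvalues in ascending order
u1 = U[:, -1]                    # eigenvector of the largest eigenvalue
print(lam[-1], u1 @ S @ u1)      # variance along u1 equals lambda_1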
2.3. Minimize the sum-of-squared error

$$\begin{aligned}
\left\|e^{(i)}\right\|^2 &= \left\|x^{(i)}\right\|^2 - \left(x^{(i)T} u\right)^2 \\
&= \left\|x^{(i)}\right\|^2 - \left(x^{(i)T} u\right)^T\left(x^{(i)T} u\right) \\
&= \left\|x^{(i)}\right\|^2 - u^T x^{(i)} x^{(i)T} u
\end{aligned}$$

$$\begin{aligned}
\frac{1}{m}\sum_{i=1}^{m}\left\|e^{(i)}\right\|^2 &= \frac{1}{m}\sum_{i=1}^{m}\left\|x^{(i)}\right\|^2 - \frac{1}{m}\sum_{i=1}^{m} u^T x^{(i)} x^{(i)T} u \\
&= \frac{1}{m}\sum_{i=1}^{m}\left\|x^{(i)}\right\|^2 - u^T\left(\frac{1}{m}\sum_{i=1}^{m} x^{(i)} x^{(i)T}\right)u
\end{aligned}$$

In an optimization form: the first term $\frac{1}{m}\sum_{i=1}^{m}\left\|x^{(i)}\right\|^2$ is constant given the $x^{(i)}$, so minimizing the error means maximizing

$$u^T\left(\frac{1}{m}\sum_{i=1}^{m} x^{(i)} x^{(i)T}\right)u = u^T S u$$

$$\text{minimize error}^2 \;\Longleftrightarrow\; \text{maximize variance}$$
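This equivalence can be checked numerically. The following is a minimal sketch with illustrative names (not part of the original notes): for any unit vector $u$, the mean squared residual plus the mean squared projection equals the mean squared norm of the zero-mean data, so the two objectives trade off exactly.

import numpy as np

# a minimal numerical check of the identity above (illustrative data)
rng = np.random.default_rng(1)
X = rng.multivariate_normal([0, 0], [[3, 1.5], [1.5, 1]], size=500)
X = X - X.mean(axis=0)
u = np.array([1.0, 0.5]) / np.linalg.norm([1.0, 0.5])  # any unit vector
proj = X @ u                                           # scalar projections x^T u
resid = X - np.outer(proj, u)                          # residuals e = x - (x^T u) u
lhs = np.mean(np.sum(resid**2, axis=1)) + np.mean(proj**2)
rhs = np.mean(np.sum(X**2, axis=1))
print(np.isclose(lhs, rhs))                            # True: error^2 + variance = constant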
2.4. Dimension Reduction method ($n \to k$)

1. Choose the top $k$ (orthonormal) eigenvectors $u_1, u_2, \ldots, u_k$
2. Project $x^{(i)}$ onto $\text{span}\{u_1, u_2, \ldots, u_k\}$:

$$z^{(i)} = \begin{bmatrix} u_1^T x^{(i)} \\ u_2^T x^{(i)} \\ \vdots \\ u_k^T x^{(i)} \end{bmatrix}, \qquad U = [u_1, u_2, \ldots, u_k], \qquad \text{or} \qquad z = U^T x$$

Pictorial summary of PCA: $x^{(i)}$ → projection onto the unit vector $u_1$ → $u_1^T x^{(i)}$ = distance from the origin along $u_1$

(A minimal self-contained sketch of this reduction step follows the imports below.)

3. Python Codes

In [3]:
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import axes3d
%matplotlib inline
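The following is a minimal, self-contained sketch of steps 1 and 2 of section 2.4 (the function name pca_reduce and its arguments are illustrative, not from the original notes):

import numpy as np

def pca_reduce(X, k):
    # project an (m x n) data matrix X onto its top-k principal components
    Xc = X - X.mean(axis=0)                 # shift to zero mean
    S = Xc.T @ Xc / (X.shape[0] - 1)        # sample covariance matrix (n x n)
    D, V = np.linalg.eigh(S)                # eigenvalues/eigenvectors of symmetric S
    Uk = V[:, np.argsort(-D)[:k]]           # top-k eigenvectors as columns (n x k)
    return Xc @ Uk                          # z = U^T x for every example (m x k)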
In [2]:
# data generation
m = 5000
mu = np.array([0, 0])
sigma = np.array([[3, 1.5],
                  [1.5, 1]])

X = np.random.multivariate_normal(mu, sigma, m)
X = np.asmatrix(X)

fig = plt.figure(figsize=(10, 6))
plt.plot(X[:,0], X[:,1], 'k.')
plt.axis('equal')
plt.show()

In [3]:
S = 1/(m-1)*X.T*X   # sample covariance (the data is generated with zero mean)
S = np.asmatrix(S)

D, V = np.linalg.eig(S)
idx = np.argsort(-D)
D = D[idx]
V = V[:,idx]
print(D)
print(V)
In [4]:
h = V[1,0]/V[0,0]            # slope of the first principal direction
xp = np.arange(-6, 6, 0.1)
yp = h*xp

fig = plt.figure(figsize=(10, 6))
plt.plot(X[:,0], X[:,1], 'k.')
plt.plot(xp, yp, 'r', linewidth=4.0)
plt.axis('equal')
plt.show()
In [5]:
Z = X*V[:,0]                 # project onto the first principal component

plt.figure(figsize=(10, 6))
plt.hist(Z, 51)
plt.show()

4. PCA Example

Multiple video camera records of a spring and mass system

source: (
In [6]:
%%html
<center><iframe width="560" height="315" src=" frameborder="0" allowfullscreen>
</iframe></center>

3 cut

In [7]:
%%html
<center><iframe width="560" height="315" src=" frameborder="0" allowfullscreen>
</iframe></center>

4 cut
In [8]:
%%html
<center><iframe width="560" height="315" src=" frameborder="0" allowfullscreen>
</iframe></center>

5 cut

$$x^{(i)} = \begin{bmatrix} x \text{ in camera 1} \\ y \text{ in camera 1} \\ x \text{ in camera 2} \\ y \text{ in camera 2} \\ x \text{ in camera 3} \\ y \text{ in camera 3} \end{bmatrix}, \qquad X = \begin{bmatrix} \left(x^{(1)}\right)^T \\ \left(x^{(2)}\right)^T \\ \vdots \\ \left(x^{(m)}\right)^T \end{bmatrix}$$

In [4]:
from six.moves import cPickle

X = cPickle.load(open('./data_files/pca_spring.pkl', 'rb'))
X = np.asmatrix(X.T)

print(X.shape)
m = X.shape[0]

(273, 6)
In [5]:
plt.figure(figsize=(15, 7))

plt.subplot(131)
plt.plot(X[:,0], -X[:,1], 'r')
plt.axis('equal')
plt.title('camera 1')

plt.subplot(132)
plt.plot(X[:,2], -X[:,3], 'b')
plt.axis('equal')
plt.title('camera 2')

plt.subplot(133)
plt.plot(X[:,4], -X[:,5], 'k')
plt.axis('equal')
plt.title('camera 3')

plt.show()
In [6]:
X = X - np.mean(X, axis=0)   # shift to zero mean

S = 1/(m-1)*X.T*X
S = np.asmatrix(S)

D, V = np.linalg.eig(S)
idx = np.argsort(-D)
D = D[idx]
V = V[:,idx]
print(D)
print(V)

In [7]:
plt.figure(figsize=(10, 6))
plt.stem(np.sqrt(D))
plt.show()
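A natural follow-up to the stem plot of $\sqrt{\lambda_i}$ is the fraction of total variance explained by each principal component. This is a minimal sketch, not in the original notes; it reuses the eigenvalues D computed above:

ratio = D/np.sum(D)          # explained variance ratio per component
print(ratio)                 # the first component should dominate
print(np.cumsum(ratio))      # cumulative explained variance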
In [9]:
# relative magnitudes of the principal components

Z = X*V
xp = np.arange(0, m)/24      # time axis (frames at 24 fps)

plt.figure(figsize=(10, 6))
plt.plot(xp, Z)
plt.yticks([])
plt.show()
In [10]:
## projected onto the first principal component
# 6 dim -> 1 dim (dim reduction)
# relative magnitude of the first principal component

Z = X*V[:,0]

plt.figure(figsize=(10, 6))
plt.plot(Z)
plt.yticks([])
plt.show()

Reference: John P. Cunningham & Byron M. Yu, "Dimensionality reduction for large-scale neural recordings", Nature Neuroscience 17 (2014)
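As an aside (a minimal sketch, not in the original notes; it reuses the centered X and the eigenvector matrix V from the cells above), the six camera signals can be approximately reconstructed from the single retained component, which shows how little is lost by the 6 → 1 reduction:

u1 = V[:,0]                         # first principal direction (6 x 1)
Z1 = X*u1                           # 1-D scores (m x 1)
X_hat = Z1*u1.T                     # rank-1 reconstruction of the centered data (m x 6)
print(np.linalg.norm(X - X_hat)/np.linalg.norm(X))   # relative reconstruction error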
5. PCA Visualization

5.1. Eigenvalues

- $\lambda_1$, $\lambda_2$ indicate the variance along the corresponding eigenvectors, respectively.
- The larger an eigenvalue is, the more dominant the corresponding feature (eigenvector) is.

5.2. Eigenvectors

From the given basis $\{\hat{x}_1, \hat{x}_2\}$ to the transformed basis $\{\hat{u}_1, \hat{u}_2\}$:

$$\begin{bmatrix} \hat{u}_1 & \hat{u}_2 \end{bmatrix} = \begin{bmatrix} \hat{x}_1 & \hat{x}_2 \end{bmatrix}\begin{bmatrix} c_1 & c_2 \\ c_3 & c_4 \end{bmatrix}$$

In [5]:
%%javascript
$.getScript(' c.js')
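To make the change of basis concrete, here is a minimal sketch with illustrative numbers (not from the original notes): the columns of the eigenvector matrix hold the new basis vectors $\hat{u}_1, \hat{u}_2$ written in the original coordinates, and multiplying by its transpose maps a point into the new coordinates.

import numpy as np

# a change-of-basis sketch with an illustrative orthonormal eigenvector matrix
c, s = np.cos(0.46), np.sin(0.46)
V = np.array([[c, -s],
              [s,  c]])              # columns are the basis vectors u1, u2
x = np.array([2.0, 1.0])             # a point in the original basis {x1, x2}
z = V.T @ x                          # its coordinates in the eigenvector basis {u1, u2}
print(z, V @ z)                      # V @ z maps back and recovers x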