Computational Information Games Houman Owhadi
1 Computational Information Games Houman Owhadi TexAMP 2015
2 Main Question Can we turn the process of discovery of a scalable numerical method into a UQ problem and, to some degree, solve it as such in an automated fashion? Can we use a computer, not only to implement a numerical method, but also to find the method itself?
3 Problem: Find a method for solving (1) as fast as possible to a given accuracy. (1) -div(a∇u) = g, x ∈ Ω; u = 0, x ∈ ∂Ω. Here Ω ⊂ R^d is a piecewise Lipschitz domain and a is uniformly elliptic with entries a_{i,j} ∈ L^∞(Ω) (no smoothness beyond L^∞ is assumed).
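For concreteness, here is a minimal 1D sketch of the model problem (my own illustration, not from the talk): a standard 3-point finite-difference discretization of -(a u')' = g on (0,1) with u(0) = u(1) = 0 and a rough but uniformly elliptic coefficient. The coefficient choice and grid names are illustrative assumptions.

```python
import numpy as np

def assemble_1d(a, n):
    """Assemble the 3-point finite-difference system for -(a u')' = g
    on (0,1) with u(0) = u(1) = 0.
    a: coefficient sampled at the n cell midpoints."""
    h = 1.0 / n
    A = np.zeros((n - 1, n - 1))
    for k in range(n - 1):
        A[k, k] = (a[k] + a[k + 1]) / h**2
        if k > 0:
            A[k, k - 1] = -a[k] / h**2
        if k < n - 2:
            A[k, k + 1] = -a[k + 1] / h**2
    return A

n = 64
rng = np.random.default_rng(0)
# "rough" coefficient: uniformly elliptic, oscillating, no smoothness assumed
a = np.exp(rng.uniform(-1.0, 1.0, n))
g = np.ones(n - 1)          # right-hand side at the interior nodes
A = assemble_1d(a, n)
u = np.linalg.solve(A, g)   # the discrete solution
```

The assembled matrix is symmetric and an M-matrix, so with g > 0 the discrete maximum principle gives u > 0, which is a cheap sanity check on the assembly.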
4 Multigrid Methods Multigrid: [Fedorenko, 1961, Brandt, 1973, Hackbusch, 1978]. Multiresolution/wavelet-based methods: [Brewster and Beylkin, 1995, Beylkin and Coult, 1998, Averbuch et al., 1998]. Linear complexity with smooth coefficients. Problem: severely affected by lack of smoothness.
5 Robust/algebraic multigrid: [Mandel et al., 1999, Wan-Chan-Smith, 1999, Xu and Zikatanov, 2004, Xu and Zhu, 2008], [Ruge-Stüben, 1987], [Panayot, 2010]. Stabilized hierarchical bases, multilevel preconditioners: [Vassilevski-Wang, 1997, 1998], [Panayot-Vassilevski, 1997], [Chow-Vassilevski, 2003], [Aksoylu-Holst, 2010]. Some degree of robustness, but the problem remains open with rough coefficients. Why? The interpolation operators are unknown: we don't know how to bridge scales with rough coefficients!
6 Low-Rank Matrix Decomposition Methods Fast Multipole Method: [Greengard and Rokhlin, 1987]. Hierarchical matrix method: [Hackbusch et al., 2002], [Bebendorf, 2008]: O(N ln^{d+3} N) complexity.
7 Common theme between these methods Their process of discovery is based on intuition, brilliant insight, and guesswork. Can we turn this process of discovery into an algorithm?
8 Answer: YES Compute fast by playing an adversarial information game: identify the game, compute with partial information, find the optimal strategy. [Owhadi 2015, Multi-grid with rough coefficients and Multiresolution PDE decomposition from Hierarchical Information Games, arXiv preprint]. Resulting method: O(N ln^2 N) complexity. This is a theorem.
9 Resulting method For -div(a∇u) = g in Ω, u = 0 on ∂Ω: H^1_0(Ω) = W^(1) ⊕_a W^(2) ⊕_a ... ⊕_a W^(k) ⊕_a ..., where ⟨ψ, χ⟩_a := ∫_Ω (∇ψ)^T a ∇χ and ⟨ψ, χ⟩_a = 0 for (ψ, χ) ∈ W^(i) × W^(j), i ≠ j. Theorem: for v ∈ W^(k), C_1 2^k ‖v‖_a ≤ ‖div(a∇v)‖_{L^2(Ω)} ≤ C_2 2^k ‖v‖_a, where ‖v‖^2_a := ⟨v, v⟩_a = ∫_Ω (∇v)^T a ∇v. Looks like an eigenspace decomposition.
10 u = w^(1) + w^(2) + ... + w^(k) + ..., where w^(k) = F.E. solution of the PDE in W^(k). The w^(k) can be computed independently. B^(k): stiffness matrix of the PDE in W^(k). Theorem: λ_max(B^(k))/λ_min(B^(k)) ≤ C. Just relax in W^(k) to find w^(k). Quacks like an eigenspace decomposition.
11 Multiresolution decomposition of the solution space: u = w^(1) + w^(2) + ... + w^(6) (figure: the subband components w^(1), ..., w^(6)). Application: solve the time-discretized wave equation (implicit time steps) with rough coefficients in O(N ln^2 N) complexity. Swims like an eigenspace decomposition.
12 V: F.E. space of H^1_0(Ω) of dimension N. Theorem: the decomposition V = W^(1) ⊕_a W^(2) ⊕_a ... ⊕_a W^(k) can be performed and stored in O(N ln^2 N) operations. Doesn't have the complexity of an eigenspace decomposition.
13 (Figure: ψ^(1) and χ^(2), ..., χ^(6).) Basis functions look like and behave like wavelets: they are localized and can be used to compress the operator and locally analyze the solution space.
14 Reduced operator / inverse problem The map u ↦ -div(a∇u) = g acts on H^1_0(Ω), but numerical implementation requires computation with partial information in R^m: given measurement functions φ_1, ..., φ_m ∈ L^2(Ω), we only see u_m = (∫_Ω φ_1 u, ..., ∫_Ω φ_m u) ∈ R^m. The missing information is u ∈ H^1_0(Ω) itself.
15 Discovery process For -div(a∇u) = g in Ω, u = 0 on ∂Ω, identify the underlying information game. Measurement functions: φ_1, ..., φ_m ∈ L^2(Ω). Player A chooses g ∈ L^2(Ω) with ‖g‖_{L^2(Ω)} ≤ 1 (and maximizes). Player B sees ∫_Ω uφ_1, ..., ∫_Ω uφ_m and chooses an approximation u* ∈ L^2(Ω) (and minimizes). Payoff: the energy-norm error ‖u − u*‖_a, with ‖f‖^2_a := ∫_Ω (∇f)^T a ∇f.
16 Deterministic zero-sum game Players A and B both have a blue and a red marble. At the same time, they show each other a marble; the table gives Player A's payoff. How should A and B play the (repeated) game?
17 Optimal strategies are mixed strategies The optimal way to play is at random (game theory: John von Neumann, John Nash). With A's payoff matrix [3, −2; −2, 1], A showing the first marble with probability p and B with probability q: A's expected payoff = 3pq + (1−p)(1−q) − 2p(1−q) − 2q(1−p) = 1 − 3q + p(8q − 3) = −1/8 for q = 3/8.
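As a sanity check (my own illustration, not from the slides), the equalizing strategy can be verified exactly with rational arithmetic: at q = 3/8 Player A's expected payoff is independent of p and equals −1/8, the value of this game.

```python
from fractions import Fraction

# Player A's payoff when A shows marble i and B shows marble j
payoff = [[Fraction(3), Fraction(-2)],
          [Fraction(-2), Fraction(1)]]

def expected_payoff(p, q):
    """E[payoff] when A plays row 0 with prob p and B plays col 0 with prob q."""
    return (p * q * payoff[0][0] + p * (1 - q) * payoff[0][1]
            + (1 - p) * q * payoff[1][0] + (1 - p) * (1 - q) * payoff[1][1])

q = Fraction(3, 8)  # B's equalizing strategy: makes A indifferent
values = [expected_payoff(Fraction(k, 10), q) for k in range(11)]
assert all(v == Fraction(-1, 8) for v in values)  # value of the game is -1/8
```

Using `Fraction` avoids floating-point noise, so the indifference of A's payoff in p is verified exactly rather than to a tolerance.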
18 Player A chooses g ∈ L^2(Ω) with ‖g‖_{L^2(Ω)} ≤ 1. Player B sees ∫_Ω uφ_1, ..., ∫_Ω uφ_m and chooses u* ∈ L^2(Ω); the payoff is ‖u − u*‖_a. This is a continuous game but, as in decision theory (Abraham Wald), under compactness it can be approximated by a finite game. The best strategy for A is to play at random, and Player B's best strategies live in the Bayesian class of estimators.
19 Player B's class of mixed strategies Pretend that Player A is choosing g ∈ L^2(Ω) at random, i.e. replace g by a random field ξ: -div(a∇u) = g in Ω, u = 0 on ∂Ω, and -div(a∇v) = ξ in Ω, v = 0 on ∂Ω. Player B's bet: u*(x) := E[v(x) | ∫_Ω v(y)φ_i(y) dy = ∫_Ω u(y)φ_i(y) dy for all i]. Player B's optimal strategy (best bet) is then a min-max problem over the distribution of ξ.
20 Computational efficiency Take ξ ~ N(0, Γ). Theorem (Gamblets): elementary gambles form deterministic basis functions for Player B's bet: u*(x) = Σ_{i=1}^m ψ_i(x) ∫_Ω u(y)φ_i(y) dy, where ψ_i(x) := E[v(x) | ∫_Ω v(y)φ_j(y) dy = δ_{i,j}, j ∈ {1, ..., m}]. ψ_i: elementary gambles/bets, i.e. Player B's bet if the measurements are ∫_Ω uφ_j = δ_{i,j}, j = 1, ..., m.
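In a discretized setting the conditional expectation above reduces to the standard Gaussian conditioning formula: for v ~ N(0, Σ) and a measurement matrix Φ, E[v | Φv = m] = ΣΦ^T(ΦΣΦ^T)^{-1} m, so the i-th gamblet is the column ΣΦ^T(ΦΣΦ^T)^{-1} e_i. A toy sketch follows; the covariance and the block-averaging measurements are illustrative assumptions, not the talk's operator-adapted choices.

```python
import numpy as np

def gamblets(Sigma, Phi):
    """Columns are the elementary gambles psi_i = E[v | Phi v = e_i]
    for v ~ N(0, Sigma), by the standard Gaussian conditioning formula."""
    return Sigma @ Phi.T @ np.linalg.inv(Phi @ Sigma @ Phi.T)

rng = np.random.default_rng(1)
n, m = 40, 5
G = rng.standard_normal((n, n))
Sigma = G @ G.T + n * np.eye(n)            # toy SPD covariance
Phi = np.kron(np.eye(m), np.ones(n // m))  # block-averaging measurements
Psi = gamblets(Sigma, Phi)
# conditioning on the unit measurement e_i reproduces it exactly:
assert np.allclose(Phi @ Psi, np.eye(m))
# Player B's bet for any measurement vector w is the combination Psi @ w
```

The identity Φ Ψ = I is exactly the "elementary gamble" property: gamblet i has unit i-th measurement and zero for all the others.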
21 What are these gamblets? They depend on Γ, the covariance function of ξ (Player B's decision), and on the measurement functions (φ_i)_{i=1}^m (the rules of the game). Example [Owhadi, 2014]: with Γ(x, y) = δ(x − y) and φ_i(x) = δ(x − x_i): if a = I_d, the ψ_i are polyharmonic splines [Harder-Desmarais, 1972], [Duchon, 1976, 1977, 1978]; if a_{i,j} ∈ L^∞(Ω), the ψ_i are rough polyharmonic splines [Owhadi-Zhang-Berlyand, 2013].
22 What is Player B's best strategy? What is Player B's best choice for Γ(x, y) = E[ξ(x)ξ(y)]? Answer: Γ = L, i.e. the Gaussian field with ∫_Ω ξ(x)f(x) dx ~ N(0, ‖f‖^2_a), where ‖f‖^2_a := ∫_Ω (∇f)^T a ∇f and L = -div(a∇·). Why? See the algebraic generalization.
23 The recovery is optimal (Galerkin projection) Theorem: if Γ = L, then u*(x) is the F.E. solution of (1) in span{L^{-1}φ_i : i = 1, ..., m}: ‖u − u*‖_a = inf_{ψ ∈ span{L^{-1}φ_i : i ∈ {1,...,m}}} ‖u − ψ‖_a, where L = -div(a∇·) and (1) is -div(a∇u) = g, x ∈ Ω; u = 0, x ∈ ∂Ω.
24 Optimal variational properties Theorem: Σ_{i=1}^m w_i ψ_i minimizes ‖ψ‖_a over all ψ such that ∫_Ω φ_j ψ = w_j for j = 1, ..., m. Variational characterization: ψ_i is the unique minimizer of ‖ψ‖_a subject to ψ ∈ H^1_0(Ω) and ∫_Ω φ_j ψ = δ_{i,j}, j = 1, ..., m.
25 Selection of measurement functions Example: indicator functions of a partition of Ω of resolution H: φ_i = 1_{τ_i} with diam(τ_i) ≤ H. Theorem: ‖u − u*‖_a ≤ (H/√λ_min(a)) ‖g‖_{L^2(Ω)}.
26 Elementary gamble ψ_i: your best bet on the value of the solution u of (1) given the information that ∫_{τ_i} u = 1 and ∫_{τ_j} u = 0 for j ≠ i.
27 Exponential decay of gamblets Theorem: ∫_{Ω∖B(τ_i,r)} (∇ψ_i)^T a ∇ψ_i ≤ e^{−r/(lH)} ‖ψ_i‖^2_a. (Figure: x-axis slices of ψ_i and of log|ψ_i|, showing exponential decay away from τ_i.)
28 Localization of the computation of gamblets ψ_i^{loc,r}: minimizer of ‖ψ‖_a subject to ψ ∈ H^1_0(S_i^r) and ∫_{S_i^r} φ_j ψ = δ_{i,j} for τ_j ⊂ S_i^r, where S_i^r is a neighborhood of τ_i of radius r. There is no loss of accuracy if the localization radius scales like H ln(1/H): with u^{*,loc}(x) = Σ_{i=1}^m ψ_i^{loc,r}(x) ∫_Ω u(y)φ_i(y) dy, Theorem: if r ≥ C H ln(1/H), then ‖u − u^{*,loc}‖_a ≤ (H/√λ_min(a)) ‖g‖_{L^2(Ω)}.
29 Formulation of the hierarchical game
30 Hierarchy of nested measurement functions φ_i^(k), i ∈ I^(k), k ∈ {1, ..., q}, with φ_i^(k) = Σ_j c_{i,j} φ_j^(k+1). Example: indicator functions of a hierarchical nested partition of Ω of resolution H_k = 2^{−k}: φ_i^(k) = 1_{τ_i^(k)}, e.g. φ_2^(1) = 1_{τ_2^(1)}, φ_{2,3}^(2) = 1_{τ_{2,3}^(2)}, φ_{2,3,1}^(3) = 1_{τ_{2,3,1}^(3)}.
31 In the discrete setting, simply aggregate elements (as in algebraic multigrid): fine elements τ_j^(2) are aggregated into coarse elements τ_i^(1) via operators Π^{1,2}, Π^{2,3}, Π^{1,3} acting between the index sets I^1, I^2, I^3. (Figure: measurement functions φ^(1), ..., φ^(6) obtained by aggregation.)
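A minimal sketch of this discrete aggregation (my own illustration): on dyadic partitions of [0,1), the nesting φ_i^(k) = Σ_j c_{i,j} φ_j^(k+1) is realized by 0/1 aggregation matrices, and these matrices compose across levels.

```python
import numpy as np

def aggregation(coarse, fine):
    """0/1 matrix Pi with Pi[i, j] = 1 iff fine cell j lies in coarse cell i,
    for dyadic partitions of [0,1) into `coarse` and `fine` equal cells."""
    ratio = fine // coarse
    return np.kron(np.eye(coarse), np.ones(ratio))

# three nested levels: 2 -> 4 -> 8 cells
P12 = aggregation(2, 4)
P23 = aggregation(4, 8)
# nesting composes: level-1 indicators expressed directly in level-3 cells
P13 = P12 @ P23
assert np.allclose(P13, aggregation(2, 8))
# each level-1 measurement is the sum of 4 level-3 measurements
assert np.allclose(P13.sum(axis=1), 4)
```

The rows of each aggregation matrix are exactly the coefficients c_{i,j} of the nesting relation, here all equal to 1 because the measurements are plain indicator functions.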
32 Formulation of the hierarchy of games Player A chooses g ∈ L^2(Ω) with ‖g‖_{L^2(Ω)} ≤ 1, determining the solution u of -div(a∇u) = g in Ω, u = 0 on ∂Ω. Player B sees {∫_Ω uφ_i^(k) : i ∈ I^(k)} and must predict u and {∫_Ω uφ_j^(k+1) : j ∈ I^(k+1)}.
33 Player B's best strategy Replace g by ξ ~ N(0, L) and consider -div(a∇v) = ξ in Ω, v = 0 on ∂Ω. Player B's bets: u^(k)(x) := E[v(x) | ∫_Ω v(y)φ_i^(k)(y) dy = ∫_Ω u(y)φ_i^(k)(y) dy, i ∈ I^(k)]. The sequence of approximations forms a martingale under the mixed strategy emerging from the game: with F_k = σ(∫_Ω vφ_i^(k), i ∈ I^(k)) and v^(k)(x) := E[v(x) | F_k], Theorem: F_k ⊂ F_{k+1} and v^(k)(x) = E[v^(k+1)(x) | F_k].
34 Accuracy of the recovery Theorem: ‖u − u^(k)‖_a ≤ (H_k/√λ_min(a)) ‖g‖_{L^2(Ω)}, where H_k := max_i diam(τ_i^(k)) and φ_i^(k) = 1_{τ_i^(k)}.
35 In a discrete setting, the last step of the game recovers the solution to numerical precision. (Figure: log(‖u − u^(k)‖_a/‖u‖_a) versus k, and the successive approximations u^(1), ..., u^(6).)
36 Gamblets Elementary gambles form a hierarchy of deterministic basis functions for Player B's hierarchy of bets. Theorem: u^(k)(x) = Σ_i ψ_i^(k)(x) ∫_Ω u(y)φ_i^(k)(y) dy. ψ_i^(k): elementary gambles/bets at resolution H_k = 2^{−k}: ψ_i^(k)(x) := E[v(x) | ∫_Ω v(y)φ_j^(k)(y) dy = δ_{i,j}, j ∈ I^(k)]. (Figure: gamblets ψ^(1), ..., ψ^(6).)
37 Gamblets are nested V^(k) := span{ψ_i^(k) : i ∈ I^(k)}. Theorem: V^(k) ⊂ V^(k+1), with ψ_i^(k)(x) = Σ_{j ∈ I^(k+1)} R^(k)_{i,j} ψ_j^(k+1)(x). (Figure: the gamblet tree ψ_1^(1); ψ_{1,j}^(2); ψ_{1,j,k}^(3).)
38 Interpolation/prolongation operator R^(k)_{i,j} = E[∫_Ω v(y)φ_j^(k+1)(y) dy | ∫_Ω v(y)φ_l^(k)(y) dy = δ_{i,l}, l ∈ I^(k)]: your best bet on the value of ∫_{τ_j^(k+1)} u given the information that ∫_{τ_i^(k)} u = 1 and ∫_{τ_l^(k)} u = 0 for l ≠ i.
39 At this stage you can finish with classical multigrid. But we want a multiresolution decomposition.
40 Elementary gamble χ_i^(k): your best bet on the value of u given the information that ∫_{τ_i^(k)} u = 1, ∫_{τ_{i⁻}^(k)} u = −1 for a sibling cell τ_{i⁻}^(k), and ∫_{τ_j^(k)} u = 0 for all other j.
41 χ_i^(k) = ψ_i^(k) − ψ_{i⁻}^(k), where i = (i_1, ..., i_{k−1}, i_k) and i⁻ = (i_1, ..., i_{k−1}, i_k − 1). (Figure: the gamblet tree ψ_1^(1); ψ_{1,j}^(2).)
42 χ_i^(k) = ψ_i^(k) − ψ_{i⁻}^(k). (Figure: ψ^(1) and the difference gamblets χ^(2), ..., χ^(6).)
43 Multiresolution decomposition of the solution space V^(k) := span{ψ_i^(k) : i ∈ I^(k)}, W^(k) := span{χ_i^(k) : i}. W^(k+1) is the orthogonal complement of V^(k) in V^(k+1) with respect to ⟨ψ, χ⟩_a := ∫_Ω (∇ψ)^T a ∇χ. Theorem: H^1_0(Ω) = V^(1) ⊕_a W^(2) ⊕_a ... ⊕_a W^(k) ⊕_a ...
44 Multiresolution decomposition of the solution Theorem: u^(k+1) − u^(k) = F.E. solution of the PDE in W^(k+1). Thus u = u^(1) + (u^(2) − u^(1)) + (u^(3) − u^(2)) + ..., and the subband solutions u^(k+1) − u^(k) can be computed independently. (Figure: u^(1) and the subband components u^(k+1) − u^(k).)
45 Uniformly bounded condition numbers A^(k)_{i,j} := ⟨ψ_i^(k), ψ_j^(k)⟩_a, B^(k)_{i,j} := ⟨χ_i^(k), χ_j^(k)⟩_a. Theorem: λ_max(B^(k))/λ_min(B^(k)) ≤ C. (Figure: log_10 of the condition numbers of A^(k) versus B^(k).) Just relax in W^(k) to get u^(k) − u^(k−1).
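Why bounded condition numbers matter: plain relaxation on the subband systems then converges at a mesh-independent rate. A sketch (the SPD matrix below is an illustrative stand-in, not an actual gamblet stiffness matrix): Richardson iteration with the optimal step contracts the error by (κ−1)/(κ+1) per sweep, where κ is the condition number.

```python
import numpy as np

def richardson(B, b, iters):
    """Richardson relaxation x <- x + w (b - B x) with the optimal step
    w = 2 / (lambda_min + lambda_max); the error contracts by a factor
    (kappa - 1) / (kappa + 1) per sweep, kappa = cond(B)."""
    lam = np.linalg.eigvalsh(B)
    w = 2.0 / (lam[0] + lam[-1])
    x = np.zeros_like(b)
    for _ in range(iters):
        x = x + w * (b - B @ x)
    return x

rng = np.random.default_rng(3)
n = 50
Q = np.linalg.qr(rng.standard_normal((n, n)))[0]
# well-conditioned SPD matrix with cond = 5, mimicking a subband matrix B^(k)
B = Q @ np.diag(np.linspace(1.0, 5.0, n)) @ Q.T
x_true = rng.standard_normal(n)
b = B @ x_true
x = richardson(B, b, 60)
# with kappa = 5 the contraction factor is 2/3 per sweep, so 60 sweeps suffice
assert np.linalg.norm(x - x_true) < 1e-9 * np.linalg.norm(x_true)
```

Since the bound on κ(B^(k)) in the theorem is independent of the level k, the number of sweeps needed for a fixed accuracy is the same on every subband, which is what makes "just relax" a complete solver here.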
46 Coefficients of the solution in the gamblet basis: u = Σ_i c_i^(1) ψ_i^(1)/‖ψ_i^(1)‖_a + Σ_{k=2}^q Σ_j c_j^(k) χ_j^(k)/‖χ_j^(k)‖_a. (Figure: the coefficient vectors c^(1), ..., c^(6).)
47 Operator Compression Gamblets behave like wavelets, but they are adapted to the PDE and can compress its solution space. Gamblet compression: throw away 99% of the coefficients; compression ratio = 15, energy-norm relative error = 0.07.
48 Fast gamblet transform: O(N ln^2 N) complexity. Nesting: A^(k) = (R^(k,k+1))^T A^(k+1) R^(k,k+1), so level-k gamblets and stiffness matrices can be computed from the level-(k+1) gamblets and stiffness matrices. Well-conditioned linear systems: the underlying linear systems have uniformly bounded condition numbers. Change of basis: ψ_i^(k) = ψ^(k+1)_{(i,1)} + Σ_j C^{(k+1),χ}_{i,j} χ_j^(k+1), with C^{(k+1),χ} = (B^(k+1))^{-1} Z^(k+1) and Z^{(k+1)}_{j,i} := (e_j^(k+1) − e_{j⁻}^(k+1))^T A^(k+1) e^{(k+1)}_{(i,1)}. Localization: the nested computation can be localized without compromising accuracy or condition numbers.
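The nesting relation A^(k) = (R^(k,k+1))^T A^(k+1) R^(k,k+1) is a Galerkin triple product. A toy sketch with a hypothetical linear-interpolation prolongation (in the talk, R is computed from the game rather than assumed):

```python
import numpy as np

def coarsen(A_fine, R):
    """Galerkin coarse stiffness A_coarse = R^T A_fine R, where column i of R
    holds the coefficients of coarse basis function i in the fine basis."""
    return R.T @ A_fine @ R

# fine level: 1D Laplacian stiffness on 8 interior nodes (toy A^(k+1))
n = 8
A_fine = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
# linear-interpolation prolongation onto 4 coarse dofs (hypothetical R)
R = np.zeros((n, n // 2))
for i in range(n // 2):
    R[2 * i, i] = 0.5
    R[2 * i + 1, i] = 1.0
    if 2 * i + 2 < n:
        R[2 * i + 2, i] = 0.5
A_coarse = coarsen(A_fine, R)
assert A_coarse.shape == (4, 4)
assert np.allclose(A_coarse, A_coarse.T)          # symmetry is inherited
assert np.all(np.linalg.eigvalsh(A_coarse) > 0)   # and so is definiteness
```

Because the triple product preserves symmetry and positive definiteness, each level of the hierarchy stays a well-posed SPD system, which is what allows the transform to recurse from the finest level down.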
49 Parallel operating diagram, both in space and in frequency: starting from the fine-scale discretization (φ_i, A^h), compute the level-q gamblets ψ^(q), then χ^(q), B^(q), A^(q) and the subband solution u^(q) − u^(q−1); recurse down through ψ^(k), A^(k) → χ^(k), B^(k) → u^(k) − u^(k−1), until reaching ψ^(1), A^(1), and u^(1).
More information2.29 Numerical Fluid Mechanics Fall 2011 Lecture 12
REVIEW Lecture 11: 2.29 Numercal Flud Mechancs Fall 2011 Lecture 12 End of (Lnear) Algebrac Systems Gradent Methods Krylov Subspace Methods Precondtonng of Ax=b FINITE DIFFERENCES Classfcaton of Partal
More informationCOS 521: Advanced Algorithms Game Theory and Linear Programming
COS 521: Advanced Algorthms Game Theory and Lnear Programmng Moses Charkar February 27, 2013 In these notes, we ntroduce some basc concepts n game theory and lnear programmng (LP). We show a connecton
More informationSpectral Clustering. Shannon Quinn
Spectral Clusterng Shannon Qunn (wth thanks to Wllam Cohen of Carnege Mellon Unverst, and J. Leskovec, A. Raaraman, and J. Ullman of Stanford Unverst) Graph Parttonng Undrected graph B- parttonng task:
More informationn α j x j = 0 j=1 has a nontrivial solution. Here A is the n k matrix whose jth column is the vector for all t j=0
MODULE 2 Topcs: Lnear ndependence, bass and dmenson We have seen that f n a set of vectors one vector s a lnear combnaton of the remanng vectors n the set then the span of the set s unchanged f that vector
More informationCME 302: NUMERICAL LINEAR ALGEBRA FALL 2005/06 LECTURE 13
CME 30: NUMERICAL LINEAR ALGEBRA FALL 005/06 LECTURE 13 GENE H GOLUB 1 Iteratve Methods Very large problems (naturally sparse, from applcatons): teratve methods Structured matrces (even sometmes dense,
More informationDepartment of Computer Science Artificial Intelligence Research Laboratory. Iowa State University MACHINE LEARNING
MACHINE LEANING Vasant Honavar Bonformatcs and Computatonal Bology rogram Center for Computatonal Intellgence, Learnng, & Dscovery Iowa State Unversty honavar@cs.astate.edu www.cs.astate.edu/~honavar/
More informationFeature Selection: Part 1
CSE 546: Machne Learnng Lecture 5 Feature Selecton: Part 1 Instructor: Sham Kakade 1 Regresson n the hgh dmensonal settng How do we learn when the number of features d s greater than the sample sze n?
More informationMaximum Likelihood Estimation of Binary Dependent Variables Models: Probit and Logit. 1. General Formulation of Binary Dependent Variables Models
ECO 452 -- OE 4: Probt and Logt Models ECO 452 -- OE 4 Maxmum Lkelhood Estmaton of Bnary Dependent Varables Models: Probt and Logt hs note demonstrates how to formulate bnary dependent varables models
More informationConvexity preserving interpolation by splines of arbitrary degree
Computer Scence Journal of Moldova, vol.18, no.1(52), 2010 Convexty preservng nterpolaton by splnes of arbtrary degree Igor Verlan Abstract In the present paper an algorthm of C 2 nterpolaton of dscrete
More information8 : Learning in Fully Observed Markov Networks. 1 Why We Need to Learn Undirected Graphical Models. 2 Structural Learning for Completely Observed MRF
10-708: Probablstc Graphcal Models 10-708, Sprng 2014 8 : Learnng n Fully Observed Markov Networks Lecturer: Erc P. Xng Scrbes: Meng Song, L Zhou 1 Why We Need to Learn Undrected Graphcal Models In the
More informationrisk and uncertainty assessment
Optmal forecastng of atmospherc qualty n ndustral regons: rsk and uncertanty assessment Vladmr Penenko Insttute of Computatonal Mathematcs and Mathematcal Geophyscs SD RAS Goal Development of theoretcal
More informationPROBLEM SET 7 GENERAL EQUILIBRIUM
PROBLEM SET 7 GENERAL EQUILIBRIUM Queston a Defnton: An Arrow-Debreu Compettve Equlbrum s a vector of prces {p t } and allocatons {c t, c 2 t } whch satsfes ( Gven {p t }, c t maxmzes βt ln c t subject
More informationHidden Markov Models
Hdden Markov Models Namrata Vaswan, Iowa State Unversty Aprl 24, 204 Hdden Markov Model Defntons and Examples Defntons:. A hdden Markov model (HMM) refers to a set of hdden states X 0, X,..., X t,...,
More informationPreconditioning techniques in Chebyshev collocation method for elliptic equations
Precondtonng technques n Chebyshev collocaton method for ellptc equatons Zh-We Fang Je Shen Ha-We Sun (n memory of late Professor Benyu Guo Abstract When one approxmates ellptc equatons by the spectral
More informationIV. Performance Optimization
IV. Performance Optmzaton A. Steepest descent algorthm defnton how to set up bounds on learnng rate mnmzaton n a lne (varyng learnng rate) momentum learnng examples B. Newton s method defnton Gauss-Newton
More informationExpected Value and Variance
MATH 38 Expected Value and Varance Dr. Neal, WKU We now shall dscuss how to fnd the average and standard devaton of a random varable X. Expected Value Defnton. The expected value (or average value, or
More informationLOW BIAS INTEGRATED PATH ESTIMATORS. James M. Calvin
Proceedngs of the 007 Wnter Smulaton Conference S G Henderson, B Bller, M-H Hseh, J Shortle, J D Tew, and R R Barton, eds LOW BIAS INTEGRATED PATH ESTIMATORS James M Calvn Department of Computer Scence
More informationSingular Value Decomposition: Theory and Applications
Sngular Value Decomposton: Theory and Applcatons Danel Khashab Sprng 2015 Last Update: March 2, 2015 1 Introducton A = UDV where columns of U and V are orthonormal and matrx D s dagonal wth postve real
More informationOn a direct solver for linear least squares problems
ISSN 2066-6594 Ann. Acad. Rom. Sc. Ser. Math. Appl. Vol. 8, No. 2/2016 On a drect solver for lnear least squares problems Constantn Popa Abstract The Null Space (NS) algorthm s a drect solver for lnear
More information4DVAR, according to the name, is a four-dimensional variational method.
4D-Varatonal Data Assmlaton (4D-Var) 4DVAR, accordng to the name, s a four-dmensonal varatonal method. 4D-Var s actually a drect generalzaton of 3D-Var to handle observatons that are dstrbuted n tme. The
More information