Web Appendix for Joint Variable Selection for Fixed and Random Effects in Linear Mixed-Effects Models
Howard D. Bondell, Arun Krishna, and Sujit K. Ghosh

APPENDIX A

A.1 Regularity Conditions

Assume that the data $\{(X_i, Z_i, y_i);\ i = 1, \ldots, m\}$ is a random sample from a linear mixed-effects model (2.2) with probability density $f(y_i \mid X_i, Z_i, \phi)$, where $\phi = (\beta', d', \gamma')'$ is a $k \times 1$ vector of unknown parameters. Let $L_i(\phi) = \log f(y_i \mid X_i, Z_i, \phi)$ denote the contribution of observation $i$ to the log-likelihood function, given by

$$L_i(\phi) = -\tfrac{1}{2}\log|V_i| - \tfrac{1}{2}(y_i - X_i\beta)' V_i^{-1} (y_i - X_i\beta), \quad (A.1)$$

where $V_i = \sigma^2 (Z_i D \Gamma \Gamma' D Z_i' + I_{n_i})$. Let $L(\phi) = \sum_{i=1}^{m} L_i(\phi)$ and $Q(\phi)$ denote the log-likelihood and the penalized log-likelihood as given in (2.4) and (4.1), respectively. To present the proofs of the theorems, the following regularity conditions are imposed:

(i) Each cluster size $n_i \le K$ for some $K < \infty$ and $i = 1, \ldots, m$.

(ii) Let $v_i = I\{Z_i \text{ is full rank}\}$, where $I\{A\}$ denotes the indicator function of the event $A$. Assume that $m^{-1}\sum_{i=1}^{m} v_i \to c$ for some $0 < c \le 1$.

(iii) The Fisher information matrix $I(\phi_0)$, knowing $\phi_{20} = 0$, is finite and positive definite.

(iv) There exists a subset $\Theta$ of $R^k$ containing the true parameter $\phi_0$ such that $L_i(\phi)$ given in (A.1) admits all third-order derivatives. Specifically, for $\phi_j = \beta_j$ and $(\phi_l, \phi_r) \in \{(d_l, d_r), (d_l, \gamma_r), (\gamma_l, \gamma_r)\}$, there exists a function $M_{jlr}(y_i, X_i, Z_i)$ such that

$$\left| \frac{\partial^3 L_i(\phi)}{\partial\beta_j\, \partial\phi_l\, \partial\phi_r} \right| = \left| X_{ij}' \frac{\partial^2 V_i^{-1}}{\partial\phi_l\, \partial\phi_r}(y_i - X_i\beta) \right| \le M_{jlr}(y_i, X_i, Z_i),$$

for all $\phi \in \Theta$, and $E_{\phi_0}[M_{jlr}(y_i, X_i, Z_i)] < \infty$. For $(\phi_j, \phi_l) = (\beta_j, \beta_l)$ and $\phi_r$ a component of either $d$ or $\gamma$, there exists a function $N_{jlr}(y_i, X_i, Z_i)$ such that

$$\left| \frac{\partial^3 L_i(\phi)}{\partial\beta_j\, \partial\beta_l\, \partial d_r} \right| = \left| X_{ij}'(V_i^{-1} S_i^r V_i^{-1}) X_{il} \right|, \qquad \left| \frac{\partial^3 L_i(\phi)}{\partial\beta_j\, \partial\beta_l\, \partial\gamma_r} \right| = \left| X_{ij}'(V_i^{-1} T_i^r V_i^{-1}) X_{il} \right| \le N_{jlr}(y_i, X_i, Z_i),$$
for all $\phi \in \Theta$, and $E_{\phi_0}[N_{jlr}(y_i, X_i, Z_i)] < \infty$. Here $S_i^r$ and $T_i^r$ denote the partial derivatives of $V_i$ with respect to $d_r$ and $\gamma_r$, respectively, and are given by

$$S_i^r = Z_i \left\{ \frac{\partial}{\partial d_r}(D\Gamma\Gamma'D) \right\} Z_i', \qquad T_i^r = Z_i D \left\{ \frac{\partial}{\partial\gamma_r}(\Gamma\Gamma') \right\} D Z_i'. \quad (A.2)$$

For $\phi_j = d_j$ and $(\phi_l, \phi_r) \in \{(d_l, d_r), (d_l, \gamma_r), (\gamma_l, \gamma_r)\}$,

$$\left| \frac{\partial^3 L_i(\phi)}{\partial d_j\, \partial\phi_l\, \partial\phi_r} \right| \le P_{jlr}(y_i, X_i, Z_i),$$

for all $\phi \in \Theta$, and $E_{\phi_0}[P_{jlr}(y_i, X_i, Z_i)] < \infty$.

Although it must be that $d_j \ge 0$ for all $j$, we allow the estimates to fall outside the boundary of the parameter space by using the maximum likelihood (ML) method as opposed to REML. Note that condition (i) can be relaxed to allow the cluster sizes to also increase without bound. However, this can lead to a faster convergence rate for the fixed effects than for the random effects (see, for example, Nie, 2007). Appropriate modifications of the theory presented here are then possible, but beyond the scope of the paper. Condition (ii) is a sufficient condition that allows the full information regarding each random effect to grow at order $m$, and it will typically hold in practice. However, less strict conditions can be derived.

A.2 Proof of Theorem 1

Proof. Consider the penalized log-likelihood given in (4.1) in a neighborhood of the true value $\phi_0$. Let $\alpha_m = m^{-1/2}$, with $u \ne 0$, and $\phi_1 = \phi_{10} + \alpha_m u$. Fixing $\phi_2 = 0$, we show that for a small enough $\epsilon > 0$ there exists a large constant $C$ such that, for sufficiently large $m$,

$$P\left\{ \sup_{\|u\| = C} Q\big((\phi_{10} + \alpha_m u,\ 0)\big) < Q\big((\phi_{10},\ 0)\big) \right\} \ge 1 - \epsilon.$$

Note that

$$D_m(u) \equiv Q(\phi_1) - Q(\phi_{10}) = \{L(\phi_{10} + \alpha_m u) - L(\phi_{10})\} - \lambda_m \sum_{j=1}^{s} w_j\big(|\phi_{j0} + \alpha_m u_j| - |\phi_{j0}|\big).$$

Using a Taylor series expansion, we have

$$D_m(u) = \alpha_m(\nabla L(\phi_{10}))'u + \tfrac{1}{2}u'[\nabla^2 L(\phi_{10})]u\,\alpha_m^2 - \lambda_m \sum_{j=1}^{s} w_j\,\mathrm{sgn}(\phi_{j0})\,\alpha_m u_j + R_m(u)$$
$$\quad = m^{-1/2}(\nabla L(\phi_{10}))'u + \tfrac{1}{2}u'[m^{-1}\nabla^2 L(\phi_{10})]u - \frac{\lambda_m}{\sqrt{m}}\sum_{j=1}^{s} w_j\,\mathrm{sgn}(\phi_{j0})\,u_j + R_m(u), \quad (A.3)$$
where $\nabla L(\phi_{10})$ and $\nabla^2 L(\phi_{10})$ denote the vector and matrix of the first- and second-order partial derivatives of $L(\phi_1)$, respectively, evaluated at $\phi_{10}$. From regularity condition (iv) it follows that

$$\frac{1}{6m^{3/2}} \sum_{i=1}^{m} \frac{\partial^3 L_i(\phi_1)}{\partial\phi_j\,\partial\phi_l\,\partial\phi_r}\bigg|_{\phi_1 = \phi_{10}} \to 0 \quad \text{as } m \to \infty,$$

hence the remainder term vanishes. For $\nabla L(\phi_{10})$, the $j$th partial derivative for each corresponding $\beta$, $d$ and $\gamma$ satisfies

$$E\left\{\frac{\partial}{\partial\beta_j}L(\phi_1)\right\} = E\left[X_{(1)j}'\tilde V_{(1)}^{-1}(y - X_{(1)}\beta_1)\right] = 0,$$
$$E\left\{\frac{\partial}{\partial d_j}L(\phi_1)\right\} = E\left[-\tfrac{1}{2}\mathrm{Tr}(\tilde V_{(1)}^{-1}\tilde S_{(1)}^j) + \tfrac{1}{2}(y - X_{(1)}\beta_1)'(\tilde V_{(1)}^{-1}\tilde S_{(1)}^j\tilde V_{(1)}^{-1})(y - X_{(1)}\beta_1)\right] = 0,$$
$$E\left\{\frac{\partial}{\partial\gamma_j}L(\phi_1)\right\} = E\left[-\tfrac{1}{2}\mathrm{Tr}(\tilde V_{(1)}^{-1}\tilde T_{(1)}^j) + \tfrac{1}{2}(y - X_{(1)}\beta_1)'(\tilde V_{(1)}^{-1}\tilde T_{(1)}^j\tilde V_{(1)}^{-1})(y - X_{(1)}\beta_1)\right] = 0,$$

all evaluated at $\phi_1 = \phi_{10}$, where $X_{(1)j}$ corresponds to the $j$th column of the stacked matrix $X_{(1)}$, and $\tilde S_{(1)}^j$ and $\tilde T_{(1)}^j$ are block diagonal matrices of the partial derivatives of $\tilde V_{(1)}$, given by

$$\tilde S_{(1)}^j = Z_{(1)}\left\{\frac{\partial}{\partial d_j}(D\Gamma\Gamma'D)\right\}Z_{(1)}' \quad \text{and} \quad \tilde T_{(1)}^j = Z_{(1)} D\left\{\frac{\partial}{\partial\gamma_j}(\Gamma\Gamma')\right\}D Z_{(1)}'.$$

From standard arguments we have

$$m^{-1/2}\,X_{(1)j}'\tilde V_{(1)}^{-1}(y - X_{(1)}\beta_1) = O_p(1),$$
$$m^{-1/2}\left[-\tfrac{1}{2}\mathrm{Tr}(\tilde V_{(1)}^{-1}\tilde S_{(1)}^j) + \tfrac{1}{2}(y - X_{(1)}\beta_1)'(\tilde V_{(1)}^{-1}\tilde S_{(1)}^j\tilde V_{(1)}^{-1})(y - X_{(1)}\beta_1)\right] = O_p(1),$$
$$m^{-1/2}\left[-\tfrac{1}{2}\mathrm{Tr}(\tilde V_{(1)}^{-1}\tilde T_{(1)}^j) + \tfrac{1}{2}(y - X_{(1)}\beta_1)'(\tilde V_{(1)}^{-1}\tilde T_{(1)}^j\tilde V_{(1)}^{-1})(y - X_{(1)}\beta_1)\right] = O_p(1), \quad (A.4)$$

so that $m^{-1/2}\nabla L(\phi_{10}) = O_p(1)$. For $\nabla^2 L(\phi_1)$ we have

$$-m^{-1}\nabla^2 L(\phi_1)\big|_{\phi_1 = \phi_{10}} \stackrel{p}{\longrightarrow} I(\phi_0), \quad (A.5)$$

where $I(\phi_0)$ is the Fisher information evaluated at $\phi_0$. Using (A.4) and (A.5), the expansion in (A.3) becomes

$$D_m(u) = O_p(1)'u - \tfrac{1}{2}u'\{I(\phi_0) + o_p(1)\}u - \frac{\lambda_m}{\sqrt{m}}\sum_{j=1}^{s} w_j\,\mathrm{sgn}(\phi_{j0})\,u_j.$$

Since $I(\phi_0)$ is finite and positive definite (condition (iii)), choosing a sufficiently large $C$ makes the second term dominate the first term uniformly in $\|u\| = C$. For the penalty term, since $\lambda_m/\sqrt{m} \to 0$ as $m \to \infty$ and $w_j = 1/|\tilde\phi_j| \stackrel{p}{\longrightarrow} 1/|\phi_{j0}|$, it follows that $(\lambda_m/\sqrt{m})\sum_{j=1}^{s} w_j\,\mathrm{sgn}(\phi_{j0})\,u_j \stackrel{p}{\longrightarrow} 0$, and thus it is also dominated by the second term. Hence, by choosing a sufficiently large $C$, there exists a local maximum in the ball $\{(\phi_{10} + \alpha_m u,\ 0) : \|u\| \le C\}$ with probability at least $1 - \epsilon$, and hence there exists a local maximizer $\hat\phi = (\hat\phi_1',\ 0')'$ of $Q\{(\phi_1',\ 0')'\}$ such that $\|\hat\phi_1 - \phi_{10}\| = O_p(m^{-1/2})$.
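For concreteness, the quantities used throughout this proof — the marginal log-likelihood (A.1) and the adaptive-LASSO penalized objective $Q(\phi)$ — can be written down directly. The following is a minimal numerical sketch, not the authors' implementation; the function names, the data layout as a list of $(y_i, X_i, Z_i)$ tuples, and the separate weight vectors for $\beta$ and $d$ are assumptions made for illustration.

```python
import numpy as np

def cluster_loglik(y_i, X_i, Z_i, beta, d, Gamma, sigma2):
    """Marginal log-likelihood contribution L_i(phi) of one cluster, as in (A.1),
    with V_i = sigma^2 (Z_i D Gamma Gamma' D Z_i' + I_{n_i})."""
    D = np.diag(d)
    n_i = len(y_i)
    V_i = sigma2 * (Z_i @ D @ Gamma @ Gamma.T @ D @ Z_i.T + np.eye(n_i))
    resid = y_i - X_i @ beta
    sign, logdet = np.linalg.slogdet(V_i)
    return -0.5 * logdet - 0.5 * resid @ np.linalg.solve(V_i, resid)

def penalized_loglik(data, beta, d, Gamma, sigma2, lam, w_beta, w_d):
    """Adaptive-LASSO penalized log-likelihood Q(phi): the sum of (A.1) over the m clusters
    minus lam * sum_j w_j |phi_j|, with weights on beta and d only (gamma unpenalized)."""
    loglik = sum(cluster_loglik(y_i, X_i, Z_i, beta, d, Gamma, sigma2)
                 for (y_i, X_i, Z_i) in data)
    penalty = lam * (np.sum(w_beta * np.abs(beta)) + np.sum(w_d * np.abs(d)))
    return loglik - penalty
```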
A.3 Proof of Theorem 2

Let $\phi_1 = (\beta_1', d_1', \gamma_1')'$ denote the $k_1 \times 1$ vector of unknown parameters, where $k_1 = k_{\beta 1} + k_{d1} + k_{\gamma 1}$, the sum of the lengths corresponding to each parameter. Let $\phi_2 = (\beta_2', d_2', \gamma_2')'$ be a vector of length $k_2 = k - s$, corresponding to the true zero values, where $k_2 = k_{\beta 2} + k_{d2} + k_{\gamma 2}$.

Proof. It is sufficient to show that, with probability tending to 1 as $m \to \infty$, for any $\phi$ satisfying $\|\phi - \phi_0\| \le M m^{-1/2}$, for some small $\epsilon_m = M m^{-1/2}$, and for each $j = (s+1), \ldots, (s + k_{\beta 2} + k_{d2})$, we have

$$\frac{\partial Q(\phi)}{\partial\phi_j} < 0 \quad \text{for } 0 < \phi_j < \epsilon_m, \qquad \frac{\partial Q(\phi)}{\partial\phi_j} > 0 \quad \text{for } -\epsilon_m < \phi_j < 0. \quad (A.6)$$

Note that

$$\frac{\partial Q(\phi)}{\partial\phi_j} = \frac{\partial L(\phi)}{\partial\phi_j} - \lambda_m w_j\,\mathrm{sgn}(\phi_j).$$

To show (A.6), consider a Taylor series expansion of $\partial L(\phi)/\partial\phi_j$:

$$\frac{\partial Q(\phi)}{\partial\phi_j} = \frac{\partial L(\phi_0)}{\partial\phi_j} + \sum_{l=1}^{k}\frac{\partial^2 L(\phi_0)}{\partial\phi_j\,\partial\phi_l}(\phi_l - \phi_{l0}) + \frac{1}{2}\sum_{i=1}^{m}\sum_{l=1}^{k}\sum_{r=1}^{k}\frac{\partial^3 L_i(\phi^*)}{\partial\phi_j\,\partial\phi_l\,\partial\phi_r}(\phi_l - \phi_{l0})(\phi_r - \phi_{r0}) - \lambda_m w_j\,\mathrm{sgn}(\phi_j), \quad (A.7)$$

where $\phi^*$ lies between $\phi$ and $\phi_0$. Again, the first-order partial derivatives of the $j$th term corresponding to $\beta$ and $d$ are given by

$$\frac{\partial L(\phi_0)}{\partial\beta_j} = X_j' V_0^{-1}(y - X\beta_0) = O_p(\sqrt{m}), \qquad \frac{\partial L(\phi_0)}{\partial d_j} = 0,$$

where $X_j$ corresponds to the $j$th column of the stacked matrix $X$. The second-order derivatives in (A.7) satisfy

$$m^{-1}\nabla^2 L(\phi)\big|_{\phi = \phi_0} = m^{-1}E\big(\nabla^2 L(\phi)\big)\big|_{\phi = \phi_0} + o_p(1),$$

where $E(\nabla^2 L(\phi))$ is given by

$$E\big(\nabla^2 L(\phi)\big) = E\begin{pmatrix} L_{\beta\beta} & L_{\beta d} & L_{\beta\gamma} \\ L_{\beta d}' & L_{dd} & L_{d\gamma} \\ L_{\beta\gamma}' & L_{d\gamma}' & L_{\gamma\gamma} \end{pmatrix},$$
where $E(L_{\beta\beta}) = -X'\tilde V^{-1}X$, and $E(L_{\beta d})$, $E(L_{\beta\gamma})$ have $j$th columns

$$E\{L_{\beta d}\}_j = E\big[X_j'(\tilde V^{-1}\tilde S^j\tilde V^{-1})(y - X\beta)\big]\big|_{\phi = \phi_0} = 0, \qquad E\{L_{\beta\gamma}\}_j = E\big[X_j'(\tilde V^{-1}\tilde T^j\tilde V^{-1})(y - X\beta)\big]\big|_{\phi = \phi_0} = 0,$$

where $\tilde S^j$ and $\tilde T^j$ are block diagonal matrices of the $S_i^j$ and $T_i^j$ given in (A.2). The expectations of the second-order partial derivatives for $d$ and $\gamma$ have $(j, l)$th terms

$$E\{L_{dd}\}_{jl} = -\tfrac{1}{2}\mathrm{Tr}(\tilde V^{-1}\tilde S^j\tilde V^{-1}\tilde S^l), \qquad E\{L_{\gamma\gamma}\}_{jl} = -\tfrac{1}{2}\mathrm{Tr}(\tilde V^{-1}\tilde T^j\tilde V^{-1}\tilde T^l), \qquad E\{L_{d\gamma}\}_{jl} = -\tfrac{1}{2}\mathrm{Tr}(\tilde V^{-1}\tilde S^j\tilde V^{-1}\tilde T^l). \quad (A.8)$$

For $j = s+1, \ldots, (k_\beta + k_d)$, it can be shown that $\tilde S^j$ or $\tilde T^j$, when evaluated at $\phi_j = 0$, are zero matrices, so the set of equations given in (A.8) simplifies to zero.

First, consider $\phi_j = \beta_j$. The expansion given in (A.7) yields

$$\frac{\partial Q(\phi)}{\partial\beta_j} = O_p(m^{1/2}) - \sum_{l=1}^{k_\beta}\{X_j'V_0^{-1}X_l + o_p(m)\}(\beta_l - \beta_{l0}) - \sum_{l=k_\beta+1}^{k_\beta+k_d} o_p(m)(d_l - d_{l0}) - \sum_{l=k_\beta+k_d+1}^{k} o_p(m)(\gamma_l - \gamma_{l0})$$
$$\quad - \sum_{i=1}^{m}\sum_{l=1}^{k_\beta}\sum_{r=1}^{k_d} X_{ij}'(V_i^{-1}S_i^r V_i^{-1})X_{il}(\beta_l - \beta_{l0})(d_r - d_{r0}) + \frac{1}{2}\sum_{i=1}^{m}\sum_{l=1}^{k_d}\sum_{r=1}^{k_d} X_{ij}'\frac{\partial^2 V_i^{-1}}{\partial d_l\,\partial d_r}(y_i - X_i\beta^*)(d_l - d_{l0})(d_r - d_{r0})$$
$$\quad - \sum_{i=1}^{m}\sum_{l=1}^{k_\beta}\sum_{r=1}^{k_\gamma} X_{ij}'(V_i^{-1}T_i^r V_i^{-1})X_{il}(\beta_l - \beta_{l0})(\gamma_r - \gamma_{r0}) + \frac{1}{2}\sum_{i=1}^{m}\sum_{l=1}^{k_\gamma}\sum_{r=1}^{k_\gamma} X_{ij}'\frac{\partial^2 V_i^{-1}}{\partial\gamma_l\,\partial\gamma_r}(y_i - X_i\beta^*)(\gamma_l - \gamma_{l0})(\gamma_r - \gamma_{r0})$$
$$\quad + \sum_{i=1}^{m}\sum_{l=1}^{k_d}\sum_{r=1}^{k_\gamma} X_{ij}'\frac{\partial^2 V_i^{-1}}{\partial d_l\,\partial\gamma_r}(y_i - X_i\beta^*)(d_l - d_{l0})(\gamma_r - \gamma_{r0}) - \lambda_m w_j\,\mathrm{sgn}(\beta_j), \quad (A.9)$$

where $\|\phi^* - \phi_0\| \le \|\phi - \phi_0\|$. Since we are considering $\|\phi - \phi_0\| \le M m^{-1/2}$, (A.9) gives

$$\frac{\partial Q(\phi)}{\partial\beta_j} = -\lambda_m w_j\,\mathrm{sgn}(\beta_j) + O_p(\sqrt{m}).$$

Since for $\beta_{j0} = 0$ we have $w_j = 1/|\tilde\beta_j|$ with $\sqrt{m}\,\tilde\beta_j = O_p(1)$, and $\lambda_m \to \infty$, the sign of the derivative is completely determined by that of $\beta_j$.
Now consider $\phi_j = d_j$. The Taylor series expansion in (A.7) gives

$$\frac{\partial Q(\phi)}{\partial d_j} = 0 - \sum_{l=1}^{k_\beta} o_p(m)(\beta_l - \beta_{l0}) - \sum_{l=k_\beta+1}^{k_\beta+k_d} o_p(m)(d_l - d_{l0}) - \sum_{l=k_\beta+k_d+1}^{k} o_p(m)(\gamma_l - \gamma_{l0})$$
$$\quad + \frac{1}{2}\sum_{i=1}^{m}\sum_{l=1}^{k_d}\sum_{r=1}^{k_d}\frac{\partial^2 L_i'(d_j)}{\partial d_l\,\partial d_r}(d_l - d_{l0})(d_r - d_{r0}) + \frac{1}{2}\sum_{i=1}^{m}\sum_{l=1}^{k_\gamma}\sum_{r=1}^{k_\gamma}\frac{\partial^2 L_i'(d_j)}{\partial\gamma_l\,\partial\gamma_r}(\gamma_l - \gamma_{l0})(\gamma_r - \gamma_{r0}) + W_m - \lambda_m w_j\,\mathrm{sgn}(d_j), \quad (A.10)$$

where

$$W_m = \sum_{i=1}^{m}\sum_{l=1}^{k_\beta}\sum_{r=1}^{k_\beta} X_{il}'(V_i^{-1}S_i^j V_i^{-1})X_{ir}(\beta_l - \beta_{l0})(\beta_r - \beta_{r0}) + \sum_{i=1}^{m}\sum_{l=1}^{k_\beta}\sum_{r=1}^{k_d}\frac{\partial^2 L_i'(d_j)}{\partial\beta_l\,\partial d_r}(\beta_l - \beta_{l0})(d_r - d_{r0})$$
$$\quad + \sum_{i=1}^{m}\sum_{l=1}^{k_\beta}\sum_{r=1}^{k_\gamma}\frac{\partial^2 L_i'(d_j)}{\partial\beta_l\,\partial\gamma_r}(\beta_l - \beta_{l0})(\gamma_r - \gamma_{r0}) + \sum_{i=1}^{m}\sum_{l=1}^{k_d}\sum_{r=1}^{k_\gamma}\frac{\partial^2 L_i'(d_j)}{\partial d_l\,\partial\gamma_r}(d_l - d_{l0})(\gamma_r - \gamma_{r0}),$$

and

$$L_i'(d_j) \equiv \frac{\partial L_i(\phi)}{\partial d_j} = -\tfrac{1}{2}\big[\mathrm{Tr}(V_i^{-1}S_i^j) - (y_i - X_i\beta)'(V_i^{-1}S_i^j V_i^{-1})(y_i - X_i\beta)\big],$$

with $\phi^*$ lying between $\phi$ and $\phi_0$. As above, (A.10) simplifies to

$$\frac{\partial Q(\phi)}{\partial d_j} = -\lambda_m w_j\,\mathrm{sgn}(d_j) + O_p(\sqrt{m}).$$

Hence, for $d_{j0} = 0$, since $w_j = 1/|\tilde d_j|$ with $\sqrt{m}\,\tilde d_j = O_p(1)$ and $\lambda_m \to \infty$, the sign of the derivative is again completely determined by that of $d_j$. This completes the proof.

A.4 Proof of Theorem 3

Proof. We have shown in Theorem 1 that there exists a $\hat\phi_1$ that is a local maximizer of $Q(\phi_1)$ such that $\|\hat\phi_1 - \phi_{10}\| = O_p(m^{-1/2})$, and it satisfies the set of penalized likelihood equations

$$\frac{\partial Q(\phi)}{\partial\phi_1}\bigg|_{\phi = (\hat\phi_1, 0)} = \frac{\partial L(\phi)}{\partial\phi_1}\bigg|_{\phi = (\hat\phi_1, 0)} - \lambda_m h(\hat\phi_1) = 0,$$

where $h(\hat\phi_1) = (w_1\,\mathrm{sgn}(\hat\phi_1), \ldots, w_s\,\mathrm{sgn}(\hat\phi_s))'$ is an $s \times 1$ vector with $w_j = 0$ for $\phi_j = \gamma_j$. Using a Taylor series expansion and multiplying throughout by $1/\sqrt{m}$, we have

$$\frac{1}{\sqrt{m}}\nabla L(\phi_{10}) - \sqrt{m}\{I(\phi_0) + o_p(1)\}(\hat\phi_1 - \phi_{10}) - \frac{\lambda_m}{\sqrt{m}}h(\phi_{10}) = 0,$$

so that

$$\sqrt{m}\left\{I(\phi_0)(\hat\phi_1 - \phi_{10}) + \frac{\lambda_m}{m}h(\phi_{10})\right\} = \frac{1}{\sqrt{m}}\nabla L(\phi_{10}) + o_p(1).$$

Since $E\{\nabla L(\phi_1)\} = 0$, as in the proof of Theorem 1, it follows from the multivariate central limit theorem that

$$\frac{1}{\sqrt{m}}\nabla L(\phi_{10}) \stackrel{d}{\longrightarrow} N\big(0, I(\phi_0)\big),$$
where $I(\phi_0)$ is as given in the proof of Theorem 1. Therefore

$$\sqrt{m}\,I(\phi_0)\left\{(\hat\phi_1 - \phi_{10}) + \frac{\lambda_m}{m}I(\phi_0)^{-1}h(\phi_{10})\right\} \stackrel{d}{\longrightarrow} N\big\{0, I(\phi_0)\big\}.$$

This completes the proof.
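To make the practical content of Theorem 3 concrete, the following is a minimal sketch (not from the paper) of how the limiting distribution could be used to form the approximate shrinkage bias and covariance of the estimator of the nonzero components; the inputs `I_phi0`, `lam_m`, `m`, and the weighted sign vector `h_phi0` are assumed to be available from a fitted model.

```python
import numpy as np

def theorem3_approximation(I_phi0, lam_m, m, h_phi0):
    """Sketch of Theorem 3: sqrt(m) * I(phi0) * {(phi1_hat - phi10) + b_m} -> N(0, I(phi0)),
    where b_m = (lam_m / m) * I(phi0)^{-1} h(phi10).  Returns the bias term b_m and the
    approximate covariance I(phi0)^{-1} / m of phi1_hat about phi10 - b_m."""
    I_inv = np.linalg.inv(I_phi0)
    bias = (lam_m / m) * I_inv @ h_phi0
    approx_cov = I_inv / m
    return bias, approx_cov
```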
APPENDIX B

B.1 Further computational details on the EM algorithm

Omitting terms that do not involve $\phi$, we may rewrite the expression in (3.3) as

$$\begin{pmatrix}\beta\\ d\end{pmatrix}'\begin{pmatrix} X'X & X'Z\,\mathrm{Diag}(\tilde\Gamma b)(1_m \otimes I_q) \\ (1_m \otimes I_q)'\,\mathrm{Diag}(\tilde\Gamma b)Z'X & (1_m \otimes I_q)'\,\mathrm{Diag}(\tilde\Gamma b)Z'Z\,\mathrm{Diag}(\tilde\Gamma b)(1_m \otimes I_q)\end{pmatrix}\begin{pmatrix}\beta\\ d\end{pmatrix}$$
$$\quad - 2\,y'\big[X \;\; Z\,\mathrm{Diag}(\tilde\Gamma b)(1_m \otimes I_q)\big]\begin{pmatrix}\beta\\ d\end{pmatrix} + \lambda\left\{\sum_{j=1}^{p}\frac{|\beta_j|}{|\tilde\beta_j|} + \sum_{j=1}^{q}\frac{|d_j|}{|\tilde d_j|}\right\}. \quad (B.1)$$

After some matrix manipulation, part of the lower-right block of the matrix in the quadratic form above can be written as

$$\mathrm{Diag}(\tilde\Gamma b)Z'Z\,\mathrm{Diag}(\tilde\Gamma b) = W \circ \big\{\tilde\Gamma\,\mathrm{Diag}(b)\,1\,1'\,\mathrm{Diag}(b)\,\tilde\Gamma'\big\}, \quad (B.2)$$

where $\circ$ represents the Hadamard (element-by-element) product operator, $1$ represents an $mq \times 1$ vector of ones, and $W = Z'Z$ is a symmetric block diagonal matrix. Computing the outer product $\mathrm{Diag}(b)\,1\,1'\,\mathrm{Diag}(b) = bb'$, the expression given in (B.2) further simplifies to $W \circ \tilde\Gamma bb'\tilde\Gamma'$. Using this simplification, and taking the conditional expectation of (B.1), yields a penalized quadratic objective function for $(\beta, d)$:

$$g(\beta, d \mid \phi^{(\omega)}) = \begin{pmatrix}\beta\\ d\end{pmatrix}'\begin{pmatrix} X'X & X'Z\,\mathrm{Diag}(\tilde\Gamma\hat b^{(\omega)})(1_m \otimes I_q) \\ (1_m \otimes I_q)'\,\mathrm{Diag}(\tilde\Gamma\hat b^{(\omega)})Z'X & (1_m \otimes I_q)'\big\{W \circ \tilde\Gamma\hat G^{(\omega)}\tilde\Gamma'\big\}(1_m \otimes I_q)\end{pmatrix}\begin{pmatrix}\beta\\ d\end{pmatrix}$$
$$\quad - 2\,y'\big[X \;\; Z\,\mathrm{Diag}(\tilde\Gamma\hat b^{(\omega)})(1_m \otimes I_q)\big]\begin{pmatrix}\beta\\ d\end{pmatrix} + \lambda\left\{\sum_{j=1}^{p}\frac{|\beta_j|}{|\tilde\beta_j|} + \sum_{j=1}^{q}\frac{|d_j|}{|\tilde d_j|}\right\}, \quad (B.3)$$

where $\hat G^{(\omega)} = E(bb' \mid y, \phi^{(\omega)}) = U^{(\omega)} + \hat b^{(\omega)}\hat b^{(\omega)\prime}$, and $U^{(\omega)}$ and $\hat b^{(\omega)}$ are as given in (3.4). For a fixed $\gamma$, we now minimize the objective function (B.3) to obtain the updated estimates for $(\beta, d)$.
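The simplification in (B.2) is easy to verify numerically. The following is a small self-contained sketch (with hypothetical dimensions and random inputs, not the authors' code) that checks the identity and notes where the conditional expectation $\hat G^{(\omega)}$ enters.

```python
import numpy as np

rng = np.random.default_rng(0)
m, q, n_i = 4, 3, 5                                          # hypothetical: m clusters, q random effects

Gamma = np.tril(rng.normal(size=(q, q)), -1) + np.eye(q)     # unit lower-triangular Gamma
Gamma_tilde = np.kron(np.eye(m), Gamma)                      # stacked block-diagonal version
Z = np.zeros((m * n_i, m * q))                               # stacked block-diagonal Z
for i in range(m):
    Z[i * n_i:(i + 1) * n_i, i * q:(i + 1) * q] = rng.normal(size=(n_i, q))
b = rng.normal(size=m * q)

# Left-hand side of (B.2): Diag(Gamma_tilde b) Z'Z Diag(Gamma_tilde b)
lhs = np.diag(Gamma_tilde @ b) @ Z.T @ Z @ np.diag(Gamma_tilde @ b)

# Simplified form: W o (Gamma_tilde b b' Gamma_tilde'), with W = Z'Z and o the Hadamard product
W = Z.T @ Z
rhs = W * (Gamma_tilde @ np.outer(b, b) @ Gamma_tilde.T)

print(np.allclose(lhs, rhs))   # True

# In the E-step, bb' is replaced by its conditional expectation G_hat = U + b_hat b_hat',
# so the lower-right block of g(beta, d | phi) uses W * (Gamma_tilde @ G_hat @ Gamma_tilde.T).
```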
The next step is, for fixed $(\beta, d)$, to obtain a closed-form expression for the estimate of $\gamma$, the vector that relates to the correlation between the random-effect parameters. Note that, if $\gamma = 0$, then the random effects are mutually independent, with the random-effect covariance matrix $\Psi$ reduced to a simple diagonal form. Furthermore, if $d_l = 0$ then $\gamma_{lr} = 0$ for $r = l+1, \ldots, q$. Hence, the elements of $\Gamma$ and $d$ are functionally related. Fixing $(\beta, d)$ at its most recent update, we first rewrite $\|y - Z\tilde D\tilde\Gamma b - X\beta\|^2$ as a quadratic form in $\gamma$ and then compute its conditional expectation. After a bit of matrix manipulation, and omitting terms not involving $\gamma$, the objective function for $\gamma$ is given by

$$g(\gamma \mid \phi^{(\omega)}) = \gamma'P^{(\omega)}\gamma - 2\big\{(y - X\beta)'R^{(\omega)} - T^{(\omega)\prime}\big\}\gamma, \quad (B.4)$$

where $P^{(\omega)} = E_{b \mid y, \phi^{(\omega)}}\{A'A\}$, $T^{(\omega)} = E_{b \mid y, \phi^{(\omega)}}(A'Z\tilde D b)$, and $R^{(\omega)} = E_{b \mid y, \phi^{(\omega)}}(A)$. Here $A = [A_1', \ldots, A_m']'$ represents a stacked matrix of the $A_i$, with each $A_i$ an $n_i \times q(q-1)/2$ matrix whose elements in each row are given by $A_{ij} = (b_{il}\,d_r\,z_{ijr} : l = 1, \ldots, (q-1),\ r = l+1, \ldots, q)$. Here $A_{ij}$ denotes the $j$th row of the $i$th matrix, which contains $q(q-1)/2$ elements. The appropriate minimizer of (B.4) is then given by

$$\hat\gamma = \big(P^{(\omega)}\big)^{-}\big\{R^{(\omega)\prime}(y - X\beta) - T^{(\omega)}\big\}, \quad (B.5)$$

where $A^{-}$ denotes the Moore-Penrose generalized inverse of $A$. Note that the matrices $P^{(\omega)}$, $T^{(\omega)}$ and $R^{(\omega)}$ involve only first and second moments, i.e. $\hat b^{(\omega)}$ and $U^{(\omega)}$. The M-step is now completed by iteratively minimizing the quadratic form (B.3) and applying the explicit solution (B.5). The final penalized likelihood estimates $\hat\phi = (\hat\beta', \hat d', \hat\gamma')'$ are obtained by successive EM steps.

B.2 Computation of the M-step

Recently, Efron, Hastie, Johnstone and Tibshirani (2004) proposed the LARS (Least Angle Regression) algorithm and showed that it can be used to obtain the entire solution path for LASSO estimates while being computationally efficient. Zou (2006) showed that, with minor changes to the design matrix, the LARS algorithm can be implemented to obtain the estimates of the regression coefficients under the adaptive LASSO penalty. Although we
can use the LARS algorithm to minimize the penalized quadratic form in our M-step via a pseudo design matrix, it is not as advantageous to obtain the entire solution path here. This is due to the fact that the design matrix changes with every iterative step of the EM algorithm. Hence, we propose the use of a standard quadratic programming technique to obtain the penalized likelihood estimates of our parameters at each iterative step.

Given $\phi = \phi^{(\omega)}$ and a tuning parameter $\lambda$, we write $\beta = \beta^{+} - \beta^{-}$, with both $\beta^{+}$ and $\beta^{-}$ non-negative and only one of them non-zero, so that $|\beta| = \beta^{+} + \beta^{-}$. The optimization problem for $(\beta, d)$ given in (B.3) is then equivalent to

minimize
$$\begin{pmatrix}\beta\\ d\end{pmatrix}'\begin{pmatrix} X'X & X'Z\,\mathrm{Diag}(\tilde\Gamma\hat b^{(\omega)})(1_m \otimes I_q) \\ (1_m \otimes I_q)'\,\mathrm{Diag}(\tilde\Gamma\hat b^{(\omega)})Z'X & (1_m \otimes I_q)'\big(W \circ \tilde\Gamma\hat G^{(\omega)}\tilde\Gamma'\big)(1_m \otimes I_q)\end{pmatrix}\begin{pmatrix}\beta\\ d\end{pmatrix} - 2\,y'\big[\check X \;\; Z\,\mathrm{Diag}(\tilde\Gamma\hat b^{(\omega)})(1_m \otimes I_q)\big]\begin{pmatrix}\beta^{+}\\ \beta^{-}\\ d\end{pmatrix}$$
$$\quad + \lambda\left(\frac{1}{|\tilde\beta_1|}, \ldots, \frac{1}{|\tilde\beta_p|}, \frac{1}{|\tilde\beta_1|}, \ldots, \frac{1}{|\tilde\beta_p|}, \frac{1}{|\tilde d_1|}, \ldots, \frac{1}{|\tilde d_q|}\right)\begin{pmatrix}\beta^{+}\\ \beta^{-}\\ d\end{pmatrix}$$

subject to $\beta^{+} \ge 0$, $\beta^{-} \ge 0$, $d \ge 0$, $\quad (B.6)$

where $\beta = \beta^{+} - \beta^{-}$ and the matrix $\check X = [X \;\; -X]$. The minimization with respect to the expanded parameter $(\beta^{+}, \beta^{-}, d)$ is now a direct quadratic programming problem with $2p + q$ total parameters and $2p + q$ total linear constraints.
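As an illustration only (not the authors' implementation), the expanded problem can be handed to any bound-constrained solver. The sketch below assumes the quadratic-form matrix `H`, the linear term `c`, and the adaptive penalty weights `pen_weights`, all expressed over the expanded parameter $(\beta^{+}, \beta^{-}, d)$, have already been assembled from the current E-step quantities; a general-purpose bound-constrained optimizer stands in for a dedicated QP solver.

```python
import numpy as np
from scipy.optimize import minimize

def m_step_qp(H, c, lam, pen_weights, p, q):
    """Solve the M-step problem (B.6) over theta = (beta_plus, beta_minus, d):
    minimize theta' H theta + c' theta + lam * pen_weights' theta, subject to theta >= 0."""
    k = 2 * p + q

    def objective(theta):
        return theta @ H @ theta + c @ theta + lam * (pen_weights @ theta)

    def gradient(theta):
        return (H + H.T) @ theta + c + lam * pen_weights

    res = minimize(objective, np.zeros(k), jac=gradient,
                   method="L-BFGS-B", bounds=[(0.0, None)] * k)
    beta = res.x[:p] - res.x[p:2 * p]   # recover beta = beta_plus - beta_minus
    d = res.x[2 * p:]                   # non-negative by the bound constraints
    return beta, d
```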
APPENDIX C

C.1 Brief Description of CASTNet Data

A complete description of the data can be found on the EPA website. The data used here is a subset of the complete data and consists of 826 observations from 15 relevant sites from 2000 to 2004 across the eastern United States. The map shown in Figure 1 marks the relevant sites we have used for this analysis. Here is a list briefly describing our response variable and the 16 predictors.

Y: LOG(TNO3), log of total nitrate concentration (µmol/m³)
x1: SO4, sulphate concentration (µmol/m³)
x2: NH4, ammonia concentration (µmol/m³)
x3: O3, maximum ozone (ppb, parts per billion)
x4: T, average temperature (°C)
x5: T_d, average dew point temperature (°C)
x6: RH, average relative humidity (%)
x7: SR, average solar radiation (W/m²)
x8: WS, average wind speed (m/sec)
x9: P, total precipitation (mm/month)
l(t): time of measurement in months (1, ..., 60) from 2000
s_j(t): sin(2πjt/12), where j = 1, 2, 3
c_j(t): cos(2πjt/12), where j = 1, 2, 3

Figure 3 plots the predicted mean overlaid with the observed values of log(TNO3), using our penalized likelihood estimates for the fixed effects, for 4 specific sites of interest. Though the model seems to fit well, we see that at some sites it tends to underestimate (DCP114) while overestimating at others (PNF126). This further reiterates that the random effects are important to account for heterogeneity between the sites. We can also see from Figure 2 that a single cycle (s_1(t), c_1(t)) seems to be sufficient to describe the seasonal trend.
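For reference, a small sketch (not the authors' code) of how the seasonal covariates listed above can be constructed for the 60 monthly time points, assuming the period-12 harmonic parameterization recovered above:

```python
import numpy as np

def seasonal_design(t_months, n_cycles=3, period=12):
    """Build s_j(t) = sin(2*pi*j*t/12) and c_j(t) = cos(2*pi*j*t/12), j = 1, ..., n_cycles."""
    t = np.asarray(t_months, dtype=float)
    cols, names = [], []
    for j in range(1, n_cycles + 1):
        cols.append(np.sin(2 * np.pi * j * t / period))
        cols.append(np.cos(2 * np.pi * j * t / period))
        names += [f"s{j}(t)", f"c{j}(t)"]
    return np.column_stack(cols), names

# Example: the 60 monthly time points (t = 1, ..., 60) of the 2000-2004 subset.
S, names = seasonal_design(np.arange(1, 61))
print(S.shape, names)   # (60, 6)
```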
Figure 1: The location of the 15 sites used for our analysis (ANA115, DCP114, MKG113, CDR119, VPI120, CTH110, PSU106, BEL116, SHN418, CAD150, ESP127, PNF126, COW137, GAS153, CND125). The highlighted markers represent the 4 sites used for the overlay plots in Figure 3.
Figure 2: Site (individual) profile plot to assess the seasonal trend in nitrate concentration over each 12-month period, for the CASTNet dataset. [Axes: Log(TNO_3) versus Month.]
Figure 3: Plot of the observed log(TNO3) concentration, represented by the solid line, overlaid with the fitted values from the fixed-effects model selected using our proposed method, for 4 centers (DCP114, CND125, GAS153, PNF126), for the CASTNet dataset. [Each panel plots log(TNO_3) versus Months.]