Recovering Data from Underdetermined Quadratic Measurements (CS 229a Project: Final Writeup)


Mahdi Soltanolkotabi
December 16

1 Introduction

Data that arises from engineering applications often contains some type of low-dimensional structure that enables intelligent representation and processing. That is, the data lies on an unknown low-dimensional structure in a very high-dimensional space. However, in many engineering disciplines we do not have access to the data directly, but only to measurements of the data. In some of these disciplines we would like to take as few measurements as possible while preserving quality (a good example is MR imaging, where the number of measurements corresponds to MR scan time). In others, the number of measurements is limited by the nature of the problem or some physical constraint (e.g., the Netflix challenge, where we only have access to a few user ratings). Therefore, in many practical settings we face a very challenging problem: recovering data from underdetermined measurements.

The past decade has witnessed a tremendous breakthrough in the recovery of low-dimensional data (sparse/low-rank) from underdetermined linear measurements (e.g., compressed sensing/matrix completion). However, in many engineering applications the measurements are not linear; one such example is phase retrieval. Phase retrieval is the problem of recovering a general signal, e.g., an image, from the magnitude of its Fourier transform (equivalent to quadratic measurements of the signal). Phase retrieval has numerous applications: X-ray crystallography, imaging science, diffraction imaging, optics, astronomical imaging, and microscopy, to name just a few.

A number of methods have been developed to address the phase retrieval problem over the last century. There are two classes of methods for the phase retrieval problem. The first class are methods that are algebraic in nature and thus not robust to noise. The second class are methods that rely on non-convex optimization schemes and often get stuck in local minima; in this class there has been heavy use of machine-learning-type algorithms in the literature, e.g., neural networks, graphical modeling, etc. Very recently, new methods have been developed (based on SDP relaxations) that are both robust to noise and convex in nature. Despite these advances, these new methods do not utilize the low-dimensional structure inherent in many phase retrieval applications (e.g., sparsity of images in a given frame). Using these inherent low-dimensional structures of image signals, one might be able to dramatically decrease the number of measurements needed in many phase retrieval applications.

In this paper we present a unified view of how data can be recovered from quadratic measurements. We show that the aforementioned SDP relaxation in the context of phase retrieval can be derived as a special case of this more general formulation. We will show that our general formulation leads to new algorithms and insights in noisy phase retrieval. We also consider the case where the quadratic measurements are underdetermined, i.e., there is no unique solution. In this case, we will see that one can still recover the data using a priori information in the form of sparsity of the data in a dictionary. We will show that this formulation leads to a new algorithm (Sparse Phase Lift) for the phase retrieval problem that utilizes the inherent low-dimensional structure of the data to decrease the number of measurements needed in phase retrieval.
2 Unsupervised Learning of Data from Quadratic Measurements

In the first part of this section we show that one can use Shor's semidefinite relaxation scheme [2] to recover a vector from quadratic measurements by solving a convex optimization problem. In the second part we study the underdetermined case and present a convex optimization formulation for recovering data from underdetermined quadratic measurements.

2.1 General Case

Assume that we wish to find a data vector x ∈ R^n that satisfies a set of quadratic equations. We formulate this as an optimization problem by adding a quadratic objective function:¹

    min_{x ∈ R^n}  f_0(x) = x^T A_0 x + 2 b_0^T x + c_0
    subject to     y_i = x^T A_i x + 2 b_i^T x + c_i,   i = 1, …, m.        (2.1)

Notice that this primal problem is non-convex. To bound the optimal value from below we build the dual problem. Writing f_i(x) = x^T A_i x + 2 b_i^T x + c_i − y_i for the constraint functions, define

    f_λ(x) = f_0(x) + Σ_{i=1}^m λ_i f_i(x) = x^T A(λ) x + 2 b(λ)^T x + c(λ),

where

    A(λ) = A_0 + Σ_i λ_i A_i,   b(λ) = b_0 + Σ_i λ_i b_i,   c(λ) = c_0 + Σ_i λ_i (c_i − y_i).

Notice that min_{x ∈ R^n} f_λ(x) is a lower bound on the optimal value of the original problem. The method we use to relax (2.1) is to find the best lower bound of the form f_λ(x) ≥ η by an appropriate convex search over all λ and η. Now notice that

    x^T A(λ) x + 2 b(λ)^T x + c(λ) − η ≥ 0   for all x ∈ R^n

if and only if (λ, η) ∈ R^m × R satisfy

    [ c(λ) − η,  b(λ)^T ;  b(λ),  A(λ) ] ⪰ 0.

So, in order to find a good lower bound, we maximize η subject to this linear matrix inequality, i.e.,

    max_{η, λ}  { η :  [ c(λ) − η,  b(λ)^T ;  b(λ),  A(λ) ] ⪰ 0 }.

Writing the semidefinite dual of the above problem, we get

    min_{X ∈ R^{(n+1)×(n+1)}}  Tr( [ c_0,  b_0^T ;  b_0,  A_0 ] X )
    subject to  Tr( [ c_i,  b_i^T ;  b_i,  A_i ] X ) = y_i,   i = 1, …, m,
                X ⪰ 0,   X_{11} = 1,        (2.2)

where we have excluded the details of the last derivation due to space constraints.

¹ Notice that if one is only interested in recovery from quadratic equality constraints, one can set the objective to zero, i.e., A_0 = 0, b_0 = 0, and c_0 = 0. We keep the general form because it will prove useful in deriving results for noisy phase retrieval.
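To make the relaxation concrete, the following sketch solves (2.2) with a generic SDP solver. It uses Python with the cvxpy package (a stand-in for the CVX software [4] used later in the paper); the function name and data layout are our own illustrative choices, and real-valued data is assumed.

```python
import numpy as np
import cvxpy as cp

def quadratic_sdp(A, b, c, y):
    """Sketch of relaxation (2.2): lift x to X ~ [1 x^T; x xx^T] and solve
    an SDP whose constraints encode y_i = x^T A_i x + 2 b_i^T x + c_i.
    A: list of (n, n) arrays; b: list of (n,) arrays; c, y: lists of scalars."""
    n = A[0].shape[0]
    X = cp.Variable((n + 1, n + 1), symmetric=True)
    constraints = [X >> 0, X[0, 0] == 1]
    for Ai, bi, ci, yi in zip(A, b, c, y):
        # M_i = [c_i, b_i^T; b_i, A_i] so that Tr(M_i X) = y_i
        Mi = np.block([[np.array([[ci]]), bi.reshape(1, -1)],
                       [bi.reshape(-1, 1), Ai]])
        constraints.append(cp.trace(Mi @ X) == yi)
    # pure feasibility version: A_0 = 0, b_0 = 0, c_0 = 0 (see footnote 1)
    cp.Problem(cp.Minimize(0), constraints).solve()
    return X.value
```

When the relaxation is tight the solution has rank one, and x can be read off (up to sign) as the first column X[1:, 0] of the lifted matrix, since X_{11} = 1.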

2.2 Underdetermined Case

Assume that the aforementioned quadratic equality constraints are underdetermined, i.e., even if one were able to use combinatorial search, there would not be a unique solution to the quadratic equations. (In fact, in many problems of interest there are exponentially many such solutions.) Furthermore, assume we have some additional information about the data vector x in the form of sparsity in a given dictionary D ∈ R^{n×N} with N ≥ n; i.e., x = Ds, where s is a sparse vector. We will assume that D is a frame and therefore D D^T = I. In this case we have X = x x^T = D s s^T D^T, and, using the fact that D is a frame, D^T X D = s s^T. This implies that if s is sparse, then so is D^T X D. Based on this observation we add a regularization term to (2.2) to promote sparsity. We do this by adding the convex surrogate of sparsity, namely the ℓ1 norm. Thus, we arrive at the following optimization formulation for recovering data from underdetermined quadratic measurements:

    min_{X ∈ R^{(n+1)×(n+1)}}  Tr( [ c_0,  b_0^T ;  b_0,  A_0 ] X ) + λ ‖D^T X D‖_{ℓ1}
    subject to  Tr( [ c_i,  b_i^T ;  b_i,  A_i ] X ) = y_i,   i = 1, …, m,
                X ⪰ 0,   X_{11} = 1,

where, with slight abuse of notation, D^T X D is applied to the lower-right n × n block of the lifted variable X (the surrogate for x x^T).
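A minimal modification of the previous sketch implements this regularized program (with the quadratic objective set to zero): the same lifted constraints, plus the elementwise ℓ1 penalty on D^T X D applied to the lower-right block. Again a cvxpy sketch with illustrative names, assuming a frame D given as an n × N numpy array.

```python
import numpy as np
import cvxpy as cp

def sparse_quadratic_sdp(A, b, c, y, D, lam=1.0):
    """Sketch of the l1-regularized lifted SDP of Section 2.2: promote
    sparsity of D^T X D, where the lower-right n x n block of the lifted
    variable plays the role of xx^T = D s s^T D^T."""
    n = D.shape[0]
    X = cp.Variable((n + 1, n + 1), symmetric=True)
    constraints = [X >> 0, X[0, 0] == 1]
    for Ai, bi, ci, yi in zip(A, b, c, y):
        Mi = np.block([[np.array([[ci]]), bi.reshape(1, -1)],
                       [bi.reshape(-1, 1), Ai]])
        constraints.append(cp.trace(Mi @ X) == yi)
    # elementwise l1 norm of D^T X D, the convex surrogate for sparsity of ss^T
    penalty = cp.sum(cp.abs(D.T @ X[1:, 1:] @ D))
    cp.Problem(cp.Minimize(lam * penalty), constraints).solve()
    return X.value
```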
3 Phase Retrieval

3.1 Basic Formulation

Phase retrieval is about recovering a general signal from the magnitude of its Fourier transform. This problem arises because detectors can often only record the squared modulus of the Fresnel or Fraunhofer diffraction pattern of the radiation that is scattered from an object. Therefore the phase of the signal is lost, and one faces the very challenging problem of recovering the phase of a signal given magnitude measurements. We now give a 1-D formulation of the problem using finite-length signals. Suppose we have a 1-D signal x = (x[0], x[1], …, x[n−1]) ∈ C^n and write its Fourier transform as

    x̂[ω] = (1/√n) Σ_{0 ≤ t < n} x[t] e^{−i 2π ω t / n},   ω ∈ Ω,

where Ω is a grid of sampled frequencies. The phase retrieval problem is finding x from the magnitude coefficients |x̂[ω]|. Before going further, one should notice that there are some inherent ambiguities in phase retrieval in this basic form: (1) for any c ∈ C with |c| = 1, x and cx have the same magnitudes; (2) the time-reversed form of the signal also has the same magnitudes; (3) a version of the signal shifted in time also has the same magnitudes. To avoid some of these ambiguities, in phase retrieval applications one usually measures the magnitudes of diffracted patterns (rather than just the DFT of the signal), i.e., one measures

    y_i = |⟨a_i, x⟩|²,   i = 1, 2, …, m,

where the diffracting waveform a_i[t] can be written in the form a_i[t] = w[t] e^{i 2π ⟨ω_k, t⟩}. Therefore the goal is to recover x from measurements of the form y_i = |⟨a_i, x⟩|², i = 1, 2, …, m. The careful reader will notice that there is still one ambiguity left even in this generalized version: one cannot recover the global phase of x. Therefore the goal of phase retrieval is to recover x up to this global phase ambiguity.
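These ambiguities are easy to check numerically. The snippet below is a toy verification (not from the paper) that a global phase, time reversal, and a circular time shift all leave the DFT magnitudes of a real signal unchanged.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 16
x = rng.standard_normal(n)                 # a real test signal

mag = lambda v: np.abs(np.fft.fft(v))      # DFT magnitudes

x_phase = np.exp(1j * 0.7) * x             # global phase c * x with |c| = 1
x_rev   = np.roll(x[::-1], 1)              # time reversal x[(-t) mod n]
x_shift = np.roll(x, 3)                    # circular shift in time

for variant in (x_phase, x_rev, x_shift):
    assert np.allclose(mag(variant), mag(x))
```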

3.2 The Challenge

To emphasize the challenging aspect of this problem, consider Figures 1 and 2, which contain the portraits of the two most recent presidents of the US.

[Figure 1: President Obama]   [Figure 2: President Bush]

Now consider Figure 3. This picture is obtained by combining the magnitude of the Obama picture with the phase of the Bush picture. As can be seen, the dominant picture here is that of Bush. Figure 4 is similar: it contains the magnitude of Bush along with the phase of Obama.

[Figure 3: Magnitude of Obama, phase of Bush]   [Figure 4: Magnitude of Bush, phase of Obama]

These figures highlight the important role that the phase of a signal plays. Therefore, if the phase of the image (the component that seems to carry a lot of the information) is lost, retrieving the phase from magnitude measurements alone seems to be a very challenging problem.

3.3 PhaseLift

Inspired by the matrix completion problem, in a breakthrough result [3] Candes et al. present a novel technique for the phase retrieval problem up to global phase. The convex relaxation they propose in the presence of noise is depicted in Algorithm 1. The algorithm offers a more general framework for different noisy scenarios.

Algorithm 1 Phase Lift [3]
Input: Magnitude measurements y_1, y_2, …, y_m and regularization term λ.
1. Solve
       min_{X ∈ R^{n×n}}  Σ_{i=1}^m (1/(2σ_i²)) (y_i − µ_i)² + λ Tr(X)
       subject to  µ_i = a_i^T X a_i,   i = 1, 2, …, m,
                   X ⪰ 0.
2. Set x̂ = √σ_1 v_1, where σ_1 is the largest eigenvalue of the solution X̂ of the above optimization problem and v_1 is the corresponding eigenvector.
Output: Signal x̂ up to a global phase.

What is remarkable about the result of [3] is that they prove, in the noiseless case and when the measurement vectors a_i are Gaussian random vectors (either real or complex), that as long as the number of measurements exceeds m > c n log n, then x̂ = x up to the global phase ambiguity we described earlier!
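For small problem sizes, Algorithm 1 can be prototyped directly with a generic SDP solver. The sketch below uses Python with cvxpy rather than the CVX software [4]; real measurement vectors are assumed, stacked as the rows of an array, a single noise level σ² is used for all measurements for simplicity, and the names are ours.

```python
import numpy as np
import cvxpy as cp

def phase_lift(Avecs, y, lam=0.05, sigma2=1.0):
    """Sketch of Algorithm 1 (Phase Lift): trace-penalized least squares
    over the PSD cone, followed by rank-one extraction x_hat = sqrt(s1) v1.
    Avecs: (m, n) array whose rows are the measurement vectors a_i."""
    m, n = Avecs.shape
    X = cp.Variable((n, n), symmetric=True)
    mu = cp.hstack([cp.trace(np.outer(Avecs[i], Avecs[i]) @ X)
                    for i in range(m)])
    obj = cp.sum_squares(y - mu) / (2 * sigma2) + lam * cp.trace(X)
    cp.Problem(cp.Minimize(obj), [X >> 0]).solve()
    w, V = np.linalg.eigh(X.value)          # eigenvalues in ascending order
    return np.sqrt(max(w[-1], 0.0)) * V[:, -1]
```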
3.4 Derivation of PhaseLift as a Special Case of the General Formulation

We now show that one can arrive at the noiseless Phase Lift algorithm using the general relaxation scheme presented earlier. Notice that in the noiseless case the measurements are

    y_i = |⟨a_i, x⟩|² = x^T a_i a_i^T x.        (3.1)

The above equation is of the form of the general formulation, and therefore the convex relaxation in this case reduces to

    min_{X ∈ R^{n×n}}  0
    subject to  y_i = a_i^T X a_i,   i = 1, 2, …, m,
                X ⪰ 0.        (3.2)

Notice that this is exactly the Phase Lift formulation if one disregards the trace term. (In fact, the trace term in [3] is only inspired by matrix completion and, as the authors note themselves, is not necessary.) Therefore we can recover Phase Lift as a special case of our general framework.

3.5 A New Relaxation for Noisy-Input Phase Retrieval

The noise model considered in [3] and all of its generalizations is based on the assumption that y_i = µ_i + z_i, where the z_i are i.i.d. Gaussian and µ_i = |⟨a_i, x⟩|². Depending on the likelihood of z_i, one then minimizes an appropriate cost function. For example, the version of Phase Lift shown above corresponds to the case where the z_i are i.i.d. N(0, σ_i²). However, a more realistic noise model is

    y_i = |⟨a_i, x⟩ + z_i|²,        (3.3)

i.e., the noise term is added to the measurements before the magnitude is taken. This form of noise, among other things, models nonlinearity in the input. While Phase Lift does not offer any guidance in this case, the general convex relaxation scheme we presented earlier provides natural guidance. For example, consider the case where the z_i are i.i.d. N(0, σ_i²). Notice that

    y_i = x^T a_i a_i^T x + z_i² + 2 z_i ⟨a_i, x⟩ = [x; z]^T [ a_i a_i^T,  B_i ;  B_i^T,  I_i ] [x; z],        (3.4)

where B_i is a matrix with all entries zero except the i-th column, which is a_i, and I_i is a diagonal matrix with all entries zero except the (i, i) entry, which is one. Therefore the maximum likelihood estimator for this Gaussian noise model results in

    min_{x ∈ R^n, z ∈ R^m}  Σ_{i=1}^m z_i² / (2σ_i²)
    subject to  y_i = [x; z]^T [ a_i a_i^T,  B_i ;  B_i^T,  I_i ] [x; z],   i = 1, …, m.

Notice that this is of the form of the general formulation (minimize a quadratic subject to quadratic equality constraints), and therefore one can use the general convex relaxation scheme to handle this case as well. Due to space limitations we will not write the final convex relaxation here.

4 Proposed Method (Sparse Phase Lift)

As mentioned earlier, Phase Lift boasts many advantages: the optimization scheme is convex and therefore has a global solution, it is guaranteed to recover the original signal up to a global phase, and it is robust to certain kinds of noise. However, it has two major flaws: (1) the optimization scheme is slow; (2) it does not take advantage of the inherent structure present in many practical applications. We will address the computational cost in the next section. Here we present an algorithm that takes advantage of the inherent low dimensionality of signals. We will call this algorithm Sparse Phase Lift. This algorithm utilizes the inherent sparsity that exists in many real-world signals of interest, e.g., images. So the basic assumption is that the input to Sparse Phase Lift is s-sparse with respect to a dictionary D ∈ R^{n×N}. Given the low-dimensional nature of the signal, one expects to be able to recover it from a much lower number of measurements. We will demonstrate that Sparse Phase Lift is indeed capable of recovering the signal with a much lower number of measurements. The main steps of our suggested Sparse Phase Lift algorithm are depicted in Algorithm 2.

Algorithm 2 Sparse Phase Lift (SPL)
Input: Magnitude measurements y_1, y_2, …, y_m.
1. Solve
       min_{X ∈ R^{n×n}}  ‖D^T X D‖_{ℓ1}
       subject to  y_i = a_i^T X a_i,   i = 1, 2, …, m,
                   X ⪰ 0.
2. Set x̂ = √σ_1 v_1, where σ_1 is the largest eigenvalue of the solution X̂ of the above optimization problem and v_1 is the corresponding eigenvector.
Output: Signal x̂ up to a global phase.
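Algorithm 2 admits an equally short prototype; this sketch mirrors the formulation only and is not the efficient solver of the next section. As before, cvxpy, real measurement vectors, and illustrative names are assumed.

```python
import numpy as np
import cvxpy as cp

def sparse_phase_lift(Avecs, y, D):
    """Sketch of Algorithm 2 (SPL): minimize the elementwise l1 norm of
    D^T X D subject to the (noiseless) magnitude measurements."""
    m, n = Avecs.shape
    X = cp.Variable((n, n), symmetric=True)
    constraints = [X >> 0]
    constraints += [cp.trace(np.outer(Avecs[i], Avecs[i]) @ X) == y[i]
                    for i in range(m)]
    cp.Problem(cp.Minimize(cp.sum(cp.abs(D.T @ X @ D))), constraints).solve()
    w, V = np.linalg.eigh(X.value)
    return np.sqrt(max(w[-1], 0.0)) * V[:, -1]  # x_hat = sqrt(s1) v1
```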

5 Efficient Implementation

In this section we present a method that reduces the computational complexity involved in Sparse Phase Lift. The main steps of the algorithm are depicted in Algorithm 3 below. This derivation is inspired by the general scheme of [5, 1], adapted to a dual formulation of this problem. It should be noted that standard optimization methods cannot be applied here. To be concrete, even when the size of the signal is as small as n = 70, the problem becomes prohibitive for standard interior-point solvers, e.g., [4]. With the algorithm we present here we can handle signals of size 16,384 (128 × 128 pixel images). Scaling this up to larger sizes is an important question that we hope to address in the future. In the next paragraphs we quickly go through the derivation of this algorithm.

Since the objective of SPL is non-smooth, we add a smoothing term of the form (µ/2)‖X − X_0‖_F². We also introduce an epigraph variable t ∈ R. The optimization in SPL can then be approximated by

    min_{t ∈ R, X ∈ R^{n×n}}  t + (µ/2)‖X − X_0‖_F²
    subject to  y_i = a_i^T X a_i,   i = 1, …, m,
                ‖D^T X D‖_{ℓ1} ≤ t,   X ⪰ 0.

Putting this into Lagrangian form we have

    max_{ν, λ, Λ, Γ}  min_{t, X}  t − νt + (µ/2)‖X − X_0‖_F² − Σ_i λ_i (y_i − a_i^T X a_i) − ⟨X, Λ⟩ − ⟨D^T X D, Γ⟩
    subject to  Λ ⪰ 0,   ‖Γ‖_{ℓ∞} ≤ ν.

Notice that the Lagrangian is unbounded (in t) unless ν = 1, which results in

    max_{λ, Λ, Γ}  min_X  (µ/2)‖X − X_0‖_F² − Σ_i λ_i y_i − ⟨X, Λ + D Γ D^T − Σ_i λ_i a_i a_i^T⟩
    subject to  Λ ⪰ 0,   ‖Γ‖_{ℓ∞} ≤ 1.

Define the dual function g_µ(λ, Λ, Γ) as the value of the inner minimization in the above problem. The minimizer X(λ, Λ, Γ) is

    X(λ, Λ, Γ) = X_0 + (1/µ) (Λ + D Γ D^T − Σ_i λ_i a_i a_i^T).

To maximize the dual function we use a Nesterov-style gradient update [5], i.e., a projected gradient step

    (λ^{k+1}, Λ^{k+1}, Γ^{k+1}) = arg min_{λ, Λ ⪰ 0, ‖Γ‖_{ℓ∞} ≤ 1}  (1/(2t_k)) ( ‖λ − λ^k‖² + ‖Λ − Λ^k‖_F² + ‖Γ − Γ^k‖_F² ) − ⟨∇g_µ(λ^k, Λ^k, Γ^k), (λ, Λ, Γ)⟩.

Notice that the updates are separable, so we need to solve the following optimizations.

λ update:

    λ^{k+1} = arg min_λ  ‖λ − λ^k‖² / (2t_k) + Σ_i λ_i (y_i − a_i^T X^k a_i)
    ⇒  λ^{k+1} = λ^k − t_k ( y_1 − a_1^T X^k a_1, …, y_m − a_m^T X^k a_m )^T.

Λ update:

    Λ^{k+1} = arg min_{Λ ⪰ 0}  ‖Λ − Λ^k‖_F² / (2t_k) + ⟨Λ, X^k⟩
    ⇒  Λ^{k+1} = S(Λ^k − t_k X^k),

where S is the projection onto the positive semidefinite cone, which can be calculated by taking an eigenvalue decomposition and keeping the positive eigenvalues and their corresponding eigenvectors.

Γ update:

    Γ^{k+1} = arg min_{‖Γ‖_{ℓ∞} ≤ 1}  ‖Γ − Γ^k‖_F² / (2t_k) + ⟨Γ, D^T X^k D⟩
    ⇒  Γ^{k+1} = T(Γ^k − t_k D^T X^k D, t_k),

where the operator T(Z, τ) applies sgn(z) min{|z|, τ} to each element. Notice that these are almost the update rules in Algorithm 3. We obtain the algorithm by adding Nesterov-style momentum updates for the dual variables, to make the algorithm converge faster; this is the reason for the auxiliary variables with bars and tildes on top.² We have used a backtracking method for the step sizes t_k.

Algorithm 3 Efficient Implementation of Sparse Phase Lift
Input: Magnitude measurements y_1, y_2, …, y_m, initial primal and dual solutions X_0, Λ_0 ∈ R^{n×n}, λ_0 ∈ R^m, Γ_0 ∈ R^{N×N}, smoothing parameter µ, and step sizes {t_k}.
Initialization: θ_0 ← 1, λ^0 = λ̄^0 ← λ_0, Λ^0 = Λ̄^0 ← Λ_0, Γ^0 = Γ̄^0 ← Γ_0.
Iteration: for k = 0, 1, 2, … do
    λ̃^k ← (1 − θ_k) λ^k + θ_k λ̄^k;   Λ̃^k ← (1 − θ_k) Λ^k + θ_k Λ̄^k;   Γ̃^k ← (1 − θ_k) Γ^k + θ_k Γ̄^k
    X^k ← X_0 + (1/µ) (Λ̃^k + D Γ̃^k D^T − Σ_i (λ̃^k)_i a_i a_i^T)
    λ̄^{k+1} ← λ̄^k − t_k ( y_1 − a_1^T X^k a_1, …, y_m − a_m^T X^k a_m )^T
    Λ̄^{k+1} ← S(Λ̄^k − t_k X^k)
    Γ̄^{k+1} ← T(Γ̄^k − t_k D^T X^k D, t_k)
    λ^{k+1} ← (1 − θ_k) λ^k + θ_k λ̄^{k+1};   Λ^{k+1} ← (1 − θ_k) Λ^k + θ_k Λ̄^{k+1};   Γ^{k+1} ← (1 − θ_k) Γ^k + θ_k Γ̄^{k+1}
    θ_{k+1} ← 2 / (1 + (1 + 4/θ_k²)^{1/2})
end for
Output: Signal x̂ up to a global phase.

² Due to space limitations we refer to [5, 1] for further details.
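The two operators appearing in Algorithm 3 are inexpensive to implement. Below is a minimal numpy sketch (our naming follows the text) of S, the projection onto the positive semidefinite cone, and T, the elementwise clipping operator.

```python
import numpy as np

def S(Z):
    """Projection onto the PSD cone: keep only the nonnegative eigenvalues."""
    w, V = np.linalg.eigh((Z + Z.T) / 2)    # symmetrize for numerical safety
    return (V * np.maximum(w, 0)) @ V.T

def T(Z, tau):
    """Apply sgn(z) * min(|z|, tau) to each element of Z."""
    return np.sign(Z) * np.minimum(np.abs(Z), tau)
```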
6 Simulations on Synthetic Data

In this section we perform some simulations on synthetic data to better understand the regime of applicability of Phase Lift and Sparse Phase Lift.

6.1 Phase Lift vs. Sparse Phase Lift on a Simple Example

In this section we wish to demonstrate the performance of Sparse Phase Lift on a small synthetic image. For this purpose we generate a random s = 5-sparse signal of size 8 × 8,³ and take its wavelet transform to get a synthetic image x which is sparse in a wavelet frame. The wavelet frame we used was the Coiflet from Stanford WaveLab [6]. We used m = n random Gaussian magnitude measurements. The synthetic image x and its sparse coefficients s in the wavelet frame are shown in Figures 5(a) and 5(b). The images x̂ reconstructed using Phase Lift and Sparse Phase Lift, together with their sparse coefficients ŝ in the wavelet frame, are shown in Figures 6(a), 6(b), 7(a), and 7(b).

[Figure 5: (a) original image; (b) sparse coefficients]
[Figure 6: Phase Lift: (a) reconstructed image; (b) sparse coefficients]
[Figure 7: Sparse Phase Lift: (a) reconstructed image; (b) sparse coefficients]

As can be seen, Sparse Phase Lift is successful but Phase Lift fails. This is to be expected, since Phase Lift does not utilize the low-dimensional structure of the signal. In this example Phase Lift and Sparse Phase Lift achieved relative errors⁴ of … and …, respectively.

6.2 Phase Lift vs. Sparse Phase Lift in Terms of Minimum Measurements

In this section we compare the number of measurements needed by Phase Lift and Sparse Phase Lift. For this purpose we consider an s-sparse signal x ∈ R^n. For now we assume D = I. The measurement vectors a_i were chosen as i.i.d. random vectors with N(0, 1) entries. The number of measurements m was varied from small to large. For each value of m, 100 trials were run. A value of m was considered successful if at least 90 out of the total 100 trials resulted in successful recovery (this corresponds to a probability of success of 0.9). The smallest such value of m for both methods is tabulated in Table 1, for n = 50 and s = 5, 10, 20. As can be seen, Sparse Phase Lift can achieve much lower sampling rates than PhaseLift by exploiting the sparsity of the signal. However, unlike compressed sensing, where the number of measurements is roughly proportional to the sparsity level s, here the relationship seems to be quadratic (O(s²)).

    s      Phase Lift    Sparse Phase Lift
    5          …                …
    10         …                …
    20         …                …

Table 1: Comparison of the minimum number of measurements for different sparsity levels, with n = 50.

³ By a random sparse signal we mean that the support of the signal was chosen uniformly at random among all (n choose s) possible supports of size s. The values on the support were chosen i.i.d. N(0, 1). Here, n = 8 × 8 = 64.
⁴ We calculate relative error as ‖X̂ − X‖_F / ‖X‖_F.
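For reference, the synthetic instances used in these experiments can be generated as follows; a sketch under the stated assumptions (support uniform at random, N(0, 1) values and measurement vectors, D = I), with illustrative naming.

```python
import numpy as np

def synthetic_instance(n, s, m, seed=0):
    """s-sparse signal with uniformly random support and N(0, 1) values,
    plus m Gaussian magnitude measurements y_i = <a_i, x>^2."""
    rng = np.random.default_rng(seed)
    x = np.zeros(n)
    support = rng.choice(n, size=s, replace=False)
    x[support] = rng.standard_normal(s)
    A = rng.standard_normal((m, n))          # rows are the a_i
    y = (A @ x) ** 2
    return x, A, y
```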

7 Simulations on Real Images

As stated in previous sections, using our proposed efficient solver we are able to increase the range of applicability of Sparse Phase Lift. Despite this improvement, it takes a long time to run Sparse Phase Lift on large image sizes. Therefore, due to time constraints, we chose 10 grayscale images from the Miscellaneous category of the USC-SIPI Image Database [7] and down-sampled them to 64 × 64. We present in Table 2 the average relative errors of Phase Lift and Sparse Phase Lift for various forms of measurement mechanisms. The dictionary used for the images was the Coiflet from [6]. The measurement methods are:

1- m = n random Gaussian measurements;
2- m = n measurements consisting of columns of F, where F is the DFT matrix;
3- m = 2n measurements consisting of columns of F and FW, where W is a diagonal matrix with each diagonal entry a complex normal random variable;
4- m = 3n measurements consisting of columns of F, FW, and FW′, where W and W′ are independent diagonal matrices with each diagonal entry a complex normal random variable.

    Algorithm            Gaussian, m = n     Fourier F, m = n
    Phase Lift                  …                   …
    Sparse Phase Lift           …                   …

    Algorithm            F & FW, m = 2n      F, FW & FW′, m = 3n
    Phase Lift                  …                   …
    Sparse Phase Lift           …                   …

Table 2: Average relative error on 10 grayscale images from the USC image database (down-sampled to 64 × 64), using the Coiflet wavelet of Stanford WaveLab.

As can be seen in the table, in case (4) both algorithms work well. In cases (1) and (3) Sparse Phase Lift shows superior performance. Again, this is to be expected, since Phase Lift does not incorporate the sparsity of the images in the wavelet domain. Both methods fail in case (2). This says that Sparse Phase Lift is still not able to reconstruct the images from magnitudes of the Fourier transform of the signal alone. However, Sparse Phase Lift still dramatically decreases the number of magnitude measurements needed. Finally, we test Sparse Phase Lift on a boat image from the same database. This time we down-sample the image to size 64 × 64,⁵ and use case (3) for our measurements. The original image along with its reconstruction is shown in Figures 8(a) and 8(b).

[Figure 8: Performance of Sparse Phase Lift on the down-sized boat image with measurements F and FW: (a) original; (b) reconstructed]

⁵ The reader should be reminded that this is equivalent to solving an SDP of size 4096, which is time-consuming even with our efficient solver.

References

[1] S. Becker, E. J. Candes, and M. Grant. Templates for convex cone problems with applications to sparse signal recovery. To appear in Mathematical Programming Computation, 3(3).
[2] A. Ben-Tal and A. Nemirovski. Lectures on Modern Convex Optimization: Analysis, Algorithms and Engineering Applications. Philadelphia, PA: SIAM, 2001.
[3] E. J. Candes, T. Strohmer, and V. Voroninski. PhaseLift: Exact and stable signal recovery from magnitude measurements via convex programming. arXiv preprint, v2.
[4] M. Grant and S. Boyd. CVX: Matlab software for disciplined convex programming, version 1.21, April 2011.
[5] Y. Nesterov. Smooth minimization of non-smooth functions. Mathematical Programming, Series A, 103:127-152, 2005.
[6] Stanford WaveLab. Available online.
[7] USC-SIPI Image Database. Available at http://sipi.usc.edu/database/.
