Transformation-invariant Collaborative Sub-representation

Yeqing Li, Chen Chen, Junzhou Huang
Department of Computer Science and Engineering, University of Texas at Arlington, Texas 76019, USA

Abstract: In this paper, we present an efficient and robust image representation method that can handle misalignment, occlusion and big noise at a lower computational cost. It is motivated by the sub-selection technique, which uses partial observations to efficiently approximate the original high-dimensional problem. While sub-selection is very efficient, it cannot handle many real problems that arise in practical applications, such as misalignment, occlusion and big noise. To this end, we propose a robust sub-representation method, which can effectively handle these problems with an efficient scheme. Its performance guarantee is proved theoretically, and numerous experiments on practical applications further demonstrate that the proposed method leads to significant improvements in terms of both speed and accuracy.

I. INTRODUCTION

Image representation is an important problem in computer vision and pattern recognition, and it has gained a lot of attention in the past decade. While there has been huge interest in image representation, to date the most popular approaches are sparse representation and collaborative representation. In Wright et al.'s pioneering work on SRC [2] for sparse representation, the recognition problem is modeled as finding a sparse representation of the test image as a linear combination of training images. Furthermore, outlier pixels are also assumed to be sparse. Favorable results have been achieved on face recognition with occlusion and corruption. Zhang et al. proposed collaborative representation (CR) [3], [4], which relaxes the sparsity constraint on the representation coefficients. CR inspired many follow-up works, such as [5] and [6]. CR uses l2-norm regularization instead of the l1-norm in SRC; therefore, it is much faster.
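The speed gap just mentioned can be made concrete: l2-regularized (collaborative) representation has a closed-form solution, a single linear solve, while the l1 problem requires an iterative solver. A minimal sketch with a synthetic dictionary (illustrative only, not the authors' code; sizes and the regularization weight are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 1024, 50                     # pixels, number of training images
A = rng.standard_normal((n, p))     # dictionary: one column per training image
y = A @ rng.standard_normal(p) + 0.01 * rng.standard_normal(n)  # query image
lam = 1e-2                          # regularization weight

# Collaborative representation: min_x ||y - Ax||_2^2 + lam * ||x||_2^2.
# The closed-form (ridge) solution below is one linear solve, which is
# why CR is much faster than l1-regularized sparse coding.
x_cr = np.linalg.solve(A.T @ A + lam * np.eye(p), A.T @ y)
residual = np.linalg.norm(y - A @ x_cr)
print(residual)                     # small: the dictionary explains the query
```

The same solve also reveals the cost structure: forming A.T @ A is O(n p^2), which is exactly the term that sub-selection later reduces by shrinking n.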
CR achieves the desired results on clean data without corruption. However, if the test images are sparsely corrupted, l1-norm regularization has to be used to constrain the sparse occlusion, which leads to a severe degradation in speed. The high computational complexity is a big obstacle to its use on high-dimensional images. Moreover, in practice, due to the registration error of the object detector or the motion of the target object, the test image is usually not well aligned with the training images. To achieve an effective representation, the test image has to be aligned with the training images first, using iterative transformation estimation methods [5], [7], [8]. This process has a high computational cost because the transformation estimation step is usually solved in the high-dimensional pixel space and cannot utilize dimension-reduction techniques. Although existing methods have achieved great success in various situations such as illumination change, occlusion and misalignment, their computational inefficiency limits their use in practical applications involving high-dimensional images. To this end, we propose a robust sub-representation method, which not only represents the image efficiently but also effectively handles the problems of misalignment, occlusion and big noise. Its performance guarantee can be proved theoretically. Combining it with the existing collaborative representation method, we propose a Transformation-invariant Collaborative Sub-Representation (TCSR) algorithm. Numerous experiments on practical applications have been conducted to further demonstrate its superior performance in terms of both computational complexity and accuracy. The contributions of this paper are: 1) To handle big noise, the sub-selection method is generalized to robust sub-representation, and we theoretically prove its benefit over the sub-selection method; 2) Sub-representation is further extended to handle misalignment, occlusion, corrupted pixels, etc.
This extension is done by combining it with existing techniques such as transformation estimation algorithms and collaborative representation. The resulting method can also be regarded as a case study of sub-representation, showing its great potential for cooperating with other methods. 3) Extensive experiments are conducted to validate the efficiency and effectiveness of the proposed methods, demonstrating that the proposed method can significantly accelerate collaborative representation with imperceptible loss of accuracy.

II. RELATED WORK

A. Sub-selection and Handling Incomplete Data

Sub-selection [9] is an efficient way to reduce the dimension of data. It projects the data onto lower-dimensional subspaces using a randomly chosen subset of the data features. An example of sub-selection on the pixel features of an image is shown in Fig. 1. In theoretical analysis, one situation that is similar to sub-selection is handling incomplete data. Balzano et al. recently proved theoretical guarantees for subspace detection using incomplete data [1]. The theory is sketched as follows for convenience. Let v_Ω be the vector of dimension |Ω| comprised of the elements v_i, i ∈ Ω, ordered lexicographically, where |Ω| denotes the cardinality of Ω. The energy of v in the subspace S is ||P_S v||_2^2, where P_S denotes the projection operator onto S. Let U be an n × r matrix whose columns span the r-dimensional subspace S. In this case, P_S = U(U^T U)^(-1) U^T. With this representation in mind, let U_Ω denote the |Ω| × r matrix whose

Fig. 1: Sub-selection on the pixel features of an image (image → vectorize → v → sub-selection).

rows are the |Ω| rows of U indexed by the set Ω, arranged in lexicographic order. Suppose we only observe v on the set Ω. One approach to estimating its energy in S is to assess how well v_Ω can be represented in terms of the rows of U_Ω. Define the projection operator P_{S_Ω} := U_Ω (U_Ω^T U_Ω)^† U_Ω^T, where † denotes the pseudo-inverse. It follows immediately that if v ∈ S, then ||v − P_S v||_2^2 = 0 and ||v_Ω − P_{S_Ω} v_Ω||_2^2 = 0. Let the entries of v be sampled uniformly with replacement. Again let Ω refer to the set of indices for observations of entries in v, and denote |Ω| = k. The following theorem has been proved in [1]:

Theorem II.1 (Theorem 1 in [1]): Let δ > 0 and k ≥ (8/3) r μ(S) log(2r/δ). Then with probability at least 1 − 4δ,

  (k/n)(1 − α − α₁) ||v − P_S v||_2^2 ≤ ||v_Ω − P_{S_Ω} v_Ω||_2^2 ≤ (1 + α)(k/n) ||v − P_S v||_2^2,   (1)

where α = sqrt((2 μ(y)^2 / k) log(1/δ)), β = sqrt(2 μ(y) log(1/δ)), γ = sqrt((8 r μ(S) / (3k)) log(2r/δ)), and α₁ = (r μ(S) / k)(1 + β)^2 / (1 − γ). Here μ(S) is the coherence of the subspace S [1]: μ(S) := (n/r) max_j ||P_S e_j||_2^2, where e_j represents a standard basis element. Note that 1 ≤ μ(S) ≤ n/r. We let μ(y) denote the coherence of the subspace spanned by y. Plugging in the definition, we have μ(y) = n ||y||_∞^2 / ||y||_2^2. Here, v = x + y, with x ∈ S and y ∈ S^⊥. For general cases, Balzano et al. have shown that if |Ω| is just slightly greater than r log(r), then with high probability ||v_Ω − P_{S_Ω} v_Ω||_2^2 is very close to (|Ω|/n) ||v − P_S v||_2^2. This means that partial observations can efficiently approximate the full high-dimensional observation. Balzano et al.'s result provides a useful starting point for analysing the performance of sub-selection representation. However, it only concerns clean data and does not directly extend to many real problems in practical applications, such as the aforementioned misalignment, occlusion and big noise. B.
Transformation-invariant Collaborative Representation

Yang et al. [5] proposed a transformation-invariant collaborative representation which handles the representation and the transformation estimation simultaneously. First, a sparse term e is employed to handle occlusion. The problem is solved by minimizing the objective function:

  min_{x,e} F₁(x, e) = ||y − Ax − e||_2^2 + λ ||x||_2^2 + γ ||e||_1,   (2)

where e represents the sparse big error, y ∈ R^n is the query image, A = [I₁, I₂, ..., I_p] ∈ R^{n×p} is the dictionary, and x ∈ R^p denotes the representation coefficients. Then the transformation parameter is introduced, so that the objective function becomes:

  min_{x,e,τ} F₂(x, e, τ) = ||y∘τ − Ax − e||_2^2 + λ ||x||_2^2 + γ ||e||_1,   (3)

where τ is the transformation parameter and y∘τ means applying the transformation to the query image. The authors use a two-step strategy to accelerate the optimization process. Step one computes the SVD of the dictionary and uses the singular vectors to solve for x and τ:

  min_{β,e,τ} F₃(β, e, τ) = ||y∘τ − Uβ − e||_2^2 + γ ||e||_1,   (4)

where A = USV^T is the SVD of A and β = SV^T x is a temporary representation coefficient with large absolute values. Minimizing F₃(β, e, τ) gives an estimate of the transformation parameter τ, the big sparse error e, and the representation coefficients. This approach is much faster than the previous sparse representation approach RASR [8]. However, due to the l1-norm optimization and the transformation estimation, it is still slow for high-dimensional images in practical applications.

III. METHODOLOGY

In this section, we introduce the sub-representation approach and provide a theoretical proof of its performance guarantee for the case of sparse big noise. The resulting method is called robust sub-representation. Finally, after combining it with transformation estimation and collaborative representation, we propose the TCSR approach. In the following discussion, we shall interchangeably use τ to denote the transformation parameters and the operator that applies that transformation to a set of images.
Likewise, we may also use Ω both for a sub-selection matrix and for the sub-selection operation using that matrix. Thus y∘τ indicates applying the transformation to the image y, and A∘τ indicates applying the transformation to each image (column) in A. Similarly, y_Ω and A_Ω denote applying sub-selection to y and A, respectively.

A. Sub-Representation

Consider the problem of representing a query image by a linear combination of a set of images. Let y ∈ R^n be the query image, A = [I₁, I₂, ..., I_p] ∈ R^{n×p} be the dictionary of p images, and x ∈ R^p denote the representation coefficients. This problem can be formulated as solving the equation Ax = y. The dimension of this equation can be reduced using the sub-selection discussed in Section II-A. Instead of solving Ax = y, we solve A_Ω x̂ = y_Ω. That is,

  x̂ = arg min_x ||y_Ω − A_Ω x||_2^2,   (5)

where A_Ω = ΩA denotes the dictionary under sub-selection, y_Ω = Ωy denotes the query image under sub-selection, and x̂ is the representation coefficient vector. From Theorem II.1, x̂ should be very close to the original x when Ω satisfies certain conditions. Equation (5) is the basic idea of sub-representation. Its benefit is that the solution of the original equation can be approximated by the solution of a low-dimensional version of the equation; hence, the computational cost is significantly reduced. From now on, we shall extend sub-representation to handle several challenging problems and finally reach a practical image representation method.

B. Robust Sub-Representation

Occlusions and corrupted pixels are common challenges in many practical scenarios. In the previous literature, they are modelled as sparse big noise. Instead of solving Ax = y, we need to solve Ax = y − e, where e is the sparse error term. Adding sparse regularization on e, the problem becomes minimizing the following objective function:

  min_{x,e} F₄(x, e) = γ ||e||_1 + ||y − Ax − e||_2^2,   (6)

where γ is the regularization parameter. Solving this directly can be time-consuming even for a medium-size image, due to the l1-norm minimization. However, usually p ≪ n, or the rank of the matrix A is far less than p and n, so a sub-selection operation Ω (with |Ω| ≪ n) can be applied to the representation. The new objective function is:

  min_{x̂,e_Ω} G₄(x̂, e_Ω) = γ ||e_Ω||_1 + ||y_Ω − A_Ω x̂ − e_Ω||_2^2,   (7)

where e_Ω = Ωe is the error term resulting from applying sub-selection to e. We now discuss the relationship between Eq. (6) and Eq. (7), which is missing in the previous literature. We first prove the boundedness of the l1-norm term under sub-selection. The following theorem is required in the later discussion:

Theorem III.1
(McDiarmid's Inequality [11]): Let X₁, ..., X_n be independent random variables, and assume f is a function for which there exist t_i, i = 1, ..., n, satisfying

  sup_{x₁,...,x_n, x̂_i} |f(x₁, ..., x_n) − f(x₁, ..., x̂_i, ..., x_n)| ≤ t_i,   (8)

where x̂_i indicates replacing the sample value x_i with any other of its possible values. Call f(X₁, ..., X_n) := Y. Then for any ε > 0,

  P[Y ≥ E[Y] + ε] ≤ exp(−2ε² / Σ_{i=1}^n t_i²),   (9)
  P[Y ≤ E[Y] − ε] ≤ exp(−2ε² / Σ_{i=1}^n t_i²).   (10)

With Theorem III.1, the following lemma can be proved:

Lemma III.1: Suppose δ > 0, y ∈ R^n and |Ω| = m. Then

  (1 − α₁)(m/n) ||y||_1 ≤ ||y_Ω||_1 ≤ (1 + α₁)(m/n) ||y||_1   (11)

with probability at least 1 − 2δ, where μ₁(y) = n ||y||_∞ / ||y||_1 and α₁ = sqrt((2 μ₁(y)² / m) log(1/δ)).

Proof: We use McDiarmid's inequality from Theorem III.1 with the function f(X₁, ..., X_m) = Σ_{i=1}^m |X_i|. Set X_i = y_{Ω(i)}. Since |y_{Ω(i)}| ≤ ||y||_∞ for all i, replacing any one sample changes the sum by at most t_i = 2||y||_∞. We first calculate E[Σ_{i=1}^m |X_i|]. Define 1{·} to be the indicator function, and assume that the samples are taken uniformly with replacement. Then E[Σ_{i=1}^m |X_i|] = Σ_{i=1}^m E[Σ_{j=1}^n |y_j| 1{Ω(i)=j}] = (m/n) ||y||_1. Invoking Theorem III.1, the lower-tail bound reads P[Σ_i |X_i| ≤ (m/n)||y||_1 − ε]. Letting ε = α₁ (m/n) ||y||_1, this probability is bounded by exp(−α₁² m ||y||_1² / (2 n² ||y||_∞²)), that is, P[||y_Ω||_1 ≤ (1 − α₁)(m/n) ||y||_1] ≤ exp(−α₁² m ||y||_1² / (2 n² ||y||_∞²)). Substituting the definitions μ₁(y) = n ||y||_∞ / ||y||_1 and α₁ = sqrt((2 μ₁(y)²/m) log(1/δ)) shows that the lower bound holds with probability at least 1 − δ. The upper bound is proved similarly. The lemma then follows by applying the union bound.

Lemma III.2: Let δ > 0. Then with probability at least 1 − 6δ, F₄(x, e) in Eq. (6) and G₄(x̂, e_Ω) in Eq. (7) satisfy:

  (m/n)(1 − α₄) F₄(x, e) ≤ G₄(x̂, e_Ω) ≤ (m/n)(1 + α₄) F₄(x, e),   (12)

where α₄ is a small positive constant for a given problem. Lemma III.2 can be proved using Theorem II.1 and Lemma III.1. Due to the page limit, the detailed proofs of this lemma and the later lemmas are given in the supplementary material.
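The robust sub-representation objective of Eq. (7) is jointly convex in (x̂, e_Ω) and splits into two easy subproblems: a least-squares solve in x̂ and a soft-thresholding (shrinkage) step in e_Ω. The sketch below alternates these two exact block minimizers on synthetic data; it is an illustrative solver choice, not the authors' implementation, and all sizes and parameters are made up:

```python
import numpy as np

def soft(v, t):
    """Soft-thresholding, the proximal operator of t * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def robust_sub_rep(A, y, omega, gamma=1.0, iters=100):
    """Alternating minimization of gamma*||e||_1 + ||y_O - A_O x - e||_2^2.

    Both subproblems are solved exactly (least squares in x, shrinkage in e);
    the objective is jointly convex, so the iterates approach its minimum."""
    A_o, y_o = A[omega], y[omega]
    e = np.zeros_like(y_o)
    for _ in range(iters):
        x = np.linalg.lstsq(A_o, y_o - e, rcond=None)[0]   # x-step
        e = soft(y_o - A_o @ x, gamma / 2.0)               # e-step
    return x, e

rng = np.random.default_rng(2)
n, p, k = 2000, 20, 400
A = rng.standard_normal((n, p))
x_true = rng.standard_normal(p)
y = A @ x_true
corrupt = rng.choice(n, size=n // 10, replace=False)
y[corrupt] += 20.0 * rng.standard_normal(corrupt.size)     # sparse big noise

omega = rng.choice(n, size=k, replace=False)               # random sub-selection
x_hat, e_hat = robust_sub_rep(A, y, omega)
x_plain = np.linalg.lstsq(A[omega], y[omega], rcond=None)[0]  # non-robust baseline
print(np.linalg.norm(x_plain - x_true), np.linalg.norm(x_hat - x_true))
```

The sparse error term absorbs the corrupted entries, so the robust coefficients stay close to the truth while the plain sub-selected least squares is pulled far off by the outliers.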
With Lemma III.2, it is easy to derive that the sub-representation coefficient x̂ will be very close to the original solution x. Lemma III.2 thus provides a theoretical guarantee for the quality of the solution of Eq. (7).

C. Sub-selection and Transformation Estimation

Sub-selection can also be used to accelerate transformation estimation, which can be formulated as:

  τ = arg min_τ ||y₁ − y₂∘τ||_2^2,   (13)

where y₁, y₂ are two images without correct alignment and τ is the unknown parameter of the transformation. For many kinds of transformations, such as affine, similarity, homography and translation, this formulation can be solved by an iterative approach [12]. To approximate Eq. (13), a Taylor expansion is usually applied. The problem becomes minimizing:

  min_{Δτ} F₅(Δτ) = ||y₁ − y₂∘τ_c − J_τ Δτ||_2^2,   (14)

where τ_c is the current estimate of the transformation, Δτ is the parameter increment at each iteration, and J_τ is the Jacobian matrix. Eq. (14) is a least-squares problem and has a closed-form solution. The dimension of the transformation parameters is usually very low compared to the image dimension; hence, sub-selection is applicable here. Assume Ω is a sub-selection matrix (choosing m of n rows), y_{Ω,1} = Ωy₁, y_{Ω,2} = Ωy₂ and

4 J τ,ω = Jacobianˆτ (y Ω,2 ). The result objective function is as follow: ˆτ G 5( ˆτ) = y Ω, y Ω,2 ˆτ c Jˆτ,Ω ˆτ 2 2. (5) Then, we have: Lea III.3. : Let δ >. Then with probability at least 4δ, F 5 ( τ) in Eq. (4) and G 5 ( ˆτ) in Eq. (5) satisfy: n ( α 5)F 5 ( τ) G 5 ( ˆτ) n ( + α 5)F 5 ( τ) where α 5 is a sall positive constant for given proble. Lea III.3 can be proved using Theore II.. With this Lea, ˆτ should be very close to τ. This property allows us to address the unresolved proble of solving transforation estiation in the sae low diensional space of solving the representation coefficients. Lea III.3 provides theoretical guarantee for quality of solution of Eq. (5). D. Transforation Invariant Collaborative Sub- Representations (TCSR) Now, we coplete our discussion by cobining techniques we discuss above and reach our final proposed ethod. The basic idea is transfor Eq. (3) to low diensional space via sub-selection. There resulting forula is as follow: ˆx,e Ω,ˆτ G 2(ˆx, e Ω, ˆτ) = y Ω ˆτ A Ωˆx e Ω λ n ˆx γ e Ω, (6) where ˆx, ˆτ are the representation paraeters and transforation paraeters respectively. Then we have: Lea III.4. : Let δ > Then with probability at least 6δ, F 2 (x, e, τ) in Eq. (3) and G 2 (ˆx, e Ω, ˆτ) in Eq. (6) satisfy: n ( α 7)F 2 (x) G 2 (ˆx) n ( + α 7)F 2 (x) where α 7 is a sall positive constant for given proble. This Lea can be proved using Lea III.2, Lea III.3 and Theore II.. Lea III.4 provides theoretical guarantee for quality of solution of Eq. (6). Lea III.4 is useful to prove the bound of sub-selection version of Eq. (4): { ˆβ, } e Ω, ˆτ} = arg { y Ω ˆτ U Ω ˆβ eω γ e Ω ˆβ,e Ω,ˆτ (7) where U Ω is the singular vectors of sub-selection dictionary Ω A = A Ω = Ω USV T = U Ω SV T and other ters have the sae eaning as in the above discussion. The nuber of rows of Eq. (6) and Eq. (7) are uch saller than that of Eq. (3) and Eq. (4), which can effectively reduce the coputational coplexity. 
An object classification algorithm based on TCSR is summarized in Algorithm 1.

Algorithm 1: Transformation-invariant Collaborative Sub-representation (TCSR)
Input: training data matrix A, query image y, and initial transformation τ of y.
1: Generate l random selection operators Ω₁, ..., Ω_l
2: for q = 1, ..., l do
3:   Compute A_Ω = Ω_q A
4:   Compute y_Ω = Ω_q y
5:   Set Û as the first η column vectors of U_Ω, where A_Ω = U_Ω S V^T
6:   Solve Eq. (16) to get x̂, e_Ω, τ̂_q
7:   Compute the residual r_i^q = ||y_Ω∘τ̂_q − A_Ω x̂_i|| for each class i
8: end for
9: Compute r_i = E_q[r_i^q]
10: Compute identity(y) = arg min_i r_i
11: Compute transform(y) = E_q[τ̂_q]
Output: identity(y) and transform(y)

IV. EXPERIMENTS

In this section, we conduct extensive experiments to validate the acceleration achieved by the proposed sub-representation based methods over transformation estimation, collaborative representation, and transformation-invariant collaborative representation, respectively. For fair comparison, we downloaded the code of these algorithms from their websites and follow their default parameter settings. Three databases are used for training or testing: Multi-PIE [13], Extended Yale B (EYB) [14] and AR [15]. All experiments are conducted on a desktop computer with an Intel Core i7 3.4 GHz CPU; the Matlab version is 2012a.

A. Transformation Estimation

First, we test our sub-selection for transformation estimation algorithm (STE, i.e., Eq. (15)) on the public CMU Multi-PIE database [13]. The images of the subjects from Session 2 are chosen and all images are resized. The areas of human faces are used as the region of interest (ROI), and an artificial transformation in the x and y directions is applied to the ROI. The reference face area is resized to 6 2. We compare STE with the hierarchical model-based motion estimation (HMME, i.e., Eq. (14)) [12] algorithm. Artificial translations in both the x and y directions are added to the position of the ROI.
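The STE step can be illustrated on a 1-D toy signal: linearize the warp as in Eq. (14), then solve each Gauss–Newton update using only a random fifth of the samples. This is a schematic sketch (1-D translation only, synthetic signal), not the paper's 2-D implementation:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 2000
t = np.linspace(0.0, 8.0 * np.pi, n)
y1 = np.sin(t) + 0.3 * np.sin(3.0 * t)      # reference signal
shift_true = 5.0                            # translation, in samples
y2 = np.roll(y1, int(shift_true))           # shifted observation

omega = rng.choice(n, size=n // 5, replace=False)   # keep 1/5 of the samples
idx = np.arange(n, dtype=float)

tau = 0.0
for _ in range(30):
    y2w = np.interp(idx + tau, idx, y2)     # warp y2 by the current estimate
    J = np.gradient(y2w)                    # Jacobian of the warp w.r.t. tau
    r = (y1 - y2w)[omega]                   # residual on the sub-selected set
    dtau = (J[omega] @ r) / (J[omega] @ J[omega])   # closed-form LS update
    tau += dtau

print(tau)   # close to shift_true
```

Because the update involves only the sampled rows, each iteration costs a fifth of the full Gauss–Newton step, which is exactly the source of the speed-up measured in this section.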
Suppose the ground-truth rectangle position is a 4-D vector R₁ consisting of the x, y coordinates, width and height of the rectangle, while R₂ is the corresponding vector for the result rectangle. The accuracy of the estimation is calculated as: acc = 1 − ||R₁ − R₂||₂ / ||R₂||₂. The translation is set to 1, 2, 3 and 4 pixels, respectively. We set the sample rate to 1/5 (i.e., |Ω| = n/5). The results are shown in Table I. In the table, the proposed algorithm is 2 to 3 times faster than the original HMME algorithm, while the loss of estimation accuracy is no more than 0.1%. These experiments indicate that the sub-selection technique effectively reduces the computational complexity while preserving accuracy.

B. Face Recognition

We use the proposed TCSR (i.e., Algorithm 1) for face recognition to validate its benefits. We compare the proposed

TABLE I: Comparison of transformation estimation. "Acc." stands for accuracy and "Time" stands for the average execution time per image (time values not recovered).

  Translation (px):   1       2      3       4
  HMME [12], Acc.:    99.6%   99.%   94.9%   74.3%
  Proposed, Acc.:     99.5%   99.%   95.5%   75.3%

TCSR with the original collaborative representation classification (CRC) [4] algorithm on the Multi-PIE [13], Extended Yale B (EYB) [14] and AR [15] databases. For fair comparison, we follow the experimental setting in [4]. The initial positions of all the testing and training images are automatically detected by Viola and Jones's face detector [16]. For the Multi-PIE setting, the first 100 subjects in Session 1 with illuminations {0, 1, 7, 13, 14, 16, 18} are used as the training set, while the first 100 subjects in Session 3 with illuminations {3, 6, 10, 19} are used for testing. The face areas in the training images are all resized to 6 2, and the sample rate is 1/5. For the EYB setting, 2 subjects are selected; for each subject, 32 randomly selected frontal images are used for training, with 29 of the remaining images for testing. The face areas are resized, and the sample rate is 1/10. For the AR setting, 100 subjects with 7 images per subject are selected as the training set, and 6 images per subject are used for testing. There are 50 male faces and 50 female faces with various facial expressions and illuminations. The face areas are resized, and the sample rate is 1/8.

              MPIE    EYB     AR
  CRC [4]     88.7%   99.4%   86.%
  Proposed    %       99.%    86.6%

TABLE II: Recognition rates and execution time.

The experimental results are tabulated in Table II. The recognition rate of the proposed TCSR algorithm is almost the same as that of the original CRC [4] algorithm, with a difference of less than 0.5%, while its speed is 3 to 10 times faster than the original version, depending on the sample rate. The variation of illuminations between the training set and the testing set also affects the results.
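The sample-rate behaviour observed here is what the subspace sampling theory of Section II-A predicts: on a random index set Ω, the projection residual ||v_Ω − P_{S_Ω} v_Ω||² concentrates around (|Ω|/n)·||v − P_S v||². A quick numerical check on synthetic data (sampling without replacement for simplicity, whereas the theorem is stated with replacement):

```python
import numpy as np

rng = np.random.default_rng(3)
n, r, k = 5000, 5, 400          # ambient dim, subspace rank, |Omega|

U = np.linalg.qr(rng.standard_normal((n, r)))[0]   # orthonormal basis of S
x = U @ rng.standard_normal(r)                     # component inside S
y = rng.standard_normal(n)
y -= U @ (U.T @ y)                                 # residual orthogonal to S
v = x + 0.1 * y

full = np.sum((v - U @ (U.T @ v)) ** 2)            # ||v - P_S v||^2

omega = rng.choice(n, size=k, replace=False)
coef = np.linalg.lstsq(U[omega], v[omega], rcond=None)[0]
sub = np.sum((v[omega] - U[omega] @ coef) ** 2)    # ||v_O - P_{S_O} v_O||^2

print(sub / ((k / n) * full))   # ratio near 1
```

When the training images span enough of the illumination variation, the query's residual energy is incoherent and this concentration is tight, which is why a lower sample rate already suffices in that regime.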
When the variation within the training set is sufficient to represent the query images, the sample rate can be lower; otherwise more data are needed. All these results validate that the proposed collaborative sub-representation outperforms collaborative representation in speed while achieving comparable accuracy.

C. Transformation-invariant Face Recognition

In this section, we conduct experiments to validate the benefit of TCSR under misalignment. First, we compare the proposed algorithm with the state-of-the-art methods of [5], RASR [8] and TSR [7] on the Multi-PIE [13] database. The first 100 subjects in Session 1 with illuminations {0, 1, 7, 13, 14, 16, 18} are used as the training set, while the first 100 subjects in Session 2 are used for testing. An artificial transformation of 5 pixels translation in both the x and y directions is added to the test images. The sample rate is set to 1/5. The recognition rates of RASR, TSR, the method of [5] and the proposed algorithm are 9.8%, 89.%, 95.9% and 95.9%, respectively, while the average execution times are 95.9, 5.5, 4.9 and 1.4, respectively. RASR tries to fit every identity one by one, which makes it very slow on a big training set. Although TSR is faster, it is less accurate. The method of [5] is significantly faster than RASR and TSR and also more accurate.

Fig. 2: Recognition rates and speed with translation. Top row: (a) with only x-direction translation; (b) with only y-direction translation. Bottom row: (c) average time with only x-direction translation; (d) average time with only y-direction translation.

Fig. 3: Recognition rates and speed with scale and rotation. Top row: (a) with only in-plane rotation; (b) with only scale variation. Bottom row: (c) average time with only in-plane rotation; (d) average time with only scale variation.

In the following

experiments we shall only compare our algorithm with [5]. Also note that in this experiment, our proposed algorithm is nearly 3 times faster than [5] while maintaining almost the same accuracy. Next, we conduct experiments on the Multi-PIE database with various kinds of transformations. The experimental setting is the same as in the previous experiment. The artificial transformations include x-direction translation, y-direction translation, in-plane rotation and scaling. These experiments compare the performance of [5] and the proposed algorithm, using a sample rate of 1/5. The experimental results are shown in Figs. 2 and 3. The proposed algorithm is 3 to 4 times faster than [5], while the difference in accuracy is less than 1%. Due to the redundancy of the data, the sub-selection method preserves the accuracy very well. All these experiments validate that the proposed TCSR can handle various misalignments at a much lower computational cost than existing methods.

D. Recognition Despite Random Block Occlusion

Fig. 4: (a) Recognition rate (accuracy) vs. occlusion percentage; (b) execution time vs. occlusion percentage.

In this section, we further validate the robustness of TCSR by comparing our method with [5] on images with random block occlusions. This kind of experiment was conducted for [5] and RASR [8], so we follow the same experimental setting and use the same dataset as in the previous experiment. The sample rate is 1/3. The training and testing sets are from the Multi-PIE database: the first 100 subjects in Session 1 are used for training, and the testing set consists of subjects from Session 2. Various levels of block occlusion are added to the testing images. The results are shown in Fig. 4. TCSR has almost the same recognition rates as [5] under various levels of block occlusion, while being 2 to 3 times faster. These experiments validate the benefit of the proposed TCSR in handling occlusions.

V.
CONCLUSION

This paper proposed a novel sub-representation method for image representation. We theoretically proved its benefit over previous methods in handling sparse big noise. Combining it with existing techniques, we proposed a transformation-invariant collaborative sub-representation, which can efficiently handle misalignment, occlusion and big noise in practical applications. The benefits of the proposed methods were not only theoretically proved but also empirically validated by extensive experimental results on practical applications. In the future, we would like to extend our approach in several directions. First, it is worth investigating regularizations beyond simple sparsity, such as group sparsity [17]. Second, we would like to apply our approach to more applications, such as the MR image reconstruction problem [18], [19].

REFERENCES

[1] L. Balzano, B. Recht, and R. Nowak, "High-dimensional matched subspace detection when data are missing," in Proceedings of the IEEE International Symposium on Information Theory, 2010.
[2] J. Wright, A. Y. Yang, A. Ganesh, S. S. Sastry, and Y. Ma, "Robust face recognition via sparse representation," PAMI, vol. 31, no. 2, pp. 210–227, 2009.
[3] R. Rigamonti, M. A. Brown, and V. Lepetit, "Are sparse representations really relevant for image classification?" in CVPR 2011.
[4] L. Zhang, M. Yang, and X. Feng, "Sparse representation or collaborative representation: Which helps face recognition?" in ICCV 2011.
[5] M. Yang, L. Zhang, and D. Zhang, "Efficient misalignment-robust representation for real-time face recognition," in ECCV 2012.
[6] M. Yang, L. Zhang, D. Zhang, and S. Wang, "Relaxed collaborative representation for pattern classification," in CVPR 2012.
[7] J. Huang, X. Huang, and D. Metaxas, "Simultaneous image transformation and sparse representation recovery," in CVPR 2008, pp. 1–8.
[8] A. Wagner, J. Wright, A. Ganesh, Z. Zhou, H. Mobahi, and Y.
Ma, "Toward a practical face recognition system: Robust alignment and illumination by sparse representation," PAMI, vol. 34, no. 2, pp. 372–386, 2012.
[9] Y. Li, C. Chen, W. Liu, and J. Huang, "Sub-selective quantization for large-scale image search," in Twenty-Eighth AAAI Conference on Artificial Intelligence (AAAI), 2014.
[10] E. Candes and B. Recht, "Exact matrix completion via convex optimization," Foundations of Computational Mathematics, vol. 9, no. 6, pp. 717–772, 2009.
[11] C. McDiarmid, "On the method of bounded differences," Surveys in Combinatorics, vol. 141, pp. 148–188, 1989.
[12] J. R. Bergen, P. Anandan, K. J. Hanna, and R. Hingorani, "Hierarchical model-based motion estimation," in ECCV 1992, Springer.
[13] R. Gross, I. Matthews, J. Cohn, T. Kanade, and S. Baker, "Multi-PIE," Image and Vision Computing, vol. 28, no. 5, pp. 807–813, 2010.
[14] A. Georghiades, P. Belhumeur, and D. Kriegman, "From few to many: Illumination cone models for face recognition under variable lighting and pose," PAMI, vol. 23, no. 6, pp. 643–660, 2001.
[15] A. Martinez and R. Benavente, "The AR face database," Technical Report 24, 1998.
[16] P. Viola and M. J. Jones, "Robust real-time face detection," IJCV, vol. 57, no. 2, pp. 137–154, 2004.
[17] J. Huang, X. Huang, and D. Metaxas, "Learning with dynamic group sparsity," in ICCV 2009.
[18] J. Huang, S. Zhang, H. Li, and D. Metaxas, "Composite splitting algorithms for convex optimization," Computer Vision and Image Understanding, vol. 115, no. 12, 2011.
[19] J. Huang, S. Zhang, and D. Metaxas, "Efficient MR image reconstruction for compressed MR imaging," Medical Image Analysis, vol. 15, no. 5, pp. 670–679, 2011.


More information

Model Fitting. CURM Background Material, Fall 2014 Dr. Doreen De Leon

Model Fitting. CURM Background Material, Fall 2014 Dr. Doreen De Leon Model Fitting CURM Background Material, Fall 014 Dr. Doreen De Leon 1 Introduction Given a set of data points, we often want to fit a selected odel or type to the data (e.g., we suspect an exponential

More information

Intelligent Systems: Reasoning and Recognition. Perceptrons and Support Vector Machines

Intelligent Systems: Reasoning and Recognition. Perceptrons and Support Vector Machines Intelligent Systes: Reasoning and Recognition Jaes L. Crowley osig 1 Winter Seester 2018 Lesson 6 27 February 2018 Outline Perceptrons and Support Vector achines Notation...2 Linear odels...3 Lines, Planes

More information

A note on the multiplication of sparse matrices

A note on the multiplication of sparse matrices Cent. Eur. J. Cop. Sci. 41) 2014 1-11 DOI: 10.2478/s13537-014-0201-x Central European Journal of Coputer Science A note on the ultiplication of sparse atrices Research Article Keivan Borna 12, Sohrab Aboozarkhani

More information

An improved self-adaptive harmony search algorithm for joint replenishment problems

An improved self-adaptive harmony search algorithm for joint replenishment problems An iproved self-adaptive harony search algorith for joint replenishent probles Lin Wang School of Manageent, Huazhong University of Science & Technology zhoulearner@gail.co Xiaojian Zhou School of Manageent,

More information

On the Use of A Priori Information for Sparse Signal Approximations

On the Use of A Priori Information for Sparse Signal Approximations ITS TECHNICAL REPORT NO. 3/4 On the Use of A Priori Inforation for Sparse Signal Approxiations Oscar Divorra Escoda, Lorenzo Granai and Pierre Vandergheynst Signal Processing Institute ITS) Ecole Polytechnique

More information

arxiv: v1 [math.na] 10 Oct 2016

arxiv: v1 [math.na] 10 Oct 2016 GREEDY GAUSS-NEWTON ALGORITHM FOR FINDING SPARSE SOLUTIONS TO NONLINEAR UNDERDETERMINED SYSTEMS OF EQUATIONS MÅRTEN GULLIKSSON AND ANNA OLEYNIK arxiv:6.395v [ath.na] Oct 26 Abstract. We consider the proble

More information

Weighted- 1 minimization with multiple weighting sets

Weighted- 1 minimization with multiple weighting sets Weighted- 1 iniization with ultiple weighting sets Hassan Mansour a,b and Özgür Yılaza a Matheatics Departent, University of British Colubia, Vancouver - BC, Canada; b Coputer Science Departent, University

More information

Compressive Distilled Sensing: Sparse Recovery Using Adaptivity in Compressive Measurements

Compressive Distilled Sensing: Sparse Recovery Using Adaptivity in Compressive Measurements 1 Copressive Distilled Sensing: Sparse Recovery Using Adaptivity in Copressive Measureents Jarvis D. Haupt 1 Richard G. Baraniuk 1 Rui M. Castro 2 and Robert D. Nowak 3 1 Dept. of Electrical and Coputer

More information

Data-Driven Imaging in Anisotropic Media

Data-Driven Imaging in Anisotropic Media 18 th World Conference on Non destructive Testing, 16- April 1, Durban, South Africa Data-Driven Iaging in Anisotropic Media Arno VOLKER 1 and Alan HUNTER 1 TNO Stieltjesweg 1, 6 AD, Delft, The Netherlands

More information

Lecture 21. Interior Point Methods Setup and Algorithm

Lecture 21. Interior Point Methods Setup and Algorithm Lecture 21 Interior Point Methods In 1984, Kararkar introduced a new weakly polynoial tie algorith for solving LPs [Kar84a], [Kar84b]. His algorith was theoretically faster than the ellipsoid ethod and

More information

Course Notes for EE227C (Spring 2018): Convex Optimization and Approximation

Course Notes for EE227C (Spring 2018): Convex Optimization and Approximation Course Notes for EE7C (Spring 018: Convex Optiization and Approxiation Instructor: Moritz Hardt Eail: hardt+ee7c@berkeley.edu Graduate Instructor: Max Sichowitz Eail: sichow+ee7c@berkeley.edu October 15,

More information

Department of Electronic and Optical Engineering, Ordnance Engineering College, Shijiazhuang, , China

Department of Electronic and Optical Engineering, Ordnance Engineering College, Shijiazhuang, , China 6th International Conference on Machinery, Materials, Environent, Biotechnology and Coputer (MMEBC 06) Solving Multi-Sensor Multi-Target Assignent Proble Based on Copositive Cobat Efficiency and QPSO Algorith

More information

On the theoretical analysis of cross validation in compressive sensing

On the theoretical analysis of cross validation in compressive sensing MITSUBISHI ELECTRIC RESEARCH LABORATORIES http://www.erl.co On the theoretical analysis of cross validation in copressive sensing Zhang, J.; Chen, L.; Boufounos, P.T.; Gu, Y. TR2014-025 May 2014 Abstract

More information

e-companion ONLY AVAILABLE IN ELECTRONIC FORM

e-companion ONLY AVAILABLE IN ELECTRONIC FORM OPERATIONS RESEARCH doi 10.1287/opre.1070.0427ec pp. ec1 ec5 e-copanion ONLY AVAILABLE IN ELECTRONIC FORM infors 07 INFORMS Electronic Copanion A Learning Approach for Interactive Marketing to a Custoer

More information

The Hilbert Schmidt version of the commutator theorem for zero trace matrices

The Hilbert Schmidt version of the commutator theorem for zero trace matrices The Hilbert Schidt version of the coutator theore for zero trace atrices Oer Angel Gideon Schechtan March 205 Abstract Let A be a coplex atrix with zero trace. Then there are atrices B and C such that

More information

Supplementary to Learning Discriminative Bayesian Networks from High-dimensional Continuous Neuroimaging Data

Supplementary to Learning Discriminative Bayesian Networks from High-dimensional Continuous Neuroimaging Data Suppleentary to Learning Discriinative Bayesian Networks fro High-diensional Continuous Neuroiaging Data Luping Zhou, Lei Wang, Lingqiao Liu, Philip Ogunbona, and Dinggang Shen Proposition. Given a sparse

More information

Multi-view Discriminative Manifold Embedding for Pattern Classification

Multi-view Discriminative Manifold Embedding for Pattern Classification Multi-view Discriinative Manifold Ebedding for Pattern Classification X. Wang Departen of Inforation Zhenghzou 450053, China Y. Guo Departent of Digestive Zhengzhou 450053, China Z. Wang Henan University

More information

1 Generalization bounds based on Rademacher complexity

1 Generalization bounds based on Rademacher complexity COS 5: Theoretical Machine Learning Lecturer: Rob Schapire Lecture #0 Scribe: Suqi Liu March 07, 08 Last tie we started proving this very general result about how quickly the epirical average converges

More information

Lecture 9 November 23, 2015

Lecture 9 November 23, 2015 CSC244: Discrepancy Theory in Coputer Science Fall 25 Aleksandar Nikolov Lecture 9 Noveber 23, 25 Scribe: Nick Spooner Properties of γ 2 Recall that γ 2 (A) is defined for A R n as follows: γ 2 (A) = in{r(u)

More information

Lecture 20 November 7, 2013

Lecture 20 November 7, 2013 CS 229r: Algoriths for Big Data Fall 2013 Prof. Jelani Nelson Lecture 20 Noveber 7, 2013 Scribe: Yun Willia Yu 1 Introduction Today we re going to go through the analysis of atrix copletion. First though,

More information

This model assumes that the probability of a gap has size i is proportional to 1/i. i.e., i log m e. j=1. E[gap size] = i P r(i) = N f t.

This model assumes that the probability of a gap has size i is proportional to 1/i. i.e., i log m e. j=1. E[gap size] = i P r(i) = N f t. CS 493: Algoriths for Massive Data Sets Feb 2, 2002 Local Models, Bloo Filter Scribe: Qin Lv Local Models In global odels, every inverted file entry is copressed with the sae odel. This work wells when

More information

Distributed Subgradient Methods for Multi-agent Optimization

Distributed Subgradient Methods for Multi-agent Optimization 1 Distributed Subgradient Methods for Multi-agent Optiization Angelia Nedić and Asuan Ozdaglar October 29, 2007 Abstract We study a distributed coputation odel for optiizing a su of convex objective functions

More information

Fast Montgomery-like Square Root Computation over GF(2 m ) for All Trinomials

Fast Montgomery-like Square Root Computation over GF(2 m ) for All Trinomials Fast Montgoery-like Square Root Coputation over GF( ) for All Trinoials Yin Li a, Yu Zhang a, a Departent of Coputer Science and Technology, Xinyang Noral University, Henan, P.R.China Abstract This letter

More information

Nyström Method vs Random Fourier Features: A Theoretical and Empirical Comparison

Nyström Method vs Random Fourier Features: A Theoretical and Empirical Comparison yströ Method vs : A Theoretical and Epirical Coparison Tianbao Yang, Yu-Feng Li, Mehrdad Mahdavi, Rong Jin, Zhi-Hua Zhou Machine Learning Lab, GE Global Research, San Raon, CA 94583 Michigan State University,

More information

A Smoothed Boosting Algorithm Using Probabilistic Output Codes

A Smoothed Boosting Algorithm Using Probabilistic Output Codes A Soothed Boosting Algorith Using Probabilistic Output Codes Rong Jin rongjin@cse.su.edu Dept. of Coputer Science and Engineering, Michigan State University, MI 48824, USA Jian Zhang jian.zhang@cs.cu.edu

More information

Testing equality of variances for multiple univariate normal populations

Testing equality of variances for multiple univariate normal populations University of Wollongong Research Online Centre for Statistical & Survey Methodology Working Paper Series Faculty of Engineering and Inforation Sciences 0 esting equality of variances for ultiple univariate

More information

SPECTRUM sensing is a core concept of cognitive radio

SPECTRUM sensing is a core concept of cognitive radio World Acadey of Science, Engineering and Technology International Journal of Electronics and Counication Engineering Vol:6, o:2, 202 Efficient Detection Using Sequential Probability Ratio Test in Mobile

More information

Recovery of Sparsely Corrupted Signals

Recovery of Sparsely Corrupted Signals TO APPEAR IN IEEE TRANSACTIONS ON INFORMATION TEORY 1 Recovery of Sparsely Corrupted Signals Christoph Studer, Meber, IEEE, Patrick Kuppinger, Student Meber, IEEE, Graee Pope, Student Meber, IEEE, and

More information

3.3 Variational Characterization of Singular Values

3.3 Variational Characterization of Singular Values 3.3. Variational Characterization of Singular Values 61 3.3 Variational Characterization of Singular Values Since the singular values are square roots of the eigenvalues of the Heritian atrices A A and

More information

Introduction to Robotics (CS223A) (Winter 2006/2007) Homework #5 solutions

Introduction to Robotics (CS223A) (Winter 2006/2007) Homework #5 solutions Introduction to Robotics (CS3A) Handout (Winter 6/7) Hoework #5 solutions. (a) Derive a forula that transfors an inertia tensor given in soe frae {C} into a new frae {A}. The frae {A} can differ fro frae

More information

Computational and Statistical Learning Theory

Computational and Statistical Learning Theory Coputational and Statistical Learning Theory TTIC 31120 Prof. Nati Srebro Lecture 2: PAC Learning and VC Theory I Fro Adversarial Online to Statistical Three reasons to ove fro worst-case deterinistic

More information

COS 424: Interacting with Data. Written Exercises

COS 424: Interacting with Data. Written Exercises COS 424: Interacting with Data Hoework #4 Spring 2007 Regression Due: Wednesday, April 18 Written Exercises See the course website for iportant inforation about collaboration and late policies, as well

More information

An Algorithm for Quantization of Discrete Probability Distributions

An Algorithm for Quantization of Discrete Probability Distributions An Algorith for Quantization of Discrete Probability Distributions Yuriy A. Reznik Qualco Inc., San Diego, CA Eail: yreznik@ieee.org Abstract We study the proble of quantization of discrete probability

More information

Ensemble Based on Data Envelopment Analysis

Ensemble Based on Data Envelopment Analysis Enseble Based on Data Envelopent Analysis So Young Sohn & Hong Choi Departent of Coputer Science & Industrial Systes Engineering, Yonsei University, Seoul, Korea Tel) 82-2-223-404, Fax) 82-2- 364-7807

More information

Hybrid System Identification: An SDP Approach

Hybrid System Identification: An SDP Approach 49th IEEE Conference on Decision and Control Deceber 15-17, 2010 Hilton Atlanta Hotel, Atlanta, GA, USA Hybrid Syste Identification: An SDP Approach C Feng, C M Lagoa, N Ozay and M Sznaier Abstract The

More information

Randomized Accuracy-Aware Program Transformations For Efficient Approximate Computations

Randomized Accuracy-Aware Program Transformations For Efficient Approximate Computations Randoized Accuracy-Aware Progra Transforations For Efficient Approxiate Coputations Zeyuan Allen Zhu Sasa Misailovic Jonathan A. Kelner Martin Rinard MIT CSAIL zeyuan@csail.it.edu isailo@it.edu kelner@it.edu

More information

Fairness via priority scheduling

Fairness via priority scheduling Fairness via priority scheduling Veeraruna Kavitha, N Heachandra and Debayan Das IEOR, IIT Bobay, Mubai, 400076, India vavitha,nh,debayan}@iitbacin Abstract In the context of ulti-agent resource allocation

More information

Pattern Recognition and Machine Learning. Learning and Evaluation for Pattern Recognition

Pattern Recognition and Machine Learning. Learning and Evaluation for Pattern Recognition Pattern Recognition and Machine Learning Jaes L. Crowley ENSIMAG 3 - MMIS Fall Seester 2017 Lesson 1 4 October 2017 Outline Learning and Evaluation for Pattern Recognition Notation...2 1. The Pattern Recognition

More information

Extension of CSRSM for the Parametric Study of the Face Stability of Pressurized Tunnels

Extension of CSRSM for the Parametric Study of the Face Stability of Pressurized Tunnels Extension of CSRSM for the Paraetric Study of the Face Stability of Pressurized Tunnels Guilhe Mollon 1, Daniel Dias 2, and Abdul-Haid Soubra 3, M.ASCE 1 LGCIE, INSA Lyon, Université de Lyon, Doaine scientifique

More information

1 Bounding the Margin

1 Bounding the Margin COS 511: Theoretical Machine Learning Lecturer: Rob Schapire Lecture #12 Scribe: Jian Min Si March 14, 2013 1 Bounding the Margin We are continuing the proof of a bound on the generalization error of AdaBoost

More information

Hamming Compressed Sensing

Hamming Compressed Sensing Haing Copressed Sensing Tianyi Zhou, and Dacheng Tao, Meber, IEEE Abstract arxiv:.73v2 [cs.it] Oct 2 Copressed sensing CS and -bit CS cannot directly recover quantized signals and require tie consuing

More information

Computational and Statistical Learning Theory

Computational and Statistical Learning Theory Coputational and Statistical Learning Theory Proble sets 5 and 6 Due: Noveber th Please send your solutions to learning-subissions@ttic.edu Notations/Definitions Recall the definition of saple based Radeacher

More information

Non-Parametric Non-Line-of-Sight Identification 1

Non-Parametric Non-Line-of-Sight Identification 1 Non-Paraetric Non-Line-of-Sight Identification Sinan Gezici, Hisashi Kobayashi and H. Vincent Poor Departent of Electrical Engineering School of Engineering and Applied Science Princeton University, Princeton,

More information

Research Article On the Isolated Vertices and Connectivity in Random Intersection Graphs

Research Article On the Isolated Vertices and Connectivity in Random Intersection Graphs International Cobinatorics Volue 2011, Article ID 872703, 9 pages doi:10.1155/2011/872703 Research Article On the Isolated Vertices and Connectivity in Rando Intersection Graphs Yilun Shang Institute for

More information

Kernel Methods and Support Vector Machines

Kernel Methods and Support Vector Machines Intelligent Systes: Reasoning and Recognition Jaes L. Crowley ENSIAG 2 / osig 1 Second Seester 2012/2013 Lesson 20 2 ay 2013 Kernel ethods and Support Vector achines Contents Kernel Functions...2 Quadratic

More information

The Weierstrass Approximation Theorem

The Weierstrass Approximation Theorem 36 The Weierstrass Approxiation Theore Recall that the fundaental idea underlying the construction of the real nubers is approxiation by the sipler rational nubers. Firstly, nubers are often deterined

More information

Stochastic Subgradient Methods

Stochastic Subgradient Methods Stochastic Subgradient Methods Lingjie Weng Yutian Chen Bren School of Inforation and Coputer Science University of California, Irvine {wengl, yutianc}@ics.uci.edu Abstract Stochastic subgradient ethods

More information

Pattern Recognition and Machine Learning. Artificial Neural networks

Pattern Recognition and Machine Learning. Artificial Neural networks Pattern Recognition and Machine Learning Jaes L. Crowley ENSIMAG 3 - MMIS Fall Seester 2016 Lessons 7 14 Dec 2016 Outline Artificial Neural networks Notation...2 1. Introduction...3... 3 The Artificial

More information

RESTARTED FULL ORTHOGONALIZATION METHOD FOR SHIFTED LINEAR SYSTEMS

RESTARTED FULL ORTHOGONALIZATION METHOD FOR SHIFTED LINEAR SYSTEMS BIT Nuerical Matheatics 43: 459 466, 2003. 2003 Kluwer Acadeic Publishers. Printed in The Netherlands 459 RESTARTED FULL ORTHOGONALIZATION METHOD FOR SHIFTED LINEAR SYSTEMS V. SIMONCINI Dipartiento di

More information

Multi-Dimensional Hegselmann-Krause Dynamics

Multi-Dimensional Hegselmann-Krause Dynamics Multi-Diensional Hegselann-Krause Dynaics A. Nedić Industrial and Enterprise Systes Engineering Dept. University of Illinois Urbana, IL 680 angelia@illinois.edu B. Touri Coordinated Science Laboratory

More information

A Simplified Analytical Approach for Efficiency Evaluation of the Weaving Machines with Automatic Filling Repair

A Simplified Analytical Approach for Efficiency Evaluation of the Weaving Machines with Automatic Filling Repair Proceedings of the 6th SEAS International Conference on Siulation, Modelling and Optiization, Lisbon, Portugal, Septeber -4, 006 0 A Siplified Analytical Approach for Efficiency Evaluation of the eaving

More information

arxiv: v3 [quant-ph] 18 Oct 2017

arxiv: v3 [quant-ph] 18 Oct 2017 Self-guaranteed easureent-based quantu coputation Masahito Hayashi 1,, and Michal Hajdušek, 1 Graduate School of Matheatics, Nagoya University, Furocho, Chikusa-ku, Nagoya 464-860, Japan Centre for Quantu

More information

Asynchronous Gossip Algorithms for Stochastic Optimization

Asynchronous Gossip Algorithms for Stochastic Optimization Asynchronous Gossip Algoriths for Stochastic Optiization S. Sundhar Ra ECE Dept. University of Illinois Urbana, IL 680 ssrini@illinois.edu A. Nedić IESE Dept. University of Illinois Urbana, IL 680 angelia@illinois.edu

More information

Proc. of the IEEE/OES Seventh Working Conference on Current Measurement Technology UNCERTAINTIES IN SEASONDE CURRENT VELOCITIES

Proc. of the IEEE/OES Seventh Working Conference on Current Measurement Technology UNCERTAINTIES IN SEASONDE CURRENT VELOCITIES Proc. of the IEEE/OES Seventh Working Conference on Current Measureent Technology UNCERTAINTIES IN SEASONDE CURRENT VELOCITIES Belinda Lipa Codar Ocean Sensors 15 La Sandra Way, Portola Valley, CA 98 blipa@pogo.co

More information

A Low-Complexity Congestion Control and Scheduling Algorithm for Multihop Wireless Networks with Order-Optimal Per-Flow Delay

A Low-Complexity Congestion Control and Scheduling Algorithm for Multihop Wireless Networks with Order-Optimal Per-Flow Delay A Low-Coplexity Congestion Control and Scheduling Algorith for Multihop Wireless Networks with Order-Optial Per-Flow Delay Po-Kai Huang, Xiaojun Lin, and Chih-Chun Wang School of Electrical and Coputer

More information

Using a De-Convolution Window for Operating Modal Analysis

Using a De-Convolution Window for Operating Modal Analysis Using a De-Convolution Window for Operating Modal Analysis Brian Schwarz Vibrant Technology, Inc. Scotts Valley, CA Mark Richardson Vibrant Technology, Inc. Scotts Valley, CA Abstract Operating Modal Analysis

More information

OBJECTIVES INTRODUCTION

OBJECTIVES INTRODUCTION M7 Chapter 3 Section 1 OBJECTIVES Suarize data using easures of central tendency, such as the ean, edian, ode, and idrange. Describe data using the easures of variation, such as the range, variance, and

More information

arxiv: v1 [cs.lg] 8 Jan 2019

arxiv: v1 [cs.lg] 8 Jan 2019 Data Masking with Privacy Guarantees Anh T. Pha Oregon State University phatheanhbka@gail.co Shalini Ghosh Sasung Research shalini.ghosh@gail.co Vinod Yegneswaran SRI international vinod@csl.sri.co arxiv:90.085v

More information

EMPIRICAL COMPLEXITY ANALYSIS OF A MILP-APPROACH FOR OPTIMIZATION OF HYBRID SYSTEMS

EMPIRICAL COMPLEXITY ANALYSIS OF A MILP-APPROACH FOR OPTIMIZATION OF HYBRID SYSTEMS EMPIRICAL COMPLEXITY ANALYSIS OF A MILP-APPROACH FOR OPTIMIZATION OF HYBRID SYSTEMS Jochen Till, Sebastian Engell, Sebastian Panek, and Olaf Stursberg Process Control Lab (CT-AST), University of Dortund,

More information

RANDOM GRADIENT EXTRAPOLATION FOR DISTRIBUTED AND STOCHASTIC OPTIMIZATION

RANDOM GRADIENT EXTRAPOLATION FOR DISTRIBUTED AND STOCHASTIC OPTIMIZATION RANDOM GRADIENT EXTRAPOLATION FOR DISTRIBUTED AND STOCHASTIC OPTIMIZATION GUANGHUI LAN AND YI ZHOU Abstract. In this paper, we consider a class of finite-su convex optiization probles defined over a distributed

More information

A remark on a success rate model for DPA and CPA

A remark on a success rate model for DPA and CPA A reark on a success rate odel for DPA and CPA A. Wieers, BSI Version 0.5 andreas.wieers@bsi.bund.de Septeber 5, 2018 Abstract The success rate is the ost coon evaluation etric for easuring the perforance

More information

Chapter 6 1-D Continuous Groups

Chapter 6 1-D Continuous Groups Chapter 6 1-D Continuous Groups Continuous groups consist of group eleents labelled by one or ore continuous variables, say a 1, a 2,, a r, where each variable has a well- defined range. This chapter explores:

More information

Probability Distributions

Probability Distributions Probability Distributions In Chapter, we ephasized the central role played by probability theory in the solution of pattern recognition probles. We turn now to an exploration of soe particular exaples

More information

A Nonlinear Sparsity Promoting Formulation and Algorithm for Full Waveform Inversion

A Nonlinear Sparsity Promoting Formulation and Algorithm for Full Waveform Inversion A Nonlinear Sparsity Prooting Forulation and Algorith for Full Wavefor Inversion Aleksandr Aravkin, Tristan van Leeuwen, Jaes V. Burke 2 and Felix Herrann Dept. of Earth and Ocean sciences University of

More information

Polygonal Designs: Existence and Construction

Polygonal Designs: Existence and Construction Polygonal Designs: Existence and Construction John Hegean Departent of Matheatics, Stanford University, Stanford, CA 9405 Jeff Langford Departent of Matheatics, Drake University, Des Moines, IA 5011 G

More information

Pattern Recognition and Machine Learning. Artificial Neural networks

Pattern Recognition and Machine Learning. Artificial Neural networks Pattern Recognition and Machine Learning Jaes L. Crowley ENSIMAG 3 - MMIS Fall Seester 2017 Lessons 7 20 Dec 2017 Outline Artificial Neural networks Notation...2 Introduction...3 Key Equations... 3 Artificial

More information

When Short Runs Beat Long Runs

When Short Runs Beat Long Runs When Short Runs Beat Long Runs Sean Luke George Mason University http://www.cs.gu.edu/ sean/ Abstract What will yield the best results: doing one run n generations long or doing runs n/ generations long

More information

A Simple Homotopy Algorithm for Compressive Sensing

A Simple Homotopy Algorithm for Compressive Sensing A Siple Hootopy Algorith for Copressive Sensing Lijun Zhang Tianbao Yang Rong Jin Zhi-Hua Zhou National Key Laboratory for Novel Software Technology, Nanjing University, Nanjing, China Departent of Coputer

More information

Chaotic Coupled Map Lattices

Chaotic Coupled Map Lattices Chaotic Coupled Map Lattices Author: Dustin Keys Advisors: Dr. Robert Indik, Dr. Kevin Lin 1 Introduction When a syste of chaotic aps is coupled in a way that allows the to share inforation about each

More information

arxiv: v1 [cs.ds] 3 Feb 2014

arxiv: v1 [cs.ds] 3 Feb 2014 arxiv:40.043v [cs.ds] 3 Feb 04 A Bound on the Expected Optiality of Rando Feasible Solutions to Cobinatorial Optiization Probles Evan A. Sultani The Johns Hopins University APL evan@sultani.co http://www.sultani.co/

More information

Supplementary Information for Design of Bending Multi-Layer Electroactive Polymer Actuators

Supplementary Information for Design of Bending Multi-Layer Electroactive Polymer Actuators Suppleentary Inforation for Design of Bending Multi-Layer Electroactive Polyer Actuators Bavani Balakrisnan, Alek Nacev, and Elisabeth Sela University of Maryland, College Park, Maryland 074 1 Analytical

More information

Inspection; structural health monitoring; reliability; Bayesian analysis; updating; decision analysis; value of information

Inspection; structural health monitoring; reliability; Bayesian analysis; updating; decision analysis; value of information Cite as: Straub D. (2014). Value of inforation analysis with structural reliability ethods. Structural Safety, 49: 75-86. Value of Inforation Analysis with Structural Reliability Methods Daniel Straub

More information

IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE 1

IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE 1 IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE Two-Diensional Multi-Label Active Learning with An Efficient Online Adaptation Model for Iage Classification Guo-Jun Qi, Xian-Sheng Hua, Meber,

More information

The degree of a typical vertex in generalized random intersection graph models

The degree of a typical vertex in generalized random intersection graph models Discrete Matheatics 306 006 15 165 www.elsevier.co/locate/disc The degree of a typical vertex in generalized rando intersection graph odels Jerzy Jaworski a, Michał Karoński a, Dudley Stark b a Departent

More information

New Slack-Monotonic Schedulability Analysis of Real-Time Tasks on Multiprocessors

New Slack-Monotonic Schedulability Analysis of Real-Time Tasks on Multiprocessors New Slack-Monotonic Schedulability Analysis of Real-Tie Tasks on Multiprocessors Risat Mahud Pathan and Jan Jonsson Chalers University of Technology SE-41 96, Göteborg, Sweden {risat, janjo}@chalers.se

More information

Face Image Modeling by Multilinear Subspace Analysis with Missing Values

Face Image Modeling by Multilinear Subspace Analysis with Missing Values IEEE TRANSACTIONS ON SYSTEMS, MAN, AND CYBERNETICS - PART B: CYBERNETICS 1 Face Iage Modeling by Multilinear Subspace Analysis with Missing Values Xin Geng, Kate Sith-Miles, Senior Meber, IEEE, Zhi-Hua

More information

Taming the Matthew Effect in Online Markets with Social Influence

Taming the Matthew Effect in Online Markets with Social Influence Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence AAAI-17 Taing the Matthew Effect in Online Markets with Social Influence Franco Berbeglia Carnegie Mellon University fberbegl@andrewcuedu

More information

Course Notes for EE227C (Spring 2018): Convex Optimization and Approximation

Course Notes for EE227C (Spring 2018): Convex Optimization and Approximation Course Notes for EE227C (Spring 2018): Convex Optiization and Approxiation Instructor: Moritz Hardt Eail: hardt+ee227c@berkeley.edu Graduate Instructor: Max Sichowitz Eail: sichow+ee227c@berkeley.edu October

More information

Support recovery in compressed sensing: An estimation theoretic approach

Support recovery in compressed sensing: An estimation theoretic approach Support recovery in copressed sensing: An estiation theoretic approach Ain Karbasi, Ali Horati, Soheil Mohajer, Martin Vetterli School of Coputer and Counication Sciences École Polytechnique Fédérale de

More information

Handwriting Detection Model Based on Four-Dimensional Vector Space Model

Handwriting Detection Model Based on Four-Dimensional Vector Space Model Journal of Matheatics Research; Vol. 10, No. 4; August 2018 ISSN 1916-9795 E-ISSN 1916-9809 Published by Canadian Center of Science and Education Handwriting Detection Model Based on Four-Diensional Vector

More information

REDUCTION OF FINITE ELEMENT MODELS BY PARAMETER IDENTIFICATION

REDUCTION OF FINITE ELEMENT MODELS BY PARAMETER IDENTIFICATION ISSN 139 14X INFORMATION TECHNOLOGY AND CONTROL, 008, Vol.37, No.3 REDUCTION OF FINITE ELEMENT MODELS BY PARAMETER IDENTIFICATION Riantas Barauskas, Vidantas Riavičius Departent of Syste Analysis, Kaunas

More information

ORIE 6340: Mathematics of Data Science

ORIE 6340: Mathematics of Data Science ORIE 6340: Matheatics of Data Science Daek Davis Contents 1 Estiation in High Diensions 1 1.1 Tools for understanding high-diensional sets................. 3 1.1.1 Concentration of volue in high-diensions...............

More information

An l 1 Regularized Method for Numerical Differentiation Using Empirical Eigenfunctions

An l 1 Regularized Method for Numerical Differentiation Using Empirical Eigenfunctions Journal of Matheatical Research with Applications Jul., 207, Vol. 37, No. 4, pp. 496 504 DOI:0.3770/j.issn:2095-265.207.04.0 Http://jre.dlut.edu.cn An l Regularized Method for Nuerical Differentiation

More information

Tracking using CONDENSATION: Conditional Density Propagation

Tracking using CONDENSATION: Conditional Density Propagation Tracking using CONDENSATION: Conditional Density Propagation Goal Model-based visual tracking in dense clutter at near video frae rates M. Isard and A. Blake, CONDENSATION Conditional density propagation

More information