RECONSTRUCTION OF NON-CARTESIAN DATA USING BURS/rBURS ALGORITHM
EE-591 MAGNETIC RESONANCE IMAGING TERM PROJECT

RECONSTRUCTION OF NON-CARTESIAN DATA USING BURS/rBURS ALGORITHM

Zheng Li
Department of Electrical Engineering
December 5
1. INTRODUCTION

There are many alternatives to 2DFT acquisition methods. These include spiral scans, radial scans, Lissajous trajectory scans and so on (Figure 1) [1]. Many of these have specific advantages over spin-warp, such as speed and SNR efficiency. The main disadvantage of these methods is the difficulty of reconstructing the resulting data sets.

There are several choices for reconstructing non-Cartesian data sets. The first approach is to collect the non-Cartesian data in such a way that a previously known reconstruction method can be applied. For example, Filtered Back Projection (FBP) can be applied to radial-scan data sets. While this solves the reconstruction problem, it usually requires compromises in data acquisition. Second, the non-Cartesian data can be demodulated point by point with the conjugate phase reconstruction, but this method is very slow. The most computationally efficient method of reconstruction is to resample the data onto a Cartesian grid, which enables the subsequent use of the inverse fast Fourier transform (IFFT), plus post-compensation if necessary.

Figure 1. Some alternative acquisition methods. (a) Constant angular rate spiral, which can use the projection reconstruction method. (b) Lissajous trajectory. (c) Spiral trajectory used in this project.

In MRI, the most widely used resampling algorithm is gridding. Gridding methods usually consist of four steps: 1) pre-compensation for varying sampling density; 2) convolution with a Kaiser-Bessel window onto a Cartesian grid; 3) IFFT; 4) post-compensation by dividing the image by the transform of the window. In this paper, the Block Uniform Re-Sampling (BURS) and regularized Block Uniform Re-Sampling (rBURS) algorithms are used to interpolate the non-Cartesian scan data. BURS and rBURS are both
optimal/suboptimal and computationally efficient. Compared to conventional gridding, neither pre- nor post-compensation is required, and the results have been shown to be of excellent accuracy.

2. THEORY

In this section, the theory behind the BURS algorithm is introduced first. Then a theoretical analysis of noise for BURS is addressed. Finally, one noise-reduction solution for BURS, namely regularized BURS (rBURS), is provided.

2.1 BLOCK UNIFORM RESAMPLING (BURS) ALGORITHM

The BURS algorithm can be summarized as follows:

1. Initialize an N by M matrix A⁺ with zeros (N and M represent the number of Cartesian grid points and the number of non-uniformly sampled data points, respectively).
2. For each Cartesian grid point k_i (i = 1, ..., N):
   2.a. Select the M_i non-uniformly sampled points in a δ neighborhood of k_i.
   2.b. Select the N_i Cartesian grid points in a Δ neighborhood of k_i.
   2.c. Form an M_i × N_i matrix A of the interpolation coefficients based on the sinc function.
   2.d. Compute A⁺_i, the truncated singular value decomposition (SVD) pseudo-inverse matrix of A.
   2.e. Transfer the row of A⁺_i corresponding to the point k_i to the i-th row of A⁺.
3. The uniform samples are calculated as x = A⁺ b, where b is a column vector containing the non-uniform data measurements.
4. Perform an inverse Fourier transform (IFT) on the resulting uniform samples.

Figure 2 illustrates how to select the M_i non-uniformly sampled points in a δ neighborhood of k_i and the N_i Cartesian grid points in a Δ neighborhood of k_i. The δ and Δ neighborhoods of k_i are illustrated as circular regions in Figure 2, but in the implementation other neighborhood shapes may be used. For example, a square Δ
neighborhood can be used in Cartesian coordinates for computational efficiency and easier implementation. Square and circular neighborhood shapes are tested in our simulations.

In the BURS algorithm, the selected values of Δ and δ dramatically affect the final results (as will be shown in the simulations). When M > N (overdetermined), the pseudo-inverse can be computed as:

    A⁺ = (Aᵀ A)⁻¹ Aᵀ    (1)

When M < N (underdetermined), the pseudo-inverse can be computed as:

    A⁺ = Aᵀ (A Aᵀ)⁻¹    (2)

The simulation results give some examples of how the reconstruction result varies with different combinations of N and M.

Figure 2. Illustration of the BURS algorithm. The δ and Δ neighborhoods of k_i are drawn as circular regions in this plot. The big dots represent the M_i non-uniformly sampled points in a δ neighborhood of k_i; the big cross signs represent the N_i Cartesian grid points in a Δ neighborhood of k_i.
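As a sketch of steps 2.c-2.e, the fragment below builds the sinc interpolation matrix A between the Cartesian neighbors and the non-uniform samples of one grid point, then extracts the weight row from the truncated-SVD pseudo-inverse (numpy's `pinv`). This is an illustrative Python transcription of the idea, not the project's MATLAB code; as in the appendix, point positions are encoded as complex numbers.

```python
import numpy as np

def sinc_matrix(cart, noncart):
    """A[m, j] = sinc(dx) * sinc(dy) between non-uniform sample m and grid point j."""
    d = noncart[:, None] - cart[None, :]
    return np.sinc(d.real) * np.sinc(d.imag)

def burs_weights(center, cart, noncart):
    """Interpolation coefficients for the Cartesian point `center`:
    the row of the truncated-SVD pseudo-inverse of A matching `center`."""
    A = sinc_matrix(cart, noncart)   # M x N
    Ap = np.linalg.pinv(A)           # N x M pseudo-inverse
    j = int(np.argmin(np.abs(cart - center)))
    return Ap[j, :]
```

If the non-uniform samples happen to fall exactly on the grid, A is the identity (sinc vanishes at nonzero integers) and the weight row simply picks out the matching sample.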
2.2 EFFECT OF NOISE

Several papers have reported that although the BURS algorithm is very accurate, it is also sensitive to noise. As a consequence, even in the presence of a low level of measurement noise, the resulting image is often highly contaminated with noise.

In the gridding process, each uniform output point at location k_i (i = 1, ..., N) is linearly interpolated using the M known non-uniform samples {k_m, m = 1, ..., M} which lie within a δ neighborhood of k_i:

    f(k_i) = Σ_{m=1..M} a_im f(k_m)    (3)

where f(k_m) is the non-uniform (non-Cartesian) input data, f(k_i) is the interpolated uniform (Cartesian) output at k_i, and the a_im are the interpolation coefficients; in the BURS algorithm these coefficients are derived by the pseudo-inverse. Assuming the noise is additive zero-mean white Gaussian noise with variance σ², it can be derived from the equation above that the noise of the interpolated data is additive Gaussian noise with zero mean and variance σ_i²:

    σ_i² = σ² Σ_{m=1..M} a_im²    (4)

Because the interpolation coefficients vary as k_i changes, the noise level is space dependent in the k-space domain, even if we assume the noise is i.i.d. in the original non-Cartesian k-space data. In Rosenfeld's paper [4], the k-space noise amplification is defined as:

    Ω_i = σ_i² / σ² = Σ_{m=1..M} a_im²    (5)

Rosenfeld [4] tested this noise effect of BURS using a four-interleaf spiral trajectory; Ω_i was calculated for each uniform point. We also ran the test on our spiral trajectory and obtained similar results, which are shown in Figure 3.
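Equation (4) can be checked numerically. The sketch below is purely illustrative (the coefficient vector and noise level are made up): it draws i.i.d. Gaussian noise, pushes it through a fixed set of interpolation coefficients, and compares the empirical variance with σ²·Ω from equations (4)-(5).

```python
import numpy as np

rng = np.random.default_rng(0)
a = np.array([0.8, -0.3, 0.5, 0.1])   # hypothetical interpolation coefficients a_im
sigma = 0.2                            # noise standard deviation in the raw samples

omega = float(np.sum(a ** 2))          # Eq. (5): k-space noise amplification
noise = rng.normal(0.0, sigma, size=(200_000, a.size))
interp_noise = noise @ a               # noise reaching one interpolated uniform sample

print(np.var(interp_noise), sigma ** 2 * omega)  # the two values should agree
```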
Figure 3. Noise amplification Ω using BURS for spiral trajectories. (a) The result from [4]; the x-axis represents the distance from the origin of the k-plane. (b) The Ω values for the row y = 0 based on our own spiral trajectory; the x-axis represents the k_x coordinate.

Both results show that most points have a noise amplification of about unity; however, a substantial number of points have extremely high noise amplification. This is the reason for the reported noise-contaminated results of the BURS algorithm. Although only a small part of the k-space points have a very high noise level, after the Fourier transform the noise is distributed across the whole image.

Equation (5) shows that high noise amplification is caused by large interpolation coefficients, i.e., by the row of A⁺ corresponding to the point k_i. We know that the solution of an inverse problem can be unstable, meaning that small changes in the input data may lead to large perturbations in the results (an ill-posed problem). So it becomes clear that ill-conditioned matrices Aᵀ A cause the large perturbations in the coefficients and finally result in a high noise level in the reconstructed image.

2.3 REGULARIZED BLOCK UNIFORM RESAMPLING ALGORITHM

The basic idea of rBURS is to stabilize the matrix inversion by modifying the problem in such a way that the inversion becomes less sensitive to small perturbations in the data. At the same time, the solution to the modified problem
must remain close to the original solution. Thus the original solution x = A⁺ b is replaced by the approximate solution x' = A⁺_ρ b such that

    lim_{ρ→0} A⁺_ρ b = A⁺ b    (6)

where ρ is a positive smoothing parameter. We now focus on one type of regularization technique, referred to as spectral windowing. By using equation (1),

    A⁺ b = (Aᵀ A)⁻¹ Aᵀ b = Σ_i α_i⁻¹ (v_iᵀ b) u_i    (7)

where the v_i are eigenvectors of A Aᵀ, the u_i are eigenvectors of Aᵀ A, and α_1, α_2, ... are the singular values. A⁺_ρ b is computed as:

    A⁺_ρ b = Σ_i W_ρ(α_i) α_i⁻¹ (v_iᵀ b) u_i    (8)

where the W_ρ(α_i) are called the window coefficients. There are many different definitions for these coefficients, including the truncated singular system expansion and the Tikhonov filter, which are defined respectively as:

    Truncated singular system expansion: W_ρ(α_i) = 1 if α_i > ρ, 0 otherwise    (9)

    Tikhonov filter: W_ρ(α_i) = α_i² / (α_i² + ρ)    (10)

When W_ρ is defined as in (10), it can be proved that x' = A⁺_ρ b can be computed as:

    x' = A⁺_ρ b = Σ_i W_ρ(α_i) α_i⁻¹ (v_iᵀ b) u_i = (Aᵀ A + ρI)⁻¹ Aᵀ b    (11)

In our implementation, equation (11) is employed for regularization.

3. IMPLEMENTATION

In a real system, given the non-Cartesian k-space trajectory, δ and Δ, the matrix A⁺ can be calculated with the BURS algorithm of Section 2.1 and saved prior to reconstruction. Whenever data sampling is done and reconstruction is needed, the matrix A⁺ can be reloaded and used directly. In this way, the computational time is shortened dramatically. But this method needs to process the huge matrix A⁺, which makes the data handling not so easy. In addition, in order to test the different parameter
combinations in this paper, the parameters δ and Δ change from time to time, which makes A⁺ change each time. So, in our simulation, instead of storing the huge matrix and interpolating all points at one time, the Cartesian-point interpolation is done point by point through the whole image.

3.1 IMPLEMENTATION OF BURS/rBURS ALGORITHM

1. Initialize an N × N matrix M with zeros (the size of the image is N × N).
2. For each Cartesian grid point M_ij (i = 1, ..., N; j = 1, ..., N):
   2.a. Select the M_ij non-uniformly sampled points in a δ neighborhood of M_ij. Form an M_ij × 1 column vector d_ij from the M_ij known non-uniformly sampled data.
   2.b. Select the N_ij Cartesian grid points in a Δ neighborhood of M_ij.
   2.c. Form an M_ij × N_ij matrix A of the interpolation coefficients based on the sinc function.
   2.d-BURS. For the BURS algorithm, A⁺ = pseudo-inverse matrix of A.
   2.d-rBURS. For the rBURS algorithm, A⁺ = (Aᵀ A + ρI)⁻¹ Aᵀ.
   2.e. Let a_ij be the row of A⁺ corresponding to the point M_ij; then set M_ij = a_ij d_ij.
3. Perform an inverse Fourier transform (IFT) on M.

3.2 SHAPE OF THE NEIGHBORHOOD

In the real implementation, the neighborhood of a point (x_0, y_0) within radius r can be defined in at least two different ways:

1. Circular neighborhood with radius r_C:

    {neighborhood}_C = {(x, y) : ||(x, y) − (x_0, y_0)|| ≤ r_C}    (12)

2. Square neighborhood with radius r_S:

    {neighborhood}_S = {(x, y) : max(|x − x_0|, |y − y_0|) ≤ r_S}    (13)

Notice that when the radii have the same value, the square neighborhood covers a larger area than the circular one. To make both definitions cover the same area, r_C and r_S should satisfy:

    π r_C² = (2 r_S)²  ⇒  r_S = √(π/4) r_C    (14)
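Step 2.d-rBURS can be cross-checked against the spectral-windowing view of Section 2.3. The sketch below (illustrative Python with a random stand-in matrix, not the project's MATLAB code) computes the regularized pseudo-inverse solution three ways: by the normal equations of equation (11), by the dual form used in the appendix code for the underdetermined branch, and by the SVD with the Tikhonov window W = α²/(α² + ρ). All three agree.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(12, 9))   # stand-in for an M x N sinc interpolation matrix
b = rng.normal(size=12)
rho = 0.05

# Equation (11): (A^T A + rho I)^-1 A^T b
x_normal = np.linalg.solve(A.T @ A + rho * np.eye(9), A.T @ b)

# Dual form A^T (A A^T + rho I)^-1 b (the "underdetermined" branch in the appendix)
x_dual = A.T @ np.linalg.solve(A @ A.T + rho * np.eye(12), b)

# SVD form with the Tikhonov window of equation (10): W = alpha^2 / (alpha^2 + rho)
U, alpha, Vt = np.linalg.svd(A, full_matrices=False)
W = alpha ** 2 / (alpha ** 2 + rho)
x_svd = Vt.T @ (W / alpha * (U.T @ b))
```

The agreement of the dual form is why the appendix code may freely pick whichever inner matrix (Aᵀ A or A Aᵀ) is smaller to invert.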
A circular neighborhood has the advantage that the closest (in the 2-norm sense) points to the center of the neighborhood are selected. A square neighborhood will select some points (in the corners of the square) that are not so close to the center, but it is easier to implement and computationally more effective. In the simulation, the two neighborhood definitions are tested and compared. To make the comparison equitable, the same effective radius r is used for the different shapes, with r_C and r_S computed via equation (14). Supposing both neighborhoods have the same effective radius r, then:

    For a circular neighborhood: r_C = r;  for a square neighborhood: r_S = √(π/4) r    (15)

4. SIMULATIONS AND RESULTS

The data set used here is a simulated phantom using a spiral acquisition with 6 interleaves of 1536 samples. The center part of the trajectory is illustrated in Figure 2. Four problems are studied in our simulation:

1) How the reconstruction result changes with Δ and δ. Figure 4 shows the results for Δ = 1 while δ varies from 0.3 to 1. Figure 5 shows the results for Δ = 2 while δ varies from 0.5 to 1.4. Beyond these δ ranges, the results become unacceptable.

2) How the shape of the neighborhood (circular vs. square) affects the results. In the simulation, a circular neighborhood is always used for δ (non-Cartesian points); circular AND square neighborhoods are tested for the Δ radius (Cartesian points). Setting the effective Δ = 1, 2, 3 respectively, δ values are chosen such that the best reconstruction is achieved for each case. Circular and square neighborhoods are tested with the same effective radius and δ settings. Figure 6 shows the results.

3) BURS vs. rBURS algorithm. One image with high SNR and one with low SNR are tested using the BURS and rBURS algorithms respectively. The low-SNR image is produced by adding Gaussian noise to the k-space spiral-sampled data.

4) Comparison of the BURS/rBURS results with the true image. The image reconstructed by gridding with pre-density compensation & deapodization is used as the original
image. Then we compare the best results produced by BURS and rBURS with the original image. Figure 8 shows the images and the difference images. Figure 9 shows the profiles of the images.

(a) δ = 0.3, Δ = 1; M = 3, N = 5 @ (64, 64)
(b) δ = 0.4, Δ = 1; M = 4, N = 5 @ (64, 64)
(c) δ = 0.5, Δ = 1; M = 5, N = 5 @ (64, 64)
(d) δ = 0.7, Δ = 1; M = 12, N = 5 @ (64, 64)
(e) δ = 0.9, Δ = 1; M = 35, N = 5 @ (64, 64)
(f) δ = 1, Δ = 1; M = 37, N = 5 @ (64, 64)

Figure 4. Comparing different Δ and δ combinations for the BURS algorithm. Circular neighborhoods are used for Δ and δ. Δ is fixed at 1, and the δ value is changed from 0.3 to 1. The M and N values at (x, y) = (64,
64) are provided for each case. It shows that when a badly underdetermined case (M >> N) occurs, some artifacts appear in the reconstructed image.

(a) δ = 0.5, Δ = 2; M = 5, N = 13 @ (64, 64)
(b) δ = 0.7, Δ = 2; M = 12, N = 13 @ (64, 64)
(c) δ = 0.9, Δ = 2; M = 35, N = 13 @ (64, 64)
(d) δ = 1, Δ = 2; M = 37, N = 13 @ (64, 64)
(e) δ = 1.2, Δ = 2; M = 43, N = 13 @ (64, 64)
(f) δ = 1.4, Δ = 2; M = 51, N = 13 @ (64, 64)

Figure 5. Comparing different Δ and δ selections for the BURS algorithm. Circular neighborhoods are used for Δ and δ. Δ is fixed at 2, and the δ value is changed from 0.5 to 1.4. The M and N values at (x, y) = (64, 64) are provided for each case. It shows that when a badly underdetermined case (M >> N) occurs, some artifacts appear in the reconstructed image.
(a) δ = 0.6, Δ = 1, circular    (b) δ = 0.6, Δ = 1, square
(c) δ = 0.9, Δ = 2, circular    (d) δ = 0.9, Δ = 2, square
(e) δ = 1.3, Δ = 3, circular    (f) δ = 1.3, Δ = 3, square

Figure 6. Comparing circular with square neighborhoods for the BURS algorithm. Circular neighborhoods are always used for δ; circular and square neighborhoods are tested for Δ. Fixing the effective neighborhood radius Δ = 1, 2 and 3, δ values are selected such that the best reconstruction result is achieved for each case. The results show that the different neighborhood shapes have some, but limited, effect on the reconstructed image (the circular neighborhood is a little bit better).
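The circular/square selection of equations (12)-(14) behind Figure 6 can be sketched as follows (illustrative Python; the project's MATLAB appendix implements the same test inline). Positions are complex numbers, as in the appendix, and the equal-area radii follow equation (14).

```python
import numpy as np

def neighbors(center, pts, r, shape):
    """Indices of the points inside a circular ('c') or square ('s')
    neighborhood of radius r around `center` (complex positions)."""
    dx = pts.real - center.real
    dy = pts.imag - center.imag
    if shape == 'c':
        return np.flatnonzero(dx ** 2 + dy ** 2 <= r ** 2 + 1e-12)
    return np.flatnonzero(np.maximum(np.abs(dx), np.abs(dy)) <= r + 1e-12)

# equal coverage area, equation (14): pi * rC^2 = (2 * rS)^2
r_eff = 3.0
rC = r_eff
rS = np.sqrt(np.pi / 4.0) * r_eff
```

At the same radius the square is a superset of the circle; shrinking the square radius by √(π/4) ≈ 0.886 equalizes the covered area.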
(1a) Original w/ high SNR       (2a) Original w/ low SNR
(1b) BURS result w/ high SNR    (2b) BURS result w/ low SNR
(1c) rBURS result w/ high SNR   (2c) rBURS result w/ low SNR

Figure 7. Comparison of the BURS and rBURS algorithms. The left column is the high-SNR case; the right column is the low-SNR case. The original image is produced by gridding with pre-density compensation & deapodization. The low-SNR image is produced by adding Gaussian noise in k-space. For all BURS/rBURS reconstructions, δ = 1.5 and Δ = 3. Regularization smoothing parameter ρ =
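The low-SNR test data of Figure 7 is described as spiral k-space data plus Gaussian noise. A hedged sketch (illustrative values only; a constant array stands in for the real spiral samples, and the noise level is made up):

```python
import numpy as np

rng = np.random.default_rng(1)
kdata = np.ones(1000, dtype=complex)   # stand-in for the spiral k-space samples
sigma = 0.1                            # illustrative noise standard deviation

# complex white Gaussian noise added directly in k-space
noisy = kdata + rng.normal(0.0, sigma, kdata.shape) \
              + 1j * rng.normal(0.0, sigma, kdata.shape)
```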
(a) Original image    (b) BURS    (c) Difference image between BURS and the original image
(d) rBURS    (e) Difference image between rBURS and the original image

Figure 8. Comparison of the best BURS and best rBURS results with the original image. The original image is produced by gridding with pre-density compensation & deapodization. The results shown here for BURS and rBURS are the best results we obtained during the simulation. The difference images shown on the right are the differences between BURS/rBURS and the original image.
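The reference image used in Figures 7 and 8 comes from conventional gridding with pre-density compensation and deapodization, i.e., the four-step pipeline of the introduction. A minimal 1D sketch of steps 1-2 follows (illustrative Python with hypothetical Kaiser-Bessel parameters; the real pipeline is 2D and is not the code used in this project):

```python
import numpy as np

def kb_window(u, width=4.0, beta=8.0):
    """Kaiser-Bessel convolution window, zero outside |u| <= width/2."""
    t = 1.0 - (2.0 * u / width) ** 2
    w = np.where(t > 0, np.i0(beta * np.sqrt(np.clip(t, 0.0, None))), 0.0)
    return w / np.i0(beta)

def grid_1d(k, data, density, n=64, width=4.0, beta=8.0):
    """Steps 1-2: density pre-compensation, then convolution onto an n-point grid.
    k holds sample positions in [-0.5, 0.5). Steps 3-4 (IFFT and division by the
    window transform, i.e. deapodization) would follow on the returned grid."""
    grid = np.zeros(n, dtype=complex)
    centers = (k + 0.5) * n                      # map k to grid units
    for km, dm in zip(centers, data * density):  # step 1: pre-compensate
        lo = int(np.ceil(km - width / 2.0))
        for j in range(lo, lo + int(width) + 1): # step 2: convolve onto the grid
            if 0 <= j < n:
                grid[j] += dm * kb_window(j - km, width, beta)
    return grid
```

A sample that falls exactly on a grid point contributes its full pre-compensated value there, since the window equals 1 at zero offset.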
(a) Profile for the original image, row x =
(b) Profile for BURS, row x =
(c) Profile for rBURS, row x = 78

Figure 9. The profiles for the different images shown in Figure 8.

5. CONCLUSIONS
(1) Effect of the neighborhood radii δ and Δ. Figures 4 and 5 show that (i) if δ is too small (δ < 0.3), then no matter how large Δ is, we cannot get a very good result; (ii) keeping Δ fixed, as δ increases from a very small value (around 0.3), the result first becomes better and then becomes worse, and for the tested cases BURS produces the best result when Δ/δ ≈ 1.5 ~ 2.0; (iii) when δ is fixed, increasing the value of Δ makes the result better.

If we check the BURS algorithm more carefully, we find that although δ and Δ affect the result, they are not the root cause. In fact, it is the M and N values that really affect the result! In order to produce good results, M should NOT exceed N by too much. If M >> N occurs for some points (this often happens around the origin of the k-plane, because our spiral data is denser around the origin, which makes M reach its maximum there), we can still get a result, but there will be some low-frequency artifacts in the images (see Figures 4e, 4f, 5e, 5f). Now we can explain points (i)~(iii) above in terms of the M and N values: (i) N and M should not be too small; (ii) M must not exceed N by too much at any point, otherwise the result will show some low-frequency artifacts; (iii) the bigger N and M are, the better the result.

(2) Effect of the shape of the neighborhood. Our results show that BURS with a circular neighborhood produces slightly better results than with a square neighborhood, but the differences are small (Figure 6).

(3) BURS vs. rBURS. BURS is sensitive to high levels of noise as well as to the underdetermined case (Figures 7-1b, 2b). On the contrary, rBURS is robust to high levels of noise as well as to the underdetermined case (Figures 7-1c, 2c). rBURS is also robust to the combination of high noise and an underdetermined matrix. Even in this worst case, the result of rBURS (Figure 7-2c) is still very close to the original image (Figure 7-2a), which is produced using gridding with pre-density compensation & deapodization.
(4) Fidelity of BURS/rBURS. By checking the reconstructed images, the difference images (Figure 8) and the profiles of the reconstructed images (Figure 9), we can conclude that (I) the best results produced by BURS and rBURS are very close to the original image; (II) there are some small errors in the high-frequency components, i.e., some errors around the edges.

REFERENCES

[1] John Pauly. Image Reconstruction Textbook (in progress), Chapter 5: Reconstruction of Non-Cartesian Data.
[2] Rosenfeld D. An optimal and efficient new gridding algorithm using singular value decomposition. Magnetic Resonance in Medicine 1998; 40:
[3] Moriguchi H, Wendt M, Duerk JL. Applying the uniform resampling (URS) algorithm to a Lissajous trajectory: fast image reconstruction with optimal gridding. Magnetic Resonance in Medicine 2000; 44:
[4] Rosenfeld D. New approach to gridding using regularization and estimation theory. Magnetic Resonance in Medicine 2002; 48:

APPENDIX: MATLAB CODES

%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% EE591 MRI                 %
% Term Project BURS & rBURS %
% Zheng Li, Dec             %
%%%%%%%%%%%%%%%%%%%%%%%%%%%%
clear; close all;
n = 128;
load rt_spiral.mat;    % {d: data; k: sampling kernel; w: weight}
%load noise_spiral;    % load the same random noise data each time
%d = nd + d;           % {nd: additive Gaussian noise}

erc = 3;     % (effective) radius of the Delta-neighborhood in Cartesian coordinates
r = 1.3;     % radius of the delta-neighborhood in non-Cartesian coordinates
shape = 'c'; % shape of the neighborhood, 'c' --> circular; 's' --> square

% for a square neighborhood, rc = effective r * sqrt(pi/4)
if isequal(shape, 's')
    rc = erc * sqrt(pi/4);
    disp(strcat('Square Neighborhood, Radius=', num2str(rc)));
end
% for a circular neighborhood, rc = effective r
if isequal(shape, 'c')
    rc = erc;
    disp(strcat('Circular Neighborhood, Radius=', num2str(rc)));
end

if (0)  % BURS
    [MB, OMB] = gridBURS(d, k, n, r, rc, shape);  % call BURS gridding function
    imgB = ift(MB);   % ift: inverse-FT helper (not listed in this appendix)
    figure; imagesc(abs(imgB)); axis square;
    colormap('gray'); colormenu; axis off;
else    % rBURS
    [MrB, OMrB] = gridrBURS(d, k, n, r, rc, 0.01, shape);  % call rBURS gridding function
    imgrB = ift(MrB);
    figure; imagesc(abs(imgrB)); axis square;
    colormap('gray'); colormenu; axis off;
end

%%%%%%%%%%%%%%%%%%%%%%%
% function BURS        %
%%%%%%%%%%%%%%%%%%%%%%%
function [M, OM] = gridBURS(d, k, n, r, rc, shape)
% function [M, OM] = gridBURS(d, k, n, r, rc, shape)
% Block Uniform ReSampling method for gridding
% d  -- k-space data
% k  -- k-trajectory, scaled -0.5 to 0.5
% n  -- image size
% r  -- non-Cartesian kernel radius
% rc -- Cartesian kernel radius
% shape -- choose circular (=='c') neighborhood
%          or square (=='s') neighborhood for Cartesian points
%
% M  -- k-space interpolated data
% OM -- noise amplification (defined in Rosenfeld 2002 Magn Reson Med)
%
% Zheng Li, Nov

% convert to single columns
d = d(:);
k = k(:);
% convert k-space samples to matrix indices
nkx = (n+1)/2 + (n-1)*real(k);
nky = (n+1)/2 + (n-1)*imag(k);
% initialize the output matrices
M = zeros(n, n);
OM = zeros(n, n);
% reshape the Cartesian coordinates into single columns, so that we can
% find the Cartesian points within "rc" easily
[mxc, myc] = meshgrid(1:n, 1:n);
mxc = mxc(:);
myc = myc(:);
% main loop, compute the BURS gridding value for each point
for xc = ceil(1+rc):floor(n-rc)
    for yc = ceil(1+rc):floor(n-rc)
        if shape == 's'
            [mxdc, mydc] = meshgrid(ceil(xc-rc):floor(xc+rc), ceil(yc-rc):floor(yc+rc));
            % get the Cartesian points in the "square" neighborhood of (xc,yc)
            xyc = mxdc(:) + 1i*mydc(:);
            indc = ones(length(xyc), 1);   % just to make indc nonempty
        elseif shape == 'c'
            % find the indices of the Cartesian points in the "rc" neighborhood of (xc+yc*1i)
            indc = find( ((mxc-xc).^2 + (myc-yc).^2) <= rc^2 + eps );
            if ~isempty(indc)
                % get the Cartesian points in the "circular" neighborhood of (xc,yc)
                xyc = mxc(indc) + 1i*myc(indc);
            end
        end
        % find the indices of the non-Cartesian points in the "r" neighborhood of (xc+yc*1i)
        ind = find( ((nkx-xc).^2 + (nky-yc).^2) <= r^2 + eps );
        % print N (# of Cartesian points) and M (# of non-Cartesian points)
        % at several positions along the y = 0 axis
        if (yc == 64) & (mod(xc, 16) == 0)
            disp(strcat('N=', num2str(length(indc)), ...
                ', M=', num2str(length(ind)), ...
                '@(', num2str(xc), ',', num2str(yc), ')'));
        end
        if ~isempty(ind) & ~isempty(indc)
            % get the non-Cartesian points in the "circular" neighborhood of (xc,yc)
            xy = nkx(ind) + 1i*nky(ind);
            A = interp2sinc(xyc, xy);
            pA = pinv(A);   % pinv handles over/under-determined cases automatically
            ind0 = find( xyc == xc + yc*1i );
            M(xc, yc) = pA(ind0, :) * d(ind);
            OM(xc, yc) = sqrt( sum(pA(ind0, :).^2) );
        end
    end
end
% end of gridBURS

%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% function rBURS            %
%%%%%%%%%%%%%%%%%%%%%%%%%%%%
function [M, OM] = gridrBURS(d, k, n, r, rc, ri, shape)
% function [M, OM] = gridrBURS(d, k, n, r, rc, ri, shape)
% regularized Block Uniform ReSampling method for gridding
% d  -- k-space data
% k  -- k-trajectory, scaled -0.5 to 0.5
% n  -- image size
% r  -- non-Cartesian kernel radius
% rc -- Cartesian kernel radius
% shape -- choose circular (=='c') neighborhood
%          or square (=='s') neighborhood for Cartesian points
% ri -- regularization smoothing parameter
%
% M  -- k-space interpolated data
% OM -- noise amplification (defined in Rosenfeld 2002 Magn Reson Med)
%
% Zheng Li, Nov

% convert to single columns
d = d(:);
k = k(:);
% convert k-space samples to matrix indices
nkx = (n+1)/2 + (n-1)*real(k);
nky = (n+1)/2 + (n-1)*imag(k);
% initialize the output matrices
M = zeros(n, n);
OM = zeros(n, n);
% reshape the Cartesian coordinates into single columns, so that we can
% find the Cartesian points within "rc" easily
[mxc, myc] = meshgrid(1:n, 1:n);
mxc = mxc(:);
myc = myc(:);
% main loop, compute the rBURS gridding value for each point
for xc = ceil(1+rc):floor(n-rc)
    for yc = ceil(1+rc):floor(n-rc)
        if shape == 's'
            [mxdc, mydc] = meshgrid(ceil(xc-rc):floor(xc+rc), ceil(yc-rc):floor(yc+rc));
            % get the Cartesian points in the "square" neighborhood of (xc,yc)
            xyc = mxdc(:) + 1i*mydc(:);
            indc = ones(length(xyc), 1);   % just to make indc nonempty
        elseif shape == 'c'
            % find the indices of the Cartesian points in the "rc" neighborhood of (xc+yc*1i)
            indc = find( ((mxc-xc).^2 + (myc-yc).^2) <= rc^2 + eps );
            if ~isempty(indc)
                % get the Cartesian points in the "circular" neighborhood of (xc,yc)
                xyc = mxc(indc) + 1i*myc(indc);
            end
        end
        % find the indices of the non-Cartesian points in the "r" neighborhood of (xc+yc*1i)
        ind = find( ((nkx-xc).^2 + (nky-yc).^2) <= r^2 + eps );
        % print N (# of Cartesian points) and M (# of non-Cartesian points)
        % at several positions along the y = 0 axis
        if (yc == 64) & (mod(xc, 16) == 0)
            disp(strcat('N=', num2str(length(indc)), ...
                ', M=', num2str(length(ind)), ...
                '@(', num2str(xc), ',', num2str(yc), ')'));
        end
        if ~isempty(ind) & ~isempty(indc)
            % get the non-Cartesian points in the "circular" neighborhood of (xc,yc)
            xy = nkx(ind) + 1i*nky(ind);
            A = interp2sinc(xyc, xy);
            % compute the regularized pseudo-inverse using "Tikhonov" window coefficients
            if length(ind) <= length(indc)   % underdetermined case
                pA = A' * inv(A*A' + ri*eye(length(ind)));
            else                             % overdetermined case
                pA = inv(A'*A + ri*eye(length(indc))) * A';
            end
            ind0 = find( xyc == xc + yc*1i );
            M(xc, yc) = pA(ind0, :) * d(ind);
            OM(xc, yc) = sqrt( sum(pA(ind0, :).^2) );
        end
    end
end
% end of gridrBURS

%%%%%%%%%%%%%%%%%%%%%%%
% function interp2sinc %
%%%%%%%%%%%%%%%%%%%%%%%
function A = interp2sinc(xyc, xy)
% function A = interp2sinc(xyc, xy)
% 2D interpolation using the sinc function.
% xyc: the column vector of Cartesian point positions,
%      each position represented by a complex number.
% xy:  the column vector of non-Cartesian point positions.
% A:   the linear transform matrix s.t. (DATA@xy) = A*(DATA@xyc)
%
% Zheng Li, Nov

% pxyc/pxy are the Cartesian (c) and non-Cartesian point positions
[pxyc, pxy] = meshgrid(xyc, xy);
% the distances between each pair of Cartesian and non-Cartesian points
dist = pxyc - pxy;
A = sinc(real(dist)) .* sinc(imag(dist));
% end
More informationSTAT 511 FINAL EXAM NAME Spring 2001
STAT 5 FINAL EXAM NAME Sprng Instructons: Ths s a closed book exam. No notes or books are allowed. ou may use a calculator but you are not allowed to store notes or formulas n the calculator. Please wrte
More informationCSci 6974 and ECSE 6966 Math. Tech. for Vision, Graphics and Robotics Lecture 21, April 17, 2006 Estimating A Plane Homography
CSc 6974 and ECSE 6966 Math. Tech. for Vson, Graphcs and Robotcs Lecture 21, Aprl 17, 2006 Estmatng A Plane Homography Overvew We contnue wth a dscusson of the major ssues, usng estmaton of plane projectve
More informationChapter 5. Solution of System of Linear Equations. Module No. 6. Solution of Inconsistent and Ill Conditioned Systems
Numercal Analyss by Dr. Anta Pal Assstant Professor Department of Mathematcs Natonal Insttute of Technology Durgapur Durgapur-713209 emal: anta.bue@gmal.com 1 . Chapter 5 Soluton of System of Lnear Equatons
More informationNUMERICAL DIFFERENTIATION
NUMERICAL DIFFERENTIATION 1 Introducton Dfferentaton s a method to compute the rate at whch a dependent output y changes wth respect to the change n the ndependent nput x. Ths rate of change s called the
More informationArmy Ants Tunneling for Classical Simulations
Electronc Supplementary Materal (ESI) for Chemcal Scence. Ths journal s The Royal Socety of Chemstry 2014 electronc supplementary nformaton (ESI) for Chemcal Scence Army Ants Tunnelng for Classcal Smulatons
More informationAdaptive Manifold Learning
Adaptve Manfold Learnng Jng Wang, Zhenyue Zhang Department of Mathematcs Zhejang Unversty, Yuquan Campus, Hangzhou, 327, P. R. Chna wroarng@sohu.com zyzhang@zju.edu.cn Hongyuan Zha Department of Computer
More informationLectures - Week 4 Matrix norms, Conditioning, Vector Spaces, Linear Independence, Spanning sets and Basis, Null space and Range of a Matrix
Lectures - Week 4 Matrx norms, Condtonng, Vector Spaces, Lnear Independence, Spannng sets and Bass, Null space and Range of a Matrx Matrx Norms Now we turn to assocatng a number to each matrx. We could
More informationMACHINE APPLIED MACHINE LEARNING LEARNING. Gaussian Mixture Regression
11 MACHINE APPLIED MACHINE LEARNING LEARNING MACHINE LEARNING Gaussan Mture Regresson 22 MACHINE APPLIED MACHINE LEARNING LEARNING Bref summary of last week s lecture 33 MACHINE APPLIED MACHINE LEARNING
More informationGeneralized Linear Methods
Generalzed Lnear Methods 1 Introducton In the Ensemble Methods the general dea s that usng a combnaton of several weak learner one could make a better learner. More formally, assume that we have a set
More informationFourier Transform. Additive noise. Fourier Tansform. I = S + N. Noise doesn t depend on signal. We ll consider:
Flterng Announcements HW2 wll be posted later today Constructng a mosac by warpng mages. CSE252A Lecture 10a Flterng Exampel: Smoothng by Averagng Kernel: (From Bll Freeman) m=2 I Kernel sze s m+1 by m+1
More informationSection 8.3 Polar Form of Complex Numbers
80 Chapter 8 Secton 8 Polar Form of Complex Numbers From prevous classes, you may have encountered magnary numbers the square roots of negatve numbers and, more generally, complex numbers whch are the
More informationMore metrics on cartesian products
More metrcs on cartesan products If (X, d ) are metrc spaces for 1 n, then n Secton II4 of the lecture notes we defned three metrcs on X whose underlyng topologes are the product topology The purpose of
More informationUncertainty as the Overlap of Alternate Conditional Distributions
Uncertanty as the Overlap of Alternate Condtonal Dstrbutons Olena Babak and Clayton V. Deutsch Centre for Computatonal Geostatstcs Department of Cvl & Envronmental Engneerng Unversty of Alberta An mportant
More informationThe equation of motion of a dynamical system is given by a set of differential equations. That is (1)
Dynamcal Systems Many engneerng and natural systems are dynamcal systems. For example a pendulum s a dynamcal system. State l The state of the dynamcal system specfes t condtons. For a pendulum n the absence
More informationAnnexes. EC.1. Cycle-base move illustration. EC.2. Problem Instances
ec Annexes Ths Annex frst llustrates a cycle-based move n the dynamc-block generaton tabu search. It then dsplays the characterstcs of the nstance sets, followed by detaled results of the parametercalbraton
More informationMATH 5630: Discrete Time-Space Model Hung Phan, UMass Lowell March 1, 2018
MATH 5630: Dscrete Tme-Space Model Hung Phan, UMass Lowell March, 08 Newton s Law of Coolng Consder the coolng of a well strred coffee so that the temperature does not depend on space Newton s law of collng
More informationHongyi Miao, College of Science, Nanjing Forestry University, Nanjing ,China. (Received 20 June 2013, accepted 11 March 2014) I)ϕ (k)
ISSN 1749-3889 (prnt), 1749-3897 (onlne) Internatonal Journal of Nonlnear Scence Vol.17(2014) No.2,pp.188-192 Modfed Block Jacob-Davdson Method for Solvng Large Sparse Egenproblems Hongy Mao, College of
More informationChapter - 2. Distribution System Power Flow Analysis
Chapter - 2 Dstrbuton System Power Flow Analyss CHAPTER - 2 Radal Dstrbuton System Load Flow 2.1 Introducton Load flow s an mportant tool [66] for analyzng electrcal power system network performance. Load
More informationAperture Photometry Uncertainties assuming Priors and Correlated Noise
Aperture Photometry Uncertantes assumng Prors and Correlated Nose F. Masc, verson.0, 10/06/009 1. Summary We derve a general formula for the nose varance n the flux of a source estmated from aperture photometry
More informationCurve Fitting with the Least Square Method
WIKI Document Number 5 Interpolaton wth Least Squares Curve Fttng wth the Least Square Method Mattheu Bultelle Department of Bo-Engneerng Imperal College, London Context We wsh to model the postve feedback
More informationJoint Statistical Meetings - Biopharmaceutical Section
Iteratve Ch-Square Test for Equvalence of Multple Treatment Groups Te-Hua Ng*, U.S. Food and Drug Admnstraton 1401 Rockvlle Pke, #200S, HFM-217, Rockvlle, MD 20852-1448 Key Words: Equvalence Testng; Actve
More information= = = (a) Use the MATLAB command rref to solve the system. (b) Let A be the coefficient matrix and B be the right-hand side of the system.
Chapter Matlab Exercses Chapter Matlab Exercses. Consder the lnear system of Example n Secton.. x x x y z y y z (a) Use the MATLAB command rref to solve the system. (b) Let A be the coeffcent matrx and
More informationIV. Performance Optimization
IV. Performance Optmzaton A. Steepest descent algorthm defnton how to set up bounds on learnng rate mnmzaton n a lne (varyng learnng rate) momentum learnng examples B. Newton s method defnton Gauss-Newton
More informationSpeeding up Computation of Scalar Multiplication in Elliptic Curve Cryptosystem
H.K. Pathak et. al. / (IJCSE) Internatonal Journal on Computer Scence and Engneerng Speedng up Computaton of Scalar Multplcaton n Ellptc Curve Cryptosystem H. K. Pathak Manju Sangh S.o.S n Computer scence
More informationSolutions to exam in SF1811 Optimization, Jan 14, 2015
Solutons to exam n SF8 Optmzaton, Jan 4, 25 3 3 O------O -4 \ / \ / The network: \/ where all lnks go from left to rght. /\ / \ / \ 6 O------O -5 2 4.(a) Let x = ( x 3, x 4, x 23, x 24 ) T, where the varable
More informationGlobal Sensitivity. Tuesday 20 th February, 2018
Global Senstvty Tuesday 2 th February, 28 ) Local Senstvty Most senstvty analyses [] are based on local estmates of senstvty, typcally by expandng the response n a Taylor seres about some specfc values
More informationLinear Feature Engineering 11
Lnear Feature Engneerng 11 2 Least-Squares 2.1 Smple least-squares Consder the followng dataset. We have a bunch of nputs x and correspondng outputs y. The partcular values n ths dataset are x y 0.23 0.19
More informationTutorial 2. COMP4134 Biometrics Authentication. February 9, Jun Xu, Teaching Asistant
Tutoral 2 COMP434 ometrcs uthentcaton Jun Xu, Teachng sstant csjunxu@comp.polyu.edu.hk February 9, 207 Table of Contents Problems Problem : nswer the questons Problem 2: Power law functon Problem 3: Convoluton
More informationWeek 9 Chapter 10 Section 1-5
Week 9 Chapter 10 Secton 1-5 Rotaton Rgd Object A rgd object s one that s nondeformable The relatve locatons of all partcles makng up the object reman constant All real objects are deformable to some extent,
More informationTracking with Kalman Filter
Trackng wth Kalman Flter Scott T. Acton Vrgna Image and Vdeo Analyss (VIVA), Charles L. Brown Department of Electrcal and Computer Engneerng Department of Bomedcal Engneerng Unversty of Vrgna, Charlottesvlle,
More informationGEMINI GEneric Multimedia INdexIng
GEMINI GEnerc Multmeda INdexIng Last lecture, LSH http://www.mt.edu/~andon/lsh/ Is there another possble soluton? Do we need to perform ANN? 1 GEnerc Multmeda INdexIng dstance measure Sub-pattern Match
More informationA linear imaging system with white additive Gaussian noise on the observed data is modeled as follows:
Supplementary Note Mathematcal bacground A lnear magng system wth whte addtve Gaussan nose on the observed data s modeled as follows: X = R ϕ V + G, () where X R are the expermental, two-dmensonal proecton
More informationApplication of Dynamic Time Warping on Kalman Filtering Framework for Abnormal ECG Filtering
Applcaton of Dynamc Tme Warpng on Kalman Flterng Framework for Abnormal ECG Flterng Abstract. Mohammad Nknazar, Bertrand Rvet, and Chrstan Jutten GIPSA-lab (UMR CNRS 5216) - Unversty of Grenoble Grenoble,
More informationDECOUPLING THEORY HW2
8.8 DECOUPLIG THEORY HW2 DOGHAO WAG DATE:OCT. 3 207 Problem We shall start by reformulatng the problem. Denote by δ S n the delta functon that s evenly dstrbuted at the n ) dmensonal unt sphere. As a temporal
More informationMarkov Chain Monte Carlo Lecture 6
where (x 1,..., x N ) X N, N s called the populaton sze, f(x) f (x) for at least one {1, 2,..., N}, and those dfferent from f(x) are called the tral dstrbutons n terms of mportance samplng. Dfferent ways
More information2 Finite difference basics
Numersche Methoden 1, WS 11/12 B.J.P. Kaus 2 Fnte dfference bascs Consder the one- The bascs of the fnte dfference method are best understood wth an example. dmensonal transent heat conducton equaton T
More informationNotes prepared by Prof Mrs) M.J. Gholba Class M.Sc Part(I) Information Technology
Inverse transformatons Generaton of random observatons from gven dstrbutons Assume that random numbers,,, are readly avalable, where each tself s a random varable whch s unformly dstrbuted over the range(,).
More informationUnified Subspace Analysis for Face Recognition
Unfed Subspace Analyss for Face Recognton Xaogang Wang and Xaoou Tang Department of Informaton Engneerng The Chnese Unversty of Hong Kong Shatn, Hong Kong {xgwang, xtang}@e.cuhk.edu.hk Abstract PCA, LDA
More informationEEE 241: Linear Systems
EEE : Lnear Systems Summary #: Backpropagaton BACKPROPAGATION The perceptron rule as well as the Wdrow Hoff learnng were desgned to tran sngle layer networks. They suffer from the same dsadvantage: they
More informationCONTRAST ENHANCEMENT FOR MIMIMUM MEAN BRIGHTNESS ERROR FROM HISTOGRAM PARTITIONING INTRODUCTION
CONTRAST ENHANCEMENT FOR MIMIMUM MEAN BRIGHTNESS ERROR FROM HISTOGRAM PARTITIONING N. Phanthuna 1,2, F. Cheevasuvt 2 and S. Chtwong 2 1 Department of Electrcal Engneerng, Faculty of Engneerng Rajamangala
More informationFrequency dependence of the permittivity
Frequency dependence of the permttvty February 7, 016 In materals, the delectrc constant and permeablty are actually frequency dependent. Ths does not affect our results for sngle frequency modes, but
More informationECE559VV Project Report
ECE559VV Project Report (Supplementary Notes Loc Xuan Bu I. MAX SUM-RATE SCHEDULING: THE UPLINK CASE We have seen (n the presentaton that, for downlnk (broadcast channels, the strategy maxmzng the sum-rate
More informationSpatial Statistics and Analysis Methods (for GEOG 104 class).
Spatal Statstcs and Analyss Methods (for GEOG 104 class). Provded by Dr. An L, San Dego State Unversty. 1 Ponts Types of spatal data Pont pattern analyss (PPA; such as nearest neghbor dstance, quadrat
More informationFormulas for the Determinant
page 224 224 CHAPTER 3 Determnants e t te t e 2t 38 A = e t 2te t e 2t e t te t 2e 2t 39 If 123 A = 345, 456 compute the matrx product A adj(a) What can you conclude about det(a)? For Problems 40 43, use
More informationModeling curves. Graphs: y = ax+b, y = sin(x) Implicit ax + by + c = 0, x 2 +y 2 =r 2 Parametric:
Modelng curves Types of Curves Graphs: y = ax+b, y = sn(x) Implct ax + by + c = 0, x 2 +y 2 =r 2 Parametrc: x = ax + bxt x = cos t y = ay + byt y = snt Parametrc are the most common mplct are also used,
More informationEconomics 130. Lecture 4 Simple Linear Regression Continued
Economcs 130 Lecture 4 Contnued Readngs for Week 4 Text, Chapter and 3. We contnue wth addressng our second ssue + add n how we evaluate these relatonshps: Where do we get data to do ths analyss? How do
More informationElectrical double layer: revisit based on boundary conditions
Electrcal double layer: revst based on boundary condtons Jong U. Km Department of Electrcal and Computer Engneerng, Texas A&M Unversty College Staton, TX 77843-318, USA Abstract The electrcal double layer
More informationOn a direct solver for linear least squares problems
ISSN 2066-6594 Ann. Acad. Rom. Sc. Ser. Math. Appl. Vol. 8, No. 2/2016 On a drect solver for lnear least squares problems Constantn Popa Abstract The Null Space (NS) algorthm s a drect solver for lnear
More informationCS4495/6495 Introduction to Computer Vision. 3C-L3 Calibrating cameras
CS4495/6495 Introducton to Computer Vson 3C-L3 Calbratng cameras Fnally (last tme): Camera parameters Projecton equaton the cumulatve effect of all parameters: M (3x4) f s x ' 1 0 0 0 c R 0 I T 3 3 3 x1
More informationEconomics 101. Lecture 4 - Equilibrium and Efficiency
Economcs 0 Lecture 4 - Equlbrum and Effcency Intro As dscussed n the prevous lecture, we wll now move from an envronment where we looed at consumers mang decsons n solaton to analyzng economes full of
More informationNotes on Frequency Estimation in Data Streams
Notes on Frequency Estmaton n Data Streams In (one of) the data streamng model(s), the data s a sequence of arrvals a 1, a 2,..., a m of the form a j = (, v) where s the dentty of the tem and belongs to
More informationSecond Order Analysis
Second Order Analyss In the prevous classes we looked at a method that determnes the load correspondng to a state of bfurcaton equlbrum of a perfect frame by egenvalye analyss The system was assumed to
More informationMMA and GCMMA two methods for nonlinear optimization
MMA and GCMMA two methods for nonlnear optmzaton Krster Svanberg Optmzaton and Systems Theory, KTH, Stockholm, Sweden. krlle@math.kth.se Ths note descrbes the algorthms used n the author s 2007 mplementatons
More informationˆ (0.10 m) E ( N m /C ) 36 ˆj ( j C m)
7.. = = 3 = 4 = 5. The electrc feld s constant everywhere between the plates. Ths s ndcated by the electrc feld vectors, whch are all the same length and n the same drecton. 7.5. Model: The dstances to
More informationCHAPTER 14 GENERAL PERTURBATION THEORY
CHAPTER 4 GENERAL PERTURBATION THEORY 4 Introducton A partcle n orbt around a pont mass or a sphercally symmetrc mass dstrbuton s movng n a gravtatonal potental of the form GM / r In ths potental t moves
More informationImage Processing for Bubble Detection in Microfluidics
Image Processng for Bubble Detecton n Mcrofludcs Introducton Chen Fang Mechancal Engneerng Department Stanford Unverst Startng from recentl ears, mcrofludcs devces have been wdel used to buld the bomedcal
More informationTransform Coding. Transform Coding Principle
Transform Codng Prncple of block-wse transform codng Propertes of orthonormal transforms Dscrete cosne transform (DCT) Bt allocaton for transform coeffcents Entropy codng of transform coeffcents Typcal
More informationWhat would be a reasonable choice of the quantization step Δ?
CE 108 HOMEWORK 4 EXERCISE 1. Suppose you are samplng the output of a sensor at 10 KHz and quantze t wth a unform quantzer at 10 ts per sample. Assume that the margnal pdf of the sgnal s Gaussan wth mean
More informationFingerprint Enhancement Based on Discrete Cosine Transform
Fngerprnt Enhancement Based on Dscrete Cosne Transform Suksan Jrachaweng and Vutpong Areekul Kasetsart Sgnal & Image Processng Laboratory (KSIP Lab), Department of Electrcal Engneerng, Kasetsart Unversty,
More informationA new Approach for Solving Linear Ordinary Differential Equations
, ISSN 974-57X (Onlne), ISSN 974-5718 (Prnt), Vol. ; Issue No. 1; Year 14, Copyrght 13-14 by CESER PUBLICATIONS A new Approach for Solvng Lnear Ordnary Dfferental Equatons Fawz Abdelwahd Department of
More informationCME 302: NUMERICAL LINEAR ALGEBRA FALL 2005/06 LECTURE 13
CME 30: NUMERICAL LINEAR ALGEBRA FALL 005/06 LECTURE 13 GENE H GOLUB 1 Iteratve Methods Very large problems (naturally sparse, from applcatons): teratve methods Structured matrces (even sometmes dense,
More informationMatrix Approximation via Sampling, Subspace Embedding. 1 Solving Linear Systems Using SVD
Matrx Approxmaton va Samplng, Subspace Embeddng Lecturer: Anup Rao Scrbe: Rashth Sharma, Peng Zhang 0/01/016 1 Solvng Lnear Systems Usng SVD Two applcatons of SVD have been covered so far. Today we loo
More informationStanford University CS359G: Graph Partitioning and Expanders Handout 4 Luca Trevisan January 13, 2011
Stanford Unversty CS359G: Graph Parttonng and Expanders Handout 4 Luca Trevsan January 3, 0 Lecture 4 In whch we prove the dffcult drecton of Cheeger s nequalty. As n the past lectures, consder an undrected
More informationPop-Click Noise Detection Using Inter-Frame Correlation for Improved Portable Auditory Sensing
Advanced Scence and Technology Letters, pp.164-168 http://dx.do.org/10.14257/astl.2013 Pop-Clc Nose Detecton Usng Inter-Frame Correlaton for Improved Portable Audtory Sensng Dong Yun Lee, Kwang Myung Jeon,
More information