Lossless Compression Performance of a Simple Counter-Based Entropy Coder
ITB J. ICT, Vol. 5, No. 3, 2011

Lossless Compression Performance of a Simple Counter-Based Entropy Coder

Armen Z. R. Langi 1,2
1 ITB Research Center on Information and Communication Technology
2 Information Technology RG, School of Electrical Engineering and Informatics, Institut Teknologi Bandung, Jalan Ganesha 10, Bandung, Indonesia
Email: armen.z.r.langi@stei.itb.ac.id

Abstract. This paper describes the performance of a simple counter-based entropy coder, as compared to other entropy coders, especially the Huffman coder. Lossless data compression coders, such as Huffman coders and arithmetic coders, are designed to perform well over a wide range of data entropy. As a result, these coders require significant computational resources that can become the bottleneck of a compression implementation's performance. In contrast, counter-based coders are designed to be optimal over a limited entropy range only. This paper shows that the encoding and decoding processes of a counter-based coder can be simple and fast, very suitable for hardware and software implementations. It also reports that the performance of the designed coder is comparable to that of a much more complex Huffman coder.

Keywords: entropy coder; counter-based coder; lossless compression; Rice coders.

1 Introduction

Compression schemes reduce the number of bits required to represent signals and data [1]. Image communication and storage are examples of applications that benefit from image compression, because compression results in (i) faster (real-time) image transmission through band-limited channels and (ii) lower requirements for storage space. Non-compressed images require many data bits, making real-time transmission impossible through band-limited integrated services digital network (ISDN) channels, as well as 9.6 kbps voice-grade telephone or radio channels [1]. For example, transmitting a 256 x 256-pixel grayscale still image over a voice-grade line would require at least 54.6 s. Furthermore, one HDTV format needs 60 frames per second.
Using 24 bpp colour pixels, this HDTV format would require an impractical channel capacity of 1,440 Mbps. As another example, space explorations by National Aeronautics and Space Administration (NASA) missions generate huge amounts of science data, requiring real-time compression [2]. Consequently, international bodies such as the Consultative Committee for Space Data Systems (CCSDS) have defined compression standards for real-time systems [3].

Received December 22nd, 2010, Revised November 7th, 2011, Accepted for publication November 7th, 2011. Copyright 2011 Published by LPPM ITB, DOI: 10.5614/itbj.ict
In signal and data compression, lossless compression in the form of entropy coders is always needed. As shown in Figure 1, a value quantization block converts input samples into quantized values according to a prescribed target compression ratio (CR) [4]. A lossless entropy encoder then compresses the quantized values to obtain the desired efficient representations (bitstreams). The quantized values can be recovered by inverting the encoding process at the decoder: an entropy decoder recovers the values losslessly, and a value dequantization block reconstructs the signal samples.

Figure 1 A typical use of a lossless coder in an image compression system.

Since there are fast algorithms for value quantization, such as wavelet scalar quantization (WSQ) [5], the entropy coder is usually the time-performance bottleneck. Entropy coders such as zero run-length (ZRL) coders and Huffman coders must perform statistical modeling of the data distribution and assign shorter codewords to samples that occur more often. This reduces the average number of representation bits, thus achieving, on average, a compression. If the codeword table does not have to be transmitted, the Huffman code is the shortest-length code for statistically independent data. However, the Huffman coder is designed to obtain optimal performance over the full range of entropy values [4]. As a result, the encoding cost is high, because the coder must create a table of codewords for the input data through a calculation of the data distribution. Such processing can be very time consuming.

This paper therefore proposes to use counter-based coders [2], because such coders are much simpler. A counter-based coder consists of a set of subcoders, each operating at a specific range of entropy values (statistics). Such subcoders can be designed to be optimal only for a narrow entropy range [2]. Each subcoder is simply a counter, counting the number of zeros or ones in a data stream.
As a result, the encoder and decoder are very simple and fast.
However, we need to study the performance of counter-based coders in practice in the context of transform coding, since the work in [2] was for differential pulse code modulation schemes. Furthermore, Rice coders have always been thought of as special-purpose coders, as compared to the Huffman coder. We study and design such counter coders to be used in a transform-coder compression scheme. The entropy range of the wavelet coefficients can be used to design a particular Rice coder. We can then obtain a performance comparable to that of the Huffman coder at a fraction of the computational cost [6].

2 Entropy Coders

In principle, an entropy coder is a coder that allocates a number of bits to a codeword proportionally to the information value of that codeword [4].

2.1 First-Order Entropy

Let us assume that the input data consist of a sequence of fixed-length codewords coming from a zero-memory information source (i.e., a source that produces statistically independent codewords). The source has an alphabet S having symbols s_1, s_2, ..., s_q. For a particular set of input data, the probabilities of symbol occurrences are P(s_1), P(s_2), ..., P(s_q), satisfying

P(s_i) >= 0 (1)

sum_{i=1}^{q} P(s_i) = 1 (2)

If a symbol s_i rarely occurs (i.e., P(s_i) is very small), its occurrence has a high surprise value. Hence symbol s_i is said to have a high amount of information I(s_i), which is inversely proportional to the magnitude of P(s_i). If s_i occurs, we are said to have received an amount of information I(s_i) from the source, defined as

I(s_i) = log(1 / P(s_i)) (3)
Since the probability of receiving an amount of information I(s_i) is P(s_i), the average amount of information received per symbol is called the first-order entropy of S, denoted H(S):

H(S) = sum_{i=1}^{q} P(s_i) I(s_i) = sum_{i=1}^{q} P(s_i) log(1 / P(s_i)) (4)

In other words, the source S produces an average amount of information H(S) for each occurrence of a symbol. If we use the binary logarithm (i.e., log_2(1/P(s_i))), the unit of the information amount is bits. To carry this amount of information, the source needs data resources, namely codewords. Using a code W, each symbol s_i needs a distinct codeword w_i. If each codeword w_i is l_i bits long, then the average number of bits required to transfer a symbol from source S using code W is

R = sum_{i=1}^{q} P(s_i) l_i (5)

Since I(s_i) <= l_i, we have

H(S) <= R (6)

This means that for a zero-memory source, the entropy is the lower bound of the average number of bits per symbol. In other words, an entropy coder will have an efficient representation of S, i.e., a small R, but R will not be smaller than the entropy. The minimum achievable length is the data entropy. Thus the performance of an entropy coder lies in how close the resulting R is to H(S). It should be noted that it is possible to have R < H(S) if the source happens to have memory. For statistically independent data, the entropy is simply the first-order entropy [2].

Efficient lossless codes depend on statistical distributions. An entropy coder allocates a number of bits proportional to the amount of information. A rarely occurring symbol gets more bits; highly frequent symbols get codewords with fewer bits. As a result, we can achieve an efficient average bit usage.
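As an illustration (not part of the original paper), the first-order entropy of Eq. (4) can be computed directly from symbol counts; a minimal Python sketch:

```python
import math
from collections import Counter

def first_order_entropy(symbols):
    """First-order entropy H(S) of Eq. (4), in bits per symbol:
    H = sum_i P(s_i) * log2(1 / P(s_i))."""
    counts = Counter(symbols)
    n = len(symbols)
    return sum((c / n) * math.log2(n / c) for c in counts.values())

# A uniform source over 4 symbols carries log2(4) = 2 bits per symbol.
print(first_order_entropy([0, 1, 2, 3]))  # 2.0
```

By Eq. (6), any uniquely decodable code for such a source then needs at least this many bits per symbol on average.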
2.2 Zero Run-Length Coder

Suppose that we have a source producing many consecutive zero symbols, i.e., P(s) is high for s = 0x00. In this case we can use ZRL, which is a very simple code. It simply counts the number of consecutive zeros in the input and produces a zero symbol followed by a number representing the counting result. For example, consider a source that uses an alphabet consisting of fixed 8-bit codewords, {0x00, 0x01, 0x02, ..., 0xFF}. If a ZRL encoder encounters a stream of nine input data:

(0x02, 0x01, 0x00, 0x00, 0x00, 0x00, 0x07, 0x00, 0x7F) (7)

it will produce a stream of eight output data:

(0x02, 0x01, 0x00, 0x04, 0x07, 0x00, 0x01, 0x7F) (8)

Notice that there are five zeros (0x00) in the input stream. The first four consecutive zeros are replaced by a zero symbol 0x00 and a counting result 0x04; the last zero is replaced by a zero symbol 0x00 and a counting result 0x01. Given this simple coding rule, a ZRL decoder is able to recover the original stream from the encoder output stream easily (the code is uniquely decodable) without any data loss. Notice also that the number of data symbols has been reduced from nine to eight, i.e., a lossless compression has been performed. This compression is effective as long as the input stream contains a significant number of consecutive zeros. Otherwise, ZRL will in fact produce an expansion instead of a compression.

2.3 Huffman Coder

The variable-length Huffman coder is well known and widely used because its compression-ratio performance is close to optimal (i.e., the actual compression ratio is very close to the input entropy). In an optimal code, occurrences of long codewords are rare, hence the number of bits is reduced. A Huffman coder maps the input samples using a table of variable-length codewords. The codewords are designed such that the most frequent data sample has the shortest codeword. Consequently, the resulting bit rate is always between H and H+1, regardless of the input entropy H. A Huffman coder maintains the statistical distribution of the source.
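As a brief aside, the ZRL rule of Section 2.2 above can be sketched in Python. This is an illustration, not the exact coder used in the paper; in particular, capping runs at 255 so that the count fits one 8-bit symbol is our assumption.

```python
def zrl_encode(data):
    """Replace each run of zero bytes with (0x00, run_length).
    Run lengths are capped at 255 to fit one 8-bit count symbol (our assumption)."""
    out, i = [], 0
    while i < len(data):
        if data[i] == 0x00:
            run = 0
            while i < len(data) and data[i] == 0x00 and run < 255:
                run += 1
                i += 1
            out += [0x00, run]
        else:
            out.append(data[i])
            i += 1
    return out

def zrl_decode(data):
    """Expand each (0x00, run_length) pair back into a run of zeros."""
    out, i = [], 0
    while i < len(data):
        if data[i] == 0x00:
            out += [0x00] * data[i + 1]
            i += 2
        else:
            out.append(data[i])
            i += 1
    return out

stream = [0x02, 0x01, 0x00, 0x00, 0x00, 0x00, 0x07, 0x00, 0x7F]   # stream (7)
encoded = zrl_encode(stream)
print(encoded)  # [2, 1, 0, 4, 7, 0, 1, 127], i.e. stream (8)
assert zrl_decode(encoded) == stream
```

The nine input symbols indeed compress to eight, matching streams (7) and (8) in the text.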
The coder uses the distribution to estimate the information value of each symbol. It then
creates a codeword for each symbol according to its information value. Finally, it uses that codeword to represent the corresponding symbol from the source. The process of a Huffman coder is thus usually computationally costly, because it consists of two passes. The first pass reads all data samples to generate the statistical distribution, which is then used to allocate bits to the codewords. It allocates bits to codewords of variable length according to a hierarchical tree structure of symbols, in which the hierarchy reflects the importance of the symbols. The first bit is distributed to the symbol with the lowest information value. The remaining bits are distributed by traversing the tree structure in such a way that symbols with more information value receive more bits, while ensuring that the resulting code is always uniquely decodable. The second pass encodes the sample data using the resulting codeword table. This two-pass Huffman scheme must then include the codeword table in the bitstream to allow the decoder to decode the data correctly.

A single-pass Huffman code is even more computationally expensive. To avoid having to send the codeword table, both encoder and decoder must maintain an adaptive codeword table. They adapt the table as each incoming data sample changes the symbol distribution. As a result, both encoder and decoder must periodically create a new tree structure and codeword table.

3 Counter-Based Coders

We propose to use the counter coder as an alternative, more computationally efficient entropy coder. One basic counter coder, called Ψ1 (or PSI-1), works as follows [2]. Given a block of data samples x(j) (for j = 1, ..., J), Ψ1 replaces every sample in the block with a number of consecutive zero bits ('0') followed by a closing one bit ('1') (see Table 1). For example, if a sample happens to have a value of 55, Ψ1 encodes it using 56 bits, i.e., 55 zeros followed by a one. Hence, the encoding algorithm is simple. The reconstruction is obviously simple too: a decoder just counts the number of zero bits until a one appears. The counting result is the sample value.
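A minimal sketch of Ψ1 (and of the Ψ1,k split discussed later in this section) follows. It uses Python strings of '0'/'1' characters for clarity instead of a packed bitstream, which is our simplification, not the paper's implementation.

```python
def psi1_encode(samples):
    """Code PSI-1: each sample value x becomes x zero bits plus a closing '1'."""
    return "".join("0" * x + "1" for x in samples)

def psi1_decode(bits, count):
    """Count zero bits until a '1' appears; the count is the sample value."""
    samples, run = [], 0
    for b in bits:
        if b == "1":
            samples.append(run)
            run = 0
            if len(samples) == count:
                break
        else:
            run += 1
    return samples

def psi1k_encode(samples, k):
    """PSI-1,k (k >= 1): the k LSBs of each sample are sent uncoded;
    the MSBs are PSI-1 coded."""
    msbs = psi1_encode(x >> k for x in samples)
    lsbs = "".join(format(x & ((1 << k) - 1), "0{}b".format(k)) for x in samples)
    return msbs + lsbs

def psi1k_decode(bits, count, k):
    """Recover the MSBs with the PSI-1 decoder, then append the raw k LSBs."""
    msbs = psi1_decode(bits, count)
    used = sum(m + 1 for m in msbs)          # bits consumed by the PSI-1 part
    out = []
    for i, m in enumerate(msbs):
        lsb = int(bits[used + i * k: used + (i + 1) * k], 2)
        out.append((m << k) | lsb)
    return out

block = [0, 2, 1, 0, 3]
bits = psi1_encode(block)
print(bits)                                     # 10010110001
assert len(bits) == sum(x + 1 for x in block)   # total length, Eq. (9)
assert psi1_decode(bits, len(block)) == block
```

Note how a sample value of 55 would indeed cost 56 bits here, matching the example in the text.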
Define now a function x(j) such that a particular data block consists of symbols { s_x(1), s_x(2), ..., s_x(J) }. The total number of bits required by Ψ1 to encode a data block is then (for index j and block size J)

L = sum_{j=1}^{J} (x(j) + 1) (9)
Notice that this code basically allocates more codeword bits to data samples with higher sample values, i.e., if x(j) >= x(l) then l_j >= l_l. This means the code Ψ1 is optimal if the statistical distribution of the data samples is monotonically decreasing, i.e., P(s_x) >= P(s_x') for x <= x'. The average codeword length of code Ψ1 is

R = sum_{i=1}^{q} P(s_i) (x_i + 1) (10)

It has been studied elsewhere [3] that this code is optimal for a (monotonically decreasing distribution) source with a first-order entropy around 2, i.e., 1.5 <= H <= 2.5. It should be clear that in a monotonically decreasing distribution, the probability that a sample has a large value is low. As a result, the probability of a long codeword length in Eq. (9) is low. Thus the code is optimal. We say that this range is the natural entropy range of Ψ1. For sources with entropies outside that range, there are several variations of Ψ1.

Table 1 A codeword table of code Ψ1.

Symbols s_i | Sample data x | Codewords w_i | Length l_i
s_1 | 0 | 1 | 1
s_2 | 1 | 01 | 2
s_3 | 2 | 001 | 3
s_4 | 3 | 0001 | 4
s_5 | 4 | 00001 | 5

For example, if H >= 2.5, it is safe to assume that the k LSBs of the sample data d(j) are completely random. In this case there is no need to perform any compression on those LSBs. We can then split d(j) into two portions: k LSBs and (8-k) MSBs. The MSBs are coded using code Ψ1 before being sent to the bitstream, while the LSBs are sent uncoded. A decoder first recovers the MSBs using a Ψ1 decoder, and then concatenates the results with the uncoded LSBs,
resulting in the desired d(j). It can be shown that this approach has a natural entropy range of 1.5 + k <= H <= 2.5 + k. This code is called Ψ1,k.

For the 0.75 <= H <= 1.5 range, we can first use Ψ1, resulting in a bitstream with many '1' bits. To exploit this, we can cascade Ψ1 with another simple code. The simple code first complements the results of Ψ1 (i.e., converting '0' into '1' and vice versa), and groups the resulting bits into binary 3-tuples. It finally codes each binary 3-tuple in such a way that a tuple with many '0' bits uses a short codeword. This code is called Ψ0.

For sources with entropy H <= 0.75, we can cascade the counter coder with yet another coder, such as the ZRL coder described above. A ZRL encoder processes the samples and provides its outputs to a counter coder. In effect, the ZRL coder brings the data entropy into the counter code's natural range.

4 Compression Performance

To observe the performance of this simple counter coder, we have used the WSQ scheme shown in Figure 1 [5]. Here we used an image sample, Lena.pgm, having a dimension of 512 x 512 pixels at 8 bits per pixel (bpp) grayscale, hence a size of 2,097,152 bits. The scheme first quantizes the image according to a set of prescribed rates: 0.1, 0.2, ..., 0.9, 1, 2, ..., 6, and 7 bpp. The resulting data become the input data to a lossless encoder. In addition to the counter code, we have used a Huffman coder for comparison. The lossless encoder produces compressed bitstreams, and a lossless decoder can then reproduce the output data from the bitstreams.

Given a prescribed rate, the value quantization block produces image data, and the coder then produces the bitstream. We measured the size of the bitstream. The compression ratio (CR) is the ratio between the original image size (in this case 2,097,152 bits) and the bitstream size. We measured the CR for each prescribed rate for the counter coder, and measured the corresponding performance of the Huffman coder for comparison. We expect the resulting CRs to approximate the prescribed CR. Table 2 and Figure 2 show the results of the coders for the various prescribed rates.
Overall, the performance of the counter coder is comparable to that of the Huffman coder. At very low rates, the Huffman coder still outperforms the counter coder. However, in most cases both coders perform better than the expected rates.
We can also measure the entropy of the image data for each prescribed rate, and compare the CR performances relative to that of the entropy. As shown in Figure 2, for high prescribed CR (thus low entropy), both the counter coder and the Huffman coder are able to outperform the entropy. For high entropy (low prescribed CR), the counter coder outperforms the Huffman coder: the counter coder performance is comparable to the entropy, while the Huffman coder underperforms the entropy.

Table 2 A comparison of compression ratio (CR) performance (prescribed rate in bpp; resulting CR; entropy; Huffman and counter coder CRs; and relative performance of Huffman and counter coders over the entropy). [Numeric table entries illegible in the source.]

What is the image quality associated with this CR performance level? For each prescribed rate the scheme produces a bitstream. The decoder can then use the bitstream to produce a reconstructed image. We then compare the quality of the reconstructed image against the original image. One way to measure image quality is through the peak signal-to-noise ratio (PSNR). This measure compares the two images and calculates the energy of the image differences. PSNR is then the ratio of the peak-value energy to the energy of the image differences. A PSNR level of 30 dB or more is considered good quality.
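The PSNR measure described above can be sketched as follows; this is a generic illustration assuming an 8-bit peak value of 255, not the paper's measurement code.

```python
import math

def psnr(original, reconstructed, peak=255):
    """Peak signal-to-noise ratio in dB between two equal-size images,
    given as flat sequences of pixel values."""
    n = len(original)
    mse = sum((a - b) ** 2 for a, b in zip(original, reconstructed)) / n
    if mse == 0:
        return float("inf")      # identical images
    return 10 * math.log10(peak ** 2 / mse)

# Two 4-pixel "images" differing by one gray level everywhere give MSE = 1,
# so PSNR = 10 * log10(255^2) dB, well above the 30 dB good-quality level.
print(round(psnr([10, 20, 30, 40], [11, 21, 31, 41]), 2))  # 48.13
```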
Figure 2 Compression ratio (CR) performances of the counter coder as compared to the Huffman coder, relative to the data entropy (prescribed CR on a logarithmic scale).

Table 3 and Figure 3 show the compression performances of the coders at various levels of image quality, after being combined with the WSQ scheme shown in Figure 1. The WSQ decoder uses the image data from the bitstream to generate a reconstructed image. We can then compare the reconstructed image with the original one, and measure its quality in terms of PSNR. Here, the counter coder outperforms the Huffman coder for high-quality compression. The Huffman coder outperformed the counter coder at lower image-quality levels, although both outperform the entropy. Beyond 37 dB PSNR, the CRs of the coders are basically similar.

Table 3 Compression ratio performance over compression quality (quality as PSNR in dB; CRs of the entropy coder, the Huffman coder, and the counter coder). [Numeric table entries illegible in the source.]
Figure 3 Compression ratio performances in a scalar quantization scheme (CR on a logarithmic scale versus PSNR in dB, for the entropy, Huffman, and counter coders).

5 Conclusions

Simple counter coders can be used effectively to compress images in a WSQ scheme. The counter coders are optimal for monotonically decreasing distributions, and this kind of distribution is natural for the indices of the WSQ. In this paper we have shown that the counter coder performance, especially in CR and PSNR, matches that of the Huffman coder. This results in an almost optimal performance, comparable to Huffman coding, at a fraction of the code complexity.

Acknowledgements

This work was supported in part by Riset Unggulan (RU) ITB.

Nomenclature

CR = Compression ratio
dB = Decibel
SNR = Signal-to-noise ratio
PSNR = Peak SNR

References

[1] Jayant, N., Signal Compression: Technology Targets and Research Directions, IEEE J. Selected Areas in Communications, 10(5), July 1992.
[2] Rice, R.F., Some Practical Universal Noiseless Coding Techniques, Part III, Module PSI14,K+, JPL Publication 91-3, NASA, JPL California Institute of Technology, November 1991.
[3] CCSDS, Image Data Compression, Recommended Standard CCSDS 122.0-B-1, Consultative Committee for Space Data Systems, November 2005 (accessed 4 March 2011).
[4] Langi, A., Review of Data Compression Methods and Algorithms, Technical Report DSP-RTG 2010-9, Institut Teknologi Bandung, September 2010.
[5] Bradley, J.N. & Brislawn, C.M., The Wavelet/Scalar Quantization Compression Standard for Digital Fingerprint Images, Proc. IEEE Int. Symp. Circuits and Systems, London, May 31-June 2, 1994.
[6] Langi, A.Z.R., An FPGA Implementation of a Simple Lossless Data Compression Coprocessor, Proc. International Conference on Electrical Engineering and Informatics (ICEEI 2011), Bandung, July 2011.
More informationLecture 4: Universal Hash Functions/Streaming Cont d
CSE 5: Desgn and Analyss of Algorthms I Sprng 06 Lecture 4: Unversal Hash Functons/Streamng Cont d Lecturer: Shayan Oves Gharan Aprl 6th Scrbe: Jacob Schreber Dsclamer: These notes have not been subjected
More informationAn Improved multiple fractal algorithm
Advanced Scence and Technology Letters Vol.31 (MulGraB 213), pp.184-188 http://dx.do.org/1.1427/astl.213.31.41 An Improved multple fractal algorthm Yun Ln, Xaochu Xu, Jnfeng Pang College of Informaton
More informationAverage Decision Threshold of CA CFAR and excision CFAR Detectors in the Presence of Strong Pulse Jamming 1
Average Decson hreshold of CA CFAR and excson CFAR Detectors n the Presence of Strong Pulse Jammng Ivan G. Garvanov and Chrsto A. Kabachev Insttute of Informaton echnologes Bulgaran Academy of Scences
More informationComparison of Regression Lines
STATGRAPHICS Rev. 9/13/2013 Comparson of Regresson Lnes Summary... 1 Data Input... 3 Analyss Summary... 4 Plot of Ftted Model... 6 Condtonal Sums of Squares... 6 Analyss Optons... 7 Forecasts... 8 Confdence
More informationA Robust Method for Calculating the Correlation Coefficient
A Robust Method for Calculatng the Correlaton Coeffcent E.B. Nven and C. V. Deutsch Relatonshps between prmary and secondary data are frequently quantfed usng the correlaton coeffcent; however, the tradtonal
More informationComparison of the Population Variance Estimators. of 2-Parameter Exponential Distribution Based on. Multiple Criteria Decision Making Method
Appled Mathematcal Scences, Vol. 7, 0, no. 47, 07-0 HIARI Ltd, www.m-hkar.com Comparson of the Populaton Varance Estmators of -Parameter Exponental Dstrbuton Based on Multple Crtera Decson Makng Method
More informationGEMINI GEneric Multimedia INdexIng
GEMINI GEnerc Multmeda INdexIng Last lecture, LSH http://www.mt.edu/~andon/lsh/ Is there another possble soluton? Do we need to perform ANN? 1 GEnerc Multmeda INdexIng dstance measure Sub-pattern Match
More informationThe Synchronous 8th-Order Differential Attack on 12 Rounds of the Block Cipher HyRAL
The Synchronous 8th-Order Dfferental Attack on 12 Rounds of the Block Cpher HyRAL Yasutaka Igarash, Sej Fukushma, and Tomohro Hachno Kagoshma Unversty, Kagoshma, Japan Emal: {garash, fukushma, hachno}@eee.kagoshma-u.ac.jp
More informationNotes on Frequency Estimation in Data Streams
Notes on Frequency Estmaton n Data Streams In (one of) the data streamng model(s), the data s a sequence of arrvals a 1, a 2,..., a m of the form a j = (, v) where s the dentty of the tem and belongs to
More informationTopic 23 - Randomized Complete Block Designs (RCBD)
Topc 3 ANOVA (III) 3-1 Topc 3 - Randomzed Complete Block Desgns (RCBD) Defn: A Randomzed Complete Block Desgn s a varant of the completely randomzed desgn (CRD) that we recently learned. In ths desgn,
More informationMathematical Models for Information Sources A Logarithmic i Measure of Information
Introducton to Informaton Theory Wreless Informaton Transmsson System Lab. Insttute of Communcatons Engneerng g Natonal Sun Yat-sen Unversty Table of Contents Mathematcal Models for Informaton Sources
More informationTOPICS MULTIPLIERLESS FILTER DESIGN ELEMENTARY SCHOOL ALGORITHM MULTIPLICATION
1 2 MULTIPLIERLESS FILTER DESIGN Realzaton of flters wthout full-fledged multplers Some sldes based on support materal by W. Wolf for hs book Modern VLSI Desgn, 3 rd edton. Partly based on followng papers:
More informationComparison of Wiener Filter solution by SVD with decompositions QR and QLP
Proceedngs of the 6th WSEAS Int Conf on Artfcal Intellgence, Knowledge Engneerng and Data Bases, Corfu Island, Greece, February 6-9, 007 7 Comparson of Wener Flter soluton by SVD wth decompostons QR and
More informationGlobal Sensitivity. Tuesday 20 th February, 2018
Global Senstvty Tuesday 2 th February, 28 ) Local Senstvty Most senstvty analyses [] are based on local estmates of senstvty, typcally by expandng the response n a Taylor seres about some specfc values
More informationAn Upper Bound on SINR Threshold for Call Admission Control in Multiple-Class CDMA Systems with Imperfect Power-Control
An Upper Bound on SINR Threshold for Call Admsson Control n Multple-Class CDMA Systems wth Imperfect ower-control Mahmoud El-Sayes MacDonald, Dettwler and Assocates td. (MDA) Toronto, Canada melsayes@hotmal.com
More informationGrover s Algorithm + Quantum Zeno Effect + Vaidman
Grover s Algorthm + Quantum Zeno Effect + Vadman CS 294-2 Bomb 10/12/04 Fall 2004 Lecture 11 Grover s algorthm Recall that Grover s algorthm for searchng over a space of sze wors as follows: consder the
More informationChapter 13: Multiple Regression
Chapter 13: Multple Regresson 13.1 Developng the multple-regresson Model The general model can be descrbed as: It smplfes for two ndependent varables: The sample ft parameter b 0, b 1, and b are used to
More informationThis column is a continuation of our previous column
Comparson of Goodness of Ft Statstcs for Lnear Regresson, Part II The authors contnue ther dscusson of the correlaton coeffcent n developng a calbraton for quanttatve analyss. Jerome Workman Jr. and Howard
More informationMDL-Based Unsupervised Attribute Ranking
MDL-Based Unsupervsed Attrbute Rankng Zdravko Markov Computer Scence Department Central Connectcut State Unversty New Brtan, CT 06050, USA http://www.cs.ccsu.edu/~markov/ markovz@ccsu.edu MDL-Based Unsupervsed
More informationA New Design of Multiplier using Modified Booth Algorithm and Reversible Gate Logic
Internatonal Journal of Computer Applcatons Technology and Research A New Desgn of Multpler usng Modfed Booth Algorthm and Reversble Gate Logc K.Nagarjun Department of ECE Vardhaman College of Engneerng,
More informationMultigradient for Neural Networks for Equalizers 1
Multgradent for Neural Netorks for Equalzers 1 Chulhee ee, Jnook Go and Heeyoung Km Department of Electrcal and Electronc Engneerng Yonse Unversty 134 Shnchon-Dong, Seodaemun-Ku, Seoul 1-749, Korea ABSTRACT
More informationPsychology 282 Lecture #24 Outline Regression Diagnostics: Outliers
Psychology 282 Lecture #24 Outlne Regresson Dagnostcs: Outlers In an earler lecture we studed the statstcal assumptons underlyng the regresson model, ncludng the followng ponts: Formal statement of assumptons.
More information18.1 Introduction and Recap
CS787: Advanced Algorthms Scrbe: Pryananda Shenoy and Shjn Kong Lecturer: Shuch Chawla Topc: Streamng Algorthmscontnued) Date: 0/26/2007 We contnue talng about streamng algorthms n ths lecture, ncludng
More informationDETERMINATION OF TEMPERATURE DISTRIBUTION FOR ANNULAR FINS WITH TEMPERATURE DEPENDENT THERMAL CONDUCTIVITY BY HPM
Ganj, Z. Z., et al.: Determnaton of Temperature Dstrbuton for S111 DETERMINATION OF TEMPERATURE DISTRIBUTION FOR ANNULAR FINS WITH TEMPERATURE DEPENDENT THERMAL CONDUCTIVITY BY HPM by Davood Domr GANJI
More informationA new construction of 3-separable matrices via an improved decoding of Macula s construction
Dscrete Optmzaton 5 008 700 704 Contents lsts avalable at ScenceDrect Dscrete Optmzaton journal homepage: wwwelsevercom/locate/dsopt A new constructon of 3-separable matrces va an mproved decodng of Macula
More informationThe Digital Fingerprinting Method for Static Images Based on Weighted Hamming Metric and on Weighted Container Model
Journal of Computer and Communcatons, 04,, -6 Publshed Onlne July 04 n ScRes. http://www.scrp.org/ournal/cc http://dx.do.org/0.436/cc.04.906 The Dgtal Fngerprntng Method for Statc Images Based on Weghted
More informationLow Complexity Soft-Input Soft-Output Hamming Decoder
Low Complexty Soft-Input Soft-Output Hammng Der Benjamn Müller, Martn Holters, Udo Zölzer Helmut Schmdt Unversty Unversty of the Federal Armed Forces Department of Sgnal Processng and Communcatons Holstenhofweg
More informationLINEAR REGRESSION ANALYSIS. MODULE IX Lecture Multicollinearity
LINEAR REGRESSION ANALYSIS MODULE IX Lecture - 30 Multcollnearty Dr. Shalabh Department of Mathematcs and Statstcs Indan Insttute of Technology Kanpur 2 Remedes for multcollnearty Varous technques have
More informationDigital Modems. Lecture 2
Dgtal Modems Lecture Revew We have shown that both Bayes and eyman/pearson crtera are based on the Lkelhood Rato Test (LRT) Λ ( r ) < > η Λ r s called observaton transformaton or suffcent statstc The crtera
More informationMarkov Chain Monte Carlo (MCMC), Gibbs Sampling, Metropolis Algorithms, and Simulated Annealing Bioinformatics Course Supplement
Markov Chan Monte Carlo MCMC, Gbbs Samplng, Metropols Algorthms, and Smulated Annealng 2001 Bonformatcs Course Supplement SNU Bontellgence Lab http://bsnuackr/ Outlne! Markov Chan Monte Carlo MCMC! Metropols-Hastngs
More informationSuppose that there s a measured wndow of data fff k () ; :::; ff k g of a sze w, measured dscretely wth varable dscretzaton step. It s convenent to pl
RECURSIVE SPLINE INTERPOLATION METHOD FOR REAL TIME ENGINE CONTROL APPLICATIONS A. Stotsky Volvo Car Corporaton Engne Desgn and Development Dept. 97542, HA1N, SE- 405 31 Gothenburg Sweden. Emal: astotsky@volvocars.com
More informationA Fast Fractal Image Compression Algorithm Using Predefined Values for Contrast Scaling
Proceedngs of the World Congress on Engneerng and Computer Scence 007 WCECS 007, October 4-6, 007, San Francsco, USA A Fast Fractal Image Compresson Algorthm Usng Predefned Values for Contrast Scalng H.
More informationQuantum and Classical Information Theory with Disentropy
Quantum and Classcal Informaton Theory wth Dsentropy R V Ramos rubensramos@ufcbr Lab of Quantum Informaton Technology, Department of Telenformatc Engneerng Federal Unversty of Ceara - DETI/UFC, CP 6007
More informationComposite Hypotheses testing
Composte ypotheses testng In many hypothess testng problems there are many possble dstrbutons that can occur under each of the hypotheses. The output of the source s a set of parameters (ponts n a parameter
More informationDepartment of Statistics University of Toronto STA305H1S / 1004 HS Design and Analysis of Experiments Term Test - Winter Solution
Department of Statstcs Unversty of Toronto STA35HS / HS Desgn and Analyss of Experments Term Test - Wnter - Soluton February, Last Name: Frst Name: Student Number: Instructons: Tme: hours. Ads: a non-programmable
More informationThe Gaussian classifier. Nuno Vasconcelos ECE Department, UCSD
he Gaussan classfer Nuno Vasconcelos ECE Department, UCSD Bayesan decson theory recall that we have state of the world X observatons g decson functon L[g,y] loss of predctng y wth g Bayes decson rule s
More informationMMA and GCMMA two methods for nonlinear optimization
MMA and GCMMA two methods for nonlnear optmzaton Krster Svanberg Optmzaton and Systems Theory, KTH, Stockholm, Sweden. krlle@math.kth.se Ths note descrbes the algorthms used n the author s 2007 mplementatons
More informationLinear Approximation with Regularization and Moving Least Squares
Lnear Approxmaton wth Regularzaton and Movng Least Squares Igor Grešovn May 007 Revson 4.6 (Revson : March 004). 5 4 3 0.5 3 3.5 4 Contents: Lnear Fttng...4. Weghted Least Squares n Functon Approxmaton...
More informationConsider the following passband digital communication system model. c t. modulator. t r a n s m i t t e r. signal decoder.
PASSBAND DIGITAL MODULATION TECHNIQUES Consder the followng passband dgtal communcaton system model. cos( ω + φ ) c t message source m sgnal encoder s modulator s () t communcaton xt () channel t r a n
More informationLecture 12: Classification
Lecture : Classfcaton g Dscrmnant functons g The optmal Bayes classfer g Quadratc classfers g Eucldean and Mahalanobs metrcs g K Nearest Neghbor Classfers Intellgent Sensor Systems Rcardo Guterrez-Osuna
More informationCSci 6974 and ECSE 6966 Math. Tech. for Vision, Graphics and Robotics Lecture 21, April 17, 2006 Estimating A Plane Homography
CSc 6974 and ECSE 6966 Math. Tech. for Vson, Graphcs and Robotcs Lecture 21, Aprl 17, 2006 Estmatng A Plane Homography Overvew We contnue wth a dscusson of the major ssues, usng estmaton of plane projectve
More informationNUMERICAL DIFFERENTIATION
NUMERICAL DIFFERENTIATION 1 Introducton Dfferentaton s a method to compute the rate at whch a dependent output y changes wth respect to the change n the ndependent nput x. Ths rate of change s called the
More informationUsing the estimated penetrances to determine the range of the underlying genetic model in casecontrol
Georgetown Unversty From the SelectedWorks of Mark J Meyer 8 Usng the estmated penetrances to determne the range of the underlyng genetc model n casecontrol desgn Mark J Meyer Neal Jeffres Gang Zheng Avalable
More informationP R. Lecture 4. Theory and Applications of Pattern Recognition. Dept. of Electrical and Computer Engineering /
Theory and Applcatons of Pattern Recognton 003, Rob Polkar, Rowan Unversty, Glassboro, NJ Lecture 4 Bayes Classfcaton Rule Dept. of Electrcal and Computer Engneerng 0909.40.0 / 0909.504.04 Theory & Applcatons
More informationClock-Gating and Its Application to Low Power Design of Sequential Circuits
Clock-Gatng and Its Applcaton to Low Power Desgn of Sequental Crcuts ng WU Department of Electrcal Engneerng-Systems, Unversty of Southern Calforna Los Angeles, CA 989, USA, Phone: (23)74-448 Massoud PEDRAM
More informationStatistics for Economics & Business
Statstcs for Economcs & Busness Smple Lnear Regresson Learnng Objectves In ths chapter, you learn: How to use regresson analyss to predct the value of a dependent varable based on an ndependent varable
More informationBoostrapaggregating (Bagging)
Boostrapaggregatng (Baggng) An ensemble meta-algorthm desgned to mprove the stablty and accuracy of machne learnng algorthms Can be used n both regresson and classfcaton Reduces varance and helps to avod
More informationUsing T.O.M to Estimate Parameter of distributions that have not Single Exponential Family
IOSR Journal of Mathematcs IOSR-JM) ISSN: 2278-5728. Volume 3, Issue 3 Sep-Oct. 202), PP 44-48 www.osrjournals.org Usng T.O.M to Estmate Parameter of dstrbutons that have not Sngle Exponental Famly Jubran
More informationPower law and dimension of the maximum value for belief distribution with the max Deng entropy
Power law and dmenson of the maxmum value for belef dstrbuton wth the max Deng entropy Bngy Kang a, a College of Informaton Engneerng, Northwest A&F Unversty, Yanglng, Shaanx, 712100, Chna. Abstract Deng
More information