Recent Advanced Statistical Background Modeling for Foreground Detection - A Systematic Survey


Recent Advanced Statistical Background Modeling for Foreground Detection - A Systematic Survey. Thierry Bouwmans. To cite this version: Thierry Bouwmans. Recent Advanced Statistical Background Modeling for Foreground Detection - A Systematic Survey. Recent Patents on Computer Science, Bentham Science Publishers, 2011, 4 (3), pp. 147-176. <hal-00644746>

HAL Id: hal-00644746
https://hal.archives-ouvertes.fr/hal-00644746
Submitted on 9 Nov 2015

HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.

Recent Advanced Statistical Background Modeling for Foreground Detection - A Systematic Survey

Thierry Bouwmans
Laboratoire MIA, Université de La Rochelle, Avenue M. Crépeau, 17000 La Rochelle, France
Tel Phone: (33) 05.46.45.7.0
Email address: bouwman@univ-lr.fr
Fax number: (33) 05.46.45.8.40
Short Running Title: Advanced Background Modeling: A Systematic Survey

Abstract: Background modeling is currently used to detect moving objects in video acquired from static cameras. Numerous statistical methods have been developed over the recent years. The aim of this paper is firstly to provide an extended and updated survey of the recent researches and patents which concern statistical background modeling and secondly to achieve a comparative evaluation. For this, we firstly classified the statistical methods in terms of category. Then, the original methods are reminded and discussed following the challenges met in video sequences. We classified their respective improvements in terms of the strategies used. Furthermore, we discussed them in terms of the critical situations they claim to handle. Finally, we conclude with several promising directions for future research. The survey also discusses relevant patents.

Keywords: Background modeling, Kernel Density Estimation, Mixture of Gaussians, Single Gaussian, Subspace Learning.

1. INTRODUCTION

Different applications such as video surveillance [1], optical motion capture [2-4] and multimedia [5-7] need firstly to model the background and then to detect the moving objects. One way to obtain the background is to acquire a background image which doesn't include any moving object, but in some environments the background is not available. Furthermore, it can always be changed under critical situations like illumination changes, or objects being introduced or removed from the scene. To take into account these problems, many background modeling methods have been developed [8, 9] and these methods can be classified in the following categories:

- Basic Background Modeling: In this case, the background is modeled using the average [10], the median [11] or the histogram analysis over time [12].
- Statistical Background Modeling: The background is modeled using a single Gaussian [13], a Mixture of Gaussians [14] or a Kernel Density Estimation [15]. Statistical variables are used to classify the pixels as foreground or background.
- Fuzzy Background Modeling: The background is modeled using a fuzzy running average [16] or a Type-2 fuzzy mixture of Gaussians [17]. Foreground detection is made using the Sugeno integral [18] or the Choquet integral [19]. The foreground detection can also be performed by fuzzy inferences [335].
- Background Clustering: The background model supposes that each pixel in the frame can be represented temporally by clusters. Incoming pixels are matched against the corresponding cluster group and are classified according to whether the matching cluster is considered part of the background. The clustering approach consists in using the K-means algorithm [36] or a Codebook [36].
- Neural Network Background Modeling: The background is represented by means of the weights of a neural network suitably trained on N clean frames. The network learns how to classify each pixel as background or foreground [33][333].
- Wavelet Background Modeling: The background model is defined in the temporal domain, utilizing the coefficients of the discrete wavelet transform (DWT) [336].
- Background Estimation: The background is estimated using a filter. Any pixel of the current image that deviates significantly from its predicted value is declared foreground. This filter may be a Wiener filter [20], a Kalman filter [21] or a Tchebychev filter [22].

*Address correspondence to these authors at the Laboratory de Mathematics Image and Applications (LMIA), Pôle Science, Université de La Rochelle, 17000 La Rochelle, France; E-mail: bouwman@univ-lr.fr

Table 1 shows an overview of this classification. The first column indicates the category and the second column the name of each method. The number of papers counted for each method is indicated in parentheses. The third column gives the name of the authors who made the main publication for the corresponding method and the date of the related publication. Other classifications can be found in terms of prediction [23], recursion [2], adaptation [24], or modality [25].

Table 1. Background Modeling Methods: An Overview

Category / Methods / Authors - Dates
- Basic Background Modeling:
  Mean — Lee et al. (2002) [10]
  Median (3) — Mac Farlane et al. (1995) [11]
  Histogram over time (3) — Zheng et al. (2006) [12]
- Statistical Background Modeling:
  Single Gaussian (33) — Wren et al. (1997) [13]
  Mixture of Gaussians (217) — Stauffer and Grimson (1999) [14]
  Kernel Density Estimation (5) — Elgammal et al. (2000) [15]
- Fuzzy Background Modeling:
  Fuzzy Running Average (5) — Sigari et al. (2008) [16]
  Type-2 Fuzzy Mixture of Gaussians (3) — El Baf et al. (2008) [17]
- Background Clustering:
  K-Means — Butler et al. (2003) [36]
  Codebook (35) — Kim et al. (2005) [36]
- Neural Network Background Modeling:
  General Regression Neural Network — Culibrk et al. (2006) [33]
  Self Organizing Neural Network (9) — Maddalena and Petrosino (2007) [333]
- Wavelet Background Modeling:
  Discrete Wavelet Transform — Biswas et al. [336]
- Background Estimation:
  Wiener Filter — Toyama et al. (1999) [20]
  Kalman Filter (9) — Messelodi et al. (2005) [21]
  Tchebychev Filter (3) — Chang et al. (2004) [22]

All these modeling approaches are used in the background subtraction context, which presents the following steps and issues: background modeling, background initialization, background maintenance, foreground detection, choice of the feature size (a pixel, a block or a cluster), and choice of the feature type (color features, edge features, stereo features, motion features and texture features). When developing a background subtraction method, all these choices determine the robustness of the method to the critical situations met in video sequences [5, 20]: Noisy image due to a poor quality image source (NI), Camera jitter (CJ), Camera automatic adjustments (CA), Time of the day (TD), Light switch (LS), Bootstrapping (B), Camouflage (C), Foreground aperture (FA), Moved background objects (MO), Inserted background (IB), Waking foreground object (WFO), Sleeping foreground object (SFO) and Shadows (S). The main difficulties come from dynamic backgrounds and illumination changes:
- Dynamic backgrounds often appear in outdoor scenes. Fig. (1) presents four typical examples: camera jitter, waving trees, water rippling and water surface.
The left column shows the original images and the right one the foreground mask obtained by the MOG [14]. In each case, there is a big amount of false detections.
- Illumination changes appear in indoor and outdoor scenes. Fig. (2) shows an indoor scene in which we can observe a gradual illumination change. This causes false detections in several parts of the foreground mask obtained by the MOG [14]. Fig. (3) illustrates the case of a sudden illumination change due to a light on/off. Every pixel in the images is affected by this change, which generates a large amount of false detections (see Fig. 3c).

Fig. (1). The first column presents original scenes containing dynamic backgrounds. The second column shows the foreground masks obtained by the MOG [14]. a) Sequence Camera jitter from [29] b) Sequence Campus from [34] c) Sequence Water rippling from [34] d) Sequence Water surface from [34]

Fig. (2). From left to right: The first image presents an indoor scene with low illumination. The second image presents the same scene with a moderate illumination, while the third image shows the scene with a high illumination. The fourth image shows the foreground mask obtained with the MOG [14]. This sequence, called Time of Day, comes from the Wallflower dataset [20]. a) Low b) Moderate c) High d) Foreground mask

Fig. (3). From left to right: The first image presents an indoor scene with the light on. The second image shows the same scene with the light off. The third image shows the foreground mask obtained with the MOG [14]. This sequence, called Light Switch, comes from the Wallflower dataset [20]. a) Light-on b) Light-off c) Foreground mask

Different benchmark datasets are available [26-31] to evaluate the robustness of background subtraction methods against these critical situations, which have different spatial and temporal characteristics that must be taken into account to obtain a good segmentation. This challenge must be met in the context of real-time applications which run on a common PC, so two constraints are introduced: as little computation time (CT) and as little memory requirement (MR) as possible. The performance is evaluated using the ROC analysis [32], the PDR analysis [33] or the similarity measure [34]. Other performance evaluation methods are proposed and compared in [35, 36]. Reading the literature, two main remarks can be made: (1) The most frequently used models are the statistical ones, due to their robustness to the critical situations. (2) There are many recent developments regarding statistical models, as can be seen for the MOG model with acronyms like GMM [37], TLGMM [38], STGMM [39], SKMGM [40], TAPPMOG [41] and S-TAPPMOG [42]. The objective is then to categorize the statistical models in one paper and classify their recent improvements following the strategies used. We also discuss them following the challenges met in video sequences and evaluate some of them in terms of false alarms using the Wallflower dataset [20]. This paper is an extended and updated version of the surveys on Mixture of Gaussians for background modeling [48] and Subspace Learning for background modeling [334].
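As a side note on the evaluation measures mentioned above, both the ROC quantities and the similarity measure can be computed directly from a ground-truth mask and a detected foreground mask. The sketch below is our own illustration (function name and array layout are not from the survey), assuming the similarity measure is the usual TP / (TP + FP + FN) ratio:

```python
import numpy as np

def mask_scores(gt, pred):
    """Compare a predicted foreground mask against ground truth.

    gt, pred: boolean arrays of the same shape (True = foreground).
    Returns (tpr, fpr, similarity): detection rate and false alarm rate
    for ROC analysis, and the Jaccard-style similarity measure.
    """
    gt = np.asarray(gt, bool)
    pred = np.asarray(pred, bool)
    tp = np.sum(gt & pred)       # foreground pixels correctly detected
    fp = np.sum(~gt & pred)      # background pixels flagged as foreground
    fn = np.sum(gt & ~pred)      # missed foreground pixels
    tn = np.sum(~gt & ~pred)     # background pixels correctly rejected
    tpr = tp / max(tp + fn, 1)
    fpr = fp / max(fp + tn, 1)
    sim = tp / max(tp + fp + fn, 1)
    return tpr, fpr, sim
```

Sweeping a detection threshold and plotting (fpr, tpr) pairs gives the ROC curve used in such comparisons.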
The rest of this paper is organized as follows: In Section 2, we firstly provide a background on the statistical background models and a classification of these models. In Section 3, we survey the first generation models and their respective improvements. In Section 4, we classify the second generation models. In Section 5, the third generation models are reviewed. In Section 6, we firstly investigate the performance in terms of robustness on dynamic backgrounds and illumination changes, and secondly in terms of per-pixel complexity. Then, a comparative evaluation is provided in Section 7. Finally, conclusions and future developments are given.

2. STATISTICAL BACKGROUND MODELING: AN OVERVIEW

Statistical tools provide a good framework to model the background, and so many methods have been developed. We classified them in terms of category as follows:
- First category: The first way to represent the background statistically is to assume that the history over time of the intensity values of a pixel can be modeled by a single Gaussian (SG) [13]. However, a unimodal model cannot handle dynamic backgrounds when there are waving trees, water rippling or moving algae. To solve this problem, the Mixture of Gaussians (MOG) has been used to model dynamic backgrounds [14]. This model has some disadvantages. Backgrounds having fast variations cannot be accurately modeled with just a few Gaussians (usually 3 to 5), causing problems for sensitive detection. So, a non-parametric technique was developed for estimating background probabilities at each pixel from many recent samples over time using Kernel Density Estimation (KDE) [15], but it is time consuming. In [65], Subspace Learning using Principal Component Analysis (SL-PCA) is applied on N images to construct a background model, which is represented by the mean image and the projection matrix comprising the first p significant eigenvectors of PCA. In this way, foreground segmentation is accomplished by computing the difference between the input image and its reconstruction.
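The SL-PCA idea (mean image plus the first p eigenvectors, with foreground given by the reconstruction error) can be sketched in a few lines of NumPy. This is our own illustrative sketch under the assumptions just stated, not code from [65]; frames are flattened grayscale images:

```python
import numpy as np

def pca_background_model(frames, p):
    """Fit an eigenbackground from N training frames (SL-PCA sketch).

    frames: array of shape (N, H*W), one flattened frame per row.
    Returns (mean, basis) where basis holds the first p eigenvectors.
    """
    mean = frames.mean(axis=0)
    X = frames - mean
    # SVD of the centered data yields the principal directions.
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    return mean, vt[:p]

def foreground_mask(frame, mean, basis, thresh):
    """Project a frame onto the eigenbackground, threshold the residual."""
    centered = frame - mean
    recon = mean + basis.T @ (basis @ centered)   # reconstruction
    return np.abs(frame - recon) > thresh         # large residual = foreground
```

Moving objects do not lie in the background subspace spanned by the eigenvectors, so their reconstruction error is large, which is exactly the segmentation rule described above.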

Table 2. Advanced Statistical Background Modeling: An Overview

Category / Methods / Authors - Dates
- First Category:
  Single Gaussian (SG) (33) — Wren et al. (1997) [13]
  Mixture of Gaussians (MOG) (217) — Stauffer and Grimson (1999) [14]
  Kernel Density Estimation (KDE) (55) — Elgammal et al. (2000) [15]
  Principal Components Analysis (SL-PCA) (5) — Oliver et al. (1999) [65]
- Second Category:
  Support Vector Machine (SVM) (9) — Lin et al. (2002) [80]
  Support Vector Regression (SVR) (3) — Wang et al. (2006) [83]
  Support Vector Data Description (SVDD) (6) — Tavakkoli et al. (2006) [86]
- Third Category:
  Single General Gaussian (SGG) (3) — Kim et al. (2007) [90]
  Mixture of General Gaussians (MOGG) (3) — Allili et al. (2007) [94]
  Independent Component Analysis (SL-ICA) (3) — Yamazaki et al. (2006) [98]
  Incremental Non Negative Matrix Factorization (SL-INMF) (3) — Bucak et al. (2007) [102]
  Incremental Rank-(R1,R2,R3) Tensor (SL-IRT) — Li et al. (2008) [104]

- Second category: This category uses support vector models. The objective differs according to the model used. Lin et al. [80] used an SVM algorithm to initialize the background in outdoor scenes. Wang et al. [83, 84] modeled the background by using SVR in the case of traffic surveillance scenes where illumination changes (TD) appear. Tavakkoli et al. [86-89] applied SVDD to deal with dynamic backgrounds (MB).
- Third category: These models generalize the first generation models, as the single general Gaussian (SGG) [90-92], the mixture of general Gaussians (MOGG) [93-95] and subspace learning using Independent Component Analysis (SL-ICA) [98, 100], Incremental Non-negative Matrix Factorization (SL-INMF) [102, 103] or Incremental Rank-(R1,R2,R3) Tensor (SL-IRT) [104, 105]. The single general Gaussian (SGG) alleviates the constraint of a strict Gaussian and then shows better performance in the case of illumination changes (TD) and shadows (S). The MOGG has been developed to be more robust to dynamic backgrounds (MB). Subspace learning methods are more robust to illumination changes (LS).

Table 2 shows an overview of the statistical background modeling. The first column indicates the generation and the second column the name of each method.
The corresponding acronym is indicated in the first parenthesis and the number of papers counted for each method in the second parenthesis. The third column gives the name of the authors who made the main publication for the corresponding method and the date of the related publication. We can see that the MOG, with 217 papers, is the most modified and improved because it is the most used, due to its good compromise regarding robustness. In the following sections, we remind the original methods for each generation and we classify their related improvements in the following way: intrinsic improvements, which concern the modifications made in the initialization, the maintenance and the foreground detection steps, and extrinsic improvements, which consist in using external tools to improve the results.

3. FIRST CATEGORY

3.1 Single Gaussian (SG)

Wren et al. [13] proposed to model the background independently at each pixel location (i,j). The model is based on ideally fitting a Gaussian probability density function on the last n pixel values. In order to avoid fitting the pdf from scratch at each new frame time t+1, the mean and the variance are updated as follows:

µ_{t+1} = (1 - α) µ_t + α X_{t+1}
σ²_{t+1} = (1 - α) σ²_t + α (X_{t+1} - µ_{t+1})(X_{t+1} - µ_{t+1})^T

where X_{t+1} is the pixel's current value, µ_t is the previous average, σ²_t is the previous variance and α is the learning rate. The foreground detection is made as follows: if |X_{t+1} - µ_t| < T, the pixel is classified as background; otherwise, the pixel is classified as foreground.

Improvements: Medioni et al. [43] operated in the Hue-Saturation-Value (HSV) color space instead of the RGB one. The advantage is that the HSV color model is more robust to gradual illumination changes (TD) because it separates the intensity and chromatic information. Furthermore, HSV permits to partially eliminate camouflage. Zhao et al. [44] used HSV too, remarking that the respective distributions of H and S naturally vary a lot and that the distribution of V is the most stable. So, the components H and S are only used when they are stable. Results [44] show better performance in the presence of gradual illumination changes (TD) and shadows (S).

Discussion: The single Gaussian (SG) is suited for indoor scenes where there are moderate illumination changes.
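The SG recursion above is cheap enough to run per pixel at frame rate. A minimal sketch of the update and the threshold test (our own illustration of the equations above, not code from [13]; it works for a scalar intensity or an RGB vector alike):

```python
import numpy as np

def sg_update(mean, var, x, alpha=0.05):
    """One single-Gaussian maintenance step for a pixel.

    Blends the new sample x into the running mean and variance with
    learning rate alpha, as in the recursive update rules above.
    """
    new_mean = (1 - alpha) * mean + alpha * x
    new_var = (1 - alpha) * var + alpha * (x - new_mean) ** 2
    return new_mean, new_var

def sg_is_background(mean, x, thresh):
    """Classify x as background if it stays within thresh of the mean."""
    return bool(np.max(np.abs(x - mean)) < thresh)
```

In practice the test is often normalized by the standard deviation rather than using a fixed threshold T; the fixed-threshold form is kept here to match the text.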

3.2 Mixture of Gaussians (MOG)

In the context of a traffic surveillance system, Friedman and Russell [45] proposed to model each background pixel using a mixture of three Gaussians corresponding to road, vehicle and shadows. This model is initialized using an EM algorithm. Then, the Gaussians are manually labeled in a heuristic manner as follows: the darkest component is labeled as shadow; in the remaining two components, the one with the largest variance is labeled as vehicle and the other one as road. This remains fixed for the whole process, giving a lack of adaptation to changes over time. For the foreground detection, each pixel is compared with each Gaussian and is classified according to the corresponding Gaussian. The maintenance is made using an incremental EM algorithm for real-time consideration. Stauffer and Grimson [14] generalized this idea by modeling the recent history of the color features of each pixel {X_1, ..., X_t} by a mixture of K Gaussians. We remind the algorithm below.

Principle

First, each pixel is characterized by its intensity in the RGB color space. Then, the probability of observing the current pixel value in the multidimensional case is given by the following formula:

P(X_t) = Σ_{i=1..K} ω_{i,t} · η(X_t, µ_{i,t}, Σ_{i,t})    (1)

where K is the number of distributions, ω_{i,t} is a weight associated to the i-th Gaussian at time t with mean µ_{i,t} and covariance matrix Σ_{i,t}, and η is a Gaussian probability density function:

η(X_t, µ, Σ) = (1 / ((2π)^{n/2} |Σ|^{1/2})) exp(-(1/2) (X_t - µ)^T Σ^{-1} (X_t - µ))    (2)

For computational reasons, Stauffer and Grimson [14] assumed that the RGB color components are independent and have the same variances. So, the covariance matrix is of the form:

Σ_{i,t} = σ²_{i,t} I    (3)

So, each pixel is characterized by a mixture of K Gaussians. Once the background model is defined, the different parameters of the mixture of Gaussians must be initialized. The parameters of the MOG model are the number of Gaussians K, the weight ω_{i,t} associated to the i-th Gaussian at time t, the mean µ_{i,t} and the covariance matrix Σ_{i,t}.

Remarks:
- K determines the multimodality of the background and is bounded by the available memory and computational power. Stauffer and Grimson [14] proposed to set K from 3 to 5.
- The initialization of the weights, the means and the covariance matrices can be made using an EM algorithm.
Stauffer and Grimson [14] used the K-means algorithm for real-time consideration. Once the parameter initialization is made, a first foreground detection can be made and then the parameters are updated. Firstly, Stauffer and Grimson [14] used as criterion the ratio r_j = ω_j / σ_j and ordered the K Gaussians following this ratio. This ordering supposes that a background pixel corresponds to a high weight with a weak variance, due to the fact that the background is more present than moving objects and that its value is practically constant. The first B Gaussian distributions whose cumulative weight exceeds a certain threshold T are retained as the background distribution:

B = argmin_b (Σ_{i=1..b} ω_{i,t} > T)    (4)

The other distributions are considered to represent a foreground distribution. Then, when the new frame comes in at time t+1, a match test is made for each pixel. A pixel matches a Gaussian distribution if:

sqrt((X_{t+1} - µ_{i,t})^T (X_{t+1} - µ_{i,t})) < k σ_{i,t}    (5)

where k is a constant threshold equal to 2.5. Then, two cases can occur:
- Case 1: A match is found with one of the K Gaussians. In this case, if the Gaussian distribution is identified as a background one, the pixel is classified as background; else, the pixel is classified as foreground.
- Case 2: No match is found with any of the K Gaussians. In this case, the pixel is classified as foreground.

At this step, a binary mask is obtained. Then, to make the next foreground detection, the parameters must be updated. Using the match test (5), two cases can occur as in the foreground detection:

Case 1: A match is found with one of the K Gaussians.
- For the matched component, the update is done as follows:

ω_{i,t+1} = (1 - α) ω_{i,t} + α    (6)

where α is a constant learning rate.

µ_{i,t+1} = (1 - ρ) µ_{i,t} + ρ X_{t+1}    (7)

σ²_{i,t+1} = (1 - ρ) σ²_{i,t} + ρ (X_{t+1} - µ_{i,t+1})^T (X_{t+1} - µ_{i,t+1})    (8)

where ρ = α · η(X_{t+1}, µ_i, Σ_i).

- For the unmatched components, µ and Σ are unchanged; only the weight is replaced by:

ω_{j,t+1} = (1 - α) ω_{j,t}    (9)

Case 2: No match is found with any of the K Gaussians. In this case, the least probable distribution k is replaced with a new one with parameters:

ω_{k,t+1} = Low Prior Weight    (10)
µ_{k,t+1} = X_{t+1}    (11)
σ²_{k,t+1} = Large Initial Variance    (12)

Once the parameter maintenance is made, foreground detection can be made, and so on. Complete studies on the signification and the setting of the parameters can be found in [46, 47][8][89].

Improvements: The original MOG presents several advantages. Indeed, it can work without having to store an important set of input data in the running process. The multimodality of the model allows dealing with multimodal backgrounds and gradual illumination changes. Despite this, the model presents some disadvantages: the number of Gaussians must be predetermined, the need for good initializations, the dependence of the results on the true distribution law, which can be non-Gaussian, and slow recovery from failures. Other limitations are the need for a series of training frames absent of moving objects and the amount of memory required in this step. To alleviate these limitations, numerous improvements (217 papers) have been proposed over the recent years. All the developed improvements can be classified following the strategies used, and a complete survey of over 200 papers in the period 1999-2007 can be found in [48]. We have summarized and updated them in the following classification:

- Intrinsic improvements: These strategies (Table 3) consist in being more rigorous in the statistical sense or in introducing spatial and/or temporal constraints in the different steps of the model. For example, some authors [49-53] propose to determine automatically and dynamically the number of Gaussians to be more robust to dynamic backgrounds. Other approaches use another algorithm for the initialization [54, 55] and allow the presence of foreground objects in the training sequence [56, 57, 58]. For the maintenance, the learning rates are better set [66, 67] or adapted over time [60-62, 68-78].
For the foreground detection, the improvements found in the literature are made using a different measure for the matching test [53, 79-82], using a Pixel Persistence Map (PPM) [75, 76, 83], using the probabilities [84, 85], using a foreground model [61, 63, 86], using some matching tests [39, 60] and using the most dominant background model [87, 88, 89]. For the feature size, block-wise [90, 91] or cluster-wise [92] approaches are more robust than the pixel one. For the feature type, several features are used instead of the RGB space, like different color features [93-99], edge features [100, 101], texture features [102], stereo features [103, 104], spatial features [105], motion features [40] and video features [106]. Zheng et al. [67, 68] combined multiple features such as brightness, chromaticity and neighborhood information. Recent patents concern block-wise approaches [352], texture features [353], motion features [354] and spatial features [355]. An overview of the different features used in the literature is shown in Table 5.
- Extrinsic improvements: Another way to improve the efficiency and robustness of the original GMM consists in using external strategies (Table 4). Some authors used Markov Random Fields [107-109], hierarchical approaches [110-113], multi-level approaches [100, 114-118], multiple backgrounds [119, 121], graph cuts [8], multi-layer approaches [122, 123], tracking feedback [128, 129] or specific post-processing [130-131]. Recent patents concern graph cuts approaches [356, 357].
- Reducing the computation time: All the intrinsic and extrinsic improvements concern the quality of the foreground detection, but there is another manner to improve the original MOG, which consists in reducing the computation time. It is achieved by using a region of interest [3][87], by using a variable adaptation rate [33], by switching the background model [34][7], by using space sampling strategies [35][6][38][7] or by using hardware implementation [36, 37][7].
- Enhancing the foreground detection: All the previous improvements directly concern the original MOG, and the foreground detection results only from it. Another way to improve this method is to enhance the results of the foreground detection by using cooperation with another segmentation method.
It is achieved by cooperation with a statistical background disturbance technique [138], with color segmentation [139], and with a region-based motion detection [140]. Other authors used a cooperation with optical flow [7], block matching [147-148], predictive models [149], texture models [5][303], consecutive frame difference [58][6-6][79-80][8] and basic background subtraction [304-305][330]. A recent patent concerns the cooperation with histogram statistics [358]. Table 6 and Table 7 show, respectively, an overview of the critical situations and of the real-time constraints for the different MOG versions that can tackle them better than the original one.
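To make the mechanics of Section 3.2 concrete, the per-pixel Stauffer-Grimson loop (Eqs. 1-12) can be sketched as follows. This is an illustrative simplification, not a reference implementation: it uses a scalar intensity instead of an RGB vector, and for brevity it sets ρ = α instead of ρ = α·η(X_{t+1}, µ_i, Σ_i); all names are our own.

```python
import numpy as np

class PixelMOG:
    """Per-pixel MOG sketch: K Gaussians over a scalar intensity."""

    def __init__(self, k=3, alpha=0.05, var0=15.0, w0=0.05, T=0.7, match_k=2.5):
        self.w = np.full(k, 1.0 / k)   # weights omega_i
        self.mu = np.zeros(k)          # means mu_i
        self.var = np.full(k, var0)    # variances sigma_i^2
        self.alpha, self.var0, self.w0 = alpha, var0, w0
        self.T, self.match_k = T, match_k

    def update(self, x):
        """Process one sample; return True if x is classified as background."""
        # Order components by w/sigma; the first B cover threshold T (Eq. 4).
        order = np.argsort(-self.w / np.sqrt(self.var))
        csum = np.cumsum(self.w[order])
        bg = order[: int(np.searchsorted(csum, self.T) + 1)]
        # Match test (Eq. 5): within match_k standard deviations.
        d2 = (x - self.mu) ** 2 / self.var
        matched = np.where(d2 < self.match_k ** 2)[0]
        if matched.size:                      # Case 1: update matched component
            i = matched[np.argmin(d2[matched])]
            rho = self.alpha                  # simplified; rho = alpha*eta(...) in the paper
            self.w = (1 - self.alpha) * self.w       # Eq. 9 for all components
            self.w[i] += self.alpha                  # Eq. 6 for the matched one
            self.mu[i] = (1 - rho) * self.mu[i] + rho * x            # Eq. 7
            self.var[i] = (1 - rho) * self.var[i] + rho * (x - self.mu[i]) ** 2  # Eq. 8
            return bool(i in bg)
        # Case 2: replace the least probable component (Eqs. 10-12).
        j = order[-1]
        self.w[j], self.mu[j], self.var[j] = self.w0, x, self.var0
        self.w /= self.w.sum()
        return False
```

Feeding a stable intensity makes its component gain weight and lose variance until it is ranked as background, while a sudden new value falls into Case 2 and is reported as foreground, which is exactly the behavior discussed for the critical situations above.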

Table 3. Intrinsic improvements of the MOG

Background Step / Parameters / Authors - References

Background Initialization
- Variable K: Zivkovic [49], Cheng et al. [50], Shimada et al. [51], Tian et al. [52], Carminati et al. [53], Klare and Sarkar [30], Shimada et al. [37], Shahid et al. [40], Singh and Mitra [48], Wang et al. [78], Huang et al. [88], Wang et al. [307], Zhou et al. [37]
- Variables µ, σ, ω:
  Another algorithm: Morellas et al. [54], Lee [55], Ju et al. [4], Singh et al. [45], Singh et al. [46], Wang and Dai [5], Hu et al. [59], Guo et al. [70], Molin [85], Qin et al. [86], Li et al. [35], Wang and Miller [33]
  Allowing presence of moving objects: Zhang et al. [56], Amintoosi et al. [57], Lepsk [58], Lee et al. [73], Wang et al. [307]

Background Maintenance
- Variable K: Zivkovic [49], Cheng et al. [50], Shimada et al. [51], Tian et al. [52], Klare and Sarkar [30], Shimada et al. [37], Singh and Mitra [48], Wang et al. [78], Zhou et al. [37]
- Variables µ, σ, ω:
  Maintenance rules: Han and Li [59], Park and Buyn [66]
  Maintenance mechanisms: Zhang et al. [56], Wang and Suter [60], Lindstrom et al. [61], Li et al. [69], Lee et al. [73]
  Selective maintenance: Stauffer and Grimson [62], Landabaso and Pardas [63], Park et al. [64], Mittal and Huttenlocher [65], Salas et al. [5], Wang and Dai [5], Hu et al. [59], Li et al. [65], Liu and Zhang [76], Yu et al. [90]
- Learning rates α, ρ:
  Better settings: Zang and Klette [66], White and Shah [67]
  Adaptive learning rates: Wang and Suter [60], Lindstrom et al. [61], Stauffer and Grimson [62], KaewTraKulPong and Bowden [68-70], Lee [71], Harville et al. [72], Porikli [73], Liu et al. [74], Pnevmatikakis et al. [75, 76], Power et al. [77], Leotta et al. [78], Sheng and Cui [7], Quast et al. [84], Molin [85], Qin et al. [86], Shah et al. [98], Kan et al. [30], Quast et al. [308], Lin et al. [309], Bin and Liu [310], Zhao and He [311], Li et al. [313]

Foreground Detection
- Different measure for the matching test: Carminati et al. [53], Ren et al. [79], Lee [80], Sun [81], Morellas et al. [82], Xuehua et al. [6], Rui et al. [6]
- Pixel Persistence Map (PPM): Pnevmatikakis et al. [75, 76], Landabaso and Pardas [83]
- Probabilities: Yang and Hsu [84], Lee [85], Lien et al. [5], Zhang and Zhou []
- Foreground model: Lindstrom et al. [61], Landabaso et al. [63], Withagen et al. [86], Feldman et al. [33], Feldman [34], Tan and Wang [38]
- Some matching tests: Zhang et al. [39], Wang and Suter [60]
- Fusion rules: Lien et al. [5]
- Most dominant background: Haque et al. [87, 88, 89]

Table 4. Extrinsic improvements of the MOG

Methods / Authors - References
- Markov Random Fields: Kumar and Sengupta [107], Zhou and Zhang [108], Schindler and Wang [109], Landabaso et al. [63], Li et al. [9], Dickinson et al. [36], Zhang and Zhou [37], Wang et al. [38]
- Hierarchical approaches: Sun and Yuan [110], Park et al. [111], Chen et al. [112], Zhou et al. [113], Zhong et al. [4], Zhong et al. [64], Li et al. [65]
- Multi-level approaches: Javed et al. [100], Zang and Klette [114], Zhong et al. [115], Cristani et al. [116-118], Yang et al. [35]
- Multiple backgrounds: Su and Hu [119, 120], Porikli [121], Qi et al. [30], Qi et al. [3]
- Graph cuts: Sun [8], Chang and Hsu [57], Li et al. [69], Li et al. [9]
- Multi-layer approaches: Yang et al. [122], Porikli and Tuzel [123], Park and Buyn [66], Huang and Wu [9]
- Features-Cameras strategies: Xu and Ellis [124], Nadimi and Bhanu [125, 126], Conaire et al. [127]
- Tracking feedback: Harville [128], Taycher et al. [129], Wang et al. [75], He et al. [30], Yuan et al. [344], Shao et al. [36]
- Post-processing: Turdu and Erdogan [130], Parks and Fels [131], Fazli et al. [306]

Table 5. Features improvements of the MOG

Feature Size:
- Block: Fang et al. [90], Pokrajac and Latecki [9], Wang et al. [75], Zhong et al. [8], Zhang et al. [94], Wang et al. [39]
- Cluster: Bhaskar et al. [9], Cai et al. [43]

Feature Type:
- Color features:
  - Normalized RGB: Sjman et al. [93], Xu and Ellis [94]
  - YUV: Harville et al. [7], Sun [8], Fang et al. [90], Guo et al. [70], Feldman et al. [33], Feldman [34]
  - HSV: Sun [8], Xuehua et al. [6], Rui et al. [6], Wang and Tang [74]
  - HSI: Wang and Wu [95]
  - Luv: Yang and Hsu [96]
  - Improved HLS: Setiawan et al. [97]
  - YCrCb: Kristensen et al. [98], Ribeiro et al. [99]
- Edge features: Javed et al. [00], Jain et al. [0], Klare and Sarkar [03], Li et al. [53]
- Texture features: Tian and Hampapur [0], Shimada and Taniguchi [50], Huang et al. [55]
- Stereo features:
  - Disparity: Gordon et al. [03]
  - Depth: Harville et al. [7], Silvestre [04]
- Spatial features: Yang and Hsu [84], Dickinson et al. [05], Klare and Sarkar [30], Wei et al. [3]
- Motion features: Tang et al. [40]
- Phase features: Xue et al. [3]
- Video features: Wang et al. [06], Wang et al. [39]
- Entropy features: Park et al. [95], Park et al. [96]
- Bayer features: Suhr et al. [97]
- HOG features: Fabian [99], Hu et al. [300]

Table 6. Challenges and MOG Versions

- CS 1 - Noise Image: Xu [], Teixeira et al. [], Li et al. [65]
- CS 2-1 - Camera jitter: Campbell-West et al. [9], Xu [], Achkar and Amer [3], Rao et al. [4], Li et al. [65]
- CS 2-2 - Camera Adjustments: Zen and Lai [5], Molin [85]
- CS 3 - Gradual Illumination Changes: Tian et al. [34], Huang et al. [54], Wang et al. [77], Baloch [83], Huang et al. [88], Lin et al. [309]
- CS 4 - Sudden Illumination Changes: Tian et al. [34], Li et al. [53], Baloch [83], Lin et al. [309], Xue et al. [3], Li et al. [33]
- CS 5-1 - Bootstrapping during initialization: Gao et al. [0]
- CS 5-2 - Bootstrapping during maintenance: Lindstrom et al. [6]
- CS 6 - Camouflage: Guo et al. [70]
- CS 7 - Foreground Aperture: Utasi and Czúni [6]
- CS 8 - Moved background objects: Teixeira et al. []
- CS 9 - Inserted background objects: Teixeira et al. []
- CS 10 - Multimodal background: Dalley et al. [7], Li et al. [65]
- CS 11 - Waking foreground object: Su and Hu [9], Hu and Su [0]
- CS 12 - Sleeping foreground objects: Cheng et al. [9], Cai et al. [56], Hu et al. [59]
- CS 13 - Shadows Detection: Xu [], Huang and Chen [3], Zhang et al. [33], Tian et al. [34], Izadi et al. [35], Rahman [36], Chen et al. [60], Landabaso et al. [63], Li et al. [65], Quast et al. [84], Molin [85], Huang et al. [88], Forczmanski and Seweryn [93], Tian and Wang [38], Li and Xu [39], Bin and Liu [30], Liu and Bin [3], Lai et al. [34], Wang et al. [38]

Table 7. Real-Time Constraints and MOG Versions

- Computation Time: Cuevas et al. [8], Chang and Hsu [57], Krishna et al. [7]
- Memory Requirement: Krishna et al. [7]

Discussion: The Mixture of Gaussians (MOG) is adapted to outdoor scenes where there are slow multimodal variations in the background. For dynamic backgrounds like camera jitter, waving trees and water rippling, this model causes false detections.

3.3 Kernel Density Estimation (KDE)

To deal with dynamic backgrounds like camera jitter, waving trees and water rippling, Elgammal et al. [5] proposed to estimate the probability density function of each pixel using the kernel estimator K over the N most recent intensity values {x_1, x_2, ..., x_N} taken consecutively in a time window of size W, as follows:

P(x_t) = (1/N) Σ_{i=1}^{N} K(x_t − x_i)    (13)

where K() is the kernel estimator function, taken as a normal Gaussian N(0, Σ). So, the probability density function is determined as follows:

P(x_t) = (1/N) Σ_{i=1}^{N} (2π)^{−d/2} |Σ|^{−1/2} exp(−(1/2)(x_t − x_i)^T Σ^{−1} (x_t − x_i))    (14)

Elgammal et al. [5] assumed that the different color channels are independent, with a different kernel bandwidth for each channel. The kernel bandwidth matrix is then:

Σ = diag(σ_1², σ_2², σ_3²)    (15)

So, the probability density function can be written as follows:

P(x_t) = (1/N) Σ_{i=1}^{N} Π_{j=1}^{d} (1/√(2π σ_j²)) exp(−(x_{t_j} − x_{i_j})² / (2σ_j²))    (16)

Elgammal et al. [5] detected the foreground using the probabilities and a threshold T as follows:

If P(x_t) < T then the pixel is classified as foreground, else the pixel is classified as background    (17)

At this step, a binary mask is obtained. Then, to make the next foreground detection, the parameters must be updated. For this, Elgammal et al. [5] used two background models, a short-term one and a long-term one, which achieve different objectives:

- The short-term model adapts quickly, to allow very sensitive detection. It consists of the most recent N background sample values, and the sample is updated using a selective maintenance mechanism, where the decision is based on the foreground classification.
- The long-term model captures a more stable representation of the scene background and adapts to changes slowly. It consists of N sample pixels taken from a much larger window in time, and the sample is updated using a non-selective maintenance mechanism.

So, to combine the advantages of each model and to eliminate their disadvantages, the next foreground detection is obtained by taking the intersection of the two foreground detections coming from the short-term model and the long-term model. This intersection eliminates the persistent false positive detections of the short-term model and the extra false positive detections that occur in the long-term model results. The only false positive detections that will remain will be rare events not represented in either model. If such a rare event persists over time in the scene, then the long-term model will adapt to it, and it will be suppressed from the result later. Taking the intersection will, unfortunately, suppress true positives of the first model's result that are false negatives in the second, because the long-term model adapts to foreground objects as well if they are stationary or moving slowly. To address this problem, all pixels detected by the short-term model that are adjacent to pixels detected by the combination are included in the final foreground detection.

Improvements: The original KDE presents several advantages. The multimodality of the model allows dealing with multimodal backgrounds, particularly with fast changes (waving trees, water rippling, etc.). Despite this, the model presents some disadvantages: N frames need to be kept in memory during the entire detection process, which is costly memory-wise when N is large, and the algorithm is time consuming too, due to its complexity in O(N*N). To solve these problems, different improvements have been proposed:

- Intrinsic improvements: These strategies consist in changing the kernel function [4-49], as shown in Table 8. For the training, some authors propose to decrease the number of samples by determining a proper size of the frame buffer [43], by using a diversity sampling scheme [50,5] or by using a sequential Monte Carlo sampling scheme [5]. A recent patent concerns the sequential kernel density approximation through mode propagation [359]. Furthermore, recursive maintenance [43-45,53, 54, 59] can be adopted to reduce the computation time. For the foreground detection, different schemes can be used, as in [43, 46, 47, 53-55]. For the feature type, several features are used instead of the RGB space, like edge features [56] and motion features [57]. To choose which features to use, Parag et al. [58] proposed a framework for feature selection.
- Extrinsic improvements: Some authors (Table 9) used Markov Random Fields [55, 59], hierarchical approaches [60], multiple backgrounds [6] and graph cuts [6].
- Enhancing the foreground detection: Another way to improve this method is to enhance the results of the foreground detection through cooperation with another segmentation method. It is achieved by cooperation with the consecutive frame difference [63] or by using a subspace learning approach using PCA [64].

Tables 8 and 9 give respectively an overview of the intrinsic and extrinsic improvements. Table 10 and Table 11 show respectively an overview of the critical situations and of the real-time constraints for the different KDE versions that can tackle them better than the original one.

Table 8. Intrinsic improvements of the KDE

Background Model:
- Gaussian Kernel Function: Automatic selection of the kernel bandwidth: Tavakkoli et al. [4, 4]
- Rectangular Kernel Function: Constant kernel bandwidth: Ianasi et al. [43], Tanaka et al. [44, 45]; Variable kernel bandwidth: Zivkovic [46]
- Derivative Kernel Function: Cvetkovic et al. [47]
- Negative coefficient polynomial kernel function: Witherspoon and Zhang [48]
- Cauchy Kernel Function: Ramezani et al. [49]

Background Initialization:
- Decreasing the number of samples: Adopting the proper size of the frame buffer: Ianasi et al. [43]; Diversity sampling scheme: Mao and Shi [50, 5]; Sequential Monte Carlo sampling: Tang et al. [5]

Background Maintenance:
- Background image: Ianasi et al. [43]
- Recursive maintenance: Recursive maintenance of the PDF: Tavakkoli et al. [53], Tanaka et al. [44, 45], Ramezani et al. [49]; Recursive maintenance of the background PDF and foreground PDF: Tavakkoli et al. [54]; Recursive maintenance of the PDF and the background image: Ianasi et al. [43]
- Number of samples: Zivkovic [46]
- Selective maintenance: Tavakkoli et al. [4, 4], Mao and Shi [5]

Foreground Detection:
- Dissimilarity measure: Ianasi et al. [43]
- Probability: Zivkovic [46], Tavakkoli et al. [53]
- Foreground model: Tavakkoli et al. [53, 54]
- Two thresholds: Cvetkovic et al. [47]

Table 9. Extrinsic improvements of the KDE

- Markov Random Fields: Pahalawatta et al. [59]
- Hierarchical approaches: Oren et al. [60]
- Multiple backgrounds: Tanaka et al. [6]
- Graph cuts: Mahamud [6]
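The per-pixel kernel density estimate of Section 3.3 can be sketched as follows; the sample count, the per-channel bandwidths and the threshold are illustrative assumptions, not values from the cited papers:

```python
import numpy as np

def kde_probability(x, samples, sigma):
    """Normal-kernel estimate of P(x) for one pixel: the average, over the
    N stored samples, of a product of independent 1-D Gaussian kernels.
    samples: (N, d) recent values; sigma: (d,) per-channel bandwidths."""
    diff = (x - samples) / sigma                                  # (N, d)
    kernels = np.exp(-0.5 * diff ** 2) / (np.sqrt(2 * np.pi) * sigma)
    return np.prod(kernels, axis=1).mean()

# Toy demo: N = 50 background samples around RGB value (100, 100, 100)
rng = np.random.default_rng(0)
samples = 100 + rng.normal(0, 2, size=(50, 3))
sigma = np.full(3, 2.0)          # illustrative bandwidths
T = 1e-6                         # illustrative threshold
p_bg = kde_probability(np.array([101.0, 99.0, 100.0]), samples, sigma)
p_fg = kde_probability(np.array([200.0, 200.0, 200.0]), samples, sigma)
is_foreground = p_fg < T         # thresholding rule: low density = foreground
```

The O(N) cost of this estimate for every pixel and every frame is exactly the computational burden that the intrinsic improvements above (fewer samples, recursive maintenance) try to reduce.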

Table 10. Challenges and KDE Versions

- CS 1 - Noise Image: Mao and Shi [50, 5]
- CS 2-1 - Camera jitter: Sheikh and Shah [55]
- CS 2-2 - Camera Adjustments: Cvetkovic et al. [47], Sung et al. [347], Hwang et al. [348]
- CS 3 - Gradual Illumination Changes: Sheikh and Shah [55]
- CS 4 - Sudden Illumination Changes: Sung et al. [48], Hwang et al. [49]
- CS 5-1 - Bootstrapping during initialization: Martel-Brisson and Zaccarin [346]
- CS 5-2 - Bootstrapping during maintenance: Sheikh and Shah [55]
- CS 6 - Camouflage: Tavakkoli et al. [4], Gu et al. [345]
- CS 7 - Foreground Aperture: -
- CS 8 - Moved background objects: Elgammal et al. [5], Cvetkovic et al. [47]
- CS 9 - Inserted background objects: -
- CS 10 - Multimodal background: -
- CS 11 - Waking foreground object: -
- CS 12 - Sleeping foreground objects: -
- CS 13 - Shadows Detection: Elgammal et al. [5], Cvetkovic et al. [47], Mao and Shi [50, 5]

Table 11. Real-Time Constraints and KDE Versions

- Computation Time: Elgammal [349], Sadeghi et al. [350]
- Memory Requirement: Elgammal [349], Sadeghi et al. [350]

Discussion: The KDE is more adapted to outdoor scenes where dynamic backgrounds appear, but it is less suited to illumination changes.

3.4 Subspace Learning using PCA (SL-PCA)

Subspace learning offers a good framework to deal with illumination changes, as it allows taking spatial information into account. Oliver et al. [65] proposed to model each background pixel using an eigenbackground model. This model consists in taking a sample of N images {I_1, I_2, ..., I_N} and computing the mean background image µ_B and its covariance matrix C_B. This covariance matrix is then diagonalized using an eigenvalue decomposition as follows:

L_B = Φ_B C_B Φ_B^T    (18)

where Φ_B is the eigenvector matrix of the covariance of the data and L_B is the corresponding diagonal matrix of its eigenvalues. In order to reduce the dimensionality of the space, only M eigenvectors (M < N) are kept in a principal component analysis (PCA). The M largest eigenvalues are contained in the matrix L_M, and the eigenvectors corresponding to these M largest eigenvalues are contained in the matrix Φ_M. Once the eigenbackground images stored in the matrix Φ_M are obtained, as well as the mean µ_B, the input image I_t can be approximated by the mean background plus a weighted sum of the eigenbackgrounds. The coordinates in eigenbackground space of the input image I_t are computed as follows:

w_t = Φ_M^T (I_t − µ_B)    (19)

When w_t is back-projected onto the image space, a reconstructed background image is created as follows:

B_t = Φ_M w_t + µ_B    (20)

Then, the foreground object detection is made as follows:

|I_t − B_t| > T    (21)

where T is a constant threshold.

Improvements: The eigenbackground model, which we have called SL-PCA, provides a robust model of the probability distribution function of the background, but not of the moving objects, as they do not have a significant contribution to the model. So, the first limitation of this model is that the foreground objects must be small and must not appear in the same location during a long period in the training sequence. The second limitation appears for the background maintenance: indeed, it is computationally intensive to perform model updating using the batch-mode PCA. Moreover, without a mechanism of robust analysis, the outliers or foreground objects may be absorbed into the background model. The third limitation is that the application of this model is mostly limited to gray-scale images, since the integration of multi-channel data is not straightforward: it involves a much higher dimensional space and causes additional difficulty in managing the data in general. Another limitation is that the representation is not multimodal, so various illumination changes cannot be handled correctly. To alleviate these limitations, numerous improvements have been proposed over the recent years; a survey of 15 papers covering the period 1999-2009 can be found in [334]. The different improvements which attempt to solve these four limitations are summarized in the following classification, with the recent advances:

- Alleviating the limitation on the size of the foreground objects: Xu et al. [66, 67] proposed to apply recursively an error compensation process which reduces the influence of foreground moving objects on the eigenbackground model. An adaptive threshold method is also introduced for the background subtraction, where the threshold is determined by combining a fixed global threshold and a variable local threshold. Results show more robustness in the presence of moving objects. Another approach, developed by Kawabata et al. [68], consists in an iterative optimal projection method to estimate a varying background in real time from a dynamic scene with foreground. Firstly, background images are collected for a while, and then these background images are compressed using the eigenspace method to form a database. After this initialization, each new image is projected onto the eigenspace to estimate the background. As the estimated image is strongly affected by the foreground, the foreground region is computed by background subtraction against the former estimated background, so as to exclude this region from the projection. The image, whose foreground region is replaced by the former background, is then projected onto the eigenspace, and the background is updated. Kawabata et al. [5] proved that this cycle converges to a correct background image.
Recently, Quivy and Kumazawa [35] proposed to generate the background images using the Nelder-Mead Simplex algorithm and a dynamic masking procedure. This paper presents an original method that replaces the projection/reconstruction step of the SL-PCA by a direct background image generation. The experiments proved that the proposed method performs better than the SL-PCA [65], the SL-REC [66, 67] and the SL-IOP [68] for large and fast moving objects.

- Dealing with the time requirement and the robustness: For the maintenance, some authors [69-77] proposed different algorithms of incremental PCA. The incremental PCA proposed in [69] needs less computation, but the background image is contaminated by the foreground objects. To solve this, Li et al. [70, 7] proposed an incremental PCA which is robust in the presence of outliers. However, when keeping the background model updated incrementally, it assigned the same weight to the different frames. Thus, clean frames and frames which contain foreground objects have the same contribution, and the consequence is a relative pollution of the background model. In this context, Skocaj et al. [7, 73] used a weighted incremental and robust PCA. The weights differ from frame to frame, and this method achieved a better background model. However, the weights were applied to the whole frame, without considering the contribution of different image parts to building the background model. To achieve a pixel-wise precision for the weights, Zhang and Zhuang [74] proposed an adaptive weighted selection for an incremental PCA. This method obtains a better model by assigning a weight to each pixel of each new frame during the update. Experiments [74] show that this method achieves better results than the SL-IRPCA [70, 7]. Wang et al. [75, 76] used a similar approach based on the sequential Karhunen-Loeve algorithm, and recently Zhang et al. [09] improved this approach with an adaptive scheme. All these incremental methods avoid the eigen-decomposition of the high-dimensional covariance matrix by using an approximation of it, so that a low-cost decomposition is performed at the maintenance step with less computational load. However, these incremental methods maintain the whole eigenstructure, including both the eigenvalues and the exact matrix Φ_M. To address this problem, Li et al. [77] proposed a fast recursive and robust eigenbackground maintenance avoiding eigen-decomposition. This method achieves results similar to the SL-IPCA [69] and the SL-IRPCA [70, 7] at better frame rates. Fig. (4) shows a classification of these algorithms according to their robustness and their adaptivity.

- Dealing with the grey-scale and pixel-wise limitations: Recently, Wu et al. [07] proposed to combine the PCA model with a single Gaussian model. The PCA provides robustness to illumination changes, and the single Gaussian describes the color information of each pixel; so, the method can detect chroma changes and remove shadow pixels. An adaptive strategy is used to integrate the two models, and a binary graph cut is then used to perform the foreground/background segmentation. In another way, Han and Jain [78] proposed an efficient algorithm using a weighted incremental 2-Dimensional Principal Component Analysis. It is shown that the principal components in 2DPCA are computed efficiently by transformation to standard PCA. To improve the computational time, Han and Jain [78] used an incremental algorithm to update the eigenvectors, in order to handle temporal variations of the background. The proposed algorithm was applied to 3-channel (RGB) and 4-channel (RGB+IR) data.

Fig. (4): Adaptivity of the SL-PCA Algorithms

Results show noticeable improvements in the presence of multimodal backgrounds (MB) and shadows (S). To solve the pixel-wise limitation, Zhao et al. [06] used spatio-temporal blocks instead of pixels. Furthermore, their method applies the candid covariance-free incremental principal component analysis algorithm (CCIPCA), which has a faster convergence rate and a lower computational complexity than classical IPCA algorithms. Results show more robustness to noise and fast lighting changes.

- Dealing with multimodal illumination changes: Recently, Dong et al. [] proposed to use multi-subspace learning to handle different illumination changes. The feature space is organized into clusters which represent the different lighting conditions. A Local Principal Component Analysis (LPCA) transformation is used to learn separately an eigen-subspace for each cluster. When a current image arrives, the algorithm selects the learned subspace which shares the nearest lighting condition. The results [] show that the LPCA algorithm outperforms the original PCA algorithm [65] and the MOG [4], especially under sudden illumination changes. In a similar way, Kawanishi et al. [3-4] generated a background image which well expresses the weather and the lighting condition of the scene. This method collects a huge number of images by super-long-term surveillance, classifies them according to their time in the day, and applies the PCA so as to reconstruct the background image. A recent patent concerns a method based on space-time video blocks and online subspace learning [360]; this method allows a robust incremental update and alleviates the pixel-wise limitations. Table 12, Table 13, Table 14 and Table 15 group by type the different improvements of the SL-PCA.

Table 12. Influence of the foreground objects

- Recursive Error Compensation (SL-REC): Xu et al. (2006) [66, 67]
- Iterative Optimal Projection (SL-IOP): Kawabata et al. (2006) [68]
- Simplex Algorithm (SL-SA): Quivy and Kumazawa (2011) [35]

Table 13. Time requirement and the robustness

- Incremental PCA (SL-IPCA): Rymel et al. (2004) [69]
- Incremental and Robust PCA (SL-IRPCA): Li et al. (2003) [70, 7]
- Weighted Incremental and Robust PCA (SL-WIRPCA): Skocaj et al. (2003) [7, 73]
- Adaptive Weight Selection for Incremental PCA (SL-AWIPCA): Zhang and Zhuang (2007) [74]
- Sequential Karhunen-Loeve algorithm (SL-SKL): Wang et al. (2006) [75, 76]
- Adaptive Sequential Karhunen-Loeve algorithm (SL-ASKL): Zhang et al. [09]
- Fast Recursive Maintenance (SL-FRM): Li et al. (2006) [77]

Table 14. Dealing with the grey-scale and pixel-wise limitations

- PCA - Single Gaussian (SL-PCA-SG): Wu et al. (2009) [07, 08]
- Weighted Incremental 2D PCA (SL-WI2DPCA): Han and Jain (2007) [78]
- Candid Covariance-free Incremental PCA (SL-CCIPCA): Zhao et al. (2008) [06]

Table 15. Dealing with multimodal illumination changes

- Local Principal Component Analysis on Clusters (LPCA-C): Dong et al. (2010) [, ]
- Local Principal Component Analysis on Separated Sequences (LPCA-SS): Kawanishi et al. (2009) [3-4]

3.5. Discussion

In Section 3, we surveyed the models of the first category and their related improvements. These improvements make each original algorithm perform better in specified critical situations. However, some authors have recently proposed to use more advanced statistical models, such as support vector models, to deal more accurately with dynamic backgrounds.
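Before moving to the second category, the eigenbackground projection/reconstruction pipeline of Section 3.4 can be sketched on synthetic data. The frame sizes, noise level and threshold below are illustrative assumptions:

```python
import numpy as np

# Eigenbackground sketch: learn the mean plus M eigenvectors from N
# flattened training frames, then reconstruct the background of a new
# frame and threshold the residual. All sizes and the threshold T are
# illustrative, not taken from Oliver et al.
rng = np.random.default_rng(1)
N, P, M = 20, 64, 3                        # frames, pixels, kept eigenvectors
base = rng.uniform(0, 100, P)              # static scene
frames = base + rng.normal(0, 2, (N, P))   # training frames (scene + noise)
mu_b = frames.mean(axis=0)
_, _, Vt = np.linalg.svd(frames - mu_b, full_matrices=False)
Phi_M = Vt[:M].T                           # (P, M) leading eigenbackgrounds

img = base.copy()
img[:8] += 120                             # small foreground blob
w = Phi_M.T @ (img - mu_b)                 # projection onto the subspace
bg = Phi_M @ w + mu_b                      # reconstructed background
mask = np.abs(img - bg) > 60               # foreground detection, T = 60
```

The blob is not representable by the learned subspace, so its reconstruction residual stays large, while residuals on background pixels remain small; this is exactly why the method fails when foreground objects are large or persistent enough to enter the training set.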

4. SECOND CATEGORY

The second category of models uses more sophisticated statistical models, such as the support vector machine (SVM), support vector regression (SVR) and support vector data description (SVDD).

4.1 Support Vector Machine (SVM)

Support Vector Machines were introduced by Vapnik et al. [79]. For classification, SVMs work by determining a hyperplane in a high-dimensional feature space that separates the training data into two classes. The best hyperplane is the one maximizing the margin, i.e. the least distance from the hyperplane to the data. Using this classification aspect, Lin et al. [80] proposed to use SVMs for background modeling. In particular, Lin et al. [80] used a PSVM, i.e. an SVM with probabilistic outputs, because the standard SVM gives only binary outputs. A sigmoid model is used to convert binary SVM scores into posterior probabilities:

p(y = 1 | f) = 1 / (1 + exp(A f + B))    (22)

where y is the binary class label and f is the output score of the SVM decision function. The two parameters A and B are fitted using maximum likelihood estimation from a training set (f_i, y_i), and derived by minimizing the negative log-likelihood function:

min − Σ_i [ t_i log(p_i) + (1 − t_i) log(1 − p_i) ]    (23)

where t_i = (y_i + 1)/2 and

p_i = 1 / (1 + exp(A f_i + B))    (24)

To avoid overfitting and to derive an unbiased training for the minimization, a hold-out set is generated by dividing each training set into subsets of 80% and 20% respectively. The large subset is used for the SVM training, and the smaller one is used for the two-parameter minimization. In this context, Lin et al. [80] used training images of size 160*120 with known background. Each image is divided into blocks of size 4*4, and two features are considered for each block: the optical flow value and the consecutive image difference. For each block, its label is defined as +1 for background and -1 otherwise. The background initialization starts with the first image, and each block is tested by the PSVM. An image block is classified as background if its probability output is larger than a threshold T1:

p(b_ij) > T1    (25)

When an image block is classified as background for M consecutive times, the Fisher linear distance is used:

d(b_ij, b_back) = (µ_ij − µ_back)² / (σ_ij² + σ_back²)    (26)

where µ and σ² are the mean and the variance of the intensity distribution of a block. When the distance between the two blocks is large, two possible conditions appear: the current block can be either part of a uniform region of a moving object, or a newly revealed background. The average PSVM probability of the current block over the past M frames is compared with the PSVM probability of the background; if the new average PSVM probability is larger, then the background is replaced by the current block. Continuing in this way, the initialization process is terminated when no replacement event occurs for M consecutive frames. When the initialization is finished, the foreground detection is made by thresholding the difference between the background model and the current image.

4.2 Support Vector Regression (SVR)

Given a set of training data, SVR fits a function by specifying an upper bound on the fraction of training data allowed to lie outside of a distance ε from the regression estimate. This type of SVR is usually referred to as ε-insensitive SVR [8]. For each pixel belonging to the background, a separate SVR is used to model it as a function of intensity. To classify a given pixel as background or not, Wang et al. [83][84] feed its intensity value to the associated SVR and threshold the output of the SVR. Let us assume a set of training data for some pixel p obtained from several frames, {(x_1, y_1), ..., (x_N, y_N)}, where x_i corresponds to the intensity value of pixel p at frame i, and y_i corresponds to the confidence of pixel p being a background pixel. Once the SVR has been trained, the confidence of the pixel p in a new frame, f(x_t), is computed using the following linear regression function:

f(x_t) = Σ_{j=1}^{N} (a_j − a_j*) k(x_j, x_t) + ξ    (27)

where k(x_j, x_t) is a kernel function. The parameters a_j and a_j*, the Lagrange multipliers, and the bias ξ are obtained by solving an optimization problem using the method of Lagrange multipliers. Given the SVR-based background model, the intensity of each pixel in a new frame forms the input to its SVR, and the output of the SVR represents the confidence that the given pixel belongs to the background. Eventually, a pixel is labelled as background if its confidence is between a low threshold S_l and a high threshold S_h. Specifically, a binary foreground detection map is formed at frame t as follows:

M_t(x) = 0 if S_l < f(x_t) < S_h
M_t(x) = 1 otherwise    (28)

where f(x_t) is the SVR output and S = {S_l, S_h} are the initial thresholds. Then, for each region in the binary map, the SVR-based background model is updated using an online SVR learning algorithm [8].

4.3 Support Vector Data Description (SVDD)

Tavakkoli et al. [86] proposed to model the background using support vector data description (SVDD) in videos with quasi-stationary backgrounds. Data domain description concerns the characteristics of a data set [85]; the boundary of the data set can be used to detect novel data or outliers. A normal data description gives a closed boundary around the data, and the simplest boundary can be represented by a hypersphere. The volume of this hypersphere, with center a and radius R, should be minimized while containing all the training samples x_i. To allow for the possibility of outliers in the training set, slack variables ε_i ≥ 0 are introduced. The error function to be minimized is defined as:

F(R, a) = R² + C Σ_i ε_i    (29)

subject to the constraints:

||x_i − a||² ≤ R² + ε_i    (30)

In equation (29), C is a trade-off between the simplicity of the system and its error, and is called the confidence parameter. After incorporating the constraints (30) into the error function (29) by Lagrange multipliers, we have:

L(R, a, α, γ, ε) = R² + C Σ_i ε_i − Σ_i α_i (R² + ε_i − ||x_i − a||²) − Σ_i γ_i ε_i    (31)

L should be maximized with respect to the Lagrange multipliers α_i ≥ 0 and γ_i ≥ 0, and minimized with respect to R, a and ε_i. The Lagrange multipliers γ_i can be removed if the constraint 0 ≤ α_i ≤ C is imposed. After solving the optimization problem, we have:

L = Σ_i α_i (x_i · x_i) − Σ_{i,j} α_i α_j (x_i · x_j), with 0 ≤ α_i ≤ C    (32)

When a sample strictly satisfies the inequality in (30), its corresponding Lagrange multipliers are zero; otherwise they are non-zero. Therefore, we have:

||x_i − a||² < R² ⇒ α_i = 0, γ_i = 0
||x_i − a||² > R² ⇒ α_i = C, γ_i > 0    (33)

From the above, we can remark that only the samples with nonzero α_i are needed in the description of the data set; therefore, they are called the support vectors of the description. To test a new sample y, its distance to the center of the hypersphere is calculated and tested against R. Tavakkoli et al. [86] used this methodology to build a descriptive boundary for each pixel from the background training frames, in order to generate its model of the background. These boundaries are then used to classify the corresponding pixels of new frames as background or foreground. In practice, for each pixel in the scene a single-class classifier is trained using its values in the background training frames. This classifier consists of the description boundary and the support vectors, as well as a threshold used to describe the data. For the foreground detection, each pixel of the new frames is classified as background or foreground using its value and its corresponding classifier from the training stage. The feature vectors x_ij used in the current implementation are x_ij = [C_r ; C_g], where C_r and C_g are the red and green chrominance values of pixel (i, j).

Improvements: This model presents several advantages: the accuracy is not bounded by the accuracy of estimated probability density functions, and the memory requirement is lower than for non-parametric techniques. Because support vector data description explicitly models the decision boundary of the known class, it is suitable for novelty detection without the need to use thresholds. Furthermore, the classifier performance in terms of false positives is controlled explicitly. The main disadvantage is that the training of the SVDD requires a Lagrange optimization, which is computationally intensive, and for the maintenance all the SVDDs must be recomputed. To improve the training, Tavakkoli et al. [87] proposed a genetic approach to solve the Lagrange optimization problem: the Genetic Algorithm (GA) starts with an initial guess and solves the optimization problem iteratively. In [88][89], Tavakkoli et al. proposed to use an incremental SVDD; in this way, the maintenance is improved too.

4.4 Discussion

Support vector models offer a nice framework for background modeling, specifically in the presence of illumination changes and dynamic backgrounds. Another way to model the background is to improve the first category by using a more adaptive model.
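As a closing illustration for this category, the SVDD decision rule of Section 4.3 (a hypersphere test on chrominance values) can be sketched as follows. To keep the sketch short, the Lagrange optimization is replaced by a crude center/radius estimate (sample mean and a high percentile of training distances); this only illustrates the test ||y − a||² ≤ R², not Tavakkoli et al.'s solver:

```python
import numpy as np

# SVDD-style hypersphere test on [Cr, Cg] chrominance values for one pixel.
# Center a and radius R are crude stand-ins (mean / 99th-percentile
# distance) for the quantities a true SVDD optimization would produce.
rng = np.random.default_rng(2)
train = rng.normal([0.3, 0.4], 0.02, size=(200, 2))  # background training set
a = train.mean(axis=0)
R = np.quantile(np.linalg.norm(train - a, axis=1), 0.99)

def is_background(y):
    return np.linalg.norm(y - a) <= R    # inside the learned hypersphere

bg_ok = is_background(np.array([0.30, 0.40]))       # typical background value
fg_ok = not is_background(np.array([0.80, 0.10]))   # novel value -> foreground
```

A genuine SVDD would instead keep only the support vectors on the boundary, which is what makes the memory footprint smaller than for non-parametric techniques.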

5. THIRD CATEGORY

The third category of models generalizes the first category models, with the single general Gaussian (SGG), the mixture of general Gaussians (MOGG) and subspace learning using Independent Component Analysis (SL-ICA), Incremental Non-negative Matrix Factorization (SL-INMF) or Incremental Rank-(R1,R2,R3) Tensor (SL-IRT).

5.1 Single General Gaussian (SGG)

Kim et al. [90-9] proposed to model the background using a generalized Gaussian family (GGF) of distributions, to cope with problems arising from various changes in the background and from shadows. The idea is that the pixel variation sometimes fits a Laplace distribution and sometimes a Gaussian one. Indeed, the variation of a pixel in a static scene over time in indoor scenes taken with recent cameras is closer to a Laplace distribution than to a Gaussian, but the Laplace model has limitations for use in various environments. The pixel variation in a static scene over time is modeled as:

P(X) = (ρ γ / (2 Γ(1/ρ))) e^{−(γ |x − µ|)^ρ}, with γ = (1/σ) √(Γ(3/ρ) / Γ(1/ρ))    (34)

where Γ(·) is the gamma function and σ² is the variance of the distribution. In Equation (34), ρ = 1 gives a Laplace distribution, while ρ = 2 gives a Gaussian distribution. The model is decided for each pixel by computing the excess kurtosis g over the first m frames; the excess kurtosis of the Laplace and Gaussian distributions is respectively 3 and 0. The optimal parameters of the background model are estimated by maximizing the likelihood of the observed values, and the excess kurtosis is computed as:

g = N Σ_{i=1}^{N} (x_i − µ)^4 / (Σ_{i=1}^{N} (x_i − µ)²)² − 3    (35)

In practice, Kim et al. [90-9] modelled the background in two parts: a luminance component, obtained as a weighted mean of the RGB channels, and a hue component in the HSI color space. The maintenance is made using a selective running average, as in [3]. The foreground detection is first performed by subtracting the intensity component of the current frame from the background model:

D(x, y) = |I(x, y) − B(x, y)|    (36)

where I(x, y) and B(x, y) correspond respectively to the luminance of the current frame and of the background model. Then, pixels are classified into three categories, using two thresholds, as follows:

background pixel if D(x, y) < T1 k(x, y)
suspicious pixel if T1 k(x, y) ≤ D(x, y) ≤ T2 k(x, y)
foreground pixel if T2 k(x, y) < D(x, y)    (37)

where k(x, y) is a scale parameter. The thresholds T1 and T2 are determined using the training frames. The SGG shows better performance than the MOG and the KDE in indoor and outdoor scenes.

5.2 Mixture of General Gaussians (MOGG)

Allili et al. [93-95] proposed a finite mixture model of general Gaussians for robust segmentation in the presence of noise and outliers. This model has more flexibility to adapt to the shape of the data, and less sensitivity to over-fitting the number of classes, than the mixture of Gaussians. Each pixel is characterized by its intensity in the RGB color space. The probability of observing the current pixel value in the multidimensional case is then given by the following formula:

P(X_t) = Σ_{i=1}^{K} ω_{i,t} η(X_t, µ_{i,t}, σ_{i,t}, λ_i)    (38)

where K is the number of distributions and ω_{i,t} is the weight associated to the i-th distribution at time t, with mean µ_{i,t} and standard deviation σ_{i,t}. λ_i is the shape parameter (λ_i = 2 for a Gaussian distribution and λ_i = 1 for a Laplace distribution). η is a general Gaussian probability density function:

η(X_t, µ, σ, λ) = Π_{j=1}^{d} A(λ_j) exp(−B(λ_j) |(X_j − µ_j)/σ_j|^{λ_j})    (39)

where A(λ) = (λ √(Γ(3/λ)/Γ(1/λ))) / (2 σ Γ(1/λ)) and B(λ) = (Γ(3/λ)/Γ(1/λ))^{λ/2}.

The optimal number of distributions is computed at each time t by minimizing the Minimum Message Length (MML) criterion. If the number of distributions at time t+1 is smaller than at time t, the parameters are updated in a similar way as in [4]. The same matching test as in [4] is used to check whether a pixel matches a distribution, and the same labeling scheme as Stauffer and Grimson [4] is used. The MOGG shows better performance than the MOG in the presence of shadows (S).
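The per-pixel model selection of Section 5.1 (Laplace versus Gaussian, chosen from the excess kurtosis) can be sketched as follows; the sample size and the decision boundary at g = 1.5 are illustrative assumptions, not values from the cited papers:

```python
import numpy as np

def pick_rho(x):
    """Choose the shape parameter from the excess kurtosis g of a pixel's
    samples: g near 3 suggests a Laplace model (rho = 1), g near 0 a
    Gaussian model (rho = 2). The cut-off at 1.5 is an illustrative choice."""
    x = np.asarray(x, dtype=float)
    mu = x.mean()
    g = len(x) * np.sum((x - mu) ** 4) / np.sum((x - mu) ** 2) ** 2 - 3
    return 1 if g > 1.5 else 2

rng = np.random.default_rng(3)
rho_gauss = pick_rho(rng.normal(0, 1, 5000))      # Gaussian pixel history
rho_laplace = pick_rho(rng.laplace(0, 1, 5000))   # Laplace pixel history
```

With short histories the kurtosis estimate is noisy, which is why the model is decided over the first m frames rather than frame by frame.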

5.3 Subspace Learning

Subspace learning can be performed using PCA as seen in Section 3.4. In the literature [96], there are other methods to reduce the space, and these methods have been classified by Skocaj and Leonardis [97] into reconstructive methods and discriminative methods:

- Reconstructive subspace learning: The reconstructive methods provide a good approximation of the data and so a good reconstruction. Another advantage is that reconstructive methods are unsupervised techniques. Furthermore, reconstructive methods enable incremental updating, which is very suitable for real-time applications. These methods are task-independent. The most common reconstructive methods are the following: Principal Component Analysis (PCA) [51], Independent Component Analysis (ICA) [52] and Non-negative Matrix Factorization (NMF) [53]. PCA transforms a number of possibly correlated data into a smaller number of uncorrelated data called principal components. ICA is a variant of PCA in which the components are assumed to be mutually statistically independent instead of merely uncorrelated. This stronger condition removes the rotational invariance of PCA, i.e. ICA provides a meaningful unique bilinear decomposition of two-way data that can be considered as a linear mixture of a number of independent source signals. Non-negative matrix factorization (NMF) finds linear representations of non-negative data. Given a non-negative data matrix V, NMF finds an approximate factorization V ≈ WH into non-negative factors W and H. The non-negativity constraints make the representation purely additive, i.e. allowing no subtractions, in contrast to principal component analysis (PCA) and independent component analysis (ICA).

- Discriminative subspace learning: The discriminative methods are supervised techniques which provide a good separation of the data and so a good classification. Furthermore, discriminative methods are spatially and computationally efficient. These methods are task-dependent. The most common discriminative methods are the following: Linear Discriminant Analysis (LDA) [54] and Canonical Correlation Analysis (CCA) [55]. LDA projects the data onto a lower-dimensional vector space such that the ratio of the between-class distance to the within-class distance is maximized.
The goal is to achieve maximum discrimination. Canonical correlation analysis (CCA) is a multivariate statistical model that facilitates the study of interrelationships among sets of multiple dependent variables and multiple independent variables. Canonical correlation simultaneously predicts multiple dependent variables from multiple independent variables.

All these methods were originally implemented with batch algorithms, which require that the data be available in advance and given all together. However, this type of batch algorithm is not adapted to the application of background modeling, in which the data are incrementally received from the camera. Furthermore, when the dimension of the dataset is high, both the computation and storage complexity grow dramatically. Thus, incremental methods are highly needed to compute in real time the adaptive subspace for data arriving sequentially. Following these constraints, the reconstructive methods are the best adapted for background modeling. Furthermore, their unsupervised aspect avoids manual intervention in the learning step. In the following paragraphs, we survey the subspace learning methods applied recently to background modeling: Independent Component Analysis (ICA), Non-negative Matrix Factorization (NMF) and Incremental Rank-(R1,R2,R3) Tensor.

5.3.1 Subspace learning using ICA (SL-ICA)

ICA generalizes the technique of PCA. When some mixtures of probabilistically independent source signals are observed, ICA recovers the original source signals from the observed mixtures without knowing how the sources are mixed. The assumption made is that the observation vectors X = (x_1, x_2, ..., x_M)^T can be represented in terms of a linear superposition of unknown independent vectors S = (s_1, s_2, ..., s_M)^T:

X = AS   (40)

where A is an unknown mixing matrix (M×N). ICA finds a matrix W so that the resulting vectors:

Y = WX   (41)

recover the independent vectors S, probabilistically permuted and rescaled. W is roughly the inverse matrix of A. Applied to background modeling, the ICA model is given by:

Y = WX   (42)

where X = (x_B, x_F)^T is the mixture data matrix of size 2×K, in which K = M×N.
x = ( x,x,...,xk ) s he frs frame whch can conan or no foreground objecs and x = ( xé,x,...,xk ) s he second frame whch T conan foreground objecs. W = ( w,w ) s he demxng marx, n whch w = ( w,w ) wh =,. T Y = ( y, y ) s he esmaed source sgnals n whch y ( y, y,..., y ). Several ICA algorhms can be = k used o deermne W. Yamazak e al. [98] used a neural learnng algorhm [99]. In anoher way, Tsa and La [00] used a Parcle Swarm Algorhm (PSO) [0]. Once W s

determined, there are two ways in the literature to generate the background and the foreground mask images:

- In the first case, x_1 contains foreground objects, as in Yamazaki et al. [98]. The foreground masks for the frames x_1 and x_2 are obtained by thresholding y_1 and y_2 respectively. The background image is obtained by replacing the regions representing foreground objects in x_1 by the corresponding regions representing background in x_2.

- In the second case, x_1 contains no foreground object, as in Tsai and Lai [100]. The foreground mask for the frame x_2 is obtained by thresholding y_2. The background image is y_1.

The ICA model was tested on traffic scenes by Yamazaki et al. [98] and showed robustness to background changes such as illumination changes. In [100], the algorithm was tested on indoor scenes where sudden illumination changes appear.

5.3.2 Subspace learning using INMF (SL-INMF)

Non-negative matrix factorization (NMF), with rank r, decomposes the data matrix V ∈ R^{p×q} into two matrices: W ∈ R^{p×r}, called the mixing matrix, and H ∈ R^{r×q}, named the encoding matrix:

V ≈ WH   (43)

So, NMF aims to find an approximate factorization that minimizes the reconstruction error. Different cost functions based on the reconstruction error have been defined in the literature, but because of its simplicity and effectiveness, the squared error is the most used:

F = ||V − WH||² = Σ_{i=1}^{p} Σ_{j=1}^{q} (V_ij − (WH)_ij)²   (44)

where the subscript ij stands for the (i,j)-th matrix entry. Applied to background modelling, Bucak et al. [102, 103] proposed an incremental NMF algorithm. The background initialization is made using N training frames: each column of V is a vectorized frame, so V is a matrix of size (p·q)×N. The matrices W and H are updated incrementally. The foreground detection is made by thresholding the residual error, which corresponds to the deviation between the background model and the projection of the current frame onto the background model. The INMF shows performance on dynamic backgrounds and illumination changes similar to that of the IRPCA proposed by Li et al. [70].

5.3.3 Subspace learning using Incremental Rank-(R1,R2,R3) Tensor (SL-IRT)

The previous subspace learning methods consider an image as a vector, so the local spatial information is almost lost.
Li et al. [104, 105] proposed to use a high-order tensor learning algorithm, called incremental rank-(R1,R2,R3) tensor based subspace learning, to take the spatial information into account. This online algorithm constructs a low-order tensor eigenspace model in which the sample mean and the eigenbasis are updated adaptively. Denote G = {BM_q ∈ R^{M×N}}_{q=1,2,...,t} as a scene's background appearance sequence, with BM_q being the q-th frame, and denote p_xy as the pixel at position (x, y) of the scene. The tensor-based eigenspace model for an existing appearance tensor A_xy = {BM_q^xy ∈ R^{I1×I2}}_{q=1,2,...,t} (I1 = I2 = 5, corresponding to a K-neighborhood of p_xy with K = I1·I2 − 1 = 24) consists of the maintained eigenspace dimensions (R1, R2, R3) corresponding to the three tensor unfolding modes, the mode-n column projection matrices U^(n) ∈ R^{In×Rn}, the mode-3 row projection matrix V^(3) ∈ R^{(I1·I2)×R3}, the column means L^(1) and L^(2) of the mode-(1,2) unfolding matrices A_(1) and A_(2), and the row mean L^(3) of the mode-3 unfolding matrix A_(3). Given the K-neighbor image region I_{t+1}^{xy} ∈ R^{I1×I2} centered at the pixel p_xy of the current incoming frame I_{t+1} ∈ R^{M×N}, the distance RM_xy (determined by the three reconstruction error norms of the three modes) between I_{t+1}^{xy} and the learned tensor-based eigenspace model is computed. Then, the foreground detection is defined as follows:

p_xy is classified as background if exp(−(RM_xy)²/σ²) > T
p_xy is classified as foreground otherwise   (45)

where σ is a scaling factor and T denotes a threshold. Thus, the new background model BM_{t+1}(x, y) at time t+1 is defined as:

BM_{t+1}(x, y) = H_xy if p_xy is classified as foreground
BM_{t+1}(x, y) = I_{t+1}(x, y) otherwise   (46)

where H_xy = (1 − α)·MB_t(x, y) + α·I_{t+1}(x, y), MB_t is the mean matrix of BM at time t and α is a learning rate factor. Then, the tensor eigenspace model is updated incrementally, and so on. The IRT shows more robustness to noise than the IRPCA proposed by Li et al. [70].
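All the subspace models above (SL-PCA, SL-ICA, SL-INMF, SL-IRT) share the same detection pattern: project the incoming frame onto the learned background subspace and threshold the reconstruction error. A minimal sketch with a toy orthonormal basis (illustrative data and function names, not the actual IRT tensor machinery):

```python
def residual(frame, mean, basis):
    """Norm of the reconstruction error of a vectorized frame against an
    orthonormal background basis: || d - B B^T d || with d = frame - mean."""
    d = [f - m for f, m in zip(frame, mean)]
    coeffs = [sum(bk * dk for bk, dk in zip(b, d)) for b in basis]
    recon = [sum(c * b[k] for c, b in zip(coeffs, basis)) for k in range(len(d))]
    return sum((dk - rk) ** 2 for dk, rk in zip(d, recon)) ** 0.5

def is_foreground(frame, mean, basis, threshold):
    # a large residual means the frame is not explained by the background model
    return residual(frame, mean, basis) > threshold

mean = [0.0, 0.0, 0.0]
basis = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]   # toy 2-D subspace of R^3
print(residual([3.0, 4.0, 0.0], mean, basis))  # 0.0 (inside the subspace)
print(residual([0.0, 0.0, 5.0], mean, basis))  # 5.0 (orthogonal to it)
```

In the actual methods the basis comes from the incremental eigen-decomposition (or W in the NMF case) and the residual is thresholded per pixel or per region.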

Table 6. Performance evaluation on dynamic backgrounds and illumination changes

Method         Dynamic backgrounds   Illumination changes   Indoor/outdoor scene             Applications
SG [3]         -                     Slow changes           Indoor scene                     Motion Capture
MOG [4]        Slow movement         Slow changes           Outdoor scene                    Video Surveillance
KDE [5]        Yes                   Slow changes           Outdoor scene                    Video Surveillance
SL-PCA [65]    -                     Yes                    Outdoor scene (small objects)    Video Surveillance
SVM [80]       -                     Slow changes           Outdoor scene                    Video Surveillance
SVR [83]       Slow movement         Slow changes           Outdoor scene                    Video Surveillance
SVDD [89]      Yes                   Yes                    Outdoor scene                    Video Surveillance
SGG [90]       -                     Slow changes           Indoor scene                     Motion Capture
MOGG [94]      Slow movement         Slow changes           Outdoor scene                    Video Surveillance
SL-ICA [100]   -                     Yes                    Outdoor scene (small objects)    Video Surveillance
SL-INMF [102]  -                     Yes                    Outdoor scene (small objects)    Video Surveillance
SL-IRT [105]   -                     Yes                    Outdoor scene (small objects)    Video Surveillance

Table 7. Computational complexity

Method         Background Initialization   Background Maintenance   Foreground Detection
SG [3]         O(N)                        O(1)                     O(1)
MOG [4]        O(NK)                       O(K)                     O(K)
KDE [5]        O(N)                        O(n)                     O(1)
SL-PCA [65]    O(N)                        O(N+M)                   O(P)
SVM [80]       O(N)                        O(N+1)                   O(1)
SVR [83]       O(N)                        O(1)                     O(1)
SVDD [89]      O(N)                        O(1)                     O(1)
SGG [90]       O(N)                        O(1)                     O(1)
MOGG [94]      O(NK)                       O(K)                     O(K)
SL-ICA [100]   O(N)                        O(M)                     O(P)
SL-INMF [102]  O(N)                        O(M)                     O(P)
SL-IRT [105]   O(N)                        O(M)                     O(P)

6. PERFORMANCE EVALUATION

We first evaluate the ability of each method to deal with dynamic backgrounds and illumination changes. Then, the evaluation is conducted in terms of per-pixel computational complexity and memory requirements.

6.1 Challenges

Table 6 groups the ability of each method to deal with dynamic backgrounds and illumination changes. The third column indicates in which type of scene the method is well suited. The related applications are indicated in the fourth column.

6.2 Computational complexity

The SG is the fastest method because the classification is just made using a threshold and the background maintenance just adapts the mean and the variance. Its complexity depends on N for the initialization. The MOG method has O(NK) complexity, with K the number of Gaussian distributions used, typically between 3 and 5.
For maintenance, the KDE computes its value over the Gaussian kernels centered on the past n frames, thus incurring O(n) complexity, with n typically as high as 100. For the reconstructive subspace learning methods, the computational complexity is related to the operations needed to compute the elements stored and updated, i.e. the principal matrix or the eigenstructures. For example, the incremental tensor subspace learning requires O(I1·I2·(R1+R2+R3)) operations [105]. For the foreground detection, the reconstructive subspace learning methods have an estimated complexity per pixel of O(P), where P is the number of best eigenvectors. For the background maintenance, their complexity is related to M, which is the number of samples used to update the model; M = 1 if the model is updated every frame. Table 7 shows the per-pixel computational complexity of each algorithm at each stage. More details about the complexity of each algorithm can be found in the corresponding papers.

6.3 Memory requirements

For the statistical methods, the memory complexity per pixel is the same as the computational complexity. At classification time, reconstructive approaches require a memory complexity per pixel of O(P), with P the number of best eigenvectors. However, at training time these methods require allocation of all the N training images, with O(N) complexity. For the reconstructive subspace learning methods, the memory requirements are related to the elements stored and updated, i.e. the principal matrix or the eigenstructures. For example, the incremental tensor subspace learning requires O(I1·R1 + I2·R2 + (I1·I2)·R3) memory units [105].

7. COMPARISON

We have chosen to compare different improvements of the MOG for dynamic backgrounds, and the subspace learning models (SL-PCA, SL-ICA, SL-INMF and SL-IRT) for illumination changes. Results on the Wallflower dataset provided by Toyama et al. [0] are presented. We collected these results because of how frequently this dataset is used in the field. This frequency is due to its faithful representation of real-life situations typical of scenes susceptible to video surveillance. Moreover, it consists of seven video sequences, each sequence presenting one of the difficulties a practical task is likely to encounter (i.e. illumination changes, dynamic backgrounds). The size of the images is 160×120 pixels. A brief description of the Wallflower image sequences follows:

- Moved Object (MO): A person enters a room, makes a phone call, and leaves. The phone and the chair are left in a different position. This video contains 747 images.
- Time of Day (TOD): The light in a room gradually changes from dark to bright. Then, a person enters the room and sits down. This video contains 5890 images.
- Light Switch (LS): A room scene begins with the lights on. Then a person enters the room and turns off the lights for a long period. Later, a person walks into the room, switches on the light, and moves the chair, while the door is closed. This video contains 75 images.
- Waving Trees (WT): A tree is swaying and a person walks in front of the tree. This video contains 87 images.
- Camouflage (C): A person walks in front of a monitor, which has rolling interference bars on the screen. The bars include colors similar to the person's clothing. This video contains 353 images.
- Bootstrapping (B): The image sequence shows a busy cafeteria and each frame contains people. This video contains 3055 images.
- Foreground Aperture (FA): A person with a uniformly colored shirt wakes up and begins to move slowly. This video contains 3 images.

For each sequence, the ground truth is provided for one image, at the moment when the algorithm has to show its robustness to a specific change in the scene. Thus, the performance is evaluated against hand-segmented ground truth.
Three terms are used in the evaluation: False Positive (FP) is the number of background pixels wrongly marked as foreground; False Negative (FN) is the number of foreground pixels wrongly marked as background; Total Error (TE) is the sum of FP and FN.

7.1 MOG and its improvements

For the first category, we compare the MOG with its main improvements. Table 8 and Fig. (5) group the experimental results found in the literature for the chosen algorithms, which are:

1. The original algorithm: Stauffer and Grimson [4].
2. Three intrinsic improvements: White et al. [67], who used a better setting of the learning rates obtained by Particle Swarm Optimization; Wang et al. [60], who modified the foreground detection step using a mixed color space, i.e. a normalized RGB color space for pixels with high intensities and the RGB color space for pixels with low intensities; and Setiawan et al. [97], who used the IHLS space.
3. Three extrinsic improvements: Schindler et al. [09], who used Markov Random Fields (MRFs) to smooth the results spatially; Cristani et al. [7], who proposed the Spatial-Time Adaptive Per Pixel Mixture Of Gaussians, called S-TAPPMOG; and Cristani et al. [8], who used an adaptive spatio-temporal neighborhood analysis, called ASTNA. For these last two algorithms, the authors do not give results for the image sequences Moved Object, Time of Day and Light Switch, so for these two we have indicated the Total Error without those image sequences.

From Table 8, we can see that the original MOG gives the biggest total error. A better setting of the learning rate and the threshold T using PSO [67] approximately halves the number of total errors. The use of the IHLS color space [97] greatly decreases the TE, which falls just under 10 000. The improvement proposed by Wang et al. [60] gives the best results among the intrinsic improvements. For the extrinsic improvements, the best results are obtained by the MOG using MRFs proposed by Schindler et al. [09], followed by S-TAPPMOG [7] and ASTNA [8]. For all the methods, the image sequence Light Switch (LS) gives the largest amount of false positives; here, the best result is obtained by the method proposed by Schindler et al. [09].
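The FP, FN and TE counts used throughout Tables 8 and 9 can be computed from a predicted mask and the hand-segmented ground truth; a minimal sketch (hypothetical helper, assuming flattened binary masks with 1 = foreground, 0 = background):

```python
def wallflower_errors(pred_mask, gt_mask):
    """FP, FN and TE (Total Error = FP + FN) between a predicted
    foreground mask and the ground-truth mask."""
    fp = sum(1 for p, t in zip(pred_mask, gt_mask) if p == 1 and t == 0)
    fn = sum(1 for p, t in zip(pred_mask, gt_mask) if p == 0 and t == 1)
    return fp, fn, fp + fn

print(wallflower_errors([1, 1, 0, 0], [1, 0, 1, 0]))  # (1, 1, 2)
```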
The use of IHLS [97] gives the best improvement on the image sequence Camouflage (C), and for the method proposed by Wang et al. [60], it is the image sequence Waving Trees (WT). In conclusion, this performance evaluation shows that taking spatial and temporal consistency into account improves the results in a significant way. Fig. (6) presents the overall performance of the five first algorithms. It is not intended to be a definitive ranking of these algorithms; such a ranking is necessarily task-, sequence-, and application-dependent.

Fig. (5). Results on the Wallflower dataset [6] for the MOG and its improvements. Each column shows one sequence (MO, TD, LS, WT, C, B, FA); the rows show the test image, the ground truth, and the masks obtained by MOG (Stauffer et al. [4]), MOG with PSO (White et al. [67]), MOG using IHLS (Setiawan et al. [97]), Improved MOG (Wang et al. [60]), MOG with MRF (Schindler et al. [09]), S-TAPPMOG (Cristani et al. [7]) and ASTNA (Cristani et al. [8]); the last two are not available for MO, TD and LS.

Table 8. Comparison on the Wallflower dataset [6] for the MOG and its improvements.

Algorithm              Type   MO   TD    LS   WT   C     B    FA   TE
MOG [4]                FN     0    008   633  33   398   874  44
                       FP     0    0     469  34   3098  7    530  7053
MOG with PSO [67]      FN     0    807   76   43   386   55   39
                       FP     0    6     77   689  463   59   57   396
MOG-IHLS [97]          FN     0    379   46   3    88    647  37
                       FP     0    99    98   70   467   333  554  9739
Improved MOG FD [60]   FN     0    597   48   44   06    76   74
                       FP     0    358   669  88   43    34   54   708
MOG with MRF [09]      FN     0    47    04   5    6     060  34
                       FP     0    40    546  3    467   0    604  3808
S-TAPPMOG [7]          FN     -    -     -    53   643   44   9
                       FP     -    -     -    5    38    8    377  7844
ASTNA [8]              FN     -    -     -    53   83    349  900
                       FP     -    -     -    00   73    73   360  703

Fig. (7). Results on the Wallflower dataset [6] for the subspace learning models. Each column shows one sequence (MO, TD, LS, WT, C, B, FA); the rows show the test image, the ground truth, and the masks obtained by SL-PCA (Oliver et al. [65]), SL-ICA (Tsai and Lai [100]), SL-INMF (Bucak et al. [102]) and SL-IRT (Li et al. [104]).

Table 9. Comparison on the Wallflower dataset [6] for the subspace learning models.

Algorithm: SL-PCA [65], SL-ICA [100], SL-INMF [102], SL-IRT [104]; for each algorithm, FN and FP are reported per sequence (MO, TD, LS, WT, C, B, FA) with the TE. Values, in the source layout:
0 879 96 07 350 065 6 36 057 548 0 99 557 337 3054 0 0 0 48 43 0 74 593 337 666 0 48 303 65 34 0 8 8 455 49 0 59 389 7 4 304 44 7677 69 537 560 7 5308 6 48 40 34 9098 90 65 734 438 7053 080

7.2 Subspace learning models

SL-PCA, which is from the first category, is compared with the subspace learning models from the third category: SL-ICA, SL-INMF and SL-IRT. Table 9 and Fig. (7) group the experimental results found in the literature for the subspace learning algorithms. From Table 9, we can see that SL-ICA gives the smallest TE, followed by SL-IRT, SL-PCA and SL-INMF. Fig. (8) shows the overall performance. This ranking has to be taken with precaution because a poor performance on one video influences the TE and thus modifies the rank. The main interpretation is that all the models are robust to illumination changes, as can be seen on the sequences called Time of Day (TD) and Light Switch (LS). Otherwise, the subspace learning algorithms are more or less adapted to specific situations. For example, only SL-PCA gives FP on the sequence called Moved Object (MO), due to the fact that its model is not updated over time. In the same way, SL-INMF gives the biggest total error due to its results on the sequence called Camouflage (C). This is confirmed by Fig. (9), which shows the performance without this sequence; in this case, SL-INMF is the second in terms of performance. SL-ICA has globally good performance except on the sequence called Bootstrap (B), where it gives fewer true detections. SL-IRT seems to be more efficient in the case of camouflage. SL-PCA gives fewer FN than FP; for SL-ICA, SL-INMF and SL-IRT, it is the contrary. We can remark that SL-ICA provides far fewer FP than FN, which is interesting in video-surveillance because it decreases false alarms.

Fig. (6). Overall performance on the Wallflower dataset [6] for the MOG and its improvements (total FN and FP per method: MOG [4], MOG with PSO [67], MOG-IHLS [97], Improved MOG [60], MOG with MRF [09]).