Special Issue on “Background Modeling for Foreground Detection in Real-World Dynamic Scenes”

Thierry Bouwmans · Jordi Gonzàlez · Caifeng Shan · Massimo Piccardi · Larry Davis

Although background modeling and foreground detection are not mandatory steps for computer vision applications, they may prove useful because they separate the primary objects, usually called "foreground", from the remaining part of the scene, called "background", and permit different algorithmic treatment in video processing applications such as video surveillance, optical motion capture, multimedia applications, teleconferencing and human-computer interfaces. Conventional background modeling methods exploit the temporal variation of each pixel to model the background, and foreground detection is then performed by change detection. The last decade witnessed many significant publications on background modeling, but new applications in which the background is not static, such as recordings taken from mobile devices or Internet videos, require new developments to detect moving objects robustly in challenging environments. Effective methods that are robust to both dynamic backgrounds and illumination changes in real scenes, with fixed cameras or mobile devices, are therefore needed, and different strategies may be used, such as automatic feature selection, model selection or hierarchical models.

T. Bouwmans, Lab. MIA, Univ. of La Rochelle, France. Tel.: +33-0546457202. Fax: +33-0546458242. E-mail: tbouwman@univ-lr.fr
J. Gonzàlez, Computer Vision Center, Univ. Autònoma de Barcelona, Spain. E-mail: jordi.gonzalez@uab.cat
C. Shan, Philips Research, The Netherlands. E-mail: caifeng.shan@philips.com
M. Piccardi, Univ. of Technology, Sydney, Australia. E-mail: massimo.piccardi@uts.edu.au
L. Davis, CV Lab, Univ. of Maryland, USA. E-mail: lsd@umiacs.umd.edu
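The conventional per-pixel pipeline described above can be sketched as follows: a minimal running-average background model with thresholded change detection. The learning rate `alpha`, threshold `tau`, and the synthetic frame below are illustrative choices, not taken from any of the papers in this issue.

```python
import numpy as np

def update_background(bg, frame, alpha=0.05):
    """Running-average update: bg <- (1 - alpha) * bg + alpha * frame."""
    return (1.0 - alpha) * bg + alpha * frame

def detect_foreground(bg, frame, tau=25.0):
    """Per-pixel change detection: foreground where |frame - bg| > tau."""
    return np.abs(frame - bg) > tau

# Synthetic example: a flat background and one bright "foreground" blob.
bg = np.full((48, 64), 100.0)       # current background estimate
frame = bg.copy()
frame[10:20, 30:40] = 200.0         # 10 x 10 moving object
mask = detect_foreground(bg, frame) # True on the blob, False elsewhere
bg = update_background(bg, frame)   # blob slowly absorbed into the model
```

In practice the threshold and learning rate are tuned per scene, and morphological post-processing is usually applied to the mask; the dynamic-background and illumination challenges discussed in this issue are exactly the cases where such a simple model fails.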
Another constraint on background modeling methods is that advanced models have to run in real time and with low memory requirements, so algorithms may need to be redesigned to meet these requirements. Readers will thus find 1) new methods to model the background, 2) recent strategies that improve foreground detection and tackle challenges such as dynamic backgrounds and illumination changes, and 3) adaptive and incremental algorithms that achieve real-time performance.

First, Shah et al. [10] adopt the Mixture of Gaussians (MOG) [12] as the basic framework of their complete system. A new online, self-adaptive method permits automatic selection of the GMM parameters. They then introduce several new solutions to address key challenges such as sudden illumination changes and ghosts: a novel hierarchical SURF feature matching algorithm suppresses ghosts in the foreground mask, and a voting-based scheme exploits spatial and temporal information to refine the mask. Finally, the temporal and spatial history of foreground blobs is used to detect and handle paused objects. The proposed model shows significant robustness in the presence of illumination changes and ghosts.

Shimada et al. [11] propose a novel framework for the GMM that reduces the memory requirement without loss of accuracy. This "case-based background modeling" creates or removes a background model only when necessary, and a case-by-case model is shared by some of the pixels. Finally, pixel features are divided into two groups, one used for model selection and the other for modeling. This complete approach realizes a low-cost yet highly accurate background model: memory usage and computational cost can be cut to half of those of the traditional GMM, with better accuracy.

Alvar et al.
[1] present an algorithm called the Mixture of Merged Gaussians Algorithm (MMGA) that drastically reduces execution time to reach real-time operation without sacrificing reliability or accuracy. The algorithm combines the probabilistic model of the Mixture of Gaussians (MOG) [12] with the learning process of the Real Time Dynamic Ellipsoidal Neural Network (RTDENN) model. Results show that the MMGA achieves a very significant reduction of execution time compared to the MOG, with a higher degree of robustness against noise and illumination changes.

Modeling the background with a Gaussian mixture rests on the assumption that the background and foreground distributions are Gaussian, which is not the case in many environments; moreover, such a model cannot distinguish moving shadows from moving objects. In this context, Elguebaly and Bouguila [4] propose a mixture of asymmetric Gaussians to enhance the robustness and flexibility of mixture modeling, together with a shadow detection scheme that removes unwanted shadows from the scene.

Narayana et al. [8] describe a probabilistic model that includes a background likelihood, a foreground likelihood, and a prior at each pixel, and uses Bayes' rule to classify pixels. They argue that a clear separation of the model components yields a model that is easy to interpret and extend. Their likelihood models are built not only from past observations at a given pixel location but also from observations in a spatial neighborhood around it, which allows them to model the influence between neighboring pixels. Although similar in spirit to the joint domain-range model, their model overcomes certain deficiencies of that model.

Hernandez-Lopez and Rivera [7] adopt a change detection method to achieve real-time performance.
This approach implements a probabilistic segmentation based on the Quadratic Markov Measure Field (QMMF) model. The framework regularizes the likelihood of each pixel belonging to each of the classes, background or foreground. The likelihood covers two cases: in the first, the background is static and the foreground may be static or moving; in the second, the background is unstable and the foreground is moving. Moreover, this likelihood is robust to illumination changes, cast shadows and camouflage situations. The algorithm was implemented in CUDA on an NVIDIA GPU to meet real-time execution requirements.

Camplani et al. [3] develop a Bayesian framework that accurately segments foreground objects in RGB-D imagery. The final segmentation is obtained by combining a prediction of the foreground regions, carried out by a novel Bayesian network with a depth-based dynamic model, with two independent depth- and color-based GMM background models. As a result, more compact segmentations and refined foreground object silhouettes are obtained.

Along another line, Fernandez-Sanchez et al. [5] propose a depth-extended Codebook model that fuses range and color information, together with a post-processing mask-fusion stage to get the best of each feature. Results are presented on a complete dataset of stereo images.

Seidel et al. [9] adopt a Robust PCA model to separate sparse foreground objects from the background. While many RPCA algorithms use the l1-norm as a convex relaxation, their approach, smoothed lp-quasi-norm Robust Online Subspace Tracking (pROST), is based on alternating minimization on manifolds. An implementation on a graphics processing unit (GPU) achieves real-time performance at a resolution of 160 × 120.
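The low-rank-plus-sparse decomposition behind RPCA-based subtraction (D ≈ L + S, with the background in the low-rank term L and the foreground in the sparse term S, frames stacked as columns of D) can be illustrated with a one-shot SVD truncation. This is a minimal sketch on synthetic data, not pROST's lp-quasi-norm manifold optimization; the sizes, rank, and threshold `tau` are illustrative assumptions.

```python
import numpy as np

def lowrank_sparse_split(D, rank=1, tau=30.0):
    """Split D into L + S: L is the rank-`rank` SVD truncation of D,
    and S keeps only residual entries whose magnitude exceeds tau."""
    U, s, Vt = np.linalg.svd(D, full_matrices=False)
    L = (U[:, :rank] * s[:rank]) @ Vt[:rank, :]
    R = D - L
    S = np.where(np.abs(R) > tau, R, 0.0)
    return L, S

# Synthetic data: 5 frames of 100 pixels, stacked as columns; a constant
# background of value 50 plus one bright "foreground" pixel per frame.
D = np.full((100, 5), 50.0)
for t in range(5):
    D[20 + t, t] += 120.0
L, S = lowrank_sparse_split(D)
# The five outliers survive in S; rows without outliers are recovered
# exactly in L, since the rank-1 fit here equals each row's column mean.
```

Proper RPCA solvers replace the hard truncation and thresholding with an optimization over the nuclear norm (or, in pROST, a smoothed lp-quasi-norm on a manifold) so that the split remains stable under noise and online updates.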
Experimental results show that the method succeeds under a variety of challenges such as camera jitter and dynamic backgrounds.

Hagege [6] describes a scene appearance model as a function of the behavior of static illumination sources, within or beyond the scene, and arbitrary three-dimensional configurations of patches and their reflectance distributions. A spatial prediction technique was then developed to predict the appearance of the scene from a few measurements within it. The scene appearance model and the prediction technique were developed analytically and tested empirically. Results show that the model permits detecting changes that are not caused by illumination, at the resolution of single pixels, despite sudden and complex illumination changes, and independently of the texture of the region in the neighborhood of the pixel.

The maritime environment represents a challenging application due to the complexity of the observed scene (waves on the water surface, boat wakes, weather issues). In this context, Bloisi et al. [2] present a method for creating a discretization of an unknown distribution that can model highly dynamic backgrounds, such as water with varying light and weather conditions. A quantitative evaluation carried out on the recent MAR datasets demonstrates the effectiveness of this approach.

Zhang et al. [13] propose an effective mosaic algorithm, Combined SIFT and Dynamic Programming (CSDP), for image mosaicking, a useful preprocessing step for background subtraction in videos recorded by a moving camera. To deal with ghosting and mosaic failure, the algorithm uses an improved optimal-seam searching criterion that protects moving objects with an edge-enhanced weighting intensity-difference operator, thereby addressing the ghosting and incompleteness effects induced by moving objects.
Experimental results show the method's effectiveness in the presence of large exposure differences and large parallax between adjacent images.

Acknowledgments  We thank all the reviewers for their valuable comments, which ensured the high quality of the special issue, and all the contributing authors for their interesting and innovative work. We would also like to thank the current Editor-in-Chief, Prof. Mubarak Shah, for sharing our vision and providing guidance. The editorial staff of MVA, especially Cherry Place and Shradha Menon, have been extremely supportive, helpful, and patient throughout the entire process.

References

1. M. Alvar, A. Rodriguez-Calvo, A. Sanchez-Miralles, and A. Arranz. Mixture of merged Gaussian algorithm using RTDENN. Machine Vision and Applications, 2013.
2. D. Bloisi, A. Pennisi, and L. Iocchi. Background modelling in the maritime domain. Machine Vision and Applications, 2013.
3. M. Camplani, C. Del Blanco, L. Salgado, N. Garcia, and F. Jaureguizar. Advanced background modeling with RGB-D sensors through classifiers combination and inter-frame foreground prediction. Machine Vision and Applications, 2013.
4. T. Elguebaly and N. Bouguila. Background subtraction using finite mixtures of asymmetric Gaussian distributions and shadow detection. Machine Vision and Applications, 2013.
5. E. Fernandez-Sanchez, J. Diaz, and E. Ros. Background subtraction model based on color and depth cues. Machine Vision and Applications, 2013.
6. R. Hagege. Scene appearance model based on spatial prediction. Machine Vision and Applications, 2013.
7. F. Hernandez-Lopez and M. Rivera. Change detection by probabilistic segmentation from monocular view. Machine Vision and Applications, 2013.
8. M. Narayana, A. Hanson, and E. Learned-Miller. Background subtraction: separating the modeling and the inference. Machine Vision and Applications, 2013.
9. F. Seidel, C. Hage, and M. Kleinsteuber. pROST: a smoothed lp-norm robust online subspace tracking method for realtime background subtraction in video. Machine Vision and Applications, 2013.
10. M. Shah, J. Deng, and B. Woodford. Video background modeling: recent approaches, issues and our solutions. Machine Vision and Applications, 2013.
11. A. Shimada, Y. Nonaka, H. Nagahara, and R. Taniguchi. Case-based background modeling: towards low-cost and high-performance background model. Machine Vision and Applications, 2013.
12. C. Stauffer and E. Grimson. Adaptive background mixture models for real-time tracking. IEEE Conference on Computer Vision and Pattern Recognition, CVPR, pages 246-252, 1999.
13. L. Zeng, S. Zhang, and Y. Zhang. Dynamic image mosaic via SIFT and dynamic programming. Machine Vision and Applications, 2013.

Author Biographies

Thierry Bouwmans is an Associate Professor at the University of La Rochelle, France. His research interests lie mainly in the detection of moving objects in challenging environments. He has recently authored over 30 papers in the field of background modeling and foreground segmentation, investigating in particular fuzzy concepts, discriminative subspace learning models and robust PCA. He has also written surveys on the mathematical tools used in the field and has supervised Ph.D. students in this area. He is the creator and administrator of the Background Subtraction Web Site, and has served as a reviewer for numerous international conferences and journals.

Jordi Gonzàlez completed his PhD in Computer Engineering in 2004 at Universitat Autònoma de Barcelona (UAB), Spain. At present, he is an Associate Professor in Computer Science and responsible for doctoral studies at the Computer Science Department, UAB. He is also a research fellow at the Computer Vision Center. The topic of his research is the cognitive evaluation of human behaviours in image sequences, or video-hermeneutics.
The aim is the generation of both linguistic and visual descriptions that best explain the behaviours observed in imagery streams. Towards this end, he has co-authored more than 100 publications and has organized several scientific events and workshops. He is also an Area Editor of the Computer Vision and Image Understanding journal and a member of the Editorial Board of the IET Computer Vision journal, and has served as a Program Committee member and reviewer for numerous international conferences and journals.

Caifeng Shan is a Senior Scientist with Philips Research, Eindhoven, The Netherlands. He received the PhD degree in computer vision from Queen Mary, University of London, UK. His research interests include computer vision, pattern recognition, image/video processing and analysis, machine learning, multimedia, and related applications. He has authored over 50 technical papers and 9 patent applications. He is an Associate Editor of IEEE Transactions on Circuits and Systems for Video Technology. He has edited three books and has been a Guest Editor of IEEE Transactions on Multimedia, IEEE Transactions on Circuits and Systems for Video Technology, and Signal Processing (Elsevier). He has organized several international workshops at flagship conferences such as IEEE ICCV and ACM Multimedia, and has served as a Program Committee member and reviewer for numerous international conferences and journals.

Massimo Piccardi is currently a Professor at the University of Technology, Sydney (UTS). He received an MEng and a PhD from the University of Bologna, Italy, in 1991 and 1995, respectively. His research interests are in the areas of pattern recognition, computer vision, and image and video analysis, with main applications to tracking and action recognition. Over his career, he has authored or co-authored over one hundred and twenty scientific papers in international journals and conference proceedings, as well as several book chapters.
He is an initiator and a steering committee member of the IEEE conference series on Advanced Video and Signal-Based Surveillance (AVSS), and serves as an Associate Editor for the journals Machine Vision and Applications and Image and Vision Computing.

Larry S. Davis is currently a Professor in the Institute and the Computer Science Department, as well as Chair of the Computer Science Department. He was named a Fellow of the IEEE in 1997. Prof. Davis is known for his research in computer vision and high-performance computing. He has published over 100 journal papers and 200 conference papers, and has supervised over 25 Ph.D. students. He is an Associate Editor of the International Journal of Computer Vision and an area editor for Computer Models for Image Processing: Image Understanding. He has served as program or general chair for most of the field's major conferences and workshops, including the 5th International Conference on Computer Vision, the 2004 Computer Vision and Pattern Recognition Conference, the 11th International Conference on Computer Vision held in 2006, and the 2010 Computer Vision and Pattern Recognition Conference.