MAPPING URBAN LAND COVER USING MULTI-SCALE AND SPATIAL AUTOCORRELATION INFORMATION IN HIGH RESOLUTION IMAGERY. Brian A. Johnson


MAPPING URBAN LAND COVER USING MULTI-SCALE AND SPATIAL AUTOCORRELATION INFORMATION IN HIGH RESOLUTION IMAGERY

by

Brian A. Johnson

A Dissertation Submitted to the Faculty of The Charles E. Schmidt College of Science in Partial Fulfillment of the Requirements for the Degree of Doctor of Philosophy

Florida Atlantic University
Boca Raton, FL
May 2012

This dissertation was prepared under the direction of the candidate's dissertation advisor, Dr. Zhixiao Xie, Department of Geosciences, and has been approved by the members of his supervisory committee. It was submitted to the faculty of the Charles E. Schmidt College of Science and was accepted in partial fulfillment of the requirements for the degree of Doctor of Philosophy.

SUPERVISORY COMMITTEE:
Zhixiao Xie, Ph.D., Dissertation Advisor
Chair, Department of Geosciences
Gary Perry, Ph.D., Dean, Charles E. Schmidt College of Science

ACKNOWLEDGEMENTS

I would like to thank my advisor, Dr. Zhixiao Xie, for all of his support and advice over the last several years. I also want to thank Dr. Charles Roberts, Dr. Caiyun Zhang, and Dr. Ken Chen for their support and ideas that helped improve this dissertation. I had a great time working with and learning from all of you, and you made this experience an enjoyable one. Finally, I would like to thank the Broward County Property Appraiser for providing the Broward County imagery.

ABSTRACT

Author: Brian A. Johnson
Title: Mapping Urban Land Cover Using Multi-Scale and Spatial Autocorrelation Information in High Resolution Imagery
Institution: Florida Atlantic University
Dissertation Advisor: Dr. Zhixiao Xie
Degree: Doctor of Philosophy
Year: 2012

Fine-scale urban land cover information is important for a number of applications, including urban tree canopy mapping, green space analysis, and urban hydrologic modeling. Land cover information has traditionally been extracted from satellite or aerial images using automated image classification techniques, which classify pixels into different categories of land cover based on their spectral characteristics. However, in fine spatial resolution images (4 meters or better), the high degree of within-class spectral variability and between-class spectral similarity of many types of land cover leads to low classification accuracy when pixel-based, purely spectral classification techniques are used. Object-based classification methods, which involve segmenting an image into relatively homogeneous regions (i.e. image segments) prior to classification, have been shown to increase classification accuracy by incorporating the spectral (e.g. mean, standard deviation) and non-spectral (e.g. texture, size, shape) information of image segments for classification. One difficulty with the object-based method, however, is that a segmentation parameter (or set of parameters), which determines the average size of segments (i.e. the segmentation scale), is difficult to choose. Some studies use one segmentation scale to segment and classify all types of land cover, while others use multiple scales due to the fact that different types of land cover typically vary in size. In this dissertation, two multi-scale object-based classification methods were developed and tested for classifying high resolution images of Deerfield Beach, FL and Houston, TX. These multi-scale methods achieved higher overall classification accuracies and Kappa coefficients than single-scale object-based classification methods. Since the two dissertation methods used an automated algorithm (Random Forest) for image classification, they are also less subjective and easier to apply to other study areas than most existing multi-scale object-based methods that rely on expert knowledge (i.e. decision rules developed based on detailed visual inspection of image segments) for classifying each type of land cover.

MAPPING URBAN LAND COVER USING MULTI-SCALE AND SPATIAL AUTOCORRELATION INFORMATION IN HIGH RESOLUTION IMAGERY

List of Tables
List of Figures
1. Introduction
   Background Information
   Objectives
   Organization of Dissertation
2. Literature Review: Remote Sensing of Urban Land Use and Land Cover
   Introduction
   Difficulties with Purely Spectral Classification for High Spatial Resolution Images
   Incorporation of Non-Spectral Information for Image Classification
   Object-based Image Analysis for Urban Land Use/Land Cover Mapping
   Contributions of the Research Conducted in this Dissertation
3. Study Areas, Data, and General Methods
   Study Areas and Data Sources
      Deerfield Beach, Florida study area
      Houston, Texas study area
   General Methods
      Image segmentation (Deerfield study area; Houston study area)
      Random Forest classification algorithm
4. Image Classification Using Super-Object Variables
   Introduction
   Methodology
      Assigning super-object statistics to segments
      Image classification (Deerfield study area; Houston study area)
   Results
      Land cover classification system
      Deerfield study area (comparison of Random Forest and a benchmark algorithm, Decision Tree)
      Houston study area (comparison of Random Forest and Decision Tree classifications)
   Summary
5. Image Segmentation Refinement Using Local Heterogeneity Measures
   Introduction
   Methodology
      Identifying the optimal single-scale segmentation (Deerfield study area; Houston study area)
      Refining undersegmented regions (Deerfield study area; Houston study area)
      Refining oversegmented regions (Deerfield study area; Houston study area)
   Results
      Deerfield study area (identifying and refining the optimal single-scale segmentation; comparison of Random Forest and Decision Tree classifications)
      Houston study area (identifying and refining the optimal single-scale segmentation; comparison of Random Forest and Decision Tree classifications)
   Summary
6. General Discussion and Conclusions
   Summary of the Dissertation Research
   Potential Applications of the Land Cover Data for Urban Planning Purposes
   Recommendations for Future Research
References

TABLES

Table 1: Literature related to object-based land use/land cover classification of urban areas
Table 2: Error matrices for the pixel-based classification, the best pixel-based classification with super-object variables, the best object-based classification, and the best object-based classification with super-object variables (Deerfield study area)
Table 3: Pairwise comparison of error matrices for classifications performed with and without super-object variables (Deerfield study area)
Table 4: Pairwise comparison of error matrices for classifications that included super-object variables, performed using different scales of segments as the base units (Deerfield study area)
Table 5: Error matrices for the pixel-based classification, the best pixel-based classification with super-object variables, the best object-based classification, and the best object-based classification with super-object variables (Houston study area)
Table 6: Pairwise comparison of error matrices for classifications performed with and without super-object variables (Houston study area)
Table 7: Moran's I, weighted variance, normalized Moran's I, normalized weighted variance, and Global Score for each of the single-scale segmentations (Deerfield study area)
Table 8: Overall classification accuracy for each of the two-scale segmentations (Deerfield study area)
Table 9: Overall classification accuracy for each of the three-scale segmentations (Deerfield study area)
Table 10: Error matrices for the most accurate single-scale segmentation, the single-scale segmentation with the lowest GS value, the best two-scale segmentation, and the best three-scale segmentation (Deerfield study area)
Table 11: Pairwise comparisons of single-scale and refined three-scale segmentations (Deerfield study area)
Table 12: Moran's I, weighted variance, normalized Moran's I, normalized weighted variance, and Global Score for each of the single-scale segmentations (Houston study area)
Table 13: Overall classification accuracy for each of the two-scale segmentations (Houston study area)
Table 14: Overall classification accuracy for each of the three-scale segmentations (Houston study area)
Table 15: Error matrices for the most accurate single-scale segmentation, the single-scale segmentation with the lowest GS value, the best two-scale segmentation, and the best three-scale segmentation (Houston study area)
Table 16: Pairwise comparisons of the single-scale and three-scale segmentations (Houston study area)

FIGURES

Figure 1: The spectral variability of a single rooftop is high due to the difference in the amount of direct solar illumination that each side of the rooftop receives
Figure 2: Color infrared image of the Deerfield Beach, FL study area
Figure 3: Color infrared image of the Houston, TX study area
Figure 4: Scale 20, 80, and 140 segments overlaid on the color infrared imagery for a subset of the image (Deerfield study area)
Figure 5: Scatter plot of segment values from the blue and green spectral bands of the scale 20 segmentation (Deerfield study area)
Figure 6: Example of a training data set with two classes and two classification variables (left), and the Decision Tree (right) produced using the training data
Figure 7: Segment generated at a fine scale located within a segment generated at a coarser scale
Figure 8: Flowchart of the classification methods used for classification method
Figure 9: Building class training segment for the scale 20 segmentation and the scale 100 segmentation
Figure 10: Overall classification accuracies for the single-scale and per-pixel classifications, and the multi-scale classifications with single pixels, scale 20 segments, and scale 40 segments used as the base units for classification (Deerfield study area)
Figure 11: Subset of the Deerfield image, and land cover maps produced from the most accurate multi-scale and single-scale classifications
Figure 12: Land cover map produced by the best multi-scale object-based classification of the Deerfield image
Figure 13: Overall classification accuracies for the single-scale and per-pixel classifications, and the multi-scale classifications with single pixels, scale 10 segments, scale 20 segments, and scale 30 segments used as the base units for classification (Houston study area)
Figure 14: Land cover map produced by the best multi-scale object-based classification of the Houston image
Figure 15: Subset of the Houston image, and land cover maps produced from the most accurate multi-scale and single-scale classifications
Figure 16: H values for a subset region of the Deerfield Beach image
Figure 17: Extracted undersegmented image regions with heterogeneity thresholds of 30% and 60% (Deerfield study area)
Figure 18: Extracted oversegmented image regions with heterogeneity thresholds of 10%, 20%, 30%, and 40% (Deerfield study area)
Figure 19: Refined undersegmented regions (a), refined oversegmented regions (b), remaining segments from the optimal single-scale segmentation (c), and the final refined segmentation (d) created by merging (a), (b), and (c) together
Figure 20: Classified land cover map of the most accurate three-scale segmentation (Deerfield study area)
Figure 21: Subset of the Deerfield image, the most accurate three-scale segmentation/classification, and the most accurate single-scale segmentation/classification
Figure 22: Classified land cover map of the most accurate three-scale segmentation (Houston study area)
Figure 23: Subset of the Houston image, the most accurate three-scale segmentation/classification, and the most accurate single-scale segmentation/classification

1. INTRODUCTION

1.1 Background Information

High spatial resolution remote sensing imagery obtained from satellite (IKONOS, Quickbird, GeoEye-1, WorldView-2, etc.) and airborne sensors has become increasingly available in recent years. Urban land cover information extracted from high resolution imagery can be used for a variety of purposes, including urban tree canopy mapping (Walton et al., 2008), green space mapping (Lang et al., 2008), impervious surface mapping (Zhou and Wang, 2008; Han and Burian, 2009), and updating building footprint GIS data (Jin and Davis, 2005). Land cover data can also be helpful for mapping urban land use (Herold et al., 2003). However, extracting land cover information from this high resolution data can be difficult when traditional pixel-based image classification methods are used, as there is often a high degree of spectral variability within land cover classes (caused by shadows, sun angle, gaps in tree canopy, etc.) that causes low classification accuracy (Yu et al., 2006). This high degree of within-class spectral variability is due to the fact that a single pixel typically represents only a small part of a classification target (e.g. tree canopy, building rooftop, or road) in a high-resolution image.

In previous studies, an alternative approach, referred to as geographic object-based image analysis (GEOBIA, or simply OBIA), has outperformed the pixel-based approach (Thomas et al., 2003; Blaschke et al., 2004; Yu et al., 2006; Myint et al., 2011).

In the object-based approach, an image is segmented into relatively homogeneous regions (i.e. "segments," "image regions," or "image objects") prior to classification, and the attributes of these segments are used for classification instead of attributes of single pixels (Benz et al., 2004). Use of segments rather than single pixels as the base units for analysis reduces within-class spectral variability because representative values of segments (e.g. mean values) are used instead of individual pixel values. It also allows for spatial and contextual information such as size, shape, texture, and topological relationships (e.g. containment and adjacency) to be incorporated for classification (Blaschke et al., 2004; Benz et al., 2004).

Although the object-based approach has a number of advantages over the pixel-based approach for classifying high-resolution imagery, it is not without problems. One issue with the object-based approach is that classification accuracy is affected by image segmentation quality (Liu and Xia, 2010). Many image segmentation algorithms, such as the commonly-used Multiresolution Segmentation region-merging algorithm described by Benz et al. (2004), require users to set one or more parameters that influence the average size of segments produced by a segmentation (i.e. the segmentation scale), and choosing appropriate parameters can be difficult. Choosing segmentation parameter(s) that produce segments smaller than the actual ground features in an image results in oversegmentation, which is undesirable because non-spectral information (e.g. size and shape) calculated for segments will not be useful for classification. Using parameter(s) that produce segments larger than the actual features in an image results in undersegmentation, which is undesirable because segments will

contain pixels from more than one type of land cover. Oversegmentation and undersegmentation have both been shown to lower classification accuracy, but the effect of undersegmentation is generally considered to be worse (Kim et al., 2009; Liu and Xia, 2010).

In an attempt to minimize over- and undersegmentation, some studies have compared multiple segmentations of a scene prior to classification to identify the best one (Kim et al., 2008; Trias-Sanz et al., 2008; Novack et al., 2011). Others have identified the optimal segmentation after classification by comparing the classification accuracies of each segmented image (Dorren et al., 2003; Kim et al., 2010; Liu and Xia, 2010). The main downside to these single-scale object-based approaches is that different types of land cover may be classified better at different scales, so using only one segmentation scale may not produce the best results. To deal with this problem, some studies have employed a multi-scale object-based approach that involves building a hierarchy of multiple segmentations and classifying different types of land cover at each segmentation scale based on expert knowledge (Dorren et al., 2003; Zhou and Troy, 2009; Myint et al., 2011; Kim et al., 2011). However, manually choosing appropriate segmentation scale(s) and decision rules to classify each type of land cover requires a detailed investigation of segments at each scale, and the procedure can be time intensive and subjective. A multi-scale object-based approach that does not require users to investigate segments and develop classification rules at each scale may provide a faster and less subjective alternative.
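The variance-reduction effect of using segment means (Section 1.1) can be sketched in a few lines. This is an illustrative example with a hypothetical single-band image and label map, not code or data from the dissertation:

```python
import numpy as np

# Toy single-band "image" with two segments (e.g. a partly shaded rooftop
# and a road). Pixel values vary within each segment, but each segment's
# mean is a single representative value. All numbers are hypothetical.
image = np.array([[10., 12., 50., 52.],
                  [11., 13., 51., 53.],
                  [10., 12., 50., 52.],
                  [11., 13., 51., 53.]])
labels = np.array([[0, 0, 1, 1],
                   [0, 0, 1, 1],
                   [0, 0, 1, 1],
                   [0, 0, 1, 1]])

def segment_means(image, labels):
    """Mean spectral value of each segment (index = segment label)."""
    sums = np.bincount(labels.ravel(), weights=image.ravel())
    counts = np.bincount(labels.ravel())
    return sums / counts

means = segment_means(image, labels)   # one value per segment
mean_image = means[labels]             # each pixel gets its segment's mean
```

Classifying the two segment means (11.5 and 51.5) is easier than classifying the sixteen raw pixel values, because replacing each pixel with its segment mean collapses the within-segment variance to zero.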

1.2 Objectives

The objective of this dissertation is to develop and test multi-scale object-based image classification techniques that do not require manually selecting optimal segmentation scale(s) and classification rules for each type of land cover. Two multi-scale methods will be tested, and the classification accuracy achieved using each of them will be compared with that achieved using traditional single-scale methods to see if results improve. Finally, the two multi-scale classification methods will be compared to assess their relative strengths and weaknesses.

The first multi-scale method tested will build on existing research by Bruzzone and Carlin (2006) that incorporated variables from multiple image segmentation scales for classification purposes. In their study, single pixels in an image were assigned the spectral, size, and shape variables of the segments that contain them (i.e. their "super-objects") from multiple segmentation scales, and results were compared with those achieved using pixel-based classification techniques. Unlike the previous research, in this dissertation image segments of multiple scales will also be tested as the base units that super-object variables are assigned to, and classification accuracy of the classifications that include super-object variables will be compared to the classification accuracy achieved by the single-scale segmentation/classification approach. This research is necessary because a fully object-based approach may be more suitable for high resolution imagery than a pixel-object hybrid approach.

The second multi-scale method tested will build on research by Espindola et al. (2006). In their study, global empirical goodness metrics that coincide with a human's

perception of a good segmentation (i.e. homogeneous image segments that are very different from neighboring segments) were derived in order to identify an optimal single-scale segmentation. Unlike the previous research, local empirical goodness metrics will also be developed and used to identify under- and over-segmented regions in the optimal single-scale segmentation. Undersegmented regions will be further segmented at finer scales, and oversegmented regions will be merged with other oversegmented neighboring segments. This refinement procedure is necessary because, even in an optimal single-scale segmentation, it is very likely that under- and oversegmentation still exist. This approach is considered to be multi-scale because multiple segmentation parameters are used to create the final segmented image.

The two multi-scale methods will be used for classifying urban land cover in high resolution images of Deerfield Beach, FL and Houston, TX. The types of urban land cover mapped in this study are: trees, grass, buildings, other impervious surfaces (e.g. concrete and asphalt), bare soil, and pools. Land cover maps produced using the developed methods may be useful for applications such as urban green space analysis, tree canopy mapping, and urban hydrological modeling. Use of the proposed methods to classify and compare imagery from multiple dates will also be useful for monitoring urban land cover change.

1.3 Organization of Dissertation

The following is an outline of the organization of this dissertation. In Chapter 2, literature related to object-based image analysis and urban remote sensing will be

reviewed. Chapter 3 will describe the study areas, data sources, image segmentation procedures, and image classification algorithm used in this dissertation. Chapter 4 will provide the background information, methods, and results of the first multi-scale classification method, entitled "Image classification using super-object variables." Chapter 5 will cover the background information, methods, and results of the second multi-scale classification method, entitled "Image segmentation refinement using local heterogeneity measures." Finally, a summary of the findings of this dissertation will be given, potential applications for the land cover maps will be discussed, and recommendations for future research will be made in Chapter 6.
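The super-object idea behind the first method (Section 1.2) can be made concrete with a minimal sketch. The label maps, image values, and helper function below are hypothetical, not the dissertation's implementation; the only assumption is a strict segmentation hierarchy, i.e. each fine segment lies entirely inside one coarse segment:

```python
import numpy as np

# Hypothetical hierarchy: every fine segment nests inside one coarse
# segment (its "super-object"), so the super-object of a fine segment can
# be read from the coarse label map at any one of its pixels.
fine   = np.array([[0, 0, 1, 1],
                   [2, 2, 3, 3]])
coarse = np.array([[0, 0, 1, 1],
                   [0, 0, 1, 1]])
image  = np.array([[1., 2., 9., 8.],
                   [3., 4., 7., 6.]])

def super_object_mean(fine, coarse, image):
    """For each fine segment, the mean image value of its super-object."""
    coarse_mean = (np.bincount(coarse.ravel(), weights=image.ravel())
                   / np.bincount(coarse.ravel()))
    n_fine = fine.max() + 1
    # super-object label = coarse label at the fine segment's first pixel
    sup = [coarse.ravel()[np.argmax(fine.ravel() == f)] for f in range(n_fine)]
    return coarse_mean[sup]

sup_means = super_object_mean(fine, coarse, image)
```

Each fine segment can then carry both its own statistics and those of its super-object as classification variables, which is the multi-scale context the first method exploits.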
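The global goodness metrics behind the second method combine intra-segment weighted variance with Moran's I computed on segment mean values (low values of both suggest a good segmentation). Below is a simplified sketch with toy per-segment statistics and a hand-built adjacency matrix; all numbers are hypothetical, and the actual Global Score of Espindola et al. (2006) sums normalized versions of the two metrics:

```python
import numpy as np

# Toy per-segment statistics (hypothetical values). Low weighted variance
# means segments are internally homogeneous; low Moran's I means
# neighboring segments are spectrally dissimilar.
areas     = np.array([4, 4, 2])        # segment sizes in pixels
variances = np.array([0.5, 0.7, 0.2])  # intra-segment spectral variance
means     = np.array([10., 50., 30.])  # segment mean values
W = np.array([[0, 1, 1],               # W[i, j] = 1 if segments i, j touch
              [1, 0, 1],
              [1, 1, 0]])

def weighted_variance(areas, variances):
    """Area-weighted mean of intra-segment variances."""
    return np.sum(areas * variances) / np.sum(areas)

def morans_i(means, W):
    """Moran's I spatial autocorrelation of segment mean values."""
    n = len(means)
    dev = means - means.mean()
    return n * np.sum(W * np.outer(dev, dev)) / (W.sum() * np.sum(dev ** 2))
```

Sweeping the segmentation scale and picking the scale that minimizes the combined score is what makes the selection automatic rather than visual.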

2. LITERATURE REVIEW: REMOTE SENSING OF URBAN LAND USE AND LAND COVER

2.1 Introduction

Urban remote sensing has proven to be useful for cross-scale urban planning and urban ecological research in recent years (Netzband and Jurgens, 2010). This chapter reviews the literature related to urban land cover and land use mapping, which makes up the backbone of urban remote sensing (Fugate et al., 2010). For mapping urban land cover and/or land use from remote sensing images, there are manual classification approaches and automated or semi-automated classification approaches. Manual classification approaches involve visual interpretation of the imagery and manual delineation of land cover or land use polygons, while automated or semi-automated approaches use supervised (e.g. minimum distance, maximum-likelihood, Decision Tree) or unsupervised (e.g. k-means or ISODATA clustering) image classification algorithms to map land cover or land use. Although a manual classification by a skilled image interpreter may be very accurate, automated or semi-automated methods are typically preferable due to the fact that they are much faster (Novack et al., 2011). This chapter reviews the literature related to remote sensing of urban land use/land cover in high resolution images, with a focus on object-based methods that can overcome the difficulties of the traditional pixel-based, purely spectral classification approach.

2.2 Difficulties with Purely Spectral Classification for High Spatial Resolution Images

The increasing availability of high spatial resolution remote sensing imagery and advancements in computing technology have made it possible for urban land cover and/or land use to be mapped at very fine spatial scales (4 m spatial resolution or finer). However, while high resolution images allow for the extraction of more categories of geographical features from urban images (Schopfer et al., 2010), they also significantly increase the spectral variability and decrease the potential accuracy of a purely pixel-based approach to classification (Tadesse et al., 2003). An example of this within-class spectral variability is shown in Figure 1: the sides of the building rooftop that are in direct sunlight have a higher reflectance than the sides that are not under direct sunlight, which may cause errors for a purely spectral classification. Another difficulty with classifying urban images comes from the fact that many types of urban land cover are spectrally similar. For example, asphalt roads have a spectral signature very similar to that of dark tile or composite shingle roofs, with no distinct absorption characteristics between them (Herold and Roberts, 2010). This high degree of within-class spectral variability and between-class spectral similarity can cause low classification accuracy in high resolution urban images when purely spectral classification methods are used.

Figure 1: The spectral variability of a single rooftop is high due to the difference in the amount of direct solar illumination that each side of the rooftop receives.

2.3 Incorporation of Non-Spectral Information for Image Classification

Due to the problems with purely spectral classification of high resolution imagery, a number of pixel-based and object-based approaches have been used to improve classification accuracy. Pixel-based methods typically involve the calculation of image texture within a pixel neighborhood (Chica-Olmo and Abarca-Hernandez, 2004) or a set of pixel neighborhoods (Coburn and Roberts, 2004; Huang et al., 2008; Pacifici et al., 2009). An example of a simple texture statistic would be the variance of pixels' spectral values within a 3 x 3 pixel neighborhood, for a single spectral band. The texture value is assigned to the central pixel of a pixel neighborhood, and then used alone or in addition to spectral information for classifying the pixel. More complex texture information can be derived from gray-level co-occurrence matrix (GLCM) values (Pacifici et al., 2009), geostatistical measures (Chica-Olmo and Abarca-Hernandez, 2004), or wavelet-based statistics (Huang et al., 2008) within a pixel neighborhood. The incorporation of multi-scale texture information (i.e. texture calculations from multiple

neighborhood sizes) has led to improvement in classification accuracy over the purely spectral pixel-based approach (Coburn and Roberts, 2004; Huang et al., 2008; Pacifici et al., 2009), but there are some problems with the multi-scale pixel-based approaches. One downside comes from the fact that the pixel neighborhood(s) used for texture calculation are fixed in shape (either square or circular neighborhoods), while different types of land cover or land use vary in shape. Another disadvantage is that pixels near the boundary of two or more types of land cover/land use receive texture information from a combination of the adjacent textures, which leads to misclassification near the edges of features (Coburn and Roberts, 2004).

Object-based methods for calculating non-spectral information involve first segmenting the image into relatively homogeneous regions that (ideally) represent real-world land cover objects, and then calculating the non-spectral information for these image segments. One benefit of the object-based approach is that calculating texture information for homogeneous segments (rather than regularly-shaped pixel neighborhoods) can reduce classification errors near the edges of adjacent features (Bruzzone and Carlin, 2006). Another benefit of the object-based approach is its ability to incorporate additional non-spectral information such as size (e.g. area, perimeter) and shape (e.g. compactness, rectangularity) for image segments.

2.4 Object-based Image Analysis for Urban Land Use/Land Cover Mapping

This section reviews the literature related to object-based classification of urban land use and land cover using high resolution imagery. The methodologies of past object-based studies are discussed in regards to three main criteria: scale (single-scale or multi-scale), classification scheme (land cover classification or land use classification), and classification method (automated classification, manual classification using expert knowledge, or a combination of both). Table 1 shows how each of the past studies is classified in regards to these three criteria.
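The simple 3 x 3 variance texture described in Section 2.3 can be sketched as follows. The band values are hypothetical, and edge pixels are left at 0 for brevity (a real implementation would pad or mirror the band):

```python
import numpy as np

# Variance of the 3 x 3 neighborhood, assigned to the central pixel.
def variance_texture(band):
    out = np.zeros_like(band, dtype=float)
    rows, cols = band.shape
    for i in range(1, rows - 1):
        for j in range(1, cols - 1):
            out[i, j] = band[i - 1:i + 2, j - 1:j + 2].var()
    return out

band = np.full((5, 5), 5.0)
band[3:, 3:] = 9.0          # a brighter feature in one corner
texture = variance_texture(band)
```

The texture is zero deep inside the homogeneous region but positive wherever the window straddles the boundary between the two features, which is exactly the edge effect criticized in Section 2.3: boundary pixels receive mixed texture from adjacent land cover types.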
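Per-segment size and shape statistics of the kind listed in Section 2.3 can likewise be computed directly from a label map. The sketch below uses two illustrative features, area and a simple rectangularity index (area divided by bounding-box area); the label map and feature choices are hypothetical:

```python
import numpy as np

# Size and shape features from a label map alone: area (pixel count) and a
# rectangularity index that equals 1.0 for a perfectly rectangular segment.
def segment_shape_features(labels):
    feats = {}
    for lab in np.unique(labels):
        rows, cols = np.nonzero(labels == lab)
        area = rows.size
        bbox = (rows.max() - rows.min() + 1) * (cols.max() - cols.min() + 1)
        feats[int(lab)] = {"area": int(area), "rectangularity": area / bbox}
    return feats

labels = np.array([[0, 0, 1],
                   [0, 0, 1],
                   [0, 1, 1]])
feats = segment_shape_features(labels)
```

Features like these are only meaningful when segments approximate real ground objects, which is why oversegmentation (Section 1.1) makes size and shape information much less useful.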


Table 1: Literature related to object-based land use/land cover classification of urban areas. Classification methods: SVM, support vector machines; MASC, multiple agent segmentation and classification; SNN, standard nearest neighbor; NN, nearest neighbor; DT, Decision Tree; ML, maximum-likelihood; RF, Random Forest; RT, Regression Tree.

Herold et al. (2003), Lizarazo and Elsner (2009), Liu and Xia (2010), and Novack et al. (2011) employed a single-scale approach for segmentation and classification. Zhou and Wang (2008) used a single-scale segmentation approach, but segments that contained shadows were detected and sub-divided following the initial segmentation. The disadvantage of the single-scale approach is that not all types of land cover will be segmented best using a single set of segmentation parameters. On the other hand, Benz et al. (2004), Bruzzone and Carlin (2006), Lackner and Conway (2008), Walter and Blaschke (2008), Zhou and Troy (2009), Zhang et al. (2010), and Myint et al. (2011) used a multi-scale object-based approach in which a range of segmentation parameters were used to create a hierarchy of multiple image segmentations. The advantage of this multi-scale approach is that different types of land cover will be segmented best at different segmentation scales. The main difficulty with the multi-scale approach is developing an appropriate classification methodology.

Many multi-scale GEOBIA studies have used expert knowledge in the form of decision rules or fuzzy decision rules to classify urban land cover (Herold et al., 2003; Benz et al., 2004; Lackner and Conway, 2008; Walter and Blaschke, 2008; Zhou and Troy, 2009; Zhang et al., 2010; Myint et al., 2011) and urban land use (Lackner and

Conway, 2008). Classification rules are created for each type of land cover or land use by visually inspecting the features of interest at different segmentation scales to determine the best scale(s) and rule(s) for each class. For example, based on a visual inspection of the imagery, an image analyst may decide to classify buildings at a coarse segmentation scale using shape information (e.g. a compactness threshold), trees at a fine scale using an NDVI threshold and a variance threshold for the near infrared spectral band, and so on. This approach may work well if the analyst has a great deal of knowledge about the features of interest in the image, but some disadvantages are that: (i) creating decision rules can be time-consuming and subjective, and (ii) the developed rules may not be applicable to other images (e.g. decision rules developed to classify buildings in one urban area may not work well in another urban area where the size and shape of buildings are different). An automated classification approach that is less subjective and easier to apply to other study areas is more desirable.

A number of GEOBIA studies have used an automated classification approach. Bruzzone and Carlin (2006), Zhou and Wang (2008), Lizarazo and Elsner (2009), Liu and Xia (2010), and Novack et al. (2011) used supervised classification algorithms for mapping urban land cover, and Herold et al. (2003) used a supervised classification algorithm to classify urban land use. Most of these studies used non-linear and non-parametric classification algorithms that are efficient at feature selection, as traditional algorithms (e.g. maximum-likelihood, ISODATA) do not work well for object-based classification due to the high dimensionality of the datasets and the fact that some variables do not follow a normal distribution (Zhang et al., 2010). The most commonly-used classification algorithm for urban GEOBIA studies has been support vector machines (SVM), a non-linear and non-parametric statistical learning algorithm that identifies the optimal decision boundaries between classes to minimize misclassification (Burges, 1998). Despite the popularity of SVM for urban land cover classification, a study that compared SVM with other machine-learning algorithms including Decision Tree, Random Forest, and Regression Tree found that the Random Forest classification method worked better than SVM (Novack et al., 2011).

The majority of studies that employed a purely automated classification approach used it to classify a single-scale segmentation (Zhou and Wang, 2008; Lizarazo and Elsner, 2009; Liu and Xia, 2010; Novack et al., 2011). Bruzzone and Carlin (2006) used an automated classification algorithm (SVM) to classify pixels with context information from segments of multiple scales, but the classification accuracy of this pixel-based classification was not compared with the accuracy that could be achieved when single-scale segmentations are classified using SVM. This type of comparison is necessary to show that an automated multi-scale approach with pixels as the base classification units can outperform the automated single-scale GEOBIA approach that has been used in many studies. Additionally, to date no multi-scale, purely object-based studies have used an automated classification algorithm to classify urban land cover.

2.5 Contributions of the Research Conducted in this Dissertation

The research conducted in this dissertation fills some of the gaps that exist in the literature by: (i) providing two multi-scale, automated, purely object-based methods for

30 classifying urban land cover in high resolution imagery [Chapters 4 and 5], (ii) testing pixels and image segments of different scales as the base units for classifications that include multi-scale object-based context information [Chapter 4], and (iii) comparing the accuracy achieved by classification methods that contain multi-scale context information with the accuracy achieved by single-scale classification methods [Chapter 4]. 16

3. STUDY AREAS, DATA, AND GENERAL METHODS

3.1 Study Areas and Data Sources

Deerfield Beach, Florida study area. The first study area used for testing the dissertation research is located in the city of Deerfield Beach, Florida (26° 19' N, 80° 5' 48" W). 30 cm resolution color infrared (CIR) digital aerial orthoimagery of the city was obtained from the Broward County Property Appraiser (flight date: Dec 31, 2008). The imagery contains 8-bit data for the near infrared (NIR), red, and green regions of the electromagnetic spectrum, in the form of three spectral bands. For the study area, a 4630 pixel x 4967 pixel (approximately 1400 m x 1500 m) subset was chosen that contains many of the types of land cover typically found in an urban area, including buildings and other sealed surfaces (concrete and asphalt), trees/shrubs, grass, swimming pools, and bare soil. Vehicles (cars, trucks, and boats) and shadows are also present in the image. Since there was only one small lake in the study area, it was masked out and not included for image classification purposes. The color infrared image of the study area is shown in Figure 2.

Figure 2: Color infrared image of the Deerfield Beach, FL study area.

Houston, Texas study area. The second study area is located in Houston, Texas, specifically in the Galleria area of southwest Houston. 1 m spatial resolution CIR digital aerial orthoimagery, acquired as part of the 2010 National Agriculture Imagery Program (NAIP), was used for this study area. The NAIP imagery contains 8-bit data for the NIR, red, green, and blue regions of the electromagnetic spectrum. This study area has a higher population density than the Deerfield study area, and also contains many types of land cover typical in an urban area (e.g. buildings and other sealed surfaces, grass, trees/shrubs, pools, bare soil, vehicles, etc.). In addition, the size, shape, and rooftop colors/materials of buildings in this area are very diverse. All of these characteristics make it a good second study area for testing the ability of the developed methods to map land cover in a heterogeneous urban area.

Figure 3: Color infrared image of the Houston, TX study area.

3.2 General Methods

3.2.1 Image segmentation.

Deerfield study area. In this dissertation, image segmentation was performed in Definiens Professional 5 (currently sold by Trimble under the name eCognition Developer) using the Multiresolution Segmentation algorithm, which starts with one-pixel image segments and merges neighboring segments together until a heterogeneity threshold is reached (Benz et al., 2004). The heterogeneity threshold is determined by a user-defined scale parameter, and is calculated based on color/shape and smoothness/compactness weights. Benz et al. (2004) provide the formula for calculating this threshold. It should be noted that the scale parameter is an abstract term which does not have a physical meaning (i.e. a scale parameter of 20 does not produce segments with an average size of 20 pixels or 20 m²); it merely determines the maximum heterogeneity allowed when neighboring segments are merged during the segmentation process (Definiens, 2006). In general, increasing the value of the scale parameter causes the average size of segments to increase, producing a coarser segmentation of the image.

For the Deerfield study area imagery, a series of image segmentations was performed using seven different scale parameters (20 to 140, at an interval of 20) so that the image could be analyzed at several scales. Based on preliminary analysis, it was clear that scale parameters smaller than 20 produced segments that were, in general, over-segmented relative to all of the land cover classes of interest in this study, and parameters larger than 140 produced segments that were under-segmented relative to all land cover types of interest. For this reason, segmentation parameters smaller than 20 or larger than 140 were not used. The scale parameter interval of 20 was chosen in order to keep the number of segmentations to a reasonable level, while still adequately capturing the land cover features in the image at different scales. All three spectral bands were assigned equal weights for segmentation because all of them may contain useful information. Color/shape weights were set to 0.9/0.1 so that spectral information would be considered most heavily for segmentation. These color/shape weights are quite typical for optical imagery; increasing the shape factor above the minimum levels is more relevant for noisy or highly-textured images (e.g. radar images), and is mainly done to limit highly-fractalized segmentation (Walter and Blaschke, 2008). Smoothness/compactness weights were set to 0.5/0.5 so as to not favor either compact or non-compact segments. To allow for a visual comparison of the effect that the scale parameter has on segmentation results, segments produced using three different scale parameters are shown for a small subset of the image in Figure 4.
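The Multiresolution Segmentation implementation itself is proprietary, but the region-merging principle it is built on can be illustrated with a toy sketch: start with one-pixel segments and repeatedly merge the neighboring pair whose merge increases heterogeneity (here, size-weighted variance) the least, stopping once the cheapest merge would exceed a scale threshold. This is a 1-D simplification for illustration only; `toy_multiresolution_segmentation` is a hypothetical name, not part of the Definiens software, and the real algorithm also weighs shape, smoothness, and compactness.

```python
import numpy as np

def toy_multiresolution_segmentation(values, scale):
    """Greedy region merging in the spirit of multiresolution segmentation,
    simplified to a 1-D row of pixel values. Merging stops when the cheapest
    merge would increase heterogeneity by more than the scale threshold."""
    segments = [[v] for v in values]  # start with one-pixel segments

    def heterogeneity(seg):
        # size-weighted variance, a stand-in for the real color criterion
        return len(seg) * np.var(seg)

    while len(segments) > 1:
        costs = []
        for i in range(len(segments) - 1):
            merged = segments[i] + segments[i + 1]
            cost = (heterogeneity(merged)
                    - heterogeneity(segments[i])
                    - heterogeneity(segments[i + 1]))
            costs.append(cost)
        best = int(np.argmin(costs))
        if costs[best] > scale:  # heterogeneity threshold reached
            break
        segments[best] = segments[best] + segments.pop(best + 1)
    return segments
```

With a small scale the merging stops at the natural boundary between bright and dark pixels, while a large scale merges everything into one segment, mirroring the coarsening behavior described above.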

Figure 4: Scale 20 (a), 80 (b), and 140 (c) segments overlaid on the color infrared imagery for a subset of the image. Different ground features are over- and under-segmented at each segmentation scale.

For each segment, spectral information (mean values and variance for each band, mean brightness [sum of the mean DN of all spectral bands], mean NDVI), texture information (gray-level co-occurrence matrix [GLCM] contrast, correlation, and entropy for the NIR band [all directions]), and size/shape statistics (area, roundness, shape index, border index, length/width, rectangular fit, density, border length, and asymmetry) were calculated. The NIR band was used for GLCM texture calculations because it is the most useful spectral band for classifying vegetation land cover types, and it is also useful for non-vegetation land cover. In choosing the number of variables to use for image classification, the rationale was to err on the side of using too many variables rather than too few, since the Random Forest classification algorithm, described in section 3.2.2, is capable of handling high dimensional datasets and is not very sensitive to redundant variables. However, the equations for each of the variables were investigated prior to choosing them to ensure that they were not too similar. For more information about the formulas used for texture, size, and shape calculations, readers are referred to the Definiens Professional 5 Reference Book (Definiens, 2006).
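The per-segment spectral statistics listed above are straightforward to reproduce outside Definiens once a label image is available. A minimal sketch, computing only a few of the listed variables (mean and variance of NIR, mean NDVI); `segment_features` is an illustrative helper, not part of the software used in this study:

```python
import numpy as np

def segment_features(nir, red, labels):
    """Compute a few per-segment variables (mean/variance of NIR, mean NDVI)
    from a labelled segmentation, where labels[i, j] is the id of the
    segment containing pixel (i, j)."""
    ndvi = (nir - red) / (nir + red + 1e-9)  # small epsilon avoids 0/0
    feats = {}
    for seg_id in np.unique(labels):
        mask = labels == seg_id
        feats[int(seg_id)] = {
            "mean_nir": float(nir[mask].mean()),
            "var_nir": float(nir[mask].var()),
            "mean_ndvi": float(ndvi[mask].mean()),
        }
    return feats
```

The GLCM texture and shape statistics follow the same per-segment pattern but require neighborhood and boundary computations that are omitted here for brevity.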

Houston study area. For the Houston study area, image segmentation was performed using the same algorithm and software as the Deerfield study area. A series of seven image segmentations was performed using scale parameters ranging from 10 to 70 at an interval of 10 so that the image could be analyzed at several scales. Based on a visual inspection of image segments, it was clear that scale parameters smaller than 10 produced segments that were over-segmented and parameters larger than 70 produced segments that were under-segmented for all land cover types of interest. For this image, the scale parameters at which excessive over- and under-segmentation occurred were smaller than those for the Deerfield Beach image, probably due in part to the coarser spatial resolution of the Houston image (1 m as opposed to 30 cm). The scale parameter interval of 10 was chosen to adequately capture the land cover at multiple scales, while keeping the number of segmentations to a reasonable level. NIR, green, and red spectral bands were assigned equal weights for segmentation. The blue spectral band was not included for segmentation because, as shown in Figure 5, it was highly correlated with the green spectral band. Color/shape weights (0.9/0.1) and smoothness/compactness weights (0.5/0.5) were the same as those chosen for the Deerfield Beach study area.
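The band-redundancy check behind Figure 5 amounts to a correlation coefficient between per-segment band means. A sketch with made-up values standing in for the real scale 20 segment statistics (the arrays below are hypothetical; `green` is constructed as a near-linear function of `blue` to mimic the redundancy seen in the figure):

```python
import numpy as np

# Hypothetical per-segment mean values for two bands (stand-ins for the
# real scale 20 segment statistics).
blue_means = np.array([0.20, 0.35, 0.50, 0.65, 0.80])
green_means = 0.9 * blue_means + 0.05  # green nearly linear in blue

# Pearson correlation between the two bands' segment means
r = np.corrcoef(blue_means, green_means)[0, 1]
print(round(r, 3))
```

A band this strongly correlated with another contributes little independent information to the segmentation, which is the rationale for excluding the blue band here.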

Figure 5: Scatter plot of segment values from the blue and green spectral bands of the scale 20 segmentation. There is a very high correlation between the two bands. Other band combinations had lower correlations.

3.2.2 Random Forest classification algorithm.

For all of the image classifications performed in this dissertation, a large number of spectral and non-spectral variables were considered for classification. Some of these variables may be correlated, so a classification algorithm that can handle high dimensional datasets containing some redundant variables is needed. The Random Forest algorithm proposed by Breiman (2001) was chosen for this study because it has been shown to perform well for classifying hyperspectral images (Ham et al., 2005; Lawrence et al., 2006), which also contain many input variables and redundant variables.

To understand the Random Forest algorithm, it is helpful to first understand some of the basics of Decision Tree classification. Decision Tree classification involves splitting the training data set into smaller subdivisions at nodes in what is called a Decision Tree, using decision rules. At each node, tests are performed on the training data to find the most useful variable and variable value for the split (Friedl and Brodley, 1997). An example of a very simple Decision Tree is shown in Figure 6. In this Decision Tree, there is only one node (the NDVI box) at which a test is performed to identify the variable and value that provide the most accurate split of the training data set into Vegetation and Non-vegetation classes. Data sets with more classification variables and more land cover classes will typically have many more nodes. For a more detailed description of the Decision Tree classification algorithm, readers are referred to Friedl and Brodley (1997).

The Random Forest classifier is an ensemble classifier that uses a random subset of the input variables and a bootstrapped sample of the training data to perform a Decision Tree classification (Breiman, 2001). Typically, a large number of Decision Trees are created, and unweighted voting is used to determine final class assignments for each pixel or image segment. Previous research has found that using a Random Forest rather than a single Decision Tree increases classification accuracy and reduces the overfitting of decision rules to the training data (Breiman, 2001; Lawrence et al., 2006). Some other advantages of the Random Forest classifier are its speed, its relative insensitivity to user-defined parameters, its relative insensitivity to noise (e.g. misclassified training samples or noisy variables), and its ability to achieve results comparable to classifiers that are more computationally intensive (e.g. boosting) or require more parameter calibration (e.g. support vector machines) (Breiman, 2001; Pal, 2005; Gislason et al., 2006).

Figure 6: Example of a training data set with two classes and two classification variables (left), and the Decision Tree (right) produced using the training data. The data set is split based on NDVI at the base node because it more accurately classifies the training data than the NIR variable. The NDVI value of 0.6 indicates the best split based on the NDVI variable.

Random Forest classification was performed using Weka 3.6.4, an open-source data mining program (Hall et al., 2009). There were two user-defined parameters required to perform classification: the number of Decision Trees to create and the number of randomly-selected variables considered for splitting each node in a tree. Previous research has shown that the number of trees and the number of randomly-selected variables have a relatively small impact on classification accuracy (Breiman, 2001; Pal, 2005). Breiman (2001) reported good results for datasets of different sizes when the number of randomly-selected variables was set to log2(M) + 1, where M is the total number of input variables, and Lawrence et al. (2006) found that using 500 trees or more produced unbiased estimates of error. Based on this previous research, the number of trees was set to 500 for all of the image classifications in this study, and the number of variables used for splitting each node was set to log2(M) + 1. These settings minimized the computation time for each classification because optimum parameters did not need to be identified, which was important for this study because a large number of classifications was performed.

In Chapters 4 and 5, the segmentation procedures described in this chapter are used to perform the initial single-scale image segmentations, and the Random Forest classification algorithm and parameters described in this chapter are used to perform image classification.
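The two Weka parameters described above translate directly to most Random Forest implementations. A hedged scikit-learn sketch, not the Weka setup used in this study; the feature table below is a synthetic stand-in for the real per-segment statistics, and note that scikit-learn's built-in "log2" option gives log2(M) variables per split rather than log2(M) + 1, a close approximation of the setting used here:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Toy stand-in for the segment feature table: 40 samples, 12 variables,
# two well-separated classes (real inputs would be per-segment statistics).
X = np.vstack([rng.normal(0.0, 0.3, (20, 12)),
               rng.normal(2.0, 0.3, (20, 12))])
y = np.array([0] * 20 + [1] * 20)

# 500 trees, ~log2(M) variables tried per split, as in the text
clf = RandomForestClassifier(n_estimators=500, max_features="log2",
                             random_state=0)
clf.fit(X, y)
print(clf.score(X, y))
```

Unweighted voting across the 500 trees produces the final class label for each segment, matching the ensemble behavior described above.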

4. IMAGE CLASSIFICATION USING SUPER-OBJECT VARIABLES

4.1 Introduction

Some segmentation algorithms, such as the commonly-used Multiresolution Segmentation algorithm mentioned in Chapter 1, produce a hierarchy in which segments generated at a fine scale are nested inside of segments generated at coarser scales, as shown in Figure 7. The larger segments are referred to as super-objects of the smaller segments, and the smaller segments are referred to as sub-objects of the larger segments (Definiens, 2006). Spectral and non-spectral information (e.g. size and shape) of these super-objects may be useful for image classification purposes. For example, spectral information from the finest-scale segments may be useful for targeting individual trees, while size and shape information from coarser-scale super-objects may be useful for separating buildings from concrete and other surfaces spectrally similar to building rooftops. Use of super-object information for classification may be able to overcome some of the limitations of single-scale GEOBIA studies (not all types of land cover are segmented best at one scale), and is also less reliant on expert knowledge and less subjective than traditional multi-scale classification methods that require segments to be investigated at every scale in order to determine the best scale(s) and rule(s) for classifying each type of land cover.

Figure 7: Segment generated at a fine scale (border shown as a grey line) located within a segment generated at a coarser scale (border shown as a black line).

A previous study by Bruzzone and Carlin (2006) used the contextual information from super-objects of single pixels (i.e. spectral and non-spectral values of the segments that contain a pixel) to perform a pixel-based land cover classification, and found improved results when compared to a traditional pixel-based classification method and a generalized Gaussian pyramid decomposition classification method (also a pixel-based classification). However, as stated in Chapter 1, the object-based approach has been shown to outperform the pixel-based approach for classification of high resolution imagery, so it is possible that using a single-scale object-based approach will still work better than a pixel-based method that incorporates super-object information. For this reason, it is necessary to compare the classification accuracy achieved when pixels are classified using super-object information with the accuracy achieved using a traditional single-scale GEOBIA approach in order to see which yields better results. Furthermore, it is necessary to compare a pixel-based approach that incorporates super-object information with an object-based approach that incorporates super-object information (i.e. with segments as the base units for analysis) to see which, if either, is preferable for classifying high resolution imagery.

The first multi-scale segmentation/classification method tested in this dissertation (i.e. classification method 1) will use spatial joins to assign spectral, texture, and context (size, shape) variables from coarser segmentations to finer segmentations or single pixels. Unlike previous research: (i) classification accuracies achieved when super-object variables are included for classification will be compared with classification accuracies achieved without super-object information (i.e. using traditional single-scale object-based methods), and (ii) single pixels and image segments of different scales will be tested as the base units for classification with super-object variables. Results of these comparisons should be useful for future multi-scale GEOBIA studies.

4.2 Methodology

4.2.1 Assigning super-object statistics to segments.

After all of the single-scale image segmentations described in section 3.2.1 were performed for the Deerfield Beach image, and the spectral and non-spectral statistics for each segment were calculated, a series of spatial joins was performed in ArcGIS so that the smallest segments (i.e. the segments generated using a scale parameter of 20) could be assigned the spectral, texture, and size/shape statistics of their super-objects (i.e. the segments from coarser-scale segmentations that contain them) as well. The end result was that segments generated using a scale parameter of 20 also contained the statistics of their super-objects generated using scale parameters from 40 to 140. For the sake of comparison, this process was also used to assign super-object information to single pixels (in addition to the spectral values and NDVI of the pixels). For the Houston image, the segments from the finest-scale segmentation (i.e. the scale parameter 10 segments) were assigned the statistics of their super-objects generated using scale parameters from 20 to 70. Single pixels were also assigned super-object information from all seven of the image segmentations.

4.2.2 Image classification.

Deerfield study area. Image classification was performed for: (a) image segments that contained super-object variables and (b) single pixels that contained super-object variables. Classification was also performed for the seven segmentations that did not include super-object variables (i.e. the single-scale segmentations) in order to assess the impact that using super-object information had on classification. Finally, a per-pixel classification was performed without super-object information to allow for a comparison to be made with the most traditional form of image classification (i.e. pixel-based classification). The Random Forest algorithm was used for all classifications, with the parameters described in section 3.2.2. An overview of the image classification workflow is shown in Figure 8.
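Because the segmentations are hierarchical, the ArcGIS spatial join described above can be mimicked with plain label arrays: every fine segment falls inside exactly one coarse segment, so the join reduces to a label lookup. A minimal sketch; `attach_superobject_stats` is a hypothetical helper, not the actual ArcGIS workflow:

```python
import numpy as np

def attach_superobject_stats(fine_labels, coarse_labels, coarse_stats):
    """For each fine-scale segment, find the coarse-scale segment that
    contains it and copy that super-object's statistics. Hierarchical
    nesting means all pixels of a fine segment share one coarse label."""
    joined = {}
    for f in np.unique(fine_labels):
        super_ids = np.unique(coarse_labels[fine_labels == f])
        # nesting guarantees a unique super-object for each fine segment
        assert len(super_ids) == 1
        joined[int(f)] = dict(coarse_stats[int(super_ids[0])])
    return joined
```

Repeating this lookup against each coarser segmentation stacks one set of super-object variables per scale onto the base units, which is exactly what the chained spatial joins accomplish.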

To classify the segmentations that did not include super-object statistics, training data was collected for each type of land cover from segments generated using a scale parameter of 20, referred to from now on as the "scale 20" segmentation. For each land cover class, an effort was made to choose training samples that represented the full range of spectral and spatial characteristics of the class (e.g. for the buildings class, training segments were chosen for buildings of different sizes, shapes, and rooftop materials). A total of 168 segments were used as training data for classification. Super-objects of these training segments were used as the training data for the other segmentations (i.e. super-objects from the scale 40 segmentation were used as training data to classify the scale 40 segmentation, and so on). As an example, training segments for two different segmentation scales are shown in Figure 9. For the pixel-based classification, one pixel within each of the scale 20 training segments was chosen as a training pixel.

Figure 8: Flowchart of the classification methods used for classification method 1.

Figure 9: Building class training segment for the scale 20 segmentation (a) and the scale 100 segmentation (b). For the scale 20 classifications that included super-object information, scale 20 training segments were also assigned the information of their super-objects.

For the classifications that included super-object information, spatial joins were performed, as described in section 4.2.1, so that the fine-scale training segments (or training pixels) could be assigned the statistics of their super-objects. Because, as discussed in the Introduction section, using segments larger than the actual features of interest (i.e. under-segmented image objects) typically results in lower classification accuracy, classification accuracy was calculated as super-object information from each of the coarser segmentations was added, to see if the accuracy decreased after a certain point. For example, classification was performed when super-object information from the scale 40 segmentation was assigned to the scale 20 segmentation, then again when the scale 60 information was also added, and so on. Since over-segmentation also affects classification accuracy, different base units for classification were tested as well (e.g. single pixels, scale 20 segments, and scale 40 segments). For example, when the scale 20 segments were used as the base units, the values of single pixels were not included for classification, and when the scale 40 segments were used as the base units, the values of single pixels and scale 20 segments were not included for classification.

Reference data for accuracy assessment was collected using a stratified systematic unaligned sampling scheme (Jensen, 2005). A grid consisting of 100 m x 100 m cells was overlaid on the image, and within each cell three random points were created. This sampling method ensured that the test data were randomly located, yet distributed across the entire image. In total, the reference data consisted of 507 points, with points being assigned to a land cover class based on visual interpretation of the high-resolution imagery. If a classified pixel or image segment that contained a reference point was assigned to the same class as the reference point, it was considered to be correctly classified. If not, the pixel or image segment was considered to be classified incorrectly.

Houston study area. For the Houston image, super-object variables were added to the base classification units using the methods described in section 4.2.1, and image classification was done using the methods described in section 4.2.2. A total of 323 training segments were selected from the finest-scale segmentation (i.e. the scale 10 segmentation). Reference points were randomly generated using a systematic unaligned sampling scheme: a 500 m x 500 m grid was overlaid on the image, and five points were randomly generated within each grid cell. A total of 470 points were used as reference data for accuracy assessment.

4.3 Results of Classification Method 1

In section 4.3, the classification system used for both study areas will be described first. The results of classification method 1 will then be reported and discussed for the Deerfield study area. Finally, the results of classification method 1 will be reported and discussed for the Houston study area, and comparisons will be made between the results in the two study areas.

Land cover classification system. Image segments (or pixels for the pixel-based classifications) were classified into the following land cover classes: grass, trees/shrubs, buildings, concrete, asphalt, vehicles, bare soil, and pools. In some previous studies (Bruzzone and Carlin, 2006; Walker and Blaschke, 2008), buildings were split up into more than one class when training data were collected (white roof, red tile roof, etc.) so that spectral information would be more useful for classifying buildings. However, in the study areas, rooftops were so diverse in terms of color and building materials that this was not practical. Instead, we used one building class that included rooftops of different colors, and relied more on non-spectral information for classifying buildings correctly. After classification, segments classified as concrete, asphalt, and vehicles were aggregated into a single land cover class called "other impervious" because concrete and asphalt are both impervious surfaces, and vehicles are likely to be located on top of an impervious surface.

Deerfield study area. Overall accuracies for each of the classifications of the Deerfield image are shown in Figure 10. The highest overall accuracy (84.4%) and Kappa coefficient (0.804) were achieved when either pixels or scale 20 segments were assigned super-object information from the scale 40, 60, and 80 segmentations for classification (both classifications had the same overall accuracy). For the pixel-based classification that did not include super-object information, the overall accuracy (73.2%) and Kappa coefficient (0.67) were much lower. Among the single-scale segmentations, overall accuracy was very similar when scale parameters between 20 and 60 were used, with the scale 40 segmentation achieving the highest overall accuracy (78.1%) and Kappa coefficient (0.73). As the scale parameter was increased past 60, overall accuracy of the single-scale segmentations decreased because segments started to consist of pixels from more than one land cover class. This decrease in overall accuracy as the image became more and more under-segmented was consistent with results found in past studies (Kim et al., 2010; Liu and Xia, 2010).
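The overall accuracies and Kappa coefficients reported throughout this section follow directly from the error matrices. A sketch of the standard computations, assuming rows hold the classified labels and columns the reference labels (`accuracy_metrics` is an illustrative helper):

```python
import numpy as np

def accuracy_metrics(cm):
    """Overall accuracy, per-class producer's and user's accuracies, and
    Kappa from an error matrix (rows: classified, columns: reference)."""
    cm = np.asarray(cm, dtype=float)
    n = cm.sum()
    overall = np.trace(cm) / n
    producers = np.diag(cm) / cm.sum(axis=0)  # correct / reference totals
    users = np.diag(cm) / cm.sum(axis=1)      # correct / classified totals
    # chance agreement expected from the row and column marginals
    expected = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n ** 2
    kappa = (overall - expected) / (1 - expected)
    return overall, producers, users, kappa
```

For example, a two-class matrix [[40, 10], [10, 40]] gives an overall accuracy of 0.8 and a Kappa of 0.6, since half the agreement would be expected by chance.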

Table 2 shows the error matrices for the pixel-based classification, the best single-scale classification, the best pixel-based classification with super-object variables, and the best object-based classification with super-object variables. To test whether or not the increase in classification accuracy achieved with super-object variables was statistically significant, pairwise z-tests (Congalton, 2009) were calculated to compare the error matrices produced with and without super-object variables. The null hypothesis of each pairwise z-test was that the two error matrices being compared have no significant difference. Based on the z-test comparing the most accurate classifications done with and without super-object variables, reported in Table 3, the error matrices were statistically different at a 95% confidence level (α = 0.05), confirming that super-object variables contributed significantly to the improvement in classification accuracy. Figure 11 shows a subset of the aerial imagery, the classified scale 20 segments with super-object information from the scale 40, 60, and 80 segmentations (i.e. the best multi-scale classification), and the optimal single-scale segmentation/classification (i.e. the scale 40 segmentation) to allow for a visual comparison. In general, the multi-scale classification produced a more accurate representation of the land cover in the aerial imagery.
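The pairwise z-test used here compares two independent Kappa estimates; a sketch of its usual form. The variance values below are hypothetical, since the Kappa variances are not reported in the text, while the Kappa values are the ones reported above:

```python
import math

def kappa_z(k1, var1, k2, var2):
    """Pairwise z-statistic for two independent Kappa coefficients:
    z = |k1 - k2| / sqrt(var1 + var2). |z| > 1.96 rejects the null
    hypothesis of no difference at the 95% confidence level."""
    return abs(k1 - k2) / math.sqrt(var1 + var2)

# Kappa values from the text; the variances (0.0010) are hypothetical.
z = kappa_z(0.804, 0.0010, 0.73, 0.0010)
print(round(z, 2))  # 1.65: above 1.645 (α = 0.10) but below 1.96 (α = 0.05)
```

With these illustrative variances, the difference would be significant at the 90% but not the 95% confidence level, which shows why the test outcome depends on the matrix variances as well as the Kappa values themselves.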

Figure 10: Overall classification accuracies for the single-scale and per-pixel classifications (a), and the multi-scale classifications with single pixels (b), scale 20 segments (c), and scale 40 segments (d) used as the base units for classification. A scale parameter of 0 indicates a pixel-based classification. For the multi-scale classifications, "Scale Parameters" indicate which segmentations were used for assigning super-object information to the base units for each classification (e.g. "Scale Parameters" of 0-80 indicate that super-object variables from the scale 20, 40, 60, and 80 segmentations were assigned to single pixels for classification).


Table 2: Error matrices for the pixel-based classification (a), the best pixel-based classification with super-object variables (b), the best object-based classification (c), and the best object-based classification with super-object variables (d). Note: G, grass; T, tree; B, building; I, other impervious; SH, shadow; SO, soil; P, pool; PA, producer's accuracy; UA, user's accuracy.

Scale Parameter(s)    Z score    Significant at α=0.10?    Significant at α=0.05?
40 vs. 20-80                     yes                       yes
0 vs. 0-80                       yes                       yes
0-80 vs. 20-80        0.004      no                        no

Table 3: Pairwise comparison of error matrices for classifications performed with and without super-object variables. Scale Parameter 40 is the most accurate single-scale classification, 20-80 is the most accurate scale 20 segmentation with super-object variables, 0 is the pixel-based classification, and 0-80 is the most accurate pixel-based classification with super-object variables.

Figure 11: Subset of the Deerfield image (a), and land cover maps produced from the most accurate multi-scale (b) and single-scale (c) classifications. In general, there is a better correspondence between the imagery and the multi-scale classification.

In all of the cases that were tested, the use of super-object variables improved overall classification accuracy. However, accuracy decreased slightly from its highest levels when super-object information from the coarsest segmentations (the scale 100, 120, and 140 segmentations) was included for classification, due to the fact that ground features surrounded by spectrally similar land cover (e.g. trees surrounded by grass, buildings surrounded by concrete/asphalt) were under-segmented in the coarse segmentations. A decrease in overall accuracy was also observed when larger segments were used as the base units for classification. When the base units were changed from scale 20 segments to scale 40 segments, the highest overall accuracy that was achieved decreased to 82.6% from 84.4%, and the Kappa coefficient decreased from its highest value of 0.804. This decrease in accuracy occurred because small features, such as single trees, became under-segmented. However, the decrease was not statistically significant at a 90% confidence level (α = 0.10) based on a pairwise z-test (z = 0.75), as reported in Table 4. Scale 60 segments were also tested as the base units for classification, and the highest overall accuracy (80.1%) and highest Kappa coefficient (0.745) were achieved when scale 60 segments were assigned super-object information from the scale 80, 100, and 120 segments. This classification was significantly different at a 90% confidence level (α = 0.10) from the classifications which used pixels or scale 20 segments as the base units. Since a decreasing trend in overall accuracy was observed as larger segments were used as the base units for classification, classification was not performed using scale 80 or larger segments as the base units.

Scale Parameters        Z score    Significant at α=0.10?    Significant at α=0.05?
20-80 vs. 40 best       0.75       no                        no
20-80 vs. 60-120                   yes                       no

Table 4: Pairwise comparison of error matrices for classifications that included super-object variables, performed using different scales of segments as the base units. 20-80 is the most accurate scale 20 segmentation with super-object variables, "40 best" denotes the most accurate scale 40 segmentation with super-object variables, and 60-120 is the most accurate scale 60 segmentation with super-object variables.

57 For the best pixel-based and object-based classifications that included superobject variables, most classes achieved producer s and user s accuracies of 80% or better. As shown in Table 2, the producer s and user s accuracies of most land cover classes also increased when super-object variables were included for classification. The error matrices of segments with super-object information and pixels with super-object information are very similar (z = 0.004), but for land cover classes for which shape information is useful (i.e. building and tree classes), slightly higher producer s and user s accuracies were achieved when segments were used as the base units for classification. For classes without regular shapes (e.g. grass, soil, and other impervious ), accuracy was slightly higher when pixels were used as the base units. Figure 12 shows the land cover map of the entire study area produced when scale 20 segments were classified using super-object variables from scale 40, 60, and 80 segmentations. Visual comparison of the study area imagery in Figure 2 and the classified map in Figure 12 shows a relatively good correspondence. However, due to the spectral similarity between buildings and other land covers such as soil and Other impervious, there were a higher number of misclassifications for these classes. For example, soil segments on some baseball fields had shapes similar to buildings in the study area, so they were misclassified as building. It was also clear that, within the building class, buildings surrounded by vegetation (most single-family houses) were classified correctly more often than buildings surrounded by spectrally-similar land cover (e.g. concrete or asphalt ). This occurred because, after segmentation, some segments 43

that contained pixels of a building also contained pixels of the spectrally-similar land cover surrounding the building, causing the segments' shapes to be inaccurate. Use of additional datasets for image segmentation and classification, such as Light Detection and Ranging (LIDAR) height data, would likely lead to fewer classification errors for buildings and trees surrounded by spectrally-similar land covers. However, while multispectral aerial imagery of the study area is typically acquired annually by the Broward County Property Appraiser's Office, LIDAR acquisition is rarer (LIDAR data are only available for 2004 and 2007). To assess the level of overall accuracy that could be achieved on an annual basis, only multispectral imagery was used in this study.
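The producer's and user's accuracies discussed above follow directly from an error matrix. A small sketch, assuming the common convention that rows hold the classified labels and columns the reference labels:

```python
import numpy as np

def class_accuracies(cm):
    """Producer's accuracy (per reference class; 1 - omission error)
    and user's accuracy (per classified class; 1 - commission error)
    from an error matrix with classified rows and reference columns."""
    cm = np.asarray(cm, dtype=float)
    producers = np.diag(cm) / cm.sum(axis=0)  # divide by reference totals
    users = np.diag(cm) / cm.sum(axis=1)      # divide by classified totals
    return producers, users
```

Overall accuracy is simply `np.trace(cm) / cm.sum()`, which is how the 80%-or-better per-class figures relate to the 84.4% overall figure reported above.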

Figure 12: Land cover map produced by the best multi-scale object-based classification of the Deerfield image.

Comparison of Random Forest and a benchmark algorithm (decision tree). The Random Forest algorithm has not been used as extensively as many other classification algorithms in remote sensing studies. For this reason, results of the best Random Forest classification needed to be compared to those achieved by a more

frequently-used algorithm (i.e. a "benchmark" algorithm) in order to determine whether the Random Forest algorithm was well-suited for classifying the image. The Decision Tree classifier proposed by Quinlan (1993) was chosen for comparison because: 1) it has been used for image classification in a large number of remote sensing studies, including global and regional land cover mapping studies (Hansen et al., 2000), and 2) Decision Trees form the base of a Random Forest (a Random Forest consists of many bagged Decision Trees with randomly-selected input variables). Weka software was also used for Decision Tree classification. For the Decision Tree classification, the only parameter that needed to be set was the confidence factor (CF), which adjusts the amount of pruning that occurs within the Decision Tree, with smaller values indicating more pruning (i.e. smaller, simpler Decision Trees) (Hall et al., 2009). To optimize the Decision Tree classification, CF values from 0.1 to 0.9 were tested at an interval of 0.1 (i.e. 9 steps), and a 10-fold cross validation was done using only the training data (33% of training samples removed for validation each fold). The best classification, achieved when a CF of 0.1 was used, had an overall accuracy of 70.4%. This value and the corresponding Kappa coefficient were considerably lower than the overall accuracy and Kappa coefficient of the Random Forest classification (84.4% and 0.804, respectively), giving further evidence that the Random Forest algorithm is well-suited for classifying the high-dimensional datasets used in this dissertation. Although the accuracy of the Decision Tree classification was relatively low, it is useful for identifying important variables for classifying each type of land cover, since the classification rules of a single Decision Tree can be easily shown. It is also helpful for

explaining the Random Forest classification process, since a Random Forest consists of many Decision Trees. The rules for the Decision Tree classification were as follows:

Brightness_60 <= …
    NDVI_60 <= -0.1: asphalt
    NDVI_60 > -0.1: shadow
Brightness_60 > …
    NDVI_40 <= 0.05
        NDVI_20 <= -0.23: pool
        NDVI_20 > -0.23
            NIR Standard Deviation_20 <= 11.62
                Border Index_60 <= 2.03
                    GLCM Entropy_20 <= 6.14: concrete
                    GLCM Entropy_20 > 6.14: building
                Border Index_60 > 2.03
                    NDVI_20 <= …
                        GLCM Correlation_80 <= 0.58: building
                        GLCM Correlation_80 > 0.58: concrete
                    NDVI_20 > …
                        Rectangularity_60 <= 0.51: concrete
                        Rectangularity_60 > 0.51: soil
            NIR Standard Deviation_20 > 11.62: car
    NDVI_40 > 0.05
        Brightness_60 <= …: tree
        Brightness_60 > …: grass

where the variables are named "variable name_segmentation scale" (e.g. NDVI_40 signifies NDVI from the scale 40 segmentation). This Decision Tree shows the importance of multi-scale information for classification, as spectral, shape, and texture variables from all of the image segmentation scales were useful for classification purposes. It also shows the Decision Tree's (and Random Forest's) ability to handle nonlinear classification tasks, as buildings and concrete were each classified using several different decision rules (texture information was used to separate some buildings and concrete, while shape information was used to separate other buildings and concrete).

Houston study area.

Overall accuracies for each of the classifications of the Houston image are shown in Figure 13. The highest overall accuracy (77.7%) and Kappa coefficient (0.714) were achieved when either pixels or scale 20 segments were assigned super-object variables from the scale 30, 40, 50, and 60 segmentations for classification. For the pixel-based classification that did not include super-object variables, the overall accuracy (71.7%) and the Kappa coefficient (0.640) were much lower, as was the case with the Deerfield image. For the single-scale segmentations, the highest overall accuracy (73.4%) and Kappa coefficient (0.660) were achieved when the scale parameter was set to 40. As the scale parameter was increased past 40, the overall accuracy of the single-scale segmentations decreased dramatically as segments started to consist of pixels from more than one land cover class. This large reduction in classification accuracy due to undersegmentation matches the results of the Deerfield study area and also the previous research by Kim et al. (2010) and Liu and Xia (2010). Table 5 shows the error matrices for the pixel-based classification, the most accurate single-scale segmentation/classification, the most accurate pixel-based classification with super-object variables, and the most accurate object-based classification with super-object information. Pairwise z-tests were again performed to compare the error matrices from the most accurate classifications done with and without super-object variables to confirm that the increase in classification accuracy was statistically significant. The null hypothesis of each pairwise z-test was that the two error matrices being compared had no significant difference. Based on the z scores, reported in Table 6, the error matrices of the optimal single-scale classification and the optimal multi-scale classification were statistically

different at a 90% confidence level (α = 0.10), and the error matrices of the pixel-based classifications performed without and with super-object variables were different at a 95% confidence level (α = 0.05). These results confirm that super-object variables contributed significantly to the improvement in classification accuracy, and are consistent with the findings from the Deerfield study area.

Figure 13: Overall classification accuracies for the single-scale and per-pixel classifications (a), and the multi-scale classifications with: single pixels (b), scale 10 segments (c), scale 20 segments (d), and scale 30 segments (e) used as the base units for

classification. A scale parameter of 0 indicates a pixel-based classification. For the multi-scale classifications, "Scale Parameters" indicate which segmentations were used for assigning super-object information to the base units for each classification (e.g. "Scale Parameters" of 0-40 indicate that super-object variables from the scale 10, 20, 30, and 40 segmentations were assigned to single pixels for classification).
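The "Scale Parameters" notation above describes which coarser segmentations contribute super-object variables to each base unit. A hedged sketch of that attachment step (the data structures and variable names here are illustrative, not the study's actual software):

```python
def add_superobject_variables(base_features, membership, scale_features):
    """Attach super-object variables to base units.

    base_features:  {base_id: {variable: value}} for pixels/fine segments
    membership:     {scale: {base_id: super_object_id}}
    scale_features: {scale: {super_object_id: {variable: value}}}
    """
    table = {}
    for bid, feats in base_features.items():
        row = dict(feats)                       # the base unit's own variables
        for scale, parents in membership.items():
            parent = parents[bid]               # containing super-object
            for name, value in scale_features[scale][parent].items():
                row[f"{name}_{scale}"] = value  # e.g. NDVI_40
        table[bid] = row
    return table
```

Each base unit thus carries its own spectral/texture variables plus those of every coarser segment that contains it, which is the feature vector the classifier receives.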


Table 5: Error matrices for the pixel-based classification (a), the best pixel-based classification with super-object variables (b), the best object-based classification (c), and the best object-based classification with super-object variables (d). Note: G, grass; T, tree; B, building; I, other impervious; SH, shadow; SO, soil; P, pool; PA, producer's accuracy; UA, user's accuracy.

Scale Parameter(s)   Z score   Significant at α=0.10?   Significant at α=0.05?
40 vs. 20-60         —         yes                      no
0 vs. 0-60           —         yes                      yes
0-60 vs. 20-60       —         no                       no

Table 6: Pairwise comparison of error matrices for classifications performed with and without super-object variables. Scale Parameter 40 is the most accurate single-scale classification, 20-60 is the most accurate scale 20 segmentation with super-object variables, 0 is the pixel-based classification, and 0-60 is the most accurate pixel-based classification with super-object variables.

For the most accurate pixel-based and object-based multi-scale classifications (i.e. the pixel-based and scale 20 classifications that included super-object variables from the scale 30, 40, 50, and 60 segmentations), the majority of classes achieved producer's and user's accuracies of 70%-90%. However, bare soil had low producer's and user's accuracies due to its similar spectral signature to buildings and other impervious surfaces. It is possible that the texture information in higher resolution imagery would improve soil classification, as soil should typically have a less smooth texture than paved surfaces, but this hypothesis can't be confirmed in this study.
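The texture hypothesis above (paved surfaces smoother than bare soil) can be probed with a gray-level co-occurrence entropy. The study's GLCM Entropy variable was computed by its segmentation software, so the single-offset version below is only a toy illustration of the idea:

```python
import numpy as np

def glcm_entropy(img, levels=8):
    """Toy GLCM entropy for horizontal neighbour pairs on an image with
    values in [0, 1). Smooth surfaces concentrate co-occurrences in few
    cells and score low; rough surfaces spread them out and score high."""
    q = np.minimum((img * levels).astype(int), levels - 1)  # quantize
    glcm = np.zeros((levels, levels))
    for a, b in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):
        glcm[a, b] += 1                      # count co-occurrences
    p = glcm / glcm.sum()
    p = p[p > 0]
    return -(p * np.log2(p)).sum()
```

A perfectly uniform patch yields entropy 0, while a noisy patch yields a strictly positive value, matching the intuition that soil should separate from pavement on texture.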

The error matrices in Table 5 show that the use of super-object information led to an increase in producer's and user's accuracy for almost every class. The only exception was the producer's accuracy of the grass class, which was highest in the pixel-based classification without super-object information. This increase in producer's and user's accuracy for almost every class is consistent with the results in the Deerfield study area, and gives further evidence that using super-object variables is beneficial for most types of urban land cover. The land cover map produced by the best multi-scale object-based classification is shown in Figure 14, and a subset of the land cover maps of the best single-scale and multi-scale classifications is shown in Figure 15 to allow for a visual comparison of results.
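The study evaluated super-object variables one additional scale level at a time (e.g. 0-30, then 0-40, and so on), keeping additional levels only while accuracy improved. A schematic of such a stepwise search, with `score_fn` standing in (hypothetically) for a full train-and-validate run of the classifier:

```python
def stepwise_scale_selection(scales, score_fn):
    """Add super-object scale levels one at a time, coarser each step,
    and continue only while validated accuracy improves.
    `score_fn(levels)` is a placeholder for training and validating the
    classifier with super-object variables from those scale levels."""
    chosen = []
    best = score_fn([])          # baseline: no super-object variables
    for s in scales:             # e.g. [40, 60, 80, 100, 120]
        score = score_fn(chosen + [s])
        if score <= best:        # accuracy stopped improving; stop here
            break
        chosen = chosen + [s]
        best = score
    return chosen, best
```

This mirrors the observed pattern in which accuracy rose as super-object levels were added and then fell once highly-undersegmented super-object scales were included.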

Figure 14: Land cover map from the best multi-scale object-based classification of the Houston image. Dark green, trees/shrubs; light green, grass; purple, buildings; gray, other impervious; pink, bare soil; blue, pools. See Figure 15 for a visual map legend.

Figure 15: Subset of the Houston image (a), and land cover maps produced by the most accurate multi-scale (b) and single-scale (c) classifications.

Comparison of Random Forest and Decision Tree classifications. As was done for the Deerfield image, the Decision Tree classification algorithm was also tested on the dataset of the best object-based classification with super-object variables. For the Decision Tree classification of the Houston image, the optimal CF parameter was 0.1, and the overall accuracy and Kappa coefficient were 67.9% and 0.592, respectively. These accuracy measures were much lower than those achieved by the Random Forest algorithm (77.7% overall accuracy, 0.714 Kappa coefficient). Producer's and user's accuracies from the Decision Tree classification were also lower for all types of land cover. These results confirm that the Random Forest algorithm was better suited for classification purposes.

4.4 Summary

In both the Deerfield and Houston study area images, the use of super-object variables led to a significant increase in overall classification accuracy and Kappa coefficients. These findings are significant, as they show that classification method 1 provides a viable method for multi-scale object-based classification that is less subjective and more automated than traditional methods. Best results in both study areas were achieved when pixels or fine-scale image segments were assigned super-object variables. As coarser-scale segments were used as the base units for classification, accuracy tended to decrease due to undersegmentation of the base units. Based on these results, it is recommended that single pixels or very fine-scale segments (i.e. segments no larger than

any of the land cover types of interest) be used as the base units for a classification that includes super-object information, as oversegmentation had less of a negative impact on classification accuracy than undersegmentation (use of super-object variables can reduce errors caused by oversegmentation, but not undersegmentation). A decrease in accuracy was also observed when super-object variables from highly undersegmented segmentations were used for classifying the fine-scale base units: in the Deerfield image, there was a decrease after the scale parameter was increased past 80, and in the Houston image after it was increased past 60. Therefore, it is recommended to do classification in a stepwise manner (i.e. adding one super-object level at a time), as was done in this study, in order to ensure the best results. In comparing the results of the best classifications with super-object variables from the two study areas, it is clear that for most land cover classes, the producer's and user's accuracies were lower for the Houston study area than the Deerfield study area. It is possible that this was due to the lower spatial resolution of the Houston imagery (1 m as opposed to 30 cm), as the more detailed texture information in the 30 cm imagery may have been useful for separating some spectrally-similar types of land cover such as grass and trees (grass has a smoother texture than trees due to canopy gaps in trees, and this is more evident in 30 cm imagery). The use of spectral and non-spectral variables from super-objects of pixels or fine-scale image segments led to higher overall accuracy than that achieved when variables of segments at any single scale were used alone for classification. However, a limitation of using pixels or fine-scale segments as the base units for classification is

that there is not a one-to-one relationship between the classified pixels/image segments and the land cover features of interest. For example, a tree or building may consist of several segments or pixels, rather than just one, if pixels or small segments are the base units for classification. For this reason, it should be emphasized that the pixels or image segments classified using classification method 1 produce accurate categorical information (i.e. class assignment), but relatively inaccurate spatial information (e.g. segment sizes and shapes) for land cover features of interest. A great deal of GEOBIA research has focused on identifying and/or optimizing image segmentation parameters to achieve accurate segment boundaries for features of interest (Jin and Davis, 2005; Kim et al., 2008; Marpu et al., 2010; Johnson and Xie, 2011). Additional research has focused on optimizing segment boundaries after classification using expert knowledge (Tiede et al., 2010). Accurate segment boundaries are important for studies that aim to, for example, identify the number of trees or buildings in an image, or estimate each tree's canopy size/each building's footprint. When pixels or small segments are the base units for classification, this type of analysis is not possible without further processing. Merging adjacent pixels or segments of the same land cover class together (e.g. merging neighboring building segments into a single building) may provide more accurate boundaries and a more accurate count of the number of features, but this will not work well if two ground features of the same land cover class are adjacent to one another. For example, a group of trees will be mapped as just one tree if the neighboring pixels or segments are merged together. Additional research is necessary to identify ways to group these classified pixels or segments into units that more closely approximate features of interest. However, for

studies that are mainly interested in mapping land cover, such as this one, it is not problematic if many image segments make up one feature of interest, as long as they are classified correctly. For future studies, it may be interesting to test the use of different feature selection algorithms prior to classification to see if results further improve, and to compare the classification accuracy achieved by the Random Forest algorithm with results obtained using other classification algorithms. Use of additional input data, such as LIDAR height information, may also improve results. Classification methods that incorporate super-object information should also be tested in other types of environments (forested areas, wetlands, etc.) to see if they are applicable in non-urban areas where features have more irregular sizes and shapes. Finally, further research is necessary to identify methods for grouping the classified pixels/image segments into units that more closely approximate features of interest such as individual buildings or trees.
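The grouping problem described above (adjacent same-class features collapsing into one unit when merged) can be demonstrated with connected-component labelling. A hedged sketch, assuming SciPy is available; the study itself did not use this tooling:

```python
import numpy as np
from scipy import ndimage

# Toy classified raster: 1 = building, 0 = everything else.
classes = np.array([[1, 1, 0, 1],
                    [1, 0, 0, 1],
                    [0, 0, 1, 1]])

# 4-connected labelling merges every run of touching same-class pixels
# into a single feature. The connected blob on the right could in
# reality be two separate buildings that happen to touch, yet it
# receives only one label, so counting labels undercounts features.
buildings, n_features = ndimage.label(classes == 1)
```

This is exactly why merging neighboring classified segments gives accurate class maps but unreliable feature counts when ground features of one class are adjacent.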

5. IMAGE SEGMENTATION REFINEMENT USING LOCAL HETEROGENEITY MEASURES

5.1 Introduction

Image segmentation quality has been shown to have an impact on image classification accuracy (Dorren et al., 2003; Kim et al., 2009), so research that deals with evaluating image segmentation quality to identify good segmentation parameters has become a topic of interest in remote sensing. Most of these evaluation methods fall into one of three categories: visual, supervised, or unsupervised. Visual methods involve the user(s) identifying parameters that produce high-quality image segmentations by visually comparing multiple segmentations. This method has been the most widely-used method of assessing image segmentation quality (Zhang et al., 2008). One problem with the visual method is that it can be highly subjective, as different people may have different ideas about what is the best segmentation (or set of segmentations) (Paglieroni, 2004). This process can also be time and labor intensive, since several segmentations need to be inspected in detail to identify the best one(s). Supervised evaluation methods, also called empirical discrepancy methods, involve comparing multiple image segmentations with a "ground truth," or reference digitization, that is created by the user for at least part of the image (Zhang, 1997). This comparison involves computing a measure of dissimilarity between the image segmentations and the reference digitization (Chabrier et al., 2006). Dissimilarity

measures can be based on differences in color, location, size, or shape (Abeyta and Franklin, 1998; Carleer et al., 2005; Moller et al., 2007; Neubert et al., 2008; Clinton et al., 2010). The segmentation most similar to the reference digitization is determined to be the optimal segmentation. Since different dissimilarity measures can indicate wildly different segmentations as "best," it may be necessary to consider the results of multiple dissimilarity measures when choosing an optimal segmentation (Clinton et al., 2010). Most studies employing supervised methods used them to evaluate the quality of single-scale segmentations (Paglieroni, 2004; Carleer et al., 2005; Chabrier et al., 2006; Moller et al., 2007; Neubert et al., 2008). However, Trias-Sanz et al. (2008) compared different hierarchical, multi-scale segmentations using two reference segmentations, one containing compulsory edges and one with optional edges. The main disadvantage of the supervised approach is that creating a reference digitization can be difficult, subjective, and time-consuming (Zhang et al., 2008). Creating a reference digitization for a large image or for many different images may not be feasible. Unsupervised evaluation methods, or empirical goodness methods, allow segmentation quality to be assessed quantitatively without the need for a reference digitization or a detailed visual comparison of multiple image segmentations (Chabrier et al., 2006), making them more time and labor efficient and less subjective than the visual and supervised methods. Unsupervised evaluation methods involve scoring and ranking multiple image segmentations using some quality criteria, which are typically established in agreement with human perception of what makes a good segmentation (Chabrier et al., 2006). One widely-accepted definition of a good segmentation is one in

which: (i) regions are uniform and homogeneous, (ii) regions are significantly different from neighboring regions, (iii) regions have a simple interior without many holes, and (iv) regions have boundaries that are simple, not ragged, and spatially accurate (Haralick and Shapiro, 1985). However, for highly-textured and natural images, only the first two criteria can realistically be applied (Zhang et al., 2008). So, for remote sensing images, a good segmentation should, in theory, maximize intra-segment homogeneity and inter-segment heterogeneity. For this reason, most unsupervised evaluation methods involve calculating intra-segment and inter-segment heterogeneity measurements for each segment, and then aggregating these values into a global value (Chabrier et al., 2006). The global intra- and inter-segment values are then combined to assign an overall goodness score to the segmentation (Zhang et al., 2008). In remote sensing, relatively few studies have incorporated unsupervised segmentation evaluation methods. Stein and De Beurs (2005) used complexity metrics to quantify the semantic accuracy of image segmentations for two Landsat images. Chabrier et al. (2006) tested and compared six different evaluation criteria and three different algorithms for segmenting radar and multispectral aerial images. Espindola et al. (2006) measured intra-segment homogeneity using the weighted variance of the near infrared (NIR) band and measured inter-segment heterogeneity using a spatial autocorrelation measure, Global Moran's I, for the NIR band as well. The two values were normalized and combined to identify optimal segmentation parameters in an urban area. In a similar study, Kim et al. (2008, 2009) computed unweighted variance (NIR band) and Global Moran's I (NIR band) and graphed each separately to compare multiple segmentations

and identify optimal parameters for segmenting forest stands. The segmentation level where variance began to level off and Moran's I was lowest was found to be the optimal segmentation. Radoux and Defourny (2008) used a combination of unsupervised (normalized post-segmentation standard deviation) and supervised (border discrepancy) methods to evaluate segmentation results in a rural area. The unsupervised evaluation methods used in these studies all involved computing global goodness scores for image segmentations. However, since some intra- and inter-segment heterogeneity measures (e.g. variance and spatial autocorrelation) can be calculated for individual segments, this information may also be useful for assessing local segmentation quality. In this chapter, a global unsupervised segmentation evaluation method similar to that of Espindola et al. (2006) is used for scoring and ranking multiple single-scale image segmentations. An unsupervised evaluation method was chosen because it does not require a reference digitization or expert knowledge of the landscape, and because intra- and inter-segment heterogeneity information may be useful for assessing local segmentation quality (i.e. local over- and undersegmentation) as well as global segmentation quality. Unlike previous studies, undersegmented and oversegmented image segments in the optimal single-scale segmentation will be identified and refined using local heterogeneity statistics. The optimal single-scale segmentation will be refined by (a) further segmenting undersegmented regions at finer scales, and (b) merging oversegmented regions with spectrally similar adjacent regions that are also oversegmented. The refined segmentations will then be classified using the methods described in section 3.2.2, and classification accuracies will be compared to those

achieved by single-scale classification methods. Classification method 2 should be able to overcome some of the limitations of traditional single-scale object-based classification methods (many segments are under- or oversegmented in the single-scale segmentation) and multi-scale methods (subjective and reliant on expert knowledge of the study area).

5.2 Methodology

5.2.1 Identifying the optimal single-scale segmentation

Deerfield study area. As stated in the previous section, classification method 2 involves identifying an optimal single-scale segmentation (out of the single-scale segmentations of the Deerfield Beach image) and then refining it. To evaluate the single-scale segmentations, global intra- and inter-segment goodness measures were calculated. The optimal image segmentation scale was defined as the segmentation at which intra-segment homogeneity and inter-segment heterogeneity were maximized. At this optimal scale, segments should, on average, be internally uniform and significantly different from their neighbors. These criteria fit with what is generally accepted as a good segmentation for natural images (Zhang, 2008). The global intra-segment goodness measure used was variance, weighted by each segment's area, calculated in Equation 1 as:

wvar = Σ (a_i × v_i) / Σ a_i    (1)

where v_i is the variance and a_i is the area of segment i. Variance was chosen as the intra-segment goodness measure because segments with low variance should be relatively

homogeneous. Weighted variance (wvar) was used for the global calculation so that large segments had more of an impact on global calculations than small ones. The inter-segment global goodness measure was Global Moran's I, a spatial autocorrelation metric that calculates, on average, how similar a region is to its neighbors (Fotheringham et al., 2000). Moran's I was chosen because it is a reliable indicator of statistical separation between spatial objects (Fotheringham et al., 2000), and was found to be a good indicator of segmentation quality in previous segmentation evaluation studies (Espindola et al., 2006; Kim et al., 2008). For this study, Global Moran's I (MI) is calculated using the formula in Equation 2:

MI = n Σ_i Σ_j w_ij (y_i − ȳ)(y_j − ȳ) / [(Σ_i Σ_j w_ij) × Σ_i (y_i − ȳ)²]    (2)

where n is the total number of regions, w_ij is a measure of the spatial proximity, y_i is the mean spectral value of region R_i, and ȳ is the mean spectral value of the image. Each weight w_ij is a measure of the spatial adjacency of regions R_i and R_j. In this study, only adjacent regions (i.e. regions sharing a boundary) were considered for the Moran's I calculation, so if regions R_i and R_j are neighbors, w_ij = 1; otherwise, w_ij = 0. Low Moran's I values indicate high inter-segment heterogeneity, which is desirable for an image segmentation. For all image segmentations, both goodness measures were calculated for the NIR spectral band because previous research, performed as part of this dissertation research, showed that calculations for the NIR band alone provided a better estimate of segmentation quality than average values calculated when all three spectral bands were used (Johnson and Xie, 2011). To allow for the intra-segment and inter-segment goodness

measures to be considered equally, both were rescaled to a similar range (0-1) using the normalization formula in Equation 3:

X_norm = (X − X_min) / (X_max − X_min)    (3)

where X_min and X_max are the minimum and maximum values of weighted variance or Moran's I for the NIR band. The normalization equation causes segmentations with low weighted variance or Moran's I to have normalized values relatively close to zero. Since low variance and low spatial autocorrelation are desirable, low normalized values indicate better results. To assign an overall Global Score (GS) to each image segmentation, the normalized weighted variance and Moran's I values are combined using the formula in Equation 4:

GS = V_norm + MI_norm    (4)

where V_norm is the normalized weighted variance and MI_norm is the normalized Moran's I. This calculation is similar to the method used by Espindola et al. (2006). The optimal segmentation was defined as the one with the lowest GS, because at this level there is the lowest combined weighted variance and spatial autocorrelation for the NIR band.

Houston study area. The same methods described in the previous section were used to identify the optimal single-scale segmentation of the Houston image. The only difference was that the segmentation parameters included for the comparison ranged from 10-70, as described earlier.

5.2.2 Refining undersegmented regions

Deerfield study area. The optimal single-scale segmentation is defined as the scale at which the lowest GS is achieved. However, even at this optimal scale, it is likely that many segments will still be under- or oversegmented because different types of land cover have different optimal segmentation scales. To refine the optimal single-scale segmentation by reducing undersegmentation, local intra- and inter-segment heterogeneity statistics were used to identify undersegmented regions and segment them again at finer scales. Regions considered for further segmentation were those with high intra- and inter-segment heterogeneity. The theoretical basis for using these criteria was that, if a segment is very heterogeneous internally, it is likely to be undersegmented. However, if the heterogeneous segment is also very similar to its neighbors, it may not actually be undersegmented. For example, a segment that contains pixels of a tree canopy may be heterogeneous internally (due to small gaps and shadows in the canopy), but very similar to neighboring segments that also contain pixels from the same tree. In this case, further segmentation is not desirable because the tree is actually already oversegmented. However, if a segment is heterogeneous internally and also very different from its neighbors, it is: (i) likely to need further segmentation and (ii) unlikely to be oversegmented already. To identify the regions that needed to be segmented at finer scales, local intra- and inter-segment heterogeneity statistics were calculated for each segment. Variance was used as the intra-segment heterogeneity statistic, and the inter-segment heterogeneity statistic was Local Moran's I (Anselin, 1995), a decomposed form

of Moran's I that measures spatial autocorrelation for each segment. Local Moran's I was calculated using the formula in Equation 5:

I_i = z_i Σ_j w_ij z_j    (5)

where z_i and z_j are deviations from the mean. Only neighboring segments were considered for calculations, so w_ij = 1 for neighboring segments, and w_ij = 0 for all other segments. Variance and Local Moran's I were chosen to measure local intra- and inter-segment heterogeneity because they were quite similar to the global measures used to calculate the GS values. Variance and Local Moran's I values were normalized for the NIR band using the normalization equation from Equation 3, and a Heterogeneity Index (H) was developed to assign a heterogeneity value to each segment, using the formula in Equation 6:

H = (nvar − nmi) / (nvar + nmi)    (6)

where nvar is the normalized variance and nmi represents the normalized Local Moran's I. H values range from -1 to 1, with values close to 1 indicating high variance and low Moran's I. High H values indicate segments with high intra- and inter-segment heterogeneity. A map of H values for segments in a subset of the Deerfield Beach study area is shown in Figure 16. In this figure, it is clear that the segments with high H values tend to consist of a mixture of land cover with different spectral characteristics, causing them to be heterogeneous internally and significantly different from their neighbors. Several thresholds were tested for selecting regions with the highest average H values, because it is difficult to know beforehand which threshold will produce the best results. For the Deerfield Beach image, the segments with the top 10%, 20%, 30%, 40%, 50%,

60%, and 70% of H values were selected as the thresholds [Note: as reported later in the Results and Discussion section, classification accuracy dropped off after the 60% threshold was used, so no thresholds above 70% were used]. Polygon boundaries of these selected segments were used to extract the regions in the image that needed further segmentation. The extracted image regions were then re-segmented at finer scales (i.e. using smaller scale parameters) than the one used for the original segmentation. Extracted undersegmented regions using two H thresholds are shown for a subset of the study area in Figure 17. As reported later in the Results and Discussion section, the optimal single-scale segmentation was achieved using a scale parameter of 80, so the extracted image regions were segmented at scales of 60, 40, and 20. The refined segments were then merged back with the remaining segments from the scale 80 segmentation using the Merge tool in ArcGIS. This process led to the creation of three new segmented images for each H threshold (a total of 21 segmented images). These segmentations will be referred to as "two-scale" segmentations because each is composed of segments of two different segmentation scales (i.e. 80 and either 60, 40, or 20). Image classification was performed for each of the two-scale segmentations in order to compare their classification accuracies with those of the single-scale segmentations. Classification was performed using the Random Forest algorithm with the software and parameters described in section 3.2.2.

Figure 16: H values for a subset region of the Deerfield Beach image. High H values (red polygons) indicate regions that are likely to be undersegmented; low H values (blue polygons) indicate regions that are likely to be oversegmented.

Figure 17: Extracted undersegmented image regions with heterogeneity (H) thresholds of 30% (a) and 60% (b). Polygon boundaries are shown as black lines. Many of the extracted regions contain more than one type of land cover.

Houston study area. The same methods described in section were used to refine undersegmented regions in the optimal single-scale segmentation of the Houston image. The only differences for the Houston image were (a) the parameters of the optimal single-scale segmentation, (b) the thresholds used for extracting undersegmented regions that needed refining, and (c) the scale parameters used to refine those segments. As reported later in the Results and Discussion section, the optimal single-scale segmentation was determined to be the scale 30 segmentation. The H thresholds used to extract undersegmented regions from the scale 30 segmentation were the top 10%, 20%, 30%, 40%, and 50% of H values. As reported in section , classification accuracy decreased when thresholds greater than 40% were used, so no thresholds higher than 50% were tested. Since the optimal single-scale segmentation was the scale 30 segmentation, scale parameters of 20 and 10 were used to further segment the extracted undersegmented regions.

Refining oversegmented regions

Deerfield study area. To reduce oversegmentation of the image, H values were used to select the least heterogeneous segments from the optimal single-scale segmentation. Segments with low H values are likely to be oversegmented because they are homogeneous internally and very similar to their neighbors. This high degree of similarity, both within the segment and with its neighbors, indicates that the segment and its neighbor(s) are likely to be parts of

the same ground feature. For this study, four different H value thresholds were tested for selecting these homogeneous segments (the lowest 10%, 20%, 30%, and 40% of segments), and the polygon boundaries of these segments were used to extract the regions in the image that needed to be merged with similar neighbors. As reported in section , classification accuracy dropped off as the H threshold was increased past 30%, so no thresholds higher than 40% were used. Extracted image regions using the different thresholds are shown for a subset of the Deerfield image in Figure 18.

Figure 18: Extracted oversegmented image regions with heterogeneity (H) thresholds of 10% (a), 20% (b), 30% (c), and 40% (d). Polygon boundaries are shown as black lines. Many of the segments represent only a part of a land cover feature.

Adjacent oversegmented regions were combined using the Spectral Difference algorithm in Definiens Professional 5. This algorithm merges neighboring segments according to their mean layer intensity values: segments are merged if the difference in their layer mean intensities is below the value specified in the user-input Spectral Difference parameter. The algorithm is used to refine existing segmentation results by merging regions with similar values produced by previous segmentations (Definiens, 2006). For this study, all three spectral bands were given equal weight for the Spectral Difference segmentation, and four Spectral Difference parameters were tested for merging segments (5, 10, 15, and 20). Parameters higher than 20 were not used because classification accuracy tended to drop off beyond 15, as reported in section . The refined segments were combined with the remaining segments from the optimal two-scale segmentation using the Merge tool in ArcGIS. This process led to the creation of 20 three-scale image segmentations. An example of the merging process is shown for a subset of the Deerfield image in Figure 19. Each of the three-scale segmentations was classified using the Random Forest algorithm, with the software and classification parameters described in section . The three-scale segmentation with the highest overall classification accuracy was chosen as the optimal three-scale classification, and the classification accuracy (overall accuracy and Kappa) achieved by this segmentation was compared with the accuracies achieved by the optimal single-scale segmentation and the optimal two-scale segmentation to assess the effect that refining under- and oversegmented regions had on classification accuracy.
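The Spectral Difference merge can be approximated with a simple union-find pass over the segment adjacency graph. This sketch is a simplified stand-in for the Definiens algorithm, which weights all three bands and merges based on the current (merged) region means rather than the original segment means; the segment means, adjacency list, and parameter value here are illustrative assumptions.

```python
def spectral_difference_merge(means, neighbors, max_diff):
    """Merge adjacent segments whose mean-intensity difference is below
    max_diff; returns a merged-region label for each segment.
    Simplification: single band, differences taken between the original
    segment means rather than updated region means."""
    parent = list(range(len(means)))

    def find(i):  # union-find root lookup with path compression
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i, adjacent in neighbors.items():
        for j in adjacent:
            if abs(means[i] - means[j]) < max_diff:
                root_i, root_j = find(i), find(j)
                if root_i != root_j:
                    parent[root_j] = root_i  # union the two regions
    return [find(i) for i in range(len(means))]
```

With the parameters tested here (5, 10, 15, 20), larger values merge more aggressively; past 15, merging evidently began absorbing genuinely distinct neighbors, lowering classification accuracy.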

Figure 19: Refined undersegmented regions (a), refined oversegmented regions (b), remaining segments from the optimal single-scale segmentation (c), and the final refined segmentation (d) created by merging (a), (b), and (c) together.

Houston study area. The same methods described in section were used for refining oversegmented regions of the Houston image. The only differences were the parameters of the optimal single-scale segmentation and the thresholds used for extracting oversegmented regions that needed refining. As described previously, the optimal single-scale segmentation was the scale 30 segmentation. The H thresholds used were the lowest

10%, 20%, 30%, 40%, and 50%. H thresholds higher than 50% were not used because, as detailed in section , classification accuracy was reduced when the threshold was increased past 40%. The Spectral Difference parameters tested were 5, 10, 15, and 20. Spectral Difference parameters greater than 20 were not tested because classification accuracy dropped off considerably when parameters higher than 15 were used.

5.3 Results

In section 5.3, the results of classification method 2 will be reported and discussed for the Deerfield and Houston study areas, and comparisons will be made between the results found in the two areas.

Deerfield study area

Identifying the optimal single-scale segmentation. The first step for classification method 2 was to identify the optimal single-scale segmentation. It should be noted that, for classification method 2, the optimal single-scale segmentation was defined as the segmentation with the lowest GS value, whereas for classification method 1 it was defined as the segmentation with the highest overall classification accuracy. The reason for the different definition in classification method 2 is that the goal was to find the single-scale segmentation that minimized both over- and undersegmentation, and then refine the over- and undersegmented regions in this segmentation. Ideally, the segmentation with the lowest GS value would also be the segmentation with the highest classification accuracy, but because the negative impact

that undersegmentation has on image classification is stronger than that of oversegmentation, this may not always be the case. As shown in Table 7, the segmentation with the lowest GS value (0.662) was the scale 80 segmentation. The overall classification accuracy for this segmentation was 75.7% and the Kappa coefficient was 0.694. The single-scale segmentation with the highest overall accuracy and Kappa coefficient (78.1% and 0.73, respectively) was the scale 40 segmentation.

Scale | MI | wvar | MI_norm | V_norm | GS
Table 7: Moran's I (MI), weighted variance (wvar), normalized Moran's I (MI_norm), normalized weighted variance (V_norm), and Global Score (GS) for each of the single-scale segmentations.

Refining the optimal single-scale segmentation. After the optimal single-scale segmentation was identified, the next step was to: 1) test different H thresholds for extracting undersegmented regions, 2) further segment the extracted regions at finer scales, and 3) merge them back with the unrefined segments. The resultant segmentations were referred to as two-scale segmentations, and there were 21 in total. Overall classification accuracies for each of the two-scale segmentations are shown in Table 8. The highest overall accuracy (79.7%) was achieved

when segments with the highest 60% of H values were further segmented using a scale parameter of 20. In general, there was an increasing trend in overall accuracy as the number of segments extracted for refinement increased, although accuracy decreased when an H threshold greater than 60% was used.

H Threshold | Scale Parameter | Overall Accuracy
Table 8: Overall classification accuracy for each of the two-scale segmentations.

Next, oversegmented regions from the optimal single-scale segmentation were extracted for refinement using four different H thresholds (10%, 20%, 30%, and 40%), and Spectral Difference parameters of 5, 10, 15, and 20 were tested for refining the extracted regions. H thresholds higher than 40% were not used because classification accuracy decreased as the threshold was raised past 30%, and Spectral Difference parameters higher than 20 were not used because classification accuracy decreased as the parameter was increased past 15. Once the extracted segments were refined, they were

then merged with the unrefined segments from the scale 80 segmentation and with the best refined undersegmented regions (i.e. the undersegmented regions extracted using the H threshold of 60% and re-segmented with a scale parameter of 20). The result was 16 three-scale segmentations. The three-scale segmentations were then classified, and their overall accuracies are reported in Table 9. The highest overall accuracy (81.9%) and Kappa coefficient (0.772) were achieved when an H threshold of 30% (i.e. the most homogeneous 30% of segments) and a Spectral Difference parameter of 10 were used.

H Threshold | Spectral Difference Parameter | Overall Accuracy
Table 9: Overall classification accuracy for each of the three-scale segmentations.

The overall classification accuracy of the three-scale segmentation was higher than that of the unrefined segmentation (the scale 80 segmentation), and also better than that of the single-scale segmentation with the highest classification accuracy (the scale 40

segmentation). Producer's and user's accuracies for most land cover classes also increased, as shown in the error matrices in Table 10. The major exception was the producer's accuracy of the grass class, which was highest in the most accurate single-scale classification (i.e. the scale 40 classification). The results of pairwise z tests, reported in Table 11, show that the improvement in accuracy of the three-scale segmentation over the scale 80 segmentation was significant at a 95% confidence level, and the improvement over the scale 40 segmentation was significant at an 85% confidence level. These results provide strong evidence that using the H index to refine a single-scale segmentation can improve the overall classification accuracy of the resultant land cover map. The classified land cover map of the most accurate three-scale segmentation is shown in Figure 20, and a subset of the land cover maps of the most accurate single- and three-scale segmentations/classifications is shown in Figure 21 to allow for a visual inspection of results.
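The accuracy figures quoted throughout this section (overall accuracy, Kappa, producer's and user's accuracy) all derive from the error matrix in a standard way. As a hedged illustration, taking rows as reference labels and columns as map labels:

```python
import numpy as np

def accuracy_metrics(cm):
    """Accuracy measures from a square error (confusion) matrix,
    with rows = reference classes and columns = classified classes."""
    cm = np.asarray(cm, dtype=float)
    n = cm.sum()
    diag = np.diag(cm)
    overall = diag.sum() / n
    # Kappa corrects overall agreement for the agreement expected by chance
    chance = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n ** 2
    kappa = (overall - chance) / (1.0 - chance)
    producers = diag / cm.sum(axis=1)  # 1 - omission error, per class
    users = diag / cm.sum(axis=0)      # 1 - commission error, per class
    return overall, kappa, producers, users
```

For a two-class matrix [[40, 10], [5, 45]], this gives an overall accuracy of 0.85 and a Kappa of 0.70.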


Table 10: Error matrices for the most accurate single-scale segmentation [scale 40] (a), the single-scale segmentation with the lowest GS value [scale 80] (b), the best two-scale segmentation (c), and the best three-scale segmentation (d). Note: G, grass; T, tree; B, building; I, other impervious; SH, shadow; SO, soil; P, pool; PA, producer's accuracy; UA, user's accuracy.

Segmentations | Z score | Significant at α=0.15? | Significant at α=0.05?
Scale 80 vs. best three-scale segmentation | 2.43 | yes | yes
Scale 40 vs. best three-scale segmentation | 1.44 | yes | no
Table 11: Pairwise comparisons of single-scale and refined three-scale segmentations.
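The pairwise z tests in Table 11 compare the Kappa coefficients of two classifications. The Kappa variance estimates are not reproduced in this section, so the sketch below only shows the form of the statistic, with the variances as inputs that an accuracy assessment would supply:

```python
import math

def kappa_z(kappa1, var1, kappa2, var2):
    """z statistic for the difference between two independent Kappa
    coefficients; var1 and var2 are their estimated variances."""
    return abs(kappa1 - kappa2) / math.sqrt(var1 + var2)

# Two-tailed critical values: |z| > 1.96 is significant at alpha = 0.05,
# and |z| > ~1.44 at alpha = 0.15, matching the two columns of Table 11.
```

Hence the scores in Table 11: 2.43 clears both thresholds, while 1.44 clears only the α = 0.15 threshold.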

Figure 20: Classified land cover map of the most accurate three-scale segmentation.

Figure 21: Subset of the Deerfield image (a), the most accurate three-scale segmentation/classification (b), and the most accurate single-scale segmentation/

classification (c). In general, there is a better correspondence between the imagery and the three-scale classification.

Comparison of Random Forest and Decision Tree classifications. As was done for classification method 1 in section , the Random Forest classification of the most accurate multi-scale segmentation was compared to a Decision Tree classification of the same segmentation to see which performed better. For the Decision Tree classification, the highest overall classification accuracy (77.3%) and Kappa coefficient (0.713) were achieved using a CF of 0.2. These accuracy measures are much lower than those achieved using the Random Forest algorithm (81.9% and 0.772). The better performance of the Random Forest algorithm is consistent with the results found for classification method 1.

Houston image

Identifying the optimal single-scale segmentation. The procedure for identifying the optimal single-scale segmentation of the Houston image was identical to that used for the Broward image. As shown in Table 12, the segmentation with the lowest GS value (0.761) was the scale 30 segmentation. The overall classification accuracy for this segmentation was 71.3% and the Kappa coefficient was 0.632. The single-scale segmentation with the highest overall accuracy and Kappa coefficient (73.4% and 0.66, respectively) was the scale 40 segmentation; so again the segmentation with the lowest GS value did not have the best overall classification

accuracy. It is interesting to note that, in this case, the most accurate single-scale segmentation was actually coarser than the segmentation with the lowest GS value, so the higher classification accuracy is not likely to be due to the larger negative impact of undersegmentation, as was thought to have happened in the Broward study area. The segmentation with the lowest GS value (i.e. the scale 30 segmentation) was chosen for refinement based on the approved dissertation research proposal, but for future research it may be interesting to compare this result with that obtained by refining the most accurate single-scale segmentation instead (i.e. the scale 40 segmentation).

Scale | MI | wvar | MI_norm | V_norm | GS
Table 12: Moran's I (MI), weighted variance (wvar), normalized Moran's I (MI_norm), normalized weighted variance (V_norm), and Global Score (GS) for each of the single-scale segmentations.

Refining the optimal single-scale segmentation. Five different H thresholds (10%, 20%, 30%, 40%, and 50%) were tested for extracting undersegmented segments from the scale 30 segmentation, and for each H threshold, scale parameters of 10 and 20 were used to further segment the extracted

image regions. In total, there were 10 two-scale segmentations for the Houston image. As reported in Table 13, the scale parameter of 10 was generally more effective than the scale parameter of 20 for refining the scale 30 segments. The overall accuracy was highest when the most heterogeneous 20% of segments (i.e. the segments with the highest 20% of H values) were further segmented using a scale parameter of 10, so this was defined to be the best two-scale segmentation. As the H threshold was increased past 20%, accuracy tended to decrease; no thresholds higher than 50% were tested due to this decrease in accuracy.

H Threshold | Scale Parameter | Overall Accuracy
Table 13: Overall classification accuracy for each of the two-scale segmentations of the Houston image.

Next, the oversegmented regions were extracted using the methods described in section . H thresholds of 10%, 20%, 30%, 40%, and 50% and Spectral Difference parameters of 5, 10, 15, and 20 were tested. Finally, the refined regions were merged with the remaining segments from the best two-scale segmentation, resulting in the creation of 25 three-scale segmentations. The overall accuracies for each of these segmentations, reported in Table 14, show an increasing trend in accuracy as the Spectral Difference parameter was increased from 5 to 15, and then a decrease when parameters higher than

15 were used. The highest overall classification accuracy (75.7%) was achieved using an H threshold of 40% and a Spectral Difference parameter of 15. The classified land cover map of the most accurate three-scale segmentation is shown in Figure 22, and a subset of the land cover maps of the most accurate single- and three-scale segmentations is shown in Figure 23 to allow for a visual comparison of results.

H Threshold | Spectral Difference Parameter | Overall Accuracy
Table 14: Overall classification accuracy for each of the three-scale segmentations.
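Each candidate segmentation above was classified at the segment level with the Random Forest algorithm. The exact software, features, and parameters are described in an earlier section not reproduced here; as a rough scikit-learn sketch, in which the feature table, class count, and parameter values are all illustrative assumptions:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical per-segment feature table: e.g. per-band means and variances
# plus the H index; 7 land cover classes (grass, tree, building, other
# impervious, shadow, soil, pool), as in the Deerfield error matrices.
rng = np.random.default_rng(0)
X_train = rng.random((200, 7))       # 200 training segments, 7 features each
y_train = rng.integers(0, 7, 200)    # one class label per training segment

clf = RandomForestClassifier(n_estimators=500, random_state=0)
clf.fit(X_train, y_train)

X_unlabeled = rng.random((10, 7))    # segments from a candidate segmentation
predicted = clf.predict(X_unlabeled) # one land cover label per segment
```

Overall accuracy and Kappa for each candidate segmentation would then be computed from predictions on an independent set of test segments.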

Figure 22: Classified land cover map of the best three-scale segmentation. See Figure 23 for the map legend.

Figure 23: Subset of the Houston imagery (a), the most accurate three-scale segmentation/classification (b), and the most accurate single-scale segmentation/classification (c). In general, there is a better correspondence between the imagery and the three-scale classification.


More information

DAMAGE DETECTION OF THE 2008 SICHUAN, CHINA EARTHQUAKE FROM ALOS OPTICAL IMAGES

DAMAGE DETECTION OF THE 2008 SICHUAN, CHINA EARTHQUAKE FROM ALOS OPTICAL IMAGES DAMAGE DETECTION OF THE 2008 SICHUAN, CHINA EARTHQUAKE FROM ALOS OPTICAL IMAGES Wen Liu, Fumio Yamazaki Department of Urban Environment Systems, Graduate School of Engineering, Chiba University, 1-33,

More information

AUTOMATIC EXTRACTION OF ALUVIAL FANS FROM ASTER L1 SATELLITE DATA AND A DIGITAL ELEVATION MODEL USING OBJECT-ORIENTED IMAGE ANALYSIS

AUTOMATIC EXTRACTION OF ALUVIAL FANS FROM ASTER L1 SATELLITE DATA AND A DIGITAL ELEVATION MODEL USING OBJECT-ORIENTED IMAGE ANALYSIS AUTOMATIC EXTRACTION OF ALUVIAL FANS FROM ASTER L1 SATELLITE DATA AND A DIGITAL ELEVATION MODEL USING OBJECT-ORIENTED IMAGE ANALYSIS Demetre P. Argialas, Angelos Tzotsos Laboratory of Remote Sensing, Department

More information

M.C.PALIWAL. Department of Civil Engineering NATIONAL INSTITUTE OF TECHNICAL TEACHERS TRAINING & RESEARCH, BHOPAL (M.P.), INDIA

M.C.PALIWAL. Department of Civil Engineering NATIONAL INSTITUTE OF TECHNICAL TEACHERS TRAINING & RESEARCH, BHOPAL (M.P.), INDIA INVESTIGATIONS ON THE ACCURACY ASPECTS IN THE LAND USE/LAND COVER MAPPING USING REMOTE SENSING SATELLITE IMAGERY By M.C.PALIWAL Department of Civil Engineering NATIONAL INSTITUTE OF TECHNICAL TEACHERS

More information

DETECTING HUMAN ACTIVITIES IN THE ARCTIC OCEAN BY CONSTRUCTING AND ANALYZING SUPER-RESOLUTION IMAGES FROM MODIS DATA INTRODUCTION

DETECTING HUMAN ACTIVITIES IN THE ARCTIC OCEAN BY CONSTRUCTING AND ANALYZING SUPER-RESOLUTION IMAGES FROM MODIS DATA INTRODUCTION DETECTING HUMAN ACTIVITIES IN THE ARCTIC OCEAN BY CONSTRUCTING AND ANALYZING SUPER-RESOLUTION IMAGES FROM MODIS DATA Shizhi Chen and YingLi Tian Department of Electrical Engineering The City College of

More information

The Attribute Accuracy Assessment of Land Cover Data in the National Geographic Conditions Survey

The Attribute Accuracy Assessment of Land Cover Data in the National Geographic Conditions Survey The Attribute Accuracy Assessment of Land Cover Data in the National Geographic Conditions Survey Xiaole Ji a, *, Xiao Niu a Shandong Provincial Institute of Land Surveying and Mapping Jinan, Shandong

More information

FEATURE SELECTION METHODS FOR OBJECT-BASED CLASSIFICATION OF SUB-DECIMETER RESOLUTION DIGITAL AERIAL IMAGERY

FEATURE SELECTION METHODS FOR OBJECT-BASED CLASSIFICATION OF SUB-DECIMETER RESOLUTION DIGITAL AERIAL IMAGERY FEATURE SELECTION METHODS FOR OBJECT-BASED CLASSIFICATION OF SUB-DECIMETER RESOLUTION DIGITAL AERIAL IMAGERY A. S. Laliberte a, D.M. Browning b, A. Rango b a Jornada Experimental Range, New Mexico State

More information

Neural Networks and Ensemble Methods for Classification

Neural Networks and Ensemble Methods for Classification Neural Networks and Ensemble Methods for Classification NEURAL NETWORKS 2 Neural Networks A neural network is a set of connected input/output units (neurons) where each connection has a weight associated

More information

Digital Change Detection Using Remotely Sensed Data for Monitoring Green Space Destruction in Tabriz

Digital Change Detection Using Remotely Sensed Data for Monitoring Green Space Destruction in Tabriz Int. J. Environ. Res. 1 (1): 35-41, Winter 2007 ISSN:1735-6865 Graduate Faculty of Environment University of Tehran Digital Change Detection Using Remotely Sensed Data for Monitoring Green Space Destruction

More information

USE OF LANDSAT IMAGERY FOR EVALUATION OF LAND COVER / LAND USE CHANGES FOR A 30 YEAR PERIOD FOR THE LAKE ERIE WATERSHED

USE OF LANDSAT IMAGERY FOR EVALUATION OF LAND COVER / LAND USE CHANGES FOR A 30 YEAR PERIOD FOR THE LAKE ERIE WATERSHED USE OF LANDSAT IMAGERY FOR EVALUATION OF LAND COVER / LAND USE CHANGES FOR A 30 YEAR PERIOD FOR THE LAKE ERIE WATERSHED Mark E. Seidelmann Carolyn J. Merry Dept. of Civil and Environmental Engineering

More information

A Method to Improve the Accuracy of Remote Sensing Data Classification by Exploiting the Multi-Scale Properties in the Scene

A Method to Improve the Accuracy of Remote Sensing Data Classification by Exploiting the Multi-Scale Properties in the Scene Proceedings of the 8th International Symposium on Spatial Accuracy Assessment in Natural Resources and Environmental Sciences Shanghai, P. R. China, June 25-27, 2008, pp. 183-188 A Method to Improve the

More information

Fundamentals of Photographic Interpretation

Fundamentals of Photographic Interpretation Principals and Elements of Image Interpretation Fundamentals of Photographic Interpretation Observation and inference depend on interpreter s training, experience, bias, natural visual and analytical abilities.

More information

Principals and Elements of Image Interpretation

Principals and Elements of Image Interpretation Principals and Elements of Image Interpretation 1 Fundamentals of Photographic Interpretation Observation and inference depend on interpreter s training, experience, bias, natural visual and analytical

More information

Establishment of the watershed image classified rule-set and feasibility assessment of its application

Establishment of the watershed image classified rule-set and feasibility assessment of its application Establishment of the watershed classified rule-set and feasibility assessment of its application Cheng-Han Lin 1,*, Hsin-Kai Chuang 2 and Ming-Lang Lin 3, Wen-Chao Huang 4 1 2 3 Department of Civil Engineer,

More information

CUYAHOGA COUNTY URBAN TREE CANOPY & LAND COVER MAPPING

CUYAHOGA COUNTY URBAN TREE CANOPY & LAND COVER MAPPING CUYAHOGA COUNTY URBAN TREE CANOPY & LAND COVER MAPPING FINAL REPORT M IKE GALVIN S AVATREE D IRECTOR, CONSULTING GROUP P HONE: 914 403 8959 E MAIL: MGALVIN@SAVATREE. COM J ARLATH O NEIL DUNNE U NIVERSITY

More information

Characterization of Coastal Wetland Systems using Multiple Remote Sensing Data Types and Analytical Techniques

Characterization of Coastal Wetland Systems using Multiple Remote Sensing Data Types and Analytical Techniques Characterization of Coastal Wetland Systems using Multiple Remote Sensing Data Types and Analytical Techniques Daniel Civco, James Hurd, and Sandy Prisloe Center for Land use Education and Research University

More information

Deriving Uncertainty of Area Estimates from Satellite Imagery using Fuzzy Land-cover Classification

Deriving Uncertainty of Area Estimates from Satellite Imagery using Fuzzy Land-cover Classification International Journal of Information and Computation Technology. ISSN 0974-2239 Volume 3, Number 10 (2013), pp. 1059-1066 International Research Publications House http://www. irphouse.com /ijict.htm Deriving

More information

Classification trees for improving the accuracy of land use urban data from remotely sensed images

Classification trees for improving the accuracy of land use urban data from remotely sensed images Classification trees for improving the accuracy of land use urban data from remotely sensed images M.T. Shalaby & A.A. Darwish Informatics Institute of IT, School of Computer Science and IT, University

More information

Machine learning comes from Bayesian decision theory in statistics. There we want to minimize the expected value of the loss function.

Machine learning comes from Bayesian decision theory in statistics. There we want to minimize the expected value of the loss function. Bayesian learning: Machine learning comes from Bayesian decision theory in statistics. There we want to minimize the expected value of the loss function. Let y be the true label and y be the predicted

More information

An introduction to clustering techniques

An introduction to clustering techniques - ABSTRACT Cluster analysis has been used in a wide variety of fields, such as marketing, social science, biology, pattern recognition etc. It is used to identify homogenous groups of cases to better understand

More information

Object-based Vegetation Type Mapping from an Orthorectified Multispectral IKONOS Image using Ancillary Information

Object-based Vegetation Type Mapping from an Orthorectified Multispectral IKONOS Image using Ancillary Information Object-based Vegetation Type Mapping from an Orthorectified Multispectral IKONOS Image using Ancillary Information Minho Kim a, *, Bo Xu*, and Marguerite Madden a a Center for Remote Sensing and Mapping

More information

1. Introduction. S.S. Patil 1, Sachidananda 1, U.B. Angadi 2, and D.K. Prabhuraj 3

1. Introduction. S.S. Patil 1, Sachidananda 1, U.B. Angadi 2, and D.K. Prabhuraj 3 Cloud Publications International Journal of Advanced Remote Sensing and GIS 2014, Volume 3, Issue 1, pp. 525-531, Article ID Tech-249 ISSN 2320-0243 Research Article Open Access Machine Learning Technique

More information

Impacts of sensor noise on land cover classifications: sensitivity analysis using simulated noise

Impacts of sensor noise on land cover classifications: sensitivity analysis using simulated noise Impacts of sensor noise on land cover classifications: sensitivity analysis using simulated noise Scott Mitchell 1 and Tarmo Remmel 2 1 Geomatics & Landscape Ecology Research Lab, Carleton University,

More information

USING LANDSAT IN A GIS WORLD

USING LANDSAT IN A GIS WORLD USING LANDSAT IN A GIS WORLD RACHEL MK HEADLEY; PHD, PMP STEM LIAISON, ACADEMIC AFFAIRS BLACK HILLS STATE UNIVERSITY This material is based upon work supported by the National Science Foundation under

More information

Potential Open Space Detection and Decision Support for Urban Planning by Means of Optical VHR Satellite Imagery

Potential Open Space Detection and Decision Support for Urban Planning by Means of Optical VHR Satellite Imagery Remote Sensing and Geoinformation Lena Halounová, Editor not only for Scientific Cooperation EARSeL, 2011 Potential Open Space Detection and Decision Support for Urban Planning by Means of Optical VHR

More information

USING HYPERSPECTRAL IMAGERY

USING HYPERSPECTRAL IMAGERY USING HYPERSPECTRAL IMAGERY AND LIDAR DATA TO DETECT PLANT INVASIONS 2016 ESRI CANADA SCHOLARSHIP APPLICATION CURTIS CHANCE M.SC. CANDIDATE FACULTY OF FORESTRY UNIVERSITY OF BRITISH COLUMBIA CURTIS.CHANCE@ALUMNI.UBC.CA

More information

A MULTI-SCALE OBJECT-ORIENTED APPROACH TO THE CLASSIFICATION OF MULTI-SENSOR IMAGERY FOR MAPPING LAND COVER IN THE TOP END.

A MULTI-SCALE OBJECT-ORIENTED APPROACH TO THE CLASSIFICATION OF MULTI-SENSOR IMAGERY FOR MAPPING LAND COVER IN THE TOP END. A MULTI-SCALE OBJECT-ORIENTED APPROACH TO THE CLASSIFICATION OF MULTI-SENSOR IMAGERY FOR MAPPING LAND COVER IN THE TOP END. Tim Whiteside 1,2 Author affiliation: 1 Natural and Cultural Resource Management,

More information

Region Growing Tree Delineation In Urban Settlements

Region Growing Tree Delineation In Urban Settlements 2008 International Conference on Advanced Computer Theory and Engineering Region Growing Tree Delineation In Urban Settlements LAU BEE THENG, CHOO AI LING School of Computing and Design Swinburne University

More information

Hyperspectral image classification using Support Vector Machine

Hyperspectral image classification using Support Vector Machine Journal of Physics: Conference Series OPEN ACCESS Hyperspectral image classification using Support Vector Machine To cite this article: T A Moughal 2013 J. Phys.: Conf. Ser. 439 012042 View the article

More information

Spatial Process VS. Non-spatial Process. Landscape Process

Spatial Process VS. Non-spatial Process. Landscape Process Spatial Process VS. Non-spatial Process A process is non-spatial if it is NOT a function of spatial pattern = A process is spatial if it is a function of spatial pattern Landscape Process If there is no

More information

A GLOBAL ANALYSIS OF URBAN REFLECTANCE. Christopher SMALL

A GLOBAL ANALYSIS OF URBAN REFLECTANCE. Christopher SMALL A GLOBAL ANALYSIS OF URBAN REFLECTANCE Christopher SMALL Lamont Doherty Earth Observatory Columbia University Palisades, NY 10964 USA small@ldeo.columbia.edu ABSTRACT Spectral characterization of urban

More information

Object Based Imagery Exploration with. Outline

Object Based Imagery Exploration with. Outline Object Based Imagery Exploration with Dan Craver Portland State University June 11, 2007 Outline Overview Getting Started Processing and Derivatives Object-oriented classification Literature review Demo

More information

Remote sensing of sealed surfaces and its potential for monitoring and modeling of urban dynamics

Remote sensing of sealed surfaces and its potential for monitoring and modeling of urban dynamics Remote sensing of sealed surfaces and its potential for monitoring and modeling of urban dynamics Frank Canters CGIS Research Group, Department of Geography Vrije Universiteit Brussel Herhaling titel van

More information

Land cover classification of QuickBird multispectral data with an object-oriented approach

Land cover classification of QuickBird multispectral data with an object-oriented approach Land cover classification of QuickBird multispectral data with an object-oriented approach E. Tarantino Polytechnic University of Bari, Italy Abstract Recent satellite technologies have produced new data

More information

Outline: Introduction - Data used - Methods - Results

Outline: Introduction - Data used - Methods - Results Mapping of land covers in South Greenland using very high resolution satellite imagery Menaka Chellasamy, Mateja Ogric, Mogens H. Greve and René Larsen Outline: Introduction - Data used - Methods - Results

More information

Object Oriented Classification Using High-Resolution Satellite Images for HNV Farmland Identification. Shafique Matin and Stuart Green

Object Oriented Classification Using High-Resolution Satellite Images for HNV Farmland Identification. Shafique Matin and Stuart Green Object Oriented Classification Using High-Resolution Satellite Images for HNV Farmland Identification Shafique Matin and Stuart Green REDP, Teagasc Ashtown, Dublin, Ireland Correspondence: shafique.matin@teagasc.ie

More information

Land cover classification methods

Land cover classification methods Land cover classification methods This document provides an overview of land cover classification using remotely sensed data. We will describe different options for conducting land cover classification

More information

SATELLITE REMOTE SENSING

SATELLITE REMOTE SENSING SATELLITE REMOTE SENSING of NATURAL RESOURCES David L. Verbyla LEWIS PUBLISHERS Boca Raton New York London Tokyo Contents CHAPTER 1. SATELLITE IMAGES 1 Raster Image Data 2 Remote Sensing Detectors 2 Analog

More information

GEOBIA For Land Use Mapping Using Worldview2 Image In Bengkak Village Coastal, Banyuwangi Regency, East Java

GEOBIA For Land Use Mapping Using Worldview2 Image In Bengkak Village Coastal, Banyuwangi Regency, East Java IOP Conference Series: Earth and Environmental Science PAPER OPEN ACCESS GEOBIA For Land Use Mapping Using Worldview2 Image In Bengkak Village Coastal, Banyuwangi Regency, East Java Related content - Linear

More information

Module 2.1 Monitoring activity data for forests using remote sensing

Module 2.1 Monitoring activity data for forests using remote sensing Module 2.1 Monitoring activity data for forests using remote sensing Module developers: Frédéric Achard, European Commission (EC) Joint Research Centre (JRC) Jukka Miettinen, EC JRC Brice Mora, Wageningen

More information

Data Mining und Maschinelles Lernen

Data Mining und Maschinelles Lernen Data Mining und Maschinelles Lernen Ensemble Methods Bias-Variance Trade-off Basic Idea of Ensembles Bagging Basic Algorithm Bagging with Costs Randomization Random Forests Boosting Stacking Error-Correcting

More information

Holdout and Cross-Validation Methods Overfitting Avoidance

Holdout and Cross-Validation Methods Overfitting Avoidance Holdout and Cross-Validation Methods Overfitting Avoidance Decision Trees Reduce error pruning Cost-complexity pruning Neural Networks Early stopping Adjusting Regularizers via Cross-Validation Nearest

More information

LAND COVER MAPPING USING OBJECT-BASED IMAGE ANALYSIS TO A MONITORING OF A PIPELINE

LAND COVER MAPPING USING OBJECT-BASED IMAGE ANALYSIS TO A MONITORING OF A PIPELINE Proceedings of the 4th GEOBIA, May 7-9, 2012 - Rio de Janeiro - Brazil. p.146 LAND COVER MAPPING USING OBJECT-BASED IMAGE ANALYSIS TO A MONITORING OF A PIPELINE M. V. Ferreira a *, M. L. Marques a, P.

More information

REMOTE SENSING APPLICATION IN FOREST MONITORING: AN OBJECT BASED APPROACH Tran Quang Bao 1 and Nguyen Thi Hoa 2

REMOTE SENSING APPLICATION IN FOREST MONITORING: AN OBJECT BASED APPROACH Tran Quang Bao 1 and Nguyen Thi Hoa 2 REMOTE SENSING APPLICATION IN FOREST MONITORING: AN OBJECT BASED APPROACH Tran Quang Bao 1 and Nguyen Thi Hoa 2 1 Department of Environment Management, Vietnam Forestry University, Ha Noi, Vietnam 2 Institute

More information

Quick Response Report #126 Hurricane Floyd Flood Mapping Integrating Landsat 7 TM Satellite Imagery and DEM Data

Quick Response Report #126 Hurricane Floyd Flood Mapping Integrating Landsat 7 TM Satellite Imagery and DEM Data Quick Response Report #126 Hurricane Floyd Flood Mapping Integrating Landsat 7 TM Satellite Imagery and DEM Data Jeffrey D. Colby Yong Wang Karen Mulcahy Department of Geography East Carolina University

More information

Detection of Sea Ice/ Melt Pond from Aerial Photos through Object-based Image Classification Scheme

Detection of Sea Ice/ Melt Pond from Aerial Photos through Object-based Image Classification Scheme NEMC 213 August 5-9, San Antonio, TX Detection of Sea Ice/ Melt Pond from Aerial Photos through Object-based Image Classification Scheme Xin Miao 12, Hongjie Xie 2, Zhijun Li 3, Ruibo Lei 4 1 Department

More information

Preliminary Research on Grassland Fineclassification

Preliminary Research on Grassland Fineclassification IOP Conference Series: Earth and Environmental Science OPEN ACCESS Preliminary Research on Grassland Fineclassification Based on MODIS To cite this article: Z W Hu et al 2014 IOP Conf. Ser.: Earth Environ.

More information

PRINCIPLES OF PHOTO INTERPRETATION

PRINCIPLES OF PHOTO INTERPRETATION PRINCIPLES OF PHOTO INTERPRETATION Photo Interpretation the act of examining photographic images for the purpose of identifying objects and judging their significance an art more than a science Recognition

More information

A SURVEY OF REMOTE SENSING IMAGE CLASSIFICATION APPROACHES

A SURVEY OF REMOTE SENSING IMAGE CLASSIFICATION APPROACHES IJAMML 3:1 (2015) 1-11 September 2015 ISSN: 2394-2258 Available at http://scientificadvances.co.in DOI: http://dx.doi.org/10.18642/ijamml_7100121516 A SURVEY OF REMOTE SENSING IMAGE CLASSIFICATION APPROACHES

More information

Joint International Mechanical, Electronic and Information Technology Conference (JIMET 2015)

Joint International Mechanical, Electronic and Information Technology Conference (JIMET 2015) Joint International Mechanical, Electronic and Information Technology Conference (JIMET 2015) Extracting Land Cover Change Information by using Raster Image and Vector Data Synergy Processing Methods Tao

More information

Land cover classification methods. Ned Horning

Land cover classification methods. Ned Horning Land cover classification methods Ned Horning Version: 1.0 Creation Date: 2004-01-01 Revision Date: 2004-01-01 License: This document is licensed under a Creative Commons Attribution-Share Alike 3.0 Unported

More information

International Journal of Scientific & Engineering Research, Volume 6, Issue 7, July ISSN

International Journal of Scientific & Engineering Research, Volume 6, Issue 7, July ISSN International Journal of Scientific & Engineering Research, Volume 6, Issue 7, July-2015 1428 Accuracy Assessment of Land Cover /Land Use Mapping Using Medium Resolution Satellite Imagery Paliwal M.C &.

More information

Comparison between Land Surface Temperature Retrieval Using Classification Based Emissivity and NDVI Based Emissivity

Comparison between Land Surface Temperature Retrieval Using Classification Based Emissivity and NDVI Based Emissivity Comparison between Land Surface Temperature Retrieval Using Classification Based Emissivity and NDVI Based Emissivity Isabel C. Perez Hoyos NOAA Crest, City College of New York, CUNY, 160 Convent Avenue,

More information

COMBINING ENUMERATION AREA MAPS AND SATELITE IMAGES (LAND COVER) FOR THE DEVELOPMENT OF AREA FRAME (MULTIPLE FRAMES) IN AN AFRICAN COUNTRY:

COMBINING ENUMERATION AREA MAPS AND SATELITE IMAGES (LAND COVER) FOR THE DEVELOPMENT OF AREA FRAME (MULTIPLE FRAMES) IN AN AFRICAN COUNTRY: COMBINING ENUMERATION AREA MAPS AND SATELITE IMAGES (LAND COVER) FOR THE DEVELOPMENT OF AREA FRAME (MULTIPLE FRAMES) IN AN AFRICAN COUNTRY: PRELIMINARY LESSONS FROM THE EXPERIENCE OF ETHIOPIA BY ABERASH

More information

Publication I American Society for Photogrammetry and Remote Sensing (ASPRS)

Publication I American Society for Photogrammetry and Remote Sensing (ASPRS) Publication I Leena Matikainen, Juha Hyyppä, and Marcus E. Engdahl. 2006. Mapping built-up areas from multitemporal interferometric SAR images - A segment-based approach. Photogrammetric Engineering and

More information

Urban remote sensing: from local to global and back

Urban remote sensing: from local to global and back Urban remote sensing: from local to global and back Paolo Gamba University of Pavia, Italy A few words about Pavia Historical University (1361) in a nice town slide 3 Geoscience and Remote Sensing Society

More information

This is trial version

This is trial version Journal of Rangeland Science, 2012, Vol. 2, No. 2 J. Barkhordari and T. Vardanian/ 459 Contents available at ISC and SID Journal homepage: www.rangeland.ir Full Paper Article: Using Post-Classification

More information

Object-Based Change Detection

Object-Based Change Detection Object-Based Change Detection GANG CHEN, GEOFFREY J. HAY, LUIS M. T. CARVALHO, and MICHAEL A. WULDER Foothills Facility for Remote Sensing and GIScience, Department of Geography, University of Calgary,

More information

1st EARSeL Workshop of the SIG Urban Remote Sensing Humboldt-Universität zu Berlin, 2-3 March 2006

1st EARSeL Workshop of the SIG Urban Remote Sensing Humboldt-Universität zu Berlin, 2-3 March 2006 1 AN URBAN CLASSIFICATION APPROACH BASED ON AN OBJECT ORIENTED ANALYSIS OF HIGH RESOLUTION SATELLITE IMAGERY FOR A SPATIAL STRUCTURING WITHIN URBAN AREAS Hannes Taubenböck, Thomas Esch, Achim Roth German

More information