Grundlagen Fernerkundung - 12


Grundlagen Fernerkundung - 12: Image classification. GEO123.1, FS2015. Michael Schaepman, Rogier de Jong, Reik Leiterer. 5/7/15.


Department of Geography. Land cover classification improvement: 75.0% -> 90.0% -> 92.1%.

Today's topics & learning goals
- Feature space: how are land cover classes separated from each other?
- Classification algorithms: which algorithms exist and how do they differ?
- Accuracy measures: how valid is a classification result?

Feature Space (Eigenschaftsraum / Merkmalsraum) (Albertz, 2001)

Feature space: NIR vs RED


Statistical description of spectral clusters: histogram / distribution curve; min, mean, max values; standard deviations.

Spectral clustering using the feature space. Ground features are defined based on their spectral response. Points 1-3 are unknown features that need to be assigned to classes A-E. (Axis labels: lambda = wavelength, Messwerte = reflectance.)

Method 1: Parallelepiped classification. OG = Obergrenze (upper limit), UG = Untergrenze (lower limit): each class is bounded by a lower and an upper limit in every spectral band (e.g. UG A1/OG A1 and UG A2/OG A2 for class A in bands 1 and 2).

Method 1: Parallelepiped classification. With a second class E added, box limits can coincide (here UG A2 = UG E2), so the class boxes may touch or overlap.

Method 1: Parallelepiped classification. Features B and C cannot be distinguished; you'll need to try other spectral channels for this purpose. Point 2 is therefore the most uncertain, but with only this information it is classified as B.
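As a sketch, the parallelepiped rule is a box test per class. The band limits below are hypothetical training statistics, not values from the lecture:

```python
import numpy as np

# Hypothetical per-class limits in two bands (e.g. RED, NIR):
# UG = Untergrenze (lower limit), OG = Obergrenze (upper limit).
boxes = {
    "A": (np.array([0.05, 0.30]), np.array([0.15, 0.50])),  # (UG, OG)
    "B": (np.array([0.20, 0.10]), np.array([0.35, 0.25])),
}

def parallelepiped(pixel, boxes):
    """Return the first class whose box contains the pixel, else
    'unclassified'. Overlapping boxes make the result ambiguous."""
    for name, (ug, og) in boxes.items():
        if np.all(pixel >= ug) and np.all(pixel <= og):
            return name
    return "unclassified"

print(parallelepiped(np.array([0.10, 0.40]), boxes))  # A
print(parallelepiped(np.array([0.50, 0.90]), boxes))  # unclassified
```

A pixel that falls outside every box stays unclassified, which is exactly the behaviour of point 3 in the figure.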

Method 2: Minimum-distance classification. Criterion: smallest Euclidean distance to the mean value of the class. Class boundaries are the perpendicular bisectors ("Mittelsenkrechte") between class means. (Axis labels: lambda = wavelength, Messwerte = reflectance.)

Method 2: Minimum-distance classification. Without a max-distance threshold, point 3 changed from unclassified to E. Point 2 has changed from B to C. There is less overlap between classes B and C, but are there more misclassifications of Bs as Cs?
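The minimum-distance rule compares a pixel against the class means only. A minimal sketch with hypothetical class means; the optional max_dist parameter is the max-distance threshold discussed above:

```python
import numpy as np

# Hypothetical class means in two spectral bands.
means = {"A": np.array([0.10, 0.40]),
         "B": np.array([0.28, 0.18]),
         "C": np.array([0.30, 0.30])}

def minimum_distance(pixel, means, max_dist=None):
    """Assign the pixel to the class whose mean is closest in
    Euclidean distance; optionally reject pixels that are too far."""
    name, dist = min(((k, np.linalg.norm(pixel - m)) for k, m in means.items()),
                     key=lambda kv: kv[1])
    if max_dist is not None and dist > max_dist:
        return "unclassified"
    return name

print(minimum_distance(np.array([0.12, 0.38]), means))                # A
print(minimum_distance(np.array([0.90, 0.90]), means, max_dist=0.3))  # unclassified
```

Without max_dist, every pixel gets a class, which is why point 3 switches from unclassified to E.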

Method 3: Maximum-likelihood classification. Criterion: maximum likelihood ("Mutmasslichkeit") according to a probability-density function ("Wahrscheinlichkeitsdichte-Funktion"). Boundaries: number of standard deviations (sigma).

Method 3: Maximum-likelihood classification. Point 3 lies more than x sigma from every class mean and is thus unclassified. Point 2 is assigned to class B at 2 sigma distance. (The contour lines in the figure mark 1 sigma, 2 sigma, etc. of the probability density.)
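A minimal maximum-likelihood sketch with hypothetical Gaussian class models (equal priors assumed); the max_sigma parameter plays the role of the x-sigma rejection boundary above:

```python
import numpy as np

# Hypothetical per-class Gaussian models (mean, covariance) in two bands.
classes = {
    "A": (np.array([0.10, 0.40]), np.array([[0.002, 0.0], [0.0, 0.003]])),
    "B": (np.array([0.30, 0.20]), np.array([[0.004, 0.001], [0.001, 0.002]])),
}

def max_likelihood(pixel, classes, max_sigma=3.0):
    """Pick the class with the highest Gaussian log-likelihood
    (equal priors); reject pixels beyond max_sigma (Mahalanobis)."""
    best, best_ll = "unclassified", -np.inf
    for name, (mu, cov) in classes.items():
        d = pixel - mu
        maha2 = d @ np.linalg.inv(cov) @ d  # squared Mahalanobis distance
        ll = -0.5 * (maha2 + np.log(np.linalg.det(cov)))
        if np.sqrt(maha2) <= max_sigma and ll > best_ll:
            best, best_ll = name, ll
    return best

print(max_likelihood(np.array([0.11, 0.42]), classes))  # A
print(max_likelihood(np.array([0.90, 0.90]), classes))  # unclassified
```

Unlike minimum distance, the decision boundary here depends on the class covariances, so elongated clusters are handled correctly.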

Common classification algorithms: overview. Parallelepiped, minimum distance, maximum likelihood.

User supervision ("Überwachung")
Unsupervised classification:
- The user does not provide information with respect to the classes
- Classes are defined based on statistical properties
- Given the same dataset and the same method, each user obtains the identical result
Supervised classification:
- The user defines training classes to which unclassified pixels are compared
- The result depends on the definition of the training classes and is therefore likely to differ between users

Method 4: Unsupervised clustering. Option 1: no a-priori information at all; the (number of) clusters are estimated from statistics (clusters 1-4 in the figure).

Method 4: Unsupervised clustering. Option 2: the user defines (only) the number of classes, e.g. 5 (clusters 1-5 in the figure). After classification (option 1 or 2), clusters may be assigned to features of interest based on user knowledge.
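Option 2 can be sketched as a minimal k-means clustering on synthetic pixels (plain numpy, not a lecture algorithm): only the number of clusters is supplied, and the resulting cluster labels carry no land-cover meaning until the user assigns it.

```python
import numpy as np

def kmeans(pixels, k, n_iter=20, seed=0):
    """Minimal k-means: alternately assign pixels to the nearest
    center and move each center to the mean of its pixels."""
    rng = np.random.default_rng(seed)
    centers = pixels[rng.choice(len(pixels), size=k, replace=False)]
    for _ in range(n_iter):
        # distance of every pixel to every center -> nearest-center labels
        d = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = pixels[labels == j].mean(axis=0)
    return labels, centers

# two well-separated synthetic clusters in a 2-band feature space
rng = np.random.default_rng(1)
pixels = np.vstack([rng.normal([0.1, 0.4], 0.01, (50, 2)),
                    rng.normal([0.3, 0.2], 0.01, (50, 2))])
labels, centers = kmeans(pixels, k=2)
print(centers)  # centers should land near (0.1, 0.4) and (0.3, 0.2)
```

With the same data, method and seed every user gets the identical result, which is the reproducibility property of unsupervised classification noted above.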

Classification methods (Albertz, 2001). Unsupervised classification: clustering, (segmentation), ... Supervised classification: parallelepiped, minimum distance, maximum likelihood, ...

How accurate is your classification? (Landsat Thematic Mapper, bands 2, 4, 7.) Looks good! Seems plausible! But is that enough?

"A classification is not complete until it has been assessed." (R.G. Congalton 1991:35)

For a given point, how likely is it that the classification represents the reality? How accurate are the various classes? How can we communicate the (in)accuracy to the users of our maps?

Options for accuracy assessment:
- Comparison with the image (visual) -> very unreliable, non-quantitative
- Comparison with a map (visual) -> unreliable, non-quantitative
- Comparison with ground truth (visual) -> assessable only to a limited degree, non-quantitative
- Comparison with ground truth (digital) -> reliable, frequently used, quantitative
- Independent use of ground truth for training the classifier and for validation (digital) -> reliable, frequently used, quantitative
- Comparison with an independent / statistically based reference (digital) -> best practice, very reliable, quantitative

Accuracy assessment. Aerial image (HRSC-AX). Visual comparison: how accurately has "Vegetation" been classified? Digital comparison with a ground-truth map: only 56% of the actual Vegetation has been classified as Vegetation. Classes: Water, Vegetation, Bare / built-up.

Error matrix ("Fehlermatrix") (synonyms: confusion matrix, matching matrix, contingency table). Rows: classification; columns: reference / ground truth. Classes: Forest, Arable, Other vegetation, Bare / built-up.

Error matrix (rows = classification, columns = reference / ground truth; classes: A = forest, B = other vegetation, C = arable, D = bare / built-up):

                 Reference / ground truth
Classification    A    B    C    D    Σ
A                15    0    0    0   15
B                 0    9    0    0    9
C                 3    1   24    2   30
D                 0    0    0   10   10
Σ                18   10   24   12   64

Error matrix, general notation (rows = classification, columns = reference / ground truth):

       A      B      C      D      Σ
A    n_AA   n_AB   n_AC   n_AD   n_A+
B    n_BA   n_BB   n_BC   n_BD   n_B+
C    n_CA   n_CB   n_CC   n_CD   n_C+
D    n_DA   n_DB   n_DC   n_DD   n_D+
Σ    n_+A   n_+B   n_+C   n_+D   n_++

n_A+ = number of pixels classified as A; n_+A = number of pixels found to be A in the reference. Diagonal entries are correctly classified (classification == reference), off-diagonal entries incorrectly classified (classification != reference); n_++ = grand total (total pixel count).

Error matrix (same numbers as above): the diagonal entries 15, 9, 24, 10 are correctly classified (classification == reference); all off-diagonal entries (e.g. the 3, 1 and 2 in row C) are incorrectly classified (classification != reference).

Type 1 (commission) and type 2 (omission) error, read from the incorrectly classified (off-diagonal) cells:
- Type 1, commission error: pixels were included in the class although they should not have been.
- Type 2, omission error: pixels were not included in the class although they should have been.

Type 1 (commission error), example for class Forest: the off-diagonal entries in the Forest row of the error matrix. Type 2 (omission error), example for class Forest: the off-diagonal entries in the Forest column.

Accuracy metrics (or: error metrics): measures for classification accuracy based on the error matrix, using the diagonal entries n_ii, the row totals n_i+, the column totals n_+i and the grand total n_++ defined above.

Accuracy metrics (English / Deutsch):
- Overall Accuracy / Gesamtgenauigkeit
- Producer Accuracy / Produzenten-Genauigkeit
- User Accuracy / Nutzer-Genauigkeit
- Average Accuracy / durchschnittliche Genauigkeit
- Mean Accuracy / mittlere Genauigkeit
- Kappa Coefficient / Kappa-Koeffizient

Overall / Total Accuracy (OA) = count of correctly classified pixels / grand total (total pixel count) = 58/64 = 0.90625 (~90.6%).

Accuracy measures: Overall / Total Accuracy (OA) (GLC2000). Overall Accuracy is a simple accuracy measure, but it is not very informative: type 1 (commission) and type 2 (omission) errors are not taken into account. Overall Accuracy = sum of correctly classified pixels / total pixel count = 58/64 = 0.90625 (~90.6%).

Producer's Accuracy (PA). The producer needs to know how well her/his classification (e.g. for class A) matches the reference. PA reflects the omission error: how many pixels should have been in the class but were not. Producer's Accuracy = count of correctly classified pixels in a class / count of pixels in the same reference class = 15/18 = 0.8333 (~83.3%) for class A. PA per class: A 83.33%, B 90%, C 100%, D 83.33%.

User's Accuracy (UA). The user needs to know how well a class (e.g. A) matches reality. UA reflects the commission error: how many pixels are in the class but should not have been. User's Accuracy = count of correctly classified pixels in a class / count of all pixels classified as that class = 15/15 = 1 (100%) for class A. UA per class: A 100%, B 100%, C 80%, D 100%.

Accuracy metrics OA, PA, UA: interpretation. If I classified a pixel as forest (A), there is in 100% of the cases indeed forest at that location (UA of A = 100%). But: 16.67% of the existing forest (A) was not captured (PA of A = 83.33%). I have captured 100% of the existing arable land (C) with my classification (PA of C = 100%). But: 20% of the pixels classified as arable land (C) have another land cover in reality (UA of C = 80%). OA = 90.63%.

Accuracy metrics OA, PA, UA: 90.63% of all classified pixels match the reference / reality. (UA [%]: 100, 100, 80, 100; PA [%]: 83.3, 90, 100, 83.3; OA [%]: 90.63.)
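The three metrics can be read directly off the error matrix with a few numpy reductions (rows = classification, columns = reference, as in the lecture example):

```python
import numpy as np

# Error matrix from the lecture: rows = classification, columns = reference;
# classes A (forest), B (other vegetation), C (arable), D (bare/built-up).
m = np.array([[15, 0,  0,  0],
              [ 0, 9,  0,  0],
              [ 3, 1, 24,  2],
              [ 0, 0,  0, 10]])

overall_accuracy = np.trace(m) / m.sum()         # correct / grand total
user_accuracy = np.diag(m) / m.sum(axis=1)       # correct / row sums
producer_accuracy = np.diag(m) / m.sum(axis=0)   # correct / column sums

print(round(overall_accuracy, 5))   # 0.90625 (~90.6 %)
print(user_accuracy)                # UA: 100, 100, 80, 100 %
print(producer_accuracy)            # PA: 83.33, 90, 100, 83.33 %
```

Swapping axis=1 for axis=0 is exactly the difference between user's and producer's accuracy: dividing by what was classified versus dividing by what the reference says.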

Hands-on! Compare the classification grid with the reference grid (20 pixels, classes A and B), fill in the error matrix and compute UA, PA and OA.

Classification:      Reference / ground truth:
A A A B B            A A A A B
A A A B B            A A A B B
A A A B B            A A A B B
B B B B B            B B B B B

Hands-on: solution.

                Reference
                 A    B    Σ   UA [%]
A                9    0    9   100
B                1   10   11   90.9
Σ               10   10   20
PA [%]          90  100
OA = 95.00%

Mean UA (or: average accuracy) = sum of all user accuracies / number of classes = (100% + 100% + 80% + 100%) / 4 = 380% / 4 = 95%. (For the example matrix: OA = 90.63%, Mean UA = 95.00%.)

Mean classification accuracy = (Overall Accuracy + Mean UA) / 2 = (90.63% + 95.00%) / 2 = 185.63% / 2 = 92.82%. (For the example matrix: OA = 90.63%, Mean UA = 95.00%.)
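Mean UA and mean classification accuracy follow from the same matrix. Note that the exact mean accuracy is 92.81%; the 92.82% on the slide comes from averaging the already-rounded values 90.63% and 95.00%:

```python
import numpy as np

m = np.array([[15, 0, 0, 0], [0, 9, 0, 0], [3, 1, 24, 2], [0, 0, 0, 10]])

oa = np.trace(m) / m.sum()                     # 58/64 = 0.90625
mean_ua = (np.diag(m) / m.sum(axis=1)).mean()  # (1 + 1 + 0.8 + 1) / 4 = 0.95
mean_accuracy = (oa + mean_ua) / 2             # 0.928125

print(round(100 * mean_ua, 2), round(100 * mean_accuracy, 2))  # 95.0 92.81
```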

Accuracy measures: Kappa coefficient / KHAT statistic. It measures the difference between the agreement of classification vs. reference and the agreement of a random classification vs. reference. To what extent are the correctly classified pixels correct merely by chance? By what percentage is my classification better than a purely random one?

Accuracy measures: Kappa coefficient. Kappa coefficient (k-hat) = (Overall Accuracy - chance agreement) / (1 - chance agreement). (Zufallsübereinstimmung = chance agreement.)

Accuracy measures: Kappa coefficient, computed step by step for the example error matrix:
Overall Accuracy = sum of correctly classified pixels / total pixel count = (15 + 9 + 24 + 10) / 64 = 58/64 = 0.90625
Chance agreement = sum of the products of the row and column totals / (total pixel count)²
= ((15 x 18) + (9 x 10) + (30 x 24) + (10 x 12)) / 64²
= 1200 / 4096 = 0.2929
Kappa coefficient = (0.90625 - 0.2929) / (1 - 0.2929) = 0.8674
This can be interpreted as: my classification is 86.74% better than a purely random one!
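The whole kappa computation fits in a few lines, reproducing the numbers above from the row and column totals:

```python
import numpy as np

m = np.array([[15, 0, 0, 0], [0, 9, 0, 0], [3, 1, 24, 2], [0, 0, 0, 10]])

n = m.sum()                                            # 64
oa = np.trace(m) / n                                   # 0.90625
chance = (m.sum(axis=1) * m.sum(axis=0)).sum() / n**2  # 1200/4096
kappa = (oa - chance) / (1 - chance)

print(round(chance, 4), round(kappa, 4))  # 0.293 0.8674
```

The element-wise product of row totals and column totals, summed, is exactly the "sum of the products of the row and column totals" in the derivation.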

Accuracy measures: Kappa coefficient
- open to many interpretations -> various guideline values exist for judging the coefficient:
  < 0.00: poor / -
  0.00-0.20: slight / schwach
  0.21-0.40: fair / leicht
  0.41-0.60: moderate / mittelmäßig
  0.61-0.80: substantial / gut
  0.81-1.00: almost perfect / sehr gut
- no universally valid statements are possible!

Accuracy measures: Kappa coefficient. No universally valid statements are possible! Example: all 20 pixels are classified as A (Wald = forest), while the reference contains 17 A, 1 B (Sonstige Vegetation = other vegetation), 1 C (Ackerland = arable) and 1 D (Vegetationsfrei = bare). Then OA = 85%, but the Kappa coefficient = 0, i.e. "poor" on the guideline scale above.
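The degenerate example can be checked numerically: when everything is classified as A and the reference contains 17 A, 1 B, 1 C and 1 D, the chance agreement equals the OA and kappa collapses to zero:

```python
import numpy as np

# All 20 pixels classified as A (first row); reference: 17 A, 1 B, 1 C, 1 D.
m = np.array([[17, 1, 1, 1],
              [ 0, 0, 0, 0],
              [ 0, 0, 0, 0],
              [ 0, 0, 0, 0]])

n = m.sum()
oa = np.trace(m) / n                                   # 17/20 = 0.85
chance = (m.sum(axis=1) * m.sum(axis=0)).sum() / n**2  # (20*17)/400 = 0.85
kappa = (oa - chance) / (1 - chance)
print(oa, kappa)  # 0.85 0.0
```

A classifier that ignores the data entirely still reaches 85% OA here, which is precisely why kappa discounts chance agreement.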

Accuracy assessment summary (error matrix with rows = classification, columns = reference):
Overall Accuracy = (Σ_{i=1..q} n_ii) / n
PA_i = n_ii / n_+i
UA_i = n_ii / n_i+
Mean UA = (Σ_{i=1..q} UA_i) / q
Mean Accuracy = (OA + Mean UA) / 2
q = number of classes; n = total pixel count; n_ii = number of agreeing pixels; n_i+ = row sum; n_+i = column sum.

Accuracy metrics: applying them correctly. Accuracy metrics depend on the sampling scheme of the reference. Example: classes Forest (>= 20%), Forest (< 20%), Grassland, Water; the reference sample is dominated by water pixels. OA = 95.7%.

Error matrix (rows = classification, columns = reference):
     A    B    C     D    Σ
A    2    1    0     0    3
B    2    3    1     0    6
C    0    2    5     0    7
D    0    0    0   124  124
Σ    4    6    6   124  140

Accuracy metrics: applying them correctly. Accuracy information depends on the choice of metric. Same example: OA = 95.7%, mean user accuracy = 72.0%, mean classification accuracy = 83.9%. Per class (Forest >= 20%, Forest < 20%, Grassland, Water): PA = 0.5, 0.5, 0.8, 1.0; UA = 0.7, 0.5, 0.7, 1.0.
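Computing all three summary metrics for this matrix shows how far they diverge on an unbalanced sample (exactly, OA = 134/140 ≈ 95.7%, inflated by the dominant water class):

```python
import numpy as np

# Example matrix: rows = classification, columns = reference;
# classes: Forest (>= 20%), Forest (< 20%), Grassland, Water.
m = np.array([[2, 1, 0,   0],
              [2, 3, 1,   0],
              [0, 2, 5,   0],
              [0, 0, 0, 124]])

oa = np.trace(m) / m.sum()                     # 134/140
mean_ua = (np.diag(m) / m.sum(axis=1)).mean()  # averages over classes, not pixels
mean_acc = (oa + mean_ua) / 2

print(round(100 * oa, 1), round(100 * mean_ua, 1), round(100 * mean_acc, 1))
# 95.7 72.0 83.9
```

Because mean UA weights every class equally, the three poorly mapped vegetation classes pull it far below the pixel-weighted OA.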

Accuracy metrics: applying them correctly. Accuracy information depends on the interpretation of the metric.

Accuracy take-home messages:
- A classification is not complete until it has been assessed.
- You need multiple accuracy metrics to get a good representation of the actual accuracy; a single metric may lead to an incorrect assessment.
- The reference dataset must be representative of the classification.
- Be critical when you communicate your classification to your users.

Requirements regarding the previous two lectures (on "Image analysis" and "Image classification"):
Image analysis:
- Understand the concepts of (visualizing) spectral bands
- Understand simple band operations and what they can be used for
- Know various change-detection techniques (concepts, no technical details)
- Be able to describe and recognize low-pass and high-pass filters
Classification:
- Know different (types of) algorithms (characteristics, pros and cons), be able to describe them and to apply them conceptually
- Be able to describe and apply the various accuracy metrics

Thank you for your attention!