Open Access Repository

Methodological framework for ontology-driven geographic object-based image analysis (O-GEOBIA)



Rajbhandari, S (ORCID: 0000-0002-9952-0801) 2019, 'Methodological framework for ontology-driven geographic object-based image analysis (O-GEOBIA)', PhD thesis, University of Tasmania.


Geographic Object-Based Image Analysis (GEOBIA) has emerged as a new sub-discipline of Geographic Information Science. GEOBIA provides methods to delineate real-world geographic objects from Earth Observation (EO) data. GEOBIA methods rely on human expert (domain) knowledge to perform image segmentation, classification, and classification-based segmentation in an iterative process. However, GEOBIA lacks a systematic method to conceptualise and formalise an expert’s domain knowledge. With a high dependency on human experts and a lack of formalised knowledge, GEOBIA methods are currently subjective and not easily transferable. Ontology can be used to formalise an expert’s domain knowledge and has the potential to reduce subjectivity and to support automation and transferability.
The first objective of the research was to develop a framework for Ontology-driven Geographic Object-Based Image Analysis (O-GEOBIA) that employs an ontology to capture expert domain knowledge and that applies ontological rule-based reasoning for object-based image classification. A case study in Land Use and Land Cover (LULC) classification illustrates the applicability of this new approach.
The methodology involved four steps. Firstly, an ontology was constructed with appropriate LULC classes. Next, a multispectral QuickBird satellite image was segmented into image objects using multiresolution segmentation, and low-level features of the image objects were extracted. The ontology was then used to link the low-level image object features with high-level expert domain knowledge, and classification rules were written in SWRL (Semantic Web Rule Language). Finally, the SWRL rules were executed using a semantic reasoner to perform a rule-based image object classification. The LULC classification from this ontological approach was then compared with results from a classification that did not employ ontology. The comparison showed the same results in terms of the number of classified objects, demonstrating the usability of the O-GEOBIA framework for formalising an expert's domain knowledge as rules for image classification.
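The rule-based classification step can be sketched in plain Python as an analogue of an SWRL rule executed over segmented image objects. This is only an illustration of the rule semantics, not the thesis implementation (which used SWRL with a semantic reasoner); the feature names, the NDVI threshold of 0.4, and the brightness threshold are hypothetical.

```python
# Illustrative analogue of an SWRL classification rule such as:
#   ImageObject(?o) ^ hasNDVI(?o, ?v) ^ swrlb:greaterThan(?v, 0.4) -> Forest(?o)
# evaluated here in plain Python over segmented image objects.

def classify(obj):
    """Assign a LULC class to an image object from its low-level features."""
    if obj["ndvi"] > 0.4:                 # vegetation rule
        return "Forest"
    if obj["mean_brightness"] > 200:      # bright, non-vegetated surfaces
        return "BuiltUp"
    return "Unclassified"

# each dict stands for one image object produced by segmentation
objects = [
    {"id": 1, "ndvi": 0.62, "mean_brightness": 90},
    {"id": 2, "ndvi": 0.10, "mean_brightness": 230},
]
labels = {o["id"]: classify(o) for o in objects}
```

In the actual framework the same logic is carried by ontological axioms and SWRL built-ins, so the rules remain declarative and inspectable rather than buried in code.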
The second objective of the research was to benchmark the proposed O-GEOBIA framework and then to extend it by constructing a modular ontology and by using Machine Learning (ML) to extract feature threshold values, so that classification rules could be developed and applied.
Modularisation was required to separate domain rules from data-specific rules. Domain rules are robust, not specific to a particular dataset (case study), and therefore transferable. Data-specific rules are tied to a dataset and therefore not necessarily transferable. Modularisation assists in tackling transferability: the transferable domain ontology was separated from a data-specific ontology that requires adaptation before it can be transferred. Localised rules developed using the feature ontology require a threshold value for each feature used in the rule, so feature thresholds had to be extracted from the data in order to develop data-specific rules. Localised rules were extracted from data using two ML techniques, Random Forest and inTrees, and a method was developed to predict the threshold values of each feature variable defined in the localised rules. Finally, the rules written in SWRL were executed using a semantic reasoner to classify landslide classes.
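The idea of extracting a feature threshold from data can be sketched as follows. The thesis used Random Forest and inTrees for rule extraction; this minimal sketch instead finds a single best split on one feature by minimising weighted Gini impurity, which is the elementary operation underlying such tree-based rules. The slope values and labels are made up for illustration.

```python
# Sketch of data-driven threshold extraction: find the split value on one
# feature that best separates two classes (0/1 labels) by Gini impurity.

def gini(labels):
    """Gini impurity of a binary (0/1) label list."""
    n = len(labels)
    if n == 0:
        return 0.0
    p = sum(labels) / n
    return 2 * p * (1 - p)

def best_threshold(values, labels):
    """Return the split value on one feature that minimises weighted Gini."""
    pairs = sorted(zip(values, labels))
    best, best_score = None, float("inf")
    for i in range(1, len(pairs)):
        t = (pairs[i - 1][0] + pairs[i][0]) / 2   # midpoint candidate
        left = [l for v, l in pairs if v <= t]
        right = [l for v, l in pairs if v > t]
        score = (len(left) * gini(left) + len(right) * gini(right)) / len(pairs)
        if score < best_score:
            best, best_score = t, score
    return best

# slope values for non-landslide (0) vs landslide (1) objects (illustrative)
slope = [5, 8, 10, 12, 30, 35, 40, 45]
label = [0, 0, 0, 0, 1, 1, 1, 1]
print(best_threshold(slope, label))  # → 21.0
```

The threshold found this way (here, slope <= 21.0) would then be written into a localised SWRL rule for the data-specific ontology module.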
The benchmarking employed a previously published classification case study on landslide detection that used GEOBIA without an ontology. An accuracy assessment yielded an overall accuracy of 86.3% for both frameworks, showing that the classification result from the ontological framework matched the non-ontological framework. This benchmarked the ontological framework and contributed a method for extracting feature threshold values from data, using ML techniques, to construct new rules.
The third objective of the research was to further extend O-GEOBIA by developing a methodology for extracting new localised rules from fused multi-sensor data. The goal was to find relevant object features that enhance classification rules and that cannot be identified from domain knowledge alone but may be discoverable in the data. Relevant feature variables were selected using two ML techniques: Random Forest and Boruta. Semivariogram features derived within the GEOBIA environment were used as inputs for rule-based classifications. A case study in Tasmanian forest-type mapping was developed to validate the proposed methodology.
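A semivariogram feature of the kind derived within the GEOBIA environment can be illustrated with the standard empirical estimator, gamma(h) = (1 / 2N(h)) * sum over pairs of (z_i - z_{i+h})^2, computed here along a one-dimensional pixel transect; the transect values are illustrative.

```python
# Minimal empirical semivariogram at integer lag h along a 1-D transect.
# gamma(h) = (1 / (2 * N(h))) * sum_i (z_i - z_{i+h})^2

def semivariogram(z, h):
    """Empirical semivariance of sequence z at integer lag h."""
    pairs = [(z[i] - z[i + h]) ** 2 for i in range(len(z) - h)]
    return sum(pairs) / (2 * len(pairs))

transect = [10.0, 12.0, 11.0, 15.0, 14.0, 18.0]
print(semivariogram(transect, 1))  # → 3.8
```

Evaluating this at several lags per image object yields the variogram-derived texture features that the ML-based feature selection (Random Forest, Boruta) then ranks against spectral and LiDAR features.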
The rule-based classifications employed (i) spectral features, (ii) vegetation indices, (iii) LiDAR features, and (iv) variogram features, and resulted in overall classification accuracies of 77.06%, 78.90%, 73.39% and 77.06% respectively. Following data fusion, the use of combined feature variables resulted in higher classification accuracy (81.65%). Classification accuracy was further improved (82.57%) by using the ML-selected relevant features to create the localised rules needed for O-GEOBIA.
The fourth and final objective of the research was to improve the O-GEOBIA framework using a deep learning model for automatic extraction of deep features useful for image object classification. Deep learning models such as Convolutional Neural Networks (CNNs) exploit learned semantics to produce better classification results, but make limited use of contextual information and expert domain knowledge. The fusion of pixel-wise CNN prediction and GEOBIA segmentation output was proposed to exploit contextual information for object-based classification.
A CNN model, SegNet, was used for semantic segmentation. Pixel-wise CNN classification results were fused with the segmentation result from GEOBIA to gain geometric and thematic insights on uncertain pixels and segmented objects. The pixels with a maximum class probability value less than a certain threshold were attributed as unclassified pixels and then these were segmented into unclassified objects. The spatial relationships between the unclassified objects and their neighbouring objects were identified. Using domain knowledge based on spatial relationships, ontological rules were defined to classify the unclassified objects into LULC classes.
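The fusion step described above can be sketched in two small functions: one that flags a pixel as unclassified when the CNN's maximum class probability falls below a threshold, and one that relabels such a pixel from its classified neighbours. The neighbour majority vote is only a stand-in for the thesis's ontological spatial-relation rules, and the class names and the 0.5 threshold are illustrative.

```python
# Sketch of CNN/GEOBIA fusion: low-confidence pixels become unclassified,
# then are relabelled from their 4-connected classified neighbours.

THRESH = 0.5  # illustrative confidence cut-off

def label_pixel(probs):
    """Return the argmax class, or None when the CNN is not confident enough."""
    best = max(probs, key=probs.get)
    return best if probs[best] >= THRESH else None

def fill_from_neighbours(labels, r, c):
    """Majority vote over 4-connected classified neighbours of cell (r, c)."""
    votes = {}
    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        rr, cc = r + dr, c + dc
        if 0 <= rr < len(labels) and 0 <= cc < len(labels[0]):
            lab = labels[rr][cc]
            if lab is not None:
                votes[lab] = votes.get(lab, 0) + 1
    return max(votes, key=votes.get) if votes else None
```

In the framework proper, the unclassified pixels are first grouped into objects via the GEOBIA segmentation, and the relabelling decision is made by SWRL rules over spatial relations (adjacency, containment) rather than a raw pixel vote.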
Applied to the LULC case study, the ontology-based classification achieved an overall accuracy of 88.99%, compared with 88.98% for the CNN prediction. F1-scores were calculated to assess per-class accuracy, showing that the Cars and Buildings classes improved with ontology-based classification over the CNN, from 0.8251 to 0.830 and from 0.9484 to 0.9487 respectively. While the impacts on classification accuracy for this case study were small, the research contributes a methodological approach that integrates deep learning into O-GEOBIA.
In summary, this thesis presents a methodological framework for Ontology-driven GEOBIA for image classification that executes ontological rules using a semantic reasoner. O-GEOBIA is extended by leveraging ML techniques to tackle the challenge of relevant feature selection, an important step in defining classification rules. A methodology for using deep CNNs for automatic feature selection in O-GEOBIA for semantic segmentation and classification is developed and demonstrated. The O-GEOBIA framework is not domain dependent and so can be applied to other domain problems; in this thesis it is applied to landslide detection, forest-type mapping and LULC classification. This work has the potential to be developed into an operational tool for Ontology-driven GEOBIA. This thesis contributes to the development and application of GEOBIA by employing ontology for knowledge-driven rule-based classification and by leveraging machine and deep learning to extract relevant features for rule construction.

Item Type: Thesis - PhD
Authors/Creators: Rajbhandari, S
Keywords: GEOBIA, Ontology, Machine Learning, Deep Learning
DOI / ID Number: 10.25959/100.00034517
Copyright Information:

Copyright 2019 the author

Additional Information:

Chapter 3 appears to be the equivalent of a post-print version of an article published as: Rajbhandari, S., Aryal, J., Osborn, J., Musk, R., Lucieer, A., 2017. Benchmarking the applicability of ontology in geographic object-based image analysis, ISPRS International Journal of Geo-Information, 6, 386. © 2017 by the authors. Licensee MDPI, Basel, Switzerland. The article is an open access article distributed under the terms and conditions of the Creative Commons Attribution 4.0 International (CC BY 4.0) license.

Chapter 4 appears to be the equivalent of a post-print version of an article published as: Rajbhandari, S., Aryal, J., Osborn, J., Lucieer, A., Musk, R., 2019. Leveraging machine learning to extend ontology-driven geographic object-based image analysis (O-GEOBIA): a case study in forest-type mapping, Remote Sensing, 11(5), 503. © 2019 by the authors. Licensee MDPI, Basel, Switzerland. The article is an open access article distributed under the terms and conditions of the Creative Commons Attribution 4.0 International (CC BY 4.0) license.

