Geotagged Photos
The geotagged photos demonstration focuses on enhancing the accuracy of agricultural parcel monitoring by integrating geotagged photos with Earth Observation (EO) data. Image quality plays a crucial role in accurate classification: key factors such as brightness, contrast, and blur directly affect the performance of machine learning models. Defining image quality guidelines ensures that only high-quality photos are used, improving the accuracy of automatic classification.
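As a rough illustration, such a screening step could look like the following sketch, which computes three common quality metrics with OpenCV; the metric definitions (mean brightness, RMS contrast, Laplacian variance for blur) are standard choices, and the acceptance thresholds are illustrative assumptions, not the project's published ranges.

```python
# Sketch: per-photo quality screening before classification.
# Thresholds below are placeholder assumptions for illustration.
import cv2
import numpy as np

def quality_metrics(path: str) -> dict:
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE).astype(np.float64)
    return {
        "brightness": gray.mean(),   # 0 (black) .. 255 (white)
        "contrast": gray.std(),      # RMS contrast
        "blur": cv2.Laplacian(gray, cv2.CV_64F).var(),  # low value = blurry
    }

# Hypothetical acceptance ranges (lower bound, upper bound).
RANGES = {"brightness": (40, 220), "contrast": (20, np.inf), "blur": (100, np.inf)}

def passes_quality(path: str) -> bool:
    m = quality_metrics(path)
    return all(lo <= m[k] <= hi for k, (lo, hi) in RANGES.items())
```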
The approach utilizes deep learning models, specifically Convolutional Neural Networks (CNNs), to classify the photos into groups based on their content. Pre-trained models such as EfficientNet and MobileNet are fine-tuned on a labeled dataset. Automating this classification step allows the system to handle large volumes of geotagged photos, providing a scalable method for monitoring and managing agricultural parcels.
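A minimal fine-tuning sketch along these lines, using the torchvision EfficientNet-b1 weights, is shown below; the dataset path and layout (one folder per class) and all hyperparameters are assumptions for illustration, not the project's actual configuration.

```python
# Sketch: fine-tuning a pre-trained EfficientNet-b1 for three
# management classes. Paths and hyperparameters are assumed.
import torch
from torch import nn
from torchvision import datasets, models, transforms

NUM_CLASSES = 3  # "no management", "mowed", "animals present"

tfm = transforms.Compose([
    transforms.Resize((240, 240)),  # EfficientNet-b1 native input size
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
train_ds = datasets.ImageFolder("photos/train", transform=tfm)  # hypothetical path
loader = torch.utils.data.DataLoader(train_ds, batch_size=32, shuffle=True)

# Load ImageNet weights, then replace the classification head.
model = models.efficientnet_b1(weights=models.EfficientNet_B1_Weights.DEFAULT)
model.classifier[1] = nn.Linear(model.classifier[1].in_features, NUM_CLASSES)

opt = torch.optim.AdamW(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()
model.train()
for images, labels in loader:  # one epoch shown for brevity
    opt.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    opt.step()
```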
The key findings from this use case on geotagged photo classification for monitoring agricultural parcels reveal several insights:
Automated Classification Success: The study successfully automated the classification of geotagged photos of grassland parcels into three categories ("no management," "mowed," and "animals present") using CNNs. Among the models tested, EfficientNet-b1 performed best, achieving an F1 score of 74.57% while balancing high precision and recall.
Principle of AI-based analysis of geotagged photos (categorization of parcels by grassland management type) using Convolutional Neural Networks.
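For reference, an F1 score such as the 74.57% reported above can be computed from model predictions as in the following sketch; macro averaging is shown because it balances precision and recall across all three classes, though the exact averaging used in the study is not specified here, and the label arrays are placeholders.

```python
# Sketch: macro-averaged F1 over the three management classes.
# Labels are illustrative placeholders, not study data.
from sklearn.metrics import f1_score

y_true = ["mowed", "mowed", "animals", "no_management", "animals"]
y_pred = ["mowed", "animals", "animals", "no_management", "mowed"]

# Macro F1 = unweighted mean of per-class F1, so each class counts
# equally regardless of how many photos it has.
print(f1_score(y_true, y_pred, average="macro"))
```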
Challenges with Imbalanced Data: One challenge was that the dataset contained a disproportionate number of photos for some classes (e.g., "mowed") compared with others (e.g., "animals"). This imbalance caused the model to struggle to recall the minority classes.
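One standard mitigation for such imbalance, shown in the sketch below, is to weight the cross-entropy loss inversely to class frequency so that errors on rare classes cost more; the class counts are placeholders, and the study's actual remedy may differ.

```python
# Sketch: inverse-frequency class weights for an imbalanced dataset.
# The photo counts per class are assumed values for illustration.
import torch
from torch import nn

counts = torch.tensor([1200.0, 4500.0, 300.0])  # no_management, mowed, animals
weights = counts.sum() / (len(counts) * counts)  # rarer class -> larger weight
loss_fn = nn.CrossEntropyLoss(weight=weights)    # drop-in for the training loop
```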
Impact of Image Quality: Image quality proved crucial to classification success. The study identified key image quality metrics (brightness, contrast, saturation, etc.) and established acceptable ranges for each to ensure effective classification. Lower image quality degraded the model's performance, which was particularly sensitive to blur and hue.
General Model Robustness: The CNN models, particularly EfficientNet-b1, were robust across a wide range of image transformations, although extreme alterations to image brightness, contrast, and other factors led to a decline in classification accuracy.
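Robustness of this kind can be probed by sweeping a perturbation factor and measuring the resulting accuracy, as in the sketch below; `model` is the fine-tuned network from the earlier sketch, `raw_loader` is assumed to yield validation images as un-normalized [0, 1] tensors (so the perturbation is applied before normalization, as it would affect a freshly taken photo), and the factor values are illustrative.

```python
# Sketch: measuring accuracy under increasingly extreme brightness shifts.
import torch
from torchvision import transforms
from torchvision.transforms import functional as TF

normalize = transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])

@torch.no_grad()
def accuracy_under_brightness(model, raw_loader, factor: float) -> float:
    model.eval()
    correct = total = 0
    for images, labels in raw_loader:
        # Perturb in pixel space, then normalize as at training time.
        perturbed = normalize(TF.adjust_brightness(images, factor))  # 1.0 = unchanged
        preds = model(perturbed).argmax(dim=1)
        correct += (preds == labels).sum().item()
        total += labels.numel()
    return correct / total

for factor in (0.25, 0.5, 1.0, 2.0, 4.0):  # extreme values expose the decline
    print(factor, accuracy_under_brightness(model, raw_loader, factor))
```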
Example of the geotagged photo quality assessment (e.g. brightness, left) and recommended ranges of the main image quality indicators (right).