ICIP 2025 Workshop on Computer Vision for Ecological and Biodiversity Monitoring (CV-EBM)

We are facing a global environmental crisis caused by anthropogenic climate change and the degradation of habitats and ecosystems due to certain agricultural practices and urbanization. To understand the extent of this impact and environmental responses to intervention, it is necessary to monitor ecosystems through the collection and identification of patterns in various data, including remote sensing imagery, ground-level imagery, and video.

This workshop will bring together leading experts from both the computational and ecological research communities to share the latest innovations, datasets, and applications in a focused forum with two keynote speakers alongside regular paper submissions.

The CV-EBM workshop will be a full-day event, and all submitted papers will undergo the standard review process. Accepted papers will be included in the ICIP proceedings.


Call for Papers

We are now accepting regular paper submissions to CV-EBM!
Topics include (but are not limited to):

Scene analysis

  • Fine-grained/hierarchical classification
  • Habitat type/biodiversity classification
  • Land-use/land-cover mapping
  • Biomass/carbon sequestration estimation

Object detection & segmentation

  • Detection & counting of plants/animals
  • Handling small objects and occlusions
  • Segmentation of camouflaged organisms
  • Organ/anatomical segmentation

Technical challenges

  • Data-efficient learning
  • Learning from noisy labels
  • Multi-modal learning
  • Continual/lifelong learning

Imaging modalities & benchmark datasets

  • Ground-level imagery
  • Remote sensing/earth observation data
  • Camera trap/video data
  • Hyperspectral/multispectral imaging
  • Microscopic imaging

Tracking and temporal monitoring

  • Animal re-identification
  • Multi-object tracking (MOT)
  • Tracking in challenging environments
  • Population monitoring
  • Pose estimation

Applications

  • Species monitoring and identification
  • Predictive modelling of habitat loss/change
  • Prioritising areas for conservation/restoration
  • Monitoring harmful/invasive species

Important dates

Satellite Workshop Paper Submission Deadline: 28 May 2025
Satellite Workshop Paper Acceptance Notification: 25 June 2025
Satellite Workshop Final Paper Submission Deadline: 2 July 2025
Satellite Workshop Author Registration Deadline: 16 July 2025
Main Conference Dates: 14 – 17 September 2025
CV-EBM Satellite Workshop Date: TBC

Organising committee

University of Lincoln, UK

University of Oxford, UK


Our statement on diversity

We welcome submissions from authors of all backgrounds regardless of race, ethnicity, gender, sexual orientation, disability, or geographic location.

We especially encourage authors from underrepresented groups (minority ethnicities, women, LGBTQ+) to attend and contribute.

Extended Abstract

We are facing a global environmental crisis caused by anthropogenic climate change, and the destruction of habitats and ecosystems due to agriculture and urbanization. To understand the extent of this impact and environmental responses to intervention, it is necessary to monitor ecosystems through the collection and identification of patterns in various data, including remote sensing imagery, ground-level imagery, and video. State-of-the-art computer vision based on deep learning is an emerging technology for ecological applications and offers the potential to automate the analysis of many different species, habitats, their relationships, and their responses to intervention and management.

Computer vision systems have been successfully developed for ecological monitoring; however, various open challenges limit their applicability in real-world scenarios at scale: (i) handling geographic differences in species composition (domain adaptation/generalisation), (ii) lack of high-fidelity annotations (weak supervision), (iii) utilising and fusing information from different data sources (multi-modal learning), (iv) leveraging large amounts of unannotated data (self-supervised learning), and (v) creation of high-fidelity, publicly available benchmark datasets (e.g., Pl@ntNet).

Given the volumes of image and video data now routinely collected and catalogued by ecologists, there is significant untapped potential to develop approaches that can perform national or even global scale analyses that would otherwise be intractable through manual methods. This workshop will bring together leading experts from both the computational and ecological research communities to share the latest innovations, datasets, and applications in a focused forum with two keynote speakers alongside regular paper submissions. This interdisciplinary workshop is aligned with the UN’s Sustainable Development Goal 15, “Life on Land”, which aims to “Protect, restore and promote sustainable use of terrestrial ecosystems, sustainably manage forests, combat desertification, and halt and reverse land degradation and halt biodiversity loss.”