Cooperative Institute for Research to Operations in Hydrology

CIROH Training and Developers Conference 2024 Abstracts

Authors: Cyril Thébault, Martyn Clark, Wouter Knoben – University of Calgary

Title: Large-sample hydrology across North America within a spatially distributed framework

Abstract: With the growing availability of hydro-meteorological data and the constant increase in computing resources, large-sample hydrology datasets are now widely used to evaluate hydrologic models. These datasets draw on large sets of catchments with varied climatic and geomorphological characteristics to support robust conclusions. While this practice is increasingly common for lumped catchments around the world (e.g. the Caravan database), accounting for spatial variability through spatially distributed approaches remains much less common in large-sample studies. This work creates a benchmark for spatially distributed model performance across a newly developed large-sample dataset for North America. The CAMELS-spat dataset (“Catchment Attributes and METeorology for Large-sample Studies for SPATially distributed modeling”) provides climate forcing (e.g. precipitation, temperature, potential evapotranspiration), observed streamflow, and geospatial characteristics (e.g. DEM, soil classes, land cover) for distributed catchments with limited anthropogenic influence across North America. We initially use the GR4J model coupled with the CemaNeige snow module to simulate daily streamflow at the outlets of 928 catchments for the period 1989-2009, in both lumped (benchmark) and spatially distributed configurations. GR4J is a sensible starting point because it is a parsimonious conceptual model that has been widely tested around the world. Spatial distribution offers neither gain nor degradation compared with the common lumped approach (KGE values range from 0.14 to 0.98, with a median around 0.79, for both frameworks). This work is part of a larger project that seeks to define pathways toward multi-model mosaics for operational hydrologic prediction. Developing model performance benchmarks is a key step in identifying the best models to include in a multi-model mosaic and enables evidence-based decisions about which models to use for operational prediction.
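The performance scores quoted above are Kling-Gupta Efficiency (KGE) values. For readers unfamiliar with the metric, the sketch below shows the standard KGE formulation (Gupta et al., 2009) as commonly implemented; this is an illustrative computation, not the authors' evaluation code.

```python
import numpy as np

def kge(sim, obs):
    """Kling-Gupta Efficiency between simulated and observed streamflow.

    KGE = 1 - sqrt((r - 1)^2 + (alpha - 1)^2 + (beta - 1)^2), where
    r is the linear correlation, alpha the ratio of standard deviations,
    and beta the ratio of means. A perfect simulation scores 1.0.
    """
    sim = np.asarray(sim, dtype=float)
    obs = np.asarray(obs, dtype=float)
    r = np.corrcoef(sim, obs)[0, 1]       # correlation component
    alpha = sim.std() / obs.std()         # variability component
    beta = sim.mean() / obs.mean()        # bias component
    return 1.0 - np.sqrt((r - 1) ** 2 + (alpha - 1) ** 2 + (beta - 1) ** 2)
```

Computing this score per catchment over the 1989-2009 simulation period and comparing the lumped and distributed distributions (e.g. their medians) is one straightforward way to reproduce the kind of benchmark comparison the abstract describes.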