Why we should want models to disagree: the value of error diagnosis and low-skill models in climate science

Ryan O'Loughlin

Seminar
Oct. 25, 2022

11:00 am – 12:00 pm MDT

Virtual

Webcast


Although scientists have been conducting relatively systematic climate model intercomparisons for decades, error diagnosis (i.e., identifying and attributing sources of model-model disagreement) has been far less systematized. In this talk, I analyze the strategies used to diagnose model errors based on what is reported in the scientific literature, going back to the first Atmospheric Model Intercomparison Project (AMIP). These strategies include dimension reduction techniques, multi-model sensitivity analysis, physical reasoning about the radiative effects of clouds, and testing hypotheses about sources of potential model error. Based on the case studies considered, I suggest that a systematic model-error repertoire can benefit climate modelers and other climate scientists alike. These benefits include proactively identifying which part(s) of a model are most in need of revision (and why) and demonstrating a deeper understanding of climate model behavior. Finally, I suggest that attending to error diagnosis in climate modeling invites us to ask new questions about the scientific value of model intercomparisons, e.g., concerning the knowledge gained from worse-performing models in model weighting analyses.
