The scientific objective of the Global Dynamics Section (GDS) is to increase understanding of the mechanisms and theoretical predictability of large-scale atmospheric variability on time scales of days to years. This objective will contribute to the scientific basis of predicting transient, global circulations in the atmosphere beyond present practical limits. GDS scientists take three approaches to their research: 1) numerical and theoretical studies using a hierarchy of physical models that range from the non-divergent vorticity equation to coupled atmosphere-ocean models, 2) investigation of the cause of low-frequency variability and experimentation with the Community Climate Model (CCM) to explore the practical skill of predicting low-frequency variability, and 3) sensitivity analysis of numerical prediction to atmospheric initial conditions and design of improved data assimilation for non-geostrophic flows, particularly for tropical and mesoscale forecasting.
Over the past several years, GDS scientists have been investigating the theoretical predictability of atmospheric variations on increasingly longer time scales. Studies are continuing on both ends of the temporal spectrum: short-term predictability of the atmosphere, and climate determinism and almost intransitive behavior of the climate system. With respect to climate determinism and (almost) intransitivity, one of the most exciting developments in ocean modeling in recent years has been the discovery of the multiple equilibrium structure of the thermohaline circulation. Ocean models appear to be generically unstable to finite-amplitude salinity perturbations. Sufficiently large negative high-latitude salinity anomalies induce a "halocline catastrophe," in which high-latitude sinking motion is suppressed and the model "flips" from one thermohaline circulation state to another. This multiple equilibrium structure has been studied extensively in ocean models of varying degrees of complexity, ranging from box models to global general circulation models (GCMs). However, these ocean models usually have highly simplified upper boundary conditions to represent the atmospheric fluxes of heat and freshwater. Therefore, the relevance of these ocean-only multiple equilibria to the real coupled ocean-atmosphere system remains an open question. Understanding the multiple equilibrium structure of the thermohaline circulation in the ocean-atmosphere system may be crucial to explaining long-term climate variability, both natural and anthropogenic.
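The essence of this multiple equilibrium structure can be illustrated with a schematic nondimensional salinity equation in the spirit of Stommel's classic box model (not the GDS coupled model itself). Here the overturning strength is taken proportional to |1 - y|, where y is a nondimensional salinity difference; for a suitable freshwater forcing F the steady-state equation has several roots, i.e., coexisting circulation states. All coefficients are illustrative.

```python
import numpy as np

def salinity_tendency(y, F=0.2):
    """dy/dt for a schematic nondimensional salinity difference y.
    Overturning strength ~ |1 - y|: thermally driven for y < 1,
    salinity driven for y > 1."""
    return F - abs(1.0 - y) * y

def find_equilibria(F=0.2, lo=0.0, hi=2.0, n=2000, tol=1e-10):
    """Locate steady states by scanning for sign changes, then bisecting."""
    ys = np.linspace(lo, hi, n)
    g = np.array([salinity_tendency(y, F) for y in ys])
    roots = []
    for i in range(n - 1):
        if g[i] * g[i + 1] < 0:
            a, b = ys[i], ys[i + 1]
            while b - a > tol:
                m = 0.5 * (a + b)
                if salinity_tendency(a, F) * salinity_tendency(m, F) <= 0:
                    b = m
                else:
                    a = m
            roots.append(0.5 * (a + b))
    return roots

eq = find_equilibria()
print(len(eq), [round(r, 3) for r in eq])  # three equilibria for F = 0.2
```

A sufficiently large salinity perturbation pushes the system across the unstable middle root, producing the "flip" between circulation states described above.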
Saravanan and McWilliams have been investigating the properties of the thermohaline circulation in an idealized coupled ocean-atmosphere model. The atmospheric component of this coupled system is a two-level global model based on the moist primitive equations, incorporating simplified parameterizations of radiation, precipitation, and albedo. The oceanic component is a two-dimensional, zonally averaged model of the thermohaline circulation in which all small-scale processes are represented by eddy diffusion coefficients. Integrations of the coupled ocean-atmosphere model were carried out at T21 resolution for the atmosphere and 3-degree meridional resolution for the ocean with a one-hour time step. Since this coupled model runs at a speed of about 1 CPU hour per model year on a Sun SPARCstation 10, integrations extending over many centuries can be carried out very cheaply.
Numerical experiments using this idealized coupled model have led to several interesting conclusions. The multiple equilibrium structure seen in the ocean-only context seems to persist in the coupled ocean-atmosphere system. Although coupling to the atmosphere changes important details of the ocean circulation, there does seem to be a one-to-one correspondence between the coupled equilibria and the ocean-only equilibria. Coupled integrations with negative high-latitude salinity anomalies indicate that the mixed boundary conditions traditionally used in ocean models tend to destabilize the thermohaline circulation in an unrealistic manner. Capotondi (ASP) and Saravanan are investigating more realistic boundary conditions for ocean models to better capture the atmospheric feedback processes. Another interesting result from the coupled integrations is that the atmospheric meridional heat transport compensates very rapidly for changes in the oceanic meridional heat transport, so that the sum of the two transports remains nearly constant. This appears to be due to the relative efficiency of dynamical processes, as compared with radiative/thermodynamic processes, in responding to changes in the equator-to-pole sea surface temperature gradient.
In recent years it has been suggested that a coupled climate system may not be needed to produce the type of "flip" behavior indicative of almost intransitivity; the atmosphere alone, through its nonlinear behavior, may exhibit such variations. This is especially relevant when one examines signals of inter-seasonal and inter-annual duration, but it is also important for longer-term signals, such as those associated with climate change. Before the significance of climate-change signals can be accurately tested, it is necessary to characterize the low-frequency variability that can occur in the climate system in the absence of external changes to the system. One facet of this characterization that is of interest is whether the system has more than one equilibrium. Some investigations of the CCM have suggested that an atmosphere with fixed boundary conditions can have two equilibria, but recent work has indicated that insufficient data were used in these studies to draw definitive conclusions. Branstator and Anthony Hansen (visitor, Augsburg College) have begun a new investigation to settle this question. Using very long integrations of CCM0B, they are calculating probability density functions of various indices of planetary wave amplitude to determine whether multiple equilibria are detectable when sampling problems are not an issue. To date, only unimodal distributions have been produced.
There also remain some issues in the short-range predictability of the atmosphere, where, in particular, the question of mesoscale predictability remains controversial. To shed some light on the subtleties involved, Errico and Martin Ehrendorfer (visitor, Institute for Meteorology and Geophysics, Vienna) completed a study in which they estimated the number of growing singular vectors in a model for forecasts of up to 24 hours duration. These are the numbers of independent, orthogonal perturbations that increase the value of a norm, expressed in terms of the perturbations, during a specified period. They determined that only approximately one quarter of one percent of the possible modes were growing ones when gravitational modes were excluded from consideration. These results help explain the lack of perturbation error growth observed in both mesoscale and global predictability studies at short time ranges. For this study they used a dry version of the NCAR Mesoscale Adjoint Modeling System version 1 (MAMS1), although the results would likely apply to a high-resolution global model as well. The results are also applicable to the design of data assimilation systems that use singular vectors to reduce the computational size of the assimilation problem.
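The counting of growing singular vectors can be illustrated on a toy tangent-linear propagator: the singular values of the propagator matrix measure norm growth over the forecast interval, and singular values greater than one mark growing perturbations. The matrix below is an arbitrary construction (damped noise plus a few injected growing directions), not MAMS1.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200  # toy phase-space dimension (a real model has vastly more)

# Hypothetical tangent-linear propagator for a fixed forecast interval:
# weakly damped noise plus three injected growing directions.
M = 0.3 * rng.standard_normal((n, n)) / np.sqrt(n)
for _ in range(3):
    u = rng.standard_normal(n)
    v = rng.standard_normal(n)
    M += 3.0 * np.outer(u / np.linalg.norm(u), v / np.linalg.norm(v))

# Singular vectors of M are the orthogonal perturbations that optimize
# norm growth; singular values > 1 mark the growing ones.
sigma = np.linalg.svd(M, compute_uv=False)
n_growing = int((sigma > 1.0).sum())
print(n_growing, "growing directions out of", n)
```

By construction only a small fraction of the directions grow, mirroring the finding that growing modes are a tiny subset of the full perturbation space.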
Lastly, the question of the intrinsic predictability of the El Niño phenomenon has been investigated by Martin Ehrendorfer (visitor, Institute for Meteorology and Geophysics) and Tribbia. They are studying the compatibility of a nonlinear dynamical model with a linear statistical forecast method that attributes the loss of predictability to stochastic forcing. Some progress has been made in the statistical modeling of synthetic time series (simulated El Niño events from the Tziperman model). Embedding a time series generated by a nonlinear model in a higher-dimensional phase space may allow the process to be described in a linear framework. More extensive and systematic experiments, as well as some form of hypothesis testing, are still necessary.
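The delay-embedding idea can be sketched as follows: generate a synthetic nonlinear time series (a simple stochastic nonlinear oscillator standing in for Tziperman-model output), embed it in m-dimensional phase space with lagged copies, and fit a linear one-step predictor by least squares. All model coefficients are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic nonlinear series: a weakly damped oscillator with a cubic
# nonlinearity and stochastic forcing (a stand-in for an ENSO-like signal).
N, dt = 4000, 0.1
x = np.zeros(N)
v = 0.0
for t in range(1, N):
    a = -0.6 * x[t - 1] - 0.4 * x[t - 1] ** 3 - 0.1 * v + 0.3 * rng.standard_normal()
    v += dt * a
    x[t] = x[t - 1] + dt * v

def embed(series, m, tau=1):
    """Delay-embed: row t is (x_{t+m-1}, x_{t+m-2}, ..., x_t)."""
    n = len(series) - (m - 1) * tau
    return np.column_stack([series[(m - 1 - j) * tau: (m - 1 - j) * tau + n]
                            for j in range(m)])

def one_step_rmse(series, m):
    """Fit a linear map from the embedded state to the next value."""
    E = embed(series, m)
    X, y = E[:-1], series[m:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return float(np.sqrt(np.mean((X @ coef - y) ** 2)))

for m in (1, 2, 4, 8):
    print(m, one_step_rmse(x, m))
```

The one-step error drops sharply once the embedding dimension is large enough to resolve the oscillator's hidden state, illustrating how embedding can render a nonlinear process approximately linear.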
A rigorous test of our understanding of atmospheric phenomena, as embodied in the formulation of comprehensive models, is the accuracy of such models in the forecast arena. This is especially true in the tropics and at extended range, where the influences of the diabatic terms can be of paramount importance. In the tropics, efforts to validate and improve forecast performance have been hindered by the delay in the generation of diabatic forcing in atmospheric models when they are initialized with analyses of the atmosphere.
Kasahara, Mizzi, and Leo Donner (NCAR affiliate scientist from the Geophysical Fluid Dynamics Laboratory (GFDL)/NOAA) are conducting research on a unified scheme of diabatic initialization to improve the analysis of temperature, horizontal divergence, and moisture in the tropics. The unified scheme is a combination of the cumulus initialization and the traditional diabatic nonlinear normal mode initialization. The objective of the cumulus initialization is to adjust the distributions of temperature, moisture, and horizontal divergence through the inversion of a convective precipitation parameterization with additional dynamical and physical constraints in such a way that the calculated convective precipitation agrees very closely with observed precipitation rates. Forecast experiments using the NCAR CCM1 with the Kuo cumulus parameterization indicate that the application of this cumulus initialization scheme can ameliorate the problem of precipitation spinup. Also, it can be used to check the quality of the first-guess fields for objective analysis and to assimilate observed precipitation rates into atmospheric data analysis.
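The inversion step at the heart of cumulus initialization can be sketched schematically: given an observed rain rate, adjust the moisture input of a convective scheme until the parameterized precipitation matches the observation. The exponential "scheme" below is a hypothetical stand-in, not the Kuo parameterization, and the constants are arbitrary.

```python
import math

def scheme_precip(q, a=0.5, b=40.0):
    """Hypothetical convective scheme: precipitation grows nonlinearly
    with column moisture q (a stand-in for the actual Kuo closure)."""
    return a * (math.exp(b * q) - 1.0)

def invert_moisture(p_obs, q0=0.02, a=0.5, b=40.0, tol=1e-12):
    """Newton iteration: find q such that scheme_precip(q) == p_obs."""
    q = q0
    for _ in range(50):
        f = scheme_precip(q, a, b) - p_obs
        if abs(f) < tol:
            break
        q -= f / (a * b * math.exp(b * q))  # divide by d(precip)/dq
    return q

p_obs = 3.0                # "observed" rain rate (arbitrary units)
q = invert_moisture(p_obs)
print(q, scheme_precip(q))  # the scheme now reproduces the observed rate
```

In the actual unified scheme the adjustment is applied to temperature, moisture, and divergence fields under additional dynamical and physical constraints, but the match-the-observed-precipitation inversion is the same in spirit.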
Kasahara has also started to investigate adapting the present diabatic initialization methodology to a forecasting model that adopts a different type of cumulus parameterization from the Kuo scheme. Currently, a cumulus initialization scheme is being developed for CCM2, which uses a stability-dependent mass-flux parameterization for cumulus convection designed by Hack. Although this scheme adjusts only the moisture field at present, the inversion method is flexible enough to apply to a different type of cumulus parameterization as long as it is written in subroutine form. The need for a temperature adjustment will be examined after this scheme is tested in forecasting mode with the CCM2.
There are interesting similarities between large-scale motions in the tropics and mesoscale motions in general, despite the order-of-magnitude difference in their horizontal length and time scales. One commonality is that diabatic heating is very important in supporting the vertical circulation. Also, both scales of motion are ageostrophic and divergent. It is well known that the spinup problem in precipitation forecasts with mesoscale models is severe if only a conventional initialization is used. Various efforts have been made in the mesoscale modeling community to introduce special initialization procedures to overcome the spinup problem. Kasahara, working with Hiromaru Hirakuchi and Jun-ichi Tsutsui (visitors from the Central Research Institute of Electric Power Industry, Japan), is developing a unified diabatic initialization for the NCAR/Pennsylvania State University (PSU) mesoscale model (MM4). One area of focus is the problem of diabatic initialization for tropical cyclones, to which very little attention has been paid so far. Preliminary results with MM4 forecasts of a dual system of typhoons (Nos. 18 and 19 of September 1990 in the western Pacific) show that the lack of diabatic initialization gives rise to a severe precipitation spinup problem. Since the latent heat of condensation is the major source of energy in tropical cyclones, the precipitation spinup problem significantly affects the analysis and prediction of tropical cyclones. The proxy precipitation rates needed for the cumulus initialization of tropical cyclones will be estimated using a combination of infrared (IR) data from the GMS and microwave data from the SSM/I.
Two additional efforts are examining issues in mesoscale analysis. Errico and Jian-Wen Bao (ASP) investigated the information content of observations used in analyses produced by nudging. They determined that, in the ways it is normally applied, nudging effectively discards prior observations, retaining only observations at the final analysis time. They performed this study by directly computing the sensitivities of the analysis fit to observations with respect to the observations themselves, using a version of the MAMS1 incorporating the adjoint of the nudging terms. They derived the same qualitative results with a one-dimensional shallow water model. These results indicate that so-called four-dimensional data assimilation using nudging has more in common with three-dimensional procedures than with statistical-dynamical four-dimensional techniques, such as Kalman smoothing and variational assimilation using adjoints. In an effort that combines the incorporation of diabatic physics and small-scale information on the mesoscale, Vukicevic is examining the possibilities and limitations of using the MAMS1 for assimilating cloud-related data. Specific issues are: a) evaluation of the tangent linear and adjoint model errors associated with the discontinuities due to cloud parameterizations, and b) possibilities for minimizing these errors through a four-dimensional data assimilation procedure that uses constraints on these parameterizations.
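The result that nudging forgets prior observations can be demonstrated on a trivial scalar analogue in which the "model" is pure relaxation toward the observations: the sensitivity of the final analysis to each observation decays geometrically with the observation's age. The coefficients below are illustrative; real nudging in MAMS1 acts on full model fields.

```python
import numpy as np

def nudged_run(obs, g=2.0, dt=0.1):
    """Integrate the scalar model dx/dt = g*(obs(t) - x) by forward Euler.
    One observation per time step (a deliberately minimal setup)."""
    x = 0.0
    for ob in obs:
        x += dt * g * (ob - x)
    return x

nsteps = 50
base = np.zeros(nsteps)
x0 = nudged_run(base)

# Sensitivity of the final analysis to each observation, by perturbing
# one observation at a time (exact here, since the system is linear).
sens = np.empty(nsteps)
for k in range(nsteps):
    pert = base.copy()
    pert[k] += 1.0
    sens[k] = nudged_run(pert) - x0

print(sens[0], sens[-1])  # early observations are nearly forgotten
```

Each observation's weight is g*dt*(1 - g*dt)^(age in steps), so the analysis is dominated by observations near the final time, consistent with the conclusion that nudging behaves more like a three-dimensional scheme than a true four-dimensional one.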
The second important testing ground for atmospheric models is the monthly forecast. As is well known, deterministic forecasting over such an extended range is impossible because of the intrinsic loss of predictability in the atmospheric system. Thus, a stochastic-dynamic prediction method is needed, and studies investigating the efficacy of ensemble prediction have continued. Before starting an ensemble forecast, an estimate of analysis uncertainty is needed. Baumhefner has addressed this by calculating differences between the National Meteorological Center (NMC) and the European Centre for Medium-Range Weather Forecasts (ECMWF) products over a five-year sample to determine the global geographical distribution of analysis error. Not surprisingly, the largest differences were found in the stormtrack regions of the world, with values exceeding 100 m at 500 mb. The very short-term (0-48 hour) error from simulations of analysis difference was examined for an East Coast cyclogenetic event at the request of Steven Tracton (NMC). Very rapid error growth occurred for precipitation, vorticity, and other second-order quantities. These results were compared with other models at NMC that produced similar behavior.
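The analysis-difference estimate can be sketched with synthetic data: given two independent analyses of the same fields, the pointwise RMS of their difference maps where the analyses disagree, which serves as a proxy for analysis error. The fields below are random surrogates with illustrative error magnitudes, not actual NMC/ECMWF products.

```python
import numpy as np

rng = np.random.default_rng(5)
ntime, nlat, nlon = 1825, 10, 20   # ~5 years of daily fields on a coarse grid

# Synthetic 500 mb height "analyses": a common truth plus independent
# 20 m analysis errors for each center (purely illustrative numbers).
truth = 5500.0 + 50.0 * rng.standard_normal((ntime, nlat, nlon))
nmc = truth + 20.0 * rng.standard_normal((ntime, nlat, nlon))
ecmwf = truth + 20.0 * rng.standard_normal((ntime, nlat, nlon))

# Pointwise RMS of the analysis difference: a proxy map of analysis error.
rms = np.sqrt(((nmc - ecmwf) ** 2).mean(axis=0))
print(rms.mean())  # near sqrt(20**2 + 20**2), about 28 m
```

With real analyses the error maps are far from uniform, which is why the stormtrack maxima stand out so clearly.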
With respect to ensemble construction using a small number of realizations, Ehrendorfer (visitor, Institute for Meteorology and Geophysics) and Tribbia have performed investigations related to the question of prediction of forecast skill or, more generally, to the question of predicting moments of the phase space probability density function (pdf) of the model state. The Liouville equation provides the conceptual framework. However, in view of the large dimensionality of the model phase space for operational models, direct solution of the Liouville equation is inefficient, and estimating moments can be (approximately) achieved by simulating the method of characteristics through ensemble prediction, thus avoiding questions of closure (as in stochastic-dynamic prediction). In this context, efficient sampling of the initial pdf becomes an important issue. The relationship between dynamical orthogonal patterns and the eigenfunctions of the covariance structure of the pdf provides a means to design efficient sampling strategies. Some experiments have been performed for a low-dimensional Lorenz system; results indicate that considerable savings over random sampling are possible. In light of the number of growing dynamical orthogonal patterns (DOPs) found for a mesoscale model (see above), it is likely that for operational models ensemble sizes of several hundred are necessary for sampling the initial density function exhaustively and for estimating higher (second) moments accurately. Another question relates to accelerating the convergence of estimated moments towards their statistical-mechanical equilibrium values.
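The advantage of sampling the initial density along leading eigenvectors of its covariance, rather than along random directions, can be quantified in a toy setting with a red (rapidly decaying) covariance spectrum. All numbers below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
n, k = 100, 5

# Hypothetical analysis-error covariance with a red spectrum:
# eigenvalues decay like 1/j**2 in a random orthonormal basis.
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
lam = 1.0 / (1.0 + np.arange(n)) ** 2
C = (Q * lam) @ Q.T

# Variance captured by perturbing along the k leading eigenvectors...
evals, evecs = np.linalg.eigh(C)
var_eof = np.sort(evals)[::-1][:k].sum() / evals.sum()

# ...versus k random orthonormal directions.
R, _ = np.linalg.qr(rng.standard_normal((n, k)))
var_rand = float(np.trace(R.T @ C @ R) / evals.sum())

print(round(var_eof, 3), round(var_rand, 3))
```

A handful of well-chosen directions captures most of the initial variance, whereas random directions capture roughly k/n of it, which is the sense in which eigenvector-based sampling yields "considerable savings over random sampling."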
The proof of any scientific endeavor lies in the ability to forecast future events, and Baumhefner has documented the success of the GDS effort to date in the area of extended-range forecasting. A paper was completed that directly compared eight 30-day forecast ensembles made with a low-resolution climate model to forecast ensembles made with a much higher-resolution model. The climate-model forecasts were on average slightly better, especially late in the period. The systematic error was reduced considerably, and the spread of the ensemble skill was as large as in the high-resolution cases. Documentation of the sensitivity of these results to model physics changes and seasonal variations has begun. An extensive comparison of 6-10 day forecast skill from a T31 version of CCM1 with the operational NMC model during the winters of 1990-1993 was recently conducted. In twenty-two samples, the forecast error was slightly worse when the two control forecasts were compared; however, when the 10-member T31 ensembles were used, the CCM1 ensemble-averaged forecasts were better. In summary, all comparisons to date strongly indicate the superiority in forecast skill of relatively low-resolution ensemble forecasts for periods beyond the deterministic daily limit of predictability.
The dynamics of long-lived, low-frequency flow regimes remain a topic of great interest as a satisfactory theoretical explanation of their manifestations has been elusive. Branstator has examined three aspects of this variability: eddy forcing, modal behavior, and external diabatic forcing.
An outstanding question in the theory of intra-annual to inter-annual atmospheric variability is why fluctuations on these time scales tend to be concentrated over the North Pacific and North Atlantic, and what processes affect the structure of these fluctuations. One process that should affect such variability is the feedback from momentum fluxes by higher-frequency, synoptic disturbances, since the distribution and structure of synoptic disturbances is influenced by lower-frequency perturbations. Because the influence of this feedback process could not be separated from the stochastic variability of momentum fluxes by transients, past studies have not been able to isolate its effect. Branstator has overcome this difficulty by using a model of the stormtracks that he recently developed. Among other things, this model indicates the time-averaged momentum fluxes produced by the synoptic disturbances that occur in reaction to a specified low-frequency anomaly. Calculations with the model indicate that the feedback resulting from these fluxes is highly geographically dependent. In particular, the feedback is especially strong and positive over the North Pacific and North Atlantic and thus is apparently one reason that low-frequency variability is concentrated in these regions. Further experiments with the stormtrack model suggest that the feedback is strong enough to change the structure of a low-frequency anomaly and helps determine which low-frequency anomalies are especially prominent. Work along these lines is being used to diagnose the low-frequency behavior of NCAR's CCM.
Modal behavior is also a cause of persistence in the atmosphere and continues to be a topic of investigation. Branstator and Isaac Held (Princeton University) have extended their investigation of the influence of stationary waves on the frequency and structure of Rossby-Haurwitz modes. By tracking modes of the nondivergent barotropic vorticity equation as the model basic state is gradually changed from a state of rest to an observed climatological state, their work has demonstrated that some of the gravest Rossby-Haurwitz modes are only modestly affected by the time-mean waves. However, new calculations indicate that some modes that are sometimes thought to exist in nature (in particular the 16-day wave) cannot be unambiguously followed; for these modes the results of tracking depend on the path taken through basic-state phase space. Thus, when such a Rossby-Haurwitz mode is influenced by stationary waves of significant amplitude, it is not meaningful to speak of the mode as having a unique structure.
Frequently, specialized model algorithms can elucidate dynamical aspects of geophysical flows not easily observed using conventional algorithms. One such technique, contour dynamics, has been used by Saravanan to study tropospheric and stratospheric dynamical processes. One of the advantages of using contour dynamics to solve fluid equations is that it allows one to focus attention exclusively on the dynamically active regions of the flow. In modeling the life cycle of baroclinic waves, the dynamically most active regions are the planetary surface and the tropopause, which are regions of discontinuity in potential vorticity. Saravanan has used a three-dimensional quasi-geostrophic contour dynamics algorithm to construct a model of baroclinic wave evolution that confines all the dynamics to the tropopause and to the surface. It turns out that even this simple model is able to capture some of the important qualitative features of baroclinic wave life cycles seen in high-resolution primitive equation integrations, such as the sensitivity of wave breaking to the background jet shear.
Comparison of a model with observations is the first step in deductive improvement of scientific theories and models. Such studies have been proceeding with various versions of NCAR's CCMs.
One of the most important tests for validating an atmospheric GCM is to compare its mean simulated climate to the mean observed climate. The difference between the two is often referred to as the systematic error. One would like to reduce this systematic error because it is a symptom of the inadequacies and errors in the formulation of the GCM. The traditional approach to "improving" a GCM is to use our limited and sketchy physical understanding of the atmosphere to make a guess as to the source of the systematic error, use that guess to modify the model, and determine whether the modification reduces the systematic error. This approach, although physically motivated, is more of an art than a science. It is also computationally intensive, requiring long climate integrations to test the effect of each modification.
An alternative approach is to use statistical techniques to identify the sources of systematic error. Although the techniques themselves may not be physically motivated, it may be possible to find a physical interpretation for spatial error patterns resulting from such an approach. Saravanan and Baumhefner have carried out a statistical study of the systematic error in a recent version of NCAR's CCM2. In terms of applicability to long-range forecasting, one of the most important deficiencies of CCM2 is the large systematic error in the 500mb height field. One approach to identifying and isolating the source of this error is to estimate the systematic bias in initial tendencies when starting forecasts from initialized analyses. There are many potential sources of "noise" in computing the true initial tendency bias from analyses, including data quality, data interpolation schemes, analysis procedures, model spinup, and aliasing of the diurnal cycle. The only way to ascertain whether any estimates of initial tendency bias contain useful information is to use the estimated error patterns to attempt to correct for initial tendency bias in CCM2 itself and determine whether this attempt actually leads to a decrease in the systematic error.
Two independent sets of analyses, from NMC and from the ECMWF, for the winter of 1988-89, were used to obtain estimates of the systematic initial tendency error. Although there was broad agreement in the spatial patterns of the initial tendency bias obtained from both the analyses, the magnitude of the bias in the thermodynamic variables (T,q) was quite different. Since the ECMWF analyses were available four times daily, which considerably reduces the possibility of aliasing of the diurnal cycle, it was decided to use the initial tendency bias estimates from those analyses for attempts at bias correction in CCM2. A patch was applied to the CCM2 code to add the negative of the initial tendency bias pattern to the prognostic equations (for u, v, T, q) at every model time step. Since the initial bias in thermodynamic terms is strongly affected by the model spinup, it was necessary to reduce the amplitude of the correction by 50% for these terms. The ensemble average of five winter (DJF) runs with the bias-corrected version of CCM2 shows a significant reduction in the systematic error in the 500mb height field and also some improvements in the variability of the model, such as in the simulation of blocking. Additionally, it seems to virtually eliminate the large negative bias in the zonally averaged temperature in the summer high-latitude lower stratosphere. Work is in progress to isolate those parts of the initial tendency bias that are primarily responsible for these "improvements" and to study the effects of tendency correction on 30-day ensemble forecasts.
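The tendency-correction patch can be mimicked in a one-variable toy model: a constant tendency bias is estimated from initial tendencies against "analyses" (here, the known truth) and subtracted from the model tendency at every time step. All numbers are illustrative and the "model" is a simple relaxation, not CCM2.

```python
def true_tendency(x):
    return -0.5 * x            # "nature": relaxation toward zero

def model_tendency(x):
    return -0.5 * x + 0.3      # model with a constant tendency bias

dt, nsteps = 0.1, 200

def integrate(tend, bias=0.0, x=1.0):
    """Forward-Euler run; `bias` is subtracted from the tendency at every
    step, mimicking a patch that adds minus the estimated bias pattern."""
    for _ in range(nsteps):
        x += dt * (tend(x) - bias)
    return x

x_nature = integrate(true_tendency)
x_biased = integrate(model_tendency)

# Estimate the bias from initial tendencies against "analyses" (here: truth).
bias_hat = model_tendency(1.0) - true_tendency(1.0)
x_corrected = integrate(model_tendency, bias=bias_hat)
print(x_nature, x_biased, x_corrected)
```

In this linear toy the correction removes the climate drift exactly; in CCM2 the bias estimate is itself noisy (spinup, analysis differences), which is why the thermodynamic correction had to be reduced in amplitude.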
In addition to statistically derived modifications, several more traditional sensitivity experiments were conducted by Baumhefner to reduce the bias of the stationary waves in the model. The sensitivity to the lower boundary conditions (mountains, snow, ice, land type) was tested by modifying these fields to fit the observations more closely. The overall effect was small and slightly negative. For many climate interactions, precipitation is a key variable, so the validation of precipitation in climate integrations through comparisons with observations is a critical activity. Mizzi, with the aid of Merra Asres (student, 1993 Summer Employment Program) and Lana Stillwell (student assistant, Earth Observing System), developed a nonlinear multiple regression relationship between pentad Global Precipitation Climatology Project (GPCP) precipitation and pentad-average outgoing longwave radiation (OLR) and albedo observations. They applied a univariate nonlinear model to five years of daily OLR observations for the months of January, April, July, and October to obtain estimates of daily tropical precipitation. Comparison of temporal, spatial, and ensemble statistics from the GPCP, OLR-based, CCM1, and CCM2 tropical precipitation showed that the OLR-based climatology is approximately 20% larger than the GPCP results, while its spatial variance is approximately 7% smaller. Both of these data sources showed more precipitation over land during January and April and more over the oceans during July and October. For both CCMs the terrestrial precipitation exceeds the maritime precipitation throughout the year. The CCM tropical precipitation is approximately 9% smaller than the OLR results, and its spatial variance is approximately 160% larger. Similar results are found by examining the precipitation frequency distributions for the OLR, CCM1, and CCM2 data.
We estimated observed frequency distributions from five years of Climate Analysis Center (CAC) tropical station data. The results showed that the CCM and CAC frequencies agreed very well for precipitation rates less than 12 cm/d. For more intense precipitation rates, the CCM1 and CCM2 frequencies are nearly identical, and their slope is smaller than that of the CAC results. This suggests that the CCMs have too much spatial and temporal variance in tropical precipitation.
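One simple way to realize a nonlinear multiple regression between precipitation and OLR/albedo is a log-linear fit, shown below on synthetic pentads; the actual functional form and coefficients used in the GPCP/OLR study may differ, and the numbers here are fabricated for illustration only.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic pentads: colder cloud tops (low OLR) and higher albedo are
# associated with more convective rain (illustrative relationship).
olr = rng.uniform(180.0, 300.0, 500)       # W m^-2
albedo = rng.uniform(0.1, 0.6, 500)
precip = np.exp(6.0 - 0.02 * olr + 1.5 * albedo) * rng.lognormal(0.0, 0.1, 500)

# Log-linear fit: ln(P) = b0 + b1*OLR + b2*albedo -- one simple choice of
# "nonlinear multiple regression" (nonlinear in P, linear in ln P).
X = np.column_stack([np.ones_like(olr), olr, albedo])
coef, *_ = np.linalg.lstsq(X, np.log(precip), rcond=None)
print(coef)  # close to the generating values (6.0, -0.02, 1.5)
```

Fitting in log space keeps the predicted rain rates positive and handles the strongly skewed precipitation distribution naturally.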
Recent attention has focused on model development of parameterized physical processes, but some investigations are re-examining the numerical aspects of climate models. Mizzi collaborated with Tribbia and James Curry (visitor, University of Colorado) to apply the spectral transform method to the vertical coordinate of low-resolution primitive equation models. These models were placed on tropical f and equatorial beta planes. The appropriate normal modes were used as the horizontal basis functions, and the vertical normal modes of Staniforth et al. (1985) were used as vertical basis functions. To avoid the use of artificial constraints to control velocity near the upper boundary, these models were based on geopotential, and the hydrostatic equation was used to calculate temperature from geopotential, ensuring that temperature went to zero when pressure was zero. This is the first time the spectral transform method has been applied to the vertical coordinate of a primitive equation model without artificial constraints on velocity or temperature.
Our experiments showed that the upper-level velocities are sensitive to mass and velocity field imbalances present in the initial conditions or introduced during the integration. We have two possible explanations for this behavior. One is that the amplitude of the spectral basis functions near the upper boundary makes the upper-level velocity sensitive to small imbalances in the initial conditions; these imbalances are not necessarily local, and this behavior is a consequence of the global nature of the spectral expansion. The other is slow convergence of the vertical spectral expansion: we suggest that vertical spectral truncation introduces mass and velocity imbalances during the integration, which manifest themselves as oscillations in the upper-level velocity field. We are studying the role of each of these explanations, as well as the use of alternative vertical basis functions.
Giorgi and Vukicevic are examining the use of the adjoint method for the assimilation of global fields into a regional climate model. These global fields are provided either by a CCM2 integration or by the ECMWF or NMC analyses. We use the variational method for parameter estimation to determine an optimal, three-dimensional nudging coefficient for each model field at large scales only; consequently, the regional model solution and the global fields are filtered for this purpose. This approach will be used to improve the representation of the one-way interacting lateral boundary forcing for the regional climate model developed in the Interdisciplinary Climate Systems Section.