POUNDING NAILS IN BOARDS WITH SHOES TO DECIDE WHICH SHOES TO BUY
Most operational atmospheric simulation models are deterministic. They provide estimates of the average time- and space-variations in the conditions (e.g., a mesoscale meteorological model), or they provide estimates of the average effects of such time- and space-variations (e.g., a dispersion model). The observations used to test the performance of these models are individual realizations (which can be envisioned as coming from some ideal ensemble), and are affected by stochastic variations unaccounted for within the simulation model. Air-quality dispersion models predict the mean concentration for a given set of conditions (i.e., an ensemble average), whereas observations are individual realizations drawn from various ensembles (see Venkatram, 1979; Fox, 1984; Weil, 1985; Weil et al., 1992).
The comparison of model predictions with observations, taken under supposedly the same conditions, will reveal large deviations between that which is predicted and that which is observed (Irwin et al., 1987; Hanna, 1993; Irwin et al., 2007). The deviations have a mean (or model bias) and a variance. The bias is due solely to internal model errors (e.g., physics, parameterizations, etc.), whereas the variance is due to four factors: 1) uncertainty in model input variables, 2) errors in the concentration measurements, 3) internal model errors, and 4) natural or inherent variability (see Weil, 1985; Fox, 1984; Venkatram, 1979).
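As a minimal illustration of this decomposition (a sketch of my own, not taken from the cited papers), the snippet below pairs modeled ensemble-mean concentrations with observed individual realizations and separates the deviations into a mean (model bias) and a variance about that mean. The array names and values are hypothetical.

```python
# Sketch of the bias/variance decomposition of model-observation deviations.
# The arrays below are hypothetical example values.
import numpy as np

c_obs = np.array([12.0, 7.5, 21.3, 9.8, 15.1])    # observed realizations (e.g., ug/m3)
c_mod = np.array([10.2, 10.2, 14.6, 11.0, 13.5])  # modeled ensemble means (e.g., ug/m3)

deviations = c_mod - c_obs
bias = deviations.mean()        # attributable to internal model errors
var = deviations.var(ddof=1)    # input uncertainty + measurement error +
                                # internal model errors + inherent variability
print(f"model bias = {bias:.2f}, variance of deviations = {var:.2f}")
```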
An observed arc-maximum is the highest concentration measured along a sampling arc during a single realization, so it carries all of this stochastic variability, whereas a dispersion model's maximum is an ensemble-average value. So, if arc-maxima differ so greatly from the ensemble maxima, why do we see comparisons of arc-maxima with dispersion model ensemble-average maxima to assess model performance?
Excuse #1: Arc-maxima are an easy-to-compute estimate of the observed ensemble maximum concentration. Answer: They are unlikely to be a good or reasonable estimate; see Hanna (1993) and Irwin et al. (2007).
Excuse #2: EPA uses dispersion models to estimate maximum concentrations, so we are providing air quality managers with an assessment of how good those estimates are. Answer: This tells managers how well the model performs when misused, but it says nothing about the true performance of the dispersion model.
MY VENT:
You can use a shoe to pound nails into a board. You can even decide which shoes to buy based on their ability to pound nails into a board. But shoes were never made to pound nails, and their ability to pound nails into a board is a terrible criterion for deciding which shoes to buy!
Dispersion models were never constructed to estimate short-term maxima. You can misconstrue what dispersion models do and say they estimate individual realization maxima; you can even try to assess dispersion model performance by comparing model estimates with individual realization maxima.
It makes no sense to select shoes on their ability to pound nails, nor does it make sense to assess dispersion model performance through comparisons of modeling results with short-term arc-maxima.
March 2013 PowerPoint Presentation (27.7 MB): Provides the rationale for, and illustrates, an evaluation procedure that focuses on assessing model performance through a comparison of group-average crosswind-integrated concentration values and lateral dispersion values.
March 2013 Presentation Handout (87 KB): A 5-page handout that was provided at the talk. It briefly summarizes the rationale for the suggested evaluation procedure, and the final page shows how the procedure displays model bias in the crosswind-integrated concentration and lateral dispersion values.
March 2013 Presentation Extended Notes (55 KB): Provides my thoughts for each slide in the PowerPoint presentation. At the end of the extended notes, I outline the analysis steps taken to create the comparisons shown in the presentation; a simplified sketch of the group-averaging idea follows this list.
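The sketch below is a simplification of the group-averaging idea, not a reproduction of the actual analysis in the presentation: arcs are binned into groups (for example, by stability and downwind distance), observed and modeled crosswind-integrated concentrations (CWIC) are averaged within each group, and a fractional bias summarizes each comparison. The column names, grouping key, and values are hypothetical.

```python
# Hedged sketch: group-average comparison of observed vs. modeled CWIC.
import pandas as pd

arcs = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B"],   # e.g., stability / downwind-distance bin
    "cwic_obs": [5.1, 4.3, 6.0, 1.2, 1.5],   # observed CWIC for each arc
    "cwic_mod": [4.4, 4.4, 4.4, 1.8, 1.8],   # modeled CWIC for each arc
})

grp = arcs.groupby("group")[["cwic_obs", "cwic_mod"]].mean()
# Fractional bias of the group averages (modeled minus observed);
# 0 is perfect agreement, and the possible extremes are -2 and +2.
grp["fb"] = 2.0 * (grp["cwic_mod"] - grp["cwic_obs"]) / (grp["cwic_mod"] + grp["cwic_obs"])
print(grp)
```

The same grouping and comparison would apply to the lateral dispersion values.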
PROCESSED FIELD DATA:
Analysis Results of Project Prairie Grass Observations (47 KB): crosswind-integrated concentration, lateral dispersion, and Gaussian centerline maximum (10-minute averaging time)
Analysis Results of EPRI Kincaid Observations (119 KB): crosswind-integrated concentration, lateral dispersion, and Gaussian centerline maximum (1-hour averaging time); a sketch of how such arc-level quantities are commonly derived appears below
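As context for the tabulated quantities, here is a hedged sketch of one common way a single sampling arc is reduced to these three values: crosswind-integrated concentration by trapezoidal integration across the arc, lateral dispersion from the second moment of the crosswind concentration distribution, and an equivalent Gaussian centerline maximum, CWIC / (sqrt(2*pi) * sigma_y). The sampler positions and concentrations below are hypothetical; the processing steps actually used for these files are described in the extended notes.

```python
# Sketch of reducing one sampling arc to CWIC, sigma_y, and the
# equivalent Gaussian centerline maximum. Inputs are hypothetical.
import numpy as np

def trapz(values, x):
    """Trapezoidal integration of values over (possibly unevenly spaced) x."""
    return float(np.sum(0.5 * (values[1:] + values[:-1]) * np.diff(x)))

y = np.array([-200.0, -100.0, 0.0, 100.0, 200.0])  # crosswind positions along the arc (m)
c = np.array([0.02, 0.35, 1.10, 0.40, 0.03])       # observed concentrations (e.g., mg/m3)

cwic = trapz(c, y)                                        # crosswind-integrated concentration
ybar = trapz(c * y, y) / cwic                             # plume centroid (first moment)
sigma_y = np.sqrt(trapz(c * (y - ybar) ** 2, y) / cwic)   # lateral dispersion (second moment)
c_max = cwic / (np.sqrt(2.0 * np.pi) * sigma_y)           # Gaussian centerline maximum

print(f"CWIC = {cwic:.1f} mg/m2, sigma_y = {sigma_y:.1f} m, Gaussian max = {c_max:.3f} mg/m3")
```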
REFERENCES
Fox, D.G. (1984): Uncertainty in air quality modeling. Bull. Amer. Meteor. Soc., 65: 27-36.
Hanna, S.R. (1993): Uncertainties in air quality model predictions. Bound.-Layer Meteor., 62: 3-20.
Irwin, J.S. and S.R. Hanna (2005): Characterizing uncertainty in plume dispersion models. Int. J. Environ. Pollut., 25(1/2/3/4): 16-24.
Irwin, J.S., W.B. Petersen, and S.C. Howard (2007): Probabilistic characterization of atmospheric transport and diffusion. J. Appl. Meteor. Climatol., 46: 980-993.
Irwin, J.S., S.T. Rao, W.B. Petersen, and D.B. Turner (1987): Relating error bounds for maximum concentration predictions to diffusion meteorology uncertainty. Atmos. Environ., 21: 1927-1937.
Venkatram, A. (1979): The expected deviation of observed concentrations from predicted ensemble means. Atmos. Environ., 13: 1547-1549.
Weil, J.C. (1985): Updating applied diffusion models. J. Appl. Meteor., 24: 1111-1130.
Weil, J.C., R.I. Sykes, and A. Venkatram (1992): Evaluating air-quality models: Review and outlook. J. Appl. Meteor., 31: 1121-1145.