The Avalanche Review, VOL. 9, NO. 5, MARCH 1991
Copyright © All Rights Reserved; AAA

Forecasting in the Twilight Zone
Mysticism and Mountain Meteorology

by I. C. Itall (a/k/a Mark Moore), Northwest Avalanche Center

"Mostly fair except for areas of locally heavy snow . . ." You've heard it before, and you'll no doubt hear it again: a varying degree of hesitancy or uncertainty about the product on the part of mountain weather forecasters, that crazy group of troublemakers who make a living out of being foolish. Not by choice, mind you, but it comes down to the same thing. Let's examine this problem rationally, although from personal experience weather is anything but rational and I think it even exists primarily to consternate those who deal directly with it.

We've come a long way since the early days of forecasting weather, I think. At least I think most of the forecasts now are better than those Hannibal used to cross the Alps, or Washington at Valley Forge, or . . . maybe it's just that some professionals who do it (or think they do it) have been foolishly led to believe that they have acquired an increasing and seemingly much better (divine?) handle on this most quirksome of forecast problems. Well, I suppose that in some ways we (as forecasters) do and in other ways we don't, but at least we have a lot more information to confuse the issue than we've ever had before. I mean, there are spurious vorticity maximums (and we all know what they are and what they can do to a forecast, right?) that show up in areas that never even considered precipitation until the forecast models suggested that some hellacious storm was lurking just upwind in those scattered high clouds shown in the satellite imagery. And then there are those notorious upper-level speed maxes that blast cloud fragments and/or precipitation through the strongest and most pugnacious of high pressures at speeds that approach warp. And that's not even considering the downright depravity and insultingly convoluted performance of split flows and negatively tilted diffluent upper-level troughs. Why, back in the early days, most forecasters (me, anyway) thought that split meant you ought to leave right away; negatively tilted was how you looked when you ordered the next round; vorticity was what happened when you flushed the toilet (either negative or positive, depending); and speed maxes were the weather equivalent of hot dog skiers, short for mass radical motion.

Now I recently gave a talk on mountain weather and was accused of spending most of the talk apologizing for and back-pedaling about the state of the art, or lack thereof, of specific mountain weather forecasting. In fact, a fellow avalanche worker (imagine that) outlined the colorful and descriptive adjectives I used in my talk and kept a running total on how often I used each one, coming up with a list that ran much like this: "often" (4), "probably" (6), "possibly" (3), "mostly" (7), "likely" (2), "maybe" (3), "but not always" (6), "could", "might", "should" (10), and "perhaps" (5). He even showed this neat list to the class and came to the conclusion that we used figures like 10% and 50% much more often than we used those higher 80-90%+ confidence levels.

Of course this was instructive to me as I saw firsthand the obvious perception that forecasters are not God-like (He/She does not deal with percentages) and in fact are sometimes not even considered good mystics. As you might surmise, stunned I was, both personally and professionally. For of course I entered the field not because I actually liked weather (although I did), but because I had that unique ("God-like"?) mix of undeniable good taste, eloquent verbosity, and good humor that are absolute necessities for any forecaster. And besides, what other work could I have entered which had such a rich and wide diversity of ready-made parameters to blame for "misses" or other wrongdoing?

Actually, though, despite a lot of conjecture and theories to the contrary, the weather itself hasn't changed much over the years, at least not in a general sense. True, we may be warming globally or cooling locally (depending on your location, the time of day and year, and the theory currently all the rage), but we still have storms that by and large move from west to east, most normally missing California (just ask someone who lives and/or used to patrol there) and hitting the Northwest and British Columbia before descending in a hit-or-miss trajectory through the Intermountains and the Rockies.

And naturally these storms have a wide variety of temperatures, winds, clouds and precipitation associated with them (some variations obvious and some not so obvious, some forecastable and some, dare I say it, not).

In any case, we have come a long way in what we as a forecast group think we can effectively disseminate about this phenomenon, accurately or otherwise. For example, in many forecast operations there are increasing attempts to actually, accurately quantify (can you believe it?) forecast parameters for specific areas. Imagine the forecaster's surprise when the update of the update (or is that the amendment of the amendment?) is only 1% accurate. Oh, we do have our successes, like "Remember that time last year when it really snowed like the forecast said?" and "Remember back when we used to have snow and the winds blew hard and shifted sort of when we said they would?", but most tangible feedback centers around the lack of true sustainable accuracy: "Yup, you reeeealllly missed it; should have forecast that yesterday!" or "Well, since that heavy snow didn't come today, what do you think (chuckle, chuckle) will happen tomorrow?" And maybe therein, in that little word "sustainable," lies the problem. Yes, the forecast models are getting better, and, yes, satellite/radar imagery and more reliable remote weather telemetry are helping refine the forecasts, but, no, all of this stuff doesn't always make an otherwise good (okay? marginal? bad?) forecast great. Or at least not as consistently great as either forecaster or user would like to rely upon as commonplace.

In short, there are just too many interactions between ocean and air that are only beginning to be understood and modeled, and way too many things happening on a smaller scale than are currently measured or ingestible by the models. So when you try to combine the uncertainties of general weather forecasting with a complex topography and a rapidly changing snowpack structure, you have all the makings for explosive wrongdoings. Sort of like trying to force a big avalanche through a small opening: it only works for some of the snowflakes. But, believe me, the bottom line is that the forecaster wants a great forecast as much or more than any field personnel.

I suppose the problem comes down to a negative feedback loop that spirals downward into an unfortunate abyss (where I often find myself headed when observations only remotely and randomly resemble the forecast). Namely, the loop of technology breeding forecast success and increased accuracy, which breeds higher expectations and greater reliance on still more accuracy, all of which leads to the need for more technology (to support the need for more consistent successes) which just isn't there yet. Great, so there you have it: an endlessly debilitating spiral, since we don't have all the technology we need yet. Even then, will there be the "right" people who can hope to analyze all that flooding river of information? I don't even measure up to an XT-class computer when it comes to what-if scenarios to the 8th power. So the end result of all this is that forecasts are often reduced to best guesses and probabilities of potential outcomes, since the facts remain that successes may be intermittent and the sought-after accuracy isn't totally reliable. Hence, whatever expectations might follow are mostly unrealistic. Now notice I said "mostly": it's a good adjective, and one of those many words that can go both ways and represent a great number of different virtual worlds. Okay, so here I am again, apologizing for weather, for weather forecasts, and for the general lack of structure in the universe. Better to just stop and give the General Rules for Mountain Weather Forecasting (Northwest version):

1. Never expect the forecast to verify; then, when and if it does, you'll be pleasantly surprised. First Corollary: the better the forecast seems to be going, the more likely it will go awry as soon as you leave. Second Corollary: as soon as you change the forecast, the previous version will verify perfectly.

2. Always express confidence in your forecast, no matter how confused or complex the situation might be. In short, bluff when you can't baffle (also see #5).

3. Never let reality cloud the forecast. After all, "if it wasn't forecast it wouldn't be happening."

4. In every forecast situation there are always anomalous weather stations that could wrongly influence the forecast. If the site doesn't fit the forecast, exclude it (or see #3).

5. Always use complex terminology to fit complex weather situations. Few people will take the time and effort to figure out what you really mean. And, even if they do, you'll have a new explanation by then. (Remember that the weather world is infinitely rich in possibilities, a luxurious tapestry woven across a gray stratus sky.)

6. No matter what you say, forecast users will always hear what they thought you might have said were you to say it. This means you can pretty much say anything.

7. Use common disclaimers often when forced to explain a forecast gone awry: "This has never happened before," "but it's raining/snowing everywhere else," "must be El Niño, a rainshadow, sunshadow, bad initialization, food poisoning, SCUD missile, poor model continuity, etc."

8. Utilize the fact that you hear more often from field observers when the forecast is wrong by issuing blatantly wrong forecasts when you desire field info. Or, to paraphrase an old consumer adage and apply it to mountain weather, "You never get more than you pay for, but you can certainly be stuck with much less."

I guess now I'd like to see some other Forecast Rules (Rocky Mountain, Intermountain, Sierra Nevada . . .); no doubt they'd also enlighten me and expand on or significantly complement the above, and maybe, just maybe, you'd get what you paid for. But I doubt it.
