Saturday, July 29, 2023

Severe Thunderstorm Views, July 28

At 5:56 pm MST on Friday, July 28, 2023, the National Weather Service issued a severe thunderstorm warning covering much of the Tucson area. The storm was located at the time over Sabino Canyon Recreation Area, and the warning gave the cell movement as southwest at 15 mph. My place is about 7-8 miles due west of the recreation area. (At the time I had only heard a radio host's summary; I looked at the details of the warning much later.) It was about 6:10 pm before I heard thunder and went out on the patio to take a look. Then it took another 5 minutes to realize that I should go back in and get my phone to take some pictures.

The format for the image captions is MMDDHHMMSSPM. MST is GMT -7H. The thunder was coming from the anvil overhead. Maybe I was too far away and it was too bright, but I never noticed any cloud-to-ground lightning. The bright white blob on the right screamed hail. I quickly realized that the action was shifting south. The remainder of the images are looking southeast.

There were widespread reports of inch or more diameter hail on Tucson's east side.

And 60 mph wind gusts at both airfields on the distant right.

Awesome arch of darkness.

When I went outside at 6:10 pm I was still in the hot northwest wind of the excessive-heat pattern. By 6:20 there was occasional, moderately buffeting outflow and a few drops of rain. Later, about 10-11 pm, a secondary band of showers moved through, but for the entire evening I received only 0.01 inch of rain.

Monday, March 27, 2023

Tucson Precipitation, Update for the Last Three Winters

For almost 13 years I have been periodically posting here a figure similar to the one below. Newer versions of the figure incorporate recent years and occasional refinements. The last update was three years ago, so it's now time to add three more winters.

Since it has been a while, I'll review my reasoning about what is plotted here. Why November through March? In any winter there are always periods of both wet and dry weather patterns. Though some patterns may be fleeting, others may persist for the better part of a month. A three-month winter could equally end up wet-dry-wet, or dry-wet-dry. I think that five months is a better window for capturing the overall winter. Since on average the months of November and March in Tucson are each drier than any of the other three months, in most years it matters little. But when it does rain in Tucson in those edge months, it is basically a winter pattern. Whatever those two months produce, I think their results deserve to contribute to the winter as a whole. So the vertical axis is Tucson Airport precipitation totaled for five months. Before turning to the horizontal axis, notice that the data point for this past winter of 2022-2023 is labeled 23, corresponding both to the end of the five-month period and to the year of January, the middle of the five-month period. That is the year I use to categorize the winters by decade. Selected years are also labeled.
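
As a concrete illustration of the bookkeeping (not the code behind the figure), here is a minimal sketch in Python with pandas, using hypothetical monthly amounts; each November-through-March month is assigned to the winter labeled by its January year.

import pandas as pd

# Hypothetical monthly precipitation totals (inches); the real figure uses
# Tucson Airport observations.
precip = pd.Series(
    [0.2, 1.1, 0.9, 1.4, 0.6],
    index=pd.date_range("2022-11-01", periods=5, freq="MS"),
)

# Keep only November through March, then label each month's winter by the year
# of its January: November and December count toward the following year.
winter_months = precip[precip.index.month.isin([11, 12, 1, 2, 3])]
label = winter_months.index.year + (winter_months.index.month >= 11).astype(int)
totals = winter_months.groupby(label).sum()
print(totals)   # the winter labeled 2023 totals 4.2 inches here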

The Climate Prediction Center (CPC) issues a weekly update presentation on ENSO, with each update providing, among many other things, an explanation of and discussion about the Oceanic Niño Index (ONI). Summarizing, the calculation of ONI starts with a climate-adjusted dataset of monthly ocean surface temperature anomalies for a key area of the tropical Pacific. These monthly anomalies are averaged over three months (i.e., the January ONI is an average of the anomalies for the months of December, January and February), and then the ONI is defined to be that average rounded to one decimal place. I've repeated the three-month averaging calculation, but since I've rounded to two decimal places, same as the input dataset, technically what I have plotted is not ONI. The difference amounts to no more than the width of a plotted marker. Notice the horizontal positions of the markers for the dry winters of 2020-2021 and 2021-2022 (the unlabeled cyan diamonds), plotted here with their (pseudo) ONI values rounded to two decimal places. For both years the January (DJF) official ONI rounds to -1.0.
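
Here is a minimal sketch of that three-month averaging in Python with pandas, using hypothetical monthly anomalies rather than the real dataset; rounding the result to one decimal place gives the official ONI, while keeping two decimals gives the pseudo ONI plotted here.

import pandas as pd

# Hypothetical monthly Nino-3.4 SST anomalies (deg C), Nov 2022 - Mar 2023.
anom = pd.Series(
    [-0.93, -0.85, -0.71, -0.42, -0.15],
    index=pd.period_range("2022-11", periods=5, freq="M"),
)

# Centered three-month mean: the value labeled "January" averages Dec, Jan, Feb.
three_month = anom.rolling(window=3, center=True).mean()

oni_official = three_month.round(1)   # rounded to one decimal place (the ONI)
oni_pseudo = three_month.round(2)     # two decimals, as plotted in the figure

print(pd.DataFrame({"pseudo": oni_pseudo, "ONI": oni_official}))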

The rapid decline of La Niña toward the end of this past winter was already well forecast at the beginning of last fall by a consensus of dynamical models. Back then it was already clear that the upcoming winter's La Niña was not going to be the same as the previous two winters. This year's ONI for January (DJF) was down to -0.7. The ONI for February (JFM) is not yet available, but will probably be close to the -0.5 threshold. Barring significant amounts of precipitation during the last two days of this month, the five-month winter of 2022-2023 ranks 23rd wettest among the last 74 winters. The decade of the 2020's, even with the two dry La Niña years, is/will be off to a good start (compared to, for example, the decade of the 2000's). There's every reason to expect that next winter's precipitation will be at least near normal, and maybe even above normal again.

Thursday, September 29, 2022

La Niña Nonsense

La Niña itself is not nonsense, nor is the fact that as of Sep 8 La Niña conditions are observed in the tropical Pacific and expected by the National Weather Service Climate Prediction Center to continue through the upcoming Northern Hemisphere winter.

What has been nonsense over the past 2-3 weeks is news coverage of supposed consequences of that expectation. An egregious example appeared in the Tucson paper on Sunday. The article was headlined Another La Niña could be more bad news for the Colorado River. The article quotes two experts. I'll call them Expert 1 and Expert 2. Their views are presented somewhat as a debate. Expert 1 enthusiastically supports the title of the article while Expert 2 says, Some La Niña years have produced near normal or above normal flows while others have seen much below normal flows as we have seen the last two years. So an objective title for the article would have been Experts disagree on whether another La Niña could be more bad news for the Colorado River.

The National Weather Service Climate Prediction Center (the newspaper article links to the same web page that I have linked to above; hereafter CPC) as of Sep 8 quantifies their expectation of a continuation of La Niña conditions as a 91% chance from September through November, decreasing to a 54% chance in January-March 2023. The newspaper article provides those CPC numbers, but misrepresents the probabilities, which actually apply to something the CPC defines objectively. Ocean surface temperature anomalies are determined for a specific portion of the equatorial Pacific, lying roughly south through southeast of Hawaii. There is averaging over time and space to generate a single number. There is an arbitrary threshold, and an additional requirement for duration. The result is an objective answer: La Niña conditions, or not. But the newspaper article describes the 91% and 54% probabilities as chances of a La Niña weather pattern dominating the Northern Hemisphere.
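
For illustration only, here is a rough sketch of that kind of objective test, assuming the commonly cited criteria CPC uses to classify historical episodes (ONI at or below -0.5 for at least five consecutive overlapping three-month seasons); the operational declaration of La Niña conditions involves more than this simple check.

# Rough sketch, not CPC's code: flag seasons belonging to a La Nina episode,
# assuming a -0.5 threshold and a five-consecutive-season duration requirement.
def la_nina_seasons(oni, threshold=-0.5, min_run=5):
    cold = [v <= threshold for v in oni]
    flags = [False] * len(oni)
    i = 0
    while i < len(oni):
        if cold[i]:
            j = i
            while j < len(oni) and cold[j]:
                j += 1
            if j - i >= min_run:          # the duration requirement
                for k in range(i, j):
                    flags[k] = True
            i = j
        else:
            i += 1
    return flags

# Example: eight overlapping seasons of ONI-like values.
print(la_nina_seasons([-0.2, -0.6, -0.8, -1.0, -0.9, -0.7, -0.5, -0.3]))
# -> [False, True, True, True, True, True, True, False]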

I think of Northern Hemisphere winter weather patterns as rolls of the dice. Pacific Ocean temperatures and associated tropical weather patterns load the dice. If I were to literally roll a single dice (die) once every 15 days this coming winter, I might expect that by the end of the winter each side would have come up once: {1, 2, 3, 4, 5, 6}. Of course by dumb luck one or more sides might come up more than once this winter. Over the long run, if I repeat the experiment every year for many years I would expect the average roll to be 3.50. But let's say I have a second dice loaded in a way that makes it impossible for it to land with the 6 facing up. A potential 6 result will always be turned into a 1. So the set of expectations will be {1, 1, 2, 3, 4, 5}. Over the long run the average roll with the loaded dice will be 2.67, not 3.50. Does that mean that roll 1 dominated the winters when I used the loaded dice, or that I would call each 1 in those years a loaded result? No, because one of the 1's would have happened anyway, and most of the time I still rolled a 2, 3, 4 or 5.
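
The arithmetic can be checked with a few lines of Python (a throwaway sketch, nothing more):

import random

random.seed(1)
N = 100_000   # many simulated winters' worth of rolls

fair = [random.randint(1, 6) for _ in range(N)]

# The "loaded" die: any 6 is turned into a 1, so outcomes are {1, 1, 2, 3, 4, 5}.
loaded = [1 if r == 6 else r for r in fair]

print(sum(fair) / N)      # close to (1+2+3+4+5+6)/6 = 3.50
print(sum(loaded) / N)    # close to (1+1+2+3+4+5)/6 = 2.67
print(sum(r == 1 for r in loaded) / N)   # a 1 on about 1/3 of rolls; 2-5 on the rest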

I expect there are people who will subjectively determine that the Northern Hemisphere midlatitudes this fall-winter will have been dominated by a La Niña weather pattern, no matter how the dice turn up. I have no idea how one would determine such a domination objectively, which would allow for a precise probability forecast calculated from historical data. I do know that the CPC issues probabilistic seasonal outlooks for precipitation. As I understand their discussion about those outlooks, they routinely adjust the historical data for recent trends, and that would mean they are somewhat siding with Expert 1 in the newspaper article (i.e., in effect, never mind that some La Niña years have produced above normal flows, look at the last two years with much below). But even with weighting toward recent trends, the CPC is predicting equal chances for the three categories (below-normal, near-normal, above-normal) for western Wyoming for Oct-Nov-Dec 2022, with that equal chances outlook expanding to cover much of the rest of the upper Colorado Basin for Dec-Jan-Feb 2022-2023.

In summary, how would I quantify the word could in the newspaper headline's phrase could be more bad news? I would say more than 50% (where 50% would be "could be bad, could be good") but less than 60% (much less than the tone of the newspaper article). That's based in part on the fact that the 54% chance of La Niña conditions continuing into January-March 2023 is effectively a 46% chance of a return to neutral conditions by then.

Monday, May 2, 2022

Viewing Blend Output

The National Weather Service provides public access to output from its National Blend of Models. I use the page at NOMADS. There, after selecting directories first for the latest date and then for the latest hour (UTC; latest hour becomes available near the end of that hour or a few minutes into the next hour), I select the "text" directory. Then (using Safari browser) I control-click->Download_Linked_File for whichever file(s) cover(s) the period of interest. In NBM terminology: h=hourly (1 day), s=short-range (3 days), e=extended-range (7 days), x=super-extended-range (8-10 days). Each file takes about 20 seconds to download. Meanwhile I've moved on to other web browsing. Later, offline, in the Terminal application I use the following little shell script to peruse one station at a time.

#!/bin/zsh
# Usage: ./myb STATION   (e.g., ./myb dma)
# The shell's $1 is the station identifier, passed into awk as the variable sta.
# Inside the awk program, $1 is field number 1 of each line; the range pattern
# prints from the line whose first field matches sta through the next line matching SOL.
awk -v sta="$1" '$1 ~ sta, /SOL/ ' IGNORECASE=1 Downloads/blend_nb[hsex]tx*

The IGNORECASE variable is recognized by the GNU version of awk, which I installed with Homebrew so that it overrides the pre-installed awk. The setting can be removed for portability, but then one has to remember to capitalize the station (i.e., ./myb DMA vs. lazier fingers ./myb dma).

A list of available stations can be searched here. A complete key to the text bulletins can be found here.

Tuesday, November 23, 2021

Update on Technical Details

Three months ago, during an extended break in the monsoon, I wrote here Technical Details As Acknowledgements. Much of that post was about the process of installing free software, which I've been using for a variety of things, including producing custom weather maps on my MacBook. One motivation for providing information about the process was that lessons learned might be helpful for anyone interested in doing something similar with various Python packages. Another goal was to acknowledge the effort that people put into making these software packages work and remain freely available.

Most of the details from three months ago remain relevant. A few things have changed, and I'll incorporate those changes into a description of the process starting from scratch on a new MacBook. I had been limping along on a MacBook Air that I bought almost eight years ago, cheaply at the time as a refurbished unit. But recently I succumbed to temptation and bought a new one, which is equipped with a processor chip in the Apple M1 family, known as Apple Silicon, and also referred to as ARM to distinguish it from Intel.

I had been reading online discussions about M1, and based on advice in those discussions I expected, for now, to be able to get only so far with installing things natively on the new MacBook; for some things I would need to keep using my old MacBook. Some online posts advised running Intel versions of software through Apple's Rosetta 2 transition program on the M1 machines. Some recommended that the only hope for doing things natively was to use one of the Conda packages. Those online recommendations may have been true a few months ago, but they are no longer. Everything that I had been using is now running natively on my new M1 MacBook, with no need for a middle-man software manager like miniconda. (Homebrew is of course a middle-man, but it is much closer to do-it-yourself.) It's time to shut down the eight-year-old MacBook for good.

I have my Apple ID registered as a developer. It doesn't cost anything to do just that. That allows me to download the latest versions of Xcode and the Command Line Tools (CLT) from http://developer.apple.com/downloads/more. Xcode downloads as an .xip file, and the CLT as a .dmg file. To start the install process for either one, double click. It's my understanding that only the CLT are needed by Homebrew. But I always download the latest version of Xcode as well because I need it for a few small stand-alone programs. I always do a separate install of the CLT after installing Xcode, and that seems to ensure that the Homebrew formulas find the CLT in the expected place.

Once again here is the Homebrew installation instruction web page. After completing those instructions on the M1 Macbook, I did brew install gcc and then brew install python@3.9. If either of those failed, there would be no point in trying to go farther. But everything installed natively and automatically with no problems, including several dependencies for each. I then did brew install geos and brew install proj. Those two libraries are needed by the cartopy python package. Three months ago cartopy required an older version of proj, but now cartopy uses the current version. There are a couple of dependencies automatically installed with proj. Everything continued to install natively and smoothly.

A few python packages can be installed either with a Homebrew formula or with the ordinary Python package manager. That was the case several years ago for numpy and scipy, and for a while I kept them updated with their Homebrew formulas. Then it seemed that these formulas were unsupported, and the recommendation was to install/update numpy and scipy as ordinary Python packages. So I had been doing that recently. But in the online discussions about M1 there were reports of problems with installing these packages, and the recommendation was to use the Homebrew formulas, which have been updated recently. So brew install numpy and then brew install scipy. Again there are several dependencies, and again everything installed natively and smoothly.

Just a few more manual brew install's: hdf5 netcdf eccodes and pkg-config. Then it's on to the python package manager, already installed as pip3 by the Homebrew python@3.9 formula. There is a deprecation warning that is printed with each package installed. It appears that Homebrew will have to change something in the future, but the warning can be ignored for now. I started with pip3 install matplotlib, which automatically installs a number of required packages. Most if not all of these appear to install as native, pre-compiled binary wheels. Then on to
pip3 install shapely --no-binary shapely
pip3 install pyshp
pip3 install pyproj

These three are from the cartopy installation page. Then pip3 install cartopy.

One of the python packages that can be installed with a Homebrew formula is ipython. But the latest version of python, 3.10, was released just last month. The Homebrew formula for ipython is already set to require that recently released version of python as a dependency, while many other packages still depend on python 3.9. So it's easier to just use the alternative, pip3 install ipython. That also installs a number of dependencies, and they all go into Homebrew's 3.9 site packages folder.

The pandas package installed with no problems with pip3, though it took a long time to compile. Then pip3 install cfgrib, pip3 install xarray and pip3 install MetPy, and I was back in business plotting grib files on my new MacBook.

Saturday, September 4, 2021

A Tale of Two Noras

During the late afternoon hours of this past Tuesday the precipitable water values around Tucson, and in all directions away from Tucson, were about as high as they ever get. For good reason the National Weather Service had a flash flood watch in effect for Tucson. Below is what "about as high as precipitable water values ever get around Tucson" looks like.

I use a color range maxed out at 1.7 inches because at Tucson's elevation it doesn't get much higher, and attention is usually on the transition from slightly below to slightly above one inch. Obviously on Tuesday in order to get to the one inch neighborhood you had to go a long distance, either to the highest elevations in the sectors north through southeast of Tucson, or to the moderately high elevations of the Baja peninsula.
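
For anyone curious about the plotting choice, here is a minimal sketch of the idea, with a synthetic field standing in for the real precipitable water data:

import numpy as np
import matplotlib.pyplot as plt

# Synthetic stand-in for a precipitable water field in inches (illustration only).
lon = np.arange(-115.0, -102.75, 0.25)
lat = np.arange(25.0, 37.25, 0.25)
pw = 1.9 * np.random.rand(lat.size, lon.size)

# The color range is pinned at 0 to 1.7 inches: anything higher simply saturates,
# which keeps the visual emphasis on the transition through about one inch.
mesh = plt.pcolormesh(lon, lat, pw, vmin=0.0, vmax=1.7, shading="auto")
plt.colorbar(mesh, label="precipitable water (in)")
plt.show()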

By Tuesday Nora's circulation had completely dissipated at all levels, leaving a broad and deep southerly flow at low to mid levels that was continuing to transport moisture northward. The mid level southerly flow veered to a 70+ knot southwesterly upper level jet extending across northern Baja just south of the California border. I picked up about half an inch of rain during the evening hours of Tuesday, tapering off into the first few hours after midnight on Wednesday. Most of the eastern part of Tucson, east of about Swan Road, picked up about an inch. By sunrise on Wednesday the threat had passed Tucson. The sky was clearing from the southwest as precipitable water levels were already dropping. It was obvious from the large scale radar composite that the upward motion associated with the upper level jet had shifted north and east of Tucson. The radio station that I listen to in the morning continued to report the forecast of a 40-50% chance of rain, but the on-air personality simply ignored the fact that officially the flash flood watch was still in effect for Tucson. The local paper took the opposite tack. The following morning, 24 hours later, the Thursday online edition still featured a story, last updated 21 hours earlier, detailing the flash flood preparations, without noting that the watch had finally been cancelled 18 hours earlier.

One week ago, when the final details of Nora itself were still uncertain, but the threat of flooding for the Southwest was already being publicized, the local paper recalled its snarky coverage of the 1997 edition of Nora, which produced only a few drops of rain in Tucson instead of the 6, 4, 2 inches that had been forecast. The 2021 paper claimed, It's possible the same thing could happen with Nora 2.0.

Here is what the precipitable water looked like on the afternoon of September 25, 1997, the day before the snarky 1997 newspaper article.

By that afternoon the low level circulation of 1997 Nora had had a complicated interaction with the Baja peninsula as it moved rapidly north toward the north end of the Gulf of California. Thick high level outflow debris clouds had overspread Tucson, darkening the sky dramatically compared to the morning sunshine. The wind had become a bit gusty from the south-southeast. I recall walking across the U of A campus with a colleague, a hydrologist, who mentioned the anticipated (by anyone following the official NWS forecast) heavy rain. I said, "I wouldn't be surprised if we don't get anything." My colleague looked around at the overcast sky and at the effects of the wind gusts and sniffed, "Well obviously the hurricane is coming." The problem is that a hurricane and its environment can't be conceived of as a system analogous to a baseball traveling through the air. If the hurricane and its far-flung moisture field were like a solid body, then the precipitable water would have increased dramatically at Tucson as the center of the hurricane raced north. In fact precipitable water did increase during that day over the western half of Pima County, but remained almost constant around Tucson from morning to evening, consistent with what the operational numerical models of the day had been predicting. Even scientists, who should know better, will stick with a conceptual model that is wrong long past the point when reality has given them an answer they didn't want to hear.

Access to the reanalysis dataset through NCAR is gratefully acknowledged:
National Centers for Environmental Prediction/National Weather Service/NOAA/U.S. Department of Commerce. 2005, updated monthly. NCEP North American Regional Reanalysis (NARR). Research Data Archive at the National Center for Atmospheric Research, Computational and Information Systems Laboratory. https://rda.ucar.edu/datasets/ds608.0/. Accessed 28 Aug 2021.

September 17: Corrected paragraph before second figure, September vs. November.

Friday, August 27, 2021

Technical Details As Acknowledgements

The gauge on my patio has collected over twelve inches of rain since the last week of June. So this week's extended break from the monsoon was welcome, and it provided an opportunity to document some topics neglected in the previous two posts. Also, suggestions and lessons learned might be helpful for anyone wanting to create their own custom weather maps. The first lesson learned is that the link I provided in the previous post quickly turned bad. Here are replacement links to brief summary descriptions about the GFS, which includes the GDAS. Those summaries are on the NOMADS page, which provides public access to gridded data from a variety of National Weather Service models, as well as links to data from other modeling centers. Separately the National Weather Service provides a Model Analysis and Guidance page, where the gridded data have already been processed into standard weather map images of many varieties. If those standard maps are adequate, it might not be necessary for a user to deal with the gridded data.

My custom journeys through the model data are spread out across the screen of my MacBook in the form of maps generated through the python packages matplotlib and Cartopy. Other python packages are involved, which I'll discuss later. I like to keep track of what I'm installing and why. So my starting point is the Homebrew package manager. The starting point for Homebrew is its installation instruction web page. After Homebrew is installed, installation of a particular piece of software under Homebrew is handled by a Formula. Among the Homebrew formulae I've installed and kept updated for a long time are python itself (the formula includes installation of pip) and ipython. Some python packages require access to a library, which can be installed with Homebrew. A tricky detail is that Cartopy requires an older version of the proj library, which happily Homebrew provides as formula proj@7. A recent discovery for me is the formula eccodes, which installs the ECMWF package for decoding GRIB files. This library is needed for a python package to be discussed later. Independent of its companion python package, eccodes installs a set of command line tools. One of these tools is grib_ls, which comes in handy for figuring out what is actually in the file that was downloaded from the NOMADS site.

I keep the standard python packages (cartopy, pandas, numpy, matplotlib, scipy, etc.) updated manually with pip. There is one more tricky thing with Cartopy. A required package, Shapely, needs to be installed using the pip option --no-binary. For easy reading of GRIB files, the python packages cfgrib (interfaces with the eccodes library) and xarray are needed. I came to cfgrib through xarray, and to xarray through the python package MetPy. While intending to eventually use all of MetPy's computational features, for now I'm especially taking advantage of its addition to the map features available in Cartopy (i.e., from metpy.plots import USCOUNTIES).

Here's the easy reading part, illustrated by a line from my python script:
ds = xr.open_dataset(myFilename, engine="cfgrib", filter_by_keys={'typeOfLevel': levelType[levelTypeIndex], 'level':level})
where xarray was imported as xr, and myFilename was set to one of the files I downloaded from NOMADS. The optional argument filter_by_keys gets passed to cfgrib. This filtering is generally necessary, and will be discussed later.

Jumping to the end of the process for now, assume that a downloaded GRIB file, or at least a subset of it, has been successfully ingested by xarray. Then it's only a matter of adapting from the many MetPy examples available. My personal preference for just viewing the fields is to rely on two matplotlib functions: pcolormesh for the scalars and streamplot for the wind components. For some reason pcolormesh understands the lon/lat/variable inputs when they are supplied as xarray DataArray structures, but streamplot requires extracting the .values property in order to supply ordinary numpy arrays.
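
Here is a minimal sketch of that distinction, with a small synthetic dataset standing in for a real GRIB-derived one (the variable and coordinate names are illustrative, not necessarily what cfgrib will produce):

import numpy as np
import xarray as xr
import matplotlib.pyplot as plt
import cartopy.crs as ccrs
from metpy.plots import USCOUNTIES

# Synthetic stand-in for the dataset opened above.
lon = np.arange(-115.0, -102.75, 0.25)
lat = np.arange(25.0, 37.25, 0.25)
shape = (lat.size, lon.size)
ds = xr.Dataset(
    {"t": (("latitude", "longitude"), 290 + 5 * np.random.rand(*shape)),
     "u": (("latitude", "longitude"), 5 * np.random.randn(*shape)),
     "v": (("latitude", "longitude"), 5 * np.random.randn(*shape))},
    coords={"latitude": lat, "longitude": lon},
)

proj = ccrs.PlateCarree()
ax = plt.axes(projection=proj)
ax.set_extent([-115, -103, 25, 37], crs=proj)
ax.add_feature(USCOUNTIES.with_scale("5m"), linewidth=0.3)

# pcolormesh accepts the xarray DataArrays directly...
ax.pcolormesh(ds.longitude, ds.latitude, ds["t"], shading="auto", transform=proj)

# ...but streamplot wants plain numpy arrays, hence the .values.
ax.streamplot(ds.longitude.values, ds.latitude.values,
              ds["u"].values, ds["v"].values,
              density=2, color="black", linewidth=0.5, transform=proj)
plt.show()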

So let's back up in the process to discuss downloading. As mentioned near the beginning of this post, the NOMADS page provides access to a wide variety of model gridded fields. I've been routinely looking at the GDAS analysis and the GFS forecast fields on their 0.25 degree grid. Those are just 2 of the nearly 100 datasets listed on the NOMADS page. Most of the datasets are equipped with the grib filter option. Clicking on the link in the previous sentence takes you to the NOMADS page of general instructions. As the instruction page explains, grib filter is best used for creating regional subsets. Following the instructions and finally reaching the "Extract" page, I initially skip past the Extract Levels and Variables section and, so that I don't forget about it, scroll down to the Extract Subregion section. It's not enough to just fill in the top/bottom/left/right lat/lon. Also be sure to check the box after the words "make subregion." Returning to the select variables section, if I want just the variables corresponding to standard radiosonde observations I'll select the abbreviated names HGT, TMP, RH, UGRD and VGRD. Depending on the dataset you chose, there might be many more variables. You may need to view Parameter Table Version 2 to figure out the abbreviations. For the levels desired it might be safest to just select "all". Instead I select each level that I want. Some of the level options are actually levelTypes, as can be verified later by running grib_ls on the downloaded file. The variable PWAT needs the level (levelType) "entire atmosphere (considered as a single layer)," while REFC needs "entire atmosphere."

Here is the promised later discussion about the filtering that is generally necessary in connection with xarray's opening of a grib file to create a dataset. For an explanation of why filtering is needed, see the section Devil In The Details in this early documentation for CFGRIB. Basically each message (i.e., each 2-D grid) in a file downloaded from one of the NOMADS "Extract" pages will be a unique combination of the three keys typeOfLevel/level/parameter. But xarray tries to merge all the levels and all the parameters in the file into a level/parameter array. In order to keep xarray happy, it's often enough to restrict xarray's attention to one typeOfLevel. But it may be necessary to also restrict attention to one level, as in my line of code above. An example is when TMP and RH are at level 2 m above ground but UGRD and VGRD are at level 10 m. Xarray, with cfgrib's help, tries to create a table with 2 levels and 4 parameters, but is disappointed to find that half of the table's entries would be empty. Working out what is in the GRIB file, and thus what filtering is needed, is where the command line tool grib_ls comes in handy.
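
As a sketch of the workaround for that 2 m / 10 m example (with myFilename as in the line of code above; heightAboveGround is the usual typeOfLevel for those fields, but confirm what is actually in your file with grib_ls):

import xarray as xr

# Open the same downloaded file twice, once per level, so that each call sees a
# single consistent typeOfLevel/level combination.
near_sfc_2m = xr.open_dataset(myFilename, engine="cfgrib",
    filter_by_keys={"typeOfLevel": "heightAboveGround", "level": 2})   # TMP, RH
near_sfc_10m = xr.open_dataset(myFilename, engine="cfgrib",
    filter_by_keys={"typeOfLevel": "heightAboveGround", "level": 10})  # UGRD, VGRD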

Once I'm into a routine of what I want to download, I follow the suggestion at the bottom of the grib filter instruction page, the section "Scripting file retrievals." However, instead of a script I just enter the commands interactively in the terminal window. At first I was always copying from my text editor (BBEdit) and pasting to the terminal window, with the first paste being the multiple lines where I change to my local directory and set the date and time parameters, and the second paste being the curl line. But then I learned to just use the terminal window: ctrl-r to search the history file for text in the last use of the multi-line command, move with arrow keys and make minor edits, ctrl-o to execute that multi-line command and bring up the next history line, which is the curl line ready to substitute the new ${hr}, ${fhr} and ${date}. Hit return. It takes a little over 5 seconds to do the typing, and another 5 seconds for my 12 by 12 degree lat/lon regional subset, a 630 Kbyte file of 182 grib messages, to download.
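
For anyone who would rather script the retrieval in Python than edit shell history, here is a rough sketch; the URL template below is a hypothetical placeholder that must be replaced with the actual query URL generated by the "Extract" page, with its date, cycle hour and forecast hour swapped for the {date}, {hr} and {fhr} fields:

import urllib.request

# Hypothetical placeholder, not a working URL: paste in the query string that
# the grib filter "Extract" page produces for your variable/level/subregion choices.
url_template = "https://nomads.ncep.noaa.gov/cgi-bin/...{date}...t{hr}z...f{fhr}..."

date, hr, fhr = "20220502", "12", "006"
url = url_template.format(date=date, hr=hr, fhr=fhr)
outfile = f"subset_{date}_{hr}z_f{fhr}.grib2"
urllib.request.urlretrieve(url, outfile)   # plays the role of the curl command
print("saved", outfile)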