Derived Motion Winds - Total Operational Weather Readiness - Satellites (TOWR-S)
GOES-R Derived Motion Winds
Frequently Asked Questions about the GOES-R Derived Motion Winds Product
1) What is this product?
The GOES-R Derived Motion Winds product calculates wind speeds and directions at different heights in the atmosphere. Winds are produced using GOES-R Channel 2 (0.64 um), Channel 7 (3.9 um), Channel 8 (6.2 um), Channel 9 (7.0 um), Channel 10 (7.4 um), and Channel 14 (11.2 um). Winds are calculated for Channel 2 only during daytime hours and for Channel 7 only during nighttime hours; Channels 8, 9, 10, and 14 have winds calculated during all hours. The horizontal resolution of this product is 10 km. The vertical resolution depends on the cloud-top altitude; for clear-sky scenes, however, the water vapor winds are calculated at the 200 mb level. The product is thus produced both day and night (with the 0.64 um channel producing winds only during the day). The winds produced have a measurement range of 0-300 knots for speed and 0-360 degrees for direction. The product's accuracy is 7.5 m/s (roughly 15 knots). The product's quality is quantitative out to 62 degrees local zenith angle and qualitative beyond that value.1
This product is useful when trying to identify the speed and direction of the wind at various levels in the atmosphere. This can inform parameters such as warm air/cold air advection and mean wind flow. This can also help in identifying the jet stream, jet streaks, and centers of low pressure (such as mid-latitude cyclones or tropical storms).
2) How often do I receive this data?
The cadence of the Derived Motion Winds product depends on the scene being viewed. For Full Disk images, it is every 60 minutes. For CONUS images, it is every 15 minutes. For Mesoscale images, it is every 5 minutes.
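The cadence rules above can be captured in a small lookup; the table and function names here are illustrative, not part of any operational interface.

```python
# Hypothetical lookup of DMW product cadence by ABI scan sector,
# matching the refresh rates listed above (in minutes).
DMW_CADENCE_MIN = {
    "Full Disk": 60,
    "CONUS": 15,
    "Mesoscale": 5,
}

def frames_per_hour(sector: str) -> int:
    """Number of DMW product updates received per hour for a sector."""
    return 60 // DMW_CADENCE_MIN[sector]
```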
3) How do I display this product in AWIPS-II?
To display this product in AWIPS-II, go to the "GOES-R" tab of the CAVE menu, then select the satellite of interest (GOES-E, GOES-W, or GOES Test) at the bottom of the drop-down menu under the "Derived Motion Winds" subheading. Then, select the area of interest (Full Disk, CONUS, or Mesoscale). Finally, select the channel from which the winds were derived.
Alternatively, use the AWIPS Product Browser. Select "Sat", then either "GOES-16" or "GOES-17". From there, choose "Full Disk", "CONUS", or "Mesoscale", then select "DMW." Finally, select the satellite channel from which the winds were derived.
4) How do I interpret the color maps associated with this product?
Wind barbs are colored based on the speed being displayed. Winds less than 30 knots are colored green. Winds between 30 and 50 knots are colored yellow. Winds faster than 50 knots are colored red.
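The thresholds above can be sketched as a simple classification function; the function name is hypothetical and the boundary handling (30 and 50 knots assigned to yellow) is an assumption.

```python
def barb_color(speed_kt: float) -> str:
    """Classify a DMW barb color from wind speed in knots, using the
    thresholds described above. Winds of exactly 30 or 50 knots are
    assumed to fall in the yellow band."""
    if speed_kt < 30:
        return "green"
    elif speed_kt <= 50:
        return "yellow"
    return "red"
```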
5) What other imagery/products might I use in conjunction with this product?
The GOES-R Derived Motion Winds product can be used with several model fields to identify advection. These include dewpoint and TPW (moisture advection), relative vorticity (vorticity advection), and temperature fields (warm-air/cold-air advection). They can also be used to compare the model upper-level wind patterns and determine which model is best capturing the atmospheric situation at the time (giving confidence for which model to trust for the next 12-24 hours). The location of the jet stream and jet streaks can also be identified (and compared to models) for additional confidence. This is important as the location of a jet streak's right-entrance and left-exit region are areas of enhanced vertical motion and can potentially lead to cyclogenesis.
6) How is this product created?
In order to estimate motion, one must have a sequence of images separated by some, preferably fixed and relatively short, time interval. The DMW algorithm described here uses a sequence of three images to compute a pair of vector displacements (one for an earlier time step and one for a later time step) that are averaged to obtain the final motion estimate. The current version of the algorithm requires that the three images be equal in size. The algorithm uses the middle image to perform the initial feature targeting, then searches the before and after images for traceable (coherent) features to derive motion estimates.
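The averaging of the two vector displacements into a final motion estimate can be sketched as follows; the function, its units, and its argument conventions are illustrative assumptions, not the operational code.

```python
def average_motion(disp_back, disp_fwd, dt_s):
    """Combine the backward and forward displacement vectors derived from
    an image triplet into a single wind estimate, as described above.

    disp_back: (dx, dy) displacement in km from image 1 to image 2
    disp_fwd:  (dx, dy) displacement in km from image 2 to image 3
    dt_s:      time separation between consecutive images, in seconds
    Returns (u, v) in m/s. Hypothetical helper for illustration only."""
    u = 1000.0 * (disp_back[0] + disp_fwd[0]) / (2.0 * dt_s)
    v = 1000.0 * (disp_back[1] + disp_fwd[1]) / (2.0 * dt_s)
    return u, v
```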
The algorithm is designed to run on segments of data provided by the framework and consisting of multiple scan lines. Processing begins after a data buffer containing the brightness temperature values from three consecutive images is filled. The data buffer also contains output from the cloud mask and cloud height algorithms which must execute before the Derived Motion Winds algorithm. It should be noted that the cloud data is only required for the middle image time because this is the image that is processed for targets. On the other hand, brightness temperature values are required for all three image times because this is the quantity being tracked. In practice, the buffer is a data structure holding the 2-dimensional arrays of brightness temperatures for three image times and the cloud information for a single image time.
Once the data buffer is full, the middle portion of the buffer is divided into small “target” scenes NxN pixels and each scene is analyzed to determine if it is a suitable tracer. Only the brightness temperature field from the middle image time is processed for targets and it is these targets that will be tracked over time to derive the motion. Processing only the middle portion of the buffer allows for the features to drift over time but still remain within the domain of the buffer. Within each target scene, the algorithm locates the strongest 2-D gradient in the brightness temperature field and recenters the NxN target scene at this location. A brightness temperature gradient threshold is used to prevent target selection on very weak gradients.
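The gradient-based recentering step can be sketched as below. This is a simplified illustration: the gradient operator, the threshold value, and the function name are assumptions, and the operational algorithm applies additional checks.

```python
import numpy as np

def recenter_on_gradient(bt, row, col, n, grad_thresh=2.0):
    """Recenter an n x n target scene on the strongest brightness-temperature
    gradient, as described above. Returns the new (row, col) center, or None
    when the maximum gradient falls below the weak-gradient threshold.
    Illustrative sketch; the operational thresholds differ."""
    half = n // 2
    # Extract the n x n target scene centered on (row, col).
    scene = bt[row - half:row + half + 1, col - half:col + half + 1]
    # 2-D gradient magnitude of the brightness temperature field.
    gy, gx = np.gradient(scene.astype(float))
    gmag = np.hypot(gx, gy)
    r, c = np.unravel_index(np.argmax(gmag), gmag.shape)
    if gmag[r, c] < grad_thresh:
        return None  # reject targets with only weak gradients
    return row - half + r, col - half + c
```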
After the target scene is re-centered on the maximum gradient, tests are performed to determine whether or not the scene would be a suitable tracer. These tests eliminate target scenes that lack the gradients necessary to track reliably while also removing scenes that are suspected to contain multiple cloud layers.
If a potential tracer makes it through the target quality control, a search region, much larger in size than the target scene, is defined in each of the tracking images. At this point, depending on the channel being processed, one of two tracking strategies is employed. Both strategies use the Sum of Squared Differences (SSD) similarity measure to locate the target scene in the preceding and succeeding images.
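The SSD matching step can be sketched with a brute-force search; the function names and the exhaustive scan are illustrative (the operational search is forecast-guided and more efficient, as described below).

```python
import numpy as np

def ssd(target, candidate):
    """Sum of Squared Differences between a target scene and a same-sized
    candidate scene from the search region (lower means a better match)."""
    d = target.astype(float) - candidate.astype(float)
    return float(np.sum(d * d))

def best_match(target, search):
    """Slide the target scene over a larger search region and return the
    (row, col) offset minimizing the SSD, plus the score at that offset."""
    tr, tc = target.shape
    sr, sc = search.shape
    best, best_score = None, float("inf")
    for r in range(sr - tr + 1):
        for c in range(sc - tc + 1):
            score = ssd(target, search[r:r + tr, c:c + tc])
            if score < best_score:
                best_score, best = score, (r, c)
    return best, best_score
```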
When processing cloud-top features from the 0.6, 3.9, 6.2 or 11.2 micron channels, a tracking strategy called nested tracking is used to estimate motion. In this approach, a small 5x5 pixel box is “nested” within the outer target scene and a local motion vector is derived at each interior pixel. A 2-pixel offset is used near the boundary of the outer target scene. The field of local motion vectors that results is then analyzed with a cluster analysis algorithm to find the dominant motion, which is computed by averaging the displacements associated with the largest motion cluster. The wind vector is then assigned a representative height after examining the cloud top pressure or brightness temperatures associated with the pixels in the largest cluster. When processing the visible, SWIR or LWIR channels, a median cloud top pressure is found by examining the cloud-top pressure values of all pixels in the largest cluster. When processing one of the three water vapor channels, the height assignment process is slightly different. Here, the water vapor channel brightness temperature values are examined and a median temperature is found from the pixels in the largest cluster. The median brightness temperature is then compared to a GFS forecast temperature profile, and the pressure at which the two values agree serves as the representative height of the derived motion wind.
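The water-vapor height assignment (matching a median brightness temperature against a forecast temperature profile) can be sketched as a simple interpolation. This is an assumption-laden illustration: the function name is hypothetical, the profile is assumed to be ordered so temperature increases monotonically with pressure, and the operational matching is more sophisticated.

```python
import numpy as np

def bt_to_pressure(median_bt, profile_p, profile_t):
    """Assign a pressure height to a water-vapor wind by finding the pressure
    at which the median brightness temperature matches a forecast temperature
    profile, as described above.

    profile_p: pressures in hPa, low (aloft) to high (near surface)
    profile_t: matching forecast temperatures in K, assumed increasing
    Simple interpolation sketch, not the operational method."""
    return float(np.interp(median_bt, profile_t, profile_p))
```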
When processing the clear-sky portions of a water vapor (6.2 um, 7.0 um, or 7.4 um) image, the strategy for tracking features is more conventional. For these cases, the target is assigned a height before it is tracked. The height is computed using a sample of pixels from the coldest portion of the scene. After the target is assigned a height, a search is performed to find the closest match of the target in the preceding and succeeding images in the image triplet. This conventional approach produces a single motion vector associated with the motion of the entire target scene.
Both tracking approaches use a forecast wind (from the center of the target scene) to locate and place the center of the search region in the next image. This practice of using the forecast to “guide” the search serves two purposes. First, it reduces the number of “false positives” in the tracking step. Secondly, it minimizes the computational expense of the search.
During the tracking process, correlation thresholds are applied to screen out false positives. When nested tracking is employed, only matching scenes possessing a correlation score of 0.8 or higher (1.0 is perfect) are allowed to influence the final solution. For conventional tracking, where nested tracking is not invoked and the larger target scene is tracked, the correlation threshold is reduced to 0.6.
Two sub-vectors are generated in the tracking process, one vector for the backward time step and one vector for the forward time step. Accelerations between sub-vectors exceeding a user defined threshold (10 m/s) are not permitted (vectors are discarded). In addition, gross errors in the height assignment and tracking estimates are removed by comparing the satellite-derived motion wind to a numerical forecast wind and discarding those satellite-derived wind vectors which differ significantly from the forecast wind. These gross error thresholds are band-dependent.
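The acceleration check between the two sub-vectors can be sketched as below; the function name and the vector-difference formulation are assumptions, with the 10 m/s threshold taken from the text.

```python
def passes_acceleration_check(v_back, v_fwd, max_accel=10.0):
    """Accept a wind only when the backward and forward sub-vectors differ
    by no more than the acceleration threshold (10 m/s, per the text).
    v_back, v_fwd: (u, v) wind components in m/s. Illustrative sketch."""
    du = v_fwd[0] - v_back[0]
    dv = v_fwd[1] - v_back[1]
    return (du * du + dv * dv) ** 0.5 <= max_accel
```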
Once the last line segment is processed, the entire set of derived winds undergoes a more rigorous quality control process. Two related algorithms make up the Automatic Quality Control (AQC) of the GOES-R DMW processing. The first is the quality indicator (QI), based on work done at EUMETSAT. The second is the Expected Error (EE), based on principles developed at the Bureau of Meteorology, Australia.
The ancillary data used in the Derived Motion Winds product includes:
- Land Mask / Surface Type: A land mask file is needed such that each ABI pixel can be classified as being over land or water.
- Derived Motion Winds Configuration File: A configuration file is needed to set six variables within the DMWA processing algorithm:
1. GOES-R ABI channel number – channel number to use for feature tracking
2. Time step between images
3. Target box size – in pixel space
4. Nested tracking flag – to enable or disable nested tracking
5. Expected Error (EE) filter flag
6. Clear-sky WV flag – to enable or disable clear-sky processing
- Numerical Weather Prediction Forecast Data:
1. Short-term forecast temperature and wind data on pressure surfaces from the National Centers for Environmental Prediction’s (NCEP) Global Forecast System (GFS) model are used to calculate target heights and to calculate the model shear and model temperature gradients used in the Expected Error algorithm.
2. Short-term GFS forecast wind profiles are also used to center the search box on the predicted locations of targeted features being tracked in the first and last images of the loop sequence.
- Expected Error Coefficients File: A set of regression coefficients corresponding to a number of predictors used to compute the Expected Error quality flag that is appended to each DMW that is computed.
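The six configuration variables listed above can be sketched as a simple structure; the key names and example values here are hypothetical and do not reflect the operational configuration file format.

```python
# Hypothetical representation of the six DMW configuration variables
# described above; names and values are illustrative only.
dmw_config = {
    "abi_channel": 14,        # ABI channel used for feature tracking
    "time_step_s": 600,       # time step between images, in seconds
    "target_box_px": 15,      # target box size in pixel space (N in NxN)
    "nested_tracking": True,  # enable/disable nested tracking
    "ee_filter": True,        # enable/disable the Expected Error filter
    "clear_sky_wv": False,    # enable/disable clear-sky WV processing
}
```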
The derived data used in the Derived Motion Winds product includes:
- Cloud Mask: The cloud mask is used by the algorithm as part of the cloud amount test when selecting which target scenes to process. It is also used to screen out pixels that do not have a cloud top pressure associated with them.
- Cloud Top Pressure: This information is used by the algorithm to assign a representative height to the target scene being tracked.
- Cloud Top Pressure Quality: This information is used by the algorithm to assess the quality of the cloud-top pressures used in height assignment.
- Cloud Top Temperature: This information is used by the algorithm to assign a representative height to the target scene being tracked.
- Low Level Inversion Flag: This information is used by the algorithm to assign a representative height to the scene being tracked within a GFS model designated low-level inversion.
- Solar Zenith Angle: This information is used by the algorithm to determine day/night pixels.
1 Daniels, Jaime, Wayne Bresky, Steve Wanzong, Chris Velden, and Howard Berger. NOAA NESDIS Center for Satellite Applications and Research GOES-R Advanced Baseline Imager (ABI) Algorithm Theoretical Basis Document for Derived Motion Winds v.2.0. 30 September 2010. http://www.goes-r.gov/products/ATBDs/baseline/Winds_DMW_v2.0_no_color.pdf