Question about output projection

Dear community,

I am wondering how to change the output grid projection from lambert_conformal to regional_latlon.

I tried to do the following:

1. In my config.sh, I set PREDEF_GRID_NAME="RRFS_CONUS_25km".

2. Based on the valid values WRTCMP_output_grid=("rotated_latlon" "lambert_conformal" "regional_latlon") in valid_param_vals.sh, I changed WRTCMP_output_grid to "regional_latlon" in the "RRFS CONUS domain with ~25km cells" section (a rough sketch of both edits is shown below).
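To be concrete, the two edits were roughly the following (just a sketch of the lines I changed, not the full file contents):

# In config.sh:
PREDEF_GRID_NAME="RRFS_CONUS_25km"

# In the "RRFS CONUS domain with ~25km cells" section (in set_predef_grid_params.sh; see the correction below):
WRTCMP_output_grid="regional_latlon"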

 

When I ran run_fcst, I got the following errors (slurm file also attached):

 max hourly downdraft vel, output name=dnvvelmax
 max hourly 0-3km updraft helicity, output name=uhmax03
 max hourly 0-3km updraft helicity, output name=uhmin03
 max hourly 2-5km updraft helicity, output name=uhmax25
 max hourly 2-5km updraft helicity, output name=uhmin25
 af create fcst fieldbundle, name=phy_nearest_stodrc=           0
 af create fcst fieldbundle, name=phy_bilinearrc=           0
 in fv_phys bundle,nbdl=           2
 add 3D field to after nearest_stod, fld=skebu_wts
 add 3D field to after nearest_stod, fld=skebv_wts
 add 3D field to after nearest_stod, fld=sppt_wts
 add 3D field to after nearest_stod, fld=shum_wts
 in fcst,init total time:    173.06252121925354
 af fcstCom FBCount=            3
 af allco wrtComp,write_groups=           1
application called MPI_Abort(comm=0x84000002, 1) - process 10
application called MPI_Abort(comm=0x84000002, 1) - process 11
srun: Job step aborted: Waiting up to 32 seconds for job step to finish.
slurmstepd: error: *** STEP 2273889.0 ON knl-0037 CANCELLED AT 2021-11-08T19:54:07 ***
srun: error: knl-0038: tasks 24-47: Killed
srun: error: knl-0040: tasks 72-95: Killed
srun: error: knl-0042: tasks 120-143: Killed
srun: error: knl-0044: tasks 168-191: Killed
srun: error: knl-0041: tasks 96-119: Killed
srun: error: knl-0045: tasks 192-215: Killed
srun: error: knl-0046: tasks 216-239: Killed
srun: error: knl-0039: tasks 48-71: Killed
srun: error: knl-0043: tasks 144-167: Killed
srun: error: knl-0037: tasks 0-23: Killed

ERROR:
  From script:  "exregional_run_fcst.sh"
  Full path to script:  "/lcrc/project/OW_UFS/ufs-srweather-app/regional_workflow/scripts/exregional_run_fcst.sh"
Call to executable to run FV3-LAM forecast returned with nonzero exit
code.
Exiting with nonzero status.

ERROR:
  From script:  "JREGIONAL_RUN_FCST"
  Full path to script:  "/lcrc/project/OW_UFS/ufs-srweather-app/regional_workflow/jobs/JREGIONAL_RUN_FCST"
Call to ex-script corresponding to J-job "JREGIONAL_RUN_FCST" failed.
Exiting with nonzero status.
 

I am trying to generate output on a regular lat/lon projection instead of Lambert conformal.

I'm wondering which of my steps is wrong, or whether it is simply not possible to change my predefined grid (in this case RRFS 25 km) to a regional lat/lon output grid.

Does that mean that if I want my output as regional lat/lon, I have to use a different predefined grid, such as the GSL one (I think that is the only predefined grid whose output option is regional_latlon)?

 

Best regards,

Haochen

 


Correction:

2. Based on the valid values WRTCMP_output_grid=("rotated_latlon" "lambert_conformal" "regional_latlon") in valid_param_vals.sh, I changed WRTCMP_output_grid to "regional_latlon" in the "RRFS CONUS domain with ~25km cells" section of the set_predef_grid_params.sh file.

Hi Haochen,

For future reference, can you share the hashes in your Externals.cfg file (directly under your ufs-srweather-app directory)? That helps with debugging.

 

In order to get the regional_latlon type of write-component grid working, I commented out the whole lambert_conformal section in set_predef_grid_params.sh and introduced a new set of variables and values as follows:


  if [ "$QUILTING" = "TRUE" ]; then
#    WRTCMP_write_groups="1"
#    WRTCMP_write_tasks_per_group="2"
#    WRTCMP_output_grid="lambert_conformal"
#    WRTCMP_cen_lon="${ESGgrid_LON_CTR}"
#    WRTCMP_cen_lat="${ESGgrid_LAT_CTR}"
#    WRTCMP_stdlat1="${ESGgrid_LAT_CTR}"
#    WRTCMP_stdlat2="${ESGgrid_LAT_CTR}"
#    WRTCMP_nx="199"
#    WRTCMP_ny="111"
#    WRTCMP_lon_lwr_left="-121.23349066"
#    WRTCMP_lat_lwr_left="23.41731593"
#    WRTCMP_dx="${ESGgrid_DELX}"
#    WRTCMP_dy="${ESGgrid_DELY}"

    WRTCMP_write_groups="1"
    WRTCMP_write_tasks_per_group=$(( 1*LAYOUT_Y ))
    WRTCMP_output_grid="regional_latlon"
    WRTCMP_cen_lon="${ESGgrid_LON_CTR}"
    WRTCMP_cen_lat="${ESGgrid_LAT_CTR}"
    WRTCMP_lon_lwr_left="-120.0"
    WRTCMP_lat_lwr_left="26.0"
    WRTCMP_lon_upr_rght="-75.0"
    WRTCMP_lat_upr_rght="49.0"
    WRTCMP_dlon="0.287"
    WRTCMP_dlat="0.225"

  fi

Note that a different set of variables needs to be set for "regional_latlon" than for "lambert_conformal".  The definitions of these variables can be found in the file

ufs-srweather-app/regional_workflow/ush/templates/model_configure

This is a jinja template file, and only parts of it will be included in the actual model_configure file in your run directory (which is at, for example, expts_dir/my_expt/2019070100/model_configure).
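For illustration, once the template is rendered, the write-component portion of model_configure should end up looking roughly like the sketch below. This is from memory, so verify the exact key names and formatting against the template itself; the cen_lon/cen_lat entries are placeholders for the grid-center values rather than numbers I'm quoting.

output_grid:             'regional_latlon'
cen_lon:                 <ESGgrid_LON_CTR>
cen_lat:                 <ESGgrid_LAT_CTR>
lon1:                    -120.0     # lower-left longitude  (WRTCMP_lon_lwr_left)
lat1:                    26.0       # lower-left latitude   (WRTCMP_lat_lwr_left)
lon2:                    -75.0      # upper-right longitude (WRTCMP_lon_upr_rght)
lat2:                    49.0       # upper-right latitude  (WRTCMP_lat_upr_rght)
dlon:                    0.287
dlat:                    0.225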

 

The way I came up with WRTCMP_dlon and WRTCMP_dlat is:

WRTCMP_dlon = ESGgrid_DELX/(rad_earth*cos(WRTCMP_cen_lat)) = [(25 km)/((6371.2 km)*cos(38.5 deg*(pi/180 deg)))]*(180 deg/pi) = 0.287 deg
WRTCMP_dlat = ESGgrid_DELY/rad_earth = [(25 km)/(6371.2 km)]*(180 deg/pi) = 0.225 deg
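
If you want to reproduce these numbers, here is a small shell sketch of the same arithmetic (it assumes bc is available; the input values are the ones quoted above for the 25-km RRFS CONUS grid):

rad_earth=6371.2    # km
delx=25.0           # km (ESGgrid_DELX)
dely=25.0           # km (ESGgrid_DELY)
cen_lat=38.5        # deg (WRTCMP_cen_lat)
pi=3.141592653589793
# c() in bc is cosine with its argument in radians.
WRTCMP_dlon=$(echo "($delx/($rad_earth*c($cen_lat*$pi/180)))*(180/$pi)" | bc -l)   # ~0.287
WRTCMP_dlat=$(echo "($dely/$rad_earth)*(180/$pi)" | bc -l)                          # ~0.225
echo "WRTCMP_dlon=$WRTCMP_dlon  WRTCMP_dlat=$WRTCMP_dlat"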

To come up with WRTCMP_lon_lwr_left, ..., WRTCMP_lat_upr_rght, I made a plot of the native (ESG) grid and then eyeballed approximate locations for the lower-left and upper-right corners such that the write-component grid lies within the native grid (although it doesn't have to; cells on the write-component grid that lie outside the native grid will be filled with a netCDF default fill value).


I've attached a couple of plots of the resulting grids, one of the whole domain and another of the northwest corner.  The red is the boundary of the native grid (with orange cells denoting the 4-cell-wide halo used to feed in the lateral boundary conditions), and the blue is the write-component grid.  (You can ignore the green.)


In reply to by gerard.ketefian

Thank you so much for your reply!

Your answer is very helpful. However, I do have two small questions:

1) Must the write-component grid be smaller than (i.e., within) the native predefined grid, like in the second figure you uploaded?

2) Must the write-component grid resolution (dlon and dlat) be similar to that of the native predefined grid? Is it OK to use the 25 km native predefined grid with a 3 km write-component grid resolution? Does that mean I still actually have a 25 km result that is just interpolated (or regridded) to 3 km?


In reply to by htan2013

You're welcome.  Answers to your questions:

1) The write-component grid doesn't have to be within the native (ESG) grid.  Any cells on the write-component grid that lie outside the native grid will get assigned the netCDF fill value for the variable, which is usually something like 1e+37.  That just indicates that the value there is invalid (the equivalent in another language, e.g. MATLAB, might be a NaN (Not a Number)).  I prefer to place my write-component grid completely within the native grid so as not to waste storage on fill values, but that's just a personal preference.  (A quick way to check which fill value a variable uses is sketched after these answers.)

2) I always use a dlon and dlat that is about the same as the resolution of the native grid.  You can use a 3km write-component grid with a 25km native grid, but that will not gain you any new small-scale information.  Your write-component fields will look ok since the write-component will just be using a smooth interpolation method (e.g. bilinear) to get the values on the 3km grid from the ones on the 25km grid.  But it is usually a waste of disk space to do that because the resolution that matters is the one for the native grid.  So yes, you still actually have a 25km result even if you use a 3km write-component grid.
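
On that fill-value point in answer 1): if you want to confirm what fill value a given variable in your write-component output uses, something like the following works (the file name here is just a placeholder; substitute one of your own forecast output files):

# Print the _FillValue attributes from a write-component output file's header.
# "dynf000.nc" is only a placeholder for one of your output files.
ncdump -h dynf000.nc | grep -i "_FillValue"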

Gerard
