Basic GRASS GIS with BASH

I love GRASS… GIS

But this wasn’t always the case.

GRASS GIS was, for a long time, something I dismissed as ‘too complex’ for my everyday geospatial operations. I formulated any number of excuses to work around the software and could not be convinced it had practical use in my daily work. It was ‘too hard to set-up’, ‘never worked well with QGIS’, and ‘made my scripting processes a nightmare’.

I am here to officially say I’ve been very wrong for a very long time. GRASS GIS is pretty amazing and is a wonderfully easy tool to script.

Recently I scripted a model for extracting river centrelines from high-resolution elevation data. During the process I thought about how useful it would have been, way back, to have had a simple example for setting up an environment and scripting a BASH process using GRASS. So, I built one for myself.

The following example provides the steps for a simple catchment extraction on a piece of LINZ 8m elevation data. The example is only a demonstration of running a basic BASH/GRASS set-up with a hydrology command, NOT a demonstration of how to do hydrology using GRASS. The 8m elevation data is not the best data for hydrological extraction; however, it is a nicely sized dataset to practise on and will give results. Also, please note that catchment extraction on a square raster will not give accurate results at the edges of the raster.

In this example we will:

1. Download a small piece of elevation data from the LINZ Data Service
2. Build a GRASS environment to process these data
3. Build a BASH script to process the catchments
4. Import the elevation into the GRASS environment
5. Perform some basic GRASS operations (fill and watershed)
6. Export raster format for viewing
7. Export the vector catchments to shapefile

The following assumes you are working in a Linux environment and have a basic knowledge of BASH scripting. I tested this process using Ubuntu 18.04 and GRASS 7.4. I have 16GB RAM on my machine.

If you do not already have it, you can install GRASS as follows:

$ sudo apt-get install grass-core

If you are interested (it is not necessary for this example), you can also install the functionality for building GRASS plug-ins:

$ sudo apt-get install grass-dev
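You can confirm the install, and check which version you have, with:

$ grass --version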

Create yourself a directory to work in:

$ mkdir grass_test

Download a piece of elevation data from the LINZ Data Service, place in your directory:

https://data.linz.govt.nz/layer/51768-nz-8m-digital-elevation-model-2012/data/

From the Tiles Table, I downloaded the JM tile in EPSG:2193.

Within your directory you will need to build a ‘PERMANENT’ folder in order for GRASS to do its magic. This folder operates in the projection of your data, so be sure the environment you build matches your data’s projection. The data is downloaded in EPSG:2193, New Zealand Transverse Mercator (NZTM), so we set the GRASS environment to work in this projection:

$ grass -c epsg:2193 -e grass_test/GRASS_ENV

‘-c’ will create your directory using epsg:2193 and ‘-e’ will exit once this operation is complete.
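As an aside, GRASS can also read the projection straight from a georeferenced file instead of an EPSG code, so (assuming the JM tile you downloaded is saved as grass_test/JM.tif, as the script below expects) something like this should build an equivalent environment:

$ grass -c grass_test/JM.tif -e grass_test/GRASS_ENV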

Either command will build a PERMANENT folder in:

grass_test/GRASS_ENV

providing all the bits GRASS needs to operate. Your folder structure will look like:

grass_test/GRASS_ENV/PERMANENT

Have a look inside the folder and see what GRASS built for itself.
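A quick way to do that from the command line:

$ ls grass_test/GRASS_ENV/PERMANENT

You should see a handful of small files (for example PROJ_INFO, PROJ_UNITS, and DEFAULT_WIND) that record the projection and default region; the exact contents vary between GRASS versions.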

The key from now on is to always run your BASH script through this environment. Running your script in this environment will allow you to perform BASH and GRASS commands at the same time.

First, let’s look at the basic command to run a BASH script using a GRASS environment.

$ grass grass_test/GRASS_ENV/PERMANENT --exec sh grass_test/catchment.sh

The above says: launch GRASS using this environment

grass_test/GRASS_ENV/PERMANENT

and execute, via --exec, this script

grass_test/catchment.sh
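Before writing the full script, it is worth sanity-checking the environment by passing a single GRASS command through --exec; for example, g.proj -p should print the NZTM projection details of the location:

$ grass grass_test/GRASS_ENV/PERMANENT --exec g.proj -p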

Let’s build the BASH script. In this we will:

1. Import the elevation data. GRASS likes to work in its own data formats
2. Set the region, so GRASS knows where the operation will be performed
3. Perform the operations, r.fill.dir and r.watershed
4. Export a raster to .tif format
5. Export the vector outputs to .shp format


#!/bin/bash

# set base path
outDir=grass_test

# Set raster as variable
raster=${outDir}/JM.tif

# Set a base name for the data. This is used to demonstrate that normal
# BASH commands can be used in this process, alongside GRASS
rasterName=$( basename $raster | sed 's/.tif//g' )

# Import raster data
r.in.gdal input=$raster output=$rasterName --overwrite

# Set region. IMPORTANT so GRASS knows where the data is located.
# This region is set for the duration of the following commands
g.region raster=$rasterName

# Fill sinks
fillDEM=${rasterName}_filldem
directionDEM=${rasterName}_directiondem
areasDEM=${rasterName}_areasDEM
r.fill.dir input=$rasterName output=$fillDEM direction=$directionDEM areas=$areasDEM --overwrite

# Export a raster for viewing
areaOut=${outDir}/${rasterName}_areas.tif
r.out.gdal input=$areasDEM output=$areaOut

# Run watershed operation on fill sink raster
threshold=100000
accumulation=${rasterName}_accumulation
drainage=${rasterName}_drainage
stream=${rasterName}_stream
basin=${rasterName}_basin
r.watershed elevation=$fillDEM threshold=$threshold accumulation=$accumulation drainage=$drainage stream=$stream basin=$basin --overwrite

# Convert Basin (watershed) to vector format
basinVect=${rasterName}_basinVect
r.to.vect input=$basin output=$basinVect type=area column=bnum --overwrite

# Export catchment to vector format
basinVectOut=${outDir}/${rasterName}_basinVectOut.shp
v.out.ogr input=$basinVect output=$basinVectOut type=area format=ESRI_Shapefile --overwrite

There you have it. A simple BASH script, set up for running some GRASS commands, running on your Linux machine.

Putting it all together

Set up your environment, copy the above script, paste it into a text editor, and save it as:

grass_test/catchment.sh

Now run

$ grass grass_test/GRASS_ENV/PERMANENT --exec sh grass_test/catchment.sh
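Once the script finishes, you can list the rasters and vectors it created inside the GRASS mapset through the same --exec mechanism (a quick sanity check):

$ grass grass_test/GRASS_ENV/PERMANENT --exec g.list type=raster,vector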

Your output catchments shapefile should be similar to the image below

[Image: output watershed catchments]
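If you would rather check the output from the command line, GDAL’s ogrinfo gives a quick summary of the shapefile (the file name below follows from the variables used in the script):

$ ogrinfo -so -al grass_test/JM_basinVectOut.shp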

 

Click here for the next step in this process: how to clip your raster with the watershed layer using GDAL and loops.

Let me know if this was helpful or if you would like to see any changes.

Experimenting with Hydrological Analysis using TauDEM

[Image: Blue Rivers Ordered visualization]

Over the past few years, I’ve played around with developing ordered river networks for different projects. I am not an expert in hydrology, but I can get close for cartographic purposes. I am, however, an expert in asking for help from those who know best, and I rely on a lot of very smart people to guide me on my journey.

Recently, I decided to put together a visualization of ordered rivers for New Zealand. I came across a very nice data set offered through the Ministry for the Environment via the Koordinates website and thought I’d like to put it to use.

The rivers vis project made me wonder if I could build this base dataset myself using some of the recently released elevation data sets via the LINZ Data Service. The short answer to my question is “sorta”. Doing it open source is not an issue, but building an accurate ordered river centerline network is another story. This is a task I cannot take on as a solo project right now, but I could do a little experimentation. Below, I’ll offer some of the methods and things I learned along the way.

Tools and Data

The method I tested used TauDEM (“Terrain Analysis Using Digital Elevation Models”) and a 1m DEM raster accessed from the LINZ Data Service. I down-sampled the DEM to 2m and 5m resolutions and used small areas for testing. Finding an open source tool was easy: I sorted through a few available methods and finally landed on TauDEM. There are additional methods through GRASS and SAGA GIS, but I chose TauDEM because I had never used it before.

Method Tested

To my knowledge, there is no open source tool where a person can put in a DEM and get a networked river centerline vector out the other side. It requires a number of steps to achieve your goal.

The basic run down to process the DEM is to:

  1. Fill sinks
  2. Determine flow directions
  3. Determine watersheds
  4. Determine flow accumulation
  5. Stream classification
  6. Export to vector

TauDEM does require a few extra steps to complete the process, but these steps are explained in the documentation of the tool. It was more about keeping all my variables in the right places and using them at the right time. I recommend using the variable names TauDEM provides.

The full BASH script is below:


#!/bin/bash

# Rough sketch for building river centerlines. Rasters have been clipped beforehand.

BASEPATH=/dir/path/to/base

raster_list=$( find $BASEPATH -name "*.tif" )

taudem_outputs=/dir/path/to/outputs

reso=resolution_number # target cell size in metres (e.g. 2 or 5, as used in testing)

for i in $raster_list
do

	INPUT_RASTER=$i

	file_name=$( basename $i )

	strip_input_extension=$( echo $file_name | sed 's/.tif//' )

	reso_name=$taudem_outputs/${strip_input_extension}_${reso}res

	gdal_translate -tr $reso $reso -of GTiff $i $reso_name.tif

	fel=${reso_name}_fel.tif
	p=${reso_name}_p.tif
	sd8=${reso_name}_sd8.tif
	ad8=${reso_name}_ad8.tif
	ang=${reso_name}_ang.tif
	slp=${reso_name}_slp.tif
	sca=${reso_name}_sca.tif
	sa=${reso_name}_sa.tif
	ssa=${reso_name}_ssa.tif
	src=${reso_name}_src.tif

	ord=${reso_name}_strahlerorder.tif 
	tree=${reso_name}_tree.dat
	coord=${reso_name}_coord.dat
	net=${reso_name}_network.shp
	w=${reso_name}_watershed.tif 

	processed_input_file=$reso_name.tif

	# TauDEM commands (run across 8 processes with mpiexec)
	# Fill sinks/pits in the DEM
	mpiexec -n 8 pitremove -z $processed_input_file -fel $fel

	# D8 flow directions and slopes from the pit-filled DEM
	mpiexec -n 8 d8flowdir -fel $fel -p $p -sd8 $sd8

	# D8 contributing area (-nc skips the edge contamination check)
	mpiexec -n 8 aread8 -p $p -ad8 $ad8 -nc

	# D-infinity flow directions and slopes
	mpiexec -n 8 dinfflowdir -fel $fel -ang $ang -slp $slp

	# D-infinity contributing area
	mpiexec -n 8 areadinf -ang $ang -sca $sca -nc

	# Slope-area product used for channel initiation
	mpiexec -n 8 slopearea -slp $slp -sca $sca -sa $sa

	# Maximum upslope slope-area value along D8 flow paths
	mpiexec -n 8 d8flowpathextremeup -p $p -sa $sa -ssa $ssa -nc

	# Threshold the result to a stream raster
	mpiexec -n 8 threshold -ssa $ssa -src $src

	# Build stream order, watersheds, and the vector stream network
	mpiexec -n 8 streamnet -fel $fel -p $p -ad8 $ad8 -src $src -ord $ord -tree $tree -coord $coord -net $net -w $w

done

The script is a rough sketch, but does get results.

Challenges in the Process

One major challenge for this project was the size of the input DEM and my computer’s available RAM. I work primarily off a laptop. It’s a good machine but no match for a proper server set-up with some spacious RAM. My laptop struggled with the large high-resolution DEMs, so I needed to down-sample the images and choose a smaller test area to get it to work.

Clip the TIFF with gdal_translate’s -projwin option and down-sample with -tr:

gdal_translate -tr xres yres -projwin ulx uly lrx lry input.tif output.tif
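For example, to clip a test area and down-sample it to 2m in one step (the -projwin corner coordinates below are placeholder NZTM values; substitute your own):

$ gdal_translate -tr 2 2 -projwin 1750000 5430000 1760000 5420000 input_1m.tif test_area_2m.tif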


The second challenge came up because I used a bounding box to clip my test regions. I recommend not doing this; instead, clip your regions using a watershed boundary. Square test regions will give you inaccurate and unhelpful results because major channels in your DEM will be cut at the edges of the raster.

Clipping a raster using a shapefile, like a watershed boundary, can be achieved using gdalwarp.

gdalwarp -cutline input.shp input.tif output.tif
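If you also want the output extent trimmed to the watershed polygon, rather than just masked out to the original rectangle, gdalwarp’s -crop_to_cutline option does that:

$ gdalwarp -cutline watershed.shp -crop_to_cutline input.tif output.tif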


Results

I ran my process and QCed the results against aerial imagery and a hillshade I developed from the DEM. The first run gave me good enough results to know I have a lot of work to do, but I did manage to develop a process I was happy with. The tool did a great job; the accuracy of the DEM was a little more challenging. It’s a start. I captured a good number of river channels despite my incorrect usage of a square DEM, learned a lot about how DEM resolution affects outputs, and learned how to spot troublesome artifacts.

Img 1: River capture in a well-defined channel.

From this experiment, there are a few ideas I’d like to explore further:

1. Accuracy of the DEM. The particular DEM I worked with had a number of ‘dams’ in the flows: notably bridges, culverts, vegetation artifacts, and other general errors that caused water to flow in interesting directions. When working with a data set like this, I am curious how to manage these artifacts.

Img 2: River diversion at road.

Img 3: River diversion at culvert or bridge.

2. How to go beyond borders. This analysis can be broken down by watershed, but it will be necessary to link the outflows of those watersheds to the next for accurate results.

Img 4: Flow not captured at edge.

3. As DEMs are released with better resolution, there is a need for scaled-up computing power. The process needs a large amount of RAM. What is the best computational set-up for capturing the largest area?

4. Did I do this correctly? I perform this task about once every two years, usually on weekends when the surf is flat and the garden is weeded, so I am not an expert. There is a lot more research to be done to determine if I am using the tools to the best of their abilities.