Processing and Visualizing Auckland 1m DEM/DSM Elevation Data

About two years ago, I took on a cartographic project visualizing the Auckland 1m DEM and DSM, both publicly available via the LINZ Data Service (LDS): DEM, DSM. The goal at the time was to develop a base map for extracting high-resolution images for use in various static media. It was a good piece of work with some fun challenges in scripting and gradient development. Included here are notes on processing the data with QGIS and BASH, building the gradients, and blending the base maps in QGIS.
Img 1: Conference media developed for the International Cartography Conference (ICC) 2018, Washington DC.
Processing the Data
The original data download was 7GB per data set (DEM and DSM). Each data set contained 6423 individual files of 1.4MB each. This setup was hard to work with in QGIS, so I first processed each data set for easier viewing. This processing included grouping the data into larger files matching the LINZ Topo50 Map Grid Sheets and running a few GDAL utilities: gdaladdo (overviews), gdaldem (hillshades), and gdalbuildvrt (virtual mosaic).

The basic idea of formatting the data is as follows:

gdaldem hillshade -multidirectional -compute_edges input.tif output.tif
gdaladdo -ro input.tif 2 4 8 16 32 64 128
gdalbuildvrt outputvrt.vrt *.tif
The full BASH script is below:


# The purpose of this script is to process the Auckland 1m DEM and DSM
# elevation data into more manageable pieces for easier viewing in QGIS.
# The original elevation tile downloads from LDS contain 6423 individual
# tiles. The downloaded elevation tiles are reworked into tiffs the same
# size as the NZ LINZ Topo50 Map Sheets. In this case, the original data
# contains an identifier, like 'AZ31', within the tile name that associates
# it with a Topo50 Map Sheet. This script extracts that identifier, makes a
# list of the files containing the identifier name, makes a VRT of the items
# in the list, creates hillshades from that VRT, then formats for quicker
# viewing in QGIS.

# All data is downloaded in EPSG:2193 and in GeoTiff format
# Auckland DEM here:
# Auckland DSM here:

# Place ZIPPED files in directory of choice

# Set root directory for project. PLACE YOUR OWN DIRECTORY HERE.
BASEDIR=/path/to/project

# Create supporting variables (directory layout inferred from the commands below)
dEm_dir=$BASEDIR/dEm
dSm_dir=$BASEDIR/dSm
dEm_list_dir=$BASEDIR/lists/dEm
dSm_list_dir=$BASEDIR/lists/dSm

# Create file structure
mkdir -p $BASEDIR/lists
mkdir -p $dSm_dir
mkdir -p $dEm_dir
mkdir -p $dSm_list_dir
mkdir -p $dEm_list_dir

# Extract data
unzip $BASEDIR/ -d $dSm_dir
unzip $BASEDIR/ -d $dEm_dir

# Delete zipped files
# rm -rf $BASEDIR/
# rm -rf $BASEDIR/

# Loop to process both DEM and DSM data
demdsm="dEm dSm"
for opt in $demdsm
do
	# Variables built from $opt ('dEm' or 'dSm') are used for naming and for
	# moving data to the correct directories. ${!tempvar} and ${!tempvar_list}
	# are indirect references resolving to $dEm_dir, $dEm_list_dir, etc.
	# $capvar ('DEM_'/'DSM_') is the prefix stripped from the tile names.
	# (Definitions inferred from how the variables are used below.)
	tempvar=${opt}_dir
	tempvar_list=${opt}_list_dir
	capvar=$( echo ${opt}_ | tr '[:lower:]' '[:upper:]' )

	# Identify associated Topo50 map sheet names. Hold them as a list in a variable
	unique=$( find ${!tempvar} -name "*.tif" | sed "s#.*$capvar##" | sed 's#_.*##' | sort | uniq )

	# From the 'unique' variable, create a list of files sharing a Topo50 identifier
	for i in $unique
	do
		# List all available tiffs in the directory
		namelist=$( find ${!tempvar} -maxdepth 1 -name "*.tif" )
		# Compare the unique name to the identifier in each tiff name.
		# If they match, record the file name in a list.
		for j in $namelist
		do
			namecompare=$( echo $j | sed "s#.*$capvar##" | sed 's#_.*##' )
			echo $namecompare
			if [ "$i" = "$namecompare" ]
			then
				echo $j >> ${!tempvar_list}/$i.txt
			fi
		done
	done

	# Create list of available .txt files
	listsnames=$( find ${!tempvar_list} -name "*.txt" )

	for k in $listsnames
	do
		# Read the contents of the .txt file into a variable
		formerge=$( cat $k )
		# Prepare the file name for use as the VRT name
		filename=$( basename $k | sed 's#.txt##' )
		# Build a VRT of elevation files the same size as the Topo50 grid
		gdalbuildvrt ${!tempvar}/$filename.vrt $formerge
	done

	# Change directory to the merged elevation files
	cd ${!tempvar}

	# Make directory to store hillshade files
	mkdir -p hs

	# Clean out overviews
	find . -name "*.vrt" | xargs -P 4 -n4 -t -I % gdaladdo % -clean

	# Create hillshades from the VRTs
	find . -name "*.vrt" | xargs -P 4 -n4 -t -I % gdaldem hillshade -multidirectional -compute_edges % hs/%.tif

	# Create external overviews of the VRTs
	find . -name "*.vrt" | xargs -P 4 -n4 -t -I % gdaladdo -ro % 2 4 8 16 32 64 128

	# Create a VRT of the elevation VRTs
	gdalbuildvrt $opt.vrt *.vrt

	# Change directory to the hillshade directory
	cd ${!tempvar}/hs

	# Strip the leftover '.vrt' from the hillshade file names
	rename 's#.vrt##' *.tif

	# Clean out old overviews
	find . -name "*.tif" | xargs -P 4 -n4 -t -I % gdaladdo % -clean

	# Create external overviews of the hillshade tiffs
	find . -name "*.tif" | xargs -P 4 -n4 -t -I % gdaladdo -ro % 2 4 8 16 32 64 128

	# Create a VRT of the hillshade tiffs
	gdalbuildvrt "$opt"_hs.vrt *.tif
done


Building the Gradients
Getting a natural transition through the land and sea was difficult. I was presented with two challenges: (1) building a colour gradient for elevations spanning mountain tops to undersea, and (2) determining which intertidal feature to model.

Studying aerial imagery from the region, I settled on three zones to develop gradients for:

  1. Bathymetric
  2. Intertidal
  3. Terrestrial

With the gradient zones in place, I developed a few additional rules to keep the project linked visually.

  1. The colours for each gradient would be linked in tone, but distinct from each other.
  2. The bathymetry colour would frame the intertidal and terrestrial data.
  3. The deep sea blue would anchor the colour palette.

Finally, I needed an intertidal model. A number of different intertidal zones are represented around Auckland: estuaries, sandy beaches, and rocky shores. Since I was using only one DEM, I had to choose which intertidal zone to represent in the image; one colour gradient does not fit all. I decided to use the mud flats and estuaries as the primary intertidal model. The DEM included the channels in the mudflats, and I really liked the shapes they made. They also covered wide areas and were a prominent feature.

Img 2: Elevation model focusing on marshy tidal zones

With the intertidal model determined and the zones set, I developed the colour gradients.

Elevation Value   Colour Zone
-1.0              Bathymetric
-0.75             Bathymetric
 0.5              Intertidal
 1.0              Intertidal
 1.7              Intertidal
 1.8              Terrestrial
25                Terrestrial
100               Terrestrial
500               Terrestrial

Note: The elevation values used do not necessarily correlate with the actual elevations at which these zones transition. They are a best estimate based on sampling from aerial imagery.
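One way to bake breaks like these into a raster outside QGIS is gdaldem color-relief, which takes a plain-text ramp file of elevation/RGB rows. A sketch using the zone values from the table above; the RGB values are illustrative placeholders, since the actual gradient colours are not listed in this post:

```shell
# Build a gdaldem color-relief ramp file from the zone break values above.
# RGB values are placeholders, not the original gradient colours.
cat > colour_ramp.txt <<'EOF'
-1.00  15  40  80
-0.75  35  70 115
 0.50 150 130 100
 1.00 170 150 110
 1.70 190 170 125
 1.80 120 150  90
25.00 160 170 110
100   205 195 150
500   240 240 230
EOF
# Sanity check: one row per break in the table
wc -l < colour_ramp.txt
```

The ramp would then be applied with something like: gdaldem color-relief dEm.vrt colour_ramp.txt dEm_colour.tif (layer names here are placeholders).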

Blending the Layers
For the final step, I blended the layers in QGIS.  I needed the hillshades I developed from the DEM and DSM, plus the original DEM elevation. That’s it.  Three layers for the whole thing. The DEM elevation carried all the colour work, the DSM hillshades gave the detail, and the DEM hillshade added some weight to the shaded areas. Here is the order and blending for the project in QGIS:

  1. DSM Hillshade: multiply, brightness 50%, black in hillshade set to #333333
  2. DEM Hillshade: multiply, brightness 50%
  3. DEM: contrast 10%
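The multiply blend mode used above darkens: each output channel is the product of the two input channels scaled back into 8-bit range, so the hillshade's shadows sink into the colour layer while white areas leave it untouched. A quick arithmetic check of the formula with sample 8-bit values:

```shell
# Multiply blend: out = top * bottom / 255, per 8-bit channel.
# Sample: a mid-grey hillshade pixel (128) over a colour value of 200.
awk 'BEGIN { printf "%d\n", int(128 * 200 / 255) }'
# prints 100
```

This is why setting the hillshade's black to #333333 softens the effect: the darkest multiplier becomes 51/255 rather than 0, so shadows dim the colour instead of crushing it to black.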

Overall, the base image proved to be a success and the script has been useful across a number of projects. The images have ended up in a good bit of internal and conference media and I have seen steady use for almost two years now. For me, the image is getting tired and I’d eventually love to redevelop the gradients; but, I am happy to have a chance to write about it and get some more external exposure. In the future, I am looking to develop this map a bit further and present it as a web map as well. Time will tell whether this happens or not. I think it would take a directive from an outside source.

I’d be keen to hear your comments below or get in touch if you are interested in learning more.

Img 3: Promotional media

Note: All imagery was produced during my time at Land Information New Zealand. Imagery licensing can be found here:
“Source: Land Information New Zealand (LINZ) and licensed by LINZ for re-use under the Creative Commons Attribution 4.0 International licence.”

Open Source GIS Data Processing and Cartography Tools


This list is a basic overview of the tools I use for open source geospatial data processing and visualization, as of mid-2018. It is not definitive; rather, it is a collection of tools I use at various times to complete static or scalable cartographic projects. This post is not a 'how to': it is just the tools with a brief description. Other posts on this site describe in detail how they are implemented.

Eighty percent of my work is processing data leading to a visual product. Although unseen, this data work remains the bulk of my cartographic projects and the portion I can truly say remains open source.

Some of my visual cartographic work does not involve open source tools. One disclaimer I feel is necessary to confess to the open source world is this: I use open source tools for geospatial processing; however, I rely heavily on Adobe CS for post-processing static visual work. There are open source tools replacing InDesign, Illustrator, and Photoshop; however, in my professional experience, very few programs compare to the image processing capabilities of Adobe CS. I must also confess I occasionally look to ESRI for interpolating and visualizing LiDAR point clouds.

Open Source Tools in Geospatial Data Processing

I work with Linux (Ubuntu) and Windows. I use Ubuntu for all my GIS data processing and base cartographic work, and run Adobe CS off a Windows machine for post-processing static published work. Linux has a number of flavors, and all provide the ease and control of terminal processing. Ubuntu, one flavor of Linux, has several GIS packages, like GDAL, available in its standard repositories.

QGIS is a GIS application/editor. It is my go-to for testing data and static visualization work. QGIS is always evolving; every year it gets better. I rely on QGIS for the construction of base images: the base map visuals that go into more complex cartographic layouts and posters. Layouts, legends, supporting visuals, and labeling are most often handled using Adobe.

GDAL is a geospatial command line raster processing tool. It is fast, reliable, and covers almost all the basic needs of geospatial raster processing: reprojection, transformation, interpolation, masking, calculations, and a number of things I am completely unaware of. GDAL commands are easy to add to BASH or Python scripts and are a breeze for batch processing. Many QGIS raster tools use GDAL in the backend.

Along with GDAL, users get OGR2OGR for their vector processing and conversions.
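As a sketch of how these tools slot into batch scripts, the loop below prints a typical gdalwarp reprojection (to NZTM, EPSG:2193) and an ogr2ogr format conversion. The file names are placeholders, and each command is echoed rather than executed so the pattern is visible without the data present:

```shell
# Batch pattern sketch: placeholder file names; RUN=echo prints each
# command instead of executing it.  Set RUN= to run for real.
RUN=echo
for f in tile_01.tif tile_02.tif; do
  $RUN gdalwarp -t_srs EPSG:2193 -r bilinear "$f" "nztm_$f"
done
$RUN ogr2ogr -f GPKG boundaries.gpkg boundaries.shp
```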

PostgreSQL is an open source database manager. PostGIS is the geospatial extension added to this database. Processing geospatial data inside a database is clean and efficient. PostgreSQL offers all the power of scripting and allows for complex operations and analysis on data. PostgreSQL integrates well with QGIS and tables are easily visualized using the QGIS DBManager. I use PostgreSQL/PostGIS for as much vector processing as I can and integrate it into my BASH scripts.

See also pgsql2shp and shp2pgsql for moving shapefiles in and out of the database.

BASH, Ubuntu's default command processor, offers a selection of programming tools (for loops, while loops, variables, arrays, and lists) to wrap around command line operations. BASH does not get too complex, but will go a long way. Anything that can be executed in the terminal can be built into a BASH script. Need to locate five hundred rasters of the same resolution, move and rename them? BASH can do that. Need to set up a series of GDAL processes in succession that might take three days to complete? Run them through BASH. Need to create a geospatial plugin for QGIS? BASH will NOT do that. See Python for all your plugin construction needs.
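A small, self-contained demonstration of the kind of file wrangling described above (the file names are stand-ins created for the demo): find every matching raster, then move and rename it in one loop.

```shell
# Demo of batch move-and-rename in BASH.  The .tif files here are empty
# stand-ins created with 'touch' so the example runs anywhere.
mkdir -p demo_sorted
touch run1_elev.tif run2_elev.tif
for f in *_elev.tif; do
  mv "$f" "demo_sorted/auckland_$f"
done
ls demo_sorted
```

The same loop shape scales to five hundred files as easily as two; only the glob changes.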

MapProxy is an open source tool for creating raster tile caches and serving scalable maps. For me, its most useful advantage is its ability to create tile caches in any projection. MapProxy also acts as a map server: it will render tiles on the fly or serve tile caches as needed. This unassuming tool is easy to use and allows for a lot of customization in outputs. It also creates WMTS GetCapabilities documents and serves WMTS for testing.
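As a sketch of that projection flexibility, a minimal MapProxy configuration caching a WMS source into a custom EPSG:2193 (NZTM) grid might look like this; the layer names, URL, and bounding box are illustrative placeholders, not values from this project:

```yaml
# mapproxy.yaml sketch -- placeholder names throughout
services:
  wmts: {}

layers:
  - name: auckland_elevation
    title: Auckland Elevation
    sources: [elevation_cache]

caches:
  elevation_cache:
    grids: [nztm_grid]
    sources: [elevation_wms]

grids:
  nztm_grid:
    srs: 'EPSG:2193'
    bbox: [1000000, 4700000, 2100000, 6200000]
    bbox_srs: 'EPSG:2193'

sources:
  elevation_wms:
    type: wms
    req:
      url: http://localhost:8080/service?
      layers: elevation
```

The custom grid block is the key piece: define any SRS and extent, and MapProxy caches and serves tiles in that projection.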

CartoCSS is a CSS-like language used to style geospatial data into dynamic, scalable maps. It allows for easy styling across multiple zoom levels and is a good language for incorporating raster blending and gradients.
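A small CartoCSS sketch of that raster blending, using placeholder layer names: a hillshade multiplied over a colour-relief raster, with the hillshade fading at higher zooms.

```css
/* Placeholder layer names; sketch of hillshade-over-colour blending. */
#colour-relief {
  raster-opacity: 1;
}
#hillshade {
  raster-comp-op: multiply;
  raster-opacity: 0.5;
  [zoom >= 14] { raster-opacity: 0.35; }
}
```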

Yes, I still prefer raster tiling and use Tilemill to design scalable maps. I like raster tiling because it lets me keep using raster data, which makes up half the data used in GIS, and very little compares to its visual quality. Please do not leave me comments extolling the benefits of vector tiling; I do know these benefits, especially in labeling, and will discuss them later.

As part of working with CartoCSS, I should also note the use of Mapnik XML to carry CartoCSS styles into MapProxy. Although a small step, knowing how to manipulate these documents goes a long way in developing custom projections.