Static Vector Tiles II: OpenLayers with Custom Projection

Building off the work from my previous post on vector tiles, I wanted to develop a second process for use in OpenLayers. There are many tutorials out there demonstrating the use of vector tiles with predefined styles, and tutorials explaining how to build a fully vector tile map online. My aim instead is to demonstrate how to build:

  1. A vector tile setup overlaying a raster tile cache
  2. An integration with a custom style.json
  3. All of the above in a custom projection using OpenLayers

As always, there is more than one method to do any of this. What is shown below can definitely be improved upon. Please take this and make it better.

GitHub repository is here.

Website example is here.

This example uses OpenLayers 6. The map is served in the NZTM projection (EPSG:2193). No effort has been made in ordering the labels; they display as the OpenLayers decluttering orders them. Included in the JS code is a basic pop-up window demonstrating how to get information from the vector tile (a sketch of the idea appears at the end of the Styling section).

The project is built as a static setup. The vector tile cache is built directly into the website. THIS IS NOT OPTIMAL, but it does demonstrate the principle. Ideally, you would serve your tile cache from a location like AWS S3.

In order to use a custom projection, you will need to build an XYZ tile cache. MBTiles do not handle projections other than Web Mercator (EPSG:3857).

Basic steps

  1. Download or reproject the shapefile into NZTM
  2. Upload the shapefile to a PostgreSQL database with the PostGIS extension
  3. Tile the PostgreSQL table into an NZTM (EPSG:2193) XYZ tile cache using TRex
  4. Construct the OpenLayers 6 JS for tile consumption

Sample Data

https://data.linz.govt.nz/layer/50280-nz-geographic-names-topo-150k/

Note:

The Geographic Names layer is clipped and filtered for this example. I clipped the data to the Wellington Region and filtered it to use only:

desc_code = BAY, METR, LOC, POP, TOWN, SBRB

Upload to PostgreSQL

shp2pgsql -s 2193 /data/wellyRegion_townBay.shp public.wellyRegion_townBay_nztm | psql -h localhost -d <yourDatabaseHere> -U <yourUserNameHere>

TRex Tiling

TRex will create an XYZ tile cache in the projection of your choosing. You will need to know the resolutions and bounding box of your projection in order to make this work. I was fortunate to have this information at hand thanks to a great tutorial from LINZ.

TRex uses a config file for tiling. The config used in this example is here.

The command used to run TRex:

t_rex generate --progress true --maxzoom=14 --minzoom=0 --extent=174.627603,-41.613839,176.259896,-40.737190  --config /configpsql_points.toml

TRex will generate gzipped PBFs. If you prefer to unzip them:

find . -type f | xargs -n1 -P 1 -t -I % gzip -d -r -S .pbf %
find . -type f | xargs -n1 -P 1 -t -I % mv % %.pbf

OpenLayers JS

This example uses OpenLayers version 6. The <script> and <link> tags needed are:

<link rel="stylesheet" href="https://cdn.jsdelivr.net/gh/openlayers/openlayers.github.io@master/en/v6.5.0/css/ol.css" type="text/css">

<script src="https://cdn.jsdelivr.net/gh/openlayers/openlayers.github.io@master/en/v6.5.0/build/ol.js"></script>

<script src="//cdnjs.cloudflare.com/ajax/libs/proj4js/2.3.15/proj4.js"></script>

<script src="https://unpkg.com/ol-mapbox-style@6.3.2/dist/olms.js" type="text/javascript"></script>

The full JS example is here.

NZTM Construct in OpenLayers

Building the projection for OpenLayers:

// set NZTM projection extent so OL can determine zoom level 0 extents.
// Define NZTM projection
proj4.defs("EPSG:2193","+proj=tmerc +lat_0=0 +lon_0=173 +k=0.9996 +x_0=1600000 +y_0=10000000 +ellps=GRS80 +towgs84=0,0,0,0,0,0,0 +units=m +no_defs");

// Register projection with OpenLayers
ol.proj.proj4.register(proj4);

// Create new OpenLayers projection
var proj2193 = new ol.proj.Projection({
	code: 'EPSG:2193',
	units: 'm',
	extent: [827933.23, 3729820.29, 3195373.59, 7039943.58]
});

// NZTM tile matrix origin, resolution and matrixId definitions.
var origin = [-1000000, 10000000];
var resolutions = [ 8960, 4480, 2240, 1120, 560, 280, 140, 70, 28, 14, 7, 2.8, 1.4, 0.7, 0.28, 0.14, 0.07 ];
var matrixIds = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16];

Applying to Raster and Vector Tiles

Raster Tiles

Another great tutorial from LINZ covers the setup of an XYZ tile service for raster tiles.

// Tile Services Map
var urlTemplate =
  "https://tiles.maps.linz.io/nz_colour_basemap/NZTM/{z}/{x}/{y}.png";

// Set raster layer
var layer = new ol.layer.Tile({
  source: new ol.source.XYZ({
    url: urlTemplate,
    projection: proj2193,
    attributions: ['<a href="http://data.linz.govt.nz">Data from LINZ. CC BY 4.0</a>'],
    tileGrid: new ol.tilegrid.TileGrid({
      origin: origin,
      resolutions: resolutions,
      matrixIds: matrixIds,
      extent: [827933.23, 3729820.29, 3195373.59, 7039943.58]
    })
  })
});

Vector Tiles

Set up the vector layer to use the custom projection:

// Set vector layer
var placeSource = new ol.source.VectorTile({
  cacheSize: 0,
  overlaps: true,
  tilePixelRatio: 1, // oversampling when > 1
  tileGrid: new ol.tilegrid.TileGrid({ 
    origin: [-1000000, 10000000],
    maxZoom: 16,
    tileSize: 4096,
    extent: [827933.23, 3729820.29, 3195373.59, 7039943.58],
    resolutions: resolutions,
  }),
  extent: [827933.23, 3729820.29, 3195373.59, 7039943.58],
  format: new ol.format.MVT(),
  projection: ol.proj.get('EPSG:2193'),
  url: 'https://xycarto.github.io/static.vector.tiles.openlayers.nztm/tiles/wellyRegion_townBay_nztm/{z}/{x}/{y}.pbf'
});

var vectorMap = new ol.layer.VectorTile({
  declutter: true,
  source: placeSource,
  renderMode: 'vector',
  zIndex: 10
});
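
Neither layer will draw until both are added to a map whose view is built on the same projection and resolutions. The full JS linked above has the real setup; as a minimal sketch (the target div id and the centre coordinate are my assumptions):

// Minimal sketch: a map whose view uses the custom projection and
// the NZTM resolutions defined earlier. The 'map' div id and the
// centre coordinate are assumptions.
var map = new ol.Map({
  target: 'map',
  layers: [layer, vectorMap],
  view: new ol.View({
    projection: proj2193,
    resolutions: resolutions, // keep view zooms aligned to the tile grid
    center: [1749000, 5428000], // roughly Wellington, in EPSG:2193 metres
    zoom: 9
  })
});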

Styling

The style file example is here.

  1. The method in this example loads the vector tiles and overlays them on a raster tile cache. In order to accomplish this, the vector tile cache must be added to the map first, THEN the rules from the style JSON are applied using:
fetch('./styleText.json').then(function(response) {
  response.json().then(function(glStyle) {
    olms.applyStyle(vectorMap, glStyle, 'wellyRegion_townBay_wgs');
  });
});
  2. The above uses olms.applyStyle. To access this function you will need to add the script tag to your HTML:
<script src="https://unpkg.com/ol-mapbox-style@6.3.2/dist/olms.js" type="text/javascript"></script>
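
The basic pop-up mentioned at the start of the post lives in the full JS example; a minimal sketch of the idea, assuming a 'popup' element in your HTML and a 'name' attribute on the features (check your layer's actual field names):

// Minimal pop-up sketch: report a vector tile feature attribute on click.
// The 'popup' element and the 'name' attribute are assumptions.
var popup = new ol.Overlay({
  element: document.getElementById('popup')
});
map.addOverlay(popup);

map.on('click', function (evt) {
  var feature = map.forEachFeatureAtPixel(evt.pixel, function (f) {
    return f;
  });
  if (feature) {
    popup.getElement().innerHTML = feature.get('name');
    popup.setPosition(evt.coordinate);
  } else {
    popup.setPosition(undefined); // hide when nothing is hit
  }
});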

Notes

  1. Not fully complete; a working example only.
  2. Runs slow in Safari.
  3. This is in no way the only way to do this. My hope is that someone takes this and makes it better.
  4. Still needs label work for use on mobile devices.

RGB Elevation Creation for 3D Online Mapping (Terrain RGB)

Mt. Ruapehu, Ngauruhoe, and Tongariro

The following is about how to build an elevation file to work with procedural-gl.js. I just really wanted to build my own elevation dataset and thought it would be helpful to share how I did it.

Procedural-gl.js has changed the game in attainable 3D online mapping. When I came across this project, I was very excited and wanted to get to mapping ASAP.  There is so much potential in just a few lines of JS code; just add a base map, an elevation file and boom, you have a smooth functioning 3D map, mobile ready. I cannot thank Felix Palmer enough for putting this out there. You can check out the source code and some really great maps he developed here.

I’m not one to leave well enough alone. I wanted to develop a method to use New Zealand specific elevations, with my goal being to eventually incorporate New Zealand’s LiDAR elevations. However, before I get to building elevation models from LiDAR, I wanted to test building an elevation model with the LINZ 8m DEM. This dataset covers the entire nation and was lightweight enough to test on the ol’ home Linux box.

Method

As mentioned above, to run procedural-gl.js in its most basic form, you’ll need a tile cache for your base map and a tile cache for your elevation file. The base map I had covered, but I was unfamiliar with what was needed for the elevation tile cache. Fortunately, I came across SyncPoints tutorial on how to do this.

I am not going to rewrite their entire process. They did a really good job explaining the what, why, and how. Instead, I will lay out what I did for the NZ specific data. Much of this process can be further refined; this is really just a proof of concept.

Two quick notes before I get started:

  1. For now, this process is only done in Web Mercator. I always want to push the limits and do my online work in NZTM, but for now, I am going to work with what I have.
  2. I am not going to talk about building a raster tile cache for the base map. I was fortunate to have a Web Mercator tile cache for NZ available.

We want to create an elevation file, in RGB PNG format, rendered into an XYZ tile cache directory, with 512x512px tiles.

The basic steps are this:

  1. Remove all errant noData pixels from the GeoTIFF.
  2. Convert your elevation GeoTIFF to RGB (single band to three band), in Byte format.
  3. Render your RGB GeoTIFF into XYZ raster tiles, in a directory format.
  4. Consume through procedural-gl.js

List of Tools

  - GDAL (gdalbuildvrt, gdalwarp, gdal2tiles.py 3.1+)
  - rio-rgbify
  - Docker
  - npm http-server
  - procedural-gl.js

Data

The data I used for this was the full LINZ 8m DEM elevation. Make sure your data is in the Web Mercator projection (EPSG:3857). See some earlier blogs about how to reproject across all the tiles using BASH and gdalwarp.

LINZ 8m DEM

Process

1. Build a VRT to gather all the elevation tiles under one file name.

gdalbuildvrt elevation.vrt *.tif

2. Remove nodata

Using the VRT, we can perform the noData conversion across all the elevation tiles as if they were a mosaic. Better yet, the output from this command will produce a mosaic out the other side. GDAL is magic.

gdalwarp -t_srs EPSG:3857 -dstnodata None -co TILED=YES -co COMPRESS=DEFLATE -co BIGTIFF=YES elevation.vrt elevation_noData_mosaic.tif

3. RGB-ify the elevation mosaic.

rio rgbify -b -10000 -i 0.1 /elevation_noData_mosaic.tif elevation_noData_mosaic_rgb.tif
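
The -b (base) and -i (interval) flags set how elevations are packed into the three bands, and any client decoding the tiles reverses that mapping. A quick sketch of the arithmetic for the values used above:

// Decode a Terrain RGB pixel back to metres, matching the
// rio rgbify flags above: base -10000, interval 0.1.
function rgbToElevation(r, g, b) {
  return -10000 + (r * 256 * 256 + g * 256 + b) * 0.1;
}

// e.g. rgbToElevation(1, 134, 160)
//  = -10000 + (65536 + 34304 + 160) * 0.1
//  = 0 metres (sea level)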

4. Create your XYZ tile cache in EPSG:3857

GDAL2Tiles is a quick way to render XYZ tile caches if you have a single layer and are working in Web Mercator. Be sure to use version 3.1 or greater to get the functionality of XYZ caching and tile size control; you need both. In the end, I used the GDAL Docker image developed by perrygeo, modified with --with-python, and got access to GDAL 3.2. It was a lifesaver; however, I did need to modify the Dockerfile to add GDAL's Python bindings and rebuild the image.

gdal2tiles.py --s_srs=EPSG:3857 --zoom=0-16 --xyz --tilesize=512 --processes=7 elevation_noData_mosaic_rgb.tif /data/tile-cache

Localhost Testing

Once I built the tile cache, I built a quick site on my localhost to test. I embedded the tile cache into the site (place the tile cache in the same directory as your web files), then I ran it under a local server.  The advantage of embedding your tile cache for testing is that it allows the local server to serve out the raster tiles as well. There is no need to set up a tile server. The local server does this for you.

I switched it up this time and used the npm http-server to avoid some CORS issues I was encountering.

http-server . --cors -p 8000
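
The test page itself is only a few lines of JS. Below is a hypothetical sketch of pointing procedural-gl.js at the two caches; the datasource option names follow the library's README pattern and are assumptions on my part, so verify them against the procedural-gl.js docs. The real code is linked in the next section.

// Hypothetical procedural-gl.js setup; the datasource option names
// are assumptions. Check the procedural-gl.js docs before using.
var container = document.getElementById('container');

var datasource = {
  elevation: {
    urlFormat: 'http://localhost:8000/tile-cache/{z}/{x}/{y}.png' // Terrain RGB tiles
  },
  imagery: {
    urlFormat: 'http://localhost:8000/basemap/{z}/{x}/{y}.png',
    attribution: 'LINZ CC BY 4.0'
  }
};

Procedural.init({ container: container, datasource: datasource });
Procedural.displayLocation({ latitude: -39.28, longitude: 175.57 }); // Mt. Ruapehu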

Code and Website

You can view the code for the HTML and JS here:

https://github.com/xycarto/3D_Aotearoa/

The running website can be found here:

https://xycarto.github.io/3D_Aotearoa/

It works great on mobile devices, but will happily display on your desktop.

Open Source Stack for Raster Tiling in Custom Projections

This post is a high-level look at the recent stack I built for a raster tiling setup. I am working out some kinks in my online and network delivery of cartographic products, so I thought it was time to set up a raster tiling service to access XYZ and WMTS services from my raster tile caches. I’ll be adding maps and zoom levels in the future, so check back now and again. Antarctica is on its way soon!

[Image: NZ from the basemap service]

Basic Demo Service using NZTM projection is here: https://xycarto.github.io/

See below for the WMTS links.

Raster tiling is not the only method, but it is still a viable choice for delivering nice looking maps online, serving across networks, and designing with raster data. I am particularly enamored with the quality of the visual outputs. For me, it is akin to the difference between music in vinyl and digital formats. In addition, the process is well documented and fairly straightforward. By virtue of having been around for a while, raster tiling has a wealth of information and standards to work with, delivery from S3 is a robust process, and there is nice integration with QGIS, Leaflet, and OpenLayers.

I break the stack into three areas: analysis, rendering, and delivery.

Analysis
QGIS: Sketching, QC, and general geospatial work.

GDAL: Processing raster data. Configuring your rasters in an optimal format from the beginning will greatly improve your rendering speeds. I recommend creating a good set of overviews and gathering everything into a virtual raster (VRT).

Postgres/PostGIS: Handling your vector data. Pulling all your data from a database significantly improves rendering speeds. Don’t forget to index!

Rendering
TileMill/Mapnik XML: Yes, I still design using CartoCSS when working with raster data. I love the simplicity of the language. TileMill is easy enough to containerize these days too. TileMill exports into the Mapnik XML format, essential for my process further down the line. Here is how to hack TileMill to work in a custom projection.

Mapnik: Support for using Mapnik XML

Mapnik with Python Bindings: Necessary for using Mapnik XML documents in MapProxy

MapProxy: MapProxy is a map server and tile renderer. It is easy to build on your machine, though I recommend using a container like Docker. Specifically, I use a hack provided by PalmerJ at GitHub to increase rendering speeds through multi-threading.

Delivery
Amazon S3: Simple Storage Service. Amazon is pretty cheap, free in many cases, and a good place for storing your tile cache. You get an easily accessed URL for your tiles and a home for your WMTS GetCapabilities document.

WMTS: For me, the real power in a base map service is the WMTS, so below are two links to the WMTS service for you to set up in QGIS if you’d like to have a play. Here is a quick tutorial about how to set up WMTS if you are unfamiliar.

https://s3-ap-southeast-2.amazonaws.com/basemaps.temp/nz_colour_basemap/WMTSCapabilities.nz_colour_basemap.xml
https://s3-ap-southeast-2.amazonaws.com/basemaps.temp/nz_topo_basemap/WMTSCapabilities.nz_topo_basemap.xml

XYZ: Building a web map? If your tile cache is in S3, in a TMS structure, and public, you should be able to access it via a simple XYZ request like so:

https://{s3-your-region-here}/{your_bucket}/{project_name}/{projection}/{z}/{x}/{y}.png

Leaflet: Leaflet will handle all the XYZ requests to the server and allow for custom projections. Have a look here for the basic HTML, CSS, and JS setup.
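
For a custom projection, that means pairing Leaflet with the Proj4Leaflet plugin and the same origin and resolutions used for the NZTM tile grid earlier in this post. A minimal sketch, with a placeholder tile URL:

// Minimal Leaflet + Proj4Leaflet sketch for an NZTM (EPSG:2193) XYZ cache.
// Origin and resolutions are the NZTM tile matrix values used earlier;
// the tile URL is a placeholder.
var crs = new L.Proj.CRS('EPSG:2193',
  '+proj=tmerc +lat_0=0 +lon_0=173 +k=0.9996 +x_0=1600000 +y_0=10000000 +ellps=GRS80 +towgs84=0,0,0,0,0,0,0 +units=m +no_defs',
  {
    origin: [-1000000, 10000000],
    resolutions: [8960, 4480, 2240, 1120, 560, 280, 140, 70, 28, 14, 7, 2.8, 1.4, 0.7, 0.28, 0.14, 0.07]
  });

var map = L.map('map', { crs: crs }).setView([-41.29, 174.78], 7);

L.tileLayer('https://your-bucket.s3.amazonaws.com/nz_colour_basemap/NZTM/{z}/{x}/{y}.png', {
  attribution: 'Data from LINZ. CC BY 4.0'
}).addTo(map);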

Experimenting with Hydrological Analysis using TauDEM

Blue Rivers Ordered

Over the past few years, I’ve played around with developing ordered river networks for different projects. I am not an expert in hydrology, but I can get close for cartographic purposes. I am an expert, however, in asking for help from those who know best, and I rely on a lot of very smart people to guide me on my journey.

Recently, I decided to put together a visualization of ordered rivers for New Zealand. I came across a very nice data set offered through the Ministry for the Environment via the Koordinates website and thought I’d like to put it to use.

The rivers vis project made me wonder if I could build this base dataset myself using some of the recently released elevation data sets via the LINZ Data Service. The short answer to my question is “sorta”. Doing it open source is not an issue, but building an accurate ordered river centerline network is another story. This is a task I cannot take on as a solo project right now, but I could do a little experimentation. Below, I’ll offer some of the methods and things I learned along the way.

Tools and Data

The method I tested used TauDEM and a 1m DEM raster accessed from the LINZ Data Service. I downsampled the DEM to 2m and 5m resolutions and used small areas for testing. Finding an open source tool was easy. I sorted through a few available methods and finally landed on “Terrain Analysis Using Digital Elevation Models” (TauDEM). There are additional methods through GRASS and SAGA GIS. I chose TauDEM because I had never used it before.

Method Tested

To my knowledge, there is no open source tool where a person can put in a DEM and get a networked river centerline vector out the other side. It requires a number of steps to achieve your goal.

The basic run down to process the DEM is to:

  1. Fill sinks
  2. Determine flow directions
  3. Determine watersheds
  4. Determine flow accumulation
  5. Stream classification
  6. Export to vector

TauDEM does require a few extra steps to complete the process, but these steps are explained in the documentation of the tool. It was more about keeping all my variables in the right places and using them at the right time. I recommend using the variable names TauDEM provides.

BASH script here.


#!/bin/bash

#Rough sketch for building river centerlines. Rasters have been clipped prior

BASEPATH=/dir/path/to/base

raster_list=$( find $BASEPATH -name "*.tif" )

taudem_outputs=/dir/path/to/outputs

reso=resolution_number

for i in $raster_list
do


	INPUT_RASTER=$i

	file_name=$( basename $i )

	strip_input_extension=$( echo $file_name | sed 's/.tif//' )

	reso_name=$taudem_outputs/${strip_input_extension}_${reso}res

	gdal_translate -tr $reso $reso -of GTiff $i $reso_name.tif

	fel=${reso_name}_fel.tif
	p=${reso_name}_p.tif
	sd8=${reso_name}_sd8.tif
	ad8=${reso_name}_ad8.tif
	ang=${reso_name}_ang.tif
	slp=${reso_name}_slp.tif
	sca=${reso_name}_sca.tif
	sa=${reso_name}_sa.tif
	ssa=${reso_name}_ssa.tif
	src=${reso_name}_src.tif

	ord=${reso_name}_strahlerorder.tif 
	tree=${reso_name}_tree.dat
	coord=${reso_name}_coord.dat
	net=${reso_name}_network.shp
	w=${reso_name}_watershed.tif 

	processed_input_file=$reso_name.tif

	# TauDEM Commands
	# Fill sinks
	mpiexec -n 8 pitremove -z $processed_input_file -fel $fel

	# D8 flow directions and slopes
	mpiexec -n 8 d8flowdir -fel $fel -p $p -sd8 $sd8

	# D8 contributing area (flow accumulation)
	mpiexec -n 8 aread8 -p $p -ad8 $ad8 -nc

	# D-infinity flow directions and slopes
	mpiexec -n 8 dinfflowdir -fel $fel -ang $ang -slp $slp

	# D-infinity contributing area
	mpiexec -n 8 areadinf -ang $ang -sca $sca -nc

	# Slope-area product for stream source detection
	mpiexec -n 8 slopearea -slp $slp -sca $sca -sa $sa

	# Maximum upslope slope-area value along D8 flow paths
	mpiexec -n 8 d8flowpathextremeup -p $p -sa $sa -ssa $ssa -nc

	# Classify stream cells by threshold
	mpiexec -n 8 threshold -ssa $ssa -src $src

	# Build the stream network, Strahler order, watersheds, and vector output
	mpiexec -n 8 streamnet -fel $fel -p $p -ad8 $ad8 -src $src -ord $ord -tree $tree -coord $coord -net $net -w $w

done

The script is a rough sketch, but does get results.

Challenges in the Process

One major challenge for this project was the size of the input DEM and my computer's available RAM. I work primarily off a laptop. It’s a good machine, but no match for a proper server setup with some spacious RAM. My laptop struggled with the large high-resolution DEMs, so I needed to down-sample the images and choose a smaller test area to get it to work.

Clip the TIFF with gdal_translate -projwin and down-sample with -tr:

gdal_translate -tr xres yres -projwin ulx uly lrx lry input.tif output.tif

The second challenge came up because I used a bounding box to clip my test regions. I recommend not doing this; instead, clip your regions using a watershed boundary. Having square shapes for your test regions will give you very inaccurate and unhelpful results. For example, major channels in your DEM will be cut at the edges of your raster, and you will not get accurate flows.

Clipping a raster using a shapefile, like a watershed boundary, can be achieved using gdalwarp.

gdalwarp -cutline input.shp -crop_to_cutline input.tif output.tif

Results

I ran my process and QCed the results against aerial imagery and a hillshade I developed from the DEM. The first run gave me good enough results to know I have a lot of work to do, but I did manage to develop a process I was happy with. The tool did a great job; the accuracy of the DEM was a little more challenging. It’s a start. I captured a good number of river channels despite my incorrect usage of a square DEM, learned a lot about how DEM resolution affects outputs, and gained knowledge around how to spot troublesome artifacts.

Img 1: River capture in a well defined channel.

From this experiment, there are a few ideas I’d like to explore further:

1. Accuracy of the DEM. The particular DEM I worked with had a number of ‘dams’ in the flows: notably bridges, culverts, vegetation artifacts, and other general errors that caused water to flow in interesting directions. When working with a data set like this, I am curious how to manage these artifacts.

Img 2: River diversion at a road.

Img 3: River diversion at a culvert or bridge.

2. How to go beyond borders. This analysis can be broken down by watershed, but it will be necessary to link the outflows of those watersheds to the next for accurate results.

Img 4: Flow not captured at the raster edge.

3. As DEMs are released with better resolution, there is a need for scaled up computing power. The process needs a large amount of RAM. What is the best computational set up for capturing the largest area?

4. Did I do this correctly? I perform this task about once every two years, usually on weekends when the surf is flat and the garden is weeded, so I am not an expert. There is a lot more research to be done to determine if I am using the tools to the best of their abilities.

Wellington Elevations: Interpolating the Bathymetry

It is important to note something from the very beginning. The interpolated bathymetry developed in this project does not reflect the actual bathymetry of the Wellington Harbour. It is my best guess based on the tools I had and the data I worked with. Furthermore, this interpolation is NOT the official product of any institution. It is an interpolation created by me only for the purposes of visualization.

[Image: interpolated Wellington Harbour bathymetry with colour and aerial imagery, full view]

Part of the goal when visualizing the Wellington landscape was to incorporate a better idea about what may be happening below the surface of the harbour. Various bathymetric scans in the past have gathered much of the information, and institutions like NIWA have done the work visualizing that data. As for myself, I did not have access to those bathymetries; however, I did have a sounding point data set to work with, so I set about interpolating those points.

The data set, in CSV format, was over a million points: too dense for a single interpolation. I worked out a basic plan based on splitting the points into a grid, interpolating the smaller bits, then reassembling the grid tiles into a uniform bathymetry.

Conversion from CSV to shp
Using the open option (-oo) switch, OGR will convert CSV to shp seamlessly:

ogr2ogr -s_srs EPSG:4167 -t_srs EPSG:4167 -oo X_POSSIBLE_NAMES=$xname* -oo Y_POSSIBLE_NAMES=$yname*  -f "ESRI Shapefile" $outputshapepath/$basenme.shp $i

Gridding the Shapefile
With the shapefile in place, I next needed to break it into smaller pieces for interpolation. For now, I create the grid by hand in QGIS using the ‘Create Grid’ function, found under Vector > Research Tools > Create Grid. Determining a grid size that works best for the interpolation is a bit of trial and error. You want the largest size your interpolation can manage without crashing. Using the grid tool from QGIS is very convenient, in that it creates an attribute table of the xmin, xmax, ymin, ymax coordinates for each tile in the grid. These attributes become very helpful during the interpolation process.

Interpolating the Points
I switched things up in the interpolation methods this time and tried out SAGA GIS. I have been looking for a while now for a fast and efficient method of interpolation that I could easily build into a scripted process. SAGA seemed like a good tool for this. The only drawback was that I had a very hard time finding examples online of how to use this tool. My workaround was to test the tool in QGIS first. I noticed that when the command ran, QGIS saved the last command in a log file. I found that log, copied out the command line function, and began to build the SAGA command for my script from there.

Here is a look at the command I used:

saga_cmd grid_spline "Multilevel B-Spline Interpolation" -TARGET_DEFINITION 0 -SHAPES "$inputpoints" -FIELD "depth" -METHOD 0 -EPSILON 0.0001 -TARGET_USER_XMIN $xmin -TARGET_USER_XMAX $xmax -TARGET_USER_YMIN $ymin -TARGET_USER_YMAX $ymax -TARGET_USER_SIZE $reso -TARGET_USER_FITS 0 -TARGET_OUT_GRID "$rasteroutput/sdat/spline_${i}"

I tested a number of methods and landed on ‘grid_spline’ as producing the best results for the project. It was useful because it did a smooth interpolation across the large ‘nodata’ spaces.

Once the initial interpolation was complete, I needed to convert the output to GeoTIFF, since SAGA exports in an .sdat format. Easy enough, since gdal_translate recognizes the .sdat format. I then did my standard prepping and formatting for visualization:


gdal_translate "$input_sdat/IDW_${i}.sdat" "$output_tif/IDW_${i}.tif"
gdaldem hillshade -multidirectional -compute_edges "$output_tif/IDW_${i}.tif" "$output_hs/IDW_${i}.tif"
gdaladdo -ro "$output_tif/IDW_${i}.tif" 2 4 8 16 32 64 128
gdaladdo -ro "$output_hs/IDW_${i}.tif" 2 4 8 16 32 64 128

Here is a look at the interpolated harbour bathymetry, hillshaded, with the Wellington 1m DEM hillshade added over top:
[Image: hillshaded harbour bathymetry]

And here is a look at the same bathy hillshade with coloring:
[Image: hillshaded harbour bathymetry with colour]

Visualizing the Bathymetry
With the bathymetry complete, it was simply a matter of building it into the existing visualization I built for the Wellington Region. Learn more about the project here. The visualization was four steps:

  1. Hillshade
  2. Color
  3. Aerial imagery
  4. Merge the models together

Easy as, eh? Let me know what you think!

Note: All imagery was produced during my time at Land Information New Zealand. Imagery licensing can be found here:
“Source: Land Information New Zealand (LINZ) and licensed by LINZ for re-use under the Creative Commons Attribution 4.0 International licence.”