# Gridded Datasets¶

In [1]:
import xarray as xr
import numpy as np
import holoviews as hv
from holoviews import opts
hv.extension('matplotlib')

opts.defaults(opts.Scatter3D(color='Value', cmap='fire', edgecolor='black', s=50))


In the Tabular Data guide we covered how to work with columnar data in HoloViews. Beyond tabular or column-based data, another data format is particularly common in science and engineering contexts: the multi-dimensional array. The gridded data interfaces allow working with such grid-based datasets directly.

Grid-based datasets have two types of dimensions:

• coordinate or key dimensions, which describe the sampling along each dimension of the value arrays
• value dimensions, which describe the quantities stored in the multi-dimensional value arrays

There are many different types of gridded datasets, which each approximate or measure a surface or space at discretely specified coordinates. In HoloViews, gridded datasets are typically one of three possible types: Regular rectilinear grids, irregular rectilinear grids, and curvilinear grids. Regular rectilinear grids can be defined by 1D coordinate arrays specifying the spacing along each dimension, while the other types require grid coordinates with the same dimensionality as the underlying value arrays, specifying the full n-dimensional coordinates of the corresponding array value. HoloViews provides many different elements supporting regularly spaced rectilinear grids, but currently only QuadMesh supports irregularly spaced rectilinear and curvilinear grids.
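The shape difference between these grid types can be sketched with plain NumPy (a minimal illustration; the array names here are made up for the example):

```python
import numpy as np

# Regular rectilinear grid: one 1D coordinate array per dimension
xs = np.linspace(0, 1, 10)        # shape (10,)
ys = np.linspace(0, 1, 5)         # shape (5,)
values = np.random.rand(5, 10)    # shape (len(ys), len(xs))

# Curvilinear grid: 2D coordinate arrays with the same shape as the
# value array, giving the full coordinates of every sample
Qx, Qy = np.meshgrid(xs, ys)      # each of shape (5, 10)
Qx = Qx + 0.1 * Qy**2             # warp the grid so the columns curve

print(xs.shape, ys.shape, Qx.shape, Qy.shape, values.shape)
```

A regular grid only needs the 1D arrays, since the full coordinates can be reconstructed by taking their outer product; irregular and curvilinear grids must spell the coordinates out per sample.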

The difference between regular rectilinear, irregular rectilinear, and curvilinear grids is best illustrated by the figure below:

In this section we will first discuss how to work with the simpler rectilinear grids and then describe how to define a curvilinear grid with 2D coordinate arrays.

## Declaring gridded data¶

All Elements that support a ColumnInterface also support the GridInterface. The simplest example of a multi-dimensional (or more precisely 2D) gridded dataset is an image, which has implicit or explicit x-coordinates, y-coordinates and an array representing the values for each combination of these coordinates. Let us start by declaring an Image with explicit x- and y-coordinates:

In [2]:
img = hv.Image((range(10), range(5), np.random.rand(5, 10)), datatype=['grid'])
img

Out[2]:

In the above example we declared 10 samples along the x-axis and 5 samples along the y-axis, and then defined a random 5x10 array matching those dimensions, following the NumPy (row, column) indexing convention. When passing a tuple, HoloViews will use the first gridded data interface, which stores the coordinate and value arrays as a dictionary mapping each dimension name to a NumPy array:

In [3]:
img.data

Out[3]:
{'x': array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9]),
 'y': array([0, 1, 2, 3, 4]),
 'z': array([[0.48510674, 0.50325088, 0.31535198, 0.11453187, 0.55194292,
         0.95232808, 0.86077944, 0.467197  , 0.10089953, 0.94894205],
        [0.98716956, 0.3415935 , 0.20625179, 0.28461674, 0.95970214,
         0.35433381, 0.64744376, 0.02188068, 0.49298846, 0.40083158],
        [0.33648117, 0.18816079, 0.89971983, 0.37765315, 0.671843  ,
         0.68731615, 0.62717507, 0.62601911, 0.88932214, 0.88563426],
        [0.80393662, 0.09917556, 0.80870086, 0.30466743, 0.83136121,
         0.25500116, 0.87447758, 0.40504659, 0.06728174, 0.51888748],
        [0.92136212, 0.01201242, 0.78078864, 0.01817616, 0.58764789,
         0.8937471 , 0.14908855, 0.27323052, 0.40029169, 0.25030753]])}

However, HoloViews also ships with an interface for xarray, and the GeoViews library ships with an interface for Iris cubes; both are common libraries for working with multi-dimensional datasets:

In [4]:
arr_img = img.clone(datatype=['image'])
print(type(arr_img.data))

try:
    xr_img = img.clone(datatype=['xarray'])
    print(type(xr_img.data))
except Exception:
    print('xarray interface could not be imported.')

<class 'numpy.ndarray'>
<class 'xarray.core.dataset.Dataset'>


In the case of an Image HoloViews also has a simple image representation which stores the data as a single array and converts the x- and y-coordinates to a set of bounds:

In [5]:
print("Array type: %s with bounds %s" % (type(arr_img.data), arr_img.bounds))

Array type: <class 'numpy.ndarray'> with bounds BoundingBox(points=((-0.5,-0.5),(9.5,4.5)))


To summarize, the constructor accepts a number of formats, where the value arrays should always match the shape of the coordinate arrays:

1. A simple np.ndarray along with (l, b, r, t) bounds
2. A tuple of the coordinate and value arrays
3. A dictionary of the coordinate and value arrays indexed by their dimension names
4. An xarray DataArray or Dataset
5. An Iris cube

## Working with a multi-dimensional dataset¶

A gridded Dataset may have as many dimensions as desired; however, individual Element types only support data of a certain dimensionality. Therefore we usually declare a Dataset to hold our multi-dimensional data and convert it from there.

In [6]:
dataset3d = hv.Dataset((range(3), range(5), range(7), np.random.randn(7, 5, 3)),
                       ['x', 'y', 'z'], 'Value')
dataset3d

Out[6]:
:Dataset   [x,y,z]   (Value)

This is because even a 3D multi-dimensional array represents volumetric data, which we can display easily only if it contains relatively few samples. In this simple case we can get an overview of what this data looks like by casting it to a Scatter3D Element, which will help us visualize the operations we are applying to the data:

In [7]:
hv.Scatter3D(dataset3d)

Out[7]:

### Indexing¶

In order to explore the dataset we therefore often want to define a lower dimensional slice into the array and then convert the dataset:

In [8]:
dataset3d.select(x=1).to(hv.Image, ['y', 'z']) + hv.Scatter3D(dataset3d.select(x=1))

Out[8]:

### Groupby¶

Another common method to apply to our data is to facet or animate the data using groupby operations. HoloViews provides a convenient interface to apply groupby operations and select which dimensions to visualize.

In [9]:
(dataset3d.to(hv.Image, ['y', 'z'], 'Value', ['x']) +
 hv.HoloMap({x: hv.Scatter3D(dataset3d.select(x=x)) for x in range(3)}, kdims='x'))

Out[9]:

### Aggregating¶

Another common operation is to aggregate the data with a function, thereby reducing a dimension. You can either aggregate the data by passing the dimensions to aggregate over, or reduce a specific dimension. Both approaches produce the same result:

In [10]:
hv.Image(dataset3d.aggregate(['x', 'y'], np.mean)) + hv.Image(dataset3d.reduce(z=np.mean))

Out[10]:

By aggregating the data we can reduce it to any number of dimensions we want. We can for example compute the spread of values for each z-coordinate and plot it using a Spread and Curve Element. We simply aggregate by that dimension and pass the aggregation functions we want to apply:

In [11]:
hv.Spread(dataset3d.aggregate('z', np.mean, np.std)) * hv.Curve(dataset3d.aggregate('z', np.mean))

Out[11]:

It is also possible to generate lower-dimensional views into the dataset, which can be useful to summarize the statistics of the data along a particular dimension. A simple example is a box-whisker plot of the Value for each x-coordinate. Using the .to conversion interface we declare that we want a BoxWhisker Element indexed by the x dimension and showing the Value dimension. Additionally we have to set groupby to an empty list, because by default the interface will group over any remaining dimensions.

In [12]:
dataset3d.to(hv.BoxWhisker, 'x', 'Value', groupby=[])

Out[12]:

Similarly we can generate a Distribution Element showing the Value dimension, group by the 'x' dimension and then overlay the distributions, giving us another statistical summary of the data:

In [13]:
dataset3d.to(hv.Distribution, 'Value', [], groupby='x').overlay()

Out[13]:

## Categorical dimensions¶

The key dimensions of the multi-dimensional arrays do not have to represent continuous values; we can also display datasets with categorical variables, for example as a HeatMap Element:

In [14]:
heatmap = hv.HeatMap((['A', 'B', 'C'], ['a', 'b', 'c', 'd', 'e'], np.random.rand(5, 3)))
heatmap + hv.Table(heatmap)

Out[14]:

## Non-uniform rectilinear grids¶

As discussed above, there are two main types of grids handled by HoloViews. So far, we have mainly dealt with uniform, rectilinear grids, but we can use the QuadMesh element to work with non-uniform rectilinear grids and curvilinear grids.

In order to define a non-uniform, rectilinear grid we can declare explicit irregularly spaced x- and y-coordinates. In the example below we specify the x/y-coordinate bin edges of the grid as arrays of shape M+1 and N+1 and a value array (zs) of shape NxM:

In [15]:
n = 8  # Number of bins in each direction
xs = np.logspace(1, 3, n)
ys = np.linspace(1, 10, n)
zs = np.arange((n-1)**2).reshape(n-1, n-1)
print('Shape of x-coordinates:', xs.shape)
print('Shape of y-coordinates:', ys.shape)
print('Shape of value array:', zs.shape)
hv.QuadMesh((xs, ys, zs))

Shape of x-coordinates: (8,)
Shape of y-coordinates: (8,)
Shape of value array: (7, 7)

Out[15]:

## Curvilinear grids¶

To define a curvilinear grid the x/y-coordinates of the grid should be defined as 2D arrays of shape NxM or N+1xM+1, i.e. either as the bin centers or the bin edges of each 2D bin.

In [16]:
n = 20
coords = np.linspace(-1.5, 1.5, n)
X, Y = np.meshgrid(coords, coords)
Qx = np.cos(Y) - np.cos(X)
Qy = np.sin(Y) + np.sin(X)
Z = np.sqrt(X**2 + Y**2)
qmesh = hv.QuadMesh((Qx, Qy, Z))

print('Shape of x-coordinates:', Qx.shape)
print('Shape of y-coordinates:', Qy.shape)
print('Shape of value array:', Z.shape)

qmesh

Shape of x-coordinates: (20, 20)
Shape of y-coordinates: (20, 20)
Shape of value array: (20, 20)

Out[16]:

## Working with xarray data types¶

As demonstrated previously, Dataset comes with support for the xarray library, which offers a powerful way to work with multi-dimensional, regularly spaced data. In this example, we'll load an example dataset, turn it into a HoloViews Dataset and visualize it. First, let's have a look at the xarray dataset's contents:

In [17]:
xr_ds = xr.tutorial.open_dataset("air_temperature").load()
xr_ds

Out[17]:
<xarray.Dataset>
Dimensions:  (lat: 25, lon: 53, time: 2920)
Coordinates:
  * lat      (lat) float32 75.0 72.5 70.0 67.5 65.0 ... 25.0 22.5 20.0 17.5 15.0
  * lon      (lon) float32 200.0 202.5 205.0 207.5 ... 322.5 325.0 327.5 330.0
  * time     (time) datetime64[ns] 2013-01-01 ... 2014-12-31T18:00:00
Data variables:
    air      (time, lat, lon) float32 241.2 242.5 243.5 ... 296.49 296.19 295.69
Attributes:
    Conventions:  COARDS
    title:        4x daily NMC reanalysis (1948)
    description:  Data is from NMC initialized reanalysis\n(4x/day).  These a...
    platform:     Model
    references:   http://www.esrl.noaa.gov/psd/data/gridded/data.ncep.reanaly...

It is trivial to turn this xarray Dataset into a HoloViews Dataset (the same also works for a DataArray):

In [18]:
hv_ds = hv.Dataset(xr_ds)[:, :, "2013-01-01"]
print(hv_ds)

:Dataset   [lat,lon,time]   (air)


We have used the usual slice notation to select a single day in this rather large dataset. Finally, let's visualize the dataset by converting it to a HoloMap of Images using the to() method. We need to specify which of the dataset's key dimensions will be consumed by the images (in this case "lat" and "lon"), while the remaining key dimensions will be associated with the HoloMap (here: "time"). We'll use the slice notation again to clip the longitude.

In [19]:
airtemp = hv_ds.to(hv.Image, kdims=["lon", "lat"], dynamic=False)
airtemp[:, 220:320, :].opts(colorbar=True, fig_size=200)

Out[19]:

Here we have explicitly specified the default behaviour dynamic=False, which returns a HoloMap. Note that this approach immediately converts all available data to images, which can take up a lot of RAM for large datasets. In those situations, use dynamic=True to generate a DynamicMap instead. Additionally, xarray features dask support, which is helpful when dealing with large amounts of data.

It is also possible to render curvilinear grids with xarray, and here we will load one such example. The dataset below defines a curvilinear grid of air temperatures varying over time. The curvilinear grid can be identified by the fact that the xc and yc coordinates are defined as two-dimensional arrays:

In [20]:
rasm = xr.tutorial.open_dataset("rasm").load()
rasm.coords

Out[20]:
Coordinates:
  * time     (time) object 1980-09-16 12:00:00 ... 1983-08-17 00:00:00
    xc       (y, x) float64 189.2 189.4 189.6 189.7 ... 17.65 17.4 17.15 16.91
    yc       (y, x) float64 16.53 16.78 17.02 17.27 ... 28.26 28.01 27.76 27.51

To simplify the example we will select a single timepoint and add explicit coordinates for the x and y dimensions:

In [21]:
rasm = rasm.isel(time=0, x=slice(0, 200)).assign_coords(x=np.arange(200), y=np.arange(205))
rasm.coords

Out[21]:
Coordinates:
    time     object 1980-09-16 12:00:00
    xc       (y, x) float64 189.2 189.4 189.6 189.7 ... 45.28 44.79 44.29 43.8
    yc       (y, x) float64 16.53 16.78 17.02 17.27 ... 43.97 43.84 43.7 43.55
  * x        (x) int64 0 1 2 3 4 5 6 7 8 ... 191 192 193 194 195 196 197 198 199
  * y        (y) int64 0 1 2 3 4 5 6 7 8 ... 196 197 198 199 200 201 202 203 204

Now that we have defined both rectilinear and curvilinear coordinates we can visualize the difference between the two by explicitly defining which set of coordinates to use:

In [22]:
hv.QuadMesh(rasm, ['x', 'y']) + hv.QuadMesh(rasm, ['xc', 'yc'])

Out[22]:

Additional examples of visualizing xarrays in the context of geographical data can be found in the GeoViews documentation: Gridded Datasets I and Gridded Datasets II. These guides also contain useful information on the interaction between xarray data structures and HoloViews Datasets in general.

# API¶

## Accessing the data¶

In order to work with data in different formats, HoloViews defines a general interface to access the data. The dimension_values method returns the values along a given dimension as an array.

#### Key dimensions (coordinates)¶

By default dimension_values will return the expanded columnar format of the data:

In [23]:
heatmap.dimension_values('x')

Out[23]:
array(['A', 'A', 'A', 'A', 'A', 'B', 'B', 'B', 'B', 'B', 'C', 'C', 'C',
       'C', 'C'], dtype='<U1')

To access just the unique coordinates along a dimension simply supply the expanded=False keyword:

In [24]:
heatmap.dimension_values('x', expanded=False)

Out[24]:
array(['A', 'B', 'C'], dtype='<U1')

Finally, we can also get a non-flattened, expanded coordinate array, returning a coordinate array of the same shape as the value arrays:

In [25]:
heatmap.dimension_values('x', flat=False)

Out[25]:
array([['A', 'B', 'C'],
       ['A', 'B', 'C'],
       ['A', 'B', 'C'],
       ['A', 'B', 'C'],
       ['A', 'B', 'C']], dtype='<U1')

#### Value dimensions¶

When accessing a value dimension the method will similarly return a flat view of the data:

In [26]:
heatmap.dimension_values('z')

Out[26]:
array([0.79316582, 0.59570894, 0.7451207 , 0.92902691, 0.39054081,
       0.57369324, 0.26237319, 0.37263367, 0.49281484, 0.02597026,
       0.6359647 , 0.35958673, 0.37464033, 0.69497175, 0.72402514])

We can pass the flat=False argument to access the multi-dimensional array:

In [27]:
heatmap.dimension_values('z', flat=False)

Out[27]:
array([[0.79316582, 0.57369324, 0.6359647 ],
       [0.59570894, 0.26237319, 0.35958673],
       [0.7451207 , 0.37263367, 0.37464033],
       [0.92902691, 0.49281484, 0.69497175],
       [0.39054081, 0.02597026, 0.72402514]])