Here is a phenomenon that threw me. I hope this note might save someone else from tearing their hair out. The explanation is obvious once you know it, but it took me a while to find.
An example dataset will help, so that we can use actual numbers. This is aerial imagery for the Wellington region, 2006-2007, at 0.3m/px resolution, published as 1073 GeoTIFF tiles, each 8000 x 12000px, in NZTM projection (EPSG:2193).

The tiles are subdivisions of Land Information New Zealand (LINZ) printed map sheets. The LINZ sheets each cover 24000m x 36000m, and have bounding coordinates at integer values in NZTM. The imagery dataset cuts each map sheet into 10 divisions in X and 10 in Y, so 100 image tiles per sheet. For example, map sheet BP32 has its lower left corner at NZTM (1756000, 5442000). The lower left image tile within this sheet is BP32_5K_1001, and has the same lower left corner.

...Or does it? If we import this GeoTIFF tile into Manifold 9, the projection is recognised as EPSG:2193, the resolution is recognised as 0.3m/px, but the offsets look like this:

Local offset X: 1756000
Local offset Y: 5441990.4

Why is the Y offset different from what we expect? 5442000 - 5441990.4 = 9.6m. Not insignificant. It's the same for all the tiles in the dataset: the X offset always matches, but the Y offset is always 9.6m less than expected from the corner of the tiling scheme. Did the data publisher decide to shift the origin, to better fit the dataset, or due to some processing oversight? Short answer: no. Manifold 9 did it.
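The sheet and tile figures above hang together, which we can sanity-check in a few lines. This is just my own sketch of the arithmetic, using the numbers from the dataset description; the only assumption is that the lower left image tile shares the sheet's lower left corner.

```python
import math

SHEET_LL = (1756000.0, 5442000.0)   # lower left of LINZ sheet BP32 (NZTM)
SHEET_SIZE = (24000.0, 36000.0)     # sheet extent in metres
DIVISIONS = 10                      # 10 x 10 image tiles per sheet
PIXELS = (8000, 12000)              # image tile size in pixels
RES = 0.3                           # metres per pixel

# Each image tile should cover 1/10 of the sheet in each axis...
tile_m = (SHEET_SIZE[0] / DIVISIONS, SHEET_SIZE[1] / DIVISIONS)

# ...which matches the pixel dimensions times the resolution: 2400m x 3600m.
assert math.isclose(tile_m[0], PIXELS[0] * RES)
assert math.isclose(tile_m[1], PIXELS[1] * RES)

# So the expected lower left corner of BP32_5K_1001 is simply the sheet corner.
print(SHEET_LL)   # (1756000.0, 5442000.0)
```

The geometry is internally consistent, so the 9.6m discrepancy has to come from somewhere else.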
First thing to note: the different Y offset seems to suggest that imported images would not line up with a drawing of the LINZ map sheets, or of their exact 10 x 10 subdivisions. But opening both in a map, they do line up. Further, if we bring the same images into Manifold 8 or Global Mapper, the reported tile bounds do match the expected integer values. So what's up with the offsets in Manifold 9?
Manifold subdivides all images into tiles (or subtiles). Tile dimensions in powers of two make it easy to align image data for efficient processing on GPGPU (and possibly AVX). For this dataset it chooses a tile size of 256 x 256px. Each source image in this dataset measures 8000 x 12000px. That requires 8000 / 256 = 31.25 tiles in X, and 12000 / 256 = 46.875 tiles in Y. We can't have part tiles, so we need 32 x 47 whole tiles, with some pixels at the edges of each image padded with invisible (null) values. In this case we need (32 - 31.25) * 256 = 192 pixels of padding on one side, and (47 - 46.875) * 256 = 32 pixels of padding above or below. Looking at the vertical padding: 32 pixels at 0.3m/px is 9.6m. Aha.
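The padding arithmetic can be written out directly. This is a sketch of the calculation described above, not Manifold's actual code; the 256px internal tile size and 0.3m/px resolution come from this dataset.

```python
import math

IMG_W, IMG_H = 8000, 12000    # source image size in pixels
TILE = 256                    # Manifold 9 internal tile edge length
RES = 0.3                     # metres per pixel

# Round up to whole internal tiles in each axis.
tiles_x = math.ceil(IMG_W / TILE)   # 32
tiles_y = math.ceil(IMG_H / TILE)   # 47

# Pixels of invisible padding needed to fill out the partial tiles.
pad_x = tiles_x * TILE - IMG_W      # 192 px
pad_y = tiles_y * TILE - IMG_H      # 32 px

print(pad_x, pad_y, pad_y * RES)    # 192 32 9.6
```

The vertical padding, converted to ground units, is exactly the mystery discrepancy.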
So we can tell what Manifold 9 has done here. It has subdivided the image into tiles starting at the top left corner, leaving the right-hand column of tiles, and the bottom row, to be filled out with invisible padding. Because that padding row sits at the bottom, it pushes the reported lower left corner 9.6m south of the true image extent. That explains everything except for one obvious question: if Manifold started subdividing source images at the bottom left corner, instead of the top left, then the residual padding would not affect image offsets. The reported offsets in both X and Y would have the same values as expected from source metadata, and when comparing with (some) other packages. Then the invisible extra pixels really would be invisible--we wouldn't have to think about them at all. Hmmm.
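To make the anchoring question concrete, here is a toy comparison of the two conventions (my own sketch, not anything from Manifold's internals): anchoring the tile grid at the top left puts the padding row below the image, dragging the reported lower left Y down, while anchoring at the bottom left puts the padding above, leaving the offset untouched.

```python
IMG_H, TILE, RES = 12000, 256, 0.3
TRUE_LL_Y = 5442000.0               # lower left Y from source metadata

pad_px = -IMG_H % TILE              # 32 px of vertical padding

# Top-left anchored grid: the padding row lands below the image.
ll_y_top_anchored = TRUE_LL_Y - pad_px * RES    # 5441990.4

# Bottom-left anchored grid: the padding row lands above the image.
ll_y_bottom_anchored = TRUE_LL_Y                # 5442000.0

print(ll_y_top_anchored, ll_y_bottom_anchored)
```

Only the first convention reproduces the offset Manifold 9 reports for this dataset.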