Manifold System 9.0.168.8
adamw


8,447 post(s)
#29-Jan-19 18:18

9.0.168.8

Here is the long-awaited build.

The focus of the build was GPGPU. We used to have window raster functions (aspect, slope, etc) that ran on GPGPU in 8 but not in 9, because the processing pipeline in 9 is very different from that in 8 and the functions needed a couple of features that 9's pipeline didn't yet have. With the new year, the cleanup it brought, and the switch to the new version of CUDA that came with the cleanup, we decided to close this gap once and for all: enhance the processing pipeline in 9 and put the functions onto GPGPU. This spurred changes in several other places, as you can see below. Plus we couldn't resist adding a couple of extra optimizations. The result is before you.

Future builds will return to legends, etc. The time between builds will decrease as well; we want to ship them more often.

manifold-9.0.168.8-x64.zip

SHA256: 7e2eb54f89192948dff9879dcf4b79d20dd922a39cc9e45a9c66142043e77904

manifold-viewer-9.0.168.8-x64.zip

SHA256: f49bbf258f92659eaad8d3dc8bb0ff77d3073457def3dafea0c7b22aa67d7fe5

adamw


8,447 post(s)
#29-Jan-19 18:21

GPGPU

CUDA is restricted to 64-bit builds. (This change was a long time coming, with NVIDIA gradually reducing support for 32-bit CUDA everywhere. For example, we have been compiling CUDA modules for 32-bit and 64-bit using completely different toolsets for years already. We could technically have continued to support 32-bit CUDA for a while longer, but the costs of doing so kept increasing: 64-bit modules were at times subject to the lowest common denominator between the 32-bit and 64-bit CUDA worlds, modules had to be debugged separately, and so on. With the writing on the wall for 32-bit CUDA for some time, we decided to stop supporting it starting with this build. The main appeal of CUDA is performance, and if someone is after performance, it is time to switch to 64-bit.)

CUDA is enabled in Viewer.

GPGPU code produced by the query engine supports processing tiles with borders of up to 8 pixels. Calls to tile functions which use borders of more than 8 pixels automatically go to the CPU, and can freely intermix with calls going to the GPGPU.

GPGPU code produced by the query engine allows individual operations to store intermediate results on the GPGPU device without passing them to CPU and back. (This saves quite a bit of time when chaining operations that, for example, depend on neighbor pixels.)

New query function: TileJson - takes a tile and prints it into a JSON array: [ p1, p2, p3, ... ]. Pixels are ordered by Y and then by X, with row 0 going first. There are no extra separators between rows. If there are multiple channels, each pixel is printed as the value of channel 0, then the value of channel 1, etc: [ ... c0, c1, c2, ... ]. Invisible pixels are printed as 'null' (an allowed keyword in JSON). If there are multiple channels, invisible pixels are printed as a sequence of 'null' values, one for each channel.

New query function: StringJsonTile - takes a string, the number of pixels by X and Y, and the number of channels, and parses the string into a tile. The format is the same as used by TileJson. The function also takes a boolean 'strict' flag which defines what to do when the number of values in the string is fewer or more than X * Y * channels: if 'strict' is true, the function returns NULL, otherwise the function returns a completed tile ignoring the extra values in the string, or an incomplete tile padded with invisible pixels.
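Taken together, TileJson and StringJsonTile define a simple round-trippable text format. Here is a Python sketch of the layout described above, with plain nested lists standing in for tiles; the function names and the all-channels-null convention for invisible pixels are illustrative assumptions, not Manifold API:

```python
import json

def tile_to_json(tile, channels):
    """Flatten by Y then X (row 0 first), channels interleaved; an
    invisible pixel (None) becomes one null per channel."""
    flat = []
    for row in tile:
        for px in row:
            flat.extend([None] * channels if px is None else px)
    return json.dumps(flat)

def json_to_tile(s, cx, cy, channels, strict):
    """Mimic the 'strict' flag: a wrong value count yields None (NULL)
    when strict; otherwise truncate extras or pad the tail with
    invisible pixels."""
    vals = json.loads(s)
    want = cx * cy * channels
    if strict and len(vals) != want:
        return None
    vals = (vals + [None] * want)[:want]       # pad or truncate
    px = [vals[i:i + channels] for i in range(0, want, channels)]
    # a pixel is invisible if every channel value is null
    px = [None if all(v is None for v in p) else p for p in px]
    return [px[y * cx:(y + 1) * cx] for y in range(cy)]
```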

New query function: TileRemoveBorder - removes a border of the specified size from a tile. If the border size is too big for the passed tile, returns NULL. If the border size is negative or zero, returns the unmodified tile. TileAspect, TileBlur, TileBlurDirection, TileBlurGaussian, TileEdges, TileEdgesDirection, TileSharpen, TileSlope no longer remove the border from the passed tile automatically; the corresponding transform templates are reworked to use TileRemoveBorder.
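The removal rules read, in sketch form, like this Python (illustrative only; the exact threshold for "too big" is an assumption):

```python
def tile_remove_border(tile, border):
    """border <= 0: tile unchanged; border too big for the tile: None
    (standing in for NULL); otherwise strip `border` pixels per edge."""
    if border <= 0:
        return tile
    cy, cx = len(tile), len(tile[0])
    if 2 * border >= cx or 2 * border >= cy:
        return None
    return [row[border:cx - border] for row in tile[border:cy - border]]
```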

New query function: TileFilter - takes an input tile, a radius and a tile with filter coefficients, and performs a linear filter. (This is a generalization of TileBlur and similar functions, which imply specific ways to create the filter tile.)

There is a GPGPU version of TileFilter. Limitations: radius must be at most 8 (otherwise the query engine will use CPU).
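To make the generalization concrete, here is a minimal Python model of such a linear filter. The normalization by the sum of coefficients over visible pixels is inferred from the worked blur examples later in the thread, not from documentation, and border pixels come out invisible:

```python
def tile_filter(tile, radius, coeffs):
    """Coefficient-weighted average over the (2r+1)x(2r+1) neighborhood.
    Invisible pixels (None) do not contribute; the result is normalized
    by the coefficient sum over the pixels that did contribute."""
    cy, cx = len(tile), len(tile[0])
    out = [[None] * cx for _ in range(cy)]
    for y in range(radius, cy - radius):
        for x in range(radius, cx - radius):
            num = den = 0.0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    v = tile[y + dy][x + dx]
                    c = coeffs[dy + radius][dx + radius]
                    if v is not None:
                        num += c * v
                        den += c
            out[y][x] = num / den if den else None
    return out
```

With a uniform 3x3 coefficient tile this reproduces the blur examples later in the thread: the center of [1..9] averages to 5, and a center peak of 2 in a field of 1s averages to 10/9.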

Query functions for built-in filters have been reworked to produce filter definitions to pass to TileFilter. (This automatically makes transforms for built-in filters use GPGPU. Also, one can now easily check built-in filter definitions by using TileJson or TileToValues.)

  • TileBlur - renamed to TileFilterDefBlur, 'power' parameter changed to 'center' (provides the value for the center pixel, forced to be at least 1, higher values result in more contribution from the original pixel = less blur).
  • TileBlurDirection - renamed to TileFilterDefBlurDirection, 'power' parameter changed to 'center'.
  • TileBlurGaussian - renamed to TileFilterDefBlurGaussian, 'power' parameter changed to 'center', filter shape changed to use a decaying exponent.
  • TileEdges - renamed to TileFilterDefEdges, 'power' parameter changed to 'center', filter shape changed to a Laplacian of Gaussian, normed to the center value.
  • TileEdgesDirection - renamed to TileFilterDefEdgesDirection, 'power' parameter changed to 'center'.
  • TileSharpen - renamed to TileFilterDefSharpen, 'power' parameter changed to 'center'.

New query function: TileFilterDefCross - produces the definition for a cross-shaped filter of specified radius. New transform: Blur, Cross - performs a cross-shaped linear filter.

New query function: TileFilterMedian - takes an input tile, a radius and a tile with filter coefficients, and performs a median filter. Pixels with non-zero filter coefficients participate in computing the median, whereas pixels with zero filter coefficients and invisible pixels do not. New transforms: Median - performs a square median filter. Median, Cross - performs a cross-shaped median filter.

There is a GPGPU version of TileFilterMedian. Limitations: radius must be at most 8, the number of non-zero coefficients must be at most 32.
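The masking rule can be sketched in Python (illustrative, not Manifold code): pixels whose coefficient is zero, and invisible pixels, are simply left out of the median.

```python
import statistics

def tile_filter_median(tile, radius, coeffs):
    """Median over the (2r+1)x(2r+1) neighborhood, keeping only visible
    pixels with a non-zero filter coefficient; border pixels stay
    invisible (None)."""
    cy, cx = len(tile), len(tile[0])
    out = [[None] * cx for _ in range(cy)]
    for y in range(radius, cy - radius):
        for x in range(radius, cx - radius):
            vals = [tile[y + dy][x + dx]
                    for dy in range(-radius, radius + 1)
                    for dx in range(-radius, radius + 1)
                    if coeffs[dy + radius][dx + radius] != 0
                    and tile[y + dy][x + dx] is not None]
            out[y][x] = statistics.median(vals) if vals else None
    return out

# a cross-shaped radius-1 definition: center plus the four axial neighbors
cross = [[0, 1, 0],
         [1, 1, 1],
         [0, 1, 0]]
```

Note how, unlike a mean, the median simply discards an outlier in the center pixel.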

Coordinate systems store local offset and local scale values as well as units for Z. Z metrics are reported and edited as part of coordinate system metrics. (This change is coming in advance of universal support for vertical datums which we are planning to add in the future. Even before that, we are planning to use Z metrics for many things - eg, for shading. This is a big addition that echoes in many places, from dataports to transforms.)

New query function: ComponentCoordSystemScaleXYZ - takes a component and returns a FLOAT64X3 value with metric scales for X, Y and Z. If the component is lat/lon, metric scales for X and Y are approximated: images use scales at the center latitude of the image, drawings use scales at the latitude of 0.
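As a rough illustration of what metric scales mean for a lat/lon component: meters per degree along X shrink with the cosine of the latitude, while Y stays roughly constant. This Python sketch uses a spherical Earth with an assumed mean radius; Manifold's ellipsoidal math will differ slightly:

```python
import math

R = 6371000.0  # assumed mean Earth radius in meters (spherical model)

def latlon_scale_xy(lat_deg):
    """Approximate meters per degree along X and Y at a given latitude."""
    m_per_deg = math.pi / 180.0 * R  # about 111 km per degree of latitude
    return m_per_deg * math.cos(math.radians(lat_deg)), m_per_deg
```

This is also why the function approximates: an image uses the latitude of its center, so the X scale is a single representative value rather than varying per row.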

TileAspect and TileSlope query functions take an additional parameter for XYZ scales.
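To see where the scales enter, here is a textbook central-difference slope in Python; this is a generic sketch, not necessarily Manifold's exact kernel or radius handling. The X and Y scales convert pixel steps to ground distance, and the Z scale converts the height values:

```python
import math

def slope_deg(tile, y, x, sx, sy, sz):
    """Slope in degrees at (y, x) from central differences; sx, sy are
    ground distances between pixel centers, sz scales the heights."""
    dzdx = (tile[y][x + 1] - tile[y][x - 1]) * sz / (2.0 * sx)
    dzdy = (tile[y + 1][x] - tile[y - 1][x]) * sz / (2.0 * sy)
    return math.degrees(math.atan(math.hypot(dzdx, dzdy)))
```

An inclined plane rising one height unit per pixel gives 45 degrees with unit scales; telling the function the pixels are 20 units apart flattens it to about 2.9 degrees, which mirrors the TileSlope examples later in the thread.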

New query functions: TileCurvGaussian, TileCurvMean, TileCurvPlan, TileCurvProfile - take an input tile and a radius, and compute curvature of the specified type. New transforms: Curvature, Gaussian / Mean / Plan / Profile.

There are GPGPU versions of TileAspect, TileSlope, TileCurvGaussian, TileCurvMean, TileCurvPlan, TileCurvProfile query functions. Limitations: radius must be at most 8.

TileMakeNew query function creates big tiles significantly faster. If the requested tile size is too big, the function returns NULL instead of throwing an error.

Other

The About dialog reports release ID for Windows 10 and Windows Server 2016 (eg, 1809).

WEBP files are read progressively. (Canceling the process of reading a big file is more responsive, etc.)

(Fix) Exporting a big drawing to SHP no longer splits the produced files way before they reach the limit of 2 GB.

There is a new dataport for HEIF files (*.HEIC; there is also a video version, *.HEVC). The dataport uses a system component provided by Windows 10 (for the current release of Windows 10 it has to be installed from the Windows Store - provided at no charge; future releases of Windows 10 will likely have it built in). The dataport can both read and write HEIF files.

MySQL dataport supports MySQL 8.0 (which adds plenty of things related to geometry, including support for coordinate systems).

End of list.

vincent

1,767 post(s)
#30-Jan-19 17:29

(Fix) Exporting a big drawing to SHP no longer splits the produced files way before they reach the limit of 2 GB.

The bug with the exported lines that I reported looks fixed, thank you. However, M9 exported 738 points with empty geometry along with the shapefile of lines.

I made contour lines from a raster. I expected to obtain just contour lines when I export to SHP. Why export a point without geometry for every elevation value? Seems... pointless.

Dimitri


5,359 post(s)
#30-Jan-19 18:51

If I recall correctly, the sample data set you posted had numerous points with empty geometry. If those are in the drawing, they should be exported. If they are not wanted in the exported shapefile, they should be removed before export.

Are they in the drawing, and thus, should be exported?

vincent

1,767 post(s)
#30-Jan-19 19:46

The drawing was generated by the Contour Lines transform in M9 from a raster. Before the last update to SHP export, no points were exported with this transform (so no points were created then?).

I gave this workflow (import raster / Contour Lines transform / export to SHP) a go today, and points are generated and exported.

dchall8
578 post(s)
#30-Jan-19 20:44

Last week I got a point SHP file after exporting contour lines created from a DEM image. I ignored the point SHP. It was unexpected, but it didn't occur to me it might be a bug.

tjhb

8,657 post(s)
#31-Jan-19 01:14

This is not a bug.

To recap, when creating contours, you specify elevations with a range and a step.

A contour is made at every specified elevation within the range, regardless of whether the source image has any pixels at that elevation.

When an image has no pixels at a given elevation, the corresponding contour metric is NULL. (As an aside, it is not a null line, not a null point, not a null area, but just NULL. Likewise a function like GeomIsLine() (...) also returns NULL--meaning the object type is unknown. Very nice.)

When you export the resulting drawing to SHP, the exporter sees that you are trying to export some lines plus some NULL metrics. What should it do? Bear in mind that the exporter does not know how the drawing was made, where the NULL metrics came from, or whether they "should be" lines, areas or points.

So it does the best it can, exporting valid lines as polylines, valid areas as polygons, valid points as points (multipoints separately)--and any NULL metrics as simply as possible, as invalid points.

An alternative might be for the exporter to skip the NULL metrics, either silently (bad) or with a warning (OK).

Seems...pointless.

Nice! But I think there is a point. If the invalid points are a surprise, they may show an incorrect assumption about data or a workflow error, which may be important. Bearing in mind (again) that whatever is done for one drawing must be done for any other, whatever it represents and however it was made (and by whom).

To avoid exporting the invalid points, we can do one of two things:

(a) Specify a range of contours that matches the elevation range of the image. For this you can use the statistics calculated by the Style panel (a very close estimate of the actual range, and super fast). Or, to be absolutely exact, use a query like

SELECT MIN([Value]) AS [min], MAX([Value]) AS [max]

FROM 

    (

    SELECT 

    SPLIT CALL TileToValues([Tile])

    FROM [Image Tiles]

    )

;

(b) Alternatively (or as well), check the contours drawing after creation, and remove any NULL geometry before export, using a query like

DELETE FROM [Image Contour Lines Table]

WHERE [Geom] IS NULL

;

tjhb

8,657 post(s)
#31-Jan-19 01:31

[The manual puts it better:

The returned table contains a record for each unique height. Some of the geoms in the returned table might be NULL values.

See SQL Functions under TileContourLines, TileContourAreas.]

adamw


8,447 post(s)
#31-Jan-19 08:44

Tim explained it all above - not a bug.

Two additions:

In addition to using a query, one can also get rid of NULL geoms by using the Select pane: Null Values - Geom, Replace Selection, then Edit - Delete.

We'll consider adding an option to remove NULLs automatically to the Contour Areas / Lines *transforms* (not functions).

tjhb

8,657 post(s)
#31-Jan-19 08:57

We'll consider adding an option to remove NULLs automatically to the Contour Areas / Lines *transforms* (not functions).

Is this worth thinking about instead, or as well? It might be an annoying exception from a design point of view (unless another use can be found for other transforms):

A Suggest... button added to the Contour Areas and Contour Lines transform templates (perhaps below the Max height box and above the Step box), which would run the same fast stats as the Style panel and deposit the (estimated) min and max elevation values (floor and ceil, or ceil and floor?) into the Min height and Max height boxes.

adamw


8,447 post(s)
#31-Jan-19 09:26

This seems like a good idea.

Dimitri


5,359 post(s)
#31-Jan-19 08:05

Tim has discussed this, but there would be less need to guess and less mushing of separate issues together (always unproductive when debugging...) if you had answered my question.

I asked:

If I recall correctly, the sample data set you posted had numerous points with empty geometry. [...]

Are they in the drawing, and thus, should be exported?

That's a simple question to answer. Open the m9.map file for which you posted a link and open the drawing's table. Do you see any NULL values in the Geom field?

That's an important question because if the answer is "Yes" you know shapefile export is OK and can move on to learning about how and why those NULLs got in there.

This bit...

The drawing was generated by... [etc. ...]

... doesn't answer that key question.

So.... I tracked down the link you posted, downloaded your sample m9.map file, opened it, and took a look: your drawing does in fact contain records with NULL Geom values.

Given that the drawing you told Manifold to export to shapefiles contains NULL Geom values, you should not be surprised that Manifold did as you instructed. All the objects, including NULL geom values, got exported.

How it is that your workflow created records with NULL Geom values is a different question: "When I create contours I end up with records that have NULL Geom values... anybody know why?"

Tim had the insight to guess what the situation might be and has covered what's likely going on. But in a debugging situation it is better to separate what should and should not happen on export from what results one should get from contouring.

adamw


8,447 post(s)
#29-Jan-19 18:23

Performance of raster functions

Here are several tests illustrating the performance of raster functions compared to 8. We compare to 8 because using GPGPU for raster computations makes for a big difference, and 8 can use it too.

Correctness

The results of raster computations on 8 and 9 can be slightly different for several reasons. First, GPGPU code in 8 uses 32-bit floating-point math, while 9 by default uses 64-bit floating-point math. Second, computing coordinate system scales in 9 is more accurate than in 8. Third, in a couple of places 9 uses faster code that produces values that are different from those produced by 8, but are semantically the same - eg, 8 can return an aspect value as -180 and 9 can return the same value as 180 = the angle is the same, but there is a numeric difference. Fourth, 9 better protects against errors accumulating during matrix computations.

All in all, differences from 8 are typically very small - frequently 0 (for comparisons of CPU to CPU with 64-bit floating point, or GPU to GPU with 32-bit floating point) and typically in the range of 1e-16 or so otherwise.
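For the aspect case, a handy way to compare 8 and 9 output is to treat angles modulo 360, so that -180 and 180 compare equal. A small illustrative helper (not part of either product):

```python
def same_angle(a, b, tol=1e-9):
    """True if two angles in degrees denote the same direction, e.g.
    -180 and 180, or 0 and 360."""
    d = abs(a - b) % 360.0
    return min(d, 360.0 - d) <= tol
```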

Tests

All timings are in seconds. The test machine has an 8-core CPU and a GPGPU (GTX 950, CUDA compute 5.2).

Test 1: 3,500 x 3,500 pixels, aspect with radius 1.

8, CPU: 3.390

9, CPU, forced to use a single thread: 1.905

9, CPU, multiple threads: 0.557

8, GPU: 4.223

9, GPU, multiple threads: 1.573

Test 2: 3,500 x 3,500 pixels, aspect with radius 2.

8, CPU: 94.984

9, CPU, forced to use a single thread: 38.077

9, CPU, multiple threads: 7.316

8, GPU: 4.421

9, GPU, multiple threads: 2.285

Test 3: 3,500 x 3,500 pixels, aspect with radius 3.

8, CPU: 176.687

9, CPU, forced to use a single thread: 69.646

9, CPU, multiple threads: 13.532

8, GPU: 4.816

9, GPU, multiple threads: 2.732

We can see that 9 heavily outperforms 8 in all cases. Single-thread performance is better partly because of optimizations in the CPU versions of raster functions, but more importantly because of better data storage. Performance on GPU is better because of optimizations in the GPU versions of raster functions (many quite significant) and because, while 8 tries to get the entire image onto the device, which creates a long wait before processing can even begin, 9 uses a much better streaming model which masks delays. Plus 9 can use multiple threads, obviously, which also helps a lot.

(By the way, Windows 10 shows GPGPU activity in Task Manager - one can now see the spikes from Manifold queries there, both from 8 and 9. See the Performance tab, the GPU section and be sure to switch the frames from the default Video Encode / Decode to Compute_0 / 1 / etc.)

Let's increase the data size.

Test 4: 10,240 x 10,240 pixels, slope with radius 1.

(no longer reporting numbers for CPU)

8, GPU: 34.346

9, GPU, multiple threads: 6.461

Test 5: 10,240 x 10,240 pixels, slope with radius 2.

8, GPU: 36.481

9, GPU, multiple threads: 11.300

Test 6: 10,240 x 10,240 pixels, median filter -> multiply by constant -> slope, all with radius 1.

8, GPU: 36.346 (the time is dominated by uploading data to the device)

9, GPU, multiple threads: 6.413

Let's increase the data size one more time.

Test 7: 20,480 x 20,480 pixels, slope with radius 1.

8, GPU: 397.654

9, GPU, multiple threads: 77.969

Also, in all tests above, the numbers for 8 do not include the time to produce intermediate levels, while the numbers for 9 do.

Enjoy.

Some examples of using new query functions tomorrow.

tjhb

8,657 post(s)
#29-Jan-19 18:38

Frabjous and vorpal!

Brilliant how you've exposed TileFilter (and new readable unpacking function). A whole nother level of tool, very powerful. And... self-similarity again.

As for the new transport model (to GPU, leaving results on GPU between kernels)... I will post some test timings too, using chains of functions in 8 and 9.

adamw


8,447 post(s)
#30-Jan-19 14:05

8 does not specifically attempt to keep intermediate results on GPU, however if the processing is sequential (eg, Filter1(Filter2(Filter3(...)))), then the driver will be able to hide a fair bit of the transfers by remapping memory cleverly (when 8 transfers the result of Filter3 onto CPU and then immediately tries to load it back to GPU as an input to Filter2, this can go fast because the data is still in various caches - simplifying things). If the processing is non-sequential (eg, Filter1(Filter2(...)+Filter3(...))), however, the transfers will be visible.

The biggest problem with 8 is that it attempts to load an entire image into memory. This is a fair approach for small amounts of data, but as data size grows, it just stops working. As data size inches closer to the limit, performance decreases significantly due to all kinds of non-linear effects. 8 has that in spades: if you are working with large images, with each operation touching a lot of memory, things just start taking longer and longer - then the disk starts thrashing seemingly at random, and when it stops things get a little better, then they start getting worse again, etc. 9 avoids all that by processing rasters by tiles. Each tile takes a predictable amount of memory and processes at a predictable speed, whatever the image size. And with the GPU being fed from multiple threads, data transfers between CPU and GPU are heavily masked - there are always a couple of tiles being copied in each direction and always a couple of tiles on which the GPU is currently working. It's just a better approach.

dchall8
578 post(s)
#30-Jan-19 14:23

Does this mean I can go back to caching tiles from Google Earth? When I first started this job we had slow Internet, so I cached every tile. At first it was fast, but as I built up almost 300 gigabytes of tiles, loading and saving in M8 could take 40 minutes. After we got a higher speed line to the building, I stopped caching. It takes slightly more time to zoom and pan around with the GE layer on, but it's not that bad. Would I now want to return to caching the tiles in M9?

Dimitri


5,359 post(s)
#30-Jan-19 14:45

The discussion of tiles as they are used within GPGPU is not connected to tiles brought down from GE or other image servers... two very different things.

But, to answer your question, although 9 is faster, why burn a lot of space in projects caching the tiles you use? With 9, caching those tiles is on a per-project basis, not into some central cache on disk. That's a good idea because it enables such caches to travel with the project, but it does result in some big project sizes.

My advice is to leave it with how the default now is set, which is to cache for the duration of the session but not to cache between sessions.

Dimitri


5,359 post(s)
#30-Jan-19 15:51

A quick heads up to anyone who wants to repeat such timings with ArcGIS or other packages such as QGIS: you can't compare the more interesting timings, at a Radius of 2 or a Radius of 3, because both Arc and Q in their slope and aspect functions are limited to a Radius of 1.

If you take a look at the timings when both GPU parallelism and CPU parallelism are turned off, giving a classic single-threaded, one core, CPU-only computation, you can see that computing using a Radius of 3 takes about 35 times longer than using a Radius of 1.

There is relatively little computation to do with a Radius of 1, so little that with Manifold parallel CPU is usually faster than dispatching to GPU. I think that's why Q and Arc limit their computation to Radius 1, since that's straightforward and doesn't take forever as data sizes get larger. With bigger data, computing Aspect with Radius 3 could take hours. If you use GPU, however, Radius 3 is nothing to be scared of. Even with bigger data it still goes fast.

(You can use Radius up to 20, I think, but that's getting absurd...)

adamw


8,447 post(s)
#30-Jan-19 15:08

A couple of illustrations on how to use the new functions.

Let's see what TileJson is about. That is, let's create a new tile filled with a constant using TileMakeNew, then convert it into JSON using TileJson and look at the result.

? TileJson(TileMakeNew(3, 3, 1))

nvarchar: [

 1, 1, 1,

 1, 1, 1,

 1, 1, 1

]

Fine. Let's try to go the other direction:

? StringJsonTile('[ 1, 1, 1, 1, 1, 1, 1, 1, 1 ]', 3, 3, 1, true)

tile: <tile, 3x3, float64>

Seems to work. We had to tell the function the dimensions of the tile and the number of channels.

Let's try changing some of the pixels and round-trip them:

? TileJson(StringJsonTile('[ 1, 2, 3, 4, 5, 6, 7, 8, 9 ]', 3, 3, 1, true))

nvarchar: [

 1, 2, 3,

 4, 5, 6,

 7, 8, 9

]

Good.

Let's try filtering the last tile with blur:

? TileJson(TileFilter(

    StringJsonTile('[ 1, 2, 3, 4, 5, 6, 7, 8, 9 ]', 3, 3, 1, true),

    1, -- radius

    TileFilterDefBlur(1, 1))) -- radius and then center

nvarchar: [

 null, null, null,

 null, 5, null,

 null, null, null

]

Border pixels are NULL = invisible. The center pixel remained at 5. Let's change the source tile slightly, creating a peak in the center:

? TileJson(TileFilter(

    StringJsonTile('[ 1, 1, 1, 1, 2, 1, 1, 1, 1 ]', 3, 3, 1, true),

    1, TileFilterDefBlur(1, 1)))

nvarchar: [

 null, null, null,

 null, 1.1111111111111112, null,

 null, null, null

]

The peak got averaged. Let's now create a slightly bigger tile and save it into a variable, to stop retyping the pixels:

VALUE @t TILE = StringJsonTile(

 '[ 1, 1, 1, 1, 1,' +

 '  2, 2, 3, 3, 2,' +

 '  2, 3, 3, 3, null,' + -- last pixel in row invisible

 '  1, 2, 1, 3, 2,' +

 '  1, 1, 1, 4, 3  ]', 5, 5, 1, true);

...got an empty table as a result, but no errors. Let's try filtering the tile using Gaussian blur:

? TileJson(TileFilter(@t, 1, TileFilterDefBlurGaussian(1, 1)))

nvarchar: [

 null, null, null, null, null,

 null, 2.010513231195137, 2.9676335821954316, 2.967629759157991, null,

 null, 2.9675154787355518, 2.9785011237702075, 2.999522509802396, null,

 null, 1.9786192272300867, 1.0539833974942394, 2.97849858437863, null,

 null, null, null, null, null

]

...how does Gaussian blur work, by the way? Let's check:

? TileJson(TileFilterDefBlurGaussian(1, 1))

nvarchar: [

 0.00012340980408667956, 0.011108996538242306, 0.00012340980408667956,

 0.011108996538242306, 1, 0.011108996538242306,

 0.00012340980408667956, 0.011108996538242306, 0.00012340980408667956

]

OK. Each pixel gets contributions from surrounding pixels which decay fast with distance. Makes sense.
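Those printed coefficients can actually be reproduced by weight = exp(-4.5 * d^2), where d is the pixel distance from the center; note this formula is only inferred from the printed output above, not documented. A quick Python check:

```python
import math

# Hypothesis (inferred from the printed radius-1 definition above):
# each coefficient is exp(-4.5 * d^2), d = distance from the center pixel.
# Edges sit at d^2 = 1 (exp(-4.5)), corners at d^2 = 2 (exp(-9)).
w = [[math.exp(-4.5 * (dx * dx + dy * dy)) for dx in (-1, 0, 1)]
     for dy in (-1, 0, 1)]
```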

Let's compute slope:

? TileJson(TileSlope(@t, 1, VectorMakeX3(1, 1, 1)))

nvarchar: [

 null, null, null, null, null,

 null, 41.90884788180326, 45.392449120640315, 45.392449120640315, null,

 null, 31.002719133873985, 25.239401820678918, 18.43494882292201, null,

 null, 40.35910045666124, 39.80557109226519, 27.791305644779218, null,

 null, null, null, null, null

]

...seems highest on the top edge of the tile where 1s change to 3s and low near the center. How does that change if we tell the function that the horizontal distance between pixels is 20 times bigger than whatever the Z units are?

? TileJson(TileSlope(@t, 1, VectorMakeX3(20, 20, 1)))

nvarchar: [

 null, null, null, null, null,

 null, 2.5695028228896355, 2.9018215173024653, 2.9018215173024653, null,

 null, 1.7210061534334906, 1.3502244696996157, 0.9548412538721887, null,

 null, 2.433138796850444, 2.385944030388813, 1.5095270002735288, null,

 null, null, null, null, null

]

Slopes just got much smaller. By the way, this all was being computed on GPU because we didn't say anything to the query engine and by default it sends functions like TileSlope to GPU, if there is one. Let's check what the numbers are if we compute on CPU:

PRAGMA ('gpgpu'='none');

...and then:

? TileJson(TileSlope(@t, 1, VectorMakeX3(20, 20, 1)))

nvarchar: [

 null, null, null, null, null,

 null, 2.5695028228896355, 2.9018215173024653, 2.9018215173024653, null,

 null, 1.7210061534334908, 1.3502244696996157, 0.9548412538721887, null,

 null, 2.433138796850444, 2.385944030388813, 1.5095270002735288, null,

 null, null, null, null, null

]

...same as on GPU. Which is good. Let's try median filter. With a cross shape. On slope data. First switch back to GPU:

PRAGMA ('gpgpu'='auto');

...then:

? TileJson(TileFilterMedian(

    TileSlope(@t, 1, VectorMakeX3(20, 20, 1)),

    2, -- radius

    TileFilterDefCross(1, 1)))

nvarchar: [

 null, null, null, null, null,

 null, 2.5695028228896355, 2.9018215173024653, 2.9018215173024653, null,

 null, 2.433138796850444, 1.7210061534334908, 1.5095270002735288, null,

 null, 2.385944030388813, 2.385944030388813, 1.5095270002735288, null,

 null, null, null, null, null

]

Checking the center pixel, which got chosen as the median of itself plus the four pixels to the left / right / top / bottom = a median of { 2.901..., 1.721..., 1.350..., 0.954..., 2.385... } = 1.721... . Seems fine.

Hope this helps.

Mike Pelletier


1,575 post(s)
#31-Jan-19 21:52

Thanks Adam for providing these examples. I've tried to step through these within the query window but cannot get started. I don't see any result. Is the first query creating a tile in memory that subsequent queries modify? Could you please put the first few queries into a .map file or perhaps show how to run these on existing images. Sorry for these rookie questions.

Haven't had time to play with it much but the median filter runs really fast on some large data. This is indeed a wonderful build!

tjhb

8,657 post(s)
#31-Jan-19 22:02

Mike,

Starting from the first SQL expression (it's not a query, that may be the problem?):

  • Open a new SQL Command window
  • Paste in:

? TileJson(TileMakeNew(3, 3, 1))

  • Highlight the pasted text
  • Press Alt-Enter to evaluate the highlighted expression

In the Command Log, you get (the same expression, followed by) Adam's recorded result:

nvarchar: [

 1, 1, 1,

 1, 1, 1,

 1, 1, 1

]

Same with the following expressions and statements, following Adam's sequence (mainly to be able to use VALUE and PRAGMA statements, which become part of state).

Mike Pelletier


1,575 post(s)
#31-Jan-19 23:39

Yup that was the problem and now I understand what is going on. Thanks Tim!

adamw


8,447 post(s)
#30-Jan-19 15:20

One more note on tile functions.

Current transform templates for images set the pixel type of the produced image to the pixel type of the source image. This is a pretty reasonable default, but when you have an INT16 image and want to compute its slope, you usually want the type of the produced image to change to something like FLOAT64 (if you keep the type at INT16, produced slope values will lose their fractional parts). We are planning to extend the Transform pane to allow specifying the type of the produced image, and will perhaps also auto-suggest the type, but until then it makes sense to convert source images to FLOAT32 or FLOAT64 before running transforms like slope.

Here's how to do that using SQL (changing pixels in an image named 'ASTGTM2_N45W122_dem' to FLOAT64):

-- drop index on int tiles

ALTER TABLE [ASTGTM2_N45W122_dem Tiles] (

  DROP INDEX [X_Y_Tile_x]

);

 

-- convert tiles from int to float

UPDATE [ASTGTM2_N45W122_dem Tiles] SET

  [Tile] = CASTV([TILE] AS FLOAT64);

 

-- readd index on float tiles

ALTER TABLE [ASTGTM2_N45W122_dem Tiles] (

  ADD INDEX [X_Y_Tile_x] RTREE ([X], [Y],

    [Tile] TILESIZE (128, 128) TILETYPE FLOAT64),

  ADD PROPERTY 'FieldTileType.Tile' 'float64'

);

 

-- ask index to rebuild intermediate levels

TABLE CALL TileUpdatePyramids([ASTGTM2_N45W122_dem]);

We'll perhaps add a separate transform to change the pixel type of an existing image as well.

KlausDE

6,300 post(s)
#02-Feb-19 10:35

Here is the german UI file for this build.

I have prepended the string used as the name of the System Data folder with an unprintable Chr(2), STX = "Start of text". This results in a system folder that always sorts to the top of the alphabetical order in the Project pane.

That is a simple way to handle the System Data folder as a pinned item, and may be interesting for the default strings as well.

It will be simple to replace when one day we get more flexibility to sort the Project tree.

I found, by chance, an error between default.ui.txt and my ui.de.txt that is not easy to detect: a string used in the query builder had a different number of parameters in the default and in the translation.

If somebody ever notices such errors (or others) PLEASE report.

Attachments:
ui.de.txt

adamw


8,447 post(s)
#13-Feb-19 14:45

Status update.

We are planning to issue the next build at the very end of this week.

The build has a mix of features for both UI and analysis.

Manifold User Community Use Agreement Copyright (C) 2007-2017 Manifold Software Limited. All rights reserved.