Home - Cutting Edge / All posts - Manifold System

9,628 post(s)
#07-Mar-19 17:31

Here is a new build. Among the mix of changes and additions, there is a big set of additions for better handling of indexed character data on external databases.

SHA256: 2296f518bdbd61efc442e93944ea5ebc3b686474b9d749ae66f45c42536694b7

SHA256: 08f9d1086279ee3846055abbed469f516241c628ba606da89e6dc21964cdab86


9,628 post(s)
#07-Mar-19 17:32


Collations for text fields in btree indexes are specified as a single string with all options embedded into it. Eg, instead of ... COLLATE 'en-US' NOCASE NOACCENT ..., the collation is specified as ... COLLATE 'en-US, nocase, noaccent'. (All available options: 'noaccent', 'nocase', 'nokanatype', 'nosymbols', 'nowidth', 'wordsort'.)

The default collation language of '' is specified as 'neutral'. The only option supported by 'neutral' is 'nocase', all other options are ignored. The short form of '' for the entire default collation is accepted, but 'neutral, nocase' has to spell the collation language as 'neutral' and cannot omit it or use an empty string.
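To make the single-string format concrete, here is a rough sketch in Python — not Manifold's actual parser, just an illustration of the format and the 'neutral' restriction described above:

```python
# Hypothetical sketch (not Manifold's actual parser) of decomposing the
# single-string collation format described above.
KNOWN_OPTIONS = {'noaccent', 'nocase', 'nokanatype',
                 'nosymbols', 'nowidth', 'wordsort'}

def parse_collation(spec):
    parts = [p.strip() for p in spec.split(',')]
    language = parts[0] or 'neutral'   # '' is shorthand for the default collation
    options = {p for p in parts[1:] if p in KNOWN_OPTIONS}
    if language == 'neutral':
        options &= {'nocase'}          # 'neutral' only honors 'nocase'
    return language, options

lang, opts = parse_collation('en-US, nocase, noaccent')
print(lang, sorted(opts))  # en-US ['noaccent', 'nocase']
```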

Btree indexes on ANSI text fields are limited to the 'neutral' language.

(Why are indexes on ANSI text fields limited to 'neutral'? Because the definition of ANSI text in 9 is 'interpret single-byte characters using whatever is the current language on the current system'. If we move a MAP file from a German system to a Portuguese system, what the ANSI characters map to changes: what was being displayed as 'a' can now become 'b'. Consequently, when building indexes on ANSI character data we cannot rely on the linguistic meaning of characters, because that changes, and can only rely on what does not change, that is, on character codes. That's what the 'neutral' language does.)

The type system is extended to support external collations for character data.

(An explanation of what the change is about: the main purpose is to better support indexes on character data in databases. Databases very often use text values as unique identifiers, so being able to use indexes on text fields is very important for performance. There is, however, a difficulty, in that indexes on text fields frequently use fairly involved rules to tell whether the values are equal to each other and / or how they are ordered relative to each other. This happens because the default settings on many databases are set to use a linguistic collation, whether English or not. Additionally, although support for Unicode has been gradually growing, many databases are only fully starting to support it now as we speak -- many changes for that are currently in development branches, not even in released versions of databases -- and before that they were implementing linguistic collations for various languages using custom code. To support indexes on character data in databases, that is, to allow using these indexes in queries and various internal operations, we needed to (a) extend our type system to allow plugging in arbitrary code for implementing collations, and (b) implement custom collations used by each database - at least the most common ones. The above change is (a), changes for (b) follow.)

SQL Server dataport supports binary collations and collations based on Windows. (This covers the vast majority of collations available on SQL Server.)

PostgreSQL dataport supports binary collations, collations based on Windows and collations based on ICU. (This covers the vast majority of collations available on PostgreSQL, including the coming ICU.)

MySQL dataport supports binary collations, collations in the 'latin' family and collations based on ICU. (This covers a sizeable portion of the most commonly used collations.)

Database dataports list available external collations using the system mfd_collate table. Each collation has a unique name and a definition that can be used in a query. Unsupported collations have NULL definitions.

Loading a MAP file with indexes whose external collations cannot load (this can now happen when, for example, a MAP file created on a later build that supports more external collations is loaded into this build, or when we deprecate a particular external collation, or when an external collation that depends on data stored outside of EXT.DLL cannot find it, etc.) no longer makes the entire MAP file fail to load. Instead, it makes the index inoperable (reads via the index fail) and the table read-only (writes fail). The index that failed to load reports the failure in the log window and can be dropped cleanly. If all indexes that failed to load for a particular table are dropped, the table becomes writable automatically.

Collate query function is replaced with CollateCode - takes a string with the collation definition and returns a numeric code. If the collation definition is external, the function returns NULL.

New query function: CollateCodeDef - takes a numeric code for a (non-external) collation and returns the collation definition string.

'Xxx, Intl' transforms use a new collation picker with a readout and a button with a drop-down menu on the right. The drop-down menu allows selecting a favorite collation, editing the list of favorite collations, or composing a custom collation using a dialog. The custom collation dialog displays the list of available languages (with a filter) and collation options.



9,628 post(s)
#07-Mar-19 17:33


The Style tab of the Record pane arranges style pickers similarly to the Style pane and includes a separate picker for full style.

The query engine allows inlining a call to an arbitrary aggregate query function with a variable number of arguments using INLINE: INLINE Max(3, 4, 5) or INLINE Covar(1, 1, 2, 5, 3, 6, 4, 1). If the aggregate function takes more than one argument, the number of arguments passed to the inline call has to be a multiple of that.

StringJoinTokens aggregate query function uses a personalized separator for each passed value. (Previously, the function was always using the first passed separator value and ignoring all other values.)

Query builder descriptions for clauses and functions have been cleaned up to use the same terms as much as possible - we are now only using '<query>' when we mean a query component, otherwise we are using '<table>', etc.

Renamed transform template: Maximum Value -> Limit Low (parameters - Value: / At least:).

Renamed transform template: Minimum Value -> Limit High (parameters - Value: / At most:).

New transform template: Limit (parameters - Value: / At least: / At most:).

New transform templates for tiles: Limit / Limit Low / Limit High - limit pixel values from one or both sides.

Removed query functions that are no longer needed after INLINE: GeomMergeAreasPair, GeomMergeLinesPair, GeomMergePointsPair, GeomUnionAreasPair, GeomUnionRectsPair, ValueMax, ValueMin. Renamed query function: GeomIntersectLinesPair -> GeomIntersectLines.

TileFill query function is reworked to keep the original tile type and accept xN values. (Previously, the function was forcing the tile to FLOAT64.) New transform for single-channel images: Fill.

New query function: TileFillMissing - fills invisible pixels in a tile with the specified value and makes them visible. Keeps the original tile type. New transform for single-channel images: Fill Missing.

TileCombine query function is renamed to TileFillMissingCopy and reworked to keep the original tile type (was forcing both tiles to FLOAT64) and do nothing if tile sizes do not match (was returning NULL).

New query function: TileFillMissingNearest - fills invisible pixels in a tile with values of first found visible pixels in the specified radius. Keeps the original tile type. New transform for images: Fill Missing Nearest.

Exporting SHP writes character data to DBF as UTF-8 and produces CPG file describing the encoding in DBF as UTF-8 for third-party products.

Exporting DBF (including as part of SHP) computes maximum length of each text field, increases it to the nearest multiple of 8 to accommodate for future edits, and uses that as DBF field length. (DBF files cannot store fields longer than about 250 characters. Previously, all text fields were exported with maximum length, this was wasting a lot of space and making text field sizes snowball in third-party products.)
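A sketch of the length computation described above. Whether a length that is already a multiple of 8 stays as-is or is bumped to the next multiple is an assumption here (the notes do not say); this sketch keeps exact multiples:

```python
# Sketch of the DBF field length computation described above.
DBF_MAX = 250  # DBF fields cannot store much more than ~250 characters

def dbf_field_length(max_text_len):
    rounded = ((max_text_len + 7) // 8) * 8  # round up to a multiple of 8
    return min(max(rounded, 8), DBF_MAX)     # at least one block, capped

print(dbf_field_length(13))  # 16
```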

Reading TIFF reads mask images.

Reading TIFF reads Z offset / scale / unit data if available.

Reading TIFF recognizes GDAL metadata and no longer reports it as unrecognized. (The metadata itself is appended into a comments component - now with Unix line ends converted to Windows.)

Exporting images to BIL, FLT, XYZ and other formats applies Z offset / scale to pixel values.

(Fix) Reading GRD Surfer no longer sometimes fails.

(Fix) Hotine oblique Mercator no longer uses a wrong default value for missing rectified grid angle.

(Fix) Reading TIFF no longer sometimes misreads CIELab images.

Specifying style for image channels allows inverting data by setting the channel minimum value to be higher than the channel maximum value. (For example, switching the bounds for an improperly interpreted alpha channel from min = 0, max = 255 to min = 255, max = 0 will invert the interpretation without doing anything to the stored pixel values.)
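A sketch of why this works (illustrative Python, not Manifold's actual rendering code): the display interpretation maps a pixel value against the channel bounds, and the same formula simply inverts when min > max:

```python
# Illustrative only (not Manifold's rendering code): mapping a pixel
# value against channel bounds. With min > max the same formula
# inverts the interpretation without touching stored values.
def normalize(value, lo, hi):
    return (value - lo) / (hi - lo)

print(normalize(0, 0, 255))   # 0.0 - alpha 0 read as fully transparent
print(normalize(0, 255, 0))   # 1.0 - same pixel with inverted bounds
```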

Reading TIFF no longer reorders channels in color images to BGR and no longer inverts A from 0 = fully transparent to 0 = fully opaque. Instead, all data necessary to properly display the image is put into its style.

End of list.


9,763 post(s)
#07-Mar-19 23:35

The INLINE syntax is intriguing. I like the choice of keyword too. It suggests "ducks in a row", and also a simplified table.

As I understand it:


Aggregate functions taking a single field operate exactly as before. No extra keyword is required. This operates "down the column".


But now we can also apply an aggregate "across" a specified sequence of values or fields. This requires adding the INLINE keyword before the name of the aggregate function.

If the aggregate function takes just one argument, then the sequence is a simple list. If the aggregate takes more than one argument, the sequence is a list of lists, or in other words a serialized array. For example, if the aggregate function takes 3 arguments, we can enter just 3, or 3 then another 3, or any multiple of 3, all in a line.

In one sense this syntax feels a little bit crude and possibly error-prone. Would it be improved by requiring each individual vector of parameters to be enclosed in parentheses, separated by commas? I think so. This would make no difference for aggregates taking a single argument (like Max or Min). But for e.g. Covar, which takes two arguments, I would prefer to have to write, not

 INLINE Covar(1, 1, 2, 5, 3, 6, 4, 1)

but instead

 INLINE Covar((1, 1), (2, 5), (3, 6), (4, 1))

because I think that is both more immediately explanatory, and less error-prone. It also looks exactly like what SQL does for rows in a VALUES statement, which is nice.

Putting that aside, this new construct looks very powerful. I suspect it is going to open up significant new possibilities--though I can't imagine quite what yet!

It also feels elegant to subsume the various *Pair functions into the corresponding aggregates. And for the price of the INLINE keyword, we also get versions for Triplet, Quad, Quint, ... and so on indefinitely, for "free". Very cool.

Two interesting questions for now, in advance of testing.


Will INLINE operate on a user-defined aggregate function in the same way as it operates on built-in aggregates?


How does INLINE integrate with GROUP BY? Does it always produce a single record per group, or can it also produce a table (which can then be passed to SPLIT, or a different aggregate function)?

Taking the Covar() example above, will I get a TABLE having 5 rows for each GROUP? Or is covariance calculated first within each pair, then between successive pairs, to produce a single final value--a reduction?

Now to test. Very intriguing. I suspect this is an iceberg.


9,763 post(s)
#07-Mar-19 23:58

This was wrong (but in a good way):


Aggregate functions taking a single field operate exactly as before. No extra keyword is required. This operates "down the column".

Aggregate functions taking a single argument can operate exactly as before, without the INLINE keyword. Straight down the column.

But if the INLINE keyword is added, then they can operate over an explicit vector of values or fields--as it were across an inline field. Of arbitrary length.

The version I called (b), addressing an aggregate taking more than one argument, differs mainly in that it also may (given the INLINE keyword) operate over a vector of multiple pairs or sets of arguments, arranged as a serialized array. (I still need to test to work out exactly what the effect of this will be per group.)

Looking forward to examples. If I have any useful examples in the meantime then I will add them.


9,628 post(s)
#08-Mar-19 10:00

Yes, the (a) and (b) above are correct.

If you use an aggregate function without the INLINE keyword, that's the regular use of the aggregate, no changes from before:


SELECT Max(pop) FROM cities;

-- population of the biggest city

This goes from city to city, takes a population of each city and computes the maximum population. The result is a single number.

However, you can now also use the INLINE keyword and that makes the aggregate take all its values from the argument list:


SELECT city, INLINE Max(pop2010, pop2014, pop2018) FROM cities;

-- peak population of each city out of measurements in 2010 / 2014 / 2018

This goes from city to city, and for each city, takes its population in 2010, 2014 and 2018, then computes the maximum of these three values. The result is a table with a separate number for each city.

The moment we add INLINE, the aggregate becomes a simple function call, it does not need any imposed groups, it operates directly on the values provided in the arguments.

INLINE works similarly to this function:


FUNCTION inlinemax(@a INT32, @b INT32, @c INT32) INT32 AS (

  SELECT Max(result) FROM (VALUES (@a), (@b), (@c))

) END

? inlinemax(2, 5, 3) -- returns 5

INLINE Max(2, 5, 3) -- same

...only you don't have to write a separate function for each new type / aggregate / number of arguments. Plus we can optimize INLINE better.
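For those who want to play with the semantics outside of a query, here is a rough Python analogue of the INLINE behavior described above. The covariance variant shown is population covariance — an assumption for illustration; whether Manifold's Covar is the population or sample variant is not stated in the notes:

```python
# Rough Python analogue of INLINE: the aggregate becomes a plain call
# over its argument list, chunked into rows of the aggregate's arity.
def inline_aggregate(agg, arity, *args):
    assert len(args) % arity == 0, 'argument count must be a multiple of arity'
    rows = [args[i:i + arity] for i in range(0, len(args), arity)]
    return agg(rows)

# INLINE Max(3, 4, 5)
print(inline_aggregate(lambda rows: max(r[0] for r in rows), 1, 3, 4, 5))  # 5

# INLINE Covar(1, 1, 2, 5, 3, 6, 4, 1) - here, population covariance of
# the pairs (1,1), (2,5), (3,6), (4,1).
def covar(rows):
    xs, ys = [r[0] for r in rows], [r[1] for r in rows]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / len(xs)

print(inline_aggregate(covar, 2, 1, 1, 2, 5, 3, 6, 4, 1))  # 0.125
```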

On to the questions:

(1) will INLINE operate on a user-defined aggregate -- Yes, absolutely. The implementation is generic at the core. There are various optimization pins that advanced aggregates can use, but they are optional. Custom aggregate functions - when we allow such a thing - will quite likely be supported by INLINE automatically without any additional requirements.

(2) how does INLINE integrate with GROUP BY -- INLINE converts an aggregate to a normal function call, INLINE Max(...) is semantically the same as Sin(...) - there are no group semantics. Or like Coalesce(...), because, like INLINE aggregate, Coalesce also takes a variable number of arguments.

Regarding the grouping of values for aggregates that take multiple arguments, eg, Covar((1, 1), (2, 2)) instead of Covar(1, 1, 2, 2) - we might do this, but we'd wait until we have more support for tuples - because grouping arguments with (...) looks an awful lot like tuples. We've briefly considered using a different separator between argument groups, eg, Covar(1, 1; 2, 2), but that feels too alien. Anyway, we are not saying no to anything, let's just give it some time.


9,628 post(s)
#08-Mar-19 10:18

One more note regarding user-defined aggregates - one can do them right now, but the user-defined function has to take a table of values. When I am talking about us maybe adding more support for user-defined aggregates in the future, I mean us designing a system where the user would write, eg, two functions: one for accepting a new value, and another for computing the result of an aggregate on all accepted values, and using that as an aggregate.

In a lot of cases this additional infrastructure does not really add much on top of what is already possible in 9 today, in that if the aggregate is such that all values have to be stored until the end, then the first function just ends up creating something analogous to a temporary table and putting all accepted values into that table, and the second function is what you can use as an aggregate right now. But if the aggregate does not need to store all values until the end and can be computed incrementally, then the additional infrastructure helps, and if it ends up being needed often enough, we'll implement it.


9,628 post(s)
#08-Mar-19 11:13

A couple of random notes on various things in the build.

TileFill and other functions keeping the original tile type

TileFill and several other functions mentioned in the build notes now keep the original tile type. What does this mean? Here is a simple illustration. Consider:

? TileJson(TileFill(TileMakeNew(3, 3, CAST(1 AS UINT8)), 5))

This creates a 3x3 tile of UINT8 filled with 1, then refills it with 5, then outputs the result. The result is what you'd expect: [ 5, 5, 5, ... ]

Let's now do this:

? TileJson(TileFill(TileMakeNew(3, 3, CAST(1 AS UINT8)), -1))

This creates the same tile, but tries to refill it with -1. The result is: [ 255, 255, 255 ... ]. What happened? -1 got converted to UINT8 and this produced 255. Because TileFill kept the original tile type.

Other frequent conversions are from doubles to ints (get cut) and from numbers to xN or vice versa (get padded or cut).
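The UINT8 example can be reproduced outside Manifold; a minimal Python sketch of the conversion, assuming the usual modulo-256 wraparound for unsigned 8-bit (as the example above suggests):

```python
# Why refilling a UINT8 tile with -1 yields 255: the fill value is
# converted to the tile's type first. For unsigned 8-bit that is a
# modulo-256 wraparound; doubles are cut (truncated) to ints first.
def to_uint8(v):
    return int(v) % 256

print(to_uint8(5))    # 5
print(to_uint8(-1))   # 255
```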


9,628 post(s)
#08-Mar-19 11:15


TileFillMissingNearest is somewhat similar to surface interpolation functions in 8 (Interpolate / InterpolateRow, etc), but there are two important differences.

First, there is no way to use TileFillMissingNearest with an unlimited radius. One could do this with InterpolateXxx, but only because the surfaces were small (9 can handle much bigger images than 8 can) and, more importantly, the result was questionable. If we don't know the height at Park Ave in New York and don't know any heights in North America, can we just use the height of Kilimanjaro because that's what we know and that's what randomly happened to be closest? Usually, searching for a closest known value still has some limits beyond which it is better to give up (and maybe use a constant or some average if we absolutely require a pixel to be filled with something).

Second, when there are multiple pixels at the same distance to the missing pixel, TileFillMissingNearest does not attempt to average them like 8 did and just uses the first found value. This has the benefit of not introducing new values where the image stores some sort of unique codes that cannot be averaged, unlike heights (or if the image stores xN values where channels are somehow connected to each other and per-channel averaging is undesirable). We could compute a median instead, but that's way more expensive than computing an average or picking a random value, so that's not the default. If median is desired, we can add that as an option (and / or return all closest pixels to a user-defined function, and / or specify the metric that controls what's considered to be 'closest', etc).
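A toy sketch of the first-found-nearest rule in Python. The scan order and the Chebyshev-ring distance here are assumptions for illustration — the actual metric and tie-breaking order in TileFillMissingNearest may differ; None marks an invisible pixel:

```python
# Toy first-found-nearest fill on a small grid: invisible pixels (None)
# take the value of the first visible pixel found within the radius;
# ties at equal distance are not averaged.
def fill_missing_nearest(grid, radius):
    h, w = len(grid), len(grid[0])
    out = [row[:] for row in grid]
    for y in range(h):
        for x in range(w):
            if grid[y][x] is not None:
                continue
            found = None
            for r in range(1, radius + 1):       # expand ring by ring
                for dy in range(-r, r + 1):
                    for dx in range(-r, r + 1):
                        if max(abs(dy), abs(dx)) != r:
                            continue             # only the ring at distance r
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and grid[ny][nx] is not None:
                            found = grid[ny][nx]
                            break
                    if found is not None:
                        break
                if found is not None:
                    break
            if found is not None:
                out[y][x] = found
    return out

grid = [[7, None, None],
        [None, None, None],
        [None, None, 9]]
print(fill_missing_nearest(grid, 1))
```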


9,628 post(s)
#08-Mar-19 11:15

ANSI collates limited to the neutral language

The build notes explain why ANSI collates are limited to the neutral language (because we cannot rely on the linguistic meaning of characters, which changes between machines); here is also a simple illustration of what follows from this restriction:

If field 'f' is ANSI, then ... ORDER BY f COLLATE 'fr-FR' uses the neutral language. No error is currently raised: COLLATE 'fr-FR' is accepted and then replaced with 'neutral' at runtime. There are two reasons why no error is raised. First, we previously allowed ANSI fields in indexes to use arbitrary collates and that information is already stored in existing MAP files, so in some cases we cannot raise any errors, and these cases are important. Second, query operations on text fields frequently convert ANSI text to Unicode, so even if we started raising an error on an incompatible collate for ... ORDER BY f, with f being a field, we wouldn't be able to raise anything for ... ORDER BY f & 'x' or for ... ORDER BY Coalesce(f, ''), because in both of these cases the expressions are Unicode - so why even try to raise an error on ... ORDER BY f.

The second point above is actually a feature. If you have an ANSI field and still want to use a linguistic collate on it, you can convert it to Unicode dynamically: ... ORDER BY CAST(f AS NVARCHAR) COLLATE 'fr-FR' will use 'fr-FR'.


9,628 post(s)
#08-Mar-19 11:16

Title case in the neutral language

We used to implement casing for the neutral language using linguistic rules for en-US; we now treat the neutral language differently, so here's an explanation of what title case for the neutral language does:

All uninterrupted sequences of letters, digits and apostrophes are treated as separate words. The first letter in each word is converted to upper case. All other letters are converted to lower case. Digits / apostrophes are left unchanged.

Linguistic rules for en-US are somewhat similar to the above, but they also handle abbreviations (eg, 'USA' stays as 'USA' because all letters are upper case), etc.

Several examples:

> ? StringToTitleCase('mary')

nvarchar: Mary

> ? StringToTitleCase('anDRew')

nvarchar: Andrew

> ? StringToTitleCase('abc123')

nvarchar: Abc123

> ? StringToTitleCase('123abc')

nvarchar: 123Abc

> ? StringToTitleCase('o\'hara')

nvarchar: O'hara --- why not O'Hara? let's check en-US:

> ? StringToTitleCaseCollate('o\'hara', CollateCode('en-US'))

nvarchar: O'hara --- well, same here
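The neutral-language rules above are simple enough to sketch in Python (ASCII letters only, as a simplification — the real implementation obviously handles the full character set):

```python
import re

# Sketch of neutral-language title case per the rules above: words are
# uninterrupted runs of letters / digits / apostrophes; the first letter
# in each word is uppercased, other letters lowercased, digits and
# apostrophes left alone.
def title_case_neutral(s):
    def fix(match):
        out, seen_letter = [], False
        for ch in match.group(0):
            if ch.isalpha():
                out.append(ch.lower() if seen_letter else ch.upper())
                seen_letter = True
            else:
                out.append(ch)
        return ''.join(out)
    return re.sub(r"[A-Za-z0-9']+", fix, s)

print(title_case_neutral("anDRew"))   # Andrew
print(title_case_neutral("o'hara"))   # O'hara
```

Note that this reproduces the O'hara result above: the 'h' is not the first letter of its word, so it is lowercased.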


9,628 post(s)
#08-Mar-19 11:17

TIFF and channel 4

Our TIFF dataport sometimes misdetects the 4th channel as alpha when it is not alpha. For example, this happens on the NAIP example linked in other threads on this forum.

This happens when the information in the file regarding whether or not the 4th channel is alpha is inconclusive. In a nutshell, there is a tag that is supposed to control this. When the tag value is 'this is alpha', we correctly interpret the channel as alpha (use it as alpha for the image). When the tag value is 'this is not alpha', we correctly interpret the channel as not alpha (set alpha for the image to be constant = fully opaque). The problem is, sometimes the tag value is neither. When this happens, we have to make a judgement call. We have seen images where the correct interpretation is that the channel is alpha, as well as images where the correct interpretation is that the channel is not alpha, so we decided to look at which error is simpler for the user to fix. Since switching from alpha to not-alpha in the Style pane is easier than the other way around (selecting Value, as opposed to selecting Channel and entering bounds, remembering to invert them if needed), we report ambiguous cases as alpha.


6,393 post(s)
#09-Mar-19 15:11

The German UI file for


Politics is the art of making the impossible unavoidable

Bernd Raab46 post(s)
#09-Mar-19 16:10

I now have some issues with 168.10. I used an image from a WMS server. In 168.9 everything is OK - the background is transparent - but in 168.10 the image is now almost black, though not really inverted. It works well in QGIS 3.6 and Global Mapper 20.



9,763 post(s)
#09-Mar-19 17:10

See the last two paragraphs in the second post of the release notes.

You need to reverse the interpretation for the Alpha channel.

Bernd Raab46 post(s)
#09-Mar-19 18:29

thank you Tim,

but the WMS data is not editable. I wonder why it works without any problems in 168.9.


6,627 post(s)
#10-Mar-19 19:28

Reversing the interpretation for the alpha channel is done in Style. It's not editing the WMS data, it is just saying how it should be interpreted. Adamw discussed the likely issue a few posts above.


9,628 post(s)
#11-Mar-19 08:23

Can you post a URL for the server? If the server info is private, please contact tech support.

(That might be related to the alpha channel being inverted, yes, and if so, you should be able to repair the situation by inverting it back manually, true -- but you shouldn't have to do anything like that. If the alpha channel is oriented wrong, that's our bug and we should fix it.)

Bernd Raab46 post(s)
#11-Mar-19 08:45

I have tried to change the alpha channel in the Style pane as Dimitri and Tim suggested, but all channels are grayed out and I cannot change anything. Unfortunately the server is restricted. I will ask for short-term permission for tech support.

As reported, it works in all earlier builds.


9,628 post(s)
#11-Mar-19 09:00

You can copy / paste the image without the table from the server data source into the parent MAP file - then change the style of the image in the MAP file. Both images will still refer to the table on the server and get tiles from the server.

Bernd Raab46 post(s)
#11-Mar-19 09:31

Very curious: when I load the data to try your suggestion, all is fine. Maybe it was on the server side.


9,628 post(s)
#11-Mar-19 14:26

Could it be that now the server is returning PNG and previously it was returning TIFF? Not sure why it would suddenly switch, but if it did, that would explain the difference in behavior - 168.10 changed the way alpha is handled for TIFF, but not for PNG.


9,628 post(s)
#12-Mar-19 11:56

Status: we are planning to issue the next build very early next week. The focus is on the UI tools.

gjsa96 post(s)
#12-Mar-19 23:40

Experiencing an issue in Build 10 that was not present in Build 9 when exporting a drawing as a shapefile.

Attached zip file contains the same shapefile exported from the same M9 project using each build.

If you drag and drop each shapefile into QGIS3, the extent of the Build 10 shapefile is not correct.



9,763 post(s)
#13-Mar-19 01:00

Manifold 8.0.30, both Manifold 9 builds and Global Mapper 18.2 all open both .shp sets correctly. Their geometry and extents seem to match.

The only differences I can see are:

  • The .10 export includes a .cpg file declaring UTF-8.
  • After import back into either .9 or .10, the table for the .9 drawing has a FieldBounds.Geom property. The .10 drawing does not.
  • The CEA field for the .10 version is empty (in all the above software).

Projections are identical.

Can you give more workflow details? In particular, what does "drag and drop each shapefile into QGIS3" mean - do you drag and drop all files in each shapefile set, or only the .shp files? Can you also give the "original" geometry (whatever that means in this context)?

Maybe this is a bug in QGIS.

Or maybe QGIS has a different understanding of a .cpg file? What happens if you remove the .cpg file for the .10 shapefile set?

What happens if you remove the FieldBounds.Geom property for the .9 table before export to .shp (or add it before export for the .10 version)?

gjsa96 post(s)
#13-Mar-19 01:49


gjsa96 post(s)
#13-Mar-19 02:21

Trying to type a reply, take 2:

  • Removing the .cpg file makes no difference to QGIS3 - the problem still exists with the .10 export.

  • The FieldBounds.Geom property is not obvious to me, so I couldn't test it.
  • Yes, the problem is specific to .10 and QGIS3 (I can successfully load the .10 shapefile in SAGA 2.3).

  • The missing CEA attributes in the .10 shapefile .dbf are a separate issue and should be looked into. Here is the link to the M9 project file.

    tjhb

    9,763 post(s)
    #13-Mar-19 06:00

    I don't know what to make of this yet. Very interesting.


    6,627 post(s)
    #13-Mar-19 05:48

    edited: didn't see the link to the original file...

    CPG files are code page files used by ESRI software, for example by ArcGIS Desktop, as optional ESRI extensions to shapefiles. It's not the first place I'd look for spatial issues such as missing objects, or complex object geometry where errors in software might make it seem that objects are missing.


    9,628 post(s)
    #13-Mar-19 09:05

    We'll check, thanks for the note / data.

    Added: from a brief inspection of the files in the hex viewer, this looks like a bug on our part introduced on the way from .9 to .10. I'll likely report later conclusively. If this is a bug, we'll fix it.

    Added more: we'll check the CEA attributes, too.


    1,932 post(s)
    #13-Mar-19 13:15

    Importing a shapefile from ArcGIS and exporting from M9.0.168 build 10:

    - Back in ArcGIS : ok

    - Back in QGIS 3.4: completely wrong location. The Max X of the XY extent is displayed as 0.0000, which is wrong:


    178400.6441999999806285,0.0000000000000000 : 5599079.7497000005096197,5335004.0000000000000000


    9,628 post(s)
    #14-Mar-19 14:50


    We found both the issue with the bounding box and the issue with the CEA values not exporting well - these are our bugs and we will fix them.

    Apologies for the regressions, both were related to the recent enhancements.


    9,628 post(s)
    #20-Mar-19 14:29

    Status update: we have several features finishing at roughly the same time but with a bit of a gap, so we have changed plans slightly and are aiming to issue the build next week instead of this one, likely on Monday.


    9,763 post(s)
    #21-Mar-19 20:14

    Any chance of a new public build soon? The 9.0.168 series will be 8 months old on 6 April.

    (No, I don't know why this matters. It's nice to have solid ground now and then. Largely symbolic. Perhaps helpful for prospective purchasers.)


    9,628 post(s)
    #22-Mar-19 08:05

    Absolutely, we will have a new public build after several more cutting edge builds which will finish several directions that we started (nothing is ever truly finished, in that we are always going to be adding and extending things in the future, but hopefully what I am saying is clear enough - we want to make things reasonably complete).


    1,065 post(s)
    #29-Mar-19 11:32

    Is there any hope of getting Regression Kriging (like in 'A Practical Guide to Geostatistical Mapping' by Tomislav Hengl) in Manifold 9?


    6,627 post(s)
    #29-Mar-19 15:25

    Is there hope of getting Regression Kriging

    This is the wrong thread for that, but all the same... If you want something, participate in the process of turning hope into reality. See the Suggestions page for tips.

    184 post(s)
    #30-Mar-19 03:26

    Yes, probably not appropriate for Cutting Edge, but it has been briefly discussed elsewhere on the forum.

    Tomislav Hengl is good but gets into working between Manifold and R discussed here:

    I have used a very simple-minded approach to regression kriging for inductive social science work in 8, which I can now do much faster in 9 using an outside stats program for the initial regression model. For it, cases are points in geographic space. For each point the regression produces a predicted value. That minus the actual value is the residual, or error. So one would like to have a map showing where the regression predicted well vs where it under/overestimated, controlling for spatial near-neighbor autocorrelation and experimenting with different-sized spatial areas where some pattern might be revealed. Days of run time with 8, and so much faster with 9.

    Manifold User Community Use Agreement Copyright (C) 2007-2019 Manifold Software Limited. All rights reserved.