Kriging for large areas
dchall8
1,008 post(s)
#23-May-20 22:12

This is not really a question but more of a status report for a LiDAR project in M9. Maybe this will help someone, including me.

I'm using LiDAR data from the state to create a relatively high resolution surface for parts of my county. The LiDAR data comes in small tiles, 30 to a quad. Each tile has about 9 million records in the point cloud, of which about 4 million are the ground surface points I'm interested in. I'm trying to do this reasonably efficiently and still get a good result. I've tried combining the point clouds for the ground returns into one large drawing and running the Kriging transform. Up to about 20 million records, that works fine. This group of records covers about 4.5 km (north to south) by 1.5 km wide. When I added another set of records, my computer choked. Running one Kriging transform on 20 million points seems like a fairly efficient way of doing the job. Here is an image of the result zoomed to 1:2000.

As a point of reference, the horizontal distance from the peak in the image to the bottom of the creek is 45 meters.

I also tried another approach. I ran the Kriging transform on each of the five tile sets and added all those results to a map. Then I Merged the images. This was very fast; however, the result has a 'screen door' effect which is not satisfactory. Here's what that looks like at the same zoom.

The next step will be to krige batches of 20 million records (5 tiles), accumulate a few of those, and then merge the larger images to see how that looks.

Attachments:
Krigging individual tiles and Merging Images.jpg
Krigging with 20M Records.jpg

tjhb
10,094 post(s)
#23-May-20 23:01

What you should do is look at the report in the system Log to see what model, neighbour count, and radius the system guessed it should use. It is immensely useful.

When you get a great result for your test data (can be a small subset), note the values, and specify them for larger runs.

tjhb
10,094 post(s)
#23-May-20 23:06

If you want a reasonable default (from experience), try exponential model, 12 neighbours, using 0 for radius (auto-computed).

How's that?

[This is from memory. I should also check. But more importantly: test.]

adamw


10,447 post(s)
#24-May-20 08:33

What are the coordinate systems of the merged images and what coordinate system are you setting for the resulting image? I am specifically interested in whether the systems are the same and only differ in local offsets / scales, and if so, the difference in local scales (pixel sizes).

The effect in the second screenshot makes me think of nearest-neighbor-like artifacts. We had similar effects in image reprojection, where they are solved by using bilinear / bicubic interpolation or sub-pixel reprojection. But those artifacts mostly appear when either the coordinate transformation is curvilinear or the pixel sizes are different, so I'd like to understand what specifically happens in your case.

If you could share the file with the source images and the merged result, that would help. Exporting the project to MXB should reduce the file size considerably. If after this the file is still too big to attach, put it on some kind of online storage, or, if you don't want to use public storage, write to tech support and they will provide you with FTP space. We'll then be able to take a closer look.

dchall8
1,008 post(s)
#24-May-20 22:30

I think this fixes it. This time, using DEM data from the same website, I merged 30 files. The original coordinate system for each image tile was UTM Zone 14(N). When merging, the default went to EPSG:3857. Here is the result.

Then I changed all the images to EPSG:6343 and merged them into EPSG:6343. Here's the result of that.

It seems to matter how many images I merge. When merging 6 images the effect does not happen; when merging 8 images, it does.

Here is a link to the raw DEM data.

Attachments:
No screening effect after merge.jpg
Screen door effect on merged images.jpg

dchall8
1,008 post(s)
#25-May-20 00:11

I returned to the LiDAR imaging with Kriging, using EPSG:6343 for both the images and the Merge. That fixed the screen effect showing in the top images. I also noticed the images from the DEM are considerably less detailed than the images from Kriging the LiDAR tiles. Hopefully that is detail and not noise, because it takes a lot longer to process the images. I'll start another topic on that, reviving an old one: way back when, imported LiDAR data did not generate an index, and I think that is messing with the bulk import of files.

adamw


10,447 post(s)
#25-May-20 15:46

I think the problem is not the number of images, but rather the coordinate systems / local scales = pixel sizes. The coordinate systems and pixel sizes have to be controlled. The Merge dialog starts with defaults based on the coordinate system of the map, and if the coordinate systems of one or more images are different, they are going to be re-projected. During the re-projection, and depending on the relation between pixel sizes, staircase effects can appear. We can reduce these effects by performing interpolation - and we will probably provide such an option in the future - but the important thing here is simply to watch the coordinate system and pixel size of the target image and how they relate to the coordinate systems and pixel sizes of the source images. We try to print various numbers and show icons in the Merge dialog to help with that.

adamw


10,447 post(s)
#24-May-20 08:42

One more thing:

You could try running Kriging on big data sets with a maximum radius set. To determine a specific value for the radius, take an example subset of points (it does not have to be particularly big; 1-2 million points is fine) and do several runs, starting with a radius value that would let each pixel reach 10-15 points, then increasing that by 50% or so three or four times. Compare the quality of the interpolated images - there should be improvements with diminishing returns - and pick the smallest radius that looks good enough, then try using that radius when Kriging the entire data set.
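As a rough worked example of picking the starting radius (the numbers are only illustrative, taken from the approximate figures in the first post, and assume the radius is expressed in the projection's native meters): 20 million ground points over roughly 4.5 km x 1.5 km is about 3 points per square meter. A radius r that reaches about 12 points satisfies pi * r^2 * 3 ≈ 12, so r ≈ 1.1 m; growing that by 50% per run gives candidate radii of roughly 1.1, 1.7, 2.5, and 3.8 m to compare.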

dchall8
1,008 post(s)
#24-May-20 20:29

I got a slightly better result using the Rational model versus default. The others were the same or worse.

I tried adjusting Neighbors (values 3, 5, 10, and 12) and always got pixelated holes in the result.

I'll look at radius next. I'll also see if I can post a smaller version of the file.

I don't get much of a "report" in the Log. Is there a setting to get better Log data?

Here is a link to download the original data set. It went fairly quickly considering the size of the file.

LandSystems
73 post(s)
#24-May-20 22:09

Is this something that could be added such that an optimal or minimal radius for each point cloud could be readily determined from the point cloud statistics?

adamw


10,447 post(s)
#25-May-20 15:53

As Art suggests, we can show a variogram of the data points - that would help gauge which radius makes sense to use. We will likely add that in the future.

artlembo


3,400 post(s)
#25-May-20 02:42

I think it would be beneficial to show a variogram of the data points - maybe make that an option? That would help better determine the model to use, and also give some hints as to the nugget, sill, and range. I wrote a variogram query in M8, and could share it if that helps.
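For reference - this is just the standard textbook definition, not the specific M8 query mentioned above - the empirical semivariogram bins the point pairs by separation distance h and computes

gamma(h) = (1 / (2 * N(h))) * sum over the N(h) pairs (i, j) about distance h apart of (z_i - z_j)^2

Plotting gamma(h) against h then shows the nugget (the value near h = 0), the sill (the plateau), and the range (the distance at which the plateau is reached); the range is also a reasonable guide for the Kriging radius.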

adamw


10,447 post(s)
#25-May-20 15:52

Agree, this is a good idea. We have a wishlist item to add that.

dchall8
1,008 post(s)
#31-May-20 20:12

I've learned some things. I'll summarize what I've done and explain below.

Linked to 30 LAZ files, extracted the classified records into one large table, used the Kriging transform to create many surface images, and joined them into one DEM-type image. Then I tried making a surface image using Interpolate, Triangulation and found it much faster to work with.

I started with 30 LAZ files downloaded from the site linked above. The files cover an area of 67 square km (25 sq miles). I used a query to extract the ground returns (classification = 2) into one large table. The new table has 150 million records, and it took about 45 minutes for the query to run. The file size with the 30 files linked is 30 GB. Before I created the large table, Manifold could tackle the Kriging transform with 20 million records; 30 million would cause the computer to slow to a crawl midway through. By crawl I mean it was processing fewer than 1 record per second - it took 20 seconds to process a few records. At that point I canceled the process to try it with fewer records. After building the large file of points, the new limit for Kriging seemed to be 10 million records, and at that size it processed right through. So I selected 15 slices of the records to get through it all. That process was tedious enough that it took 2 days of manually selecting 10 million records, running the transform, and repeating, making sure to select some overlapping records between selections. That part of the process made this a project only a mother could love. I was hoping to knit together several hundred square miles, but not with Kriging.

Since I had the large table built, I tried the Interpolate, Triangulation transform to see if it would run faster or with more records selected. Starting out conservatively, I selected 5 million points, then went to 15, 30, 40, and finally 60 million. It ran through all of them with no issues. What the heck? I tried running it on all 150 million points and it never stalled - 47 minutes later it was done. The results are very similar to Kriging except where there are voids in the data.

Triangulation is a process that I can work with: download the data, link to it, extract the ground data into one table, and use Interpolate, Triangulation to create the surface image. I should be able to do all that in 2 hours.

Attachments:
Triangulation Image 30 LAZ files.jpg

Dimitri


7,413 post(s)
#01-Jun-20 05:30

it was processing fewer than 1 record per second.

That can happen when there isn't an index on a field the query uses. What are the fields and the indexes on the big table?

dchall8
1,008 post(s)
#01-Jun-20 18:15

The query removed the indices so new records could be added to the old table (ground_cloud). Then the last line of the query was this.

ALTER TABLE [ground_cloud] (  ADD [mfd_id] INT64, ADD INDEX [mfd_id_x] BTREE ([mfd_id]));

Here's the resulting table schema.

The query ran for several minutes before it slowed down.

Attachments:
Schema for ground cloud.jpg

Dimitri


7,413 post(s)
#02-Jun-20 05:43

... and the rest of the query? :-)

From the schema, you don't have indices on fields other than Geom or mfd_id. Suppose you're doing something intricate with the other fields: if you don't have indices on a field essential to big doings, you'll get a slowdown like you describe.

Take a look at this topic and see how indexes are added to fields that are used in the join. Work through that same topic on a bigger data set, like Switzerland instead of Cyprus, and failing to have indexes will result in many hours of processing instead of tens of seconds.
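As a minimal sketch of what adding such an index looks like (illustrative only - the table and field names here are placeholders; put the index on whatever field your query actually joins or filters on):

ALTER TABLE [ground_cloud] (
  ADD INDEX [classification_x] BTREEDUP ([classification])
);

BTREEDUP allows duplicate values, which is what you want for a field like a classification code that repeats across many records; a plain BTREE requires unique values.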

lionel

995 post(s)
#02-Jun-20 23:27

I think there is a lot to read before mastering indexes. As always, the Manifold documentation is wonderful, but it takes time to read - not just to understand how to use the GUI, but to understand why we need to do this or that, so we can make a reasonably good (optimal for speed) choice in any specific context (an SQL query, say).

Looking at the capture above, a new user will ask whether the first line (btree) is meant for a specific object type, since the other two lines show tile and geometry.

2) The best place to start is the index chapter, but what is the difference between a bounding rectangle and a box?

Found in the index chapter:

rtree

A spatial index that is a balanced tree structure utilizing bounding rectangles or boxes.

3) Here are some Manifold doc links:

SQL Example: Using the mfd_id Field and mfd_id_x Index


Attachments:
manifold_index.png



dchall8
1,008 post(s)
#05-Jun-20 00:16

Here's the query abbreviated so you don't have to look at 30 repeated groups of lines

SELECT * INTO ground_cloud FROM [usgs18-70cm_14RMT875835]::[usgs18-70cm_14RMT875835] WHERE [classification] = 2;

ALTER TABLE ground_cloud ( DROP INDEX mfd_id_x, DROP mfd_id );

SELECT * INTO test_select FROM [usgs18-70cm_14RMT875850]::[usgs18-70cm_14RMT875850] WHERE [classification] = 2;

ALTER TABLE test_select ( DROP INDEX mfd_id_x, DROP mfd_id );

INSERT INTO ground_cloud SELECT * FROM [test_select];

DROP TABLE test_select;

SELECT * INTO test_select FROM [usgs18-70cm_14RMT875865]::[usgs18-70cm_14RMT875865] WHERE [classification] = 2;

ALTER TABLE test_select ( DROP INDEX mfd_id_x, DROP mfd_id );

INSERT INTO ground_cloud SELECT * FROM [test_select];

DROP TABLE test_select;

... (the same block repeated for a total of 30 linked LAZ files) ...

ALTER TABLE [ground_cloud] (  ADD [mfd_id] INT64, ADD INDEX [mfd_id_x] BTREE ([mfd_id]));

I will rerun that query and add indices for the fields X, Y, Z, and ScaledZ before running the Kriging transform, and see what I get.

tjhb
10,094 post(s)
#05-Jun-20 02:11

There is a basic mistake here:

The query removed the indices so new records could be added to the old table (ground_cloud). Then the last line of the query was this.

ALTER TABLE [ground_cloud] ( ADD [mfd_id] INT64, ADD INDEX [mfd_id_x] BTREE ([mfd_id]));

You should generally not remove mfd_id without a very good reason. It should almost always be present for any table you might need to manipulate further, unless the table has a BTREE index on a different field. (There are times when you really don't need it, but generally not many.)

Needing to add new records to a table is not a good reason to remove mfd_id. I think that points to a misunderstanding. I am following along...

You have probably tried to add records to a table, and had the error message "Can't add record", as many of us have.

The reason, as you have inferred or already know, is that the query tried to add a repeated value to the mfd_id field, which is not allowed (unique values required).

But the correct response to this is not to remove the index. It is simpler: just avoid supplying values for the mfd_id field.

You don't need to supply mfd_id values. Leave Manifold to supply them automatically--then they will always be unique, that is guaranteed, part of the hard wiring of SQL9.

(You can supply mfd_id values yourself, but there is basically never a good reason to do so (see the next paragraph). If you do supply them, you need to be certain that the values will be unique. I can't think of a reason why this would ever be worth the slight mental effort. Just don't. Leave it to the SQL engine.)

On the other hand, there are many common situations where it makes sense to keep track of the source mfd_id value for each new record in the target table. We often might want to know that. In that case, give the target table a new field, of the same type (INT64), calling it something like source_id (my habit).

Now, so long as that field does not have a BTREE index on it (though it can have BTREEDUP/BTREENULL/BTREEDUPNULL, if you need one of those), then you can fill it with values from any source mfd_id field, arbitrarily, whether including duplicates or not. Safe and easy.

This is almost always a better strategy than to remove existing standard indexes, planning to rebuild them later.
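A minimal sketch of that pattern, with illustrative names (assume [ground_cloud] is the target and [tile_cloud] a source table; only a couple of data fields are shown, the rest would be listed the same way):

ALTER TABLE [ground_cloud] (
  ADD [source_id] INT64,
  ADD INDEX [source_id_x] BTREEDUP ([source_id])
);

INSERT INTO [ground_cloud] ([Geom], [classification], [source_id])
SELECT [Geom], [classification], [mfd_id]
FROM [tile_cloud];

The target's own mfd_id is simply left out of the INSERT field list, so the engine fills it in automatically, while the source mfd_id values land in source_id, duplicates and all.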

adamw


10,447 post(s)
#05-Jun-20 11:38

Perhaps dropping and re-creating MFD_ID and MFD_ID_X was done because this was easier than reciting all fields except MFD_ID. But I would argue that the query fragment for each table becomes semantically simpler even though the list of fields is long:

--SQL9

SELECT * INTO temp FROM [1.1]::[1.1];

INSERT INTO temp (
  [Geom], [X], [Y], [Z], [intensity], [return_number], [number_of_returns],
  [scan_direction_flag], [edge_of_flight_line], [classification], [scan_angle_rank],
  [user_data], [point_source_ID], [gps_time], [ScaledX], [ScaledY], [ScaledZ]
)
SELECT
  [Geom], [X], [Y], [Z], [intensity], [return_number], [number_of_returns],
  [scan_direction_flag], [edge_of_flight_line], [classification], [scan_angle_rank],
  [user_data], [point_source_ID], [gps_time], [ScaledX], [ScaledY], [ScaledZ]
FROM [1.2]::[1.2];

INSERT INTO temp (
  [Geom], [X], [Y], [Z], [intensity], [return_number], [number_of_returns],
  [scan_direction_flag], [edge_of_flight_line], [classification], [scan_angle_rank],
  [user_data], [point_source_ID], [gps_time], [ScaledX], [ScaledY], [ScaledZ]
)
SELECT
  [Geom], [X], [Y], [Z], [intensity], [return_number], [number_of_returns],
  [scan_direction_flag], [edge_of_flight_line], [classification], [scan_angle_rank],
  [user_data], [point_source_ID], [gps_time], [ScaledX], [ScaledY], [ScaledZ]
FROM [1.3]::[1.3];

The query above merges data from 3 tables: I do SELECT INTO for the first one and INSERT / SELECT for the next two. This is also faster than dropping and re-creating fields and indexes.

Also, the list of fields can be generated automatically using the query builder - drop one of the source tables into the query builder, right-click it and invoke Insert Field List. Then you only have to remove MFD_ID from the list, after which it is all copy / paste / adjust the table name for the next table, like before.

dchall8
1,008 post(s)
#05-Jun-20 18:20

Thank you, Adam. That is exactly (almost) how I ran it last night. I tuned it up and ran it again successfully...which you already knew if you have the same data I have. For two files it took 744 seconds to get classification = 2 (9.2 million records).

SELECT * INTO ground_cloud2 FROM [usgs18-70cm_14RMT875835]::[usgs18-70cm_14RMT875835]
WHERE [classification] = 2;

INSERT INTO ground_cloud2 (
  [Geom], [X], [Y], [Z], [intensity], [extended_return_number], [extended_number_of_returns],
  [extended_classification_flags], [scan_direction_flag], [edge_of_flight_line], [classification],
  [user_data], [extended_scan_angle], [point_source_ID], [gps_time], [ScaledX], [ScaledY], [ScaledZ]
)
SELECT
  [Geom], [X], [Y], [Z], [intensity], [extended_return_number], [extended_number_of_returns],
  [extended_classification_flags], [scan_direction_flag], [edge_of_flight_line], [classification],
  [user_data], [extended_scan_angle], [point_source_ID], [gps_time], [ScaledX], [ScaledY], [ScaledZ]
FROM [usgs18-70cm_14RMT875850]::[usgs18-70cm_14RMT875850] WHERE [classification] = 2;

Thank you for the shortcut to select all the fields. In my old job I had 35 or so fields to deal with; selecting them all would have been a great feature. That was many M9 versions ago, last fall. There's another shortcut I'm looking for: I believe I saw a video where you can copy a drawing style and paste it into another drawing. Rewatching all the videos helps reinforce the tools, so...I'm enjoying the journey.

Moving on to the next step: after reading the topic referred to by Dimitri and trying a few things, I'm not seeing the speed benefit he hinted at. Looking at the Kriging query, it uses the fields X, Y, and ScaledZ (my selection). Adding indices on those did not seem to improve the speed. The query itself seems to establish some indices...

-- Transform - Interpolate, Kriging - Add Component
--
CREATE TABLE [ground_cloud2 Interpolate, Kriging] (
  [X] INT32,
  [Y] INT32,
  [Tile] TILE,
  [mfd_id] INT64,
  INDEX [X_Y_Tile_x] RTREE ([X], [Y], [Tile] TILESIZE (128, 128) TILETYPE FLOAT64),
  INDEX [mfd_id_x] BTREE ([mfd_id]),
  PROPERTY 'FieldCoordSystem.Tile' 'EPSG:6343',
  PROPERTY 'FieldTileSize.Tile' '[ 128, 128 ]',
  PROPERTY 'FieldTileType.Tile' 'float64'
);

The result of combining data from 30 tables is a table with 150 million records (about 4 hours to combine). Running the Kriging transform causes my computer to slow down to a processing speed of less than 1 record per second. Dimitri suggested that with the proper indices it might take tens of seconds to run through the transform. I don't really care if it takes an hour or so, as long as it continues to process at something more than 1(ish) record per second.

adamw


10,447 post(s)
#06-Jun-20 07:25

Whole-component interpolations like Kriging or triangulation build their own indexes internally, so for them it should not matter whether the original data is indexed. Indexes help a lot of other operations, though - for example, they make a huge difference for joins. I think the point of Dimitri's question was to see the entire query, to check whether there's anything in it that could be helped by building appropriate indexes. After merging the tables is improved by using SELECT INTO for the first table and INSERT / SELECT for the rest, the part that performs Kriging is pretty much what it should be. We'll take a look at the interpolation of a big number of records to check what's likely the bottleneck. The 1 record / sec might be more of a reporting issue, not reflecting the amount of work that is actually done. Or it might be a genuine slowdown, and if so, we'll try to see which specific resource the algorithm is running short of, to understand which specific actions would improve its performance the most.

dchall8
1,008 post(s)
#06-Jun-20 18:37

Using SELECT INTO for the first table and INSERT / SELECT for the rest shaved 3 hours off the time to build that table. So 50% off: 3 hours instead of 6. Life is good.

I'll try the Kriging transform on the entire table and let it run. If it was a reporting issue, then I stopped it prematurely. We'll see. Thanks.

dchall8
1,008 post(s)
#06-Jun-20 20:42

OH MY

Dimitri was right about 10s of minutes. It took just 57 minutes for the Kriging Transform to finish.

dchall8
1,008 post(s)
#07-Jun-20 19:45

I downloaded a new quarter quad of LAZ files adjacent to the previous quarter quad and ran through the process. The times were as mentioned above, so that was good. The new table had 156 million records, a few million more than the previous quarter quad. Then I went to save it and got a Cannot write data error, apparently due to a lack of RAM. When I bought the computer, 16 MB of RAM was a lot, but no more. Since it did not save as a .MAP, I tried copying the resulting Kriging table and drawing to a different file. The drawing appeared with a red flag. I deleted the drawing and tried to create a new drawing from the table. The drawing was not created. I abandoned that idea and exported the original file to a .mxb file. That took a while, but it seemed to generate a file. With the file still open, I turned off all the other apps on the computer to free up some RAM. I tried to save again as a normal project and eventually got a Windows notification that Manifold had stopped working (check the Internet for a solution or turn off Manifold).

Now when I try to import that file I get a Cannot write data error. Oh I also increased the cache to 8 MB or half the RAM as Adam suggested earlier.

I'm looking in the manual for the red flag code but not sure what the terminology is. Is there another way to copy the Kriging table and drawing to a new project? I probably should have deleted the 156 million record table and drawing and saved with only the Kriging table and drawing, but here we are. I'm going to try that now.

tjhb
10,094 post(s)
#07-Jun-20 21:08

It sounds like your user TEMP folder is full.

dchall8
1,008 post(s)
#08-Jun-20 22:21

You are clairvoyant. However, clearing the nearly 100 MB of temp files Manifold had built did not help with this data set. I've also noticed that my hard drive is nearly filled with these LAZ files and the associated Manifold files. But anyway...

Instead of running all 30 LAZ files, I ran 15 of them. Again, unexpected results. Here's the unexpected image.

Trying to use the Kriging transform on that drawing returned me to the 0/sec speed, pegged all my CPU cores, and basically locked up the computer. I let it run for 1.5 hours before canceling. Previously it had been taking about 57 minutes for the transform to run on all 30 LAZ files, so 1.5 hours was clearly excessive. The Kriging masters among you will probably understand why it choked. I assumed it did not like running with that stair-stepped arrangement of points, so I deleted all the selected points in the image and reran the Kriging transform. This time it took 16 minutes. Progress.

I'm rewriting my queries to run only 10 LAZ files at a time so I can get this done.

Attachments:
Ground Cloud - 15 files.jpg

dchall8
1,008 post(s)
#10-Jun-20 19:06

10 files did not work either, as it gave me another stair-stepped bundle of points. Apparently, given the way these numbered files sort, 12 files is the magic number to get even blocks with no outlying clouds. So I ran 12, 12, and the final 6 files to create workable-sized ground clouds for Kriging. Then I joined the three resulting drawings using the projection from the metadata and styled them.

While this has been an interesting educational exercise, USGS and TNRIS also provide DEM files already done for you, so I'm going that direction for speed. I believe the provided DEM files are slightly lower quality, but I might be fooling myself. For what I'm doing, they seem to be fine. Some of the DEMs are 'broken' in that they show elevations of -9999 meters. I will look at the cloud data for those few areas.

If I were to build a computer for processing a lot of LiDAR data, should I emphasize CPU core count, GPU core count, RAM, and/or HDD speed?

adamw


10,447 post(s)
#11-Jun-20 07:19

Everything helps, but for LiDAR, HDD space / speed and the amount of RAM would probably help the most.

The -9999 values in the DEM likely denote missing data = invisible pixels.

Dimitri


7,413 post(s)
#08-Jun-20 05:39

The drawing appeared with a red flag.

What do you mean by "red flag?" Could you attach a screenshot?

I deleted the drawing and tried to create a new drawing from the table. The drawing was not created.

What did you do, step by step, to create a new drawing from the table, and what happened (all details, including any messages from the Log window).

Also, I trust you mean "GB" and not "MB" in the memory values reported.

dchall8
1,008 post(s)
#11-Jun-20 18:30

I need special Manifold glasses. It's small, but it's shaped like a stop sign with an exclamation point in it.

Step by step:

  1. Created image in Manifold
  2. Opened second instance of Manifold
  3. Copied image and table for image from 1st instance.
  4. Pasted image and table into 2nd instance.

Here's the log window.

2020-06-11 12:28:01 -- Manifold System 9.0.172.2 Beta

2020-06-11 12:28:01 -- Starting up

2020-06-11 12:28:01 Log file: C:\Users\David 2014\AppData\Local\Manifold\v9.0\20200611-682064534.log

2020-06-11 12:28:01 -- Startup complete (0.597 sec)

2020-06-11 12:28:01 -- Create: (New Project) (0.008 sec)

2020-06-11 12:28:11 -- Paste: (root) (0.471 sec)

Attachments:
Manifold Red Flag.jpg

Dimitri


7,413 post(s)
#11-Jun-20 19:19

That's a message. Launch View - Messages to read it. It's probably advising you that intermediate levels are out of date or some other routine message, and provides a button to resolve the issue.

dchall8
1,008 post(s)
#11-Jun-20 21:42

Thanks Dimitri. I did not know about messages. That fixed it. The idea was to simplify the project and not have to carry around all the quarter quad files. The file so far is still big at 315 GB for a 130 square mile area.

Also it was GB and not MB.

Right now I'm having trouble importing a GeoTIFF file from USGS. (EDIT: 172.3 fixed the import.) As the docs say, I'm an inexpert user of GeoTIFF files, but I hope to get to non-novice this week. If I can't figure this out, I'll start a new topic, but I have a lot to read yet. The goal here is to overlay the historic USGS maps with the DEM features a la Scott Reinhard.

Attachments:
130 Sq Mi Bandera and Medina Counties.jpg

tjhb
10,094 post(s)
#11-Jun-20 23:17

I have a question.

When you were having trouble with the LiDAR points in that stair-stepped arrangement, what Kriging settings did you use?

In particular, did you use a specified radius? (But all parameters would be best please.)

I am thinking that for points far along the inside edge of the "L", using no radius could impose an undue search burden, and perhaps demand fine interpolation over the "empty" area between, which would be pointless.

I think this could be avoided.

dchall8
1,008 post(s)
#11-Jun-20 23:33

Here are the settings. I made no other adjustments.

Attachments:
Kriging Settings.jpg

tjhb
10,094 post(s)
#11-Jun-20 23:41

Right. That makes sense then, to me anyway.

If you have time, can you try using

margin: 0

resolution: 1

radius: 40

neighbours: 24

model: exponential

on an irregular data extent (e.g. L-shaped)? I'd be interested to know if it does (not) choke with those settings.

Also for a regular extent--you might like to compare results there too.

(Please substitute this for what I said above. I have checked my notes now; I should have checked before, sorry.)

tjhb
10,094 post(s)
#11-Jun-20 23:57

P.s. In case Adam is reading later, is there any chance we could have a catenary model as well as the other options?

AFAIK no one does this, but for a while my strong intuition has been that this is what the landscape itself does, over time (how erosion settles).

The math for catenary interpolation is beyond me. I think it is hyperbolic and necessarily numerical. (I suppose that would make it no longer Kriging?) I am working slowly on this for a related purpose. Really I need to go back to university and do mathematics not philosophy and law. There may be an age limit on that though. (Any hints very welcome.)
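(For what it's worth, and just citing the textbook form rather than claiming anything about how an interpolator would use it: a catenary is y = a * cosh(x / a), so it is indeed built from a hyperbolic function, and fitting the parameter a to given data generally has to be done numerically by solving a transcendental equation.)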

tjhb
10,094 post(s)
#12-Jun-20 00:21

[This paper looks exactly on point for my question. I wish I could read it. I'll try though.]

Dimitri


7,413 post(s)
#12-Jun-20 06:03

Really I need to go back to university and do mathematics not philosophy and law. There may be an age limit on that though. (Any hints very welcome.)

A really wonderful book: Mathematics: From the Birth of Numbers by Jan Gullberg. Read it from the beginning and you get a broad overview of math, and from there you can dive into textbooks for more specific areas.

I think the key thing for learning math is getting a good text. You read ahead for lectures in university anyway, and I think the main function of instructors is editing: choosing an efficient path through fat textbooks that matches the time students have for their course. It's probably less efficient to just read the full textbooks and do all of the problems at the end of each section, but you'll definitely learn the subject that way.

If you ever get stuck on a concept, there are plenty of math forums.

tjhb
10,094 post(s)
#12-Jun-20 06:08

No.

I will get hold of Gullberg, thanks. But.

Almost all math teachers and textbooks and online courses are provided by people who have no idea why what they are presenting makes sense, and absolutely don't care. Mainly just formalisms.

At the other extreme, there are teachers who seem to care about why, but would really like to make things difficult enough that they can explain them and get credit.

This is also what is wrong with physics.

What I absolutely hate, is that talent for putting concepts into plain language has no currency at all. That is just wrong.

Dimitri


7,413 post(s)
#12-Jun-20 07:33

Almost all math teachers and textbooks and online courses are provided by people who have no idea why what they are presenting makes sense, and absolutely don't care. Mainly just formalisms.

At the other extreme, there are teachers who seem to care about why, but would really like to make things difficult enough that they can explain them and get credit.

You'll like Gullberg, as he's the exact opposite of both the above. His book is as much history and commentary as it is about teaching the math. It's a book by someone who loves math, who wants to help others love math, too.

As for physics, I recommend the Feynman Lectures on Physics as a basic introduction. Richard Feynman, besides being a great genius, was one of the truly unique personalities of the Physics world, famous for his bongo drumming, and revered as "the Great Explainer."

Also wonderful is "Gravitation" by Misner, Thorne, and Wheeler, a book notable for insisting that general relativity can be made accessible to undergraduates and to autodidacts.

rk
621 post(s)
#12-Jun-20 07:48

I have found Grant Sanderson's Essence of Linear Algebra series extremely helpful

https://www.youtube.com/channel/UCYO_jab_esuFRV4b17AJtAw/playlists

as well as Pavel Grinfeld's lectures (and his character)

https://www.youtube.com/channel/UCr22xikWUK2yUW4YxOKXclQ/playlists

Grinfeld has two books in progress on Patreon. One on Complex Numbers and the other on Tensor Calculus. I can say that I'm slowly starting to grasp tensors.

I also liked Tristan Needham's Visual Complex Analysis. I have only digested parts of it.

My algebra teacher took a very abstract approach: sets -> monoids -> groups -> rings -> fields -> vector spaces.

adamw


10,447 post(s)
#12-Jun-20 08:00

We'll check out the model. Thanks for the link.

ColinD

2,081 post(s)
#12-Jun-20 08:58

Do it Tim, there is no age limit on learning. I graduated with my PhD (which you thankfully assisted with by way of some SQL) as a 70-year-old.



tjhb
10,094 post(s)
#12-Jun-20 12:10

Thanks everyone for the encouragement (the encouragement not to be unduly daunted).

As far as the paper linked above goes, I was wrong, it is not useful in the way I had thought. In a nutshell, there is no way to map a catenary onto a smooth spiral. The curvature of a catenary is not symmetrical over the plane--that is its whole point--so any radial mapping of a catenary curve cannot be represented (or modelled) analytically.

The paper is interesting and well written but does not add an extra dimension for the question I was asking.

adamw


10,447 post(s)
#08-Jun-20 09:17

The red flag was likely a report that the table is missing the spatial index. It doesn't really matter what it was and why, because the problem is definitely the 'Cannot write data' error. That error was probably a lack of disk space on the TEMP drive, as Tim says. After you run out of temp space (= virtual memory), everything goes south - most operations need at least some memory, and if they cannot get it, they fail, so you might get multiple error messages all related to the same issue of there being no free temp space. The most you can do is try to move some files from the drive hosting the TEMP folder somewhere else, then save the data to a new file, preferably on the other drive as well (perhaps best to export an MXB, which takes more time but costs less space), then shut down Manifold, delete any stray temp files if there are any (eg, use the Disk Cleanup tool, and shut down all sessions of Manifold if you have multiple running before you do that, to clean up as much as possible), move the files you temporarily moved off the drive back, and start from the saved file.

adamw


10,447 post(s)
#02-Jun-20 08:17

One thing you could try doing to improve the performance of operations with large amounts of data is to allow Manifold to use more memory for cache than the default 4 GB. You can do that in Tools - Options. On a machine with 24 GB it might make sense to increase the default 4 GB to 8 / 10 / 12 GB. There's no need to set the limit higher than half of the RAM; the cache is not the only thing that needs memory, and there are other applications besides Manifold which also need some space. If you change the setting, you have to restart Manifold for the new value to come into effect. You can see the amount of memory currently used for cache / other things in the About dialog.
