I've re-read the documentation of the raster model I'm trying to recreate in Manifold. The model evaluates categories of surface variability within a fixed physical distance:
90 meter DEMs: 3 pixel radius sampling kernel
30 meter DEMs: 10 pixel radius sampling kernel
10 meter DEMs: 31 pixel radius sampling kernel
5 meter DEMs: 63 pixel radius sampling kernel
2 meter DEMs: 151 pixel radius sampling kernel
1 meter DEMs: 301 pixel radius sampling kernel
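The radii above are essentially the target distance divided by the pixel size. A quick sketch of that arithmetic, assuming a 300m target (the documented radii at the finer resolutions run a few pixels over the plain quotient, so this is an approximation, not the model's exact rule):

```python
# Sketch: derive a kernel radius (in pixels) from pixel size for a ~300 m
# physical search distance. The documented radii (31, 63, 151, 301) sit
# slightly above the plain quotient, presumably rounded up, so treat this
# as an approximation rather than the model's published rule.
def kernel_radius(pixel_size_m, target_m=300.0):
    return round(target_m / pixel_size_m)

for px in (90, 30, 10, 5, 2, 1):
    r = kernel_radius(px)
    print(f"{px} m pixels -> radius {r} px (~{r * px} m)")
```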
The model computes three metrics:
Vertical relief variability: the maximum elevation within the search radius minus the minimum elevation within the search radius.
Slope diversity within the search radius.
Aspect diversity within the search radius (aspect recoded into 17 categories, with 0 representing no significant slope or aspect).
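For what it's worth, here's how I'd sketch the relief and aspect-diversity metrics outside Manifold, using NumPy/SciPy as a stand-in for the built-in focal tools. The flat-slope threshold and the 16-bins-plus-flat aspect coding are my guesses, not the model's documented values:

```python
import numpy as np
from scipy import ndimage

def circular_footprint(radius):
    # Boolean disk approximating the physical search radius in pixels.
    y, x = np.ogrid[-radius:radius + 1, -radius:radius + 1]
    return x * x + y * y <= radius * radius

def recode_aspect(slope_deg, aspect_deg, flat_thresh=2.0):
    # 17 categories: 0 = no significant slope, 1..16 = 22.5-degree bins.
    # flat_thresh is a guess at what counts as "no significant slope".
    bins = (np.floor(aspect_deg / 22.5).astype(int) % 16) + 1
    return np.where(slope_deg < flat_thresh, 0, bins)

def category_diversity(cats, footprint, n_classes):
    # Number of distinct categories present within each window: one
    # per-class max filter, then sum the presence flags.
    present = [ndimage.maximum_filter((cats == c).astype(np.uint8),
                                      footprint=footprint)
               for c in range(n_classes)]
    return np.sum(present, axis=0)

def focal_metrics(dem, radius, pixel_size=1.0):
    fp = circular_footprint(radius)

    # Vertical relief: focal max minus focal min.
    relief = (ndimage.maximum_filter(dem, footprint=fp)
              - ndimage.minimum_filter(dem, footprint=fp))

    # Slope/aspect from elevation gradients.
    dzdy, dzdx = np.gradient(dem, pixel_size)
    slope = np.degrees(np.arctan(np.hypot(dzdx, dzdy)))
    aspect = np.degrees(np.arctan2(-dzdx, dzdy)) % 360.0

    aspect_div = category_diversity(recode_aspect(slope, aspect), fp, 17)
    return relief, aspect_div
```

Slope diversity would work the same way after binning slope into classes. The expensive part at a 151 or 301 pixel radius is the per-class focal pass, which is exactly where Manifold's built-ins should win.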
The key factor is that the model is based on a physical distance (approximately 300m), not a fixed pixel count.
Obviously, I could resample the data to a coarser pixel size (2, 3, or 5 meters), but I'd like to be able to evaluate the impacts of micro-relief (1 to 3 meters horizontal) that would be lost when resampling high-res lidar.
Manifold already has most of the functions built-in, but the data set I'm trying to evaluate is quite dense.
I started with (and still have) a 1m DEM of Garrett County, Maryland that Art Lembo graciously shared with me in relation to this posting: http://www.georeference.org/forum/t142200.26#142201
It seemed ambitious to ask for a 301 pixel radius mask, so I stepped back to a 151 pixel radius, expecting to resample the DEM to 2m pixels.
I haven't tackled sending TileValues to a table and creating a layer of points, but given Manifold's underlying capability to handle big data, I wonder whether it would be more attainable to tackle the model that way, using buffers.
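To make the buffer idea concrete, here's a toy version of it in Python (an assumed workflow, not the TileValues API): dump each cell to a point, then gather neighbors within the physical radius using a spatial index. It shows the buffer formulation is straightforward, though at county scale on 1m data it would be punishingly slow without tiling or aggregation:

```python
import numpy as np
from scipy.spatial import cKDTree

def relief_by_buffer(dem, pixel_size, radius_m):
    # One point per cell, at the cell's physical (x, y) position.
    rows, cols = np.indices(dem.shape)
    pts = np.column_stack([cols.ravel() * pixel_size,
                           rows.ravel() * pixel_size])
    z = dem.astype(float).ravel()

    # Spatial index so each "buffer" query is a radius search,
    # not a full scan over every point.
    tree = cKDTree(pts)
    out = np.empty_like(z)
    for i, nbrs in enumerate(tree.query_ball_point(pts, r=radius_m)):
        out[i] = z[nbrs].max() - z[nbrs].min()
    return out.reshape(dem.shape)
```

Because the query is by physical distance, the same code works unchanged at any pixel size, which is the appeal of the point/buffer framing over a fixed-radius kernel.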