Threshold area analysis??
tomasfa
182 post(s)
#15-Apr-16 21:59

Hi Manifolders, I haven't been able to do or figure out how to do this in Manifold.

I want to create an area based on a threshold value. Basically, I have two drawings: one of points, let's call it "Stores", and an area drawing of census units with population counts.

I need to create a circle or area around each "Store" point, sized so that the population in the census units it covers reaches a certain threshold, for example 5000 people. In an urban area it would be a small ring, a mile or so; in a rural area I suppose it might be 6 miles around the store location, as an example.

Any ideas how we can do this in Manifold???

I don't use ESRI, but I read in the Business Analyst documentation that they do have such an analysis tool. A link, for reference: https://desktop.arcgis.com/en/arcmap/latest/tools/business-analyst-toolbox/threshold-trade-areas.htm

Any ideas would be appreciated. Maybe it's possible with spatial SQL?

Best wishes to all.

tjhb
10,094 post(s)
#15-Apr-16 22:17

Interesting problem! Can you post some sample data?

And do you ideally want rings, or convex hulls?

The "obvious" way to do it is, for each store, rank all census units by distance (what distance? minimum, maximum? weighted centroid?), then pick the closest N for each, up to a population sum M, and draw an enclosing circle or a hull.

But that will quickly get out of hand and become slow.

This really calls for custom code, but SQL might get close enough, if it's OK to process in groups, e.g. easy cases, somewhat difficult cases, then the remainder.

For real-world use, there would usually be a calculation of travel time, or network distance, between each store and all neighbouring points. But radial distance might be correct, or a good enough approximation.
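
Something like this might be the first step, just as a sketch (guessing at drawing and column names until we see the data, and measuring to a weighted centroid rather than to the nearest edge):

-- Every store paired with every census unit, with a distance
-- from the store point to the census unit's weighted centroid
SELECT [Stores].[Name], [CensusUnit].[TotPop],
    Distance([Stores].[Geom (I)], CentroidWeight([CensusUnit].[Geom (I)])) AS [dist]
FROM [Stores], [CensusUnit];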

tomasfa
182 post(s)
#19-Apr-16 17:51

Hi guys, here is the sample data, a basic town and data. Your ideas sound great! Sorry I didn't post sooner.

Answering your comments, I agree that in a real-world application travel time would be much better. The roads in the sample do have speed data just in case, but radial distance is a useful first approach. From nothing to a buffer, it's an improvement. It would be better to have a hull area instead of a plain radial distance, but then again, I wouldn't know how to implement that.

Thinking about whether every census unit must be attributed to a single store: I think we shouldn't worry about that, and any census unit can be assigned to any store, even to several, because that will help in a cannibalization or overlap analysis of trade areas. It would be useful to see those overlapping areas. And if the attributes have to be distributed, I'll go for proportionately.

In the Stores table, the columns hold the minimum population required, to give the threshold limits.

Thank you for the support and Yes this is an interesting problem to solve. Go Manifold!!!!!

Attachments:
Threshold_v1.map

tjhb
10,094 post(s)
#15-Apr-16 22:33

For convenience, here is tomasfa's ESRI URL as a link.

https://desktop.arcgis.com/en/arcmap/latest/tools/business-analyst-toolbox/threshold-trade-areas.htm

tjhb
10,094 post(s)
#15-Apr-16 22:39

If tomasfa does get around to posting sample data then it would be really cool if someone with relevant ESRI experience could test and post results and times to compare.

tjhb
10,094 post(s)
#15-Apr-16 23:15

And (fundamental): must each census unit be attributed to only one store? Wholly, proportionately, evenly?

artlembo


3,400 post(s)
#16-Apr-16 00:31

I will try it with ArcGIS and see what we find.

artlembo


3,400 post(s)
#19-Apr-16 23:37

well I'll be a monkey's uncle. This does it! Just link a drawing to this query.

SELECT ConvexHull(UnionAll(g)) AS g, Max(sumpop) AS sumpop, name
FROM
(
    SELECT a.name, SUM(a.totpop) AS sumpop, UnionAll(a.[Geom (I)]) AS g
    FROM
        (SELECT stores.name, censusunit.[TotPop], censusunit.[Geom (I)],
            Distance(censusunit.[Geom (I)], stores.[Geom (I)]) AS dist
        FROM stores, censusunit
        ORDER BY name, dist) AS a,
        (SELECT stores.name, censusunit.[TotPop], censusunit.[Geom (I)],
            Distance(censusunit.[Geom (I)], stores.[Geom (I)]) AS dist
        FROM stores, censusunit
        ORDER BY name, dist) AS b
    WHERE a.name = b.name
    AND a.dist <= b.dist
    GROUP BY a.name, b.dist
)
WHERE sumpop < 5000
GROUP BY name

Just copy and paste this query into your map and link a drawing to it.

The only problem I see is that it only selects accumulations under 5,000. So, if a store reaches 4,000 it will grab that, and not circle out further to get to 5,000.

If you don't understand the code, I'll try to write up a walkthrough of it on my blog - this will make for a nice Manifold SQL post. Might even try it in Postgres as well.

artlembo


3,400 post(s)
#20-Apr-16 02:47

For those interested, this works in PostGIS as well (I think this is a good discussion item - notice how similar it is):

SELECT ST_ConvexHull(ST_Collect(g)) AS geometry, max(sumpop) AS sumpop, name
INTO zones
FROM
(
    SELECT a.name, SUM(a.totpop) AS sumpop, ST_Collect(a.geometry) AS g
    FROM
        (SELECT stores.name, censusunit.totpop, censusunit.geometry,
            ST_Distance(censusunit.geometry, stores.geometry) AS dist
        FROM stores, censusunit
        ORDER BY name, dist) AS a,
        (SELECT stores.name, censusunit.totpop, censusunit.geometry,
            ST_Distance(censusunit.geometry, stores.geometry) AS dist
        FROM stores, censusunit
        ORDER BY name, dist) AS b
    WHERE a.name = b.name
    AND a.dist <= b.dist
    GROUP BY a.name, b.dist
) AS T1
WHERE sumpop < 5000
GROUP BY name

mtreglia
155 post(s)
#20-Apr-16 02:51

That's awesome - Thanks for posting, Art!

artlembo


3,400 post(s)
#20-Apr-16 02:57

still trying to figure out a clever way to guarantee that all the values will be above 5,000.
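
One candidate rule, sketched with made-up numbers (PostgreSQL syntax for the toy table): keep each accumulation while the running total *before* its last census unit is still under the target, so the first accumulation at or over 5,000 is kept too.

-- accum is a toy stand-in for the inner level's per-store running totals:
-- unit_pop is the last census unit added, sumpop the cumulative total
WITH accum(name, unit_pop, sumpop) AS (
    VALUES ('Store A', 1200, 1200),
           ('Store A', 3000, 4200),
           ('Store A', 1500, 5700),  -- first accumulation over 5,000: kept
           ('Store A', 2000, 7700)   -- already past the threshold: dropped
)
SELECT name, unit_pop, sumpop
FROM accum
WHERE sumpop - unit_pop < 5000;

The same test could replace WHERE sumpop < 5000 in the real query if the inner level also passed up the last unit's population, but I haven't tried that yet.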

tjhb
10,094 post(s)
#20-Apr-16 04:08

I'm digesting your queries Art. Apart from the substance, there are some things we can do for speed, to help the task scale to more data.

For example, it is more efficient to convert A and B into a separate query (since the results are the same), then call the result twice from the present query.

Secondly, both ORDER BY clauses can be removed (from both queries), since they are not used (though for that reason the Manifold and/or PostgreSQL engines might ignore them anyway).
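
For example, something like this (a sketch only; the component name [Store distances] is just a placeholder, and the ORDER BY clauses are dropped):

-- Saved as its own query component, say [Store distances]:
SELECT stores.name, censusunit.[TotPop] AS totpop, censusunit.[Geom (I)] AS geom,
    Distance(censusunit.[Geom (I)], stores.[Geom (I)]) AS dist
FROM stores, censusunit;

-- The main query then reads that component twice instead of repeating the subquery:
SELECT ConvexHull(UnionAll(g)) AS g, Max(sumpop) AS sumpop, name
FROM
    (SELECT a.name, SUM(a.totpop) AS sumpop, UnionAll(a.geom) AS g
    FROM [Store distances] AS a, [Store distances] AS b
    WHERE a.name = b.name
    AND a.dist <= b.dist
    GROUP BY a.name, b.dist)
WHERE sumpop < 5000
GROUP BY name;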

But I have a strong feeling there is more we can do here.

This--

still trying to figure out a clever way to guarantee that all the values will be above 5,000.

--would at first sight be outside the objective, since 5,000 should be a threshold rather than a value to exceed. It should be possible to include the next-closest census area, if it is OK to exceed the threshold by some amount.

tomasfa
182 post(s)
#20-Apr-16 06:56

Art, this query is amazing. It works very well. Like tjhb, I'm digesting and figuring out the query itself; pretty cool. About the clever way to get above 5000: I agree with tjhb that that number should be a threshold, with some kind of fuzzy logic or a flexible limit, getting as close to the value as possible even if that means adding another census unit. It's better to reach the threshold at 5005 or 5050 than not to reach it at all.

Great work! Creative and useful, keeping it simple with just a query, not a long script. Usually the simplest answers are the best and the hardest to obtain.

Best regards, Tomas

tjhb
10,094 post(s)
#20-Apr-16 10:10

Here's my go at it. Similar but broken down more and with some differences. These are the main ones.

(1) In the [Stores] table is a column named [Total Pop Needed]. I read this as a target population threshold per store, rather than there being a fixed common target.

So for each store, census areas are added (going outward) until the total population reaches [Total Pop Needed] for that store. (In one of the examples the target is 9000, for the other two stores it is 2500.) This might not be what's needed, I'm not sure. (Easy to remove.)

(2) The nearest cumulative population is found for each target. E.g. given a target population of 9000, a total of 8900 out to census area X, and a total of 9050 out to census area X+1, all areas out to X+1 are included. (The mechanism for this uses the [rank] and [target area rank] columns.)

There are four queries. Running query 4 is enough if you only want the result, since it calls the others, but these are useful for testing and for understanding what happens. To get the resulting combined area for each store, link a drawing from [Census areas] in query 4.

There are notes and tests, and a headnote for each query saying what it does.

[1 Census areas by store]

-- For each store, list all census units

-- with their populations and distances

SELECT
    [D].[ID] AS [store ID],
    [D].[Name] AS [store name],
    [D].[Total Pop Needed] AS [target P],
    [E].[ID] AS [census area ID],
    [E].[TotPop] AS [census P], -- note 1
    Distance([D].[ID], [E].[ID]) AS [R] -- notes 2, 3
FROM
    [Stores] AS [D]
    CROSS JOIN
    [CensusUnit] AS [E]
;

-- Notes
-- 1
-- Source column [TotPop] is FP64
-- but was originally formatted to show 0 decimal places
-- (I have changed this)
-- The raw result here *inherits the source formatting*
-- (so if rounded at source, rounded here too)
-- but an *operation* on the result
-- (here or in the next query)
-- will use the underlying value, unrounded
-- 2
-- Better? use Distance(CentroidWeight to CentroidWeight)
-- Best?   use sum of Distance(CentroidWeight to each vertex) (vectors)
-- 3
-- For a large dataset, only search within a given radius
-- Add a PARAMETER, and replace CROSS JOIN with LEFT JOIN

[2 Cumulative population by store]

-- For each store

-- sum the cumulative population

-- out to (and including) each census area

SELECT
    [T].[store ID], [T].[store name], [T].[target P],
    [T].[census area ID], [T].[census P], [T].[R],
    COUNT([U].[census area ID]) AS [rank],
    --SUM([U].[census P]) AS [closer P], -- for testing
    --SUM(CAST([U].[census P] AS INTEGER)) AS [closer P'], -- for testing
    [T].[census P] + Coalesce(SUM([U].[census P]), 0) AS [cumulative P]
        -- population out to (and including) this census area
        -- may not be integer (note 1)
FROM
    [1 Census areas by store] AS [T]
    LEFT JOIN
    [1 Census areas by store] AS [U]
    ON [T].[store ID] = [U].[store ID]
    --AND [T].[store name] = [U].[store name] -- implicit
    --AND [T].[census area ID] <> [U].[census area ID] -- implicit
    --AND [T].[R] > [U].[R] -- without tiebreaker (see below)
    AND CASE
        WHEN [T].[R] > [U].[R] THEN TRUE
            -- census unit in T further from store than census unit in U
        WHEN [T].[R] < [U].[R] THEN FALSE
            -- census unit in T closer to store than census unit in U
            -- (ignore)
        ELSE -- census units are equidistant (unlikely but possible)
            [T].[census area ID] < [U].[census area ID]
            -- tiebreaker to ensure unique ordering
    END
GROUP BY
    [T].[store ID], [T].[store name], [T].[target P],
    [T].[census area ID], [T].[census P], [T].[R]
ORDER BY -- optional
    [T].[store ID] ASC,
    [T].[R] ASC
;

-- Notes
-- 1
-- See note 1 to query [1 Census areas by store]

[3 Nearest match population by store]

-- For each store

-- find the census area 

-- where the total census population

-- living as close or closer

-- is nearest to the target population

SELECT
    [store ID], [store name], [target P],
    FIRST([census area ID]) AS [target area ID],
    FIRST([rank]) AS [target area rank],
    --FIRST([R]) AS [target R],
    FIRST([cumulative P]) AS [nearest cumulative P],
    ABS([target P] - FIRST([cumulative P])) AS [delta P] -- for checking
FROM
    (SELECT
        [store ID], [store name], [target P],
        [census area ID], [census P], [R], [rank], [cumulative P]
    FROM [2 Cumulative population by store]
    ORDER BY
        --[store ID] ASC, -- optional
        ABS([target P] - [cumulative P]) ASC
        -- closest match first
    )
GROUP BY
    [store ID], [store name], [target P]
;

[4 Combine census areas with matched population]

-- For each store

-- combine the nearby census areas
-- having the total population
-- nearest the target

OPTIONS CoordSys("Stores" AS COMPONENT);

SELECT
    [T].[store ID], [T].[store name], [T].[target P],
    [U].[nearest cumulative P],
    SUM([T].[census P]) AS [cumulative census P],
        -- should match [nearest cumulative P]
    UnionAll([T].[census area ID]) AS [Census areas]
FROM
    [2 Cumulative population by store] AS [T]
    RIGHT JOIN
    [3 Nearest match population by store] AS [U]
    ON [T].[store ID] = [U].[store ID]
    AND [T].[rank] <= [U].[target area rank]
    --AND [T].[R] <= [U].[target R] -- implicit
GROUP BY
    [T].[store ID], [T].[store name], [T].[target P],
    [U].[nearest cumulative P]
;

Attachments:
1 Census areas by store.txt
2 Cumulative population by store.txt
3 Nearest match population by store.txt
4 Combine census areas with matched population.txt
Result.png

artlembo


3,400 post(s)
#20-Apr-16 12:28

I think the order by is necessary because that is how I am able to get the cumulative sum of population without doing an actual iteration.

And yes, a better option is to write out temporary tables, but I just couldn't help myself in trying to do this in one step. I am amazed that all this got done in a single, small query.

Also, as has already been alluded to elsewhere, Radian allows one to issue multiple queries (like Postgres) and can use multiple threads, so that should handle any speed problems.

tjhb
10,094 post(s)
#20-Apr-16 12:46

Art,

Our queries are much more similar than they look, the same DNA.

As to ORDER BY, while my #3 makes it do some actual work, to make the FIRST operator meaningful (so as to get the closest sum to the target), yours don't use them, so you can take them out with no change to the logic.

that is how I am able to get the cumulative sum of population without doing an actual iteration

It's not ORDER BY that does this, but the INNER JOIN (which you write as a comma form). This orders the two (identical) tables so that one set "lags behind" the other. That's the ordering which allows the cumulative sum. It's the same mechanism in my #2 (with a LEFT JOIN rather than INNER, and slightly different wiring, but basically the same).
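
To see the mechanism in isolation, here is a toy version with made-up data (plain PostgreSQL syntax, nothing spatial): for each row, sum the population of every row for the same store that is as close or closer.

WITH units(store, dist, pop) AS (
    VALUES ('A', 1.0, 1200), ('A', 2.5, 900), ('A', 4.0, 3100),
           ('B', 0.5, 2000), ('B', 3.0, 2600)
)
SELECT t.store, t.dist, t.pop,
    SUM(u.pop) AS cumulative_pop   -- total over everything as close or closer
FROM units AS t
JOIN units AS u
    ON u.store = t.store
    AND u.dist <= t.dist           -- u "lags behind" (or equals) t
GROUP BY t.store, t.dist, t.pop
ORDER BY t.store, t.dist;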

artlembo


3,400 post(s)
#20-Apr-16 13:04

Yes, you are correct. I dropped the ORDER BY, which makes the query even shorter than before. When I get to campus today, I'm going to try this on some larger data sets to see how it goes.

I'm definitely going to have to do a lecture on this one, as it illustrates yet another reason to use SQL for solving GIS problems. Again, as Tomas said, I am astounded that a little clever logic allows one to create a cumulative sum up to a threshold without having to write a script that performs iteration.

Even though my course on spatial SQL is published, I might have to update it, just to include this part. I'll now be looking for other examples to apply this logic to.

mdsumner


4,260 post(s)
#20-Apr-16 13:09

It's certainly powerful, but it's not exactly user-friendly; it's a very high bar here. :)

The non-expert value I see is in the ease of wrapping specific expert implementations into lightweight front-ends. Is there any general work on wrapper languages for SQL, things like CoffeeScript for JavaScript, Markdown for HTML?


https://github.com/mdsumner

artlembo


3,400 post(s)
#20-Apr-16 14:58

Mike,

That is an excellent point. This could easily be turned into a function. But, since Dimitri isn't here right now, let me challenge your view of "user-friendly" :-)

What has surprised me so much with this query (I know, we say this stuff never ceases to amaze us) is how simple it is, given what it is doing. This query is no more complex than what an entry-level engineering undergraduate might do with Excel for a class project (think about those COUNTIF and IIF statements that you can string together in Excel). And look at what it is doing:

- accumulated summation of a spatial distance search

- convex hull creation

I could write this in a for loop with a do..while and an if..then. Now, THAT is not exactly user-friendly. This query took about 8-10 minutes to create, from conception to running it. And I haven't been able to say that SQL surprises me in a long time - this one still has me shaking my head at how easy it is to do what it does.

BTW, I created a post that describes the query line-by-line here. It references PostGIS, although it's almost identical for Manifold.

jkelly


1,234 post(s)
#21-Apr-16 04:06

Is there any general work on wrapper languages for SQL

I suppose Linq (https://msdn.microsoft.com/en-us/library/bb397926.aspx) could be considered an attempt at making SQL more accessible. Truth be told, though, personally I find the abstraction leaky, and more often than not writing pure SQL is just the easiest way to solve the problem.


James Kelly

http://www.locationsolve.com

adamw


10,447 post(s)
#21-Apr-16 08:30

If we are talking about Linq, we should mention various ORM tools preceding it. They all have the same issues: the abstractions are leaky, the expressiveness / flexibility is a mere subset of that of SQL (and the moment the implementation tries to be expressive, it runs into performance issues).

dale

630 post(s)
#21-Apr-16 05:14

<off topic> I'm part way through Art's most excellent piano spatial SQL course, and I'm realising just how user-friendly SQL is. For those (not Mike S!) put off by the apparently unfriendly nature of SQL, invest some time (and USD $50 or less) and do Art's course.

adamw


10,447 post(s)
#21-Apr-16 08:21

Is there any general work on wrapper languages for SQL, things like CoffeeScript for Javascript, Markdown for HTML?

Never saw anything like that.

Usually, the next step higher on the "make it easier" scale is just forms that allow you to enter some values / check some boxes and generate SQL from these.

rk
621 post(s)
#21-Apr-16 09:18

Is there any general work on wrapper languages for SQL, things like CoffeeScript for Javascript, Markdown for HTML?

My personal feeling very briefly - I like what's behind SQL (set theory and logic) and for that I like to write mostly SQL (not loops), but I do not like SQL syntax at all (I have learned to see through SQL cruft). Mike, is it the same thing bothering you? You can take a look at projects at www.try-alf.org and www.andl.org

tjhb
10,094 post(s)
#21-Apr-16 10:54

My personal feeling very briefly - I like what's behind SQL (set theory and logic) and for that I like to write mostly SQL (not loops), but I do not like SQL syntax at all (I have learned to see through SQL cruft).

Might it be time to put in more work? SQL has zero "cruft" when it is well formatted, well commented, well reasoned. Compare to e.g. C#, where you can't move for explanations of the obvious.

mdsumner


4,260 post(s)
#22-Apr-16 02:58

It was a bit of a "leading" question :) dplyr for R is where it's at for me.

I'm going to integrate this with my R package for reading from Manifold, and flesh out this trade example using it. I can do the trade analysis in R now but it's very idiomatic and probably less friendly than Art's SQL (which is great, I do agree).

(I feel we're on the verge of a new framework on top of SQL, and I think Manifold can help lead the charge with this R approach. Other languages could implement dplyr as well)


https://github.com/mdsumner

rk
621 post(s)
#22-Apr-16 13:20

Might it be time to put in more work?

Me? I do not think I can love SQL more than I already do. I love it because it's the best practical language/tool available for GIS and many other things. At the same time I *am* passionate about pure relational theory and better (for my taste, but not only) languages for it.

I cannot agree with you, Tim, truly, that SQL has zero "cruft". But that's the purist in me talking.

I may have been too terse. No one should get the impression that I do not approve of the focus on SQL in GIS/Manifold/this forum/this thread. I'm fully and enthusiastically on board. It's just that I have this additional thing going on :-)

tjhb
10,094 post(s)
#23-Apr-16 01:20

Yes you're right Riivo. I was being provocative (though your word "cruft" was probably provocative too, a bit!).

I remember a few years ago you sent me some references and links about bringing SQL (and related existing or new languages) better into line with relational theory. They were well over my head at the time and probably still are, but it might be a good time to revisit them. I'll take a new look.

No way I would doubt your effort, ability, enthusiasm. Rhetorical comment only!

Plenty of room for both natural Platonists and natural Aristotelians to admire (and improve) SQL. It is already a remarkably pure thing--though with some accommodation for the dust of the world. Therefore durable.

mdsumner


4,260 post(s)
#22-Apr-16 02:59

Thanks! These look good


https://github.com/mdsumner

rk
621 post(s)
#20-Apr-16 13:20

ORDER BY in subqueries should be meaningless. A table operand must not be treated as sorted. In v8 that sometimes isn't so, and it can be used for some clever tricks to iterate over a table in a preferred order (I've used it). But notice that this kills [any potential] parallelism.

Reminds me that I should lobby for efficient "rank" calculation operators for tables. All kinds of "rank" or "quota" queries are expressible with a self join, but rank can be calculated more efficiently under the hood (by sorting, see!).
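
For example, with an assumed table units(store, dist) and distinct distances per store, these two give the same ranking, but the second says what it means and an engine can compute it with one sort per partition:

-- Rank by self join: count how many rows are as close or closer
SELECT t.store, t.dist, COUNT(*) AS rank
FROM units AS t
JOIN units AS u
    ON u.store = t.store AND u.dist <= t.dist
GROUP BY t.store, t.dist;

-- Rank by window function (PostgreSQL and others; not in v8)
SELECT store, dist,
    RANK() OVER (PARTITION BY store ORDER BY dist) AS rank
FROM units;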


tjhb
10,094 post(s)
#20-Apr-16 13:44

ORDER BY in subqueries should be meaningless.

(a) That is good set theory Riivo but in my opinion pointlessly purist for SQL, because (b) it entails that FIRST and LAST must always be meaningless (OK, arbitrary; but if the operators spelt "first" and "last" return arbitrary set elements then they are very close to meaningless, though not useless). FIRST and LAST exist, and are both useful and meaningful, but only if ORDER BY can be meaningful in subqueries.

You are right about parallelism.

On the other hand, regarding your second paragraph... but that must be carried on elsewhere (tomorrow for me).

artlembo


3,400 post(s)
#20-Apr-16 15:00

In my query, the ORDER BY does not affect the final outcome, but I would encourage you to run a portion of the query with the ORDER BY in there. By running that inner portion, the logic of what is happening becomes very clear (because we've ordered it by name and distance, you can see just how the accumulated sum can be built). I describe this in my post here.

rk
621 post(s)
#21-Apr-16 07:51

FIRST and LAST aggregate functions are a legacy of MS Access. Other DBMSs don't have them in that form.

tjhb
10,094 post(s)
#21-Apr-16 09:31

That is reasoning backwards, deriving objectives from facts.

Maybe FIRST, LAST should be removed from next-generation SQL. OK: why?

On the contrary, why not extend them? This is what window functions do in other dialects, hugely. (Yes you can use window functions without relying on ordering, but it's atypical.)

And the same, or better, can be done in a different way. So should FIRST, LAST be dropped? If so, it's certainly not because they are not pure enough, not set-theoretic enough, somehow wrong. It could only be because we had more of the same, stronger and better, which also fitted parallelism better.
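
For instance (a sketch only, assuming the results of query 2 sat in a table cum_pop with simplified column names store_id, target_p, census_area_id, cumulative_p), PostgreSQL's DISTINCT ON makes the same "closest match first" pick that FIRST over an ordered subquery makes in my #3:

SELECT DISTINCT ON (store_id)        -- one row per store
    store_id, census_area_id, cumulative_p
FROM cum_pop
ORDER BY store_id,
    ABS(target_p - cumulative_p);    -- closest match to the target first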

tjhb
10,094 post(s)
#21-Apr-16 10:46

Cosmetic uses of ORDER BY are fine, obviously. They help us look at a set (which in turn helps us code, as Art has rightly said).

I am not defending my #3 above (which puts ORDER BY and FIRST to non-cosmetic work) on grounds of beauty. It's ugly but useful and (in 8) the alternatives are not as good.

In future we will have a much better, more beautiful tool.

tjhb
10,094 post(s)
#23-Apr-16 04:37

I just noticed an interesting inefficiency in Art's approach here.

Take the first subquery level.

SELECT a.name, SUM(a.totpop) AS sumpop, UnionAll(a.[Geom (I)]) AS g

...

GROUP BY a.name, b.dist

In English, this is roughly: for each store, for each successive accumulation of census areas (moving outwards from the store), give the total population, and combine all the census areas out to here.

Now take the outermost level, which draws on that.

SELECT ConvexHull(UnionAll(g)) AS g, Max(sumpop) AS sumpop, name

...

WHERE sumpop < 5000

GROUP BY name

In English again: for each store, find the accumulation of census areas having the largest population up to a limit of 5000; then give me that largest population, and combine all the census areas out to here (and make a convex hull).

So we do that "combine all the census areas out to here" twice, for whichever accumulation of census areas wins the bid for each store. That's potentially an expensive geometric operation.

Is the previous UnionAll() cached by the engine? Maybe, if it detects that it has already combined the same set of geometry at the previous level. It could potentially do that, but it seems a lot to ask.

Add to that that UnionAll() is performed for all the successive accumulations for each store, including all the eventual losers--even though area is not a criterion for success.

It would be better to use ORDER BY and FIRST, so that the engine can rank by population (with a limit or target), then filter and pick the top result, then perform the UnionAll() just once for that result (and no others).

My sequence of queries above does UnionAll() once, in query 4, after we know which accumulated census area has won the ballot for each store.

Art's query is faster on a small dataset (it is doing less, even if it repeats some steps), but I think is unlikely to scale as well to a larger dataset.

tjhb
10,094 post(s)
#23-Apr-16 06:57

I didn't get the English quite right for the outermost level. The bit in italics there should read

combine all the successively combined census areas out to here

What does "successively combined" mean? Let's say for a given store S there is a sequence of census areas, proceeding outwards by distance from S--areas A, B, C, D and so on up to the most distant area. Let's assume that area C wins, because the cumulative census population out to C is just under 5000.

What has happened at the first subquery level is: for A, it is unioned with itself (identity); for B, it is unioned with A; C is unioned with A and B; D... and so on up to the furthest census areas. (Why? Exactly--none of these union operations are useful, and there could be a very large number.)

Now at the outer level, we pick a winner, C, and we union again, all of the previous unions up to and including the union for C. So A, plus A plus B, plus A plus B plus C. Again that's mostly wasted effort.

What we should do instead is wait until we know the winner, then just union once.
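
As a sketch of that shape in the PostGIS dialect (using a window function, so not something 8 can run; table and column names as in Art's PostGIS query): build the running population per store, keep census units until the target is crossed, then collect and hull each store's set exactly once.

SELECT name,
    ST_ConvexHull(ST_Collect(geometry)) AS geometry,   -- one collect/hull per store
    MAX(cum_pop) AS sumpop
FROM (
    SELECT s.name, c.geometry, c.totpop,
        SUM(c.totpop) OVER (PARTITION BY s.name
            ORDER BY ST_Distance(c.geometry, s.geometry)) AS cum_pop
    FROM stores AS s
    CROSS JOIN censusunit AS c
) AS ranked
WHERE cum_pop - totpop < 5000   -- keep rows until the threshold is crossed
GROUP BY name;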

tpg-land
146 post(s)
#06-May-16 20:50

This support forum is so informative! Many thanks to you experts for your transformation of our questions/ideas into working queries!

I had to give this query a run since, as Art said, it could have many applications.

I found a fault that could seriously impact one's study results if using it to create boundaries. (I cannot follow the flow in SQL, and only barely in tjhb's 'English' version.)

The query, through its result set, appears to look to the closest polygons for its 'sumpop' and then combines the ones that add up to just under the threshold.

The problem with this is identifiable in the original example by just changing the 'threshold' to 15,000 instead of the original 5,000.

What happens is evident in the attached map with the threshold changed. You could get result polygons that appear to ignore where the population really is (see 'Max Liberia' - blue).

A solution to this would be to take centroids of the population polygons and then 'convexhull' around the centroid group. *I tried making the polygons' centroids and running the query as is; it didn't work.

I'll bet the correction is a few keystrokes...
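
Guessing at those keystrokes: perhaps an ST_Centroid() inside the ST_Collect() of Art's PostGIS inner level (or a Centroid() inside the UnionAll() in the Manifold version). A tiny self-contained PostGIS check, with made-up polygons, of how the two hulls differ:

WITH polys(geom) AS (
    VALUES (ST_GeomFromText('POLYGON((0 0, 4 0, 4 1, 0 1, 0 0))')),
           (ST_GeomFromText('POLYGON((0 2, 1 2, 1 3, 0 3, 0 2))')),
           (ST_GeomFromText('POLYGON((3 2, 4 2, 4 3, 3 3, 3 2))'))
)
SELECT ST_AsText(ST_ConvexHull(ST_Collect(geom)))              AS hull_of_polygons,
       ST_AsText(ST_ConvexHull(ST_Collect(ST_Centroid(geom)))) AS hull_of_centroids
FROM polys;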

Thanks again!

Scott

Attachments:
Threshold_question.map
