"How come?"
Without thinking about it deeply, my guess is that expanding the "fuzziness" of the precision to four meters makes the solution indeterminate, so that just one of several equally valid answers is picked at random. After all, a tolerance of four meters is bigger than the features that separate "inside" from "outside" in the complicated object used as the overlay.
Suppose you have three doorways that are each one meter wide, with one meter of wall between them. Suppose you stand somewhere in front of the three doorways, and someone asks, "If we know the layout of the doorways give or take four meters, which doorway are you in front of?" Well, give or take four meters you can't tell what is a doorway and what is a wall between doorways, so you could be in front of any of the three or none of the three. Pick at random and you could get different results with exactly the same data.
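Here's the analogy in runnable form, as a minimal Python sketch. The doorway layout and the "tolerance as positional uncertainty" model are my illustrative assumptions, not a claim about Manifold's actual algorithm:

import random

# Three 1 m doorways separated by 1 m of wall: doorway i spans (lo, hi).
doorways = [(0.0, 1.0), (2.0, 3.0), (4.0, 5.0)]

def candidate_doorways(x, tolerance):
    # Every doorway consistent with standing at x, give or take `tolerance` meters.
    return [i for i, (lo, hi) in enumerate(doorways)
            if lo - tolerance <= x <= hi + tolerance]

x = 2.5  # truly in front of the middle doorway

print(candidate_doorways(x, 0.1))  # [1] -- tight tolerance, one determinate answer
picks = candidate_doorways(x, 4.0)
print(picks)                       # [0, 1, 2] -- all three doorways are consistent
print(random.choice(picks))        # can differ run to run with exactly the same data

With a tolerance of 0.1 the answer is forced; with a tolerance of 4, wider than any doorway or wall, every pick is equally defensible.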
"would return equal results"
I'd expect it to return equal results if a) there were no fuzziness allowing more than one result to be picked at random, and b) the data were the same.
But even setting aside the fuzziness, the data aren't the same: the object you use as an overlay is substantially different in the two cases.
That's why I asked you questions 2 and 3 in my prior post: besides the obvious visual differences, I'm curious whether the process you used might also have made the topology different.
You can see that using very different objects for the overlay causes a difference in the results. Take your Wbn_dp_merged2 drawing layer from Map2 and use it in Map, rewriting the query for that map to use that layer (a single "2" character added to the VALUE line):
VALUE @overlay TABLE = [Wbn_dp_merged2];
Run that query using the same overlay object in both cases and you'll see the results are identical when using a tolerance of 4 in both maps. [I've done a normalize and a topology: clean generalize on all the drawings involved, just in case there was some pathology left over from how they were created.]
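If it helps to see that logic in miniature, here's a Python sketch of the check, using shapely as a stand-in for the query. The geometry, the sample points, and the "within tolerance" test modeled as a distance comparison are all illustrative assumptions on my part:

from shapely.geometry import Point, box

overlay = box(0.0, 0.0, 10.0, 10.0)  # stand-in for the Wbn_dp_merged2 overlay
points = [Point(1, 1), Point(5, 5), Point(12, 3)]

def run_query(overlay_geom, tolerance):
    # Model "hits the overlay within tolerance" as distance <= tolerance.
    return [overlay_geom.distance(p) <= tolerance for p in points]

# Same overlay object, same tolerance of 4, both "maps": identical results.
print(run_query(overlay, 4.0) == run_query(overlay, 4.0))  # True

# A substantially different overlay object: different results.
print(run_query(box(20.0, 20.0, 30.0, 30.0), 4.0) == run_query(overlay, 4.0))  # False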
It's interesting that the results are the same when the same overlay is used. Exactly why the results are what they are, I suspect, comes down to a fuzziness issue: the tolerance is larger than some of the defining features of the overlay, resulting in topological pathologies or unknowable shape. I don't know whether part of what the auto setting does is guard against fuzziness that makes a solution indeterminate in cases of deep pathology.
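To make that suspicion concrete, here is a sketch using shapely's precision grid as a rough stand-in for a snapping tolerance. This is an analogy only; I'm not claiming this is how the tolerance is actually implemented:

import shapely  # shapely 2.x

# A 1 m wide, 3 m tall "doorway" polygon.
doorway = shapely.box(4.5, 0.0, 5.5, 3.0)

# Snapped to a 0.1 m grid, the shape survives intact:
print(shapely.set_precision(doorway, 0.1).area)      # 3.0

# Snapped to a 4 m grid, coarser than the doorway itself, the polygon
# collapses and the collapsed remains are removed: the shape has become unknowable.
print(shapely.set_precision(doorway, 4.0).is_empty)  # True

Once the tolerance exceeds the size of a defining feature, that feature simply isn't representable anymore, which is exactly the kind of pathology that could make an answer indeterminate.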
But it's a very interesting test case, and I bet it would be useful for testing how algorithms work in extreme cases, whether they break down or not when set up with parameters that cause unknowable shape or other pathology. I suspect that just trying it on Arc would crash it. :-)