Memory and CPU usage parameters in Manifold 9
StanNWT

73 post(s)
#27-Jul-18 23:35

I'm wondering if there's support for a way to set a maximum amount of memory, in bytes or as a percentage, as well as a maximum number of CPU logical processors (or a percentage) that Manifold could use, to prevent it from interfering with other things going on or hanging the system. I know we all love that Manifold 9 uses everything you can throw at it, but might there be some value in having the option to set limits?

tjhb

8,167 post(s)
#27-Jul-18 23:56

Currently 9 does not use more than 4GB physical RAM for its cache. (You can force it to load more than that into physical memory for raster data, as Adam has explained, but you really have to try.) The reason why this is enough is that the Radian streaming data model is absolute rocket science.

An option to restrict the number of CPU cores in use sounds good, but perhaps it could not work. Manifold launches threads, and we can control the number of threads used per operation; but it does not control the number of logical or physical cores deployed as far as I know--cores are allocated by Windows. I think a limit to the number of concurrent threads would be possible, but that would not limit cores in use. Maybe I'm wrong--maybe Manifold could restrict processor affinity, if it mattered that much.

Do you have an example of a specific case where current settings have proved inefficient, or have seemed to hang the system?

StanNWT

73 post(s)
#28-Jul-18 00:34

The example I would use is when I was decomposing the river network layer for Canada to coordinates. It used 99% of all 96GB of RAM on my workstation. Yes, Manifold said it only had 4GB allocated, but the RAM usage was a result of what I was processing in Manifold. I don't recall whether it used all CPU resources in that instance. I'm just raising the idea since we all have worked, or currently work, in an organizational IT environment where the typical IT bloatware runs in the background and needs resources. So if there were a way to make sure Manifold couldn't lock up the machine through its beautiful ability to use all the resources you can throw at it, that might be useful. I'm sure we all have browsers running, Office documents open, etc., and we do other things while our GIS runs a script or transform. Perhaps I'm just used to waiting hours to do things in a typical GIS.

The large DEM that I uploaded for support to use as an example of a support case is a 1 arc second DEM with bounds of 180W, 89N, 89E, 57N. It's ~30m resolution and a really big DEM. I contoured it in 7.78 hours, but it used all resources available to do it (8 logical processors, 96GB RAM, Quadro M4000, Quadro P4000).

My new machine is a dual hexacore (24 logical processors), dual Quadro P4000, Samsung 960 Pro 2TB OS drive.

My new machine has a lot more oomph to throw at Manifold, which is awesome. Just wondering if there's a way to make sure something is left for the other crap we all have to have running on our respective OS.

tjhb

8,167 post(s)
#28-Jul-18 00:46

Umm spellcheck. Do you re-read at all?

Windows controls most of what you are discussing.

Consider, in your second sentence, what "it" means. (With due acknowledgement to Bill Clinton.) Was that Manifold, Windows, or something else, I wonder.

Many of us are able to design, specify and control our own systems, so can avoid corporate stupidity. Manifold should design for the correct case, not the stupid ones.

If you have IT bloat, that's bad, but it's not something Manifold should respond to. Pick up the phone and object. If IT gets in the way of your work/productivity, then obviously that matters.

tjhb

8,167 post(s)
#28-Jul-18 01:07

My new machine has a lot more oomph to throw at Manifold, which is awesome.

Your new machine is fantastic, and much more expensive than you need to run Manifold 9 efficiently--provided you are not crippled by IT colleagues.

Just wondering if there's a way to make sure something is left for the other crap we all have to have running on our respective OS.

There is a way: get them to remove the crap. Speak up. If you don't, you are to blame.

StanNWT

73 post(s)
#30-Jul-18 17:57

So Manifold users are to blame when corporate IT policy, and the Windows group policies set up even for admin users on the corporate network, prevent us from getting rid of corporate IT security and monitoring software... interesting opinion. Does this extend to national governments, where corporate IT policies about software installation, and the standardized corporate image installed as a base on every computer delivered to end users, apply even to those with admin rights?

On a home computer, or in a small business where the end user has a good deal of control, your observation about blame has merit; not, however, in large organizations where the end user is a peon (as viewed by corporate IT).

I have spoken up about many things over my time in the organization, but IT is generally inflexible on many issues. Some are not their decision; others in the organization give them their marching orders on security policy.

tjhb

8,167 post(s)
#30-Jul-18 22:37

If you loudly and insistently object, every single time you don't have enough resources to do your work efficiently, then no, and good on you for furthering the cause.

Otherwise yes, your own fault.

tjhb

8,167 post(s)
#31-Jul-18 00:18

And yes it absolutely translates to large corporate and government environments. It matters even more then. Speak up every time.

adamw

8,037 post(s)
#01-Aug-18 17:04

You can limit the number of CPUs we use right now using Windows. For Windows 10: launch Task Manager, go to the Details tab, locate MANIFOLD.EXE, right-click it, select 'Set affinity' and check only a subset of CPUs. We could add an option to limit this from our side, but such an option would perhaps be counter-productive, because you typically want to limit not just any Manifold session but a specific one: e.g., set that batch import to use a single CPU, set that contouring run to use a different CPU, etc.
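The Task Manager steps above can also be done programmatically. As a minimal sketch of the same idea of processor affinity (POSIX-only: `os.sched_setaffinity` exists on Linux; on Windows itself you would use Task Manager as described, `start /affinity`, or the third-party psutil package):

```python
import os

# Restrict the current process to logical CPU 0 only.
# This is the POSIX analogue of Task Manager's "Set affinity"
# on MANIFOLD.EXE; the guard skips platforms without the call.
if hasattr(os, "sched_setaffinity"):
    os.sched_setaffinity(0, {0})          # 0 = current process
    print(sorted(os.sched_getaffinity(0)))  # [0]
```

The OS scheduler then keeps every thread the process launches on the permitted CPUs, which is why affinity caps core usage even when the application itself spawns as many threads as it likes.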

It's tougher with RAM. Typical requests are to let the user increase the amount of RAM we use for cache, but in your case it seems you want to limit how much RAM gets tied up overall, and that's not just our cache, that's our cache plus the Windows cache for the offline storage. We don't have an easy solution for that right now.
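Lacking a built-in cap, the operating system can impose one from outside. A minimal sketch of the concept, assuming POSIX (`run_with_memory_cap` is a hypothetical helper; `preexec_fn` does not exist on Windows, where the equivalent would be a Job Object with a process memory limit):

```python
import resource
import subprocess
import sys

def run_with_memory_cap(cmd, max_bytes):
    """Run cmd in a child process whose address space is hard-capped.

    POSIX-only sketch; a Windows version would need the win32
    Job Object API (JOB_OBJECT_LIMIT_PROCESS_MEMORY) instead.
    """
    def set_limit():
        resource.setrlimit(resource.RLIMIT_AS, (max_bytes, max_bytes))
    return subprocess.run(cmd, preexec_fn=set_limit).returncode

# A child that tries to allocate ~1 GB fails under a 512 MB cap:
rc = run_with_memory_cap(
    [sys.executable, "-c", "x = bytearray(1 << 30)"],
    512 * 1024 * 1024,
)
print(rc)  # nonzero exit: the allocation raised MemoryError
```

Note this only caps memory the capped process allocates itself; as the post says, pages the OS caches on the application's behalf are a separate pool the application cannot easily bound.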

The issue is this: how to limit a background session of Manifold from bogging down the PC, right? We might have some other ideas for that.

Dimitri


4,938 post(s)
#28-Jul-18 06:34

Do you have an example of a specific case where current settings have proved inefficient, or have seemed to hang the system?

In a world where people complain software uses only 5% of their processors, in a way it's a nice problem to have. :-)

I know what you mean by the above, but all the same, going forward it's important to use precise nomenclature to identify where one would like to see improvement. If the system is extremely busy doing something, and tools like Task Manager report that it is busy, then although the GUI may not respond to mouse motions and keyboard clicks, the system is not hung... it's very busy doing something.

Given the interplay of Windows, 9, and other software, it can be challenging to identify what causes an issue so you can do something about it. My favorite example is how, when using image servers, you can run into an unresponsive GUI while waiting for a Google layer to bring in tiles. If you don't use your personal API key, Google seems to "throttle" responses more and more frequently. I see that not just in 9: after intensive sessions using Google it shows up in a browser (Edge) as well. Sorting out exactly where in the stack GUI responsiveness is lost as a result of Google network unresponsiveness is not a simple matter (all the same, it will get sorted).

The parallelism you get with 9 has opened up all sorts of new worlds of experience. The general rule in the past was that people were annoyed that only 5% of their system's resources were being used, so the main focus in 9 was to make full use of your system. It turns out that when you do that, you discover there are many parts of your system (could be Windows, could be drivers, could be other software) that are designed with an unconscious expectation of slack: an assumption that whatever else is running runs sufficiently ineptly that there will be plenty of leftover resources. It's a new thing to have software which, when it asks to use whatever resources are available, actually uses all that are made available.

It's not just CPU / RAM / Windows resources either. Manifold discovered very early on that if you manage to actually use 100% of a GPU, keeping all the cores fed with data so the GPU spends all of its time working, the hardware of many GPU cards is not designed for 100% duty cycles. It turns out that some GPU cards were designed with cooling that unconsciously assumed the GPU would, for the most part, not be running at full utilization.

It's very rare in most applications that the GPU gets used anywhere near full capacity, so if the cooling is designed for only a 50% duty cycle or less, you can overheat and even "melt" the GPU when you run it at 100%. (I write "melt" in quotes because the chip itself does not actually melt. I think it is the very fine wires connecting the pads on the chip to the package that melt, or the solder that bonds the wires to the pads...) That's no longer an issue, since GPU cards these days are designed with better cooling, the hardware vendors also having gained experience as significantly parallel applications have emerged.

I believe that, as a practical matter, the THREADS command will limit the number of CPU cores used (I could be wrong). But as there is more and more experience with this, if better control than THREADS is required to limit the resources 9 grabs, then facilities to do that will be introduced.
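For reference, the THREADS directive mentioned above goes at the end of a Manifold 9 query. A sketch, with hypothetical table and field names (the directive caps the worker threads for that query only, which fits adamw's point about limiting a specific operation rather than the whole session):

```sql
-- Cap this query at 2 worker threads, leaving the rest of the
-- machine's logical processors free for other work.
SELECT [mfd_id], [Geom]
  FROM [Rivers]
  THREADS 2;
```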

StanNWT

73 post(s)
#30-Jul-18 18:10

Thanks Dimitri,

I appreciate the feedback. It is a nice problem to have. Many software vendors for graphic design, raw photo editing, video editing and animation have taken, or are beginning to take, advantage of CPU and GPGPU parallelism, allowing the user to be more productive and putting totally new tools at the user's disposal. We all know this. It's only now reaching the GIS industry, at least on the desktop side.

My machine in general matches what Esri states as a recommended configuration for ArcGIS Pro; their optimal configuration is a dual deca-core setup. At the same configuration, Manifold runs circles around ArcGIS on equivalent tasks, thanks to the massive development effort from the mid-2000s until now.

On my new machine I will be comparing how long it takes to process large data sets in both environments, and comparing each program's output against the other's:

  • ArcGIS output in Manifold
  • Manifold output in ArcGIS
This is what I was doing with the large DEMs I sent to support back in June. But now that I have a much more powerful machine, Manifold should run circles around my old times, if only because of the Samsung 960 Pro as my OS / pagefile / temp drive.

Manifold User Community Use Agreement Copyright (C) 2007-2017 Manifold Software Limited. All rights reserved.