Do you have an example of a specific case where current settings have proved inefficient, or have seemed to hang the system?
In a world where people complain software uses only 5% of their processors, in a way it's a nice problem to have. :-)
I know what you mean by the above, but all the same, going forward it's important to use precise nomenclature to identify where one would like to see improvement. If the system is extremely busy doing something, and tools like Task Manager report that it is busy doing something, then the system is not hung even though the GUI may not be responding to mouse motions and keystrokes... it's very busy doing something.
Given the interplay of Windows, 9, and other software, it can be challenging to identify what causes an issue so you can do something about it. My favorite example is how, when using image servers, you can run into an unresponsive GUI while waiting for a Google layer to bring in tiles. If you don't use your personal API key, Google seems to "throttle" responses more and more frequently. I see that not just in 9: after intensive sessions using Google it shows up in a browser (Edge) as well. Sorting out exactly where in the stack the GUI gets blocked as a result of Google network unresponsiveness is not a simple matter (all the same, it will get sorted).
The parallelism you get with 9 has opened up all sorts of new worlds of experience. The general complaint in the past was that only 5% of a system's resources were being used, so a main focus in 9 was to make full use of your system. It turns out that when you do that, you discover there are many parts of your system (could be Windows, could be drivers, could be other software) that are designed with an unconscious expectation of slack: an assumption that whatever else is running runs sufficiently ineptly that there will be plenty of leftover resources. It's a new thing that software is running which, when it asks to use whatever resources are available, actually uses all that are made available.
It's not just CPU / RAM / Windows resources either. Manifold discovered very early on that if you manage to actually use 100% of a GPU, keeping all the cores fed with data so the GPU spends all of its time working, the hardware of many GPU cards turns out not to have been designed for 100% duty cycles. Some GPU cards were designed with cooling that unconsciously assumed the GPU would mostly not be running at full utilization.
In most applications it's very rare that the GPU gets used anywhere near full capacity, so if the cooling is designed for only a 50% duty cycle or less you can overheat, and even "melt," the GPU when you run it at 100%. (I write "melt" in quotes because the chip itself does not actually melt. I think it is the very fine wires connecting the pads on the chip to the package that melt, or the solder that bonds the wires to the pads...) That's no longer an issue, since GPU cards these days are designed with better cooling, the hardware vendors having gained experience as significantly parallel applications have emerged.
I believe that as a practical matter the THREADS command will limit the number of CPU cores used (could be wrong). But as more experience accumulates at this, if finer control than THREADS is needed to limit the resources 9 grabs, then facilities to do that will be introduced.
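For what it's worth, here is a sketch of how I'd use THREADS in a 9 query. The table and field names are hypothetical, and I'm going from memory on the syntax details (including the SystemCpuCount() function), so check the documentation before relying on this:

```sql
-- Hypothetical sketch: restrict this query to 2 worker threads
-- instead of letting it take every available core.
SELECT [mfd_id], [Population]
FROM [Cities]
THREADS 2;

-- The default is the full core count, which (if I recall correctly)
-- can be written explicitly using the SystemCpuCount() function:
SELECT [mfd_id], [Population]
FROM [Cities]
THREADS SystemCpuCount();
```

The idea is that dialing the thread count down leaves slack for Windows and other software, which may be exactly what you want on a machine that also has to stay responsive while a big query runs.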