I'm hoping we can have one instance on a main machine hold the licence, with the other machines not needing a full licence of their own
Current thinking is that the free Viewer will be on worker nodes. That would make it easy to install on however many systems you wanted, either on your local network or in the cloud. Cloud providers like that too, since no SN/activation is required.
Have any Manifold engineers got access to a dual EPYC 7742 or 7H12 server to try to stretch Manifold 9 a bit?
Manifold has way more hardware than necessary. That's good, but it's only part of the picture since optimizing use of hundreds of CPU cores depends on what is being done.
So far, machines with very many cores in a box (dual-CPU motherboards running two EPYC 7742 CPUs with 64 cores each = 128 cores / 256 threads) have mostly been used by what AMD calls "Hollywood creators," running rendering/special effects. The parallelism in that business is very straightforward, basically a lot of the same thing done over and over for more and more frames. It's easy to scale from a dozen cores to many, so AMD has been very successful just dropping in new CPUs with bigger and bigger core counts.
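To illustrate that "same thing over and over" style of parallelism, here is a minimal sketch (not Manifold's code, just a generic illustration using Python's standard library): a hypothetical per-tile raster computation where every tile is independent, so the work maps cleanly onto however many cores you have.

```python
# Illustrative sketch only: "embarrassingly parallel" raster-style work,
# where the same operation is applied independently to each tile.
from concurrent.futures import ProcessPoolExecutor

def tile_mean(tile):
    """Hypothetical per-tile computation: mean pixel value of one tile."""
    flat = [v for row in tile for v in row]
    return sum(flat) / len(flat)

def parallel_tile_means(tiles, workers=4):
    # Each tile is independent, so scaling from 4 workers to 128 is
    # just a matter of changing max_workers.
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(tile_mean, tiles))

if __name__ == "__main__":
    tiles = [[[1, 2], [3, 4]], [[10, 20], [30, 40]]]
    print(parallel_tile_means(tiles))  # [2.5, 25.0]
```

The point of the sketch is the contrast with GIS workloads described below: when every unit of work looks the same, adding cores is easy; when the mix of work changes from moment to moment, scheduling gets much harder.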
GIS is different in that manycore CPU parallelism can involve significantly different approaches for the very different types of parallel processing that are done with vectors, rasters, databases, SQL, data access, computation, etc. Part of the challenge is not just getting parallelism done right for one thing, like a raster calculation, but for many different things, and also for a mix of those many different things that can change from moment to moment. 9 already does that for typical core counts, but it would be a mistake to just go forward blindly without tuning for the significantly larger core counts now coming into use.
Simulations and other lab work can go a long way, but you can't beat real-life experience in such totally new environments as distributing a wide variety of GIS tasks over hundreds of CPU cores. Good feedback from the community on real-life applications will help the system wring all possible performance out of lots of cores.
Those people who are running 32-, 64-, or 128-core machines, or (when distributed computing comes out) who are launching tasks in networks with hundreds of worker CPU cores, should report their experiences with different tasks and maybe volunteer to try specialized builds that can probe different optimizations. That will help tune lab results to a wide range of real-life uses.
Just as 9 has already seen a steady stream of performance improvements and optimizations based on real-life scenarios reported by the community, I expect that to continue as core counts increase. It's already a pretty wild world out there, as technology has shifted gears from increases that amounted to no more than a couple more cores per year to sudden jumps of 24, 32, and 64 cores at a time. It's really a great time to be running parallel, to take best advantage of all that new power.