I noticed when breaking up the datasets that if a .map file (I created a separate project for each chunk) went over 1 GB, things became interminably slow. Under 1 GB, things were fairly speedy.
There's nothing special about 1 GB being a magic boundary for Manifold, but it could be that, given how your system is set up, going above that results in disk thrashing. You don't say what you're doing for disk, how that's organized in terms of page files and such, how much RAM you have, what else is running (40 tabs open in Google Chrome?), and so on.
I ended up breaking the dataset into 40 separate files and running the operations on each. That proved fairly fast.
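For what it's worth, the split-then-process workaround described above can be sketched outside of Manifold as a small Python script. This is only an illustration of the chunking pattern, not anything Manifold-specific: the CSV format, file names, and `split_csv` helper are all hypothetical, and in practice each chunk would be imported into its own .map project.

```python
import csv
import os

def split_csv(path, n_chunks, out_dir):
    """Split a CSV into roughly equal chunks, repeating the header in each.

    Hypothetical helper: stands in for whatever tool actually splits
    the source data before importing each piece into its own project.
    """
    with open(path, newline="") as f:
        reader = csv.reader(f)
        header = next(reader)
        rows = list(reader)

    # Ceiling division so the last chunk absorbs any remainder.
    chunk_size = -(-len(rows) // n_chunks)
    paths = []
    for i in range(0, len(rows), chunk_size):
        out_path = os.path.join(out_dir, f"chunk_{i // chunk_size:02d}.csv")
        with open(out_path, "w", newline="") as out:
            writer = csv.writer(out)
            writer.writerow(header)           # each chunk stays self-describing
            writer.writerows(rows[i:i + chunk_size])
        paths.append(out_path)
    return paths
```

Each chunk file can then be processed independently, which is essentially what running the operations on 40 separate projects accomplishes.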
That seems incredibly unnecessary; there absolutely has to be a way in 9 to avoid doing that. My strong impression is that something basic has been overlooked, that the query being used is ordering the system to do something very inefficient, or that some other detail is at fault. Weird performance issues are often all about the details of what's being done. To help out, we need to know all the details about your data and what you're doing.
Could you post your starting data, a description of what you want to accomplish, and the query you are currently using? That will eliminate guesswork and allow everyone to provide specific advice on how to do what you want, fast.