to better handle the impact on temp file size.
Well, the amount of space required for temp files depends on the amount of data you are working with, the options you choose, and the nifty features that Manifold provides. There's also a lot of interaction with Windows.
That Manifold makes liberal use of such storage options is a good thing, because your time is worth way more than the cost of absurdly inexpensive disk space. That is especially true if using a bit of extra disk space can save you from catastrophes like a power or hardware failure in the middle of saving a big project. Redoing the work for even one big project can cost you several times the price of an inexpensive big disk. Having plenty of storage space is really cheap insurance.
The general rule of thumb is that you should have three times the size of your project available as free space on disk: the size of the project itself, plus as much again for a temporarily cached version that gives some protection from hardware failures during processing, plus as much again as working space. That advice is usually phrased as "three times the size of the project in free TEMP space" because most folks don't keep track of what is temp space and what is not.
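If you want a quick way to check whether a given drive meets that rule of thumb, a minimal Python sketch along these lines works (the project path here is hypothetical):

```python
import shutil
from pathlib import Path

project = Path(r"D:\projects\big.map")  # hypothetical project file

project_size = project.stat().st_size          # size of the .map on disk
free = shutil.disk_usage(project.parent).free  # free bytes on that drive

# Rule of thumb: at least three times the project size free.
needed = 3 * project_size
if free < needed:
    print(f"Tight: {free / 1e9:.1f} GB free, want {needed / 1e9:.1f} GB")
else:
    print("Plenty of room for the project, its cached copy, and working space.")
```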
How you run your project can also affect the project size. For example, importing takes more room in a project than linking. But when you start merging to create a single raster out of many linked rasters, you still end up reading the data from all those linked rasters into project memory, be it cache in memory, cache on disk, temp files on disk, etc.
Other options can affect the total size of the project in play. For example, when you link a file, did you check the "Save cache" checkbox? See the Cache and Linked Data section of the Importing and Linking topic. If you checked that box, you commanded Manifold to use cache, which means larger project size in memory, which means more use of secondary storage (disk, temp files, page file, etc.) for larger projects. Cache is generally a very good thing, well worth the storage space involved.
To take another example, Manifold creates a .SAVEDATA file in a double-tap save regime to avoid catastrophic corruption of .map projects if a power failure happens right in the middle of a save. It's part of Manifold's hardening against common system failures, to make Manifold as reliable as possible. That means saves take slightly longer, and it also means greater use of temp files given the interaction with Windows, limited main memory, and so on. But it also means far greater reliability than you get with other software. When I leave an array of programs open on my Windows desktop in the evening and the next morning see that Windows has updated itself and rebooted (!$#!), the one application I never have to worry about is Manifold, even if I left several projects open.
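I don't claim to know Manifold's internals beyond what the documentation says, but the general shape of such a double-tap save, write the new data to a side file and only then swap it into place, looks roughly like this minimal Python sketch (the .savedata suffix here just echoes the idea, it's not a claim about the actual file format):

```python
import os

def double_tap_save(path: str, data: bytes) -> None:
    side = path + ".savedata"  # side file, written first; name is illustrative
    with open(side, "wb") as f:
        f.write(data)
        f.flush()
        os.fsync(f.fileno())   # force the bytes onto disk before the swap
    # Swap the side file into place atomically. A power failure before this
    # line leaves the old project untouched; after it, the new one is whole.
    os.replace(side, path)
```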
I give those examples to illustrate why, as a rule of thumb, it doesn't pay to second-guess what the system is doing to optimally utilize project files, working files like .MAPCACHE, and purely temporary files that may live in temp space or page files. There are very many moving parts involved in getting performance, in being able to work with larger data despite relatively small main memory, and in being hardened against disasters as much as possible.
If you find that space is tight, the good news is that a solution is easy and inexpensive: a 6 TB Hitachi hard disk is $106 and a Seagate 6 TB hard disk is $115. A 12 TB Hitachi is $194. Install a larger hard disk if you will be doing 500 GB projects. That will give you plenty of room for projects and for temp space. It also provides plenty of room for archival storage, so you can save intermediate versions of projects in case you discover an error in workflow and want to go back to an intermediate version without having to redo everything from the beginning.
Nor can you easily select all the images in the Project pane, even with the images filtered, because they are buried in the file tree.
Here's a quick way to drop many images (I'd do 50 or 100 at a time, using folders to keep them conveniently organized) from within many data sources into a map:
1. Read about how to select items in swaths, where you Ctrl-click one item and then Shift-Ctrl-click another and everything in between gets selected as well. The Layers Pane and Tables topic has a nice example.
2. Create the map into which the images will be dropped.
3. Choose File - Link and then in the Link dialog select 50 or 100 or so files and link them all at once into the folder for that batch.
4. Set the Project pane to display images only (the button to the right of the Filter box).
5. Starting at the lowest data source, click the + box to open it; starting from the bottom lets the system do the work of scrolling through many of them as they open up. Open all the data sources. It takes about a second per data source.
6. Ctrl-click the first image to select it, then scroll to the last data source and Shift-Ctrl-click the last image, selecting those two and everything in between.
7. Drag and drop all the selected items into the map. Done.
If you want to reduce rendering time, use the Layers pane to turn off all the image layers you've dropped into the map. If a layer is not visible, no time is spent rendering it, and a layer doesn't have to be visible to be used in the Merge dialog. I've done merges of about 100 GB of images and didn't find the rendering time objectionable, but it can add up when merging 100 GB of, say, LiDAR point clouds. Those layers I turn off.
I grant that the above process involves manual effort. But with facilities that let you select very many TIF files for linking at once, swath selection to pick many components to drop into a map, the Layers pane to turn them all on or off together, and so on, it goes quickly even when you are dealing with hundreds of them.
If you have to do this repetitively, automate the process with a script. SQL is not the right choice here: SQL is generally good where you have many records in one component, not where you have very many components to iterate through. For a repetitive process that manipulates hundreds of components, a script is the right tool.
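The component-iterating part belongs in whatever scripting language you prefer against Manifold's API, which I won't reproduce from memory here. The housekeeping half, though, splitting hundreds of TIFs into batches of 50 or 100 so each File - Link pass stays manageable, is a few lines of Python (the paths and folder names are hypothetical):

```python
import shutil
from pathlib import Path

SRC = Path(r"D:\imagery")  # hypothetical folder holding all the TIFs
BATCH_SIZE = 100           # 50 or 100 per batch, as above

tifs = sorted(SRC.glob("*.tif"))
for i, tif in enumerate(tifs):
    # batch_000, batch_001, ... each holding up to BATCH_SIZE files
    batch_dir = SRC / f"batch_{i // BATCH_SIZE:03d}"
    batch_dir.mkdir(exist_ok=True)
    shutil.move(str(tif), str(batch_dir / tif.name))
```

After that, each File - Link pass points at one batch folder, and each batch lands in its own project folder, which keeps the swath selection in step 6 easy.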