It is likely primarily a network issue, but Manifold is not handling it well. It is possible it has nothing to do with Windows 10, but the timing seems to match.
This raises a few very important technical issues that need to be kept clear when working with networks. So, with no intent other than technical clarity...
1. There are limits to what an application can "handle" in areas the operating system manages. Otherwise, the operating system would have failed at its most fundamental job of abstracting system resources so applications can run without leaking outside their sandboxes. In key ways, Manifold has to trust that Windows functions correctly. There is no "oh, let Manifold take over in cases where Windows gets it wrong" option (see the sketch just after this list).
2. Part of an operating system's job is handling system-level resources like networking resources. Windows cannot do that perfectly when those resources are uncontrollable because they sit entirely outside of Windows. Windows, in turn, has to trust that network resources will function correctly.
3. Neither the operating system nor an application can "handle" destructive events in hardware or third-party software they do not control. Unplug a physical Ethernet cable between two computers, one of which hosts the network drive being used by the other, and there is no way for Windows or the application to plug that cable back in. An unreliable link between you and your network data store is just another example of the same thing.
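To make point 1 concrete, here is a minimal sketch (Python, with purely hypothetical file paths) of the only kind of "handling" an application can really do when an OS-managed network resource fails: catch whatever error Windows hands back and tell the user. Nothing in application code can reach down and repair the network itself.

```python
import shutil

# Hypothetical paths, for illustration only; substitute your own.
SOURCE = r"C:\Temp\project.map"
NETWORK_COPY = r"\\fileserver\share\project.map"

def copy_to_network_share(source, destination):
    """Copy a file to a network share and report what happened.

    All the application can do is observe the error the OS surfaces
    (for example, "The specified network name is no longer available").
    It has no way to re-plug a cable or restart a remote file server.
    """
    try:
        shutil.copyfile(source, destination)
        return True
    except OSError as err:
        # err is whatever the OS chose to report; show it and move on.
        print("Copy failed, OS reported:", err)
        return False

if __name__ == "__main__":
    copy_to_network_share(SOURCE, NETWORK_COPY)
```

The specific call does not matter; the point is that the deepest thing the application can do is retry or report, because the failure happened in layers it never sees.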
When a sophisticated application uses data, it is like the brain being connected to the retina via the optic nerve. That connection is so tight and close that many neurologists consider the retina an extension of the brain. For sight to work as it does, the connection has to be very tight, and the brain has to be able to count on the optic nerve and retina to function.
A "destructive event" might be taking a knife to the optic nerve to cut it. No more sight. Trying to design a brain/retina system that can "handle" such a destructive event would not produce the fast and elegant vision system humans now enjoy. There are ways around that, like very many, compound eyes, but they do not have the features and benefits of the vision system we now have.
When you save your data on a networked device, it is as if you took the eyeballs out of your eye sockets, stretched out the optic nerves, and started playing ping-pong with them. No surprise if you get more "destructive events" that way.
There is not much Windows or any other operating system can do to guard against destructive events in network connections, or at least not much that will not dramatically reduce performance when lots of data is involved.
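For a sense of what that kind of guarding looks like, and what it costs, here is a hedged sketch of the usual approach: retry with exponential backoff. The write_block callable is hypothetical, a stand-in for whatever actually pushes bytes to the network store; the point is that every retry buys reliability by spending time.

```python
import time

def write_with_retries(write_block, data, attempts=5, base_delay=0.5):
    """Retry a write over an unreliable link, backing off between tries.

    write_block is any callable that raises OSError on failure (a
    hypothetical stand-in for the real network write). Reliability is
    bought with time: five attempts with exponential backoff can stall
    a single write for several seconds.
    """
    for attempt in range(attempts):
        try:
            write_block(data)
            return
        except OSError:
            if attempt == attempts - 1:
                raise  # out of patience; surface the failure to the caller
            time.sleep(base_delay * (2 ** attempt))
```

Multiply those stalls by millions of writes and the performance cost of guarding becomes obvious, which is why no operating system does this aggressively for you by default.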
A further effect is that what can be done to increase network reliability is very expensive compared to the basically consumer-level, non-fault-tolerant systems that everyone uses. Are you using weapons-grade, embedded control networks like those used to connect the many fly-by-wire computers within a modern fighter aircraft? Probably not.
You're probably connecting over garden-variety Ethernet, TCP/IP local area and wide area networks, wiring up systems that are running the usual hodge-podge of Windows and Linux, all operating on motherboards that use network chips built in some Chinese fab, with drivers written by the lowest-bid group of third-world programmers. In most cases, I/O chip vendors have never met the people who do that work for them, and often they don't even know who they are.
At the same time the motherboard runs a BIOS and disk drivers that, under massive pressure to be politically correct about power savings, are wired up to automatically shut down both disk access and network function, of course at whatever is considered a good time by some other anonymous group of programmers, here today and gone tomorrow in whatever Eastern European or Asian country they live in. Add to that all of the fun you get with routers, where router hardware vendors toss together outsourced hardware with whatever layers their one programmer who reads quasi-English can find on the web to get the router functional, without having the slightest idea who wrote those layers or how they work. Stir and mix well with endless updates and revisions.
That all that works together at all, given zillions of different motherboards, network chips, bridge chipsets, I/O chipsets, disk controller chipsets, drivers, operating systems, network layers, routers and switches and all of their layers is amazing. But expecting it to work perfectly for every byte over trillions of bytes all the time, well, that's expecting perfection in a system where perfection does not exist.
The bottom line is that if you want to run complex applications with storage outsourced to a network resource you usually end up trading off cost, performance and reliability. To get higher reliability you often have to spend more and/or accept lower performance. No surprises there.
I don't recommend trusting imperfect networks, but if you must, at least invest in super-high-quality components every step of the way. That can get very expensive, because you end up spending more money everywhere. Highly fault-tolerant servers for storage cost significantly more than the ordinary gear most people use, and highly reliable switches, routers and network wiring also tend to be much more expensive. The top end of Cisco's product line, for example, is way more expensive than the cheapest D-Link gear you can find on Amazon. And when you connect over the web, such as through a VPN, you have no control over the quality of many unknown intermediaries.
There are ways of throwing money at that problem as well, to create fault-tolerant pipes even through public networks, but such methods often carry performance-reducing side effects, on top of their high cost, that make them unattractive. For a fraction of the cost you can get a terabyte SSD and run faster anyway.