I have a large number of audio files that I am running through a processing algorithm to extract certain bits of data from each one (e.g. the average volume of the entire clip). I have a number of build scripts that previously pulled the input data from a Samba network share, to which I had created a network drive mapping via net use (i.e. M: ==> \\server\share0).
Now that I have a massive new 1 TB SSD, I can store the files locally and process them very quickly. To avoid a massive rewrite of my processing scripts, I removed my network drive mapping and re-created it using the localhost host name (i.e. M: ==> \\localhost\mydata).
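The remapping described above amounts to something like the following (drive letter and share names taken from the question; exact commands are a sketch, not a transcript of what I ran):

```bat
:: Remove the old mapping to the remote Samba share
net use M: /delete

:: Re-create M: pointing at the same data, now shared from this machine
net use M: \\localhost\mydata
```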
When I use such a mapping, do I risk incurring significant overhead, for example from the data having to travel through part of Windows' network stack? Or does the OS take shortcuts so that it equates more or less to direct disk access (i.e. does the machine know it's just pulling files from its own hard drive)? Increased latency isn't much of a concern for me, but maximum sustained average throughput is critical.
I ask because I'm deciding whether I should modify all of my processing scripts to use a different style of path.
Extra question: does the same apply to Linux hosts? Are they smart enough to know they are pulling from a local disk?
Comments:

Use subst to assign a drive letter to the folder. The overhead on that is negligible, and the network stack is not involved. – Hewlett

It would be worth testing subst or net use, to see if one is faster than the other. – Noway
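The subst approach suggested in the comments maps a drive letter directly onto a local folder, bypassing the network redirector entirely. A sketch, assuming the files live in a local folder such as D:\mydata (that path is an assumption, not from the question):

```bat
:: Map M: directly onto a local folder; no network stack involved
subst M: D:\mydata

:: Remove the mapping when no longer needed
subst M: /D
```

Note that subst mappings do not persist across reboots by default, unlike a net use mapping created with /persistent:yes.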