I'm performing a TFS Integration migration from tfs.visualstudio.com to an on-premises TFS 2012 server. I'm running into a problem with a particular changeset that contains multiple binary files in excess of 1 MB, a few of which are 15-16 MB. [I'm working remotely, over a WAN, against the on-premises TFS server.]
From the TFSI logs, I'm seeing:

    Microsoft.TeamFoundation.VersionControl.Client.VersionControlException: C:\TfsIPData\42\******\Foo.msi: The request was aborted: The request was canceled.
    ---> System.Net.WebException: The request was aborted: The request was canceled.
    ---> System.IO.IOException: Cannot close stream until all bytes are written.
Some Googling turned up others hitting similar errors, not necessarily with TFS Integration. I'm confident the same issue would arise if I were just checking in a comparable changeset normally. As I understand it, when uploading files (checking in), the client's default chunk size is 16 MB and the request timeout is 5 minutes.
My internet upload speed at this site is only 1 Mbit/s. (More upload bandwidth would mitigate the symptom, but it wouldn't solve the underlying problem.)
Using TCPView, I watched the connections from my client to the TFS server while the upload was in progress. I see 9 simultaneous connections, so my upload bandwidth is being shared among 9 file uploads. Sure enough, after about 5 minutes the connections crap out before the uploads can finish transferring their bytes.
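To sanity-check that, here's a quick back-of-the-envelope script (assuming the 1 Mbit/s upload is split evenly across the 9 connections, and using the 16 MB chunk / 5-minute timeout figures from above, which are my understanding rather than documented values):

    # Back-of-the-envelope: can one connection push a 16 MB chunk in 5 minutes
    # when 1 Mbit/s of upload is shared evenly across 9 connections?
    UPLOAD_BITS_PER_SEC = 1_000_000        # 1 Mbit/s total upload at this site
    CONNECTIONS = 9                        # simultaneous uploads seen in TCPView
    CHUNK_BYTES = 16 * 1024 * 1024         # default chunk size, as I understand it
    TIMEOUT_SEC = 5 * 60                   # default request timeout, as I understand it

    per_conn = UPLOAD_BITS_PER_SEC / 8 / CONNECTIONS   # ~13.6 KiB/s per connection
    movable = per_conn * TIMEOUT_SEC                   # ~4 MiB before the timeout fires
    needed_min = CHUNK_BYTES / per_conn / 60           # ~20 minutes for one 16 MB chunk

    print(f"per-connection throughput: {per_conn / 1024:.1f} KiB/s")
    print(f"bytes movable within the timeout: {movable / 2**20:.1f} MiB")
    print(f"time to push one 16 MB chunk: {needed_min:.0f} minutes")

Each connection can move only about 4 MB before the 5-minute timeout fires, while a 16 MB chunk would need roughly 20 minutes at that rate, which lines up with what I'm seeing.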
My question is: how can I configure my TFS client to use fewer concurrent connections, smaller chunk sizes, and/or longer timeouts? Can this be done globally somewhere to cover VS, TF.EXE, and TFS Integration?
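To give an idea of the kind of thing I'm hoping exists: something like the standard .NET connectionManagement limit, dropped into devenv.exe.config, TF.exe.config, and the integration tool's .exe.config. The element below is standard System.Net configuration, but whether the TFS version-control client actually honors it for its uploads is exactly what I don't know:

    <!-- Hypothetical: standard System.Net setting to cap concurrent HTTP
         connections per host. Whether the TFS client respects this is an
         assumption on my part, not something I've confirmed. -->
    <configuration>
      <system.net>
        <connectionManagement>
          <add address="*" maxconnection="2" />
        </connectionManagement>
      </system.net>
    </configuration>

Even if that worked, it would only address the connection count, not the chunk size or the timeout, so pointers on those would be appreciated too.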