What is the fastest way to send a large binary file from one PC to another over the Internet?

I need to send large binary data (2 GB-10 GB) from one PC (client) to another PC (server) over the Internet. First I tried a WCF service hosted in IIS using the wsHttpBinding binding with message security, but it took a lot of time (a few days), which is inappropriate for me. Now I am thinking about writing client and server applications using sockets. Would that be faster?

What is the best way to do it?

Thanks

Downbow answered 14/2, 2011 at 9:51 Comment(3)
Too bad you mention it has to be over the Internet. Writing the data to a memory card and sending it by pigeon could be faster. (dailymail.co.uk/news/worldnews/article-1212333/…).Dell
@Dell Nothing is stopping you from implementing CPIP blug.linux.no/rfc1149Postorbital
On a more serious note. There is nothing that makes FTP any faster than WCF. You simply need to configure it correctly. It really just sounds like an issue with the size of your pipe, in which case your problem is the "over the Internet" part.Postorbital

Plain old FTP would, in my opinion, be suitable in this case. By using it you get the chance to resume an interrupted transfer without redoing the job from the start. You need to take into account the possibility that such a massive download gets interrupted for some reason.
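
For example, here is a minimal C# sketch of resuming a partial download with FtpWebRequest (the host, path, and credentials are placeholders to adjust for your setup):

    using System;
    using System.IO;
    using System.Net;

    class FtpResumeDownload
    {
        static void Main()
        {
            // Placeholder server, path, and credentials.
            var uri = new Uri("ftp://example.com/data/large.bin");
            const string localPath = "large.bin";

            // If a partial file exists, resume from its current length.
            long offset = File.Exists(localPath) ? new FileInfo(localPath).Length : 0;

            var request = (FtpWebRequest)WebRequest.Create(uri);
            request.Method = WebRequestMethods.Ftp.DownloadFile;
            request.Credentials = new NetworkCredential("user", "password");
            request.UseBinary = true;        // binary mode, no newline translation
            request.ContentOffset = offset;  // tells the server where to restart

            using (var response = (FtpWebResponse)request.GetResponse())
            using (var ftpStream = response.GetResponseStream())
            using (var file = new FileStream(localPath, FileMode.Append, FileAccess.Write))
            {
                ftpStream.CopyTo(file, 81920); // stream to disk in 80 KB chunks
            }
            Console.WriteLine("Download complete.");
        }
    }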

Congregate answered 14/2, 2011 at 9:53 Comment(2)
I agree. After connecting, FTP opens a separate data channel where the data is sent completely raw, carried only in TCP segments. There is nothing faster you can do. (Maybe zip/unzip the files before/after transport, but the effort may not be worth it, depending on the data itself.)Spinifex
Should I use SFTP instead of FTP if I want the data to be encrypted?Downbow

When sending large amounts of data, you are limited by the bandwidth of the connection. You should also plan for disruptions: a small interruption can have a big impact if it forces you to resend a lot of data.

You can use BITS (Background Intelligent Transfer Service). It transfers the data in the background and divides it into blocks, so it takes care of a lot of the plumbing for you.

It depends on IIS on the server side and provides a client API to transfer the data, so you do not need to write the basic transfer logic yourself.

I don't know if it will be faster, but it is at least a lot more reliable than making a single HTTP or FTP request, and you can have it up and running quickly.

If bandwidth is the problem, and the data doesn't have to be sent over the Internet, you could check out high-bandwidth (though high-latency) alternatives like sending a DVD by courier.

You can use BITS from .NET; on CodeProject there is a wrapper.
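
The exact classes depend on which wrapper you pick. As a rough workflow sketch, assuming a SharpBITS.NET-style wrapper that exposes BitsManager and BitsJob (these names and the namespace are assumptions; check your wrapper's docs for the real API):

    using System;
    using SharpBits.Base; // assumed wrapper namespace

    class BitsUploadSketch
    {
        static void Main()
        {
            var manager = new BitsManager();

            // Upload jobs require the BITS server extensions enabled on IIS.
            var job = manager.CreateJob("big-transfer", JobType.Upload); // assumed API
            job.AddFile("http://server/uploads/large.bin", @"C:\data\large.bin");

            // Queue the job; BITS transfers it in the background in blocks,
            // survives reboots, and retries after network drops.
            job.Resume();
        }
    }

Note that BITS deliberately throttles itself when the machine or network is busy, which is the trade-off raised in the comments below.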

Dell answered 14/2, 2011 at 10:11 Comment(3)
Won't BITS reduce its bandwidth usage when others use the connection? In theory that should increase the overall transfer time compared to other options?Trajan
The idea of BITS is indeed background transfer. But I've found it reasonably robust for transferring data.Dell
Thanks for your answer! I will try BITS and let you know about the results.Downbow

Well, bandwidth is your problem; going even lower, down to raw sockets, won't help you much there, as WCF overhead doesn't matter much for long binary responses. Maybe your option is to use a lossless streaming compression algorithm? That is, provided your data is compressible (do a dry run using zip: if it shrinks a file on the local disk, you can find a suitable streaming algorithm). Btw, I would suggest providing resume support :)
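
For example, a minimal C# sketch of compressing on the fly while sending over a socket with GZipStream (host, port, and file path are placeholders); the receiver mirrors this by wrapping its end of the socket in a GZipStream with CompressionMode.Decompress:

    using System.IO;
    using System.IO.Compression;
    using System.Net.Sockets;

    class CompressedSender
    {
        static void Main()
        {
            // Placeholder host, port, and file path.
            using (var client = new TcpClient("server.example.com", 9000))
            using (var network = client.GetStream())
            using (var gzip = new GZipStream(network, CompressionMode.Compress))
            using (var file = File.OpenRead(@"C:\data\large.bin"))
            {
                // Bytes are compressed as they are written to the socket,
                // so nothing large is ever buffered in memory.
                file.CopyTo(gzip, 81920);
            }
        }
    }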

Trajan answered 14/2, 2011 at 10:13 Comment(1)
Clearly you have no idea about the WCF stack. Look into binary streams with WCF, and into message encodings other than XML.Postorbital

Usually it's most appropriate to leverage something that's already been written for this type of thing, e.g. FTP, SCP, rsync, etc.

FTP supports resuming a broken download, although I'm not sure it supports resuming an upload. Rsync is much better at this kind of thing.
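
With rsync over SSH, for example, the --partial flag keeps a half-transferred file around so a rerun picks up where it left off, and -z compresses on the wire (host and paths below are placeholders):

    rsync --partial --progress -z large.bin user@server.example.com:/data/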

EDIT: It might be worth considering something that I'm not terribly familiar with but might be another option: BitTorrent?

A further option is to roll your own client/server using a protocol library such as UDT, which can give you better-than-TCP throughput on high-bandwidth links. See: http://udt.sourceforge.net/

Archimandrite answered 14/2, 2011 at 10:34 Comment(0)

Although there is some bandwidth overhead associated with higher-level frameworks, I have found WCF file transfer as a stream to be more than adequately fast, usually as fast as a regular file transfer over SMB. I have transferred hundreds of thousands of small files in a session, including larger files of 6-10 GB, sometimes more. I never once had any major issues over any sort of decent connection.

I really like the interfaces it provides. It allows you to do some pretty cool stuff that FTP can't, like remoting or duplex endpoints. You get programmatic control over every aspect of the connection on both sides, and the two ends can exchange messages along with the files. Fun stuff.
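
For reference, a minimal self-hosted sketch of a streamed WCF contract (the address, output path, quotas, and timeouts are placeholders to adjust; the key settings are TransferMode.Streamed and a large message size quota):

    using System;
    using System.IO;
    using System.ServiceModel;

    // A streamed contract: the file travels as a Stream instead of a
    // buffered message, so multi-gigabyte uploads are never held in memory.
    [ServiceContract]
    public interface IFileTransfer
    {
        [OperationContract]
        void Upload(Stream data);
    }

    public class FileTransferService : IFileTransfer
    {
        public void Upload(Stream data)
        {
            using (var file = File.Create(@"C:\uploads\received.bin")) // placeholder path
            {
                data.CopyTo(file, 81920);
            }
        }
    }

    class Host
    {
        static void Main()
        {
            var binding = new BasicHttpBinding
            {
                TransferMode = TransferMode.Streamed,
                MaxReceivedMessageSize = 10L * 1024 * 1024 * 1024, // allow up to 10 GB
                SendTimeout = TimeSpan.FromHours(12),
                ReceiveTimeout = TimeSpan.FromHours(12)
            };

            using (var host = new ServiceHost(typeof(FileTransferService)))
            {
                host.AddServiceEndpoint(typeof(IFileTransfer), binding,
                    "http://localhost:8080/filetransfer"); // placeholder address
                host.Open();
                Console.WriteLine("Listening. Press Enter to stop.");
                Console.ReadLine();
            }
        }
    }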

Yes, FTP is fast and simple, if you don't need all that stuff.

Helenehelenka answered 24/3, 2016 at 4:7 Comment(0)
