Issue with Azure chunked upload to fileshare via Azure.Storage.Files.Shares library

I'm trying to upload files to an Azure file share using the Azure.Storage.Files.Shares library.

If I don't chunk the file (by making a single UploadRange call) it works fine, but for files over 4 MB I haven't been able to get the chunking working. The downloaded file is the same size as the original, but it won't open in a viewer.

I can't set smaller HttpRanges on a large file because I get a 'request body is too large' error, so I'm splitting the file stream into multiple smaller streams and uploading the entire HttpRange of each of these.

        ShareClient share = new ShareClient(Common.Settings.AppSettings.AzureStorageConnectionString, ShareName());
        ShareDirectoryClient directory = share.GetDirectoryClient(directoryName);

        ShareFileClient file = directory.GetFileClient(fileKey);
        using(FileStream stream = fileInfo.OpenRead())
        {
            file.Create(stream.Length);

            //file.UploadRange(new HttpRange(0, stream.Length), stream);

            int blockSize = 128 * 1024;

            BinaryReader reader = new BinaryReader(stream);
            while(true)
            {
                byte[] buffer = reader.ReadBytes(blockSize);
                if (buffer.Length == 0)
                    break;

                MemoryStream uploadChunk = new MemoryStream();
                uploadChunk.Write(buffer, 0, buffer.Length);
                uploadChunk.Position = 0;

                file.UploadRange(new HttpRange(0, uploadChunk.Length), uploadChunk);
            }

            reader.Close();
        }

The code above uploads without error, but the image is corrupt when downloaded from Azure.

Does anyone have any ideas? Thanks for any help you can provide.

cheers

Steve

Chatterer answered 2/4, 2020 at 22:11 Comment(0)

I was able to reproduce the issue. Basically the problem is with the following line of code:

new HttpRange(0, uploadChunk.Length)

Essentially you're always writing the content to the same range (starting at offset 0), and that's why the file is getting corrupted.

Please try the code below. It should work. What I did here is define an HTTP range offset and advance it by the number of bytes already written to the file.

        using (FileStream stream = fileInfo.OpenRead())
        {
            file.Create(stream.Length);

            //file.UploadRange(new HttpRange(0, stream.Length), stream);

            int blockSize = 1 * 1024;
            long offset = 0;//Define http range offset
            BinaryReader reader = new BinaryReader(stream);
            while (true)
            {
                byte[] buffer = reader.ReadBytes(blockSize);
                if (buffer.Length == 0)
                    break;

                MemoryStream uploadChunk = new MemoryStream();
                uploadChunk.Write(buffer, 0, buffer.Length);
                uploadChunk.Position = 0;

                HttpRange httpRange = new HttpRange(offset, buffer.Length);
                var resp = file.UploadRange(httpRange, uploadChunk);
                offset += buffer.Length;//Shift the offset by number of bytes already written
            }

            reader.Close();
        }
Kickapoo answered 2/4, 2020 at 23:50 Comment(4)
Gaurav, you are a star. Thank you. I was assuming the HttpRange related to the local file, not the file within Azure. It all makes sense now. - Chatterer
It's worth noting the max chunk size is 4,194,304 bytes, which is 4 MB. - Meliorism
The code above has a .Dispose()-related bug. You need to wrap the MemoryStream in a using () {} statement (or, on C# 8, simply prefix with "using MemoryStream..."). Because of this, it was hanging for me on large file uploads. The BinaryReader would need this also. Otherwise works great! - Hanford
using (MemoryStream memoryStream = new MemoryStream(50)) { memoryStream.Write(bytes, 0, bytes.Length); } - Dermato
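
Putting the comments above together, here is a minimal sketch (assembled for illustration, not taken from the answer itself) that keeps the offset fix, wraps the streams in using blocks, and caps each chunk at the 4 MB range limit; it reuses the file and fileInfo variables from the question:

        //Minimal sketch (not from the original answer): same offset logic, but the streams are
        //disposed via using blocks and each chunk is capped at the 4 MB UploadRange limit.
        const int maxRangeSize = 4 * 1024 * 1024;//4194304 bytes, the maximum size of a single range

        using (FileStream stream = fileInfo.OpenRead())
        using (BinaryReader reader = new BinaryReader(stream))
        {
            file.Create(stream.Length);

            long offset = 0;//Position of the next range within the Azure file
            while (true)
            {
                byte[] buffer = reader.ReadBytes(maxRangeSize);
                if (buffer.Length == 0)
                    break;

                using (MemoryStream uploadChunk = new MemoryStream(buffer))
                {
                    file.UploadRange(new HttpRange(offset, buffer.Length), uploadChunk);
                }

                offset += buffer.Length;//Shift the offset by the number of bytes written
            }
        }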
