Dropzone JS - Chunking

I think I'm pretty close with this, I have the following dropzone config:

Dropzone.options.myDZ = {
  chunking: true,
  chunkSize: 500000,
  retryChunks: true,
  retryChunksLimit: 3,
  chunksUploaded: function(file, done) {
   done();
  }
};

However, because of the done() call it finishes after 1 chunk. I think at this point I need to check whether all chunks have been uploaded, and if so call done().

Here is the wiki for chunking: https://gitlab.com/meno/dropzone/wikis/faq#chunked-uploads

And here is the config options: http://www.dropzonejs.com/#configuration

Has anyone used dropzone before?

Diagonal answered 11/4, 2018 at 8:13 Comment(2)
I'm attempting chunking with dropzone.js right now. If I get it working I'll let you know. Please let me know if you figure it out too.Representation
After working on this all day, I can confirm that chunksUploaded is only called once all chunks have been sent. The done() function only performs processing on the front side to update the status and mark the file as complete. What you'll likely want to do is, inside of chunksUploaded, make a request to the server that merges all of the chunks. If that request returns successfully, then call done(). I can give more examples tomorrow. Some examples of your server-side code would help too.Representation

After working on this for a while I can confirm (with the latest version of dropzone, 5.4.0) that chunksUploaded will only be called once all of the chunks for a file have been uploaded. Calling done() marks the file as successfully processed. What is the size of the file you're attempting to upload? If it is below chunkSize then the file won't actually be chunked (since the default is forceChunking: false) and chunksUploaded will never be called (source).
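
To make that concrete, here is a minimal sketch (my own illustration, assuming Dropzone 5.x and the myDZ form from the question) of the options that control whether chunking happens at all; without forceChunking: true, a file smaller than chunkSize goes up in a single request and chunksUploaded never fires:

Dropzone.options.myDZ = {
  chunking: true,       // enable chunked uploads
  forceChunking: true,  // chunk even when file.size < chunkSize
  chunkSize: 500000,    // bytes per chunk
  retryChunks: true,
  retryChunksLimit: 3,
  chunksUploaded: function (file, done) {
    // Fires once per file, only after every chunk has been accepted by the server.
    // Ask the server to merge the chunks here, then call done() on success.
    done();
  }
};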

Below I have included my working front-side implementation of chunking with dropzone.js. A couple of notes beforehand: myDropzone and currentFile are global variables declared outside of the $(document).ready(), like this:

var currentFile = null;
var myDropzone = null;

This is because I need them to be in scope when I do my error handling for the PUT request inside the chunksUploaded function (the done() callback passed in there doesn't accept an error message as an argument, so we have to handle errors on our own, which is what those global variables are for; I can elaborate more if necessary).

$(function () {
    myDropzone = new Dropzone("#attachDZ", {
        url: "/api/ChunkedUpload",  
        params: function (files, xhr, chunk) {
            if (chunk) {
                return {
                    dzUuid: chunk.file.upload.uuid,
                    dzChunkIndex: chunk.index,
                    dzTotalFileSize: chunk.file.size,
                    dzCurrentChunkSize: chunk.dataBlock.data.size,
                    dzTotalChunkCount: chunk.file.upload.totalChunkCount,
                    dzChunkByteOffset: chunk.index * this.options.chunkSize,
                    dzChunkSize: this.options.chunkSize,
                    dzFilename: chunk.file.name,
                    userID: <%= UserID %>,
                };
            }
        },
        parallelUploads: 1,  // since we're using a global 'currentFile', we could have issues if parallelUploads > 1, so we'll make it = 1
        maxFilesize: 1024,   // max individual file size 1024 MB
        chunking: true,      // enable chunking
        forceChunking: true, // forces chunking when file.size < chunkSize
        parallelChunkUploads: true, // allows chunks to be uploaded in parallel (this is independent of the parallelUploads option)
        chunkSize: 1000000,  // chunk size 1,000,000 bytes (~1MB)
        retryChunks: true,   // retry chunks on failure
        retryChunksLimit: 3, // retry maximum of 3 times (default is 3)
        chunksUploaded: function (file, done) {
            // All chunks have been uploaded. Perform any other actions
            currentFile = file;

            // This calls server-side code to merge all chunks for the currentFile
            $.ajax({
                type: "PUT",
                url: "/api/ChunkedUpload?dzIdentifier=" + currentFile.upload.uuid
                    + "&fileName=" + encodeURIComponent(currentFile.name)
                    + "&expectedBytes=" + currentFile.size
                    + "&totalChunks=" + currentFile.upload.totalChunkCount
                    + "&userID=" + <%= UserID %>,
                success: function (data) {
                    // Must call done() if successful
                    done();
                },
                error: function (msg) {
                    currentFile.accepted = false;
                    myDropzone._errorProcessing([currentFile], msg.responseText);
                }
             });
        },
        init: function() {

            // This calls server-side code to delete temporary files created if the file failed to upload
            // This also gets called if the upload is canceled
            this.on('error', function(file, errorMessage) {
                $.ajax({
                    type: "DELETE",
                    url: "/api/ChunkedUpload?dzIdentifier=" + file.upload.uuid
                        + "&fileName=" + encodeURIComponent(file.name)
                        + "&expectedBytes=" + file.size
                        + "&totalChunks=" + file.upload.totalChunkCount
                        + "&userID=" + <%= UserID %>,
                    success: function (data) {
                        // nothing
                    }
                });
            });
        }
    });
});
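
A side note on the currentFile / myDropzone globals above: as a sketch of an alternative (not part of the original answer, assuming Dropzone 5.x), the chunksUploaded option could be written without them, because the file argument is still in scope inside the ajax callbacks and the instance can be looked up with Dropzone.forElement():

chunksUploaded: function (file, done) {
    // Drop-in replacement for the chunksUploaded option in the config above
    $.ajax({
        type: "PUT",
        url: "/api/ChunkedUpload?dzIdentifier=" + file.upload.uuid
            + "&fileName=" + encodeURIComponent(file.name)
            + "&expectedBytes=" + file.size
            + "&totalChunks=" + file.upload.totalChunkCount
            + "&userID=" + <%= UserID %>,
        success: function (data) {
            done();
        },
        error: function (msg) {
            // 'file' is captured by the closure, and Dropzone.forElement()
            // returns the Dropzone instance attached to the element
            file.accepted = false;
            Dropzone.forElement("#attachDZ")._errorProcessing([file], msg.responseText);
        }
    });
}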

If anyone is interested in my server-side code, let me know and I'll post it. I am using C#/ASP.Net.

EDIT: Added Server-side code

ChunkedUploadController.cs:

public class ChunkedUploadController : ApiController
{
    // Metadata posted with each chunk (keys must match the params sent by Dropzone above)
    private class DzMeta
    {
        public int intChunkNumber = 0;
        public string dzChunkNumber { get; set; }
        public string dzChunkSize { get; set; }
        public string dzCurrentChunkSize { get; set; }
        public string dzTotalSize { get; set; }
        public string dzIdentifier { get; set; }
        public string dzFilename { get; set; }
        public string dzTotalChunks { get; set; }
        public string dzCurrentChunkByteOffset { get; set; }
        public string userID { get; set; }

        public DzMeta(Dictionary<string, string> values)
        {
            dzChunkNumber = values["dzChunkIndex"];
            dzChunkSize = values["dzChunkSize"];
            dzCurrentChunkSize = values["dzCurrentChunkSize"];
            dzTotalSize = values["dzTotalFileSize"];
            dzIdentifier = values["dzUuid"];
            dzFilename = values["dzFilename"];
            dzTotalChunks = values["dzTotalChunkCount"];
            dzCurrentChunkByteOffset = values["dzChunkByteOffset"];
            userID = values["userID"];
            int.TryParse(dzChunkNumber, out intChunkNumber);
        }

        public DzMeta(NameValueCollection values)
        {
            dzChunkNumber = values["dzChunkIndex"];
            dzChunkSize = values["dzChunkSize"];
            dzCurrentChunkSize = values["dzCurrentChunkSize"];
            dzTotalSize = values["dzTotalFileSize"];
            dzIdentifier = values["dzUuid"];
            dzFilename = values["dzFilename"];
            dzTotalChunks = values["dzTotalChunkCount"];
            dzCurrentChunkByteOffset = values["dzChunkByteOffset"];
            userID = values["userID"];
            int.TryParse(dzChunkNumber, out intChunkNumber);
        }
    }

    // POST: receives a single chunk and writes it to a temporary file under a
    // directory named after the upload's uuid
    [HttpPost]
    public async Task<HttpResponseMessage> UploadChunk()
    {
        HttpResponseMessage response = new HttpResponseMessage { StatusCode = HttpStatusCode.Created };

        try
        {
            if (!Request.Content.IsMimeMultipartContent("form-data"))
            {
                //No Files uploaded
                response.StatusCode = HttpStatusCode.BadRequest;
                response.Content = new StringContent("No file uploaded or MIME multipart content not as expected!");
                throw new HttpResponseException(response);
            }

            var meta = new DzMeta(HttpContext.Current.Request.Form);
            var chunkDirBasePath = tSysParm.GetParameter("CHUNKUPDIR"); 
            var path = string.Format(@"{0}\{1}", chunkDirBasePath, meta.dzIdentifier);
            var filename = string.Format(@"{0}.{1}.{2}.tmp", meta.dzFilename, (meta.intChunkNumber + 1).ToString().PadLeft(4, '0'), meta.dzTotalChunks.PadLeft(4, '0'));
            Directory.CreateDirectory(path);

            Request.Content.LoadIntoBufferAsync().Wait();

            await Request.Content.ReadAsMultipartAsync(new CustomMultipartFormDataStreamProvider(path, filename)).ContinueWith((task) =>
            {
                if (task.IsFaulted || task.IsCanceled)
                {
                    response.StatusCode = HttpStatusCode.InternalServerError;
                    response.Content = new StringContent("Chunk upload task is faulted or canceled!");
                    throw new HttpResponseException(response);
                }
            });
        }
        catch (HttpResponseException ex)
        {
            LogProxy.WriteError(ex.Response.Content.ToString(), ex);
        }
        catch (Exception ex)
        {
            LogProxy.WriteError("Error uploading/saving chunk to filesystem", ex);
            response.StatusCode = HttpStatusCode.InternalServerError;
            response.Content = new StringContent(string.Format("Error uploading/saving chunk to filesystem: {0}", ex.Message));
        }

        return response;
    }

    // PUT: merges the uploaded chunks into the final file, stores it, and
    // deletes the temporary chunk directory
    [HttpPut]
    public HttpResponseMessage CommitChunks([FromUri]string dzIdentifier, [FromUri]string fileName, [FromUri]int expectedBytes, [FromUri]int totalChunks, [FromUri]int userID)
    {
        HttpResponseMessage response = new HttpResponseMessage { StatusCode = HttpStatusCode.OK };
        string path = "";

        try
        {
            var chunkDirBasePath = tSysParm.GetParameter("CHUNKUPDIR");
            path = string.Format(@"{0}\{1}", chunkDirBasePath, dzIdentifier);
            var dest = Path.Combine(path, HttpUtility.UrlDecode(fileName));
            FileInfo info = null;

            // Get all files in directory and combine in filestream
            var files = Directory.EnumerateFiles(path).Where(s => !s.Equals(dest)).OrderBy(s => s);
            // Check that the number of chunks is as expected
            if (files.Count() != totalChunks)
            {
                response.Content = new StringContent(string.Format("Total number of chunks: {0}. Expected: {1}!", files.Count(), totalChunks));
                throw new HttpResponseException(response);
            }

            // Merge chunks into one file
            using (var fStream = new FileStream(dest, FileMode.Create))
            {
                foreach (var file in files)
                {
                    using (var sourceStream = System.IO.File.OpenRead(file))
                    {
                        sourceStream.CopyTo(fStream);
                    }
                }
                fStream.Flush();
            }

            // Check that merged file length is as expected.
            info = new FileInfo(dest);
            if (info != null)
            {
                if (info.Length == expectedBytes)
                {
                    // Save the file in the database
                    tTempAtt file = tTempAtt.NewInstance();
                    file.ContentType = MimeMapping.GetMimeMapping(info.Name);
                    file.File = System.IO.File.ReadAllBytes(info.FullName);
                    file.FileName = info.Name;
                    file.Title = info.Name;
                    file.TemporaryID = userID;
                    file.Description = info.Name;
                    file.User = userID;
                    file.Date = SafeDateTime.Now;
                    file.Insert();
                }
                else
                {
                    response.Content = new StringContent(string.Format("Total file size: {0}. Expected: {1}!", info.Length, expectedBytes));
                    throw new HttpResponseException(response);
                }
            }
            else
            {
                response.Content = new StringContent("Chunks failed to merge and file not saved!");
                throw new HttpResponseException(response);
            }
        }
        catch (HttpResponseException ex)
        {
            LogProxy.WriteError(ex.Response.Content.ToString(), ex);
            response.StatusCode = HttpStatusCode.InternalServerError;
        }
        catch (Exception ex)
        {
            LogProxy.WriteError("Error merging chunked upload!", ex);
            response.StatusCode = HttpStatusCode.InternalServerError;
            response.Content = new StringContent(string.Format("Error merging chunked upload: {0}", ex.Message));
        }
        finally
        {
            // No matter what happens, we need to delete the temporary files if they exist
            if (!path.IsNullOrWS() && Directory.Exists(path))
            {
                Directory.Delete(path, true);
            }
        }

        return response;
    }

    // DELETE: removes any temporary chunk files left behind by a canceled or failed upload
    [HttpDelete]
    public HttpResponseMessage DeleteCanceledChunks([FromUri]string dzIdentifier, [FromUri]string fileName, [FromUri]int expectedBytes, [FromUri]int totalChunks, [FromUri]int userID)
    {
        HttpResponseMessage response = new HttpResponseMessage { StatusCode = HttpStatusCode.OK };

        try
        {
            var chunkDirBasePath = tSysParm.GetParameter("CHUNKUPDIR");
            var path = string.Format(@"{0}\{1}", chunkDirBasePath, dzIdentifier);

            // Delete abandoned chunks if they exist
            if (!path.IsNullOrWS() && Directory.Exists(path))
            {
                Directory.Delete(path, true);
            }
        }
        catch (Exception ex)
        {
            LogProxy.WriteError("Error deleting canceled chunks", ex);
            response.StatusCode = HttpStatusCode.InternalServerError;
            response.Content = new StringContent(string.Format("Error deleting canceled chunks: {0}", ex.Message));
        }

        return response;
    }
}

And lastly, CustomMultipartFormDataStreamProvider.cs:

// Saves each uploaded chunk with the file name built in UploadChunk instead of
// the provider's default auto-generated name
public class CustomMultipartFormDataStreamProvider : MultipartFormDataStreamProvider
{
    public readonly string _filename;
    public CustomMultipartFormDataStreamProvider(string path, string filename) : base(path)
    {
        _filename = filename;
    }

    public override string GetLocalFileName(HttpContentHeaders headers)
    {
        return _filename;
    }
}
Representation answered 13/4, 2018 at 16:37 Comment(2)
@SandeepGarg I posted updated javascript, the asp.net web api controller, and the custom multipartformdatastreamprovider class that is used. Please let me know if you have any questions.Representation
Hi Sandeep, thanks for this; your example code has got me most of the way to a working dropzone chunk upload flow, but upload success varies a lot. Files with only 1 or 2 chunks upload correctly, but larger files with 3 or more chunks fail because the last chunk or chunks have an empty file object in the formData. Did you experience this at all? Any idea why this might be happening? ThanksDemavend
