How to save data streams in S3? aws-sdk-go example not working?

I am trying to persist a given stream of data to S3-compatible storage. The size is not known before the stream ends and can vary from 5MB to ~500GB.

I tried different approaches but did not find a better solution than implementing sharding myself. My best guess is to allocate a buffer of a fixed size, fill it from my stream, and write it to S3. Is there a better solution? Maybe a way where this is transparent to me, without holding the whole stream in memory?

The aws-sdk-go readme has an example program that takes data from stdin and writes it to S3: https://github.com/aws/aws-sdk-go#using-the-go-sdk

When I try to pipe data in with |, I get the following error:

failed to upload object, SerializationError: failed to compute request body size
caused by: seek /dev/stdin: illegal seek

Am I doing something wrong, or is the example not working as I expect it to?

I also tried minio-go, with PutObject() or client.PutObjectStreaming(). This works, but it consumes as much memory as the data to be stored.

  1. Is there a better solution?
  2. Is there a small example program that can pipe arbitrary data into S3?
Kilmarnock answered 24/4, 2017 at 19:6 Comment(0)

You can use the SDK's s3manager.Uploader to handle uploads of unknown size, but you'll need to make os.Stdin "unseekable" by wrapping it in a plain io.Reader. The Uploader requires only an io.Reader as the input body, but under the hood it checks whether the body also implements io.Seeker, and if it does, it calls Seek on it. Since os.Stdin is just an *os.File, which implements the Seeker interface, you would by default get the same error you got from PutObjectWithContext.
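
The check is roughly of the following shape (an illustrative sketch, not the SDK's actual code); you can reproduce the failing seek on a pipe yourself:

package main

import (
    "fmt"
    "io"
    "os"
)

func main() {
    var body io.Reader = os.Stdin
    // *os.File satisfies io.Seeker, so this type assertion succeeds...
    if s, ok := body.(io.Seeker); ok {
        // ...but seeking a pipe fails with "illegal seek".
        if _, err := s.Seek(0, io.SeekEnd); err != nil {
            fmt.Println(err) // e.g. "seek /dev/stdin: illegal seek"
        }
    }
}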

The Uploader also uploads the data in chunks whose size you can configure, and you can also configure how many of those chunks should be uploaded concurrently.
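
Both knobs are set through the Uploader's options function; a minimal sketch (sess is the *session.Session from the example below, the values are only examples, and 5 happens to be the package's default concurrency):

uploader := s3manager.NewUploader(sess, func(u *s3manager.Uploader) {
    u.PartSize = 20 << 20 // buffer up to 20MB per part (the minimum allowed is 5MB)
    u.Concurrency = 5     // upload up to 5 parts in parallel (5 is the default)
})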

Here's a modified version of the linked example, stripped of the code that can remain unchanged.

package main

import (
    // ...
    "io"
    "os"

    "github.com/aws/aws-sdk-go/aws"
    "github.com/aws/aws-sdk-go/aws/session"
    "github.com/aws/aws-sdk-go/service/s3/s3manager"
)

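// reader hides os.Stdin's Seek method: the wrapper satisfies only io.Reader,
// so the Uploader streams the body in parts instead of trying to seek it to
// determine its size.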
type reader struct {
    r io.Reader
}

func (r *reader) Read(p []byte) (int, error) {
    return r.r.Read(p)
}

func main() {
    // ... parse flags

    sess := session.Must(session.NewSession())
    uploader := s3manager.NewUploader(sess, func(u *s3manager.Uploader) {
        u.PartSize = 20 << 20 // 20MB
        // ... more configuration
    })

    // ... context stuff

    _, err := uploader.UploadWithContext(ctx, &s3manager.UploadInput{
        Bucket: aws.String(bucket),
        Key:    aws.String(key),
        Body:   &reader{os.Stdin},
    })

    // ... handle error
}
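
With that in place you should be able to pipe arbitrary data in, e.g. cat some-large-file | ./uploader -b my-bucket -k my-key (the program and flag names here are placeholders; use whatever flag parsing you kept from the linked example).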

As to whether this is a better solution than minio-go, I don't know; you'll have to test that yourself.

Denis answered 24/4, 2017 at 23:55 Comment(4)
Thank you very much. I did some testing and got a constant memory usage of ~500MB, no matter whether I store 5GB or 25GB of data. This is far from perfect, but acceptable. :) – Kilmarnock
I'm glad I could help. What part size are you using, and how many concurrent uploads are you allowing? – Denis
I did not explicitly set concurrent uploads and used your 20MB as PartSize. I just tried 256MB and it consumes ~2.1GB of memory. With PartSize = 5MB it consumes 132MB. I start to see a pattern here ;) – Kilmarnock
By default I think it does 5 concurrent uploads, so at 20MB per part it should be 100MB, and at 256MB it should be 1.2GB? But that's me assuming that nothing else is consuming memory... – Denis
