docker-compose: how to use minio in- and outside of the docker network

I have the following docker-compose.yml to run a local environment for my Laravel App.

version: '3'
services:
  app:
    build:
      context: .
      dockerfile: .docker/php/Dockerfile
    ports:
      - 80:80
      - 443:443
    volumes:
      - .:/var/www:delegated
    environment:
      AWS_ACCESS_KEY_ID: minio_access_key
      AWS_SECRET_ACCESS_KEY: minio_secret_key
      AWS_BUCKET: Bucket
      AWS_ENDPOINT: http://s3:9000
    links:
      - database
      - s3
  database:
    image: mariadb:10.3
    ports:
      - 63306:3306
    environment:
      MYSQL_ROOT_PASSWORD: secret
  s3:
    image: minio/minio
    ports:
      - "9000:9000"
    volumes:
      - ./storage/minio:/data
    environment:
      MINIO_ACCESS_KEY: minio_access_key
      MINIO_SECRET_KEY: minio_secret_key
    command: server /data

As you can see, I use MinIO as AWS-S3-compatible storage. This works very well, but when I generate a URL for a file (Storage::disk('s3')->url('some-file.txt')), I obviously get a URL like http://s3:9000/Bucket/some-file.txt, which does not work outside of the Docker network.

I've already tried setting AWS_ENDPOINT to http://127.0.0.1:9000, but then Laravel can't connect to the MinIO server, since inside the app container 127.0.0.1 refers to that container itself...

Is there a way to configure Docker / Laravel / Minio to generate urls which are accessible in- and outside of the Docker network?
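
For reference, Laravel's s3 disk already distinguishes the endpoint the SDK connects to from the base URL used when generating links, which is one possible avenue here. Below is a sketch of the relevant part of config/filesystems.php; the 'url', 'endpoint', and 'use_path_style_endpoint' keys are standard disk options, but depending on the Laravel version some may need to be added manually, and the example values are only illustrative:

```php
// config/filesystems.php (sketch)
's3' => [
    'driver' => 's3',
    'key'    => env('AWS_ACCESS_KEY_ID'),
    'secret' => env('AWS_SECRET_ACCESS_KEY'),
    'region' => env('AWS_DEFAULT_REGION', 'us-east-1'),
    'bucket' => env('AWS_BUCKET'),
    // The SDK talks to MinIO through the internal hostname...
    'endpoint' => env('AWS_ENDPOINT'),  // e.g. http://s3:9000
    // ...while Storage::url() builds plain links from this base instead:
    'url' => env('AWS_URL'),            // e.g. http://localhost:9000/Bucket
    // MinIO needs path-style URLs (no per-bucket subdomains):
    'use_path_style_endpoint' => true,
],
```

One caveat: this separates plain Storage::url() links from the endpoint, but temporary (presigned) URLs are still signed by the SDK against the endpoint host, which is exactly the problem the answers below wrestle with.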

Tellurium answered 17/6, 2019 at 8:23. Comments (1):
Related question: #56971374 (Miltie)

I expanded on the other solutions in this question to create one that works for me both on localhost and on a server with a resolvable DNS name.

The localhost variant is essentially the hosts-file mapping described in the other answers.

Create localhost host mapping

# Note: sudo does not apply to the >> redirection, so use tee instead:
echo "127.0.0.1       my-minio-localhost-alias" | sudo tee -a /etc/hosts

Set HOSTNAME, use 'my-minio-localhost-alias' for localhost

export HOSTNAME=my-minio-localhost-alias

Create hello.txt

Hello from Minio!

Create docker-compose.yml

This compose file contains the following containers:

  • minio: minio service
  • minio-mc: command line tool to initialize content
  • s3-client: command line tool to generate presigned urls

version: '3.7'
networks:
  mynet:
services:
  minio:
    container_name: minio
    image: minio/minio
    ports:
    - published: 9000
      target: 9000
    command: server /data
    networks:
      mynet:
        aliases:
        # For localhost access, add the following to your /etc/hosts
        # 127.0.0.1       my-minio-localhost-alias
        # When accessing the minio container on a server with an accessible dns, use the following
        - ${HOSTNAME}
  # When initializing the minio container for the first time, you will need to create an initial bucket named my-bucket.
  minio-mc:
    container_name: minio-mc
    image: minio/mc
    depends_on:
    - minio
    volumes:
    - "./hello.txt:/tmp/hello.txt"
    networks:
      mynet:
  s3-client:
    container_name: s3-client
    image: amazon/aws-cli
    environment:
      AWS_ACCESS_KEY_ID: minioadmin
      AWS_SECRET_ACCESS_KEY: minioadmin
    depends_on:
    - minio
    networks:
      mynet:

Start the minio container

docker-compose up -d minio

Create a bucket in minio and load a file

# The minio/mc image's entrypoint is already mc, so start with the subcommand:
docker-compose run minio-mc config host add docker http://minio:9000 minioadmin minioadmin
docker-compose run minio-mc mb docker/my-bucket
docker-compose run minio-mc cp /tmp/hello.txt docker/my-bucket/hello.txt

Create a presigned URL that is accessible inside AND outside of the docker network

docker-compose run s3-client --endpoint-url http://${HOSTNAME}:9000 s3 presign s3://my-bucket/hello.txt
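
This presigned URL works from both sides only because the same hostname resolves in both places: SigV4 presigning signs the Host header, so a URL generated for one hostname is invalid for any other. A simplified, self-contained sketch of that property (this is not MinIO's or the SDK's actual code path, and the credentials are the demo values from this walkthrough):

```python
import hashlib
import hmac
from datetime import datetime, timezone

def _hmac(key: bytes, msg: str) -> bytes:
    return hmac.new(key, msg.encode(), hashlib.sha256).digest()

def presign_signature(host: str, secret_key: str, bucket: str, obj: str) -> str:
    # Simplified AWS SigV4 presigned-GET signature; real SDKs add more
    # query parameters, but the overall structure is the same.
    t = datetime(2024, 1, 1, tzinfo=timezone.utc)
    amz_date = t.strftime("%Y%m%dT%H%M%SZ")
    datestamp = t.strftime("%Y%m%d")
    scope = f"{datestamp}/us-east-1/s3/aws4_request"
    # The Host header is part of the signed canonical request -- this is
    # why a URL presigned for "s3:9000" is invalid for "localhost:9000".
    canonical_request = "\n".join([
        "GET",
        f"/{bucket}/{obj}",
        f"X-Amz-Date={amz_date}&X-Amz-Expires=3600&X-Amz-SignedHeaders=host",
        f"host:{host}",
        "",                  # end of canonical headers
        "host",              # signed headers
        "UNSIGNED-PAYLOAD",
    ])
    string_to_sign = "\n".join([
        "AWS4-HMAC-SHA256",
        amz_date,
        scope,
        hashlib.sha256(canonical_request.encode()).hexdigest(),
    ])
    key = _hmac(_hmac(_hmac(_hmac(b"AWS4" + secret_key.encode(), datestamp),
                            "us-east-1"), "s3"), "aws4_request")
    return hmac.new(key, string_to_sign.encode(), hashlib.sha256).hexdigest()

# Signing the same object for two hostnames yields two different signatures,
# so simply rewriting the host in an already-presigned URL breaks it:
sig_inside = presign_signature("s3:9000", "minioadmin", "my-bucket", "hello.txt")
sig_outside = presign_signature("localhost:9000", "minioadmin", "my-bucket", "hello.txt")
print(sig_inside != sig_outside)  # prints: True
```

Hence the network alias above: the URL must be generated for a hostname that resolves both inside and outside the Docker network.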
Braggart answered 14/4, 2020 at 18:17. Comments (1):
The only solution that actually works! (Pyxis)

How about binding the address? (not tested)

...
  s3:
    image: minio/minio
    ports:
      - "9000:9000"
    volumes:
      - ./storage/minio:/data
    environment:
      MINIO_ACCESS_KEY: minio_access_key
      MINIO_SECRET_KEY: minio_secret_key
    command: server --address 0.0.0.0:9000 /data
Litigation answered 12/5, 2020 at 0:47. Comments (2):
I don't think this would solve the problem, because it would not provide a single hostname that can be used both from a Docker service and from the host. (Miltie)
The command part has changed as of now: command: server /data --console-address ":9001" (Barcellona)

Since you are mapping port 9000 on the host to that service, you should be able to access it via s3:9000 if you simply add s3 to your hosts file (/etc/hosts on Mac/Linux).

Add 127.0.0.1 s3 to your hosts file, and you should be able to access the s3 container from your host machine at http://s3:9000/path/to/file (plain http, since MinIO serves HTTP unless TLS is configured).

This means you can use the s3 hostname both inside and outside the Docker network.
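
With that hosts entry in place, the question's setup can use one hostname on both sides. A sketch of the matching Laravel .env values (names taken from the question's compose file; AWS_USE_PATH_STYLE_ENDPOINT assumes a Laravel version whose filesystems.php reads it):

```
AWS_ACCESS_KEY_ID=minio_access_key
AWS_SECRET_ACCESS_KEY=minio_secret_key
AWS_BUCKET=Bucket
# "s3" resolves to the minio container inside the Docker network
# and to 127.0.0.1 (via /etc/hosts) on the host machine:
AWS_ENDPOINT=http://s3:9000
AWS_USE_PATH_STYLE_ENDPOINT=true
```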

Woodbine answered 17/6, 2019 at 10:22. Comments (5):
I've already thought about that, but it's more like a workaround for me. (Tellurium)
It's actually not that much of a workaround, since in a local development environment that's how things sometimes work... Are you not adding your web application hostnames to the hosts file when developing them? You can also set up a proxy server running on your host machine and forward requests for s3 to localhost. Or simply make the minio service completely unavailable from the host machine and handle the port-9000 requests with an nginx proxy service that's part of the docker-compose project. (Plasterboard)
Having the same issue, and I agree with @Tellurium that this isn't an ideal solution, even if it's the only one. There shouldn't be any need for local DNS configuration, assuming the app is fine to run off of localhost, and the nginx proxy layer seems overkill if minio is just a small part of the overall app. The best solution here would be for MinIO to allow an alternative hostname when generating its signed URLs, but it looks like there isn't any movement on that: github.com/minio/minio/issues/2848 (Desdamonna)
Well, this is a solution that we can do ourselves... Sitting and hoping for the developers to implement a feature in their tools isn't going to be too productive. You can probably create a proxy server through which you serve all your files, using nginx or some other lightweight solution. (Plasterboard)
Agreed on both points; your answer is probably the only option at the moment. It seems odd this isn't widely addressed anywhere, though; even tools similar to MinIO (FakeS3, etc.) don't seem to consider this use case. Lots of folks running docker-compose these days would likely run into this if they tried to duplicate their infra locally. (Desdamonna)

I didn't find a complete setup of MinIO using docker-compose, so here it is:

version: '2.4'

services:
  s3:
    image: minio/minio:latest
    ports:
      - "9000:9000"
      - "9099:9099"
    environment:
      MINIO_ROOT_USER: minioadmin
      MINIO_ROOT_PASSWORD: minioadmin
    volumes:
      - storage-minio:/data
    command: server --address ":9099" --console-address ":9000" /data
    restart: always # necessary since it sometimes fails to start

volumes:
  storage-minio:
    external: true

In the command section, --address sets the API address (port 9099 here) and --console-address sets the console address (port 9000 here), where you can connect to the web console; see the image below. Use the MINIO_ROOT_USER and MINIO_ROOT_PASSWORD values to sign in.

[Image: MinIO console login screen]
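
Note that with this layout S3 clients must target the API port (9099), not the console port. A sketch of the environment an app container would use against this service (variable names follow the question's setup; values are illustrative):

```
# The S3 API is on 9099 here (the --address flag); 9000 is only the web console.
AWS_ENDPOINT=http://s3:9099
AWS_ACCESS_KEY_ID=minioadmin
AWS_SECRET_ACCESS_KEY=minioadmin
```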

Dudden answered 1/12, 2021 at 13:00. Comments (5):
As far as I can tell, this doesn't solve the problem. The service would still be reachable under two different hostnames, depending on where you're trying to connect from. (Miltie)
My answer provides a full setup of MinIO using docker-compose and also shows how to use the MinIO console. (Dudden)
Also, both the console and the API are exposed from the container. If you don't want that, you can comment out the ports above in the setup. (Dudden)
You are right, and I think this might be helpful for someone looking to set up MinIO using Docker. But this is not a solution to the problem @Tellurium encountered: how do you consolidate the two hosts that are used to contact the service? You can either use s3 (inside of Docker) or localhost (outside of Docker), but there is no host that works in both environments. (Miltie)
So, how would OP be able to generate an S3 URL that works both inside and outside of the container using your solution? The --address argument just binds the server to an address/port combination. It doesn't change the fact that the resulting URL won't work both inside and outside of Docker. Please correct me if I'm wrong; I'm happy to learn. (Miltie)

Adding the "s3" alias to my local hosts file did not do the trick, but explicitly binding the ports to 127.0.0.1 worked like a charm:

s3:
    image: minio/minio:RELEASE.2022-02-05T04-40-59Z
    restart: "unless-stopped"
    volumes:
        - s3data:/data
    environment:
        MINIO_ROOT_USER: minio
        MINIO_ROOT_PASSWORD: minio123
    # Allow all incoming hosts to access the server by using 0.0.0.0
    command: server --address 0.0.0.0:9000 --console-address ":9001" /data
    ports:
        # Bind explicitly to 127.0.0.1
        - "127.0.0.1:9000:9000"
        - "9001:9001"
    healthcheck:
        test: ["CMD", "curl", "-f", "http://127.0.0.1:9000/minio/health/live"]
        interval: 30s
        timeout: 20s
        retries: 3
Drava answered 6/2, 2022 at 16:02.

For those who are looking for an S3 integration test against a MinIO object server, especially a Java implementation:

docker-compose file:

version: '3.7'
services:
  minio-service:
    image: quay.io/minio/minio
    command: minio server /data
    ports:
      - "9000:9000"
    environment:
      MINIO_ROOT_USER: minio
      MINIO_ROOT_PASSWORD: minio123

The actual IntegrationTest class:

import com.amazonaws.auth.AWSStaticCredentialsProvider;
import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.client.builder.AwsClientBuilder;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.S3Object;
import org.junit.jupiter.api.*;
import org.testcontainers.containers.DockerComposeContainer;

import java.io.File;

@TestInstance(TestInstance.Lifecycle.PER_CLASS)
class MinioIntegrationTest {

    private static final DockerComposeContainer minioContainer = new DockerComposeContainer<>(new File("src/test/resources/docker-compose.yml"))
            .withExposedService("minio-service", 9000);
    private static final String MINIO_ENDPOINT = "http://localhost:9000";
    private static final String ACCESS_KEY = "minio";
    private static final String SECRET_KEY = "minio123";
    private AmazonS3 s3Client;

    @BeforeAll
    void setupMinio() {
        minioContainer.start();
        initializeS3Client();
    }

    @AfterAll
    void closeMinio() {
        minioContainer.close();
    }

    private void initializeS3Client() {
        String name = Regions.US_EAST_1.getName();
        AwsClientBuilder.EndpointConfiguration endpoint = new AwsClientBuilder.EndpointConfiguration(MINIO_ENDPOINT, name);
         s3Client = AmazonS3ClientBuilder.standard()
                .withCredentials(new AWSStaticCredentialsProvider(new BasicAWSCredentials(ACCESS_KEY, SECRET_KEY)))
                .withEndpointConfiguration(endpoint)
                .withPathStyleAccessEnabled(true)
                .build();
    }

    @Test
    void shouldReturnActualContentBasedOnBucketName() throws Exception{
        String bucketName = "test-bucket";
        String key = "s3-test";
        String content = "Minio Integration test";
        s3Client.createBucket(bucketName);
        s3Client.putObject(bucketName, key, content);
        S3Object object = s3Client.getObject(bucketName, key);
        // size the buffer from the content instead of a magic number;
        // the payload is small, so a single read suffices here
        byte[] actualContent = new byte[content.length()];
        object.getObjectContent().read(actualContent);
        Assertions.assertEquals(content, new String(actualContent));
    }
}
Clairclairaudience answered 6/10, 2021 at 13:00.

© 2022 - 2024 — McMap. All rights reserved.