In my Dockerfile, I would like to define variables that I can use later in the Dockerfile.
I am aware of the ENV instruction, but I do not want these variables to be environment variables.
Is there a way to declare variables at Dockerfile scope?
You can use ARG - see https://docs.docker.com/engine/reference/builder/#arg
The ARG instruction defines a variable that users can pass at build-time to the builder with the docker build command using the --build-arg <varname>=<value> flag. If a user specifies a build argument that was not defined in the Dockerfile, the build outputs an error.
This can be useful with COPY at build time (e.g. copying tag-specific content such as specific folders). For example:
ARG MODEL_TO_COPY
COPY application ./application
COPY $MODEL_TO_COPY ./application/$MODEL_TO_COPY
While building the container:
docker build --build-arg MODEL_TO_COPY=model_name -t <container>:<model_name specific tag> .
To answer your question:
In my Dockerfile, I would like to define variables that I can use later in the Dockerfile.
You can define a variable with:
ARG myvalue=3
Note that spaces around the equal character are not allowed.
And use it later with:
RUN echo "$myvalue" > /test
Is this correct? – Hoax

To my knowledge, only ENV allows that, as mentioned in "Environment replacement":
Environment variables (declared with the ENV statement) can also be used in certain instructions as variables to be interpreted by the Dockerfile.

They have to be environment variables in order to be re-declared in each new container created for each line of the Dockerfile by docker build.
In other words, those variables aren't interpreted directly in a Dockerfile, but in a container created for a Dockerfile line, hence the use of environment variables.
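As a quick illustration of that substitution (the paths here are only example values), an ENV value can be referenced by later instructions such as WORKDIR and COPY:

ENV APP_HOME=/opt/app
# The Dockerfile parser substitutes $APP_HOME in supported instructions:
WORKDIR $APP_HOME
COPY . $APP_HOME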
These days, I use both ARG (docker 1.10+, with docker build --build-arg var=value) and ENV. Using ARG alone means your variable is only visible at build time, not at runtime.
My Dockerfile usually has:
ARG var
ENV var=${var}
In your case, ARG is enough: I typically use it to set the http_proxy variable, which docker build needs to access the internet at build time.
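For illustration, a minimal sketch of that ARG + ENV pattern (the proxy URL and the apt-get step are just example values):

ARG http_proxy
ENV http_proxy=${http_proxy}
# The proxy is now available to build-time steps...
RUN apt-get update
# ...and, because ENV persists, to containers started from the image.

Built with, for example: docker build --build-arg http_proxy=http://proxy.example.com:3128 .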
Christopher King adds in the comments:
Watch out!
The ARG variable is only in scope for the stage in which it is used and needs to be redeclared for each stage.
He points to Dockerfile / scope:
An ARG variable definition comes into effect from the line on which it is defined in the Dockerfile, not from the argument's use on the command-line or elsewhere. For example, consider this Dockerfile:

FROM busybox
USER ${user:-some_user}
ARG user
USER $user
# ...
A user builds this file by calling:
docker build --build-arg user=what_user .
The USER at line 2 evaluates to some_user as the user variable is defined on the subsequent line 3. The USER at line 4 evaluates to what_user as user is defined and the what_user value was passed on the command line. Prior to its definition by an ARG instruction, any use of a variable results in an empty string.

An ARG instruction goes out of scope at the end of the build stage where it was defined. To use an arg in multiple stages, each stage must include the ARG instruction.
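For example, a sketch of redeclaring an ARG in multiple stages (the stage names and echo steps are only illustrative):

ARG user=some_user        # declared before the first FROM: usable in FROM lines only

FROM busybox AS build
ARG user                  # redeclared without a value: picks up the default or --build-arg
RUN echo "build stage user: $user"

FROM busybox AS final
ARG user                  # must be redeclared again in this stage
RUN echo "final stage user: $user"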
If the variable is re-used within the same RUN instruction, one could simply set a shell variable. I really like how they approached this in the official Ruby Dockerfile.
RUN foo=$(date) && echo $foo
– Harmsworth

You can use ARG variable defaultValue, and during the build command you can even override this value using --build-arg variable=value. To use these variables in the Dockerfile, refer to them as $variable in RUN commands.
Note: these variables are available to commands like RUN echo $variable, but they do not persist in the image.
ARG variable=defaultvalue instead of ARG variable defaultvalue – Thirst
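Putting the answer and the correction together, a minimal sketch (the names and the tag are placeholders):

ARG variable=defaultValue
# Visible to build-time commands as an environment variable...
RUN echo "value at build time: $variable"
# ...but not persisted as an environment variable in the resulting image.

docker build --build-arg variable=overriddenValue -t myimage .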
Late to the party, but if you don't want to expose environment variables, I guess it's easier to do something like this:
RUN echo 1 > /tmp/__var_1
RUN echo `cat /tmp/__var_1`
RUN rm -f /tmp/__var_1
I ended up doing it this way because we host private npm packages in AWS CodeArtifact:
RUN aws codeartifact get-authorization-token --output text > /tmp/codeartifact.token
RUN npm config set //company-123456.d.codeartifact.us-east-2.amazonaws.com/npm/internal/:_authToken=`cat /tmp/codeartifact.token`
RUN rm -f /tmp/codeartifact.token
And here ARG cannot work, and I don't want to use ENV because I don't want to expose this token to anything else.
The RUNs would produce 3 layers. If we inspect the earlier layers, can we see your secrets? – Friesen
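One way to address that concern (a sketch, assuming the same CodeArtifact registry as above and an npm ci install step) is to fetch, use, and remove the token within a single RUN, so no committed layer contains it:

RUN TOKEN=$(aws codeartifact get-authorization-token --output text) && \
    npm config set //company-123456.d.codeartifact.us-east-2.amazonaws.com/npm/internal/:_authToken=$TOKEN && \
    npm ci && \
    npm config delete //company-123456.d.codeartifact.us-east-2.amazonaws.com/npm/internal/:_authToken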
Adding my own answer, as I had to do a lot more research to understand and use @Evgeny's answer above (https://mcmap.net/q/93111/-how-to-define-a-variable-in-a-dockerfile).
For my situation, I needed the Dockerfile itself to set the value of an argument. I wanted to run Node.js to run a script, but unfortunately the environment that is going to run the Dockerfile (docker build, docker run) is a Jenkins box which doesn't have Node.js installed. The Dockerfile I'm using, on the other hand, is responsible for installing Node.js. (This situation may also apply to you if you are coding in Python, Go, Ruby (as in Evgeny's answer), etc. and the Dockerfile installs those, but they're not available in the outside environment.)
I then wanted to take the output of my Node.js script and use it to "set a variable". It seems like all the other answers here are about doing docker run -e BLAH=something, which lets docker know that an environment variable or ARG has a certain value, but that assumes that the exterior environment knows the value (or can figure it out) and passes it in to the Dockerfile. To me this is essentially hard-coding it, not setting it dynamically, as the question asks. So it wouldn't work for me.
Anyway, let's say you have a script which requires something that docker installs in order to execute a RUN command (e.g. node my-pre-build-script.js or python script.py).
You can capture its output in a shell variable: RUN prebuildout=$(node pre-build-script.js).
But that variable cannot be re-used in a later RUN command within docker, so instead you have to execute the script and use its output all within one long command:

RUN prebuildout=$(node pre-build-script.js) && \
    echo "$prebuildout" && \
    MY_VARIABLE="$prebuildout" npm start && \
    ...etc

NOTE:
Use \ to break the line even though you didn't finish that RUN command.
Use " quotes to use the shell temporary variable within the command you are running, like: RUN python myCommand "$metaShellVariable".

For my particular use case this didn't end up working (maybe my syntax was off), but I got it to work easily since I only needed the output of the command once, using something like npm run "$(node my-pre-build-script.js)". In my actual use case I couldn't use npm run for the script (otherwise I could have just added something like prebuild: node my-pre-build-script.js to package.json); in my case this runs before package.json is there but after Node is installed.
Again, my-pre-build-script.js has only one console.log statement in it and, for the sake of this example, looks like this:
(async () => {
  const res = await fetch('/blah');
  const json = await res.json();
  const commandToRun = extractCommand(json);
  console.log(commandToRun || 'build');
})()
In other words, have your script print its output to the console (System.out.println, printf(), etc.) and you can use it within the Docker RUN shell command.
This is powerful, because your script might make several secure API calls that only this Docker container is configured to access, and you might not want to use curl and parse the output out of that yourself.
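Tying it together, a sketch of how that could look in the Dockerfile itself (the base image and the final echo are assumptions; replace them with whatever actually consumes the value):

FROM node:18
WORKDIR /app
COPY my-pre-build-script.js .
# Run the script, capture whatever it prints, and use the value in the same RUN command.
RUN buildCommand=$(node my-pre-build-script.js) && \
    echo "pre-build script chose: $buildCommand"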