Package managers for JavaScript, like `npm` and `yarn`, use a `package.json` to specify 'top-level' dependencies, and create a lock-file to keep track of the specific versions of all packages (i.e. top-level and sub-level dependencies) that are installed as a result. In addition, the `package.json` allows us to make a distinction between types of top-level dependencies, such as production and development.
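For concreteness, a minimal `package.json` with that production/development split might look like this (the package names and versions are just illustrative):

```json
{
  "dependencies": {
    "express": "^4.18.0"
  },
  "devDependencies": {
    "jest": "^29.0.0"
  }
}
```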
For Python, on the other hand, we have `pip`. I suppose the `pip` equivalent of a lock-file would be the result of `pip freeze > requirements.txt`.
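That is, something along these lines (using `requests` purely as an example):

```shell
pip install requests           # install one top-level dependency
pip freeze > requirements.txt  # pins requests *and* all of its
                               # sub-dependencies (certifi, idna,
                               # urllib3, ...) at exact versions
```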
However, if you maintain only this single `requirements.txt` file, it is difficult to distinguish between top-level and sub-level dependencies (you would need e.g. `pipdeptree -r` to figure those out). This can be a real pain if you want to remove or change top-level dependencies, as it is easy to be left with orphaned packages (as far as I know, `pip` does not remove sub-dependencies when you `pip uninstall` a package).
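For reference, this is the kind of detective work I mean:

```shell
pip install pipdeptree  # third-party tool, not part of pip itself
pipdeptree              # tree view: top-level packages at the roots
pipdeptree -r           # reverse view: each package and what requires it
```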
Now, I wonder: Is there some convention for dealing with different types of these `requirements` files and distinguishing between top-level and sub-level dependencies with `pip`?
For example, I can imagine having a `requirements-prod.txt` which contains only the top-level requirements for the production environment, as the (simplified) equivalent of `package.json`, and a `requirements-prod.lock`, which contains the output of `pip freeze` and acts as my lock-file. In addition, I could have a `requirements-dev.txt` for development dependencies, and so on and so forth.
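In workflow terms, the scheme I have in mind would be roughly:

```shell
pip install -r requirements-prod.txt   # top-level deps only, loosely pinned
pip freeze > requirements-prod.lock    # snapshot the full resolved tree

# later, to reproduce the exact environment in production:
pip install -r requirements-prod.lock
```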
I would like to know if this is the way to go, or if there is a better approach.
P.S. The same question could be asked for `conda`'s `environment.yml`.
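(For that case, the rough analogue I picture is `environment.yml` holding only top-level dependencies, with `conda env export` playing the role of `pip freeze`; the `environment.lock.yml` name below is my own invention.)

```shell
conda env create -f environment.yml        # top-level deps only
conda env export > environment.lock.yml    # full pinned environment
```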