The most obvious economy is that, in the first case, your Perl process creates the directory itself, while in the second, Perl starts a new process that runs a command shell, which parses the command line and runs the shell's `mkdir` command to create the directory; the child process is then destroyed. You would be creating and destroying a process, and running the shell, for every call to `system`: there is no caching of processes or any similar economy.
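To make the difference concrete, here is a minimal sketch of the two approaches (the directory names are just illustrations):

```perl
use strict;
use warnings;

# Built-in: the current Perl process makes the system call directly
mkdir 'demo_dir' or warn "mkdir failed: $!";

# Shelling out: forks a child, runs a shell, which parses the command
# line and runs the external mkdir -- a new process for every call
system('mkdir demo_dir2') == 0 or warn "system mkdir failed: $?";
```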
The second thing that comes to mind is error handling: if the built-in `mkdir` fails, it is simple to handle the error in Perl, whereas shelling out to run a `mkdir` command puts your program at a distance from the error, making it far more awkward to handle the many different problems that may arise.
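For example, the built-in reports its failure directly through `$!`, while `system` only gives you an exit status in `$?` to decode (the paths here are illustrative):

```perl
use strict;
use warnings;

# Built-in mkdir: on failure, $! holds the specific OS error message
unless (mkdir '/no/such/parent/dir') {
    warn "Could not create directory: $!\n";  # e.g. "No such file or directory"
}

# system: you see only the child's exit status; any diagnostic the
# command printed went to stderr, not to your program
my $status = system('mkdir /no/such/parent/dir 2>/dev/null');
if ($status != 0) {
    warn "Child exited with status ", $status >> 8, "\n";
}
```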
There is also the question of maintainability and portability, which affects you even if you aren't expecting to run your program on more than one machine. Once you hand control to a `system` command, you have no control over what happens. I could have written a `mkdir` that deletes your home directory or, less disastrously, your program may find itself on a system where `mkdir` doesn't exist or does something slightly different.
In the particular case of `mkdir`, it is a built-in Perl operator and is part of every Perl installation. There are also many core modules that require only a `use Module` line in your program: they are already installed and need no further action.
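`File::Path`, for instance, is a core module: a single `use` line gives you `make_path`, which creates nested directories the way the shell's `mkdir -p` would, with errors reported back to Perl rather than lost in a child process:

```perl
use strict;
use warnings;
use File::Path qw(make_path);  # core module, no installation needed

# Create intermediate directories too, like mkdir -p; with the
# "error" option, failures are collected instead of raising a fatal error
make_path('top/middle/bottom', { error => \my $errors });
warn "Problems creating path\n" if $errors && @$errors;
```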
I am sure others will come up with more reasons to prefer a Perl operator or module over a shell command. In general, you should keep everything you can within the language. There are only a few cases where you have to run a third-party program, and they usually involve custom software that lets you act on proprietary data formats.