An interpreted language is more or less a large configuration for an executable called the interpreter. That executable (e.g. /usr/bin/python) is the program which actually runs. It then reads the script it shall execute (e.g. /home/alfe/bin/factorial.py) and executes it, in the simplest form line by line.
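As a minimal illustration (the file name factorial.py is taken from the example above, but its contents here are hypothetical), such a script is just plain text that the interpreter reads and executes:

    # factorial.py -- a hypothetical example script; run it with:
    #     /usr/bin/python factorial.py
    # The interpreter reads this file and executes it statement by statement.

    def factorial(n):
        """Return n! computed iteratively."""
        result = 1
        for i in range(2, n + 1):
            result *= i
        return result

    print(factorial(5))  # prints 120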
During that process it can encounter references to other files (other modules, e.g. /usr/python/lib/math.py) and then it will read and interpret those.
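A single import statement in the script is enough to make the interpreter locate, read, and use another module (the print call below is only there to show that names from that module become available):

    import math               # the interpreter locates and loads the math module here
    print(math.sqrt(16.0))    # 4.0 -- uses a function defined in that other module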
Many such languages have mechanisms built in to reduce the overhead of this process by creating byte-code versions of the scripts they interpreted. So there might well be a file /usr/python/lib/math.pyc for instance, which the interpreter put there after first processing and which it can read and interpret faster than the original /usr/python/lib/math.py. But this is not really part of the concept of interpreted languages¹.
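In Python this caching can also be triggered explicitly via the standard py_compile module. Note that modern CPython puts the cached file into a __pycache__ directory rather than directly next to the source, and the example assumes a factorial.py like the one sketched above exists in the current directory:

    import py_compile

    # Compile a script to its cached byte-code form; returns the path of the
    # .pyc file it wrote (e.g. __pycache__/factorial.cpython-312.pyc).
    cached_path = py_compile.compile('factorial.py')
    print(cached_path)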
Sometimes a binary library is part of an interpreted language; depending on the sophistication of the interpreter, it can link that library at runtime and then use it. This is most typical for system modules and code which needs to be highly optimized.
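A sketch of such runtime linking, using Python's ctypes module to load the C math library and call a function from it (the library name libm.so.6 assumes a typical Linux system with glibc):

    import ctypes

    # Load a binary shared library at runtime and call a function from it.
    libm = ctypes.CDLL('libm.so.6')         # name assumes Linux/glibc
    libm.sqrt.argtypes = [ctypes.c_double]  # declare the C signature
    libm.sqrt.restype = ctypes.c_double
    print(libm.sqrt(2.0))                   # 1.4142135623730951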
But in general one can say that no binary machine code gets generated at all, and nothing is linked at compile time. Actually, there is no real compile time, even though one could call that first processing of the input scripts a compile step.
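You can see what that "compile step" produces in Python: byte code for the interpreter's virtual machine, not native machine code. The standard dis module shows it:

    import dis

    def double(x):
        return x * 2

    # Prints interpreter byte-code instructions (LOAD_FAST, ...), not CPU instructions.
    dis.dis(double)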
Footnotes:
¹) The concept of interpreting scripts encompasses neither that "compiling" (pre-translating the source into a faster-to-interpret form) nor the "caching" of this form by storing files like the .pyc files. With respect to your question concerning linking and splitting programs into several files or modules, these aspects of precompiling and caching are just technical details to speed things up. The concept itself is: read one line of the input script and execute it, then read the next line, and so on.