It is recommended not to use import * in Python.
Can anyone share the reasons for that, so that I can avoid doing it next time?
Because it puts a lot of stuff into your namespace (it might shadow some other object from a previous import, and you won't know about it).
Because you don't know exactly what is imported, and you can't easily find out from which module a certain thing was imported (readability).
Because you can't use cool tools like pyflakes to statically detect errors in your code.
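For instance, a sketch of how a star import blinds pyflakes (frobnicate is a made-up name for illustration):
from os.path import *

frobnicate("/tmp")   # typo or missing import? pyflakes can no longer flag this as
                     # definitely undefined; it can only warn the name *may* come
                     # from the star import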
A classic example: numpy.any shadowing the built-in any when someone does from numpy import *, or a "helpful" tool does it for them. – Blastula
import * makes the order of the import statements significant... even for standard library modules that don't normally care about import order. Something as innocent as alphabetizing your import statements could break your script when a former casualty of the import war becomes the sole survivor. (Even if your script works now and never changes, it could suddenly fail sometime later if the imported module introduces a new name that replaces one you were relying on.) – Scuta
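A standard-library sketch of that order sensitivity: the time and datetime modules both export a name called time, so swapping these two lines silently changes what time() means:
from time import *       # time() returns the current epoch seconds
from datetime import *   # ...but datetime also exports a class named time

t = time()               # now constructs datetime.time(0, 0), i.e. midnight,
                         # instead of returning the current timestamp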
According to the Zen of Python:
Explicit is better than implicit.
... can't argue with that, surely?
You don't pass **locals() to functions, do you?
Since Python lacks an "include" statement, and the self parameter is explicit, and scoping rules are quite simple, it's usually very easy to point a finger at a variable and tell where that object comes from -- without reading other modules and without any kind of IDE (which are limited in the way of introspection anyway, by the fact the language is very dynamic).
import * breaks all that.
Also, it has a concrete possibility of hiding bugs.
import os, sys, foo, sqlalchemy, mystuff
from bar import *
Now, if the bar module has any of the "os", "mystuff", etc. attributes, they will override the explicitly imported ones, and possibly point to very different things. Defining __all__ in bar is often wise -- it states what will implicitly be imported -- but it's still hard to trace where objects come from without reading and parsing the bar module and following its imports. A network of import * is the first thing I fix when I take ownership of a project.
Don't misunderstand me: if import * were missing, I would cry to have it. But it has to be used carefully. A good use case is to provide a facade interface over another module.
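For instance, a minimal facade sketch (bar_impl, frobnicate and Widget are hypothetical names):
# bar.py -- a facade over an internal implementation module
from bar_impl import frobnicate, Widget   # hypothetical internal module

__all__ = ["frobnicate", "Widget"]        # "from bar import *" exposes exactly these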
Likewise, the use of conditional import statements, or imports inside function/class namespaces, requires a bit of discipline.
I think in medium-to-big projects, or small ones with several contributors, a minimum of hygiene is needed in terms of static analysis -- running at least pyflakes, or better, a properly configured pylint -- to catch several kinds of bugs before they happen.
Of course, since this is Python, feel free to break rules and to explore -- but be wary of projects that could grow tenfold: if the source code lacks discipline, it will be a problem.
execfile() breaks that, though. Luckily, it's rarely used and gone in 3.x. – Poly
You don't pass **vars() to include globals if the called function is in another file? :P – Wyattwyche
That is because you are polluting the namespace. You will import all the functions and classes into your own namespace, which may clash with the functions you define yourself.
Furthermore, I think using a qualified name is clearer for maintenance: you see on the code line itself where a function comes from, so you can check the docs much more easily.
In module foo:
def myFunc():
    print(1)
In your code:
from foo import *

def doThis():
    myFunc()  # Which myFunc is called? By the time doThis() runs, the
              # myFunc defined below has rebound the name, so it prints 2.

def myFunc():
    print(2)
It is OK to do from ... import * in an interactive session.
What if it's inside a doctest string? Does the import * get interpreted inside a "sandbox" in this case? Thanks. – Superfecundation
Say you have the following code in a module called foo:
import xml.etree.ElementTree as etree
and then in your own module you have:
from lxml import etree
from foo import *
You now have a difficult-to-debug module that looks like it has lxml's etree in it, but really has ElementTree instead.
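A quick way to see it, assuming lxml is installed and foo is the module above:
from lxml import etree
from foo import *   # silently rebinds etree

print(etree)        # shows xml.etree.ElementTree, not lxml.etree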
Understood the valid points people put here. However, I do have one argument that, sometimes, a "star import" may not always be a bad practice: say I want to structure my code so that all constants go into a module called const.py:
If I do import const, then for every constant I have to refer to it as const.SOMETHING, which is probably not the most convenient way.
If I do from const import SOMETHING_A, SOMETHING_B, ..., then obviously it's way too verbose and defeats the purpose of the structuring.
In this case, from const import * may be a better choice.
That's what __all__ exists for, as it allows you to define what from const import * actually imports, from the side of const. As a free bonus, it also helps you prevent accidentally re-exporting things that were imported in const.py because they were needed there! – Accumulate
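A sketch of that pattern (the math import is just an illustrative internal dependency):
# const.py
import math   # needed internally while defining the constants

SOMETHING_A = 42
SOMETHING_B = math.pi / 2

__all__ = ["SOMETHING_A", "SOMETHING_B"]   # star-importers get the constants only,
                                           # not the math module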
From the Python tutorial (http://docs.python.org/tutorial/modules.html):
Note that in general the practice of importing * from a module or package is frowned upon, since it often causes poorly readable code.
These are all good answers. I'm going to add that when teaching new people to code in Python, dealing with import * is very difficult. Even if you or they didn't write the code, it's still a stumbling block.
I teach children (about 8 years old) to program in Python to manipulate Minecraft. I like to give them a helpful coding environment to work with (Atom Editor) and teach REPL-driven development (via bpython). In Atom I find that the hints/completion work just as effectively as bpython. Luckily, unlike some other static analysis tools, Atom is not fooled by import *.
However, let's take this example... In this wrapper they from local_module import * a bunch of modules, including this list of blocks. Let's ignore the risk of namespace collisions. By doing from mcpi.block import *, they make this entire list of obscure block types something you have to go look up to know what is available. If they had instead used from mcpi import block, then you could type walls = block. and an autocomplete list would pop up.
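A sketch of the difference, assuming the mcpi package with its block module and STONE constant:
from mcpi import block   # qualified import: the editor can complete block.<TAB>

walls = block.STONE      # discoverable, and clearly traced back to mcpi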
It is a very BAD practice, for two reasons:
1. Code readability
2. Risk of overriding variables/functions etc.
For point 1: Let's see an example of this:
from module1 import *
from module2 import *
from module3 import *
a = b + c - d
Here, on seeing the code, no one will get any idea regarding which module b, c and d actually belong to.
On the other hand, if you do it like:
# v v will know that these are from module1
from module1 import b, c # way 1
import module2 # way 2
a = b + c - module2.d
# ^ will know it is from module2
It is much cleaner for you, and the new person joining your team will also have a better idea.
For point 2: Let's say both module1 and module2 have a variable b. When I do:
from module1 import *
from module2 import *
print(b)  # will print the value from module2
Here the value from module1 is lost. It will be hard to debug why the code is not working, even though b is declared in module1 and I wrote the code expecting it to use module1.b.
If you have the same variable name in different modules, and you do not want to import the entire module, you may even do:
from module1 import b as mod1b
from module2 import b as mod2b
As suggested in the docs, you should (almost) never use import * in production code.
While importing * from a module is bad, importing * from a package is probably even worse.
By default, from package import * imports whatever names are defined by the package's __init__.py, including any submodules of the package that were loaded by previous import statements.
If a package's __init__.py code defines a list named __all__, it is taken to be the list of submodule names that should be imported when from package import * is encountered.
Now consider this example (assuming there's no __all__ defined in sound/effects/__init__.py):
# anywhere in the code before import *
import sound.effects.echo
import sound.effects.surround
# in your module
from sound.effects import *
The last statement will import the echo and surround modules into the current namespace (possibly overriding previous definitions) because they are defined in the sound.effects package when the import statement is executed.
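Defining __all__ in the package's __init__.py avoids this; a sketch following the tutorial's layout:
# sound/effects/__init__.py
__all__ = ["echo", "surround"]   # "from sound.effects import *" now imports exactly
                                 # these submodules, regardless of earlier imports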
As a test, I created a module test.py with 2 functions A and B, which respectively print "A 1" and "B 1". After importing test.py with:
import test
... I can run the two functions as test.A() and test.B(), and "test" shows up as a module in the namespace, so if I edit test.py I can reload it with:
import importlib
importlib.reload(test)
But if I do the following:
from test import *
there is no reference to "test" in the namespace, so there is no way to reload it after an edit (as far as I can tell), which is a problem in an interactive session. Whereas either of the following:
import test
import test as tt
will add "test" or "tt" (respectively) as module names in the namespace, which will allow re-loading.
If I do:
from test import *
the names "A" and "B" show up in the namespace as functions. If I edit test.py, and repeat the above command, the modified versions of the functions do not get reloaded.
And the following command elicits an error message.
importlib.reload(test) # Error - name 'test' is not defined
If someone knows how to reload a module loaded with "from module import *", please post. Otherwise, this would be another reason to avoid the form:
from module import *
import * doesn't work for me in the first place in Python 2 or 3. – Kinney
It reminds me of using package.module; from C#, although it usually has one or two more safeguards when compiling. – Ermey