Hidden features of Python [closed]
739

Chaining comparison operators:

>>> x = 5
>>> 1 < x < 10
True
>>> 10 < x < 20 
False
>>> x < 10 < x*10 < 100
True
>>> 10 > x <= 9
True
>>> 5 == x > 4
True

In case you're thinking it's doing 1 < x, which comes out as True, and then comparing True < 10, which is also True, then no, that's really not what happens (see the last example). It really translates into 1 < x and x < 10, and x < 10 and 10 < x*10 and x*10 < 100, but with less typing, and each term is evaluated only once.
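The single-evaluation claim can be checked directly with a side-effecting function (a sketch in Python 3 syntax; noisy_x and the calls list are hypothetical helpers, not from the original answer):

```python
# Each operand in a chained comparison is evaluated at most once.
calls = []

def noisy_x():
    calls.append(1)   # record every call
    return 5

print(1 < noisy_x() < 10)  # True
print(len(calls))          # 1 -- noisy_x was called exactly once
```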

Hemihydrate answered 19/9, 2008 at 11:50 Comment(12)
Isn't 10 > x <= 9 the same as x <= 9 (ignoring overloaded operators, that is)Glorification
Of course. It was just an example of mixing different operators.Hemihydrate
This applies to other comparison operators as well, which is why people are sometimes surprised why code like (5 in [5] is True) is False (but it's unpythonic to explicitly test against booleans like that to begin with).Snider
Lisp does not have anything similar?Gorrono
Not that I know of. Perl 6 does have this feature, though :)Transmigrate
Good but watch out for equal precedence, like 'in' and '=='. 'A in B == C in D' means '(A in B) and (B == C) and (C in D)' which might be unexpected.Dumortierite
Azafe: Lisp's comparisons naturally work this way. It's not a special case because there's no other (reasonable) way to interpret (< 1 x 10). You can even apply them to single arguments, like (= 10): cs.cmu.edu/Groups/AI/html/hyperspec/HyperSpec/Body/…Speechmaking
@Snider a less confusing example might be "a == b in c" which is equivalent to "a == b and b in c". See docs.python.org/reference/expressions.html#notinPic
@Charles Merriam for me its not unexpected, just logical. Although its ugly to use A in B == C in D.Inhuman
This is also great for tests. You can do a == b == c, and it will return True only if all three items are equal.Chaunceychaunt
is not and not in are similarly surprisingly good too. Apparently is not is 1 binary operator, not a binary and then a unary. not in is the same too. This makes code like 'foo' is not 'bar' so much more readable.County
Ken: I like Python's version better than Lisp's, since it allows for mixing different kinds of comparisons, such as a <= b < c. Mathematica, which is more or a dialect of Lisp, does allow you to use different comparisons --it uses what would in Lisp syntax be (inequality a '<= b '< c).Coprophilia
511

Get the Python regex parse tree to debug your regex.

Regular expressions are a great feature of Python, but debugging them can be a pain, and it's all too easy to get a regex wrong.

Fortunately, Python can print the regex parse tree, by passing the undocumented, experimental, hidden flag re.DEBUG (actually, 128) to re.compile.

>>> re.compile("^\[font(?:=(?P<size>[-+][0-9]{1,2}))?\](.*?)[/font]",
    re.DEBUG)
at at_beginning
literal 91
literal 102
literal 111
literal 110
literal 116
max_repeat 0 1
  subpattern None
    literal 61
    subpattern 1
      in
        literal 45
        literal 43
      max_repeat 1 2
        in
          range (48, 57)
literal 93
subpattern 2
  min_repeat 0 65535
    any None
in
  literal 47
  literal 102
  literal 111
  literal 110
  literal 116

Once you understand the syntax, you can spot your errors. There we can see that I forgot to escape the [] in [/font].

Of course you can combine it with whatever flags you want, like commented regexes:

>>> re.compile("""
 ^              # start of a line
 \[font         # the font tag
 (?:=(?P<size>  # optional [font=+size]
 [-+][0-9]{1,2} # size specification
 ))?
 \]             # end of tag
 (.*?)          # text between the tags
 \[/font\]      # end of the tag
 """, re.DEBUG|re.VERBOSE|re.DOTALL)
Skiagraph answered 19/9, 2008 at 11:50 Comment(3)
Except parsing HTML using regular expressions is slow and painful. Even the built-in 'html' parser module doesn't use regexes to get the work done. And if the html module doesn't please you, there are plenty of XML/HTML parser modules that do the job without having to reinvent the wheel.Skiagraph
A link to documentation on the output syntax would be great.Subdue
This should be an official part of Python, not experimental... RegEx is always tricky and being able to trace what's happening is really helpful.Esotropia
459

enumerate

Wrap an iterable with enumerate and it will yield the item along with its index.

For example:


>>> a = ['a', 'b', 'c', 'd', 'e']
>>> for index, item in enumerate(a): print index, item
...
0 a
1 b
2 c
3 d
4 e
>>>
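As a comment below notes, enumerate also accepts a start argument (added in Python 2.6; shown here in Python 3 syntax):

```python
# enumerate can begin counting from an arbitrary index, not just 0.
a = ['a', 'b', 'c', 'd', 'e']
for index, item in enumerate(a, start=1):
    print(index, item)   # 1 a, 2 b, ... 5 e
```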


Zsazsa answered 19/9, 2008 at 11:50 Comment(13)
i think it's been deprecated in python3Bakst
And all this time I was coding this way: for i in range(len(a)): ... and then using a[i] to get the current item.Numbles
@Berry Tsakala: To my knowledge, it has not been deprecated.Lilialiliaceous
shorter than using zip and count for index, item in zip(itertools.count(), a): print(index,item)Binford
Great feature, +1. @Draemon: this is actually covered in the Python tutorial that comes installed with Python (there's a section on various looping constructs), so I'm always surprised that this is so little known.Projectionist
The nice thing about this is when you're iterating through more than one loop simultaneouslyRight
@Berry Tsakala: definitely not deprecated.Chintz
Sorry my ignorance, but isn't it enough to just do: a = ["a","b","c"] >>> for x in enumerate(a): ... print x why do you do for index, item in enumerate(a): print index, itemUltimate
I always hacked idx, elem in itertools.izip(itertools.count(), iterable):...Tun
@Tufa: You might not want to use the index and item in the same statement. In this simple example your code is equivalent, but in a more sophisticated scenario it won't be able to do all that for inx, itm in enumerate(a): can do.Elspeth
for i in range(len(a)) is still a lot better than for (int i=0;i<a.size();i++) ....Interesting
enumerate can start from arbitrary index, not necessary 0. Example: 'for i, item in enumerate(list, start=1): print i, item' will start enumeration from 1, not 0.Bramble
Not deprecated in Py3K docs.python.org/py3k/library/…Ionian
418

Creating generator objects

If you write

x=(n for n in foo if bar(n))

you get a generator object and assign it to x. That means you can do

for n in x:

The advantage of this is that you don't need intermediate storage, which you would need if you did

x = [n for n in foo if bar(n)]

In some cases this can lead to a significant speed-up.

You can chain several for clauses (and if filters) in a single generator expression, basically replicating nested for loops:

>>> n = ((a,b) for a in range(0,2) for b in range(4,6))
>>> for i in n:
...   print i 

(0, 4)
(0, 5)
(1, 4)
(1, 5)
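A rough way to see the storage difference (Python 3 syntax; absolute sizes vary by interpreter, so this only compares relative size):

```python
import sys

squares_list = [n * n for n in range(1000)]  # all 1000 values stored up front
squares_gen = (n * n for n in range(1000))   # values produced on demand

# The generator object is a small fixed-size wrapper; the list holds everything.
print(sys.getsizeof(squares_list) > sys.getsizeof(squares_gen))  # True
print(sum(squares_gen) == sum(squares_list))                     # True
```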
Allain answered 19/9, 2008 at 11:50 Comment(8)
You could also use a nested list comprehension for that, yes?Bots
Of particular note is the memory overhead savings. Values are computed on-demand, so you never have the entire result of the list comprehension in memory. This is particularly desirable if you later iterate over only part of the list comprehension.Noll
I use ifilter for this kind of thing: docs.python.org/library/itertools.html#itertools.ifilterChelsae
This is not particularly "hidden" imo, but also worth noting is the fact that you could not rewind a generator object, whereas you can reiterate over a list any number of times.Guglielma
ditto susmits. Although these are extremely cool, it's a documented feature of Python docs.python.org/tutorial/classes.html Using callbacks with your generators, also documented, adds to the coolness of generators. python.org/dev/peps/pep-0255Gwendolin
The "no rewind" feature of generators can cause some confusion. Specifically, if you print a generator's contents for debugging, then use it later to process the data, it doesn't work. The data is produced, consumed by print(), then is not available for the normal processing. This doesn't apply to list comprehensions, since they're completely stored in memory.Tiebout
Similar (dup?) answer: https://mcmap.net/q/16685/-hidden-features-of-python-closed/… Note, however, that the answer I linked here mentions a REALLY GOOD presentation about the power of generators. You really should check it out.Monopolize
Here's a good article in using generator in solving real-world problems dabeaz.com/generators/Generators.pdfNanete
352

iter() can take a callable argument

For instance:

def seek_next_line(f):
    for c in iter(lambda: f.read(1),'\n'):
        pass

The iter(callable, until_value) function repeatedly calls callable and yields each result until until_value is returned; the sentinel value itself is never yielded.
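A file-free sketch of the same mechanism (Python 3 syntax; the deque and the sentinel value 0 are just for illustration):

```python
from collections import deque

queue = deque([1, 2, 0, 3])
seen = list(iter(queue.popleft, 0))  # calls popleft() until it returns 0
print(seen)          # [1, 2] -- the sentinel 0 is consumed but not yielded
print(list(queue))   # [3]    -- iteration stopped before reaching 3
```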

Fewell answered 19/9, 2008 at 11:50 Comment(2)
As a newbie to python, can you please explain why the lambda keyword is necessary here?Terpsichorean
@Terpsichorean without the lambda, f.read(1) would be evaluated (returning a string) before being passed to the iter function. Instead, the lambda creates an anonymous function and passes that to iter.Ullage
339

Be careful with mutable default arguments

>>> def foo(x=[]):
...     x.append(1)
...     print x
... 
>>> foo()
[1]
>>> foo()
[1, 1]
>>> foo()
[1, 1, 1]

Instead, you should use a sentinel value denoting "not given", and replace it with the mutable default inside the function:

>>> def foo(x=None):
...     if x is None:
...         x = []
...     x.append(1)
...     print x
>>> foo()
[1]
>>> foo()
[1]
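The reason: the default list is created once, when def executes, and stored on the function object itself (a sketch using the Python 3 attribute name __defaults__; in Python 2 it was func_defaults, as one comment notes):

```python
def append_to(item, bucket=[]):   # the same list object is reused across calls
    bucket.append(item)
    return bucket

append_to(1)
append_to(2)
print(append_to.__defaults__)   # ([1, 2],) -- one shared list on the function
```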
Felisha answered 19/9, 2008 at 11:50 Comment(12)
That's definitely one of the more nasty hidden features. I've run into it from time to time.Noenoel
I found this a lot easier to understand when I learned that the default arguments live in a tuple that's an attribute of the function, e.g. foo.func_defaults. Which, being a tuple, is immutable.Grogshop
Could you explain how it happens in detail?Salter
@grayger: As the def statement is executed its arguments are evaluated by the interpreter. This creates (or rebinds) a name to a code object (the suite of the function). However, the default arguments are instantiated as objects at the time of definition. This is true of any time of defaulted object, but only significant (exposing visible semantics) when the object is mutable. There's no way of re-binding that default argument name in the function's closure although it can obviously be over-ridden for any call or the whole function can be re-defined).Humpage
@Robert of course the arguments tuple might be immutable, but the objects it point to are not necessarily immutable.Pic
One quick hack to make your initialization a little shorter: x = x or []. You can use that instead of the 2 line if statement.Roscoe
Default values also become nasty if you use more than one of them. For example - say you wrote a function like: <function> def f(a=[], b=[], c=[]): a.append(3) </ function>. You will have inadvertently changed the values of a, b and c without having touched them. This is because similar default values seem to point to the same thing in memory. Nasty bugs ariseMacario
this feature / wart or what you'd call it is one of the most important things to understand when you start learning python. it directly connects you to understanding what is done when in a program, and without that knowledge, any code beyond a pretty low threshold of complexity cannot be written.Revivalism
This seems like a bug in the compiler, right?Wherewith
Just a comment that pylint complains vigorously about usage like this, as it should.Listerism
@davemankoff: I think that's a bad habit to get into, because sooner or later you'll reject a valid falsey value by mistake.Zeitler
This was literally an interview question for my current job. :) It is probably the classic Python gotcha.Annuitant
316

Sending values into generator functions. For example, given this function:

def mygen():
    """Yield 5 until something else is passed back via send()"""
    a = 5
    while True:
        f = (yield a) #yield a and possibly get f in return
        if f is not None: 
            a = f  #store the new value

You can:

>>> g = mygen()
>>> g.next()
5
>>> g.next()
5
>>> g.send(7)  #we send this back to the generator
7
>>> g.next() #now it will yield 7 until we send something else
7
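The transcript above uses Python 2's g.next(); in Python 3 the equivalent is the built-in next(), while send() is unchanged. A runnable version:

```python
def mygen():
    """Yield 5 until something else is passed back via send()"""
    a = 5
    while True:
        f = yield a          # yield a and possibly receive f
        if f is not None:
            a = f            # store the new value

g = mygen()
print(next(g))    # 5
print(g.send(7))  # 7 -- send() resumes the generator and returns the next yield
print(next(g))    # 7
```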
Montague answered 19/9, 2008 at 11:50 Comment(7)
Agreed. Let's treat this as a nasty example of a hidden feature of Python :)Caponize
In other languages, I believe this magical device is called a "variable".Hinkley
coroutines should be coroutines and genenerator should be themselves too, without mixing. Mega-great link and talk and examples about this here: dabeaz.com/coroutinesIndue
@finnw: the example implements something that's similar to a variable. However, the feature could be used in many other ways ... unlike a variable. It should also be obvious that similar semantics can be implemented using objects (a class implemting Python's call method, in particular).Humpage
This is too trivial an example for people who've never seen (and probably won't understand) co-routines. The example that implements the running average without risk of sum variable overflow is a good one.Improper
More on the yield topic here: stackoverflow.com/questions/231767/…Zoila
@Hinkley and his upvoters, I think you've misunderstood the point of this example. The important bit is not storing a value in 'a', it's that 'mygen' is acting like a function with multiple entry points, and with the ability to suspend execution halfway through, and return a value to the caller, but then resume execution later, from that same point, with all local variables intact. You can read more about them here en.m.wikipedia.org/wiki/CoroutineChinchin
312

If you don't like using whitespace to denote scopes, you can use the C-style {} by issuing:

from __future__ import braces
Larine answered 19/9, 2008 at 11:50 Comment(21)
That's evil. :)Felisha
>>> from __future__ import braces File "<stdin>", line 1 SyntaxError: not a chance :PRheotropism
Wait, isn't the future package future additions to the language? So are they planning to add braces at some point?Leandroleaning
Dynamic whitespace is half of python's goodness. That's... twisted.Gabriellagabrielle
that's blasphemy!Tartuffe
I think that we may have a syntactical mistake here, shouldn't that be "from __past__ import braces"?Lamellicorn
from __cruft__ import bracesEntitle
I admit that's funny, but inversely what about the blind? I remember reading a while back of an individual who was blind and frustrated that he/she couldn't use Python due to the lack of brackets for statements.Gunner
I can understand the use of braces for minification of code :)Peirsen
Totally breaks the Python idiomRemoved
@David: How are braces better for the blind? In the best-case scenario (Well-indented code, which Python enforces), braces would only add a minuscule amount of clarity. A block of text with whitespace before would be in my opinion much easier to notice than the presence of a small typographical character. The legibility of braces depends on which version of the OTBS that person believes in. The inline braces I prefer would be horrible to read without proper vision.Kovno
@Alex: How does the text reader say the indentation level? You would need a Python specific text reader to tell you "for <stuff> colon newline indent pass newline <next statement>". Now add some indents: "indent indent indent for <stuff> colon newline indent indent indent indent pass newline indent indent indent <next statement>"Somnambulism
jmucchiello: Yes you need something python-specific. The screen reader should speak the tokens that the python interpreter uses, "intent in", "indent out".Indue
@David, @jmucchiello: there is a script that adds braces to every block in a comment (# }), and in fact I've read of blind people that uses it to allow them to write Python :)Ruyle
@David, @jmucchiello: Ah, you meant blind-blind, not just "horribly bad eyesight"-blind.Kovno
I know a few devs that are learning Python (but know a c style language) who would love this. It's just because they don't know any better ;)Gwendolin
Blind programmer can use this syntax: python.org/doc/humor/…Inhuman
A strange feature indeed. Props for sharing the first thing I didn't know as I read through this thread.Brouwer
I had my braces removed when I grew up!Devilkin
There is pybraces, an encoding you can use for your python source code files in order to really enable braces. ;)Carthusian
That's one hell of an easter egg. I love oss communities...Brassware
305

The step argument in slice operators. For example:

a = [1,2,3,4,5]
>>> a[::2]  # iterate over the whole list in 2-increments
[1,3,5]

The special case x[::-1] is a useful idiom for 'x reversed'.

>>> a[::-1]
[5,4,3,2,1]
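The step works on any sequence type, not just lists (Python 3 syntax):

```python
print("hello"[::-1])          # 'olleh' -- reversed string
print((1, 2, 3, 4, 5)[::2])   # (1, 3, 5) -- tuples too
print(list(range(10))[1::3])  # [1, 4, 7] -- start, stop and step combine
```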
Montague answered 19/9, 2008 at 11:50 Comment(13)
much clearer, in my opinion, is the reversed() function. >>> list(reversed(range(4))) [3, 2, 1, 0]Conde
then how to write "this i a string"[::-1] in a better way? reversed doesnt seem to helpBakst
"".join(reversed("this i a string"))Sukkoth
The problem with reversed() is that it returns an iterator, so if you want to preserve the type of the reversed sequence (tuple, string, list, unicode, user types...), you need an additional step to convert it back.Caponize
def reverse_string(string): return string[::-1]Pouf
@pi I think if one knows enough to define reverse_string as you have then one can leave the [::-1] in your code and be comfortable with its meaning and the feeling it is appropriate.Cissoid
Is there a speed difference between [::-1] and reversed()?Baculiform
-1, because it is not hidden and you learn it early enought, but its an useful featureRondelle
ooh, noticed that [1,2,3,4,5][::-2] also works as expected, which is quite coolSkink
You can make a cool palindrome finder with this!Ultimate
@Berry list(reversed('blah blah'))Spoilfive
@Austin: yes a huge difference with strings: pastebin.com/ZV6cHYhPAnnuitant
@Trufa: yeah very easy to find a palindrome: if someseq == (someseq[::-1]) then it's a palindrome, and this would work with any sequence type (strings, lists, etc).Annuitant
288

The for...else syntax (see http://docs.python.org/ref/for.html)

for i in foo:
    if i == 0:
        break
else:
    print("i was never 0")

The "else" block is normally executed at the end of the for loop, unless break is executed.

The above code could be emulated as follows:

found = False
for i in foo:
    if i == 0:
        found = True
        break
if not found: 
    print("i was never 0")
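Wrapping the pattern in a function makes the behaviour easy to check (Python 3 syntax; contains_zero is a hypothetical helper, not from the original answer):

```python
def contains_zero(seq):
    for i in seq:
        if i == 0:
            break
    else:               # runs only if the loop finished without break
        return False
    return True

print(contains_zero([3, 1, 4]))  # False
print(contains_zero([3, 0, 4]))  # True
```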
Lightless answered 19/9, 2008 at 11:50 Comment(22)
I think the for/else syntax is awkward. It "feels" as if the else clause should be executed if the body of the loop is never executed.Stratus
It becomes less awkward if we think of it as for/if/else, with the else belonging to the if. And it's so useful an idiom that I wonder other language designers didn't think of it!Zoospore
ah. Never saw that one! But I must say it is a bit of a misnomer. Who would expect the else block to execute only if break never does? I agree with codeape: It looks like else is entered for empty foos.Blastocyst
I've added an equivalent code that is not using else.Colony
I find this much less useful than if the else clause executed if the for loop didn't. I've wanted that so many times, but I've never found a case I wanted to use this.Halberd
Anyone remember the FOR var … NEXT var … END FOR var of Sinclair QL's SuperBasic? Everything between NEXT and END FOR would execute at the end of the loop, unless an EXIT FOR was issued. That syntax was cleaner :)Glorification
seems like the keyword should be finally, not elsePeirsen
Except finally is already used in a way where that suite is always executed.Larianna
This is really convenient, and I use it, but it needs an explaining comment each time.Indue
Should definately not be 'else'. Maybe 'then' or something, and then 'else' for when the loop was never executed.Muldrow
I used this on a programming assignment for a class and lost points because the grader had never seen it before... totally got those back.Henbit
Hey, people forgot to mention that this idiom also works for while loops.Monopolize
I've always thought a for...then...else construct would be better, where then is only executed if the for is successful, else when the for cannot be entered (eg: for i in []; pass; else; print "empty list". But then I'm a novice. :)Entitle
Does this work ONLY if there is a break statement in the for loop or are there any other circumstances where this trick works this way?Macario
@inspectorG4dget: it works fine without a break... but serves no purpose if there's no break. (The code in the else might as well just be outdented one level)Exacerbate
@jkerian: Many thanks. I observed that behavior, but was wondering more along the lines of "would this work the same way if return was used instead of break?"Macario
i shun this feature. every time i want to use it i have to read up on it, and then i still find it hard to get right.Revivalism
Yeah, using this syntax got me screamed at by a couple of PHP and C programmers. Go figure. :-)If
Never use for-else. It does not do what the next programmer to see your code thinks it does.Celestyna
I used to be confused by this behavior as well until I thought of it in terms of try.. except.. else.. Python's for.. else.. behavior is consistent with how try blocks are executed. If the contents of try or for succeed, jump to else.Rennes
Control flow altering statements (like break) are generally poor choices to begin with, combine this with an unusual use of a keyword (in this case "else") and you end up with code that is even harder to read, especially for novices. I'd definitely shy away from this one.Annuitant
if any(i == 0 for i in foo): ... Would be my choice of phrasing for this kind of code. Maybe it's my Haskell influence.Candleberry
288

Decorators

Decorators allow you to wrap a function or method in another function that can add functionality, modify arguments or results, and so on. You write a decorator on the line above the function definition, beginning with an "at" sign (@).

The example shows a print_args decorator that prints the decorated function's arguments before calling it:

>>> def print_args(function):
...     def wrapper(*args, **kwargs):
...         print 'Arguments:', args, kwargs
...         return function(*args, **kwargs)
...     return wrapper
...
>>> @print_args
... def write(text):
...     print text

>>> write('foo')
Arguments: ('foo',) {}
foo
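As one comment suggests, it's worth preserving the wrapped function's metadata; functools.wraps from the standard library does this (a sketch in Python 3 syntax):

```python
import functools

def print_args(function):
    @functools.wraps(function)          # keep the original name and docstring
    def wrapper(*args, **kwargs):
        print('Arguments:', args, kwargs)
        return function(*args, **kwargs)
    return wrapper

@print_args
def write(text):
    """Write the given text."""
    print(text)

write('foo')
print(write.__name__)  # 'write', not 'wrapper'
```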
Grenoble answered 19/9, 2008 at 11:50 Comment(13)
When defining decorators, I'd recommend decorating the decorator with @decorator. It creates a decorator that preserves a functions signature when doing introspection on it. More info here: phyast.pitt.edu/~micheles/python/documentation.htmlAniseikonia
How is this a hidden feature?Intensive
Well, it's not present in most simple Python tutorials, and I stumbled upon it a long time after I started using Python. That is what I would call a hidden feature, just about the same as other top posts here.Subversion
vetler, the questions asks for "lesser-known but useful features of the Python programming language." How do you measure 'lesser-known but useful features'? I mean how are any of these responses hidden features?Myrlemyrlene
@vetler Most of the thing here are hardly "hidden".Disturbance
Hidden? this is a documented feature python.org/dev/peps/pep-0318Gwendolin
If the standard is whether or not a feature is documented, then this question should be closed.Landward
I thought we were supposed to list hidden features of python not the awesome features of python. ;-)Basrelief
why would this be useful except in the very rare situations? Why not just redefine the function and add optional parameters?Robins
@Dexter: Because that decorator may be universal -- it can be attached to any function for a short moment, e.g. when you need to debug it, and then very easily removed. Besides, there are many uses of decorators other than debugging.Subversion
Decorating a decorator with the decorator decorator? We must go deeper.Fry
As for useful (arguable), some more common ones: @ property, @ classmethod, @ staticmethod, @ coroutine, @ _o (monocle)Await
Decorators are extremely handy, but they can be a PITA to write. There's so many variations -- class based vs not class based, decorators which can decorate methods vs functions (or both), adding descriptors, decorators which take arguments, etc. So while the simple example above may not be a "hidden" feature of Python, I'd say consider it a starting point for learning about a rather beefy topic in the language, and should be in the list.Annuitant
258

From 2.5 onwards dicts have a special method __missing__ that is invoked for missing items:

>>> class MyDict(dict):
...  def __missing__(self, key):
...   self[key] = rv = []
...   return rv
... 
>>> m = MyDict()
>>> m["foo"].append(1)
>>> m["foo"].append(2)
>>> dict(m)
{'foo': [1, 2]}

There is also a dict subclass in collections called defaultdict that does pretty much the same, but calls a factory function with no arguments for missing items:

>>> from collections import defaultdict
>>> m = defaultdict(list)
>>> m["foo"].append(1)
>>> m["foo"].append(2)
>>> dict(m)
{'foo': [1, 2]}

I recommend converting such dicts to regular dicts before passing them to functions that don't expect such subclasses. A lot of code uses d[a_key] and catches KeyError to check whether an item exists; with these subclasses, that lookup would silently add a new item to the dict.
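That pitfall can be demonstrated directly (Python 3 syntax): a bare lookup on a defaultdict inserts the missing key as a side effect.

```python
from collections import defaultdict

m = defaultdict(list)
print('missing' in m)  # False
m['missing']           # a bare lookup silently inserts an empty list
print('missing' in m)  # True -- the dict has grown as a side effect
```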

Inulin answered 19/9, 2008 at 11:50 Comment(6)
This is where I put fork bombs.Alverta
I prefer using setdefault. m={} ; m.setdefault('foo',1)Salter
@Salter meant this m={}; m.setdefault('foo', []).append(1).Menstruate
There are however cases where passing the defaultdict is very handy. The function may for example iter over the value and it works for undefined keys without extra code, as the default is an empty list.Elegance
defaultdict is better in some circumstances than setdefault, since it doesn't create the default object unless the key is missing. setdefault creates it whether it's missing or not. If your default object is expensive to create this can be a performance hit - I got a decent speedup out of one program simply by changing all setdefault calls.Fevre
defaultdict is also more powerful than the setdefault method in other cases. For example, for a counter—dd = collections.defaultdict(int) ... dd[k] += 1 vs d.setdefault(k, 0) += 1.Bagdad
247

In-place value swapping

>>> a = 10
>>> b = 5
>>> a, b
(10, 5)

>>> a, b = b, a
>>> a, b
(5, 10)

The right-hand side of the assignment is an expression that creates a new tuple. The left-hand side of the assignment immediately unpacks that (unreferenced) tuple to the names a and b.

After the assignment, the new tuple is unreferenced and marked for garbage collection, and the values bound to a and b have been swapped.

As noted in the Python tutorial section on data structures,

Note that multiple assignment is really just a combination of tuple packing and sequence unpacking.
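The same packing/unpacking rotates any number of names in one statement (Python 3 syntax):

```python
a, b, c = 1, 2, 3
a, b, c = c, a, b   # the whole right-hand tuple is built before unpacking
print(a, b, c)      # 3 1 2
```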

Rhettrhetta answered 19/9, 2008 at 11:50 Comment(8)
Does this use more real memory than the traditional way? I would guess do since you are creating a tuple, instead of just one swap variablePassword
It doesn't use more memory. It uses less.. I just wrote it both ways, and de-compiled the bytecode.. the compiler optimizes, as you'd hope it would. dis results showed it's setting up the vars, and then ROT_TWOing. ROT_TWO means 'swap the two top-most stack vars'... Pretty snazzy, actually.Diggs
You also inadvertently point out another nice feature of Python, which is that you can implicitly make a tuple of items just by separating them by commas.Chaunceychaunt
I would prefer (a, b) = (b, a). I don't think it is necessarily clear whether , or = has higher precedence.Antilepton
Dana the Sane: assignment in Python is a statement, not an expression, so that expression would be invalid if = had higher priority (i.e. it was interpreted as a, (b = b), a).Greggs
royal: it did actually create tuples in older versions of Python (I think pre-2.4).Greggs
This is the least hidden feature I've read here. Nice, but explicitly described in every Python tutorial.Cousin
I love this feature, but we have to be careful about the semantics of the objects being swapped. I got bitten when doing foo[x:y], bar[x:y] = bar[x:y], foo[x:y] with foo and bar being numpy arrays, because slicing numpy arrays creates views, not copies of the data!Misguidance
235

Readable regular expressions

In Python you can split a regular expression over multiple lines, name your matches and insert comments.

Example verbose syntax (from Dive into Python):

>>> pattern = """
... ^                   # beginning of string
... M{0,4}              # thousands - 0 to 4 M's
... (CM|CD|D?C{0,3})    # hundreds - 900 (CM), 400 (CD), 0-300 (0 to 3 C's),
...                     #            or 500-800 (D, followed by 0 to 3 C's)
... (XC|XL|L?X{0,3})    # tens - 90 (XC), 40 (XL), 0-30 (0 to 3 X's),
...                     #        or 50-80 (L, followed by 0 to 3 X's)
... (IX|IV|V?I{0,3})    # ones - 9 (IX), 4 (IV), 0-3 (0 to 3 I's),
...                     #        or 5-8 (V, followed by 0 to 3 I's)
... $                   # end of string
... """
>>> re.search(pattern, 'M', re.VERBOSE)

Example naming matches (from Regular Expression HOWTO)

>>> p = re.compile(r'(?P<word>\b\w+\b)')
>>> m = p.search( '(((( Lots of punctuation )))' )
>>> m.group('word')
'Lots'

You can also verbosely write a regex without using re.VERBOSE thanks to string literal concatenation.

>>> pattern = (
...     "^"                 # beginning of string
...     "M{0,4}"            # thousands - 0 to 4 M's
...     "(CM|CD|D?C{0,3})"  # hundreds - 900 (CM), 400 (CD), 0-300 (0 to 3 C's),
...                         #            or 500-800 (D, followed by 0 to 3 C's)
...     "(XC|XL|L?X{0,3})"  # tens - 90 (XC), 40 (XL), 0-30 (0 to 3 X's),
...                         #        or 50-80 (L, followed by 0 to 3 X's)
...     "(IX|IV|V?I{0,3})"  # ones - 9 (IX), 4 (IV), 0-3 (0 to 3 I's),
...                         #        or 5-8 (V, followed by 0 to 3 I's)
...     "$"                 # end of string
... )
>>> print pattern
^M{0,4}(CM|CD|D?C{0,3})(XC|XL|L?X{0,3})(IX|IV|V?I{0,3})$
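A quick check that the concatenated pattern behaves as expected (Python 3 syntax; 'MCMXC' is 1990 in Roman numerals):

```python
import re

pattern = (
    "^"                 # beginning of string
    "M{0,4}"            # thousands
    "(CM|CD|D?C{0,3})"  # hundreds
    "(XC|XL|L?X{0,3})"  # tens
    "(IX|IV|V?I{0,3})"  # ones
    "$"                 # end of string
)
m = re.search(pattern, 'MCMXC')
print(m.groups())  # ('CM', 'XC', '')
```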
Darlenadarlene answered 19/9, 2008 at 11:50 Comment(9)
I don't know if I'd really consider that a Python feature, most RE engines have a verbose option.Landsknecht
Yes, but because you can't do it in grep or in most editors, a lot of people don't know it's there. The fact that other languages have an equivalent feature doesn't make it not a useful and little known feature of pythonFurnace
In a large project with lots of optimized regular expressions (read: optimized for machine but not human beings), I bit the bullet and converted all of them to verbose syntax. Now, introducing new developers to projects is much easier. From now on we enforce verbose REs on every project.Tartuffe
I'd rather just say: hundreds = "(CM|CD|D?C{0,3})" # 900 (CM), 400 (CD), etc. The language already has a way to give things names, a way to add comments, and a way to combine strings. Why use special library syntax here for things the language already does perfectly well? It seems to go directly against Perlis' Epigram 9.Speechmaking
@Ken: a regex may not always be directly in the source, it could be read from settings or a config file. Allowing comments or just additional whitespace (for readability) can be a great help.Larianna
If you're writing a Python program and your config file isn't Python, then (Yegge would say and I'd agree that) "you're talking out of both sides of your mouth" re OO: sites.google.com/site/steveyegge2/the-emacs-problemSpeechmaking
Nice! With the string literal concatenation, the comments are parsed as actual comments.Chaunceychaunt
I start my verbose patterns with (?x) # Use verbose mode, which feels more self-documenting than using re.VERBOSE at the compile step. These must be the very first characters in the pattern - no leading whitespace. Also, when using a verbose pattern, remember to us \s or [ ] to signify spaces (depending on if you want to capture all whitespace or just spaces). It can be easy to forget when converting from standard to verbose patterns.Unionism
+1 for string literal concatenation, but -1 for Python even having the re.VERBOSE flag, which I think leads to terrible-to-read code.Imitative
C
222

Function argument unpacking

You can unpack a list or a dictionary as function arguments using * and **.

For example:

def draw_point(x, y):
    # do some magic
    pass

point_foo = (3, 4)
point_bar = {'y': 3, 'x': 2}

draw_point(*point_foo)
draw_point(**point_bar)

Very useful shortcut since lists, tuples and dicts are widely used as containers.
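As a runnable sketch of the example above (the return value is made up purely for demonstration):

```python
def draw_point(x, y):
    # Hypothetical body so the effect of unpacking is visible
    return "point at (%d, %d)" % (x, y)

point_foo = (3, 4)
point_bar = {'y': 3, 'x': 2}

print(draw_point(*point_foo))   # point at (3, 4)
print(draw_point(**point_bar))  # point at (2, 3)
```

Note that * unpacks positionally, so the tuple order must match the parameter order, while ** matches by keyword, so dict order doesn't matter.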

Calley answered 19/9, 2008 at 11:50 Comment(13)
Use this all the time, love it.Canorous
* is also known as the splat operatorDigiacomo
I like this feature, but pylint doesn't sadly.Roeser
pylint's advice is not a law. The other way, apply(callable, arg_seq, arg_map), is deprecated since 2.3.Squinteyed
pylint's advice may not be law, but it's damn good advice. Debugging code that over-indulges in stuff like this is pure hell. As the original poster notes, this is a useful shortcut.Nickelson
I saw this being used in code once and wondered what it did. Unfortunately it's hard to google for "Python **"Locally
It's called the splat operator. So you can google for "python splat", but it's unlikely anybody would know the name if he doesn't know the feature :-pDamnify
@andrew: Pylint tends to complain for a lot of classic idioms like try/except ImportErrorDamnify
@e-satis: that's because a lot of those "classic idioms" are poor practice. I agree that Pylint can be overly "nitpicky", but the vast majority of the time unless you have a clear reason to not adhere to its suggestions it's best to comply (and for the cases where you have a good reason, you can always do a pylint-disable to suppress the warning in the specific case)Annuitant
@Adam: don't get me wrong, I actually have pylint run everytime my document is saved automatically. but Try/ except error is really useful. Dictionary comprehensions as well, and pylint don't understand them. Pylint typically complains a lot in unittest as well were you set variables without using them, because the test actually require to. It's important not to get psycho with pylint alerts, or you will just stop coding. And using * or ** is actually clean code, not dirty one.Damnify
Weird, I never have Pylint complain about dict comprehensions & I use those all the time. Most of the unit test warnings are due to the fact that the unittest module isn't PEP-8 compliant (ex: forced to name setup method "setUp" instead of "set_up"). But yeah, I agree Pylint can be (shall we say) overzealous at times. What I do is everytime I put in a disable-pylint directive I follow it with a comment justifying its use. I find this works well & there's been a few times where my team has challenged me & in the end I discovered a better way to do something.Annuitant
If pylint complains about dict comprehensions, then it is running under python 2.6 (the version built in to vim.) To fix this, run pylint using 2.7 (which for me on OSX meant I had to compile macvim myself tartley.com/?p=1355 but on other platforms I think vim binaries with 2.7 support are out there. You need 2.7 installed for this to work, macvim uses it.Chinchin
Yeah, the pylint warning about "** magic" is dumb and can be turned off globally.Chinchin
A
205

ROT13 is a valid encoding for source code, when you use the right coding declaration at the top of the code file:

#!/usr/bin/env python
# -*- coding: rot13 -*-

cevag "Uryyb fgnpxbiresybj!".rapbqr("rot13")
Aerify answered 19/9, 2008 at 11:50 Comment(9)
Great! Notice how byte strings are taken literally, but unicode strings are decoded: try cevag h"Uryyb fgnpxbiresybj!"Indue
unfortunately it is removed from py3kProtuberancy
This is good for bypassing antivirus.Moa
That has nothing to do with the encoding, it is just Python written in Welsh. :-PBeckmann
Ph'nglui mglw'nafh Cthulhu R'lyeh wgah'nagl fhtagn!Acrospire
see? you can write unintelligible code in any languages, even in pythonHynes
Uryyb fgnpxbiresybj! -> Hello stackoverflow!Deuno
@Manuel Ferreria : sry, but i couldn't figure what u said... is it ROT13 ??Caporal
@Caporal I too was flumoxxed, until I remembered about google, and found en.wiktionary.org/wiki/…Darkle
N
183

Creating new types in a fully dynamic manner

>>> NewType = type("NewType", (object,), {"x": "hello"})
>>> n = NewType()
>>> n.x
'hello'

which is exactly the same as

>>> class NewType(object):
...     x = "hello"
...
>>> n = NewType()
>>> n.x
'hello'

Probably not the most useful thing, but nice to know.

Edit: Fixed name of new type, should be NewType to be the exact same thing as with class statement.

Edit: Adjusted the title to more accurately describe the feature.
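Since the dict passed to type() becomes the class namespace, it can hold methods as well as plain attributes. A minimal sketch (the Greeter class is made up):

```python
# The third argument to type() is the class namespace, so callables
# placed there become ordinary methods of the new class.
Greeter = type("Greeter", (object,), {
    "greeting": "Hello",
    "greet": lambda self, name: "%s, %s!" % (self.greeting, name),
})

print(Greeter().greet("world"))  # Hello, world!
```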

Noenoel answered 19/9, 2008 at 11:50 Comment(8)
This has a lot of potential for usefulness, e.g., JIT ORMsIgnazio
I use it to generate HTML-Form classes based on a dynamic input. Very nice!Pouf
I also used it to generate dynamic django forms (until i discovered formsets)Peirsen
Note: all classes are created at runtime. So you can use the 'class' statement within a conditional, or within a function (very useful for creating families of classes or classes that act as closures). The improvement that 'type' brings is the ability to neatly define a dynamically generated set of attributes (or bases).Handfasting
Extremely useful, in Django, for generating dynamic models that wrap existing sets of tables with similar structures.Ahwaz
You can also create anonymous types with a blank string like: type('', (object,), {'x': 'blah'})Shellashellac
Could be very useful for code injections.Zebec
You can also instantiate this class in one line too. x = type("X", (object,), {'val':'Hello'})()Condorcet
F
179

Context managers and the "with" Statement

Introduced in PEP 343, a context manager is an object that acts as a run-time context for a suite of statements.

Since the feature makes use of new keywords, it is introduced gradually: it is available in Python 2.5 via the __future__ directive. Python 2.6 and above (including Python 3) has it available by default.

I have used the "with" statement a lot because I think it's a very useful construct; here is a quick demo:

from __future__ import with_statement

with open('foo.txt', 'w') as f:
    f.write('hello!')

What's happening behind the scenes is that the "with" statement calls the special __enter__ and __exit__ methods on the file object. Exception details are also passed to __exit__ if any exception is raised from the with statement body, allowing exception handling to happen there.

In this particular case, it guarantees that the file is closed when execution leaves the with suite, regardless of whether that happens normally or through an exception. It is basically a way of abstracting away common resource-cleanup and exception-handling code.

Other common use cases for this include locking with threads and database transactions.
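As a sketch of the protocol described above, here is a hypothetical context manager that times its block (Timer is not a standard class):

```python
import time

class Timer(object):
    """Hypothetical context manager that times the code in its block."""
    def __enter__(self):
        self.start = time.time()
        return self  # bound to the name after "as"

    def __exit__(self, exc_type, exc_value, traceback):
        # Called on both normal exit and exceptions; returning a
        # falsy value lets any exception propagate.
        self.elapsed = time.time() - self.start
        return False

with Timer() as t:
    total = sum(range(100000))

print(t.elapsed >= 0)  # True
```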

Fall answered 19/9, 2008 at 11:50 Comment(6)
I wouldn't approve a code review which imported anything from future. The features are more cute than useful, and usually they just end up confusing Python newcomers.Shrine
Yes, such "cute" features as nested scopes and generators are better left to those who know what they're doing. And anyone who wants to be compatible with future versions of Python. For nested scopes and generators, "future versions" of Python means 2.2 and 2.5, respectively. For the with statement, "future versions" of Python means 2.6.Metal
This may go without saying, but with python v2.6+, you no longer need to import from future. with is now a first class keyword.Elinorelinore
In 2.7 you can have multiple withs :) with open('filea') as filea and open('fileb') as fileb: ...Baculiform
@Austin i could not get that syntax to work on 2.7. this however did work: with open('filea') as filea, open('fileb') as fileb: ...Deuno
It could be useful to explain why, in which cases, this with statement is different from f = open('foo.txt', 'w').Quizmaster
R
168

Dictionaries have a get() method

Dictionaries have a 'get()' method. If you do d['key'] and key isn't there, you get an exception. If you do d.get('key'), you get back None if 'key' isn't there. You can add a second argument to get that item back instead of None, e.g. d.get('key', 0).

It's great for things like counting occurrences:

counts[value] = counts.get(value, 0) + 1
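Spelled out as a runnable sketch of the counting idiom:

```python
# "counts" as the dict name avoids shadowing the built-in sum()
counts = {}
for value in ["a", "b", "a", "c", "a"]:
    # get() supplies 0 the first time a key is seen
    counts[value] = counts.get(value, 0) + 1

print(counts["a"], counts["b"], counts["c"])  # 3 1 1
```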

Rejoice answered 19/9, 2008 at 11:50 Comment(4)
also, checkout the setdefault method.Blastocyst
also, checkout collections.defaultdict class.Colony
If you are using Python 2.7 or later, or 3.1 or later, check out the Counter class in the collections module. docs.python.org/library/collections.html#collections.CounterTrudietrudnak
Oh man, this whole time I've been doing get(key, None). Had no idea that None was provided by default.Darkle
P
152

Descriptors

They're the magic behind a whole bunch of core Python features.

When you use dotted access to look up a member (eg, x.y), Python first looks for the member in the instance dictionary. If it's not found, it looks for it in the class dictionary. If it finds it in the class dictionary, and the object implements the descriptor protocol, instead of just returning it, Python executes it. A descriptor is any class that implements the __get__, __set__, or __delete__ methods.

Here's how you'd implement your own (read-only) version of property using descriptors:

class Property(object):
    def __init__(self, fget):
        self.fget = fget

    def __get__(self, obj, type):
        if obj is None:
            return self
        return self.fget(obj)

and you'd use it just like the built-in property():

class MyClass(object):
    @Property
    def foo(self):
        return "Foo!"

Descriptors are used in Python to implement properties, bound methods, static methods, class methods and slots, amongst other things. Understanding them makes it easy to see why a lot of things that previously looked like Python 'quirks' are the way they are.

Raymond Hettinger has an excellent tutorial that does a much better job of describing them than I do.
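Putting the two snippets above together into one self-contained sketch:

```python
class Property(object):
    """The read-only property descriptor from above."""
    def __init__(self, fget):
        self.fget = fget

    def __get__(self, obj, objtype):
        if obj is None:
            return self  # accessed on the class, not an instance
        return self.fget(obj)

class MyClass(object):
    @Property
    def foo(self):
        return "Foo!"

# Attribute lookup finds foo in the class dict, sees __get__, and calls it
print(MyClass().foo)  # Foo!
```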

Pompei answered 19/9, 2008 at 11:50 Comment(4)
This is a duplicate of decorators, isn't it!? (stackoverflow.com/questions/101268/… )Zoila
no, decorators and descriptors are totally different things, though in the example code, i'm creating a descriptor decorator. :)Pompei
The other way to do this is with a lambda: foo = property(lambda self: self.__foo)Hacker
@PetePeterson Yes, but property itself is implemented with descriptors, which was the point of my post.Pompei
B
142

Conditional Assignment

x = 3 if (y == 1) else 2

It does exactly what it sounds like: "assign 3 to x if y is 1, otherwise assign 2 to x". Note that the parens are not necessary, but I like them for readability. You can also chain it if you have something more complicated:

x = 3 if (y == 1) else 2 if (y == -1) else 1

Though at a certain point, it goes a little too far.

Note that you can use if ... else in any expression. For example:

(func1 if y == 1 else func2)(arg1, arg2) 

Here func1 will be called if y is 1 and func2, otherwise. In both cases the corresponding function will be called with arguments arg1 and arg2.

Analogously, the following is also valid:

x = (class1 if y == 1 else class2)(arg1, arg2)

where class1 and class2 are two classes.
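A runnable sketch of the callable-selection form (double and swap are made-up helpers):

```python
def double(a, b):
    return (a * 2, b * 2)

def swap(a, b):
    return (b, a)

y = 1
# The conditional expression picks the function; the call happens afterwards
print((double if y == 1 else swap)(3, 4))  # (6, 8)

y = 0
print((double if y == 1 else swap)(3, 4))  # (4, 3)
```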

Butterfield answered 19/9, 2008 at 11:50 Comment(13)
The assignment is not the special part. You could just as easily do something like: return 3 if (y == 1) else 2.Carnotite
An alternate way to do this is: y == 1 and 3 or 2Arium
That alternate way is fraught with problems. For one thing, normally this works: if y == 1: #3 else if y == 70: #2 Why? y == 1 is only evaluated, THEN y == 70 if y == 1 is false. In this statement: y == 1 and 3 or 2 - 3 and 2 are evaluated as well as y == 1.Kobi
That alternate way is the first time I've seen obfuscated Python.Ugrian
Kylebrooks: It doesn't in that case, boolean operators short circuit. It will only evaluate 2 if bool(3) == False.Pulitzer
this backwards-style coding confusing me. something like x = ((y == 1) ? 3 : 2) makes more sense to meRefrain
I feel just the opposite of @Mark, C-style ternary operators have always confused me, is the right side or the middle what gets evaluated on a false condition? I much prefer Python's ternary syntax.Multiply
@Mark "x = (y == 1) and 3 or 2" is also valid.Sarsaparilla
I think C-style ternary operators are simpler, more english-like: 'am I drunk' ? 'yes, make out with her' : 'no, dont even think about it'Mickiemickle
x = 3 if (y == 1) else 2 - I find that in many cases, x = (2, 3)[y==1] is actually more readable (normally with really long statements, so you can keep the results (2, 3) together).Acetum
Somehow Guido and the Python folks managed to make one of the most contorted parts of the C language readable and easily understandable, even if you don't know what it is.Chaunceychaunt
@Infinity, you should consult with a doctor to replace the always-true constant 'am I drunk' with a non-deterministic function am_i_drunk().Trilateral
The first time I saw the ternary op in Python I found it confusing to read, largely due to my familiarity with the C-style one. Not sure which one is better ("the grass is wet if it is raining otherwise the grass is dry" vs "if it is raining then the grass is wet otherwise the grass is dry")Annuitant
I
141

Doctest: documentation and unit-testing at the same time.

Example extracted from the Python documentation:

def factorial(n):
    """Return the factorial of n, an exact integer >= 0.

    If the result is small enough to fit in an int, return an int.
    Else return a long.

    >>> [factorial(n) for n in range(6)]
    [1, 1, 2, 6, 24, 120]
    >>> factorial(-1)
    Traceback (most recent call last):
        ...
    ValueError: n must be >= 0

    Factorials of floats are OK, but the float must be an exact integer:
    """

    import math
    if not n >= 0:
        raise ValueError("n must be >= 0")
    if math.floor(n) != n:
        raise ValueError("n must be exact integer")
    if n+1 == n:  # catch a value like 1e300
        raise OverflowError("n too large")
    result = 1
    factor = 2
    while factor <= n:
        result *= factor
        factor += 1
    return result

def _test():
    import doctest
    doctest.testmod()    

if __name__ == "__main__":
    _test()
Innocent answered 19/9, 2008 at 11:50 Comment(10)
Doctests are certainly cool, but I really dislike all the cruft you have to type to test that something should raise an exceptionFm
Doctests are overrated and pollute the documentation. How often do you test a standalone function without any setUp() ?Shrine
who says you can't have setup in a doctest? write a function that generates the context and returns locals() then in your doctest do locals().update(setUp()) =DPeirsen
These are nice for making sure examples in docstrings don't go out of sync.Moa
If a standalone function requires a setUp, chances are high that it should be decoupled from some unrelated stuff or put into a class. Class doctest namespace can then be re-used in class method doctests, so it's a bit like setUp, only DRY and readable.Unicameral
bemusement.org/diary/2008/October/24/more-doctest-problems - doctests make for ok docs, bad testsPic
"How often do you test a standalone function" - lots. I find doctests often emerge naturally from the design process when I am deciding on facades.Killer
Doctest is hard to use with some modules and frameworks, such as Django. Usually, which makes Doctest hard to use is some point of the API design that is heavyweight, overcoupled to other components or has a lot of dependencies. Doctest has some problems and limitations but most of the time I feel that an API that makes it hard to use Doctest is more complex than it is needed.Cleaner
I think doctests are misnamed. They are really useful if you look at them as small usage examples, coming with a guarantee that they run.Imprescriptible
I've never understood the point of doctests, if you have a snippet of code that tests a function then put it into a proper unit test.Annuitant
F
138

Named formatting

% -formatting takes a dictionary (also applies %i/%s etc. validation).

>>> print "The %(foo)s is %(bar)i." % {'foo': 'answer', 'bar':42}
The answer is 42.

>>> foo, bar = 'question', 123

>>> print "The %(foo)s is %(bar)i." % locals()
The question is 123.

And since locals() returns a dictionary, you can simply pass it in and get %-substitutions from your local variables. This is sometimes frowned upon, but it simplifies things.

New Style Formatting

>>> print("The {foo} is {bar}".format(foo='answer', bar=42))
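New-style formatting can take a mapping too, via ** unpacking; a small sketch:

```python
values = {'foo': 'answer', 'bar': 42}
print("The {foo} is {bar}".format(**values))  # The answer is 42

# The locals() trick carries over as well (extra keys are ignored)
foo, bar = 'question', 123
print("The {foo} is {bar}".format(**locals()))  # The question is 123
```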
Forgo answered 19/9, 2008 at 11:50 Comment(11)
Will be phased out and eventually replaced with string's format() method.Curbing
Named formatting is very useful for translators as they tend to just see the format string without the variable names for contextHammerfest
Appears to work in python 3.0.1 (needed to add parenttheses around print call).Forgo
a hash, huh? I see where you came from.Emplacement
%-formatting won't go away any time soon, but the "format" method on strings is the new (current) best-practices method. It supports everything %-formatting does and most people think the API and the formatting syntax is much nicer. (Myself included.) Python has a third method, string.Template added in 2.4; basically nobody likes that one.Roborant
%s formatting will not be phased out. str.format() is certainly more pythonic, however is actually 10x's slower for simple string replacement. My belief is %s formatting is still best practice.Hildehildebrand
For completeness, the locals()-equivalent for new-style formatting is of course print "The {foo} is {bar}".format(**locals()).Rose
I love locals(), but it has the annoying side-effect that if you use pylint, you will often get errors for not using a variable in the scope of the function.Christianachristiane
As of Python 3.2, the locals() equivalent is print("The {foo} is {bar}".format_map(locals()))Chintz
That format is slower should be fixable. After all it does the same as % formatting. And in 3.1.3 timeit gives me these speed measurements: >>> timeit('''"a %(b)s" % {"b": "c"}''') 0.2503829002380371 >>> timeit('''"a {b}".format(b="c")''') 0.41667699813842773Slowworm
Hey @matt, it's not clear which kind of formatting you're recommending against, and it's especially not clear why.Chinchin
O
132

To add more python modules (especially 3rd party ones), most people seem to use the PYTHONPATH environment variable, or they add symlinks or directories in their site-packages directories. Another way is to use *.pth files. Here's the official python doc's explanation:

"The most convenient way [to modify python's search path] is to add a path configuration file to a directory that's already on Python's path, usually to the .../site-packages/ directory. Path configuration files have an extension of .pth, and each line must contain a single path that will be appended to sys.path. (Because the new paths are appended to sys.path, modules in the added directories will not override standard modules. This means you can't use this mechanism for installing fixed versions of standard modules.)"
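The mechanism can be seen in action with site.addsitedir, which processes .pth files the same way the site-packages directory is processed at startup (the directory names here are made up):

```python
import os
import site
import sys
import tempfile

d = tempfile.mkdtemp()
lib = os.path.join(d, "mylibs")
os.mkdir(lib)

# A .pth file: one path per line, each appended to sys.path
with open(os.path.join(d, "extra.pth"), "w") as f:
    f.write(lib + "\n")

site.addsitedir(d)      # reads extra.pth and appends "mylibs" to sys.path
print(lib in sys.path)  # True
```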

Outsider answered 19/9, 2008 at 11:50 Comment(1)
I never made the connection between that .pth file in the site-packages directory from setuptools and this idea. awesome.Arborization
C
122

Exception else clause:

try:
  put_4000000000_volts_through_it(parrot)
except Voom:
  print "'E's pining!"
else:
  print "This parrot is no more!"
finally:
  end_sketch()

The use of the else clause is better than adding additional code to the try clause because it avoids accidentally catching an exception that wasn’t raised by the code being protected by the try ... except statement.

See http://docs.python.org/tut/node10.html
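The ordering can be sketched like this (attempt is a made-up helper):

```python
def attempt(x):
    events = []
    try:
        1 // x
    except ZeroDivisionError:
        events.append("except")
    else:
        events.append("else")      # runs only when no exception was raised
    finally:
        events.append("finally")   # always runs
    return events

print(attempt(2))  # ['else', 'finally']
print(attempt(0))  # ['except', 'finally']
```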

Curbing answered 19/9, 2008 at 11:50 Comment(6)
+1 this is awesome. If the try block executes without entering any exception blocks, then the else block is entered. And then of course, the finally block is executedMacario
It would make more sense to use continue, but I guess it's already taken ;)Clothilde
Note that on older versions of Python2 you can't have both else: and finally: clauses for the same try: blockDefinitely
@Paweł Prażak: I don't think it would. As continue and break refer to loops and this is a single conditional statement.Africanist
@IsaacRemuant you are right. Maybe something like expected or default or action or normal? :)Clothilde
@Paweł Prażak, as Kevin Horn mentioned, this syntax was introduced after the initial release of Python and adding new reserved keywords to existing language is always problematic. That's why an existing keyword is usually reused (c.f. "auto" in recent C++ standard).Curbing
H
113

Re-raising exceptions:

# Python 2 syntax
try:
    some_operation()
except SomeError, e:
    if is_fatal(e):
        raise
    handle_nonfatal(e)

# Python 3 syntax
try:
    some_operation()
except SomeError as e:
    if is_fatal(e):
        raise
    handle_nonfatal(e)

The 'raise' statement with no arguments inside an error handler tells Python to re-raise the exception with the original traceback intact, allowing you to say "oh, sorry, sorry, I didn't mean to catch that, sorry, sorry."

If you wish to print, store or fiddle with the original traceback, you can get it with sys.exc_info(), and printing it like Python would is done with the 'traceback' module.
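A sketch of grabbing the traceback with sys.exc_info() and rendering it with the traceback module:

```python
import sys
import traceback

try:
    1 // 0
except ZeroDivisionError:
    exc_type, exc_value, exc_tb = sys.exc_info()
    # format_exception returns the same lines the interpreter would print
    text = "".join(traceback.format_exception(exc_type, exc_value, exc_tb))
    print("ZeroDivisionError" in text)  # True
```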

Hemihydrate answered 19/9, 2008 at 11:50 Comment(6)
Sorry but this is a well known and common feature of almost all languages.Rhettrhetta
Note the italicized text. Some people will do raise e instead, which doesn't preserve the original traceback.Epsom
Maybe more magical, exc_info = sys.exc_info(); raise exc_info[0], exc_info[1], exc_info[2] is equivalent to this, but you can change those values around (e.g., change the exception type or message)Nibbs
@Lucas S. Well, I didn't know it, and I'm glad it's written here.Damnify
i may be showing my youth here, but i have always used the python 3 syntax in python 2.7 with no issueDeuno
The Python 3 syntax works in 2.6 and 2.7 as well, yes.Hemihydrate
H
106

Main messages :)

import this
# btw look at this module's source :)

De-cyphered:

The Zen of Python, by Tim Peters

Beautiful is better than ugly.
Explicit is better than implicit.
Simple is better than complex.
Complex is better than complicated.
Flat is better than nested.
Sparse is better than dense.
Readability counts.
Special cases aren't special enough to break the rules.
Although practicality beats purity.
Errors should never pass silently.
Unless explicitly silenced.
In the face of ambiguity, refuse the temptation to guess.
There should be one-- and preferably only one --obvious way to do it.
Although that way may not be obvious at first unless you're Dutch.
Now is better than never.
Although never is often better than right now.
If the implementation is hard to explain, it's a bad idea.
If the implementation is easy to explain, it may be a good idea.
Namespaces are one honking great idea -- let's do more of those!

Horsepower answered 19/9, 2008 at 11:50 Comment(12)
Loving the source for that :DAshanti
Any idea why the source was cyphered that way? Was it just for fun, or was there some other reason?Brindle
the way the source is written goes against the zen!Minimal
svn.python.org/view/python/trunk/Lib/this.py?view=markupSukkoth
It should be easier to understand if instead of 65 it used ord("A"), ord("a") instead of 97 and ord("z")-ord("a") instead of 26. The rest is just a Caesar cipher by 13 (A.K.A. ROT13). But indeed it would have been more pythonic to use the str.translate method :-pThrawn
I've updated my /usr/lib/python2.6/this.py replacing the old code with this print s.translate("".join(chr(64<i<91 and 65+(i-52)%26 or 96<i<123 and 97+(i-84)%26 or i) for i in range(256))) and it looks much better now!! :-DThrawn
year, that's called irony. (the reason, why they made it)Inhuman
@MiniQuark: quick history lesson: wefearchange.org/2010/06/import-this-and-zen-of-python.htmlInnovation
I found this history of import this the other day. Rather interesting: wefearchange.org/2010/06/import-this-and-zen-of-python.htmlChaunceychaunt
@Dan: Damn. I didn't see your comment until just now.Chaunceychaunt
hg.python.org/cpython/file/tip/Lib/this.pyStocky
I think the source was obfuscated to disguise the commit, so the easter egg really would be a surprise, even to people skimming commits.Chinchin
I
105

Interactive Interpreter Tab Completion

try:
    import readline
except ImportError:
    print "Unable to load readline module."
else:
    import rlcompleter
    readline.parse_and_bind("tab: complete")


>>> class myclass:
...    def function(self):
...       print "my function"
... 
>>> class_instance = myclass()
>>> class_instance.<TAB>
class_instance.__class__   class_instance.__module__
class_instance.__doc__     class_instance.function
>>> class_instance.f<TAB>unction()

To enable this in every interactive session, put the snippet above in a file and point the PYTHONSTARTUP environment variable at it.

Inland answered 19/9, 2008 at 11:50 Comment(7)
This is a very useful feature. So much so I've a simple script to enable it (plus a couple of other introspection enhancements): pixelbeat.org/scripts/inpyHammerfest
IPython gives you this plus tons of other neat stuffDissident
This would have been more useful at pdb prompt than the regular python prompt (as IPython serves that purpose anyway). However, this doesn't seem to work at the pdb prompt, probably because pdb binds its own for tab (which is less useful). I tried calling parse_and_bind() at the pdb prompt, but it still didn't work. The alternative of getting pdb prompt with IPython is more work so I tend to not use it.Erelia
Found this recipe, but this didn't work for me (using python 2.6): code.activestate.com/recipes/498182Erelia
@Erelia -- easy_install ipdb -- then you can use import ipdb; ipdb.set_trace()Manque
For me the best tip was to use the try:except:else:. I've forgotten about the else in the try blockAssimilate
On osx [and i imagine other systems which use libedit] you have to do readline.parse_and_bind ("bind ^I rl_complete")Tantalus
J
91

Operator overloading for the set builtin:

>>> a = set([1,2,3,4])
>>> b = set([3,4,5,6])
>>> a | b # Union
{1, 2, 3, 4, 5, 6}
>>> a & b # Intersection
{3, 4}
>>> a < b # Strict subset (use <= for subset)
False
>>> a - b # Difference
{1, 2}
>>> a ^ b # Symmetric Difference
{1, 2, 5, 6}

More detail from the standard library reference: Set Types

Jurel answered 19/9, 2008 at 11:50 Comment(1)
In the tutorial, partly docs.python.org/tutorial/datastructures.html#setsAwait
M
91

Nested list comprehensions and generator expressions:

[(i,j) for i in range(3) for j in range(i) ]    
((i,j) for i in range(4) for j in range(i) )

These can replace huge chunks of nested-loop code.
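The first expression unrolls to this nested loop (outer for first, in the order written):

```python
pairs = []
for i in range(3):
    for j in range(i):
        pairs.append((i, j))

print(pairs)  # [(1, 0), (2, 0), (2, 1)]
print(pairs == [(i, j) for i in range(3) for j in range(i)])  # True
```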

Montague answered 19/9, 2008 at 11:50 Comment(6)
"for j in range(i)" - is this a typo? Normally you'd want fixed ranges for i and j. If you're accessing a 2d array, you'd miss out on half your elements.Suttle
I'm not accessing any arrays in this example. The only purpose of this code is to show that the expressions from the inner ranges can access those from the outer ones. The by-product is a list of pairs (x,y) such that 4>x>y>0.Caponize
sorta like double integration in calculus, or double summation.Binford
Key point to remember here (which took me a long time to realize) is that the order of the for statements are to be written in the order you'd expect them to be written in a standard for-loop, from the outside inwards.Anastatius
To add on to sykora's comment: imagine you're starting with a stack of fors and ifs with yield x inside. To convert that to a generator expression, move x first, delete all the colons (and the yield), and surround the whole thing in parentheses. To make a list comprehension instead, replace the outer parens with square brackets.Hyssop
Great comment, Ken, I have trouble visualizing this as well but anyone could grasp from your comment.Makings
G
85

Negative round

The round() function rounds a float number to given precision in decimal digits, but precision can be negative:

>>> str(round(1234.5678, -2))
'1200.0'
>>> str(round(1234.5678, 2))
'1234.57'

Note: round() always returns a float; str() is used in the example above because floating-point math is inexact, and under 2.x the second example can print as 1234.5700000000001. Also see the decimal module.

Gravitation answered 19/9, 2008 at 11:50 Comment(4)
So often I have to round a number to a multiple. Eg, round 17 to a multiple of 5 (15). But Python's round doesn't let me do that! IMO, it should be structured as round(num, precision=1) - round "num" to the nearest multiple of "precision"Acetum
@wallacoloo what's the matter with (17 / 5)*5 ? Isn't it short and expressive?Ovate
@Ovate try that with (19 / 5)*5. 19 rounded to the nearest 5 should be 20, right? But that seems to return 15. Also, that's relying on the integer division rules of Python 2.x. It won't work the same in 3.x. The most concise, correct solution imo is: roundNearest = lambda n, m: round(float(n)/m)*mAcetum
Or in general, roundNearest = lambda n, m: (n + (m/2)) / m * m. It's twice as fast as using round(float) on my system.Thyestes
H
81

Multiplying by a boolean

One thing I'm constantly doing in web development is optionally printing HTML parameters. We've all seen code like this in other languages:

class='<% isSelected ? "selected" : "" %>'

In Python, you can multiply by a boolean and it does exactly what you'd expect:

class='<% "selected" * isSelected %>'

This is because multiplication coerces the boolean to an integer (0 for False, 1 for True), and in python multiplying a string by an int repeats the string N times.
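In plain Python, the idiom looks like this:

```python
is_selected = True
print("selected" * is_selected)               # selected
print(repr("selected" * (not is_selected)))   # ''

# bool is a subclass of int, so True behaves as 1 and False as 0
print(True == 1, False == 0)  # True True
```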

Historicism answered 19/9, 2008 at 11:50 Comment(4)
+1, that's a nice one. OTOH, as it's just a bit arcane, it's easy to see why you might not want to do this, for readability reasons.Windup
I would write bool(isSelected) both for reliability and readability.Elegance
you could also use something like: ('not-selected', 'selected')[isSelected] if you need an option for False value too..Born
Proper conditional expressions were added to Python in 2.5. If you're using 2.5+ you probably shouldn't use these tricks for readability reasons.Heartily
I
74

Python's advanced slicing operation has a barely known syntax element, the ellipsis:

>>> class C(object):
...  def __getitem__(self, item):
...   return item
... 
>>> C()[1:2, ..., 3]
(slice(1, 2, None), Ellipsis, 3)

Unfortunately it's rarely useful, as the ellipsis is only supported when tuples are involved.

Inulin answered 19/9, 2008 at 11:50 Comment(3)
see stackoverflow.com/questions/118370/… for more infoMarsiella
Actually, the ellipsis is quite useful when dealing with multi-dimensional arrays from numpy module.Monopolize
This is supposed to be more useful in Python 3, where the ellipsis will become a literal. (Try it, you can type ... in a Python 3 interpreter and it will return Eillipsis)Chaunceychaunt
C
72

re can call functions!

The fact that you can call a function every time something matches a regular expression is very handy. Here I have a sample of replacing every "Hello" with "Hi," and "there" with "Fred", etc.

import re

def Main(haystack):
  # List of from replacements, can be a regex
  finds = ('Hello', 'there', 'Bob')
  replaces = ('Hi,', 'Fred,', 'how are you?')

  def ReplaceFunction(matchobj):
    for found, rep in zip(matchobj.groups(), replaces):
      if found is not None:
        return rep

    # log error
    return matchobj.group(0)

  named_groups = [ '(%s)' % find for find in finds ]
  ret = re.sub('|'.join(named_groups), ReplaceFunction, haystack)
  print ret

if __name__ == '__main__':
  str = 'Hello there Bob'
  Main(str)
  # Prints 'Hi, Fred, how are you?'
Ciapha answered 19/9, 2008 at 11:50 Comment(2)
This is insane. I had no idea this existed. awesome. thanks a lot.Minuteman
Never seen this before, but a better example might be re.sub('[aeiou]', lambda match: match.group().upper()*3, 'abcdefghijklmnopqrstuvwxyz')Runt
C
70

Tuple unpacking in Python 3

In Python 3, you can use the same starred syntax as *args in function definitions for tuple unpacking:

>>> first,second,*rest = (1,2,3,4,5,6,7,8)
>>> first
1
>>> second
2
>>> rest
[3, 4, 5, 6, 7, 8]

A less known and more powerful feature lets the starred name appear in the middle, absorbing an unknown number of elements:

>>> first,*rest,last = (1,2,3,4,5,6,7,8)
>>> first
1
>>> rest
[2, 3, 4, 5, 6, 7]
>>> last
8
Conk answered 19/9, 2008 at 11:50 Comment(2)
Quite haskellish :) cool one :)Kimkimball
i like it , bummer it doesn't work in 2.7..Deuno
N
67

Multi line strings

One approach is to use backslashes:

>>> sql = "select * from some_table \
where id > 10"
>>> print sql
select * from some_table where id > 10

Another is to use the triple-quote:

>>> sql = """select * from some_table 
where id > 10"""
>>> print sql
select * from some_table 
where id > 10

The problem with both is that they can't be indented to match your code (they look poor in your source). If you do indent them, the output will include the whitespace you added.

A third solution, which I found out about recently, is to divide your string into lines and surround them with parentheses:

>>> sql = ("select * from some_table " # <-- no comma, whitespace at end
           "where id > 10 "
           "order by name") 
>>> print sql
select * from some_table where id > 10 order by name

Note how there's no comma between the lines (this is implicit string concatenation, not a tuple), and you have to account for any trailing/leading whitespace your string needs to have. All of these work with placeholders, by the way (such as "my name is %s" % name).

Nephrectomy answered 19/9, 2008 at 11:50 Comment(2)
have been looking for this for a long timeMideast
That's a gooood thing when writing long stuff in code, while keeping a low line length!Incurable
G
63

This answer has been moved into the question itself, as requested by many people.

Gringo answered 19/9, 2008 at 11:50 Comment(0)
L
59
  • The underscore, which contains the most recent output value displayed by the interpreter (in an interactive session):
>>> (a for a in xrange(10000))
<generator object at 0x81a8fcc>
>>> b = 'blah'
>>> _
<generator object at 0x81a8fcc>
  • A convenient Web-browser controller:
>>> import webbrowser
>>> webbrowser.open_new_tab('http://www.stackoverflow.com')
  • A built-in HTTP server. To serve the files in the current directory:
python -m SimpleHTTPServer 8000
  • AtExit
>>> import atexit
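The atexit item stops at the import; a minimal sketch of how it is actually used (the registered handler runs at normal interpreter shutdown, so its effect isn't visible in-process):

```python
import atexit

farewells = []

def goodbye(name):
    # Runs automatically at normal interpreter shutdown
    farewells.append('Goodbye, %s!' % name)

# register() returns the function unchanged,
# so it can also be used as a decorator
handler = atexit.register(goodbye, 'world')
```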
Lengthways answered 19/9, 2008 at 11:50 Comment(5)
Why not just SimpleHTTPServer?Cottier
worth noting that the _ is available only in interactive mode. when running scripts from a file, _ has no special meaning.Windup
@TokenMacGuy: Actually, you can define _ to be a variable in a file (just in case you do want to go for obfuscated Python).Chaunceychaunt
note: you can also use __ for the second-last and ___ for the third lastDeuno
@Chaunceychaunt I frequently use _ as a name for variables I do not care about (eg for _, desired_value, _ in my_tuple_with_some_irrelevant_values). Yes, ike a prologger :)Cleaner
D
56

pow() can also calculate (x ** y) % z efficiently.

There is a lesser known third argument of the built-in pow() function that allows you to calculate x**y modulo z more efficiently than simply doing (x ** y) % z:

>>> x, y, z = 1234567890, 2345678901, 17
>>> pow(x, y, z)            # almost instantaneous
6

In comparison, (x ** y) % z hadn't produced a result after a full minute on my machine for the same values.
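For small numbers you can check that the three-argument form agrees with the naive expression; the big exponents are where the speed difference shows up:

```python
# Modular exponentiation: both sides compute 7**13 mod 11
small = (7 ** 13) % 11
fast = pow(7, 13, 11)
# small == fast == 2
```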

Dynel answered 19/9, 2008 at 11:50 Comment(7)
I've always wondered what the use case is for this. I haven't encountered one, but then again I don't do scientific computing.Detrusion
@buzkor: it's pretty useful for cryptography, tooSacrilegious
Remember, this is the built-in pow() function. This is not the math.pow() function, which accepts only 2 arguments.Monopolize
I remember stating very adamantly that I could not code cryptography in pure Python without this feature. This was in 2003, and so the version of Python I was working with was 2.2 or 2.3. I wonder if I was making a fool of myself and pow had that third parameter then or not.Aquiline
pow had that third parameter at least since Python 2.1. However, according to the documentation, "[i]n Python 2.1 and before, floating 3-argument pow() returned platform-dependent results depending on floating-point rounding accidents."Geanticline
The cool thing here is that you can override this behavior in your own objects using __pow__. You just have to define an optional third argument. And for more information on where this would be used, see en.wikipedia.org/wiki/Modular_exponentiation.Chaunceychaunt
Fermats little theorem made quick!Patrolman
D
52

enumerate with different starting index

enumerate has partly been covered in this answer, but recently I've found an even more hidden feature of enumerate that I think deserves its own post instead of just a comment.

Since Python 2.6, you can specify a starting index to enumerate in its second argument:

>>> l = ["spam", "ham", "eggs"]
>>> list(enumerate(l))
[(0, 'spam'), (1, 'ham'), (2, 'eggs')]
>>> list(enumerate(l, 1))
[(1, 'spam'), (2, 'ham'), (3, 'eggs')]

One place where I've found it particularly useful is when I am enumerating over the entries of a symmetric matrix. Since the matrix is symmetric, I can save time by iterating over the upper triangle only, but then I have to call enumerate with a different starting index in the inner for loop to keep the row and column indices in sync:

for ri, row in enumerate(matrix):
    for ci, column in enumerate(matrix[ri:], ri):
        # ci now refers to the proper column index

Strangely enough, this behaviour of enumerate is not documented in help(enumerate), only in the online documentation.

Dynel answered 19/9, 2008 at 11:50 Comment(4)
help(enumerate) has this proper function signature in python2.x, but not in py3k. I guess, a bug needs to be filled.Mede
help(enumerate) is definitely wrong in Python 2.6.5. Maybe they have fixed it already in Python 2.7.Geanticline
help(enumerate) from Python 3.1.2 says class enumerate(object) | enumerate(iterable) -> iterator for index, value of iterable, but the trick from the answer works fine.Menstruate
It looks like this was added in Python 2.6 as it does not work in Python 2.5.Geanticline
B
52

You can easily transpose an array with zip.

a = [(1,2), (3,4), (5,6)]
zip(*a)
# [(1, 3, 5), (2, 4, 6)]
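As one of the comments points out, zip(*...) is its own inverse, so transposing twice round-trips (in Python 3 you need list() around it, since zip is lazy there):

```python
a = [(1, 2), (3, 4), (5, 6)]
transposed = list(zip(*a))     # [(1, 3, 5), (2, 4, 6)]
back = list(zip(*transposed))  # [(1, 2), (3, 4), (5, 6)]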
Bhayani answered 19/9, 2008 at 11:50 Comment(4)
Basically, zip(*a) unzips a. So if b = zip(a), then a == zip(*b).Chaunceychaunt
map(None, *a) can come in handy if your tuples are of differing lengths: map(None, *[(1,2), (3,4,5), (5,)]) => [(1, 3, 5), (2, 4, None), (None, 5, None)]Greggs
Just found this feature at docs.python.org/library/functions.html and was about to share it on here. Looks like ya beat me to the chase.Brouwer
The way I remember how this works is that "zip* turns a list of pairs in to a pair of lists" (and vice versa)Artillery
K
49

You can use property to make your class interfaces more strict.

class C(object):
    def __init__(self, foo, bar):
        self.foo = foo # read-write property
        self.bar = bar # simple attribute

    def _set_foo(self, value):
        self._foo = value

    def _get_foo(self):
        return self._foo

    def _del_foo(self):
        del self._foo

    # any of fget, fset, fdel and doc are optional,
    # so you can make a write-only and/or delete-only property.
    foo = property(fget = _get_foo, fset = _set_foo,
                   fdel = _del_foo, doc = 'Hello, I am foo!')

class D(C):
    def _get_foo(self):
        return self._foo * 2

    def _set_foo(self, value):
        self._foo = value / 2

    foo = property(fget = _get_foo, fset = _set_foo,
                   fdel = C.foo.fdel, doc = C.foo.__doc__)

In Python 2.6 and 3.0:

class C(object):
    def __init__(self, foo, bar):
        self.foo = foo # read-write property
        self.bar = bar # simple attribute

    @property
    def foo(self):
        '''Hello, I am foo!'''

        return self._foo

    @foo.setter
    def foo(self, value):
        self._foo = value

    @foo.deleter
    def foo(self):
        del self._foo

class D(C):
    @C.foo.getter
    def foo(self):
        return self._foo * 2

    @foo.setter
    def foo(self, value):
        self._foo = value / 2

To learn more about how property works refer to descriptors.

Kitchen answered 19/9, 2008 at 11:50 Comment(1)
It would be nice if your pre-2.6 and your 2.6 and 3.0 examples would actually present the exact same thing: classname is different, there are comments in the pre-2.6 version, the 2.6 and 3.0 versions don't contain initialization code.Asturias
A
48

Many people don't know about the "dir" function. It's a great way to figure out what an object can do from the interpreter. For example, if you want to see a list of all the string methods:

>>> dir("foo")
['__add__', '__class__', '__contains__', (snipped a bunch), 'title',
 'translate', 'upper', 'zfill']

And then if you want more information about a particular method you can call "help" on it.

>>> help("foo".upper)
    Help on built-in function upper:

upper(...)
    S.upper() -> string

    Return a copy of the string S converted to uppercase.
Astred answered 19/9, 2008 at 11:50 Comment(6)
dir() is essential for development. For large modules I've enhanced it to add filtering. See pixelbeat.org/scripts/inpyHammerfest
You can also directly use help: help('foo')Arium
If you use IPython, you can append a question mark to get help on a variable/method.Dissident
see: An alternative to Python's dir(). Easy to type; easy to read! For humans only: github.com/inky/seeCamera
I call this python's man pages and can also be implemented to work when 'man' is called rather than 'help'Macario
@Camera -- see() is very handy. Very nice! So much easier to read than the output of dir()Annuitant
E
47

set/frozenset

Probably an easily overlooked Python builtin is set/frozenset.

Useful when you have a list like [1,2,1,1,2,3,4] and only want the unique values, [1,2,3,4].

Using set() that's exactly what you get:

>>> x = [1,2,1,1,2,3,4] 
>>> 
>>> set(x) 
set([1, 2, 3, 4]) 
>>>
>>> for i in set(x):
...     print i
...
1
2
3
4

And of course to get the number of uniques in a list:

>>> len(set([1,2,1,1,2,3,4]))
4

You can also find if a list is a subset of another list using set().issubset():

>>> set([1,2,3,4]).issubset([0,1,2,3,4,5])
True

As of Python 2.7 and 3.0 you can use curly braces to create a set:

myset = {1,2,3,4}

as well as set comprehensions:

{x for x in stuff}

For more details: http://docs.python.org/library/stdtypes.html#set
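Beyond uniqueness tests, sets support the usual set algebra, which pairs nicely with the literal syntax above:

```python
a = {1, 2, 3, 4}
b = {3, 4, 5, 6}

union = a | b             # {1, 2, 3, 4, 5, 6}
common = a & b            # {3, 4}
only_a = a - b            # {1, 2}
either_not_both = a ^ b   # {1, 2, 5, 6}
```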

Ellie answered 19/9, 2008 at 11:50 Comment(4)
Also useful in cases where a dictionary were used only to test if a value is there.Orlon
I use set about as much as tuple and list.Moa
for subsets, i believe it is issubset not isasubset. either way, the subset operator <= is nicer anyway.Deuno
you can do dict comprehension too in python 2.7 like this { x:x*2 for x in range(3) } It's probably sort of confusing if you don't know what you are doing imhoMariammarian
P
46

Built-in base64, zlib, and rot13 codecs

Strings have encode and decode methods. Usually these are used for converting unicode to str and vice versa, e.g. with s = u.encode('utf8') or u = s.decode('utf8'). But there are some other handy builtin codecs. Compression and decompression with zlib (and bz2) are available without an explicit import:

>>> s = 'a' * 100
>>> s.encode('zlib')
'x\x9cKL\xa4=\x00\x00zG%\xe5'

Similarly you can encode and decode base64:

>>> 'Hello world'.encode('base64')
'SGVsbG8gd29ybGQ=\n'
>>> 'SGVsbG8gd29ybGQ=\n'.decode('base64')
'Hello world'

And, of course, you can rot13:

>>> 'Secret message'.encode('rot13')
'Frperg zrffntr'
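As a comment notes, these string-method spellings are gone in Python 3; there the same transforms are still reachable through the codecs module (zlib and base64 operate on bytes):

```python
import codecs

rot = codecs.encode('Secret message', 'rot_13')  # 'Frperg zrffntr'
b64 = codecs.encode(b'Hello world', 'base64')    # b'SGVsbG8gd29ybGQ=\n'

# zlib round-trips bytes
packed = codecs.encode(b'a' * 100, 'zlib')
round_trip = codecs.decode(packed, 'zlib')       # 100 bytes of b'a'
```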
Popovich answered 19/9, 2008 at 11:50 Comment(4)
Sadly this will stop working in Python 3Gaiseric
Oh, will it stop working? That's too bad :/. I was just thinking how great this feature was. Then I saw your comment.Blankbook
Awe, the base64 one was pretty useful in interactive sessions handling data from the web.Moa
In my opionion it's some type of en/decoding, but on the other side there should "only one way to it" and I think, that these things are better put in its own module!Inhuman
G
43

An interpreter within the interpreter

The standard library's code module lets you include your own read-eval-print loop inside a program, or run a whole nested interpreter. E.g. (my example copied from here)

$ python
Python 2.5.1 (r251:54863, Jan 17 2008, 19:35:17) 
[GCC 4.0.1 (Apple Inc. build 5465)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> shared_var = "Set in main console"
>>> import code
>>> ic = code.InteractiveConsole({ 'shared_var': shared_var })
>>> try:
...     ic.interact("My custom console banner!")
... except SystemExit, e:
...     print "Got SystemExit!"
... 
My custom console banner!
>>> shared_var
'Set in main console'
>>> shared_var = "Set in sub-console"
>>> import sys
>>> sys.exit()
Got SystemExit!
>>> shared_var
'Set in main console'

This is extremely useful for situations where you want to accept scripted input from the user, or query the state of the VM in real-time.

TurboGears uses this to great effect by having a WebConsole from which you can query the state of your live web app.

Gaytan answered 19/9, 2008 at 11:50 Comment(0)
A
40
>>> from functools import partial
>>> bound_func = partial(range, 0, 10)
>>> bound_func()
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
>>> bound_func(2)
[0, 2, 4, 6, 8]

Not really a hidden feature, but partial is extremely useful for late evaluation of functions.

You can bind as many or as few parameters in the initial call to partial as you want, and call it with any remaining parameters later (in this example I've bound the begin/end args to range, but call it the second time with a step arg).

See the documentation.

Assyria answered 19/9, 2008 at 11:50 Comment(1)
I wish curryfication add a decent operator in python though.Taryn
S
36

While debugging complex data structures, the pprint module comes in handy.

Quoting from the docs..

>>> import pprint    
>>> stuff = sys.path[:]
>>> stuff.insert(0, stuff)
>>> pprint.pprint(stuff)
[<Recursion on list with id=869440>,
 '',
 '/usr/local/lib/python1.5',
 '/usr/local/lib/python1.5/test',
 '/usr/local/lib/python1.5/sunos5',
 '/usr/local/lib/python1.5/sharedmodules',
 '/usr/local/lib/python1.5/tkinter']
Smyth answered 19/9, 2008 at 11:50 Comment(1)
pprint is also good for printing dictionaries in doctests, since it always sorts the output by keysDissident
C
34

Python has GOTO

...and it's implemented by external pure-Python module :)

from goto import goto, label
for i in range(1, 10):
    for j in range(1, 20):
        for k in range(1, 30):
            print i, j, k
            if k == 3:
                goto .end # breaking out from a deeply nested loop
label .end
print "Finished"
Curbing answered 19/9, 2008 at 11:50 Comment(8)
Maybe it is best that this feature remains hidden.Leandroleaning
Well, the actual hidden feature here is mechanism used to implement GOTO.Curbing
Surely, for breaking out of a nested loop you can just raise an exception, no?Emplacement
+1 first one I actually did not know about.Windup
@shylent: Exceptions should be exceptional. For that reason they are optimized for the case that they are not thrown. If you expect the condition to occur in the course of normal processing, you should use another methodWindup
@shylent, the correct way to break out of a nested loop is to put the loop into a function, and return from the functionConde
External modules should not be included in this list. GOTO is not a feature of Python.Karlee
@TokenMacGuy: not in Python. Exception are used internally to end loops using StopIteration. Exception are not exceptional at all.Damnify
G
32

dict's constructor accepts keyword arguments:

>>> dict(foo=1, bar=2)
{'foo': 1, 'bar': 2}
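The same constructor also merges a base mapping with keyword overrides, which makes a handy copy-and-update (also noted in a comment below):

```python
base = {'a': 1, 'b': 2}
updated = dict(base, b=20, c=3)   # {'a': 1, 'b': 20, 'c': 3}
# base itself is left untouched
```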
Georg answered 19/9, 2008 at 11:50 Comment(2)
So long as the keyword arguments are valid Python identifiers (names). You can't use: dict(1="one", two=2 ...) because the "1" is not a valid identifier even though it's a perfectly valid dictionary key.Humpage
It's perfect for copy-and-update: base = {'a': 4, 'b': 5}; updated = dict(base, c=5)Ian
L
29

Sequence multiplication and reflected operands

>>> 'xyz' * 3
'xyzxyzxyz'

>>> [1, 2] * 3
[1, 2, 1, 2, 1, 2]

>>> (1, 2) * 3
(1, 2, 1, 2, 1, 2)

We get the same result with reflected (swapped) operands

>>> 3 * 'xyz'
'xyzxyzxyz'

It works like this:

>>> s = 'xyz'
>>> num = 3

To evaluate the expression s * num, the interpreter calls s.__mul__(num)

>>> s * num
'xyzxyzxyz'

>>> s.__mul__(num)
'xyzxyzxyz'

To evaluate the expression num * s, the interpreter calls num.__mul__(s)

>>> num * s
'xyzxyzxyz'

>>> num.__mul__(s)
NotImplemented

If that call returns NotImplemented, the interpreter falls back to the reflected operation s.__rmul__(num), provided the operands have different types

>>> s.__rmul__(num)
'xyzxyzxyz'

See http://docs.python.org/reference/datamodel.html#object.rmul

Launalaunce answered 19/9, 2008 at 11:50 Comment(4)
+1 I knew about sequence multiplication, but the reflected operands are new to me.Rafe
@Space, it would be unpythonic to have x * y != y * x, after all :)Ovoid
In python you may have x * y != y * x (it's just enough to play with the 'mul' methods).Outroar
Seeing many questions about problems with x= [] * 20, i am thinking if it would be better to make shallow copies of the operands by defaultNoodle
N
28

Interleaving if and for in list comprehensions

>>> [(x, y) for x in range(4) if x % 2 == 1 for y in range(4)]
[(1, 0), (1, 1), (1, 2), (1, 3), (3, 0), (3, 1), (3, 2), (3, 3)]

I never realized this until I learned Haskell.
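The comprehension above reads exactly like the equivalent nested statements, with each for/if clause nesting one level deeper:

```python
# Equivalent spelled-out form of the comprehension above
result = []
for x in range(4):
    if x % 2 == 1:
        for y in range(4):
            result.append((x, y))
# result == [(1, 0), (1, 1), (1, 2), (1, 3), (3, 0), (3, 1), (3, 2), (3, 3)]
```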

Noenoel answered 19/9, 2008 at 11:50 Comment(6)
way cool. docs.python.org/tutorial/…Recondition
Not so cool, you are just having a list comprehension with two for loops. What is so surprising about that?Beckmann
@Olivier: there's an if between the two for loops.Noenoel
@Torsten: well, the list comprehension comprises already a for .. if, so what is so interesting? You can write: [x for i in range(10) if i%2 for j in range(10) if j%2], nothing especially cool or interesting. The if in the middle of your example has nothing to do with the second for.Beckmann
I was wondering, is there a way to do this with an else? [ a for (a, b) in zip(lista, listb) if a == b else: '-' ]Baculiform
in [ _ for _ in _ if _ ] the if is a filter for the example above it would need to be [ _ if _ else _ for _ ]Appreciative
B
28

Getter functions in module operator

The functions attrgetter() and itemgetter() in module operator can be used to generate fast accessor functions for use in sorting and searching objects and dictionaries

Chapter 6.7 in the Python Library Docs
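Since a comment asks for examples: a small sketch of both getters used as sorted() keys (the data here is made up):

```python
from operator import itemgetter, attrgetter

pairs = [('apple', 3), ('banana', 1), ('cherry', 2)]
by_count = sorted(pairs, key=itemgetter(1))
# [('banana', 1), ('cherry', 2), ('apple', 3)]

# attrgetter does the same for attributes, e.g. complex.real
nums = [3 - 4j, 1 + 2j, 2 + 0j]
by_real = sorted(nums, key=attrgetter('real'))
# [(1+2j), (2+0j), (3-4j)]
```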

Byrd answered 19/9, 2008 at 11:50 Comment(1)
This answer deserves good examples, for instance in conjunction with map()Wahlstrom
M
27

Obviously, the antigravity module. xkcd #353

Maris answered 19/9, 2008 at 11:50 Comment(5)
Probably my most used module. After the soul module, of course.Bareilly
Which actually works. Try putting "import antigravity" in the newest Py3K.Cottier
@Nickelson Szeto... what does it do?Peirsen
@Jim Robert: It opens up the webbrowser to the xkcd site ;)Lithology
the skynet module is quite useful too…Chiclayo
N
27

Tuple unpacking:

>>> (a, (b, c), d) = [(1, 2), (3, 4), (5, 6)]
>>> a
(1, 2)
>>> b
3
>>> c, d
(4, (5, 6))

More obscurely, you can do this in function arguments (in Python 2.x; Python 3.x will not allow this anymore):

>>> def addpoints((x1, y1), (x2, y2)):
...     return (x1+x2, y1+y2)
>>> addpoints((5, 0), (3, 5))
(8, 5)
Nibbs answered 19/9, 2008 at 11:50 Comment(6)
For what it's worth, tuple unpacking in function definitions is going aaway in python 3.0Decode
Mostly because it makes the implementation really nasty, as far as I understand. (Eg.in inspect.getargs in the standard library - the normal path (no tuple args) is about 10 lines, and there are about 30 extra lines for handling tuple args (which only gets used occasionally).) Makes me sad though.Beira
Looks like they are removing some of the batteries in 3.0 :/ .Blankbook
It's good, that they remove it, because it's ugly and you can just emulate this, by typing: x1, x2 = x; y1, y2 = y (if you have x,y arguments)Inhuman
That's a shame. I was hoping support for * would be added for remaining arguments, so you could do stuff like a, b, *c = [1, 2, 3, 4, 5] (equivalent to a = 1, b = 2, c = [3, 4, 5]).Greggs
@yangyang: that was added. The only thing that was removed is the tuple unpacking in function definitions. Instead, you just move such unpacking to the first line of the function implementation.Chintz
S
26

The Python Interpreter

>>> 

Maybe not lesser known, but certainly one of my favorite features of Python.

Scilicet answered 19/9, 2008 at 11:50 Comment(4)
The #1 reason Python is better than everything else. </fanboi>Bareilly
Everything else you've seen. </smuglispweenie>Bluefish
And it also has iPython which is much better than the default interpreterConatus
I wish I could use iPython like SLIME in all of its gloryMetapsychology
M
25

The simplicity of :

>>> 'str' in 'string'
True
>>> 'no' in 'yes'
False
>>> 

is something I love about Python. I have often seen a much less Pythonic idiom used instead:

if 'yes'.find("no") == -1:
    pass
Mervin answered 19/9, 2008 at 11:50 Comment(2)
I'm conflicted about this, because it's inconsistent with the in behavior on other kinds of sequences. 1 in [3, 2, 1] is True, but [2, 1] in [3, 2, 1] is False, and it could really be a problem if it were True. But that's what would be needed to make it consistent with the string behavior explained here. So I think the .find() approach is actually more Pythonic, although of course .find() ought to have returned None instead of -1.Culley
Also note: 'str' not in 'abc' #trueMareld
H
25

The unpacking syntax was extended in Python 3, as can be seen in the example:

>>> a, *b = range(5)
>>> a, b
(0, [1, 2, 3, 4])
>>> *a, b = range(5)
>>> a, b
([0, 1, 2, 3], 4)
>>> a, *b, c = range(5)
>>> a, b, c
(0, [1, 2, 3], 4)
Handicapper answered 19/9, 2008 at 11:50 Comment(4)
never seen this before, it's pretty nice!Interlope
which version? as this doesn't work in 2.5.2Appreciative
works with 3.1, but not with 2.7Clothilde
Nice - been hoping for that! Shame the destructuring went.Greggs
T
25

Referencing a list comprehension as it is being built...

You can reference a list comprehension as it is being built by the symbol '_[1]'. For example, the following function unique-ifies a list of elements without changing their order by referencing its list comprehension.

def unique(my_list):
    return [x for x in my_list if x not in locals()['_[1]']]
Timbering answered 19/9, 2008 at 11:50 Comment(6)
Nifty trick. Do you know if this is accepted behavior or is it more of a dirty hack that may change in the future? The underscore makes me think the latter.Jurel
Interesting. I think it'd be a dirty hack of the locals() dictionary, but I'd be curious to know for sure.Abra
Brilliant, I was literally just looking for this yesterday!Fairtrade
not a good idea for algorithmic as well as practical reasons. Algorithmically, this will give you a linear search of the list so far on every iteration, changing your O(n) loop into O(n**2); much better to just make the list into a set afterwards. Practically speaking, it's undocumented, may change, and probably doesn't work in ironpython/jython/pypy .Melanesian
This is an undocumented implementation detail, not a hidden feature. It would be a bad idea to rely on this.Gaiseric
If you want to reference the list as you're building it, use an ordinary loop. This is very implementation dependent - CPython uses a hidden name in the locals dict because it is convenient, but other implementations are under no obligation to do the same thing.Chintz
V
25

Python's sort functions sort tuples correctly (i.e. using the familiar lexicographical order):

a = [(2, "b"), (1, "a"), (2, "a"), (3, "c")]
print sorted(a)
#[(1, 'a'), (2, 'a'), (2, 'b'), (3, 'c')]

Useful if you want to sort a list of people by age and then by name.

Vidovik answered 19/9, 2008 at 11:50 Comment(2)
This is a consequence of tuple comparison working correctly in general, i.e. (1, 2) < (1, 3).Curbing
This is useful for version tuples: (1, 9) < (1, 10).Larianna
V
24

I personally love the 3 different quotes

str = "I'm a string 'but still I can use quotes' inside myself!"
str = """ For some messy multi line strings.
Such as
<html>
<head> ... </head>"""

Also cool: not having to escape regular expressions, avoiding horrible backslash salad by using raw strings:

str2 = r"\n"
print str2
# prints: \n
Vinosity answered 19/9, 2008 at 11:50 Comment(2)
Four different quotes, if you include '''Chee
I enjoy having ' and " do pretty much the same thing in code. My IDE highlights strings from the two in different colors, and it makes it easy to differentiate short strings (with ') from longer ones (with ").Chaunceychaunt
C
24

Metaclasses

of course :-) What is a metaclass in Python?
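A minimal sketch (Python 3 syntax; the names here are made up): a metaclass can inspect or rewrite a class while it is being created:

```python
class Meta(type):
    def __new__(mcls, name, bases, namespace):
        # Inject an attribute into every class built by this metaclass
        namespace['tag'] = 'made by Meta'
        return super().__new__(mcls, name, bases, namespace)

class Widget(metaclass=Meta):
    pass

Widget.tag   # 'made by Meta'
```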

Cislunar answered 19/9, 2008 at 11:50 Comment(0)
G
23

Generators

I think that a lot of beginning Python developers pass over generators without really grasping what they're for or getting any sense of their power. It wasn't until I read David M. Beazley's PyCon presentation on generators (it's available here) that I realized how useful (essential, really) they are. That presentation illuminated what was for me an entirely new way of programming, and I recommend it to anyone who doesn't have a deep understanding of generators.
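A tiny sketch in the spirit of that presentation: generator stages chained into a pipeline, where each stage lazily consumes the previous one and nothing is computed until you iterate.

```python
def numbers(n):
    # Produce 0..n-1 lazily
    for i in range(n):
        yield i

def squares(seq):
    # Consume any iterable one item at a time
    for x in seq:
        yield x * x

pipeline = (x for x in squares(numbers(10)) if x % 2 == 0)
result = list(pipeline)   # [0, 4, 16, 36, 64]
```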

Grogshop answered 19/9, 2008 at 11:50 Comment(2)
Wow! My brain is fried and that was just the first 6 parts. Starting in 7 I had to start drawing pictures just to see if I really understood what was happening with multi-process / multi-thread / multi-machine processing pipelines. Amazing stuff!Bray
+1 for the link to the presentationHospitalization
S
22

Zero-argument and variable-argument lambdas

Lambda functions are usually used for a quick transformation of one value into another, but they can also be used to wrap a value in a function:

>>> f = lambda: 'foo'
>>> f()
'foo'

They can also accept the usual *args and **kwargs syntax:

>>> g = lambda *args, **kwargs: (args[0], kwargs['thing'])
>>> g(1, 2, 3, thing='stuff')
(1, 'stuff')
Sleeper answered 19/9, 2008 at 11:50 Comment(1)
The main reason I see to keep lambda around: defaultdict(lambda: 1)Tannin
I
22

The textwrap.dedent utility function can come in quite handy when testing that a returned multiline string equals the expected output, without breaking the indentation of your unit tests:

import unittest, textwrap

class XMLTests(unittest.TestCase):
    def test_returned_xml_value(self):
        returned_xml = call_to_function_that_returns_xml()
        expected_value = textwrap.dedent("""\
        <?xml version="1.0" encoding="utf-8"?>
        <root_node>
            <my_node>my_content</my_node>
        </root_node>
        """)

        self.assertEqual(expected_value, returned_xml)
Icefall answered 19/9, 2008 at 11:50 Comment(0)
A
22

When using the interactive shell, "_" contains the value of the last printed item:

>>> range(10)
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
>>> _
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
>>>
Antakiya answered 19/9, 2008 at 11:50 Comment(5)
I always forget about this one! It's a great feature.Bula
_ automatic variable is the best feature when using Python shell as a calculator. Very powerful calculator, by the way.Monopolize
I still try to use %% in the python shell from too much Mathematica in a previous life... If only %% were a valid variable name, I'd set %% = _...Sinegold
This was already given by someone (I don't know if it was earlier, but it is voted higher).Chaunceychaunt
__ for second-last and ___ for third-lastDeuno
C
22

Implicit concatenation:

>>> print "Hello " "World"
Hello World

Useful when you want to make a long text fit on several lines in a script:

hello = "Greaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa Hello " \
        "Word"

or

hello = ("Greaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa Hello " 
         "Word")
Calley answered 19/9, 2008 at 11:50 Comment(8)
To make a long text fit on several lines, you can also use the triple quotes.Caponize
Your example is wrong and misleading. After running it, the "Word" part won't be on the end of the hello string. It won't concatenate. To continue on next line like that, you would need implicit line continuation and string concatenation and that only happens if you use some delimiter like () or [].Quincyquindecagon
Only one thing was wrong here: the tab before "word" (typo). What's more, you are really unfriendly, espacially for somebody who didn't even take the time to check if it works (since you would have seen it does). You may want to read that : steve.yegge.googlepages.com/bambi-meets-godzillaDamnify
Anyone who has ever forgotten a comma in a list of strings knows how evil this 'feature' is.Burnie
Well, a PEP had been set to get rid of it but Guido decided finally to keep it. I guess it's more useful than hateful. Actually the drawbacks are no so dangerous (no safety issues) and for long strings, it helps a lot.Damnify
This is probably my favorite feature of Python. You can forget correct syntax and it's still correct syntax.Bareilly
even better: hello = "Greaaaaa Hello \<pretend there's a line break here>World"Lilialiliaceous
I always write a + at the end of the line (though I still do use the implicit line continuations from parentheses). It just makes things clearer to read.Chaunceychaunt
P
21

Using keyword arguments as assignments

Sometimes one wants to build a family of functions depending on one or more parameters. However, this can easily lead to closures that all refer to the same variable, and hence to its same final value:

funcs = [] 
for k in range(10):
     funcs.append( lambda: k)

>>> funcs[0]()
9
>>> funcs[7]()
9

This behaviour can be avoided by turning the lambda expression into a function that depends only on its arguments. Because default parameter values are evaluated once, at function definition time, the keyword parameter captures the current value of k. The function call doesn't have to be altered:

funcs = [] 
for k in range(10):
     funcs.append( lambda k = k: k)

>>> funcs[0]()
0
>>> funcs[7]()
7
Phosgene answered 19/9, 2008 at 11:50 Comment(3)
A less hackish way to do that (imho) is just to use a separate function to manufacture lambdas that don't close on a loop variable. Like this: def make_lambda(k): return lambda: k.Rejoinder
"less hackish"?....it's personal preference, I guess, but this is core Python stuff -- not really a hack. You certainly can structure it ( using functions ) so that the reader does not need to understand how Python's default arguments work -- but if you do understand how default arguments work, you will read the "lambda: k=k:k" and understand immediately that it is "saving" the current value of "k" ( as the lambda is created ), and attaching it to the lambda itself. This works the same with normal "def" functions, too.Karlee
Jason Orendorff's answer is correct, but this is how we used to emulate closures in Python before Guido finally agreed that nested scopes were a good idea.Culley
H
20

Mod works correctly with negative numbers

-1 % 5 is 4, as it should be, not -1 as it is in other languages like JavaScript. This makes "wraparound windows" cleaner in Python, you just do this:

index = (index + increment) % WINDOW_SIZE
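So stepping backwards through the window needs no special casing either:

```python
WINDOW_SIZE = 5
index = 0
index = (index - 1) % WINDOW_SIZE
# index is now 4, not -1
```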
Historicism answered 19/9, 2008 at 11:50 Comment(2)
In most languages, number = coefficient x quotient + remainder. In Python (and Ruby), quotient is different than in JavaScript (or C or Java), because integer division in Python rounds towards negative infinity, but in JavaScript it rounds towards zero (truncates). I agree that % in Python makes more sense, but I don't know if / does. See en.wikipedia.org/wiki/Modulo_operation for details on each language.Thyestes
In general, if abs(increment) < WINDOW_SIZE, then you can say index = (index + WINDOW_SIZE + increment) in any language, and have it do the right thing.Corneliuscornell
J
19

Nice treatment of infinite recursion in dictionaries:

>>> a = {}
>>> b = {}
>>> a['b'] = b
>>> b['a'] = a
>>> print a
{'b': {'a': {...}}}
Justifier answered 19/9, 2008 at 11:50 Comment(4)
That is just the 'nice treatment' of "print", it doesn't imply a nice treatment across the language.Erelia
Both str() and repr() return the string you posted above. However, the ipython shell returns something a little different, a little more informative: {'b': {'a': <Recursion on dict with id=17830960>}}Monopolize
@denilson: ipython uses pprint module, which is available whithin standard python shell.Antibiotic
+1 for the first one that I had absolutely no idea about whatsoever.Chaunceychaunt
R
19

Passing tuple to builtin functions

Many Python functions accept a tuple of values where it isn't obvious that they do. For example, to test whether a variable is a number, you could write:

if isinstance(number, float) or isinstance(number, int):
   print "yaay"

But if you pass a tuple of types instead, it looks much cleaner:

if isinstance(number, (float, int)):
   print "yaay"
Rowlock answered 19/9, 2008 at 11:50 Comment(4)
cool, is this even documented?Acetum
Yes, but nearly nobody knows about that.Rowlock
What other functions support this?? Good tipMickiemickle
Not sure about other functions, but this is supposed in except (FooError, BarError) clauses.Hoarfrost
L
19

Not very hidden, but functions have attributes:

def doNothing():
    pass

doNothing.monkeys = 4
print doNothing.monkeys
4
Licence answered 19/9, 2008 at 11:50 Comment(4)
It's because functions can be though of as objects with __call__() function defined.Biophysics
It's because functions can be thought of as descriptors with __call__() function defined.Minuteman
Wait, does __call__() also have a __call__() function?Muscolo
I'll bet it's __call__() functions all the way down.Reform
N
19

Assigning and deleting slices:

>>> a = range(10)
>>> a
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
>>> a[:5] = [42]
>>> a
[42, 5, 6, 7, 8, 9]
>>> a[:1] = range(5)
>>> a
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
>>> del a[::2]
>>> a
[1, 3, 5, 7, 9]
>>> a[::2] = a[::-2]
>>> a
[9, 3, 5, 7, 1]

Note: when assigning to extended slices (s[start:stop:step]), the assigned iterable must have the same length as the slice.
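A small sketch of that constraint: the extended slice below names exactly three targets, so assigning two values raises a ValueError.

```python
a = list(range(6))
try:
    a[::2] = [1, 2]   # slice covers indices 0, 2, 4 — three targets
    failed = False
except ValueError:
    failed = True
print(failed)  # → True
```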

Noenoel answered 19/9, 2008 at 11:50 Comment(0)
L
19

Ternary operator

>>> 'ham' if True else 'spam'
'ham'
>>> 'ham' if False else 'spam'
'spam'

This was added in 2.5; prior to that, you could use:

>>> True and 'ham' or 'spam'
'ham'
>>> False and 'ham' or 'spam'
'spam'

However, if the values you want to work with would be considered false, there is a difference:

>>> [] if True else 'spam'
[]
>>> True and [] or 'spam'
'spam'
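A sketch of the classic workaround for that pitfall on pre-2.5 versions: wrap both branches in one-element lists (which are always truthy), then index the result.

```python
# (cond and [true_val] or [false_val])[0] — both list literals are
# truthy, so the and/or chain picks the right one even when the
# payload itself is falsy.
value = (True and [[]] or [['spam']])[0]
print(value)  # → []
```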
Larianna answered 19/9, 2008 at 11:50 Comment(2)
Prior to 2.5, "foo = bar and 'ham' or 'spam'"Shrine
@a paid nerd - not quite: 1 == 1 and 0 or 3 => 3. The and short circuits on the 0 (as it equivalent to False - same deal with "" and None).Greggs
C
19

First-class functions

It's not really a hidden feature, but the fact that functions are first class objects is simply great. You can pass them around like any other variable.

>>> def jim(phrase):
...   return 'Jim says, "%s".' % phrase
>>> def say_something(person, phrase):
...   print person(phrase)

>>> say_something(jim, 'hey guys')
Jim says, "hey guys".
Corissa answered 19/9, 2008 at 11:50 Comment(4)
This also makes callback and hook creation (and, thus, plugin creation for your Python scripts) so trivial that you might not even know you're doing it.Bareilly
Any langauge that doesn't have first class functions (or at least some good substitute, like C function pointers) it is a misfeature. It is completely unbearable to go without.Windup
This might be a stupider question than I intend, but isn't this essentially a function pointer? Or do I have this mixed up?Macario
@inspectorG4dget: It's certainly related to function pointers, in that it can accomplish all of the same purposes, but it's slightly more general, more powerful, and more intuitive. Particularly powerful when you combine it with the fact that functions can have attributes, or the fact that instances of certain classes can be called, but that starts to get arcane.Tannin
B
18

Arguably, this is not a programming feature per se, but so useful that I'll post it nevertheless.

$ python -m http.server

...followed by $ wget http://<ipnumber>:8000/filename somewhere else.

If you are still running an older (2.x) version of Python:

$ python -m SimpleHTTPServer

You can also specify a port, e.g. python -m http.server 80 (so you can omit the port in the URL; binding to port 80 typically requires root privileges on the serving side)

Besmirch answered 19/9, 2008 at 11:50 Comment(0)
B
18

Not "hidden" but quite useful and not commonly used

Creating string joining functions quickly like so

 comma_join = ",".join
 semi_join  = ";".join

 print comma_join(["foo","bar","baz"])
 foo,bar,baz

and

Ability to create lists of strings more elegantly than the quote, comma mess.

l = ["item1", "item2", "item3"]

replaced by

l = "item1 item2 item3".split()
Bump answered 19/9, 2008 at 11:50 Comment(2)
I think these both make the thing more long and obfuscated.Await
I don't know. I've found places where judicious use made things easier to read.Bump
K
18

Reversing a sequence using a negative slice step

>>> s = "Hello World"
>>> s[::-1]
'dlroW olleH'
>>> a = (1,2,3,4,5,6)
>>> a[::-1]
(6, 5, 4, 3, 2, 1)
>>> a = [5,4,3,2,1]
>>> a[::-1]
[1, 2, 3, 4, 5]
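As the comments note, slicing allocates a whole new sequence, while the reversed() builtin yields the items lazily; a quick sketch of the two side by side:

```python
s = "Hello World"
r1 = s[::-1]                 # new string, built eagerly
r2 = ''.join(reversed(s))    # built from a lazy reverse iterator
print(r1)  # → dlroW olleH
```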
Kovrov answered 19/9, 2008 at 11:50 Comment(2)
Good to know, but minor point: that only works with sequences not iterables in general. I.e., (n for n in (1,2,3,4,5))[::-1] doesn't work.Oxen
That notation will actually create a new (reversed) instance of that sequence, which might be undesirable in some cases. For such cases, reversed() function is better, as it returns a reverse iterator instead of allocating a new sequence.Monopolize
K
17

As of Python 2.7 (and Python 3.0), dictionary and set comprehensions are supported:

{a: a for a in range(10)}
{a for a in range(10)}
Kymry answered 19/9, 2008 at 11:50 Comment(5)
there is no such thing as tuples comprehension, and this is not a syntax for dict comprehensions.Mede
Edited the typo with dict comprehensions.Kymry
uh oh, looks like I have to upgrade my version of python so I can play with dict and set comprehensionsManfred
for dictionaries that way is better but dict( (a,a) for a in range(10) ) works too and your error is probably due to remembering this formAppreciative
I cannot wait to use this feature.Chaunceychaunt
S
17

Multiple references to an iterator

You can create multiple references to the same iterator using list multiplication:

>>> i = (1,2,3,4,5,6,7,8,9,10) # or any iterable object
>>> iterators = [iter(i)] * 2
>>> iterators[0].next()
1
>>> iterators[1].next()
2
>>> iterators[0].next()
3

This can be used to group an iterable into chunks, for example, as in this example from the itertools documentation

from itertools import izip_longest  # spelled zip_longest in Python 3

def grouper(n, iterable, fillvalue=None):
    "grouper(3, 'ABCDEFG', 'x') --> ABC DEF Gxx"
    args = [iter(iterable)] * n
    return izip_longest(fillvalue=fillvalue, *args)
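A quick usage sketch of the same recipe on Python 3, where izip_longest is spelled zip_longest:

```python
from itertools import zip_longest  # izip_longest on Python 2

def grouper(n, iterable, fillvalue=None):
    # All n references point at ONE iterator, so zip_longest pulls
    # consecutive items into each tuple.
    args = [iter(iterable)] * n
    return zip_longest(fillvalue=fillvalue, *args)

chunks = [''.join(g) for g in grouper(3, 'ABCDEFG', 'x')]
print(chunks)  # → ['ABC', 'DEF', 'Gxx']
```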
Sleeper answered 19/9, 2008 at 11:50 Comment(4)
You can do the opposite with itertools.tee -- take one iterator and return n that yield the same but do not share state.Bough
I actually don't see the difference to doing this one: "a = iter(i)" and subsequently "b = a" I also get multiple references to the same iterator -- there is no magic about that to me, no hidden feature it is just the normal reference copying stuff of the language. What is done, is creating the iterator, then (the list multiplication) copying this iterator several times. Thats all, its all in the language.Fascinator
@Juergen: indeed, a = iter(i); b = a does the same thing and I could just as well have written that into the answer instead of [iter(i)] * n. Either way, there is no "magic" about it. That's no different from any of the other answers to this question - none of them are "magical", they are all in the language. What makes the features "hidden" is that many people don't realize they're possible, or don't realize interesting ways in which they can be used, until they are pointed out explicitly.Sleeper
Well, for one thing, you can do it an arbitrary number of times with [iter(i)]*n. Also, it isn't necessarily well known (to many people's peril) that list*int creates referential, not actual, copies of the elements of the list. It's good to see that that is actually useful somehow.Chaunceychaunt
I
15

Python can understand any kind of unicode digits, not just the ASCII kind:

>>> s = u'１０５８５'
>>> s
u'\uff11\uff10\uff15\uff18\uff15'
>>> print s
１０５８５
>>> int(s)
10585
>>> float(s)
10585.0
Indue answered 19/9, 2008 at 11:50 Comment(0)
F
14

You can ask any object which module it came from by looking at its __module__ property. This is useful, for example, if you're experimenting at the command line and have imported a lot of things.

Along the same lines, you can ask a module where it came from by looking at its __file__ property. This is useful when debugging path issues.

Fillet answered 19/9, 2008 at 11:50 Comment(0)
H
14

Manipulating sys.modules

You can manipulate the modules cache directly, making modules available or unavailable as you wish:

>>> import sys
>>> import ham
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ImportError: No module named ham

# Make the 'ham' module available -- as a non-module object even!
>>> sys.modules['ham'] = 'ham, eggs, sausages and spam.'
>>> import ham
>>> ham
'ham, eggs, sausages and spam.'

# Now remove it again.
>>> sys.modules['ham'] = None
>>> import ham
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ImportError: No module named ham

This works even for modules that are available, and to some extent for modules that already are imported:

>>> import os
# Stop future imports of 'os'.
>>> sys.modules['os'] = None
>>> import os
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ImportError: No module named os
# Our old imported module is still available.
>>> os
<module 'os' from '/usr/lib/python2.5/os.pyc'>

As the last line shows, changing sys.modules only affects future import statements, not past ones, so if you want to affect other modules it's important to make these changes before you give them a chance to try and import the modules -- typically, before you import them.

None is a special value in sys.modules, used for negative caching (indicating the module was not found the first time, so there's no point in looking again). Any other value will be the result of the import operation -- even when it is not a module object. You can use this to replace modules with objects that behave exactly like you want. Deleting the entry from sys.modules entirely causes the next import to do a normal search for the module, even if it was already imported before.

Hemihydrate answered 19/9, 2008 at 11:50 Comment(1)
And you can do sys.modules['my_module'] = MyClass(), to implement read only attributes 'module' if MyClass has the right hooks.Noodle
A
14

itertools

This module is often overlooked. The following example uses itertools.chain() to flatten a list:

>>> from itertools import *
>>> l = [[1, 2], [3, 4]]
>>> list(chain(*l))
[1, 2, 3, 4]

See http://docs.python.org/library/itertools.html#recipes for more applications.
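A related sketch: chain.from_iterable does the same flattening without unpacking the outer list as *args, so it also works on lazy or very long iterables of iterables.

```python
from itertools import chain

l = [[1, 2], [3, 4]]
flat = list(chain.from_iterable(l))
print(flat)  # → [1, 2, 3, 4]
```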

Artemas answered 19/9, 2008 at 11:50 Comment(0)
I
14

__slots__ is a nice way to save memory, but it's very hard to get a dict of the values of the object. Imagine the following object:

class Point(object):
    __slots__ = ('x', 'y')

Now that object obviously has two attributes. Now we can create an instance of it and build a dict of it this way:

>>> p = Point()
>>> p.x = 3
>>> p.y = 5
>>> dict((k, getattr(p, k)) for k in p.__slots__)
{'y': 5, 'x': 3}

This however won't work if Point was subclassed and new slots were added. However Python automatically implements __reduce_ex__ to help the copy module. This can be abused to get a dict of values:

>>> p.__reduce_ex__(2)[2][1]
{'y': 5, 'x': 3}
Inulin answered 19/9, 2008 at 11:50 Comment(3)
Oh wow, I might actually have good use for this!Bareilly
Beware that __reduce_ex__ can be overridden in subclasses, and since it's also used for pickling, it often is. (If you're making data containers, you should think of using it too! or it's younger siblings __getstate__ and __setstate__.)Hyssop
You can still do object.__reduce_ex__(p, 2)[2][1] then.Inulin
A
13

Guessing integer base

>>> int('10', 0)
10
>>> int('0x10', 0)
16
>>> int('010', 0)  # does not work on Python 3.x
8
>>> int('0o10', 0)  # Python >=2.6 and Python 3.x
8
>>> int('0b10', 0)  # Python >=2.6 and Python 3.x
2
Artemas answered 19/9, 2008 at 11:50 Comment(0)
H
13

One word: IPython

Tab introspection, pretty-printing, %debug, history management, pylab, ... well worth the time to learn well.

Hyssop answered 19/9, 2008 at 11:50 Comment(3)
That's not built in python core is it?Removed
You're right, it's not. And probably with good reason. But I recommend it without reservation to any Python programmer. (However, I heartily recommend turning off autocall. When it does something you don't expect, it can be very hard to realize why.)Hyssop
I also love IPython. I've tried BPython, but it was too slow for me (although I agree it has some cool features).Monopolize
L
13

Some of the builtin favorites, map(), reduce(), and filter(). All extremely fast and powerful.

Lontson answered 19/9, 2008 at 11:50 Comment(7)
Be careful of reduce(), If you're not careful, you can write really slow reductions.Loralorain
And be careful of map(), it's depreciated in 2.6 and removed in 3.0.Bareilly
list comprehensions can achieve everything you can do with any of those functions.Elke
It can also obfuscate Python code if you abuse themConatus
@sil: map still exists in Python 3, as does filter, and reduce exists as functools.reduce.Indue
@recursive: I defy you to produce a list comprehension/generator expression that performs the action of reduce()Windup
The correct statement is "reduce() can achieve everything you can do with map(), filter(), or list comprehensions."Culley
T
12

Extending properties (defined as descriptor) in subclasses

Sometimes it's useful to extend (modify) the value "returned" by a descriptor in a subclass. This can easily be done with super():

class A(object):
    @property
    def prop(self):
        return {'a': 1}

class B(A):
    @property
    def prop(self):
        return dict(super(B, self).prop, b=2)

Store this in test.py and run python -i test.py (another hidden feature: the -i option executes the script and then leaves you in interactive mode):

>>> B().prop
{'a': 1, 'b': 2}
Timepleaser answered 19/9, 2008 at 11:50 Comment(1)
+1 properties! Cant get enough of them.Minuteman
V
12

You can build up a dictionary from a sequence of length-2 sequences. Extremely handy when you have a list of keys and a corresponding list of values.

>>> dict([ ('foo','bar'),('a',1),('b',2) ])
{'a': 1, 'b': 2, 'foo': 'bar'}

>>> names = ['Bob', 'Marie', 'Alice']
>>> ages = [23, 27, 36]
>>> dict(zip(names, ages))
{'Alice': 36, 'Bob': 23, 'Marie': 27}
Vibrator answered 19/9, 2008 at 11:50 Comment(2)
self.data = {} _i = 0 for keys in self.VDESC.split(): self.data[keys] = _data[_i] _i += 1 I replaced my code with this one-liner :) self.data = dict(zip(self.VDESC.split(), _data)) Thanks for the handy tip.Bicorn
Also helps in Python2.x where there is no dict comprehension syntax. Sou you can write dict((x, x**2) for x in range(10)).Elegance
C
11
C
11

The Object Data Model

You can override any operator in the language for your own classes. See this page for a complete list. Some examples:

  • You can override any operator (* + - / // % ^ == < > <= >= . etc.). All this is done by overriding __mul__, __add__, etc. in your objects. You can even override things like __rmul__ to handle separately your_object*something_else and something_else*your_object. . is attribute access (a.b), and can be overridden to handle any arbitrary b by using __getattr__. Also included here is a(…) using __call__.

  • You can create your own slice syntax (a[stuff]), which can be very complicated and quite different from the standard syntax used in lists (numpy has a good example of the power of this in their arrays) using any combination of ,, :, and that you like, using Slice objects.

  • Handle specially what happens with many keywords in the language. Included are del, in, import, and not.

  • Handle what happens when many built in functions are called with your object. The standard __int__, __str__, etc. go here, but so do __len__, __reversed__, __abs__, and the three argument __pow__ (for modular exponentiation).
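A minimal sketch tying a few of these hooks together (Vec is a hypothetical class; Python 3 print syntax):

```python
class Vec:
    def __init__(self, x, y):
        self.x, self.y = x, y
    def __add__(self, other):            # powers v1 + v2
        return Vec(self.x + other.x, self.y + other.y)
    def __rmul__(self, scalar):          # powers scalar * v
        return Vec(scalar * self.x, scalar * self.y)
    def __contains__(self, value):       # powers the `in` keyword
        return value in (self.x, self.y)
    def __repr__(self):
        return 'Vec(%r, %r)' % (self.x, self.y)

v = 2 * Vec(1, 2) + Vec(3, 4)   # __rmul__ first, then __add__
print(v, 5 in v)                # → Vec(5, 8) True
```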

Chaunceychaunt answered 19/9, 2008 at 11:50 Comment(1)
For in you have to override __contains__.Chaunceychaunt
B
11

Creating enums

In Python, you can do this to quickly create an enumeration:

>>> FOO, BAR, BAZ = range(3)
>>> FOO
0

But the "enums" don't have to have integer values. You can even do this:

class Colors(object):
    RED, GREEN, BLUE, YELLOW = (255,0,0), (0,255,0), (0,0,255), (255,255,0)

#now Colors.RED is a 3-tuple that returns the 24-bit 8bpp RGB 
#value for saturated red
Burge answered 19/9, 2008 at 11:50 Comment(0)
C
11

A slight misfeature of python. The normal fast way to join a list of strings together is,

''.join(list_of_strings)
Cementation answered 19/9, 2008 at 11:50 Comment(7)
there are very good reasons that this is a method of string instead of a method of list. this allows the same function to join any iterable, instead of duplicating join for every iterable type.Conde
Yes I know why it does - but would anyone discover this if they hadn't been told?Cementation
Discover? It's pretty hard to remember too, and I've used python since before there were methods om strings.Disperse
If this is too ugly for you to cope with, you can write the very same thing as str.join('',list_of_strings) but other pythonistas may scorn you for trying to write java.Windup
@TokenMacGuy: the reason why ''.join([...]) is preferred is because many people often mixes up the order of the arguments in string.join(..., ...); by putting ''.join() things become clearerHynes
I'm fairly certain that the only reason most pythonistas use "".join(iterable) over str.join("",iterable) is because it's 4 characters shorter.Windup
@TokenMacGuy No. And what is wrong with having split and join in the str-class? It IS easy to remember and btw. this is an example of 'Although practicality beats purity.'Inhuman
O
10

string-escape and unicode-escape encodings

Let's say you have a string from an outside source that contains literal \n, \t and so on. How do you transform them into a real newline or tab? Just decode the string using the string-escape encoding!

>>> print s
Hello\nStack\toverflow
>>> print s.decode('string-escape')
Hello
Stack   overflow

Another problem: you have a normal (byte) string containing unicode escapes like \u01245. How do you make it work? Just decode the string using the unicode-escape encoding!

>>> s = '\u041f\u0440\u0438\u0432\u0456\u0442, \u0441\u0432\u0456\u0442!'
>>> print s
\u041f\u0440\u0438\u0432\u0456\u0442, \u0441\u0432\u0456\u0442!
>>> print unicode(s)
\u041f\u0440\u0438\u0432\u0456\u0442, \u0441\u0432\u0456\u0442!
>>> print unicode(s, 'unicode-escape')
Привіт, світ!
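On Python 3 the string-escape codec is gone and str has no .decode; a hedged equivalent of the first trick goes through bytes and the unicode_escape codec:

```python
# Round-trip through bytes, since only bytes.decode accepts codecs.
s = r'Hello\nStack\toverflow'          # literal backslash-n, backslash-t
decoded = s.encode('ascii').decode('unicode_escape')
print(decoded)
```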
Oilcan answered 19/9, 2008 at 11:50 Comment(0)
T
10

Replacing a bound method at run time:

>>> class foo:
...   def normal_call(self): print "normal_call"
...   def call(self): 
...     print "first_call"
...     self.call = self.normal_call

>>> y = foo()
>>> y.call()
first_call
>>> y.call()
normal_call
>>> y.call()
normal_call
...
Tantalus answered 19/9, 2008 at 11:50 Comment(0)
P
10

The Zen of Python

>>> import this
The Zen of Python, by Tim Peters

Beautiful is better than ugly.
Explicit is better than implicit.
Simple is better than complex.
Complex is better than complicated.
Flat is better than nested.
Sparse is better than dense.
Readability counts.
Special cases aren't special enough to break the rules.
Although practicality beats purity.
Errors should never pass silently.
Unless explicitly silenced.
In the face of ambiguity, refuse the temptation to guess.
There should be one-- and preferably only one --obvious way to do it.
Although that way may not be obvious at first unless you're Dutch.
Now is better than never.
Although never is often better than *right* now.
If the implementation is hard to explain, it's a bad idea.
If the implementation is easy to explain, it may be a good idea.
Namespaces are one honking great idea -- let's do more of those!
Pine answered 19/9, 2008 at 11:50 Comment(4)
Hidden? OTOH, This is one of the selling points of Python.Minuteman
I like the syntax coloring, esp. for Dutch.Chaunceychaunt
Duplicate of a previous answerDamnify
Duplicate of a previous answerNoodle
C
10

The reversed() builtin. It makes iterating much cleaner in many cases.

quick example:

for i in reversed([1, 2, 3]):
    print(i)

produces:

3
2
1

Note, however, that reversed() only works on sequences (objects providing __len__ and __getitem__) or on objects defining __reversed__ -- not on arbitrary iterators such as generator expressions or lines of a file. For those, materialize a list first.

Conde answered 19/9, 2008 at 11:50 Comment(0)
A
10

"Unpacking" to function parameters

def foo(a, b, c):
        print a, b, c

bar = (3, 14, 15)
foo(*bar)

When executed prints:

3 14 15
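The keyword counterpart uses ** to unpack a mapping into named parameters (a hypothetical variant returning a string rather than printing, so the result is easy to check; Python 3 syntax):

```python
def foo(a, b, c):
    return '%s %s %s' % (a, b, c)

kwargs = {'a': 3, 'b': 14, 'c': 15}
print(foo(**kwargs))  # → 3 14 15
```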
Aam answered 19/9, 2008 at 11:50 Comment(1)
This is the canonical alternative to the old "apply()" built-in.Humpage
S
9

The Borg Pattern

This is a killer from Alex Martelli. All instances of Borg share state. This removes the need to employ the singleton pattern (instance identity doesn't matter when state is shared) and is rather elegant (but is a bit more complicated with new-style classes).

The value of foo can be reassigned in any instance and all will be updated, you can even reassign the entire dict. Borg is the perfect name, read more here.

class Borg:
    __shared_state = {'foo': 'bar'}
    def __init__(self):
        self.__dict__ = self.__shared_state
    # rest of your class here

This is perfect for sharing an eventlet.GreenPool to control concurrency.
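A quick demonstration of the shared state (restating the class so the sketch is self-contained; Python 3 print syntax):

```python
class Borg:
    __shared_state = {'foo': 'bar'}
    def __init__(self):
        # Every instance's __dict__ IS the one shared dict.
        self.__dict__ = self.__shared_state

a, b = Borg(), Borg()
a.foo = 'spam'       # rebinding on one instance...
print(b.foo)         # → spam  ...is visible on every other
print(a is b)        # → False (distinct instances, shared state)
```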

Spandau answered 19/9, 2008 at 11:50 Comment(0)
C
9

Flattening a list with sum().

The sum() built-in function can be used to __add__ lists together, providing a handy way to flatten a list of lists (note that this concatenates repeatedly, so it is quadratic in the total length; itertools.chain is a better fit for large inputs):

Python 2.7.1 (r271:86832, May 27 2011, 21:41:45) 
[GCC 4.2.1 (Apple Inc. build 5664)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> l = [[1, 2, 3], [4, 5], [6], [7, 8, 9]]
>>> sum(l, [])
[1, 2, 3, 4, 5, 6, 7, 8, 9]
Coccyx answered 19/9, 2008 at 11:50 Comment(0)
U
9

Dynamically added attributes

This might be useful if you want attributes to spring into existence when they are accessed. It can be done by overriding the __getattribute__ special method, which is called whenever the dot operator is used. So, let's look at a dummy class:

class Dummy(object):
    def __getattribute__(self, name):
        f = lambda: 'Hello with %s'%name
        return f

When you instantiate a Dummy object and do a method call you’ll get the following:

>>> d = Dummy()
>>> d.b()
'Hello with b'

Finally, you can even set the attribute to your class so it can be dynamically defined. This could be useful if you work with Python web frameworks and want to do queries by parsing the attribute's name.

I have a gist at github with this simple code and its equivalent on Ruby made by a friend.

Take care!

Uriah answered 19/9, 2008 at 11:50 Comment(0)
W
9

namedtuple is a tuple

>>> node = namedtuple('node', "a b")
>>> node(1,2) + node(5,6)
(1, 2, 5, 6)
>>> (node(1,2), node(5,6))
(node(a=1, b=2), node(a=5, b=6))
>>> 

Some more experiments to respond to comments:

>>> from collections import namedtuple
>>> import operator
>>> mytuple = namedtuple('A', "a b")
>>> yourtuple = namedtuple('Z', "x y")
>>> mytuple(1,2) + yourtuple(5,6)
(1, 2, 5, 6)
>>> q = [mytuple(1,2), yourtuple(5,6)]
>>> q
[A(a=1, b=2), Z(x=5, y=6)]
>>> reduce(operator.add, q)
(1, 2, 5, 6)

So, namedtuple is an interesting subtype of tuple.

Whisper answered 19/9, 2008 at 11:50 Comment(9)
At this point, you've lost all context. If you don't need the context, or the data isn't structured in a particular way, why a tuple at all? Surely you're just using it as a list?Colb
@Samir Talwar The question/answer is about hidden features. Did you know about this one? I'm not defending one design or the other, but just pointing out what is there. When I first tried to use named tuples, I thought they woulnd't match as tuples do, but... Let me expand the example to show you.Whisper
@Apalala: I had assumed it, but never checked. You're right: it is an interesting and hidden feature. I guess useful is a different thing.Colb
Also fun is that you can feed the result of a namedtuple call directly into a class definition, as in class rectangle(namedtuple("rectangle", "width height")): in order to add custom methodsRose
@Samir Talwar I use namedtuples as the representation for parse trees, and their behavior was useful in merging siblings so they looked more like lists. Imagine the typical grammar productions for a list...Whisper
@Apalala: OK, you've sold me. Can't say it's how I would approach the problem, but the feature is clearly useful.Colb
@Ben Blank. I didn't understand your comment about feeding nametuples to classes.Whisper
@Whisper — Here's an example: pastebin.com/d6e5VMgbRose
@Ben Blank. Incredible! It merits its own answer.Whisper
S
9

Top Secret Attributes

>>> class A(object): pass
>>> a = A()
>>> setattr(a, "can't touch this", 123)
>>> dir(a)
['__class__', '__delattr__', '__dict__', '__doc__', '__format__', '__getattribute__', '__hash__', '__init__', '__module__', '__new__', '__reduce__', '__reduce_ex__', '__repr__', '__setattr__', '__sizeof__', '__str__', '__subclasshook__', '__weakref__', "can't touch this"]
>>> a.can't touch this # duh
  File "<stdin>", line 1
    a.can't touch this
                     ^
SyntaxError: EOL while scanning string literal
>>> getattr(a, "can't touch this")
123
>>> setattr(a, "__class__.__name__", ":O")
>>> a.__class__.__name__
'A'
>>> getattr(a, "__class__.__name__")
':O'
Saucier answered 19/9, 2008 at 11:50 Comment(1)
AHHHH! Bad, bad, bad!Chaunceychaunt
P
9

Creating dictionary of two sequences that have related data

In [15]: t1 = (1, 2, 3)

In [16]: t2 = (4, 5, 6)

In [17]: dict (zip(t1,t2))
Out[17]: {1: 4, 2: 5, 3: 6}
Petersham answered 19/9, 2008 at 11:50 Comment(0)
H
9

unzip un-needed in Python

Someone blogged about Python not having an unzip function to go with zip(). unzip is straight-forward to calculate because:

>>> t1 = (0,1,2,3)
>>> t2 = (7,6,5,4)
>>> [t1,t2] == zip(*zip(t1,t2))
True

On reflection though, I'd rather have an explicit unzip().
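A minimal explicit unzip along the lines the answer wishes for (hypothetical helper name; wrapped in tuple() so it also works on Python 3, where zip is lazy):

```python
def unzip(pairs):
    # zip(*pairs) transposes an iterable of tuples.
    return tuple(zip(*pairs))

xs, ys = unzip([(0, 7), (1, 6), (2, 5), (3, 4)])
print(xs, ys)  # → (0, 1, 2, 3) (7, 6, 5, 4)
```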

Hammond answered 19/9, 2008 at 11:50 Comment(3)
def unzip(x): return zip(*x) Done!Detrusion
The solution is slightly subtle (I can understand the point of view of anyone who asks for it), but I can also see why it would be redundantMacario
+1. I was going to add this, but it seems I was beat to it.Chaunceychaunt
E
8

threading.enumerate() gives access to all Thread objects in the system and sys._current_frames() returns the current stack frames of all threads in the system, so combine these two and you get Java style stack dumps:

import sys, threading, traceback

def dumpstacks(signal, frame):
    id2name = dict([(th.ident, th.name) for th in threading.enumerate()])
    code = []
    for threadId, stack in sys._current_frames().items():
        code.append("\n# Thread: %s(%d)" % (id2name[threadId], threadId))
        for filename, lineno, name, line in traceback.extract_stack(stack):
            code.append('File: "%s", line %d, in %s' % (filename, lineno, name))
            if line:
                code.append("  %s" % (line.strip()))
    print "\n".join(code)

import signal
signal.signal(signal.SIGQUIT, dumpstacks)

Do this at the beginning of a multi-threaded python program and you get access to current state of threads at any time by sending a SIGQUIT. You may also choose signal.SIGUSR1 or signal.SIGUSR2.

See

Erelia answered 19/9, 2008 at 11:50 Comment(0)
B
8

pdb — The Python Debugger

As a programmer, one of the first things that you need for serious program development is a debugger. Python has one built-in which is available as a module called pdb (for "Python DeBugger", naturally!).

http://docs.python.org/library/pdb.html

Bernhardt answered 19/9, 2008 at 11:50 Comment(0)
V
7

infinite recursion in list

>>> a = [1,2]
>>> a.append(a)
>>> a
[1, 2, [...]]
>>> a[2]
[1, 2, [...]]
>>> a[2][2][2][2][2][2][2][2][2] == a
True
Vesper answered 19/9, 2008 at 11:50 Comment(1)
i don't think it's a Python feature. nor it's hidden. where this can be used?Noodle
E
7

Operators can be called as functions:

from operator import add
print reduce(add, [1,2,3,4,5,6])
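On Python 3 the same idea needs functools.reduce; a small sketch with two operator functions:

```python
from functools import reduce   # a builtin on Python 2
from operator import add, mul

print(reduce(add, [1, 2, 3, 4, 5, 6]))  # → 21
print(reduce(mul, range(1, 6)))         # → 120
```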
Erythroblastosis answered 19/9, 2008 at 11:50 Comment(3)
? what did you think operators are?Cule
sorry, i dont get your point..what do you think that we think operators are?Cule
@Ant, if you were already aware of operators being functions, you can disregard this tip. Not all languages implement operators as functions, so a person coming from another language might not have known this.Erythroblastosis
O
7

A backslash inside a raw string still prevents the following quote from terminating the string (although the backslash itself remains in the result). See this:

>>> print repr(r"aaa\"bbb")
'aaa\\"bbb'

Note that both the backslash and the double-quote are present in the final string.

As consequence, you can't end a raw string with a backslash:

>>> print repr(r"C:\")
SyntaxError: EOL while scanning string literal
>>> print repr(r"C:\"")
'C:\\"'

This happens because raw strings were implemented to help writing regular expressions, and not to write Windows paths. Read a long discussion about this at Gotcha — backslashes in Windows filenames.

Otocyst answered 19/9, 2008 at 11:50 Comment(3)
Note that the backslash is still part of the string afterwards... So one might not regard this as regular escaping.Handicraft
You're probably better off just using single quotes ' for the outer string.Chaunceychaunt
Or just use (forward) slashes, as the Windows API will translate them automatically, then you can finally forget about DOS-style paths. (Though you must use backslashes for "\\server\share\path\file" style resources)Shane
H
7

Reloading modules enables a "live-coding" style. But class instances don't update. Here's why, and how to get around it. Remember, everything, yes, everything is an object.

>>> from a_package import a_module
>>> cls = a_module.SomeClass
>>> obj = cls()
>>> obj.method()
(old method output)

Now you change the method in a_module.py and want to update your object.

>>> reload(a_module)
>>> a_module.SomeClass is cls
False # Because it just got freshly created by reload.
>>> obj.method()
(old method output)

Here's one way to update it (but consider it running with scissors):

>>> obj.__class__ is cls
True # it's the old class object
>>> obj.__class__ = a_module.SomeClass # pick up the new class
>>> obj.method()
(new method output)

This is "running with scissors" because the object's internal state may be different than what the new class expects. This works for really simple cases, but beyond that, pickle is your friend. It's still helpful to understand why this works, though.

Hyssop answered 19/9, 2008 at 11:50 Comment(1)
+1 for suggesting pickle (or cPickle). It was really helpful for me, some weeks ago.Monopolize
P
7

inspect module is also a cool feature.

Parsee answered 19/9, 2008 at 11:50 Comment(0)
P
7

...that dict.get() has a default value of None, thereby avoiding KeyErrors:

In [1]: test = { 1 : 'a' }

In [2]: test[2]
---------------------------------------------------------------------------
<type 'exceptions.KeyError'>              Traceback (most recent call last)

<ipython console> in <module>()

<type 'exceptions.KeyError'>: 2

In [3]: test.get( 2 )

In [4]: test.get( 1 )
Out[4]: 'a'

In [5]: test.get( 2 ) == None
Out[5]: True

and even to specify this 'at the scene':

In [6]: test.get( 2, 'Some' ) == 'Some'
Out[6]: True

And you can use setdefault() to have a value set and returned if it doesn't exist:

>>> a = {}
>>> b = a.setdefault('foo', 'bar')
>>> a
{'foo': 'bar'}
>>> b
'bar'
Paxton answered 19/9, 2008 at 11:50 Comment(0)
S
6

Rounding: Python has the function round, which returns numbers of type float:

 >>> print round(1123.456789, 4)
1123.4568
 >>> print round(1123.456789, 2)
1123.46
 >>> print round(1123.456789, 0)
1123.0

This function has a wonderful magic property:

 >>> print round(1123.456789, -1)
1120.0
 >>> print round(1123.456789, -2)
1100.0

If you need an integer as a result use int to convert type:

 >>> print int(round(1123.456789, -2))
1100
 >>> print int(round(8359980, -2))
8360000

Thank you Gregor.

Starofbethlehem answered 19/9, 2008 at 11:50 Comment(0)
C
6

Slices as lvalues. This Sieve of Eratosthenes produces a list in which each element is either a prime number or 0. Elements are zeroed out with the slice assignment in the loop.

def eras(n):
    last = n + 1
    sieve = [0,0] + list(range(2, last))
    sqn = int(round(n ** 0.5))
    it = (i for i in xrange(2, sqn + 1) if sieve[i])
    for i in it:
        sieve[i*i:last:i] = [0] * (n//i - i + 1)
    return filter(None, sieve)

For an extended slice (one with a step) to work, the list assigned on the right must have the same length as the slice on the left.

Christianachristiane answered 19/9, 2008 at 11:50 Comment(0)
H
6

Python 2.x ignores a trailing comma after the last element of a sequence:

>>> a_tuple_for_instance = (0,1,2,3,)
>>> another_tuple = (0,1,2,3)
>>> a_tuple_for_instance == another_tuple
True

A trailing comma causes a single parenthesized element to be treated as a sequence:

>>> a_tuple_with_one_element = (8,)
House answered 19/9, 2008 at 11:50 Comment(1)
Python3 ignores them as well.Tamishatamma
B
6

Slices & Mutability

Copying lists

>>> x = [1,2,3]
>>> y = x[:]
>>> y.pop()
3
>>> y
[1, 2]
>>> x
[1, 2, 3]

Replacing lists

>>> x = [1,2,3]
>>> y = x
>>> y[:] = [4,5,6]
>>> x
[4, 5, 6]
Beholden answered 19/9, 2008 at 11:50 Comment(0)
S
6

Manipulating Recursion Limit

Getting or setting the maximum depth of recursion with sys.getrecursionlimit() & sys.setrecursionlimit().

We can limit it to prevent a stack overflow caused by infinite recursion.
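For instance (RecursionError is the Python 3.5+ name; older versions raise RuntimeError):

```python
import sys

def countdown(n):
    return n if n == 0 else countdown(n - 1)

old_limit = sys.getrecursionlimit()   # typically 1000
sys.setrecursionlimit(100)
try:
    countdown(500)                    # deeper than the new limit
except RecursionError:
    print("recursion limit hit")
finally:
    sys.setrecursionlimit(old_limit)  # restore the default
```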

Salter answered 19/9, 2008 at 11:50 Comment(0)
L
6

You can decorate functions with classes - replacing the function with a class instance:

class countCalls(object):
    """ decorator replaces a function with a "countCalls" instance
    which behaves like the original function, but keeps track of calls

    >>> @countCalls
    ... def doNothing():
    ...     pass
    >>> doNothing()
    >>> doNothing()
    >>> print doNothing.timesCalled
    2
    """
    def __init__ (self, functionToTrack):
        self.functionToTrack = functionToTrack
        self.timesCalled = 0
    def __call__ (self, *args, **kwargs):
        self.timesCalled += 1
        return self.functionToTrack(*args, **kwargs)
Licence answered 19/9, 2008 at 11:50 Comment(0)
T
6

Small integer objects (-5 .. 256) are never created twice:


>>> a1 = -5; b1 = 256
>>> a2 = -5; b2 = 256
>>> id(a1) == id(a2), id(b1) == id(b2)
(True, True)
>>>
>>> c1 = -6; d1 = 257
>>> c2 = -6; d2 = 257
>>> id(c1) == id(c2), id(d1) == id(d2)
(False, False)
>>>

Edit: List objects are never destroyed (only the objects they contain). CPython keeps a free list of up to 80 empty list objects. When you destroy a list, Python puts it on that free list, and when you create a new list, Python reuses the most recently freed one:


>>> a = [1,2,3]; a_id = id(a)
>>> b = [1,2,3]; b_id = id(b)
>>> del a; del b
>>> c = [1,2,3]; id(c) == b_id
True
>>> d = [1,2,3]; id(d) == a_id
True
>>>

Treharne answered 19/9, 2008 at 11:50 Comment(2)
This feature is implementation dependent, so you shouldn't rely on it.Timepleaser
As Denis said, do not rely on this behavior. It doesn't work, for example, in PyPy, and your code will break miserably in that if you try to use it.Chaunceychaunt
T
6

You can override the MRO (method resolution order) of a class with a metaclass:

>>> class A(object):
...     def a_method(self):
...         print("A")
... 
>>> class B(object):
...     def b_method(self):
...         print("B")
... 
>>> class MROMagicMeta(type):
...     def mro(cls):
...         return (cls, B, object)
... 
>>> class C(A, metaclass=MROMagicMeta):
...     def c_method(self):
...         print("C")
... 
>>> cls = C()
>>> cls.c_method()
C
>>> cls.a_method()
Traceback (most recent call last):
 File "<stdin>", line 1, in <module>
AttributeError: 'C' object has no attribute 'a_method'
>>> cls.b_method()
B
>>> type(cls).__bases__
(<class '__main__.A'>,)
>>> type(cls).__mro__
(<class '__main__.C'>, <class '__main__.B'>, <class 'object'>)

It's probably hidden for a good reason. :)

Timeserver answered 19/9, 2008 at 11:50 Comment(2)
That's playing with fire, and asking for ethernal damnation. Better have good reason ;)Favoritism
Does not work with python 2.x. Use __metaclass__ = MROMagicMeta instead.Tamishatamma
F
6

Nested Function Parameter Re-binding

def create_printers(n):
    for i in xrange(n):
        def printer(i=i): # Doesn't work without the i=i
            print i
        yield printer
Forgather answered 19/9, 2008 at 11:50 Comment(3)
it works without it, but differently. :-)Indue
No, it doesn't work without it. Omit the i=i and see the difference between map(apply, create_printers(10)) and map(apply, list(apply_printers(10))), where converting to a list consumes the generator and now all ten printer functions have i bound to the same value: 9, where calling them one at a time calls them before the next iteration of the generator changes the int i is bound to in the outer scope.Forgather
I think what @kaizer.se is saying is that when you omit the i=i the i in the printer function references the i from the for loop rather than the local i that is created when a new printer function is created with the i=i keyword arg. So it still does work (it yields functions, each with access to a closure) but it doesn't work in the way you'd expect without explicitly creating a local variable.Clannish
I
6

Built-in methods and functions don't implement the descriptor protocol, which makes it impossible to do stuff like this:

>>> class C(object):
...  id = id
... 
>>> C().id()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: id() takes exactly one argument (0 given)

However you can create a small bind descriptor that makes this possible:

>>> from types import MethodType
>>> class bind(object):
...  def __init__(self, callable):
...   self.callable = callable
...  def __get__(self, obj, type=None):
...   if obj is None:
...    return self
...   return MethodType(self.callable, obj, type)
... 
>>> class C(object):
...  id = bind(id)
... 
>>> C().id()
7414064
Inulin answered 19/9, 2008 at 11:50 Comment(2)
It's simpler and easier to do this as a property, in this case: class C(object): id = property(id)Aright
lambda is also a good alternative: class C(object): id = lambda s, *a, **kw: id(*a, **kw); and a better version of bind: def bind(callable): return lambda s, *a, **kw: callable(*a, **kw)Hynes
F
6

Ability to substitute even things like file deletion or file opening - direct manipulation of the language's library. This is a huge advantage when testing: you don't have to wrap everything in complicated containers, just substitute a function/method and go. This is also called monkey-patching.
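A minimal sketch of the idea: record calls to os.remove in a test without touching the filesystem, then restore the real function (the path is hypothetical):

```python
import os

deleted = []
real_remove = os.remove           # keep a reference to the real function

# Substitute a recording stub - code under test calls os.remove as usual.
os.remove = lambda path: deleted.append(path)
try:
    os.remove("/tmp/report.tmp")  # hypothetical path, never actually deleted
finally:
    os.remove = real_remove       # always undo the patch

print(deleted)   # ['/tmp/report.tmp']
```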

Fetter answered 19/9, 2008 at 11:50 Comment(1)
Creating a test harness which provides classes that have the same interfaces as the objects which would be manipulated by the code under test (the subjects of our testing) is referred to as "Mocking" (these are called "Mock Classes" and their instances are "Mock Objects").Humpage
G
5

Set Comprehensions

>>> {i**2 for i in range(5)}                                                       
set([0, 1, 4, 16, 9])

Python documentation

Wikipedia Entry

Gwendolin answered 19/9, 2008 at 11:50 Comment(1)
This is already covered in https://mcmap.net/q/16685/-hidden-features-of-python-closed.Bromeosin
G
5

Dict Comprehensions

>>> {i: i**2 for i in range(5)}
{0: 0, 1: 1, 2: 4, 3: 9, 4: 16}

Python documentation

Wikipedia Entry

Gwendolin answered 19/9, 2008 at 11:50 Comment(0)
I
5

Here are 2 easter eggs:


One in python itself:

>>> import __hello__
Hello world...

And another one in the Werkzeug module, which is a bit complicated to reveal, here it is:

By looking at Werkzeug's source code, in werkzeug/__init__.py, there is a line that should draw your attention:

'werkzeug._internal':   ['_easteregg']

If you're a bit curious, this should lead you to have a look at werkzeug/_internal.py. There you'll find an _easteregg() function that takes a WSGI application as an argument; it also contains some base64-encoded data and two nested functions that seem to do something special if an argument named macgybarchakku is found in the query string.

So, to reveal this easter egg, it seems you need to wrap an application in the _easteregg() function, let's go:

from werkzeug import Request, Response, run_simple
from werkzeug import _easteregg

@Request.application
def application(request):
    return Response('Hello World!')

run_simple('localhost', 8080, _easteregg(application))

Now, if you run the app and visit http://localhost:8080/?macgybarchakku, you should see the easter egg.

Interlope answered 19/9, 2008 at 11:50 Comment(0)
G
5

Simple built-in benchmarking tool

The Python Standard Library comes with a very easy-to-use benchmarking module called "timeit". You can even use it from the command line to see which of several language constructs is the fastest.

E.g.,

% python -m timeit 'r = range(0, 1000)' 'for i in r: pass'
10000 loops, best of 3: 48.4 usec per loop

% python -m timeit 'r = xrange(0, 1000)' 'for i in r: pass'
10000 loops, best of 3: 37.4 usec per loop
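The same tool is available from code via timeit.timeit (timings depend on your machine, so no output is shown):

```python
import timeit

# Time a snippet 10000 times; setup runs once and is not timed.
seconds = timeit.timeit("for i in r: pass",
                        setup="r = range(1000)",
                        number=10000)
print("total: %.4f s, per loop: %.2f usec"
      % (seconds, seconds / 10000 * 1e6))
```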
Gasiform answered 19/9, 2008 at 11:50 Comment(0)
H
5

getattr takes a third parameter

getattr(obj, attribute_name, default) is like:

try:
    return obj.attribute
except AttributeError:
    return default

except that attribute_name can be any string.

This can be really useful for duck typing. Maybe you have something like:

class MyThing:
    pass
class MyOtherThing:
    pass
if isinstance(obj, (MyThing, MyOtherThing)):
    process(obj)

(btw, isinstance(obj, (a,b)) means isinstance(obj, a) or isinstance(obj, b).)

When you make a new kind of thing, you'd need to add it to that tuple everywhere it occurs. (That construction also causes problems when reloading modules or importing the same file under two names. It happens more than people like to admit.) But instead you could say:

class MyThing:
    processable = True
class MyOtherThing:
    processable = True
if getattr(obj, 'processable', False):
    process(obj)

Add inheritance and it gets even better: all of your examples of processable objects can inherit from

class Processable:
    processable = True

but you don't have to convince everybody to inherit from your base class, just to set an attribute.

Hyssop answered 19/9, 2008 at 11:50 Comment(0)
T
5

In addition to this mentioned earlier by haridsv:

>>> foo = bar = baz = 1
>>> foo, bar, baz
(1, 1, 1)

it's also possible to do this:

>>> foo, bar, baz = 1, 2, 3
>>> foo, bar, baz
(1, 2, 3)
Taratarabar answered 19/9, 2008 at 11:50 Comment(0)
R
5

I'm not sure where (or whether) this is in the Python docs, but for Python 2.x (at least 2.5 and 2.6, which I just tried), the print statement can be called with parentheses. This can be useful if you want to be able to easily port some Python 2.x code to Python 3.x.

Example: print('We want Moshiach Now') should print We want Moshiach Now in Python 2.5, 2.6, and 3.x.

Also, the not operator can be called with parentheses in Python 2 and 3: not False and not(False) should both return True.

Parentheses might also work with other statements and operators.

EDIT: it's NOT a good idea to put parentheses around not operators (and probably any other operators), since it can lead to surprising situations like the following (this happens because the parentheses are really just around the 1):

>>> (not 1) == 9
False

>>> not(1) == 9
True

This also works for some values (I think those where what follows cannot be part of a valid identifier name): not'val' should return False, and print'We want Moshiach Now' should print We want Moshiach Now. (But not552 would raise a NameError, since not552 is a valid identifier name.)

Reminisce answered 19/9, 2008 at 11:50 Comment(2)
Side-effect of one of the basic design rules of the Python syntax. Parentheses and whitespace can be varied in pretty much any way that doesn't make the meaning ambiguous. (Which is why you get more freedom to word-wrap things like if/while statements if you put the test body in brackets.)Anyplace
What ssokolow said is correct. In python 2.6 the language was updated to be (more) compatible with python 3. In python 3+ parenthesis are required to call print. see here for more information: docs.python.org/whatsnew/2.6.html#pep-3105-print-as-a-functionTimbering
B
5

Monkeypatching objects

Most objects in Python have a __dict__ member, which stores the object's attributes. So, you can do something like this:

class Foo(object):
    def __init__(self, arg1, arg2, **kwargs):
        #do stuff with arg1 and arg2
        self.__dict__.update(kwargs)

f = Foo('arg1', 'arg2', bar=20, baz=10)
#now f is a Foo object with two extra attributes

This can be exploited to add both attributes and functions arbitrarily to objects. This can also be exploited to create a quick-and-dirty struct type.

class struct(object):
    def __init__(self, **kwargs):
        self.__dict__.update(kwargs)

s = struct(foo=10, bar=11, baz="i'm a string!")
Burge answered 19/9, 2008 at 11:50 Comment(2)
except for the classes with __slots__Misbegotten
Except for some "primitive" types implemented in C (for performance reasons, I guess). For instance, after a = 2, there is no a.__dict__Monopolize
H
5

The pythonic idiom x = ... if ... else ... is far superior to x = ... and ... or ... and here is why:

Although the statement

x = 3 if (y == 1) else 2

Is equivalent to

x = y == 1 and 3 or 2

if you use the x = ... and ... or ... idiom, some day you may get bitten by this tricky situation:

x = True and 0 or 1   # sets x equal to 1

which is not equivalent to

x = 0 if True else 1    # sets x equal to 0

The and/or version breaks whenever the "true" branch value is itself falsy (0, '', None, ...).

For more on the proper way to do this, see Hidden features of Python.

Heracliteanism answered 19/9, 2008 at 11:50 Comment(0)
C
5

Exposing Mutable Buffers

Using the Python Buffer Protocol to expose mutable byte-oriented buffers in Python (2.5/2.6).

(Sorry, no code here. Requires use of low-level C API or existing adapter module).
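From pure Python, bytearray plus memoryview (available since Python 2.7) gives a taste of the same idea - a writable view into a buffer, with no copying:

```python
buf = bytearray(b"hello world")
view = memoryview(buf)

# Writing through the view mutates the underlying buffer in place.
view[0:5] = b"HELLO"
print(buf)   # bytearray(b'HELLO world')
```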

Cochlea answered 19/9, 2008 at 11:50 Comment(0)
L
5

import antigravity

Licking answered 19/9, 2008 at 11:50 Comment(1)
this answer was already givenOtt
B
5

__getattr__()

__getattr__ is a really nice way to make generic classes, which is especially useful if you're writing an API. For example, in the FogBugz Python API, __getattr__ is used to pass method calls on to the web service seamlessly:

class FogBugz:
    ...

    def __getattr__(self, name):
        # Let's leave the private stuff to Python
        if name.startswith("__"):
            raise AttributeError("No such attribute '%s'" % name)

        if not self.__handlerCache.has_key(name):
            def handler(**kwargs):
                return self.__makerequest(name, **kwargs)
            self.__handlerCache[name] = handler
        return self.__handlerCache[name]
    ...

When someone calls FogBugz.search(q='bug'), they don't actually call a search method. Instead, __getattr__ handles the call by creating a new function that wraps the __makerequest method, which crafts the appropriate HTTP request to the web API. Any errors raised by the web service are passed back to the user.

Butterfield answered 19/9, 2008 at 11:50 Comment(1)
You can also create semi-custom types in this manner.Bareilly
I
5

If you are using descriptors on your classes, Python completely bypasses __dict__ for that key, which makes it a nice place to store such values:

>>> class User(object):
...  def _get_username(self):
...   return self.__dict__['username']
...  def _set_username(self, value):
...   print 'username set'
...   self.__dict__['username'] = value
...  username = property(_get_username, _set_username)
...  del _get_username, _set_username
... 
>>> u = User()
>>> u.username = "foo"
username set
>>> u.__dict__
{'username': 'foo'}

This helps to keep dir() clean.

Inulin answered 19/9, 2008 at 11:50 Comment(0)
P
5

Too lazy to initialize every field in a dictionary? No problem:

In Python 2.5 and later:

from collections import defaultdict

In earlier versions:

def defaultdict(type_):
    class Dict(dict):
        def __getitem__(self, key):
            return self.setdefault(key, type_())
    return Dict()

In any version:

d = defaultdict(list)
for stuff in lots_of_stuff:
     d[stuff.name].append(stuff)

UPDATE:

Thanks Ken Arnold. I reimplemented a more sophisticated version of defaultdict. It should behave exactly as the one in the standard library.

def defaultdict(default_factory, *args, **kw):                              

    class defaultdict(dict):

        def __missing__(self, key):
            if default_factory is None:
                raise KeyError(key)
            return self.setdefault(key, default_factory())

        def __getitem__(self, key):
            try:
                return dict.__getitem__(self, key)
            except KeyError:
                return self.__missing__(key)

    return defaultdict(*args, **kw)
Pouf answered 19/9, 2008 at 11:50 Comment(4)
You may be interested to learn about collections.defaultdict(list).Hemihydrate
Thanks. Does not work on my production environment though. Python 2.3.Pouf
Careful, that defaultdict reimplementation ends up calling type_ on every lookup instead of only when the item is missing.Hyssop
Prior to python 2.2, you could not subclass dict directly, so you'd need to subclass from UserDict.UserDict. Better still would be to upgrade.Windup
G
5

List comprehensions

list comprehensions

Compare the more traditional (without list comprehension):

foo = []
for x in xrange(10):
  if x % 2 == 0:
     foo.append(x)

to:

foo = [x for x in xrange(10) if x % 2 == 0]
Gringo answered 19/9, 2008 at 11:50 Comment(7)
In what way is list comprehensions a hidden feature of Python ?Nganngc
They are probably "hidden" for former C & Java programmers who haven't seen such features before, don't think to look for it and ignore it if they see it in a tutorial. OTOH a Haskell programmer will notice it immediately.Hinkley
The question does ask for "an example and short description of the feature, not just a link to documentation". Any chance of adding one?Alberich
List comprehensions were implemented by Greg Ewing, who was a postdoc at a department where they taught functional programming in a first-year paper.Pederson
If this was a hidden feature of python there would have been 40% more lines of code written in python today.Secunderabad
It took me ages to find list comprehensions in Python. Can't live without them now, of course...Burge
+1 I think that nested list comprehensions should also be mentioned: stackoverflow.com/questions/1198777/…Macario
C
4

insert vs append

not a feature, but may be interesting

suppose you want to insert some data in a list, and then reverse it. the easiest thing is

count = 10 ** 5
nums = []
for x in range(count):
    nums.append(x)
nums.reverse()

then you think: what about inserting the numbers from the beginning, instead? so:

count = 10 ** 5 
nums = [] 
for x in range(count):
    nums.insert(0, x)

but it turns out to be about 100 times slower! If we set count = 10 ** 6, it will be 1,000 times slower. This is because the loop with insert is O(n^2) overall, while the loop with append is O(n).

the reason for that difference is that insert(0, x) has to shift every existing element each time it's called; append just adds the element at the end of the list (sometimes it has to reallocate everything, but it's still much faster - amortized O(1))
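If you genuinely need cheap inserts at the front, collections.deque supports O(1) appends and pops at either end:

```python
from collections import deque

count = 10 ** 5
nums = deque()
for x in range(count):
    nums.appendleft(x)       # O(1), unlike list.insert(0, x)

print(nums[0], nums[-1])     # 99999 0
```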

Cule answered 19/9, 2008 at 11:50 Comment(4)
Or you can use nums.reverse() and have it done by the core - without the need to use range()Outroar
i don't get your point, sorry..Cule
The fact python lists are implemented with arrays is interesting; however, the example is not that useful, because the idiomatic way to reverse a list is to use reverse method, without any additional step.Outroar
And that would be why collections.deque exists - you can insert and pop entries from either end in O(1)Chintz
G
4

Combine unpacking with the print function:

# in 2.6 <= python < 3.0, 3.0 + the print function is native
from __future__ import print_function 

mylist = ['foo', 'bar', 'some other value', 1,2,3,4]  
print(*mylist)
Gringo answered 19/9, 2008 at 11:50 Comment(6)
I prefer something like print(' '.join([str(x) for x in mylist])). Using unpacking like this is too clever.Carnotite
Performance wise I think the 'clever' version is faster (after doing some completely non-scientific tests). Plus you know * means you're unpacking a list or tuple, and you can use the sep keyword.Gringo
I find this clean and simple, but I always wonder why pylint insists there's too much magic in there ;)Clothilde
@Paweł Prażak: I believe PyLint simply considers * and ** to be too magical, period.Anyplace
maybe some people are just allergic to * and ** because of pointer and double pointer resemblance ;)Clothilde
@Carnotite I would drop the list and use generator print(' '.join(word for word in mylist))Clothilde
E
4

You can assign several variables to the same value

>>> foo = bar = baz = 1
>>> foo, bar, baz
(1, 1, 1)

Useful to initialize several variables to None in a compact way.

Erelia answered 19/9, 2008 at 11:50 Comment(3)
You could also do: foo, bar, baz = [None]*3 to get the same result.Hardee
You can also compare multiple things at once, like foo == bar == baz. It's essentially the same thing as (what is right now) the top answer.Chaunceychaunt
Also be aware that this will only create the value once, and all the variables will reference that one same value. It's fine for None, though, since it is a singleton object.Chaunceychaunt
T
4

There are no secrets in Python ;)

Tough answered 19/9, 2008 at 11:50 Comment(0)
L
4

With a minute amount of work, the threading module becomes amazingly easy to use. This decorator changes a function so that it runs in its own thread, returning a placeholder class instance instead of its regular result. You can probe for the answer by checking placeholder.result, or wait for it by calling placeholder.awaitResult().

import threading
import time

def threadify(function):
    """
    exceptionally simple threading decorator. Just:
    >>> @threadify
    ... def longOperation(result):
    ...     time.sleep(3)
    ...     return result
    >>> A= longOperation("A has finished")
    >>> B= longOperation("B has finished")

    A doesn't have a result yet:
    >>> print A.result
    None

    until we wait for it:
    >>> print A.awaitResult()
    A has finished

    we could also wait manually - half a second more should be enough for B:
    >>> time.sleep(0.5); print B.result
    B has finished
    """
    class thr (threading.Thread,object):
        def __init__(self, *args, **kwargs):
            threading.Thread.__init__ ( self )  
            self.args, self.kwargs = args, kwargs
            self.result = None
            self.start()
        def awaitResult(self):
            self.join()
            return self.result        
        def run(self):
            self.result=function(*self.args, **self.kwargs)
    return thr
Licence answered 19/9, 2008 at 11:50 Comment(1)
You may be interested in the concurrent.futures module added in Python 3.2Chintz
C
4

Method replacement for object instance

You can replace methods of already created object instances. This allows you to create an object instance with different (exceptional) functionality:

>>> class C(object):
...     def fun(self):
...         print "C.a", self
...
>>> inst = C()
>>> inst.fun()  # C.a method is executed
C.a <__main__.C object at 0x00AE74D0>
>>> instancemethod = type(C.fun)
>>>
>>> def fun2(self):
...     print "fun2", self
...
>>> inst.fun = instancemethod(fun2, inst, C)  # Now we are replace C.a by fun2
>>> inst.fun()  # ... and fun2 is executed
fun2 <__main__.C object at 0x00AE74D0>

As we can see, C.fun was replaced by fun2() in the inst instance (self didn't change).

Alternatively we may use the new module, but it has been deprecated since Python 2.6:

>>> def fun3(self):
...     print "fun3", self
...
>>> import new
>>> inst.fun = new.instancemethod(fun3, inst, C)
>>> inst.fun()
fun3 <__main__.C object at 0x00AE74D0>

Note: This solution shouldn't be used as a general replacement for the inheritance mechanism! But it may be very handy in some specific situations (debugging, mocking).

Warning: This solution will not work for built-in types and for new style classes using slots.

Clarita answered 19/9, 2008 at 11:50 Comment(1)
I personally tend to prefer to leave instancemethod to classes; paticularly so that the binding behavior foo.method works normally. If I'm binding self explicitly, I'll instead use functools.partial, which achieves the same effect, but makes it a bit clearer that the binding behavior is explicit.Windup
H
4

Taking advantage of Python's dynamic nature to keep an app's config file in Python syntax. For example, if you had the following in a config file:

{
  "name1": "value1",
  "name2": "value2"
}

Then you could trivially read it like:

config = eval(open("filename").read())
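If the config is pure data (no expressions), ast.literal_eval gives the same convenience without executing arbitrary code - it only accepts Python literals such as dicts, lists, strings, and numbers:

```python
import ast

# In practice you would read this text from the config file.
text = '{"name1": "value1", "name2": "value2"}'
config = ast.literal_eval(text)
print(config["name1"])   # value1
```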
Hammerfest answered 19/9, 2008 at 11:50 Comment(14)
I agree. I've started using a settings.py or config.py file which I then load as a module. Sure beats the extra steps of parsing some other file format.Ellie
I can see this becoming a security issue.Smegma
It could be, but sometimes it's not. In those cases, it's awesome.Elke
Python can be a much more expressive configuration language than any amount of XML or INI files. I'm trying to avoid explicit config, with just an invoke script that does “import myapp; app= myapp.Application(...); app.run()”. Options default sensibly but can be changed using constructor args.Catabolism
(This assumes that run-time configuration in the app itself is stored in a database. More significant configuration is possible through allowing the user to subclass Application and set properties/methods on the subclass.)Catabolism
That's a bold action for even non-hostile environments. eval() is a loaded gun, that needs intensive caution while handling. On the other hand, using JSON (now in 2.6 stdlib) is much more secure and portable for carrying configuration.Tartuffe
I would never approve a code review which contained an eval.Shrine
@Richard Waite: It's usually a security issue if an adversary can modify your config file...Moa
I agree, this is extremely useful in many quick'n'dirty scripts. But it's better to use execfile instead of eval+open+read.Dodge
Even in a trusted environment, this is an unacceptable security issue. If you need to parse config files, use ConfigParser - 10 lines of code give you a full blown mechanism for creating universally readable configuration file. Your approach is really not portable and not extensible.Paolapaolina
Then why does Django store site settings in a .py file (including db password)? Are they out of their minds, are they not using eval(), or is there something I'm missing?Sacrilegious
I personally don't like using eval() for anything, especially settings. I always wrap Django settings around ConfigParser and save actual information in a permission-guarded file. Like Rasmus Lerdorf said "If eval() is the answer, you’re almost certainly asking the wrong question."Haft
eval() has the same security issues that import does, so denying a script that uses it for security issues doesn't make sense. It is the usual issue of never evaling untrusted user input, but if the file ends in .py and gets imported, it still gets executed. The reason to use import is because it puts your configuration into a different namespace cleanly. You could also use execfile(ConfigFile,ConfigDict) to store the configuration files into a dictionary.Alodee
no need for eval, name your dict (config) and import it from your module… (from configfile import config)Chiclayo
H
4

The first-classness of everything ('everything is an object'), and the mayhem this can cause.

>>> x = 5
>>> y = 10
>>> 
>>> def sq(x):
...   return x * x
... 
>>> def plus(x):
...   return x + x
... 
>>> (sq,plus)[y>x](y)
20

The last line creates a tuple containing the two functions, then evaluates y>x (True) and uses that as an index to the tuple (by casting it to an int, 1), and then calls that function with parameter y and shows the result.

For further abuse, if you were returning an object with an index (e.g. a list) you could add further square brackets on the end; if the contents were callable, more parentheses, and so on. For extra perversion, use the result of code like this as the expression in another example (i.e. replace y>x with this code):

(sq,plus)[y>x](y)[4](x)

This showcases two facets of Python - the 'everything is an object' philosophy taken to the extreme, and the methods by which improper or poorly-conceived use of the language's syntax can lead to completely unreadable, unmaintainable spaghetti code that fits in a single expression.

Haberman answered 19/9, 2008 at 11:50 Comment(4)
why would you ever do this? it is hardly a valid criticism of a language to show how it can be intentionally abused. accidental abuse would be valid, but this would never happen by accident.Conde
@Gorgapor: Python's consistency and lack of exceptions and special cases is what makes it easy to learn and, to me at least, beautiful. Any powerful tool, used abusively can cause 'mayhem'. Contrary to your opinion, I think the ability to index into a sequence of functions and call it, in a single expression is a powerful and useful idiom, and I've used it more than once, with explanatory comments.Oxen
@Don: Your use case, indexing a sequence of functions, is a good one, and very useful. Dan Udey's use case, using a boolean as an index into an inline tuple of functions, is a horrible and useless one, which is needlessly obfuscated.Conde
@Gorganpor: Sorry, I meant to address my comment to Dan Udey, not you. I agree entirely with you.Oxen
M
4

Tuple unpacking in for loops, list comprehensions and generator expressions:

>>> l=[(1,2),(3,4)]
>>> [a+b for a,b in l ] 
[3,7]

Useful in this idiom for iterating over (key,data) pairs in dictionaries:

d = { 'x':'y', 'f':'e'}
for name, value in d.items():  # one can also use iteritems()
   print "name:%s, value:%s" % (name,value)

prints:

name:x, value:y
name:f, value:e
Montague answered 19/9, 2008 at 11:50 Comment(1)
This is also useful when l is replaced with zip(something).Chaunceychaunt
S
4

Special methods

Absolute power!
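A tiny sketch (the Vec class is made up for illustration): __add__ overloads the + operator, and __repr__ controls how instances print:

```python
class Vec(object):
    def __init__(self, x, y):
        self.x, self.y = x, y
    def __add__(self, other):          # called for: self + other
        return Vec(self.x + other.x, self.y + other.y)
    def __repr__(self):                # called by repr() and the REPL
        return "Vec(%r, %r)" % (self.x, self.y)

print(Vec(1, 2) + Vec(3, 4))   # Vec(4, 6)
```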

Selfservice answered 19/9, 2008 at 11:50 Comment(2)
This is my favorite thing about Python. I especially love overloading operators. IMHO object1.add(object2) should always be object1 + object2.Seedy
I read object1.add() as a destructive operation and + as one that only returns the result without modifying object1.Await
S
0

some cool features with reduce and operator.

>>> from operator import add,mul
>>> reduce(add,[1,2,3,4])
10
>>> reduce(mul,[1,2,3,4])
24
>>> reduce(add,[[1,2,3,4],[1,2,3,4]])
[1, 2, 3, 4, 1, 2, 3, 4]
>>> reduce(add,(1,2,3,4))
10
>>> reduce(mul,(1,2,3,4))
24
Scraggly answered 19/9, 2008 at 11:50 Comment(0)
A
0

In Python 2 you can get the repr of an expression's value by enclosing it in backticks:

 >>> `sorted`
'<built-in function sorted>'

This is gone in Python 3.x.

Antakiya answered 19/9, 2008 at 11:50 Comment(0)
S
0

Interactive Debugging of Scripts (and doctest strings)

I don't think this is as widely known as it could be, but adding this line to any Python script:

import pdb; pdb.set_trace()

will pop up the pdb debugger with the cursor at that point in the code. What's even less known, I think, is that you can use that same line in a doctest:

"""
>>> 1 in (1,2,3)   
Becomes
>>> import pdb; pdb.set_trace(); 1 in (1,2,3)
"""

You can then use the debugger to check out the doctest environment. You can't really step through a doctest because the lines are each run autonomously, but it's a great tool for debugging the doctest globs and environment.

Symptomatic answered 19/9, 2008 at 11:50 Comment(0)
P
0

Here is a helpful function I use when debugging type errors:

def typePrint(obj):  # 'obj' rather than 'object', to avoid shadowing the builtin
    print(str(obj) + " - (" + str(type(obj)) + ")")

It simply prints the input followed by the type, for example

>>> a = 101
>>> typePrint(a)
    101 - (<type 'int'>)
Pontine answered 19/9, 2008 at 11:50 Comment(0)
M
0

commands.getoutput

If you want to capture the output of a command that writes directly to stdout or stderr, as is the case with os.system, commands.getoutput comes to the rescue. The whole module is just made of awesome.

>>> print commands.getoutput('ls')
myFile1.txt    myFile2.txt    myFile3.txt    myFile4.txt    myFile5.txt
myFile6.txt    myFile7.txt    myFile8.txt    myFile9.txt    myFile10.txt
myFile11.txt   myFile12.txt   myFile13.txt   myFile14.txt   module.py
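
As the comments note, the commands module was removed in Python 3; subprocess.check_output is the portable replacement. A minimal sketch (using sys.executable as the command, just so the example runs anywhere):

```python
import subprocess
import sys

# Run a command and capture its stdout as text.
out = subprocess.check_output(
    [sys.executable, '-c', 'print("hello")'],
    universal_newlines=True,  # decode bytes to str
)
print(out.strip())  # hello
```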
Macario answered 19/9, 2008 at 11:50 Comment(3)
Given that it's basically a UNIX-only precursor to the subprocess module and has been removed in Python 3.0, shouldn't you be talking about subprocess instead of commands?Anyplace
Touche! However, I'm using 2.7 on windows (not UNIX-only) at work. It works here and I just discovered it. Thus, I thought it was worth a mention.Macario
specifically, subprocess.check_outputDeuno
S
0

Multiply a string to get it repeated

print "SO"*5 

gives

SOSOSOSOSO
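
Repetition also works on lists, but repeating a list of mutable objects copies references, not the objects themselves. A sketch of the classic pitfall:

```python
print("SO" * 5)   # SOSOSOSOSO
print([3] * 3)    # [3, 3, 3]

# Pitfall: the three inner lists are the *same* object.
grid = [[0]] * 3
grid[0].append(1)
print(grid)       # [[0, 1], [0, 1], [0, 1]]

# Safe version: build independent inner lists.
safe = [[0] for _ in range(3)]
safe[0].append(1)
print(safe)       # [[0, 1], [0], [0]]
```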
Susannahsusanne answered 19/9, 2008 at 11:50 Comment(1)
You can also do this with lists: [3]*3 == [3, 3, 3]Macario
T
0

You can construct a function's kwargs on demand:

kwargs = {}
kwargs[str("%s__icontains" % field)] = some_value
some_function(**kwargs)

The str() call is somehow needed — Python 2 otherwise complains that the keyword name is not a string (the field name is likely unicode). I use this for dynamic filters within Django's object model:

result = model_class.objects.filter(**kwargs)
Talus answered 19/9, 2008 at 11:50 Comment(1)
The reason is complains is probably because "field" is unicode, which makes the whole string unicode.Err
W
-2
for line in open('foo'):
    print(line)

which is shorter than (though, as the comment notes, not strictly equivalent to) the version below — and lazier, since readlines() loads the whole file into memory:

f = open('foo', 'r')
for line in f.readlines():
   print(line)
f.close()
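
The modern idiom is a with statement, which closes the file deterministically (even on error) and still reads one line at a time. A sketch using a throwaway temp file so it is self-contained:

```python
import os
import tempfile

# Create a small file to read (just for the demo).
fd, path = tempfile.mkstemp()
with os.fdopen(fd, 'w') as f:
    f.write('first\nsecond\n')

lines = []
with open(path) as f:   # closed automatically when the block exits
    for line in f:      # lazy: one line at a time
        lines.append(line.rstrip('\n'))

print(lines)  # ['first', 'second']
os.remove(path)
```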
Welty answered 19/9, 2008 at 11:50 Comment(1)
That's not equivalent at all, because you can't predict when the file will be closed. That depends on the interpreter. As far as I know CPython garbage collects objects as soon as possible, but other interpreters might not.Menstruate
C
-2
is_ok() and "Yes" or "No"
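
This works because and/or return one of their operands rather than a bool — but, as the comments point out, it breaks when the "true" branch value is itself falsy, which the conditional expression (Python 2.5+) avoids. A sketch with a hypothetical pick() helper:

```python
def pick(flag):
    # The classic and/or ternary trick.
    return flag and "Yes" or "No"

print(pick(True))   # Yes
print(pick(False))  # No

# Pitfall: a falsy middle value falls through to the 'or' branch.
broken = True and "" or "fallback"
print(broken)       # fallback  (we wanted "")

# The conditional expression handles this correctly.
fixed = "" if True else "fallback"
print(repr(fixed))  # ''
```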
Cambrel answered 19/9, 2008 at 11:50 Comment(6)
That's strange. Interesting, but strange. >>> True and "Yes" or "No" 'Yes' >>> False and "Yes" or "No" 'No' >>> x = "Yes" >>> y = "No" >>> >>> False and x or yEllie
The preferred way to accomplish this in Python 2.5 or up is " 'Yes' if is_ok() else 'No' ".Monteith
whether it is preferred or not, the way is correct and I use all the time and I think it is elegant. since this is hidden features question really interesting this post has been negatively voted,Cambrel
"preferred" argument is open to discussion, becouse this way, the execution order is the same as the logical order, while "Yes" if True else "No" is not like that.Cambrel
"Preferred" In this case means that the conditional operator works as expected for all possible operands. Specifically, True and False or True is True, but False if True else True is false, which is almost certainly what you expected. This is especially important where the operands have side effects, and the conditional operator will NEVER evaluate more than one of its conditional clauses.Windup
This is a commonly used feature in many languages [especially bash, where the && || syntax is used to emulate ternary operator]Tantalus
P
-5

To trigger autocompletion in IDEs that support it (like IDLE, Editra, IEP), you don't need a syntactically complete expression. Instead of typing "hi". and then hitting TAB, you can cheat: type hi". (note there is no opening quote) and hit TAB, because the IDE only looks at the most recent punctuation. It's similar to how typing : and hitting Enter automatically adds an indentation. It may not change much, but it's a tip, no more :)

Potaufeu answered 19/9, 2008 at 11:50 Comment(3)
Can someone please clarify what this means?Psychotherapy
that when you hit tab choices can be aviable even if it's not a string, just do this is IEP for example: ". and hit TAB, you'll get choices that offer them when dealing with strings... or make this other hint: : and hit enter, you'll get an identation :)Potaufeu
This seems to be just a common editor feature or two.Await
S
-9

Braces

def g():
    print 'hi!'

def f(): (
    g()
)

>>> f()
hi!
Saucier answered 19/9, 2008 at 11:50 Comment(3)
>>> def f(): ( ... g() ... g() File "<stdin>", line 3 g() ^ SyntaxError: invalid syntaxDetrusion
I was trying to show that your feature doesn't work if you have more than one statement inside the "braces".Detrusion
Everyone knows that Python uses #{ and #} for braces. Subject to certain lexical constraints.Randalrandall