Manipulating functions
In Python, functions are "first-class objects" - this means, roughly, that they can be used for every non-type-specific purpose that any other object can be used for. In particular, they can:
- be assigned to names with `=` (and, in fact, the `def` statement is a form of assignment)
- be passed as arguments to (other) functions
- be `return`ed from a function
- have attributes which can be inspected with the `.` syntax (in fact, Python allows modifying some of these attributes and assigning new ones; of course, this is not possible for all objects)
- participate in expressions (a call to a function is an expression; the function-call syntax is conceptually an operator acting upon the function)
- most importantly for current purposes: be stored in other container objects, such as lists and dictionaries
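As a quick illustrative sketch (the names `shout`, `loud` and `apply_twice` here are made up for demonstration):

```python
def shout(text):
    return text.upper()

# Assignment: both names now refer to the same function object.
loud = shout

# Passing a function as an argument and using it inside another function.
def apply_twice(func, value):
    return func(func(value))

print(loud('hi'))                # HI
print(apply_twice(shout, 'hi'))  # HI
print(shout.__name__)            # attribute access with the `.` syntax
```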
The problem with the failed attempts described in the question is that they call the function immediately. Just like with any other object, Python code can refer to a function (itself, as an object) by its name. The name of a function does not include parentheses; writing `foo()` means to call the function now, and evaluates to the result (whatever was `return`ed).
Example
```python
# Set up some demo functions
def foo():
    print('foo function')

def bar():
    print('bar function')

def goo():
    print('goo function')

# Put them into containers
function_sequence = [foo, bar, goo]
function_mapping = {'foo': foo, 'bar': bar, 'goo': goo}

# Access them iteratively
for f in function_sequence:
    # Each time through the loop, a different function is bound to `f`.
    # `f` is thus a name for that function, which can be used to call it.
    f()

# Access one by lookup, and call it.
# The lookup gives us an object which is a function;
# therefore it can be called with the function-call syntax.
to_call = input('which function should i call?')
function_mapping[to_call]()
```
The functions themselves are called `foo`, `bar`, and `goo`; and with these names they can be manipulated just like anything else that has a name. There is nothing special about writing `foo()` that requires it to use the name from the `def` statement. There is no requirement to use a name at all, either - just as there wouldn't be for e.g. a multiplication, which could use values looked up from a container, literal values, or values computed from another expression. As long as there is an expression that evaluates to a function object, that expression can be a sub-expression of a function-call expression.
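For example (a brief sketch reusing the `foo`, `bar` and `goo` demos defined above):

```python
# Any expression that evaluates to a function can be called directly;
# no separate name is needed.
[foo, bar, goo][1]()        # calls bar, looked up from a list literal
{'g': goo}['g']()           # calls goo, looked up from a dict literal
(foo if True else bar)()    # calls the result of a conditional expression
```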
Extensions
`lambda`
Python's `lambda` syntax creates objects of the same type as ordinary functions. The syntax restricts what the resulting function can do, and they get a special `__name__` attribute (since there was no `def` statement to specify one during compilation); but otherwise they are perfectly ordinary functions that can be manipulated the same way.
```python
def example_func():
    pass

# A list containing two do-nothing functions, created with `def` and `lambda`.
dummies = [example_func, lambda: None]

# Either is usable by lookup:
dummies[0]()
dummies[1]()

# They have exactly the same type:
assert type(dummies[0]) is type(dummies[1])
```
Instance methods
In Python 3.x, looking up an instance method directly in the class results in a perfectly ordinary function - there is no separate type for "unbound methods". When that function is called, an instance must be provided explicitly:
```python
class Example:
    def method(self):
        pass

instance = Example()

# Explicit syntax for a method call via the class:
Example.method(instance)

# Following all the same patterns as before, that can be separated:
m = Example.method  # the function itself
m(instance)         # call it
# Again, `type(m)` is the same function type.
```
Looking up an instance method on an instance, of course, results in a bound method.
```python
# Calling a method via the instance:
instance.method()

# This, too, is separable:
bound = instance.method
bound()
```
Bound methods have a different type, but offer the same callable interface as functions: they are called with the `()` syntax.
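So they, too, can be stored in containers and dispatched later (a small sketch continuing with the `Example` class and the names defined above):

```python
# A plain function and a bound method side by side in the same mapping.
callables = {'plain': m, 'bound': instance.method}

callables['plain'](instance)  # the plain function still needs an instance
callables['bound']()          # the bound method already carries its instance
```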
`@staticmethod` and `@classmethod`
There is nothing too unexpected here, either:
```python
class Fancy:
    @staticmethod
    def s(a, b, c):
        pass

    @classmethod
    def c(cls):
        pass

f = Fancy()

# For both of these, the result is the same whether lookup
# uses the class or the instance.
assert f.s is Fancy.s
# That assertion will NOT work for the classmethod, because method binding
# creates a new bound method object each time.
# But they can be verified to be functionally identical:
assert type(f.c) is type(Fancy.c)
assert f.c.__code__ is Fancy.c.__code__  # etc.

# As before, the results can be stored and used.
# As one would expect, the `cls` argument is bound for the classmethod,
# while the staticmethod expects all arguments that are listed.
fs = f.s
fs(1, 2, 3)
fc = f.c
fc()
```
Dealing with arguments
Code that looks up a callable (whether it's a function, lambda, bound method, class...) from some data structure and then calls it will need to supply appropriate arguments for the call. Of course, this is easiest to arrange if every available callable expects the same number of arguments (with the same types and semantics). In many cases it's necessary to adapt such callables, either by pre-filling arguments that won't be part of the common set supplied at the call site, or by wrapping them to ignore extra arguments.
```python
def zero():
    pass

def one(a):
    print('a', a)

def two(a, b):
    print('a', a, 'b', b)

funcs = ...  # help!

def dispatch():
    a = input('what should be the value of a?')
    f = input('which func should be used?')
    return funcs[f](a)
```
Ignoring arguments can be done by writing an explicit wrapper, or by redesigning so that keyword arguments will be passed instead (and having the called function simply ignore any extraneous `**kwargs` contents).
See How can I bind arguments to a function in Python? for the case of pre-filling arguments.
For example, we could adapt using lambdas (this gets unwieldy with large numbers of parameters):
```python
funcs = {
    'zero': (lambda a: zero()),
    'one': one,
    'two': (lambda a: two(a, 'bee'))
}
```
Or redesign the underlying functions to make them more usable in this setup first:
```python
from functools import partial

def zero(**kwargs):
    pass

def one(a):
    print('a', a)

def two(b, a):  # this order is more convenient for functools.partial
    print('a', a, 'b', b)

funcs = {'zero': zero, 'one': one, 'two': partial(two, 'bee')}

def dispatch():
    a = input('what should be the value of a?')
    f = input('which func should be used?')
    return funcs[f](a=a)
```
`functools.partial` is particularly useful here, as it avoids a common trap caused by late binding in lambdas. For example, if the adapter `lambda a: two(a, 'bee')` used a variable instead of the literal `'bee'` text, and that variable subsequently changed, the change would be reflected when using the `dispatch` function (this is usually not desired).
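A minimal, self-contained sketch of that trap (the variable name `suffix` is made up for illustration, and `two` is redefined here with its original parameter order):

```python
from functools import partial

def two(a, b):
    print('a', a, 'b', b)

suffix = 'bee'
late = lambda a: two(a, suffix)   # looks up `suffix` each time it is called
early = partial(two, b=suffix)    # captures the current value of `suffix` now

suffix = 'changed!'
late('x')    # prints: a x b changed!  (the lambda sees the new value)
early('x')   # prints: a x b bee       (partial kept the original value)
```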