What are some (concrete) use-cases for metaclasses?
Asked Answered
E

21

174

I have a friend who likes to use metaclasses, and regularly offers them as a solution.

I am of the mind that you almost never need to use metaclasses. Why? because I figure if you are doing something like that to a class, you should probably be doing it to an object. And a small redesign/refactor is in order.

Being able to use metaclasses has caused a lot of people in a lot of places to use classes as some kind of second rate object, which just seems disastrous to me. Is programming to be replaced by meta-programming? The addition of class decorators has unfortunately made it even more acceptable.

So please, I am desperate to know your valid (concrete) use-cases for metaclasses in Python. Or to be enlightened as to why mutating classes is better than mutating objects, sometimes.

I will start:

Sometimes when using a third-party library it is useful to be able to mutate the class in a certain way.

(This is the only case I can think of, and it's not concrete)

Elagabalus answered 24/12, 2008 at 20:13 Comment(2)
This is a great question. Judging from the answers below, it's quite clear that there is no such thing as a concrete use for metaclasses.Drainpipe
@MarcusOttosson no, obfuscating your code is a pretty good use for them.Thoreau
M
29

I have a class that handles non-interactive plotting, as a frontend to Matplotlib. However, on occasion one wants to do interactive plotting. With only a couple of extra functions I found that I could increment the figure count, call draw manually, etc., but I needed to do these before and after every plotting call. So to create both an interactive plotting wrapper and an offscreen plotting wrapper, I found it was more efficient to do this via a metaclass, wrapping the appropriate methods, than to do something like:

class PlottingInteractive:
    add_slice = wrap_pylab_newplot(add_slice)

That approach doesn't keep up with API changes and so on, but a metaclass that iterates over the base classes' attributes in __init__ and re-sets them on the class is more efficient and keeps things up to date:

import types

class _Interactify(type):
    def __init__(cls, name, bases, d):
        super(_Interactify, cls).__init__(name, bases, d)
        for base in bases:
            for attrname in dir(base):
                if attrname in d: continue # If overridden, don't reset
                attr = getattr(cls, attrname)
                if type(attr) == types.MethodType:
                    if attrname.startswith("add_"):
                        setattr(cls, attrname, wrap_pylab_newplot(attr))
                    elif attrname.startswith("set_"):
                        setattr(cls, attrname, wrap_pylab_show(attr))

There might be better ways to do this, but I've found it to be effective. Of course, this could also be done in __new__, but this was the solution I found the most straightforward.

Muldoon answered 26/12, 2008 at 1:35 Comment(0)
R
164

I was asked the same question recently, and came up with several answers. I hope it's OK to revive this thread, as I wanted to elaborate on a few of the use cases mentioned, and add a few new ones.

Most metaclasses I've seen do one of two things:

  1. Registration (adding a class to a data structure):

    models = {}
    
    class ModelMetaclass(type):
        def __new__(meta, name, bases, attrs):
            models[name] = cls = type.__new__(meta, name, bases, attrs)
            return cls
    
    class Model(object):
        __metaclass__ = ModelMetaclass
    

    Whenever you subclass Model, your class is registered in the models dictionary:

    >>> class A(Model):
    ...     pass
    ...
    >>> class B(A):
    ...     pass
    ...
    >>> models
    {'A': <__main__.A class at 0x...>,
     'B': <__main__.B class at 0x...>}
    

    This can also be done with class decorators:

    models = {}
    
    def model(cls):
        models[cls.__name__] = cls
        return cls
    
    @model
    class A(object):
        pass
    

    Or with an explicit registration function:

    models = {}
    
    def register_model(cls):
        models[cls.__name__] = cls
    
    class A(object):
        pass
    
    register_model(A)
    

    Actually, this is pretty much the same: you mention class decorators unfavorably, but it's really nothing more than syntactic sugar for a function invocation on a class, so there's no magic about it.

    Anyway, the advantage of metaclasses in this case is inheritance, as they work for any subclasses, whereas the other solutions only work for subclasses explicitly decorated or registered.

    >>> class B(A):
    ...     pass
    ...
    >>> models
    {'A': <__main__.A class at 0x...>} # No B :(
    
  2. Refactoring (modifying class attributes or adding new ones):

    class ModelMetaclass(type):
        def __new__(meta, name, bases, attrs):
            fields = {}
            for key, value in attrs.items():
                if isinstance(value, Field):
                    value.name = '%s.%s' % (name, key)
                    fields[key] = value
            for base in bases:
                if hasattr(base, '_fields'):
                    fields.update(base._fields)
            attrs['_fields'] = fields
            return type.__new__(meta, name, bases, attrs)
    
    class Model(object):
        __metaclass__ = ModelMetaclass
    

    Whenever you subclass Model and define some Field attributes, they are injected with their names (for more informative error messages, for example), and grouped into a _fields dictionary (for easy iteration, without having to look through all the class attributes and all its base classes' attributes every time):

    >>> class A(Model):
    ...     foo = Integer()
    ...
    >>> class B(A):
    ...     bar = String()
    ...
    >>> B._fields
    {'foo': Integer('A.foo'), 'bar': String('B.bar')}
    

    Again, this can be done (without inheritance) with a class decorator:

    def model(cls):
        fields = {}
        for key, value in vars(cls).items():
            if isinstance(value, Field):
                value.name = '%s.%s' % (cls.__name__, key)
                fields[key] = value
        for base in cls.__bases__:
            if hasattr(base, '_fields'):
                fields.update(base._fields)
        cls._fields = fields
        return cls
    
    @model
    class A(object):
        foo = Integer()
    
    class B(A):
        bar = String()
    
    # B.bar has no name :(
    # B._fields is {'foo': Integer('A.foo')} :(
    

    Or explicitly:

    class A(object):
        foo = Integer('A.foo')
        _fields = {'foo': foo} # Don't forget all the base classes' fields, too!
    

    Although, contrary to your advocacy of readable and maintainable non-meta programming, this is much more cumbersome, redundant and error-prone:

    class B(A):
        bar = String()
    
    # vs.
    
    class B(A):
        bar = String('bar')
        _fields = {'B.bar': bar, 'A.foo': A.foo}
    

Having considered the most common and concrete use cases, the only cases where you absolutely HAVE to use metaclasses are when you want to modify the class name or list of base classes, because once defined, these parameters are baked into the class, and no decorator or function can unbake them.

class Metaclass(type):
    def __new__(meta, name, bases, attrs):
        return type.__new__(meta, 'foo', (int,), attrs)

class Baseclass(object):
    __metaclass__ = Metaclass

class A(Baseclass):
    pass

class B(A):
    pass

print A.__name__ # foo
print B.__name__ # foo
print issubclass(B, A)   # False
print issubclass(B, int) # True

This may be useful in frameworks for issuing warnings whenever classes with similar names or incomplete inheritance trees are defined, but I can't think of a reason besides trolling to actually change these values. Maybe David Beazley can.

Anyway, in Python 3, metaclasses also have the __prepare__ method, which lets you evaluate the class body into a mapping other than a dict, thus supporting ordered attributes, overloaded attributes, and other wicked cool stuff:

import collections

class Metaclass(type):

    @classmethod
    def __prepare__(meta, name, bases, **kwds):
        return collections.OrderedDict()

    def __new__(meta, name, bases, attrs, **kwds):
        print(list(attrs))
        # Do more stuff...

class A(metaclass=Metaclass):
    x = 1
    y = 2

# prints ['x', 'y'] rather than ['y', 'x']

 

class ListDict(dict):
    def __setitem__(self, key, value):
        self.setdefault(key, []).append(value)

class Metaclass(type):

    @classmethod
    def __prepare__(meta, name, bases, **kwds):
        return ListDict()

    def __new__(meta, name, bases, attrs, **kwds):
        print(attrs['foo'])
        # Do more stuff...

class A(metaclass=Metaclass):

    def foo(self):
        pass

    def foo(self, x):
        pass

# prints [<function foo at 0x...>, <function foo at 0x...>] rather than <function foo at 0x...>

You might argue ordered attributes can be achieved with creation counters, and overloading can be simulated with default arguments:

import itertools

class Attribute(object):
    _counter = itertools.count()
    def __init__(self):
        self._count = Attribute._counter.next()

class A(object):
    x = Attribute()
    y = Attribute()

A._order = sorted([(k, v) for k, v in vars(A).items() if isinstance(v, Attribute)],
                  key = lambda (k, v): v._count)

 

class A(object):

    def _foo0(self):
        pass

    def _foo1(self, x):
        pass

    def foo(self, x=None):
        if x is None:
            return self._foo0()
        else:
            return self._foo1(x)

Besides being much uglier, it's also less flexible: what if you want ordered literal attributes, like integers and strings? What if None is a valid value for x?

Here's a creative way to solve the first problem:

import sys

class Builder(object):
    def __call__(self, cls):
        cls._order = self.frame.f_code.co_names
        return cls

def ordered():
    builder = Builder()
    def trace(frame, event, arg):
        builder.frame = frame
        sys.settrace(None)
    sys.settrace(trace)
    return builder

@ordered()
class A(object):
    x = 1
    y = 'foo'

print A._order # ['x', 'y']

And here's a creative way to solve the second one:

_undefined = object()

class A(object):

    def _foo0(self):
        pass

    def _foo1(self, x):
        pass

    def foo(self, x=_undefined):
        if x is _undefined:
            return self._foo0()
        else:
            return self._foo1(x)

But this is much, MUCH voodoo-er than a simple metaclass (especially the first one, which really melts your brain). My point is, you look at metaclasses as unfamiliar and counter-intuitive, but you can also look at them as the next step of evolution in programming languages: you just have to adjust your mindset. After all, you could probably do everything in C, including defining a struct with function pointers and passing it as the first argument to its functions. A person seeing C++ for the first time might say, "what is this magic? Why is the compiler implicitly passing this to methods, but not to regular and static functions? It's better to be explicit and verbose about your arguments". But then, object-oriented programming is much more powerful once you get it; and so is this, uh... quasi-aspect-oriented programming, I guess. And once you understand metaclasses, they're actually very simple, so why not use them when convenient?

And finally, metaclasses are rad, and programming should be fun. Using standard programming constructs and design patterns all the time is boring and uninspiring, and hinders your imagination. Live a little! Here's a metametaclass, just for you.

class MetaMetaclass(type):
    def __new__(meta, name, bases, attrs):
        def __new__(meta, name, bases, attrs):
            cls = type.__new__(meta, name, bases, attrs)
            cls._label = 'Made in %s' % meta.__name__
            return cls 
        attrs['__new__'] = __new__
        return type.__new__(meta, name, bases, attrs)

class China(type):
    __metaclass__ = MetaMetaclass

class Taiwan(type):
    __metaclass__ = MetaMetaclass

class A(object):
    __metaclass__ = China

class B(object):
    __metaclass__ = Taiwan

print A._label # Made in China
print B._label # Made in Taiwan

Edit

This is a pretty old question, but it's still getting upvotes, so I thought I'd add a link to a more comprehensive answer. If you'd like to read more about metaclasses and their uses, I've just published an article about it here.

Robertroberta answered 25/6, 2015 at 22:26 Comment(6)
That's a great answer, thanks for the time writing it and giving multiple examplesPood
"...the advantage of metaclasses in this case is inheritance, as they work for any subclasses" - not in Python 3, I suppose? I think it works in Python 2 only because any child class inherits the __metaclass__ attribute, but this attribute is no longer special in Python 3. Is there any way to make this "children classes are also constructed by the parent's metaclass" thing work in Python 3?Reexamine
This is true for Python 3 as well, because a class B, inheriting from A, whose metaclass is M, is also an instance of M. So, when B is evaluated, M is invoked to create it, and this effectively allows you to "work on any subclasses" (of A). Having said that, Python 3.6 introduced the much simpler __init_subclass__, so now you can manipulate subclasses in a baseclass, and no longer need a metaclass for that purpose.Robertroberta
This is brilliant, I read so many blog posts on metaclasses, only this one make know the pros and cons and alternatives to metaclass.Lorimer
The "overloading" example does not work without significantly more overhead, an attempt at actually implementing it returns this error due to __prepare__ being a dict of lists, which would would take significant steps to rectify: TypeError: type __qualname__ must be a str, not listNutt
I'm not sure why you think it'll require significant additions: all you need to do for the overloading example to work, is add in the metaclass's __new__ method a couple lines that will loop over the prepared attrs, combine lists of functions into an OverloadedFunction object, and change everything else (like __qualname__) back from lists to values. The OverloadedFunction object then can have a generic __call__(self, *args), and call the right function based on len(args). Let me know if you're stuck with it and want an example, but it should all fit in less than 50 lines.Robertroberta
U
37

The purpose of metaclasses isn't to replace the class/object distinction with metaclass/class - it's to change the behaviour of class definitions (and thus their instances) in some way. Effectively it's to alter the behaviour of the class statement in ways that may be more useful for your particular domain than the default. The things I have used them for are:

  • Tracking subclasses, usually to register handlers. This is handy when using a plugin style setup, where you wish to register a handler for a particular thing simply by subclassing and setting up a few class attributes. eg. suppose you write a handler for various music formats, where each class implements appropriate methods (play / get tags etc) for its type. Adding a handler for a new type becomes:

    class Mp3File(MusicFile):
        extensions = ['.mp3']  # Register this type as a handler for mp3 files
        ...
        # Implementation of mp3 methods go here
    

    The metaclass then maintains a dictionary of {'.mp3' : Mp3File, ... } etc, and constructs an object of the appropriate type when you request a handler through a factory function (a minimal sketch of this pattern follows the list).

  • Changing behaviour. You may want to attach a special meaning to certain attributes, resulting in altered behaviour when they are present. For example, you may want to look for methods with the name _get_foo and _set_foo and transparently convert them to properties. As a real-world example, here's a recipe I wrote to give more C-like struct definitions. The metaclass is used to convert the declared items into a struct format string, handling inheritance etc, and produce a class capable of dealing with it.

    For other real-world examples, take a look at various ORMs, like sqlalchemy's ORM or sqlobject. Again, the purpose is to interpret defintions (here SQL column definitions) with a particular meaning.
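
For concreteness, here is a minimal sketch of the registration pattern from the first bullet; the HandlerRegistry metaclass, the extensions attribute and the open_music_file factory are names invented for this illustration, not an existing library's API.

import os

class HandlerRegistry(type):
    handlers = {}  # maps file extension -> handler class

    def __init__(cls, name, bases, attrs):
        super().__init__(name, bases, attrs)
        # Register every class that declares the extensions it handles.
        for ext in attrs.get('extensions', ()):
            HandlerRegistry.handlers[ext] = cls

class MusicFile(metaclass=HandlerRegistry):
    extensions = ()  # the abstract base registers nothing

class Mp3File(MusicFile):
    extensions = ['.mp3']

    def play(self):
        print('playing an mp3')

def open_music_file(path):
    # Factory: look up the handler class by file extension.
    ext = os.path.splitext(path)[1]
    return HandlerRegistry.handlers[ext]()

open_music_file('song.mp3').play()  # playing an mp3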

Unbound answered 24/12, 2008 at 21:43 Comment(6)
Well, yes, tracking subclasses. But why would you ever want that? Your example is just an implicit version of register_music_file(Mp3File, ['.mp3']), and the explicit way is more readable and maintainable. This is an example of the bad cases I am talking about.Elagabalus
About the ORM case, are you talking about the class-based way of defining tables, or the metaclasses on mapped objects. Because SQLAlchemy can (rightly) map to any class (and I am assuming that it doesn't use a metaclass for that activity).Elagabalus
I prefer the more declarative style, rather than require extra registration methods for every subclass - better if everything is wrapped in a single location.Unbound
For sqlalchemy, I'm thinking mostly of the declarative layer, so perhaps sqlobject is a better example. However the metaclasses used internally are also examples of similar reinterpretation of particular attributes to declare meaning.Unbound
Sorry, one of my comments got lost in the SO timeout scenario. I find classes used for declarative style almost an abomination. I know people love it, and it is accepted behaviour. But (from experience) I know it is unusable in a situation where you want to UN-declare things. Unregistering a class is hard.Elagabalus
Whereas calling registry.unregister(obj) is very easy to use and implement.Elagabalus
A
21

Let's start with Tim Peters' classic quote:

Metaclasses are deeper magic than 99% of users should ever worry about. If you wonder whether you need them, you don't (the people who actually need them know with certainty that they need them, and don't need an explanation about why). Tim Peters (c.l.p post 2002-12-22)

Having said that, I have (periodically) run across true uses of metaclasses. The one that comes to mind is in Django where all of your models inherit from models.Model. models.Model, in turn, does some serious magic to wrap your DB models with Django's ORM goodness. That magic happens by way of metaclasses. It creates all manner of exception classes, manager classes, etc. etc.

See django/db/models/base.py, class ModelBase() for the beginning of the story.

Amatory answered 25/12, 2008 at 2:23 Comment(2)
Well, yes, I see the point. I don't wonder "how" or "why" to use metaclasses, I wonder the "who" and the "what". ORMs are a common case here I see. Unfortunately Django's ORM is pretty poor compared to SQLAlchemy which has less magic. Magic is bad, and metaclasses are really not necessary for this.Elagabalus
Having read Tim Peters' quote in the past, time has showed that his statement is rather unhelpful. Not until researching Python metaclasses here on StackOverflow did it become apparent how to even implement them. After forcing myself to learn how to write and use metaclasses, their abilities astonished me and gave me a much better understanding of how Python really works. Classes can provide reusable code, and metaclasses can provide reusable enhancements for those classes.Tegucigalpa
D
8

The only legitimate use-case of a metaclass is to keep other nosy developers from touching your code. Once a nosy developer masters metaclasses and starts poking around with yours, throw in another level or two to keep them out. If that doesn't work, start using type.__new__ or perhaps some scheme using a recursive metaclass.

(written tongue in cheek, but I've seen this kind of obfuscation done. Django is a perfect example)

Dorman answered 16/3, 2011 at 19:13 Comment(1)
I'm not sure the motivation was the same in Django.Elagabalus
C
8

A reasonable pattern of metaclass use is doing something once when a class is defined rather than repeatedly whenever the same class is instantiated.

When multiple classes share the same special behaviour, repeating __metaclass__=X is obviously better than repeating the special purpose code and/or introducing ad-hoc shared superclasses.

But even with only one special class and no foreseeable extension, __new__ and __init__ of a metaclass are a cleaner way to initialize class variables or other global data than intermixing special-purpose code and normal def and class statements in the class definition body.
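
As a hedged illustration of that last point, here is one shape such a metaclass could take; the ColumnCollector and Table names are invented for the example, and the class-level data is computed exactly once, when the class statement runs:

class ColumnCollector(type):
    def __init__(cls, name, bases, attrs):
        super().__init__(name, bases, attrs)
        # Runs exactly once, when the class statement is executed,
        # not every time an instance is created.
        cls.columns = [key for key in attrs if not key.startswith('_')]

class Table(metaclass=ColumnCollector):
    id = int
    name = str

print(Table.columns)  # ['id', 'name']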

Caffeine answered 14/8, 2011 at 14:49 Comment(0)
F
7

Metaclasses can be handy for constructing Domain Specific Languages in Python. Concrete examples are the declarative syntax for database schemata in Django and SQLObject.

A basic example from A Conservative Metaclass by Ian Bicking:

The metaclasses I've used have been primarily to support a sort of declarative style of programming. For instance, consider a validation schema:

class Registration(schema.Schema):
    first_name = validators.String(notEmpty=True)
    last_name = validators.String(notEmpty=True)
    mi = validators.MaxLength(1)
    class Numbers(foreach.ForEach):
        class Number(schema.Schema):
            type = validators.OneOf(['home', 'work'])
            phone_number = validators.PhoneNumber()

Some other techniques: Ingredients for Building a DSL in Python (pdf).

Edit (by Ali): An example of doing this using collections and instances is what I would prefer. The important fact is the instances, which give you more power and eliminate the reason to use metaclasses. It is further worth noting that your example uses a mixture of classes and instances, which is surely an indication that you can't just do it all with metaclasses, and it creates a truly non-uniform way of doing it.

number_validator = [
    v.OneOf('type', ['home', 'work']),
    v.PhoneNumber('phone_number'),
]

validators = [
    v.String('first_name', notEmpty=True),
    v.String('last_name', notEmpty=True),
    v.MaxLength('mi', 1),
    v.ForEach([number_validator,])
]

It's not perfect, but already there is almost zero magic, no need for metaclasses, and improved uniformity.

Fournier answered 24/12, 2008 at 22:54 Comment(5)
Thanks for this. This is a very good example of a use-case I think is unnecessary, ugly, and unmanageable, which would be simpler based on a simple collection instance (with nested collections as required).Elagabalus
@Ali A: you are welcome to provide a concrete example of a side-by-side comparison between declarative syntax via metaclasses and an approach based on a simple collection instance.Fournier
@Ali A: you may edit my answer inplace to add a collection style example.Fournier
Ok done that. Sorry am in a bit of a hurry today, but will try to answer any queries later/tomorrow. Happy Holidays!Elagabalus
The second example is ugly because you have to tie each validator instance to its name. A slightly better way of doing it is to use a dictionary instead of a list, but then, in Python, classes are just syntactic sugar for a dictionary, so why not use classes? You get free name validation as well, because Python names cannot contain spaces or the special characters that a string could.Glasgow
C
6

I was thinking the same thing just yesterday and completely agree. The complications in the code caused by attempts to make it more declarative generally make the codebase harder to maintain, harder to read and less pythonic, in my opinion. It also normally requires a lot of copy.copy()ing (to maintain inheritance and to copy from class to instance) and means you have to look in many places to see what's going on (always looking from the metaclass up), which goes against the Python grain as well. I have been picking through formencode and sqlalchemy code to see if such a declarative style was worth it, and it's clearly not. Such a style should be left to descriptors (such as property and methods) and immutable data. Ruby has better support for such declarative styles, and I am glad the core Python language is not going down that route.

I can see their use for debugging: add a metaclass to all your base classes to get richer info. I also see their use only in (very) large projects to get rid of some boilerplate code (but at the loss of clarity). SQLAlchemy, for example, does use them elsewhere, to add a particular custom method to all subclasses based on an attribute value in their class definition, e.g. a toy example

class test(baseclass_with_metaclass):
    method_maker_value = "hello"

could have a metaclass that generates a method in that class with special properties based on "hello" (say, a method that adds "hello" to the end of a string). It could be good for maintainability to make sure you do not have to write a method in every subclass you make; instead, all you have to define is method_maker_value.
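
A hedged sketch of that toy example (the MethodMaker metaclass and the generated append_value method are invented names):

class MethodMaker(type):
    def __init__(cls, name, bases, attrs):
        super().__init__(name, bases, attrs)
        value = attrs.get('method_maker_value')
        if value is not None:
            # Generate a method whose behaviour is based on the declared value.
            def append_value(self, text):
                return text + value
            cls.append_value = append_value

class baseclass_with_metaclass(metaclass=MethodMaker):
    pass

class test(baseclass_with_metaclass):
    method_maker_value = "hello"

print(test().append_value("say "))  # say hello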

The need for this is so rare, though, and it only cuts down on a bit of typing, so it's not really worth considering unless you have a large enough codebase.

Candi answered 28/12, 2008 at 3:20 Comment(0)
R
5

The only time I used metaclasses in Python was when writing a wrapper for the Flickr API.

My goal was to scrape flickr's api site and dynamically generate a complete class hierarchy to allow API access using Python objects:

# Both the photo type and the flickr.photos.search API method 
# are generated at "run-time"
for photo in flickr.photos.search(text='balloons'):
    print photo.description

So in that example, because I generated the entire Python Flickr API from the website, I really don't know the class definitions until runtime. Being able to dynamically generate types was very useful.
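
This is not the wrapper's actual code, just a hedged sketch of the underlying technique of building classes at runtime with type(); the scraped_api data and the call semantics are invented:

# Pretend this mapping was scraped from the API documentation at start-up.
scraped_api = {
    'photos': ['search', 'getInfo'],
    'people': ['findByUsername'],
}

def make_api_method(full_name):
    def method(self, **params):
        print('would call %s with %r' % (full_name, params))
    method.__name__ = full_name.rsplit('.', 1)[-1]
    return method

def build_namespace(name, methods):
    # Build a class whose methods mirror the scraped API method names.
    attrs = {m: make_api_method('flickr.%s.%s' % (name, m)) for m in methods}
    return type(name.capitalize(), (object,), attrs)

flickr = type('Flickr', (object,), {
    name: build_namespace(name, methods)()
    for name, methods in scraped_api.items()
})()

flickr.photos.search(text='balloons')
# would call flickr.photos.search with {'text': 'balloons'}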

Reeder answered 24/12, 2008 at 21:19 Comment(2)
You can dynamically generate types without using metaclasses. >>> help(type)Elagabalus
Even if you're not aware of it, you are using metaclasses then. type is a metaclass, in fact the most common one. :-)Graf
B
5

Metaclasses aren't replacing programming! They're just a trick which can automate or make more elegant some tasks. A good example of this is Pygments syntax highlighting library. It has a class called RegexLexer which lets the user define a set of lexing rules as regular expressions on a class. A metaclass is used to turn the definitions into a useful parser.
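
This is not Pygments' actual implementation, only a heavily simplified sketch of the idea: a metaclass compiles the regex rules declared on the class body once, at class-definition time.

import re

class LexerMeta(type):
    def __init__(cls, name, bases, attrs):
        super().__init__(name, bases, attrs)
        # Compile the declared rules once, when the class is defined.
        cls._compiled = [(re.compile(pattern), token)
                         for pattern, token in attrs.get('tokens', [])]

class SimpleLexer(metaclass=LexerMeta):
    tokens = []

    def lex(self, text):
        pos = 0
        while pos < len(text):
            for regex, token in self._compiled:
                match = regex.match(text, pos)
                if match:
                    yield token, match.group()
                    pos = match.end()
                    break
            else:
                pos += 1  # skip characters no rule matches

class NumberWordLexer(SimpleLexer):
    tokens = [(r'\d+', 'NUMBER'), (r'[a-zA-Z]+', 'WORD')]

print(list(NumberWordLexer().lex('abc 123')))
# [('WORD', 'abc'), ('NUMBER', '123')]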

They're like salt; it's easy to use too much.

Berri answered 24/12, 2008 at 22:15 Comment(2)
Well, in my opinion, that Pygments case is just unnecessary. Why not just have a plain collection like a dict, why force a class to do this?Elagabalus
Because a class nicely encapsulates the idea of a Lexer and has other useful methods like guess_filename(), etc.Berri
K
4

You never absolutely need to use a metaclass, since you can always construct a class that does what you want using inheritance or aggregation of the class you want to modify.

That said, it can be very handy in Smalltalk and Ruby to be able to modify an existing class, but Python doesn't like to do that directly.

There's an excellent DeveloperWorks article on metaclassing in Python that might help. The Wikipedia article is also pretty good.

Kappel answered 24/12, 2008 at 21:17 Comment(2)
You also don't need objects to do object oriented programming—you could do it with first class functions. So you don't need to use objects. But they're there for convenience. So I'm not sure what point you're trying to make in the first paragraph.Orlov
Look back at the question.Kappel
T
4

Some GUI libraries have trouble when multiple threads try to interact with them. tkinter is one such example; and while one can explicitly handle the problem with events and queues, it can be far simpler to use the library in a manner that ignores the problem altogether. Behold -- the magic of metaclasses.

Being able to dynamically rewrite an entire library seamlessly so that it works properly as expected in a multithreaded application can be extremely helpful in some circumstances. The safetkinter module does that with the help of a metaclass provided by the threadbox module -- events and queues not needed.

One neat aspect of threadbox is that it does not care what class it clones. It provides an example of how all base classes can be touched by a metaclass if needed. A further benefit that comes with metaclasses is that they run on inheriting classes as well. Programs that write themselves -- why not?
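
The following is nothing like the real threadbox code, just a hedged sketch of the general idea: a metaclass wraps every public method so that calls are executed by one dedicated worker thread, regardless of which thread the caller is on.

import functools
import queue
import threading

class SingleThreaded(type):
    """Route all method calls of instances through one worker thread (sketch)."""

    def __new__(meta, name, bases, attrs):
        jobs = queue.Queue()

        def worker():
            while True:
                func, args, kwargs, result = jobs.get()
                try:
                    result.append(func(*args, **kwargs))
                finally:
                    jobs.task_done()

        threading.Thread(target=worker, daemon=True).start()

        def wrap(func):
            @functools.wraps(func)
            def proxy(*args, **kwargs):
                result = []
                jobs.put((func, args, kwargs, result))
                jobs.join()  # crude synchronisation, good enough for the sketch
                return result[0]
            return proxy

        for key, value in list(attrs.items()):
            if callable(value) and not key.startswith('_'):
                attrs[key] = wrap(value)
        return super().__new__(meta, name, bases, attrs)

class Widget(metaclass=SingleThreaded):
    def which_thread(self):
        return threading.current_thread().name

print(Widget().which_thread())  # the worker thread's name, not MainThread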

Tegucigalpa answered 4/6, 2013 at 19:29 Comment(0)
C
3

The way I used metaclasses was to provide some attributes to classes. Take for example:

class NameClass(type):
    def __init__(cls, *args, **kwargs):
        type.__init__(cls, *args, **kwargs)
        cls.name = cls.__name__

will put the name attribute on every class whose metaclass is set to NameClass.
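
For example, with Python 3 syntax (and assuming the NameClass metaclass above), usage could look like this:

class Animal(metaclass=NameClass):
    pass

class Dog(Animal):
    pass

print(Animal.name)  # Animal
print(Dog.name)     # Dog (the metaclass also runs for every subclass)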

Creolized answered 25/12, 2008 at 11:47 Comment(1)
Yes, this works. You could use a superclass also, which is at least explicit, and followable in code. Out of interest, what did you use this for?Elagabalus
T
2

This is a minor use, but... one thing I've found metaclasses useful for is to invoke a function whenever a subclass is created. I codified this into a metaclass which looks for an __initsubclass__ attribute: whenever a subclass is created, all parent classes which define that method are invoked with __initsubclass__(cls, subcls). This allows creation of a parent class which then registers all subclasses with some global registry, runs invariant checks on subclasses whenever they are defined, performs late-binding operations, etc., all without having to manually call functions or create custom metaclasses that perform each of these separate duties.
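
Here is a hedged sketch of that convention; the __initsubclass__ spelling follows this answer and is not Python 3.6's built-in __init_subclass__ hook, although the idea is the same:

class InitSubclassMeta(type):
    def __init__(cls, name, bases, attrs):
        super().__init__(name, bases, attrs)
        # Notify every ancestor that defines __initsubclass__ about the new class.
        for base in cls.__mro__[1:]:
            hook = base.__dict__.get('__initsubclass__')
            if hook is not None:
                hook(base, cls)

class Registry(metaclass=InitSubclassMeta):
    subclasses = []

    def __initsubclass__(cls, subcls):
        cls.subclasses.append(subcls)

class A(Registry):
    pass

class B(A):
    pass

print(Registry.subclasses)  # [<class '__main__.A'>, <class '__main__.B'>]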

Mind you, I've slowly come to realize the implicit magicalness of this behavior is somewhat undesirable, since it's unexpected if looking at a class definition out of context... and so I've moved away from using that solution for anything serious besides initializing a __super attribute for each class and instance.

Thundercloud answered 14/8, 2011 at 16:44 Comment(0)
O
1

I recently had to use a metaclass to help declaratively define an SQLAlchemy model around a database table populated with U.S. Census data from http://census.ire.org/data/bulkdata.html

IRE provides database shells for the census data tables, which create integer columns following a naming convention from the Census Bureau of p012015, p012016, p012017, etc.

I wanted to a) be able to access these columns using a model_instance.p012017 syntax, b) be fairly explicit about what I was doing and c) not have to explicitly define dozens of fields on the model, so I subclassed SQLAlchemy's DeclarativeMeta to iterate through a range of the columns and automatically create model fields corresponding to the columns:

from sqlalchemy import Column, Integer
from sqlalchemy.ext.declarative.api import DeclarativeMeta

class CensusTableMeta(DeclarativeMeta):
    def __init__(cls, classname, bases, dict_):
        table = 'p012'
        for i in range(1, 49):
            fname = "%s%03d" % (table, i)
            dict_[fname] = Column(Integer)
            setattr(cls, fname, dict_[fname])

        super(CensusTableMeta, cls).__init__(classname, bases, dict_)

I could then use this metaclass for my model definition and access the automatically enumerated fields on the model:

from sqlalchemy import String
from sqlalchemy.ext.declarative import declarative_base

CensusTableBase = declarative_base(metaclass=CensusTableMeta)

class P12Tract(CensusTableBase):
    __tablename__ = 'ire_p12'

    geoid = Column(String(12), primary_key=True)

    @property
    def male_under_5(self):
        return self.p012003

    ...
Opium answered 8/11, 2013 at 17:59 Comment(0)
T
1

There seems to be a legitimate use described here - Rewriting Python Docstrings with a Metaclass.
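
For reference, a hedged, minimal variant of that idea (not the code from the linked article): a metaclass that rewrites method docstrings as the class is created.

class DocstringPrefixer(type):
    def __new__(meta, name, bases, attrs):
        for key, value in attrs.items():
            if callable(value) and value.__doc__:
                # Prepend the owning class's name to every docstring.
                value.__doc__ = '[%s] %s' % (name, value.__doc__)
        return super().__new__(meta, name, bases, attrs)

class Service(metaclass=DocstringPrefixer):
    def ping(self):
        """Check that the service is alive."""

print(Service.ping.__doc__)  # [Service] Check that the service is alive.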

Toft answered 25/5, 2014 at 8:44 Comment(0)
M
1

Pydantic is a library for data validation and settings management that enforces type hints at runtime and provides user-friendly errors when data is invalid. It makes use of metaclasses for its BaseModel and for number range validation.

At work I encountered some code with a process whose stages were defined by classes. The ordering of these steps was controlled by a metaclass that added the steps to a list as the classes were defined. This was thrown out, and the order was instead set by adding the classes to a list explicitly.

Mcinnis answered 18/5, 2021 at 21:53 Comment(0)
B
0

I had to use them once for a binary parser to make it easier to use. You define a message class with attributes for the fields present on the wire. The fields need to be ordered the way they were declared so that the final wire format can be constructed from them. You can do that with a metaclass if you use an ordered namespace dict. In fact, it's in the documentation's examples for metaclasses:

https://docs.python.org/3/reference/datamodel.html#metaclass-example
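
For context, here is a minimal sketch of the idea with invented Field and Message classes; note that since Python 3.6 the class body namespace preserves declaration order by default, so a custom __prepare__ is no longer strictly required for this.

import struct

class Field:
    def __init__(self, fmt):
        self.fmt = fmt  # a struct format character, e.g. 'B', 'H', 'I'

class MessageMeta(type):
    def __new__(meta, name, bases, attrs):
        # Class bodies preserve declaration order (guaranteed since Python 3.6),
        # so the wire format simply follows the order the fields were written in.
        fields = [(key, value) for key, value in attrs.items()
                  if isinstance(value, Field)]
        attrs['_fields'] = [key for key, _ in fields]
        attrs['_format'] = '>' + ''.join(value.fmt for _, value in fields)
        return super().__new__(meta, name, bases, attrs)

class Message(metaclass=MessageMeta):
    pass

class Header(Message):
    version = Field('B')
    length = Field('H')
    checksum = Field('I')

print(Header._format)                         # >BHI
print(struct.pack(Header._format, 1, 8, 0xDEADBEEF))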

But in general: evaluate very carefully whether you really need the added complexity of metaclasses.

Book answered 2/9, 2016 at 12:47 Comment(0)
R
0

The answer from @Dan Gittik is cool.

The examples at the end clarify many things; I changed them to Python 3 and added some explanation:

class MetaMetaclass(type):
    def __new__(meta, name, bases, attrs):
        def __new__(meta, name, bases, attrs):
            cls = type.__new__(meta, name, bases, attrs)
            cls._label = 'Made in %s' % meta.__name__
            return cls

        attrs['__new__'] = __new__
        return type.__new__(meta, name, bases, attrs)

# China is a metaclass, and its __new__ method is replaced by MetaMetaclass (its metaclass)
class China(MetaMetaclass, metaclass=MetaMetaclass):
    __metaclass__ = MetaMetaclass

# Taiwan is a metaclass, and its __new__ method is replaced by MetaMetaclass (its metaclass)
class Taiwan(MetaMetaclass, metaclass=MetaMetaclass):
    __metaclass__ = MetaMetaclass

# A is a normal class, and its __new__ method is replaced by China (its metaclass)
class A(metaclass=China):
    __metaclass__ = China

# B is a normal class, and its __new__ method is replaced by Taiwan (its metaclass)
class B(metaclass=Taiwan):
    __metaclass__ = Taiwan


print(A._label)  # Made in China
print(B._label)  # Made in Taiwan

  • everything is an object, so a class is an object
  • a class object is created by its metaclass
  • any class that inherits from type is a metaclass
  • a metaclass can control class creation
  • a metaclass can control metaclass creation too (so the chain can go on forever)
  • this is metaprogramming: you can control the type system at runtime
  • again, everything is an object; it's a uniform system: type creates type, and type creates instances
Reflective answered 23/7, 2019 at 11:58 Comment(0)
D
0

Another use case is when you want to be able to modify class-level attributes and be sure that it only affects the object at hand. In practice, this implies "merging" the phases of metaclass and class instantiation, so that you only ever deal with class instances of their own (unique) kind.

I also had to do that when (for reasons of readability and polymorphism) we wanted to dynamically define property objects whose returned values may result from calculations based on (often changing) instance-level attributes; that can only be done at the class level, i.e. after the metaclass instantiation and before the class instantiation.
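
As I read it, one hedged sketch of this pattern is a metaclass whose __call__ mints a fresh, single-use subclass for every instantiation, so that class-level changes (such as adding a property) stay local to that one object; the PerInstanceClass and Sensor names are invented:

class PerInstanceClass(type):
    def __call__(cls, *args, **kwargs):
        # Create a throwaway subclass for this one instance only.
        unique_cls = type(cls)(cls.__name__, (cls,), {})
        return super(PerInstanceClass, unique_cls).__call__(*args, **kwargs)

class Sensor(metaclass=PerInstanceClass):
    def __init__(self, offset):
        self.offset = offset

a = Sensor(1)
b = Sensor(2)

# A property added to a's class affects only a, not b.
type(a).reading = property(lambda self: 10 + self.offset)
print(a.reading)               # 11
print(hasattr(b, 'reading'))   # False
print(type(a) is type(b))      # False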

Doer answered 17/8, 2019 at 20:53 Comment(0)
B
0

I know this is an old question, but here is a use case that is really invaluable if you want to create only a single instance of a class based on the parameters passed to the constructor.

Instance singletons: I use this code for creating a singleton instance of a device on a Z-Wave network. No matter how many times I create an instance, if the same values are passed to the constructor and an instance with the exact same values already exists, then that is what gets returned.

import inspect


class SingletonMeta(type):
    # only here to make IDE happy
    _instances = {}

    def __init__(cls, name, bases, dct):
        super(SingletonMeta, cls).__init__(name, bases, dct)
        cls._instances = {}

    def __call__(cls, *args, **kwargs):
        sig = inspect.signature(cls.__init__)
        keywords = {}

        for i, param in enumerate(list(sig.parameters.values())[1:]):
            if len(args) > i:
                keywords[param.name] = args[i]
            elif param.name not in kwargs and param.default != param.empty:
                keywords[param.name] = param.default
            elif param.name in kwargs:
                keywords[param.name] = kwargs[param.name]
        key = []
        for k in sorted(list(keywords.keys())):
            key.append(keywords[k])
        key = tuple(key)

        if key not in cls._instances:
            cls._instances[key] = (
                super(SingletonMeta, cls).__call__(*args, **kwargs)
            )

        return cls._instances[key]


class Test1(metaclass=SingletonMeta):

    def __init__(self, param1, param2='test'):
        pass


class Test2(metaclass=SingletonMeta):

    def __init__(self, param3='test1', param4='test2'):
        pass


test1 = Test1('test1')
test2 = Test1('test1', 'test2')
test3 = Test1('test1', 'test')

test4 = Test2()
test5 = Test2(param4='test1')
test6 = Test2('test2', 'test1')
test7 = Test2('test1')

print('test1 == test2:', test1 == test2)
print('test2 == test3:', test2 == test3)
print('test1 == test3:', test1 == test3)
print('test4 == test2:', test4 == test2)
print('test7 == test3:', test7 == test3)
print('test6 == test4:', test6 == test4)
print('test7 == test4:', test7 == test4)
print('test5 == test6:', test5 == test6)

print('number of Test1 instances:', len(Test1._instances))
print('number of Test2 instances:', len(Test2._instances))

output

test1 == test2: False
test2 == test3: False
test1 == test3: True
test4 == test2: False
test7 == test3: False
test6 == test4: False
test7 == test4: True
test5 == test6: False
number of Test1 instances: 2
number of Test2 instances: 3

Now someone might say it can be done without the use of a metaclass, and I know it can be done if the __init__ method is decorated. I do not know of another way to do it. The code below, while it will return a similar instance that contains all of the same data, is not a singleton instance; a new instance gets created. Because it creates a new instance with the same data, additional steps would need to be taken to check equality of instances. In the end it consumes more memory than using a metaclass, and with the metaclass no additional steps need to be taken to check equality.

class Singleton(object):
    _instances = {}

    def __init__(self, param1, param2='test'):
        key = (param1, param2)
        if key in self._instances:
            self.__dict__.update(self._instances[key].__dict__)
        else:
            self.param1 = param1
            self.param2 = param2
            self._instances[key] = self


test1 = Singleton('test1', 'test2')
test2 = Singleton('test')
test3 = Singleton('test', 'test')

print('test1 == test2:', test1 == test2)
print('test2 == test3:', test2 == test3)
print('test1 == test3:', test1 == test3)

print('test1 params', test1.param1, test1.param2)
print('test2 params', test2.param1, test2.param2)
print('test3 params', test3.param1, test3.param2)

print('number of Singleton instances:', len(Singleton._instances))

output

test1 == test2: False
test2 == test3: False
test1 == test3: False
test1 params test1 test2
test2 params test test
test3 params test test
number of Singleton instances: 2

The metaclass approach is also really nice to use if you need to check for the addition or removal of an instance.

import inspect


class SingletonMeta(type):
    # only here to make IDE happy
    _instances = {}

    def __init__(cls, name, bases, dct):
        super(SingletonMeta, cls).__init__(name, bases, dct)
        cls._instances = {}

    def __call__(cls, *args, **kwargs):
        sig = inspect.signature(cls.__init__)
        keywords = {}

        for i, param in enumerate(list(sig.parameters.values())[1:]):
            if len(args) > i:
                keywords[param.name] = args[i]
            elif param.name not in kwargs and param.default != param.empty:
                keywords[param.name] = param.default
            elif param.name in kwargs:
                keywords[param.name] = kwargs[param.name]
        key = []
        for k in sorted(list(keywords.keys())):
            key.append(keywords[k])
        key = tuple(key)

        if key not in cls._instances:
            cls._instances[key] = (
                super(SingletonMeta, cls).__call__(*args, **kwargs)
            )

        return cls._instances[key]


class Test(metaclass=SingletonMeta):

    def __init__(self, param1, param2='test'):
        pass


instances = []

instances.append(Test('test1', 'test2'))
instances.append(Test('test1', 'test'))

print('number of instances:', len(instances))

instance = Test('test2', 'test3')
if instance not in instances:
    instances.append(instance)

instance = Test('test1', 'test2')
if instance not in instances:
    instances.append(instance)

print('number of instances:', len(instances))

output

number of instances: 2
number of instances: 3

Here is a way to remove a created instance from the registry once it is no longer in use.

import inspect
import weakref


class SingletonMeta(type):
    # only here to make IDE happy
    _instances = {}

    def __init__(cls, name, bases, dct):
        super(SingletonMeta, cls).__init__(name, bases, dct)

        def remove_instance(c, ref):
            for k, v in list(c._instances.items())[:]:
                if v == ref:
                    del cls._instances[k]
                    break
                    
        cls.remove_instance = classmethod(remove_instance)
        cls._instances = {}

    def __call__(cls, *args, **kwargs):
        sig = inspect.signature(cls.__init__)
        keywords = {}

        for i, param in enumerate(list(sig.parameters.values())[1:]):
            if len(args) > i:
                keywords[param.name] = args[i]
            elif param.name not in kwargs and param.default != param.empty:
                keywords[param.name] = param.default
            elif param.name in kwargs:
                keywords[param.name] = kwargs[param.name]
        key = []
        for k in sorted(list(keywords.keys())):
            key.append(keywords[k])
        key = tuple(key)

        if key not in cls._instances:
            instance = super(SingletonMeta, cls).__call__(*args, **kwargs)

            cls._instances[key] = weakref.ref(
                instance,
                instance.remove_instance
            )

        return cls._instances[key]()


class Test1(metaclass=SingletonMeta):

    def __init__(self, param1, param2='test'):
        pass


class Test2(metaclass=SingletonMeta):

    def __init__(self, param3='test1', param4='test2'):
        pass


test1 = Test1('test1')
test2 = Test1('test1', 'test2')
test3 = Test1('test1', 'test')

test4 = Test2()
test5 = Test2(param4='test1')
test6 = Test2('test2', 'test1')
test7 = Test2('test1')

print('test1 == test2:', test1 == test2)
print('test2 == test3:', test2 == test3)
print('test1 == test3:', test1 == test3)
print('test4 == test2:', test4 == test2)
print('test7 == test3:', test7 == test3)
print('test6 == test4:', test6 == test4)
print('test7 == test4:', test7 == test4)
print('test5 == test6:', test5 == test6)

print('number of Test1 instances:', len(Test1._instances))
print('number of Test2 instances:', len(Test2._instances))


print()
del test1
del test5
del test6

print('number of Test1 instances:', len(Test1._instances))
print('number of Test2 instances:', len(Test2._instances))

output

test1 == test2: False
test2 == test3: False
test1 == test3: True
test4 == test2: False
test7 == test3: False
test6 == test4: False
test7 == test4: True
test5 == test6: False
number of Test1 instances: 2
number of Test2 instances: 3

number of Test1 instances: 2
number of Test2 instances: 1

If you look at the output you will notice that the number of Test1 instances has not changed. That is because test1 and test3 are the same instance, and I only deleted test1, so there is still a reference to that instance in the code; as a result, it does not get removed.

Another nice feature of this is that if the instance uses only the supplied parameters to do whatever it is tasked to do, then you can use the metaclass to facilitate remote creation of the instance, either on a different computer entirely or in a different process on the same machine. The parameters can simply be passed over a socket or a named pipe, and a replica of the class can be created on the receiving end.

Bertrambertrand answered 20/11, 2021 at 14:42 Comment(1)
"No matter how many times I create an instance if the same values are passed to the constructor if an instance with the exact same values exists then that is what gets returned." The classic way to do this is by implementing __new__ and returning an instance without called __int__, if it exists.Missend
