How to serialize SqlAlchemy result to JSON?
Asked Answered
B

37

310

Django has some good automatic serialization of ORM models returned from DB to JSON format.

How to serialize SQLAlchemy query result to JSON format?

I tried jsonpickle.encode, but it encodes the query object itself. I tried json.dumps(items), but it returns

TypeError: <Product('3', 'some name', 'some desc')> is not JSON serializable

Is it really so hard to serialize SQLAlchemy ORM objects to JSON/XML? Isn't there a default serializer for them? Serializing ORM query results is a very common task these days.

What I need is just to return JSON or XML data representation of SQLAlchemy query result.

The SQLAlchemy query result is needed in JSON/XML format so it can be used in a JavaScript datagrid (jqGrid, http://www.trirand.com/blog/).

Briant answered 16/2, 2011 at 21:4 Comment(4)
This is a workaround that works for me.Backbreaker
I must warn you that serializing many sqlalchemy models (such as a list of them) will be very slow. If you care about performance, select dictionaries instead.Tymes
https://mcmap.net/q/101367/-retrieve-query-results-as-dict-in-sqlalchemyCubism
On table models, create a wrapper, define a to_dict method in it that writes out the column data you want to serialize, and use this wrapper to get the data from the database.Electrotechnology
P
177

A flat implementation

You could use something like this:

import json

from sqlalchemy.ext.declarative import DeclarativeMeta

class AlchemyEncoder(json.JSONEncoder):

    def default(self, obj):
        if isinstance(obj.__class__, DeclarativeMeta):
            # an SQLAlchemy class
            fields = {}
            for field in [x for x in dir(obj) if not x.startswith('_') and x != 'metadata']:
                data = obj.__getattribute__(field)
                try:
                    json.dumps(data) # this will fail on non-encodable values, like other classes
                    fields[field] = data
                except TypeError:
                    fields[field] = None
            # a json-encodable dict
            return fields

        return json.JSONEncoder.default(self, obj)

and then convert to JSON using:

c = YourAlchemyClass()
print(json.dumps(c, cls=AlchemyEncoder))

It will ignore fields that are not encodable (setting them to None).

It doesn't auto-expand relations (since this could lead to self-references, and loop forever).
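
The try/except json.dumps probe is not SQLAlchemy-specific; here is a stdlib-only sketch of the same idea, where Point is a made-up plain class standing in for an ORM model:

```python
import json

class SkipUnserializableEncoder(json.JSONEncoder):
    """Encode an unknown object as a dict of its JSON-safe public attributes."""
    def default(self, obj):
        fields = {}
        for name in [x for x in dir(obj) if not x.startswith('_')]:
            value = getattr(obj, name)
            if callable(value):
                continue  # skip methods
            try:
                json.dumps(value)        # probe: can this value be encoded?
                fields[name] = value
            except TypeError:
                fields[name] = None      # not encodable: null it out
        return fields

class Point:  # hypothetical stand-in for an ORM model
    def __init__(self):
        self.x = 1
        self.label = "origin"
        self.parent = object()  # not JSON-encodable

print(json.dumps(Point(), cls=SkipUnserializableEncoder, sort_keys=True))
# {"label": "origin", "parent": null, "x": 1}
```

The encoder above additionally checks isinstance(obj.__class__, DeclarativeMeta) so that only SQLAlchemy models get this treatment; the sketch applies it to any object.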

A recursive, non-circular implementation

If, however, you'd rather recurse into relationships (without looping forever on circular references), you could use:

import json

from sqlalchemy.ext.declarative import DeclarativeMeta

def new_alchemy_encoder():
    _visited_objs = []

    class AlchemyEncoder(json.JSONEncoder):
        def default(self, obj):
            if isinstance(obj.__class__, DeclarativeMeta):
                # don't re-visit self
                if obj in _visited_objs:
                    return None
                _visited_objs.append(obj)

                # an SQLAlchemy class
                fields = {}
                for field in [x for x in dir(obj) if not x.startswith('_') and x != 'metadata']:
                    fields[field] = obj.__getattribute__(field)
                # a json-encodable dict
                return fields

            return json.JSONEncoder.default(self, obj)

    return AlchemyEncoder

And then encode objects using:

print(json.dumps(e, cls=new_alchemy_encoder(), check_circular=False))

This would encode all children, and all their children, and all their children... potentially your entire database, basically. When it reaches something it has encoded before, it encodes it as None.
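
The visited-objects guard is plain Python; here is a stdlib-only sketch of the same cycle-cutting behaviour, with an invented Node class in place of ORM models:

```python
import json

def new_cycle_safe_encoder():
    _visited = []  # shared across one whole dumps() call via the closure

    class CycleSafeEncoder(json.JSONEncoder):
        def default(self, obj):
            if any(obj is seen for seen in _visited):
                return None  # already encoded once: cut the cycle here
            _visited.append(obj)
            # expose public attributes; json recurses into the dict for us
            return {k: v for k, v in vars(obj).items() if not k.startswith('_')}

    return CycleSafeEncoder

class Node:  # hypothetical stand-in for an ORM model
    def __init__(self, name):
        self.name = name
        self.peer = None

a, b = Node("a"), Node("b")
a.peer, b.peer = b, a  # circular reference

print(json.dumps(a, cls=new_cycle_safe_encoder(), check_circular=False))
# {"name": "a", "peer": {"name": "b", "peer": null}}
```

Note that the factory must be called once per dumps() call, exactly as in the answer, so each serialization starts with a fresh visited list.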

A recursive, possibly-circular, selective implementation

Another alternative, probably better, is to be able to specify the fields you want to expand:

def new_alchemy_encoder(revisit_self = False, fields_to_expand = []):
    _visited_objs = []

    class AlchemyEncoder(json.JSONEncoder):
        def default(self, obj):
            if isinstance(obj.__class__, DeclarativeMeta):
                # don't re-visit self
                if revisit_self:
                    if obj in _visited_objs:
                        return None
                    _visited_objs.append(obj)

                # go through each field in this SQLalchemy class
                fields = {}
                for field in [x for x in dir(obj) if not x.startswith('_') and x != 'metadata']:
                    val = obj.__getattribute__(field)

                    # is this field another SQLalchemy object, or a list of SQLalchemy objects?
                    if isinstance(val.__class__, DeclarativeMeta) or (isinstance(val, list) and len(val) > 0 and isinstance(val[0].__class__, DeclarativeMeta)):
                        # unless we're expanding this field, stop here
                        if field not in fields_to_expand:
                            # not expanding this field: set it to None and continue
                            fields[field] = None
                            continue

                    fields[field] = val
                # a json-encodable dict
                return fields

            return json.JSONEncoder.default(self, obj)

    return AlchemyEncoder

You can now call it with:

print(json.dumps(e, cls=new_alchemy_encoder(False, ['parents']), check_circular=False))

This will only expand SQLAlchemy fields called 'parents', for example.

Phosphoresce answered 19/5, 2012 at 10:5 Comment(9)
that's a great response, however I get a "could not encode "BaseQuery" whenever it hits a relationship with the non-flat methods, any ideas?Recording
@SashaB How about targeting more granularly against cases where a relationship is repeated? For example, if I have online_order and address, both with a relationship to user, but online_order also has an relationship to address. If I wanted to serialize all of this, I'd have to include address in the fields_to_expand, but I wouldn't want to redundantly serialize address due to its relationship to both user and online_order.Lundberg
@BenKilah Let me guess, you're using Flask-SqlAlchemy and your models are inheriting from db.Model, not Base. If that's the case, modify for field in [x for x in dir(obj) if not x.startswith('_') and x != 'metadata']: so that it reads for field in [x for x in dir(obj) if not x.startswith('_') and x != 'metadata' and not x.startswith('query')]:. Keep in mind this solution will prevent you from having a property/relationship with the name 'query'Semilunar
same way like I did, but much more complex. #7103254Ardennes
You can use my solution github.com/n0nSmoker/SQLAlchemy-serializerInattentive
And yet another solution wakatime.com/blog/32-part-1-sqlalchemy-models-to-jsonPanjandrum
I can suggest to use github.com/n0nSmoker/SQLAlchemy-serializer It is very fast and convenient to useInattentive
With your flat implementation, was it intended for you to display the JSON fields backwards from how they were created?Bullhead
This was super helpful to me. Thank you! I needed datetime objects to be converted to strings so made this little tweak: https://mcmap.net/q/99254/-how-to-serialize-sqlalchemy-result-to-jsonAceto
F
374

You could just output your object as a dictionary:

class User:
    def as_dict(self):
        return {c.name: getattr(self, c.name) for c in self.__table__.columns}

And then call user.as_dict() on an instance to serialize it.

As explained in Convert sqlalchemy row object to python dict
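
The dict that as_dict() returns may still hold values json.dumps cannot encode, datetime columns in particular (see the comments below). One stdlib-only option, assuming stringified values are acceptable, is to pass default=str (the user_dict values here are invented for illustration):

```python
import json
from datetime import datetime

# roughly what as_dict() might return for one row (values made up)
user_dict = {"id": 1, "name": "alice", "created_at": datetime(2013, 3, 22, 16, 50, 11)}

# default=str is only called for values json.dumps cannot encode itself
print(json.dumps(user_dict, default=str, sort_keys=True))
# {"created_at": "2013-03-22 16:50:11", "id": 1, "name": "alice"}
```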

Freckly answered 9/8, 2012 at 13:42 Comment(12)
@charlax, How'd I fix a DateTime? By using this I get 'datetime.datetime(2013, 3, 22, 16, 50, 11) is not JSON serializable' when I do json.dumpsFurie
It's the responsibility of the JSONEncoder object. You can subclass it to define your own encoder for some object, including datetime. Note that Flask, for instance, support encoding datetime in JSON out of the box (with the latest version).Freckly
If you use sqlalchemy's "declarative" method you can add something like this to a custom Base class - this is pretty handy as you can then call my_orm_object.toDict() on any ORM object you have. Similarly you can define a .toJSON() method which uses your toDict method and a custom encoder for handling dates, blobs etcAlverta
How do I handle eager loaded relationships?Cesspool
I would avoid dumping relationships, and rely on client-side joins. If you absolutely want to include relationships, you can either have an explicit list of them and call as_dict on them, or you could imagine using operator.attrgetter and being explicit about everything. There's a way to get relationships on a model programatically as well.Freckly
to also support datetime: return {c.name: unicode(getattr(self, c.name)) for c in self.__table__.columns}Dartmouth
no need to write your own as_dict() conversion; the row in the resultset has an internal dict. Try this: `` import json rs = YourModel.query.all() s = json.dumps([r.__dict__ for r in rs], default=alchemyencoder) print("result: " + s) ``Copenhagen
This doesn't work if your class variables are not the same as your column names. Any idea how to get the class names instead?Beyer
But how to write the deserializer or this model? Do I have to write code for every Column class?Filagree
@JamesBurke - check "More difficult case" section of (my) answer here. But, something like: {a.key: rowproxy[a.class_attribute.name] for a in User.__mapper__.attrs}Caulfield
For Python 3 users, @Shoham's answer needs a small change: return {c.name: str(getattr(self, c.name)) for c in self.__table__.columns}Irresponsive
Great answer. Is there a way to use it with flask.json.jsonify() to set headers et al properly?Grimonia
T
187

Python 3.7+ and Flask 1.1+ can use the built-in dataclasses module:

from dataclasses import dataclass
from datetime import datetime
from flask import Flask, jsonify
from flask_sqlalchemy import SQLAlchemy


app = Flask(__name__)
db = SQLAlchemy(app)


@dataclass
class User(db.Model):
  id: int
  email: str

  id = db.Column(db.Integer, primary_key=True, autoincrement=True)
  email = db.Column(db.String(200), unique=True)


@app.route('/users/')
def users():
  users = User.query.all()
  return jsonify(users)  


if __name__ == "__main__":
  users = User(email="[email protected]"), User(email="[email protected]")
  db.create_all()
  db.session.add_all(users)
  db.session.commit()
  app.run()

The /users/ route will now return a list of users.

[
  {"email": "[email protected]", "id": 1},
  {"email": "[email protected]", "id": 2}
]
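
jsonify is leaning on the dataclass field annotations here; outside Flask, the same shape falls out of the stdlib dataclasses.asdict. A sketch with a plain dataclass (no database involved):

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class PlainUser:  # hypothetical stand-in: a plain dataclass, not a db.Model
    id: int
    email: str

users = [PlainUser(1, "[email protected]"), PlainUser(2, "[email protected]")]
print(json.dumps([asdict(u) for u in users]))
# [{"id": 1, "email": "[email protected]"}, {"id": 2, "email": "[email protected]"}]
```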

Auto-serialize related models

from typing import List

@dataclass
class Account(db.Model):
  id: int
  users: List[User]

  id = db.Column(db.Integer, primary_key=True)
  users = db.relationship(User)  # User model would need a db.ForeignKey field

The response from jsonify(account) would be this.

{  
   "id":1,
   "users":[  
      {  
         "email":"[email protected]",
         "id":1
      },
      {  
         "email":"[email protected]",
         "id":2
      }
   ]
}

Overwrite the default JSON Encoder

from datetime import datetime, timedelta
from flask.json import JSONEncoder


class CustomJSONEncoder(JSONEncoder):
  "Add support for serializing timedeltas and datetimes"

  def default(self, o):
    if isinstance(o, timedelta):
      return str(o)
    if isinstance(o, datetime):
      return o.isoformat()
    return super().default(o)

app.json_encoder = CustomJSONEncoder      
Tightfisted answered 30/8, 2019 at 20:17 Comment(20)
This looks like the right kind of simple. Does it also work for deserialization?Moseley
This is what I was looking for, also, it do not seem to work with dump = json.dumps(query_result), but that's okay, I use return make_response(jsonify(query_result), 200).Allochthonous
Note that id: int = Column will work, but id = Column won't, it seem like YOU HAVE TO declare static typing for the json to serialized the field, otherwise you get an empty {} object.Allochthonous
@Tightfisted thanks, but wasn't actually requesting any help (I have no issue), I am using make_response to be able to change the response code, and maybe json.dumps wasn't working due to the typing missing on my model.Allochthonous
This worked for me, why isn't this the accepted answer? I have been playing around app_context for hours to to get this to work with Flask-Marshmallow.Echeverria
Nice answer! However, the datetime format is not so nice (e.g. "Thu, 19 Dec 2019 07:48:28 GMT"). Is there a way to get the isoformat?Pottery
@MartinThoma you could over-ride the default JSON encoder that Flask uses. I updated the example above to demonstrate how you can control the serialization of datetime objects to use ISO format.Tightfisted
wow, it work for me, thank's. However i have question here, , i use __tablename__ property to named my model, how to except this property. thank's :)Overweigh
Worked for me as well. Note that if you're on Python 3.6, you'll want to just install the package: pipenv install dataclasses. And then it'll work just fine.Kassab
Nice. Aside, for related models: given the need for a true class in users: User, I could not find a way to also use the reverse relation, from User to Account, without running into circular dependencies. (I assume that preventing circular dependencies may also be why SQLAlchemy supports string values for class names in, e.g., users = db.relationship('User', back_populates = 'accounts'). But that won't help while declaring the attributes, where User would then need some accounts: Account attribute.)Hernardo
This demo worked for me. But not until I moved the db.create_all() to directly below the definition of the table. And then below db.create_all() I then added the users.Ambiguous
Worked for me with the following modifications (Flask 1.1.2): add: import datetime, modified: def default(o) to: def default(self, o).Skiest
I like the elegance of this. It's easy to define what should be included in the JSON. Only caveat for me was: Decimal column type does not work, and obviously any relationship tables also need to be dataclass'ed to work.Bigham
as everyone has already stated: amazingly straightforward and helpful answer! has anyone gotten around the Decimal or Numeric() column types and decimal not being json serializable error?Priscian
@JohnBlythe were you able to try overwriting the default json encoder? I think that should do it for you.Tightfisted
To make pylance happy, I had to use cast() e.g. id: int = cast(int, Column(Integer))Biondo
If your app uses floats/decimals then you'll want to add another case to the custom json encoder: https://mcmap.net/q/101369/-how-to-convert-all-decimals-in-a-python-data-structure-to-stringMillet
This is awesome. I've been messing around with Marshmallow and really struggling. This is way simpler and just works. I agree with others that this should be the accepted answer. Thanks for sharing!Achromatin
This method can work more generally without using Flask. See this answer.Groundsheet
I was not successfull when trying to implement this answer. But, using the SQLAlchemy Automatic Schema worked... I think.Foreignborn
F
69

You can convert a RowProxy to a dict like this:

d = dict(row.items())

Then serialize that to JSON (you will have to specify an encoder for things like datetime values). It's not that hard if you just want one record, and not a full hierarchy of related records. For a whole result set:

json.dumps([(dict(row.items())) for row in rs])
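
A sketch of such an encoder for date values, using only the stdlib (the rows here are plain dicts standing in for row mappings):

```python
import json
from datetime import date, datetime

class DateAwareEncoder(json.JSONEncoder):
    """Render date/datetime values as ISO-8601 strings."""
    def default(self, o):
        if isinstance(o, (date, datetime)):
            return o.isoformat()
        return super().default(o)

rows = [{"id": 1, "created": datetime(2011, 8, 11, 20, 20)}]
print(json.dumps(rows, cls=DateAwareEncoder))
# [{"id": 1, "created": "2011-08-11T20:20:00"}]
```
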
Factotum answered 11/8, 2011 at 20:20 Comment(2)
This works for my custom sql query with db.engine.connect() as con: rs = con.execute(sql)Modigliani
This is much more simple and works. What is the difference between this answer and the accepted answer?Tibbetts
K
61

I recommend using marshmallow. It allows you to create serializers that represent your model instances, with support for relationships and nested objects.

Here is a truncated example from their docs. Take the ORM model, Author:

class Author(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    first = db.Column(db.String(80))
    last = db.Column(db.String(80))

A marshmallow schema for that class is constructed like this:

from marshmallow import Schema, fields

class AuthorSchema(Schema):
    id = fields.Int(dump_only=True)
    first = fields.Str()
    last = fields.Str()
    formatted_name = fields.Method("format_name", dump_only=True)

    def format_name(self, author):
        return "{}, {}".format(author.last, author.first)

...and used like this:

author_schema = AuthorSchema()
author_schema.dump(Author.query.first())

...would produce an output like this:

{
        "first": "Tim",
        "formatted_name": "Peters, Tim",
        "id": 1,
        "last": "Peters"
}

Have a look at their full Flask-SQLAlchemy Example.

A library called marshmallow-sqlalchemy specifically integrates SQLAlchemy and marshmallow. In that library, the schema for the Author model described above looks like this:

class AuthorSchema(ModelSchema):
    class Meta:
        model = Author

The integration allows the field types to be inferred from the SQLAlchemy Column types.

See the marshmallow-sqlalchemy documentation for details.

Krueger answered 12/5, 2014 at 11:31 Comment(1)
I also found marshmallow-sqlalchemy.readthedocs.io/en/latest which simplifies the schema generationTabescent
P
19

You can use SQLAlchemy's introspection, like this:

from flask_sqlalchemy import SQLAlchemy
from sqlalchemy import inspect

mysql = SQLAlchemy()

class Contacts(mysql.Model):  
    __tablename__ = 'CONTACTS'
    id = mysql.Column(mysql.Integer, primary_key=True)
    first_name = mysql.Column(mysql.String(128), nullable=False)
    last_name = mysql.Column(mysql.String(128), nullable=False)
    phone = mysql.Column(mysql.String(128), nullable=False)
    email = mysql.Column(mysql.String(128), nullable=False)
    street = mysql.Column(mysql.String(128), nullable=False)
    zip_code = mysql.Column(mysql.String(128), nullable=False)
    city = mysql.Column(mysql.String(128), nullable=False)
    def toDict(self):
        return { c.key: getattr(self, c.key) for c in inspect(self).mapper.column_attrs }

@app.route('/contacts',methods=['GET'])
def getContacts():
    contacts = Contacts.query.all()
    contactsArr = []
    for contact in contacts:
        contactsArr.append(contact.toDict()) 
    return jsonify(contactsArr)

@app.route('/contacts/<int:id>',methods=['GET'])
def getContact(id):
    contact = Contacts.query.get(id)
    return jsonify(contact.toDict())

Get inspired from an answer here : Convert sqlalchemy row object to python dict

Pension answered 12/9, 2017 at 15:38 Comment(2)
In terms of minimal solutions, I combined the toDict() above with tom's CustomJSONEncoder (note slight modifications) to get datetimes into ISO format. Ah the power of stack overflow!Skiest
i found the magic ingredient in this piece of code contact.toDict()Use
T
14

Here's one way to make your SqlAlchemy objects serializable: implement a custom JSONEncoder and add it to the base class:

Usage:

from sqlalchemy.ext.declarative import declarative_base
from flask_jsontools import JsonSerializableBase

Base = declarative_base(cls=(JsonSerializableBase,))

class User(Base):
    #...

Now the User model is magically serializable.

Here's one way to implement a custom JSONEncoder: it will allow you to customize how an instance is represented in JSON by implementing the __json__() method:

from flask.json import JSONEncoder

class DynamicJSONEncoder(JSONEncoder):
    """ JSON encoder for custom classes:

        Uses __json__() method if available to prepare the object.
        Especially useful for SQLAlchemy models
    """

    def default(self, o):
        # Custom JSON-encodeable objects
        if hasattr(o, '__json__'):
            return o.__json__()

        # Default
        return super().default(o)
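
The __json__() protocol works just as well with the stdlib encoder; a minimal Flask-free sketch (Song is an invented model-like class):

```python
import json

class DunderJsonEncoder(json.JSONEncoder):
    """Defer to an object's __json__() method when it has one."""
    def default(self, o):
        if hasattr(o, '__json__'):
            return o.__json__()
        return super().default(o)

class Song:  # hypothetical model-like class
    def __init__(self, title):
        self.title = title
        self._secret = "never serialized"

    def __json__(self):
        return {"title": self.title}

print(json.dumps([Song("Cochise")], cls=DunderJsonEncoder))
# [{"title": "Cochise"}]
```
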
Tymes answered 6/9, 2014 at 3:37 Comment(9)
This only solves half the problem, as it only serializes a single row. How to serialize the whole query result?Going
@SteveBennett use the jsontools' jsonapi to encode the response. That will automatically encode the return objectClarify
I have a very simple sqlalchemy model, and I'm getting: TypeError: <ORM.State object at 0x03577A50> is not JSON serializableLytle
It worked eventually by explicitly calling __json__() on my model object: return my_object.__json__()Lytle
The library does not work with Flask 1.0 and above, as import flask.ext.whatever is no longer supported in Flask 1.0.Disconsolate
To provide some useful info and save some googling, it appears you need to use from flask_jsontools import JsonSerializableBase instead of from flask.ext.jsontools import JsonSerializableBase now.Faceharden
Please help a noob - Now the User model is magically serializable so once I what code do I use to return BaseQuery as JSON?Faceharden
@Lytle and others! please add this to your flask app.py file: app.json_encoder = DynamicJSONEncoder now, you can return your models directly without using __json__()!Postimpressionism
Also, in first code block should be: Base = declarative_base(cls=(JsonSerializableBase,))Postimpressionism
C
14

For security reasons you should never return all of a model's fields; I prefer to choose them selectively.

Flask's json encoding now supports UUID, datetime and relationships (and added query and query_class for flask_sqlalchemy db.Model class). I've updated the encoder as follows:

app/json_encoder.py

from sqlalchemy.ext.declarative import DeclarativeMeta
from flask import json


class AlchemyEncoder(json.JSONEncoder):
    def default(self, o):
        if isinstance(o.__class__, DeclarativeMeta):
            data = {}
            fields = o.__json__() if hasattr(o, '__json__') else dir(o)
            for field in [f for f in fields if not f.startswith('_') and f not in ['metadata', 'query', 'query_class']]:
                value = o.__getattribute__(field)
                try:
                    json.dumps(value)
                    data[field] = value
                except TypeError:
                    data[field] = None
            return data
        return json.JSONEncoder.default(self, o)

app/__init__.py

# json encoding
from app.json_encoder import AlchemyEncoder
app.json_encoder = AlchemyEncoder

With this I can optionally add a __json__ property that returns the list of fields I wish to encode:

app/models.py

class Queue(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    song_id = db.Column(db.Integer, db.ForeignKey('song.id'), unique=True, nullable=False)
    song = db.relationship('Song', lazy='joined')
    type = db.Column(db.String(20), server_default=u'audio/mpeg')
    src = db.Column(db.String(255), nullable=False)
    created_at = db.Column(db.DateTime, server_default=db.func.now())
    updated_at = db.Column(db.DateTime, server_default=db.func.now(), onupdate=db.func.now())

    def __init__(self, song):
        self.song = song
        self.src = song.full_path

    def __json__(self):
        return ['song', 'src', 'type', 'created_at']

I add @jsonapi to my view, return the result list, and then my output is as follows:

[

{

    "created_at": "Thu, 23 Jul 2015 11:36:53 GMT",
    "song": 

        {
            "full_path": "/static/music/Audioslave/Audioslave [2002]/1 Cochise.mp3",
            "id": 2,
            "path_name": "Audioslave/Audioslave [2002]/1 Cochise.mp3"
        },
    "src": "/static/music/Audioslave/Audioslave [2002]/1 Cochise.mp3",
    "type": "audio/mpeg"
}

]
Clarify answered 22/7, 2015 at 16:48 Comment(1)
Beautiful! Once again, proof that sometimes you don't need a fat package for every stupid little task--that learning DSL can be harder than doing it the "hard" way. I looked at many many JSON and REST packages before landing here. True, this still requires a package, flask_jsontools (to add @jsonapi to @app.route in views.py etc), but I love the simplicity of it. I think it is cheap Flask added datetime but not date so I added it myself to json_encoder.py: value=...^if isinstance(value, date):^data[field] = datetime.combine(value, time.min).isoformat()^else:^try:...Eyebrow
E
7

A more detailed explanation. In your model, add:

def as_dict(self):
       return {c.name: str(getattr(self, c.name)) for c in self.__table__.columns}

The str() is for Python 3; if you are using Python 2, use unicode(). It helps serialize dates. You can remove it if you are not dealing with those.

You can now query the database like this

some_result = User.query.filter_by(id=current_user.id).first().as_dict()

first() is needed to avoid weird errors, since it returns a single model instance rather than a query. as_dict() then serializes the result, which is ready to be turned into JSON:

jsonify(some_result)
Errhine answered 11/5, 2018 at 19:29 Comment(1)
it will raise an error if id has uuid type!Kafiristan
M
3

While the original question goes back a while, the number of answers here (and my own experiences) suggests it's a non-trivial question with many different approaches of varying complexity and different trade-offs.

That's why I built the SQLAthanor library that extends SQLAlchemy's declarative ORM with configurable serialization/de-serialization support that you might want to take a look at.

The library supports:

  • Python 2.7, 3.4, 3.5, and 3.6.
  • SQLAlchemy versions 0.9 and higher
  • serialization/de-serialization to/from JSON, CSV, YAML, and Python dict
  • serialization/de-serialization of columns/attributes, relationships, hybrid properties, and association proxies
  • enabling and disabling of serialization for particular formats and columns/relationships/attributes (e.g. you want to support an inbound password value, but never include an outbound one)
  • pre-serialization and post-deserialization value processing (for validation or type coercion)
  • a pretty straightforward syntax that is both Pythonic and seamlessly consistent with SQLAlchemy's own approach

You can check out the (I hope!) comprehensive docs here: https://sqlathanor.readthedocs.io/en/latest

Hope this helps!

Messy answered 22/7, 2018 at 20:58 Comment(0)
A
2

Custom serialization and deserialization.

"from_json" (class method) builds a Model object based on json data.

"deserialize" could be called only on instance, and merge all data from json into Model instance.

"serialize" - recursive serialization

__write_only__ property is needed to define write only properties ("password_hash" for example).

from sqlalchemy.orm.attributes import QueryableAttribute

class Serializable(object):
    __exclude__ = ('id',)
    __include__ = ()
    __write_only__ = ()

    @classmethod
    def from_json(cls, json, selfObj=None):
        if selfObj is None:
            self = cls()
        else:
            self = selfObj
        exclude = (cls.__exclude__ or ()) + Serializable.__exclude__
        include = cls.__include__ or ()
        if json:
            for prop, value in json.items():  # use iteritems() on Python 2
                # only set mapped attributes that are user data;
                # __include__ overrides __exclude__
                if (prop not in exclude or prop in include) and isinstance(
                        getattr(cls, prop, None), QueryableAttribute):
                    setattr(self, prop, value)
        return self

    def deserialize(self, json):
        if not json:
            return None
        return self.__class__.from_json(json, selfObj=self)

    @classmethod
    def serialize_list(cls, object_list=()):  # avoid a mutable default argument
        output = []
        for li in object_list:
            if isinstance(li, Serializable):
                output.append(li.serialize())
            else:
                output.append(li)
        return output

    def serialize(self, **kwargs):

        # init write only props
        if len(getattr(self.__class__, '__write_only__', ())) == 0:
            self.__class__.__write_only__ = ()
        dictionary = {}
        expand = kwargs.get('expand', ()) or ()
        prop = 'props'
        if expand:
            # expand all the fields
            for key in expand:
                getattr(self, key)
        iterable = self.__dict__.items()
        is_custom_property_set = False
        # include only properties passed as parameter
        if (prop in kwargs) and (kwargs.get(prop, None) is not None):
            is_custom_property_set = True
            iterable = kwargs.get(prop, None)
        # loop through all accessible properties
        for key in iterable:
            accessor = key
            if isinstance(key, tuple):
                accessor = key[0]
            if not (accessor in self.__class__.__write_only__) and not accessor.startswith('_'):
                # force select from db to be able get relationships
                if is_custom_property_set:
                    getattr(self, accessor, None)
                if isinstance(self.__dict__.get(accessor), list):
                    dictionary[accessor] = self.__class__.serialize_list(object_list=self.__dict__.get(accessor))
                # check if those properties are read only
                elif isinstance(self.__dict__.get(accessor), Serializable):
                    dictionary[accessor] = self.__dict__.get(accessor).serialize()
                else:
                    dictionary[accessor] = self.__dict__.get(accessor)
        return dictionary
Antigone answered 4/4, 2016 at 15:16 Comment(0)
J
2

Use the built-in serializer in SQLAlchemy:

from sqlalchemy.ext.serializer import loads, dumps
obj = MyAlchemyObject()
# serialize object
serialized_obj = dumps(obj)

# deserialize object
obj = loads(serialized_obj)

If you're transferring the object between sessions, remember to detach the object from the current session using session.expunge(obj). To attach it again, just do session.add(obj).

Jakejakes answered 13/12, 2016 at 11:31 Comment(3)
Nifty, but does not convert to JSON.Skeet
For JSON 'serialization' check out marshmallow-sqlalchemy. Definitely the best solution when you're exposing objects to clients. marshmallow-sqlalchemy.readthedocs.ioJakejakes
The serializer module is only appropriate for query structures. It is not needed for: instances of user-defined classes. These contain no references to engines, sessions or expression constructs in the typical case and can be serialized directly.Strang
I
2

Here is a solution that lets you select the relations you want to include in your output, as deep as you would like to go. NOTE: this is a complete re-write that takes a dict/str as an argument rather than a list, and fixes some issues.

def deep_dict(self, relations={}):
    """Output a dict of an SA object recursing as deep as you want.

    Takes one argument, relations which is a dictionary of relations we'd
    like to pull out. The relations dict items can be a single relation
    name or deeper relation names connected by sub dicts

    Example:
        Say we have a Person object with a family relationship
            person.deep_dict(relations={'family':None})
        Say the family object has homes as a relation then we can do
            person.deep_dict(relations={'family':{'homes':None}})
            OR
            person.deep_dict(relations={'family':'homes'})
        Say homes has a relation like rooms you can do
            person.deep_dict(relations={'family':{'homes':'rooms'}})
            and so on...
    """
    mydict =  dict((c, str(a)) for c, a in
                    self.__dict__.items() if c != '_sa_instance_state')
    if not relations:
        # just return ourselves
        return mydict

    # otherwise we need to go deeper
    if not isinstance(relations, dict) and not isinstance(relations, str):
        raise Exception("relations should be a dict, it is of type {}".format(type(relations)))

    # got here so check and handle if we were passed a dict
    if isinstance(relations, dict):
        # we were passed deeper info
        for left, right in relations.items():
            myrel = getattr(self, left)
            if isinstance(myrel, list):
                mydict[left] = [rel.deep_dict(relations=right) for rel in myrel]
            else:
                mydict[left] = myrel.deep_dict(relations=right)
    # if we get here check and handle if we were passed a string
    elif isinstance(relations, str):
        # passed a single item
        myrel = getattr(self, relations)
        left = relations
        if isinstance(myrel, list):
            mydict[left] = [rel.deep_dict(relations=None)
                                 for rel in myrel]
        else:
            mydict[left] = myrel.deep_dict(relations=None)

    return mydict

So, for the person/family/homes/rooms example above, turning it into JSON is just:

json.dumps(person.deep_dict(relations={'family':{'homes':'rooms'}}))
Inesita answered 18/12, 2016 at 0:3 Comment(2)
This is fine I think to just put in your base class so that all objects will have it. I'll leave the json encoding to you...Inesita
Note that this version will get all list relations so be cautious providing relations with a ton of items...Inesita
E
2
step1:
class CNAME:
   ...
   def as_dict(self):
       return {item.name: getattr(self, item.name) for item in self.__table__.columns}

step2:
results = []  # don't shadow the built-in "list"
for data in session.query(CNAME).all():
    results.append(data.as_dict())

step3:
return jsonify(results)
Entomophilous answered 11/5, 2020 at 18:26 Comment(1)
Code dumps without any explanation are rarely helpful. Stack Overflow is about learning, not providing snippets to blindly copy and paste. Please edit your question and explain how it works better than what the OP provided.Cultural
M
2

Even though it's an old post, and maybe I'm not answering the question above exactly, I want to share my serialization approach; at least it works for me.

I use FastAPI, SQLAlchemy and MySQL, but I don't use the ORM models;

# from sqlalchemy import create_engine
# from sqlalchemy.orm import sessionmaker
# engine = create_engine(config.SQLALCHEMY_DATABASE_URL, pool_pre_ping=True)
# SessionLocal = sessionmaker(autocommit=False, autoflush=False, bind=engine)

Serialization code



import decimal
import datetime


def alchemy_encoder(obj):
    """JSON encoder function for SQLAlchemy special classes."""
    if isinstance(obj, datetime.date):
        return obj.strftime("%Y-%m-%d %H:%M:%S")
    elif isinstance(obj, decimal.Decimal):
        return float(obj)

import json
from sqlalchemy import text

# db is SessionLocal() object 

app_sql = 'SELECT * FROM app_info ORDER BY app_id LIMIT :page,:page_size'

# The next two are the parameters passed in
page = 1
page_size = 10

# execute sql and return a <class 'sqlalchemy.engine.result.ResultProxy'> object
app_list = db.execute(text(app_sql), {'page': page, 'page_size': page_size})

# serialize
res = json.loads(json.dumps([dict(r) for r in app_list], default=alchemy_encoder))  # on SQLAlchemy 1.4+, use dict(r._mapping)

If it doesn't work for you, please ignore my answer. I based it on this article:

https://codeandlife.com/2014/12/07/sqlalchemy-results-to-json-the-easy-way/
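The default= hook can be sanity-checked without a database at all; the dictionary below is made up for illustration, with the encoder function repeated so the snippet runs on its own:

```python
import datetime
import decimal
import json


def alchemy_encoder(obj):
    """JSON encoder function for SQLAlchemy special classes."""
    if isinstance(obj, datetime.date):
        return obj.strftime("%Y-%m-%d %H:%M:%S")
    elif isinstance(obj, decimal.Decimal):
        return float(obj)


# a made-up row, standing in for what the raw SQL query returns
row = {"app_id": 1, "price": decimal.Decimal("9.99"),
       "created": datetime.datetime(2020, 9, 23, 9, 13)}
print(json.dumps(row, default=alchemy_encoder))
# {"app_id": 1, "price": 9.99, "created": "2020-09-23 09:13:00"}
```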

Mra answered 23/9, 2020 at 9:13 Comment(0)
U
2

Install simplejson with pip install simplejson, then create a class:

from collections import OrderedDict
from datetime import datetime


class Serialise(object):

    def _asdict(self):
        """
        Serialization logic for converting entities using flask's jsonify

        :return: An ordered dictionary
        :rtype: :class:`collections.OrderedDict`
        """

        result = OrderedDict()
        # Get the columns
        for key in self.__mapper__.c.keys():
            if isinstance(getattr(self, key), datetime):
                result["x"] = getattr(self, key).timestamp() * 1000
                result["timestamp"] = result["x"]
            else:
                result[key] = getattr(self, key)

        return result

Inherit from this class in every ORM class, so that the _asdict method is available on each of them, then use jsonify anywhere.

Ultraviolet answered 4/10, 2020 at 7:50 Comment(0)
G
2

Python 3.7+ in 2023

You can add the dataclass decorator to your model and define a custom JSON serializer, then json.dumps will work (by providing the custom encoder to cls). In the example below, db_row is an instance of the DB class:

json.dumps(db_row, cls=models.CustomJSONEncoder)
{"id": 25, "name": "A component", "author": "Bob", "modified": "2023-02-08T11:49:15.675837"}

The custom JSON serializer can be easily modified to make it compatible with any type that isn't natively JSON serializable.

models.py

from datetime import datetime
import dataclasses
import json
from sqlalchemy import Column, Integer, String, DateTime
from database import Base


@dataclasses.dataclass # <<-- add this decorator 
class DB(Base):
    """Model used for SQLite database entries."""

    __tablename__ = "components"

    id: int = Column(Integer, primary_key=True, index=True)
    name: str = Column(String)
    author: str = Column(String)
    modified: datetime = Column(DateTime(timezone=True), default=datetime.utcnow)


class CustomJSONEncoder(json.JSONEncoder): # <<-- Add this custom encoder 
    """Custom JSON encoder for the DB class."""

    def default(self, o):
        if dataclasses.is_dataclass(o): # this serializes anything dataclass can handle  
            return dataclasses.asdict(o)
        if isinstance(o, datetime): # this adds support for datetime
            return o.isoformat()
        return super().default(o)

To further extend it for any non-serializable type you might use in your database, add another if statement to the custom encoder class that returns something serializable (e.g. str).
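Since the encoder only relies on the standard-library dataclass machinery, it can be exercised without SQLAlchemy or a database; the Row class below is a made-up stand-in for the model, and the encoder is repeated so the snippet is self-contained:

```python
import dataclasses
import json
from datetime import datetime


class CustomJSONEncoder(json.JSONEncoder):
    """Same encoder as above, repeated so this snippet runs standalone."""

    def default(self, o):
        if dataclasses.is_dataclass(o):  # serializes anything dataclass can handle
            return dataclasses.asdict(o)
        if isinstance(o, datetime):  # adds support for datetime
            return o.isoformat()
        return super().default(o)


@dataclasses.dataclass
class Row:  # hypothetical stand-in for the SQLAlchemy model
    id: int
    name: str
    modified: datetime


print(json.dumps(Row(25, "A component", datetime(2023, 2, 8, 11, 49)),
                 cls=CustomJSONEncoder))
# {"id": 25, "name": "A component", "modified": "2023-02-08T11:49:00"}
```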

Groundsheet answered 8/2, 2023 at 19:3 Comment(0)
A
1
from datetime import datetime
from typing import List


class SqlToDict:
    def __init__(self, data) -> None:
        self.data = data

    def to_timestamp(self, date):
        if isinstance(date, datetime):
            return int(datetime.timestamp(date))
        else:
            return date

    def to_dict(self) -> List:
        arr = []
        for i in self.data:
            keys = [*i.keys()]
            values = [*i]
            values = [self.to_timestamp(d) for d in values]
            arr.append(dict(zip(keys, values)))
        return arr

For example:

SqlToDict(data).to_dict()
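The timestamp conversion at the heart of this helper can be checked on its own; with a timezone-aware datetime the result is deterministic (the free function below mirrors the method above, for illustration only):

```python
from datetime import datetime, timezone


def to_timestamp(value):
    # free-function version of SqlToDict.to_timestamp, for illustration
    if isinstance(value, datetime):
        return int(datetime.timestamp(value))
    return value


print(to_timestamp(datetime(2020, 1, 1, tzinfo=timezone.utc)))  # 1577836800
print(to_timestamp("not a date"))  # non-datetime values pass through unchanged
```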
Alwitt answered 16/2, 2011 at 21:4 Comment(0)
B
1

It is not so straightforward. I wrote some code to do this. I'm still working on it, and it uses the MochiKit framework. It basically translates compound objects between Python and JavaScript using a proxy and registered JSON converters.

Browser side for database objects is db.js It needs the basic Python proxy source in proxy.js.

On the Python side there is the base proxy module. Then finally the SqlAlchemy object encoder in webserver.py. It also depends on metadata extractors found in the models.py file.

Berate answered 16/2, 2011 at 22:23 Comment(3)
Quite complicated from the first glance... What I need - is to get SQLAlchemy objects query result in JSON/XML format to use it in javascript datagird (JQGrid trirand.com/blog)Briant
Sometimes problems are more complicated than you exect at first glance... This handles objects returned as foreign keys, and tries to avoid the infinite recursion that happens with deeply nested relations. However, you could probably write some custom queries that return base types only and serialize those with simplejson directly.Berate
Right, maybe I'll really go with querying for dicts using SQLAlchemy and will use benefits of ORM performing save/update actions only.Briant
P
1
def alc2json(row):
    return dict([(col, str(getattr(row,col))) for col in row.__table__.columns.keys()])

I thought I'd play a little code golf with this one.

FYI: I am using automap_base since we have a separately designed schema according to business requirements. I just started using SQLAlchemy today but the documentation states that automap_base is an extension to declarative_base which seems to be the typical paradigm in the SQLAlchemy ORM so I believe this should work.

It does not get fancy with following foreign keys per Tjorriemorrie's solution; it simply matches columns to values and handles Python types by str()-ing the column values. Our values consist of Python datetime.time and decimal.Decimal results, so it gets the job done.

Hope this helps any passers-by!

Protectionism answered 5/5, 2016 at 0:9 Comment(0)
M
1

I know this is quite an old post. I took the solution given by @SashaB and modified it as per my needs.

I added following things to it:

  1. Field ignore list: A list of fields to be ignored while serializing
  2. Field replace list: A dictionary containing field names to be replaced by values while serializing.
  3. Removed methods and BaseQuery getting serialized

My code is as follows:

import json

from flask_sqlalchemy import BaseQuery  # BaseQuery is Flask-SQLAlchemy's query class
from sqlalchemy.ext.declarative import DeclarativeMeta


def alchemy_json_encoder(revisit_self=False, fields_to_expand=(), fields_to_ignore=(), fields_to_replace={}):
   """
   Serialize SQLAlchemy result into JSON
   :param revisit_self: True / False
   :param fields_to_expand: Fields which are to be expanded for including their children and all
   :param fields_to_ignore: Fields to be ignored while encoding
   :param fields_to_replace: Field keys to be replaced by values assigned in dictionary
   :return: Json serialized SQLAlchemy object
   """
   _visited_objs = []
   class AlchemyEncoder(json.JSONEncoder):
      def default(self, obj):
        if isinstance(obj.__class__, DeclarativeMeta):
            # don't re-visit self
            if revisit_self:
                if obj in _visited_objs:
                    return None
                _visited_objs.append(obj)

            # go through each field in this SQLalchemy class
            fields = {}
            for field in [x for x in dir(obj) if not x.startswith('_') and x != 'metadata' and x not in fields_to_ignore]:
                val = obj.__getattribute__(field)
                # skip if this field is a method definition or a BaseQuery object
                if not hasattr(val, "__call__") and not isinstance(val, BaseQuery):
                    field_name = fields_to_replace[field] if field in fields_to_replace else field
                    # is this field another SQLalchemy object, or a list of SQLalchemy objects?
                    if isinstance(val.__class__, DeclarativeMeta) or \
                            (isinstance(val, list) and len(val) > 0 and isinstance(val[0].__class__, DeclarativeMeta)):
                        # unless we're expanding this field, stop here
                        if field not in fields_to_expand:
                            # not expanding this field: set it to None and continue
                            fields[field_name] = None
                            continue

                    fields[field_name] = val
            # a json-encodable dict
            return fields

        return json.JSONEncoder.default(self, obj)
   return AlchemyEncoder

Hope it helps someone!

Madel answered 13/5, 2016 at 14:4 Comment(0)
J
1

The following code will serialize a SQLAlchemy result to JSON.

import json
from collections import OrderedDict


def asdict(self):
    result = OrderedDict()
    for key in self.__mapper__.c.keys():
        if getattr(self, key) is not None:
            result[key] = str(getattr(self, key))
        else:
            result[key] = getattr(self, key)
    return result


def to_array(all_vendors):
    v = [ ven.asdict() for ven in all_vendors ]
    return json.dumps(v) 

Calling the function:

def all_products():
    all_products = Products.query.all()
    return to_array(all_products)
Janie answered 13/6, 2018 at 12:2 Comment(0)
K
1

The AlchemyEncoder is wonderful but sometimes fails with Decimal values. Here is an improved encoder that solves the decimal problem -

from decimal import Decimal

class AlchemyEncoder(json.JSONEncoder):
    # To serialize SQLAlchemy objects
    def default(self, obj):
        if isinstance(obj.__class__, DeclarativeMeta):
            model_fields = {}
            for field in [x for x in dir(obj) if not x.startswith('_') and x != 'metadata']:
                data = obj.__getattribute__(field)
                try:
                    json.dumps(data)  # this will fail on non-encodable values, like other classes
                    model_fields[field] = data
                except TypeError:
                    # convert Decimal fields instead of dropping them
                    model_fields[field] = float(data) if isinstance(data, Decimal) else None
            return model_fields
        if isinstance(obj, Decimal):
            return float(obj)
        return json.JSONEncoder.default(self, obj)
Krummhorn answered 21/3, 2019 at 21:13 Comment(0)
A
1

When using SQLAlchemy to connect to a DB, this is a simple, highly configurable solution using pandas.

import pandas as pd
import sqlalchemy

#sqlalchemy engine configuration
engine = sqlalchemy.create_engine....

def my_function():
  #read in from sql directly into a pandas dataframe
  #check the pandas documentation for additional config options
  sql_DF = pd.read_sql_table("table_name", con=engine)

  # "orient" is optional here but allows you to specify the json formatting you require
  sql_json = sql_DF.to_json(orient="index")

  return sql_json

Alaska answered 29/1, 2020 at 18:13 Comment(0)
A
1

(Tiny tweak on Sasha B's really excellent answer)

This specifically converts datetime objects to strings which in the original answer would be converted to None:

# Standard library imports
from datetime import datetime
import json

# 3rd party imports
from sqlalchemy.ext.declarative import DeclarativeMeta

class JsonEncoder(json.JSONEncoder):
    def default(self, obj):
        if isinstance(obj.__class__, DeclarativeMeta):
            fields = {}  # named to avoid shadowing the built-in dict

            # Remove invalid fields and just get the column attributes
            columns = [x for x in dir(obj) if not x.startswith("_") and x != "metadata"]

            for column in columns:
                value = obj.__getattribute__(column)

                try:
                    json.dumps(value)
                    fields[column] = value
                except TypeError:
                    if isinstance(value, datetime):
                        fields[column] = value.__str__()
                    else:
                        fields[column] = None
            return fields

        return json.JSONEncoder.default(self, obj)
Aceto answered 22/1, 2021 at 10:21 Comment(0)
S
1

Very late 2023

But faster than the JSON-encoder approaches: O(n) time and 2n space.

My implementation also supports type checking and the ability to hide some attributes, which can trim unnecessary data, plus an optional debug flag to inspect contents. It uses the walrus operator (available since Python 3.8).

def obj_to_dict(obj, remove=['_sa_instance_state'], debug=False):
    result = {}

    if type(obj).__name__ == "Row":
        return dict(obj)

    for k in (obj := obj.__dict__):
        if k not in remove:
            result[k] = obj[k]

    print(result) if debug else None
    return result
Swellfish answered 4/1, 2023 at 18:43 Comment(0)
R
0

Under Flask, this works and handles datatime fields, transforming a field of type
'time': datetime.datetime(2018, 3, 22, 15, 40) into
"time": "2018-03-22 15:40:00":

obj = {c.name: str(getattr(self, c.name)) for c in self.__table__.columns}

# This to get the JSON body
return json.dumps(obj)

# Or this to get a response object
return jsonify(obj)
Rhenium answered 10/3, 2018 at 22:15 Comment(0)
G
0

The built-in serializer chokes with "utf-8 cannot decode invalid start byte" for some inputs. Instead, I went with:

def row_to_dict(row):
    temp = dict(row.__dict__)  # copy, so the instance's own __dict__ isn't mutated
    temp.pop('_sa_instance_state', None)
    return temp


def rows_to_list(rows):
    ret_rows = []
    for row in rows:
        ret_rows.append(row_to_dict(row))
    return ret_rows


@website_blueprint.route('/api/v1/some/endpoint', methods=['GET'])
def some_api():
    '''
    /some_endpoint
    '''
    rows = rows_to_list(SomeModel.query.all())
    response = app.response_class(
        response=jsonplus.dumps(rows),
        status=200,
        mimetype='application/json'
    )
    return response
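The stripping logic can be verified without a real model; FakeRow below is a made-up stand-in whose __dict__ mimics what SQLAlchemy puts on a mapped instance, and row_to_dict is repeated so the snippet is self-contained:

```python
def row_to_dict(row):
    # copy first so we don't pop from the instance's real __dict__
    temp = dict(row.__dict__)
    temp.pop('_sa_instance_state', None)
    return temp


class FakeRow:  # hypothetical stand-in for a mapped instance
    def __init__(self):
        self.id = 1
        self.name = 'felix'
        self._sa_instance_state = object()


print(row_to_dict(FakeRow()))  # {'id': 1, 'name': 'felix'}
```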
Giovannigip answered 5/2, 2019 at 16:53 Comment(0)
C
0

Maybe you can use a class like this

from typing import Any, Dict

from sqlalchemy import Table
from sqlalchemy.ext.declarative import declarative_base, declared_attr


class Custom:
    """Some custom logic here!"""

    __table__: Table  # def for mypy

    @declared_attr
    def __tablename__(cls):  # pylint: disable=no-self-argument
        return cls.__name__  # pylint: disable= no-member

    def to_dict(self) -> Dict[str, Any]:
        """Serializes only column data."""
        return {c.name: getattr(self, c.name) for c in self.__table__.columns}

Base = declarative_base(cls=Custom)

class MyOwnTable(Base):
    #COLUMNS!

With that, all models inherit the to_dict method.

Challenge answered 29/1, 2020 at 20:15 Comment(0)
S
0

This is a JSONEncoder version that preserves model column order and only keeps recursively defined column and relationship fields. It also formats most JSON unserializable types:

import json
from datetime import datetime
from decimal import Decimal

import arrow
from sqlalchemy.ext.declarative import DeclarativeMeta

class SQLAlchemyJSONEncoder(json.JSONEncoder):
    """
    SQLAlchemy ORM JSON Encoder
    If you have a "backref" relationship defined in your SQLAlchemy model,
    this encoder raises a ValueError to stop an infinite loop.
    """

    def default(self, obj):
        if isinstance(obj, datetime):
            return arrow.get(obj).isoformat()
        elif isinstance(obj, Decimal):
            return float(obj)
        elif isinstance(obj, set):
            return sorted(obj)
        elif isinstance(obj.__class__, DeclarativeMeta):
            for attribute, relationship in obj.__mapper__.relationships.items():
                if isinstance(relationship.__getattribute__("backref"), tuple):
                    raise ValueError(
                        f'{obj.__class__} object has a "backref" relationship '
                        "that would cause an infinite loop!"
                    )
            dictionary = {}
            column_names = [column.name for column in obj.__table__.columns]
            for key in column_names:
                value = obj.__getattribute__(key)
                if isinstance(value, datetime):
                    value = arrow.get(value).isoformat()
                elif isinstance(value, Decimal):
                    value = float(value)
                elif isinstance(value, set):
                    value = sorted(value)
                dictionary[key] = value
            for key in [
                attribute
                for attribute in dir(obj)
                if not attribute.startswith("_")
                and attribute != "metadata"
                and attribute not in column_names
            ]:
                value = obj.__getattribute__(key)
                dictionary[key] = value
            return dictionary

        return super().default(obj)
Saltillo answered 12/3, 2021 at 2:36 Comment(0)
S
0

I have used this package successfully: https://github.com/n0nSmoker/SQLAlchemy-serializer

You can just do this on the model:

from sqlalchemy_serializer import SerializerMixin

class SomeModel(db.Model, SerializerMixin):
    ...

It adds a to_dict method that is fully recursive:

item = SomeModel.query.filter(...).one()
result = item.to_dict()

It lets you make rules to avoid infinite recursion:

result = item.to_dict(rules=('-somefield', '-some_relation.nested_one.another_nested_one'))
Saint answered 27/9, 2021 at 2:47 Comment(0)
R
0

https://flask-restplus.readthedocs.io/en/stable/marshalling.html

from flask_restplus import fields, Namespace, marshal
api = Namespace("Student data")
db_data = Student_details.query.all()
data_marshal_obj = api.model(" Data", {
    "id": fields.String(),
    "number": fields.Integer(),
    "house_name": fields.String(),
 })
data_in_json_serialize = marshal(db_data, data_marshal_obj)
print(type(data_in_json_serialize )) #  <class 'dict'>

Customised serialization marshalling in flask-restplus.

Redon answered 14/3, 2022 at 14:19 Comment(0)
J
0

After some experimenting, I came up with my own solution, below:

def to_dict(self):
    keys = self.__mapper__.attrs.keys()
    attrs = vars(self)
    return { k : attrs[k]  for k in keys}
Justinn answered 19/3, 2022 at 3:52 Comment(0)
P
0

A dynamic way to add an as_dict method to any model:

from sqlalchemy.inspection import inspect

def implement_as_dict(model):
    if not hasattr(model,"as_dict"):
        column_names=[]
        imodel = inspect(model)
        for c in imodel.columns:
            column_names.append(c.key)

        #define model.as_dict()
        def as_dict(self):
            d = {}
            for c in column_names:
                d[c] = getattr(self,c)
            return d

        setattr(model,"as_dict",as_dict)

#model definition
class User(Base):
    __tablename__ = 'users'

    id = Column(Integer, primary_key=True)
    name = Column(String)
# adding as_dict definition to model
implement_as_dict(User)

then you can use

user = session.query(User).filter_by(name='rick').first() 

user.as_dict()
#sample output 
{"id":1,"name":"rick"}
Prominent answered 26/7, 2022 at 23:16 Comment(0)
B
-1

While using some raw sql and undefined objects, using cursor.description appeared to get what I was looking for:

with connection.cursor() as cur:
    print(query)
    cur.execute(query)
    for item in cur.fetchall():
        row = {column.name: item[i] for i, column in enumerate(cur.description)}
        print(row)
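The same pattern works with the standard-library sqlite3 driver, except its cursor.description entries are plain 7-tuples rather than objects with a .name attribute, so the column name is description[i][0]; the table here is made up for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE cat (id INTEGER, name TEXT)")
cur.execute("INSERT INTO cat VALUES (1, 'felix')")
cur.execute("SELECT * FROM cat")

# description entries are tuples here, so index [0] for the column name
rows = [
    {column[0]: value for column, value in zip(cur.description, item)}
    for item in cur.fetchall()
]
print(rows)  # [{'id': 1, 'name': 'felix'}]
```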
Breath answered 8/3, 2020 at 22:15 Comment(0)
H
-2

My take utilizing (too many?) dictionaries:

def serialize(_query):
    # d = dictionary written to per row
    # D = dictionary d is written to each time, then reset
    # Master = dictionary of dictionaries; the id key (int, unique from
    #          database) from D is used as the key for the dictionary D
    #          entry in Master
    Master = {}
    for u in _query:
        d = u.__dict__
        D = {}
        for n in d.keys():
            if n != '_sa_instance_state':
                D[n] = d[n]
        x = d['id']
        Master[x] = D
    return Master

Running with flask (including jsonify) and flask_sqlalchemy to print outputs as JSON.

Call the function with jsonify(serialize(query)).

Works with all SQLAlchemy queries I've tried so far (running SQLite3)

Hypersensitize answered 8/6, 2016 at 15:17 Comment(0)
C
-3

if you are using Flask and just want to make a quick query:

def get_cats():
    sql = text("select * from cat")
    sql_params = {}
    result = db.session.execute(sql, sql_params)
    row_list = result.fetchall()
    data = [dict(r) for r in row_list]  # on SQLAlchemy 1.4+, use dict(r._mapping)

    response = jsonify({
        'data': [{
            'categorias': data
        }]
    })
    
    return response
Courteous answered 21/10, 2021 at 0:4 Comment(0)
