Cosmos DB graph vs Azure SQL Server - Performance and cost
Imagine a social network app: users follow other users, and users take photos. Photos are tagged with other users.

I'm trying to build an efficient Cosmos DB graph implementation for that app. For comparison, I also provide a SQL Server version and a benchmark.

Here is the graph: [image: graph model]

Here is a table version of it:

[image: relational schema]

Here is the Gremlin query:

g.V('c39f435b-350e-4d08-a7b6-dfcadbe4e9c5')
.out('follows').as('name')
.out('took').order(local).by('postedAt', decr).as('id', 'postedAt')
.select('id', 'name', 'postedAt').by(id).by('name').by('postedAt')
.limit(10)

Here is the equivalent SQL query (LINQ, actually):

Follows
.Where(f => f.FollowerId == "c39f435b-350e-4d08-a7b6-dfcadbe4e9c5")
.Select(f => f.Followees)
.SelectMany(f => f.Photos)
.OrderByDescending(f => f.PostedAt)
.Select(f => new { f.User.Name, f.Id, f.PostedAt})
.Take(10)
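To make the data flow of both queries concrete, here is a minimal in-memory sketch of the same feed logic in Python. The sample data and names are invented for illustration and are not part of either database:

```python
from operator import itemgetter

# Toy data (hypothetical): who follows whom, and each user's photos.
follows = {"alice": ["bob", "carol"]}
users = {
    "bob":   {"name": "Bob",   "photos": [{"id": "p1", "postedAt": 3},
                                          {"id": "p2", "postedAt": 1}]},
    "carol": {"name": "Carol", "photos": [{"id": "p3", "postedAt": 2}]},
}

def feed(follower_id, limit=10):
    # Followees -> flatten their photos -> newest first -> take `limit`.
    items = [
        {"name": u["name"], "id": p["id"], "postedAt": p["postedAt"]}
        for followee in follows[follower_id]
        for u in [users[followee]]
        for p in u["photos"]
    ]
    return sorted(items, key=itemgetter("postedAt"), reverse=True)[:limit]

print(feed("alice"))  # photos ordered newest first: p1, p3, p2
```

This mirrors the traversal shape: one hop to the followees, one hop to their photos, a global sort by `postedAt` descending, then a top-10 cut.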

That user follows 136 users who collectively took 257 photos.

Both SQL Server and Cosmos DB are in the West Europe Azure region. I'm in France. I did a bit of testing in LINQPad.

  • The Gremlin query takes over 1.2 s and consumes about 330 RU. FYI, 400 RU/s costs $20/month.
  • The SQL query runs in 70 ms. The database is 10 DTU (one S0 instance), so it costs €12.65/month.

How can I make the feed faster and cheaper with Cosmos DB?

Note: in order to get the RU charge back, I'm using Microsoft.Azure.Graph. But I can also use Gremlin.Net and get similar results.

Jewel answered 2/3, 2018 at 9:0 Comment(8)
Just a side note, but "I keep hearing and reading about NoSQL and Cosmos DB being so awesome, so I want to benefit from it!" -> that is the worst reason ever. You do not need to use a new tech just because it is hot and shiny. If it does not give real benefits, don't bother.Enschede
I agree, and maybe I didn't express myself correctly. I didn't bother for years. But I now face issues with my SQL Server and geo-replication. Users outside Europe complain about performance. So I thought it was time to have a look at Cosmos DB, as this is where it is supposed to shine.Sutherlan
I think it is going to be hard to compare the performance between those two. They are like apples to oranges. I can imagine that for billions of relations the Graph API could be faster, but you will have to set up lots of tests to find the tipping point, if any.Enschede
While I'm doing all this to compare SQL Server and Cosmos DB, the question is specifically about how to improve the graph and/or the query.Sutherlan
Can you include which Gremlin client you used in your question? Also, the order(local) op has no effect on the result, since the type being iterated is a Vertex, not a Collection or Map; see the [Order step](tinkerpop.apache.org/docs/3.3.1/reference/#order-step).Pushkin
Thanks for the link. I use either Gremlin.Net or Microsoft.Azure.Graph, which give similar results. I updated the question with the info.Sutherlan
I'll let @OliverTowers answer the question.Icebreaker
@PeterBons Surely your comment is both the wrong way round AND the answer: i.e. you are saying it's not hard to compare performance and that data stores are not like apples and oranges. Francois has come up with a test, and you are saying that with this data set Cosmos is going to cost more. To be clear, I don't know that is the case (I came here looking for answers), but if you know for a fact that this data set cannot be served better in Cosmos, then that's the answer Francois was looking for. Francois, thank you for coming up with a real-world test.Gastro

I know this question is old, but here is my tip to help you use Cosmos DB efficiently and reduce the RU/s as much as possible.

330 RU is a lot for such a query. The thing that makes you consume so many RUs here is partitioning. When you add a partition key to the database, you tell Cosmos DB to partition the data logically by the key you provide, so in your case the best partition key is the user.

Generally, to find the best partition key, start from your queries: write them all down and check which attribute or field you filter on most often to get your data back. That attribute is your partition key.

If you don't provide a partition key, Cosmos DB has to search for users across all partitions. When the data is spread over many servers (partitions) at scale, Cosmos DB will run the query on every one of them until it finds your user. If you have, for example, 6 servers, it may find the user on the first server or the second, but it may also find it on the last one, so the query costs a lot and the latency is unpredictable.
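As a back-of-the-envelope sketch of that fan-out, the cost difference can be modeled like this (the per-partition RU constant is made up for illustration, not a real Cosmos DB number):

```python
# Toy model of cross-partition fan-out: without the partition key in the
# filter, every physical partition must be probed; with it, the query is
# routed to exactly one partition.
RU_PER_PARTITION_PROBE = 3  # hypothetical fixed cost per partition scanned

def query_cost(num_partitions, partition_key_known):
    if partition_key_known:
        return RU_PER_PARTITION_PROBE               # routed to one partition
    return RU_PER_PARTITION_PROBE * num_partitions  # fan out to all of them

print(query_cost(6, partition_key_known=False))  # prints 18
print(query_cost(6, partition_key_known=True))   # prints 3
```

The point is that the fan-out cost grows linearly with the number of physical partitions, while a partition-key-routed query stays flat.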

The second thing is containers. A container is the unit of scaling in Cosmos DB: when Cosmos DB scales, it scales the container and all the data in it. So a good practice is to put entities that are queried a lot in their own container, so Cosmos DB can scale them easily using the partition key assigned to that container.
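Here is a toy sketch of that idea, assuming a hypothetical `photos` container partitioned by `/userId` (the CRC32 routing is purely illustrative; Cosmos DB's real hashing scheme differs):

```python
import zlib
from collections import defaultdict

def route(partition_key: str, num_partitions: int) -> int:
    # Deterministic hash of the partition key picks the partition.
    return zlib.crc32(partition_key.encode()) % num_partitions

# A few hypothetical documents in a "photos" container partitioned by /userId.
photos = [{"id": "p1", "userId": "u1"},
          {"id": "p2", "userId": "u1"},
          {"id": "p3", "userId": "u2"}]

partitions = defaultdict(list)
for doc in photos:
    partitions[route(doc["userId"], 4)].append(doc["id"])

# All of one user's photos land in the same partition, so a feed query
# filtered by userId touches a single partition.
print(dict(partitions))
```

Because every document with the same `userId` hashes to the same partition, queries filtered by that key never fan out, which is what keeps the RU charge low.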

Maybe this helps you reduce the RU/s in a different way. I hope this answer helps anyone facing the same problem.

Anticlastic answered 18/6, 2020 at 20:23 Comment(0)

In case you have not noticed: in addition to the relational model, SQL Server also has a graph model, and you can run graph queries directly in SQL Server.

I also see that you have not used a partition key, which makes the graph queries a lot slower.

Several months back, for a new product, we initially ran several tests like yours. When the number of records is larger (on the order of a few 100K) and the number of graph traversals is more than 3, Cosmos was cheaper than Azure SQL.

Hasty answered 26/5, 2019 at 15:33 Comment(0)
