Sharing data between SQL databases

I'm trying to solve a problem that, for once, I didn't create.

I work in an environment with many web apps backed by different databases on different servers.

Each database is rather unique in its design and application, but there still remains common data in each that I would like to abstract out. Each database, for instance, has a vendors table, a users table, etc.

I would like to abstract out this common data into a single database but still let the other databases join on these tables, even have keys to enforce constraints, etc. I'm in an MS SQL Server environment.

What are the options available? The way I see it, I have the following options:

  • Linked servers
  • Read-only logins to give access to views

Is there anything else to consider?

Iso answered 4/5, 2013 at 4:33 Comment(8)
In what manner are you planning on using this information? You could possibly create a database that holds the common information from the other databases. Use a MySQL ETL tool to move the data from the app DB locations to one centralized database. I think Pentaho Kettle should allow you to do this with very little impact on your app databases. – Roxieroxine
Sorry, I misread the tag; you aren't using MySQL. That makes things much easier. – Roxieroxine
How familiar are you with using SSIS? – Roxieroxine
Not at all familiar with SSIS. – Iso
Alright, once I get off work I can post some beginner tutorials on how to make this work using Integration Services, in a way that will make what you are doing flexible and easy to develop/maintain. – Roxieroxine
App A's DB has vendor "Bob's tire barn" in Walla Walla, Washington. App B's DB has vendor "Bob's used tire barn LLC", also in Walla Walla, Washington. What goes in the database with the "common" stuff? If both, that doesn't seem very common. If you choose one, which one, and how would a solution make that determination? Also, if we choose one, say the used tire barn, does that choice then necessitate a cascading update in the other databases? – Faintheart
Why do you want to do that? Does "user" mean exactly the same thing in every one of those databases? What about "vendor"? Have you built an ad hoc multi-tenant system by accident? – Copybook
Does this common data always stay the same in both places? If not, how do you determine which is correct? – Roxieroxine

There are many ways you could tackle this problem. I would strongly recommend solution 1, 2, or 3, depending upon your business needs:

  1. Transactional Replication: If the common database is the record of account and you want to provide read-only versions of the data to separate applications, then you can replicate the core tables, possibly even just the core columns of tables, to each separate server (a minimal setup sketch appears after this list). One upside of this approach is that you can replicate to as many subscriber databases as you want. This also means you can customize which tables and fields are available to the subscribers based on their needs: if one application needs user tables and not vendor tables, you subscribe it to the user tables only, and if another needs only vendor tables, you subscribe it to the vendor tables only. Another upside is that replication keeps itself synched, and you can always reinitialize a subscription if a problem comes up.

    I have used transactional replication to push out over 100 tables from a data warehouse to separate downstream applications that needed access to aggregated data from multiple systems. Since our data warehouse was updated on an hourly schedule from mirrored and log-shipped data sources, the production applications had data from numerous systems within a sliding window of 20 to 80 minutes each hour.

    Peer-to-peer transactional replication, as a publication type, may be better suited to the use case you described. It can be really useful if you want to roll out schema or replication changes node by node; standard transactional replication has some limitations in this area.

    Snapshot replication publications have more latency than transactional publications, but you may want to consider them if some degree of latency is acceptable.

    Although you mentioned you are a Microsoft SQL Server shop, keep in mind that other RDBMSs have similar technologies. Since you are talking about MS SQL Server specifically, note that transactional replication also allows you to replicate to Oracle databases, so if you have a few of those in your organization, this solution can still work.

    A downside to using transactional replication is that if your central server goes down, you may begin experiencing latency with data in downstream copies of the replicated objects. If the replicated objects (articles) are really big and you need to reinitialize a table, that can take a really long time, too.

  2. Mirrors: If you want to make the database accessible in near real time on downstream servers, you could set up as many as two asynchronous mirrors. I've integrated data with a CRM application in this manner: all reads came from joins to the mirror, and all writes were pushed to a message queue, which then applied the changes to the central production server. The downside of this approach is that you can't create more than two asynchronous mirrors. You don't want to use synchronous mirrors for this purpose unless you plan to use the mirrors for disaster recovery, too.

  3. Messaging Systems: If you expect to have numerous separate applications that need data from a single central database, then you may want to consider enterprise messaging systems like IBM WebSphere, Microsoft BizTalk, Vitria, or TIBCO. These products are built specifically to address this problem. They tend to be expensive and cumbersome to implement and maintain, but they can scale up if you have globally distributed systems or dozens of separate applications that all need to share data to some degree.

  4. Linked Servers: It sounds like you already thought of this one. You could expose the data via linked servers, but I do not believe this is a good solution. If you really want to go this route, then consider setting up an asynchronous mirror from the central database to another server and then setting up linked server connections to the mirror. This will at least mitigate the risk that a query from the web applications will cause blocking or performance problems on your central production database.

    IMO, linked servers tend to be a dangerous method for sharing data between applications. This approach still treats the data as a second-class citizen in your database, and it leads to some pretty bad coding habits, particularly since your developers may be working on different servers in different languages with different connection methods. You don't know if someone is going to write a truly heinous query against your core data. If you set a standard that requires pushing a full copy of the shared data down to the non-core server, then you don't have to worry about whether a developer writes bad code, at least not from the perspective that their poor code could jeopardize the performance of other well-written systems.

    There are many, many resources out there that explain why using linked servers can be bad in this context. A non-exhaustive list of reasons includes: (a) the account used for the linked server must have DBCC SHOW_STATISTICS permissions or the queries will not be able to make use of existing statistics, (b) query hints can't be used unless the query is submitted via OPENQUERY, (c) parameters can't be passed when using OPENQUERY, (d) the local server doesn't have sufficient statistics about the linked server and consequently creates pretty terrible query plans, (e) network connectivity issues can cause failures, (f) any one of these five performance issues, and (g) the dreaded SSPI context error when trying to authenticate Windows Active Directory credentials in a double-hop scenario. Linked servers can be useful for some specific scenarios, but building access to a central database around this feature, although technically possible, is not recommended.

  5. Bulk ETL Process: If a high degree of latency is acceptable for the web applications, then you could write bulk ETL processes with SSIS (lots of good links in this StackOverflow question) executed by SQL Server Agent jobs to move data between servers (a bare-bones T-SQL sketch appears after this list). There are also other ETL tools like Informatica, Pentaho, etc., so use what works best for you.

    This is not a good solution if you need a low degree of latency. I have used this solution when syncing to a 3rd-party hosted CRM solution for fields that could tolerate high latency. For fields that could not tolerate high latency (basic account creation data), we relied on creating duplicate records in the CRM through web service calls at the point of account generation.

  6. Nightly Backup and Restores: If your data can tolerate high degrees of latency (up to a day) and periods of unavailability, then you could back up and restore the database across environments. This is not a good solution for web applications that need 100% uptime. The idea is that you take a baseline backup, restore it under a separate name, and then rename the original database and the new one as soon as the new one is ready for use (a rename-swap sketch appears after this list). I've seen this done for some internal website applications, but I do not generally recommend this approach; it's better suited to a lower development environment than a production environment.

  7. Log Shipping Secondaries: You could set up log shipping between the primary and any number of secondaries. This is similar to the nightly backup and restore process, except that you can update the database more frequently. In one instance this solution was used to expose data from one of our major core systems to downstream users by switching between two log shipping recipients: another server pointed at the two databases and switched between them whenever a new one became available. I really hate this solution, but the one time I saw it implemented it did meet the needs of the business.
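
To make option 1 concrete, here is a minimal sketch of creating a transactional publication in T-SQL. Every name in it (CentralDB, CentralCorePub, Vendors, AppServer1, AppDb1) is hypothetical, it assumes the distributor is already configured, and it omits the snapshot agent, security settings, and column filtering via sp_articlecolumn:

-- Run on the publisher; enables the database for transactional publishing.
EXEC sp_replicationdboption
    @dbname = N'CentralDB',
    @optname = N'publish',
    @value = N'true';

EXEC sp_addpublication
    @publication = N'CentralCorePub',
    @status = N'active';

-- Publish only the shared tables each subscriber actually needs.
EXEC sp_addarticle
    @publication = N'CentralCorePub',
    @article = N'Vendors',
    @source_object = N'Vendors';

-- One of these per subscriber database.
EXEC sp_addsubscription
    @publication = N'CentralCorePub',
    @subscriber = N'AppServer1',
    @destination_db = N'AppDb1';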
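
For option 5, the bulk ETL step doesn't have to be SSIS for a small number of tables; a single T-SQL MERGE run by a SQL Server Agent job can do the same work. This is only a sketch with hypothetical names (a CentralServer linked server, a CentralDB.dbo.Vendors source, and a dbo.LocalVendors target):

-- Refresh the local copy of the shared vendor data; latency is
-- whatever the Agent job schedule implies.
MERGE dbo.LocalVendors AS target
USING (
    -- Pull the current shared rows across the linked server.
    SELECT VendorId, VendorName
    FROM CentralServer.CentralDB.dbo.Vendors
) AS source
ON target.VendorId = source.VendorId
WHEN MATCHED AND target.VendorName <> source.VendorName THEN
    UPDATE SET VendorName = source.VendorName
WHEN NOT MATCHED BY TARGET THEN
    INSERT (VendorId, VendorName)
    VALUES (source.VendorId, source.VendorName)
WHEN NOT MATCHED BY SOURCE THEN
    DELETE; -- remove local rows that were deleted centrally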
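
And for option 6, the rename swap might look roughly like this (database, logical file, and path names are all hypothetical, and readers are briefly disconnected during the swap):

-- Restore the fresh nightly copy under a staging name.
RESTORE DATABASE CentralDB_Staging
FROM DISK = N'\\backupshare\CentralDB.bak'
WITH MOVE N'CentralDB'     TO N'D:\Data\CentralDB_Staging.mdf',
     MOVE N'CentralDB_log' TO N'E:\Logs\CentralDB_Staging.ldf';

-- Swap the staging copy in for the copy the applications read.
ALTER DATABASE CentralDB_Reporting SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
ALTER DATABASE CentralDB_Reporting MODIFY NAME = CentralDB_Old;
ALTER DATABASE CentralDB_Staging MODIFY NAME = CentralDB_Reporting;
DROP DATABASE CentralDB_Old;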

Fieldsman answered 6/5, 2013 at 19:20 Comment(1)
Great piece on all of them. I'd go a little bit further about why linked servers are nasty, though: https://mcmap.net/q/193329/-sql-server-linked-server-example-query – Beverlybevers

You could also consider using built-in SQL Server replication between the common data store and the app DBs. In my experience it's well suited to two-way data transfer, and there's an instance of the tables in each DB, enabling the use of foreign keys (I don't think FKs are possible via a linked server).
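
As a hypothetical illustration of that last point: once replication maintains a local copy of, say, dbo.Vendors in each app database, an ordinary foreign key can reference it, which is not possible against a linked server:

-- dbo.Vendors here is the locally replicated copy of the shared table.
CREATE TABLE dbo.PurchaseOrders (
    PurchaseOrderId int IDENTITY(1,1) PRIMARY KEY,
    VendorId        int NOT NULL
        REFERENCES dbo.Vendors (VendorId) -- a real FK, enforced locally
);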

Stalwart answered 4/5, 2013 at 6:09 Comment(0)

There may be other options, but I think you are on the right track for the best option with a combination of linked servers and views. This could be as simple as creating a new database, adding two linked servers, setting your permissions, and then creating the necessary views.
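
As a rough sketch of those steps (the server, provider, and object names are all hypothetical, and the login mapping via sp_addlinkedsrvlogin is omitted):

-- Register the server that holds the common data.
EXEC sp_addlinkedserver
    @server = N'CentralServer',
    @srvproduct = N'',
    @provider = N'SQLNCLI',
    @datasrc = N'centralhost.example.com';
GO

-- Expose the shared table to local queries through a view.
CREATE VIEW dbo.vVendors
AS
SELECT VendorId, VendorName
FROM CentralServer.CentralDB.dbo.Vendors;
GO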

If your goals are to abstract out this common data to a single database but still let the other databases join on these tables, and even have keys to enforce constraints, then this solution should work fine.

On the downside, you can run into performance issues with linked servers, so if you foresee the database getting a lot of traffic, you might want to look into actually moving the data using the methods that Doug or mwebber suggested.

If you do go the linked server route, I would recommend reading up on OPENQUERY. There is a good article on OPENQUERY vs. 4-part identifiers here.
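
To illustrate the difference the article covers (hypothetical names again):

-- Four-part identifier: the local server may pull remote rows across
-- the wire before filtering, depending on the plan it can build.
SELECT VendorId, VendorName
FROM CentralServer.CentralDB.dbo.Vendors
WHERE VendorName LIKE 'Bob%';

-- OPENQUERY: the query text runs entirely on the remote server,
-- which often yields a better plan.
SELECT *
FROM OPENQUERY(CentralServer,
    'SELECT VendorId, VendorName
     FROM CentralDB.dbo.Vendors
     WHERE VendorName LIKE ''Bob%''');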

Lampyrid answered 6/5, 2013 at 19:08 Comment(0)

Take a look at the Microsoft Sync Framework. You will have to write a sync app, but it might give you the flexibility you need.

Oneness answered 4/5, 2013 at 7:35 Comment(0)

I think you should take a good look at replication, as many answers have stated, especially in a high-TPS environment or if you want this on many tables. However, I will offer some code showing how I achieve your stated goals in some of my systems using linked servers, synonyms, and check constraints.

"I would like to abstract out this common data to a single database but still let the other databases join on these tables, even have keys to enforce constraints, etc"

You can set up a view or synonym in your databases pointing to a common table on a linked server (or in another local DB). I prefer synonyms if the view would've just been select * from table anyway.

A table synonym will allow you to run DML on the remote item if you have permissions.

At this point, though, you can't have a foreign key to your view or synonym, but we can accomplish something similar with a check constraint.

Let's see some code:

-- Synonym pointing at the shared table on the linked server.
create synonym MyCentralTable for MyLinkedServer.MyCentralDB.dbo.MyCentralTable
go

-- Scalar function that reports whether a given key exists in the
-- central table.
create function dbo.MyLocalTableFkConstraint (
    @PK int
)
returns bit
as begin
    declare @retVal bit
    select @retVal = case when exists (
                            select null from MyCentralTable where PK = @PK
                        ) then 1 else 0 end
    return @retVal
end
go

-- The check constraint calls the function, giving FK-like behavior
-- against the remote table.
create table MyLocalTable (
    FK int check (dbo.MyLocalTableFkConstraint(FK) = 1)
)
go

-- Will fail: -1 is not in MyLinkedServer.MyCentralDB.dbo.MyCentralTable
insert into MyLocalTable select -1
-- Will succeed: referential integrity on a remote table w/o triggers
insert into MyLocalTable select PK from MyCentralTable

Of course, it's important to note that you won't get an error if you delete a referenced record in your central table.
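
One mitigation is a periodic reconciliation query (matching the hypothetical names above) that looks for local rows whose central parent has disappeared:

-- Local rows orphaned by deletes on the central table.
select l.FK
from MyLocalTable l
where not exists (
    select null from MyCentralTable c where c.PK = l.FK
)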

Threepiece answered 6/5, 2013 at 20:07 Comment(0)