SQL Server to MySQL data transfer
I am trying to transfer bulk data on a constant and continuous basis from a SQL Server database to a MySQL database. I wanted to use SQL Server replication (via SSMS), but apparently that only supports SQL Server to Oracle or IBM DB2. Currently we use SSIS to transform the data and push it to a temporary location in the MySQL database, where it is copied over. I would like the fastest way to transfer data and am comparing several methods.

I have a new way I plan to transform the data which I am sure will solve most of the time issues, but I want to make sure we do not run into time problems in the future. I have set up a linked server that uses a MySQL ODBC driver to communicate between SQL Server and MySQL. This seems VERY slow. I also have some code that uses Microsoft's ODBC driver, but it is used so little that I cannot gauge its performance. Does anyone know of lightning-fast ways to communicate between these two databases? I have been researching MySQL data providers that seem to communicate through an OLE DB layer. I'm not sure what to believe or which way to steer; any ideas?
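For reference, the linked-server path I am testing looks roughly like the sketch below. The linked server name, table, and column names are placeholders rather than our real objects; this is just to show the shape of the statement that seems so slow.

-- T-SQL on the SQL Server side; MYSQL_LINK, target_table, source_table and
-- the column list are placeholder names
INSERT INTO OPENQUERY(MYSQL_LINK, 'SELECT id, first_name FROM target_table')
SELECT id, first_name
FROM dbo.source_table;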

Coward answered 15/8, 2011 at 22:1 Comment(0)

I used the JDBC-ODBC bridge in Java to do just this in the past, but performance through ODBC is not great. I would suggest looking at something like http://jtds.sourceforge.net/, which is a pure Java driver that you can drop into a simple Groovy script like the following:

import groovy.sql.Sql

// jTDS connection to SQL Server; adjust server, database, domain and credentials
sql = Sql.newInstance( 'jdbc:jtds:sqlserver://serverName/dbName-CLASS;domain=domainName',
    'username', 'password', 'net.sourceforge.jtds.jdbc.Driver' )

// stream the source rows; for each row, write to a MySQL connection,
// or write to a file, compress, transfer and bulk load on the other side
sql.eachRow( 'select * from tableName' ) {
  println "${it.id} -- ${it.firstName} --"
}

The following performance numbers give you a feel for how it might perform: http://jtds.sourceforge.net/benchTest.html

You may find some performance advantages in dumping the data to a MySQL dump-file format and using LOAD DATA INFILE instead of writing row by row. MySQL shows significant performance improvements for large data sets if you load from infiles and do things like atomic table swaps.

We use something like the following to quickly move large data files into MySQL from one system to another; it is the fastest mechanism we have found for loading data into MySQL. For real-time row-by-row transfer, though, a simple loop in Groovy plus a table to keep track of which rows have already been moved would do.

mysql> select * from tablename into outfile 'tablename.dat';

shell> myisamchk --keys-used=0 -rq '/data/mysql/schema_name/tablename'

mysql> load data infile 'tablename.dat' into table tablename;

shell> myisamchk -rq /data/mysql/schema_name/tablename

mysql> flush tables;
mysql> exit;

shell> rm tablename.dat
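The atomic table swap mentioned above can be sketched roughly as follows: load into a shadow table, then rename it into place in a single statement (tablename, tablename_new and tablename_old are placeholders):

mysql> load data infile 'tablename.dat' into table tablename_new;
mysql> rename table tablename to tablename_old, tablename_new to tablename;
mysql> drop table tablename_old;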
Boyfriend answered 2/9, 2011 at 1:25 Comment(0)

The best way I have found to transfer SQL data (if you have the space) is to take a SQL dump from the source and then run it through a conversion tool (or a Perl script; both are readily available) to convert the dump from MSSQL to MySQL. See my answer to this question about which converter you may be interested in :) .

Thomajan answered 15/8, 2011 at 22:5 Comment(0)

We've used the ADO.NET driver for MySQL in SSIS with quite a bit of success. Basically, install the driver on the machine where Integration Services is installed, restart BIDS, and it should show up in the driver list when you create an ADO.NET connection manager.

As for replication, what exactly are you trying to accomplish?

If you are monitoring changes, treat it as a type 1 slowly changing dimension (data warehouse terminology, but the same principle applies): insert new records, update changed records.
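On the MySQL side, a type 1 update can be done with a plain upsert. A minimal sketch, assuming a staging table and a unique key on id (all object names here are placeholders):

insert into customers (id, first_name, last_name)
select id, first_name, last_name from staging_customers
on duplicate key update
  first_name = values(first_name),
  last_name  = values(last_name);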

If you are only interested in new records and have no plans to update previously loaded data, try an incremental load strategy. Insert records where source.id > max(destination.id).
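For the incremental strategy, the source query in the package can filter on the highest key already loaded. A rough sketch with placeholder names (the max id would typically be read from the MySQL destination into a package variable first, then used in the SQL Server source query):

-- on the MySQL destination
select max(id) from customers;

-- on the SQL Server source, using the value captured above
select id, first_name, last_name
from dbo.customers
where id > @max_destination_id;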

After you've tested the package, schedule a job in SQL Server Agent to run the package every x minutes.

Didst answered 16/8, 2011 at 2:2 Comment(0)

You can also try the following: http://kofler.info/english/mssql2mysql/

I tried this a while ago and it worked for me, but I wouldn't recommend it to you. What is the real problem you are trying to solve? Are you unable to get a connection to the MSSQL database, for example from Linux?

Laaland answered 31/8, 2011 at 18:14 Comment(0)
