On HDP 2.3.2 with Sqoop 1.4.6, I'm trying to import tables from SQL Server 2008.
I'm able to connect to the SQL Server successfully: listing databases, tables, etc. works fine.
However, every single import run hits the following error at some point:
Error: java.lang.RuntimeException: java.lang.RuntimeException: com.microsoft.sqlserver.jdbc.SQLServerException: The TCP/IP connection to the host x.x.x.x, port 1433 has failed. Error: "connect timed out. Verify the connection properties. Make sure that an instance of SQL Server is running on the host and accepting TCP/IP connections at the port. Make sure that TCP connections to the port are not blocked by a firewall.".
To be clear, the imports do eventually succeed, but only after a couple of retries, and the error above appears on every run whether the job ultimately succeeds or fails. What could be causing this? Having to repeat the imports whenever they fail is rather cumbersome.
I've already turned off the connection timeout on the SQL Server side. The connection between the Hadoop cluster and the SQL Server does pass through our corporate firewall, but our admins tell me the firewall's timeout is 3600 seconds, and the imports fail long before getting anywhere near that mark.
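In case it's relevant to an answer: one workaround I'm considering is raising the driver's login timeout, since the Microsoft JDBC driver gives up on the initial connection fairly quickly by default. A sketch of what the modified connect string would look like (the 60 seconds is an arbitrary value I picked, not something I've validated; the full command is below):

# Sketch: give the driver more time to establish the TCP connection
sqoop import \
--connect "jdbc:sqlserver://x.x.x.x:1433;database=CEMHistorical;loginTimeout=60" \
... (rest of the import options unchanged)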
Here's an example of one of the Sqoop commands I use:
sqoop import \
--connect "jdbc:sqlserver://x.x.x.x:1433;database=CEMHistorical" \
--table StreamSummary \
--username hadoop \
--password-file hdfs:///user/sqoop/.adg.password \
--hive-import \
--hive-overwrite \
--create-hive-table \
--split-by OfferedTime \
--hive-table develop.streamsummary \
--map-column-hive Call_ID=STRING,Stream_ID=STRING,OriginalCall_ID=STRING,TransactionID=TIMESTAMP
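Since each map task opens its own JDBC connection to SQL Server, I've also wondered whether the parallel connections are what's tripping something on the network path. As an experiment (not my normal setup), a single-connection run would drop --split-by and force one mapper; the --target-dir here is just a scratch location I made up for the test:

# Sketch: import with a single mapper, i.e. a single JDBC connection
sqoop import \
--connect "jdbc:sqlserver://x.x.x.x:1433;database=CEMHistorical" \
--table StreamSummary \
--username hadoop \
--password-file hdfs:///user/sqoop/.adg.password \
-m 1 \
--target-dir /user/sqoop/streamsummary_test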
Update:
After getting in touch with our network team, it seems this is most definitely a network issue. For context, the Hadoop cluster is on a different VLAN from the SQL Server, and traffic between them passes through a number of firewalls. To test, I tried importing from a different SQL Server on the same VLAN as the Hadoop cluster, and I didn't encounter this exception at all.
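In case it helps anyone debugging something similar: because map tasks can land on any worker node, a firewall rule that covers only some of the nodes would produce exactly this kind of intermittent failure. A quick way to check raw TCP reachability to port 1433 from every node is a loop like the following (the node names are placeholders for our actual hosts):

# Probe the SQL Server port from each worker node with a 5-second timeout
for node in worker01 worker02 worker03; do
    ssh "$node" "nc -zv -w 5 x.x.x.x 1433"
done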