I have a working example of executing a stored procedure on SQL Server with the Scala code below in Databricks, but I'm wondering whether the same is possible with JDBC from Python. I cannot make it work. Please see the examples below:
Working code in Scala:
import java.sql.DriverManager
val username = "xxxxx"
val pass = "xxxxx"
val url = "jdbc:sqlserver://xxx.database.windows.net:1433;databaseName=xxx"
val table = "SalesLT.Temp3"
val query = s"EXEC sp_truncate_table '${table}'"
val conn = DriverManager.getConnection(url, username, pass)
val rs = conn.createStatement.execute(query)
Python code so far:
connector_type_sql_server_driver = "com.microsoft.sqlserver.jdbc.SQLServerDriver"
jdbc_hostname = "xxxx.database.windows.net"
jdbc_database = "xxxx"
jdbc_port = 1433
jdbc_url = f"jdbc:sqlserver://{jdbc_hostname}:{jdbc_port};database={jdbc_database}"
jdbc_spa_user = "xxx"
jdbc_spa_password = "xxx"
query = "EXEC sys.sp_tables"
query2 = "SELECT * FROM sys.tables"
jdbc_db = (spark.read
.format("jdbc")
.option("driver", connector_type_sql_server_driver)
.option("url", jdbc_url)
.option("query", query)
.option("user", jdbc_spa_user)
.option("password", jdbc_spa_password)
.load()
)
query2 works fine in Python, but anything starting with EXEC does not seem to work.
If it is not possible, can someone explain in detail why it can be done in Scala but not in Python on Databricks? I want to make it work in Python because the rest of my notebooks are already in Python.
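In case it helps, here is an untested sketch of what I imagined the Python equivalent would look like: going through the JVM's `java.sql.DriverManager` via py4j, just like the Scala version. The `spark` session is the one Databricks provides; the function name is mine, and the URL, credentials, and `sp_truncate_table` call are the same placeholders as above.

```python
# Untested sketch: call java.sql.DriverManager through py4j, mirroring the Scala code.
# `spark` is the SparkSession that Databricks provides in every notebook.
def exec_stored_procedure(spark, jdbc_url, user, password, statement):
    # Reach into the JVM that backs the PySpark driver
    driver_manager = spark.sparkContext._jvm.java.sql.DriverManager
    conn = driver_manager.getConnection(jdbc_url, user, password)
    try:
        stmt = conn.createStatement()
        stmt.execute(statement)  # e.g. "EXEC sp_truncate_table 'SalesLT.Temp3'"
        stmt.close()
    finally:
        conn.close()
```

Since this would use the very same JDBC driver as the Scala snippet, I would expect it to behave the same way, but I have not been able to verify it.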
Thank you.
/Baatch