Executing Stored Procedure in Databricks when using Azure Apache Spark connector
The following example from the Azure team uses the Apache Spark connector for SQL Server to write data to a table.

Question: How can we execute a stored procedure in Azure Databricks when using the Apache Spark connector?

    server_name = "jdbc:sqlserver://{SERVER_ADDR}"
    database_name = "database_name"
    url = server_name + ";" + "databaseName=" + database_name + ";"

    table_name = "table_name"
    username = "username"
    password = "password123!#"  # Please specify the real password here

    # df is an existing Spark DataFrame to be written to SQL Server
    try:
        df.write \
            .format("com.microsoft.sqlserver.jdbc.spark") \
            .mode("overwrite") \
            .option("url", url) \
            .option("dbtable", table_name) \
            .option("user", username) \
            .option("password", password) \
            .save()
    except ValueError as error:
        print("Connector write failed", error)
Carlton asked on 2/5/2022 at 20:49 · Comments (5):
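One way to do this is to call the procedure through the SQL Server JDBC driver over Spark's JVM gateway. A minimal sketch, assuming the driver is available on the cluster and a hypothetical procedure name dbo.usp_refresh_table; note that spark._sc._gateway is an internal PySpark handle, so treat this as a workaround rather than a supported API:

    # Sketch: execute a stored procedure via java.sql through the py4j gateway.
    # url, username and password are the same values used for the connector
    # write above; dbo.usp_refresh_table is a hypothetical procedure name.
    driver_manager = spark._sc._gateway.jvm.java.sql.DriverManager
    con = driver_manager.getConnection(url, username, password)
    exec_statement = con.prepareCall("EXEC dbo.usp_refresh_table")
    exec_statement.execute()
    exec_statement.close()
    con.close()

Because this runs on the driver node through the same JDBC driver the connector uses, it needs no extra installation, but it bypasses Spark entirely, so the call is single-threaded and should be reserved for short procedures such as post-load cleanup.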
Does this answer your question? JDBC connection from Databricks to SQL server – Ryan
Does this answer your question? How to run stored procedure on SQL server from Spark (Databricks) JDBC python? – Jerilynjeritah
The 2nd link shows how to do that from PySpark. – Jerilynjeritah
@DavidBrowne-Microsoft David, I'm using Python and have no knowledge of Scala. I was wondering if there is a Python version of your suggested solution. – Carlton
If the ODBC driver is installed you can use pyodbc (see the sketch after these comments). But the Scala is boilerplate. – Ryan
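Following up on the last comment, a minimal pyodbc sketch, assuming the Microsoft ODBC driver ("ODBC Driver 17 for SQL Server") is installed on the cluster nodes; the server address, credentials, and the procedure name dbo.usp_refresh_table are placeholders, not values from the original post:

    import pyodbc

    # Sketch: call a stored procedure over ODBC from the Databricks driver node.
    # Substitute your own server, database, credentials and procedure name.
    conn = pyodbc.connect(
        "DRIVER={ODBC Driver 17 for SQL Server};"
        "SERVER=server_address;"
        "DATABASE=database_name;"
        "UID=username;PWD=password"
    )
    conn.autocommit = True  # let the procedure manage its own transactions
    cursor = conn.cursor()
    cursor.execute("EXEC dbo.usp_refresh_table")
    cursor.close()
    conn.close()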
