The following example from the Azure team uses the Apache Spark connector for SQL Server to write data to a table.

Question: How can we execute a stored procedure in Azure Databricks when using the Apache Spark connector?
server_name = "jdbc:sqlserver://{SERVER_ADDR}"
database_name = "database_name"
url = server_name + ";" + "databaseName=" + database_name + ";"
table_name = "table_name"
username = "username"
password = "password123!#" # Please specify password here
try:
df.write \
.format("com.microsoft.sqlserver.jdbc.spark") \
.mode("overwrite") \
.option("url", url) \
.option("dbtable", table_name) \
.option("user", username) \
.option("password", password) \
.save()
except ValueError as error :
print("Connector write failed", error)
I have no knowledge of Scala. I was wondering if there is a Python version of your suggested solution. – Carlton
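The connector's DataFrame API only reads and writes tables, so it cannot run a stored procedure by itself. One common workaround in PySpark is to call the SQL Server JDBC driver directly through the py4j JVM gateway. Below is a minimal sketch of that approach, assuming the JDBC driver is already on the Databricks cluster classpath and reusing the `url`, `username`, and `password` variables from the example above; `dbo.my_stored_procedure` is a hypothetical procedure name.

```python
# A sketch only: the connector's DataFrame API cannot run stored
# procedures, so this goes through the JDBC driver directly via
# PySpark's py4j gateway. Assumes the SQL Server JDBC driver is on
# the cluster classpath and reuses url/username/password from above.
# `dbo.my_stored_procedure` is a hypothetical procedure name.
driver_manager = spark.sparkContext._gateway.jvm.java.sql.DriverManager
connection = driver_manager.getConnection(url, username, password)
try:
    statement = connection.prepareCall("EXEC dbo.my_stored_procedure")
    statement.execute()
    statement.close()
finally:
    # Always release the JDBC connection on the driver.
    connection.close()
```

Because this runs on the driver over a plain JDBC connection, the procedure executes exactly once rather than per partition, which is usually what you want for a stored procedure call.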