What is a good Databricks workflow

I'm using Azure Databricks for data processing, with notebooks and pipelines.

I'm not satisfied with my current workflow:

  • The notebook used in production can't be modified without breaking production. When I want to develop an update, I duplicate the notebook, change the source code until I'm satisfied, then replace the production notebook with the new one.
  • My browser is not an IDE! I can't easily go to a function definition. I have lots of notebooks; if I want to modify or even just see the documentation of a function, I have to switch to the notebook where that function is defined.
  • Is there a way to do efficient and systematic testing?
  • Git integration is very simple, but this is not my main concern.
Hestia answered 12/11, 2019 at 16:0 Comment(0)

Great question. Definitely don't modify your production code in place.

One recommended pattern is to keep separate folders in your workspace for dev-staging-prod. Do your dev work and then run tests in staging before finally promoting to production.

You can use the Databricks CLI to pull and push a notebook from one folder to another without breaking existing code. Going one step further, you can incorporate this pattern with git to sync with version control. In either case, the CLI gives you programmatic access to the workspace and that should make it easier to update code for production jobs.
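
To make that concrete, here is a minimal sketch of such a promotion step, assuming the (legacy) databricks-cli is installed and configured; the workspace paths and file names are made-up examples:

```python
# Minimal sketch: promote a notebook from a Dev folder to a Prod folder
# using the (legacy) databricks CLI. Paths are examples, not real ones.
import subprocess

def run(cmd):
    # Run a CLI command and fail loudly on a non-zero exit code.
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# 1. Export the notebook from the Dev folder to a local source file.
run(["databricks", "workspace", "export",
     "/Dev/etl_notebook", "etl_notebook.py",
     "--format", "SOURCE", "--overwrite"])

# 2. (Optionally commit etl_notebook.py to git here.)

# 3. Import the local file into the Prod folder, overwriting the old version.
run(["databricks", "workspace", "import",
     "etl_notebook.py", "/Prod/etl_notebook",
     "--language", "PYTHON", "--format", "SOURCE", "--overwrite"])
```

The same two commands can run inside a CI job, so that promotion to the production folder only ever happens from version control.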

Regarding your second point about IDEs - Databricks offers Databricks Connect, which lets you use your IDE while running commands on a cluster. Based on your pain points I think this is a great solution for you, as it will give you more visibility into the functions you have defined and so on. You can also write and run your unit tests this way.
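
For example, with the classic databricks-connect client configured against your cluster, a plain pytest file in your IDE can exercise the same code remotely; `add_revenue` and the column names below are hypothetical stand-ins for your own functions:

```python
# Minimal sketch of a unit test run from a local IDE. With databricks-connect
# configured, SparkSession.builder resolves to the configured remote cluster.
# `add_revenue` and the column names are hypothetical examples.
from pyspark.sql import SparkSession, functions as F


def add_revenue(df):
    # Example transformation under test.
    return df.withColumn("revenue", F.col("price") * F.col("quantity"))


def test_add_revenue():
    spark = SparkSession.builder.getOrCreate()
    df = spark.createDataFrame([(2.0, 3), (5.0, 1)], ["price", "quantity"])
    result = add_revenue(df).collect()
    assert [row["revenue"] for row in result] == [6.0, 5.0]
```

Run it with `pytest` from your IDE or terminal; because the transformation lives in a regular module, the same test also works against small local data during development.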

Once you have your scripts ready to go, you can always import them into the workspace as notebooks and run them as jobs. Also know that you can run .py scripts as jobs using the REST API.
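
As a rough sketch of the REST API route (host, token, DBFS path, and cluster spec below are placeholders), a one-off run of a .py script can be submitted like this:

```python
# Minimal sketch: submit a one-off run of a .py script via the Jobs
# runs/submit API. Host, token, DBFS path, and cluster spec are placeholders.
import requests

HOST = "https://<your-workspace>.azuredatabricks.net"
TOKEN = "<personal-access-token>"

payload = {
    "run_name": "nightly-etl",
    "new_cluster": {
        "spark_version": "7.3.x-scala2.12",
        "node_type_id": "Standard_DS3_v2",
        "num_workers": 2,
    },
    "spark_python_task": {
        "python_file": "dbfs:/scripts/etl.py",
        "parameters": ["--process-date", "2020-01-01"],
    },
}

resp = requests.post(
    f"{HOST}/api/2.0/jobs/runs/submit",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json=payload,
)
resp.raise_for_status()
print("Submitted run:", resp.json()["run_id"])
```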

Iffy answered 13/11, 2019 at 12:16 Comment(2)
Thank you very much, I'll definitely look into that!Hestia
You say that CLI can be used to push and pull - I'm trying to figure out how to do this, would you mind please answering https://mcmap.net/q/1317143/-execute-git-pull-on-databricks-notebook-using-cli-and-or-api/15629542?Castalia

I personally prefer to package my code, and copy the *.whl package to DBFS, where I can install the tested package and import it.
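
As a rough sketch of that flow (host, token, cluster ID, and paths are placeholders), a built wheel can be pushed to DBFS and installed on a cluster through the REST API:

```python
# Minimal sketch: upload a built wheel to DBFS and install it on a cluster
# via the REST API. Host, token, cluster ID, and paths are placeholders.
import base64
import requests

HOST = "https://<your-workspace>.azuredatabricks.net"
TOKEN = "<personal-access-token>"
HEADERS = {"Authorization": f"Bearer {TOKEN}"}

# 1. Upload the wheel to DBFS. The single-call put endpoint is limited to
#    ~1 MB of content; larger wheels need the streaming endpoints or the CLI.
with open("dist/my_etl-0.3.0-py3-none-any.whl", "rb") as f:
    requests.post(
        f"{HOST}/api/2.0/dbfs/put",
        headers=HEADERS,
        json={
            "path": "/wheels/my_etl-0.3.0-py3-none-any.whl",
            "contents": base64.b64encode(f.read()).decode("utf-8"),
            "overwrite": True,
        },
    ).raise_for_status()

# 2. Install the wheel on the target cluster.
requests.post(
    f"{HOST}/api/2.0/libraries/install",
    headers=HEADERS,
    json={
        "cluster_id": "<cluster-id>",
        "libraries": [{"whl": "dbfs:/wheels/my_etl-0.3.0-py3-none-any.whl"}],
    },
).raise_for_status()
```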

Edit: To be more explicit.

The notebook used in production can't be modified without breaking production. When I want to develop an update, I duplicate the notebook, change the source code until I'm satisfied, then replace the production notebook with the new one.

This can be solved either by having separate DEV/TST/PRD environments, or by having versioned packages that can be modified in isolation. I'll clarify the latter below.

My browser is not an IDE! I can't easily go to a function definition. I have lots of notebooks; if I want to modify or even just see the documentation of a function, I have to switch to the notebook where that function is defined. Is there a way to do efficient and systematic testing?

Yes, using the versioned packages method I mentioned, in combination with databricks-connect, you can use your IDE, implement tests, and have proper git integration.
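
As a minimal sketch of what such a versioned package could look like (the `my_etl` name and layout are made up for illustration):

```python
# setup.py -- minimal sketch of a versioned package (names are examples).
# Build a wheel with: python setup.py bdist_wheel
# Then copy the artifact, e.g. my_etl-0.3.0-py3-none-any.whl, to DBFS in
# your release pipeline and install it on the cluster.
from setuptools import setup, find_packages

setup(
    name="my_etl",
    version="0.3.0",          # bump this for every release
    packages=find_packages(exclude=["tests"]),
    install_requires=[],      # pyspark is provided by the cluster / databricks-connect
    python_requires=">=3.7",
)
```

Because the wheel carries a version, production jobs can keep running a known-good release while the next version is developed and tested in isolation.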

Git integration is very simple, but this is not my main concern.

Built-in git integration is actually very poor when working in bigger teams. You can't develop in the same notebook simultaneously, as there is a flat and linear accumulation of changes that is shared with your colleagues. Besides that, you have to link and unlink repositories, which is prone to human error: notebooks end up synchronized into the wrong folders, and runs break because notebooks can't be imported. I advise you to also use my packaging solution.

The packaging solution works as follows (summarized from an external reference):

  1. On your desktop, install pyspark
  2. Download some anonymized data to work with
  3. Develop your code with small bits of data, writing unit tests
  4. When ready to test on big data, uninstall pyspark and install databricks-connect
  5. When performance and integration are sufficient, push the code to your remote repo
  6. Create a build pipeline that runs automated tests and builds the versioned package
  7. Create a release pipeline that copies the versioned package to DBFS
  8. In a "runner notebook", accept "process_date" and "data folder/filepath" as arguments, and import modules from your versioned package (see the sketch after this list)
  9. Pass the arguments to your module to run your tested code
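
The runner notebook from steps 8 and 9 could look roughly like this; the widget names and the `my_etl.jobs` module are hypothetical, and `dbutils` and `spark` are assumed to be provided by the notebook runtime:

```python
# "Runner notebook" sketch: read job arguments via widgets and delegate to
# the tested, versioned package. Widget names and the my_etl module are examples.
from my_etl.jobs import run_daily_load  # hypothetical entry point in your package

dbutils.widgets.text("process_date", "")
dbutils.widgets.text("data_path", "")

process_date = dbutils.widgets.get("process_date")
data_path = dbutils.widgets.get("data_path")

# All real logic lives in the package, so this notebook stays a thin shell.
run_daily_load(spark, process_date=process_date, data_path=data_path)
```
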
Gilroy answered 23/4, 2020 at 21:36 Comment(1)
Can you add your guidelines here please? We try to build a body of content, not only references elsewhere, which can sometimes disappear.Grommet

The way we are doing it:

- Integrate the Dev notebooks with Azure DevOps.

- Create custom build and deployment tasks for notebook, job, package, and cluster deployments. This is sort of easy to do with the Databricks REST API (a sketch of one such task follows this list):

https://docs.databricks.com/dev-tools/api/latest/index.html

- Create a release pipeline for Test, Staging, and Production deployments: deploy to Test and test, deploy to Staging and test, then deploy to Production.
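
As a rough sketch of one such deployment task (host, token, and paths are placeholders), a notebook can be pushed to the workspace with the Workspace API:

```python
# Minimal sketch of a deployment step: upload a notebook source file to the
# workspace via the REST API. Host, token, and paths are placeholders.
import base64
import requests

HOST = "https://<your-workspace>.azuredatabricks.net"
TOKEN = "<service-principal-or-pat-token>"

with open("notebooks/etl_notebook.py", "rb") as f:
    content = base64.b64encode(f.read()).decode("utf-8")

resp = requests.post(
    f"{HOST}/api/2.0/workspace/import",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={
        "path": "/Production/etl_notebook",
        "language": "PYTHON",
        "format": "SOURCE",
        "content": content,
        "overwrite": True,
    },
)
resp.raise_for_status()
```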

Hope this can help.

Ardolino answered 25/4, 2020 at 11:29 Comment(0)
