DatabricksSubmitRunOperator


Submits a Spark job run to Databricks using the api/2.1/jobs/runs/submit API endpoint.

Access Instructions

Install the Databricks provider package into your Airflow environment.

Import the module into your DAG file and instantiate it with your desired params.
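
For example, a minimal sketch (the import path is the one provided by the apache-airflow-providers-databricks package; the cluster spec and notebook path are taken from the examples further down this page):

# Install the provider first, e.g.: pip install apache-airflow-providers-databricks
from airflow.providers.databricks.operators.databricks import DatabricksSubmitRunOperator

notebook_run = DatabricksSubmitRunOperator(
    task_id='notebook_run',
    json={
        'new_cluster': {'spark_version': '10.1.x-scala2.12', 'num_workers': 2},
        'notebook_task': {'notebook_path': '/Users/airflow@example.com/PrepareData'},
    },
)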

Parameters

json: A JSON object containing API parameters which will be passed directly to the api/2.1/jobs/runs/submit endpoint. The other named parameters of this operator (e.g. spark_jar_task, notebook_task) will be merged with this json dictionary if they are provided. If there are conflicts during the merge, the named parameters take precedence and override the top-level json keys. (templated) For more information about templating, see Jinja Templating. See also: https://docs.databricks.com/dev-tools/api/latest/jobs.html#operation/JobsRunsSubmit
spark_jar_task: The main class and parameters for the JAR task. Note that the actual JAR is specified in the libraries. EITHER spark_jar_task OR notebook_task OR spark_python_task OR spark_submit_task OR pipeline_task should be specified. This field will be templated. See also: https://docs.databricks.com/dev-tools/api/2.0/jobs.html#jobssparkjartask
notebook_task: The notebook path and parameters for the notebook task. EITHER spark_jar_task OR notebook_task OR spark_python_task OR spark_submit_task OR pipeline_task should be specified. This field will be templated. See also: https://docs.databricks.com/dev-tools/api/2.0/jobs.html#jobsnotebooktask
spark_python_task: The Python file path and the parameters to run the Python file with. EITHER spark_jar_task OR notebook_task OR spark_python_task OR spark_submit_task OR pipeline_task should be specified. This field will be templated. See also: https://docs.databricks.com/dev-tools/api/2.0/jobs.html#jobssparkpythontask
spark_submit_task: Parameters needed to run a spark-submit command. EITHER spark_jar_task OR notebook_task OR spark_python_task OR spark_submit_task OR pipeline_task should be specified. This field will be templated. See also: https://docs.databricks.com/dev-tools/api/2.0/jobs.html#jobssparksubmittask
pipeline_task: Parameters needed to execute a Delta Live Tables pipeline task. The provided dictionary must contain at least the pipeline_id field. EITHER spark_jar_task OR notebook_task OR spark_python_task OR spark_submit_task OR pipeline_task should be specified. This field will be templated. See also: https://docs.databricks.com/dev-tools/api/2.0/jobs.html#jobspipelinetask
new_cluster: Specs for a new cluster on which this task will be run. EITHER new_cluster OR existing_cluster_id should be specified (except when pipeline_task is used). This field will be templated. See also: https://docs.databricks.com/dev-tools/api/2.0/jobs.html#jobsclusterspecnewcluster
existing_cluster_id: ID of an existing cluster on which to run this task. EITHER new_cluster OR existing_cluster_id should be specified (except when pipeline_task is used). This field will be templated. (A sketch combining this with other named parameters follows this list.)
libraries: Libraries which this run will use. This field will be templated. See also: https://docs.databricks.com/dev-tools/api/2.0/jobs.html#managedlibrarieslibrary
run_name: The run name used for this task. By default this will be set to the Airflow task_id. This task_id is a required parameter of the superclass BaseOperator. This field will be templated.
idempotency_token: An optional token that can be used to guarantee the idempotency of job run requests. If a run with the provided token already exists, the request does not create a new run but returns the ID of the existing run instead. This token must have at most 64 characters.
access_control_list: Optional list of dictionaries representing the Access Control List (ACL) for a given job run. Each dictionary consists of the following fields: a specific subject (user_name for users, or group_name for groups) and the permission_level for that subject. See the Jobs API documentation for more details.
wait_for_termination: Whether to wait for termination of the job run. True by default.
timeout_seconds: The timeout for this run. By default a value of 0 is used, which means the run has no timeout. This field will be templated.
databricks_conn_id: Reference to the Databricks connection. By default and in the common case this will be databricks_default. To use token based authentication, provide the key token in the connection's extra field, create the key host there as well, and leave the connection's host field empty.
polling_period_seconds: Controls the rate at which we poll for the result of this run. By default the operator will poll every 30 seconds.
databricks_retry_limit: Number of times to retry if the Databricks backend is unreachable. Its value must be greater than or equal to 1.
databricks_retry_delay: Number of seconds to wait between retries (may be a floating-point number).
databricks_retry_args: An optional dictionary with arguments passed to the tenacity.Retrying class.
do_xcom_push: Whether to push run_id and run_page_url to XCom.
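
The following sketch (not from the operator's docstring) combines several of the named parameters above; the cluster ID, Python file path, and library are placeholders:

from airflow.providers.databricks.operators.databricks import DatabricksSubmitRunOperator

spark_python_run = DatabricksSubmitRunOperator(
    task_id='spark_python_run',
    existing_cluster_id='1234-567890-abcde123',          # placeholder cluster ID
    spark_python_task={
        'python_file': 'dbfs:/FileStore/jobs/prepare_data.py',  # placeholder path
        'parameters': ['--run-date', '{{ ds }}'],        # templated field, so Airflow macros work
    },
    libraries=[{'pypi': {'package': 'simplejson'}}],     # example library spec
    polling_period_seconds=60,     # poll every 60 seconds instead of the default 30
    databricks_retry_limit=5,      # retry the API call up to 5 times if the backend is unreachable
    wait_for_termination=True,     # block until the run terminates (the default)
)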

Documentation

Submits a Spark job run to Databricks using the api/2.1/jobs/runs/submit API endpoint.

There are two ways to instantiate this operator.

In the first way, you can take the JSON payload that you typically use to call the api/2.1/jobs/runs/submit endpoint and pass it directly to our DatabricksSubmitRunOperator through the json parameter. For example:

json = {
    'new_cluster': {
        'spark_version': '2.1.0-db3-scala2.11',
        'num_workers': 2
    },
    'notebook_task': {
        'notebook_path': '/Users/airflow@example.com/PrepareData',
    },
}
notebook_run = DatabricksSubmitRunOperator(task_id='notebook_run', json=json)

Another way to accomplish the same thing is to use the named parameters of the DatabricksSubmitRunOperator directly. Note that there is exactly one named parameter for each top level parameter in the runs/submit endpoint. In this method, your code would look like this:

new_cluster = {
    'spark_version': '10.1.x-scala2.12',
    'num_workers': 2
}
notebook_task = {
    'notebook_path': '/Users/airflow@example.com/PrepareData',
}
notebook_run = DatabricksSubmitRunOperator(
    task_id='notebook_run',
    new_cluster=new_cluster,
    notebook_task=notebook_task)

In the case where both the json parameter AND the named parameters are provided, they will be merged together. If there are conflicts during the merge, the named parameters will take precedence and override the top level json keys.
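
For instance, in the short sketch below (the cluster ID is a placeholder), the run uses existing_cluster_id from the json payload but takes its notebook path from the named parameter, because named parameters win on conflict:

json = {
    'existing_cluster_id': '1234-567890-abcde123',  # placeholder cluster ID, kept from json
    'notebook_task': {'notebook_path': '/Users/airflow@example.com/Old'},
}
# notebook_task below conflicts with json['notebook_task'] and overrides it,
# so the submitted run executes /Users/airflow@example.com/PrepareData.
notebook_run = DatabricksSubmitRunOperator(
    task_id='notebook_run',
    json=json,
    notebook_task={'notebook_path': '/Users/airflow@example.com/PrepareData'},
)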

See also

For more information on how to use this operator, take a look at the guide: DatabricksSubmitRunOperator
