Databricks Connector

Databricks is an analytics and artificial intelligence platform for building, scaling, and governing data and AI, including generative AI and other machine learning models. This component interacts with the Databricks REST API to manage clusters, jobs, libraries, and other resources.

API Documentation

This component was built using the Databricks REST API Reference.

Connections

OAuth 2.0 Client Credentials

Authenticate using OAuth 2.0 Client Credentials

With service principal authentication, a service user is created within the account, the user is granted permissions to a workspace, and then a client ID and secret pair is generated for that service user. This component uses that key pair to authenticate with workspaces that the service account has been granted permissions to. This is the best practice for authenticating with the Databricks REST API.

Prerequisites

  • A Databricks account with admin access (required to create service principals and grant workspace permissions)

Setup Steps

  1. Create the service principal
    1. Open Databricks Users. Under the Service principals tab select Add service principal.
    2. Give the service principal any name and click Add.
  2. Grant the service principal permission to the workspace
    1. Navigate to Databricks Workspaces and select the workspace.
    2. Under the Permissions tab select Add permissions.
    3. Search for the service principal created above and grant the permission Admin.
  3. Generate a key pair for the service principal
    1. Navigate to the service principal and open the Principal information tab.
    2. Under OAuth secrets select Generate secret.
    3. Take note of the Secret (i.e. "Client Secret") and Client ID received. The client ID should be a UUID like 00000000-0000-0000-0000-000000000000. The client secret will look like dose00000000000000000000000000000000.

Configure the Connection

Create a connection of type Databricks Workspace Service Principal and enter:

  • Token URL: The OAuth 2.0 Token URL for the Databricks workspace. Replace REPLACE-ME in https://dbc-REPLACE-ME.cloud.databricks.com/oidc/v1/token to reflect the workspace URL. For account-level API access, use https://accounts.cloud.databricks.com/oidc/accounts/<my-account-id>/v1/token instead.
  • Scopes: OAuth scopes to request (defaults to all-apis)
  • Service Principal Client ID: The Client ID from the generated key pair
  • Service Principal Client Secret: The Client Secret from the generated key pair
Account-Level API Access

For account-level access (e.g., managing workspaces using the service principal), grant the service principal administrative access to the account and use the account-level token URL format: https://accounts.cloud.databricks.com/oidc/accounts/<my-account-id>/v1/token.

See Databricks OAuth machine-to-machine authentication for more information on service principal OAuth client credential authentication.

This connection uses OAuth 2.0, a common authentication mechanism for integrations. Read about how OAuth 2.0 works here.
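To illustrate what this connection does under the hood, the client credentials exchange can be sketched in Python using only the standard library. The host, client ID, and secret are placeholders, and the HTTP Basic form of credential delivery is an assumption (the token endpoint also accepts credentials as form fields):

```python
import base64
import json
import urllib.parse
import urllib.request

def token_url(workspace_host: str) -> str:
    """Build the workspace-level OAuth 2.0 token endpoint."""
    return f"https://{workspace_host}/oidc/v1/token"

def fetch_token(workspace_host, client_id, client_secret, scope="all-apis"):
    """Exchange service principal credentials for a bearer token
    using the OAuth 2.0 client credentials grant."""
    body = urllib.parse.urlencode(
        {"grant_type": "client_credentials", "scope": scope}
    ).encode()
    basic = base64.b64encode(f"{client_id}:{client_secret}".encode()).decode()
    req = urllib.request.Request(
        token_url(workspace_host),
        data=body,
        headers={
            "Authorization": f"Basic {basic}",
            "Content-Type": "application/x-www-form-urlencoded",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["access_token"]
```

The returned access token is then sent as a `Authorization: Bearer` header on REST API calls.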

| Input | Comments | Default |
| --- | --- | --- |
| Token URL | The OAuth 2.0 Token URL for the Databricks workspace. Replace REPLACE-ME in https://dbc-REPLACE-ME.cloud.databricks.com/oidc/v1/token to reflect the workspace URL. | https://dbc-REPLACE-ME.cloud.databricks.com/oidc/v1/token |
| Scopes | The OAuth scopes to request. Defaults to all-apis. | all-apis |
| Service Principal Client ID | The client ID of the Databricks Service Principal. The service principal must be granted the necessary permissions in the Databricks workspace. https://docs.databricks.com/en/dev-tools/auth/oauth-m2m.html#step-2-assign-workspace-level-permissions-to-the-databricks-service-principal | |
| Service Principal Client Secret | The client secret of the Databricks Service Principal. | |

Personal Access Token

Authenticate using a personal access token

While service principal authentication is the recommended method for authenticating with the Databricks REST API, personal access tokens (which are tied to specific users) can also be used.

Prerequisites

  • A Databricks workspace account

Setup Steps

  1. Open Databricks Workspaces and select the workspace. Open the URL for the workspace (e.g., https://dbc-00000000-aaaa.cloud.databricks.com) and log in.
  2. From the top-right, click the user icon and select Settings.
  3. Under the User > Developer tab, select Manage under Access tokens.
  4. Click the Generate New Token button. Enter a description for the token and click Generate. Omit Lifetime (days) to create a token that never expires.

The token will look similar to dapi00000000000000000000000000000000. Copy this token for use in the connection configuration.

Configure the Connection

Create a connection of type Databricks Personal Access Token and enter:

  • Host: The workspace endpoint (e.g., dbc-REPLACE-ME.cloud.databricks.com)
  • Personal Access Token: The token generated above

See Databricks personal access token authentication for more information.
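A request authenticated with the token can be sketched in Python with only the standard library; the host and token values below are placeholders:

```python
import json
import urllib.request

def api_url(host: str, path: str) -> str:
    """Build a full Databricks REST URL from the workspace host and an API path."""
    return f"https://{host}/api{path}"

def databricks_get(host: str, path: str, token: str) -> dict:
    """GET a Databricks REST endpoint, authenticating with a personal access token."""
    req = urllib.request.Request(
        api_url(host, path),
        headers={"Authorization": f"Bearer {token}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Usage (placeholder host and token):
# clusters = databricks_get("dbc-1234567890123456.cloud.databricks.com",
#                           "/2.1/clusters/list", "dapi...")
```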

| Input | Comments | Default |
| --- | --- | --- |
| Host | The hostname of the Databricks instance. Include the entire domain name. For example, dbc-1234567890123456.cloud.databricks.com | |
| Personal Access Token | From Databricks, go to User Settings > Developer > Access Tokens > Manage > Generate New Token | |

Actions

Create Execution Context

Create a Databricks execution context

| Input | Comments | Default |
| --- | --- | --- |
| Connection | The Databricks connection to use. | |
| Cluster ID | The unique identifier for the Databricks cluster. | |
| Language | The programming language to use in the execution context. | python |

Execute SQL Statement

Run a SQL query in the Databricks workspace. You can choose to wait for the result or asynchronously issue the request and return the statement ID.

| Input | Comments | Default |
| --- | --- | --- |
| Connection | The Databricks connection to use. | |
| Warehouse ID | The unique identifier for the Databricks SQL warehouse. | |
| SQL Statement | The SQL statement to execute against the Databricks SQL warehouse. | |
| SQL Parameters | The parameters to use in the SQL statement. This should represent an array of objects, and each object should have a name and value. For example, [{ "name": "my_name", "value": "the name" }] | |
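The statement and parameter inputs map onto the request body of the Databricks SQL Statement Execution API. A minimal sketch of building that body (the field names follow that API; the `wait_timeout` default here is an assumption):

```python
def sql_statement_payload(warehouse_id, statement, parameters=None, wait_timeout="30s"):
    """Build the request body for POST /api/2.0/sql/statements.

    parameters is an array of {"name": ..., "value": ...} objects that bind
    named markers (e.g. :my_name) in the statement text.
    """
    payload = {
        "warehouse_id": warehouse_id,
        "statement": statement,
        "wait_timeout": wait_timeout,  # "0s" issues the request asynchronously
    }
    if parameters:
        payload["parameters"] = parameters
    return payload

# Usage with the example parameters from the table above:
body = sql_statement_payload(
    "abc123",  # placeholder warehouse ID
    "SELECT :my_name AS greeting",
    [{"name": "my_name", "value": "the name"}],
)
```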

Get Cluster

Get a Databricks cluster by ID

| Input | Comments | Default |
| --- | --- | --- |
| Connection | The Databricks connection to use. | |
| Cluster ID | The unique identifier for the Databricks cluster. | |

Get Command Status

Gets the status of and, if available, the results from a currently executing command.

| Input | Comments | Default |
| --- | --- | --- |
| Connection | The Databricks connection to use. | |
| Cluster ID | The unique identifier for the Databricks cluster. | |
| Execution Context ID | The ID of the execution context, likely created by the Create Execution Context action. | |
| Command ID | The unique identifier of the command whose status will be retrieved. | |

Get Current User

Get the currently authenticated Databricks user or service principal.

| Input | Comments | Default |
| --- | --- | --- |
| Connection | The Databricks connection to use. | |

Get SQL Warehouse

Get a SQL Warehouse by ID.

| Input | Comments | Default |
| --- | --- | --- |
| Connection | The Databricks connection to use. | |
| Warehouse ID | The unique identifier for the Databricks SQL warehouse. | |

List Clusters

Return information about all pinned clusters, active clusters, up to 200 of the most recently terminated all-purpose clusters in the past 30 days, and up to 30 of the most recently terminated job clusters in the past 30 days.

| Input | Comments | Default |
| --- | --- | --- |
| Connection | The Databricks connection to use. | |

List Node Types

Returns a list of supported Spark node types. These node types can be used to launch a cluster.

| Input | Comments | Default |
| --- | --- | --- |
| Connection | The Databricks connection to use. | |

List SQL Warehouses

List all SQL Warehouses in the Databricks workspace

| Input | Comments | Default |
| --- | --- | --- |
| Connection | The Databricks connection to use. | |

Raw Request

Send a raw HTTP request to the Databricks API.

| Input | Comments | Default |
| --- | --- | --- |
| Connection | The Databricks connection to use. | |
| URL | The URL https://&lt;host&gt;/api is prepended to the URL you provide here. For example, if you provide "/2.0/clusters/list", the full URL will be "https://${host}/api/2.0/clusters/list". You can also provide a full URL with protocol (e.g. "https://accounts.cloud.databricks.com/api/2.0/accounts/{account_id}/scim/v2/Groups") to override the prepended base URL. | |
| Method | The HTTP method to use. | |
| Data | The HTTP body payload to send to the URL. | |
| Form Data | The Form Data to be sent as a multipart form upload. | |
| File Data | File Data to be sent as a multipart form upload. | |
| File Data File Names | File names to apply to the file data inputs. Keys must match the file data keys above. | |
| Query Parameter | A list of query parameters to send with the request. This is the portion at the end of the URL similar to ?key1=value1&key2=value2. | |
| Header | A list of headers to send with the request. | |
| Response Type | The type of data you expect in the response. You can request json, text, or binary data. | json |
| Timeout | The maximum time that a client will await a response to its request. | |
| Retry Delay (ms) | The delay in milliseconds between retries. This is used when 'Use Exponential Backoff' is disabled. | 0 |
| Retry On All Errors | If true, retries on all erroneous responses regardless of type. This is helpful when retrying after HTTP 429 or other 3xx or 4xx errors. Otherwise, only retries on HTTP 5xx and network errors. | false |
| Max Retry Count | The maximum number of retries to attempt. Specify 0 for no retries. | 0 |
| Use Exponential Backoff | Specifies whether to use a pre-defined exponential backoff strategy for retries. When enabled, 'Retry Delay (ms)' is ignored. | false |
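The URL resolution rule described above (relative paths get the API base prepended, full URLs pass through) can be sketched as a small helper; the host below is a placeholder:

```python
from urllib.parse import urlencode

def raw_request_url(host, url, query_params=None):
    """Resolve a Raw Request URL.

    Relative paths like "/2.0/clusters/list" get https://<host>/api prepended;
    URLs that already include a protocol override the prepended base URL.
    """
    full = url if url.startswith("https://") else f"https://{host}/api{url}"
    if query_params:
        full += "?" + urlencode(query_params)
    return full
```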

Restart Cluster

Restart a Databricks cluster by ID

| Input | Comments | Default |
| --- | --- | --- |
| Connection | The Databricks connection to use. | |
| Cluster ID | The unique identifier for the Databricks cluster. | |

Run Command

Run a command in a Databricks execution context

| Input | Comments | Default |
| --- | --- | --- |
| Connection | The Databricks connection to use. | |
| Cluster ID | The unique identifier for the Databricks cluster. | |
| Execution Context ID | The ID of the execution context, likely created by the Create Execution Context action. | |
| Language | The programming language to use in the execution context. | python |
| Command | The executable code to run in the execution context. | |
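Run Command is asynchronous: it returns a command ID whose progress you check with the Get Command Status action until a terminal state is reached. A generic polling helper might look like the following; the terminal status names are drawn from the Databricks command execution API and should be treated as an assumption:

```python
import time

# Statuses after which the command will not change again (assumed set).
TERMINAL_STATUSES = {"Finished", "Cancelled", "Error"}

def wait_for_command(get_status, interval=1.0, timeout=120.0):
    """Poll a zero-argument status callable (e.g. a wrapper around the
    Get Command Status action) until the command reaches a terminal state."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = get_status()
        if result.get("status") in TERMINAL_STATUSES:
            return result
        time.sleep(interval)
    raise TimeoutError("command did not reach a terminal state in time")
```

Pairing this with Create Execution Context and Run Command gives the full submit-then-poll flow for running code on a cluster.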

Start SQL Warehouse

Start a SQL Warehouse.

InputCommentsDefault
ConnectionThe Databricks connection to use.
Warehouse IDThe unique identifier for the Databricks SQL warehouse.

Start Terminated Cluster

Start a terminated Databricks cluster by ID

InputCommentsDefault
ConnectionThe Databricks connection to use.
Cluster IDThe unique identifier for the Databricks cluster.

Stop SQL Warehouse

Stop a SQL Warehouse.

InputCommentsDefault
ConnectionThe Databricks connection to use.
Warehouse IDThe unique identifier for the Databricks SQL warehouse.

Terminate Cluster

Terminate a Databricks cluster by ID

InputCommentsDefault
ConnectionThe Databricks connection to use.
Cluster IDThe unique identifier for the Databricks cluster.