@Generated public class ClustersAPI extends Object
A Databricks cluster is a set of computation resources and configurations on which you run data engineering, data science, and data analytics workloads, such as production ETL pipelines, streaming analytics, ad-hoc analytics, and machine learning.
Databricks maps cluster node instance types to compute units known as DBUs. See the instance type pricing page for a list of the supported instance types and their corresponding DBUs.
You run these workloads as a set of commands in a notebook or as an automated job. Databricks makes a distinction between all-purpose clusters and job clusters. You use all-purpose clusters to analyze data collaboratively using interactive notebooks. You use job clusters to run fast and robust automated jobs.
You can create an all-purpose cluster using the UI, CLI, or REST API. You can manually terminate and restart an all-purpose cluster. Multiple users can share such clusters to do collaborative interactive analysis.
IMPORTANT: Databricks retains cluster configuration information for terminated clusters for 30 days. To keep an all-purpose cluster configuration even after it has been terminated for more than 30 days, an administrator can pin a cluster to the cluster list.
| Constructor | Description |
|---|---|
| `ClustersAPI(ApiClient apiClient)` | Regular-use constructor |
| `ClustersAPI(ClustersService mock)` | Constructor for mocks |
public ClustersAPI(ApiClient apiClient)
public ClustersAPI(ClustersService mock)
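In typical use you do not construct `ClustersAPI` directly; you obtain it from a `WorkspaceClient`, which builds the underlying `ApiClient` from workspace credentials. The mock constructor exists for unit testing. A minimal sketch, assuming the `com.databricks.sdk` package layout of the Java SDK:

```java
import com.databricks.sdk.WorkspaceClient;
import com.databricks.sdk.service.compute.ClustersAPI;

public class ClustersApiSetup {
    public static void main(String[] args) {
        // Regular use: WorkspaceClient reads credentials from the environment
        // (DATABRICKS_HOST, DATABRICKS_TOKEN) or ~/.databrickscfg and wires
        // up an ApiClient internally.
        WorkspaceClient w = new WorkspaceClient();
        ClustersAPI clusters = w.clusters();

        // For unit tests, construct the API over a mocked ClustersService:
        // ClustersAPI testClusters = new ClustersAPI(mockClustersService);
    }
}
```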
public ClusterDetails waitGetClusterRunning(String clusterId) throws TimeoutException
public ClusterDetails waitGetClusterRunning(String clusterId, Duration timeout, Consumer<ClusterDetails> callback) throws TimeoutException
public ClusterDetails waitGetClusterTerminated(String clusterId) throws TimeoutException
public ClusterDetails waitGetClusterTerminated(String clusterId, Duration timeout, Consumer<ClusterDetails> callback) throws TimeoutException
public void changeOwner(ChangeClusterOwner request)
Change the owner of the cluster. You must be an admin and the cluster must be terminated to perform this operation. The service principal application ID can be supplied as an argument to `owner_username`.
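A hedged sketch of changing a cluster's owner; the cluster ID is hypothetical, and the call only succeeds for an admin against a terminated cluster:

```java
import com.databricks.sdk.WorkspaceClient;
import com.databricks.sdk.service.compute.ChangeClusterOwner;

public class ChangeOwnerExample {
    public static void main(String[] args) {
        WorkspaceClient w = new WorkspaceClient();
        // The cluster must already be terminated and the caller must be an
        // admin. A service principal application ID may be supplied instead
        // of a username.
        w.clusters().changeOwner(new ChangeClusterOwner()
            .setClusterId("1234-567890-abcde123")       // hypothetical ID
            .setOwnerUsername("new.owner@example.com"));
    }
}
```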
public Wait<ClusterDetails,CreateClusterResponse> create(String sparkVersion)
public Wait<ClusterDetails,CreateClusterResponse> create(CreateCluster request)
Creates a new Spark cluster. This method will acquire new instances from the cloud provider if necessary. This method is asynchronous; the returned ``cluster_id`` can be used to poll the cluster status. When this method returns, the cluster will be in a ``PENDING`` state. The cluster will be usable once it enters a ``RUNNING`` state. Note: Databricks may not be able to acquire some of the requested nodes, due to cloud provider limitations (account limits, spot price, etc.) or transient network issues.
If Databricks acquires at least 85% of the requested on-demand nodes, cluster creation will succeed. Otherwise the cluster will terminate with an informative error message.
Rather than authoring the cluster's JSON definition from scratch, Databricks recommends filling out the [create compute UI] and then copying the generated JSON definition from the UI.
[create compute UI]: https://docs.databricks.com/compute/configure.html
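A minimal creation sketch showing the asynchronous pattern: `create` returns a `Wait`, and calling `get` blocks until the cluster reaches `RUNNING`. The Spark version and node type below are illustrative values, not recommendations; list valid choices with `sparkVersions()` and `listNodeTypes()`.

```java
import java.time.Duration;
import com.databricks.sdk.WorkspaceClient;
import com.databricks.sdk.service.compute.ClusterDetails;
import com.databricks.sdk.service.compute.CreateCluster;

public class CreateClusterExample {
    public static void main(String[] args) throws Exception {
        WorkspaceClient w = new WorkspaceClient();
        ClusterDetails cluster = w.clusters().create(new CreateCluster()
                .setClusterName("sdk-demo")
                .setSparkVersion("13.3.x-scala2.12")   // illustrative
                .setNodeTypeId("i3.xlarge")            // illustrative
                .setAutoterminationMinutes(30L)
                .setNumWorkers(1L))
            // create() returns immediately with the cluster in PENDING;
            // get(timeout) polls until it is RUNNING or the timeout expires.
            .get(Duration.ofMinutes(20));
        System.out.println(cluster.getClusterId());
    }
}
```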
public Wait<ClusterDetails,Void> delete(String clusterId)
public Wait<ClusterDetails,Void> delete(DeleteCluster request)
Terminates the Spark cluster with the specified ID. The cluster is removed asynchronously. Once the termination has completed, the cluster will be in a `TERMINATED` state. If the cluster is already in a `TERMINATING` or `TERMINATED` state, nothing will happen.
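Since removal is asynchronous, the returned `Wait` can be used to block until the cluster reaches `TERMINATED`. A sketch with a hypothetical cluster ID:

```java
import com.databricks.sdk.WorkspaceClient;

public class DeleteClusterExample {
    public static void main(String[] args) throws Exception {
        WorkspaceClient w = new WorkspaceClient();
        // Termination is asynchronous; get() blocks until TERMINATED.
        w.clusters().delete("1234-567890-abcde123").get();
    }
}
```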
public Wait<ClusterDetails,Void> edit(String clusterId, String sparkVersion)
public Wait<ClusterDetails,Void> edit(EditCluster request)
Updates the configuration of a cluster to match the provided attributes and size. A cluster can be updated if it is in a `RUNNING` or `TERMINATED` state.
If a cluster is updated while in a `RUNNING` state, it will be restarted so that the new attributes can take effect.
If a cluster is updated while in a `TERMINATED` state, it will remain `TERMINATED`. The next time it is started using the `clusters/start` API, the new attributes will take effect. Any attempt to update a cluster in any other state will be rejected with an `INVALID_STATE` error code.
Clusters created by the Databricks Jobs service cannot be edited.
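A sketch of editing a running cluster. Note that `edit` replaces the full configuration, so supply every attribute the cluster should keep, not only the changed ones; values here are illustrative:

```java
import com.databricks.sdk.WorkspaceClient;
import com.databricks.sdk.service.compute.EditCluster;

public class EditClusterExample {
    public static void main(String[] args) throws Exception {
        WorkspaceClient w = new WorkspaceClient();
        w.clusters().edit(new EditCluster()
                .setClusterId("1234-567890-abcde123")   // hypothetical ID
                .setSparkVersion("13.3.x-scala2.12")
                .setNodeTypeId("i3.xlarge")
                .setNumWorkers(4L))
            // If the cluster was RUNNING, it restarts; get() blocks until
            // it is RUNNING again with the new attributes.
            .get();
    }
}
```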
public Iterable<ClusterEvent> events(String clusterId)
public Iterable<ClusterEvent> events(GetEvents request)
Retrieves a list of events about the activity of a cluster. This API is paginated. If there are more events to read, the response includes all the parameters necessary to request the next page of events.
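Because the method returns an `Iterable`, pagination is handled for you: the iterator fetches the next page as iteration advances. A sketch (field accessors on `ClusterEvent` are assumed from the generated model classes):

```java
import com.databricks.sdk.WorkspaceClient;
import com.databricks.sdk.service.compute.ClusterEvent;
import com.databricks.sdk.service.compute.GetEvents;

public class ClusterEventsExample {
    public static void main(String[] args) {
        WorkspaceClient w = new WorkspaceClient();
        // Iteration transparently requests further pages of events.
        for (ClusterEvent event : w.clusters().events(
                new GetEvents().setClusterId("1234-567890-abcde123"))) {
            System.out.printf("%d %s%n", event.getTimestamp(), event.getType());
        }
    }
}
```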
public ClusterDetails get(String clusterId)
public ClusterDetails get(GetClusterRequest request)
Retrieves the information for a cluster given its identifier. Clusters can be described while they are running, or up to 60 days after they are terminated.
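A one-call sketch of describing a cluster by ID and reading its state:

```java
import com.databricks.sdk.WorkspaceClient;
import com.databricks.sdk.service.compute.ClusterDetails;

public class GetClusterExample {
    public static void main(String[] args) {
        WorkspaceClient w = new WorkspaceClient();
        // Works while the cluster runs, or up to 60 days after termination.
        ClusterDetails details = w.clusters().get("1234-567890-abcde123");
        System.out.println(details.getState());
    }
}
```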
public GetClusterPermissionLevelsResponse getPermissionLevels(String clusterId)
public GetClusterPermissionLevelsResponse getPermissionLevels(GetClusterPermissionLevelsRequest request)
Gets the permission levels that a user can have on an object.
public ClusterPermissions getPermissions(String clusterId)
public ClusterPermissions getPermissions(GetClusterPermissionsRequest request)
Gets the permissions of a cluster. Clusters can inherit permissions from their root object.
public Iterable<ClusterDetails> list(ListClustersRequest request)
Return information about all pinned and active clusters, and all clusters terminated within the last 30 days. Clusters terminated prior to this period are not included.
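A sketch of enumerating visible clusters; like `events`, the returned `Iterable` pages through results on demand:

```java
import com.databricks.sdk.WorkspaceClient;
import com.databricks.sdk.service.compute.ClusterDetails;
import com.databricks.sdk.service.compute.ListClustersRequest;

public class ListClustersExample {
    public static void main(String[] args) {
        WorkspaceClient w = new WorkspaceClient();
        // Pinned clusters, active clusters, and clusters terminated within
        // the last 30 days are returned.
        for (ClusterDetails c : w.clusters().list(new ListClustersRequest())) {
            System.out.printf("%s %s %s%n",
                c.getClusterId(), c.getClusterName(), c.getState());
        }
    }
}
```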
public ListNodeTypesResponse listNodeTypes()
Returns a list of supported Spark node types. These node types can be used to launch a cluster.
public ListAvailableZonesResponse listZones()
Returns a list of availability zones where clusters can be created (for example, us-west-2a). These zones can be used to launch a cluster.
public void permanentDelete(String clusterId)
public void permanentDelete(PermanentDeleteCluster request)
Permanently deletes a Spark cluster. This cluster is terminated and resources are asynchronously removed.
In addition, users will no longer see permanently deleted clusters in the cluster list, and API users can no longer perform any action on permanently deleted clusters.
public void pin(String clusterId)
public void pin(PinCluster request)
Pinning a cluster ensures that the cluster will always be returned by the ListClusters API. Pinning a cluster that is already pinned will have no effect. This API can only be called by workspace admins.
public Wait<ClusterDetails,Void> resize(String clusterId)
public Wait<ClusterDetails,Void> resize(ResizeCluster request)
Resizes a cluster to have a desired number of workers. This will fail unless the cluster is in a `RUNNING` state.
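A sketch of resizing a running cluster to a fixed worker count; the `get()` call blocks until the resized cluster is `RUNNING` again:

```java
import com.databricks.sdk.WorkspaceClient;
import com.databricks.sdk.service.compute.ResizeCluster;

public class ResizeClusterExample {
    public static void main(String[] args) throws Exception {
        WorkspaceClient w = new WorkspaceClient();
        // Fails unless the cluster is currently RUNNING.
        w.clusters().resize(new ResizeCluster()
                .setClusterId("1234-567890-abcde123")   // hypothetical ID
                .setNumWorkers(8L))
            .get();
    }
}
```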
public Wait<ClusterDetails,Void> restart(String clusterId)
public Wait<ClusterDetails,Void> restart(RestartCluster request)
Restarts a Spark cluster with the supplied ID. If the cluster is not currently in a `RUNNING` state, nothing will happen.
public ClusterPermissions setPermissions(String clusterId)
public ClusterPermissions setPermissions(ClusterPermissionsRequest request)
Sets permissions on an object, replacing existing permissions if they exist. Deletes all direct permissions if none are specified. Objects can inherit permissions from their root object.
public GetSparkVersionsResponse sparkVersions()
Returns the list of available Spark versions. These versions can be used to launch a cluster.
public Wait<ClusterDetails,Void> start(String clusterId)
public Wait<ClusterDetails,Void> start(StartCluster request)
Starts a terminated Spark cluster with the supplied ID. This works similarly to `createCluster`, except:
- The previous cluster ID and attributes are preserved.
- The cluster starts with the last specified cluster size.
- If the previous cluster was an autoscaling cluster, the current cluster starts with the minimum number of nodes.
- If the cluster is not currently in a `TERMINATED` state, nothing will happen.
- Clusters launched to run a job cannot be started.
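A sketch of restarting a previously terminated cluster with its preserved configuration:

```java
import com.databricks.sdk.WorkspaceClient;

public class StartClusterExample {
    public static void main(String[] args) throws Exception {
        WorkspaceClient w = new WorkspaceClient();
        // The cluster keeps its previous ID and attributes; get() blocks
        // until the cluster reaches RUNNING.
        w.clusters().start("1234-567890-abcde123").get();
    }
}
```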
public void unpin(String clusterId)
public void unpin(UnpinCluster request)
Unpinning a cluster will allow the cluster to eventually be removed from the ListClusters API. Unpinning a cluster that is not pinned will have no effect. This API can only be called by workspace admins.
public Wait<ClusterDetails,Void> update(String clusterId, String updateMask)
public Wait<ClusterDetails,Void> update(UpdateCluster request)
Updates the configuration of a cluster to match a partial set of attributes and size. Denote which fields to update using the `update_mask` field in the request body. A cluster can be updated if it is in a `RUNNING` or `TERMINATED` state.

If a cluster is updated while in a `RUNNING` state, it will be restarted so that the new attributes can take effect.

If a cluster is updated while in a `TERMINATED` state, it will remain `TERMINATED`. The updated attributes will take effect the next time the cluster is started using the `clusters/start` API. Attempts to update a cluster in any other state will be rejected with an `INVALID_STATE` error code.

Clusters created by the Databricks Jobs service cannot be updated.
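Unlike `edit`, `update` is a partial update: only the fields named in `update_mask` are touched. A hedged sketch; the `UpdateClusterResource` wrapper and its setters are assumptions based on the generated model classes, so verify the exact names against your SDK version:

```java
import com.databricks.sdk.WorkspaceClient;
import com.databricks.sdk.service.compute.UpdateCluster;
import com.databricks.sdk.service.compute.UpdateClusterResource;

public class UpdateClusterExample {
    public static void main(String[] args) throws Exception {
        WorkspaceClient w = new WorkspaceClient();
        // Only num_workers is updated; all other attributes are preserved.
        w.clusters().update(new UpdateCluster()
                .setClusterId("1234-567890-abcde123")   // hypothetical ID
                .setUpdateMask("num_workers")
                .setCluster(new UpdateClusterResource().setNumWorkers(8L)))
            .get();
    }
}
```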
public ClusterPermissions updatePermissions(String clusterId)
public ClusterPermissions updatePermissions(ClusterPermissionsRequest request)
Updates the permissions on a cluster. Clusters can inherit permissions from their root object.
public ClustersService impl()
Copyright © 2025. All rights reserved.