Cloudbreak Release Notes

New features

Cloudbreak 2.9.0 introduces the following new features:

• Specifying resource group name on Azure: When creating a cluster on Azure, you can specify the name of the new resource group in which the cluster will be deployed. For more information, see Resource group name.
• Multiple existing security groups on AWS: When creating a cluster on AWS, you can select multiple existing security groups. This option is available only when an existing VPC is selected. For more information, see Create a cluster on AWS.
• EBS volume encryption on AWS: You can optionally configure encryption for EBS volumes attached to cluster instances running on EC2. Either default or customer-managed encryption keys can be used. For more information, see EBS encryption on AWS.
• Shared VPCs on GCP: When creating a cluster on Google Cloud, you can place it in an existing shared VPC. For more information, see Shared networks on GCP.
• GCP volume encryption: By default, Google Compute Engine encrypts data at rest stored on disks. You can optionally configure encryption for the encryption keys used for disk encryption, using either customer-supplied (CSEK) or customer-managed (CMEK) encryption keys. For more information, see Disk encryption on GCP.
• Workspaces: Cloudbreak introduces a new authorization model that allows resource sharing via workspaces. In addition to using their personal workspaces, users can create shared workspaces in order to share resources. For more information, see Workspaces.
• Operations audit logging: Cloudbreak records an audit trail of the actions performed by Cloudbreak users, as well as those performed by the Cloudbreak application. For more information, see Operations audit logging.
• Updating long-running clusters: Cloudbreak supports updating the base image's operating system and any third-party packages that have been installed, as well as upgrading Ambari, HDP, and HDF. For more information, see Updating OS and tools on long-running clusters and Updating Ambari and HDP/HDF on long-running clusters.
• HDP 3.1: Cloudbreak 2.9 introduces default HDP 3.1 blueprints and allows you to create your own custom HDP 3.1 blueprints. For more information, see Default cluster configurations.
• HDF 3.3: Cloudbreak 2.9 introduces default HDF 3.3 blueprints and allows you to create your own custom HDF 3.3 blueprints. For more information, see Default cluster configurations.
• Recipe parameters: Supported parameters can be specified in recipes as variables, using mustache-style templating with the "{{{ }}}" syntax. For more information, see Writing recipes and Recipe parameters.
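To illustrate the idea, the following is a minimal sketch of how a triple-brace placeholder in a recipe body could be substituted with a parameter value before the script runs. This is not Cloudbreak's actual template engine, and the parameter name general.clusterName is a hypothetical example:

```python
import re

def render_recipe(template, params):
    """Substitute {{{ key }}} placeholders with values from params.

    Mimics mustache-style triple-brace (unescaped) substitution;
    unknown keys are left intact so a misspelled parameter fails visibly.
    """
    def repl(match):
        key = match.group(1).strip()
        return str(params.get(key, match.group(0)))
    return re.sub(r"\{\{\{\s*([^}]+?)\s*\}\}\}", repl, template)

# Hypothetical recipe body using an assumed parameter name.
recipe = "#!/bin/bash\necho 'Cluster: {{{ general.clusterName }}}'\n"
print(render_recipe(recipe, {"general.clusterName": "my-cluster"}))
```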

• Shebang in Python recipes: Cloudbreak supports using a shebang line in Python scripts that are run as recipes. For more information, see Writing recipes.
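For example, a recipe written in Python rather than Bash can begin with a shebang line such as the one below. The script body and marker-file path are a hypothetical illustration, not a prescribed recipe:

```python
#!/usr/bin/env python
# Hypothetical post-install recipe: records that it ran by writing a
# marker file. The directory and file name are illustrative only.
import os
import tempfile

def write_marker(directory):
    """Write a small marker file and return its path."""
    path = os.path.join(directory, "recipe-ran.txt")
    with open(path, "w") as f:
        f.write("recipe completed\n")
    return path

if __name__ == "__main__":
    print(write_marker(tempfile.gettempdir()))
```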
• AWS GovCloud (TP): You can install Cloudbreak and create Cloudbreak-managed clusters on AWS GovCloud. This feature is in technical preview. For more information, see Deploying on AWS GovCloud.
• Azure ADLS Gen2 (TP): When creating a cluster on Azure, you can optionally configure access to ADLS Gen2. This feature is in technical preview. For more information, see Configuring access to ADLS Gen2.
• New and changed data lake blueprints (TP): Cloudbreak 2.9 includes three data lake blueprints:
  • HDP 2.6 data lake HA blueprint
  • HDP 2.6 data lake blueprint including Atlas
  • HDP 3.1 data lake blueprint

The data lake feature remains in technical preview.

Note

Hive Metastore has been removed from the HDP 3.x data lake blueprints; however, you can set up an external database so that all clusters attached to a data lake connect to the same Hive Metastore.

For more information, see Working with data lakes.