Cloudbreak 2.9.0 includes the following known issues:
Known issues: Cloudbreak
|BUG-114632||If you started your cluster on Azure with Cloudbreak 2.8.0, 2.7.2, 2.4.3, or an earlier version, the instances in any host group sized 0 or 1 nodes were not placed in an availability set and have no rack info other than 'Default-rack'.||If rack information for the affected host group is important to you, terminate the affected cluster and relaunch it with Cloudbreak 2.9.0.|
|BUG-116919||When defining network security group rules on Google Cloud, it is possible to specify an invalid port range such as "5555-3333" (high port first), which causes the cluster deployment to fail with an error.||When defining network security group rules on Google Cloud, make sure to define a valid port range.|
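Since a reversed range such as "5555-3333" is only rejected at deployment time, a simple client-side check can catch it before the template is submitted. The following is a hypothetical sketch (not part of Cloudbreak) of validating a port spec of the form "PORT" or "LOW-HIGH":

```python
def is_valid_port_range(spec: str) -> bool:
    """Validate a port spec like "22" or "3333-5555" (low port first)."""
    parts = spec.split("-")
    # Allow a single port or a two-part range; every part must be numeric.
    if len(parts) > 2 or not all(p.isdigit() for p in parts):
        return False
    ports = [int(p) for p in parts]
    # Each port must fall within the valid TCP/UDP port space.
    if any(not 0 < p <= 65535 for p in ports):
        return False
    # For a range, the low port must come first.
    return len(ports) == 1 or ports[0] <= ports[1]

print(is_valid_port_range("3333-5555"))  # True
print(is_valid_port_range("5555-3333"))  # False: reversed range
```

Running such a check against every rule in a template before calling Cloudbreak avoids a failed deployment over a typo.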
|BUG-117004||When defining network security rules during cluster creation on Azure, if the ICMP protocol is used, cluster creation fails with an error.||When defining network security rules during cluster creation on Azure, do not use the ICMP protocol.|
|BUG-117005||When defining network security rules during cluster creation on Google Cloud via the CLI, if the ICMP protocol is specified together with a port, cluster creation fails with an error. This is because no ports should be specified when the ICMP protocol is used. The UI enforces this automatically, but the CLI makes it possible to specify a port with the ICMP protocol.||When defining network security rules during cluster creation on Google Cloud via the CLI, if you would like to define a rule using the ICMP protocol, do not specify any ports.|
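As a sketch of what this looks like in a CLI cluster template, an ICMP rule should omit the ports field entirely while TCP rules keep it. The field names below are assumptions based on a typical Cloudbreak CLI security-group definition; verify them against your own generated template before use:

```json
{
  "securityRules": [
    { "subnet": "0.0.0.0/0", "protocol": "tcp", "ports": "22,9443" },
    { "subnet": "0.0.0.0/0", "protocol": "icmp" }
  ]
}
```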
|BUG-110998||When creating a cluster, the Cloud Storage page in the create cluster wizard includes an option to provide "Path to Ranger Audit Logs for Hive Property" when "Configure Storage Locations" is enabled. This option should only be available for data lakes and not for workload clusters.||Click on "Do not configure".|
|BUG-99581||The Event History in the Cloudbreak web UI displays the following message: "Manual recovery is needed for the following failed nodes:" This message is displayed when the Ambari agent doesn't send a heartbeat and Cloudbreak therefore considers the host unhealthy. However, if all services are green and healthy in the Ambari web UI, the status displayed by Cloudbreak is likely incorrect.||If all services are green and healthy in the Ambari web UI, syncing the cluster should fix the problem.|
|BUG-110999||The auto-import of HDP/HDF images on OpenStack does not work. This means that in order to start creating HDP or HDF clusters on OpenStack, your OpenStack admin must import these images manually.||Your OpenStack admin must import the images manually by using the instructions in Import HDP and HDF images to OpenStack.|
|BUG-112787||When a cluster with the same name as specified in the CLI JSON already exists, cluster creation fails with a misleading error: "ERROR: status code: 403, message: Access is denied."||To avoid this error, pass the cluster name as a CLI parameter rather than relying on the name specified in the JSON file.|
Known issues: HDP
The known issues described here were discovered when testing Cloudbreak with HDP versions that are used by default in Cloudbreak. For general HDP known issues, refer to HDP release notes published at https://docs.hortonworks.com/.
There are no known issues related to HDP.
Known issues: HDF
The known issues described here were discovered when testing Cloudbreak with HDF versions that are used by default in Cloudbreak. For general HDF known issues, refer to HDF release notes published at https://docs.hortonworks.com/.
|BUG-98865||Blueprint configuration parameters are not applied when scaling an HDF cluster.||Configuration parameters set in the blueprint are not applied when scaling an HDF cluster; one example that affects all NiFi users occurs after an HDF cluster upscale.|
Known issues: Data lake
|BUG-109369||Hive does not start on an HDP 2.6 data lake when Kerberos is enabled.||
|BUG-116913, BUG-114150||HiveServer2 does not start on an HDP 3.1 cluster attached to a data lake, and an error is printed to the Ambari logs.||