Cloudbreak Release Notes

Known issues

Cloudbreak 2.8.0 TP includes the following known issues:

Known issues: Cloudbreak

Issue: RMP-11665
Description: Because Ambari 2.7.0.0 does not support Amazon Linux 2, it is not possible to launch HDF 3.2 clusters that use Ambari 2.7.0.0 on AWS.
Workaround: There is no workaround, but you can:
  • Use HDF 3.1 instead of HDF 3.2.
  • Use Ambari 2.7.1 (once released) with HDF 3.2.
  • Use HDF 3.2 with a different cloud provider.
Issue: BUG-110998
Description: When creating a workload cluster, the Cloud Storage page in the create cluster wizard includes an option to provide "Path to Ranger Audit Logs for Hive Property" when "Configure Storage Locations" is enabled. This option should not be available for any clusters other than data lakes.
Workaround: Click on "Do not configure".
Issue: BUG-99581
Description: The Event History in the Cloudbreak web UI displays the following message:

Manual recovery is needed for the following failed nodes: []

This message is displayed when the Ambari agent does not send a heartbeat and Cloudbreak therefore considers the host unhealthy. However, if all services are green and healthy in the Ambari web UI, then it is likely that the status displayed by Cloudbreak is incorrect.
Workaround: If all services are green and healthy in the Ambari web UI, then syncing the cluster should fix the problem.
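Syncing can be done from the cluster's page in the web UI or from the Cloudbreak CLI. A minimal sketch, assuming the cb CLI is already configured with your Cloudbreak credentials; the cluster name is a placeholder:

  # re-sync Cloudbreak's recorded state with the actual cluster state
  cb cluster sync --name <CLUSTER_NAME>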
Issue: BUG-110397
Description: The Clusters dashboard is very slow when there are more than 50 items.
Issue: BUG-110999
Description: The auto-import of HDP/HDF images on OpenStack does not work, so the images required for creating HDP or HDF clusters on OpenStack are not available by default.
Workaround: Your OpenStack admin must import these images manually by using the instructions in Import HDP and HDF images to OpenStack.
Issue: N/A
Description: The following Azure and GCP advanced cluster options are missing from the web UI: "Don't create public IP" and "Don't create firewall rules". These web UI options are available in Cloudbreak 2.7.x and will be reinstated in the next release.
Workaround: In Cloudbreak 2.8, you can only set these options by using the CLI, as in the sketch below.
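For reference, these options map to the network section of the JSON request file passed to cb cluster create with the --cli-input-json flag. A minimal sketch; the noPublicIp and noFirewallRules parameter names are assumptions based on the Cloudbreak network template and should be verified against your CLI version:

  "network": {
    "parameters": {
      "noPublicIp": true,
      "noFirewallRules": true
    }
  }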

Known issues: Ambari

The known issues described here were discovered when testing Cloudbreak with the Ambari versions that are used by default in Cloudbreak. For general Ambari known issues, refer to the Ambari release notes.

Issue: BUG-109369
Description: Hive does not start on a data lake when Kerberos is enabled.
Workaround:
  1. Modify /etc/hadoop/<Ambari-version>/0/core-site.xml and /etc/hadoop/conf.backup/core-site.xml by adding the following:
    <configuration>
      <property>
        <name>hadoop.security.authentication</name>
        <value>kerberos</value>
      </property>
    </configuration>
  2. Restart the affected services, either from the Ambari web UI or as in the sketch below.
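A minimal sketch of restarting Hive through the Ambari REST API; AMBARI_HOST, CLUSTER_NAME, and the admin:admin credentials are placeholders for your environment:

  # stop the Hive service (Ambari treats state INSTALLED as "stopped")
  curl -u admin:admin -H 'X-Requested-By: ambari' -X PUT \
    -d '{"RequestInfo":{"context":"Stop HIVE"},"Body":{"ServiceInfo":{"state":"INSTALLED"}}}' \
    http://AMBARI_HOST:8080/api/v1/clusters/CLUSTER_NAME/services/HIVE

  # start it again
  curl -u admin:admin -H 'X-Requested-By: ambari' -X PUT \
    -d '{"RequestInfo":{"context":"Start HIVE"},"Body":{"ServiceInfo":{"state":"STARTED"}}}' \
    http://AMBARI_HOST:8080/api/v1/clusters/CLUSTER_NAME/services/HIVE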

Known issues: HDP

The known issues described here were discovered when testing Cloudbreak with the HDP versions that are used by default in Cloudbreak. For general HDP known issues, refer to the HDP release notes.

There are no known issues related to HDP.

Known issues: HDF

Issue: BUG-98865
Description: Configuration parameters set in the blueprint are not applied when scaling an HDF cluster. One example that affects all NiFi users is that after an HDF cluster upscale or downscale, the nifi.web.proxy.host blueprint parameter does not get updated to include the new hosts, and as a result the NiFi UI is not reachable from these hosts.
Workaround: After scaling, manually update the nifi.web.proxy.host property in the NiFi configuration in Ambari so that it lists every host, in the following format:

HOST1-IP:PORT,HOST2-IP:PORT,HOST3-IP:PORT

A sketch of setting the property from the command line follows.
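A minimal sketch using Ambari's bundled configs.sh helper, run on the Ambari server host; the nifi-properties config type, host names, and port are assumptions to verify for your Ambari and HDF versions:

  # update nifi.web.proxy.host to list all current NiFi hosts
  /var/lib/ambari-server/resources/scripts/configs.sh set AMBARI_HOST CLUSTER_NAME \
    nifi-properties nifi.web.proxy.host "HOST1-IP:PORT,HOST2-IP:PORT,HOST3-IP:PORT"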