What's new in this release: Apache Hive
HDP 3.1 includes a Kafka-Hive connector, which you can use to move data between Kafka and Hive in either direction, the JdbcStorageHandler for connecting to BI tools, and a built-in UDF for generating surrogate keys. In HDP 3.1, the Hive Warehouse Connector automatically creates Hive tables based on existing Spark DataFrames when you save a DataFrame to Hive.
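As a minimal sketch of the surrogate key feature, the built-in SURROGATE_KEY UDF can be used as a column default on a transactional table (the table and column names here are illustrative):

```sql
-- Sketch: SURROGATE_KEY() as a column default on an ACID table.
CREATE TABLE customers (
  id BIGINT DEFAULT SURROGATE_KEY(),
  name STRING,
  city STRING)
STORED AS ORC
TBLPROPERTIES ('transactional'='true');

-- The key is generated automatically when the column is omitted:
INSERT INTO customers (name, city) VALUES ('Alice', 'Bergen');
```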
HDP 3.0 and later include Apache Hive 3 enhancements that can help you improve query performance and comply with regulations. The following list briefly describes a few key enhancements of HDP 3.0 and covers unsupported interfaces.
Workload management
Using workload management, you can configure who uses resources, how much each can use, and how quickly Hive responds to resource requests. Managing resources is critical to Hive LLAP (low-latency analytical processing), especially in a multitenant environment. Using workload management, you can create resource pools and allocate resources to match availability needs and prevent contention for those resources. Workload management improves parallel query execution and cluster sharing for queries running on Hive LLAP, and also improves the performance of non-LLAP queries. Workload management reduces resource starvation in large clusters. You implement workload management on the command line using the Hive Query Language.
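As a sketch of what this looks like in Hive QL, a resource plan groups queries into pools and attaches rules to them (the plan, pool, and trigger names and the numeric values below are illustrative assumptions, not defaults):

```sql
-- Sketch: a resource plan splitting LLAP capacity between BI and ETL.
CREATE RESOURCE PLAN daytime;
CREATE POOL daytime.bi  WITH ALLOC_FRACTION = 0.8, QUERY_PARALLELISM = 5;
CREATE POOL daytime.etl WITH ALLOC_FRACTION = 0.2, QUERY_PARALLELISM = 20;

-- Move long-running queries out of the BI pool.
CREATE TRIGGER daytime.downgrade
  WHEN total_runtime > 3000 DO MOVE TO etl;
ALTER TRIGGER daytime.downgrade ADD TO POOL bi;

ALTER RESOURCE PLAN daytime ENABLE ACTIVATE;
```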
Transaction processing improvements
Mature versions of ACID (Atomicity, Consistency, Isolation, and Durability) transaction processing and low-latency analytical processing (LLAP) evolve in Hive and HDP 3.0. ACID tables are enhanced to serve as the default table type in HDP 3.0, without performance or operational overhead. Using ACID table operations facilitates compliance with the right-to-be-forgotten requirement of the GDPR (General Data Protection Regulation). Stronger transactional guarantees and simpler semantics for SQL commands simplify application development and operations. You do not need to bucket ACID tables, so maintenance is easier. You can perform ACID update and delete operations directly on a Hive table.
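For example, GDPR-style erasure reduces to standard DML on a transactional table (the table name and predicates below are illustrative):

```sql
-- Sketch: right-to-be-forgotten operations on an ACID table.
DELETE FROM customers WHERE customer_id = 12345;

-- Or redact rather than remove:
UPDATE customers SET email = NULL WHERE consent_withdrawn = true;
```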
With improvements in transactional semantics come advanced optimizations, such as materialized view rewrites and automatic query result caching. With these optimizations, you can deploy new Hive application types. Because multiple queries frequently need the same intermediate rollup or joined table, you can avoid costly, repetitious query execution by precomputing and caching intermediate results as materialized views. The query optimizer automatically leverages the precomputed cache, improving performance. Materialized views increase the speed of join and aggregation queries in business intelligence (BI) and dashboard applications, for example.
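A minimal sketch of this pattern (the table and view names are illustrative): after the view below is created, the optimizer can rewrite matching aggregate queries over the base table to read from the precomputed result instead.

```sql
-- Sketch: precompute a common rollup as a materialized view.
CREATE MATERIALIZED VIEW mv_sales_by_region AS
SELECT region, SUM(amount) AS total_amount
FROM sales
GROUP BY region;

-- Queries like this can be rewritten to use the view automatically:
SELECT region, SUM(amount) FROM sales GROUP BY region;
```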
Direct, low latency Hive query of Kafka topics
With HDP 3.0, you can create a Druid table within Hive from a Kafka topic in a single command. This feature simplifies queries of Kafka data by eliminating the data processing step between delivery by Kafka and querying in Druid.
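A sketch of the single-command form, assuming the Druid storage handler shipped with Hive (the table name, columns, broker address, topic, and property names here are illustrative and should be checked against your HDP documentation):

```sql
-- Sketch: a Druid table fed continuously from a Kafka topic.
CREATE EXTERNAL TABLE clicks (
  `__time` TIMESTAMP,
  page STRING,
  user_id STRING)
STORED BY 'org.apache.hadoop.hive.druid.DruidStorageHandler'
TBLPROPERTIES (
  "kafka.bootstrap.servers" = "broker:9092",
  "kafka.topic" = "clicks",
  "druid.kafka.ingestion" = "START");
```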
Apache Superset (Technical Preview)
HDP 3 introduces a technical preview of Apache Superset, the data exploration and visualization UI platform. Superset is a way to create HDP dashboards. Using Superset, installed by default as a service in Ambari, you can connect to Hive, create visualizations of Hive data, and create custom dashboards on Hive datasets. Superset is an alternative to Hive View, which is not available in HDP 3.0.
Spark integration with Hive
You can use Hive 3 to query data from Apache Spark and Apache Kafka applications, without workarounds. The Hive Warehouse Connector supports reading and writing Hive tables from Spark.
Hive security improvements
Apache Ranger secures Hive data by default. To meet customer demands for concurrency improvements, ACID support for the GDPR (General Data Protection Regulation), security, and other features, Hive now tightly controls file system and computer memory resources. With this additional control, Hive better optimizes workloads in shared files and YARN containers. The more Hive controls the file system, the better Hive can secure data.
Query result cache
Hive filters and caches similar or identical queries. Hive does not recompute data that has not changed. Caching repetitive queries can reduce the load substantially when hundreds or thousands of users of BI tools and web services query Hive.
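The result cache is governed by configuration properties; as a sketch, it can be toggled per session (the size value below is illustrative, and property defaults should be verified for your release):

```sql
-- Sketch: controlling the query result cache from a session.
SET hive.query.results.cache.enabled=true;
-- Repeating an identical query on unchanged data can now be
-- served from the cache instead of being recomputed.
SELECT COUNT(*) FROM sales WHERE region = 'EMEA';
SELECT COUNT(*) FROM sales WHERE region = 'EMEA';
```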
Information schema database
Hive creates two databases from JDBC data sources when you add the Hive service to a cluster: information_schema and sys. All Metastore tables are mapped into your tablespace and available in sys. The information_schema data reveals the state of the system, similar to sys database data. You can query information_schema using SQL standard queries, which are portable from one DBMS to another.
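For example, standard metadata queries work against information_schema, while sys exposes the metastore tables directly (the schema filter and column names below are illustrative):

```sql
-- Sketch: portable metadata query using the SQL standard schema.
SELECT table_schema, table_name
FROM information_schema.tables
WHERE table_schema = 'default';

-- Sketch: querying a mapped metastore table in the sys database.
SELECT tbl_name, owner FROM sys.tbls LIMIT 10;
```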
Deprecated, unavailable, or unsupported interfaces
- Hive CLI (replaced by Beeline)
- HCat CLI
- SQL Standard Authorization
- MapReduce execution engine (replaced by Tez)