

Azure HDInsight is one of the most popular services among enterprise customers for open-source analytics on Azure. If you would like to subscribe to release notes, watch releases on this GitHub repository.

The release date here indicates the first region release date. An HDInsight release is made available to all regions over several days; if you don't see the changes below, wait for the release to become live in your region.

The Hive Warehouse Connector (HWC) on Spark v3.1.2

The Hive Warehouse Connector (HWC) allows you to take advantage of the unique features of Hive and Spark to build powerful big-data applications. Until this release, HWC was supported for Spark v2.4 only. This feature adds business value by allowing ACID transactions on Hive tables from Spark, and it is useful for customers who use both Hive and Spark in their data estate. For more information, see Apache Spark & Hive - Hive Warehouse Connector - Azure HDInsight | Microsoft Docs.

Ambari

- Scaling and provisioning improvement changes
- HDI Hive is now compatible with OSS version 3.1.2

Interactive Query 3.1 (HDI 5.0)

- The HDI Hive 3.1 version is upgraded to OSS Hive 3.1.2 and has all the fixes and features available in the open-source Hive 3.1.2 release.
- If you are creating an Interactive Query cluster, the dropdown list shows an additional version, Interactive Query 3.1 (HDI 5.0).
- If you are going to use Spark 3.1 together with Hive and require ACID support, select the Interactive Query 3.1 (HDI 5.0) version.

Hive fixes

- Skip creating scratch dirs for Tez if RPC is on
- Skip setting up the Hive scratch dir during planning
- Remove cross-query synchronization for the partition-eval
- Prevent the creation of the query routing appender if the property is set to false
- UDF: FunctionRegistry synchronizes on the .ql.udf.UDFType class
- Remove expensive logging from the LLAP cache hotpath
- Tez: SplitGenerator tries to look for plan files, which won't exist for Tez
- Implement a UDF to interpret date/timestamp using its internal representation and the Gregorian-Julian hybrid calendar
- Beeline option to show/not show the execution report
- Hive query with large size via Knox fails with "Broken pipe Write failed"
- Null pointer exception on running compaction against an MM table
- Remove glassfish.jersey and mssql-jdbc classes from the jdbc-standalone jar
- Include MultiDelimitSerDe in HiveServer2 by default
- Support parallel load for HashTables - Interfaces
- MSCK REPAIR command with partition filtering fails while dropping partitions
- NPE when inserting data with a 'distribute by' clause with dynpart sort optimization

HBase fixes

- TableSnapshotInputFormat should use ReadType.STREAM for scanning HFiles
- Add an option to disable scanMetrics in TableSnapshotInputFormat
- Fix for ArrayIndexOutOfBoundsException when the balancer is executed

Tez fixes

- TezUtils.createByteStringFromConf should use snappy instead of DeflaterOutputStream
- TezUtils.createConfFromByteString on a Configuration larger than 32 MB throws an exception
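The HWC material above describes connecting Spark to Hive's transactional (ACID) tables. As an illustrative sketch of what that wiring typically involves: the configuration keys below follow the ones the Azure HDInsight HWC documentation uses, but the exact names and values depend on your cluster and HWC version, and every hostname here is a placeholder, not taken from the release notes:

```
# Typical Spark configuration for the Hive Warehouse Connector
# (illustrative placeholders; check your cluster's actual Ambari values)
spark.sql.hive.hiveserver2.jdbc.url          jdbc:hive2://<zk-quorum>/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2-interactive
spark.datasource.hive.warehouse.metastoreUri thrift://<metastore-host>:9083
spark.datasource.hive.warehouse.load.staging.dir /tmp
spark.security.credentials.hiveserver2.enabled false
```

With configuration along these lines in place, a Spark job obtains a `HiveWarehouseSession` (via `HiveWarehouseSession.session(spark).build()` in the HWC API) and runs its queries against ACID tables through that session rather than through Spark's built-in Hive support.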
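One of the Hive changes in this release makes MultiDelimitSerDe available in HiveServer2 by default. What that serde provides is parsing of rows whose fields are separated by a multi-character delimiter, which Hive's default LazySimpleSerDe (single-character delimiters only) cannot handle. A minimal plain-Python sketch of the behavior; the sample row and the `||` delimiter are invented for illustration:

```python
# Sketch of what MultiDelimitSerDe does for Hive: split a raw text row
# on a multi-character field delimiter. The row and the "||" delimiter
# are invented sample data, not taken from the release notes.
def parse_row(line: str, field_delim: str) -> list[str]:
    """Split one text row into string fields on a multi-char delimiter."""
    return line.split(field_delim)

fields = parse_row("1||alice||engineering", "||")
print(fields)  # ['1', 'alice', 'engineering']
```

A single-character-delimiter reader would have to treat each `|` separately and would see empty fields between the doubled pipes; handling the delimiter as one token is the point of the serde.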
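Another Hive change in this release adds a UDF that interprets dates and timestamps using Hive's internal representation and the Gregorian-Julian hybrid calendar. The calendar distinction behind that fix can be shown with plain Python; this is an illustration of the calendar arithmetic using the standard Julian Day Number formulas, not Hive code:

```python
# Illustration (plain Python, not Hive code) of why the hybrid-calendar
# UDF matters: the same internal day number maps to different
# year-month-day fields under the Julian and proleptic Gregorian
# calendars, so reinterpreting stored dates can silently shift them.

def gregorian_to_jdn(year: int, month: int, day: int) -> int:
    """Julian Day Number of a (proleptic) Gregorian calendar date."""
    a = (14 - month) // 12
    y = year + 4800 - a
    m = month + 12 * a - 3
    return day + (153 * m + 2) // 5 + 365 * y + y // 4 - y // 100 + y // 400 - 32045

def julian_to_jdn(year: int, month: int, day: int) -> int:
    """Julian Day Number of a Julian calendar date."""
    a = (14 - month) // 12
    y = year + 4800 - a
    m = month + 12 * a - 3
    return day + (153 * m + 2) // 5 + 365 * y + y // 4 - 32083

# The hybrid (Gregorian-Julian) calendar switches in October 1582:
# the day after Julian 1582-10-04 is Gregorian 1582-10-15.
assert julian_to_jdn(1582, 10, 4) + 1 == gregorian_to_jdn(1582, 10, 15)

# Far from the switchover the calendars drift apart: around year 1 the
# same calendar fields are two days apart, which is the kind of shift a
# date-interpreting UDF has to account for with old dates.
drift = gregorian_to_jdn(1, 1, 1) - julian_to_jdn(1, 1, 1)
print(drift)  # 2
```

The ten-day jump in 1582 and the growing drift for earlier dates are why a date written under one calendar convention must not be read back under the other.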
