The Optimized Row Columnar (ORC) file format provides a highly efficient way to store Hive data. It was designed to overcome limitations of the other Hive file formats. Using ORC files improves performance when Hive is reading, writing, and processing data.
Compared with the RCFile format, for example, the ORC file format has many advantages, such as light-weight indexes stored within the file, block-mode compression based on data type, and the ability to skip entire sets of rows that do not match a filter predicate.
An ORC file contains groups of row data called stripes, along with auxiliary information in a file footer. At the end of the file a postscript holds compression parameters and the size of the compressed footer.
The default stripe size is 250 MB. Large stripe sizes enable large, efficient reads from HDFS.
The file footer contains a list of stripes in the file, the number of rows per stripe, and each column's data type.
It also contains column-level aggregates: count, min, max, and sum.
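To make this tail-of-file layout concrete, here is a minimal sketch of how a reader could locate the metadata, assuming an ORC file on local disk (illustrative Python, not a real reader; the postscript and footer are Protocol Buffers messages, and their decoding is omitted here):

```python
def locate_metadata(path: str) -> bytes:
    """Read the postscript from the tail of an ORC file."""
    with open(path, "rb") as f:
        f.seek(-1, 2)                 # 2 = SEEK_END; the file's last byte...
        ps_len = f.read(1)[0]         # ...is the postscript's length
        f.seek(-(1 + ps_len), 2)      # the postscript sits just before it
        postscript = f.read(ps_len)   # holds codec and compressed footer size
    # Decoding the postscript (protobuf) yields the footer's length; the
    # footer's bytes end immediately before the postscript begins.
    return postscript
```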
This diagram illustrates the ORC file structure:
As shown in the diagram, each stripe in an ORC file holds index data, row data, and a stripe footer.
The stripe footer contains a directory of stream locations. Row data is used in table scans.
Index data includes min and max values for each column and the row positions within each column. (A bit field or bloom filter could also be included.) Row index entries provide offsets that enable seeking to the right compression block and byte within a decompressed block.
Having relatively frequent row index entries enables row skipping within a stripe for rapid reads, despite large stripe sizes. By default, rows can be skipped 10,000 at a time.
With the ability to skip large sets of rows based on filter predicates, you can sort a table on its secondary keys to achieve a big reduction in execution time. For example, if the primary partition is transaction date, the table can be sorted on state, zip code, and last name. Then looking for records in one state will skip the records of all other states.
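As a hypothetical illustration (the transactions and staging_transactions tables and their columns are invented for this example), a load that writes one date's partition in that sorted order might look like this:

```sql
INSERT OVERWRITE TABLE transactions PARTITION (tx_date = '2014-01-01')
SELECT customer_id, state, zip, last_name, amount
FROM staging_transactions
WHERE tx_date = '2014-01-01'
SORT BY state, zip, last_name;
```

SORT BY orders the rows within each output file, which is what the row-index-based skipping relies on.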
File formats are specified at the table (or partition) level. You can specify the ORC file format with HiveQL statements such as these:
CREATE TABLE ... STORED AS ORC
ALTER TABLE ... [PARTITION partition_spec] SET FILEFORMAT ORC
The parameters are all placed in the TBLPROPERTIES clause. They are:
| Key | Default | Notes |
| --- | --- | --- |
| orc.compress | ZLIB | high-level compression (one of NONE, ZLIB, SNAPPY) |
| orc.compress.size | 262,144 | number of bytes in each compression chunk |
| orc.stripe.size | 268,435,456 | number of bytes in each stripe |
| orc.row.index.stride | 10,000 | number of rows between index entries (must be >= 1,000) |
| orc.create.index | true | whether to create row indexes |
For example, creating an ORC-stored table without compression (the table and column names below are illustrative):
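```sql
CREATE TABLE test_orc (
  id BIGINT,
  state STRING,
  amount DECIMAL(10,2)
) STORED AS ORC TBLPROPERTIES ("orc.compress"="NONE");
```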
The serialization of column data in an ORC file depends on whether the data type is integer or string.
Integer columns are serialized in two streams: a bit stream recording which values are present (non-null), and the data itself. Integer data is serialized in a way that takes advantage of the common distribution of numbers: small values are stored in fewer bytes through a variable-width encoding, and repeated or regularly stepping values are run-length encoded.
The variable-width encoding is based on Google's protocol buffers and uses the high bit of each byte to indicate whether another byte follows, with the lower 7 bits carrying data. To encode negative numbers, a zigzag encoding is used, where 0, -1, 1, -2, and 2 map to 0, 1, 2, 3, and 4, respectively.
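A minimal sketch of these two building blocks (illustrative Python, not the actual ORC writer code):

```python
def zigzag(n: int) -> int:
    """Zigzag-map signed to unsigned: 0, -1, 1, -2, 2 -> 0, 1, 2, 3, 4."""
    return (n << 1) ^ (n >> 63)        # arithmetic shift; fine for 64-bit range

def varint(u: int) -> bytes:
    """Protobuf-style base-128 varint: 7 data bits per byte, high bit
    set on every byte except the last."""
    out = bytearray()
    while u > 0x7F:
        out.append((u & 0x7F) | 0x80)  # high bit set: more bytes follow
        u >>= 7
    out.append(u)                      # final byte: high bit clear
    return bytes(out)

for n in (0, -1, 1, -2, 2, 300):
    print(n, varint(zigzag(n)).hex())  # 300 -> zigzag 600 -> 'd804'
```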
Each set of numbers is encoded this way: in run-length encoding, the first byte specifies the run length and whether the values are literals or duplicates. Duplicates can step by -128 to +127. Run-length encoding uses protobuf-style variable-length integers.
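The following sketch covers only the duplicate-run case, following ORC's run-length encoding version 1 (illustrative Python; literal sequences, which store a negative count followed by the literal values, are omitted):

```python
def zigzag(n: int) -> int:             # as in the previous sketch
    return (n << 1) ^ (n >> 63)

def varint(u: int) -> bytes:           # protobuf-style base-128 varint
    out = bytearray()
    while u > 0x7F:
        out.append((u & 0x7F) | 0x80)
        u >>= 7
    out.append(u)
    return bytes(out)

def rle_run(base: int, step: int, length: int) -> bytes:
    """One duplicate run: `length` values starting at `base`, each
    stepping by `step`. The control byte stores length - 3 (a run
    spans 3 to 130 values); the step is one signed byte, -128..+127."""
    assert 3 <= length <= 130 and -128 <= step <= 127
    return bytes([length - 3, step & 0xFF]) + varint(zigzag(base))

# The ten values 100, 101, ..., 109 collapse into four bytes:
print(rle_run(base=100, step=1, length=10).hex())   # '0701c801'
```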
Serialization of string columns uses a dictionary to form unique column values. The dictionary is sorted to speed up predicate filtering and improve compression ratios. String columns are serialized in four streams: a bit stream recording which values are present (non-null), the dictionary data, the length of each dictionary entry, and the row values (indexes into the dictionary). Both the dictionary lengths and the row values are run-length-encoded streams of integers.
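A conceptual sketch of that layout (illustrative Python; a real writer would emit the length and row streams through the run-length encoder shown earlier rather than as plain lists):

```python
def dictionary_encode(column):
    """Split a string column into the four conceptual streams:
    present bits, dictionary bytes, entry lengths, and row indexes."""
    present = [v is not None for v in column]          # present bit stream
    values = [v for v in column if v is not None]
    dictionary = sorted(set(values))                   # sorted unique values
    index = {v: i for i, v in enumerate(dictionary)}
    dict_data = "".join(dictionary).encode("utf-8")    # concatenated entries
    lengths = [len(v.encode("utf-8")) for v in dictionary]
    rows = [index[v] for v in values]                  # per-row dictionary ids
    return present, dict_data, lengths, rows

col = ["OH", "CA", None, "CA", "OH", "OH"]
print(dictionary_encode(col))
# ([True, True, False, True, True, True], b'CAOH', [2, 2], [1, 0, 0, 1, 1])
```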
Streams are compressed using a codec, which is specified as a table property for all streams in that table. To optimize memory use, compression is done incrementally as each block is produced. Compressed blocks can be jumped over without first having to be decompressed for scanning. Positions in the stream are represented by a block start location and an offset into the block.
The codec can be Snappy, Zlib, or none.
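As a sketch of this chunked layout (illustrative Python using zlib as a stand-in codec; per the ORC specification each chunk carries a 3-byte little-endian header whose low bit marks a chunk stored uncompressed, though real ORC uses raw DEFLATE rather than zlib framing):

```python
import zlib

CHUNK = 262_144   # default orc.compress.size: 256 KB per compression chunk

def compress_stream(data: bytes) -> bytes:
    """Compress a stream chunk by chunk, prefixing each chunk with a
    length header so readers can skip chunks without decompressing."""
    out = bytearray()
    for i in range(0, len(data), CHUNK):
        block = data[i:i + CHUNK]
        packed = zlib.compress(block)
        if len(packed) < len(block):                # compression helped
            header = len(packed) << 1               # low bit 0: compressed
            out += header.to_bytes(3, "little") + packed
        else:                                       # keep the original bytes
            header = (len(block) << 1) | 1          # low bit 1: stored as-is
            out += header.to_bytes(3, "little") + block
    return bytes(out)
```

Because every chunk is prefixed by its length, a reader holding a (chunk offset, offset within the decompressed chunk) position can jump straight to the right chunk, decompress only that chunk, and seek within the result.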