2. Hadoop Archive Components

  • HAR Format Data Model

    The Hadoop Archive data format has the following layout:

    foo.har/_masterindex  // stores hashes and offsets
    foo.har/_index        // stores file statuses
    foo.har/part-[1..n]   // stores actual file data

    The file data is stored in multiple part files, which are indexed so that the original separation of the data is preserved. Moreover, the part files can be accessed in parallel by MapReduce programs. The index files also record the original directory tree structure and the file statuses.
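
    Because a Hadoop Archive is itself stored as ordinary HDFS files, this layout can be inspected directly with a plain directory listing; the archive name and location below are illustrative:

    hdfs dfs -ls hdfs://namenode/foo.har
    # lists _masterindex, _index, and the part files described above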

  • HAR File System

    Most archival systems, such as tar, are tools for archiving and de-archiving. Generally, they do not integrate with the file system layer and hence are not transparent to application writers: the user must de-archive (expand) the archive before its contents can be used.

    The Hadoop Archive is integrated with the Hadoop file system interface. The HarFileSystem implements the FileSystem interface and provides access via the har:// scheme. This exposes the archived files and directory tree structures transparently to users. Files in a HAR can be accessed directly without expanding it.
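
    As a quick illustration (the archive location is hypothetical), listing an archive through the har:// scheme shows the original directory tree rather than the index and part files:

    hdfs dfs -ls har://namenode/foo.har
    # shows the archived files and directories, not _masterindex, _index, or part-*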

    For example, consider the following command, which copies an HDFS file to a local directory:

    hdfs dfs -get hdfs://namenode/foo/file-1 localdir

    Suppose a Hadoop Archive bar.har is created from the foo directory. With the HAR, the command to copy the original file becomes:

    hdfs dfs -get har://namenode/bar.har/foo/file-1 localdir

    Users only need to change the URI paths. Alternatively, users may choose to create a symbolic link (from hdfs://namenode/foo to har://namenode/bar.har/foo in the example above), and then even the URIs do not need to be changed. In either case, HarFileSystem will be invoked automatically to provide access to the files in the HAR. Because of this transparent layer, HAR is compatible with the Hadoop APIs, MapReduce, the FS shell command-line interface, and higher-level applications such as Pig, Zebra, Streaming, Pipes, and DistCp.
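
    To make the compatibility concrete, here are two illustrative commands that reuse the bar.har archive above: reading an archived file with the FS shell, and copying a directory out of the archive with DistCp (the destination path is hypothetical):

    hdfs dfs -cat har://namenode/bar.har/foo/file-1
    hadoop distcp har://namenode/bar.har/foo hdfs://namenode/foo-copy

    Both commands go through HarFileSystem without requiring any HAR-specific options.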

  • Hadoop Archiving Tool

    Hadoop Archives can be created using the Hadoop archiving tool. The archiving tool uses MapReduce to efficiently create Hadoop Archives in parallel. The tool can be invoked using the command:

    hadoop archive -archiveName name -p <parent> <src>* <dest> 

    A list of files is generated by recursively traversing the source directories, and the list is then split into map task inputs. Each map task copies a subset of the source files into a part file (about 2 GB by default, configurable) and emits the corresponding metadata. Finally, a reduce task collects the metadata and generates the index files (_masterindex and _index).
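
    For instance, the bar.har archive used in the previous section, which keeps foo as a top-level directory inside the archive, could have been created as follows (assuming foo sits directly under the root, with / as both the parent path and the destination):

    hadoop archive -archiveName bar.har -p / foo /
    # runs a MapReduce job that writes /bar.har containing the files under foo (e.g. foo/file-1)

    Note that the tool launches a MapReduce job, so a running cluster (or the local job runner) is required.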

    The Hadoop archiving tool is discussed in further detail in the next section.

