public class IdentityTableReducer extends TableReducer<Writable,Mutation,Writable>
Convenience class that simply writes all values (which must be Put or Delete instances) passed to it out to the configured HBase table. This works in combination with TableOutputFormat, which actually does the writing to HBase.
Keys are passed along but ignored in TableOutputFormat. However, they can be used to control how your values will be divided up amongst the specified number of reducers.
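For example, the two job settings below use the keys to steer that routing. This is an illustrative sketch, not something IdentityTableReducer requires: the reducer count of 4 and the choice of HRegionPartitioner are assumptions.

// Illustrative sketch: the mapper's output keys decide which of these
// reducers each Put/Delete is routed to.
job.setNumReduceTasks(4);
// Optional: group row keys by the region that will receive them. This
// assumes the map output key is an ImmutableBytesWritable and that the
// output table is configured (initTableReducerJob, shown below, does that).
job.setPartitionerClass(org.apache.hadoop.hbase.mapreduce.HRegionPartitioner.class);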
You can also use the TableMapReduceUtil class to set up the two classes in one step:
TableMapReduceUtil.initTableReducerJob("table", IdentityTableReducer.class, job);
This will also set the proper TableOutputFormat, which is given the table name. The Put or Delete define the row and columns implicitly.
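Putting the pieces together, a complete driver might look like the following sketch. Only TableMapReduceUtil.initTableReducerJob and IdentityTableReducer come from the description above; the job name, the table name, the input path handling, and the mapper class MyPutEmittingMapper are illustrative assumptions (a matching mapper sketch follows the reduce() description below).

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.IdentityTableReducer;
import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;

public class LoadIntoTable {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    Job job = Job.getInstance(conf, "load-into-table");
    job.setJarByClass(LoadIntoTable.class);

    // Hypothetical mapper that emits (row key, Put) pairs; sketched below.
    job.setMapperClass(MyPutEmittingMapper.class);
    job.setMapOutputKeyClass(ImmutableBytesWritable.class);
    job.setMapOutputValueClass(Put.class);
    job.setInputFormatClass(TextInputFormat.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));

    // Wires up IdentityTableReducer and TableOutputFormat for the target
    // table in one step, as described above ("table" is a placeholder name).
    TableMapReduceUtil.initTableReducerJob("table", IdentityTableReducer.class, job);

    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}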
public void reduce(Writable key, java.lang.Iterable<Mutation> values, Context context) throws java.io.IOException, java.lang.InterruptedException

Writes each given record, consisting of the row key and the given values, to the configured org.apache.hadoop.mapreduce.OutputFormat. It emits the row key and each Put or Delete as separate pairs.
Parameters:
key - The current row key.
values - The Put or Delete list for the given row.
context - The context of the reduce.
Throws:
java.io.IOException - When writing the record fails.
java.lang.InterruptedException - When the job gets interrupted.
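To round out the picture, a mapper feeding this reducer could look like the sketch below. The input layout (one "rowkey,value" line per record), the column family "f", and the qualifier "v" are illustrative assumptions; the only requirement implied by the contract above is that each emitted value is a Put or Delete keyed by its row.

import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

// Hypothetical mapper: parses "rowkey,value" lines and emits one Put per line,
// which IdentityTableReducer then hands to TableOutputFormat unchanged.
public class MyPutEmittingMapper
    extends Mapper<LongWritable, Text, ImmutableBytesWritable, Put> {

  @Override
  protected void map(LongWritable offset, Text line, Context context)
      throws java.io.IOException, InterruptedException {
    String[] parts = line.toString().split(",", 2);
    if (parts.length < 2) {
      return; // skip malformed input lines
    }
    byte[] row = Bytes.toBytes(parts[0]);
    Put put = new Put(row);
    // Column family "f" and qualifier "v" are placeholder names.
    put.addColumn(Bytes.toBytes("f"), Bytes.toBytes("v"), Bytes.toBytes(parts[1]));
    context.write(new ImmutableBytesWritable(row), put);
  }
}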