@InterfaceAudience.Public @InterfaceStability.Evolving public class HFileOutputFormat2 extends org.apache.hadoop.mapreduce.lib.output.FileOutputFormat<ImmutableBytesWritable,Cell>
Writes HFiles. Calling write(null,null) will forcibly roll all HFiles being written.

Using this class as part of a MapReduce job is best done using configureIncrementalLoad(Job, Table, RegionLocator).
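As a hedged sketch of that recommended usage (the table name, input/output paths, and the mapper class `CsvToCellMapper` are hypothetical, not part of this API), a bulk-load driver might look like:

```java
// Sketch of a driver wiring HFileOutputFormat2 into a MapReduce job via
// configureIncrementalLoad(Job, Table, RegionLocator). The table name
// "my_table", both paths, and CsvToCellMapper are hypothetical.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.RegionLocator;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.mapreduce.HFileOutputFormat2;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;

public class BulkLoadDriver {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    Job job = Job.getInstance(conf, "hfile-bulk-load");
    job.setJarByClass(BulkLoadDriver.class);
    job.setMapperClass(CsvToCellMapper.class); // hypothetical mapper
    FileInputFormat.addInputPath(job, new Path(args[0]));
    HFileOutputFormat2.setOutputPath(job, new Path(args[1]));

    try (Connection conn = ConnectionFactory.createConnection(conf);
         Table table = conn.getTable(TableName.valueOf("my_table"));
         RegionLocator locator = conn.getRegionLocator(table.getName())) {
      // Sets up the reducer, total-order partitioner, and output format so
      // the job emits HFiles partitioned along the table's region boundaries.
      HFileOutputFormat2.configureIncrementalLoad(job, table, locator);
    }
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}
```

After the job completes, the generated HFiles are typically handed to a bulk-load tool rather than read directly.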
| Modifier and Type | Field and Description |
|---|---|
| static String | COMPRESSION_OVERRIDE_CONF_KEY |
| static String | DATABLOCK_ENCODING_OVERRIDE_CONF_KEY |
| static String | LOCALITY_SENSITIVE_CONF_KEY. Keep locality while generating HFiles for bulkload. |
| static String | OUTPUT_TABLE_NAME_CONF_KEY |
| static String | REMOTE_CLUSTER_ZOOKEEPER_CLIENT_PORT_CONF_KEY |
| static String | REMOTE_CLUSTER_ZOOKEEPER_QUORUM_CONF_KEY |
| static String | REMOTE_CLUSTER_ZOOKEEPER_ZNODE_PARENT_CONF_KEY |
| static String | STORAGE_POLICY_PROPERTY |
| static String | STORAGE_POLICY_PROPERTY_CF_PREFIX |
| Constructor and Description |
|---|
| HFileOutputFormat2() |
| Modifier and Type | Method and Description |
|---|---|
| static void | configureIncrementalLoad(org.apache.hadoop.mapreduce.Job job, org.apache.hadoop.hbase.client.HTable table). Deprecated. Use configureIncrementalLoad(Job, Table, RegionLocator) instead. |
| static void | configureIncrementalLoad(org.apache.hadoop.mapreduce.Job job, HTableDescriptor tableDescriptor, RegionLocator regionLocator). Configure a MapReduce Job to perform an incremental load into the given table. |
| static void | configureIncrementalLoad(org.apache.hadoop.mapreduce.Job job, Table table, RegionLocator regionLocator). Configure a MapReduce Job to perform an incremental load into the given table. |
| static void | configureIncrementalLoadMap(org.apache.hadoop.mapreduce.Job job, Table table) |
| static void | configureRemoteCluster(org.apache.hadoop.mapreduce.Job job, org.apache.hadoop.conf.Configuration clusterConf). Configure the HBase cluster key of a remote cluster so region locations can be loaded for locality-sensitive HFile generation, if enabled. |
| org.apache.hadoop.mapreduce.RecordWriter<ImmutableBytesWritable,Cell> | getRecordWriter(org.apache.hadoop.mapreduce.TaskAttemptContext context) |
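The writer returned by getRecordWriter expects ImmutableBytesWritable row keys and Cell values, per the class signature above. A hedged sketch of a mapper feeding such pairs (the CSV input shape, column family, and qualifier are hypothetical):

```java
// Hypothetical mapper producing <ImmutableBytesWritable, Cell> pairs for
// HFileOutputFormat2 from input lines of the form "rowkey,value".
import java.io.IOException;
import org.apache.hadoop.hbase.Cell;
import org.apache.hadoop.hbase.KeyValue;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class CsvToCellMapper
    extends Mapper<LongWritable, Text, ImmutableBytesWritable, Cell> {
  private static final byte[] CF = Bytes.toBytes("cf");      // hypothetical family
  private static final byte[] QUAL = Bytes.toBytes("value"); // hypothetical qualifier

  @Override
  protected void map(LongWritable offset, Text line, Context ctx)
      throws IOException, InterruptedException {
    // Split into at most two parts: row key and the remainder as the value.
    String[] parts = line.toString().split(",", 2);
    byte[] row = Bytes.toBytes(parts[0]);
    // KeyValue implements Cell; the timestamp is left unspecified here.
    KeyValue kv = new KeyValue(row, CF, QUAL, Bytes.toBytes(parts[1]));
    ctx.write(new ImmutableBytesWritable(row), kv);
  }
}
```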
Methods inherited from class org.apache.hadoop.mapreduce.lib.output.FileOutputFormat: checkOutputSpecs, getCompressOutput, getDefaultWorkFile, getOutputCommitter, getOutputCompressorClass, getOutputName, getOutputPath, getPathForWorkFile, getUniqueFile, getWorkOutputPath, setCompressOutput, setOutputCompressorClass, setOutputName, setOutputPath

public static final String DATABLOCK_ENCODING_OVERRIDE_CONF_KEY
public static final String COMPRESSION_OVERRIDE_CONF_KEY
public static final String LOCALITY_SENSITIVE_CONF_KEY
public static final String OUTPUT_TABLE_NAME_CONF_KEY
public static final String REMOTE_CLUSTER_ZOOKEEPER_QUORUM_CONF_KEY
public static final String REMOTE_CLUSTER_ZOOKEEPER_CLIENT_PORT_CONF_KEY
public static final String REMOTE_CLUSTER_ZOOKEEPER_ZNODE_PARENT_CONF_KEY
public static final String STORAGE_POLICY_PROPERTY
public static final String STORAGE_POLICY_PROPERTY_CF_PREFIX
public org.apache.hadoop.mapreduce.RecordWriter<ImmutableBytesWritable,Cell> getRecordWriter(org.apache.hadoop.mapreduce.TaskAttemptContext context) throws IOException, InterruptedException
Overrides: getRecordWriter in class org.apache.hadoop.mapreduce.lib.output.FileOutputFormat<ImmutableBytesWritable,Cell>
Throws: IOException, InterruptedException

@Deprecated
public static void configureIncrementalLoad(org.apache.hadoop.mapreduce.Job job, org.apache.hadoop.hbase.client.HTable table) throws IOException
Deprecated. Use configureIncrementalLoad(Job, Table, RegionLocator) instead.
Throws: IOException

public static void configureIncrementalLoad(org.apache.hadoop.mapreduce.Job job,
Table table,
RegionLocator regionLocator)
throws IOException
Throws: IOException

public static void configureIncrementalLoad(org.apache.hadoop.mapreduce.Job job,
HTableDescriptor tableDescriptor,
RegionLocator regionLocator)
throws IOException
Throws: IOException

public static void configureIncrementalLoadMap(org.apache.hadoop.mapreduce.Job job,
Table table)
throws IOException
Throws: IOException

public static void configureRemoteCluster(org.apache.hadoop.mapreduce.Job job,
org.apache.hadoop.conf.Configuration clusterConf)
Configure the HBase cluster key of the remote cluster so that region locations can be loaded for locality-sensitive HFile generation, if enabled. For example, call this method when you load data from HBase cluster A using TableInputFormat and generate HFiles for HBase cluster B. Otherwise, HFileOutputFormat2 fetches region locations from cluster A, and locality-sensitive generation won't work correctly.
configureIncrementalLoad(Job, Table, RegionLocator) calls this method using
Table.getConfiguration() as clusterConf.
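To illustrate the cluster-A-to-cluster-B case described above, a minimal sketch, assuming cluster B is addressed by its ZooKeeper quorum, client port, and znode parent (the host names and values below are placeholders):

```java
// Sketch: the job configuration points at source cluster A, but the HFiles
// target cluster B, so region locations for locality must be fetched from B.
// The ZooKeeper quorum, port, and znode parent are placeholder values.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HConstants;
import org.apache.hadoop.hbase.mapreduce.HFileOutputFormat2;
import org.apache.hadoop.mapreduce.Job;

public class RemoteClusterSetup {
  static void pointLocalityAtClusterB(Job job) {
    Configuration clusterB = HBaseConfiguration.create();
    clusterB.set(HConstants.ZOOKEEPER_QUORUM,
        "zk1.b.example.com,zk2.b.example.com");
    clusterB.setInt(HConstants.ZOOKEEPER_CLIENT_PORT, 2181);
    clusterB.set(HConstants.ZOOKEEPER_ZNODE_PARENT, "/hbase");
    // Stores cluster B's cluster key under the REMOTE_CLUSTER_ZOOKEEPER_*
    // job properties so locality-sensitive generation consults cluster B.
    HFileOutputFormat2.configureRemoteCluster(job, clusterB);
  }
}
```

As noted above, this call is unnecessary when the job configuration already carries the cluster key of the table being written, since configureIncrementalLoad(Job, Table, RegionLocator) does it for you.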
See HBASE-25608.

Parameters:
job - which has configuration to be updated
clusterConf - which contains cluster key of the HBase cluster to be locality-sensitive

See Also:
configureIncrementalLoad(Job, Table, RegionLocator), LOCALITY_SENSITIVE_CONF_KEY, REMOTE_CLUSTER_ZOOKEEPER_QUORUM_CONF_KEY, REMOTE_CLUSTER_ZOOKEEPER_CLIENT_PORT_CONF_KEY, REMOTE_CLUSTER_ZOOKEEPER_ZNODE_PARENT_CONF_KEY

Copyright © 2007–2022 The Apache Software Foundation. All rights reserved.