My requirement is to partition an HBase Table and return a group of records (i.e. rows having a specific format) without having to iterate over all of its rows. These partitions (which should ideally be along regions) will eventually be sent to Spark but rather than use the HBase or Hadoop RDDs directly, I'll be using a custom RDD which recognizes partitions as the aforementioned group of records.
I was looking at achieving this by creating InputSplits through TableInputFormat.getSplits(), as is done in the HBase RDD, but I can't figure out a way to do this without having access to the MapReduce context etc.
Would greatly appreciate if someone could point me in the right direction.
The split occurs against the regions so that if you have n regions, you have n splits.
Please don’t confuse partitions and regions; they are not synonymous. The opinions expressed here are mine, while they may reflect a cognitive thought, that is purely accidental. Use at your own risk. Michael Segel michael_segel (AT) hotmail.com
Thanks for the reply. Yes, I do realise that HBase has regions, perhaps my usage of the term partitions was misleading. What I'm looking for is exactly what you've mentioned - a means of creating splits based on regions, without having to iterate over all rows in the table through the client API. Do you have any idea how I might achieve this?
On Tuesday, March 17, 2015, Michael Segel <[EMAIL PROTECTED]> wrote:
If you know the row key range of your data, then you can create split points yourself and then use the HBase API to actually make the splits.
E.g., if you know that your row key has a range of A to Z (a very contrived example), then you can pick every 5th letter as your split points and use the HBaseAdmin.split method to do the split for you. This way you don't have to iterate over your data.
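Continuing the contrived A-to-Z example, a minimal sketch of computing such split points client-side (the table name "myTable" is an assumption, and the actual Admin.split call needs a running cluster, so it is shown only as a comment):

```java
import java.util.ArrayList;
import java.util.List;

public class SplitPoints {
    // Pick every 5th letter in the range A..Z as a split point: E, J, O, T, Y.
    static List<String> splitPoints() {
        List<String> points = new ArrayList<>();
        for (char c = 'E'; c < 'Z'; c += 5) {
            points.add(String.valueOf(c));
        }
        return points;
    }

    public static void main(String[] args) {
        for (String p : splitPoints()) {
            // With an HBase connection you would then issue, per point:
            //   admin.split(TableName.valueOf("myTable"), Bytes.toBytes(p));
            System.out.println(p);
        }
    }
}
```

The splits themselves happen server-side; the client only hands HBase the boundary keys.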
Or are you saying that you don't have the row key range?
On Tue, Mar 17, 2015 at 3:12 PM, Mikhail Antonov <[EMAIL PROTECTED]> wrote:
If you don't want to use the getSplits method, you're welcome to pull the relevant code out into your own RDD. The RegionLocator object is public API, and the code is trivial if you're not interested in normalizing the split points as the MR job does.
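A sketch of that relevant step: RegionLocator.getStartEndKeys() returns parallel arrays of region start and stop keys, and pairing them up yields one split range per region. The HBase calls need a live cluster, so they appear only as comments below; the pairing logic itself uses plain byte arrays, and the helper name toRanges is made up for illustration:

```java
import java.util.ArrayList;
import java.util.List;

public class RegionRanges {
    // One (startKey, stopKey) range per region; empty arrays mark the
    // open-ended first and last regions, matching HBase's convention.
    static List<byte[][]> toRanges(byte[][] startKeys, byte[][] endKeys) {
        List<byte[][]> ranges = new ArrayList<>();
        for (int i = 0; i < startKeys.length; i++) {
            ranges.add(new byte[][] { startKeys[i], endKeys[i] });
        }
        return ranges;
    }

    public static void main(String[] args) {
        // With a cluster you would obtain the boundaries like this:
        //   try (Connection conn = ConnectionFactory.createConnection(conf);
        //        RegionLocator locator =
        //            conn.getRegionLocator(TableName.valueOf("myTable"))) {
        //       Pair<byte[][], byte[][]> keys = locator.getStartEndKeys();
        //       List<byte[][]> splits = toRanges(keys.getFirst(), keys.getSecond());
        //   }
        byte[][] starts = { {}, {'J'}, {'T'} };   // 3 regions
        byte[][] ends   = { {'J'}, {'T'}, {} };
        System.out.println(toRanges(starts, ends).size());
    }
}
```

Each resulting range can then back one partition of a custom RDD, with no row iteration involved.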
On Tue, Mar 17, 2015 at 12:12 PM, Mikhail Antonov <[EMAIL PROTECTED]> wrote:
If you’re writing your own MapReduce program, you will get one split per region. If your scan doesn’t contain a start or stop row, you will scan every row in the table.
The splits provide parallelism. So when you launch your job and you have 10 regions, you’ll have 10 splits.
Going from memory: if your scan has a start/stop row, then for regions holding no data in that range (e.g. the region’s start row falls outside the scope of your scan), the mapper created for that region will complete quickly, and no rows are scanned or returned in the result set.
I think what you’re looking for is already done for you.
-Mike
@Mikhail I wanted to split the table into groups of rows, but did not want to initialize a scan and go over all rows and group them into batches in the client code. In other words, I'm looking for a way to divide the rows in the table and merely maintain the boundary information of each division, rather than actually populating the divisions at creation time.
@Shahab yes, the row key ranges for the splits are not known in advance, which was why I was looking at retrieving the region information of the table and creating the groupings that way.
@Sean this was exactly what I was looking for. Based on the region boundaries, I should be able to create virtual groups of rows which can then be retrieved from the table (e.g. through a scan) on demand.
Thanks everyone for your help.
On 18 March 2015 at 00:57, Sean Busbey <[EMAIL PROTECTED]> wrote: