Has anybody ever encountered a problem where the RFiles that are generated by AccumuloFileOutputFormat can't be imported using TableOperations.importDirectory?
I'm seeing this problem very frequently for small RFiles and occasionally for larger RFiles. The errors shown in the monitor's log UI suggest a corrupt file to me. For instance, the stack trace below shows a case where the BCFileVersion was incorrect, but sometimes it will complain about an invalid length, a negative offset, or an invalid codec.
I'm using HDP Accumulo 1.7.0 (18.104.22.168.3.4.12-1) on an encrypted HDFS volume, with Kerberos turned on. The RFiles are generated by AccumuloFileOutputFormat from a Spark job.
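For context, the write-then-import path looks roughly like the sketch below; the table name, paths, and connector setup are placeholders, not the actual job:

    import org.apache.accumulo.core.client.Connector;
    import org.apache.accumulo.core.client.mapreduce.AccumuloFileOutputFormat;
    import org.apache.accumulo.core.data.Key;
    import org.apache.accumulo.core.data.Value;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.spark.api.java.JavaPairRDD;

    public class BulkWriteSketch {
      // sortedPairs: a JavaPairRDD<Key, Value> whose keys are already in sorted order.
      // connector: an Accumulo Connector obtained elsewhere.
      static void writeAndImport(JavaPairRDD<Key, Value> sortedPairs, Connector connector) throws Exception {
        String workDir = "hdfs:///tmp/bulk";   // placeholder working directory
        Job job = Job.getInstance(new Configuration());
        AccumuloFileOutputFormat.setOutputPath(job, new Path(workDir + "/files"));

        // Spark writes one RFile per partition through AccumuloFileOutputFormat.
        sortedPairs.saveAsNewAPIHadoopFile(workDir + "/files", Key.class, Value.class,
            AccumuloFileOutputFormat.class, job.getConfiguration());

        // Bulk-import the generated files; rejected files land in the failures dir.
        connector.tableOperations().importDirectory("my_table",
            workDir + "/files", workDir + "/failures", false);
      }
    }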
I'm pretty confident that the keys are being written to the RFile in order. Are there any tools I could use to inspect the internal structure of the RFile?
Unable to find tablets that overlap file hdfs://[redacted]/accumulo/data/tables/f/b-0000ze9/I0000zeb.rf
java.lang.RuntimeException: Incompatible BCFile fileBCFileVersion.
        at org.apache.accumulo.core.file.rfile.bcfile.BCFile$Reader.<init>(BCFile.java:828)
        at org.apache.accumulo.core.file.blockfile.impl.CachableBlockFile$Reader.init(CachableBlockFile.java:246)
        at org.apache.accumulo.core.file.blockfile.impl.CachableBlockFile$Reader.getBCFile(CachableBlockFile.java:257)
        at org.apache.accumulo.core.file.blockfile.impl.CachableBlockFile$Reader.access$100(CachableBlockFile.java:137)
        at org.apache.accumulo.core.file.blockfile.impl.CachableBlockFile$Reader$MetaBlockLoader.get(CachableBlockFile.java:209)
        at org.apache.accumulo.core.file.blockfile.impl.CachableBlockFile$Reader.getBlock(CachableBlockFile.java:313)
        at org.apache.accumulo.core.file.blockfile.impl.CachableBlockFile$Reader.getMetaBlock(CachableBlockFile.java:368)
        at org.apache.accumulo.core.file.blockfile.impl.CachableBlockFile$Reader.getMetaBlock(CachableBlockFile.java:137)
        at org.apache.accumulo.core.file.rfile.RFile$Reader.<init>(RFile.java:843)
        at org.apache.accumulo.core.file.rfile.RFileOperations.openReader(RFileOperations.java:79)
        at org.apache.accumulo.core.file.DispatchingFileFactory.openReader(DispatchingFileFactory.java:69)
        at org.apache.accumulo.server.client.BulkImporter.findOverlappingTablets(BulkImporter.java:644)
        at org.apache.accumulo.server.client.BulkImporter.findOverlappingTablets(BulkImporter.java:615)
        at org.apache.accumulo.server.client.BulkImporter$1.run(BulkImporter.java:146)
        at org.apache.accumulo.fate.util.LoggingRunnable.run(LoggingRunnable.java:35)
        at org.apache.htrace.wrappers.TraceRunnable.run(TraceRunnable.java:57)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at org.apache.accumulo.fate.util.LoggingRunnable.run(LoggingRunnable.java:35)
        at java.lang.Thread.run(Thread.java:745)
Yeah, I'd lean towards something corrupting the file as well. We presently have two BCFile versions: 2.0 and 1.0. Both are supported by the code, so it should not be possible to create a bad RFile using our APIs (assuming correctness from the filesystem, anyway).
I'm reminded of HADOOP-11674, but a quick check shows that it is fixed in your HDP-2.3.4 version (sorry for injecting $vendor here).
Some other thoughts on how you could proceed:
* Can Spark write the file to the local fs? Maybe you can rule out HDFS w/ encryption as a contributing issue by writing directly to local disk and then uploading the files to HDFS after the fact (as a test).
* `accumulo rfile-info` should fail in the same way if the metadata is busted, as a way to verify things (see the example below).
* You can use rfile-info on both files, in HDFS and on the local fs (tying into the first point).
* If you can share one of these files that is invalid, we can rip it apart and see what's going on.
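For what it's worth, rfile-info just takes the file path as an argument, along these lines (the path below is a placeholder):

    accumulo rfile-info hdfs://namenode:8020/path/to/suspect-file.rf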
The two files contain exactly the same key-value pairs. They differ by 2 bytes in the footer of the RFile. The file written to the encrypted HDFS directory is consistently corrupt - I'm not confident yet that it's always corrupt in the same place because I see several different errors, but in this case those 2 bytes were wrong.
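In case it's useful to anyone following along, a byte-for-byte comparison along these lines is one way to locate the differing offsets once both copies are on local disk (the file paths are placeholders passed as arguments):

    import java.nio.file.Files;
    import java.nio.file.Paths;

    // Prints every offset where the two files differ, plus a length mismatch if any.
    public class DiffBytes {
      public static void main(String[] args) throws Exception {
        byte[] a = Files.readAllBytes(Paths.get(args[0]));   // e.g. the local-fs copy
        byte[] b = Files.readAllBytes(Paths.get(args[1]));   // e.g. the copy pulled back from encrypted HDFS
        int len = Math.min(a.length, b.length);
        for (int i = 0; i < len; i++) {
          if (a[i] != b[i]) {
            System.out.printf("offset %d: %02x vs %02x%n", i, a[i], b[i]);
          }
        }
        if (a.length != b.length) {
          System.out.printf("lengths differ: %d vs %d%n", a.length, b.length);
        }
      }
    }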
On Fri, Jul 8, 2016 at 12:30 PM Josh Elser <[EMAIL PROTECTED]> wrote:
There are at least 3 variants, depending on the file, if that helps:
Unable to find tablets that overlap file hdfs://mas1.soaktest.phemi.com:8020/apps/accumulo/data/tables/3/b-00019av/I0001b1u.rf
java.io.IOException: Corrupted Meta region Index

Unable to find tablets that overlap file hdfs://mas1.soaktest.phemi.com:8020/apps/accumulo/data/tables/3/b-00016t1/I00017xv.rf
java.lang.RuntimeException: Incompatible BCFile fileBCFileVersion.

Unable to find tablets that overlap file hdfs://mas1.soaktest.phemi.com:8020/apps/accumulo/data/tables/3/b-00016t1/I000173f.rf
java.lang.IndexOutOfBoundsException: Invalid offset/length: 6230/-25

On 16-07-08 11:49 AM, Russ Weeks wrote:
Is Accumulo writing RFiles to the encrypted HDFS instance, and are those ok? If only the Spark job is having issues, maybe it's using a different Hadoop client lib or a different Hadoop config when it writes files.
On Fri, Jul 8, 2016 at 5:29 PM, Russ Weeks <[EMAIL PROTECTED]> wrote:
Good point, thanks Keith. Yes, Accumulo was writing RFiles to the encrypted HDFS directory without error. The problem was that Spark had its own HDFS client libraries on the driver and executor classpaths and I'd configured my build process to use those instead.
It was a nightmare trying to produce a runtime classpath combining the HDFS client libs that Accumulo needs with the classes in Spark's assembly jar. In the end the cleanest solution was to switch to a distribution of Spark that was configured to use a separate Hadoop installation.
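In case anyone else runs into the same kind of conflict, a quick way to see which jar a Hadoop class is actually being loaded from at runtime is something like the sketch below (the class chosen is just an example):

    import org.apache.hadoop.hdfs.DistributedFileSystem;

    // Prints the jar (or directory) the HDFS client class was loaded from, which
    // makes it easy to spot whether Spark's bundled Hadoop or the cluster's
    // Hadoop is on the classpath.
    public class WhichJar {
      public static void main(String[] args) {
        System.out.println(DistributedFileSystem.class
            .getProtectionDomain().getCodeSource().getLocation());
      }
    }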
Thanks again for all your help Bill, Josh and Keith!
On Fri, Jul 8, 2016 at 3:09 PM Keith Turner <[EMAIL PROTECTED]> wrote: