Search results 1 to 10 of 70 (0.0s).
[HBASE-7782] HBaseTestingUtility.truncateTable() not acting like CLI - HBase - [issue]
...I would like to discuss the behavior of the truncateTable() method of HBaseTestingUtility. It's currently only removing the data through a scan/delete pattern. However, the truncate command i...    Author: Adrien Mogenet, 2018-09-20, 18:41
How does Spark set task indexes? - Spark - [mail # user]
...Yes I've noticed this one and its related cousin, but not sure this is the same issue there; our job "properly" ends after 6 attempts. We'll try with disabled speculative mode anyway! On 25 May...
   Author: Adrien Mogenet, 2016-05-25, 08:49
How to add an accumulator for a Set in Spark - Spark - [mail # user]
...Btw, here is a great article about accumulators and all their related traps! (I'm not the author) On 16 March 2016 at 18:24, swetha kasireddy wr...
   Author: Adrien Mogenet, 2016-03-17, 07:32
df.partitionBy().parquet() java.lang.OutOfMemoryError: GC overhead limit exceeded - Spark - [mail # user]
...Very interested in that topic too, thanks Cheng for the direction! We'll give it a try as well. On 3 December 2015 at 01:40, Cheng Lian wrote: > You may try to set Hadoop conf "parquet...
   Author: Adrien Mogenet, 2015-12-03, 07:39
[POWERED BY] Please add our organization - Spark - [mail # user]
...Oh, right! I think it was user@ at the time I wrote my first message but it's clear now! Thanks Sean, On 2 December 2015 at 11:56, Sean Owen wrote: > Same, not sure if anyone handles th...
   Author: Adrien Mogenet, 2015-12-02, 11:04
[HBASE-9260] Timestamp Compactions - HBase - [issue]
...TSCompactions. The issue: one of the biggest issues I currently deal with is compacting big stores, i.e. when an HBase cluster is 80% full on 4 TB nodes (let's say with a single big table), compactions ...    Author: Adrien Mogenet, 2015-11-10, 03:40
Split content into multiple Parquet files - Spark - [mail # user]
...My bad, I realized my question was unclear. I did a partitionBy when using saveAsHadoopFile. My question was about doing the same thing for Parquet files. We were using Spark 1.3.x, but now that...
   Author: Adrien Mogenet, 2015-09-08, 17:21
High iowait in idle hbase cluster - Hadoop - [mail # user]
...What is your disk configuration? JBOD? If RAID, possibly a dysfunctional RAID controller, or a constantly-rebuilding array. Do you have any idea which files the read blocks are linked to? On 4 ...
   Author: Adrien Mogenet, 2015-09-04, 10:08
How to determine the value for spark.sql.shuffle.partitions? - Spark - [mail # user]
...Not sure it would help and answer your question at 100%, but the number of partitions is supposed to be at least roughly double your number of cores (surprised to not see this point in your lis...
   Author: Adrien Mogenet, 2015-09-04, 06:04
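The rule of thumb in the reply above (partition count at least roughly double the total core count) can be sketched as a quick sizing helper. This is a minimal illustration of the heuristic from the thread, not an official Spark formula; the function name and the `factor` parameter are assumptions for the example:

```python
def suggest_shuffle_partitions(total_cores: int, factor: int = 2) -> int:
    """Heuristic sketch: aim for at least `factor` x the cluster's total cores.

    Note that Spark's built-in default for spark.sql.shuffle.partitions
    is 200, regardless of cluster size, so large clusters often need a
    higher value set explicitly.
    """
    if total_cores <= 0:
        raise ValueError("total_cores must be positive")
    return factor * total_cores

# Example: a cluster of 10 executors with 8 cores each
print(suggest_shuffle_partitions(10 * 8))  # → 160
```

The value would then be applied via `spark.conf.set("spark.sql.shuffle.partitions", ...)`; in practice it is also tuned by target partition size, not core count alone.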
Parquet partitioning for unique identifier - Spark - [mail # user]
...Any code / Parquet schema to provide? I'm not sure I understand which step fails right there... On 3 September 2015 at 04:12, Raghavendra Pandey <[EMAIL PROTECTED]> wrote: > Did you s...
   Author: Adrien Mogenet, 2015-09-03, 06:16