Search criteria: . Results 1 to 10 of 373 (0.0s).
[HDFS-4960] Unnecessary .meta seeks even when skip checksum is true - HDFS - [issue]
...While attempting to benchmark an HBase + Hadoop 2.0 setup on SSDs, we found unnecessary seeks into .meta files; each seek was a 7 byte read at the head of the file - this attempts to validat...
   Author: Varun Sharma, 2016-05-12, 18:18
Controller not sending messages - Helix - [mail # user]
...Hi,  We are seeing a situation where the external view and the current states are going out of sync. We see the following message in the logs. We are using a simple ONLINE-OFFLINE state...
   Author: Varun Sharma, 2016-03-07, 21:49
[HBASE-8815] A replicated cross cluster client - HBase - [issue]
...I would like to float this idea for brainstorming. HBase is a strongly consistent system modelled after Bigtable, which means a machine going down results in loss of availability of around 2 ...
   Author: Varun Sharma, 2016-02-05, 06:40
How to kill spark applications submitted using spark-submit reliably? - Spark - [mail # user]
...I do this in my stop script to kill the application: kill -s SIGTERM `pgrep -f StreamingApp`. To stop it forcefully: pkill -9 -f "StreamingApp". StreamingApp is the name of the class which I submitted. I...
   Author: varun sharma, 2015-11-21, 07:30
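The stop sequence quoted in the excerpt above (SIGTERM first, SIGKILL as a fallback) can be sketched as a small shell function. This is a minimal sketch, not code from the thread; "StreamingApp" is a placeholder for whatever main-class name appears in your driver's command line.

```shell
#!/bin/sh
# Sketch of the stop sequence from the thread, wrapped in a reusable function.
# pgrep/pkill -f match the pattern against each process's full command line.
stop_app() {
  pattern="$1"
  pkill -TERM -f "$pattern" 2>/dev/null || true  # polite stop: SIGTERM first
  sleep 2                                        # give shutdown hooks time to run
  if pgrep -f "$pattern" > /dev/null; then
    pkill -9 -f "$pattern"                       # still running: force kill
  fi
}

# Usage, mirroring the commands quoted in the mail:
# stop_app "StreamingApp"
```

SIGTERM gives a Spark driver the chance to run its shutdown hooks (e.g. stopping a StreamingContext cleanly) before the unconditional `pkill -9` removes anything that survived.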
In Spark application, how to get the passed in configuration? - Spark - [mail # user]
...You must be getting a warning at the start of the application like: Warning: Ignoring non-spark config property: runtime.environment=passInValue. Configs in Spark should start with *spark* as pr...
   Author: varun sharma, 2015-11-12, 17:33
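The warning quoted above is what spark-submit prints when a `--conf` key lacks the `spark.` prefix: only `spark.*` keys are forwarded to the application's SparkConf. A hypothetical invocation (the class and jar names are placeholders, not from the thread):

```shell
# Rejected: no "spark." prefix, so spark-submit logs
#   "Warning: Ignoring non-spark config property: runtime.environment=passInValue"
# and the value never reaches the application:
#   --conf runtime.environment=passInValue
#
# Accepted: the "spark." prefix makes it a Spark config property, readable in
# the driver via sparkConf.get("spark.runtime.environment"):
spark-submit \
  --class com.example.StreamingApp \
  --conf spark.runtime.environment=passInValue \
  streaming-app.jar
```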
Need more tasks in KafkaDirectStream - Spark - [mail # user]
...Cody, adding partitions to kafka is there as a last resort, I was wondering if I can decrease the processing time by not touching my Kafka cluster. Adrian, repartition looks like a good option...
   Author: varun sharma, 2015-10-29, 19:52
correct and fast way to stop streaming application - Spark - [mail # user]
...One more thing we can try is before committing offset we can verify the latest offset of that partition (in zookeeper) with fromOffset in OffsetRange. Just a thought... Let me know if it works..O...
   Author: varun sharma, 2015-10-27, 16:29
Kafka Streaming and Filtering > 3000 partitons - Spark - [mail # user]
...You can try something like this to filter by topic:
val kafkaStringStream = KafkaUtils.createDirectStream[.......]
//you might want to create Stream by fetching offsets from zk
kafkaStringStrea...
   Author: varun sharma, 2015-10-22, 06:32
Issue in spark batches - Spark - [mail # user]
...Hi TD, Is there any way in Spark I can fail/retry a batch in case of any exceptions, or do I have to write code to explicitly keep on retrying? Also, if some batch fails, I want to block furth...
   Author: varun sharma, 2015-10-21, 06:28
Kafka Direct Stream - Spark - [mail # user]
...I went through the story and as I understood it, it is for saving data to multiple keyspaces at once. How will it work for saving data to multiple tables in the same keyspace? I think tableName: String...
   Author: varun sharma, 2015-10-04, 10:16