Subscribe Multiple Topics Structured Streaming - Spark - [mail # user]
...You can use the statement below for multiple topics: val dfStatus = spark.readStream.format("kafka").option("subscribe", "utility-status, utility-critica...
   Author: naresh Goud , 2018-09-17, 18:06
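A minimal Scala sketch of the multi-topic subscription described above, assuming Spark Structured Streaming with the spark-sql-kafka-0-10 connector; the broker addresses and topic names are placeholders, not the ones from the original thread.

  import org.apache.spark.sql.SparkSession

  val spark = SparkSession.builder().appName("MultiTopicRead").getOrCreate()

  // A comma-separated list in the "subscribe" option subscribes one stream to several topics.
  val dfStatus = spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "host1:9092,host2:9092")
    .option("subscribe", "topicA,topicB")
    .load()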
java.nio.file.FileSystemException: /tmp/spark- .._cache : No space left on device - Spark - [mail # user]
...Also check that enough space is available on the /tmp directory. On Fri, Aug 17, 2018 at 10:14 AM Jeevan K. Srivatsa <[EMAIL PROTECTED]> wrote: > Hi Venkata, >> On a quick glance, it looks li...
   Author: naresh Goud , 2018-08-19, 16:51
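If the exhausted /tmp space is Spark's own scratch directory, one commonly cited remedy is to point spark.local.dir at a volume with more room. A hedged sketch, with /data/spark-tmp as a hypothetical path:

  import org.apache.spark.sql.SparkSession

  val spark = SparkSession.builder()
    .appName("LocalDirExample")
    // Shuffle spills and block-manager cache files go here instead of /tmp.
    .config("spark.local.dir", "/data/spark-tmp")
    .getOrCreate()

Note that on YARN this setting is typically overridden by the node manager's local directories.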
Unable to alter partition. The transaction for alter partition did not commit successfully. - Spark - [mail # user]
...What are you doing? Give more details on what you are doing. On Wed, May 30, 2018 at 12:58 PM Arun Hive wrote: >> Hi >> While running my Spark job component I am getting the following...
   Author: naresh Goud , 2018-05-30, 22:31
ERROR: Hive on Spark - Spark - [mail # user]
...Change your table name in the query to spam.spamdataset instead of spamdataset. On Sun, Apr 15, 2018 at 2:12 PM Rishikesh Gawade wrote: > Hello there. I am a newbie in the world of Spark. I have...
   Author: naresh Goud , 2018-04-16, 14:52
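A short sketch of the fix suggested above, assuming a SparkSession named spark with Hive support enabled: qualify the table with its database instead of relying on the current one.

  // Database-qualified name, as suggested in the reply.
  val df = spark.sql("SELECT * FROM spam.spamdataset")

  // spark.table also accepts a db-qualified name.
  val df2 = spark.table("spam.spamdataset")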
[Spark sql]: Re-execution of same operation takes less time than 1st - Spark - [mail # user]
...Whenever Spark reads the data, it will have it in executor memory until and unless there is no room for new data to be read or processed. This is the beauty of Spark. On Tue, Apr 3, 2018 at 12:42 ...
   Author: naresh Goud , 2018-04-03, 16:09
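The caching behaviour alluded to above can also be made explicit; a minimal sketch, assuming an existing SparkSession named spark and a hypothetical input path:

  // cache() keeps the data in executor memory after the first action,
  // so the second identical action is usually much faster.
  val df = spark.read.parquet("/data/events").cache()

  df.count()  // first run: reads from storage and populates the cache
  df.count()  // second run: served from executor memory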
How does extending an existing parquet with columns affect impala/spark performance? - Spark - [mail # user]
...From the Spark point of view it shouldn't affect anything. It's possible to extend the columns of new parquet files; it won't affect performance and doesn't require changing the Spark application code. On Tue, Apr...
   Author: naresh Goud , 2018-04-03, 14:40
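When new Parquet files add columns, Spark can reconcile the old and new schemas at read time; a hedged sketch using the standard mergeSchema read option (not mentioned in the thread itself), with a hypothetical path:

  // Rows from older files simply get nulls for the newly added columns.
  val df = spark.read
    .option("mergeSchema", "true")
    .parquet("/data/table_dir")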
java.lang.UnsupportedOperationException: CSV data source does not support struct/ERROR RetryingBlockFetcher - Spark - [mail # user]
...In the case of storing as a parquet file, I don't think it requires the header option .option("header","true"). Give it a try by removing the header option and then try to read it. I haven't tried; just a thought....
   Author: naresh Goud , 2018-03-28, 02:51
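A small sketch of that suggestion: "header" is a CSV-only option, so it can be dropped when writing and reading Parquet. The paths are hypothetical.

  // Write without any header option; Parquet stores column names in its schema.
  df.write.parquet("/data/out_parquet")

  // Column names come back from the Parquet footer, not from a header row.
  val back = spark.read.parquet("/data/out_parquet")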
is there a way to catch exceptions on executor level - Spark - [mail # user]
...How about accumulators? Thanks, Naresh www.linkedin.com/in/naresh-dulam http://hadoopandspark.blogspot.com/ On Thu, Mar 8, 2018 at 12:07 AM Chethan Bhawarlal <[EMAIL PROTECTED]> wrote: > H...
   Author: naresh Goud , 2018-03-10, 17:53
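A minimal sketch of the accumulator idea: count failures raised inside executor-side code and inspect the total on the driver after the action completes. The input path and parsing logic are made up for illustration.

  import org.apache.spark.util.LongAccumulator

  val errorCount: LongAccumulator = spark.sparkContext.longAccumulator("errorCount")

  val parsed = spark.sparkContext.textFile("/data/input.txt").map { line =>
    try {
      line.toInt
    } catch {
      case _: NumberFormatException =>
        errorCount.add(1)   // recorded on the executor, aggregated on the driver
        -1
    }
  }

  parsed.count()                              // the action triggers the job
  println(s"bad records: ${errorCount.value}")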
Reading kafka and save to parquet problem - Spark - [mail # user]
...Change it to readStream instead of read, as below: val df = spark.readStream.format("kafka").option("kafka.bootstrap.servers", "host1:port1,host2:port2").option("subscribe"...
   Author: naresh Goud , 2018-03-08, 01:38
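A hedged end-to-end sketch of the readStream variant, writing out to Parquet; the brokers, topic, and paths are placeholders, and a checkpoint location is required for the file sink.

  val df = spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "host1:port1,host2:port2")
    .option("subscribe", "topic1")
    .load()

  val query = df.selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)")
    .writeStream
    .format("parquet")
    .option("path", "/data/kafka_parquet")
    .option("checkpointLocation", "/data/kafka_parquet_chk")
    .start()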
How does Spark Structured Streaming determine an event has arrived late? - Spark - [mail # user]
...Hi Kant, TD's explanation makes a lot of sense. Refer to this Stack Overflow question, where it was explained with program output. Hope this helps. https://stackoverflow.com/questions/45579100/structure...
   Author: naresh Goud , 2018-02-28, 01:55
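For reference, lateness in Structured Streaming is declared with a watermark on the event-time column; a minimal sketch, assuming a streaming DataFrame named events with a timestamp column eventTime:

  import org.apache.spark.sql.functions.{col, window}

  // Events older than (max event time seen so far - 10 minutes) are treated as late
  // and may be dropped from the windowed aggregation.
  val counts = events
    .withWatermark("eventTime", "10 minutes")
    .groupBy(window(col("eventTime"), "5 minutes"))
    .count()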