Search criteria: author:"Shixiong Zhu". Results 1 to 10 of 148 (0.0s).
Any limitations of spark.shuffle.spill? - Spark - [mail # user]
...Two limitations we found here: http://apache-spark-user-list.1001560.n3.nabble.com/OutOfMemory-in-quot-cogroup-quot-td17349.html  Best Regards, Shixiong Zhu  2014-11-06 2:04 GMT+08:00...
   Author: Shixiong Zhu, 2014-11-06, 06:16
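A minimal sketch of the setting under discussion, assuming the Spark 1.x SparkConf API; note that even with spilling enabled, operations like cogroup must still hold all values for a single key in memory, which is the limitation linked above:

    import org.apache.spark.{SparkConf, SparkContext}

    // spark.shuffle.spill=true (the 1.x default) lets shuffle data spill to
    // disk instead of being kept entirely in memory. It does not help when a
    // single key's values are too large to fit in memory, e.g. in cogroup.
    val conf = new SparkConf()
      .setAppName("spill-demo")
      .set("spark.shuffle.spill", "true")
    val sc = new SparkContext(conf)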
spark.akka.frameSize setting problem - Spark - [mail # user]
...Created a JIRA to track it: https://issues.apache.org/jira/browse/SPARK-4664  Best Regards, Shixiong Zhu  2014-12-01 13:22 GMT+08:00 Shixiong Zhu :  > Sorry. Should be not greater than 2048...
.... 2047 is the greatest value. > > Best Regards, > Shixiong Zhu > > 2014-12-01 13:20 GMT+08:00 Shixiong Zhu : > >> 4096 MB in bytes is greater than Int.MaxValue and it will overflow in Spark. >> Please...
... set it to less than 4096. >> >> Best Regards, >> Shixiong Zhu >> >> 2014-12-01 13:14 GMT+08:00 Ke Wang : >> >>> I met the same problem, did you solve it? >>> >>> >>> >>> -- >>> View...
   Author: Shixiong Zhu, 2014-12-01, 05:32
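The arithmetic behind the 2047 cap, as a short worked sketch: the frame size is given in MB and multiplied into bytes in a 32-bit Int, which is the overflow the thread warns about.

    // 2047 MB in bytes fits in an Int; 2048 MB does not.
    println(2047L * 1024 * 1024)  // 2146435072
    println(Int.MaxValue)         // 2147483647
    println(2048L * 1024 * 1024)  // 2147483648, overflows a 32-bit Int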
About implicit rddToPairRDDFunctions - Spark - [mail # dev]
...OK. I'll take it.  Best Regards, Shixiong Zhu  2014-11-14 12:34 GMT+08:00 Reynold Xin :  > That seems like a great idea. Can you submit a pull request? > > > On Thu, Nov 13, 2014 at 7:13 PM...
..., Shixiong Zhu  wrote: > >> If we put the `implicit` into "package object rdd" or "object rdd", when >> we write `rdd.groupByKey()`, because rdd is an object of RDD, the Scala >> compiler will search...
... is there are >> two copies of the same code. >> >> >> >> >> Best Regards, >> Shixiong Zhu >> >> 2014-11-14 3:57 GMT+08:00 Reynold Xin : >> >>> Do people usually import o.a.spark.rdd._ ? >>> >>> Also in order...
... to maintain source and binary compatibility, we would need >>> to keep both right? >>> >>> >>> On Thu, Nov 6, 2014 at 3:12 AM, Shixiong Zhu  wrote: >>> >>>> I saw many people asked how to convert...
..., the conversion will be automatic with no need to >>>> import org.apache.spark.SparkContext._ >>>> >>>> I tried to search for some discussion but found nothing. >>>> >>>> Best Regards, >>>> Shixiong Zhu >>>> >>> >>> >> > ...
   Author: Shixiong Zhu, 2014-11-14, 05:20
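A minimal sketch of the Scala mechanism this thread relies on: an implicit conversion defined in a type's companion object is found through implicit scope, so callers no longer need an explicit import (the names below are illustrative, not Spark's real classes):

    class MyRDD[T](val items: Seq[T])

    object MyRDD {
      // Lives in the companion object, so the compiler finds it for any
      // MyRDD[(K, V)] without an import at the call site.
      implicit class PairOps[K, V](rdd: MyRDD[(K, V)]) {
        def groupByKeyLocal: Map[K, Seq[V]] =
          rdd.items.groupBy(_._1).map { case (k, vs) => k -> vs.map(_._2) }
      }
    }

    val pairs = new MyRDD(Seq(("a", 1), ("a", 2), ("b", 3)))
    println(pairs.groupByKeyLocal)  // Map(a -> List(1, 2), b -> List(3))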
How to specify the port for AM Actor ... - Spark - [mail # user]
...LGTM. Could you open a JIRA and send a PR? Thanks. Best Regards, Shixiong Zhu 2015-03-28 7:14 GMT+08:00 Manoj Samel : > I looked at the 1.3.0 code and figured out where this can be added...
..., 2015 at 4:44 PM, Shixiong Zhu  wrote: > >> There is no configuration for it now. >> >> Best Regards, >> Shixiong Zhu >> >> 2015-03-26 7:13 GMT+08:00 Manoj Samel : >> >>> There may be firewall...
..., Mar 25, 2015 at 4:06 PM, Shixiong Zhu  wrote: >>> >>>> It's a random port to avoid port conflicts, since multiple AMs can run >>>> in the same machine. Why do you need a fixed port...
...? >>>> >>>> Best Regards, >>>> Shixiong Zhu >>>> >>>> 2015-03-26 6:49 GMT+08:00 Manoj Samel : >>>> >>>>> Spark 1.3, Hadoop 2.5, Kerberos >>>>> >>>>> When running spark-shell in yarn client mode, it shows...
   Author: Shixiong Zhu, 2015-03-30, 03:19
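For reference, a sketch of setting a fixed AM port once such a configuration exists; spark.yarn.am.port is the property later added for yarn-client mode (assumed here to be available in your version, as it is not in the 1.3.0 release discussed above):

    import org.apache.spark.SparkConf

    val conf = new SparkConf()
      .set("spark.yarn.am.port", "43000")  // any port your firewall allows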
history server - Spark - [mail # user]
...SPARK-5522 is really cool. Didn't notice it. Best Regards, Shixiong Zhu 2015-05-07 11:36 GMT-07:00 Marcelo Vanzin : > That shouldn't be true in 1.3 (see SPARK-5522). > > On Thu, May 7...
..., 2015 at 11:33 AM, Shixiong Zhu  wrote: > >> The history server may need several hours to start if you have a lot of >> event logs. Is it stuck, or still replaying logs? >> >> Best Regards...
..., >> Shixiong Zhu >> >> 2015-05-07 11:03 GMT-07:00 Marcelo Vanzin : >> >> Can you get a jstack for the process? Maybe it's stuck somewhere. >>> >>> On Thu, May 7, 2015 at 11:00 AM, Koert Kuipers...
   Author: Shixiong Zhu, 2015-05-07, 18:52
Spark UI consuming lots of memory - Spark - [mail # user]
...In addition, you cannot turn off JobListener and SQLListener now... Best Regards, Shixiong Zhu 2015-10-13 11:59 GMT+08:00 Shixiong Zhu : > Is your query very complicated? Could you...
..., > Shixiong Zhu > > 2015-10-13 11:44 GMT+08:00 Nicholas Pritchard < > [EMAIL PROTECTED]>: > >> As an update, I did try disabling the ui with "spark.ui.enabled=false...
... information. I am also using Spark Standalone cluster >>> manager so have not had to use the history server. >>> >>> >>> On Mon, Oct 12, 2015 at 8:17 PM, Shixiong Zhu  wrote: >>> >>>> Could you show...
... >>>> "spark.eventLog.enabled=true" doesn't work now. >>>> >>>> Best Regards, >>>> Shixiong Zhu >>>> >>>> 2015-10-13 2:01 GMT+08:00 pnpritchard >>>> : >>>> >>>>> Hi, >>>>> >>>>> In my application, the Spark UI...
   Author: Shixiong Zhu, 2015-10-13, 04:00
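While the listeners themselves cannot be disabled, the amount of metadata they retain can be capped. A hedged sketch, assuming the retention properties available in Spark 1.5-era releases (defaults are 1000 each):

    import org.apache.spark.SparkConf

    val conf = new SparkConf()
      .set("spark.ui.retainedJobs", "100")           // job entries kept in the UI
      .set("spark.ui.retainedStages", "100")         // stage entries kept in the UI
      .set("spark.sql.ui.retainedExecutions", "100") // SQL executions kept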
How did the RDD.union work - Spark - [mail # user]
... to a new node later).  Best Regards, Shixiong Zhu  2014-11-12 15:20 GMT+08:00 qiaou :  >  this works! > but can you explain why it should be used like this? > > -- > qiaou > Sent from Sparrow  > > On November 12, 2014
... (Wednesday) at 3:18 PM, Shixiong Zhu wrote: > > You need to create a new configuration for each RDD. Therefore, "val > hbaseConf = HBaseConfigUtil.getHBaseConfiguration" should be changed to "val...
... > hbaseConf = new Configuration(HBaseConfigUtil.getHBaseConfiguration)" > > Best Regards, > Shixiong Zhu > > 2014-11-12 14:53 GMT+08:00 qiaou : > >  ok here is the code > > def hbaseQuery:(String...
...) => { >             result >           } >         } >       } >       return generateRdd >     } > > -- > qiaou > Sent from Sparrow  > > On Wednesday, November 12, 2014 at 2:50 PM, Shixiong Zhu wrote: > > Could you provide the code...
... of hbaseQuery? Maybe it doesn't support > executing in parallel. > > Best Regards, > Shixiong Zhu > > 2014-11-12 14:32 GMT+08:00 qiaou : > >  Hi: >     I got a problem using the union method...
   Author: Shixiong Zhu, 2014-11-12, 07:45
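A sketch of the fix suggested in this thread: Hadoop Configuration objects are not safe to share across RDDs evaluated in parallel, so copy-construct a fresh one per RDD. HBaseConfigUtil stands in for the poster's own helper, not a Spark or HBase API:

    import org.apache.hadoop.conf.Configuration

    // Stand-in for the helper used in the thread.
    object HBaseConfigUtil {
      private val base = new Configuration()
      def getHBaseConfiguration: Configuration = base
    }

    // Copy the shared base instead of handing the same instance to every
    // RDD; the RDDs can then be union-ed and computed in parallel safely.
    def freshConf(): Configuration =
      new Configuration(HBaseConfigUtil.getHBaseConfiguration)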
Spark is much slower than direct access MySQL - Spark - [mail # user]
... Regards, Shixiong Zhu 2015-07-26 16:16 GMT+08:00 Louis Hust : > Look at the given url: > > Code can be found at: > > > https://github.com/louishust/sparkDemo/blob/master/src/main/java...
.../DirectQueryTest.java > > 2015-07-26 16:14 GMT+08:00 Shixiong Zhu : > >> Could you clarify how you measure the Spark time cost? Is it the total >> time of running the query? If so, it's possible because...
... the overhead of >> Spark dominates for small queries. >> >> Best Regards, >> Shixiong Zhu >> >> 2015-07-26 15:56 GMT+08:00 Jerrick Hoang : >> >>> how big is the dataset? how complicated is the query...
   Author: Shixiong Zhu, 2015-07-26, 08:24
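One way to take the measurement the reply asks about: time only the action itself, so fixed startup overhead is excluded. A small sketch (timed and the DataFrame df are illustrative, not from the thread):

    // Times just the body, excluding SparkContext/session startup.
    def timed[A](label: String)(body: => A): A = {
      val start = System.nanoTime()
      val result = body
      println(s"$label took ${(System.nanoTime() - start) / 1e6} ms")
      result
    }

    // timed("query only") { df.collect() }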
spark streaming printing no output - Spark - [mail # user]
..., Shixiong(Ryan) Zhu 2015-04-15 15:04 GMT+08:00 Shushant Arora : > Yes, only "Time: 1429054870000 ms" strings get printed on the console. > No output is getting printed. > And the time interval between two
... strings of the form (Time: ****ms) is much less > than the streaming duration set in the program. > > On Wed, Apr 15, 2015 at 5:11 AM, Shixiong Zhu  wrote: > >> Could you see something like this in the console...
...? >> >> ------------------------------------------- >> Time: 1429054870000 ms >> ------------------------------------------- >> >> >> Best Regards, >> Shixiong(Ryan) Zhu >> >> 2015-04-15 2:11 GMT+08:00 Shushant Arora : >> >>> Hi >>> >>> I am...
   Author: Shixiong Zhu, 2015-04-15, 07:09
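A minimal sketch of the behavior described here: DStream.print() emits the "Time: ... ms" header every batch interval even when the batch is empty, so headers with no records usually mean no data is arriving (the socket source below is just an example):

    import org.apache.spark.SparkConf
    import org.apache.spark.streaming.{Seconds, StreamingContext}

    val conf = new SparkConf().setMaster("local[2]").setAppName("print-demo")
    val ssc = new StreamingContext(conf, Seconds(10))
    val lines = ssc.socketTextStream("localhost", 9999)
    lines.print()  // header prints each batch; records only if data arrived
    ssc.start()
    ssc.awaitTermination()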
Spark Streaming Log4j Inside Eclipse - Spark - [mail # user]
...(...); Best Regards, Shixiong Zhu 2015-09-29 22:07 GMT+08:00 Ashish Soni : > I am using the Java streaming context and it doesn't have the method setLogLevel, > and I have also tried passing a VM argument...
... Sep 2015, at 18:52, Shixiong Zhu  wrote: >> >> You can use JavaSparkContext.setLogLevel to set the log level in your >> code. >> >> Best Regards, >> Shixiong Zhu >> >> 2015-09-28 22:55 GMT+08...
   Author: Shixiong Zhu, 2015-09-29, 15:03
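A sketch of the suggestion in this thread, written against the Scala API (setLogLevel was added in Spark 1.4): the streaming context has no setLogLevel of its own, so set it on the underlying SparkContext.

    import org.apache.spark.SparkConf
    import org.apache.spark.streaming.{Seconds, StreamingContext}

    val conf = new SparkConf().setMaster("local[2]").setAppName("log-demo")
    val ssc = new StreamingContext(conf, Seconds(1))
    ssc.sparkContext.setLogLevel("WARN")
    // From Java: jssc.sparkContext().setLogLevel("WARN")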