Search criteria: author:"Yin Huai". Results 31 to 40 of 390 (0.0s).
[SPARK-5875] logical.Project should not be resolved if it contains aggregates or generators - Spark - [issue]
...To reproduce...
val rdd = sc.parallelize((1 to 10).map(i => s"""{"a":$i, "b":"str${i}"}"""))
sqlContext.jsonRDD(rdd).registerTempTable("jt")
sqlContext.sql("CREATE TABLE gen_tmp (key Int)")
s...
http://issues.apache.org/jira/browse/SPARK-5875    Author: Yin Huai , 2015-04-25, 21:42
[SPARK-5881] RDD remains cached after the table gets overridden by "CACHE TABLE" - Spark - [issue]
...val rdd = sc.parallelize((1 to 10).map(i => s"""{"a":$i, "b":"str${i}"}"""))
sqlContext.jsonRDD(rdd).registerTempTable("jt")
sqlContext.sql("CACHE TABLE foo AS SELECT * FROM jt")
sqlContext....
http://issues.apache.org/jira/browse/SPARK-5881    Author: Yin Huai , 2016-10-07, 22:42
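The scenario in SPARK-5881 can be sketched as follows. This is a hedged illustration of the reported leak, not a verified reproduction; it assumes a `SQLContext` named `sqlContext` and a registered temp table `jt` as in the excerpt above.

```scala
// Sketch of the reported behaviour (assumes sqlContext and temp table "jt" exist).
sqlContext.sql("CACHE TABLE foo AS SELECT * FROM jt")
// Overriding the cached table with a new definition:
sqlContext.sql("CACHE TABLE foo AS SELECT a FROM jt")
// Per the issue, the RDD backing the first cached plan remains in memory
// instead of being uncached when "foo" is overridden.
```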
[SPARK-5909] Add a clearCache command to Spark SQL's cache manager - Spark - [issue]
...This command will clear all cached data from the in-memory cache, which will be useful when users want to quickly clear the cache or as a workaround for cases like SPARK-5881....
http://issues.apache.org/jira/browse/SPARK-5909    Author: Yin Huai , 2015-04-25, 21:42
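Usage of the command proposed in SPARK-5909 might look like the following. This is a sketch assuming a `SQLContext` named `sqlContext` and a registered temp table `jt`; `clearCache` is the method this issue adds.

```scala
// Populate the in-memory cache, then drop every cached relation at once.
sqlContext.cacheTable("jt")   // cache one table
sqlContext.clearCache()       // clear all cached data in one call
// A SQL form ("CLEAR CACHE") also exists in later Spark versions.
```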
[SPARK-5910] DataFrame.selectExpr("col as newName") does not work - Spark - [issue]
...val rdd = sc.parallelize((1 to 10).map(i => s"""{"a":$i, "b":"str${i}"}"""))
sqlContext.jsonRDD(rdd).selectExpr("a as newName")
java.lang.RuntimeException: [1.3] failure: ``or'' expected bu...
http://issues.apache.org/jira/browse/SPARK-5910    Author: Yin Huai , 2015-02-24, 18:53
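For context on SPARK-5910: a sketch of the intended call and a workaround that sidesteps the expression parser. This assumes a DataFrame `df` with a column `a`; it is illustrative, not taken from the issue itself.

```scala
// Intended usage once the alias syntax is supported:
df.selectExpr("a as newName")
// Workaround using the typed Column API instead of a parsed expression:
df.select(df("a").as("newName"))
```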
[SPARK-5911] Make Column.cast(to: String) support fixed precision and scale decimal type - Spark - [issue]
http://issues.apache.org/jira/browse/SPARK-5911    Author: Yin Huai , 2015-04-25, 21:42
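A sketch of what SPARK-5911 enables, assuming a DataFrame `df` with a numeric column `amount` (a hypothetical name for illustration):

```scala
// With the change, the string form of cast accepts fixed precision and scale:
df.select(df("amount").cast("decimal(10, 2)"))
// Previously, fixed precision/scale required the type object form:
// df.select(df("amount").cast(DecimalType(10, 2)))
```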
[SPARK-5935] Accept MapType in the schema provided to a JSON dataset. - Spark - [issue]
http://issues.apache.org/jira/browse/SPARK-5935    Author: Yin Huai , 2015-04-25, 21:41
[SPARK-5936] Automatically convert a StructType to a MapType when the number of fields exceed a threshold. - Spark - [issue]
http://issues.apache.org/jira/browse/SPARK-5936    Author: Yin Huai , 2016-10-10, 18:45
[SPARK-6366] In Python API, the default save mode for save and saveAsTable should be "error" instead of "append". - Spark - [issue]
...If a user wants to append data, he/she should explicitly specify the save mode. Also, in Scala and Java, the default save mode is ErrorIfExists....
http://issues.apache.org/jira/browse/SPARK-6366    Author: Yin Huai , 2015-03-18, 01:42
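The Scala-side behavior that SPARK-6366 aligns Python with can be sketched as follows, assuming a DataFrame `df` (the table name is hypothetical):

```scala
import org.apache.spark.sql.SaveMode

// Scala/Java default is ErrorIfExists: saving to an existing table fails.
// To append, the save mode must be stated explicitly:
df.saveAsTable("jt_copy", SaveMode.Append)
```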
[SPARK-6367] Use the proper data type for those expressions that are hijacking existing data types. - Spark - [issue]
...For the following expressions, the actual value type does not match the type of our internal representation: ApproxCountDistinctPartition, NewSet, AddItemToSet, CombineSets, CollectHashSet. We should ...
http://issues.apache.org/jira/browse/SPARK-6367    Author: Yin Huai , 2015-04-12, 02:35
[SPARK-6368] Build a specialized serializer for Exchange operator. - Spark - [issue]
...Kryo is still pretty slow because it works on individual objects and is relatively expensive to allocate. For the Exchange operator, because the schemas for the key and value are already defined, we can c...
http://issues.apache.org/jira/browse/SPARK-6368    Author: Yin Huai , 2015-05-01, 22:16