Search criteria: author:"Yin Huai". Results 1 to 10 of 388 (0.0s).
[SPARK-14410] SessionCatalog needs to check function existence - Spark - [issue]
...Right now, operations on an existing function in SessionCatalog do not really check if the function exists. We should add this check and avoid doing the check in each command....
http://issues.apache.org/jira/browse/SPARK-14410    Author: Yin Huai , 2018-06-21, 06:12
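A minimal sketch of the kind of check this issue proposes: a catalog that verifies a function is registered before operating on it, rather than deferring the check to each command. This is plain Scala for illustration only; `SimpleFunctionCatalog` and its methods are hypothetical names, not the actual SessionCatalog API.

```scala
// Hypothetical sketch (not the real SessionCatalog): fail fast with a
// clear error when a function does not exist, instead of letting each
// command repeat the check.
class SimpleFunctionCatalog {
  private val functions = scala.collection.mutable.Map[String, Int => Int]()

  def registerFunction(name: String, fn: Int => Int): Unit =
    functions(name) = fn

  // Existence is checked here, once, for every lookup.
  def lookupFunction(name: String): Int => Int =
    functions.getOrElse(
      name,
      throw new NoSuchElementException(s"Undefined function: '$name'"))

  def dropFunction(name: String): Unit = {
    if (!functions.contains(name))
      throw new NoSuchElementException(s"Undefined function: '$name'")
    functions -= name
  }
}
```

Centralizing the check in the catalog keeps the error consistent across every command that touches functions.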
[SPARK-3559] appendReadColumnIDs and appendReadColumnNames introduce unnecessary columns in the lists of needed column ids and column names stored in hiveConf - Spark - [issue]
...Because we are using the same hiveConf and we are currently using ColumnProjectionUtils.appendReadColumnIDs and ColumnProjectionUtils.appendReadColumnNames to append needed column ids and names ...
http://issues.apache.org/jira/browse/SPARK-3559    Author: Yin Huai , 2014-10-13, 20:45
[SPARK-3700] Improve the performance of scanning JSON datasets - Spark - [issue]
http://issues.apache.org/jira/browse/SPARK-3700    Author: Yin Huai , 2015-09-16, 08:04
[SPARK-3641] Correctly populate SparkPlan.currentContext - Spark - [issue]
...After creating a new SQLContext, we need to populate SparkPlan.currentContext before we create any SparkPlan. Right now, only SQLContext.createSchemaRDD populates SparkPlan.currentContext. SQ...
http://issues.apache.org/jira/browse/SPARK-3641    Author: Yin Huai , 2014-12-02, 20:16
[SPARK-10621] Audit function names in FunctionRegistry and corresponding method names shown in functions.scala and functions.py - Spark - [issue]
...Right now, there are a few places where we are not very consistent. There are a few functions that are registered in FunctionRegistry, but not provided in functions.scala and functions.py. Ex...
http://issues.apache.org/jira/browse/SPARK-10621    Author: Yin Huai , 2015-11-25, 21:20
[SPARK-10639] Need to convert UDAF's result from scala to sql type - Spark - [issue]
...We are missing a conversion at https://github.com/apache/spark/blob/branch-1.5/sql/core/src/main/scala/org/apache/spark/sql/execution/aggregate/udaf.scala#L427....
http://issues.apache.org/jira/browse/SPARK-10639    Author: Yin Huai , 2015-09-22, 10:37
[SPARK-10671] Calling a UDF with insufficient number of input arguments should throw an analysis error - Spark - [issue]
...import org.apache.spark.sql.functions._
Seq((1, 2)).toDF("a", "b").select(callUDF("percentile", $"a"))
This should throw an AnalysisException....
http://issues.apache.org/jira/browse/SPARK-10671    Author: Yin Huai , 2015-10-01, 20:24
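The fix this issue asks for amounts to validating the argument count at analysis time rather than failing later at runtime. A minimal plain-Scala sketch of such an arity check, assuming a hypothetical `RegisteredUdf` record and `checkUdfCall` helper (neither is the actual Spark API):

```scala
// Hypothetical sketch: reject a UDF call with the wrong number of
// arguments during analysis, with an error that names the function
// and the expected/actual arity.
case class AnalysisException(message: String) extends Exception(message)

case class RegisteredUdf(name: String, arity: Int)

def checkUdfCall(udf: RegisteredUdf, args: Seq[Any]): Unit =
  if (args.length != udf.arity)
    throw AnalysisException(
      s"Invalid number of arguments for function ${udf.name}: " +
        s"expected ${udf.arity}, found ${args.length}")
```

With a two-argument `percentile` registered, calling it with a single column would fail this check immediately instead of surfacing a confusing runtime error.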
[SPARK-10672] We should not fail to create a table If we cannot persist metadata of a data source table to metastore in a Hive compatible way - Spark - [issue]
...It is possible that Hive has some internal restrictions on what kinds of metadata of a table it accepts (e.g. Hive 0.13 does not support decimal stored in parquet). If it is the case, we sho...
http://issues.apache.org/jira/browse/SPARK-10672    Author: Yin Huai , 2015-09-22, 20:30
[SPARK-10887] Build HashedRelation outside of HashJoinNode - Spark - [issue]
...Right now, HashJoinNode builds a HashedRelation for the build side. We can take this process out. So, we can use HashJoinNode for both broadcast join and shuffled join....
http://issues.apache.org/jira/browse/SPARK-10887    Author: Yin Huai , 2015-10-08, 18:57
[SPARK-10709] When loading a json dataset as a data frame, if the input path is wrong, the error message is very confusing - Spark - [issue]
...If you do something like sqlContext.read.json("a wrong path"), when we actually read data, the error message is java.io.IOException: No input paths specified in job at org.apache.hadoop.mapr...
http://issues.apache.org/jira/browse/SPARK-10709    Author: Yin Huai , 2015-10-24, 18:05
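One way to get the clearer error the issue asks for is to validate the input path up front, so the user sees the bad path instead of a generic "No input paths specified in job" IOException from deep inside Hadoop. A minimal sketch using the JDK's file API; `checkInputPath` is a hypothetical helper, not Spark's actual implementation (which must also handle non-local filesystems such as HDFS):

```scala
// Hypothetical sketch: fail early with an error message that includes
// the offending path, before handing it to the underlying reader.
import java.nio.file.{Files, Paths}

def checkInputPath(path: String): Unit =
  if (!Files.exists(Paths.get(path)))
    throw new IllegalArgumentException(s"Input path does not exist: $path")
```

The point is simply that the exception carries the user-supplied path, which the original java.io.IOException did not.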