Search criteria: author:"Yin Huai". Results 1 to 10 of 390 (0.0s).
[SPARK-14410] SessionCatalog needs to check function existence - Spark - [issue]
...Right now, operations on an existing function in SessionCatalog do not really check whether the function exists. We should add this check and avoid doing the check in command....    Author: Yin Huai , 2018-06-21, 06:12
[SPARK-3559] appendReadColumnIDs and appendReadColumnNames introduce unnecessary columns in the lists of needed column ids and column names stored in hiveConf - Spark - [issue]
...Because we are using the same hiveConf, and we are currently using ColumnProjectionUtils.appendReadColumnIDs and ColumnProjectionUtils.appendReadColumnNames to append needed column ids and names ...    Author: Yin Huai , 2014-10-13, 20:45
[SPARK-3700] Improve the performance of scanning JSON datasets - Spark - [issue]    Author: Yin Huai , 2015-09-16, 08:04
[SPARK-3641] Correctly populate SparkPlan.currentContext - Spark - [issue]
...After creating a new SQLContext, we need to populate SparkPlan.currentContext before we create any SparkPlan. Right now, only SQLContext.createSchemaRDD populates SparkPlan.currentContext. SQ...    Author: Yin Huai , 2014-12-02, 20:16
[SPARK-10621] Audit function names in FunctionRegistry and corresponding method names shown in functions.scala and - Spark - [issue]
...Right now, there are a few places that we are not very consistent. There are a few functions that are registered in FunctionRegistry, but not provided in functions.scala and Ex...    Author: Yin Huai , 2015-11-25, 21:20
[SPARK-10639] Need to convert UDAF's result from scala to sql type - Spark - [issue]
...We are missing a conversion at    Author: Yin Huai , 2015-09-22, 10:37
[SPARK-10671] Calling a UDF with insufficient number of input arguments should throw an analysis error - Spark - [issue]
...import org.apache.spark.sql.functions._; Seq((1,2)).toDF("a", "b").select(callUDF("percentile", $"a")) — this should throw an AnalysisException....    Author: Yin Huai , 2015-10-01, 20:24
[SPARK-10672] We should not fail to create a table If we cannot persist metadata of a data source table to metastore in a Hive compatible way - Spark - [issue]
...It is possible that Hive has some internal restrictions on what kinds of table metadata it accepts (e.g. Hive 0.13 does not support decimal stored in parquet). If that is the case, we sho...    Author: Yin Huai , 2015-09-22, 20:30
[SPARK-10887] Build HashedRelation outside of HashJoinNode - Spark - [issue]
...Right now, HashJoinNode builds a HashedRelation for the build side. We can take this process out, so we can use HashJoinNode for both broadcast joins and shuffled joins....    Author: Yin Huai , 2015-10-08, 18:57
[SPARK-10709] When loading a json dataset as a data frame, if the input path is wrong, the error message is very confusing - Spark - [issue]
...If you do something like "a wrong path"), then when we actually read data, the error message is No input paths specified in job at org.apache.hadoop.mapr...    Author: Yin Huai , 2015-10-24, 18:05