Search results 1 to 10 of 176 (0.0s).
[SPARK-25525] Do not update conf for existing SparkContext in SparkSession.getOrCreate. - Spark - [issue]
...In SPARK-20946, we modified SparkSession.getOrCreate to not update the conf of an existing SparkContext, because the SparkContext is shared by all sessions. We should not update it on the PySpark side either, as we...
http://issues.apache.org/jira/browse/SPARK-25525    Author: Takuya Ueshin , 2018-10-10, 06:45
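The shared-context behavior this issue describes can be sketched in plain Python. This is a minimal sketch of the semantics only, not Spark code: `SharedContext`, `get_or_create`, and `_active_context` are hypothetical stand-ins for `SparkContext`, `SparkSession.getOrCreate`, and Spark's active-context singleton.

```python
# Hypothetical sketch of the getOrCreate semantics described above.
# SharedContext / get_or_create are stand-ins, not Spark APIs.

_active_context = None  # module-level singleton, shared by all "sessions"

class SharedContext:
    def __init__(self, conf):
        self.conf = dict(conf)

def get_or_create(conf):
    """Return the existing context if one is active; otherwise create one.

    Crucially, an existing context's conf is NOT updated, because the
    context is shared by every session that obtained it earlier.
    """
    global _active_context
    if _active_context is None:
        _active_context = SharedContext(conf)
    # deliberately no mutation of _active_context.conf here
    return _active_context

first = get_or_create({"spark.app.name": "a"})
second = get_or_create({"spark.app.name": "b"})
assert second is first
assert first.conf["spark.app.name"] == "a"  # conf of "b" was ignored
```

The fix in the issue is exactly the commented-out step: PySpark previously mutated the shared context's conf at that point, while Scala did not.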
welcome a new batch of committers - Spark - [mail # dev]
...Congratulations! On Wed, Oct 3, 2018 at 6:35 PM Marco Gaido wrote: > Congrats you all! >> On Wed, Oct 3, 2018 at 11:29 Liang-Chi Hsieh wrote: >>>>...
   Author: Takuya UESHIN , 2018-10-03, 09:41
[SPARK-25540] Make HiveContext in PySpark behave as the same as Scala. - Spark - [issue]
...In Scala, HiveContext sets the config spark.sql.catalogImplementation on the given SparkContext and then passes it to SparkSession.builder. The HiveContext in PySpark should behave the same as ...
http://issues.apache.org/jira/browse/SPARK-25540    Author: Takuya Ueshin , 2018-10-02, 15:51
[SPARK-25431] Fix function examples and unify the format of the example results. - Spark - [issue]
...There are some mistakes in the examples of newly added functions. Also, the format of the example results is not unified. We should fix and unify them....
http://issues.apache.org/jira/browse/SPARK-25431    Author: Takuya Ueshin , 2018-09-28, 15:23
[SPARK-25418] The metadata of DataSource table should not include Hive-generated storage properties. - Spark - [issue]
...When Hive support is enabled, the Hive catalog puts extra storage properties into the table metadata even for DataSource tables, but we should not have them....
http://issues.apache.org/jira/browse/SPARK-25418    Author: Takuya Ueshin , 2018-09-14, 05:23
[SPARK-25208] Loosen Cast.forceNullable for DecimalType. - Spark - [issue]
...Casting to DecimalType does not always need to force nullability. If the target decimal type is wider than the original type, or the cast only truncates or loses precision, the cast value won't be null....
http://issues.apache.org/jira/browse/SPARK-25208    Author: Takuya Ueshin , 2018-09-06, 11:45
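The rule described above can be sketched as a small predicate. This is an assumption-laden simplification for the decimal-to-decimal case only; Spark's actual Cast.forceNullable covers many more type pairs.

```python
# Hypothetical sketch of when a decimal-to-decimal cast must force nullability.
# A value only becomes null on overflow, i.e. when the target's integral part
# (precision - scale) is narrower than the source's; mere precision loss
# (a smaller scale) truncates the value instead of nulling it.

def force_nullable(from_precision, from_scale, to_precision, to_scale):
    return (to_precision - to_scale) < (from_precision - from_scale)

# Widening cast: decimal(5, 2) -> decimal(10, 4) can never overflow.
assert not force_nullable(5, 2, 10, 4)
# Precision loss only: decimal(5, 2) -> decimal(4, 1) truncates, no null.
assert not force_nullable(5, 2, 4, 1)
# Narrower integral part: decimal(5, 2) -> decimal(3, 2) may overflow.
assert force_nullable(5, 2, 3, 2)
```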
[SPARK-25223] Use a map to store values for NamedLambdaVariable. - Spark - [issue]
...Currently we use functionsForEval, in which NamedLambdaVariables are replaced with those of the arguments from the original functions, to make sure the lambda variables refer to the same instances as of a...
http://issues.apache.org/jira/browse/SPARK-25223    Author: Takuya Ueshin , 2018-08-25, 00:38
[SPARK-25141] Modify tests for higher-order functions to check bind method. - Spark - [issue]
...We should also check that the HigherOrderFunction.bind method passes the expected parameters....
http://issues.apache.org/jira/browse/SPARK-25141    Author: Takuya Ueshin , 2018-08-19, 00:19
[SPARK-25142] Add error messages when Python worker could not open socket in `_load_from_socket`. - Spark - [issue]
...Sometimes the Python worker can't open a socket in _load_from_socket for some reason, but it's difficult to figure out why because the exception doesn't even contain the message from {{soc...
http://issues.apache.org/jira/browse/SPARK-25142    Author: Takuya Ueshin , 2018-08-18, 09:24
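The fix this issue asks for amounts to attaching context when the connect fails. A plain-Python sketch of that idea, where load_from_socket is a hypothetical stand-in for PySpark's _load_from_socket rather than its real implementation:

```python
import socket

def load_from_socket(port, timeout=3.0):
    """Connect to a local daemon port, re-raising connect errors with context.

    Hypothetical stand-in for PySpark's _load_from_socket: the point is that
    the re-raised error carries the address and the underlying OS message,
    instead of failing opaquely.
    """
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.settimeout(timeout)
    try:
        sock.connect(("127.0.0.1", port))
    except OSError as e:
        sock.close()
        raise RuntimeError(
            f"could not open socket to 127.0.0.1:{port}: {e}") from e
    return sock
```

Calling this against a port with no listener now raises an error that names the address and the OS-level cause, which is the kind of diagnosable message the issue asks for.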
[SPARK-25068] High-order function: exists(array<T>, function<T, boolean>) → boolean - Spark - [issue]
...Tests whether the array contains an element for which the function returns true....
http://issues.apache.org/jira/browse/SPARK-25068    Author: Takuya Ueshin , 2018-08-14, 08:02
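The semantics of the signature above can be sketched in plain Python. This is a sketch of the SQL function's basic behavior only (ignoring SQL NULL three-valued logic), not Spark's implementation:

```python
def exists(arr, fn):
    """Return True if any element of arr satisfies fn, mirroring the SQL
    higher-order function exists(array<T>, function<T, boolean>) -> boolean.
    Simplified: SQL NULL handling is not modeled here."""
    return any(fn(x) for x in arr)

assert exists([1, 2, 3], lambda x: x % 2 == 0) is True   # 2 is even
assert exists([1, 3, 5], lambda x: x % 2 == 0) is False  # no even element
```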