Search criteria: author:"Wenchen Fan". Results 11 to 20 of 649 (0.0s).
[SPARK-21335] support un-aliased subquery - Spark - [issue]
...un-aliased subquery has been supported by Spark SQL for a long time. Its semantics were never well defined, it has confusing behaviors, and it's not standard SQL syntax, so we disallowed it in https...    Author: Wenchen Fan , 2018-06-27, 03:41
[SPARK-24267] explicitly keep DataSourceReader in DataSourceV2Relation - Spark - [issue]    Author: Wenchen Fan , 2018-06-15, 04:49
[SPARK-22387] propagate session configs to data source read/write options - Spark - [issue]
...This is an open discussion. The general idea is that we should allow users to set some common configs in the session conf so that they don't need to type them again and again for each data source ope...    Author: Wenchen Fan , 2018-09-06, 23:24
[SPARK-25445] publish a scala 2.12 build with Spark 2.4 - Spark - [issue]    Author: Wenchen Fan , 2018-09-18, 14:30
[SPARK-12321] JSON format for logical/physical execution plans - Spark - [issue]    Author: Wenchen Fan , 2018-09-06, 06:13
[SPARK-25317] MemoryBlock performance regression - Spark - [issue]
...There is a performance regression when calculating the hash code for UTF8String:  test("hashing") {    import org.apache.spark.unsafe.hash.Murmur3_x86_32    import org....    Author: Wenchen Fan , 2018-09-07, 19:15
[SPARK-23880] table cache should be lazy and don't trigger any job - Spark - [issue]
...val df = spark.range(10000000000L)  .filter('id > 1000)  .orderBy('id.desc)  .cache()  This triggers a job while the cache should be lazy. The problem is that, when creating ...    Author: Wenchen Fan , 2018-09-09, 04:03
[SPARK-21320] Make sure all expressions support interpreted evaluation - Spark - [issue]
...Most of the time, Spark SQL evaluates expressions with codegen. However, codegen is a complex technology; we have already fixed a lot of bugs in it, and there will be more. To make Spark ...    Author: Wenchen Fan , 2018-09-10, 14:34
[SPARK-25192] Remove SupportsPushdownCatalystFilter - Spark - [issue]
...This interface was created for the file source (not migrated yet) to implement Hive partition pruning. It turns out that we can also use the public Filter API for Hive partition pruning. Thus...    Author: Wenchen Fan , 2018-08-23, 03:21
[SPARK-25188] Add WriteConfig - Spark - [issue]
...The current write API is not flexible enough to implement more complex write operations like `replaceWhere`. We can follow the read API and add a `WriteConfig` to make it more flexible....    Author: Wenchen Fan , 2018-08-22, 21:27