Search criteria: author:"Reynold Xin". Results 1 to 10 of 1250 (0.0s).
[SPARK-16026] Cost-based Optimizer Framework - Spark - [issue]
...This is an umbrella ticket to implement a cost-based optimizer framework beyond broadcast join selection. This framework can be used to implement some useful optimizations such as join reord...    Author: Reynold Xin, 2018-04-01, 09:17
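The core idea behind a cost-based optimizer is to compare candidate plans by an estimated cost derived from table statistics. A minimal sketch, assuming hypothetical per-table row counts and a uniform join selectivity (both numbers invented for illustration, not Spark's actual cost model):

```python
from itertools import permutations

# Hypothetical statistics: per-table row counts and a uniform join selectivity.
row_counts = {"a": 1_000_000, "b": 10_000, "c": 100}
SELECTIVITY = 0.001

def plan_cost(order):
    """Estimated total size of intermediate results for a left-deep join order."""
    rows = row_counts[order[0]]
    cost = 0.0
    for table in order[1:]:
        rows = rows * row_counts[table] * SELECTIVITY
        cost += rows
    return cost

# Exhaustive search over join orders; real CBO frameworks use dynamic
# programming plus heuristics, but the cost-comparison idea is the same.
best = min(permutations(row_counts), key=plan_cost)
```

Starting from the small tables keeps intermediate results small, which is exactly the kind of join reordering a cost framework enables.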
[SPARK-12850] Support bucket pruning (predicate pushdown for bucketed tables) - Spark - [issue]
...We now support bucketing. One optimization opportunity is to push some predicates into the scan to skip scanning files that definitely won't match the values....    Author: Reynold Xin, 2018-07-14, 04:10
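The pruning idea can be shown with a toy model: if a table is bucketed on a key into N buckets, an equality predicate on that key can only match one bucket, so the other bucket files need not be scanned. (Python's built-in `hash` stands in for Spark's real bucketing hash here.)

```python
# Toy model of bucket pruning for an equality predicate on the bucketing key.
NUM_BUCKETS = 8

def bucket_for(value, num_buckets=NUM_BUCKETS):
    # Stand-in for Spark's bucketing hash (Murmur3 in the real implementation).
    return hash(value) % num_buckets

def buckets_to_scan(predicate_values, num_buckets=NUM_BUCKETS):
    """Only the buckets that can contain the requested key values."""
    return {bucket_for(v, num_buckets) for v in predicate_values}
```

For a single-value predicate this scans 1 of 8 buckets; the win grows with the bucket count.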
[SPARK-12436] If all values of a JSON field are null, JSON's inferSchema should return NullType instead of StringType - Spark - [issue]
...Right now, JSON's inferSchema will return StringType for a field that always has null values or an ArrayType(StringType) for a field that always has empty array values. Although this b...    Author: Reynold Xin, 2018-06-19, 19:39
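The requested inference rule can be sketched in plain Python (a toy version, not Spark's actual inference code): a field whose observed values are all null should infer as a null type rather than falling back to string.

```python
import json

# Toy schema inference over one JSON field's observed values.
def infer_field_type(values):
    non_null = [v for v in values if v is not None]
    if not non_null:
        return "null"     # all values were null -> analogous to NullType
    if all(isinstance(v, str) for v in non_null):
        return "string"   # analogous to StringType
    return "unknown"

records = [json.loads(line) for line in ['{"a": null}', '{"a": null}']]
field_type = infer_field_type([r["a"] for r in records])
```

As soon as a non-null string value appears, the field widens back to string.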
[SPARK-6236] Support caching blocks larger than 2G - Spark - [issue]
...Due to the use of java.nio.ByteBuffer, BlockManager does not support blocks larger than 2G....    Author: Reynold Xin, 2018-08-21, 18:53
[SPARK-22779] ConfigEntry's default value should actually be a value - Spark - [issue]
...ConfigEntry's default value right now shows a human-readable message. In some places in SQL we actually rely on the default being a real value....    Author: Reynold Xin, 2018-08-21, 18:59
[SPARK-16281] Implement parse_url SQL function - Spark - [issue]    Author: Reynold Xin, 2018-08-27, 20:09
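A rough Python analogue of what a `parse_url(url, partToExtract)` SQL function extracts, using the standard library's `urllib.parse` (Spark's actual implementation follows Java URL semantics, so edge cases can differ):

```python
from urllib.parse import urlparse

# Sketch of parse_url: pull one named component out of a URL string.
def parse_url(url, part):
    p = urlparse(url)
    return {"PROTOCOL": p.scheme, "HOST": p.hostname, "PATH": p.path,
            "QUERY": p.query, "REF": p.fragment}.get(part)
```

For example, `parse_url("https://spark.apache.org/docs?x=1#top", "HOST")` yields `"spark.apache.org"`.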
[SPARK-6237] Support uploading blocks > 2GB as a stream - Spark - [issue]    Author: Reynold Xin, 2018-09-20, 17:39
[SPARK-15693] Write schema definition out for file-based data sources to avoid schema inference - Spark - [issue]
...Spark supports reading a variety of data formats, many of which don't have self-describing schemas. For these file formats, Spark can often infer the schema by going through all the data. Howe...    Author: Reynold Xin, 2018-09-21, 05:48
[SPARK-19480] Higher order functions in SQL - Spark - [issue]
...To enable users to manipulate nested data types, which is common in ETL jobs with deeply nested JSON fields. Operations should include map, filter, and reduce on arrays/maps....    Author: Reynold Xin, 2018-09-14, 20:27
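The operations the ticket describes can be illustrated with plain-Python analogues over a nested record (the field names below are invented for illustration; in Spark SQL the corresponding built-ins became `transform`, `filter`, and `aggregate`):

```python
from functools import reduce

# A row with a nested array field, as might come from deeply nested JSON.
row = {"orders": [{"amount": 10}, {"amount": 25}, {"amount": 5}]}

amounts = [o["amount"] for o in row["orders"]]       # map / transform
large = [a for a in amounts if a >= 10]              # filter
total = reduce(lambda acc, a: acc + a, amounts, 0)   # reduce / aggregate
```

The point of doing this in SQL is to avoid exploding and re-grouping the array just to apply a per-element operation.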
[SPARK-19489] Stable serialization format for external & native code integration - Spark - [issue]
...As a Spark user, I want access to a (semi) stable serialization format that is high performance so I can integrate Spark with my application written in native code (C, C++, Rust, etc)....    Author: Reynold Xin, 2018-09-12, 00:31