Search results 1 to 10 of 202,325 (0.0 s).
RPC timeout error for AES based encryption between driver and executor - Spark - [mail # user]
...I don't think "spark.authenticate" works properly with k8s in 2.4 (which would make it impossible to enable encryption, since it requires authentication). I'm pretty sure I fixed it in master, ...
   Author: Marcelo Vanzin , Sinha, Breeta , ... , 2019-03-26, 15:40
[SPARK-27285] Support describing output of a CTE - Spark - [issue]
...SPARK-26982 allows users to describe the output of a query. However, it had a limitation of not supporting CTEs, due to the grammar having a single rule to parse both select and ins...
http://issues.apache.org/jira/browse/SPARK-27285    Author: Dilip Biswal , 2019-03-26, 15:38
[SPARK-27224] Spark to_json parses UTC timestamp incorrectly - Spark - [issue]
...When parsing an ISO-8601 timestamp, if there is a UTC suffix symbol and more than 3 digits in the fractional part, from_json will give an incorrect result. scala> val schema = new StructType().add...
http://issues.apache.org/jira/browse/SPARK-27224    Author: Jeff Xu , 2019-03-26, 14:45
[SPARK-17914] Spark SQL casting to TimestampType with nanosecond results in incorrect timestamp - Spark - [issue]
...In some cases, when timestamps contain nanoseconds, they will be parsed incorrectly. Examples: "2016-05-14T15:12:14.0034567Z" -> "2016-05-14 15:12:14.034567"; "2016-05-14T15:12:14.000345678Z"...
http://issues.apache.org/jira/browse/SPARK-17914    Author: Oksana Romankova , 2019-03-26, 14:35
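This issue and SPARK-27224 above look like the same parsing mistake: the run of fractional digits is taken verbatim (so "0034567" becomes 34567 µs = .034567 s) instead of being scaled by its digit count. A minimal plain-Python sketch of the correct scaling — an illustration of the bug class, not Spark's actual parser:

```python
from datetime import datetime, timezone

def parse_utc_iso8601(ts: str) -> datetime:
    # Illustration only: handle a 'Z' suffix and any number of
    # fractional digits by scaling the fraction to microseconds,
    # truncating anything past microsecond precision.
    assert ts.endswith("Z")
    body = ts[:-1]
    if "." in body:
        base, frac = body.split(".")
        # Pad short fractions, cut long ones: ".5" -> 500000 µs,
        # ".0034567" -> 3456 µs (NOT 34567 µs as the buggy parse gives).
        micros = int(frac.ljust(6, "0")[:6])
    else:
        base, micros = body, 0
    dt = datetime.strptime(base, "%Y-%m-%dT%H:%M:%S")
    return dt.replace(microsecond=micros, tzinfo=timezone.utc)

print(parse_utc_iso8601("2016-05-14T15:12:14.0034567Z").isoformat())
# -> 2016-05-14T15:12:14.003456+00:00
```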
[SPARK-27283] BigDecimal arithmetic losing precision - Spark - [issue]
...When performing arithmetic between doubles and decimals, the resulting value is always a double. This is very strange to me; when an exact type is present as one of the inputs, I would expe...
http://issues.apache.org/jira/browse/SPARK-27283    Author: Mats , 2019-03-26, 14:27
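The complaint is easy to reproduce outside Spark. In plain Python, for instance, the exact decimal type keeps the result exact and refuses to mix implicitly with doubles — the behavior the reporter expects, shown here purely as an illustration of the precision argument, not of Spark's coercion rules:

```python
from decimal import Decimal

# Binary doubles cannot represent 0.1 or 0.2 exactly, so their sum
# drifts away from 0.3; an exact decimal type preserves it.
print(0.1 + 0.2 == 0.3)                                   # False
print(Decimal("0.1") + Decimal("0.2") == Decimal("0.3"))  # True

# Python rejects mixed decimal/float arithmetic outright rather than
# silently degrading the exact operand to a double:
try:
    Decimal("0.1") + 0.2
except TypeError as err:
    print("mixed arithmetic rejected:", err)
```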
Spark Profiler - Spark - [mail # user]
...I have found Ganglia very helpful in understanding network I/O, CPU and memory usage for a given Spark cluster. I have not used it, but have heard good things about Dr Elephant (which It...
   Author: manish ranjan , Jack Kolokasis , ... , 2019-03-26, 14:24
[DISCUSS] Spark Columnar Processing - Spark - [mail # dev]
...Cloudera reports a 26% improvement in Hive query runtimes by enabling vectorization. I would expect to see similar improvements, but at the cost of keeping more data in memory. But rememb...
   Author: Bobby Evans , Wenchen Fan , ... , 2019-03-26, 13:57
[SPARK-27248] REFRESH TABLE should recreate cache with same cache name and storage level - Spark - [issue]
...If we refresh a cached table, the table cache will first be uncached and then recached (lazily). Currently, the logic is embedded in the CatalogImpl.refreshTable method. The current implementation...
http://issues.apache.org/jira/browse/SPARK-27248    Author: William Wong , 2019-03-26, 13:35
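The requested behavior — drop the cache, then recreate it lazily while carrying over the user-visible cache name and storage level — can be sketched in a few lines. This is a hypothetical Python model of the idea only; the names (`CacheEntry`, `refresh`) are illustrative and not Spark's `CatalogImpl` internals:

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class CacheEntry:
    # Hypothetical model of a cached table.
    table: str
    cache_name: str
    storage_level: str
    materialized: bool = True

def refresh(cache: dict, table: str) -> None:
    """Uncache, then recache lazily, preserving name and storage level."""
    old = cache.pop(table, None)  # uncache
    if old is not None:
        # Keep the same cache name and storage level; the new entry is
        # lazy (materialized on next access), as the issue proposes.
        cache[table] = replace(old, materialized=False)

cache = {"t1": CacheEntry("t1", "my_cache", "MEMORY_ONLY")}
refresh(cache, "t1")
print(cache["t1"])
```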
[SPARK-27284] Spark Standalone aggregated logs in 1 file per appid (ala  yarn -logs -applicationId) - Spark - [issue]
...Feature: Spark Standalone aggregated logs in one file per appid (à la yarn logs -applicationId). This would be a single file per appid with the contents of ALL the executors' logs. https://stackoverf...
http://issues.apache.org/jira/browse/SPARK-27284    Author: t oo , 2019-03-26, 13:26
[SPARK-27277] Recover from setting fix version failure in merge script - Spark - [issue]
...I happened to meet this case a few times before:
Enter comma-separated fix version(s) [3.0.0]: 3.0,0
Restoring head pointer to master
git checkout master
Already on 'master'
git branch
Traceback (m...
http://issues.apache.org/jira/browse/SPARK-27277    Author: Hyukjin Kwon , 2019-03-26, 12:14
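The crash class here — a typo like "3.0,0" sailing past the prompt and blowing up later with a traceback — is avoidable by validating the input and re-prompting. A hedged Python sketch of that validation step (illustrative only, not the actual code in Spark's merge script):

```python
import re

VERSION_RE = re.compile(r"^\d+\.\d+\.\d+$")

def read_fix_versions(line: str, default: str = "3.0.0"):
    """Parse one line of comma-separated fix versions. Returns the list
    of versions, or None if any entry is malformed, so the caller can
    re-prompt instead of letting the typo crash the script."""
    raw = line.strip() or default  # empty input accepts the default
    versions = [v.strip() for v in raw.split(",")]
    if all(VERSION_RE.match(v) for v in versions):
        return versions
    return None

print(read_fix_versions("3.0,0"))   # None: the typo from the report
print(read_fix_versions("3.0.0"))   # ['3.0.0']
```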