Results 1 to 10 of 57 (0.0s).
[SPARK-30849] Application failed due to failed to get MapStatuses broadcast - Spark - [issue]
...Currently, we encountered an issue in Spark 2.1. The exception is as follows: Job aborted due to stage failure: Task 18 in stage 2.0 failed 4 times, most recent failure: Lost task 18.3 in sta...    Author: liupengcheng , 2020-02-24, 14:13
[FLINK-14038] ExecutionGraph deploy failed due to akka timeout - Flink - [issue]
...When launching the Flink application, the following error was reported. I downloaded the operator logs, but still have no clue; the operator logs provided no useful information and was cance...    Author: liupengcheng , 2020-02-21, 16:38
[SPARK-30346] Improve logging when events dropped - Spark - [issue]
...Currently, Spark logs a dropped-events count every 60s when events are dropped; however, we noticed that this is not working as expected in our production environment. We looked into the c...    Author: liupengcheng , 2020-02-19, 03:11
[FLINK-15906] physical memory exceeded causing being killed by yarn - Flink - [issue]
...Recently, we encountered this issue when testing a TPC-DS query with 100g of data. I first met this issue when I only set the `` to `4g` with the `-tm` option. The...    Author: liupengcheng , 2020-02-19, 02:39
[KUDU-3054] Init kudu.write_duration accumulator lazily - Kudu - [issue]
...Currently, we encountered an issue in kudu-spark that causes Spark SQL query failure:```Job aborted due to stage failure: Total size of serialized results of 942 tasks (2.0 GB) is bigge...    Author: liupengcheng , 2020-02-17, 02:25
[FLINK-15702] Make sqlClient classloader aligned with other components - Flink - [issue]
...Currently, the Flink sqlClient still uses a hardcoded `parentFirst` classloader to load user-specified jars and libraries, which easily causes class conflicts. In FLINK-13749 , we already make...    Author: liupengcheng , 2020-02-16, 15:21
[SPARK-30712] Estimate sizeInBytes from file metadata for parquet files - Spark - [issue]
...Currently, Spark will use a compressionFactor when calculating `sizeInBytes` for `HadoopFsRelation`, but this is not accurate and it's hard to choose the best `compressionFactor`. Sometimes,...    Author: liupengcheng , 2020-02-07, 04:27
[SPARK-30394] Skip collecting stats in DetermineTableStats rule when hive table is convertible to datasource tables - Spark - [issue]
...Currently, if `spark.sql.statistics.fallBackToHdfs` is enabled, then Spark will scan HDFS files to collect table stats in the `DetermineTableStats` rule. But this can be expensive and not accura...    Author: liupengcheng , 2020-02-07, 02:52
[SPARK-30713] Respect mapOutputSize in memory in adaptive execution - Spark - [issue]
...Currently, Spark adaptive execution uses the MapOutputStatistics information to adjust the plan dynamically, but this MapOutputSize does not respect the compression factor. So there are cases...    Author: liupengcheng , 2020-02-04, 20:03
[FLINK-15848] Support both fixed allocator and dynamic allocator in flink - Flink - [issue]
...Currently, we removed the static allocator and only support dynamic allocation in Flink 1.10; however, this allocator still has some drawbacks: it cannot allocate resources in a range, which means ...    Author: liupengcheng , 2020-02-03, 02:28