Results 1 to 10 of 16 (0.0s).
[SPARK-31100] Detect namespace existence when setting namespace - Spark - [issue]
...We should check whether the namespace exists when calling "use namespace", and throw NoSuchNamespaceException if the namespace does not exist....
http://issues.apache.org/jira/browse/SPARK-31100    Author: Jackey Lee , 2020-07-02, 14:52
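The check described in SPARK-31100 can be sketched in plain Python. This is an illustrative stand-in only: `use_namespace`, the dict-based catalog, and the exception class are hypothetical names, not Spark's actual catalog API.

```python
class NoSuchNamespaceException(Exception):
    """Hypothetical stand-in for Spark's NoSuchNamespaceException."""


def use_namespace(catalog, namespace):
    # Verify the namespace exists before switching to it,
    # instead of silently accepting a missing namespace.
    if namespace not in catalog:
        raise NoSuchNamespaceException(f"Namespace '{namespace}' not found")
    return catalog[namespace]
```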
[SPARK-31694] Add SupportsPartitions Catalog APIs on DataSourceV2 - Spark - [issue]
...There are no partition commands, such as AlterTableAddPartition, supported in DataSourceV2, though they are widely used in MySQL, Hive, and other systems. Thus it is necessary to define Partition Catal...
http://issues.apache.org/jira/browse/SPARK-31694    Author: Jackey Lee , 2020-05-23, 01:21
[SPARK-30514] add ENV_PYSPARK_MAJOR_PYTHON_VERSION support for JavaMainAppResource - Spark - [issue]
...In Apache Livy, the program is first started with JavaMainAppResource, which then starts the Python worker. At this time, the program needs to be able to pass Python environment variables. In sp...
http://issues.apache.org/jira/browse/SPARK-30514    Author: Jackey Lee , 2020-05-17, 18:25
[SPARK-30513] Question about spark on k8s - Spark - [issue]
...My question is, why do we write the domain name of Kube-DNS in the code? Isn't it better to read the domain name from the service, or just use the hostname? In our scenario, we run Spark on Kata-like...
http://issues.apache.org/jira/browse/SPARK-30513    Author: Jackey Lee , 2020-05-17, 18:24
[SPARK-29771] Limit executor max failures before failing the application - Spark - [issue]
...ExecutorPodsAllocator does not limit the number of executor errors or deletions, which may cause executors to restart continuously without the application failing. A simple example of this: add --co...
http://issues.apache.org/jira/browse/SPARK-29771    Author: Jackey Lee , 2020-05-17, 18:23
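The failure cap proposed in SPARK-29771 amounts to counting executor failures and failing the whole application once a limit is exceeded. A minimal sketch of that logic, with illustrative names (this is not Spark's actual ExecutorPodsAllocator API):

```python
class ExecutorFailureTracker:
    """Fail the application once executor failures exceed a cap,
    instead of letting executors restart indefinitely."""

    def __init__(self, max_failures):
        self.max_failures = max_failures
        self.failures = 0

    def record_failure(self):
        # Called whenever an executor errors out or is deleted.
        self.failures += 1
        if self.failures > self.max_failures:
            raise RuntimeError(
                f"Application failed: executor failures ({self.failures}) "
                f"exceeded limit ({self.max_failures})")
```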
[SPARK-31346] Add new configuration to make sure temporary directory cleaned - Spark - [issue]
...In InsertIntoHiveTable and InsertIntoHiveDirCommand, we use deleteExternalTmpPath to clean temporary directories after the job is committed, and cancel deleteOnExit if it succeeds. But sometimes (e.g....
http://issues.apache.org/jira/browse/SPARK-31346    Author: Jackey Lee , 2020-04-30, 16:51
[SPARK-31438] Support JobCleaned Status in SparkListener - Spark - [issue]
...In Spark, we need to run some hooks after a job is cleaned, such as cleaning Hive external temporary paths. This has already been discussed in SPARK-31346 and GitHub Pull Request #28129. The JobEnd St...
http://issues.apache.org/jira/browse/SPARK-31438    Author: Jackey Lee , 2020-04-30, 06:47
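The listener-hook idea in SPARK-31438 can be sketched as a registry of cleanup callbacks fired once a job is fully cleaned. This mirrors the proposal's shape only; the class and method names are hypothetical, not Spark's SparkListener API.

```python
class JobCleanedListener:
    """Run registered cleanup callbacks (e.g. deleting temporary paths)
    when a job is cleaned, not merely when it ends."""

    def __init__(self):
        self.hooks = []

    def register(self, hook):
        # hook: callable taking the cleaned job's id.
        self.hooks.append(hook)

    def on_job_cleaned(self, job_id):
        # Invoked after the scheduler has finished cleaning up the job.
        for hook in self.hooks:
            hook(job_id)
```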
[SPARK-30868] Throw Exception if runHive(sql) failed - Spark - [issue]
...At present, HiveClientImpl.runHive does not throw an exception when it runs incorrectly, which prevents it from reporting error information normally. Example: spark.sql("add jar file:///...
http://issues.apache.org/jira/browse/SPARK-30868    Author: Jackey Lee , 2020-04-28, 08:07
[SPARK-31142] Remove useless conf set in pyspark context - Spark - [issue]
...In pyspark/context.py, we extract the configuration with the "spark.executorEnv" prefix from conf and put it in env. But in PythonWorkerFactory, we have already obtained these environment vari...
http://issues.apache.org/jira/browse/SPARK-31142    Author: Jackey Lee , 2020-03-31, 06:07
[SPARK-31241] Support Hive on DataSourceV2 - Spark - [issue]
...There are 3 reasons why we need to support Hive on DataSourceV2: 1. Hive itself is one of Spark's data sources. 2. HiveTable is essentially a FileTable with its own input and output formats; it w...
http://issues.apache.org/jira/browse/SPARK-31241    Author: Jackey Lee , 2020-03-25, 04:02