Search results 1 to 10 of 69 (0.0s).
java.lang.NegativeArraySizeException occurred during compaction - CarbonData - [mail # dev]
...Hi: It seems that MemoryBlock is cleaned by some other thread. I will investigate this; you can continue by setting the parameter below in carbon.properties: enable.unsafe.in.query.processing=f...
   Author: BabuLal , 2018-10-16, 05:21
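The workaround quoted in this reply boils down to a single carbon.properties entry; a minimal sketch, assuming the truncated value is meant to be false (i.e. disabling unsafe memory usage during query processing):

    # carbon.properties -- workaround sketch; the value "false" is an assumption,
    # since the snippet above is truncated at "=f..."
    enable.unsafe.in.query.processing=false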
[CARBONDATA-2991] NegativeArraySizeException during query execution - CarbonData - [issue]
...During query execution, a NegativeArraySizeException is sometimes thrown in some tasks, and sometimes an executor is lost (JVM crash). java.lang.NegativeArraySizeException at org.apache.carbondata.c...
http://issues.apache.org/jira/browse/CARBONDATA-2991    Author: Babulal , 2018-10-04, 15:42
[CARBONDATA-2986] Table Properties are lost when multiple drivers concurrently create tables - CarbonData - [issue]
...Create 2 sets of create table statements (each with 100 commands), run set 1 with JDBCServer (beeline) and run set 2 using spark-submit. Some tables' properties, like block_size and sort_columns, are lost...
http://issues.apache.org/jira/browse/CARBONDATA-2986    Author: Babulal , 2018-10-04, 12:43
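A minimal sketch of the kind of table definition whose properties go missing, with a hypothetical table name; block_size and sort columns map to the CarbonData table properties TABLE_BLOCKSIZE and SORT_COLUMNS (a SparkSession with CarbonData support is assumed to be in scope as spark):

    // Hypothetical table from one of the two concurrently running command sets;
    // after the runs, DESCRIBE FORMATTED can be used to check whether the properties survived.
    spark.sql(
      """CREATE TABLE IF NOT EXISTS t_concurrent_1 (id INT, name STRING)
        |STORED BY 'carbondata'
        |TBLPROPERTIES ('SORT_COLUMNS'='name', 'TABLE_BLOCKSIZE'='256')""".stripMargin)
    spark.sql("DESCRIBE FORMATTED t_concurrent_1").show(false)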
[CARBONDATA-1225] Create Table Failed for partition table having date and timestamp when format is not specified - CarbonData - [issue]
...Create Table fails for a partition table having a date or timestamp column when the format is not specified in carbon.properties (as it is not mandatory, the default should be used). create table if not e...
http://issues.apache.org/jira/browse/CARBONDATA-1225    Author: Babulal , 2018-09-26, 23:01
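The formats in question are read from carbon.properties; a sketch of the relevant entries, assuming the carbon.timestamp.format and carbon.date.format property names and defaults as given in the CarbonData configuration docs:

    # carbon.properties -- format entries (names and defaults assumed from the configuration docs)
    carbon.timestamp.format=yyyy-MM-dd HH:mm:ss
    carbon.date.format=yyyy-MM-dd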
[CARBONDATA-2744] Streaming lock is not released even when batch processing is not happening - CarbonData - [issue]
...If a streaming application is running, DDLs like finish streaming and close streaming are blocked. Ideally, DDLs like finish streaming and close streaming should only be blocked if batch processing is ru...
http://issues.apache.org/jira/browse/CARBONDATA-2744    Author: Babulal , 2018-09-26, 22:00
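The two DDLs referred to correspond to CarbonData's streaming ALTER TABLE commands; a sketch with a hypothetical table name, assuming the syntax from the CarbonData streaming guide:

    // "finish streaming": release the streaming lock and end streaming ingest on the table
    spark.sql("ALTER TABLE stream_table FINISH STREAMING")
    // "close streaming": compact the streaming segments and convert the table back to a normal table
    spark.sql("ALTER TABLE stream_table COMPACT 'close_streaming'")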
[CARBONDATA-2925] Wrong data displayed for spark file format if carbon file has multiple blocklets - CarbonData - [issue]
...// LoadData def loadData(spark: SparkSession): Unit = { spark.experimental.extraOptimizations = Seq(new CarbonFileIndexReplaceRule()); val fields = new Array[Field](8); fields...
http://issues.apache.org/jira/browse/CARBONDATA-2925    Author: Babulal , 2018-09-14, 14:05
[CARBONDATA-2885] Broadcast Issue and Small file distribution Issue - CarbonData - [issue]
...Carbon relation size is calculated wrongly (always 0) for an external table. Root cause: the tablestatus file is not present for an external table...
http://issues.apache.org/jira/browse/CARBONDATA-2885    Author: Babulal , 2018-08-27, 07:29
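A sketch of the external table scenario, with a placeholder location; the CREATE EXTERNAL TABLE syntax mirrors the snippet in CARBONDATA-2870 below, and because no tablestatus file exists at that location, the computed relation size stays 0:

    // External table over pre-written carbon files; the path is a placeholder.
    spark.sql("CREATE EXTERNAL TABLE orders_ext STORED BY 'carbondata' LOCATION '/path/to/carbon/orders'")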
[CARBONDATA-2870] explain command failed for count(*) query - CarbonData - [issue]
...spark.sql("drop table orders") spark.sql("create external table orders stored by 'carbondata' location 'D:/tpch_data/orders' ")spark.sql("explain select count from orders ").show(false)...
http://issues.apache.org/jira/browse/CARBONDATA-2870    Author: Babulal , 2018-08-20, 14:16
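Laid out as a runnable sequence (statements taken from the snippet above; the asterisk in count(*) is restored from the issue title, and a SparkSession named spark is assumed):

    spark.sql("drop table if exists orders")
    spark.sql("create external table orders stored by 'carbondata' location 'D:/tpch_data/orders'")
    spark.sql("explain select count(*) from orders").show(false)   // this EXPLAIN is what fails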
[CARBONDATA-2530] [MV] Wrong data displayed when parent table data are loaded - CarbonData - [issue]
...Spark release: Spark 2.2.1. Create a table and load data into it; create an MV; rebuild the datamap; run the query (used during MV creation), which hits the MV and returns data. Now load data into the main table again and run the query ...
http://issues.apache.org/jira/browse/CARBONDATA-2530    Author: Babulal , 2018-08-13, 06:45
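A sketch of that reproduction flow, with hypothetical table, datamap, and file names, assuming the MV datamap DDL (CREATE DATAMAP ... USING 'mv' and REBUILD DATAMAP):

    spark.sql("CREATE TABLE sales (city STRING, amount INT) STORED BY 'carbondata'")
    spark.sql("LOAD DATA INPATH '/tmp/sales_1.csv' INTO TABLE sales")             // initial load
    spark.sql("CREATE DATAMAP sales_mv USING 'mv' AS SELECT city, SUM(amount) FROM sales GROUP BY city")
    spark.sql("REBUILD DATAMAP sales_mv")                                         // build the MV
    spark.sql("SELECT city, SUM(amount) FROM sales GROUP BY city").show()         // served by the MV
    spark.sql("LOAD DATA INPATH '/tmp/sales_2.csv' INTO TABLE sales")             // load the parent table again
    spark.sql("SELECT city, SUM(amount) FROM sales GROUP BY city").show()         // wrong data reported here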
[CARBONDATA-2843] S3 Load is not working when load is initiated on startup of JDBCServer. - CarbonData - [issue]
...Configure AK/SK and endpoint in spark-defaults.conf, create a table in S3, and then restart the thrift server. Once JDBCServer is up, start data loading:  0: jdbc:hive2://ha-cluster/default> load data...
http://issues.apache.org/jira/browse/CARBONDATA-2843    Author: Babulal , 2018-08-08, 17:09
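A sketch of the configuration step described above, assuming the standard Hadoop S3A properties are used for the AK/SK and endpoint; the values are placeholders:

    # spark-defaults.conf -- S3A credentials and endpoint (placeholder values)
    spark.hadoop.fs.s3a.access.key=<access-key>
    spark.hadoop.fs.s3a.secret.key=<secret-key>
    spark.hadoop.fs.s3a.endpoint=<object-store-endpoint>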