Search criteria: author:"Xiao Li". Results 1 to 10 of 140 (0.0s).
Quotes within a table name (phoenix table) getting failure: identifier expected at Spark level parsing - Spark - [mail # dev]
...: > >> HI, Nico, >> >> We use back ticks to quote it. For example, >> >> CUSTOM_ENTITY.`z02` >> >> Thanks, >> >> Xiao Li >> >> 2016-10-10 12:49 GMT-07:00 Nico Pappagianis < >> nico.pappagianis...
...) > > at org.apache.phoenix.jdbc.PhoenixConnection.prepareStatement( > PhoenixConnection.java:714) > > > It appears that Phoenix and Spark's query parsers are in disagreement. > > Any ideas? > > > Thanks! > > On Mon, Oct 10, 2016 at 3:10 PM, Xiao Li  wrote...
...@salesforce.com>: >> >>> Hello, >>> >>> *Some context:* >>> I have a Phoenix tenant-specific view named CUSTOM_ENTITY."z02" (Phoenix >>> tables can have quotes to specify case-sensitivity). I am...
...) >>> >>> at org.apache.spark.sql.DataFrameWriter.insertInto(DataFrameWri >>> ter.scala:164) >>> >>> Looking at the stack trace it appears that Spark doesn't know what to do >>> with the quotes around z02. I've tried escaping them in every way I...
... could >>> think of but to no avail. >>> >>> Is there a way to have Spark not complain about the quotes and correctly >>> pass them along? >>> >>> Thanks >>> >> >> > ...
   Author: Xiao Li , 2016-10-11, 02:42
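The fix suggested in the thread above is to backtick-quote the case-sensitive part of the name, e.g. CUSTOM_ENTITY.`z02`. A minimal illustration of that quoting convention, using a hypothetical `quote_ident` helper (not a Spark or Phoenix API):

```python
def quote_ident(name: str) -> str:
    """Illustrative helper (not part of Spark): wrap an identifier in
    backticks, doubling any embedded backtick, which is Spark SQL's escape
    convention for delimited identifiers."""
    return "`" + name.replace("`", "``") + "`"

# A case-sensitive Phoenix view named CUSTOM_ENTITY."z02" would be referenced
# from Spark SQL as:
print("CUSTOM_ENTITY." + quote_ident("z02"))  # CUSTOM_ENTITY.`z02`
```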
Missing HiveConf when starting PySpark from head - Spark - [mail # dev]
...-06-14 9:18 GMT-07:00 Li Jin : > Are there objection to restore the behavior for PySpark users? I am happy > to submit a patch. > > On Thu, Jun 14, 2018 at 12:15 PM Reynold Xin  wrote...
...: > >> The behavior change is not good... >> >> On Thu, Jun 14, 2018 at 9:05 AM Li Jin  wrote: >> >>> Ah, looks like it's this change: >>> https://github.com/apache/spark/commit/b3417b731d4e323398a0d7ec6e8640...
... at 10:38 AM Li Jin  wrote: >>>> >>>>> Hey all, >>>>> >>>>> I just did a clean checkout of github.com/apache/spark but failed to >>>>> start PySpark, this is what I did: >>>>> >>>>> git clone git...
...@github.com:apache/spark.git; cd spark; build/sbt >>>>> package; bin/pyspark >>>>> >>>>> And got this exception: >>>>> >>>>> (spark-dev) Lis-MacBook-Pro:spark icexelloss$ bin/pyspark >>>>> >>>>> Python 3.6.3...
... something wrong in the build process? >>>>> >>>>> >>>>> Thanks much! >>>>> Li >>>>> >>>>> >>> ...
   Author: Xiao Li , 2018-06-14, 16:19
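The failure described above comes from a default build that leaves out the Hive classes. A hedged sketch of a build that includes them, assuming the standard Spark sbt profiles:

```shell
# Build Spark from a fresh checkout with the Hive profile enabled, so that
# PySpark can locate HiveConf at startup; a bare `build/sbt package` (as in
# the thread) omits the Hive classes.
git clone https://github.com/apache/spark.git
cd spark
build/sbt -Phive package
bin/pyspark
```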
Spark SQL: what does an exclamation mark mean in the plan? - Spark - [mail # dev]
... Xiao Li 2015-10-19 11:16 GMT-07:00 Michael Armbrust : > It means that there is an invalid attribute reference (i.e. a #n where the > attribute is missing from the child operator). > > On Sun...
..., Oct 18, 2015 at 11:38 PM, Xiao Li  wrote: > >> Hi, all, >> >> After turning on the trace, I saw a strange exclamation mark in >> the intermediate plans. This happened in catalyst analyzer...
...! >> >> Xiao Li >> > > ...
   Author: Xiao Li , 2015-10-19, 23:25
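Michael Armbrust's answer above is that `!` marks an operator holding an invalid attribute reference. A toy model of that rendering behavior (not Spark source code; `render_project` is a hypothetical stand-in for Catalyst's string formatting):

```python
# Toy model: Catalyst prefixes an operator's string form with "!" when the
# operator references an attribute (a #n) that is missing from its child's
# output -- an invalid attribute reference.
def render_project(exprs, child_output):
    missing = [e for e in exprs if e not in child_output]
    prefix = "!" if missing else ""
    return prefix + "Project [" + ", ".join(exprs) + "]"

print(render_project(["a#1", "b#2"], {"a#1", "b#2"}))  # Project [a#1, b#2]
print(render_project(["a#1", "c#9"], {"a#1", "b#2"}))  # !Project [a#1, c#9]
```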
Optimizer rule ConvertToLocalRelation causes expressions to be eager-evaluated in Planning phase - Spark - [mail # dev]
... help your case too. 2018-06-08 13:22 GMT-07:00 Li Jin : > Sorry I am confused now... My UDF gets executed for each row anyway > (because I am doing with column and want to execute the UDF...
..., 2018 at 9:51 PM Li Jin  wrote: >> >>> I see. Thanks for the clarification. It's not a a big issue but I am >>> surprised my UDF can be executed in planning phase. If my UDF is doing...
... it >>>> is still lazy. >>>> >>>> >>>> On Fri, Jun 8, 2018 at 12:35 PM Li Jin  wrote: >>>> >>>>> Hi All, >>>>> >>>>> Sorry for the long email title. I am a bit surprised to find that the >>>>> current...
... if this behavior is by design? >>>>> >>>>> Thanks! >>>>> Li >>>>> >>>>> >>>>> >>> > ...
   Author: Xiao Li , 2018-06-11, 04:04
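In Spark releases from 2.4 onward, the optimizer rule discussed here can be excluded per session. A sketch, assuming an existing SparkSession bound to `spark`:

```python
# Exclude ConvertToLocalRelation so that expressions (including UDFs) over a
# local relation are no longer collapsed -- and thus evaluated -- during the
# planning phase.
spark.conf.set(
    "spark.sql.optimizer.excludedRules",
    "org.apache.spark.sql.catalyst.optimizer.ConvertToLocalRelation",
)
```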
Happy Diwali everyone!!! - Spark - [mail # user]
...Happy Diwali everyone!!! Xiao Li ...
   Author: Xiao Li , 2018-11-07, 23:09
A question about creating persistent table when in-memory catalog is used - Spark - [mail # dev]
..., Xiao Li 2017-01-23 0:01 GMT-08:00 Shuai Lin : > Cool, thanks for the info. > > I think this is something we are going to change to completely decouple >> the Hive support and catalog...
... Li  wrote: > >> Agree. : ) >> >> 2017-01-22 11:20 GMT-08:00 Reynold Xin : >> >>> To be clear there are two separate "hive" we are talking about here. One >>> is the catalog, and the other...
..., Jan 22, 2017 at 11:18 AM Xiao Li  wrote: >>> >>>> We have a pending PR to block users to create the Hive serde table when >>>> using InMemoryCatalog. See: https://github.com/apache/spark/pull...
... the metadata is >>>> persistently stored or not. >>>> >>>> Thanks, >>>> >>>> Xiao Li >>>> >>>> 2017-01-22 11:14 GMT-08:00 Reynold Xin : >>>> >>>> I think this is something we are going to change...
   Author: Xiao Li , 2017-01-23, 08:13
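The two separate "hives" distinguished in the thread above (the catalog vs. serde support) come down to one switch at session startup. A sketch, assuming a Spark build that includes Hive:

```shell
# Start a shell with the Hive metastore catalog instead of the default
# in-memory catalog; Hive serde tables can only be persisted in the former.
bin/spark-shell --conf spark.sql.catalogImplementation=hive
```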
[VOTE] Apache Spark 2.1.0 (RC5) - Spark - [mail # dev]
...+1 Xiao Li 2016-12-16 12:19 GMT-08:00 Felix Cheung : ...
   Author: Xiao Li , 2016-12-16, 23:15
[vote] Apache Spark 2.0.0-preview release (rc1) - Spark - [mail # dev]
...Changed my vote to +1. Thanks! 2016-05-19 13:28 GMT-07:00 Xiao Li : ...
   Author: Xiao Li , 2016-05-20, 04:57
[spark-packages.org] Jenkins down - Spark - [mail # dev]
..., Jan 24, 2020 at 10:29 AM Xiao Li  wrote: > >> It does not block any Spark release. Reduced the priority to Critical. >> >> Cheers, >> >> Xiao >> >> Dongjoon Hyun wrote on Fri, Jan 24, 2020, 10:24 AM...
... 24, 2020 at 10:20 AM Xiao Li  wrote: >>> >>>> Hi, all, >>>> >>>> Because the Jenkins of spark-packages.org is down, new packages or >>>> releases are unable to be created in spark...
   Author: Xiao Li , 2020-02-05, 18:10
SQL language vs DataFrame API - Spark - [mail # dev]
...Hi, Michael, Does that mean SqlContext will be built on HiveQL in the near future? Thanks, Xiao Li 2015-12-09 10:36 GMT-08:00 Michael Armbrust : ...
   Author: Xiao Li , 2015-12-09, 19:02