Results 11 to 20 of 558 (0.0s).
[SPARK-15191] createDataFrame() should mark fields that are known not to be null as not nullable - Spark - [issue]
...Here's a brief reproduction: >>> numbers = sqlContext.createDataFrame(...     data=[(1,), (2,), (3,), (4,), (5,)],...     samplingRatio=1  # go through all t...
http://issues.apache.org/jira/browse/SPARK-15191    Author: Nicholas Chammas , 2019-05-21, 04:33
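A minimal sketch of the behavior this issue describes, using the newer SparkSession entry point rather than the sqlContext shown in the excerpt; all names are illustrative:

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.getOrCreate()

    # Every value here is known to be non-null, yet the inferred schema
    # still reports the field as nullable = true.
    numbers = spark.createDataFrame(
        data=[(1,), (2,), (3,), (4,), (5,)],
        samplingRatio=1,  # scan all rows when inferring the schema
    )
    numbers.printSchema()
    # root
    #  |-- _1: long (nullable = true)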
[SPARK-19216] LogisticRegressionModel is missing getThreshold() - Spark - [issue]
...Say I just loaded a logistic regression model from storage. How do I check that model's threshold in PySpark? From what I can see, the only way to do that is to dip into the Java object: mode...
http://issues.apache.org/jira/browse/SPARK-19216    Author: Nicholas Chammas , 2019-05-21, 04:17
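A hedged sketch of the workaround the excerpt alludes to, reaching into the wrapped JVM object; _java_obj is an internal attribute and the load path here is hypothetical:

    from pyspark.ml.classification import LogisticRegressionModel

    model = LogisticRegressionModel.load("/tmp/my-logreg-model")  # hypothetical path

    # At the time of the issue there was no public getThreshold() on the
    # Python side, so the only option was to dip into the Java object.
    threshold = model._java_obj.getThreshold()
    print(threshold)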
[SPARK-19553] Add GroupedData.countApprox() - Spark - [issue]
...We already have a pyspark.sql.functions.approx_count_distinct() that can be applied to grouped data, but it seems odd that you can't just get a regular approximate count for grouped data. I ima...
http://issues.apache.org/jira/browse/SPARK-19553    Author: Nicholas Chammas , 2019-05-21, 04:15
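For contrast, a small sketch of the aggregation that already exists, pyspark.sql.functions.approx_count_distinct(), next to the plain approximate count the issue asks for; data and column names are made up:

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.getOrCreate()
    df = spark.createDataFrame([("a", 1), ("a", 2), ("b", 3)], ["key", "value"])

    # This works today: approximate distinct count per group.
    df.groupBy("key").agg(F.approx_count_distinct("value")).show()

    # There is no analogous GroupedData.countApprox() for a plain
    # approximate row count per group, which is what the issue proposes.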
[SPARK-2141] Add sc.getPersistentRDDs() to PySpark - Spark - [issue]
...PySpark does not appear to have sc.getPersistentRDDs()....
http://issues.apache.org/jira/browse/SPARK-2141    Author: Nicholas Chammas , 2019-05-21, 04:11
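A hedged illustration of the gap: the JVM-side SparkContext exposes getPersistentRDDs(), and the closest PySpark equivalent is poking at internal attributes (not a public API, and it returns JVM objects):

    from pyspark import SparkContext

    sc = SparkContext.getOrCreate()
    rdd = sc.parallelize(range(10)).cache()
    rdd.count()  # materialize so the RDD is actually persisted

    # No sc.getPersistentRDDs() in PySpark; this reaches through the
    # wrapped JavaSparkContext to the Scala SparkContext instead.
    print(sc._jsc.sc().getPersistentRDDs())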
Suggestion on Join Approach with Spark - Spark - [mail # dev]
...This kind of question is for the User list, or for something like StackOverflow. It's not on topic here. The dev list (i.e. this list) is for discussions about the development of Spark itself....
   Author: Nicholas Chammas , 2019-05-15, 18:04
[PySpark] Revisiting PySpark type annotations - Spark - [mail # user]
...I think the annotations are compatible with Python 2 since Maciej implemented them via stub files, which Python 2 simply ignores. Folks using mypy to check types will get the benefit whet...
   Author: Nicholas Chammas , 2019-01-25, 19:08
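A toy illustration of the stub-file approach described here: the type information lives in a separate .pyi file that checkers like mypy read, while the Python 2 runtime never sees it. The file and function names are made up, not the actual pyspark-stubs contents:

    # mymodule.py -- runs unchanged on Python 2 and 3
    def add(a, b):
        return a + b

    # mymodule.pyi -- read only by type checkers such as mypy
    def add(a: int, b: int) -> int: ...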
Ask for reviewing on Structured Streaming PRs - Spark - [mail # dev]
...OK, good to know, and that all makes sense. Thanks for clearing up my concern. One of the great things about Spark is, as you pointed out, that improvements to core components benefit multiple feat...
   Author: Nicholas Chammas , 2019-01-15, 01:45
[SPARK-19217] Offer easy cast from vector to array - Spark - [issue]
...Working with ML often means working with DataFrames with vector columns. You can't save these DataFrames to storage (edit: at least as ORC) without converting the vector columns to array col...
http://issues.apache.org/jira/browse/SPARK-19217    Author: Nicholas Chammas , 2019-01-04, 10:41
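A hedged sketch of the manual conversion the issue wants to make unnecessary: a UDF that turns a vector column into an array column so the DataFrame can be written out (e.g. as ORC). Column and variable names are illustrative:

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F
    from pyspark.sql.types import ArrayType, DoubleType
    from pyspark.ml.linalg import Vectors

    spark = SparkSession.builder.getOrCreate()
    df = spark.createDataFrame(
        [(Vectors.dense([1.0, 2.0]),), (Vectors.dense([3.0, 4.0]),)],
        ["features"],
    )

    # Hand-rolled vector -> array conversion; without something like this,
    # the vector column blocks saving to formats such as ORC.
    to_array = F.udf(lambda v: v.toArray().tolist(), ArrayType(DoubleType()))
    df.withColumn("features_arr", to_array("features")).show(truncate=False)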
Noisy spark-website notifications - Spark - [mail # dev]
...I'd prefer it if we disabled all git notifications for spark-website. Folks who want to stay on top of what's happening with the site can simply watch the repo on GitHub, no? On Wed, Dec 19, 2...
   Author: Nicholas Chammas , 2018-12-20, 03:22
[SPARK-18818] Window...orderBy() should accept an 'ascending' parameter just like DataFrame.orderBy() - Spark - [issue]
...It seems inconsistent that Window...orderBy() does not accept an ascending parameter, when DataFrame.orderBy() does. It's also slightly inconvenient since to specify a descending sort order y...
http://issues.apache.org/jira/browse/SPARK-18818    Author: Nicholas Chammas , 2018-12-13, 02:46
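A short sketch of the inconsistency the issue describes: DataFrame.orderBy() takes an ascending flag, while a window spec needs the column wrapped in desc() to sort descending. Data and names are illustrative:

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F
    from pyspark.sql.window import Window

    spark = SparkSession.builder.getOrCreate()
    df = spark.createDataFrame([("a", 1), ("a", 2), ("b", 3)], ["key", "value"])

    # DataFrame.orderBy() accepts ascending directly.
    df.orderBy("value", ascending=False).show()

    # Window...orderBy() does not, so a descending sort needs desc().
    w = Window.partitionBy("key").orderBy(F.desc("value"))
    df.withColumn("rank", F.row_number().over(w)).show()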