Results 1 to 10 of 40
restarting ranger kms causes spark thrift server to stop - Spark - [mail # user]
...Hi, From what I can tell, that's an error in Ranger, not in Spark, as you can see by the package where the exception is thrown. Spark Thrift server in this instance is merely trying to call a H...
   Author: Rick Moritz, 2018-06-24, 12:11
[ZEPPELIN-346] Ambiguous parsing of replName in zeppelin-zengine/src/main/java/org/apache/zeppelin/notebook/NoteInterpreterLoader.java#get(String replName) - Zeppelin - [issue]
...This issue was probably introduced into master some time between 0.5.0 and around 4 weeks ago. This issue addresses a combination of a bad test and ambiguous logic in the Loader. The test uses...
http://issues.apache.org/jira/browse/ZEPPELIN-346    Author: Rick Moritz, 2018-05-09, 05:23
how to create a DataType Object using the String representation in Java using Spark 2.2.0? - Spark - [mail # user]
...Hi, We solved this the ugly way, when parsing external column definitions: private def columnTypeToFieldType(columnType: String): DataType = { columnType match { case "Integ...
   Author: Rick Moritz, 2018-01-26, 13:52
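The snippet above is cut off; below is a minimal sketch of the pattern-matching approach it describes, assuming spark-sql is on the classpath (the case labels other than "Integer" are guesses, since the original match is truncated):

    import org.apache.spark.sql.types._

    // Map an external column-type label to a Catalyst DataType by hand.
    private def columnTypeToFieldType(columnType: String): DataType =
      columnType match {
        case "Integer" => IntegerType
        case "String"  => StringType
        case "Double"  => DoubleType
        case "Date"    => DateType
        case other     => throw new IllegalArgumentException(s"Unknown column type: $other")
      }

Later Spark releases added DataType.fromDDL for exactly this job, which avoids maintaining a hand-written match.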
[StructuredStreaming] multiple queries of the socket source: only one query works. - Spark - [mail # user]
...Hi Gerard, hi List, I think what this would entail is for Source.commit to change its functionality. You would need to track all streams' offsets there. Especially in the socket source, you al...
   Author: Rick Moritz, 2017-08-12, 05:59
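For context, a minimal sketch of the setup under discussion: two independent queries over one socket stream (host and port are placeholders). Each start() instantiates its own copy of the source, so the queries compete for the incoming lines and only one effectively receives data:

    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder.appName("socket-two-queries").getOrCreate()

    val lines = spark.readStream.format("socket")
      .option("host", "localhost")
      .option("port", 9999)
      .load()

    // Two sinks over the same logical stream, each with its own source instance.
    val q1 = lines.writeStream.format("console").queryName("q1").start()
    val q2 = lines.writeStream.format("console").queryName("q2").start()
    spark.streams.awaitAnyTermination()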
[STORM-2028] Exceptions in JDBCClient are hidden by subsequent SQL-Exception in close() - Storm - [issue]
...When an Exception is triggered in JdbcClient.executeInsertQuery, there is the potential for a follow-up Exception in close() to take precedence over the previously thrown Exception, when trig...
http://issues.apache.org/jira/browse/STORM-2028    Author: Rick Moritz, 2017-08-09, 04:45
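The failure mode described here is the classic close-in-finally masking pattern. A hedged illustration using plain JDBC (not the actual Storm code):

    import java.sql.Connection

    // If stmt.execute() throws and stmt.close() then also throws, the
    // exception from close() propagates and the original failure is lost.
    def executeInsert(conn: Connection, sql: String): Unit = {
      val stmt = conn.createStatement()
      try {
        stmt.execute(sql)
      } finally {
        stmt.close()
      }
    }

Java's try-with-resources avoids this by attaching the close() failure to the original as a suppressed exception instead of replacing it.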
Reading Hive tables Parallel in Spark - Spark - [mail # user]
...Put your jobs into a parallel collection using .par -- then you can submit them very easily to Spark, using .foreach. The jobs will then run using the FIFO scheduler in Spark. The advantage ove...
   Author: Rick Moritz, 2017-07-17, 12:48
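A minimal sketch of that approach, assuming Hive support is enabled and using placeholder table names:

    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder.enableHiveSupport().getOrCreate()

    // .par turns the Seq into a parallel collection, so each foreach body
    // submits its Spark job from a separate thread; the FIFO scheduler
    // then interleaves the resulting stages.
    val tables = Seq("db.table_a", "db.table_b", "db.table_c").par
    tables.foreach { name =>
      spark.table(name).count()
    }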
"Sharing" dataframes... - Spark - [mail # user]
...Keeping it inside the same program/SparkContext is the most performant solution, since you can avoid serialization and deserialization. In-memory persistence between jobs involves a memcopy, u...
   Author: Rick Moritz, 2017-06-21, 07:20
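A sketch of the single-SparkContext approach the excerpt recommends; the input path is hypothetical:

    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder.getOrCreate()

    // Cache once; subsequent jobs in the same context reuse the in-memory
    // blocks directly, with no serialization round-trip.
    val df = spark.read.parquet("/data/shared/events")
    df.cache()

    val byDay  = df.groupBy("day").count()
    val byUser = df.groupBy("user").count()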
[SPARK-20489] Different results in local mode and yarn mode when working with dates (silent corruption due to system timezone setting) - Spark - [issue]
...Running the following code (in Zeppelin, or spark-shell), I get different results, depending on whether I am using local[*] mode or yarn-client mode. Test case: import org.apache.spark...
http://issues.apache.org/jira/browse/SPARK-20489    Author: Rick Moritz, 2017-05-15, 10:40
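The issue's test case is truncated above. As a hedged sketch of the class of bug it reports: date conversion in Spark 2.x goes through the JVM's default timezone, so a driver running local[*] and executors on YARN with a different system timezone can derive different dates from the same input:

    import org.apache.spark.sql.SparkSession
    import org.apache.spark.sql.functions.to_date

    val spark = SparkSession.builder.getOrCreate()
    import spark.implicits._

    // Near-midnight timestamps are the dangerous case: truncating to a date
    // can land on different days depending on the default timezone in effect.
    val df = Seq("2017-04-27 23:30:00").toDF("ts")
    df.select(to_date($"ts").as("d")).show()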
Spark consumes more memory - Spark - [mail # user]
...I would try to track down the "no space left on device" - find out where that originates from, since you should be able to allocate 10 executors with 4 cores and 15GB RAM each quite easily. In...
   Author: Rick Moritz, 2017-05-11, 17:34
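For reference, the sizing mentioned in the excerpt (10 executors, 4 cores, 15 GB each) expressed as configuration; in practice these are usually passed as spark-submit flags rather than set in code:

    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder
      .config("spark.executor.instances", "10")
      .config("spark.executor.cores", "4")
      .config("spark.executor.memory", "15g")
      .getOrCreate()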
Create multiple columns in pyspark with one shot - Spark - [mail # user]
...In Scala you can first define your columns, and then use the list-to-vararg expander :_* in a select call, something like this: val cols = colnames.map(col).map(column => { lit...
   Author: Rick Moritz, 2017-05-04, 08:06
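The snippet is cut off; a sketch of the pattern it describes, with colnames, the input DataFrame, and the lit() transformation standing in for whatever the original mapped over:

    import org.apache.spark.sql.SparkSession
    import org.apache.spark.sql.functions.{col, lit}

    val spark = SparkSession.builder.getOrCreate()
    val df = spark.range(5).toDF("id")        // hypothetical input
    val colnames = Seq("a", "b", "c")

    // Build the new columns first, then expand the whole list into the
    // varargs of a single select call with :_*.
    val cols = colnames.map(name => lit(0).as(name))
    val result = df.select((col("id") +: cols): _*)
    result.show()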