Subject: This MapR-DB Spark Connector with Secondary Indexes

I am at a loss as to why one needs Spark to load a single row from the DB,
as in the example below:

val data = sparkSession
  .loadFromMapRDB("/user/mapr/tables/data", schema)
  .filter("uid = '101'")

Assuming that _id is the primary key, we are only going to load a single
row. Spark, as a distributed processing engine, is designed to work on
large data sets that require a cluster.

My second point: what happens if you have a composite index? Or rather,
can one create a composite index in this DB at all?
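To make the composite-index question concrete: a composite index is essentially a sorted structure over a tuple of columns, which serves both full-key point lookups and leading-prefix range scans. The plain-Scala sketch below uses a TreeMap as a stand-in for such an index; the column names (region, uid) and the data are invented for illustration and this is not the MapR-DB API.

```scala
import scala.collection.immutable.TreeMap

object CompositeIndexSketch {
  // Stand-in for a composite index on (region, uid): entries are kept
  // sorted by the column tuple, mapping to a row id.
  val index: TreeMap[(String, String), Long] = TreeMap(
    ("EU", "100") -> 1L,
    ("EU", "101") -> 2L,
    ("US", "100") -> 3L,
    ("US", "101") -> 4L
  )

  // Full composite-key lookup: at most one matching entry.
  def lookup(region: String, uid: String): Option[Long] =
    index.get((region, uid))

  // Leading-prefix scan: all entries sharing the first column form a
  // contiguous range in the sorted index, so no full scan is needed.
  def prefixScan(region: String): Seq[Long] =
    index.range((region, ""), (region + "\u0000", "")).values.toSeq

  def main(args: Array[String]): Unit = {
    println(lookup("EU", "101"))   // Some(2)
    println(prefixScan("US"))      // row ids 3 and 4
  }
}
```

The point of the sketch is that a query on the leading column(s) of a composite index is cheap, while a query on only a trailing column would still force a scan, which is exactly why one needs to know the search pattern when creating such indexes.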

Third point: it is expected that you already know your search pattern, so
you create the indexes beforehand as needed. Again, this negates the case
for using Spark.

There are tools on the market, such as Jethro <>, that create cubes and
indexes dynamically. Such a tool would be more appropriate here.


Dr Mich Talebzadeh

Disclaimer: Use it at your own risk. Any and all responsibility for any
loss, damage or destruction of data or any other property which may arise
from relying on this email's technical content is explicitly disclaimed.
The author will in no case be liable for any monetary damages arising from
such loss, damage or destruction.