Subject: UR evaluation


Here is a discussion of how we used it for tuning with multiple input types: https://developer.ibm.com/dwblog/2017/mahout-spark-correlated-cross-occurences/

We used video likes, dislikes, and video metadata to eventually increase our MAP@k by 26%, so this was mainly an exercise in incorporating data. Since this research was done we have learned how to better tune this type of situation, but that’s a long story, fit for another blog post.
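For context, MAP@k (mean average precision at k) averages, over all test users, the precision at each rank where a held-out "relevant" item appears in the top-k recommendations. A minimal sketch, with illustrative names that are not taken from ur-analysis-tools:

```python
def average_precision_at_k(recommended, relevant, k):
    """AP@k for one user: average of precision@i at each rank i with a hit."""
    if not relevant:
        return 0.0
    hits = 0
    score = 0.0
    for i, item in enumerate(recommended[:k]):
        if item in relevant:
            hits += 1
            score += hits / (i + 1)  # precision at this cutoff
    return score / min(len(relevant), k)


def map_at_k(all_recommended, all_relevant, k):
    """MAP@k: mean of AP@k over all users."""
    aps = [average_precision_at_k(recs, rel, k)
           for recs, rel in zip(all_recommended, all_relevant)]
    return sum(aps) / len(aps) if aps else 0.0
```

For example, with recommendations ["a", "b", "c"] and held-out relevant items {"a", "c"}, hits occur at ranks 1 and 3, giving AP@3 = (1/1 + 2/3) / 2 = 5/6. Comparing MAP@k before and after adding a secondary event type (like dislikes or metadata) is how a lift such as the 26% above would be measured.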
From: Marco Goldin <[EMAIL PROTECTED]>
Reply: [EMAIL PROTECTED] <[EMAIL PROTECTED]>
Date: May 10, 2018 at 9:54:23 AM
To: Pat Ferrel <[EMAIL PROTECTED]>
Cc: [EMAIL PROTECTED] <[EMAIL PROTECTED]>
Subject:  Re: UR evaluation  

Thank you very much, I hadn't seen this tool; I'll definitely try it. It's clearly better to have such a specific instrument.

2018-05-10 18:36 GMT+02:00 Pat Ferrel <[EMAIL PROTECTED]>:
You can if you want, but we have external tools for the UR that are much more flexible, since the UR has tuning that can’t really be covered by the built-in API: https://github.com/actionml/ur-analysis-tools They compute MAP@k as well as a bunch of other metrics, and compare different types of input data. They make queries against a running UR.
From: Marco Goldin <[EMAIL PROTECTED]>
Reply: [EMAIL PROTECTED] <[EMAIL PROTECTED]>
Date: May 10, 2018 at 7:52:39 AM
To: [EMAIL PROTECTED] <[EMAIL PROTECTED]>
Subject:  UR evaluation

Hi all, I successfully trained a Universal Recommender but I don't know how to evaluate the model.

Is there a recommended way to do that?
I saw that predictionio-template-recommender has an Evaluation.scala file which uses the PrecisionAtK class for its metrics.
Should I use this template to implement a similar evaluation for the UR?

thanks,
Marco Goldin
Horizons Unlimited s.r.l.