Subject: Re: Parallel ALS-WR on very large matrix -- crashing (I think)

Nicholas,

Can you give us the detailed arguments you start the job with? I'd
especially be interested in the number of features (--numFeatures) you
use. Are you running the job with implicit feedback data
(--implicitFeedback=true)?

The memory requirements of the job are as follows:

In each iteration, either the item-features matrix (items x features) or
the user-features matrix (users x features) is loaded into the memory of
each mapper. The original user-item matrix (or its transpose) is then
read row-wise by the mappers, which recompute the features via
AlternatingLeastSquaresSolver/ImplicitFeedbackAlternatingLeastSquaresSolver.
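
To put that in numbers: the dense feature matrix has to fit into every
mapper's heap, roughly rows x features x 8 bytes for the raw doubles,
plus JVM object overhead. Here is a quick back-of-the-envelope sketch
(plain Java, not Mahout code; the user and item counts are made-up
placeholders, substitute your own):

    public class AlsMemoryEstimate {
      public static void main(String[] args) {
        // Placeholder sizes -- replace with your actual counts.
        long numUsers = 20000000L;
        long numItems = 5000000L;
        int numFeatures = 50;

        // Each feature matrix is dense: rows x features doubles, 8 bytes each.
        long userFeaturesBytes = numUsers * numFeatures * 8L;
        long itemFeaturesBytes = numItems * numFeatures * 8L;

        // Whichever matrix is loaded in the current iteration must fit into
        // each mapper's heap, on top of normal JVM overhead.
        System.out.printf("user-features matrix: ~%.1f GB%n", userFeaturesBytes / 1e9);
        System.out.printf("item-features matrix: ~%.1f GB%n", itemFeaturesBytes / 1e9);
      }
    }

If whichever matrix applies doesn't fit into the heap you give the
mappers, the job will fail, which could explain the crashes you're
seeing.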

--sebastian