The most commonly asked questions. If you have a question that is not answered here, please send it to us at camra2011 [at] camrachallenge.com
Datasets
[Qd1]: Will the additional dataset used for deciding the winners of the challenge be of the same format as the currently released ones?
[Ad1]: Yes, the additional evaluation sets will be of the same format, except that the true positive values will not be provided.
[Qd2]: When will the additional dataset be published?
[Ad2]: It will not be published. Instead, for each track, participants will be given a list of users/households for which to recommend a set of movies or identify users. They will be asked to return a list of recommended movies/users, which we will then evaluate. This will happen around the time of the camera-ready deadline.
[Qd4]: Is it allowed to use information, e.g. from Wikipedia, about the Oscar nominations in 2010?
[Ad4]: No. We know that this might sound harsh, but in order to be sure we are comparing apples to apples, we are imposing this limit. The idea is that participants should not scrape extra metadata off of IMDb/Wikipedia.
[Qd5]: The password does not seem to work.
[Ad5]: It does; please try again. If it still fails, re-download the file. The files have been tested on Win/Linux/OSX/Unix and do work.
[Qd6]: Are you going to release the datasets after the workshop?
[Ad6]: The datasets are released exclusively for the challenge. However, by submitting a paper to the workshop, you are granted the right to use the data in further research.
[Qd7]: Is it compulsory to send a paper to the workshop in order to use the data?
[Ad7]: Yes, we expect the teams to contribute a paper to the workshop. If you do not contribute a paper, you give up the right to use the datasets in any future work. Please see the Dataset page. The paper can be from 4 to 8 pages; we will consider allowing a shorter format.
[Qd8]: There are ratings in the dataset with the rating value 0; are these actual ratings?
[Ad8]: Yes, the scale runs from 0 to 100, where 0 is the lowest rating value and 100 is the highest.
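For illustration only, here is a minimal sketch of how such ratings might be loaded, assuming a tab-separated file with user, movie and rating columns (the file name and column layout are our assumptions, not the official format). The main point is that a rating of 0 is a genuine lowest rating and should not be treated as a missing value:

    import csv

    def load_ratings(path="train.tsv"):
        """Read (user, movie, rating) triples; ratings are integers on a 0-100 scale."""
        ratings = []
        with open(path, newline="") as f:
            for user, movie, value in csv.reader(f, delimiter="\t"):
                r = int(value)
                assert 0 <= r <= 100   # 0 is a valid (lowest) rating, not missing data
                ratings.append((user, movie, r / 100.0))  # optional rescaling to [0, 1]
        return ratings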
Evaluation
[Qe1]: Will we send our final predictions to the organizers, who will then perform the evaluation?
[Ae1]: Yes.
[Qe2]: Why not use XXX as the evaluation metric instead?
[Ae2]: There are almost as many evaluation metrics as there are ways to implement a recommender. We have settled on these metrics because they are commonly used and well known by the majority of the RecSys community. You are welcome to use any other evaluation metric in your paper in addition to the ones specified.
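As an illustration only (this is not necessarily one of the official challenge metrics), a simple ranking metric such as precision@N can be computed as follows; the movie IDs in the usage comment are hypothetical:

    def precision_at_n(recommended, relevant, n=10):
        """Fraction of the top-N recommended items that appear in the set of relevant items."""
        top_n = recommended[:n]
        hits = sum(1 for item in top_n if item in relevant)
        return hits / float(n)

    # Example usage (hypothetical IDs):
    # precision_at_n(["m12", "m7", "m33"], {"m7", "m99"}, n=3)  ->  0.33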
Publications
[Qp]: Do you know of any relevant publications?
[Ap]: Yes, please have a look at these papers:
- Herlocker et al. (2004), Evaluating collaborative filtering recommender systems
- CAMRa 2010 proceedings (ACM DL)