I realized that when Professor Belongie asked me about related work on Monday, he had (probably) meant this (humorous) paper he sent me about using computer vision to determine the quality of CVPR paper submissions.
It's a very witty paper by "von Bearnensquash" and can be found at http://vision.ucsd.edu/sites/default/files/gestalt.pdf. They used "standard computer vision features" (LUV histograms, HOG, and gradient magnitude) with AdaBoost classification and found that "good" paper features include brightly colored graphs and math equations, while "bad" paper features include complicated tables and missing pages (illustrated below).
They found that allowing for a false positive rate of 15%, they could successfully reject half of the "bad" papers.
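For concreteness, here's a rough sketch (not the authors' actual pipeline) of extracting the kinds of features they describe, using scikit-image; the histogram bin counts and HOG cell/block sizes are my own guesses, not values from the paper.

```python
import numpy as np
from skimage import color, feature, filters

def page_features(page):
    """Feature vector for one page image (RGB array).

    Pages should be resized to a common size first so the HOG
    vector has a fixed length across pages.
    """
    # Per-channel LUV color histograms, normalized to sum to 1.
    luv = color.rgb2luv(page)
    hists = np.concatenate(
        [np.histogram(luv[..., c], bins=16)[0] for c in range(3)]
    ).astype(float)
    hists /= hists.sum()

    gray = color.rgb2gray(page)

    # HOG over a coarse grid; cell/block sizes here are guesses.
    hog = feature.hog(gray, orientations=9,
                      pixels_per_cell=(32, 32), cells_per_block=(2, 2))

    # Sobel gradient magnitude, summarized as a small histogram.
    grad_hist, _ = np.histogram(filters.sobel(gray), bins=16, range=(0, 1))

    return np.concatenate([hists, hog, grad_hist.astype(float)])
```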
The problem I'm addressing is similar, but there's an important distinction: they evaluate quality using the content of the thing itself, so it is sensible for there to be a relationship, whereas a book cover image is not necessarily related to the content of what I'm evaluating for quality (the book itself).
In any case, AdaBoost could be a good classification method to try, as it is simple and doubles as a feature selection method. There is a nice overview at https://hpcrd.lbl.gov/~meza/projects/MachineLearning/EnsembleMethods/introBoosting.pdf
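Since AdaBoost with decision stumps effectively picks one feature per boosting round, a minimal scikit-learn sketch shows how the trained ensemble doubles as a feature selector; the X and y here are placeholder data, not anything from my actual dataset.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

# Placeholder data: rows are feature vectors (e.g. from page_features
# above), labels are 1 = "good", 0 = "bad".
X = np.random.rand(200, 50)
y = np.random.randint(0, 2, size=200)

# The default weak learner is a depth-1 decision tree (a stump), the
# classic AdaBoost setup; each round effectively picks one feature.
clf = AdaBoostClassifier(n_estimators=100).fit(X, y)

# Nonzero importances mark the features boosting chose to use,
# so the fitted model doubles as a feature-selection result.
selected = np.flatnonzero(clf.feature_importances_)
print("features selected by boosting:", selected)
```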
