I think you're aware of that loss function; after all, an MLE minimizes the negative log-likelihood loss. Anyway, my point in bringing it up is a bit involved; we can talk about it IRL.
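To make the MLE/loss connection concrete, here's a minimal sketch (my own toy example, not something from the thread): for Bernoulli data, grid-searching the negative log-likelihood recovers the empirical mean, which is exactly the MLE.

```python
import math

def nll(p, data):
    """Negative log-likelihood of binary data under Bernoulli parameter p."""
    return -sum(math.log(p if x == 1 else 1.0 - p) for x in data)

data = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]  # 7 ones out of 10

# Minimize the NLL over a grid; the minimizer is the sample mean, i.e. the MLE.
ps = [i / 1000 for i in range(1, 1000)]
p_hat = min(ps, key=lambda p: nll(p, data))
print(p_hat)  # 0.7, the empirical frequency
```

So "minimizing log loss" and "maximizing likelihood" are the same optimization viewed from two sides.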
Since the proper scoring rule is not unique, perhaps that suggests a subjective probability does not encapsulate all of one's (un)certainty.
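The non-uniqueness is easy to see numerically (a toy demo of my own, assuming a true event probability of 0.3 for illustration): log loss and the Brier score are genuinely different functions, yet both are proper — the expected score of a forecast p is minimized at the true probability q.

```python
import math

def brier(p, y):
    """Brier score for forecast p of binary outcome y."""
    return (p - y) ** 2

def logloss(p, y):
    """Log loss (negative log-likelihood) for forecast p of binary outcome y."""
    return -math.log(p if y == 1 else 1.0 - p)

def expected_score(rule, p, q):
    """Expected score of forecast p when the event is truly Bernoulli(q)."""
    return q * rule(p, 1) + (1 - q) * rule(p, 0)

q = 0.3  # true probability, chosen arbitrarily for the demo
ps = [i / 1000 for i in range(1, 1000)]
best_brier = min(ps, key=lambda p: expected_score(brier, p, q))
best_log = min(ps, key=lambda p: expected_score(logloss, p, q))
print(best_brier, best_log)  # both 0.3: two distinct rules, same minimizer
```

Any strictly proper rule elicits the same point probability, which is exactly why the choice among them can't be settled by calibration alone.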
I agree; it seems one would want more than just a classification when doing a meta-analysis, but how much more...? I don't like subjective probability, but on the other hand it seems to be useful. Then again, there are methods like boosting that do meta-analysis without confidence/subjective probability. On the fourth hand, boosting seems very brittle to noise (http://www.phillong.info/publications/LS10_potential.pdf).
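To illustrate the boosting point — that the combination happens through a weighted vote of hard classifiers, with no subjective probability anywhere — here's a bare-bones AdaBoost sketch on a made-up 1-D dataset (all names and data are my own toy construction):

```python
import math

# Toy 1-D dataset: label is +1 to the right of 0.5, -1 to the left.
X = [0.1, 0.2, 0.3, 0.4, 0.6, 0.7, 0.8, 0.9]
y = [-1, -1, -1, -1, 1, 1, 1, 1]

def stump(threshold, sign):
    """Decision stump: predicts `sign` if x > threshold, else -sign."""
    return lambda x: sign if x > threshold else -sign

# Pool of candidate weak learners.
stumps = [stump(t, s) for t in (0.15, 0.35, 0.55, 0.75) for s in (1, -1)]

w = [1.0 / len(X)] * len(X)  # example weights
ensemble = []                # (alpha, stump) pairs

for _ in range(5):
    # Pick the stump with the lowest weighted error on the current weights.
    def werr(h):
        return sum(wi for wi, xi, yi in zip(w, X, y) if h(xi) != yi)
    h = min(stumps, key=werr)
    err = max(werr(h), 1e-10)  # clamp to avoid log(0) on a perfect stump
    if err >= 0.5:
        break
    alpha = 0.5 * math.log((1 - err) / err)
    ensemble.append((alpha, h))
    # Re-weight: mistakes go up, correct examples go down.
    w = [wi * math.exp(-alpha * yi * h(xi)) for wi, xi, yi in zip(w, X, y)]
    z = sum(w)
    w = [wi / z for wi in w]

def predict(x):
    """Weighted vote over the stumps -- a hard label, no probability attached."""
    return 1 if sum(a * h(x) for a, h in ensemble) > 0 else -1

print([predict(x) for x in X])
```

Note the output of `predict` is just a sign; the ensemble's margin could be squashed into [0, 1], but nothing in the algorithm asks for or produces a calibrated probability — which is the sense in which it does meta-analysis without one.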
Date: 2010-11-19 06:19 pm (UTC)
It is interesting.