The sampling and response biases inherent in Phish.net ratings are such that, short of a full-blown, well-designed survey of all Phish fans, we will never get the methodology "exactly right". What I'm trying to do is identify the low-hanging fruit: are there biases in the ratings data that we can fix relatively easily?
Obviously, I think we can correct for some of the biases we all know are in the data. My approach has been to review the extensive academic literature to see what others have done when they have encountered the same issues as .Net. I'm not proposing anything new here; I am simply advocating that we do what others have done.
As for the ranking of shows, ratings estimated to the third decimal place are highly likely to be false precision, even if a weighting system is adopted. Weights will reduce the overall bias in the data, but not eliminate it. So it is NOT me who is arguing whether Mondegreen N3 is #5 or #20 or whatever.
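The post doesn't name a specific weighting scheme, but one standard fix from the literature for small-sample bias is shrinkage toward the global mean (a "Bayesian average"). This is a hedged sketch of that idea, not .Net's actual method; the function name and the `prior_weight` parameter are my own illustration:

```python
def shrunk_rating(ratings, global_mean, prior_weight=25):
    """Pull a show's mean rating toward the site-wide mean.

    prior_weight behaves like that many "phantom" votes at the global
    mean, so shows with few raters are pulled toward it the most.
    """
    n = len(ratings)
    if n == 0:
        return global_mean
    show_mean = sum(ratings) / n
    return (n * show_mean + prior_weight * global_mean) / (n + prior_weight)

# A show with only three perfect ratings lands near the site-wide mean
# instead of topping the rankings:
print(round(shrunk_rating([5.0, 5.0, 5.0], global_mean=4.0), 3))  # 4.107
```

This kind of adjustment doesn't remove bias, it just stops a handful of enthusiastic raters from dominating the rankings; the choice of `prior_weight` is itself a judgment call.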
Since May of 2010 I've seen countless complaints about biased ratings, and all I'm saying is, "Hey, we can fix at least some of that shit."