I made a rather startling discovery in my research the other day. I was looking for a better way of filtering out pixels in the videos I was analyzing; picking an arbitrary constant to set the threshold didn’t seem to be working out too well, since some videos required a higher threshold while others needed something softer.
The more technically-inclined might immediately say “well what about an adaptive threshold, nyuk nyuk”, to which I first say:
And then I’ll admit that, yes, some sort of adaptive threshold is indeed the way to go. But which kind? Do a Google Scholar search and you can find one for every grain of sand on the beaches.
I decided to look at the data itself. Oddly enough, I’d been doing all this video thresholding without ever looking at what the underlying data actually looked like.
The basic idea is this: for each pixel in a video, compute its standard deviation. Then, to determine whether or not that pixel is showing us anything useful, compare its standard deviation to the average standard deviation over all the pixels. If it’s larger than the average (suggesting an above-average amount of movement) or some constant times the average, keep it. Otherwise, discard it.
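That baseline can be sketched in a few lines of NumPy. To be clear, the data and the multiplier `k` below are stand-ins I made up for illustration, not the real videos:

```python
import numpy as np

# Hypothetical stand-in for real video data: one row of intensity
# values per pixel over time, shape (n_pixels, n_frames).
rng = np.random.default_rng(0)
pixels = rng.normal(size=(1000, 200))

# Per-pixel standard deviation across frames.
stds = np.std(pixels, axis=1)

# Keep pixels with above-average movement; k is the arbitrary
# multiplier on the average that this post is trying to get rid of.
k = 1.0
mask = stds > k * np.mean(stds)
kept = pixels[mask]
```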
Obviously you can replace the mean of the standard deviations with the median, but the result is more or less the same: for some videos, this works just fine. For others, it’s too stringent, i.e. useful pixels are being thrown out. So to get a feel for the data, I collected the standard deviations of all the pixels and plotted them as a histogram.
I was kind of shocked when I saw the graph:
It’s pretty much a textbook gamma distribution. Obviously, this varied from video to video, sometimes with some pretty weird histograms:
The problem became apparent in looking at these graphs: by choosing the average of these distributions as the cut-off, we were picking some value along the downhill slope, effectively eliminating the tallest point (the most common value) and hence most likely cutting out some important pixels.
Further examination, however, revealed that some of the standard deviations followed what looked to be a normal distribution, instead of a gamma:
In these cases, the mean would actually work very well.
So I had two behaviors to consider: were the standard deviations of the pixels behaving collectively as a gamma, or as a gaussian?
Turns out, SciPy has a way to figure that out: specifically, it uses the Kolmogorov-Smirnov statistical test to give a likelihood that a bunch of points were drawn from a particular distribution. Also, it gets you drunk (rimshot!).
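As a quick sanity check on synthetic data (a toy sketch, not from the original analysis), the KS test does tell the two distributions apart: the D statistic comes out smaller for the distribution the sample was actually drawn from.

```python
import numpy as np
import scipy.stats

# Draw a sample we know is gamma-distributed.
rng = np.random.default_rng(42)
sample = rng.gamma(shape=2.0, scale=3.0, size=2000)

# Fit both candidate distributions to the same sample.
a, loc, scale = scipy.stats.gamma.fit(sample)
m, s = scipy.stats.norm.fit(sample)

# Smaller D means a closer fit; the gamma should win here.
Dg, pg = scipy.stats.kstest(sample, 'gamma', args=(a, loc, scale))
Dn, pn = scipy.stats.kstest(sample, 'norm', args=(m, s))
```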
My basic plan: perform a KS test on the standard deviations for both gamma and gaussian. If the data are closer to a gamma, use the peak value as the cut-off. If the data are closer to a gaussian, use the mean value instead.
```python
import numpy
import scipy.stats

# First, compute the standard deviation for each pixel.
stds = numpy.std(pixels, axis=1)

# Bin the standard deviations into a histogram.
hist, bins = numpy.histogram(stds, bins=50)

# Keep this value if the distribution is closer to a gaussian.
cutoff = numpy.mean(stds)

# "Fit" the standard deviations to a gamma and a normal, respectively. This
# generates estimates for what the parameters would be, assuming the data
# were drawn from the specified distribution.
a, l, b = scipy.stats.gamma.fit(stds)
m, s = scipy.stats.norm.fit(stds)

# Use the estimated parameters and the data to perform the KS test.
Dg, pg = scipy.stats.kstest(stds, 'gamma', args=(a, l, b))
Dn, pn = scipy.stats.kstest(stds, 'norm', args=(m, s))

# Dg and Dn are effectively "distance from this distribution" for gamma and
# normal, respectively. The smaller the value, the better the fit.
if Dg < Dn:
    # Closer to a gamma distribution: use the peak value instead,
    # i.e. the center of the tallest histogram bin.
    maxind = numpy.argmax(hist)
    cutoff = (bins[maxind] + bins[maxind + 1]) / 2.0
```
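To close the loop, applying whichever cutoff won is the same comparison as before. A minimal sketch, assuming `pixels` is an (n_pixels, n_frames) array (synthetic here, since I don't have the real videos):

```python
import numpy as np

# Hypothetical, gamma-ish data standing in for real video pixels.
rng = np.random.default_rng(0)
pixels = rng.gamma(2.0, 1.0, size=(500, 100))
stds = np.std(pixels, axis=1)

# In the real pipeline, cutoff comes out of the KS-test logic:
# the mean for gaussian-like data, the histogram peak for gamma-like.
cutoff = np.mean(stds)

# Keep only the pixels whose standard deviation clears the cutoff.
active = pixels[stds > cutoff]
```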
Since implementing this new method of thresholding the video data, the analysis I’ve been doing has become noticeably more accurate; we’re talking a 10-15 percentage point jump. Pretty freaking neat!
tl;dr I SWEAR I’M GOING TO GET THIS PAPER OUT BEFORE 2013.