This is an interesting concept to me, as its applicability is endless.
Remember your high school / college calculus courses? Remember Taylor/Maclaurin/power series? Remember the whole thing about summing an infinite series of numbers together, and posing the question of whether the sum “diverged” (went off to infinity) or “converged” (the sum of those infinite numbers was actually a single number)?
If the terms of the series stayed at or above 1, the sum obviously diverged — the terms didn't even approach zero. If, however, the terms shrank toward zero, things got a little tricky: some such series converge (the sum of 1/n² is π²/6) while others diverge (the harmonic series, the sum of 1/n, grows without bound), and it took a more thorough investigation — comparison tests, ratio tests, and the like — to tell for sure.
Still, I think most of us would agree that numbers smaller than 1 – even 1 itself – are pretty small numbers. But it’s also telling that summing these “small” numbers together infinitely can still yield infinity. This paradigm shows up absolutely everywhere.
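To make that concrete, here's a quick sketch (in Python, not from the original post) comparing two series of "small" terms: the harmonic series, whose partial sums creep off to infinity, and the sum of inverse squares, which settles down near π²/6 ≈ 1.6449:

```python
def harmonic(n):
    """Partial sum of 1 + 1/2 + 1/3 + ... + 1/n.
    Every term after the first is 'small', yet the sum grows
    without bound -- roughly like ln(n)."""
    return sum(1.0 / k for k in range(1, n + 1))

def inverse_squares(n):
    """Partial sum of 1 + 1/4 + 1/9 + ... + 1/n^2.
    Equally 'small' terms, but this one converges (to pi^2 / 6)."""
    return sum(1.0 / k**2 for k in range(1, n + 1))

for n in (10, 1_000, 100_000):
    print(n, harmonic(n), inverse_squares(n))
```

Run it and you'll see `harmonic(n)` keep climbing as `n` grows while `inverse_squares(n)` barely budges past 1.64 — same "small operands," opposite fates.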
- Fallacies of composition and division: somewhat more concrete examples where the properties of the constituent parts are attributed to the whole, essentially saying that “since the sum is composed of small operands, it must be small as well.” Sorry, no can do.
- While I was interviewing at the USACE, we brainstormed over a problem involving a city’s construction laws and how they tracked impact of these improvement plans on the surrounding environment. If the construction project was sufficiently small, its impact on the environment was deemed negligible and no further record of the project was kept. Problem was, hundreds of thousands of these “negligible” projects were moving forward and there definitely was a noticeable impact.
- A classic machine learning paradigm: a small change in input produces a small change in output. If you took the function f(x) = x², you would find that the value of f(x) doesn’t change much between values of, say, 1 and 1.01 for x. Less trivially, if you’re trying to classify some sort of unobserved data, but you’ve already observed data which is strikingly similar, you could probably get away with saying the unobserved data belongs to the same class / cluster as the very similar data you’ve already seen. Probably.
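That last bullet is essentially nearest-neighbor classification: assume the new point behaves like the most similar point you've already seen. A minimal sketch (the points and labels here are made up for illustration):

```python
import math

# Hypothetical labeled 2-D points forming two clusters.
observed = [
    ((1.0, 1.0), "A"), ((1.2, 0.9), "A"), ((0.8, 1.1), "A"),
    ((5.0, 5.0), "B"), ((5.1, 4.8), "B"), ((4.9, 5.2), "B"),
]

def nearest_neighbor_label(point):
    """Assign the label of the closest observed point --
    'similar things behave similarly', encoded as code."""
    closest = min(observed, key=lambda entry: math.dist(point, entry[0]))
    return closest[1]

print(nearest_neighbor_label((1.05, 1.0)))  # lands near cluster A
print(nearest_neighbor_label((4.7, 5.1)))   # lands near cluster B
```

It's also exactly where the heuristic can burn you: a point sitting between the clusters still gets confidently assigned a label, whether or not the assumption of smooth, similar behavior actually holds there.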
Here’s the catch: this is indeed a fallacy, but it’s so hard-wired into the genetic code of humans because, frankly, it’s not a bad heuristic. By and large, similar things behave similarly. By and large, a finite sum of small numbers will yield a small number. As a matter of fact, this idea of similar things behaving similarly is a basis for my master’s thesis. It’s a hypothesis, and whether it turns out true, false, or some mixture of the two under differing conditions, the result will be informative — and since in my case it deals exclusively with protein behavior, it will be exceptionally informative either way.
Where we short-sighted humans get burned by this paradigm is, indeed, long-term planning. When I was President of my fraternity at Georgia Tech, I inherited an interesting problem: our local chapter had, over the previous several years, been drifting stealthily away from adherence to a particular rule, and only within the few months leading up to my election had anyone noticed. Kind of a “how did this happen?” moment. Fixing it required what felt like a massive shift in policy.
The applications of this paradigm are endless. Generally, assuming that similar things behave similarly is a good rule of thumb; it’s how our hunter-gatherer ancestors survived moving across unfamiliar terrain and territories. But like any rule of thumb, if obeyed mindlessly, it will chomp yo tushie hard.
And this lolcat will have zero sympathy: