My Probability and Statistics course this semester, contrary to what may be the norm in college, is the first course I have ever taken in the last five years that is truly graded on a bell curve.
For the sake of having all two of my readers on the same page: the bell curve involves normalizing the performance of the entire class and assigning those seemingly arbitrary letter grades based on the normalization, rather than on absolute points. For example, if the average in the whole class is a 50, then a 50 becomes a C, and all the other grades are assigned from there. There are some more details that go into it, such as how spread out the individual student averages are, but if a 50 is a C, then a 70 could easily be an A.
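To make that concrete, here is a minimal sketch in Python of one hypothetical curving scheme (the z-score cutoffs are completely made up for illustration, not anything my professor actually uses):

```python
from statistics import mean, stdev

def curve_grades(scores):
    """Map raw scores to letters by z-score (standard deviations from the mean).
    The cutoffs below are invented for illustration."""
    mu, sigma = mean(scores), stdev(scores)
    letters = []
    for s in scores:
        z = (s - mu) / sigma
        if z >= 1.0:
            letters.append("A")
        elif z >= 0.5:
            letters.append("B")
        elif z > -0.5:
            letters.append("C")
        elif z > -1.0:
            letters.append("D")
        else:
            letters.append("F")
    return letters

print(curve_grades([50, 50, 70, 30, 50]))
```

With those scores the mean is 50, so the 50s land on C, the 70 curves up to an A, and the 30 down to an F: exactly the kind of shuffling described above.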
In another class full of geniuses, an 80 could be a C.
See where I’m going? Basically, you’re in competition with your peers.
My opinion on this can be, and always has been, summed up in one word: bollocks. My thinking is that any class in which you can spend your entire semester busting your arse and getting solid grades in return, and still end up with a C, has something wrong with it. My thinking is that any class in which you can spend your entire semester forgoing your studies and instead focusing on lacing your classmates’ drinks with laxatives, and still end up with a C, also has something wrong with it.
My thinking is that any class in which both of the above statements are true is completely whack.
However, in speaking with my Dad and Cathryn, as well as a few of my fraternity brothers, it was pointed out to me that this grading system also works very well at normalizing against bad professors. This is a particularly good point; if the class was graded on an absolute system, where anything above a 90 was an A, but the highest grade was a 75, I think everyone would be a little peeved to discover that the class brainiac got an “average” grade.
Still, it assigns a true bell curve to the class: the majority of the students will receive a C. As and Fs will be in the minority compared to Bs and Ds. And calculating scenarios as final exams are looming is all but futile: it depends almost exclusively on how well your classmates do in relation to you.
My girlfriend brought up a good point, though: what if all the kids in the class are morons? People who should get Fs could end up getting Cs.
After having mulled over this, I have come up with the following thought: I’ll prove that I am, in fact, learning something in ProbStat and prove why the bell curve system is much more effective at trapping incompetent professors than passing along incompetent students.
Let’s say you have a class (say, a ProbStat class) with a professor and a bunch of students (say, 46 students). Suppose each of those 47 people is, independently, incompetent with probability 50%. How likely, then, is it that the professor will be incompetent, thus maximizing use of the bell curve? How likely is it that all the students will be incompetent, thus exploiting the bell curve?
As stated in the problem, the probability of any one person being incompetent is 50%, so the professor has a simple 50% chance of being an utter failure. With all 46 students sporting a 50% chance of incompetency, the calculation becomes (0.5)^46, or roughly 0.0000000000014%.
Even the probability of a given half of the students all being incompetent is minuscule: (0.5)^23, or about 0.00001192%.
OK, so obviously the likelihood of the system being exploited in favor of awful students is all but impossible. So let’s make the calculations a bit more realistic. Professors, by definition, have years and years of studying, research, and teaching behind them, so it’s unlikely (though still possible) that they’re incompetent. Let’s say they have a 10% chance of incompetency. Students, on the other hand, have little to no experience under their belts, and are most likely piecing things together for the first time. Let’s say their chance of being incompetent is 80%.
Even with these incredibly skewed numbers, the chance of the professor being incompetent vastly overshadows the possibility that all the students (or even a given half of them) are utter failures. To wit: the professor will be incompetent 10% of the time, while a class of 46 students will be entirely incompetent 0.0035% of the time, and a given half of that class will be incompetent 0.59% of the time.
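Here’s a quick Python check of all those numbers, with the probabilities printed as percentages:

```python
# Assumed incompetence probabilities from the scenarios above:
# 50% for everyone, 10% for the professor, 80% for each student.
p_fair, p_prof, p_stud = 0.5, 0.1, 0.8

print(f"all 46 incompetent at 50%:    {100 * p_fair**46:.13f}%")  # 0.0000000000014%
print(f"a given half (23) at 50%:     {100 * p_fair**23:.8f}%")   # 0.00001192%
print(f"professor incompetent:        {100 * p_prof:.0f}%")       # 10%
print(f"all 46 incompetent at 80%:    {100 * p_stud**46:.4f}%")   # 0.0035%
print(f"a given half (23) at 80%:     {100 * p_stud**23:.2f}%")   # 0.59%
```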
That’s still practically a factor of 20 difference between the two probabilities.
Isn’t it comforting to know that it’s much more likely that your professor will be an idiot rather than half your classmates?
Obviously, this is still an oversimplification, but I think the proof of concept is there.
(by the way, if any of my calculations are incorrect, please feel free to point that out…gently)
While we’re on the topic of math, though, can anyone help me with the following problem:
You have 10 bins and 100 balls. 5 of the bins can only hold even numbers, 4 of the bins can only hold odd numbers, and one of the bins can hold any number, even 0. How many different ways are there of distributing the balls among the bins?
I can set up the equation: ten variables, one per bin, added together and equaling 100. The idea is to simplify it to the point of being able to use the stars-and-bars formula, (n + k – 1) choose (k – 1), but that requires reducing each variable’s restriction to x_k >= 0, and because one of the bins has no parity restriction at all, I can’t simply factor a 2 out of every variable and divide it out of the whole equation. Help would be appreciated!
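In the meantime, a brute-force sanity check can at least pin down the number any formula has to hit. Here’s a small dynamic program in Python, under the assumptions that the balls are identical, the bins are distinct, and zero counts as even (so an even bin may sit empty; change the even range to start at 2 for the other reading):

```python
from math import comb

def count_distributions(total=100):
    """Count ways to put `total` identical balls into 10 distinct bins:
    5 even-only, 4 odd-only, and 1 unrestricted.
    Assumes 0 counts as even, so even bins may be empty."""
    # dp[t] = number of ways to place exactly t balls in the bins processed so far
    dp = [0] * (total + 1)
    dp[0] = 1
    bins = ([range(0, total + 1, 2)] * 5    # even-only bins: 0, 2, 4, ...
            + [range(1, total + 1, 2)] * 4  # odd-only bins: 1, 3, 5, ...
            + [range(0, total + 1)])        # the one unrestricted bin
    for allowed in bins:
        new = [0] * (total + 1)
        for t in range(total + 1):
            if dp[t]:
                for k in allowed:
                    if t + k > total:
                        break
                    new[t + k] += dp[t]
        dp = new
    return dp[total]

print(count_distributions(), comb(57, 9))
```

If I’ve set it up right, there may even be a closed form under those assumptions: writing each even bin as 2y and each odd bin as 2z + 1, the four odd bins contribute 4 in total, and the unrestricted bin is then forced to hold whatever even remainder is left, so counting solutions reduces to nine nonnegative variables with sum at most 48, i.e. (48 + 9) choose 9. The brute-force count above agrees with that.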
OK, I think I’m done with math for the day. Got a senior design meeting with our customer in five minutes, and we’ll be delivering our product vision statement and relaying news of our progress thus far. More details to follow.
Stay outta trouble!