Another week gone by and we’re headed into one of those really interesting times–Presidential primary campaigns are heating up, the holidays are closing in, and winter is just around the corner (heck, we even got something that could charitably be described as snow here last weekend).
So, with all that going on, here are a few of the more interesting things I read this week in the world of politics, stats, and behavioral sciences that might be worth spending some of your weekend time on. Or, if it’s as beautiful as today is in DC, you should skip all of this and get outside–it won’t last forever.
- The DCCC is starting to roll out its plans for 2012 in the form of a “Drive for 25.”
There’s a lot to think about here including the effects of redistricting on the target list, whether a party can simultaneously embrace a general anti-incumbent message–rather than a policy message–at the House level while defending the incumbent in the White House, and the probability of success in these districts. We’ll take a tour of the races and data ourselves in the coming week or so, but this is the first real shot of House 2012, so it’s worth a read and a thought.
- I loved this article about how the Khan Academy is using machine learning and a good dose of inquisitiveness and willingness to experiment to make great strides in understanding how to really assess and encourage learning and mastery.
I love this article for a few reasons–it is a really cool application of some fun statistical techniques, it’s the kind of thing we need to be thinking more about when it comes to educational reform (it makes No Child Left Behind look like Medieval medicine), and there’s no way that something like this would ever have come out of our antiquated and ossified traditional education system.
- If you’re looking for an educational read this weekend on a topic that matters more than you think, I recommend Stephen Ziliak and Deirdre McCloskey’s The Cult of Statistical Significance: How the Standard Error Cost Us Jobs, Justice, and Lives.
Those who spend too much time talking to me know that I’m fascinated by philosophy of science arguments. Those who haven’t been exposed to this peculiarity are probably better off. One of my favorite arguments is an often dense debate about the nature of certainty and uncertainty, independence of observations, standards of proof, and a whole host of other nerdy subjects.
When a book comes along that does an okay job of critiquing the confidence-interval-dependent worldview that dominated much of science in the second half of the 20th Century (that is, most of what we learned in school up to and including our undergraduate educations), it’s worth a read even if it has warts.
This is a book with many flaws, as any search of reviews will reveal, but it also provides a fairly accessible (and, for the topic, page-turning) exploration of why things like the standard error of the estimate and the 95% confidence interval (and their bastard child in polling, the margin of error) are neither handed down from the gods nor necessarily even good things to use in all cases.
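For context on just how mechanical that polling “margin of error” is: it’s nothing more than the normal-approximation confidence interval for a sample proportion, usually reported at the worst case of a 50/50 split. A quick sketch (the function name is mine, not from the book):

```python
import math

def margin_of_error(p_hat, n, z=1.96):
    """Normal-approximation margin of error for a sample proportion.

    p_hat: observed proportion (e.g. candidate's share of respondents)
    n:     sample size
    z:     critical value; 1.96 gives the conventional 95% interval
    """
    return z * math.sqrt(p_hat * (1 - p_hat) / n)

# Pollsters typically report the worst case, p_hat = 0.5:
moe = margin_of_error(0.5, 1000)
print(f"+/- {moe * 100:.1f} points")  # roughly +/- 3.1 points for n = 1,000
```

Note how much is baked into that one number: the arbitrary choice of 95%, the assumption of a simple random sample, and the worst-case proportion–exactly the kind of convention Ziliak and McCloskey argue we treat as gospel.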
- Finally and without comment, the best statistical problem ever written. Just because.