
One of a select few Sabermetrics worth a damn.

Imagine I thought of a clever pun involving WHIP and chickens

A wise man once said, "Oh, people can come up with statistics to prove anything, Kent. Fourfty [sic] percent of all people know that!" The fact that this wise man was defending bludgeoning people with sacks of doorknobs, or letting the Springfield cat burglar make off with the world's largest cubic zirconia, is immaterial to his point: you really can come up with statistics to prove anything you want. From sample manipulation (i.e., outright fraud) to cleverly "massaging" your data (also fraud) to confirmation bias (less egregious fraud), there are any number of ways to say whatever you want with convincing numbers. If you want some more specifics on this diatribe, I covered them in my last incoherent rambling.

Today, I want to talk about a certain attempt to better quantify a ballplayer's performance: Walks plus Hits divided by Innings Pitched, or WHIP. This will be a more accessible discussion than the last one, as I feel I may have lost people with the previous installment. Either that or we're all stuck in a sort of blasé fog until the warm weather comes back.

Many of the AoG readers grew up in a time without WHIP. The biggest and "best" metric for comparing pitchers was Earned Run Average (ERA), AND THAT'S THE WAY WE LIKED IT! Back then, we had to compare pitchers using nothing more than their ERA and an unhealthy knowledge of how the ball has changed over 120 years of baseball. We'd argue over live balls until the cows came home; someone once even started talking to me about the damned tanning process at different factories.

ERA is a simple and easy metric: it's the number of EARNED runs a pitcher gives up per nine innings pitched. There is some confusion amongst baseball fans (mostly casual fans) about what happens during a pitching change. The answer is that any runner a pitcher lets on is "his." So if a starting pitcher (SP) lets a runner on, and a reliever (RP) gives up the hit that brings that runner around, that earned run is on the SP. This has been cited as a weakness of ERA: while the baserunner was "let on" by one pitcher, he didn't really "let him in" -- that's the other pitcher's fault, really.
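For the curious, the arithmetic is about as simple as stats get. A quick sketch (the pitcher's numbers here are made up for illustration):

```python
def era(earned_runs, innings_pitched):
    """ERA: earned runs allowed, scaled to a 9-inning game.

    Note: box-score innings like "200.1" mean 200 and 1/3 innings,
    so convert those to thirds before plugging them in here.
    """
    return 9 * earned_runs / innings_pitched

# A hypothetical pitcher who allows 70 earned runs over 200 innings:
print(round(era(70, 200), 2))  # 3.15
```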

The biggest weakness of ERA is how reliant it is on the defense behind the pitcher. We're not talking about errors here (runs that score because of errors are unearned), we're talking about a plain bad defense. No one could fault a pitcher who had Roger Dorn as a third baseman. Anyone who has played MLB "The Show" as a starting pitcher, only to throw their PlayStation controller across the room like a child when your stupid SS can't field a simple grounder, ruining your 7 perfect innings, knows this. Screw you, Derek "Ol' Cement Shoes" Jeter.

Another issue is that of whom you play regularly, and what league you play in. In a sense, this is an argument that should be familiar to a lot of SEC fans. If you are in a division where you have to play 2 or 3 of the best-hitting teams in the league 18 times a season, you are of course going to have a higher ERA. The better the competition, the more leeway you should give people for bad games and losses. Further still, the cowardly National League still clings to the archaic idea that a pitcher should give up a "free out" every 9 batters, which leads to much crowing over artificially lower ERAs in the National League.

Regardless, there's a reason ERA is still the standard for judging a pitcher's overall performance: it works, and it's easy to understand. In the here and now, you have an idea of how many runs (per 9 innings) you can expect your pitcher to give up. All the weaknesses become moot when you are a fan trying to second-guess a manager's decisions on whom to start, as your pitchers all have the same defense and schedule. Managers, of course, have many more things to go on, but as far as good metrics the general populace can understand, it's perfect.

Along comes WHIP. WHIP was actually invented by the father of fantasy baseball (and therefore fantasy sports in general) in 1979. Over the years it entered the mainstream, and in the last decade or so it has become one of the few "Sabermetrics" to do so. I'd argue this is due to a combination of it being such a core stat in fantasy, but also largely due to it being another easy-to-understand statistic. In essence, WHIP is measuring the on-base percentage that the pitcher allows. It's not *exactly* that, but it's basically OBP in different units. A 1.0 WHIP means a pitcher is letting an average of one man on (through hits OR walks) per inning. An MLB-average WHIP is something like 1.3, with the best pitchers below 1.1 and the worst above 1.5. That's a fairly tight distribution! It means the average pitcher is only letting on about 1.8 fewer baserunners (hits plus walks) per 9-inning game than the bad ones.
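The formula is right there in the name. A quick sketch, again with made-up numbers, including the 1.8-baserunners-a-game arithmetic from above:

```python
def whip(walks, hits, innings_pitched):
    """WHIP: baserunners allowed via walks and hits, per inning pitched."""
    return (walks + hits) / innings_pitched

# A roughly league-average line (illustrative): 60 BB + 200 H over 200 IP.
print(round(whip(60, 200, 200), 2))  # 1.3

# The gap between an average (1.3) and a bad (1.5) WHIP, over 9 innings:
print(round((1.5 - 1.3) * 9, 1))  # 1.8 extra baserunners a game
```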

One can make the argument that WHIP is subject to the defense behind a pitcher just like ERA is. I'd argue that while this is true, a pitcher is in better control of his WHIP than his ERA. Whether the pitcher is a K guy or a ground-out guy, he's going to try to control the tempo, and more times than not, the hits and walks are his fault. It's also just a cleaner stat than ERA, which can be driven by any number of game-time issues, and it has the virtue of a high number of measurements, which I'm always for. More measurements ensure that the noise of a bad play here and there gets washed out. If you've got the Phillies middle infield behind you, you may need to pray, or buy them AARP memberships. Either way.

For my money, I'd use WHIP ahead of ERA. That being said, it's when you couple the two measurements that you get a much better idea of a pitcher's worth. A pitcher may have a great ERA and a terrible WHIP. That's indicative of a pitcher who's been letting a lot of guys on and bailing himself out. A lucky tear? Maybe he's just really good under pressure? I don't want to find out; his luck is about to run out, and his ERA is going to inflate like Kirstie Alley. What about a pitcher who's got a terrible ERA but a low WHIP? Maybe this pitcher has had a few bad plays behind him, maybe he throws the occasional hanging slider (low WHIP, high ERA right there). Either way, he's an unlucky pitcher whose ERA is due to drop off like Pauly Shore's career.

The best pitchers have a low ERA and WHIP; the worst have a high ERA and WHIP. It is when you couple them together, however, that you get a fuller picture of the pitcher you're getting. Ground-out pitchers tend to be a little more annoyed at WHIP, as they put more balls in play, which lends itself to more chances to put men on base -- but more men on base always means the *chance* of more runs. So I'd argue looking at the two metrics together gives a good picture of not only the pitcher's performance but a sort of futures indicator.
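The two-metric read above boils down to four quadrants. Here's a toy sketch of that logic; the cutoffs (4.0 ERA, 1.3 WHIP) are just rough illustrative numbers pulled from the league averages mentioned earlier, not anything official:

```python
def read_on_pitcher(era, whip, era_cut=4.0, whip_cut=1.3):
    """Rough quadrant read from pairing ERA and WHIP (illustrative cutoffs)."""
    good_era, good_whip = era < era_cut, whip < whip_cut
    if good_era and good_whip:
        return "the real deal"
    if good_era:  # good ERA, bad WHIP: bailing himself out of jams
        return "living dangerously -- expect the ERA to inflate"
    if good_whip:  # bad ERA, good WHIP: unlucky
        return "unlucky -- expect the ERA to come down"
    return "start praying"

# A pitcher with a shiny ERA but lots of traffic on the bases:
print(read_on_pitcher(2.9, 1.45))
```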

Finally, I'd like to leave you with the statement that these are by no means the only metrics that can define a pitcher. K/9, K/BB and the like can be a good indicator of the RAW talent of a pitcher. While WHIP and ERA both have an element outside the control of the pitcher, a pitcher must also play to the strengths of the defense around him. More than that, a coach needs to understand which of his pitchers is better at what. The less said about BABIP for pitchers, the better. That's a discussion I can only have in the presence of a suicide hotline counselor.
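For completeness, those rate stats are just as simple to compute. Another quick sketch with invented numbers:

```python
def k_per_9(strikeouts, innings_pitched):
    """K/9: strikeouts scaled to a 9-inning game."""
    return 9 * strikeouts / innings_pitched

def k_per_bb(strikeouts, walks):
    """K/BB: strikeouts per walk issued."""
    return strikeouts / walks

# A hypothetical power pitcher: 220 K and 50 BB over 200 IP.
print(round(k_per_9(220, 200), 1))   # 9.9
print(round(k_per_bb(220, 50), 1))   # 4.4
```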

Disclaimer: VandyTigerPhD is an arrogant ass who loves the sound of his own voice. At no point in his incoherent ramblings should you expect to find anything resembling a point, thesis, or really any sort of flow. He also may have been in some sort of drug-based stupor from a combination of whiskey and Robitussin.