The weekly Mailbag column is your chance to ask questions about whatever is on your mind, e.g. what we do, how we do it, why we rank a certain player, or anything else. Just attach your questions as a comment to this article or email us at firstname.lastname@example.org.
This week’s question comes from KaneCountyKeith who asks:
After reading your articles for the last couple of months, it is clear that you have somewhat of an ‘anti-tools’ bias. You have repeatedly made negative comments about players like Devaris Gordon and Jeremy Jeffress. Is it my imagination, or are you just another ‘stathead’ crusading against the scouting community?
Well, that’s a lot of question there. First, a brief comment on the players you specifically mentioned. I think Dee Gordon has a fairly high ceiling and I don’t really have anything against him; I just think others have made him out to be more than he really is right now because of his athleticism and ‘famous father’. The point I have made about him earlier in the season is that he is only 5’11”, is extremely slight in build, and is a 21-year-old who isn’t even putting up a .750 OPS in the Midwest League (MWL). All the tools in the world won’t make up for the fact that he is a ‘fringy’ prospect at this stage of his career. Jeffress is another story entirely. He has repeatedly been classified as a Top 100 prospect by others, all on the strength of one skill: he can hit 100 mph with his fastball. While you can’t teach a 100 mph fastball, pitching is much more than velocity on one pitch, and his results just haven’t justified the high rankings. While he is a legitimate prospect, at least in our book, he isn’t an elite prospect…and that was even before his latest suspension.
Which I guess brings me to the next part of what you ask. If we have to choose a camp, we are closer to ‘stathead’, as you call it, than we are to traditional scouting methods. Personally, I don’t like to think of myself as either, because prospect evaluation requires some of the science performed in the ‘stathead’ community in combination with some of the art performed in traditional scouting; it isn’t an either/or proposition. But I think it is important to note that what we do at Diamond Futures is less about the individual player than it is about prospect evaluation as a process. What I mean by this is that we gain nothing by identifying an Albert Pujols, a Pablo Sandoval or a Carlos Santana before anyone else (all of whom we have identified in the past). Nor do we lose anything when we are high on a player like Andy Marte who hasn’t panned out. What we try to do is look at a player and determine, based on all of the information available to us, the probability of various levels of future performance for a player with those characteristics. To us they are simply Player X, with a given set of data points (not all of which are necessarily performance-based). Based upon that, we have determined from historical data what the odds are of future success.
I think the problem that many talent evaluators have is the old statistical conundrum of confusing ‘correlation’ with ‘causation’. In other words, they see that most of the truly elite Major League ballplayers have some outstanding tools. From there they immediately jump to the illogical conclusion that outstanding tools create elite Major League ballplayers. That logically doesn’t follow, and I did a little experiment that I hope demonstrates the point.
I took a look at the current Major League Top 100 pitchers, ranked by Component ERA, with at least 8 games started and under 35 years of age (I needed the age cutoff because we have only produced Prospect Rankings since 1998). I then went back through our Prospect Top 100 lists since 1998, and did the same with Baseball America’s. A brief preface: this is not a knock on Baseball America, as I have personally been a subscriber to their print edition for nearly 25 years and think they do a wonderful job. I chose Baseball America because they tend to be extremely ‘toolsy’ and because they have their Top 100 lists readily available (I will begin posting our historical lists on this site next week). What I wanted to see was how many of these Top 100 were identified by our approach and how many were identified by a more ‘tools’-focused one. I used 100 pitchers for three reasons: 1) it is large enough to yield meaningful data; 2) it is easy to convert into percentages; and 3) it represents, approximately, the top half of Major League starting pitchers.
From the Top 100, between our two methods, we identified 73 of them. Three names that weren’t identified (Mark Buehrle, Johan Santana and Dan Haren) would have been, had it not been for unique circumstances that had them advance at least three levels in their rookie seasons. So the first interesting observation was that roughly a quarter of the best pitchers in baseball never appeared on a Top 100 list. That should give you some idea as to the upper limits of talent evaluation. Baseball America correctly identified 65 of the Top 100; our lists correctly identified 70. The three players Baseball America had that we didn’t (Nick Blackburn, Josh Johnson and Ricky Romero) were all ranked between 100-150 by us in the same year. Likewise, three players we identified just missed Baseball America’s Top 100, though they made their team lists: John Lackey was the Angels’ #3 prospect in 2002, Sean Marshall was the Cubs’ #6 prospect in 2006 and Aaron Laffey was the Indians’ #5 prospect in 2008. But I want to focus on the other five players that our methods identified but Baseball America wasn’t as high on.
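For readers who want to check the arithmetic, the counts above tie together with simple inclusion-exclusion. This is just a sketch using the numbers stated in the paragraph; the variable names are mine for illustration, not part of our methodology.

```python
# Sketch of the overlap arithmetic from the experiment above.
# Counts come from the article; inclusion-exclusion fills in the rest.

top100 = 100     # top half of ML starters, ranked by Component ERA
combined = 73    # pitchers identified by either Top 100 list
ba_total = 65    # pitchers on a Baseball America Top 100
ours_total = 70  # pitchers on a Diamond Futures Top 100

overlap = ba_total + ours_total - combined  # pitchers on both lists
ours_exclusive = ours_total - overlap       # identified only by us
ba_exclusive = ba_total - overlap           # identified only by BA
missed = top100 - combined                  # never on either Top 100

print(overlap, ours_exclusive, ba_exclusive, missed)  # 62 8 3 27
# Of our 8 exclusives, 3 just missed BA's lists, leaving the 5
# pitchers the article turns to next.
```

Note how the “roughly a quarter” who never made a Top 100 (27 pitchers) and the five players discussed next both fall out of the same counts.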
The first player is Kyle Lohse. Lohse was rated by Baseball America as the Twins’ #7 prospect in 2001. They described him as a “right-hander without a blazing fastball” who had a solid slider and change…in other words, he lacked a big fastball. The next player is Scott Baker, who made it to #10 on the Twins’ list in 2005. About Baker they said that he “doesn’t have a true out-pitch” and at 24 had already “reached his ceiling.” You can read that as: his demonstrated performance was because he was a polished pitcher without significant tools. Next up is Joe Saunders, who checked in as high as #9 on the Angels’ 2006 list. About Saunders, they said he “doesn’t have overpowering stuff”, his best pitch is a “deceptive change”, and he doesn’t possess a “put-away breaking ball”. Once again, he isn’t very ‘toolsy’. That same year, Ricky Nolasco checked in as the Marlins’ #8 prospect. Although Baseball America described Nolasco as having two above-average pitches, his ‘command’ was his strongest ‘tool’. And finally we have Shairon Martis, whose highest ranking was #18 on the Nationals’ list in 2008. Martis was never a Baseball America favorite, as he was described as “lacking overpowering stuff” and supposedly had “limited upside”.
Now don’t get me wrong, I don’t quibble with the descriptions provided by Baseball America. The problem I have is that talent evaluators who focus heavily on ‘tools’ don’t uncover anyone that those more focused on performance would otherwise miss. It is a rare situation when our data points fail to identify a ‘toolsy’ player who succeeds without demonstrating measurable plus performance. But the converse is just not true: there are far more players in the history of baseball who weren’t exceptionally ‘toolsy’, but had successful Major League careers. It isn’t because we don’t like players with ‘tools’, and it certainly isn’t because we don’t believe in what traditional scouting methods achieve. It is simply that our methods utilize measurable variables that have been validated by historical results; we just happen to do it better.