Even though this is a financial website, I thought it might be fun to assess the 2017 NFL draft through the lens of the JXM to see how top prospects withstand our unique scoring system. Focusing on data from college statistics and combine results, I attempted to form the basis of a JXM fundamental, Level 1-type analysis.
Since defensive end Myles Garrett was the number one pick in the draft, I decided to concentrate on how he and his fellow defensive ends measure up. The draft went heavy on defensive ends: seven were selected in the first round and a total of seventeen in the first four rounds. Coincidentally, my favorite team (the Philadelphia Eagles) also selected a defensive end in the first round, Derek Barnett.
My first step was to gather information on the top seventeen defensive ends taken in the draft. The table below lists these players in the order they were drafted, with a column that calls out their selection round and overall draft position:
The statistics I focused on came from their college and combine performance and are listed below:
While I simply selected all available combine statistics, the college stats were chosen to measure performance at the defensive end position, leaving out stats such as interceptions and TD returns. The FBS rank was added to reward strength of schedule, weeding out results that may have been padded by weak competition.
It should be noted that a complete analysis is not possible without access to personal interviews, workout results, Wonderlic scores, video analysis, or privileged information on character and injury concerns. Further, several players listed may be projected to switch positions in the pros. This basic study is based only on quantifiable and readily available data.
I gathered the following information on college stats, filling any missing data with the sample average (highlighted in red font):
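This mean-imputation step can be sketched in a few lines of Python; the numbers below are made up for illustration:

```python
from statistics import mean

# Hypothetical sample: solo-tackle totals for five prospects, with a
# missing entry recorded as None.
solo_tackles = [50, 32, None, 21, 13]

# Replace each missing value with the average of the observed values,
# mirroring how gaps were filled in the tables here.
observed = [v for v in solo_tackles if v is not None]
fill_value = mean(observed)
filled = [v if v is not None else fill_value for v in solo_tackles]
```

A more involved version would impute per column, but the idea is the same.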
These stats are ordered by the position in which each player was drafted. The green/yellow/red colors highlight the best and worst values in each column. For example, McKinley's 50 solo tackles are the most in his draft class and are thus shaded dark green, while Hendrickson's 13 solo tackles are the fewest of the group and are shaded dark red. The same color scheme applies to variables where a lower value is better, such as the dark green calling out Walker's outstanding -118 yards lost from sacks.
I gathered the following information on the prospects’ combine results, again filling any missing data with the sample average (highlighted in red font):
Again, these stats are ordered by the position in which each player was drafted and use the same color scheme.
JXM Level 1
I applied a JXM Level 1-type system to analyze these players under four criteria: equal weighting of all stats, college stats only, combine stats only, and my own criteria. Each of the tables below ranks the players by JXM score against the round in which the player was drafted and the overall pick order.
With each measure weighted equally I came up with the following scores:
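A minimal sketch of how an equal-weight score could be computed, assuming each stat is min-max scaled to 0-1 and lower-is-better columns are flipped; the players and numbers here are hypothetical:

```python
# Hypothetical stat lines for three prospects: (solo tackles, sacks,
# 40-yard-dash time). The 40 time is a lower-is-better column.
players = {
    "A": (50, 10.0, 4.9),
    "B": (32, 6.5, 4.6),
    "C": (13, 8.0, 5.1),
}
LOWER_IS_BETTER = {2}  # column index of the 40 time

def equal_weight_scores(players, lower_is_better):
    columns = list(zip(*players.values()))
    scores = {name: 0.0 for name in players}
    for i, col in enumerate(columns):
        lo, hi = min(col), max(col)
        for name, stats in players.items():
            # Min-max scale each stat to 0..1 so every column counts equally.
            x = (stats[i] - lo) / (hi - lo) if hi != lo else 0.5
            if i in lower_is_better:
                x = 1.0 - x  # flip so a faster 40 scores higher
            scores[name] += x
    return scores

scores = equal_weight_scores(players, LOWER_IS_BETTER)
ranking = sorted(scores, key=scores.get, reverse=True)
```

Any scaling that puts every column on a common footing would do; min-max is just the simplest.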
As can be seen, an equally weighted analysis does not fully describe how NFL GMs evaluated the defensive ends in this draft class, explaining only 6.9% of the draft order. Second-round pick Walker stands out here, while third-round picks Willis and Hendrickson best first-round picks Barnett, Thomas, and top overall pick Garrett. Fellow first-round picks McKinley, Charlton, and Allen place in the middle and form a tightly packed triplet of mediocrity, while Harris fell to sixteenth place.
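The "explains X%" figures throughout can be read as the R-squared of a straight-line fit of draft position against score. A small sketch with made-up pick numbers and scores:

```python
# Hypothetical overall pick numbers and JXM scores for six prospects.
picks = [1, 14, 17, 26, 28, 51]
jxm_scores = [55.0, 48.0, 61.0, 40.0, 52.0, 58.0]

def r_squared(xs, ys):
    """Share of the variance in ys captured by a straight-line fit on xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov * cov / (var_x * var_y)

explained = r_squared(jxm_scores, picks)  # 0.069 would read as "6.9%"
```

An R-squared of 1.0 would mean the scores perfectly predict draft order; values near zero mean they barely relate at all.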
Given the likelihood that teams value certain statistics over others, it is not surprising that an equally weighted scoring system would not reflect the actual draft order. With this in mind, I focused only on college stats and came up with the following scores:
Considering college stats tailored to the defensive end position results in a slightly more predictive model, but it explains only 15.9% of the draft order. Three of the top four scores belong to first-round selections, as Barnett, Allen, and McKinley place second, third, and fourth, respectively. However, Walker dominates this analysis and scores seven points higher than his closest competitor, even though he lasted until the second round. Third-round picks Hendrickson and Willis also have strong scores while the other first-round picks struggled. Third overall pick Thomas has a decent showing, as does Harris, but Charlton lags to twelfth and Garrett places two spots lower with a score in the mid-thirties.
While certainly not perfect, these ranks do make it clear that Philadelphia, Washington, and Atlanta value college stats, while Cleveland and Dallas do not. Why would Cleveland take Garrett first overall and Dallas jump on Charlton later in the first round? Why was Walker only the eighth end drafted? Would combine results reveal their logic?
If only combine stats are considered I came up with the following scores:
Combine stats predict the overall top pick but are otherwise fairly worthless, explaining only 5.1% of the overall draft order. Garrett finally earns the top spot in a category, but he is the only first-round selection that finishes in the top six. Willis scores a close second to Garrett even though he wasn't even the first end selected in the third round. While Charlton, Thomas, and Walker finish in the top ten here, their scores are indiscernible from a group of six ends that includes one player taken as late as the fourth round. Furthermore, fellow first-round selections McKinley, Barnett, Allen, and Harris bring up the rear with four of the five worst scores.
Garrett's top score may reveal how strongly Cleveland believes in the combine, while it appears Atlanta, Philadelphia, and Washington do not. I cannot figure out what Dallas or Miami were thinking with their selections of Charlton and Harris, respectively; neither distinguished himself in college or at the combine. Further, Thomas was the third pick in the draft but posts only middling ranks in both the college and combine categories.
Obviously, the combine was not a very good predictor of draft order. Maybe NFL GMs are looking at something more sophisticated, blending combine and college stats? Here I tried to guess which variables NFL GMs used to analyze defensive end prospects, focusing on sacks and tackles in college with a slight tilt toward combine measurements including height, weight, arm strength, and the vertical jump. Considering these factors reveals the following ranks:
This analysis did improve the predictive nature of our screener, but it still explains only 25.6% of the draft order. Walker again wins in dominant fashion, followed by Allen, Barnett, Willis, and Hendrickson. Of the top five, only Barnett and Allen were first-round picks, though McKinley posted a strong score and places sixth. Thomas, Charlton, and Garrett reach the top ten but post only pedestrian scores. Again, Harris does not place well and falls all the way to twelfth. Obviously, GMs must be looking at something else.
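A weighted score like the one tried above can be sketched as a weighted sum over already-normalized stats; the stat names, weights, and numbers below are hypothetical stand-ins for the tilt just described:

```python
# Hypothetical weights tilted toward college production (sacks, tackles)
# with a smaller share for selected combine measurements.
weights = {"sacks": 0.25, "solo_tackles": 0.25, "height": 0.1,
           "weight": 0.1, "bench": 0.15, "vertical": 0.15}

# Hypothetical stats already scaled to 0..1 per column.
players = {
    "A": {"sacks": 1.0, "solo_tackles": 0.8, "height": 0.5,
          "weight": 0.6, "bench": 0.4, "vertical": 0.7},
    "B": {"sacks": 0.6, "solo_tackles": 1.0, "height": 0.9,
          "weight": 0.3, "bench": 0.8, "vertical": 0.5},
}

def weighted_score(stats, weights):
    # Weighted sum; with weights summing to 1.0 the score stays in 0..1.
    return sum(weights[k] * v for k, v in stats.items())

scores = {name: weighted_score(s, weights) for name, s in players.items()}
```

Shifting weight between columns is exactly what separates this screener from the equal-weight one.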
Given that college stats and the combine did not seem to be very good predictors, I decided to form a Guru-like level that focused on how the experts would rank these players. I came up with the following:
The experts beat all Level 1 screeners, explaining 68.9% of the overall draft order. The dominant score of Garrett certainly explains why he was the consensus number 1 overall pick. The experts also solidify the first round status of Thomas, Allen, Charlton, McKinley, Barnett, and Harris. However, they did miss on Walker and Kpassagnon, who were ranked in the lower third of this list but were drafted much higher. This is also the only area where Charlton and Harris score well, suggesting that Dallas and Miami value the opinion of experts above all else.
It is probably safe to assume that these experts have access to some insider information and consider more than the data available to our Level 1 analysis. However, there is still over 30% of the draft order they couldn’t explain.
In an attempt to fill that 30% gap, I tried one last measure. In this last table, I blended my own user analysis with that of the experts:
This analysis does improve on expert opinion, explaining 75.2% of the draft order and doing a very good job of predicting the first round selections; it guesses the top ten almost perfectly. However, first round selections Barnett, Charlton, McKinley, and Harris score very closely to Willis and Walker, who went much later. This suggests that the Eagles may have reached a bit by taking Barnett as high as they did. On the other hand, it seems like Willis and Walker lasted longer than they should have, while Kansas City may have missed by selecting Kpassagnon so high in the second round.
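One simple way to blend a user score with an expert score is a plain average on a common scale; the sketch below uses made-up scores for three of the ends discussed:

```python
# Hypothetical Level 1 ("user") and Guru ("expert") scores on a common
# 0-100 scale; the numbers are invented for illustration only.
user   = {"Garrett": 62.0, "Walker": 71.0, "Willis": 55.0}
expert = {"Garrett": 90.0, "Walker": 48.0, "Willis": 50.0}

# Equal-parts blend; unequal weights would tilt toward one source.
blend = {name: (user[name] + expert[name]) / 2 for name in user}
order = sorted(blend, key=blend.get, reverse=True)
```

The actual blend used here is proprietary, but any convex combination of the two score sets works the same way mechanically.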
The Sum2 screener does surprisingly well in explaining the draft position of the defensive ends selected in the 2017 draft. A combination of expert opinion and measurable statistics supports most of the draft order. Individually, the Level 1 and Guru screeners reveal how certain GMs set their draft board: the Level 1 screeners explained which information the teams focused on, while the Gurus revealed that most teams listen to expert opinion.

In the case of Garrett, Cleveland's pick was backed by the experts as well as his freakish physical abilities. The Eagles' pick of Barnett was backed by his college stats and certain combine measurements, as was Atlanta's selection of McKinley and Washington's pick of Allen; all three teams were also supported by the experts. Thomas was a fine scorer but didn't stand out anywhere aside from his Guru score. Though the experts were not high on Kansas City's pick of Kpassagnon, he did perform well at the combine. Dallas certainly did not perform a Level 1-type analysis with their selection of Charlton, as they seemed to rely only on expert opinion. The same can be said about Miami.

Based on this analysis, the steals of the draft were clearly Denver's pick of Walker in the second round and Cincinnati's find of Willis in the third.
This exercise was interesting because it shows the potential of a simple JXM analysis, explaining over 75% of the 2017 draft order without the benefit of insider information. It was also able to reveal how certain teams evaluated talent, which could be useful to opposing GMs in future drafts.
Now imagine how powerful such a tool would be if it were expanded to include the following:
- Consider injury issues and character concerns
- Include more defensive end prospects
- Analyze every position with criteria tailored to reflect the requirements of each
- Populate its statistics database automatically
- Test and refine it over time to prove its effectiveness
- Adjust it regularly to keep it current
Such a tool may look like the following:
- Level 1 – Fundamentals – How does the player stack up in quantifiable measurements?
- Level 2 – Technical – Is the player moving up or down the draft board?
- Level 3 – Valuation – Was the player taken in the right round?
- Level 4 – Guru – How do all of those websites that assign prospect grades feel about this player?
- Level 5 – Rules – What do the proprietary metrics say about this player?
- Level 6 – Growth – How did the player develop from junior to senior year?
- Level 7 – New Growth – Did the player continue to grow as a senior or take a step back from the growth displayed as a junior?
This theoretical model could prove valuable to a GM if it could find the best players while predicting how competing GMs will operate during the draft.
Relation to the JXM Stock Analysis
I wrote this partly to have a bit of fun, but also to show the value of the JXM system and to explain how it works. The JXM stock analysis works similarly to the above but has already been expanded into a much more extensive analysis. Over the years we have built an expansive database that considers nearly 300 ratios. It can evaluate up to 75 companies in a matter of minutes. It considers seven levels through a variety of proprietary algorithms. It has been tested and refined over a five-year period. Our methods have proven to continually find winners. Our system works.