Yesterday I wrote that Corsi is better at predicting future goals than expected goals are. This post is a brief update on that, and I'm going to get right to the point, so if you want more details about the theory, methods, etc., I recommend reading the first post.
In that first post I tested how well you could predict a team’s goal ratio in the second half of the season based on the first half of the season, and I found that Corsi is more predictive than expected goals are, although the gap has been narrowing over time. Since the goal is to predict future games using past results, using a chronological ordering of games is the only method that makes sense.
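That split-half test can be sketched roughly like this. The game-record fields (`cf`, `ca`, `gf`, `ga`) and the way the metric is averaged are my own illustration of the idea, not the post's actual data format or pipeline:

```python
import numpy as np

def split_half_correlation(games_by_team, metric):
    """For each team, average `metric` over the first half of its games
    (in chronological order) and compute the goal ratio GF / (GF + GA)
    over the second half, then return the Pearson correlation of the
    two quantities across teams."""
    first_half_metric = []
    second_half_goal_ratio = []
    for games in games_by_team.values():
        half = len(games) // 2
        first, second = games[:half], games[half:]
        first_half_metric.append(np.mean([metric(g) for g in first]))
        gf = sum(g["gf"] for g in second)
        ga = sum(g["ga"] for g in second)
        second_half_goal_ratio.append(gf / (gf + ga))
    return np.corrcoef(first_half_metric, second_half_goal_ratio)[0, 1]

# Example: Corsi share (shot attempts for / total attempts) as the metric.
corsi_pct = lambda g: g["cf"] / (g["cf"] + g["ca"])
```

Running this once with first-half Corsi and once with first-half xG (as the `metric` callable) gives the two correlations being compared.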
However, you could argue that splitting the season in half chronologically might tend to understate true talent differences between teams, because rosters change over the course of a season as players get traded or injured. On top of that, coaching changes can sometimes have a dramatic effect on how a roster performs. One thing we can do to try to account for that is to split the games in a way that ensures the two halves have comparable rosters.
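One common way to get comparable rosters is to alternate games between the two halves instead of cutting the schedule in the middle, so that a midseason trade, injury, or coaching change affects both samples roughly equally. A minimal sketch of that idea (my own illustration, not necessarily the exact split used in the post):

```python
def alternating_split(games):
    """Split a chronologically ordered list of games into two halves by
    alternating games (1st, 3rd, 5th... vs 2nd, 4th, 6th...), so roster
    changes over the season land in both halves about equally."""
    return games[0::2], games[1::2]
```

The trade-off is that this split can no longer be used for true out-of-sample prediction, since each half contains games from the whole season.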
Continue reading “Corsi vs xG, Part 2”
This is going to be a long post, so I’m going to front-load the top line results (with a little bit of history) and then get into a longer discussion of some of the details after that for anyone who is interested.
One of the earliest and arguably still most important discoveries in hockey statistics is that, at the team level, past goal scoring is not a very good predictor of future scoring. Early analysts found that you could do a better job of predicting how well teams would score at even strength by looking at shot attempts instead of goals. Over time, many people began to argue that because there are differences in the quality of shots, improvements could be made by adjusting each shot for its likelihood of becoming a goal, based on factors like how close to the net the shot is and what type of shot was taken. We call this adjusted measure Expected Goals, or xG.
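The core idea is simple: an xG model assigns each shot a probability of becoming a goal, and a team's expected goals are the sum of those probabilities. A toy sketch, with a deliberately simplistic made-up probability model (real models are fit to years of shot data and use many more factors):

```python
def expected_goals(shots, goal_prob):
    """Total xG for a list of shots: each shot is weighted by its
    modeled probability of becoming a goal, then summed."""
    return sum(goal_prob(shot) for shot in shots)

def toy_goal_prob(shot):
    """A made-up shot-quality model for illustration only: shots from
    closer in are more likely to go in, and rebounds double the odds."""
    base = max(0.02, 0.25 - 0.003 * shot["distance_ft"])
    return min(1.0, base * (2.0 if shot["is_rebound"] else 1.0))
```

So a team whose shots are mostly from the slot will post a higher xG than a team taking the same number of attempts from the point, which is exactly the adjustment Corsi does not make.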
The first example of an xG model that I’m aware of was created by former Florida Panthers analyst Brian MacDonald back in 2012. Unfortunately, his research does not appear to be available any longer. [UPDATE: Since publishing this article, it’s come to my attention that statistics like expected goals go back to at least 2006, prior to the NHL first publishing shot location data!] [UPDATE 2: And an even older xG model from Alan Ryder in 2004.] It wasn’t until 2015 that an xG model gained wider public attention, when Dawson Sprigings (who now works for the Colorado Avalanche) and Asmae Toumi collaborated on a model for Hockey Graphs (for lack of a better name, I will refer to this as the DA model for the rest of this post). According to their article, expected goals are better at predicting future results than Corsi is. This was the breakthrough that many people had been waiting for: a metric that tried to account for the quality of shots rather than just their quantity.
In the years since then, a number of people have created their own xG models. While the raw data for the DA model is not publicly available, the makers of more recent metrics have put their data online so that anyone can use it. While it is impossible to evaluate every model that’s out there, I collected data from three of the most commonly used public models to do some new testing of these metrics. The data I’m using comes from Moneypuck, Evolving Hockey, and Natural Stat Trick.
Continue reading “Corsi Is Better At Predicting Future Goals Than Expected Goals Is”
The other day on Twitter I said this:
This is a topic I’ve briefly looked into in the past. I posted the results to Twitter, but I deleted all my old tweets at one point and that data is gone, so I figured I’d work on this question again, and this time put the results somewhere more thorough and more permanent. It’s widely acknowledged, at least among people who follow hockey statistics, that goaltending performance is subject to a significant element of randomness. This has led some people, including myself, to argue that teams should try to limit their spending on goaltending, since one goalie would seem to be as likely as any other to get good results. In particular, I’ve argued some variation on “Nobody knows who the good goalies are” on many occasions.
If paying more money doesn’t actually get better results in net, then it’s pointless to have a highly priced goaltender, especially in a salary cap system where every dollar spent on one roster spot reduces the money available for every other roster spot. To figure out whether there’s any benefit to signing a highly priced goalie, the simplest approach is to compare goalie cap hits to goalie results and see what it looks like. So I’ve done that. First, some brief notes on methodology before I get to the results.
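The simplest version of that comparison is a correlation between cap hit and some results measure across goalies. A sketch of the idea, using save percentage as a stand-in for "results" (the data layout and the numbers in the example are made up for illustration; the post's actual methodology may differ):

```python
import numpy as np

def cap_hit_vs_results(goalies):
    """Pearson correlation between cap hit and save percentage across
    goalies; `goalies` is a list of (cap_hit, save_pct) pairs. A value
    near zero would mean spending more buys no better goaltending."""
    cap_hits, save_pcts = zip(*goalies)
    return np.corrcoef(cap_hits, save_pcts)[0, 1]
```

In practice you would also want to weight by shots faced, since a backup's save percentage over 15 starts is far noisier than a starter's over 60.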
Continue reading “Do NHL Teams Get Better Goaltending By Spending More Money?”