For a few months now, I have been participating in the Good Judgment Open (GJO) forecasting tournament. One of the questions I am trying my luck at is:
How many total hurricanes will occur in the Atlantic Ocean in the 2022 hurricane season, according to the National Hurricane Center?
GJ Open, 2022
The answer should be probabilistic in nature: rather than answering with a single number (“eight!”), the question asks for probabilities to be assigned to various bins or categories: “3 or fewer”, “between 4 and 6”, “between 7 and 9”, “between 10 and 12”, “between 13 and 15” and “16 or more”. Answers can be updated at any time during the question’s lifetime. Once the question is closed, the answers are scored using Brier’s probability score (a.k.a. the “Brier score”).
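For reference, the original multi-category formulation of that score for a single forecast over $R$ bins is, if I have it right,

$$\mathrm{BS} = \sum_{i=1}^{R} \left(f_i - o_i\right)^2,$$

where $f_i$ is the probability assigned to bin $i$ and $o_i$ equals 1 for the bin that eventually verifies and 0 for all other bins. The score therefore ranges from 0 (perfect) to 2 (worst possible).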
The question closes tomorrow (December 1) and, on the assumption that the current tally (8) will remain unchanged, it’s time to analyse my approach.
My approach
As for an approach, I figured I’d formulate a prior distribution which I’d then update using current information that I deemed relevant. For the prior, I used data from the HURDAT database, which was developed and is maintained by the US National Hurricane Center. HURDAT includes information about all tropical storms that have occurred from 1851 through 2021. For every tropical storm, the location of the storm’s eye is recorded at 00Z, 06Z, 12Z and 18Z during the lifetime of the storm. At these times, the storm is also classified, for example as a tropical depression, a tropical storm or a hurricane. I simply assumed that the information contained in HURDAT is correct. Using this information, I constructed the time series of the number of hurricanes in the June through November season, as shown in the figure below.
Taking the time series at face value, I constructed a prior by simply looking at the frequencies of hurricane occurrence per year:
Bin definition | Count | Fraction |
---|---|---|
3 or fewer | 39 | 0.23 |
between 4 and 6 | 82 | 0.48 |
between 7 and 9 | 34 | 0.20 |
between 10 and 12 | 14 | 0.08 |
between 13 and 15 | 2 | 0.01 |
16 or more | 0 | 0.00 |
Sum | 171 | 1.00 |
Prior distribution of hurricane counts, 1851 – 2021
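For the curious: the sketch below shows roughly how such a prior can be computed. It is not the exact script I used; it assumes a local copy of the comma-separated HURDAT2 file (here called `hurdat2.txt`), in which header lines start with a storm identifier such as `AL011851` and data lines carry the storm status (`HU` for hurricane) in their fourth field.

```python
from collections import Counter, defaultdict

# The GJ Open bins: (label, lower bound, upper bound).
BINS = [("3 or fewer", 0, 3), ("between 4 and 6", 4, 6), ("between 7 and 9", 7, 9),
        ("between 10 and 12", 10, 12), ("between 13 and 15", 13, 15), ("16 or more", 16, 999)]

def hurricanes_per_season(path="hurdat2.txt"):
    """Per year, count the storms that reached hurricane status ('HU')
    at any fix during the June-November season."""
    counts = defaultdict(int)
    year, seen_hu = None, False
    with open(path) as fh:
        for line in fh:
            fields = [f.strip() for f in line.split(",")]
            if not fields[0]:
                continue
            if fields[0][:2].isalpha():            # header line, e.g. "AL011851, UNNAMED, 14,"
                if seen_hu:
                    counts[year] += 1
                year, seen_hu = int(fields[0][-4:]), False
            else:                                  # data line: date, time, record id, status, ...
                month, status = int(fields[0][4:6]), fields[3]
                if status == "HU" and 6 <= month <= 11:
                    seen_hu = True
    if seen_hu:                                    # close out the last storm in the file
        counts[year] += 1
    return dict(counts)

def prior_distribution(yearly_counts):
    """Fraction of years falling in each bin."""
    tally = Counter()
    for n in yearly_counts.values():
        for label, lo, hi in BINS:
            if lo <= n <= hi:
                tally[label] += 1
    return {label: tally[label] / len(yearly_counts) for label, _, _ in BINS}

prior = prior_distribution(hurricanes_per_season())
```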
As for updating the prior, I figured that three pieces of information would have predictive value:
- the trend in the timeseries;
- Sea Surface Temperature (SST); and
- the observed number of hurricanes to date.
I spent a lot of time trying to combine these three pieces of information in such a way that I could arrive at a posterior distribution. Alas, I did not succeed. Eventually, I let go of the trend and of the Sea Surface Temperatures. My posterior was simply based on the hurricane occurrences during the 2022 season itself. Mind, the trend and the SST would likely have increased my estimate of the number of hurricanes, and as the 2022 season saw only a modest number of hurricanes, maybe my inability to take those predictors into account turned out to be a good thing ;).
The figure below shows the principle. For every hurricane, I determined when in the season it (first) occurred. That yielded 171 trajectories in the plot: one for every year in my data record. This allowed me, at any date during the lifetime of the question, to identify the years in which, by the same date, the number of hurricanes was identical to that of the current year. In the plot, the highlighted trajectories are those of the years where, on October 30, the number of observed hurricanes equaled 5. Actually, the selection is slightly less strict: I also included the trajectories where a 6th hurricane had been observed within the previous 15 days.
Example: on October 30, the hurricane count of the 2022 season stood at 5. In the HURDAT record, I identified 40 years where that was also true. I then determined the total number of hurricanes in the hurricane seasons of those years. Within those 40 years, the distribution was as outlined in the table below. Note that by that date, the first bin (“3 or fewer”) is, by construction, empty: every selected year had already seen at least 5 hurricanes.
Bin definition | Count | Fraction |
---|---|---|
3 or fewer | 0 | 0.00 |
between 4 and 6 | 39 | 0.97 |
between 7 and 9 | 1 | 0.03 |
between 10 and 12 | 0 | 0.00 |
between 13 and 15 | 0 | 0.00 |
16 or more | 0 | 0.00 |
Sum | 40 | 1.00 |
Conditional distribution of seasonal hurricane totals for years that had seen 5 hurricanes by October 30
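In code, the selection could look roughly like the sketch below. It assumes that the first-occurrence dates of every hurricane have already been extracted from HURDAT into a `first_occurrence` mapping (year to sorted list of dates); that name, and the details, are mine for illustration only.

```python
import datetime as dt
from bisect import bisect_right

BINS = [("3 or fewer", 0, 3), ("between 4 and 6", 4, 6), ("between 7 and 9", 7, 9),
        ("between 10 and 12", 10, 12), ("between 13 and 15", 13, 15), ("16 or more", 16, 999)]

def analog_distribution(first_occurrence, on_date, observed_count, window_days=15):
    """Distribution of seasonal hurricane totals over 'analog' years.

    first_occurrence: {year: sorted list of dt.date on which each hurricane of
                       that year was first classified as a hurricane}
    on_date:          the date on which we condition (a dt.date in the current year)
    observed_count:   number of hurricanes observed in the current year by on_date
    """
    analog_totals = []
    for year, dates in first_occurrence.items():
        same_day = dt.date(year, on_date.month, on_date.day)
        count_by_then = bisect_right(dates, same_day)
        # Strict match: the same hurricane count on the same calendar date ...
        match = count_by_then == observed_count
        # ... relaxed match: one hurricane more, but only if it arrived recently.
        if count_by_then == observed_count + 1:
            match = (same_day - dates[observed_count]).days <= window_days
        if match:
            analog_totals.append(len(dates))      # seasonal total of that analog year
    return {label: sum(lo <= t <= hi for t in analog_totals) / len(analog_totals)
            for label, lo, hi in BINS}
```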
Once this procedure was in place, I was able to apply it routinely using the current number of observed hurricanes. That resulted in the forecast shown below. The plot shows that the passing of time results in a gradual change of the probabilities. Hurricane occurrences (dashed vertical lines) often result in a much less gradual change. Both ‘events’ fit my belief of how the probabilities should change when the available information changes.
The initial forecast is identical to the climatology of hurricane occurrence: in the absence of any data to update the prior with, it remains static until the start of the season (June 1). From that date onward, information about the number of hurricanes observed to date is used to update the prior. For over three months, no hurricanes were observed, and the probability of this being a ‘3 or fewer’ year grew accordingly. In the first days of September, when the number of observed hurricanes still stood at 0, the probability of that bin exceeded 50%. That probability then took a hit upon the first, second and third hurricane occurrence, and dropped to zero as soon as the fourth hurricane occurred. The probability of four, five or six hurricanes remained stable for a long time. In early September, it started to dive as 4 – 6 hurricanes in the remaining three months of the season became less and less likely. This changed upon the rapid succession of hurricanes in September. In early November, as hurricanes Lisa, Martin and Nicole occurred (numbers 6, 7 and 8), the probability of the 4 – 6 bin dropped to zero and the probability of the 7 – 9 bin rose quickly.
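Put together, producing the daily forecast series amounts to little more than a loop over the days of the season, feeding the analog selection (the `analog_distribution` sketch above) with the hurricane count observed so far. The dates below are those from the table in the ‘Observed hurricanes’ section; again, this is an illustration rather than my actual code.

```python
import datetime as dt

# First-classification dates of the 2022 hurricanes (see the table in the
# "Observed hurricanes" section below).
HURRICANES_2022 = [dt.date(2022, 9, 2), dt.date(2022, 9, 7), dt.date(2022, 9, 18),
                   dt.date(2022, 9, 26), dt.date(2022, 10, 8), dt.date(2022, 11, 2),
                   dt.date(2022, 11, 2), dt.date(2022, 11, 7)]

def forecast_series(first_occurrence, start=dt.date(2022, 6, 1), end=dt.date(2022, 11, 30)):
    """One forecast per day from the start of the season onward; before June 1
    the climatological prior is used unchanged (not repeated here)."""
    forecasts = {}
    day = start
    while day <= end:
        observed = sum(d <= day for d in HURRICANES_2022)
        forecasts[day] = analog_distribution(first_occurrence, day, observed)  # see previous sketch
        day += dt.timedelta(days=1)
    return forecasts
```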
My actual forecasts
While (I believe) I have a decent system in place now, it took me until mid-October to develop it. Until then, I tried various approaches. My actual forecasts are depicted in Figure X below. Only from mid-October onward do my actual forecasts coincide with those produced by my ‘system’, and then only for the dates where I was able to find the time to update my forecast.
There are a few additional reasons why these forecasts are, to say the least, not great. The main one is that I failed to update them regularly. This resulted in the forecast continuing to state a 20% probability of “3 or fewer” hurricanes even after a fourth hurricane had been observed.
Observed hurricanes
For updating my forecasts, I relied on this Wikipedia page to inform me of hurricane observations. I am pretty sure that the National Hurricane Center would have provided these, too, but I was too lazy to look them up. These are the observations I included in my updating procedure:
Date | Hurricane name | Hurricane count |
---|---|---|
September 2 | Danielle | 1 |
September 7 | Earl | 2 |
September 18 | Fiona | 3 |
September 26 | Ian | 4 |
October 8 | Julia | 5 |
November 2 | Lisa | 6 |
November 2 | Martin | 7 |
November 7 | Nicole | 8 |
Verification
In GJ Open, contributions are scored using the original implementation of Brier’s probability score, also known as the Brier score. The scoring is described here and here. I computed the Brier score for my actual forecasts, for my ‘system’ and for a climatological forecast. I also computed a ‘skill score’: how much better (or worse) did the forecasts do compared to (my) climatology? The results are in the table below.
Scenario | Brier score | Brier skill score |
---|---|---|
My forecasts | 0.74 | 0.20 |
My system | 1.02 | -0.10 |
My climatology | 0.93 | 0.00 (by definition) |
Some conclusions: climatology came in at roughly halfway along the possible range of 0 to 2. My forecasts did a little better than that, resulting in a skill score of 0.20. Sadly, and unexpectedly, my ‘system’ scored worse, at a Brier score of 1.02, which amounts to negative skill.
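For those who want to check the numbers: below is a minimal sketch of the calculation, assuming a simple average of the per-day multi-category Brier scores over the question period (GJ Open’s exact treatment of the days may differ). The verifying bin for 2022 is “between 7 and 9”.

```python
def brier_score(daily_forecasts, outcome_bin):
    """Mean over the scored days of the multi-category Brier score (range 0..2).

    daily_forecasts: list of {bin label: probability} dicts, one per scored day
    outcome_bin:     label of the bin that verified ("between 7 and 9" for 2022)
    """
    daily = [sum((p - (1.0 if label == outcome_bin else 0.0)) ** 2
                 for label, p in probs.items())
             for probs in daily_forecasts]
    return sum(daily) / len(daily)

def brier_skill(bs, bs_reference):
    """Skill relative to a reference forecast (here: climatology); positive is better."""
    return 1.0 - bs / bs_reference
```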
Conclusion
My system relies heavily on analogies with past years. For the 2022 season, those analogies proved to be poor predictors. I’m curious how the approach would have fared for the other historical years for which I have data. Alas, that will take some time to figure out, and it’s unlikely that I’ll have that time available any time soon.
Having said that, I think I really need a different approach altogether. Suggestions are very welcome…
Ideally, I would have been able to apply an updating procedure that takes into account the trend in the historical time series as well as current information with predictive value, such as sea surface temperature. My fellow forecaster BayesianChimp suggested looking at hurricane occurrence by month and then using a Poisson or a negative binomial distribution. Alas, I am not well versed in these distributions, but maybe I should look into such approaches. I am also aware that my system would have failed me if the current year had shown more hurricanes than any year previously observed.
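To give an idea of what BayesianChimp’s suggestion could look like: estimate a climatological rate of new hurricanes for each remaining month, let the number of additional hurricanes follow a Poisson distribution with the summed remaining rate, and read the bin probabilities off the Poisson CDF. The sketch below does exactly that; the monthly rates are placeholders rather than values estimated from HURDAT, and a negative binomial variant would add a dispersion parameter to account for year-to-year variability.

```python
from scipy.stats import poisson

# Hypothetical climatological mean number of new hurricanes per month (June..November);
# placeholders only, these would have to be estimated from HURDAT.
MONTHLY_RATE = {6: 0.1, 7: 0.3, 8: 1.0, 9: 1.8, 10: 0.9, 11: 0.2}

def poisson_bin_probs(observed, months_remaining):
    """P(seasonal total falls in each GJ Open bin), with the remaining count ~ Poisson(lam)."""
    lam = sum(MONTHLY_RATE[m] for m in months_remaining)
    bins = [("3 or fewer", 0, 3), ("between 4 and 6", 4, 6), ("between 7 and 9", 7, 9),
            ("between 10 and 12", 10, 12), ("between 13 and 15", 13, 15), ("16 or more", 16, None)]
    probs = {}
    for label, lo, hi in bins:
        lo_extra = max(lo - observed, 0)          # additional hurricanes needed to reach the bin
        if hi is None:
            probs[label] = poisson.sf(lo_extra - 1, lam)
        elif hi < observed:
            probs[label] = 0.0                    # bin already ruled out by observations
        else:
            probs[label] = poisson.cdf(hi - observed, lam) - poisson.cdf(lo_extra - 1, lam)
    return probs

# Example: 5 hurricanes observed, October and November still to come.
print(poisson_bin_probs(observed=5, months_remaining=[10, 11]))
```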
I updated my system only upon the confirmed occurrence of a hurricane. Possibly, I could improve the forecasts a little if I included the near real-time hurricane forecasts from the NHC. That’s something to look into on a future occasion, too.
I am not sure whether GJ Open will ask a similar question for the 2023 hurricane season. If they do, I am game!