3. Forecasting and Skill Testing Algorithms

FlowCast is not a single mathematical model but rather a toolkit of mathematical and statistical algorithms. The main algorithms relate to (a) forecast generation, including stratified climatological forecasting and discriminant analysis; and (b) forecast skill assessment, including LEPS, ROC, percent consistent, and significance testing methodologies. These are described in turn below.


3.1 Forecasting algorithms


3.1.1 Stratified Climatological Forecasting


Stratified climatological forecasts are generated by ‘sampling’ a subset of analogue years (a summary of the period of interest for each year) from the historical record according to some relevant criterion, and calculating the relevant probabilities from that subset (Stone et al. 2003). The number of possible stratifications or ‘phases’ is predetermined for each predictive system and affects the quality of the results; typically there are three to five phases, with greater numbers leading to small sample sizes. It is recommended that each stratification or subset contain at least 15-20 years of data for the methodology to be statistically viable. The stratifications must also be statistically different from one another (which can be tested using non-parametric hypothesis testing) for the forecasts to have any skill. The calculation is sketched below.
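
The following minimal sketch (in Python with NumPy; the function and argument names are illustrative rather than FlowCast's own) shows the essence of the procedure: select the analogue years in the current phase and take the relative frequency of each tercile category within that subset as the forecast probability.

```python
import numpy as np

def stratified_climatology(values, phases, target_phase):
    """Tercile probabilities from the analogue years in one phase.

    values       : 1-D array of the predictand (e.g. seasonal rainfall
                   totals), one entry per year of the historical record.
    phases       : integer phase label assigned to each year.
    target_phase : the phase observed for the current year.
    """
    values = np.asarray(values, dtype=float)
    phases = np.asarray(phases)

    # Tercile boundaries are taken from the full climatological record.
    lo, hi = np.percentile(values, [100 / 3, 200 / 3])

    # 'Sample' the subset of analogue years matching the current phase.
    subset = values[phases == target_phase]
    if subset.size < 15:  # the recommended minimum subset size
        raise ValueError("fewer than 15 analogue years in this phase")

    # Forecast probabilities are the relative frequencies of each
    # tercile category within the stratified subset.
    p_below = np.mean(subset < lo)
    p_above = np.mean(subset > hi)
    return p_below, 1.0 - p_below - p_above, p_above
```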


3.1.2 Discriminant Analysis Forecasting


The discriminant analysis methodology employed in FlowCast is the same as that used by the Australian Bureau of Meteorology in its operational forecast system (Drosdowsky and Chambers 1998, 2001; Jones 1998). It calculates the probability that rainfall at an individual location will fall in a particular category (tercile, or above or below median) given the current state of the predictor conditions. The method uses Bayes’ theorem to ‘invert conditional probabilities’ (Huberty 1994; Wilks 1995) in a procedure similar to that of Ward and Folland (1991) and He and Barnston (1996). It examines the historical record to analyse how the predictand category varies with different predictor observations (such as the SOI or SSTa principal components), and calculates the conditional probability of a new predictor observation for each category of the predictand. This is not to be confused with ‘linear’ discriminant analysis (see, for example, Wilks 1995, pp. 409-415), which effectively stratifies the rainfall data dynamically according to the discriminant groupings, so that only a subset of the training data is used to calculate probabilities. In contrast, the method described above uses all of the training data when calculating probabilities, as in the sketch below.
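
As an illustration of the Bayesian inversion, the sketch below (assuming NumPy and SciPy; Gaussian class-conditional densities and all names here are assumptions for the example, not FlowCast's documented implementation) fits the conditional density of the predictor within each predictand category and combines it with the climatological prior.

```python
import numpy as np
from scipy.stats import multivariate_normal

def bayes_discriminant(predictors, categories, x_new, n_cat=3):
    """Posterior P(category | current predictor state) via Bayes' theorem.

    predictors : (n_years, n_vars) predictor observations (e.g. SOI or
                 SSTa principal components), one row per year.
    categories : observed predictand category (0..n_cat-1) for each year.
    x_new      : the predictor values observed for the current year.
    """
    predictors = np.asarray(predictors, dtype=float)
    if predictors.ndim == 1:
        predictors = predictors[:, None]
    categories = np.asarray(categories)

    posterior = np.empty(n_cat)
    for k in range(n_cat):
        sample = predictors[categories == k]  # all years in category k
        # Conditional density of the new predictor observation given
        # that the predictand falls in category k, fitted from history.
        density = multivariate_normal(sample.mean(axis=0),
                                      np.cov(sample, rowvar=False)).pdf(x_new)
        prior = len(sample) / len(predictors)  # climatological prior
        posterior[k] = density * prior

    # Bayes' theorem: normalise so the posteriors sum to one. Every
    # training year contributes, unlike 'linear' discriminant analysis,
    # which uses only a subset of the training data.
    return posterior / posterior.sum()
```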


3.2 Skill testing algorithms


3.2.1 Percent Consistent


The percent consistent score is one of the simplest hindcast-based skill scores to calculate and interpret. It is calculated as the percentage of hindcast events in which the observed and predicted categories coincide, where the predicted category is defined as the one with the highest forecast probability. For a tercile probability forecast, the base value corresponding to climatology is 33.3%, so percent consistent scores above this indicate skill greater than chance; for above/below median forecasts the threshold is 50%. A limitation of the methodology is that it weights all hindcasts equally, regardless of the strength of the forecast.
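
The calculation reduces to a few lines, as in this sketch (NumPy assumed; names are illustrative):

```python
import numpy as np

def percent_consistent(forecast_probs, observed):
    """Percentage of hindcasts whose most-probable category was observed.

    forecast_probs : (n_events, n_categories) hindcast probabilities.
    observed       : observed category index for each event.
    Compare the result against 33.3% for terciles, or 50% for
    above/below median forecasts.
    """
    predicted = np.argmax(forecast_probs, axis=1)  # highest-probability category
    return 100.0 * np.mean(predicted == np.asarray(observed))
```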


3.2.2 LEPS – Linear Error in Probability Space


FlowCast uses tercile (and above/below median) LEPS skill scores (Ward and Folland 1991; Potts et al. 1996) as the principal tool for skill assessment. LEPS is a ‘hindcast’ based technique analogous to a scoring system that rewards good predictions and penalises bad ones, assigning a weighting proportional to the degree of difficulty of the forecast. This is achieved by measuring the forecast error in probability space rather than in measurement space. For tercile (and above/below median) LEPS scores, the “reward” and “penalty” coefficients can be predetermined from probability-space considerations (Walsh et al. 2001). Multiple LEPS analyses can identify forecast “signals”, highlighting the times of the year when a forecast will be most reliable and the corresponding envelope of lead times over which forecast reliability is maintained.
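
One common construction of the tercile coefficient table follows the probability-space score of Potts et al. (1996), S = 3(1 − |Pf − Pv| + Pf² − Pf + Pv² − Pv) − 1, evaluated at the cumulative-probability midpoint of each category. The sketch below (NumPy assumed; names illustrative) builds this table and forms a percentage skill score; the normalisation used here, against the best attainable total for the observed sequence, is one convention and may differ in detail from FlowCast's.

```python
import numpy as np

def leps_weights(cum_probs=(1/6, 1/2, 5/6)):
    """3x3 table of LEPS 'reward'/'penalty' coefficients for terciles,
    from the Potts et al. (1996) score at category midpoints in
    cumulative-probability space."""
    p = np.asarray(cum_probs)
    pf, pv = np.meshgrid(p, p, indexing="ij")  # forecast vs verifying
    return 3 * (1 - np.abs(pf - pv) + pf**2 - pf + pv**2 - pv) - 1

def leps_skill(forecast_probs, observed):
    """Percentage LEPS skill over a set of tercile hindcasts.

    Each hindcast earns the probability-weighted coefficient for the
    observed category; the total is normalised by the best attainable
    score, so a perfect, always-confident system scores 100%.
    """
    w = leps_weights()
    probs = np.asarray(forecast_probs, dtype=float)
    obs = np.asarray(observed)
    score = sum(probs[i] @ w[:, obs[i]] for i in range(len(obs)))
    best = sum(w[:, obs[i]].max() for i in range(len(obs)))
    return 100.0 * score / best
```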


Tercile LEPS skill scores can theoretically range from -100% to 100%, although in practice a score of 100% would never be achieved: the ‘hindcast’ analysis would have to be correct every year, with rainfall always occurring in the first or third category so as to attract the maximum reward weighting. LEPS skill scores typically range non-linearly from -30% to 40%, but this is influenced by the length of the training record (the LEPS skill score for a 100-year analysis can be about half that of a 50-year analysis), the number of phases in a stratification-based forecast, and the number of predictors included in a discriminant analysis forecast. These factors also affect the ‘zero’ or base value representing climatology, with LEPS scores of less than ten often being unskilful (see the Rainman manual). For these reasons, it can be difficult to compare LEPS scores directly across different forecast systems.


3.2.3 Receiver Operating Characteristics


ROC scores are used to assess how skilful a forecasting system is in predicting each predictand category, using the methodology defined in WMO No. 485 (2002); for a tercile probability forecast there are therefore three associated ROC scores. ROC is derived from contingency tables defining the hit rate and false alarm rate for the hindcast outputs, which are treated as binary with only two possible outcomes: occurrence or non-occurrence. Hit rates and false alarm rates are calculated for each probability threshold to construct the ROC curve. The ROC score is then effectively the area under this curve, with a perfect forecast system having a ROC score of one, and a system providing no useful information lying on the diagonal with an area of 0.5, as in the sketch below.
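
A minimal sketch of the calculation for one category (NumPy assumed; names illustrative, and the threshold grid is an assumption):

```python
import numpy as np

def roc_area(probs, occurred, n_thresholds=11):
    """ROC area for a single predictand category.

    probs    : hindcast probability issued for the category, one per event.
    occurred : 1 if the category was observed for that event, else 0.
    """
    probs = np.asarray(probs, dtype=float)
    occurred = np.asarray(occurred).astype(bool)

    hits, falses = [0.0], [0.0]  # anchor the curve at (0, 0): warn on nothing
    # Sweep the probability threshold from high to low, 'warning' of the
    # event whenever the forecast probability reaches the threshold.
    for t in np.linspace(1.0, 0.0, n_thresholds):
        warned = probs >= t
        hits.append(np.mean(warned[occurred]))     # hit rate
        falses.append(np.mean(warned[~occurred]))  # false alarm rate

    # The ROC score is the area under the hit rate vs false alarm rate
    # curve: 1.0 for a perfect system, 0.5 on the no-information diagonal.
    return np.trapz(hits, falses)
```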


3.2.4 Significance Testing: p-values


Significance testing is used to assess whether the skill of a forecast system is greater than that achievable by chance. This is done through a Monte Carlo-like process of comparing the skill of a given predictor/predictand system with multiple realisations of the predictand data formulated according to its statistical distribution (Lennox et al. 2006). In practice, for each realisation the individual predictand analogues are shuffled randomly to create a new time series with the same characteristics as the reference predictand. These scores cannot be used to quantify the skill of a forecast system, as the results indicate only whether the system is skilful or not. P-values can be generated for any type of skill score, although they are processor intensive to calculate, with hundreds of realisations required for an accurate assessment. The procedure is sketched below.
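
The sketch below (NumPy assumed; names illustrative) shows the shuffling procedure for a generic skill score, where a hypothetical `skill_fn` regenerates the hindcasts and returns the system's skill against whatever predictand series it is given:

```python
import numpy as np

def skill_p_value(skill_fn, predictand, n_realisations=500, seed=1):
    """Monte Carlo p-value for a skill score.

    skill_fn   : callable returning the skill score (e.g. LEPS) of the
                 forecast system verified against the supplied predictand.
    predictand : the reference predictand analogues (one value per year).
    """
    rng = np.random.default_rng(seed)
    actual = skill_fn(np.asarray(predictand))

    # Each realisation shuffles the predictand analogues: the new series
    # keeps the same distribution but destroys any genuine
    # predictor/predictand relationship, so its skill is due to chance.
    chance = np.array([skill_fn(rng.permutation(predictand))
                       for _ in range(n_realisations)])

    # The p-value is the fraction of chance realisations scoring at least
    # as well as the real system; a small value suggests genuine skill.
    return np.mean(chance >= actual)
```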