12 Time Series

Motivation:

Applications - COVID - covid19forecast

Applications - from the book “Introductory Time Series with R, Paul S.P. Cowpertwait · Andrew V. Metcalfe, Springer”

  1. Kyoto protocol - The arguments for reducing greenhouse gas emissions rely on a combination of science, economics, and time series analysis.
  2. Change in the world’s climate - https://climatedata.imf.org/pages/climatechange-data, https://svs.gsfc.nasa.gov/12828/.

Reports from [crudata.uea.ac.uk](https://crudata.uea.ac.uk/~timo/diag/tempdiag.htm?_ga=2.140802551.495728210.1698272062-1795576896.1698272062)

Time series analysis
  1. Singapore Airlines placed an initial order for twenty Boeing 787-9s and signed an order of intent to buy twenty-nine new Airbus planes, twenty A350s, and nine A380s (superjumbos). The airline’s decision to expand its fleet relied on a combination of time series analysis of airline passenger trends and corporate plans for maintaining or increasing its market share.
  2. Gas suppliers - Variation about the average for the time of year depends on temperature and, to some extent, the wind speed. Time series analysis is used to forecast demand from the seasonal average with adjustments based on one-day-ahead weather forecasts.

Forecasting is one of the principal applications in data science.

See Forecasting: Principles and Practice

Tell us what the future holds, so we may know that you are gods. (Isaiah 41:23)

Forecasting is the set of techniques for predicting the future as accurately as possible, given all the information available, including historical data and knowledge of any future events that might impact the forecasts. These methods try to extrapolate trend and seasonal patterns.

Applications of Forecasting

See The statistical forecasting perspective

Because adjacent observations in a time series are usually correlated, conventional statistical methods that assume independent observations run into problems.

12.1 Method

Basic steps

  • Problem study
  • Data collection
  • Data analysis
  • Model selection and fitting
  • Model validation
  • Forecasting model deployment
  • Monitoring forecasting model performance

As an example, we show how to analyse a time series using R:

data(AirPassengers)
Psg <- AirPassengers
class(Psg)
[1] "ts"
time(Psg)
          Jan      Feb      Mar      Apr      May      Jun      Jul      Aug
1949 1949.000 1949.083 1949.167 1949.250 1949.333 1949.417 1949.500 1949.583
1950 1950.000 1950.083 1950.167 1950.250 1950.333 1950.417 1950.500 1950.583
1951 1951.000 1951.083 1951.167 1951.250 1951.333 1951.417 1951.500 1951.583
1952 1952.000 1952.083 1952.167 1952.250 1952.333 1952.417 1952.500 1952.583
1953 1953.000 1953.083 1953.167 1953.250 1953.333 1953.417 1953.500 1953.583
1954 1954.000 1954.083 1954.167 1954.250 1954.333 1954.417 1954.500 1954.583
1955 1955.000 1955.083 1955.167 1955.250 1955.333 1955.417 1955.500 1955.583
1956 1956.000 1956.083 1956.167 1956.250 1956.333 1956.417 1956.500 1956.583
1957 1957.000 1957.083 1957.167 1957.250 1957.333 1957.417 1957.500 1957.583
1958 1958.000 1958.083 1958.167 1958.250 1958.333 1958.417 1958.500 1958.583
1959 1959.000 1959.083 1959.167 1959.250 1959.333 1959.417 1959.500 1959.583
1960 1960.000 1960.083 1960.167 1960.250 1960.333 1960.417 1960.500 1960.583
          Sep      Oct      Nov      Dec
1949 1949.667 1949.750 1949.833 1949.917
1950 1950.667 1950.750 1950.833 1950.917
1951 1951.667 1951.750 1951.833 1951.917
1952 1952.667 1952.750 1952.833 1952.917
1953 1953.667 1953.750 1953.833 1953.917
1954 1954.667 1954.750 1954.833 1954.917
1955 1955.667 1955.750 1955.833 1955.917
1956 1956.667 1956.750 1956.833 1956.917
1957 1957.667 1957.750 1957.833 1957.917
1958 1958.667 1958.750 1958.833 1958.917
1959 1959.667 1959.750 1959.833 1959.917
1960 1960.667 1960.750 1960.833 1960.917
start(Psg)
[1] 1949    1
end(Psg)
[1] 1960   12
frequency(Psg)
[1] 12
plot(Psg)

summary(Psg)
   Min. 1st Qu.  Median    Mean 3rd Qu.    Max. 
  104.0   180.0   265.5   280.3   360.5   622.0 
aggregate(Psg)
Time Series:
Start = 1949 
End = 1960 
Frequency = 1 
 [1] 1520 1676 2042 2364 2700 2867 3408 3939 4421 4572 5140 5714
plot(aggregate(Psg))

ts.plot(Psg) 

Advanced packages

Note: Install the TTR and forecast packages

library(TTR)
library(forecast)
Registered S3 method overwritten by 'quantmod':
  method            from
  as.zoo.data.frame zoo 

To analyse your time series data, you first need to read it into R and plot it.

You can read data into R using scan(), which assumes that your data for successive time points is in a simple text file with one column.

The file ts_k1.dat contains data about average business sales of a product.

# Read the data from a local text file
# Skip 1 line - comment
data.ts.k1 <- scan("data/ts_k1.dat", skip = 1)
head(data.ts.k1)
[1] 60 43 67 50 56 42

The next step is to store the data in a time series object in R.

A time series is a list of numbers with temporal information, stored as a ts object in R.

There are many packages and functions for working with time series. First, we will use the ts function, which creates time-series objects.

The syntax of the ts function is:

serie_1<-  ts(data, start, end, frequency)

# data - the  values in the time series

# start -  the time of the first observation in the time series

# end -  the time of the last observation in the time series

# frequency -  the number of observations per period (e.g. per year)

#     frequency=1 (by default) -  no periods or season in your data
#     frequency=12 - period is monthly
#     frequency=4  - period is quarterly
#Example

# data <- c(2, 8, 6, 10,
#           5, 14, 7, 14,
#           9, 23, 11, 29, 12)
# serie1 <- ts(data,start = c(2023,1),frequency = 12)

# plot(serie1)
# autoplot(serie1) - ggplot of the ts object

See ts-objects.

ts.k1 <- ts(data.ts.k1)
ts.k1
Time Series:
Start = 1 
End = 42 
Frequency = 1 
 [1] 60 43 67 50 56 42 50 65 68 43 65 34 47 34 49 41 13 35 53 56 16 43 69 59 48
[26] 59 86 55 68 51 33 49 67 77 81 67 71 81 68 70 77 56

If the data set has values for periods less than one year, for example, monthly or quarterly, one can specify the number of times that data was collected per year by using the ‘frequency’ parameter in the ts function.

Dataset of unemployment in a city:

tsn1 <- scan("data/ts_n1.dat", skip = 1)
ts.n1 <- ts(tsn1, frequency = 12, start = c(1980, 1))
ts.n1
        Jan    Feb    Mar    Apr    May    Jun    Jul    Aug    Sep    Oct
1980 26.663 23.598 26.931 24.740 25.806 24.364 24.477 23.901 23.175 23.227
1981 21.439 21.089 23.709 21.669 21.752 20.761 23.479 23.824 23.105 23.110
1982 21.937 20.035 23.590 21.672 22.222 22.123 23.950 23.504 22.238 23.142
1983 21.548 20.000 22.424 20.615 21.761 22.874 24.104 23.748 23.262 22.907
1984 22.604 20.894 24.677 23.673 25.320 23.583 24.671 24.454 24.122 24.252
1985 23.287 23.049 25.076 24.037 24.430 24.667 26.451 25.618 25.014 25.110
1986 23.798 22.270 24.775 22.646 23.988 24.737 26.276 25.816 25.210 25.199
1987 24.364 22.644 25.565 24.062 25.431 24.635 27.009 26.606 26.268 26.462
1988 24.657 23.304 26.982 26.199 27.210 26.122 26.706 26.878 26.152 26.379
1989 24.990 24.239 26.721 23.475 24.767 26.219 28.361 28.599 27.914 27.784
1990 26.217 24.218 27.914 26.975 28.527 27.139 28.982 28.169 28.056 29.136
1991 26.589 24.848 27.543 26.896 28.878 27.390 28.065 28.141 29.048 28.484
1992 27.132 24.924 28.963 26.589 27.931 28.009 29.229 28.759 28.405 27.945
1993 26.076 25.286 27.660 25.951 26.398 25.565 28.865 30.000 29.261 29.012
        Nov    Dec
1980 21.672 21.870
1981 21.759 22.073
1982 21.059 21.573
1983 21.519 22.025
1984 22.084 22.991
1985 22.964 23.981
1986 23.162 24.707
1987 25.246 25.180
1988 24.712 25.688
1989 25.693 26.881
1990 26.291 26.987
1991 26.634 27.735
1992 25.912 26.619
1993 26.992 27.897

The ts_f1.dat dataset contains monthly unemployment figures for a city.

tsf1 <- scan("data/ts_f1.dat")
ts.f1 <- ts(tsf1, frequency = 12, start = c(1987, 1))
ts.f1
           Jan       Feb       Mar       Apr       May       Jun       Jul
1987   1664.81   2397.53   2840.71   3547.29   3752.96   3714.74   4349.61
1988   2499.81   5198.24   7225.14   4806.03   5900.88   4951.34   6179.12
1989   4717.02   5702.63   9957.58   5304.78   6492.43   6630.80   7349.62
1990   5921.10   5814.58  12421.25   6369.77   7609.12   7224.75   8121.22
1991   4826.64   6470.23   9638.77   8821.17   8722.37  10209.48  11276.55
1992   7615.03   9849.69  14558.40  11587.33   9332.56  13082.09  16732.78
1993  10243.24  11266.88  21826.84  17357.33  15997.79  18601.53  26155.15
           Aug       Sep       Oct       Nov       Dec
1987   3566.34   5021.82   6423.48   7600.60  19756.21
1988   4752.15   5496.43   5835.10  12600.08  28541.72
1989   8176.62   8573.17   9690.50  15151.84  34061.01
1990   7979.25   8093.06   8476.70  17914.66  30114.41
1991  12552.22  11637.39  13606.89  21822.11  45060.69
1992  19888.61  23933.38  25391.35  36024.80  80721.71
1993  28586.52  30505.41  30821.33  46634.38 104660.67

Let us make a basic plot of the time series data (for the first two datasets):

plot.ts(ts.k1)

plot.ts(ts.n1)

In the second one, there seems to be seasonal variation in the number of unemployed per month: there is a peak every summer and a trough every winter.

The plot for the third dataset is the following:

plot.ts(ts.f1)

The sizes of the seasonal and random fluctuations seem to increase with the level of the time series. In such cases it is reasonable to transform the time series by taking the natural logarithm of the original data.

logts.f1 <- log(ts.f1)
plot.ts(logts.f1)

See time plots.

With one of the previously imported series (tsf1), we fit a linear trend model:

\[y=a+bx\]

library(ggplot2)
library(forecast)
str(ts.f1)
 Time-Series [1:84] from 1987 to 1994: 1665 2398 2841 3547 3753 ...
mydata <- data.frame(x=1:length(ts.f1), y=tsf1)
autoplot(ts.f1)

model <- lm( y ~ x , data=mydata)
summary(model)

Call:
lm(formula = y ~ x, data = mydata)

Residuals:
   Min     1Q Median     3Q    Max 
-15084  -7058  -1498   2394  75362 

Coefficients:
            Estimate Std. Error t value Pr(>|t|)    
(Intercept) -1028.85    2892.21  -0.356    0.723    
x             361.05      59.11   6.108 3.22e-08 ***
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Residual standard error: 13140 on 82 degrees of freedom
Multiple R-squared:  0.3127,    Adjusted R-squared:  0.3043 
F-statistic: 37.31 on 1 and 82 DF,  p-value: 3.222e-08
prediction_model <- forecast::forecast(model, newdata = data.frame(x=86:100))
plot(prediction_model)

12.2 Time Series Components

A time series usually has a trend component and an irregular component and, if it is a seasonal time series, a seasonal component as well.

See Time series patterns: Trend, Seasonal, Cyclic.

12.2.1 Seasonal plot

# https://www.rdocumentation.org/packages/forecast/versions/8.4/topics/ggseasonplot
library("forecast")
data("AirPassengers")
ggseasonplot(AirPassengers, col = rainbow(12), year.labels = TRUE)

ggseasonplot(AirPassengers, year.labels = TRUE, continuous = TRUE)

seasonplot(AirPassengers, col = rainbow(12), year.labels = TRUE)

12.2.2 Subseries plot

library(ggplot2)
ggsubseriesplot(AirPassengers) +
  ylab("N. Passengers") +
  ggtitle("Seasonal subseries plot: Airpassengers")

The mean of each month is indicated by a horizontal line.

See Time series graphics: Scatterplots, lag plots, etc..

12.3 Basic forecasting methods

We denote by \(\hat{y}_{T+h\mid T}\) the forecast of the variable \(y\) at time \(T+h\) based on the observations \(y_1, \ldots, y_T\).

The residual is the difference between an observation and its fitted value:

\[e_t=y_t-\hat{y}_{t}\]

The forecast error is the difference between an observation and its forecast: \[e_{T+h}=y_{T+h}-\hat{y}_{T+h\mid T}\]
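As a small illustration (a minimal sketch using the AirPassengers series loaded earlier and the meanf function described in the next subsection), we can compare in-sample residuals with out-of-sample forecast errors by holding out the last two years:

library(forecast)
# split AirPassengers into a training part and a held-out test part (last 24 months)
train <- window(AirPassengers, end = c(1958, 12))
test  <- window(AirPassengers, start = c(1959, 1))
fit <- meanf(train, h = length(test))  # average method (see next subsection)
head(fit$residuals)                    # in-sample residuals e_t = y_t - fitted value
head(test - fit$mean)                  # out-of-sample forecast errors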

12.3.1 Average method

\[\hat{y}_{T+h\mid T}=\frac{y_1+\cdots+y_T}{T}\]

data.ts.k1 <- scan("data/ts_k1.dat", skip = 1)
mdata.ts.k1 <- meanf(data.ts.k1, 10)
plot(mdata.ts.k1)

12.3.2 Naïve method

The prediction is the last observed value \(y_T\).

\[\hat{y}_{T+h\mid T}= y_T\]

data.ts.k1 <- scan("data/ts_k1.dat", skip = 1)
h <- 10
mdata.ts.k1 <- naive(data.ts.k1, h)
plot(mdata.ts.k1)

12.3.3 Seasonal naïve method

The prediction equals the value observed at the same season of the last available year (e.g., the same month of the previous year).

\[\hat{y}_{T+h\mid T}= y_{T+h-m(k+1)}\] (where \(m\) is the seasonal period and \(k\) is the integer part of \((h-1)/m\))

tsn1 <- scan("data/ts_n1.dat", skip = 1)
ts.n1 <- ts(tsn1, frequency = 12, start = c(1980, 1))
h <- 10
mdata.ts.k1 <- snaive(ts.n1, h)
plot(mdata.ts.k1)

12.3.4 Drift method

It is based on the naïve method, with the amount of change over time (the drift) set as the average change observed in the historical data:

\[\hat{y}_{T+h\mid T}=y_{T}+\frac{h}{T-1}\displaystyle\sum_{t=2}^T(y_t-y_{t-1})=y_{T}+h\,\frac{y_{T}-y_{1}}{T-1}\]

h <- 20
mdata.ts.k1 <- rwf(ts.n1, h, drift = TRUE)
plot(mdata.ts.k1)

See all the methods in the same picture - Forecasts for quarterly beer production.

Note: The fpp2 package is required to run the code on this page.

As an example, we can see how to forecast in Malaga - Basic forecasting methods.

We analyse the evolution of minimum temperatures in Malaga since 2001.

First of all, we obtain a time-series object with the function ts; as our data are monthly, we set the parameter frequency to 12 and indicate that the series starts in January 2001. After this, we plot the object obtained:

library('ggplot2')
library('TTR')
library('forecast')
library('tseries')
library(readr)
temperaturas <- read_csv("data/temp.csv")
Rows: 214 Columns: 6
── Column specification ────────────────────────────────────────────────────────
Delimiter: ","
chr (2): fecha, indicativo
dbl (4): X1, X1_1, tm_max, tm_min

ℹ Use `spec()` to retrieve the full column specification for this data.
ℹ Specify the column types or set `show_col_types = FALSE` to quiet this message.
ts.p1 <- ts(temperaturas$tm_min, start = c(2001, 1), frequency = 12)
plot.ts(ts.p1, xlab = "años", ylab = "temperatura mínima", main = "Temperaturas mínimas Málaga")

We decompose the data into its components: seasonal, trend, cycle, and residual.

# we obtain a time series with a monthly frequency (12)
obj.ts <- ts(na.omit(temperaturas$tm_max), frequency = 12, start = c(2001, 1))

# decompose
decomp <- stl(obj.ts, s.window = "periodic") # uses additive model by default
# decomposing the series and removing the seasonality can be done with seasadj() within the forecast package
deseasonal_cnt <- seasadj(decomp)
plot(decomp)

Let’s check whether the series is seasonal by looking at the data by year:

ggseasonplot(ts.p1, col = rainbow(12), year.labels = TRUE)

As we can see, all the years follow a very similar pattern, which indicates that the temperatures repeat an annual cycle.

These graphs show that our series is seasonal: temperatures are low at the beginning of the year, rise during the year, and start to fall from August onwards. This cycle repeats every year.

ggsubseriesplot(ts.p1) +
  ylab("Temperatura") +
  ggtitle("Seasonal subseries plot: Temperaturas mínimas")

This last graph shows the subseries by month. Each month moves within the same range, which confirms that the series is seasonal.

Average method:

With this method we see that the prediction is basically an average.

mdata.ts.p1 <- meanf(ts.p1,10)
plot(mdata.ts.p1)

Naïve method:

This method uses the last observed value \(y_T\).

mdata.ts.p1 <- naive(ts.p1, 10)
plot(mdata.ts.p1)

Seasonal naïve method:

This method uses the last year to predict: as the graph shows, it reproduces what happened in the last observed year.

mdata.ts.p1 <- snaive(ts.p1, 12)
plot(mdata.ts.p1)

Drift method:

This method is based on the naïve method, but the amount of change over time (the drift) is set as the average change in the historical data.

mdata.ts.p1 <- rwf(ts.p1, 20, drift = TRUE)
plot(mdata.ts.p1)

Conclusion:

  • As mentioned, the data series contains minimum temperatures, so these methods are too simple to predict well what will happen with the temperature in the following years. The only model that could arguably be used is the seasonal naïve method, which uses the last year to predict the next one. Even so, better methods are needed.

12.4 Decomposing Time Series

Decomposing a time series means separating it into its constituent components: typically a trend-cycle component, a seasonal component, and a remainder (irregular) component.

If \(S_t\) is the seasonal component, \(T_t\) is the trend-cycle component, and \(R_t\) is the remainder component, we can consider an additive decomposition or a multiplicative decomposition as follows:

Additive decomposition: \[y_t=S_t+T_t+R_t\] Multiplicative decomposition: \[y_t=S_t\times T_t\times R_t\]

  • Additive decomposition is adequate when the magnitude of the seasonal fluctuations and of the variation around the trend-cycle does not vary with the level of the series.

  • Multiplicative decomposition is adequate when the amplitude of the seasonal part increases or decreases with the average level of the time series.

Multiplicative decomposition can be transformed into an additive one just by applying logarithms: \[\log y_t= \log S_t + \log T_t +\log R_t\]
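As a quick illustration (a minimal sketch using the AirPassengers series loaded earlier and the decompose function discussed later in this chapter), a series whose seasonal amplitude grows with its level can be handled either with a multiplicative decomposition or with an additive decomposition of the logged series:

# seasonal swings in AirPassengers grow with the level of the series
decomp_mult <- decompose(AirPassengers, type = "multiplicative")
decomp_log  <- decompose(log(AirPassengers))  # additive decomposition after the log transform
plot(decomp_mult)
plot(decomp_log)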

12.4.1 Additive decomposition

We describe how to decompose a time series using an additive decomposition.

Note: Multiplicative decomposition is similar, except that the subtractions are replaced by divisions.

12.4.2 Non-Seasonal Data

  • A non-seasonal time series consists of a trend component and an irregular component.

  • To estimate the trend component of a non-seasonal time series that can be described using an additive model, it is common to use a smoothing method, such as calculating the Simple Moving Average (SMA) of the time series.

The SMA() function in the TTR package is used to smooth time series data using a simple moving average:

\[\hat{y}_t=\frac{1}{m}\displaystyle\sum_{j=-k}^k y_{t+j},\qquad m=2k+1\]

Averaging over neighbouring observations smooths out the randomness in the data.

To specify the order of the simple moving average, the parameter \(n\) will be used. For example, to calculate a simple moving average of order 2, we set \(n=2\) in the SMA() function.

library("TTR")
# simple moving average of order 2
# applied in the first dataset
data.ts.k1 <- scan("data/ts_k1.dat", skip = 1)
ts.k1 <- ts(data.ts.k1)

# Applying the MA of order 2,5,9
ts.k1.sma2 <- SMA(ts.k1, n = 2)
ts.k1.sma5 <- SMA(ts.k1, n = 5)
ts.k1.sma9 <- SMA(ts.k1, n = 9)

# Draw all the pictures together
par(mfrow = c(2, 2))
plot.ts(ts.k1)
plot.ts(ts.k1.sma2)
plot.ts(ts.k1.sma5)
plot.ts(ts.k1.sma9)

There still appears to be quite a lot of random fluctuations in the time series smoothed using a simple moving average of order 2.

Thus, to estimate the trend component more accurately, we might want to try smoothing the data with a simple moving average of a higher order. This takes a little bit of trial-and-error, to find the right amount of smoothing.

With MA of order 5 and order 9, the trend starts to be clearly identified in the plots.

12.4.3 Seasonal Data

A seasonal time series consists of a trend component, a seasonal component and an irregular component.

We can use the decompose function in R which estimates:

  • the trend-cycle component \(T_t\), using a moving average,
  • the detrended series \(y_t-T_t\),
  • the seasonal component \(S_t\), computed as the average of the detrended values for that season,
  • the random component \(R_t=y_t-T_t-S_t\).

In the additive decomposition, the detrended series is \(y_t-T_t =S_t+R_t\).

Given the time series ts.n1, we decompose it as follows:

tsn1 <- scan("data/ts_n1.dat", skip = 1)
ts.n1 <- ts(tsn1, frequency = 12, start = c(1980, 1))
ts.n1.comp <- decompose(ts.n1)

The estimated seasonal factors are given for the months January-December, and are the same for each year.

plot(ts.n1.comp)

The different parts of the decomposed series can be managed as usual:

ts.n1.comp$trend
ts.n1.comp$seasonal
ts.n1.comp$random

12.4.4 Seasonal Adjusting

As we have explained, if you have a seasonal time series that can be described using an additive model, you can seasonally adjust the time series by estimating the seasonal component, and subtracting the estimated seasonal component from the original time series.

ts.n1.comp <- decompose(ts.n1)
ts.n1.comp.aj <- ts.n1 - ts.n1.comp$seasonal

ts.n1.comp.aj
          Jan      Feb      Mar      Apr      May      Jun      Jul      Aug
1980 27.34019 25.68096 26.06848 25.54168 25.55435 24.51726 23.02095 22.73641
1981 22.11619 23.17196 22.84648 22.47068 21.50035 20.91426 22.02295 22.65941
1982 22.61419 22.11796 22.72748 22.47368 21.97035 22.27626 22.49395 22.33941
1983 22.22519 22.08296 21.56148 21.41668 21.50935 23.02726 22.64795 22.58341
1984 23.28119 22.97696 23.81448 24.47468 25.06835 23.73626 23.21495 23.28941
1985 23.96419 25.13196 24.21348 24.83868 24.17835 24.82026 24.99495 24.45341
1986 24.47519 24.35296 23.91248 23.44768 23.73635 24.89026 24.81995 24.65141
1987 25.04119 24.72696 24.70248 24.86368 25.17935 24.78826 25.55295 25.44141
1988 25.33419 25.38696 26.11948 27.00068 26.95835 26.27526 25.24995 25.71341
1989 25.66719 26.32196 25.85848 24.27668 24.51535 26.37226 26.90495 27.43441
1990 26.89419 26.30096 27.05148 27.77668 28.27535 27.29226 27.52595 27.00441
1991 27.26619 26.93096 26.68048 27.69768 28.62635 27.54326 26.60895 26.97641
1992 27.80919 27.00696 28.10048 27.39068 27.67935 28.16226 27.77295 27.59441
1993 26.75319 27.36896 26.79748 26.75268 26.14635 25.71826 27.40895 28.83541
          Sep      Oct      Nov      Dec
1980 22.48338 22.45176 22.78177 22.24682
1981 22.41338 22.33476 22.86877 22.44982
1982 21.54638 22.36676 22.16877 21.94982
1983 22.57038 22.13176 22.62877 22.40182
1984 23.43038 23.47676 23.19377 23.36782
1985 24.32238 24.33476 24.07377 24.35782
1986 24.51838 24.42376 24.27177 25.08382
1987 25.57638 25.68676 26.35577 25.55682
1988 25.46038 25.60376 25.82177 26.06482
1989 27.22238 27.00876 26.80277 27.25782
1990 27.36438 28.36076 27.40077 27.36382
1991 28.35638 27.70876 27.74377 28.11182
1992 27.71338 27.16976 27.02177 26.99582
1993 28.56938 28.23676 28.10177 28.27382
plot(ts.n1.comp.aj)

The seasonal variation has been removed from the seasonally adjusted time series. The seasonally adjusted time series now just contains the trend component and an irregular component.

With the previous (Malaga) example, we can see how to decompose: we have seen that the time series is seasonal, so we use the additive decomposition method.

A seasonal time series consists of a trend component, a seasonal component and an irregular component. To decompose it we are going to use decompose.

# we decompose the time series and draw it
ts.p1.comp <- decompose(ts.p1)
plot(ts.p1.comp)

If we have a seasonal time series that can be described using an additive model, we can seasonally adjust the time series by estimating the seasonal component and subtracting the estimated seasonal component from the original time series.

# we subtract the stationary component
ts.p1.comp.aj <- ts.p1 - ts.p1.comp$seasonal
ts.p1.comp.aj
          Jan      Feb      Mar      Apr      May      Jun      Jul      Aug
2001 15.21772 14.62312 16.29322 15.00204 13.99887 15.21996 13.84395 14.84199
2002 16.01772 15.02312 14.69322 13.70204 13.59887 14.21996 13.74395 13.04199
2003 14.01772 14.12312 15.39322 14.10204 14.89887 15.81996 14.34395 14.64199
2004 16.31772 15.32312 14.59322 13.20204 12.79887 14.71996 14.54395 14.44199
2005 12.51772 11.82312 14.29322 14.50204 15.29887 14.71996 15.34395 13.94199
2006 14.51772 17.42312 17.59322 16.30204 16.09887 14.61996 14.84395 14.04199
2007 13.31772 16.52312 14.19322 14.40204 14.89887 14.71996 14.74395 14.64199
2008 15.31772 17.12312 15.59322 14.70204 14.99887 14.61996 16.04395 15.84199
2009 13.91772 14.82312 14.49322 13.80204 15.34887 15.91996 16.04395 14.84199
2010 15.41772 15.92312 14.89322 15.40204 14.39887 14.51996 15.74395 15.94199
2011 15.71772 14.42312 15.09322 16.20204 15.79887 14.41996 14.34395 14.94199
2012 13.91772 11.82312 13.89322 14.60204 14.99887 16.41996 15.14395 15.84199
2013 16.11772 13.62312 14.89322 15.00204 13.99887 13.11996 13.54395 14.74199
2014 16.01772 15.42312 14.89322 16.60204 15.59887 15.31996 14.44395 14.94199
2015 14.86772 14.22312 15.09322 16.30204 16.19887 14.71996 16.74395 16.54199
2016 16.11772 15.92312 14.39322 15.60204 14.99887 15.01996 15.54395 15.74199
2017 14.21772 16.22312 14.79322 15.20204 15.19887 16.21996 14.84395 14.84199
2018 15.21772 13.82312 14.89322 14.10204 14.09887 14.41996 14.34395 15.34199
          Sep      Oct      Nov      Dec
2001 14.85424 14.82508 13.62900 14.95081
2002 13.85424 14.52508 16.72900 16.05081
2003 14.95424 14.42508 14.92900 14.25081
2004 14.25424 13.92508 13.82900 14.35081
2005 14.05424 14.62508 13.72900 15.05081
2006 14.55424 15.72508 16.52900 13.95081
2007 14.85424 14.82508 14.82900 15.35081
2008 14.95424 14.72508 12.82900 13.65081
2009 15.15424 15.82508 16.72900 15.65081
2010 15.25424 13.82508 14.52900 15.55081
2011 14.85424 14.42508 14.12900 14.05081
2012 15.05424 14.22508 15.62900 15.25081
2013 15.45424 16.12508 14.62900 14.75081
2014 16.05424 15.92508 16.32900 14.15081
2015 15.05424 15.52508 15.52900 16.35081
2016 15.35424 15.92508 14.72900 16.25081
2017 15.15424 14.22508 14.32900 13.95081
2018 16.55424 14.32508                  
# we draw the adjusted values
plot(ts.p1.comp.aj)

Continuing with the previous example, we now make forecasts using Exponential Smoothing.

Exponential Smoothing assigns weights to the values of the variable: forecasts are weighted averages of past observations, with the weights decreasing exponentially for older observations.

Taxonomy

12.5 Simple Exponential Smoothing

The idea is to consider a weighted average between the value of the variable and the previous one:

\[\hat{y}_{T+1\mid T}=\alpha y_T + (1-\alpha)\hat{y}_{T\mid T-1}\]

If we take into account more past values of the variable, expanding the recursion gives the forecast:

\[\hat{y}_{T+1\mid T}=\alpha y_T + \alpha(1-\alpha){y}_{T-1}+\alpha(1-\alpha)^2{y}_{T-2}+\ldots \]

The parameter \(\alpha\) controls the rate at which the weights decrease. The weights decrease exponentially, which gives the method its name; the short sketch after the following list shows how the weights decay for two values of \(\alpha\).

  • \(\alpha\) has values between 0 and 1.

  • \(\alpha\) close to 0: older observations carry more weight - slow learning (past observations have a large influence on forecasts).

  • \(\alpha\) close to 1: more weight on the most recent observations - fast learning (that is, only the most recent values influence the forecasts).

  • With \(\alpha=0\) or \(\alpha=1\) we have the extreme cases.
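As a small numerical sketch, we can print the weights \(\alpha(1-\alpha)^j\) assigned to \(y_T, y_{T-1}, y_{T-2},\ldots\) for a small and a large value of \(\alpha\):

# weights given to the most recent observations for two values of alpha
alpha <- 0.2
round(alpha * (1 - alpha)^(0:5), 3)  # slow decay: older observations keep some influence
alpha <- 0.8
round(alpha * (1 - alpha)^(0:5), 3)  # fast decay: only the most recent values matter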

If you have a time series that can be described using an additive model with constant level (no clear trend) and no seasonality, you can use simple exponential smoothing to make short-term forecasts.

  • The simple exponential smoothing method provides a way of estimating the level at the current time point.

The file ts_p1.dat contains total annual snowfall for a city.

tsp1 <- scan("data/ts_p1.dat", skip = 1)
ts.p1 <- ts(tsp1, start = c(1900))
plot.ts(ts.p1)

  • The mean stays constant at about 25 - roughly constant level.
  • The random fluctuations seem to be roughly constant.

Thus, we can make forecasts using simple exponential smoothing.

In R, we can fit a simple exponential smoothing predictive model using the HoltWinters function.

Main parameters:

  • alpha: Smoothing is controlled by this parameter; the value of alpha lies between 0 and 1 (values close to 0 mean that little weight is placed on the most recent observations when making forecasts of future values).
  • beta=FALSE: the function will do exponential smoothing.
  • gamma=FALSE: a non-seasonal model is fitted.
  • seasonal: additive by default or multiplicative seasonal model.
ts.p1.forecasts <- HoltWinters(ts.p1, beta = FALSE, gamma = FALSE)
ts.p1.forecasts
Holt-Winters exponential smoothing without trend and without seasonal component.

Call:
HoltWinters(x = ts.p1, beta = FALSE, gamma = FALSE)

Smoothing parameters:
 alpha: 0.02412151
 beta : FALSE
 gamma: FALSE

Coefficients:
      [,1]
a 24.67819

The output of HoltWinters() tells us that the estimated value of the alpha parameter is about 0.024. This is very close to zero, telling us that past observations have more influence on forecasts.

plot(ts.p1.forecasts)

The plot shows the original time series in black, and the forecasts of these values as a red line. The time series of forecasts is much smoother than the time series of the original data here.

plot(forecast(ts.p1.forecasts))

The forecast function computes forecasts for future time points, as the previous plot shows.

As a measure of the accuracy of the forecasts, we can calculate the sum of squared errors for the in-sample forecast errors, that is, the forecast errors for the time period covered by our original time series. The sum of squared errors is stored in a named element of the list variable ts.p1.forecasts called “SSE”, so we can get its value by typing:

ts.p1.forecasts$SSE
[1] 1828.855

Finally, we try different values of \(\alpha\) and compute the errors:

ts.p2.forecasts <- HoltWinters(ts.p1, alpha = 0.5, beta = FALSE, gamma = FALSE)
ts.p2.forecasts
Holt-Winters exponential smoothing without trend and without seasonal component.

Call:
HoltWinters(x = ts.p1, alpha = 0.5, beta = FALSE, gamma = FALSE)

Smoothing parameters:
 alpha: 0.5
 beta : FALSE
 gamma: FALSE

Coefficients:
      [,1]
a 26.45644
plot(forecast(ts.p2.forecasts))

ts.p2.forecasts$SSE
[1] 2428.339
ts.p3.forecasts <- HoltWinters(ts.p1, alpha = 0.9, beta = FALSE, gamma = FALSE)
ts.p3.forecasts
Holt-Winters exponential smoothing without trend and without seasonal component.

Call:
HoltWinters(x = ts.p1, alpha = 0.9, beta = FALSE, gamma = FALSE)

Smoothing parameters:
 alpha: 0.9
 beta : FALSE
 gamma: FALSE

Coefficients:
      [,1]
a 27.57778
plot(forecast(ts.p3.forecasts))

ts.p3.forecasts$SSE
[1] 3399.419

Note: The forecasts made by HoltWinters are stored in the list element called fitted.

ts.p3.forecasts$fitted
Time Series:
Start = 1901 
End = 1999 
Frequency = 1 
         xhat    level
1901 23.56000 23.56000
1902 25.81900 25.81900
1903 22.25590 22.25590
1904 30.34159 30.34159
1905 24.31916 24.31916
1906 23.92392 23.92392
1907 26.16139 26.16139
1908 23.01914 23.01914
1909 30.82291 30.82291
1910 24.55629 24.55629
1911 24.15463 24.15463
1912 31.60246 31.60246
1913 24.09425 24.09425
1914 22.72242 22.72242
1915 22.97224 22.97224
1916 27.38922 27.38922
1917 25.52692 25.52692
1918 25.12469 25.12469
1919 27.49647 27.49647
1920 20.58765 20.58765
1921 24.36076 24.36076
1922 20.54408 20.54408
1923 23.96041 23.96041
1924 27.07404 27.07404
1925 20.20340 20.20340
1926 21.48734 21.48734
1927 26.88973 26.88973
1928 20.17597 20.17597
1929 30.03460 30.03460
1930 23.78446 23.78446
1931 25.64345 25.64345
1932 22.94934 22.94934
1933 22.76993 22.76993
1934 26.00099 26.00099
1935 18.53010 18.53010
1936 28.68201 28.68201
1937 23.50520 23.50520
1938 19.64852 19.64852
1939 20.53185 20.53185
1940 33.85919 33.85919
1941 26.68692 26.68692
1942 19.45369 19.45369
1943 22.69937 22.69937
1944 22.25894 22.25894
1945 22.18789 22.18789
1946 19.11179 19.11179
1947 27.30018 27.30018
1948 31.74602 31.74602
1949 23.21760 23.21760
1950 27.13476 27.13476
1951 22.14448 22.14448
1952 17.45145 17.45145
1953 28.27714 28.27714
1954 31.26771 31.26771
1955 26.75177 26.75177
1956 23.73518 23.73518
1957 25.25152 25.25152
1958 21.71315 21.71315
1959 24.68932 24.68932
1960 32.94293 32.94293
1961 23.69729 23.69729
1962 19.30773 19.30773
1963 27.52677 27.52677
1964 26.29668 26.29668
1965 27.98267 27.98267
1966 33.47027 33.47027
1967 33.78503 33.78503
1968 30.63050 30.63050
1969 28.19105 28.19105
1970 27.24511 27.24511
1971 24.68451 24.68451
1972 20.78345 20.78345
1973 26.05435 26.05435
1974 26.91443 26.91443
1975 19.98044 19.98044
1976 26.96404 26.96404
1977 24.16140 24.16140
1978 21.52314 21.52314
1979 27.48731 27.48731
1980 23.09773 23.09773
1981 20.12977 20.12977
1982 27.15898 27.15898
1983 22.03890 22.03890
1984 23.37189 23.37189
1985 22.91119 22.91119
1986 18.21212 18.21212
1987 22.10721 22.10721
1988 23.16272 23.16272
1989 22.26927 22.26927
1990 20.98293 20.98293
1991 36.38829 36.38829
1992 22.22383 22.22383
1993 22.89538 22.89538
1994 24.12354 24.12354
1995 23.12135 23.12135
1996 23.61514 23.61514
1997 26.43651 26.43651
1998 25.46765 25.46765
1999 24.85777 24.85777

12.6 forecast package

We can make forecasts for further time points by using the forecast.HoltWinters function in the R forecast package.

require("forecast")
library("forecast")

When using the forecast.HoltWinters function, as its first argument (input), you pass it the predictive model that you have already fitted using the HoltWinters function.

You specify how many further time points you want to make forecasts for by using the “h” parameter in forecast.HoltWinters().

The output for the next 20 years is:

# ts.p1.forecasts2 <- forecast.HoltWinters(ts.p1.forecasts, h=20)

ts.p1.forecasts2 <- forecast:::forecast.HoltWinters(ts.p1.forecasts, h = 20)
ts.p1.forecasts2
     Point Forecast    Lo 80    Hi 80    Lo 95    Hi 95
2000       24.67819 19.17493 30.18145 16.26169 33.09470
2001       24.67819 19.17333 30.18305 16.25924 33.09715
2002       24.67819 19.17173 30.18465 16.25679 33.09960
2003       24.67819 19.17013 30.18625 16.25434 33.10204
2004       24.67819 19.16853 30.18785 16.25190 33.10449
2005       24.67819 19.16694 30.18945 16.24945 33.10694
2006       24.67819 19.16534 30.19105 16.24701 33.10938
2007       24.67819 19.16374 30.19265 16.24456 33.11182
2008       24.67819 19.16214 30.19425 16.24212 33.11427
2009       24.67819 19.16054 30.19584 16.23968 33.11671
2010       24.67819 19.15895 30.19744 16.23724 33.11915
2011       24.67819 19.15735 30.19904 16.23479 33.12159
2012       24.67819 19.15576 30.20063 16.23235 33.12403
2013       24.67819 19.15416 30.20223 16.22991 33.12647
2014       24.67819 19.15257 30.20382 16.22748 33.12891
2015       24.67819 19.15097 30.20542 16.22504 33.13135
2016       24.67819 19.14938 30.20701 16.22260 33.13379
2017       24.67819 19.14778 30.20860 16.22016 33.13623
2018       24.67819 19.14619 30.21020 16.21773 33.13866
2019       24.67819 19.14460 30.21179 16.21529 33.14110

For each year, the output shows the point forecast, an 80% prediction interval, and a 95% prediction interval.

Plotting the predictions:

forecast:::plot.forecast(ts.p1.forecasts2)

The forecasts for 2000-2019 are plotted as a blue line, the 80% prediction interval as a dark blue shaded area, and the 95% prediction interval as a grey shaded area.

Forecast errors

Forecast errors are stored in the element residuals.

  • If the predictive model cannot be improved upon, there should be no correlations between forecast errors for successive predictions.
  • If there are correlations between forecast errors for successive predictions, it is likely that the simple exponential smoothing forecasts could be improved upon by another forecasting technique.

To figure out whether this is the case, we can obtain a correlogram of the in-sample forecast errors. We can calculate a correlogram of the forecast errors using the acf function in R.

acf computes (and plots, by default) the estimation of the autocovariance or AutoCorrelation Function (ACF).

acf(ts.p1.forecasts2$residuals,na.action = na.pass, lag.max = 20)

The autocorrelation at lag 3 is just touching the significance bounds, suggesting possible evidence for a non-zero correlation at that lag.

To test whether there is significant evidence for non-zero correlations at lags 1-20, we can carry out a Ljung-Box test. This can be done in R using the Box.test function.

Box.test(ts.p1.forecasts2$residuals, lag = 20, type = "Ljung-Box")

    Box-Ljung test

data:  ts.p1.forecasts2$residuals
X-squared = 17.401, df = 20, p-value = 0.6268

The p-value is 0.6, so there is little evidence of non-zero autocorrelations in the in-sample forecast errors at lags 1-20.

12.7 ACF test

ACF (AutoCorrelation Function):

  • Purpose: The ACF is used to study and visualize the autocorrelation in a time series. Autocorrelation measures the relationship between a data point and its past lags (previous observations in the series). It helps you understand the patterns, cycles, and seasonality in the data.

  • How it works: The ACF computes and plots the autocorrelation coefficients for different lags. It shows the strength and direction (positive or negative) of the correlation between each observation and its lags.

  • Interpretation: ACF can help you identify significant lags in the data, which can inform the choice of lag order in models like ARIMA (see below). Significant lags may indicate seasonality or other patterns in the data.

In an ACF (AutoCorrelation Function) test, if you observe a significant autocorrelation at lag 3, it means that there is a statistically meaningful relationship between data points that are separated by a lag of 3 time periods (lags). Here’s what this implies:

  • Lag 3 Autocorrelation: At lag 3, the ACF measures the correlation between the current data point and the data point that is three time periods (time steps) in the past. A positive autocorrelation suggests that as the current data point increases, the data point at lag 3 also tends to increase, and vice versa for a negative autocorrelation.

  • Significance: When we say that the autocorrelation at lag 3 is “significant,” it means that the correlation is unlikely to have occurred by random chance. In other words, the autocorrelation is strong enough to be considered a real relationship in the data.

  • Interpretation: The significance of a lag in the ACF can provide insights into potential patterns or seasonality in the time series. For example, a significant positive autocorrelation at lag 3 might suggest that the time series exhibits a repeating pattern or cycle with a length of 3 time periods.

  • Modeling: When building time series models like ARIMA, the presence of significant lags in the ACF can inform the choice of model parameters (e.g., the autoregressive order, denoted as ‘p’ in ARIMA) to capture these relationships and patterns.

In summary, a significant autocorrelation at lag 3 in the ACF suggests that there is a meaningful relationship between data points separated by a lag of 3 time periods, which can be indicative of repeating patterns or seasonality in the time series. It’s an important finding when analyzing and modeling time series data.
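As a small illustration (a hypothetical simulated series, not one of the datasets used above), a pattern that repeats every 3 time steps produces exactly this kind of significant spike at lag 3 (and its multiples) in the ACF:

# simulate a series that repeats a pattern of length 3, plus some noise
set.seed(1)
x <- rep(c(5, 0, -5), times = 40) + rnorm(120)
acf(x, lag.max = 12)  # significant spikes at lags 3, 6, 9, ... reveal the period-3 pattern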

Analyse the evolution of minimum temperatures in Malaga since 2001 with the methods seen in this document.

We obtain a time-series object with the ts function. As the data are monthly, the parameter frequency is set to 12, and we indicate that the series starts in January 2001. After this, we plot the object:

library('ggplot2')
library('TTR')
library('forecast')
library('tseries')
library(readr)
temperaturas <- read_csv("data/temp.csv")
Rows: 214 Columns: 6
── Column specification ────────────────────────────────────────────────────────
Delimiter: ","
chr (2): fecha, indicativo
dbl (4): X1, X1_1, tm_max, tm_min

ℹ Use `spec()` to retrieve the full column specification for this data.
ℹ Specify the column types or set `show_col_types = FALSE` to quiet this message.
ts.p1 <- ts(temperaturas$tm_min, start = c(2001, 1), frequency = 12)
plot.ts(ts.p1, xlab = "años", ylab = "temperatura mínima", main = "Temperaturas mínimas Málaga")

12.8 Simple Exponential Smoothing

Exponential Smoothing assigns weights to the values of the variable: forecasts are weighted averages of past observations, with the weights decreasing exponentially for older observations.

  • alpha parameter of Holt-Winters Filter.
  • beta parameter of Holt-Winters Filter. If set to FALSE, the function will do exponential smoothing.
  • gamma parameter used for the seasonal component. If set to FALSE, a non-seasonal model is fitted.

Looking at these parameters, we fit a model with beta = FALSE, as we want exponential smoothing (without a trend component).

# we obtain the model
ts.p1.forecasts <- HoltWinters(ts.p1, beta = FALSE)
ts.p1.forecasts
Holt-Winters exponential smoothing without trend and with additive seasonal component.

Call:
HoltWinters(x = ts.p1, beta = FALSE)

Smoothing parameters:
 alpha: 0.1412013
 beta : FALSE
 gamma: 0.2045694

Coefficients:
          [,1]
a   14.8002193
s1  -3.0179721
s2  -5.3483612
s3  -6.0676567
s4  -5.8300885
s5  -4.2971166
s6  -2.0597869
s7   0.5443399
s8   4.3512303
s9   6.9081866
s10  7.9327353
s11  5.3361482
s12  1.4610327
# we draw the model
plot(ts.p1.forecasts)

The graph shows the original time series in black, and forecasts of these values as a red line. The forecast time series is very close to the original data.

We now draw the forecast using forecast.

plot(forecast(ts.p1.forecasts))

As we can see it fits quite well with what is happening in recent years.

Let’s try a higher alpha value:

ts.p2.forecasts <- HoltWinters(ts.p1, alpha = 0.5, beta = FALSE)
ts.p2.forecasts
Holt-Winters exponential smoothing without trend and with additive seasonal component.

Call:
HoltWinters(x = ts.p1, alpha = 0.5, beta = FALSE)

Smoothing parameters:
 alpha: 0.5
 beta : FALSE
 gamma: 0.3927417

Coefficients:
          [,1]
a   15.0512589
s1  -3.1697282
s2  -5.4838631
s3  -6.2075694
s4  -5.9219038
s5  -4.2667725
s6  -1.8548302
s7   0.8149946
s8   4.5957651
s9   7.0279317
s10  8.0279557
s11  5.3161216
s12  1.1929326
plot(forecast(ts.p2.forecasts))

ts.p3.forecasts <- HoltWinters(ts.p1, alpha = 0.9, beta = FALSE)
ts.p3.forecasts
Holt-Winters exponential smoothing without trend and with additive seasonal component.

Call:
HoltWinters(x = ts.p1, alpha = 0.9, beta = FALSE)

Smoothing parameters:
 alpha: 0.9
 beta : FALSE
 gamma: 0.9999496

Coefficients:
          [,1]
a   14.7654331
s1  -3.0613814
s2  -5.3688209
s3  -5.8712039
s4  -5.2291009
s5  -3.4451379
s6  -1.4794315
s7   0.5904564
s8   4.0760361
s9   6.5257969
s10  7.4170932
s11  4.6994542
s12  1.1345783
plot(forecast(ts.p3.forecasts))

As we can see, there is not much difference between the first model and the other two created later, so the alpha value estimated by the first model is a reasonable choice. We can also see that the higher the alpha value, the more closely the fitted values track the most recent observations, and the gamma value increases along with alpha.

When we use the forecast.HoltWinters function, the first argument is the predictive model that we have already fitted using the HoltWinters function.

To specify how many additional time points we want to predict, we use the parameter “h”.

The prediction for the next 2 years is:

# we indicate the parameter h=24 for 2 years, as we have the time series by months
ts.p1.forecasts2 <- forecast:::forecast.HoltWinters(ts.p1.forecasts, h = 24)
ts.p1.forecasts2
         Point Forecast     Lo 80    Hi 80     Lo 95    Hi 95
Nov 2018      11.782247 10.448881 13.11561  9.743039 13.82146
Dec 2018       9.451858  8.105265 10.79845  7.392421 11.51129
Jan 2019       8.732563  7.372871 10.09225  6.653094 10.81203
Feb 2019       8.970131  7.597467 10.34279  6.870822 11.06944
Mar 2019      10.503103  9.117587 11.88862  8.384139 12.62207
Apr 2019      12.740432 11.342183 14.13868 10.601995 14.87887
May 2019      15.344559 13.933692 16.75543 13.186823 17.50229
Jun 2019      19.151450 17.728075 20.57482 16.974587 21.32831
Jul 2019      21.708406 20.272634 23.14418 19.512582 23.90423
Aug 2019      22.732955 21.284891 24.18102 20.518333 24.94758
Sep 2019      20.136368 18.676116 21.59662 17.903106 22.36963
Oct 2019      16.261252 14.788913 17.73359 14.009504 18.51300
Nov 2019      11.782247 10.250481 13.31401  9.439613 14.12488
Dec 2019       9.451858  7.908565 10.99515  7.091594 11.81212
Jan 2020       8.732563  7.177827 10.28730  6.354800 11.11032
Feb 2020       8.970131  7.404038 10.53622  6.574998 11.36526
Mar 2020      10.503103  8.925733 12.08047  8.090724 12.91548
Apr 2020      12.740432 11.151866 14.32900 10.310930 15.16993
May 2020      15.344559 13.744875 16.94424 12.898054 17.79106
Jun 2020      19.151450 17.540725 20.76217 16.688058 21.61484
Jul 2020      21.708406 20.086715 23.33010 19.228243 24.18857
Aug 2020      22.732955 21.100371 24.36554 20.236134 25.22978
Sep 2020      20.136368 18.492964 21.77977 17.622999 22.64974
Oct 2020      16.261252 14.607099 17.91541 13.731443 18.79106
forecast:::plot.forecast(ts.p1.forecasts2)

The forecasts for the next 2 years are represented as a blue line, the 80% prediction interval as a dark grey shaded area and the 95% prediction interval as a grey shaded area.

Forecast errors

Errors are stored in residuals.

  • If the model cannot be improved, there should be no correlations between forecast errors for successive predictions.
  • If there are correlations between forecast errors for successive predictions, it is likely that the predictions can be improved with another technique.

To find out which is the case, we can obtain a correlogram of the errors. We can calculate it using the acf function, which estimates the autocovariance or autocorrelation function (ACF).

Let’s see what happens: we apply acf to the residuals with lag.max = 24.

# na.action: whether or not to take the NA values into account
acf(ts.p1.forecasts2$residuals, na.action = na.pass, lag.max = 24)

We perform a Ljung-Box test to see if there is significant evidence of non-zero correlations at lags 1-24.

Box.test(ts.p1.forecasts2$residuals, lag = 24, type = "Ljung-Box")

    Box-Ljung test

data:  ts.p1.forecasts2$residuals
X-squared = 41.723, df = 24, p-value = 0.01386

As we can see, the p-value is about 0.01, which tells us that there is evidence of non-zero autocorrelations in the forecast errors.

Note: The ACF (AutoCorrelation Function) chart is also used to identify non-stationary time series: for a stationary series the ACF drops to zero relatively quickly, while for a non-stationary series it decreases slowly.

12.9 ARIMA models versus Holt-Winters models

The difference between exponential smoothing models and ARIMA models is that the former are based on a description of the trend and seasonality of the data, while ARIMA models study the autocorrelations of the data.

Characteristics:

ARIMA (AutoRegressive Integrated Moving Average): - ARIMA decomposes a time series into three components: AutoRegressive (AR), Integrated (I), and Moving Average (MA). - ARIMA is particularly effective when dealing with stationary time series data, meaning that the statistical properties of the data do not change over time.

Holt-Winters: - Holt-Winters is a triple exponential smoothing method that decomposes a time series into three components: level, trend, and seasonality. - Holt-Winters is designed for time series data with trend and seasonality, making it a good choice for data that exhibits systematic patterns over time.
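As a rough sketch of how the two families are fitted in practice (using the AirPassengers series loaded earlier; auto.arima from the forecast package selects the ARIMA order automatically), both approaches can be applied to the same series and their in-sample accuracy compared:

library(forecast)
hw_fit    <- HoltWinters(AirPassengers)  # level, trend and seasonal smoothing
arima_fit <- auto.arima(AirPassengers)   # ARIMA order chosen from the autocorrelation structure
accuracy(forecast(hw_fit, h = 12))       # training-set error measures
accuracy(forecast(arima_fit, h = 12))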

12.10 Stationarity

A stationary time series is one where the statistical properties of the data do not change over time, specifically with respect to time-dependent moments like mean, variance, and covariance.

  a. Constant Mean: The mean (average) of the time series data remains roughly constant over time.
  b. Constant Variance: The variance (spread or dispersion) of the data remains roughly constant over time.
  c. Constant Autocovariance: The autocovariance (covariance between data points at different time lags) remains roughly constant over time.
  • A time series is said to be stationary when its statistical properties do not depend on the instant of observation (a white noise series is a classic example).

    In a stationary time series, the behavior of the data is not influenced by when the data was collected. This means that the patterns and characteristics of the data are consistent regardless of the time at which you look at it.

    White noise is a specific type of stationary time series where each data point is independently and identically distributed with a constant mean and constant variance. In other words, white noise is a stationary time series with no systematic patterns, trends, or correlations between data points.

  • If a series has a trend or seasonality, it is not stationary.

  • A time series that has a cyclical component (but no trend or seasonality) is stationary.

When a time series exhibits these stationary characteristics, it is often easier to analyze and model because the statistical properties do not change with time.

This is an important assumption in many time series analysis techniques, such as autoregressive integrated moving average (ARIMA) modeling.

If a time series is not stationary, it may require transformation (e.g., differencing) to make it stationary before applying certain analytical methods.
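As a minimal sketch (with simulated data, not one of the datasets above), first differencing with diff() turns a random walk with drift, which is not stationary, into an approximately stationary series:

set.seed(123)
rw_drift <- ts(cumsum(rnorm(100, mean = 0.5)))  # random walk with drift: non-stationary
par(mfrow = c(1, 2))
plot(rw_drift,       main = "Random walk with drift")
plot(diff(rw_drift), main = "First differences")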

Stationarity with ARIMA?: - ARIMA models require that the time series be made stationary, often through differencing. - ARIMA is suitable for data with or without seasonality as long as it can be made stationary.

Stationarity with Holt-Winters?: - Holt-Winters can handle non-stationary data but is especially effective for time series with seasonality and trends.

Example:

Using a sales dataset, we check for stationarity with the Augmented Dickey-Fuller (ADF) test and fit an ARIMA model to the stationary data.

# Create a simple example dataset
date <- seq(as.Date("2020-01-01"), by = "1 month", length.out = 36)
sales <- c(100, 110, 105, 120, 130, 125, 140, 150, 145, 160, 170, 165,
           180, 190, 185, 200, 210, 205, 220, 230, 225, 240, 250, 245,
           260, 270, 265, 280, 290, 285, 300, 310, 305, 320, 330, 325)

# Create a data frame
sales_data <- data.frame(Date = date, Sales = sales)

# Optionally, save the data to a CSV file, e.g.:
# write.csv(sales_data, "sales_data.csv", row.names = FALSE)

library(tseries)
library(forecast)

 
# Convert the date column to a date object (adjust column name as needed)
sales_data$Date <- as.Date(sales_data$Date)

# Create a time series object
sales_ts <- ts(sales_data$Sales, frequency = 12)  # Assuming monthly data

# Plot the time series
plot(sales_ts, main = "Monthly Sales Data", xlab = "Date", ylab = "Sales")

# Check for stationarity
adf_test <- adf.test(sales_ts)
Warning in adf.test(sales_ts): p-value smaller than printed p-value
if (adf_test$p.value < 0.05) {
  cat("The time series is stationary.\n")
} else {
  cat("The time series is not stationary. You may need to difference it.\n")
}
The time series is stationary.

12.10.1 ADF test

  1. ADF (Augmented Dickey-Fuller) Test:

    • Purpose: The ADF test is used to determine whether a time series is stationary or non-stationary. Stationarity is a critical assumption in time series modeling because many time series methods, like ARIMA, require data to be stationary.

    • How it works: The ADF test compares the observed data to a null hypothesis that the data has a unit root (non-stationary). It tests whether differencing the data (removing trends) makes it stationary.

    • Interpretation: The ADF test provides a test statistic and a p-value. If the p-value is less than a chosen significance level (e.g., 0.05), you can reject the null hypothesis and conclude that the data is stationary. If the p-value is greater, you may not be able to conclude stationarity.

The adf.test function will perform the ADF test on the sales_ts time series data. The results will include statistics and a p-value.

The p-value is a critical indicator to determine whether the time series is stationary. If the p-value is less than your chosen significance level (e.g., 0.05), you can conclude that the time series is stationary.

print(adf_test)

    Augmented Dickey-Fuller Test

data:  sales_ts
Dickey-Fuller = -1.6224e+15, Lag order = 3, p-value = 0.01
alternative hypothesis: stationary
  • p-value: The p-value is 0.01.

    This indicates that we can reject the hypothesis of non-stationarity, so the series does not require additional differencing.

  • Alternative hypothesis: The alternative hypothesis is “stationary.”

    The p-value is the key component to determine stationarity. In this case, the p-value is 0.01, which is less than the commonly chosen significance level of 0.05. When the p-value is less than your chosen significance level, you can reject the null hypothesis that the time series is non-stationary. Therefore, based on the low p-value of 0.01, you can conclude that your time series is stationary.

    The “Lag order” value of 3 in the Augmented Dickey-Fuller (ADF) test results indicates the number of lags used in the test to account for potential autocorrelation in the time series data. Specifically, it represents the number of past observations (time steps) that are considered when testing for stationarity.

    In this case, a lag order of 3 means that the test includes three lagged differences to account for autocorrelation in the series.
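For reference, the default lag order used by adf.test in the tseries package is the truncated cube root of the sample size, which with the 36 monthly observations of this example gives 3:

# default lag order of tseries::adf.test: trunc((n - 1)^(1/3)); here n = 36, so the result is 3
trunc((length(sales_ts) - 1)^(1/3))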

After fitting the model, we generate a forecast for future sales and plot the forecasted values.

# Fit an ARIMA model (adjust the order based on your analysis)
arima_model <- arima(sales_ts, order = c(1, 1, 1))

# Print model summary
summary(arima_model)

Call:
arima(x = sales_ts, order = c(1, 1, 1))

Coefficients:
          ar1     ma1
      -0.3552  1.0000
s.e.   0.1692  0.0792

sigma^2 estimated as 77.43:  log likelihood = -127.21,  aic = 260.42

Training set error measures:
                   ME     RMSE      MAE      MPE     MAPE      MASE       ACF1
Training set 4.232632 8.676369 5.934374 2.135967 3.156221 0.6020379 -0.5265618
# Forecast future sales (adjust the number of periods as needed)
forecast_periods <- 12  # Number of months to forecast
sales_forecast <- forecast(arima_model, h = forecast_periods)

# Plot the forecast
plot(sales_forecast, main = "Monthly Sales Forecast")

# Print the forecasted values
sales_forecast$mean
       Jan      Feb      Mar      Apr      May      Jun      Jul      Aug
4 325.1436 325.0926 325.1107 325.1043 325.1066 325.1058 325.1060 325.1059
       Sep      Oct      Nov      Dec
4 325.1060 325.1060 325.1060 325.1060

12.11 Forecasting Approaches

ARIMA: - ARIMA models forecast future values based on past values, their lags, and moving averages of past forecast errors. - It models the autocorrelation between observations and their lags.

Holt-Winters: - Holt-Winters models account for three components: level, trend, and seasonality. - It uses a weighted average of these components to make forecasts and is particularly well-suited for data with strong seasonality.

12.11.1 Model Complexity

ARIMA: - ARIMA models are relatively simpler, as they involve setting orders for the AR, I, and MA components. - They can be adjusted by varying the orders (p, d, q) (see below).

Holt-Winters: - Holt-Winters models are more complex due to the triple exponential smoothing approach. - They require setting parameters for level, trend, and seasonality, and potentially smoothing parameters.

Use Cases:

ARIMA: - ARIMA is suitable for modeling time series data without strong seasonality, such as financial data or stock prices. - It is effective when the main focus is capturing lags and autocorrelation.

Holt-Winters: - Holt-Winters is ideal for data with clear seasonal patterns, like monthly sales, quarterly economic data, or daily weather observations. - It is particularly well-suited for forecasting when seasonality and trends are prominent.

In practice, the choice between ARIMA and Holt-Winters depends on the characteristics of your time series data.

It’s important to analyze your data and determine whether it exhibits seasonality, trends, and stationarity, and then choose the method that best fits those characteristics.

Additionally, hybrid models, which combine ARIMA and Holt-Winters components, can also be used for more complex time series data.

12.12 ARIMA models

ARIMA is a popular time series forecasting method used to model and predict data that has temporal dependencies.

As we have already seen, the name ARIMA comes from AutoRegressive Integrated Moving Average.

  • AR: the study variable is analysed using the above values (lag).

    This component models the relationship between the current observation and a specified number of past observations. It captures the influence of past values on the present value. In ARIMA, this parameter is denoted as ‘p.’

  • MA: the regression error is a linear combination of current and past forecast errors.

    This component models the relationship between the current observation and a linear combination of past forecast errors. It accounts for the influence of past forecast errors on the present value. In ARIMA, this parameter is denoted as ‘q.’

  • I: replaces the value of the variable by the difference between consecutive values (applied one or more times).

    This component represents the number of differences needed to make the time series stationary. Differencing is the process of subtracting an observation from a lagged observation. In ARIMA, this parameter is denoted as ‘d.’

The combination of these three components (AR, I, and MA) determines the order of the ARIMA model, which is written as ARIMA(p, d, q).

ARIMA models can be obtained following the Box-Jenkins method.

ARIMA(1, 0, 0):

Let’s say we have monthly sales data for a store. To apply a simple ARIMA(1, 0, 0) model, we are using only the autoregressive component.

  • AR component (p): 1
  • No differencing (d): 0
  • No moving average (q): 0

The model obtained will be:

\[Y_t =\phi_1 \cdot Y_{t−1} +C+ε_t\] This means that the current observation depends only on the previous observation with coefficient \(\phi_1\).

This ARIMA model is relatively simple, and the forecast is based mainly on the previous observation.

library(forecast)

# Generate a simple example time series
sales_data <- c(100, 110, 105, 120, 130)

# Create an ARIMA(1,0,0) model
arima_model <- arima(sales_data, order = c(1, 0, 0))

# phi and C: 
arima_model$coef
       ar1  intercept 
  0.504016 113.577998 
# Forecast future sales (adjust the number of periods as needed)
forecast_periods <- 3
sales_forecast <- forecast(arima_model, h = forecast_periods)

# Print the forecasted values
sales_forecast$mean
Time Series:
Start = 6 
End = 8 
Frequency = 1 
[1] 121.8549 117.7497 115.6806

In this example, we used only the autoregressive component to capture the relationship between the current sales and the previous month’s sales.

ARIMA(0, 1, 1):

Now, let’s consider a more complex ARIMA model with differencing and moving average components:

  • AR component (p): 0

  • One level of differencing (d): 1

    to make the time series stationary

  • Moving Average component (q): 1

    to capture the influence of past forecast errors

The model obtained will be:

\[Y_t = Y_{t-1} + \theta_1 \cdot ε_{t-1} + ε_t\] The coefficient \(\theta_1\) is the coefficient of the MA term.

This ARIMA(0,1,1) model implies that future observations \(Y_t\) depend on past observations \(Y_{t-1}\) and a moving average component of order 1, \(\theta_1 \cdot ε_{t-1}\).

First-order differencing (\(d=1\)) has been applied to make the series stationary before modelling. The error term (\(ε_t\)) is a residual component that captures the variability not explained by the past observations and the moving average term.

In the example below, this ARIMA model is used to forecast future temperatures based on past observations and the moving average component.

# Generate a simple example time series
temperature_data <- c(70, 72, 75, 78, 80, 82, 85, 88, 90, 92)

# Create an ARIMA(0,1,1) model
arima_model <- arima(temperature_data, order = c(0, 1, 1))

arima_model$coef
      ma1 
0.9999968 
# Print the model summary
summary(arima_model)

Call:
arima(x = temperature_data, order = c(0, 1, 1))

Coefficients:
         ma1
      1.0000
s.e.  0.2901

sigma^2 estimated as 1.733:  log likelihood = -16.4,  aic = 36.79

Training set error measures:
                   ME     RMSE      MAE      MPE     MAPE      MASE       ACF1
Training set 1.151507 1.249193 1.151507 1.416502 1.416502 0.4710709 -0.0770721
# Forecast future temperatures (adjust the number of periods as needed)
forecast_periods <- 3
temperature_forecast <- forecast(arima_model, h = forecast_periods)

# Print the forecasted values
temperature_forecast$mean
Time Series:
Start = 11 
End = 13 
Frequency = 1 
[1] 93.19999 93.19999 93.19999

These examples illustrate how ARIMA models are constructed and applied to time series data for forecasting. The choice of ARIMA order (p, d, q) depends on the specific characteristics of the data and may require some analysis and experimentation.

Autoregressive models - AR(p):

The current value of the series, \(y_t\), can be explained as a function of its \(p\) past values, \(y_{t-1}, y_{t-2}, \ldots, y_{t-p}\), where \(p\) determines the number of past steps needed for the prediction.

\[ y_t=c+\phi_1y_{t-1}+\phi_2y_{t-2}+\cdots+\phi_py_{t-p}+\epsilon_t \]

  • \(\epsilon_t\): Gaussian white noise with mean 0 and variance \(\sigma^2_\epsilon\).
  • If the mean is non-zero, \(c\) takes the value \(c = \mu(1-\phi_1-\cdots-\phi_p)\).

This is an autoregressive model of order \(p\) and applies to stationary series.

As an alternative to the autoregressive representation, \(y_t\) (on the left-hand side of the equation) can be expressed as a linear combination of the current and the \(q\) most recent error terms, where \(\epsilon_t\) is white noise. This gives the moving-average model MA(\(q\)):

\[ y_t=c+\epsilon_t+\theta_1\epsilon_{t-1}+\theta_2\epsilon_{t-2}+\cdots+\theta_q\epsilon_{t-q} \]

As with autoregressive models, changing the \(\theta\) parameters changes the time series patterns, while changing the variance of the error term only changes the scale of the series.
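To see how these coefficients shape the series, we can simulate from an AR(1) and an MA(1) model with arima.sim(); this is only a sketch, and the coefficient values 0.8 and 0.6 are arbitrary.

set.seed(1)

# AR(1) with phi_1 = 0.8: each value depends strongly on the previous one
ar_sim <- arima.sim(model = list(ar = 0.8), n = 200)

# MA(1) with theta_1 = 0.6: each value combines the current and previous error terms
ma_sim <- arima.sim(model = list(ma = 0.6), n = 200)

par(mfrow = c(1, 2))
plot.ts(ar_sim, main = "Simulated AR(1)")
plot.ts(ma_sim, main = "Simulated MA(1)")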

To convert a non-stationary time series into a stationary time series, the differences between consecutive signal values are calculated (trend and seasonality are removed or reduced).

The differences between consecutive signal values are calculated as:

\[ y'_t=y_t-y_{t-1} \]

If the differentiated series is white noise:

\[ y_t-y_{t-1}=\epsilon_t \] (\(\epsilon_t\) denotes white noise)

A closely related model allows the difference to have a non-zero mean. \[ y_t-y_{t-1}=c+\epsilon_t \] where \(c\) is the average of the changes between consecutive observations.

The data may need to be differenced a second time to obtain a stationary series (the change in the changes).

A seasonal difference is the difference between an observation and the corresponding observation from the previous season: \[ y'_t = y_t - y_{t-m} \] where \(m\) is the number of seasons (e.g., \(m = 12\) for monthly data).

lynx <- read.csv(file = "data/annual-number-of-lynx-trapped-ma.csv", header = TRUE, quote = '\"')
lynx.ts <- ts(lynx$Lynx, frequency = 1, start = c(1821))
electricity <- read.csv(file = "data/monthly-electricity-production-i.csv", header = TRUE, quote = '\"')
electricity.ts <- ts(electricity$Million.kilowatt.hours, frequency = 12, start = c(1956, 1))
plot.ts(lynx.ts, xlab = "Year", ylab = "Lynx Trapped")

Although there are cycles, they are not fixed in time.
plot.ts(electricity.ts, xlab = "Year", ylab = "kW/h")

Clear trend indicates this is not a stationary series.
par(mfrow = c(1, 2))
acf(lynx.ts)
acf(electricity.ts)

Lynx trapped ACF plot (left) and Australian electricity production (million kilowatt hours) ACF plot (right).

Therefore, in an ARIMA model we will use three parameters: (p, d, q).

  • p: refers to the use of past values of the series. It specifies the number of lags used in the model. For example, AR(2), or equivalently ARIMA(2,0,0), uses two lags of the series.
  • d: degree of differencing. The series is differenced (each value minus the previous one) d times.
  • q: the error is modelled as a combination of the q previous error terms.

ARIMA works best for long and stable series.

If we combine differencing and autoregression we obtain a non-seasonal ARIMA model: \[ y'_t = c + \phi_1y'_{t-1}+\cdots+\phi_py'_{t-p}+\theta_1\epsilon_{t-1}+\cdots+\theta_q\epsilon_{t-q}+\epsilon_t \] where \(y'_t\) is the series to which differencing has been applied. These models are represented as ARIMA(\(p,d,q\)), where \(d\) represents the degree of differencing.

12.13 Auto-ARIMA model

In R we have functions that automatically calculate the parameter values of an ARIMA model.

The auto.arima() function is a modification of the Hyndman-Khandakar algorithm.

R’s auto.arima() function combines unit-root tests and AICc minimisation to obtain an ARIMA model. By default, it uses initial approximations to speed up the model search (argument approximation = TRUE/FALSE).

If this does not find a good model (minimum AICc), a larger set of models can be searched with the argument stepwise = FALSE. See Forecasting: Principles and Practice for more on this topic.
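For instance, a minimal sketch on the AirPassengers series; the arguments shown simply illustrate the options mentioned above.

library(forecast)

# Default search: stepwise selection with approximations (fast)
fit_fast <- auto.arima(AirPassengers)

# Exhaustive search: slower, but may find a model with a lower AICc
fit_full <- auto.arima(AirPassengers, stepwise = FALSE, approximation = FALSE)

fit_fast
fit_full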

To choose a custom model for a non-seasonal series, we will use the Arima() function as follows (a code sketch follows the list):

  1. Plot the data and identify outliers.

  2. Transform the data to stabilise the variance.

  3. If the data is not stationary use differencing until it is.

  4. Analyse ACF/PACF for an ARIMA(\(p,d,0\)) or ARIMA(\(0,d,q\)) model.

  5. Try to pick your model and use AICc to find the best one.

  6. Check that the residuals behave like white noise:

  • if this is not satisfied look for another model
  • if this is true, we can use the model to predict
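A minimal sketch of these steps, using the built-in Nile series (annual, non-seasonal) and the forecast package; the candidate order (1, d, 1) is only an illustrative choice.

library(forecast)

y <- Nile

# 1-2. Plot the data, look for outliers, and stabilise the variance
plot(y)
lambda <- BoxCox.lambda(y)

# 3. Estimate how many differences are needed to reach stationarity
d <- ndiffs(BoxCox(y, lambda))

# 4. Inspect ACF/PACF of the differenced series to suggest p and q
tsdisplay(diff(BoxCox(y, lambda)))

# 5. Fit a candidate model and compare its AICc with alternatives
fit <- Arima(y, order = c(1, d, 1), lambda = lambda)
fit$aicc

# 6. Check that the residuals behave like white noise
checkresiduals(fit)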

12.14 A use case

If your time series is not stationary, you should take steps to make it stationary before applying time series analysis techniques like ARIMA, Holt-Winters, or other methods. Stationarity is a fundamental assumption for many time series models, and non-stationary data can lead to unreliable results. Here’s what you should do:

  1. Identify Non-Stationarity:
    • First, ensure that you have identified the non-stationarity in your time series data. Common signs of non-stationarity include trends (long-term changes) and seasonality (repeating patterns).

For instance:

library(tseries)
adf_test_result <- adf.test(time_series_data)
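adf_test_result$p.value  # a p-value below 0.05 suggests the series is stationary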

or use Autocorrelation Function (ACF) and Partial Autocorrelation Function (PACF)

acf(time_series_data)
pacf(time_series_data)

or use seasonal decomposition techniques (e.g., stl()) to break down your time series into its trend, seasonal, and residual components. If you observe strong trends or seasonality, it suggests non-stationarity.

decomposed_data <- stl(time_series_data, s.window = "periodic")
plot(decomposed_data)
  2. Differencing:
    • The most common method to achieve stationarity is differencing. You can difference the data by subtracting the previous data point from each data point (first-order differencing). Repeat this process until the data becomes stationary. For example, you can use the diff() function in R.

For instance:

# Example time series data (replace with your actual data)
time_series_data <- c(100, 110, 105, 120, 130, 135, 140, 150)

# First-order differencing
first_order_difference <- diff(time_series_data, lag = 1)

# The 'first_order_difference' variable now contains the differenced values

# Second-order differencing
second_order_difference <- diff(time_series_data, lag = 1, differences = 2)

# The 'second_order_difference' variable contains the result of second-order differencing

# Print or visualize the differenced data
print(first_order_difference)
[1] 10 -5 15 10  5  5 10
print(second_order_difference)
[1] -15  20  -5  -5   0   5
# Adjust the lag and differences parameters according to the specific characteristics of your data and the level of differencing needed to achieve stationarity.
  3. Log Transformation:
    • In some cases, taking the logarithm of the data can help stabilize variance and reduce non-stationarity, especially when you have exponential growth.

For instance:

log_transformed_data <- log(original_data)
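# forecasts made on the log scale can be back-transformed with exp()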
  4. Seasonal Differencing:
    • If your data exhibits seasonality, consider seasonal differencing by subtracting values from the same season in the previous year or period.

For instance:

seasonal_difference <- diff(time_series_data, lag = 12)
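# Note: lag = 12 assumes monthly data with more than 12 observations;
# applied to the short example vector above it would return an empty result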
  5. Remove Trends:
    • If you identify a trend in your data, remove it by fitting a trend model (e.g., linear or polynomial) and subtracting it from the data. You can also use rolling means or moving averages.

For instance:

# Fit a simple linear trend and subtract it from the series
trend_model <- lm(time_series_data ~ seq_along(time_series_data))
detrended_data <- time_series_data - fitted(trend_model)
  6. Use Seasonal Decomposition:

    • Seasonal decomposition techniques, like seasonal decomposition of time series (STL) or classical decomposition, can help in separating the trend and seasonality from the data.

For instance:

  • Seasonal decomposition with STL
decomposed_data <- stl(time_series_data, s.window = "periodic")
trend_component <- decomposed_data$time.series[, "trend"]
detrended_data <- time_series_data - trend_component
  7. Augmented Dickey-Fuller Test:
    • Perform an Augmented Dickey-Fuller (ADF) test to quantitatively test for stationarity. The test determines whether your data is stationary or not. If the p-value is less than a chosen significance level (e.g., 0.05), you can consider the data stationary.
  8. Select Appropriate Parameters:
    • Once the data is stationary, you can identify appropriate parameters (e.g., p, d, q) for ARIMA modeling.

To identify these parameters you can use:

  • Visual inspection (e.g., ACF/PACF plots)
  • Systematically trying different combinations of parameter values (p, d, q) and evaluating model fit
  • Expert knowledge
  • auto.arima()

  9. Model Building and Forecasting:
    • With stationary data and appropriate parameters, you can proceed with building your time series model (e.g., ARIMA) and make forecasts.
  10. Evaluate and Validate:
    • After modeling, evaluate your results, validate the model, and use it for forecasting or analysis.

Remember that the process of making a time series stationary may involve trial and error. It’s essential to understand the nature of your data and apply the appropriate transformations or differencing to achieve stationarity.

Additionally, some data may require more advanced techniques like seasonal adjustment, structural break analysis, or transformation of the underlying process to achieve stationarity.

As an example, we analyse the evolution of minimum temperatures in Málaga since 2001 using the methods seen in this document.

We obtain a time-series object with the ts() function; as our data are monthly, we set the parameter frequency to 12 and indicate that the series starts in January 2001. Then we plot the resulting object:

library('ggplot2')
library('TTR')
library('forecast')
library('tseries')
library(readr)
temperaturas <- read_csv("data/temp.csv")
Rows: 214 Columns: 6
── Column specification ────────────────────────────────────────────────────────
Delimiter: ","
chr (2): fecha, indicativo
dbl (4): X1, X1_1, tm_max, tm_min

ℹ Use `spec()` to retrieve the full column specification for this data.
ℹ Specify the column types or set `show_col_types = FALSE` to quiet this message.
ts.p1 <- ts(temperaturas$tm_min, start = c(2001, 1), frequency = 12)
plot.ts(ts.p1, xlab = "year", ylab = "minimum temperature", main = "Minimum temperatures in Málaga")

ARIMA models

We decompose the data into its seasonal, trend, cycle, and residual components.

  • we obtain a time series with a frequency of 12 months
  • we decompose the series with stl()
  • the seasonality can then be removed with seasadj() from the forecast package
obj.ts <- ts(na.omit(temperaturas$tm_min), frequency = 12, start = c(2001, 1))
decomp <- stl(obj.ts, s.window = "periodic") # uses additive model by default
deseasonal_cnt <- seasadj(decomp)
plot(decomp)

Previously we have seen that the series looks stationary; since fitting an ARIMA model requires stationarity, we confirm this formally.

  • the ADF test tells us whether the series is stationary or not.
adf.test(obj.ts, alternative = "stationary")
Warning in adf.test(obj.ts, alternative = "stationary"): p-value smaller than
printed p-value

    Augmented Dickey-Fuller Test

data:  obj.ts
Dickey-Fuller = -14.33, Lag order = 5, p-value = 0.01
alternative hypothesis: stationary

We see that it is stationary.

We fit an ARIMA model using auto.arima():

fit <- auto.arima(deseasonal_cnt, seasonal = FALSE)
fit
Series: deseasonal_cnt 
ARIMA(1,1,1) 

Coefficients:
         ar1      ma1
      0.3684  -0.9799
s.e.  0.0668   0.0175

sigma^2 = 0.7702:  log likelihood = -274.66
AIC=555.32   AICc=555.44   BIC=565.41
tsdisplay(residuals(fit), main = "(1,1,1) Model Residuals")

As we can see, there is still a significant spike in the residual autocorrelations at lag 23, which suggests setting p or q to 23.

fit2 <- arima(deseasonal_cnt, order = c(1, 1, 23))

fit2

Call:
arima(x = deseasonal_cnt, order = c(1, 1, 23))

Coefficients:
          ar1     ma1      ma2      ma3      ma4     ma5      ma6      ma7
      -0.7624  0.1627  -0.6937  -0.1923  -0.0753  0.0409  -0.1322  -0.0993
s.e.   0.2475  0.2563   0.1611   0.1147   0.0880  0.0930   0.1067   0.1014
         ma8     ma9    ma10     ma11     ma12     ma13    ma14    ma15    ma16
      0.0485  0.2009  0.0295  -0.2316  -0.1224  -0.1785  0.0240  0.3970  0.1431
s.e.  0.0936  0.0982  0.0963   0.1006   0.0939   0.0922  0.0978  0.1027  0.1168
         ma17     ma18    ma19    ma20    ma21     ma22     ma23
      -0.1927  -0.2143  0.0691  0.1088  0.1423  -0.0907  -0.0931
s.e.   0.0902   0.1099  0.0969  0.1216  0.1053   0.1051   0.1117

sigma^2 estimated as 0.6437:  log likelihood = -261.09,  aic = 572.19
tsdisplay(residuals(fit2), main = "Seasonal Model Residuals")

We forecast for the next 30 months:

fcast <- forecast(fit2, h = 30)
plot(fcast)

12.15 Other packages

  • base R: time series class named ts; rather restrictive, with a limited set of associated functions

  • zoo, xts packages: to represent time series (a small sketch follows)

    • special structures for time series

    • many useful functions
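For instance, a small sketch of an xts object indexed by actual dates (assuming the xts package is installed; the dates and values are made up):

library(xts)

# A daily series indexed by dates rather than by a fixed frequency
dates <- seq(as.Date("2024-01-01"), by = "day", length.out = 10)
values <- rnorm(10)
x <- xts(values, order.by = dates)

head(x)
plot(x)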

12.16 References

Information and images obtained from:

Forecasting: Principles and Practice. Rob J Hyndman and George Athanasopoulos. Monash University, Australia

Introduction to Forecasting with ARIMA in R

AEMET