
# Choosing Seasonal Autocovariance Structures: PARMA or SARMA?

Authored by: Robert Lund

# Economic Time Series

Print publication date: March 2012
Online publication date: March 2012

Print ISBN: 9781439846575
eBook ISBN: 9781439846582

DOI: 10.1201/b11823-5

#### Abstract

Many time series have some type of seasonality (periodicity) in their first two moments. While seasonality in the first moment is typically removed by differencing at the seasonal lag (or estimated and removed by subtracting periodic sample means), selecting the appropriate type of seasonal model for the autocovariances is a more involved issue. Here, we present a simple test to assess whether a periodic autoregressive moving-average (PARMA) model is preferred to a seasonal autoregressive moving-average (SARMA) model. The test can be used to check for periodic variances (periodic heteroskedasticity) or periodic autocorrelations. The methods are developed in the frequency domain, where the discrete Fourier transform (DFT) is used to check estimates of the series’ frequency increments for periodic correlation. Asymptotic results are proven and the methods are illustrated with applications to two economic series.

#### 3.1  Introduction


Autoregressive moving-average (ARMA) models are stationary short-memory time series modeling staples (Brockwell and Davis 1991; Shumway and Stoffer 2006). A zero-mean ARMA(p, q) series $\{X_t\}_{t=-\infty}^{\infty}$ satisfies

3.1.1 $X_t - \phi_1 X_{t-1} - \cdots - \phi_p X_{t-p} = Z_t + \theta_1 Z_{t-1} + \cdots + \theta_q Z_{t-q},$

where $\{Z_t\}$ is zero-mean white noise with $\mathrm{Var}(Z_t) \equiv \sigma^2$. Here, p and q are the autoregressive and moving-average orders, respectively; $\phi_1, \ldots, \phi_p$ are the autoregressive coefficients; and $\theta_1, \ldots, \theta_q$ are the moving-average coefficients. We assume that the autoregressive and moving-average polynomials, $\Phi(z) = 1 - \phi_1 z - \cdots - \phi_p z^p$ and $\Theta(z) = 1 + \theta_1 z + \cdots + \theta_q z^q$, have no common roots and that the model is causal and invertible. One can write causal ARMA solutions in the form

3.1.2 $X_t = \sum_{k=0}^{\infty} \psi_k Z_{t-k},$

for some weight sequence $\{\psi_k\}_{k=0}^{\infty}$ satisfying $\psi_0 = 1$ and $\sum_{k=0}^{\infty} |\psi_k| < \infty$. Chapter 3 of Brockwell and Davis (1991) discusses these and other ARMA properties.
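The ψ-weights in Equation 3.1.2 can be computed recursively from the model coefficients: $\psi_0 = 1$ and $\psi_k = \theta_k + \sum_{j=1}^{\min(k,p)} \phi_j \psi_{k-j}$, with $\theta_k = 0$ for k > q. A minimal Python sketch (the function name `arma_psi_weights` is our own, not from the chapter):

```python
def arma_psi_weights(phi, theta, n):
    """First n psi-weights of a causal ARMA model.

    phi   : AR coefficients phi_1, ..., phi_p
    theta : MA coefficients theta_1, ..., theta_q
    Uses psi_0 = 1 and psi_k = theta_k + sum_j phi_j * psi_{k-j}.
    """
    psi = [1.0]
    for k in range(1, n):
        val = theta[k - 1] if k <= len(theta) else 0.0
        val += sum(phi[j - 1] * psi[k - j]
                   for j in range(1, min(k, len(phi)) + 1))
        psi.append(val)
    return psi

# ARMA(1,1) with phi_1 = 0.5 and theta_1 = 0.4:
# psi_1 = 0.4 + 0.5 = 0.9, and each later weight is half the previous one.
weights = arma_psi_weights([0.5], [0.4], 4)
```

For causal models these weights decay geometrically fast, which is why the absolute summability in Equation 3.1.2 holds.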

All nonunit-root ARMA series $\{X_t\}_{t=-\infty}^{\infty}$ are stationary in that $\mathrm{Cov}(X_t, X_{t+h})$ depends only on h. The purpose of this chapter is to statistically investigate whether (or not) a stationary model is appropriate for the series in the first place. The alternative here is that the autocovariance structure of the series is periodic with period P:

3.1.3 $\mathrm{Cov}(X_{n+P}, X_{m+P}) = \mathrm{Cov}(X_n, X_m),$

for all integers n and m. Before we test this hypothesis, we need to review two variants of ARMA series, SARMA and PARMA, which have been used to describe seasonal time series.

#### 3.2  SARMA and PARMA Models

Our goal is to test whether or not a zero-mean series has a periodic autocovariance structure with known period P. We assume that $\{X_t\}$ has short memory (this means that $\sum_{h=-\infty}^{\infty} |\mathrm{Cov}(X_t, X_{t+h})| < \infty$ for all t). Constructing zero-mean short-memory data may require some preprocessing of the original series, such as subtraction of a periodic mean or trend, and/or differencing. Section 3.4 rehashes this point with applications.

SARMA models, which stem from Harrison (1965) and Chatfield and Prothero (1973) and were popularized in Chapter 9 of Box et al. (1994), drive a stationary ARMA equation at lags that are multiples of the period P in an attempt to accommodate seasonality. Specifically, a SARMA(p, q) series $\{X_t\}_{t=-\infty}^{\infty}$ satisfies

3.2.1 $X_t - \phi_1 X_{t-P} - \cdots - \phi_p X_{t-pP} = Z_t + \theta_1 Z_{t-P} + \cdots + \theta_q Z_{t-qP}.$

Written in terms of the backshift operator B, Equation 3.2.1 is $\Phi(B^P)X_t = \Theta(B^P)Z_t$.

Contrary to the seasonality implied in its acronym, SARMA sequences are actually stationary. Specifically, a SARMA(p, q) model is an ARMA(pP, qP) model with most coefficients equal to zero. Hence, by ARMA theory, SARMA models must have unique stationary solutions. Another property of solutions to Equation 3.2.1 is that their autocovariances are zero unless the lag h is a whole multiple of the period P. This is easily inferred from the SARMA expansion (assuming causality)

3.2.2 $X_t = \sum_{k=0}^{\infty} \psi_k Z_{t-kP},$

and the fact that $\{Z_t\}$ is white noise. As such structure is not frequently encountered in practice, many authors have modified the assumption that $\{Z_t\}$ is white noise to that of $\{Z_t\}$ satisfying an additional ARMA(p*, q*) recursion of the form

3.2.3 $\phi^*(B) Z_t = \theta^*(B) \epsilon_t,$

where now $\{\epsilon_t\}$ is zero-mean white noise with variance $\sigma_\epsilon^2$. Combining Equations 3.2.1 and 3.2.3 produces a single difference equation of ARMA form:

3.2.4 $\Phi(B^P)\phi^*(B) X_t = \Theta(B^P)\theta^*(B) \epsilon_t.$

Assuming no common roots in any of the AR and MA polynomials involved, one sees that $\{X_t\}$ is actually an ARMA series with autoregressive order pP + p* and moving-average order qP + q*. While solutions to Equation 3.2.4 can have nonzero covariances at each and every lag, they are still stationary in structure. It follows that true periodic covariances satisfying Equation 3.1.3 cannot be produced from the SARMA model class.
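Because the AR and MA polynomials in Equation 3.2.4 are products, their coefficients are obtained by polynomial multiplication, i.e., coefficient convolution. A short numpy sketch using the polynomials from the simulation example of Section 3.4 ($\Phi(B^{12}) = 1 - \tfrac{1}{2}B^{12}$ and $\phi^*(B) = 1 - \tfrac{1}{2}B + \tfrac{1}{18}B^2$; the helper `expand_seasonal` is our own):

```python
import numpy as np

def expand_seasonal(poly, P):
    """Interleave P-1 zeros so a polynomial in B^P becomes one in B."""
    out = np.zeros((len(poly) - 1) * P + 1)
    out[::P] = poly
    return out

Phi = [1.0, -0.5]              # Phi(B^P) = 1 - (1/2) B^P, with P = 12
phi_star = [1.0, -0.5, 1 / 18] # phi*(B) = 1 - (1/2) B + (1/18) B^2

# Full AR polynomial of Equation 3.2.4: Phi(B^12) * phi*(B),
# an AR polynomial of order pP + p* = 12 + 2 = 14.
full_ar = np.convolve(expand_seasonal(Phi, 12), phi_star)
```

The nonzero coefficients sit only near powers 0-2 and 12-14, which is the "sparse ARMA" structure the text describes.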

A model type that does have periodic covariances is the PARMA class. PARMA models were introduced in Hannan (1955) to describe the seasonality in rainfall data and were further developed in Jones and Brelsford (1967), Pagano (1978), and Troutman (1979). Parzen and Pagano (1979) were the first to pursue PARMA economic applications. A series $\{X_t\}_{t=-\infty}^{\infty}$ is called a PARMA(p, q) series if it obeys the periodic linear difference equation

3.2.5 $X_{nP+\nu} - \sum_{k=1}^{p} \phi_k(\nu) X_{nP+\nu-k} = Z_{nP+\nu} + \sum_{k=1}^{q} \theta_k(\nu) Z_{nP+\nu-k},$

where $\{Z_t\}$ is zero-mean periodic white noise. The notation adopted above emphasizes periodicity in that $X_{nP+\nu}$ denotes the observation from season ν of the nth cycle. The seasonal index ν runs from season 1 to season P. Periodic white noise refers to white noise with periodic variances, say $\mathrm{Var}(Z_{nP+\nu}) = \sigma^2(\nu)$. The PARMA equation is simply an ARMA equation with periodically varying parameters

$X_t - \sum_{k=1}^{p} \phi_k(t) X_{t-k} = Z_t + \sum_{k=1}^{q} \theta_k(t) Z_{t-k},$

where $\phi_k(\cdot)$, $1 \le k \le p$, and $\theta_k(\cdot)$, $1 \le k \le q$, are periodic sequences with period P.

Lund and Basawa (1999) show that solutions to Equation 3.2.5 are truly periodic in that they obey Equation 3.1.3 and have short memory whenever the model does not have a unit root. Notions of unit roots and causality in PARMA models are quantified through the P-variate ARMA representations of Equation 3.2.5 and are discussed in Vecchia (1985) and Lund and Basawa (1999). When the PARMA model is causal, one can express its solution in the form

3.2.6 $X_{nP+\nu} = \sum_{k=0}^{\infty} \psi_k(\nu) Z_{nP+\nu-k},$

where $\psi_0(\nu) \equiv 1$ and $\sum_{k=0}^{\infty} |\psi_k(\nu)| < \infty$ for every season ν. Notice that $X_t$ in Equation 3.2.6 depends on all prior noises $Z_t, Z_{t-1}, \ldots$ and not only on $Z_t, Z_{t-P}, Z_{t-2P}, \ldots$ One should compare Equation 3.2.2 with Equation 3.2.6.
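Equation 3.2.5 is straightforward to simulate by direct recursion once the periodic coefficients are tabulated by season. A minimal PARMA(1,1) sketch (the function, its burn-in convention, and the coefficient values below are ours, for illustration only):

```python
import numpy as np

def simulate_parma11(phi, theta, sigma, n, burn=200, seed=0):
    """Simulate a causal PARMA(1,1) series of length n.

    phi, theta, sigma : length-P sequences giving phi_1(nu),
    theta_1(nu), and the noise standard deviation sigma(nu)
    for seasons nu = 1..P (stored 0-indexed).
    """
    P = len(phi)
    rng = np.random.default_rng(seed)
    total = n + burn
    noise = rng.standard_normal(total)
    x = np.zeros(total)
    z_prev = 0.0
    for t in range(total):
        nu = t % P                       # season of time t
        z = sigma[nu] * noise[t]         # periodic white noise
        x[t] = (phi[nu] * (x[t - 1] if t > 0 else 0.0)
                + z + theta[nu] * z_prev)
        z_prev = z
    return x[burn:]                      # discard burn-in

x = simulate_parma11(phi=[0.5, -0.3, 0.2], theta=[0.4, 0.1, -0.2],
                     sigma=[1.0, 2.0, 0.5], n=300)
```

The burn-in lets the recursion forget its zero initial conditions, so the returned stretch is approximately a draw from the periodically stationary solution.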

Viewed collectively, SARMA and PARMA models are two ARMA-like models that are used to describe time series with periodic characteristics. Holan et al. (2010) discuss SARMA and PARMA models in more detail along with other ARMA model modifications. Often, the application or physical reasoning suggests which model type the practitioner should adopt. For example, in forecasting daily high temperatures (take P = 365) beyond a simple seasonal mean, physical reasoning would prefer the PARMA paradigm. In fact, if one tried to forecast, say, an April 5 temperature, then the most important predictors would be the most recent April 4 temperature (yesterday), the April 3 temperature (2 days ago), etc., and not temperatures from April 5 of last year, April 5 of two years ago, etc. Indeed, daily weather forecasts have little predictive power beyond about 10 days. But what about a situation in economics, say, monthly housing starts (P = 12), where it is generally accepted that this November's observation is more heavily correlated with last November's observation than with the latest October and September observations? This seems to suggest the SARMA paradigm. In the next section, a test for such situations is constructed based on the work in Lund et al. (1995). Section 3.4 applies the test to several simulated and real economic series.

#### 3.3  Average Squared Coherences

Suppose that $X_0, \ldots, X_{N-1}$ is sampled from a zero-mean short-memory series with possibly periodic covariances with known period P. To avoid trite work, N is assumed to be a whole multiple of P; i.e., d = N/P is a whole number. Our objective is to test whether or not the series is best modeled by a SARMA model (our null hypothesis) or whether a PARMA structure is needed (our alternative hypothesis). The test statistic we devise is based on the DFT of the series, which is defined by

$I_j = \frac{1}{\sqrt{2\pi N}} \sum_{t=0}^{N-1} X_t e^{-it\lambda_j},$

where $\lambda_j = 2\pi j/N$ is the jth Fourier frequency and $i = \sqrt{-1}$.
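With this normalization, the DFT ordinates can be obtained from a standard FFT by a single rescaling, $I_j = \mathrm{fft}(X)_j / \sqrt{2\pi N}$. A quick numpy check against the defining sum:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal(64)
N = len(X)

# I_j = (2*pi*N)^(-1/2) * sum_t X_t exp(-i*t*lambda_j), lambda_j = 2*pi*j/N.
I = np.fft.fft(X) / np.sqrt(2 * np.pi * N)

# Direct evaluation of the defining sum at j = 5 for verification.
j = 5
lam = 2 * np.pi * j / N
direct = np.sum(X * np.exp(-1j * lam * np.arange(N))) / np.sqrt(2 * np.pi * N)
assert np.allclose(I[j], direct)
```

The scaling constant is immaterial for the coherence statistics below, since sample correlations are scale invariant.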

Our test is based on the spectral representation for stationary and periodic series. Specifically, all zero-mean stationary and periodic series can be written with the harmonic (spectral) representation

$X_t = \int_{[0, 2\pi)} e^{it\lambda}\, dZ(\lambda)$

for some complex-valued process $\{Z(\lambda), 0 \le \lambda < 2\pi\}$ (Loève 1978). The frequency increments of $\{X_t\}$ are estimated by the DFT: $\widehat{dZ(\lambda_j)} = \sqrt{2\pi/N}\, I_j$.

It is well known that a series is stationary if and only if its frequency increments are uncorrelated (see Chapter 4 of Brockwell and Davis [1991] or Shumway and Stoffer [2006]). It is also known that a series has the periodic structure in Equation 3.1.3 if and only if its frequency increments are uncorrelated except “every periodic now and again.” Specifically, series obeying Equation 3.1.3 have $E[dZ(\lambda_1)\overline{dZ(\lambda_2)}] = 0$ unless $\lambda_2 = \lambda_1 + 2\pi k/P \pmod{2\pi}$ for some k = 0, ±1, …, ±(P − 1) (Hurd 1991). Here, an overline denotes complex conjugation.

Phrasing the above concept in another way, if $\{X_t\}$ is SARMA, then its frequency increments should always be uncorrelated; however, if $\{X_t\}$ is in truth PARMA, then the frequency increments should be uncorrelated except for every periodic now and again. It follows that to test for a PARMA presence, one should check $\widehat{dZ(\lambda_j)}$ for nonzero correlations that occur every periodic now and again.

We now make this idea precise. Define the squared sample correlation of the estimated frequency increments as

3.3.1 $\rho_h^2(j) = \frac{\left|\sum_{m=0}^{M-1} I_{j+m}\overline{I_{j+h+m}}\right|^2}{\sum_{m=0}^{M-1} |I_{j+m}|^2 \sum_{m=0}^{M-1} |I_{j+h+m}|^2} = \mathrm{Corr}\left\{(I_j, \ldots, I_{j+M-1})', (I_{j+h}, \ldots, I_{j+h+M-1})'\right\}^2,$

where M ≥ 1 is a smoothing parameter to be selected later. We take $I_j = I_{j-N}$ should an index j ≥ N be encountered. The above discussion translates into the following: if $\{X_t\}$ is SARMA, then $\rho_h^2(j)$ should be statistically small (approximately zero) for all h ≥ 1 and j; however, if $\{X_t\}$ is PARMA, then $\rho_h^2(j)$ may be significantly positive for some h that are whole multiples of d = N/P. Again, d is the number of complete cycles of observed data.
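Equation 3.3.1 translates directly into code once the wraparound convention $I_j = I_{j-N}$ is handled with modular indexing. A sketch (the helper name is ours), assuming `I` holds the length-N DFT array:

```python
import numpy as np

def sq_coherence(I, j, h, M):
    """Squared sample coherence rho_h^2(j) of Equation 3.3.1.

    Indices wrap modulo N, implementing I_j = I_{j-N} for j >= N.
    """
    N = len(I)
    idx1 = (j + np.arange(M)) % N        # I_j, ..., I_{j+M-1}
    idx2 = (j + h + np.arange(M)) % N    # I_{j+h}, ..., I_{j+h+M-1}
    num = np.abs(np.sum(I[idx1] * np.conj(I[idx2]))) ** 2
    den = np.sum(np.abs(I[idx1]) ** 2) * np.sum(np.abs(I[idx2]) ** 2)
    return num / den

rng = np.random.default_rng(2)
I = np.fft.fft(rng.standard_normal(128))
r = sq_coherence(I, j=3, h=10, M=4)
```

By the Cauchy-Schwarz inequality the statistic always lies in [0, 1], and h = 0 gives exactly 1.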

There are redundancies in the values of $\rho_h^2(j)$. In fact, $N^2$ values of $\rho_h^2(j)$ were made from N series observations. To summarize the squared coherences, we define the average squared coherence as

3.3.2 $\bar{\rho}_h^2 = \frac{1}{N} \sum_{j=0}^{N-1} \rho_h^2(j).$

The PARMA diagnostic test that we propose is now simply stated: plot the values of $\bar{\rho}_h^2$ for varying h. If one encounters large average squared coherences at any of the hs that are multiples of N/P, then the SARMA null hypothesis should be rejected. The next result quantifies how large an average squared coherence should be to be declared statistically large.
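The whole diagnostic can be sketched in a few lines: compute $\bar{\rho}_h^2$ via Equations 3.3.1 and 3.3.2, then compare it against a one-sided 99% normal threshold of the type implied by the result below, $1/M + z_{0.99}\,\eta_M/\sqrt{N}$, with $\eta_M$ taken from Table 3.1 (the code organization is ours):

```python
import numpy as np

def avg_sq_coherence(X, h, M):
    """Average squared coherence rho-bar_h^2 of Equation 3.3.2."""
    N = len(X)
    I = np.fft.fft(X) / np.sqrt(2 * np.pi * N)  # DFT, Section 3.3 scaling
    total = 0.0
    for j in range(N):
        a = I[(j + np.arange(M)) % N]           # I_j, ..., I_{j+M-1}, wrapped
        b = I[(j + h + np.arange(M)) % N]       # I_{j+h}, ..., I_{j+h+M-1}
        num = np.abs(np.sum(a * np.conj(b))) ** 2
        total += num / (np.sum(np.abs(a) ** 2) * np.sum(np.abs(b) ** 2))
    return total / N

# One-sided 99% threshold: 1/M + z_{0.99} * eta_M / sqrt(N),
# with eta_M from Table 3.1 (M = 8 gives eta_M = 0.3574).
M, eta_M = 8, 0.3574
rng = np.random.default_rng(3)
X = rng.standard_normal(240)          # white noise: a (trivial) stationary null
N = len(X)
threshold = 1 / M + 2.326 * eta_M / np.sqrt(N)
rbar = avg_sq_coherence(X, h=20, M=M) # h = d = 20 when P = 12 and N = 240
```

For a stationary null series, exceedances of `threshold` at multiples of d should occur only about 1% of the time per lag inspected.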

Theorem 3.1 If $\{X_t\}$ is a causal SARMA series where the error components are independent and identically distributed with finite fourth moments, then the asymptotic normality

3.3.3 $\bar{\rho}_h^2 \sim \mathrm{AN}\!\left(\frac{1}{M}, \frac{\eta_M^2}{N}\right)$

holds as N → ∞, M → ∞, and M/N → 0. The value of $\eta_M$ depends solely on the smoothing parameter M and not on any properties of the time series model or the distribution of the time series errors.

Proof. We present an outline only because a full proof is technical, lengthy, and somewhat out of spirit with an expository book chapter. We build the result up, first establishing what happens in the case of Gaussian noise. Then, spectral density transfer results are used to handle the general null hypothesis SARMA case.

First, suppose that $\{X_t\}$ is Gaussian white noise. Then the DFT $\{I_j\}_{j=1}^{N/2-1}$ is an IID sequence of complex-valued Gaussian random variables (there is a qualifier to this below). In this case, $\rho_h^2(j)$ is known to have (exactly) the beta-type density

3.3.4 $P[\rho_h^2(j) > x] = (1 - x)^{M-1}, \quad 0 < x < 1,$

for each fixed h ≥ 1 and j (Enochson and Goodman 1965). Koopmans (1995, Chapter 8) lists the elementary properties of correlation statistics. Because each $I_j$ has the same complex Gaussian distribution and $I_j$ and $I_k$ are independent when j ≠ k, one sees that for a fixed h, $\{\rho_h^2(j)\}$ is a stationary sequence in j. In fact, if $j_1$ and $j_2$ are indices such that $|j_1 - j_2| \ge h + M$, then no DFT value used in calculating $\rho_h^2(j_1)$ in Equation 3.3.1 is also used in computing $\rho_h^2(j_2)$. It follows that for a fixed h, $\{\rho_h^2(j)\}$ is an (h + M)-dependent strictly stationary sequence. Hence, the asymptotic normality in Equation 3.3.3 follows from an application of the central limit theorem for K-dependent strictly stationary sequences (Theorem 6.4.2 in Brockwell and Davis 1991). Observe that the mean of the distribution in Equation 3.3.4 is $M^{-1}$.
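The mean $M^{-1}$ of the law in Equation 3.3.4 is easy to confirm by Monte Carlo: under the Gaussian white-noise null, the squared coherence is the squared (uncentered) correlation of two independent complex Gaussian M-vectors. A quick simulation sketch (variable names are ours):

```python
import numpy as np

rng = np.random.default_rng(4)
M, reps = 4, 20000

def complex_gauss(size):
    # Standard complex Gaussian: independent real and imaginary parts.
    return rng.standard_normal(size) + 1j * rng.standard_normal(size)

vals = np.empty(reps)
for r in range(reps):
    a, b = complex_gauss(M), complex_gauss(M)
    vals[r] = (np.abs(np.sum(a * np.conj(b))) ** 2
               / (np.sum(np.abs(a) ** 2) * np.sum(np.abs(b) ** 2)))

# Equation 3.3.4 implies E[rho_h^2(j)] = 1/M; here 1/M = 0.25,
# and the simulated mean should be close to that value.
print(vals.mean())
```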

Now suppose that $\{X_t\}$ is IID but not Gaussian and that $E[X_t^4] < \infty$. Under these conditions, the DFT is asymptotically complex Gaussian. To see this, use Proposition 10.3.2 in Brockwell and Davis (1991) or alternatively verify the classical Lindeberg conditions of asymptotic normality. Hence, in this case, the asymptotic normality quoted in Equation 3.3.3 is again seen to hold.

Finally, suppose that $\{X_t\}$ can be written as $X_t = \sum_{k=0}^{\infty} \psi_k Z_{t-k}$, where $\sum_{k=0}^{\infty} k|\psi_k| < \infty$ and $\{Z_t\}$ is IID with finite fourth moments. The summability $\sum_{k=0}^{\infty} k|\psi_k| < \infty$ holds under the causal SARMA null hypothesis. Equation 10.3.12 in Brockwell and Davis (1991) relates the DFT of $\{X_t\}$ and $\{Z_t\}$ via

3.3.5 $I_{X,j} = \psi(e^{-i\lambda_j}) I_{Z,j} + D(\lambda_j),$

where $\{D(\lambda_k)\}$ are error terms and the notation has used

$I_{X,k} = \frac{1}{\sqrt{2\pi N}}\sum_{t=0}^{N-1} X_t e^{-it\lambda_k}, \quad I_{Z,k} = \frac{1}{\sqrt{2\pi N}}\sum_{t=0}^{N-1} Z_t e^{-it\lambda_k},$

with $\psi(z) = \sum_{k=0}^{\infty} \psi_k z^k$. The error term in Equation 3.3.5 is known to be uniform in that

3.3.6 $\max_{0 \le j \le N-1} E\left[|D(\lambda_j)|^2\right] = O(N^{-1}).$

Under the null hypothesis of a causal SARMA structure, ψ(z) is continuous in z (in fact, it is infinitely differentiable) and is bounded away from zero inside and on the complex unit circle. Hence,

3.3.7 $\mathrm{Corr}\left\{(I_{X,j}, \ldots, I_{X,j+M-1})', (I_{X,j+h}, \ldots, I_{X,j+h+M-1})'\right\}^2 \approx \mathrm{Corr}\left\{(\psi(e^{-i\lambda_j}) I_{Z,j}, \ldots, \psi(e^{-i\lambda_{j+M-1}}) I_{Z,j+M-1})', (\psi(e^{-i\lambda_{j+h}}) I_{Z,j+h}, \ldots, \psi(e^{-i\lambda_{j+h+M-1}}) I_{Z,j+h+M-1})'\right\}^2 \approx \mathrm{Corr}\left\{(I_{Z,j}, \ldots, I_{Z,j+M-1})', (I_{Z,j+h}, \ldots, I_{Z,j+h+M-1})'\right\}^2.$

The last approximation above follows from the continuity of ψ(·) and the fact that as N → ∞, M/N → 0, meaning that the middle correlation in Equation 3.3.7 is essentially a squared sample correlation between aX and bY for some positive constants a and b and some random variables X and Y—and this of course equals the square of Corr(X, Y).

While the general idea is clear, there are issues that merit precise bookkeeping. First, the DFT $\{I_j\}$ over the full range j = 0, 1, …, N − 1 is not truly IID, even when the series is Gaussian noise. This is due to the conjugate symmetry $\overline{I}_{N-h} = I_h$, which implies that $\bar{\rho}_{N-h}^2 = \bar{\rho}_h^2$. Also, the DFT ordinates at j = 0 and j = N/2 do not behave as the other DFT ordinates (Brockwell and Davis 1991, Chapter 10). However, since M → ∞ as N → ∞, these two edge terms are asymptotically negligible in a squared correlation. The conjugate symmetry structure is easily dealt with in a sample average: the effective sample size is merely N/2. Also, the approximations made should be rigorously justified. For example, the error induced by the first approximation in Equation 3.3.7 can be bounded with the Cauchy-Schwarz inequality, the fact that ψ(z) is bounded away from zero inside and on the complex unit circle, and the bound in Equation 3.3.6. The error in the second approximation in Equation 3.3.7 involves the fact that the squared correlation is continuous in its arguments. Another issue requiring rigor lies with how non-Gaussian the DFT is for IID non-Gaussian series when $E[X_t^4] < \infty$.

Table 3.1 lists values of $\eta_M$ for varying M; these were obtained by simulating Gaussian white noise akin to Lund et al. (1995).

In spirit, Theorem 3.1 shows how to distinguish SARMA and PARMA dynamics: look for large values of $\bar{\rho}_h^2$ at hs that are multiples of d. While such a graphical procedure does not give a formal level-α test, one could attempt to construct a conservative test from a Bonferroni procedure involving $\bar{\rho}_h^2$ at all hs that are multiples of d. Another approach for constructing a definitive level-α test lies with deriving the joint asymptotic distribution of the $\bar{\rho}_h^2$ values at all hs that are whole multiples of d. Unfortunately, quantifying the asymptotic correlation between $\bar{\rho}_{h_1}^2$ and $\bar{\rho}_{h_2}^2$ when $h_1 \neq h_2$ does not appear easy at this time.

### Table 3.1   Simulated Values of $\eta_M$

| M  | $\eta_M$ |
|----|----------|
| 2  | 0.4310 |
| 4  | 0.4193 |
| 6  | 0.3971 |
| 8  | 0.3574 |
| 10 | 0.3369 |
| 12 | 0.3128 |
| 16 | 0.2728 |
| 20 | 0.2443 |
| 24 | 0.2276 |
| 32 | 0.1962 |

#### 3.4  Applications

We first investigate how the test of the last section works on simulated series where ultimate truth is known. Figure 3.1 shows a realization of length N = 240 of a Gaussian SARMA series with P = 12. The model is Equation 3.2.4, where the chosen polynomials are

$\Phi(B^P) = 1 - \tfrac{1}{2} B^P, \quad \Theta(B^P) = 1 + \tfrac{1}{4} B^P, \quad \phi^*(B) = 1 - \tfrac{1}{2} B + \tfrac{1}{18} B^2, \quad \theta^*(B) = 1 + \tfrac{3}{28} B - \tfrac{1}{28} B^2,$

and $\sigma^2 = 1$. Here, the roots of Φ are $2^{1/12}$ (all 12 complex values of $2^{1/12}$), the roots of Θ are $(-4)^{1/12}$ (again all 12 complex values), the roots of φ* are 3 and 6, and the roots of θ* are −4 and 7. The second graphic in Figure 3.1 shows an average squared coherence plot with M = 2 for these data. The dotted line is a pointwise 99% confidence threshold for the average squared coherences constructed under a null hypothesis of a stationary SARMA model. As no large $\bar{\rho}_h^2$ values are seen, one could have concluded that this series is SARMA without having known this a priori.

Figure 3.1   A SARMA realization and its averaged squared coherences. The dashed line in the bottom graphic is a pointwise 99% threshold for SARMA (stationary) dynamics.

Figure 3.2   A PARMA realization and its averaged squared coherences. The dashed line in the bottom graphic is a pointwise 99% threshold under SARMA (stationary) dynamics. The large exceedance at d = 40 suggests a PARMA model.

Figure 3.2 shows a realization of a Gaussian PARMA series of length N = 480 with P = 12 (d = 40), p = 1, q = 1, and model coefficients selected as

$\phi_1(\nu) = \frac{1}{2} + \frac{1}{4} \cos\!\left\{\frac{2\pi(\nu - 5)}{P}\right\}; \quad \theta_1(\nu) = \frac{1}{2} - \frac{1}{3} \cos\!\left\{\frac{2\pi(\nu - 7)}{P}\right\}.$

The white noise variances were chosen as

$\sigma^2(\nu) = 5 + 4 \cos\!\left\{\frac{2\pi(\nu + 8)}{P}\right\}.$

This model can be verified as causal. The averaged squared coherences in Figure 3.2 for M = 12 show a large value at h = 40, which is a multiple of the number of observed cycles of data. Clearly, the conclusion is that a PARMA model is appropriate for this series. One can ask why large averaged squared coherences did not appear at lags 80, 120, etc. When a series is PARMA, our results do not mandate that there should be large average squared coherences at all lags that are multiples of d; however, at least one multiple of d should have a large averaged squared coherence. A more detailed explanation lies with the location of the nonzero coefficients in the Fourier expansion of the PARMA model’s seasonal autocovariance structure, but we will not discuss this here.

Now consider a causal PAR(1) series with P = 8 and the parameters

$\phi_1(\nu) = \frac{1}{2} + \kappa \cos\!\left\{\frac{2\pi(\nu - 5)}{P}\right\}; \quad \sigma^2(\nu) = 5 + \Delta \cos\!\left(\frac{2\pi\nu}{P}\right).$

Unless κ = 0 and Δ = 0, the model has PARMA-type dynamics. To study the power of the above methods in detecting PARMA-type structure when d = 32, we simply compare the average squared coherence $\bar{\rho}_d^2$ against its asymptotic distribution in Theorem 3.1 at level 95% with M = 16. As mentioned above, a more detailed test would consider $\bar{\rho}_h^2$ at h = 2d, h = 3d, etc. Here, we have selected an M that is arguably too big. Our motivation for this choice lies with demonstrating that the methods still work well when M is not chosen optimally. Table 3.2 shows empirical powers of PARMA detection aggregated from 1000 independent simulations for each pair of κ and Δ values. For example, when κ = 0.2 and Δ = 3, the test rejects a SARMA structure (in favor of PARMA dynamics) 98% of the time. In general, the powers appear reasonable. When κ = 0 and Δ = 0, the test erroneously rejects SARMA dynamics 7.6% of the time, which is reasonably close to the true asymptotic 5% type I error. The powers increase with increasing Δ. This is because the seasonality in the series' variances increases with increasing Δ, which makes PARMA detection easier. Also, the powers increase with increasing κ. This is because the PAR(1) autocovariance structure varies more from one season to another when $\phi_1(\nu)$ varies more over ν. Lund and Basawa (1999) give expressions for the PAR(1) autocovariances in terms of the PAR(1) model parameters.

As the above methods have worked well on simulated series, we now move to data applications. In particular, we are curious whether or not the PARMA structure surfaces in economic series.

Figure 3.3 shows 14 years of monthly housing-start data for the United States during 1992–2005. Data for 2006 are also available but are not included because a major mean shift in the series is visible at this time, which is presumably due to the subprime mortgage crisis. The top panel of this graphic shows the raw data and the middle panel shows the data after removal of a linear trend and a seasonal mean (we refer to these as mean-adjusted residuals). One can, of course, use other tactics to preprocess the series to a zero-mean state. The lower panel plots the average squared coherences of the mean-adjusted residuals and reveals a large value at d = 14. In fact, the average squared coherence at lag h = 14 was 0.2208, which has a p-value of 0.000274 of exceeding $M^{-1}$, where we have used M = 8. For comparison's sake, $\bar{\rho}_d^2 = 0.3592$ when M = 4 (p-value of 0.000366) and $\bar{\rho}_d^2 = 0.1610$ when M = 12 (p-value of 0.000641). The average squared coherence in Figure 3.3 with M = 8 also slightly exceeds the 99% threshold at h = 37. It is not clear why this is the case, but it could be due to the almost periodic structure of our Julian calendar (our calendar is not periodic in a strict mathematical sense) and the varying number of trading days per month. We refer to Hurd (1991) for a related topic called almost periodic time series. Overall, it appears that this U.S. housing starts segment has periodic covariances and that a PARMA model is preferable to a SARMA model.

### Table 3.2   Empirical PAR(1) Detection Powers

|       | κ = 0.0 | κ = 0.1 | κ = 0.2 | κ = 0.3 | κ = 0.4 |
|-------|---------|---------|---------|---------|---------|
| Δ = 0 | 0.076 | 0.198 | 0.425 | 0.793 | 0.963 |
| Δ = 1 | 0.223 | 0.329 | 0.574 | 0.849 | 0.977 |
| Δ = 2 | 0.645 | 0.710 | 0.837 | 0.946 | 0.998 |
| Δ = 3 | 0.970 | 0.975 | 0.980 | 0.996 | 1.000 |
| Δ = 4 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 |

Figure 3.3   U.S. monthly housing starts, their detrended values, and average squared coherences. The exceedance of the 99% threshold at d = 14 gives evidence for PARMA dynamics.

One may ask in what manner the mean-adjusted residuals are periodic. We contend, akin to the conclusions in Trimbur and Bell (2008) for other related series, that our series has heteroskedastic variances. Parzen and Pagano (1979) suggest fitting ARMA models to seasonally standardized versions of the residuals in financial series. However, the U.S. housing start residuals are not well described by a SARMA model when scaled by a monthly standard deviation. To see this, the mean-adjusted residuals were divided by an estimate of their monthly standard deviation. Figure 3.4 shows the average squared coherences of this sequence. Observe that the large average squared coherence at h = 14 is still present. The implication is that the entire autocorrelation structure of this series is periodic and that a PARMA model is needed. As a tangential issue, one sees that our methods can be used to check for seasonal variances.

Figure 3.4   Average squared coherence of seasonally standardized monthly housing starts. The exceedance of the 99% threshold at d = 14 suggests periodic autocorrelations.

As a final example, we examine the d = 15 years of monthly U.S. inventories plotted in the top graph in Figure 3.5. These data are recorded from January 1992 to December 2006. Data for 2007–2009 were discarded due to an obvious changepoint presumably induced by the economic crisis. To obtain zero-mean short-memory data, we have fitted and removed a linear trend and monthly seasonal means. The middle graphic in Figure 3.5 shows residuals adjusted for this mean structure. The bottom graphic in Figure 3.5 plots the average squared coherences of the mean-adjusted residuals with M = 8. Apparent are large average squared coherences at lags 15 and 45, which are both multiples of d. In fact, the average squared coherence at h = 15 is 0.2272, which has a one-sided p-value of 0.0000623 of being larger than $M^{-1}$, and the average squared coherence at h = 45 is 0.2115 (a p-value of 0.000277 of being larger than $M^{-1}$). For comparison's sake, $\bar{\rho}_d^2 = 0.3681$ when M = 4 (p-value of 0.0000793) and $\bar{\rho}_d^2 = 0.1741$ when M = 12 (p-value of 0.0000491). Again, the series seems best described by PARMA dynamics. There is also a “small exceedance” of the 99% confidence threshold for the averaged squared coherence at h = 32. It is not known what to attribute this to, if anything, but it is conceivable that cycles exist in economic data other than that at the fundamental period P = 12. Figure 3.6 shows the average squared coherences of the mean-adjusted residuals after division by an estimated monthly standard deviation and reveals several exceedances of the 99% threshold. As with the housing start series above, it appears that the autocorrelation structure of this series is seasonal and that a PARMA model is truly needed.

Figure 3.5   U.S. monthly inventories, their detrended values, and average squared coherences. The exceedance of the 99% threshold at d = 15 gives evidence for PARMA dynamics.

#### 3.5  Summary and Comments

The methods presented here allow one to infer whether a SARMA or PARMA model is preferable for a zero-mean economic series. From a modeling standpoint, SARMA models are advantageous in that they have fewer parameters than PARMA models, especially for moderate to large P. While parsimonious PARMA models can be fitted (Lund et al. 2006), software for such a task has not been automated to date and the parsimonizing task remains somewhat laborious.

Figure 3.6   Average squared coherence of seasonally standardized monthly inventories. The exceedances of the 99% threshold at lags 15 and 45 suggest periodic autocorrelations.

With economic series, one might want to test seasonal autoregressive integrated moving-average (SARIMA) versus periodic autoregressive integrated moving-average (PARIMA) models; i.e., repeat the above analysis allowing for a random walk component in each model class. Unfortunately, the limit theory presented in Theorem 3.1 breaks down in such settings. This problem seems considerably more involved.

We have not delved into the selection of the smoothing parameter M. Indeed, there are smoothing reasons why one should not choose M too big or too small. While an optimal asymptotic quantification of M (in terms of N) likely depends on curvatures and other structure of the P-variate spectral densities of $\{X_t\}$, practical selection of M is not difficult and reasonable advice is akin to histogram bandwidth selection: try a few values of M and see which ones work best (visually). As demonstrated in the last section, conclusions generally do not depend greatly on M.

Finally, while the mean-adjusted monthly housing starts and inventory series examined here showed PARMA seasonality with P = 12, other series examined—such as monthly unemployment and manufacturing indices—were far more complex and could not be adequately described by either PARMA or SARMA series. In some cases, long-memory components appeared; other cases involved spectra where periodicities beyond the annual one were present.

#### Acknowledgments

The author acknowledges support from National Science Foundation Grant DMS-0905570. Comments from two referees and the editors helped improve this manuscript.

#### References

Box, G. E. P., Jenkins, G. M., and Reinsel, G. C. (1994). Time Series Analysis: Forecasting and Control. 3rd edition. Englewood Cliffs, New Jersey: Prentice Hall.
Brockwell, P. J. and Davis, R. A. (1991). Time Series: Theory and Methods. 2nd edition. New York: Springer-Verlag.
Chatfield, C. and Prothero, D. L. (1973). Box–Jenkins seasonal forecasting: Problems in a case-study. Journal of the Royal Statistical Society, Series A 136:295–336.
Enochson, L. D. and Goodman, N. R. (1965). Gaussian Approximation to the Distribution of the Sample Coherence. Online at http://handle.dtic.mil/100.2/AD620987.
Hannan, E. J. (1955). A test for singularities in Sydney rainfall. Australian Journal of Physics 8:289–297.
Harrison, P. J. (1965). Short-term sales forecasting. Journal of the Royal Statistical Society, Series C 14:102–139.
Holan, S., Lund, R. B., and Davis, G. (2010). The ARMA alphabet soup: A tour of ARMA model variants. Statistics Surveys 4:232–274.
Hurd, H. L. (1991). Correlation theory of almost periodically correlated processes. Journal of Multivariate Analysis 37:24–45.
Jones, R. H. and Brelsford, W. M. (1967). Time series with periodic structure. Biometrika 54:403–408.
Koopmans, L. H. (1995). The Spectral Analysis of Time Series. 2nd edition. New York: Academic Press.
Loève, M. (1978). Probability Theory II. New York: Springer.
Lund, R. B. and Basawa, I. V. (1999). Modeling for periodically correlated time series. In Asymptotics, Nonparametrics, and Time Series, ed. S. Ghosh, 37–62. New York: Marcel Dekker.
Lund, R. B., Hurd, H., Bloomfield, P., and Smith, R. L. (1995). Climatological time series with periodic correlation. Journal of Climate 11:2787–2809.
Lund, R. B., Shao, Q., and Basawa, I. V. (2006). Parsimonious periodic time series modeling. Australian & New Zealand Journal of Statistics 48:33–47.
Pagano, M. (1978). On periodic and multiple autoregressions. Annals of Statistics 6:1310–1317.
Parzen, E. and Pagano, M. (1979). An approach to modelling seasonally stationary time series. Journal of Econometrics 9:137–153.
Shumway, R. H. and Stoffer, D. S. (2006). Time Series Analysis and Its Applications: With R Examples. 2nd edition. New York: Springer.
Trimbur, T. M. and Bell, W. R. (2008). Seasonal heteroskedasticity in time series data: Modeling, estimation, and testing. Research Report #2008-11, U.S. Census Bureau.
Troutman, B. M. (1979). Some results in periodic autoregressions. Biometrika 66:219–228.
Vecchia, A. V. (1985). Periodic autoregressive moving-average (PARMA) modeling with applications to water resources. Water Resources Bulletin 21:721–730.
