Alternatives to the Cox Model

Authored by: Torben Martinussen, Limin Peng

Handbook of Survival Analysis

Print publication date:  July  2013
Online publication date:  April  2016

Print ISBN: 9781466555662
eBook ISBN: 9781466555679

DOI: 10.1201/b16248-5

Abstract

The Cox regression model is clearly the most used hazards model when analyzing survival data, as it provides a convenient way of summarizing covariate effects in terms of relative risks. The proportional hazards assumption may not always hold, however. A typical violation of the assumption is time-changing covariate effects, which are often encountered in biomedical applications. A typical example is a treatment effect that varies with time, such as treatment efficacy fading away over time due to, for example, tolerance developed by the patient. Under such circumstances, the Aalen additive hazards model (Aalen, 1980) may provide a useful alternative to the Cox model, as it easily incorporates time-varying covariate effects. For Aalen's model, the hazard function for an individual with a p-dimensional covariate vector X takes the form


3.1 $α ( t | X ) = X T β ( t ) .$

The time-varying regression function β(t) is a vector of locally integrable functions, and usually X will have 1 as its first component, allowing for an intercept in the model. Model (3.1) is very flexible and can be seen as a first-order Taylor series expansion of a general hazard function α(t|X) around the zero covariate:

$α ( t | X ) = α ( t , 0 ) + X T α ′ ( t , X * )$

with X* on the line segment between 0 and X. As we shall see below, it is the cumulative regression function

$B ( t ) = ∫ 0 t β ( s ) d s$

that is easy to estimate, and its corresponding estimator converges at the usual $n^{1/2}$-rate. Estimation and asymptotic properties were derived by Aalen (1980); further results for this model were given in Aalen (1989) and Aalen (1993), and Huffer and McKeague (1991) derived efficient estimators.

The full flexibility of the model (3.1) may not always be needed, and it is usually also of interest to try to simplify the model by investigating whether or not a specific covariate effect really varies with time. A very useful model in this respect is the semiparametric additive hazards model of McKeague and Sasieni (1994). It assumes that the hazard function has the form

3.2 $α ( t | X , Z ) = X T β ( t ) + Z T γ,$

where X and Z are p-dimensional and q-dimensional covariate vectors, respectively. Apart from its use for testing time-varying effects, this model is useful in its own right because it makes a sensible bias-variance trade-off possible: effects that are almost constant can be summarized as such, and effects that are strongly time-varying can be described as such. Note also that model (3.2) covers model (3.1), simply by leaving Z empty. On the other hand, in practice one may start out with model (3.1) and then test for a specific covariate whether its effect varies significantly with time. If this is not the case, one may then simplify to model (3.2) with Z being that specific covariate, and one can then proceed in that way with some or all of the remaining covariates; the starting point, though, will now be model (3.2). Below we show how to do such inference in model (3.2). Lin and Ying (1994) considered a special case of (3.2) where only the intercept term is allowed to depend on time; this model is often referred to as the Lin and Ying model. A nice discussion of pros and cons of the additive hazards model can be found in Aalen et al. (2008).

3.1.1  Model specification and inferential procedures

We present estimation and inferential procedures based on model (3.2) since, as mentioned above, model (3.1) is covered by model (3.2) simply by leaving Z empty; the results below therefore also cover model (3.1).

Let T be the survival time of interest with conditional hazard function α(t|X, Z) given the covariate vectors X and Z. In practice, T may be right-censored by U, so that we observe $(\tilde T = T \wedge U, \Delta = I(T \le U), X, Z)$. We assume that T and U are conditionally independent given $(X, Z)$. Let $(\tilde T_i, \Delta_i, X_i, Z_i)$, $i = 1, \ldots, n$, be n iid replicates from this generic model. Under the above independent right-censoring scheme, the ith counting process $N_i(t) = I(\tilde T_i \le t, \Delta_i = 1)$ has intensity

$λ i ( t ) = Y i ( t ) [ X i T β ( t ) + Z i T γ ] ,$

where $Y_i(t) = I(t \le \tilde T_i)$ is the at-risk indicator. We assume that all counting processes are observed in the time interval [0, τ], where τ < ∞. Each counting process has compensator $\Lambda_i(t) = \int_0^t \lambda_i(s)\,ds$ such that $M_i(t) = N_i(t) - \Lambda_i(t)$ is a martingale. Define the n-dimensional counting process $N = (N_1, \ldots, N_n)^T$ and the n-dimensional martingale $M = (M_1, \ldots, M_n)^T$. Let also $X = (Y_1 X_1, \ldots, Y_n X_n)^T$ and $Z = (Y_1 Z_1, \ldots, Y_n Z_n)^T$. The results presented below hold under some regularity conditions; see Martinussen and Scheike (2006). Writing the model in incremental form

$d N ( t ) = X ( t ) d B ( t ) + Z ( t ) γ d t + d M ( t ) ,$

and since E{dM(t)} = 0, this suggests the following (unweighted) least squares estimators of $\{B(t) = \int_0^t \beta(s)\,ds,\ \gamma\}$ (McKeague and Sasieni, 1994):

3.3 $\hat\gamma = \Bigl( \int_0^\tau Z^T H Z \, dt \Bigr)^{-1} \int_0^\tau Z^T H \, dN(t),$
3.4 $\hat B(t) = \int_0^t X^- \, dN(s) - \int_0^t X^- Z \, ds \, \hat\gamma,$

where $X^-$ denotes the generalized inverse $(X^T X)^{-1} X^T$ and $H = I - X X^-$, assuming here that the required inverses exist. Considering the model with only nonparametric terms, model (3.1), means that Z is empty, and it is readily seen from the latter display that the estimator of B(t) in that case is given by

$\hat B(t) = \int_0^t X^- \, dN(s),$

which is the estimator initially proposed by Aalen (1980). Weighted estimators, and in theory efficient estimators, exist. However, they depend on the unknown parameters, and therefore, to calculate them, some kind of smoothing is needed, which may compromise efficiency in practice; see McKeague and Sasieni (1994) for more on the efficient estimators. Here we deal only with the unweighted estimators that can be calculated without any use of smoothing. It is quite obvious that these simple and direct estimators have sound properties, as the following arguments show. Considering $\hat\gamma$, the counting process integral can be written as

$\int_0^\tau Z^T H \, dN(t) = \int_0^\tau Z^T H \{X \, dB(t) + Z\gamma \, dt\} + \int_0^\tau Z^T H \, dM(t) = \int_0^\tau Z^T H Z \, dt \, \gamma + \int_0^\tau Z^T H \, dM(t)$

since HX = 0, and therefore $\hat\gamma$ is an essentially unbiased estimator of γ since the martingale term has zero mean. Similar arguments concerning $\hat B(t)$ are readily given. The limit distributions of the estimators are

$n^{1/2}\{\hat\gamma - \gamma\} = C^{-1}(\tau)\, n^{-1/2} \int_0^\tau Z^T H \, dM(t), \qquad n^{1/2}\{\hat B(t) - B(t)\} = n^{1/2} \int_0^t X^- \, dM(s) - P(t)\, n^{1/2}\{\hat\gamma - \gamma\},$

where

$C(t) = n^{-1} \int_0^t Z^T H Z \, ds, \qquad P(t) = \int_0^t X^- Z \, ds.$

These limit distributions may be written as sums of essentially iid terms:

$n 1 / 2 { γ ^ − γ } = n − 1 / 2 ∑ i = 1 n ϵ i γ + o P ( 1 ) , n 1 / 2 { B ^ ( t ) − B ( t ) } = n − 1 / 2 ∑ i = 1 n ϵ i B ( t ) + o P ( 1 ) ,$

where

$\epsilon_i^\gamma = c^{-1}(\tau) \int_0^\tau \{Z_i - (z^T x)(x^T x)^{-1} X_i\} \, dM_i(t), \qquad \epsilon_i^B(t) = \int_0^t (x^T x)^{-1} X_i \, dM_i(t) - p(t)\, \epsilon_i^\gamma,$

where c(t) and p(t) denote the limits in probability of C(t) and P(t), respectively. Also, $x^T x$ is used as notation for the limit in probability of $n^{-1} X^T X$, and similarly with $z^T x$. The limit distributions may be simulated similarly to what Lin et al. (1993) suggested in a Cox-model setting:

$n 1 / 2 { γ ^ − γ } ∼ n − 1 / 2 ∑ i = 1 n ϵ ^ i γ G i , n 1 / 2 { B ^ ( t ) − B ( t ) } ∼ n − 1 / 2 ∑ i = 1 n ϵ ^ i B ( t ) G i ,$

where we let ∼ indicate that two quantities have the same limit distribution. In the latter display, $G_1, \ldots, G_n$ are independent standard normals, and $\hat\epsilon_i^\gamma$ is obtained from $\epsilon_i^\gamma$ by replacing deterministic quantities with their empirical counterparts and by replacing $M_i(t)$ with $\hat M_i(t)$, $i = 1, \ldots, n$; similarly with $\hat\epsilon_i^B(t)$. The result is that, conditional on the data,

$\Bigl( n^{-1/2} \sum_{i=1}^n \hat\epsilon_i^\gamma G_i, \; n^{-1/2} \sum_{i=1}^n \hat\epsilon_i^B(t) G_i \Bigr)$

will have the same limit distribution as

$\bigl( n^{1/2}\{\hat\gamma - \gamma\}, \; n^{1/2}\{\hat B(t) - B(t)\} \bigr).$
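To make the estimators concrete, here is a minimal sketch (the simulation setup and all variable names are illustrative choices of ours, not from the literature) of Aalen's least-squares estimator in the purely nonparametric model (3.1): at each event time, $\hat B$ is incremented by $(X^T X)^{-1} X_i$, with the design matrix restricted to the subjects still at risk.

```python
import numpy as np

# Sketch of Aalen's unweighted least-squares estimator for model (3.1):
# B-hat(t) jumps by (X'X)^{-1} X_i at each event time, where X holds the
# covariate rows of the subjects still at risk.  Simulated data throughout.
rng = np.random.default_rng(0)
n = 2000
X = np.column_stack([np.ones(n), rng.integers(0, 2, n)])  # intercept + binary covariate
beta = np.array([0.5, 1.0])                # true (constant) beta(t)
T = rng.exponential(1.0 / (X @ beta))      # event times, hazard alpha(t|X) = X' beta
C = rng.uniform(0.0, 2.0, n)               # independent censoring times
Tt = np.minimum(T, C)                      # observed time T-tilde
delta = (T <= C).astype(int)               # event indicator Delta

tau = 1.0                                  # estimate on [0, tau]
order = np.argsort(Tt)
at_risk = np.ones(n, dtype=bool)
Bhat = np.zeros(2)
times, Bpath = [], []
for i in order:
    if Tt[i] > tau:
        break
    if delta[i]:                           # jump of N at an event time
        Xr = X[at_risk]                    # at-risk design matrix
        Bhat = Bhat + np.linalg.solve(Xr.T @ Xr, X[i])
        times.append(Tt[i])
        Bpath.append(Bhat.copy())
    at_risk[i] = False                     # subject leaves the risk set
```

Since β(t) is constant here, $\hat B(t)$ should track βt, so $\hat B(0.5)$ should be close to (0.25, 0.5) up to sampling error.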

The hypothesis of time-invariance

$H 0 : β p ( t ) = β p ,$

focusing without loss of generality on the pth regression coefficient, may be reformulated in terms of the cumulative regression function $B p ( t ) = ∫ 0 t β p ( s ) d s$

as

$H_0: B_p(t) = \beta_p \cdot t.$

Martinussen and Scheike (2006) studied the test process

3.5 $V_n(t) = n^{1/2} \Bigl( \hat B_p(t) - \hat B_p(\tau) \frac{t}{\tau} \Bigr)$

that is easy to compute, and considered test statistics such as

$sup ⁡ t ≤ τ | V n ( t ) | .$

Under $H_0$, we have

$V_n(t) = n^{1/2} \Bigl[ \{\hat B_p(t) - B_p(t)\} - \{\hat B_p(\tau) - B_p(\tau)\} \frac{t}{\tau} \Bigr].$

Clearly, the limit distribution of $V_n(t)$ is not a martingale due to the term $\hat B_p(\tau)$, but one may use the above resampling technique of Lin et al. (1993) to approximate its limit distribution. Specifically, the limit distribution of $V_n(t)$ may be approximated by

$V ^ n ( t ) = n − 1 / 2 ∑ i = 1 n [ { ϵ ^ i B ( t ) } p − { ϵ ^ i B ( τ ) } p t τ ] G i$

where $\{v\}_k$ denotes the kth element of a given vector v and where we are fixing the data. The above limit distributions and the conditional multiplier approach may also be used to test other hypotheses, such as that of no overall effect of a given covariate. Such estimation and inferential procedures are implemented in the contributed R package timereg; for examples of its use, see Martinussen and Scheike (2006).
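The resampling step itself is simple to code. The sketch below is a toy illustration of the multiplier method, with synthetic influence curves standing in for the $\hat\epsilon_i^B(t)$ that a real fit (e.g., via timereg) would produce; it demonstrates only the mechanics of the supremum statistic and its resampling-based p-value.

```python
import numpy as np

# Toy illustration of the conditional multiplier (resampling) supremum test:
# per-subject influence curves eps_B[i, :] on a time grid are synthetic
# stand-ins for estimated influence functions from a real additive-hazards fit.
rng = np.random.default_rng(2)
n, m = 300, 50
tgrid = np.linspace(0.02, 1.0, m)

# toy influence curves: scaled random walks, one row per subject
eps_B = rng.standard_normal((n, m)).cumsum(axis=1) / np.sqrt(m)

V_obs = eps_B.sum(axis=0) / np.sqrt(n)       # plays the role of V_n(t)
stat = np.abs(V_obs).max()                   # sup_t |V_n(t)|

B = 1000
sup_star = np.empty(B)
for b in range(B):
    G = rng.standard_normal(n)               # multipliers G_1, ..., G_n
    V_star = (G[:, None] * eps_B).sum(axis=0) / np.sqrt(n)
    sup_star[b] = np.abs(V_star).max()

pval = (sup_star >= stat).mean()             # resampling-based p-value
```

Conditional on the data, each resampled process is Gaussian with the empirical covariance of the influence curves, which is exactly the approximation used in the text.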

3.1.2  Goodness-of-fit procedures

The additive hazard model is very versatile, but it is still of importance to check whether the model provides an adequate fit to the data at hand. The starting point will then usually be model (3.1), and if this model is found to give an adequate fit, one may proceed to investigate the time dynamics of the covariate effects. The goodness-of-fit procedures presented below are based on the martingale residuals

$M ^ ( t ) = N ( t ) − ∫ 0 t X ( s ) d B ^ ( s ) = ∫ 0 t H ( s ) d M ( s ) .$

Under the model, the martingale residuals are thus themselves martingales. To validate the model fit, one possibility is to sum these residuals depending on the level of the covariates (Aalen, 1993; Grønnesby and Borgan, 1996). A K-cumulative residual process may then be defined as

$M K ( t ) = ∫ 0 t K T ( s ) d M ^ ( s ) = ∫ 0 t K T ( s ) H ( s ) d M ( s ) ,$

where $K(t) = \{K_1^T(t), \ldots, K_n^T(t)\}^T$ is an n × m matrix, possibly depending on time. A typical choice of K is to let it be a factor with levels defined by the quartiles of the considered covariates. Based on an iid representation of $M_K(t)$, one may calculate p-values using, for instance, a supremum test; see Martinussen and Scheike (2006). With a continuous covariate one may also target a test to investigate whether the specific covariate is included in the model on the right scale; a typical alternative is to use a log-transformed version of the covariate. This construction is similar to what Lin et al. (1993) proposed under the Cox model and is based on

$M c ( z ) = ∑ i = 1 n I ( X i 1 ≤ z ) M ^ i ( τ )$

here assuming that the covariate under investigation is $X_1$. This process in z has a decomposition similar to that of $M_K$ developed above, and resampling may thus be used to validate the fit of the model (Martinussen and Scheike, 2006). These goodness-of-fit procedures are implemented in the R package timereg. For other goodness-of-fit tests concerning the additive hazard model, see Gandy and Jensen (2005a) and Gandy and Jensen (2005b).

3.1.3  Further results on additive hazard models

The additive hazard model is applied in some more specialized areas of survival analysis. Below we describe some of these. But first we present some structural properties of the model that underline that it may indeed provide an appealing alternative to the proportional hazards model.

3.1.3.1  Structural properties of the additive hazard model

Consider the model

3.6 $\alpha(t|X, Z) = \beta_0(t) + \beta_X(t) X + \beta_Z(t) Z$

and let $B_X(t) = \int_0^t \beta_X(s)\,ds$ and similarly with $B_Z(t)$. We will explore when the additive hazard structure is preserved when conditioning only on X, which was first investigated in Aalen (1993). The model above is formulated with X and Z both being scalar valued, but the following can easily be generalized to cover the vector situation as well. The hazard function, when conditioning only on X, is

$\alpha(t|X) = \beta_0(t) + \beta_X(t) X + \beta_Z(t)\, \psi_{Z|X}(B_Z(t)),$

where $\psi_{Z|X}(\cdot)$ denotes minus the derivative of the log Laplace transform of Z given X. It is therefore readily seen that the additive structure is preserved if X and Z are independent, as only the intercept term is changed. This also holds true if (X, Z) has a normal distribution, allowing for dependence between the two. This is in contrast to the proportional hazards model, where the proportional structure is not preserved even when X and Z are independent (Struthers and Kalbfleisch, 1986).
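This Laplace-transform calculation is easy to verify numerically. The sketch below uses constant coefficients and a gamma-distributed Z with mean 1 and variance θ, chosen purely for convenience so that $\psi_Z(u) = 1/(1 + \theta u)$, and checks that minus the derivative of the log marginal survival function equals the claimed additive hazard.

```python
import numpy as np

# Numerical check of collapsibility: with hazard b0 + bx*X + bz*Z and
# Z ~ Gamma(mean 1, variance theta) independent of X, the marginal hazard
# given X should be  b0 + bx*X + bz * psi(B_Z(t)),  psi(u) = 1/(1 + theta*u),
# psi being minus the derivative of the log Laplace transform of Z.
b0, bx, bz, theta, X = 0.3, 0.5, 0.8, 2.0, 1.0

def S_marginal(t):
    # E_Z[exp(-(b0 + bx*X)*t - bz*Z*t)], using the gamma Laplace transform
    return np.exp(-(b0 + bx * X) * t) * (1.0 + theta * bz * t) ** (-1.0 / theta)

t, h = 0.7, 1e-5
hazard_numeric = -(np.log(S_marginal(t + h)) - np.log(S_marginal(t - h))) / (2 * h)
hazard_formula = b0 + bx * X + bz / (1.0 + theta * bz * t)
```

The same ψ function reappears in the causal formula (3.7) below, so this check covers that display as well.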

When dealing with data from a non-randomized study, it may be of interest to calculate the causal hazard function, as it corresponds to the hazard function under a randomized study. To do so, we briefly introduce Pearl's (Pearl, 2000) do operator, $do(X = x)$ (or $\hat X = x$), which is an intervention setting the variable X to x. The conditional distribution of T given that X is set to x, $P(T|\hat X = x)$, is therefore different from P(T|X = x), where the latter is the conditional distribution of T given that we find X equal to x. Taking again model (3.6) as the starting point, it may be seen, using the so-called "G-computation formula" (Robins, 1986; Pearl, 2000), that

3.7 $α ( t | X ^ = x ) = β ˜ 0 ( t ) + β X ( t ) x , β ˜ 0 ( t ) = β 0 ( t ) + β Z ( t ) ψ Z ( B Z ( t ) ) ,$

where $\psi_Z(\cdot)$ denotes minus the derivative of the log Laplace transform of Z. It is seen from display (3.7) that the additive hazard structure is again preserved, and therefore the causal effect of X can be estimated from model (3.6) formulated for the observed data. Such a property does not hold true for the proportional hazards model; see Martinussen and Vansteelandt (2012) for more details on these issues. The additive hazard model also forms the basis for investigation of so-called "direct effects" in Fosen et al. (2006), Martinussen (2010), Lange and Hansen (2011), and Martinussen, Vansteelandt, Gerster and Hjelmborg (2011). In a situation as depicted in Figure 3.1, the direct effect of X, formulated via the hazard function on the absolute scale, is

$\alpha(t|\hat X = x + 1, \hat K) - \alpha(t|\hat X = x, \hat K).$

Figure 3.1   Situation with L and K being observed confounders and U being unobserved.

3.1.3.2  Clustered survival data and additive hazard model

Clustered failure time data arise in many research areas, for example, in studies where time to disease onset is recorded for members of a sample of families. One approach to analyze such data is to use a frailty model, and recently Martinussen, Scheike and Zucker (2011) proposed the use of the additive hazard model in such a setting. The Aalen additive gamma frailty hazard model specifies that individual i in the kth cluster has conditional hazard function

3.8 $α i k ( t | Z k , X i k ) = Z k X i k T β ( t ) ,$

where Z k is the unobserved frailty variable, taken to be gamma distributed with mean 1 and variance θ, X ik is the covariate vector and β(t) is the vector of time-dependent regression coefficients. The counting process associated with individual i in the kth cluster has the following intensity with respect to the marginal history:

3.9 $λ i k m ( t ) = { 1 + θ X i k T B ( t − ) } − 1 Y i k ( t ) X i k T β ( t ) ,$

where $Y_{ik}(t)$ is the at-risk indicator. Thus the marginal intensity still has an additive structure, but with the design depending on B(t−). Based on this, an estimator with a recursive structure was developed, and large sample properties were presented; see Martinussen, Scheike and Zucker (2011) for more details. Another approach was taken by Cai and Zeng (2011), where the frailty effect enters on the additive scale:

3.10 $α i k ( t | Z k , X i k ) = β 0 ( t ) + X i k T β + Z k ,$

using the Lin and Ying version of the additive model, and where $Z_k$ follows a one-parameter distribution with zero mean. Estimators and large sample properties are provided in Cai and Zeng (2011).
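The marginal-intensity formula (3.9) can be checked numerically in the same Laplace-transform fashion as before; the sketch below uses a constant β (an illustrative simplification) and a gamma frailty with mean 1 and variance θ, so that the marginal survival function is $(1 + \theta X^T B(t))^{-1/\theta}$ and the marginal hazard should be $X^T\beta(t)/\{1 + \theta X^T B(t)\}$.

```python
import numpy as np

# Numerical check of the marginal intensity (3.9): a gamma frailty Z_k
# (mean 1, variance theta) multiplying an additive hazard X'beta(t) gives
# marginal survival (1 + theta * X'B(t))^(-1/theta), hence marginal hazard
#   X'beta(t) / (1 + theta * X'B(t)).
# A constant beta is used purely for convenience, so X'B(t) = xb * t.
theta = 1.5
xb = 0.6                                    # X' beta, constant in t

def S_marginal(t):
    return (1.0 + theta * xb * t) ** (-1.0 / theta)

t, h = 1.2, 1e-5
hazard_numeric = -(np.log(S_marginal(t + h)) - np.log(S_marginal(t - h))) / (2 * h)
hazard_formula = xb / (1.0 + theta * xb * t)
```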

As an alternative to the frailty approach, one may instead use a marginal regression model approach if interest centers on estimating regression effects at the population level and estimation of correlation is of less interest. For use of the fully nonparametric version of the additive hazards model in this context, see Martinussen and Scheike (2006). One may also apply the Lin and Ying version of the additive model; see Pipper and Martinussen (2004) and Yin and Cai (2004).

3.1.3.3  Additive hazard change point model

As described in Section 3.1 the Aalen additive hazard model is well suited to investigate the time dynamics of the effect of a given covariate. Hence one can test the constant effects model against a model where the structure of the effect is left fully unspecified. An appealing model in between the constant effects model and the full Aalen model is the change-point model in which the effect of a covariate is constant up to an unknown point in time and changes thereafter to a new value. This model can be written as

3.11 $\alpha(t|X, Z) = X^T \beta(t) + Z\{\gamma_1 + \gamma_2 I(t > \theta)\},$

where α(t|X, Z) is the hazard rate, X is a p-dimensional covariate, Z is a scalar covariate, β(t) is a vector of unknown locally integrable time-dependent regression functions, and $\gamma_1$, $\gamma_2$ and θ are unknown parameters, the last being the change-point parameter. Testing whether or not there is evidence of a change-point effect corresponds to testing the hypothesis $H_0: \gamma_2 = 0$, which is a nontrivial problem, as the change-point parameter θ is only present under the alternative. Martinussen and Scheike (2007) derived a test tailored to investigate this hypothesis and also gave estimators for all the unknown parameters defining the model. They also proposed a test that can be used to compare the change-point model to the full Aalen model with no prespecified form of the covariate effects.

3.1.3.4  Additive hazard and high-dimensional regressors

It has become of increasing interest to develop statistical methods to explore a potential relationship between a high-dimensional regressor, such as gene-expression data, and the timing of an event such as death. It has been demonstrated that the additive hazard model may also be useful in this respect. The starting point for these developments has so far been the additive hazard model with all effects being constant with time, that is, the Lin and Ying model,

$α ( t | Z ) = β ( t ) + Z T γ,$

where Z is a p-dimensional covariate vector, in this case with p large. With gene expression data, p may be very large and often much larger than the sample size n; this is referred to as the p ≫ n situation. Based on the Lin and Ying model, Martinussen and Scheike (2009a) derived a partial least squares estimator via the so-called "Krylov sequence," and Ma et al. (2006) derived a principal components analysis in this context. Other popular approaches in a high-dimensional regressor setting are the Lasso and ridge regression, to mention some; see Bovelstad et al. (2009) for a comparison of methods using the Cox model as the base model. One complication with the additive hazard model compared with the proportional hazards model is, however, that there is no simple likelihood to work with for the additive model. Leng and Ma (2007) suggested using a least squares criterion, playing a role similar to that of the partial likelihood for Cox's regression model, for the Lin and Ying model:

3.12 $L(\beta) = \beta^T \Bigl\{ \int Z^T(t) H(t) Z(t) \, dt \Bigr\} \beta - 2 \beta^T \Bigl\{ \int Z^T(t) H(t) \, dN(t) \Bigr\},$

using the notation introduced in Section 3.1. This criterion was also suggested independently by Martinussen and Scheike (2009b), who provided further motivation. The criterion opens the route to many popular estimation procedures applied in the high-dimensional regressor setting when the underlying model is the Lin and Ying model. Recently, Gorst-Rasmussen and Scheike (2012) used the same model to propose a sure independence screening method that shows good performance. For estimation in the high-dimensional case, one may use the contributed R package ahaz, which can also do other types of estimation and inference within the additive hazard model context.
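Because (3.12) is a quadratic form, its minimizer solves a linear system, which is what makes penalized variants straightforward. The sketch below uses a random positive-definite matrix as a stand-in for the data-driven integrals, so it illustrates the algebra only, not an actual fit.

```python
import numpy as np

# The criterion (3.12) has the form L(b) = b'Db - 2 b'd with
#   D standing in for  int Z'HZ dt   and   d for  int Z'H dN.
# Its minimiser solves D b = d, reproducing the closed-form estimator (3.3);
# ridge or lasso versions simply add a penalty to this quadratic.
rng = np.random.default_rng(3)
p = 5
A = rng.standard_normal((40, p))
D = A.T @ A                       # random positive-definite stand-in
d = rng.standard_normal(p)        # stand-in for the linear term

def L(b):
    return b @ D @ b - 2 * b @ d

b_hat = np.linalg.solve(D, d)     # unpenalised minimiser

# ridge version: minimiser of L(b) + lam * ||b||^2 solves (D + lam*I) b = d
lam = 0.5
b_ridge = np.linalg.solve(D + lam * np.eye(p), d)
```

A lasso penalty replaces the ridge term with an L1 norm; the criterion is then still convex but no longer has a closed-form minimizer.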

3.1.3.5  Combining the Cox model and the additive model

The Cox model and the Aalen model may be combined in various ways to improve model fit or to enhance interpretation of parameters. Scheike and Zhang (2002) considered the model

$α ( t | X , Z ) = { X T β ( t ) } exp ⁡ { Z T γ } .$

Figure 3.2   Gastrointestinal tumour data. Left display: Kaplan-Meier estimates of survival probabilities of the chemotherapy (solid line) and chemotherapy plus radiotherapy (dashed line) groups. Right display: effect of combined therapy - Aalen's least squares estimate of $B_1(t)$ with 95% pointwise confidence bands, the dashed curve being the estimate based on a change-point model.

termed the Cox-Aalen model, where the baseline hazard function of the Cox model is replaced by an additive hazard term depending on X. Martinussen and Scheike (2002) considered a proportional excess risk model that also combines the Cox model and the additive model.

It is also of interest to decide whether the Cox model or the additive hazard model is more appropriate in a given application. This is a nontrivial problem, as these two classes of models are non-nested. Martinussen et al. (2008) used the Mizon-Richard encompassing test for this particular problem, which corresponds to fitting the Aalen model to the martingale residuals obtained from the Cox regression analysis.

3.1.3.6  Gastrointestinal tumour data

A typical instance where the Cox model provides a poor fit is when crossing survival curves are observed, as shown in Figure 3.2 (left display), which displays the Kaplan-Meier estimates of survival probabilities of the chemotherapy and chemotherapy plus radiotherapy groups of gastric cancer patients in a randomized clinical trial (Stablein and Koutrouvelis, 1985); these data were also considered in Zeng and Lin (2007b). The crossing of the survival curves can be explained by the patients receiving the more aggressive treatment (radiotherapy) being at elevated risk of death initially but benefiting from the treatment in the long run if they are able to tolerate it initially. That the Cox model provides a poor fit to the data is confirmed by the supremum test based on the score process (Lin et al., 1993) with p < 0.001. We fitted Aalen's additive hazards model to the gastrointestinal tumour data:

$α ( t | X ) = β 0 ( t ) + β ( t ) X ,$

where X is the indicator of chemotherapy plus radiotherapy treatment. The cumulative regression coefficient corresponding to the combined therapy group is depicted in Figure 3.2 (right display), showing nicely that a time-varying effect is indeed present for these data, with an adverse effect of the combined treatment in the first 300 days or so and apparently a beneficial effect thereafter. For these data one may be interested in applying the change-point model

$α ( t | X ) = β 0 ( t ) + X { γ 1 + γ 2 I ( t > θ ) } ,$

with θ being the change-point parameter. For the gastrointestinal tumour data we obtain the estimates $\hat\theta = 315$ and $(\hat\gamma_1, \hat\gamma_2) = (0.00175, -0.00255)$, with the estimated cumulative regression function superimposed on the Aalen estimator in Figure 3.2, indicating that this model gives a good fit to these data.

3.2  The accelerated failure time model

Another useful model in survival analysis is the accelerated failure time (AFT) model, which can be formulated using survival functions. Let T be a survival time and Z a covariate vector that does not depend on time, and let S(t|Z) be the conditional survival function. The AFT model assumes that

3.13 $S ( t | Z ) = S 0 ( t e Z T β ) ,$

where $S_0$ is a baseline survival function. From (3.13) it is seen that covariates act multiplicatively on time, so that their effect is to accelerate (or decelerate) time-to-failure relative to $S_0$. An equivalent and popular formulation of the AFT model is the following linear regression for the log-transformed event time, log(T), given Z:

3.14 $log ⁡ ( T ) = − Z T β + ϵ ,$

where the distribution of $e^\epsilon$ is given by $S_0$ and ε is assumed to be independent of Z. The AFT model has received considerable attention in the statistical literature in the last two decades or so. Although computationally intensive in its semiparametric version (see below), it is now considered an alternative to the Cox model. It is appealing due to the direct relationship between the failure time and covariates, which was also noted by David Cox in Reid (1994). One may extend the model to cover the case where covariates may change with time by considering

$e ϵ = ∫ 0 T e Z T ( t ) β d t$

see, for example, Zeng and Lin (2007a), but we will only consider the simpler case here. The AFT model can also be formulated in terms of the hazard function for T given Z:

3.15 $λ ( t | Z ) = λ 0 ( t e Z T β ) e Z T β ,$

where $\lambda_0(t)$ is the hazard function associated with the distribution of $e^\epsilon$.
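The equivalence of the survival-function formulation (3.13) and the log-linear formulation (3.14) is easily checked by Monte Carlo. In the sketch below, $e^\epsilon$ is taken standard exponential, so $S_0(t) = e^{-t}$, with a scalar covariate; both choices are purely illustrative.

```python
import numpy as np

# Monte Carlo check that (3.13) and (3.14) describe the same model:
# draw log(T) = -Z*b + eps as in (3.14), with e^eps ~ Exp(1) so S0(t) = exp(-t),
# and compare the empirical survival with S(t|Z) = S0(t * exp(Z*b)) from (3.13).
rng = np.random.default_rng(5)
b, Z = 0.8, 1.0
eps = np.log(rng.exponential(1.0, 200_000))  # e^eps ~ Exp(1)
T = np.exp(-Z * b + eps)                     # event times from (3.14)

t = 0.4
emp = (T > t).mean()                         # empirical P(T > t | Z)
theo = np.exp(-t * np.exp(Z * b))            # S0(t * e^{Zb}) from (3.13)
```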

3.2.1  Parametric models

If we are willing to specify a specific form of S 0 then the AFT model is fully parametric. For instance, the Weibull model is given by

$S 0 ( t ) = e − k t α , k , α > 0.$

In such a case, estimation and statistical inference are straightforward using standard likelihood methods for survival data. For the Weibull model we have the hazard function $\lambda_0(t) = k\alpha t^{\alpha - 1}$ and hence, by (3.15),

$\lambda(t|Z) = k\alpha (t e^{Z^T\beta})^{\alpha - 1} e^{Z^T\beta} = \lambda_0(t) e^{Z^T \alpha\beta},$

which is also a proportional hazards function with a Weibull baseline hazard function, so in this specific example the two models coincide.
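The Weibull calculation can be verified numerically; the sketch below plugs arbitrary (illustrative) parameter values into both sides of the identity $\lambda_0(te^{Z\beta})e^{Z\beta} = \lambda_0(t)e^{\alpha\beta Z}$, here with a scalar covariate.

```python
import numpy as np

# Check that the Weibull AFT hazard (3.15) has proportional hazards form:
# with lam0(t) = k*alpha*t^(alpha-1),
#   lam0(t*e^{bZ}) * e^{bZ}  should equal  lam0(t) * e^{alpha*b*Z}.
# All parameter values below are arbitrary illustrative choices.
k, alpha, b, Z = 0.4, 1.7, 0.9, 1.3

def lam0(t):
    return k * alpha * t ** (alpha - 1)

t = 2.5
aft_form = lam0(t * np.exp(b * Z)) * np.exp(b * Z)   # right-hand side of (3.15)
ph_form = lam0(t) * np.exp(alpha * b * Z)            # proportional hazards form
```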

3.2.2  Semiparametric models

We will now look into the case where S 0(t) is left unspecified, or, equivalently, ε in (3.14) is not specified, which is referred to as the “semiparametric AFT model.” There exist two approaches to tackle this situation. One develops estimation based on the hazard specification (3.15), while the other one uses the additive mean model specification (3.14) as a starting point.

3.2.2.1  Inference based on hazard specification

Let C be the censoring time for T, and put $\tilde T = T \wedge C$ and $\Delta = I(T \le C)$. The intensity of the counting process $N(t) = I(\tilde T \le t)\Delta$ is then given by Y(t)λ(t), with $Y(t) = I(t \le \tilde T)$ the at-risk indicator and λ(t) given in (3.15). Assume that n iid counting processes are observed subject to this generic hazard model. We thus consider $N(t) = \{N_1(t), \ldots, N_n(t)\}$, the n-dimensional counting process of all subjects. Define also the time-transformed counting process

$N * ( t ) = ( N 1 ( t e − Z 1 T β ) , … , N n ( t e − Z n T β ) )$

with associated at-risk processes $Y_i^*(t, \beta) = Y_i(t e^{-Z_i^T\beta})$, $i = 1, \ldots, n$. This time transformation is made because the intensity of $N_i(t e^{-Z_i^T\beta})$ is then

$λ i * ( t ) = Y i * ( t ) λ 0 ( t ) ,$

which immediately suggests that $Λ 0 ( t ) = ∫ 0 t λ 0 ( s ) d s$

should be estimated by the Breslow-type estimator

3.16 $Λ ^ 0 ( t , β ) = ∫ 0 t 1 S 0 * ( s , β ) d N · * ( s )$

where $d N · * ( t ) = ∑ i = 1 n d N i * ( t )$

, and

$S 0 * ( t , β ) = ∑ i = 1 n Y i * ( t , β )$

if β were known. The efficient score function for β is

3.17 $∑ i = 1 n ∫ 0 ∞ ∂ ∂ β ( λ i ( t , β ) ) λ i − 1 ( t , β ) ( d N i ( t ) − Y i ( t ) λ i ( t ) d t ) = ∑ i = 1 n ∫ 0 ∞ ( λ ′ 0 ( t e Z i T β ) t e Z i T β λ 0 ( t e Z i T β ) + 1 ) Z i ( d N i ( t ) − Y i ( t ) λ i ( t ) d t ) = ∑ i = 1 n ∫ 0 ∞ ( λ ′ 0 ( u ) u λ 0 ( u ) + 1 ) Z i ( d N i * ( u ) − Y i * ( u , β ) d Λ 0 ( u ) ) ,$

and inserting $d Λ ^ 0 ( u , β )$

for dΛ0(u) gives

3.18 $U W ( β ) = ∑ i = 1 n ∫ 0 ∞ W ( u ) Z i ( d N i * ( u ) − Y i * ( u , β ) S 0 * ( u , β ) d N · * ( u ) ) = ∑ i = 1 n ∫ 0 ∞ W ( u ) ( Z i − E * ( u , β ) ) d N i * ( u ) ,$

where

$S 1 * ( u , β ) = ∑ i = 1 n Y i * ( u , β ) Z i , E * ( u , β ) = S 1 * ( u , β ) S 0 * ( u , β ) ,$

and

3.19 $W ( u ) = ( λ ′ 0 ( u ) u λ 0 ( u ) + 1 )$

is the efficient weight function. We cannot use (3.18) directly for estimation purposes, since the weight function W(u) involves the unknown baseline hazard function $\lambda_0(u)$ and its derivative $\lambda_0'(u)$. These can be estimated and inserted into (3.18), but this is not recommendable, since it is hard to get reliable estimates of $\lambda_0'(t)$ in particular. A way around this is to take (3.18) and replace the weight function W(u) with one that can be computed, for example W(u) = 1 or $W(u) = n^{-1} S_0^*(u, \beta)$, referred to as the "log-rank" and "Gehan" weight functions, respectively. A practical complication is that the score function $U_W(\beta)$ is a step function of β, so $U_W(\beta) = 0$ may not have a solution. The score function may furthermore not be component-wise monotone in β; it is, however, monotone in each component of β if the Gehan weight is chosen (Fygenson and Ritov, 1994). The estimator $\hat\beta$ is usually chosen as the one that minimizes $\|U_W(\beta)\|$. The contributed R package lss calculates the Gehan rank estimator. It has been established, under regularity conditions, that $n^{1/2}(\hat\beta - \beta)$ is asymptotically zero-mean normal with covariance matrix $A_W^{-1} B_W A_W^{-1}$, where $A_W$ and $B_W$ are the limits in probability of

$1 n ∑ i = 1 n ∫ 0 ∞ W ( u ) ( Z i − E * ( u , β ) ) ⊗ 2 ( λ ′ 0 ( u ) u λ 0 ( u ) + 1 ) d N i * ( u ) , 1 n ∑ i = 1 n ∫ 0 ∞ W ( u ) 2 ( Z i − E * ( u , β ) ) ⊗ 2 d N i * ( u ) ,$

respectively; see Tsiatis (1990) and Ying (1993). It is seen that $A_W$ and $B_W$ coincide in the case where W(u) is taken as the efficient weight function (3.19). The asymptotic covariance matrix depends on $\lambda_0'$, which is difficult to estimate. One may, however, apply a resampling technique that avoids estimation of $\lambda_0'$; see Lin et al. (1998) and Jin et al. (2003).
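As an illustration of the rank-based estimation just described, the sketch below evaluates the Gehan-weighted score in its well-known pairwise form, $U(\beta) = \sum_i \Delta_i \sum_j (Z_i - Z_j) I\{e_j(\beta) \ge e_i(\beta)\}$ with residuals $e_i(\beta) = \log \tilde T_i + Z_i \beta$, on simulated data and minimizes its absolute value over a grid. The lss package does this properly; everything here, including the simulation design, is a toy stand-in.

```python
import numpy as np

# Toy Gehan rank estimator for the semiparametric AFT model (3.14),
# log(T) = -Z*beta + eps: the score is monotone in beta (Fygenson and Ritov,
# 1994), and the estimator minimises its norm, here over a coarse grid.
rng = np.random.default_rng(4)
n = 400
Z = rng.standard_normal(n)
beta_true = 0.7
T = np.exp(-Z * beta_true + 0.5 * rng.standard_normal(n))  # event times
C = rng.exponential(3.0, n)                                # censoring times
Tt = np.minimum(T, C)
delta = (T <= C).astype(float)

def gehan_score(b):
    e = np.log(Tt) + Z * b                      # residuals e_i(b)
    at_risk = e[None, :] >= e[:, None]          # I(e_j >= e_i), row i, col j
    return np.sum(delta[:, None] * (Z[:, None] - Z[None, :]) * at_risk)

grid = np.linspace(-1.0, 2.5, 141)
scores = np.array([abs(gehan_score(b)) for b in grid])
beta_hat = grid[scores.argmin()]
```

At the true β the residuals are independent of Z, so the score is centered at zero there; the grid minimizer should land near 0.7 up to sampling error and grid resolution.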

The score function can also be written on the (log)-transformed time scale

3.20 $U W ˜ ( β ) = ∑ i = 1 n ∫ − ∞ ∞ W ˜ ( t ) ( Z i − E * ( e t , β ) ) d N i * ( e t ) = ∑ i = 1 n ∫ − ∞ ∞ W ˜ ( t ) ( Z i − E ˜ ( t − Z i T β , β ) ) d N ˜ i ( t − Z i T β ) ,$

where $N ˜ i ( t ) = I ( log ⁡ ( T ˜ i ) ≤ t ) Δ i , Y ˜ i ( t ) = I ( t ≤ log ⁡ ( T ˜ i ) )$

,

$\tilde E(t, \beta) = \sum_{i=1}^n Z_i \tilde Y_i(t) \Big/ \sum_{i=1}^n \tilde Y_i(t), \qquad \tilde W(t) = \frac{\lambda_\epsilon'(t)}{\lambda_\epsilon(t)},$

with $\lambda_\epsilon(t)$ the hazard function for ε. Another way of motivating (3.20) is via classical nonparametric testing theory; see Tsiatis (1990) and Kalbfleisch and Prentice (2002) for many more details on that approach.

An interesting variant of (3.15), considered by Chen and Jewell (2001), is

3.21 $λ ( t ) = λ 0 ( t e Z T β 1 ) e Z T β 2 ,$

which contains the proportional hazards model (β1 = 0), the accelerated failure time model (β1 = β2), and, for β2 = 0, what is called the "accelerated hazard model" (Chen and Wang, 2000). Chen and Jewell (2001) suggested estimating equations for β = (β1, β2) and showed that the resulting estimator $\hat\beta$ satisfies that $n^{1/2}(\hat\beta - \beta)$ is asymptotically zero-mean normal with a covariance that also involves the unknown baseline hazard (and its derivative). They also suggested a resampling approach for estimating the covariance matrix without having to estimate the baseline hazard function or its derivative. With these tools at hand, one may then investigate whether it is appropriate to simplify (3.21) to either the Cox model or the AFT model.

3.2.2.2  Inference using the additive mean specification

There exist other ways of estimating the regression parameter β of the semiparametric AFT model that build more on classical linear regression estimation. Starting with (3.14), let V = log(T), U = log(C) and Ṽ = V ∧ U, and write model (3.14) as

$V = − β 0 − Z T β + ε$

assuming that ε is independent of Z = (Z1,...,Zp)T and has zero mean. If V were not right-censored, it would of course be an easy task to estimate the regression parameters by least squares. A number of authors (Miller, 1976; Buckley and James, 1979; Koul et al., 1981) have extended the least-squares principle to accommodate censoring; we describe the approach of Buckley and James. The idea is to replace V with a quantity that has the same mean as V and that can be computed from the right-censored sample. With

$V * = V Δ + ( 1 − Δ ) E ( V | V > U , Z ) ,$

and Δ = I(V ≤ U), we have E(V* | Z) = E(V | Z). Still, V* is not observable, but it can be estimated as follows. Since

$E(V \mid V > U, Z) = -Z^{T}\beta + \frac{\int_{U+Z^{T}\beta}^{\infty} v\, dF(v)}{1 - F(U + Z^{T}\beta)}$

with F the distribution of V + Z T β, one can construct the so-called “synthetic data points:”

$\hat V_i^{*}(\beta) = V_i\Delta_i + (1-\Delta_i)\left(-Z_i^{T}\beta + \frac{\int_{U_i+Z_i^{T}\beta}^{\infty} v\, d\hat F(v)}{1-\hat F(U_i+Z_i^{T}\beta)}\right),$

where $F ^$

is the Kaplan-Meier estimator based on $(\tilde V_i + Z_i^{T}\beta, \Delta_i)$, i = 1,...,n. One may then estimate the parameters from the normal equations, leading to the following estimating equation for the regression parameter vector β:

3.22 $U ( β ) = ∑ i = 1 n ( V ^ i * ( β ) + Z i T β ) ( Z i − Z ¯ ) = 0 ,$

where $\bar Z = n^{-1}\sum_i Z_i$. Equation (3.22) needs to be solved iteratively, if it has a solution. The resulting estimator is referred to as the "Buckley-James estimator"; its large-sample properties were studied by Ritov (1990). Equation (3.22) can also be written as, with S(v) = v,

3.23 $\sum_{i=1}^{n}\left(\Delta_i S(U_i+Z_i^{T}\beta) + (1-\Delta_i)\frac{\int_{U_i+Z_i^{T}\beta}^{\infty} S(v)\, d\hat F(v)}{1-\hat F(U_i+Z_i^{T}\beta)}\right)(Z_i - \bar Z) = 0,$

which may be derived from a likelihood principle; the efficient choice of S(v) is S(v) = f′(v)/f(v), with f(v) = F′(v) the density function. Ritov (1990) also established an asymptotic equivalence between the two classes of estimators given by (3.20) and (3.23).

The estimating function U(β) in (3.22) is neither continuous nor monotone in β, which makes the estimator difficult to compute in practice. Jin et al. (2006) proposed an iterative solution to (3.22) with a preliminary consistent estimator as starting value; this estimator is also available in the R package lss.
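The Buckley-James scheme can be sketched in a few lines: estimate the residual distribution by Kaplan-Meier, impute each censored response by its conditional mean (the "synthetic data point"), and update the slope from the normal equations. The sketch below uses the conventional parametrization V = a + Zb + ε (the sign convention is flipped relative to the display above), a scalar covariate, a fixed number of iterations, and simulated data; all of these are illustrative assumptions:

```python
import numpy as np

def km_jumps(e, delta):
    """Kaplan-Meier jumps dF of the residual distribution; the largest point is
    forced to be uncensored so the mass sums to one."""
    order = np.argsort(e)
    e, d = e[order], delta[order].astype(float)
    d[-1] = 1.0
    n = len(e)
    at_risk = n - np.arange(n)                   # number at risk just before e_(i)
    S = np.cumprod(1.0 - d / at_risk)            # KM survival after each point
    S_prev = np.concatenate(([1.0], S[:-1]))
    return e, d * S_prev / at_risk               # jump at e_(i) = S_prev * d_i / n_i

def buckley_james(V, delta, Z, n_iter=30):
    """Fixed-point Buckley-James iteration for V = a + Z*b + eps, scalar Z."""
    b = 0.0
    for _ in range(n_iter):
        e = V - Z * b                            # residuals under current slope
        ev, dF = km_jumps(e, delta)
        Vstar = V.copy()
        for i in np.where(delta == 0)[0]:        # impute censored responses
            mask = ev > e[i]
            p = dF[mask].sum()
            tail = (ev[mask] * dF[mask]).sum() / p if p > 1e-10 else e[i]
            Vstar[i] = Z[i] * b + tail           # synthetic data point V*_i
        Zc = Z - Z.mean()
        b = (Zc * Vstar).sum() / (Zc ** 2).sum() # normal-equations update
    return b

rng = np.random.default_rng(2)
n = 400
Z = rng.normal(size=n)
V = 1.0 + 0.8 * Z + rng.normal(scale=0.5, size=n)
U = rng.normal(loc=1.8, scale=1.5, size=n)       # log censoring times
Vt = np.minimum(V, U)
delta = (V <= U).astype(float)
b_hat = buckley_james(Vt, delta, Z)
```

As noted above, the iteration need not converge in general; a fixed iteration count is used here purely for illustration, whereas Jin et al. (2006) start from a preliminary consistent estimator.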

3.3.1  Introduction

In survival analysis, quantile regression (Koenker and Bassett, 1978) offers a significant extension of the accelerated failure time (AFT) model. Let T denote the failure time of interest and Z ≡ (Z1,...,Zp)⊺ denote a p × 1 vector of covariates. Define $\tilde Z = (1, Z^\top)^\top$ and V = log(T). For the response V, the τ-th conditional quantile of V given $\tilde Z$ is defined as $Q_V(\tau\mid\tilde Z) \equiv \inf\{t : \Pr(V \le t\mid \tilde Z) \ge \tau\}$, where τ ∊ (0,1). We adopt the same quantile definition for other response random variables. With survival data, a common quantile regression modeling strategy is to link the conditional quantile $Q_V(\tau\mid\tilde Z)$ to $\tilde Z$ through a linear model:

3.24 $Q V ( τ | Z ˜ ) = Z ˜ ⊺ β 0 ( τ ) , τ ∈ [ τ L , τ U ] ,$

where β0(τ) is a (p + 1) × 1 vector of unknown regression coefficients possibly depending on τ, and 0 ≤ τL ≤ τU ≤ 1. Like the AFT model, the quantile regression model (3.24) offers an easy coefficient interpretation as a covariate effect on event time. Specifically, a non-intercept coefficient in β0(τ) represents the change in the τ-th quantile of log(T) per one-unit change in the corresponding covariate.

It is important to note that the AFT model,

3.25 $log ⁡ ( T ) = Z ⊺ b + ε , ε ⊥ Z ,$

is a special case of model (3.24) with β0(τ) = (Qε(τ), b⊺)⊺. Here ⊥ stands for independence, and Qε(τ) represents the τ-th quantile of ε. From this view, we see that the AFT model (3.25) requires the effects of covariates on $Q_V(\tau\mid\tilde Z)$ to be constant for all τ ∊ [τL, τU] (i.e., location-shift effects). In contrast, the quantile regression model (3.24) flexibly formulates coefficients as functions of τ, thereby permitting covariate effects to vary across quantile levels. As a result, it can accommodate a more general relationship between T and Z than the AFT model (3.25).

Note that when τL = τU, model (3.24) only asserts "local" linearity between the quantile of log(T) and $\tilde Z$ at a single quantile level, and therefore imposes weaker restrictions than a "global" quantile regression model corresponding to the case τL < τU. A practical advantage of conducting "global" quantile regression is the capability of investigating the dynamic pattern of covariate effects.

In this section, we shall primarily focus on quantile regression with randomly right-censored data. Let C denote time to censoring. Define $T ˜ = T ∧ C$

and $Δ = I ( T ≤ C )$ . The observed data under right censorship consists of n i.i.d. replicates of $( T ˜ , Δ , Z ˜ )$ , denoted by $( T ˜ i , Δ i , Z ˜ i )$ . In addition, we define $V ˜ = log ⁡ ( T ˜ ) , V ˜ i = log ⁡ ( T ˜ i ) , U = log ⁡ ( C )$ , and U i = log(C i ). In Sections 3.3.2, 3.3.3 and 3.3.4, we consider three different random censoring scenarios, with general inference procedures discussed in Section 3.3.5. Extensions to more complex survival settings are briefly described in Section 3.3.6.

3.3.2  Estimation under random right censoring with C always known

Early work on quantile regression with censored data by Powell (1984, 1986) was targeted at type I censoring cases where the censoring time C is fixed and prespecified. Motivated by the fact that $Q V ˜ ( τ | Z ˜ ) = { Z ˜ ⊺ β 0 ( τ ) } ∧ U$

, an estimator of β 0(τ) is defined as the minimizer of

$r ( b , τ ) = ∑ i = 1 n ρ τ { V ˜ i − ( Z ˜ i ⊺ b ) ∧ U i }$

with respect to b, where $\rho_\tau(x) = x\{\tau - I(x < 0)\}$. This estimation method can also be applied to the more general case where C is independent of T given $\tilde Z$ and is always known but not necessarily fixed. In the absence of right censoring (i.e., Ci = ∞), r(b, τ) reduces to the check function of Koenker and Bassett (1978), the objective function defining sample regression quantiles with uncensored data. Note that r(b, τ) is not convex in b and thus may have multiple local minima. Further efforts have been made to improve the numerical performance of this approach, for example by Fitzenberger (1997), Buchinsky and Hahn (1998), and Chernozhukov and Hong (2001). An implementation of Powell's method is available in the crq function in the contributed R package quantreg (Koenker, 2012).
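Powell's criterion takes only a few lines of code; because it is non-convex, the result can depend on the starting value, as noted above. The data generation, the starting value, and the use of a generic Nelder-Mead search (instead of crq) are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import minimize

def rho(x, tau):
    """Koenker-Bassett check function: rho_tau(x) = x * (tau - I(x < 0))."""
    return x * (tau - (x < 0))

def powell_objective(b, Vt, U, Ztil, tau):
    """Powell's criterion r(b, tau) = sum_i rho_tau(Vt_i - min(Ztil_i'b, U_i))."""
    return np.sum(rho(Vt - np.minimum(Ztil @ b, U), tau))

rng = np.random.default_rng(3)
n = 400
Z = rng.uniform(-1.0, 1.0, size=n)
Ztil = np.column_stack([np.ones(n), Z])             # (1, Z)
V = 0.5 + 1.0 * Z + rng.normal(scale=0.5, size=n)   # median of V given Z: 0.5 + Z
U = rng.normal(loc=1.0, scale=1.0, size=n)          # known (log-scale) censoring times
Vt = np.minimum(V, U)

tau = 0.5
start = np.array([np.median(Vt), 0.0])              # crude but sensible starting value
fit = minimize(powell_objective, start, args=(Vt, U, Ztil, tau), method="Nelder-Mead")
b_hat = fit.x
```

The direct search only guarantees a local minimum here; the references cited above discuss more reliable computational strategies for this non-convex problem.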

3.3.3  Estimation under covariate-independent random right censoring

Under the assumption that T and C are independent and C is independent of $\tilde Z$ (i.e., covariate-independent random censoring), a natural estimating equation for β0(τ), derived from the work of Ying et al. (1995), is given by

3.26 $n^{-1/2}\sum_{i=1}^{n}\tilde Z_i\left[\frac{I\{\tilde V_i - \tilde Z_i^\top\beta(\tau) > 0\}}{\hat G\{\tilde Z_i^\top\beta(\tau)\}} - (1-\tau)\right] = 0,$

where $\hat G(\cdot)$ is the Kaplan-Meier estimate of $G(\cdot)$, the survival function of the log-scale censoring time U. The estimating function, the left-hand side (LHS) of (3.26), is not continuous in β(τ), and thus an exact zero-crossing may not exist. The solution to Equation (3.26) may alternatively be defined as a minimizer of the $L_2$ norm of the estimating function. Such an objective function is discontinuous and may have multiple minima. To solve Equation (3.26), as suggested by Ying et al. (1995), a grid search may be used for cases with low-dimensional $\tilde Z$, and a simulated annealing algorithm (Lin and Geyer, 1992) may be used for high-dimensional cases.

For the same right censoring scenario, an alternative estimating equation for β0(τ) is suggested by the work of Peng and Fine (2009). Specifically, the inverse probability of censoring weighting (IPCW) technique (Robins and Rotnitzky, 1992) can be used to estimate β0(τ) based on the fact that

$E\left\{\frac{I(\tilde V \le t, \Delta = 1)}{G(\tilde V)}\,\Big|\, \tilde Z\right\} = \Pr(V \le t \mid \tilde Z).$

This leads to the following estimating equation for β0(τ):

3.27 $n^{-1/2}\sum_{i=1}^{n}\tilde Z_i\left[\frac{I\{\tilde V_i \le \tilde Z_i^\top\beta(\tau), \Delta_i = 1\}}{\hat G(\tilde V_i)} - \tau\right] = 0.$

Note that the estimating function in (3.27) is not continuous but is monotone (Fygenson and Ritov, 1994). By this monotonicity, the solution to Equation (3.27) can be reformulated as the minimizer of the $L_1$-type convex function (of b),

$\sum_{i=1}^{n}\left\{I(\Delta_i = 1)\left|\frac{\tilde V_i}{\hat G(\tilde V_i)} - \frac{b^\top\tilde Z_i}{\hat G(\tilde V_i)}\right|\right\} + \left|M - b^\top\sum_{l=1}^{n}\left(-\frac{\tilde Z_l I(\Delta_l = 1)}{\hat G(\tilde V_l)} + 2\tilde Z_l\tau\right)\right|,$

where M is an extremely large positive number selected to bound $b^\top\sum_{l=1}^{n}\left(-\tilde Z_l I(\Delta_l = 1)/\hat G(\tilde V_l) + 2\tilde Z_l\tau\right)$ from above for all b in the compact parameter space for β0(τ). This minimization problem can readily be solved by the l1fit function in S-PLUS or the rq function in the contributed R package quantreg (Koenker, 2012).
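A small numerical sketch of this construction: $\hat G$ is the Kaplan-Meier estimate computed from $(\tilde V_i, 1-\Delta_i)$, and the convex $L_1$-type function is then minimized by a general-purpose optimizer rather than l1fit or rq. The simulated data, the truncation of small $\hat G$ values to stabilize the weights, and the optimizer choice are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import minimize

def censoring_km(Vt, delta):
    """Kaplan-Meier survival function of the censoring variable, evaluated at each
    Vt_i (the jumps are placed at the censored observations)."""
    order = np.argsort(Vt)
    n = len(Vt)
    at_risk = n - np.arange(n)
    G_sorted = np.cumprod(1.0 - (1.0 - delta[order]) / at_risk)
    G = np.empty(n)
    G[order] = G_sorted
    return G

rng = np.random.default_rng(4)
n = 500
Z = rng.uniform(0.0, 2.0, size=n)
Ztil = np.column_stack([np.ones(n), Z])
V = 1.0 + 0.5 * Z + rng.normal(size=n)          # tau = 0.5: Q_V(0.5 | Z) = 1 + 0.5 Z
U = rng.normal(loc=2.5, scale=1.5, size=n)      # covariate-independent censoring
Vt = np.minimum(V, U)
delta = (V <= U).astype(float)

tau = 0.5
G = np.maximum(censoring_km(Vt, delta), 0.05)   # truncate small G-hat values
w = delta / G                                   # IPCW weights Delta_i / G-hat(V~_i)
M = 1e6                                         # the large bound from the display above

def ipcw_objective(b):
    term1 = np.sum(w * np.abs(Vt - Ztil @ b))
    a = np.sum(-Ztil * w[:, None] + 2.0 * tau * Ztil, axis=0)
    return term1 + np.abs(M - b @ a)

fit = minimize(ipcw_objective, np.zeros(2), method="Nelder-Mead",
               options={"maxiter": 2000})
b_hat = fit.x
```

Because M dominates the second term, minimizing this objective is effectively minimizing the weighted absolute deviations minus a linear correction, whose subgradient reproduces the LHS of (3.27).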

3.3.4  Estimation under standard random right censoring

Standard random right censoring refers to a censoring mechanism where C is only assumed to be independent of T given $\tilde Z$. It is less restrictive than the censoring mechanisms considered in Sections 3.3.2 and 3.3.3.

Two major types of approaches have been developed for quantile regression with survival data subject to standard random right censoring; one type employs the principle of self-consistency (Efron, 1967) and the other type utilizes the martingale structure associated with randomly right-censored data. Hereafter, we shall refer to them as a self-consistent approach and martingale-based approach, respectively. Here we focus on approaches that are oriented to the following global linear quantile regression model,

3.28 $Q V ( τ | Z ˜ ) = Z ˜ ⊺ β 0 ( τ ) , τ ∈ [ 0 , τ U ] ,$

which is a special case of model (3.24) with τL = 0. Of note, model (3.28) entails $\exp\{\tilde Z^\top\beta_0(0)\} = 0$, which provides useful boundary information to be employed in the estimation of β0(τ) for τ > 0. Some efforts, for example by Wang and Wang (2009), have been made to address a local linear quantile regression model (i.e., τL = τU) by incorporating nonparametric estimation of the distribution function of C given $\tilde Z$.

3.3.4.1  Self-consistent approach

Portnoy (2003) made the first attempt to tackle the estimation of model (3.28) under standard random right censoring by employing the principle of self-consistency (Efron, 1967). The critical idea behind his algorithm is to split each censored observation, i.e., i with Δi = 0, between Ui and +∞ with appropriately designed weights, in the same spirit as Efron's self-consistent estimator of the survival function (Efron, 1967). The initial iterative self-consistent algorithm (Portnoy, 2003) was later simplified into a grid-based sequential estimation procedure (Neocleous et al., 2006), which is implemented in the crq function in the contributed R package quantreg (Koenker, 2012). The corresponding asymptotic theory was established by Portnoy and Lin (2010).

The self-consistent approach can be formulated through stochastic integral equations (Peng, 2012). Define Ni(t) = I(Ṽi ≤ t, Δi = 1), Ri(t) = I(Ṽi ≤ t, Δi = 0), and FV(t) = Pr(V ≤ t). First, consider Efron's self-consistent estimating equation for FV(t) in the one-sample case:

3.29 $F_V(t) = n^{-1}\sum_{i=1}^{n}\left\{N_i(t) + R_i(t)\,\frac{F_V(t) - F_V(\tilde V_i)}{1 - F_V(\tilde V_i)}\right\}.$

Expressing the right-hand side (RHS) of (3.29) by a stochastic integral and further applying stochastic integral by parts, one can rewrite Equation (3.29) as

3.30 $F_V(t) = n^{-1}\sum_{i=1}^{n}\left[N_i(t) + R_i(t)\{1 - F_V(t)\}\int_0^t \frac{R_i(u)}{\{1 - F_V(u)\}^2}\, dF_V(u)\right].$
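Equation (3.29) can be checked numerically in the one-sample case: iterating it as a fixed point is exactly Efron's self-consistency (EM-type) iteration, and the fixed point reproduces one minus the Kaplan-Meier estimator. The simulation settings below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 200
V = rng.exponential(1.0, size=n)                 # event times (one-sample case)
C = rng.exponential(1.5, size=n)                 # censoring times
Vt = np.minimum(V, C)
delta = (V <= C).astype(float)

order = np.argsort(Vt)
Vt, delta = Vt[order], delta[order]              # evaluation grid = sorted observations

# Kaplan-Meier: F_km(t) = 1 - prod_{Vt_(j) <= t} (1 - delta_j / n_j)
at_risk = n - np.arange(n)
F_km = 1.0 - np.cumprod(1.0 - delta / at_risk)

# Fixed-point iteration of the self-consistent equation (3.29):
# rows index the grid points t_k, columns index the subjects i
N = delta[None, :] * (Vt[None, :] <= Vt[:, None])          # N_i(t_k)
R = (1.0 - delta)[None, :] * (Vt[None, :] <= Vt[:, None])  # R_i(t_k)
F = np.arange(1, n + 1) / (n + 1.0)              # starting value strictly below one
for _ in range(5000):
    denom = np.maximum(1.0 - F[None, :], 1e-10)  # 1 - F(Vt_i)
    F = np.mean(N + R * (F[:, None] - F[None, :]) / denom, axis=1)
```

Each pass imputes the mass of every censored subject beyond its censoring point using the current estimate, which is the self-consistency principle invoked above.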

With t replaced by $Z ˜ i ⊺ β ( τ )$

, Equation (3.30) evolves into an estimating equation for β0(τ):

3.31 $n 1 / 2 S n ( S C ) ( β , τ ) = 0 ,$

where $S n ( S C ) ( β , τ )$

equals

$n^{-1}\sum_{i=1}^{n}\tilde Z_i\left[N_i\{\tilde Z_i^\top\beta(\tau)\} + R_i\{\tilde Z_i^\top\beta(\tau)\}(1-\tau)\int_0^\tau \frac{R_i\{\tilde Z_i^\top\beta(u)\}}{(1-u)^2}\, du - \tau\right].$

Define a grid of τ-values, G, as 0 = τ0 < τ1 < τ2 < … < τM = τU. Let ‖G‖ denote the mesh size of G, max k=0,…,M−1 (τk+1 − τk). Unless otherwise noted, G will be adopted throughout Section 3.3.4. A self-consistent estimator, $\hat\beta_{SC}(\cdot)$, is defined as a cadlag step function that jumps only at the grid points of G and approximates the solution to Equation (3.31). The algorithm for obtaining $\hat\beta_{SC}(\cdot)$ is outlined as follows.

1. Set $\exp\{\tilde Z_i^\top\hat\beta_{SC}(0)\} = 0$ for all i. Set k = 0.
2. Given $\exp\{\tilde Z_i^\top\hat\beta_{SC}(\tau_l)\}$ for l ≤ k, obtain $\hat\beta_{SC}(\tau_{k+1})$ as the minimizer of the following weighted check function:
3.32 $\left[\sum_{\Delta_i=1}\rho_\tau(\tilde V_i - \tilde Z_i^\top b) + \sum_{\Delta_i=0}\left\{\tilde w_{k+1,i}\,\rho_\tau(\tilde V_i - \tilde Z_i^\top b) + (1-\tilde w_{k+1,i})\,\rho_\tau(Y^* - \tilde Z_i^\top b)\right\}\right],$
where $\tilde w_{k+1,i} = \sum_{l=0}^{k} R_i\{\tilde Z_i^\top\hat\beta_{SC}(\tau_l)\}\left(\frac{1-\tau_{k+1}}{1-\tau_{l+1}} - \frac{1-\tau_{k+1}}{1-\tau_l}\right)$, and $Y^*$ is an extremely large value.
3. Replace k by k + 1 and repeat step 2 until k = M or only censored observations remain above $\exp\{\tilde Z_i^\top\hat\beta_{SC}(\tau_{k-1})\}$.

Large-sample studies of $\hat\beta_{SC}(\cdot)$ are facilitated by the stochastic integral equation representation (3.31). Specifically, under certain regularity conditions and given $\lim_{n\to\infty}\|G\| = 0$, $\sup_{\tau\in[\nu,\tau_U]}\|\hat\beta_{SC}(\tau) - \beta_0(\tau)\| \to_p 0$, where 0 < ν < τU. If in addition $n^{1/2}\|G\| \to 0$, then $n^{1/2}\{\hat\beta_{SC}(\tau) - \beta_0(\tau)\}$ converges weakly to a Gaussian process for τ ∊ [ν, τU]. Peng (2012) also investigated several variants of $\hat\beta_{SC}(\cdot)$, showing their asymptotic equivalence to $\hat\beta_{SC}(\cdot)$ as well as their connection with the self-consistent estimator proposed by Portnoy (Portnoy, 2003; Neocleous et al., 2006).

3.3.4.2  Martingale-based approach

Model (3.28) can also be estimated based on the martingale structure of randomly right-censored data (Peng and Huang, 2008). Define $\Lambda_V(t\mid\tilde Z) = -\log\{1 - \Pr(V \le t\mid\tilde Z)\}$, $N(t) = I(\tilde V \le t, \Delta = 1)$, and $M(t) = N(t) - \Lambda_V(t\wedge\tilde V\mid\tilde Z)$. Let Ni(t) and Mi(t) be the sample analogs of N(t) and M(t), respectively, i = 1,...,n. Since Mi(t) is the martingale process associated with the counting process Ni(t), we have $E\{M_i(t)\mid\tilde Z_i\} = 0$ for all t > 0. This implies

3.33 $E { ∑ i = 1 n Z ˜ i [ N i { Z ˜ i ⊺ β 0 ( τ ) } − Λ V { Z ˜ i ⊺ β 0 ( τ ) ∧ V ˜ i | Z ˜ i } ] } = 0.$

By the monotonicity of $\tilde Z_i^\top\beta_0(\tau)$ in τ under model (3.28), we have

3.34 $Λ V { Z ˜ i ⊺ β 0 ( τ ) ∧ V ˜ i | Z ˜ i } = ∫ 0 τ I { V ˜ i ≥ Z ˜ i ⊺ β 0 ( u ) } d H ( u ) ,$

where H(x) = − log(1 −x). Coupling the equalities (3.33) and (3.34) suggests the estimating equation,

3.35 $n 1 / 2 S n ( P H ) ( β , τ ) = 0 ,$

where

$S n ( P H ) ( β , τ ) = n − 1 ∑ i = 1 n Z ˜ i [ N i { Z ˜ i ⊺ β ( τ ) } − ∫ 0 τ I { V ˜ i ≥ Z ˜ i ⊺ β ( u ) } d H ( u ) ] .$
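Under the AFT submodel (3.25) with Gaussian errors the true coefficient function β0(τ) is available in closed form, so the mean-zero property behind (3.33)-(3.34), and hence behind $S_n^{(PH)}$, can be verified numerically; the data-generating choices and the grid resolution below are illustrative assumptions:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(6)
n = 2000
Z = rng.uniform(0.0, 1.0, size=n)
Ztil = np.column_stack([np.ones(n), Z])
b0, b1 = 0.5, 1.0
V = b0 + b1 * Z + rng.normal(size=n)              # log T; AFT => beta0(tau) = (b0 + z_tau, b1)
U = rng.normal(loc=2.0, scale=1.0, size=n)        # independent log censoring times
Vt = np.minimum(V, U)
delta = (V <= U).astype(float)

def q(tau):
    """True conditional quantile Ztil' beta0(tau) under this data generation."""
    return b0 + norm.ppf(tau) + b1 * Z

taus = np.arange(0.0, 0.5 + 1e-9, 0.01)           # grid on [0, tau_U], tau_U = 0.5
H = -np.log(1.0 - taus)                            # H(x) = -log(1 - x)

tau_U = taus[-1]
Nterm = (delta == 1) & (Vt <= q(tau_U))           # N_i{Ztil_i' beta0(tau_U)}
integral = np.zeros(n)
for l in range(len(taus) - 1):                    # left Riemann sum of the dH integral
    integral += (Vt >= q(taus[l])) * (H[l + 1] - H[l])
S = (Ztil * (Nterm - integral)[:, None]).mean(axis=0)
```

Both components of S should be close to zero, reflecting that the estimating function is unbiased at the true coefficient path; the small residual comes from sampling noise and grid discretization.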

An estimator of β0(τ), denoted by $\hat\beta_{PH}(\tau)$, can be obtained by approximating the stochastic solution to Equation (3.35) by a cadlag step function. The sequential algorithm for obtaining $\hat\beta_{PH}(\tau)$ is outlined as follows:

1. Set $\exp\{\tilde Z_i^\top\hat\beta_{PH}(\tau_0)\} = 0$ for all i. Set k = 0.
2. Given $exp ⁡ { Z ˜ i ⊺ β ^ P H ( τ l ) }$ for l ≤ k, obtain $β ^ P H ( τ k + 1 )$ as the minimizer of the following L 1 -type convex objective function:
$l_{k+1}(h) = \sum_{i=1}^{n}\left|\Delta_i\tilde V_i - \Delta_i\tilde Z_i^\top h\right| + \left|Y^* - \sum_{l=1}^{n}\left(-\Delta_l\tilde Z_l^\top h\right)\right| + \left|Y^* - \sum_{r=1}^{n}\left[(2\tilde Z_r^\top h)\sum_{l=0}^{k} I\{\tilde V_r \ge \tilde Z_r^\top\hat\beta_{PH}(\tau_l)\}\{H(\tau_{l+1}) - H(\tau_l)\}\right]\right|,$
where Y* is an extremely large value.
3. Replace k by k + 1 and repeat step 2 until k = M or no feasible solution can be found for minimizing l k (h).

The crq function in the contributed R package quantreg (Koenker, 2012) provides an implementation of $β ^ P H ( τ )$

based on an algorithm slightly different from the one presented above. More recently, Huang (2010) derived a grid-free estimation procedure for model (3.28) by using the concept of quantile calculus.

Peng and Huang (2008) established the uniform consistency and weak convergence of $β ^ P H ( ⋅ )$

. Moreover, $β ^ P H ( ⋅ )$ was shown to be asymptotically equivalent to the self-consistent estimator $β ^ S C ( ⋅ )$ (Peng, 2012). The numerical results reported in Koenker (2008) and Peng (2012) confirm this theoretical result and show comparable computational performance between these two approaches.

3.3.5  Variance estimation and other inference

The estimators of β0(τ) discussed in Sections 3.3.2–3.3.4 generally have asymptotic variances that involve unknown density functions. Under random right censoring with known censoring times or covariate-independent censoring, one can adapt the technique of Huang (2002) to avoid density estimation. Specifically, let $\hat\beta(\tau)$ be generic notation for an estimator of β0(τ), and let $S_n\{\beta(\tau),\tau\}$ denote the estimating function associated with $\hat\beta(\tau)$, for example the LHS of (3.26) or (3.27). Asymptotic theory typically shows that $S_n\{\beta_0(\tau),\tau\}$ converges to a multivariate normal distribution with variance matrix Σ(τ). Suppose one can obtain a consistent estimator of Σ(τ), denoted by $\hat\Sigma(\tau)$. The main steps for estimating the asymptotic variance of $\hat\beta(\tau)$ are as follows:

1. Find a symmetric and nonsingular (p+1)×(p+1) matrix $E_n(\tau) = \{e_{n,1}(\tau),\ldots,e_{n,p+1}(\tau)\}$ such that $\hat\Sigma(\tau) = \{E_n(\tau)\}^2$.
2. Calculate $D_n(\tau) = \left(S_n^{-1}\{e_{n,1}(\tau),\tau\} - \hat\beta(\tau), \ldots, S_n^{-1}\{e_{n,p+1}(\tau),\tau\} - \hat\beta(\tau)\right)$, where $S_n^{-1}(e,\tau)$ is defined as the solution in b to $S_n(b,\tau) = e$.
3. Estimate the asymptotic variance matrix of $n^{1/2}\{\hat\beta(\tau) - \beta_0(\tau)\}$ by $n\{D_n(\tau)\}^{\otimes 2}$.
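Step 1 requires a symmetric square root of $\hat\Sigma(\tau)$; a minimal way to compute one is through the eigendecomposition. The particular matrix used below as a stand-in for $\hat\Sigma(\tau)$ is an illustrative assumption:

```python
import numpy as np

def sym_sqrt(Sigma):
    """Symmetric square root E with E @ E = Sigma, for a symmetric
    positive-definite matrix Sigma."""
    vals, vecs = np.linalg.eigh(Sigma)
    return vecs @ np.diag(np.sqrt(vals)) @ vecs.T

Sigma = np.array([[2.0, 0.5],
                  [0.5, 1.0]])        # stand-in for Sigma-hat(tau)
E = sym_sqrt(Sigma)
```

The columns of the resulting matrix then serve as the perturbation vectors $e_{n,j}(\tau)$ in step 2.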

Bootstrapping procedures are also frequently used for variance estimation under quantile regression. Resampling methods that follow the idea of Parzen and Ying (1994) were shown to yield valid variance estimates and other inference (Peng and Huang, 2008). Simple bootstrapping procedures based on resampling with replacement also seem to have satisfactory performance (Portnoy, 2003; Peng, 2012).

One practical appeal of quantile regression with survival data is its capability of accommodating and exploring varying covariate effects. Second-stage inference can be employed to serve this need. Given $β ^ ( τ )$

for a range of τ, it is often of interest to investigate: (1) how to summarize the information provided by these estimators to help understand the underlying effect mechanism; and (2) how to determine whether some covariates have constant effects so that a simpler model may be considered.

A general formulation of problem (1) corresponds to the estimation of some functional of β0(·), denoted by Ψ(β0). A natural estimator of Ψ(β0) is given by $\Psi(\hat\beta)$. Such an estimator may be justified by the functional delta method, provided that Ψ is compactly differentiable at β0 (Andersen et al., 1998).

Addressing question (2) can be formulated as testing the null hypothesis $\tilde H_{0,j}: \beta_0^{(j)}(\tau) = \rho_0,\ \tau\in[\tau_L,\tau_U]$, where the superscript (j) indicates the j-th component of a vector and ρ0 is an unspecified constant, j = 2,..., p + 1. An example of such a test procedure is presented in Peng and Huang (2008). Of note, accepting $\tilde H_{0,j}$ for all j ∊ {2,...,p+1} may indicate the adequacy of an AFT model; this naturally renders a procedure for testing the goodness-of-fit of an AFT model.

Model checking is often of practical importance. When the interest only lies in checking the linearity between covariates and conditional quantile at a single quantile level, one can adapt the approaches developed for uncensored data, for example, the work by Zheng (2000), Horowitz and Spokoiny (2002), and He and Zhu (2003). When the focus is to test a global linear relationship between conditional quantiles and covariates, a natural approach is to use a stochastic process which has mean zero under the assumed model. For example, a diagnostic process for model (3.28) (Peng and Huang, 2008) may take the form,

$K n ( τ ) = n − 1 / 2 ∑ i = 1 n q ( Z ˜ i ) M i ( τ ; β ^ ) ,$

where q(·) is a known bounded function, and

$M i ( τ ; β ) = N i ( exp ⁡ { Z ˜ i ⊺ β ( τ ) } ) − ∫ 0 τ I { V ˜ i ≥ Z ˜ i ⊺ β ( u ) } d H ( u ) .$

It can be shown that Kn(τ) converges weakly to a zero-mean Gaussian process, whose distribution can be approximated by that of

$K^*(\tau) = n^{-1/2}\sum_{i=1}^{n} q(\tilde Z_i)\, M_i(\tau;\hat\beta)(1-\zeta_i) + n^{-1/2}\sum_{i=1}^{n} q(\tilde Z_i)\left\{M_i(\tau;\beta^*) - M_i(\tau;\hat\beta)\right\}.$

Here β*(τ) denotes the resampling estimator obtained by perturbing the estimating equation (3.35) with $\{\zeta_i\}_{i=1}^{n}$, which are independent variates from a known nonnegative distribution with mean 1 and variance 1. An unusual pattern of Kn(·) compared to that of K*(·) would suggest a lack of fit of model (3.28). A formal lack-of-fit test is given by the supremum statistic $\sup_{\tau\in[l,u]}|K_n(\tau)|$, where 0 < l < u < τU. The p-value may be approximated by the empirical proportion of $\sup_{\tau\in[l,u]}|K^*(\tau)|$ exceeding $\sup_{\tau\in[l,u]}|K_n(\tau)|$.

3.3.6  Extensions to other survival settings

In practice, survival data may involve more complex censoring mechanisms than random right censoring. Truncation may also be present, for example, in many observational studies. There have been recent methodological developments for quantile regression in more complex survival scenarios. For example, Ji et al. (2012) proposed a modification of Peng and Huang (2008)'s martingale-based approach for survival data subject to known random left censoring and/or random left truncation in addition to random right censoring. Quantile regression with competing risks or semicompeting risks data was addressed by Peng and Fine (2009) and Li and Peng (2011).

There are continued research efforts to address quantile regression in other important survival scenarios, such as interval-censored data and dependently censored data. As a promising regression tool for survival analysis, quantile regression may be recommended to a greater extent in real applications to provide complementary and useful secondary analyses.

3.3.7  An illustration of quantile regression for survival analysis

To illustrate quantile regression for survival analysis, we use a dataset from a dialysis study that investigated predictors of mortality risk in a cohort of 191 incident dialysis patients (Kutner et al., 2002). Analysis covariates included patient's age (AGE), the indicator of fish consumption during the first year of dialysis (FISHH), the indicator of baseline HD dialysis modality (BHDPD), the indicator of moderate to severe restless legs symptoms (BLEGS), the indicator of education at college level or higher (HIEDU), and the indicator of being black (BLACK). We first fit the data with AFT model (3.25), where T stands for time-to-death. In Table 3.1, we present the estimation results based on the log-rank estimator, Gehan's estimator, and the least-squares estimator. All covariates except BLEGS are consistently shown to have significant effects on survival by all three estimators. For the coefficient of BLEGS, Gehan's estimator and the least-squares estimator yield significant p values while the log-rank estimator does not.

We next conduct quantile regression based on model (3.28) for the same dataset. Figure 3.3 displays Peng and Huang (2008)'s estimator of β0(τ) along with 95% pointwise confidence intervals. In Figure 3.3, we observe that the coefficient for BLEGS diminishes gradually with τ whereas estimates for the other coefficients seem to be fairly constant. We apply the second-stage inference to formally investigate the constancy of each coefficient. The results confirm our observation from Figure 3.3, suggesting a varying effect of BLEGS and constant effects of the other covariates. This may lead to an interesting scientific implication that BLEGS may affect the survival experience of dialysis patients with short survival times, but may have little impact on that of long-term survivors. In addition, the evidence for the nonconstancy of BLEGS coefficient may indicate some degree of lack-of-fit of the AFT model for the dialysis data.

Table 3.1   Results from fitting the AFT model to the dialysis dataset. Coef: coefficient estimate; SE: standard error.

          Gehan's estimator        Log-rank estimator       Least-squares estimator
          Coef    SE    p value    Coef    SE    p value    Coef    SE    p value
AGE       −.031   .005   <.001     −.035   .004   <.001     −.033   .006   <.001
FISHH      .402   .139    .004      .485   .128   <.001      .507   .163    .002
BHDPD     −.505   .156    .001     −.473   .136   <.001     −.509   .164    .002
BLEGS     −.340   .166    .040     −.173   .145    .232     −.412   .176    .019
HIEDU     −.352   .133    .008     −.364   .161    .024     −.335   .139    .016
BLACK      .640   .144   <.001      .591   .138   <.001      .643   .153   <.001

We further estimate the average quantile effects, defined as $\int_l^u \beta_0^{(i)}(u)\,du$ (i = 2,…,7). The results are given in Table 3.2. We observe that the estimated average effects are similar to the AFT coefficients obtained by Gehan's estimator and the least-squares estimator, but show relatively larger discrepancies with the log-rank estimates. This may indirectly reflect the presence of some nonconstant covariate effect. If quantile regression model (3.28) held without any varying covariate effects, we would expect more consistent results between Gehan's estimator, which emphasizes early survival, and the log-rank estimator, which weights early and late survival information equally.

Table 3.2   Estimation of average covariate effects based on quantile regression. AveEff: estimated average effect; SE: standard error.

          AveEff    SE     p value
AGE       −.030    .003     <.001
FISHH      .327    .116      .005
BHDPD     −.489    .162      .003
BLEGS     −.369    .161      .022
HIEDU     −.350    .137      .011
BLACK      .654    .144     <.001
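Because the fitted coefficient path $\hat\beta(\cdot)$ is a cadlag step function on the grid, integrals such as $\int_l^u \hat\beta^{(j)}(u)\,du$ reduce to finite sums. Below is a minimal sketch with a hypothetical coefficient path; the grid and values are made up for illustration and are not taken from the dialysis analysis:

```python
import numpy as np

def average_effect(taus, beta_j, l, u):
    """Integral over [l, u] of a step function beta_j that is constant on
    [taus[k], taus[k+1]), computed exactly as a finite sum."""
    knots = np.concatenate([[l], taus[(taus > l) & (taus < u)], [u]])
    total = 0.0
    for a, b in zip(knots[:-1], knots[1:]):
        k = np.searchsorted(taus, a, side="right") - 1   # segment containing [a, b)
        total += beta_j[k] * (b - a)
    return total

taus = np.array([0.0, 0.1, 0.2, 0.3, 0.4, 0.5])
beta_j = np.array([-0.2, -0.3, -0.4, -0.4, -0.3, -0.3])  # hypothetical step values
avg = average_effect(taus, beta_j, 0.1, 0.5)
```

In practice the standard errors reported for such averaged effects would come from the resampling procedures discussed in Section 3.3.5, not from this sum itself.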

Figure 3.3   Peng and Huang's estimator (solid lines) and 95% pointwise confidence intervals (dotted lines) of regression quantiles in the dialysis example.

Bibliography

Aalen, O. O. (1980), A model for non-parametric regression analysis of counting processes, in W. Klonecki , A. Kozek and J. Rosinski , eds., ‘Lecture Notes in Statistics-2: Mathematical Statistics and Probability Theory’, New York: Springer-Verlag, pp. 1–25.
Aalen, O. O. (1989), ‘A linear regression model for the analysis of life times’, Statist. Med. 8, 907–925.
Aalen, O. O. (1993), ‘Further results on the non-parametric linear regression model in survival analysis’, Statist. Med. 12, 1569–1588.
Aalen, O. O. , Borgan, Ø. and Gjessing, H. K. (2008), Survival and event history analysis, Statistics for Biology and Health, Springer, New York. A process point of view. URL: http://dx.doi.org/10.1007/978-0-387-68560-1
Andersen, P. K. , Borgan, Ø. , Gill, R. D. and Keiding, N. (1998), Statistical Models Based on Counting Processes, 2nd edn, New York: Springer-Verlag.
Bovelstad, H. , Nygard, S. and Borgan, Ø. (2009), ‘Survival prediction from clinico-genomic models - a comparative study’, BMC Bioinformatics 10(1), 413.
Buchinsky, M. and Hahn, J. (1998), ‘An alternative estimator for censored quantile regression’, Econometrica 66, 653–671.
Buckley, J. and James, I. R. (1979), ‘Linear regression with censored data’, Biometrika 66, 429–436.
Cai, J. and Zeng, D. (2011), ‘Additive mixed effect model for clustered failure time data’, Biometrics 67(2), 1340–1351.
Chen, Y. and Jewell, N. (2001), ‘On a general class of semiparametric hazards regression models’, Biometrika 88, 687–702.
Chen, Y. Q. and Wang, M.C. (2000), ‘Analysis of accelerated hazards models’, J. Amer. Statist. Assoc. 95(2), 608–618.
Chernozhukov, V. and Hong, H. (2001), ‘Three-step censored quantile regression and extramarital affairs’, J. Amer. Statist. Assoc. pp. 872–882.
Efron, B. (1967), ‘The two-sample problem with censored data’, Proc. Fifth Berkeley Symposium in Mathematical Statistics, IV, 831–853.
Fitzenberger, B. (1997), ‘A guide to censored quantile regressions’, Handbooks of Statistics: Robust Inference 15, 405–437.
Fosen, J. , Ferkingstad, E. , Borgan, Ø. and Aalen, O. (2006), ‘Dynamic path analysis: a new approach to analyzing time-dependent covariates’, Lifetime Data Anal. 12(2), 143–167.
Fygenson, M. and Ritov, Y. (1994), ‘Monotone estimating equations for censored data’, Ann. Statist. 22, 732–746.
Gandy, A. and Jensen, U. (2005a), ‘Checking a semiparametric additive risk model’, Lifetime Data Anal. 11, 451–472.
Gandy, A. and Jensen, U. (2005), ‘On goodness-of-fit tests for Aalen's additive risk model’, Scand. J. Statist. 32, 425–445.
Gorst-Rasmussen, A. and Scheike, T. (2012), ‘Independent screening for single-index hazard rate models with ultra-high dimensional features’, J. Roy. Statist. Soc. Ser. B 75, 217– 245.
Grønnesby, J. K. and Borgan, Ø. (1996), ‘A method for checking regression models in survival analysis based on the risk score’, Lifetime Data Anal. 2, 315–328.
He, X. and Zhu, L.X. (2003), ‘A lack-of-fit test for quantile regression’, J. Amer. Statist. Assoc. 98, 1013–1022.
Horowitz, J. and Spokoiny, V. (2002), ‘An adaptive, rate-optimal test of linearity for median regression models’, J. Amer. Statist. Assoc. 97, 822–835.
Huang, Y. (2002), ‘Calibration regression of censored lifetime medical cost’, J. Amer. Statist. Assoc. 98, 318–327.
Huang, Y. (2010), ‘Quantile Calculus and Censored Regression’, Ann. Statist. 38, 1607–1637.
Huffer, F. W. and McKeague, I. W. (1991), ‘Weighted least squares estimation for Aalen's additive risk model’, J. Amer. Statist. Assoc. 86, 114–129.
Ji, S. , Peng, L. , Cheng, Y. and Lai, H. (2012), ‘Quantile regression for doubly censored data’, Biometrics 68, 101–112.
Jin, Z. , Lin, D. Y. , Wei, L. J. and Ying, Z. (2003), ‘Rank-based inference for the accelerated failure time model’, Biometrika 90, 341–353.
Jin, Z. , Lin, D. Y. and Ying, Z. (2006), ‘On least-squares regression with censored data’, Biometrika 93, 147–161.
Kalbfleisch, J. D. and Prentice, R. L. (2002), The Statistical Analysis of Failure Time Data, Wiley, New York.
Koenker, R. (2008), ‘Censored quantile regression redux’, Journal of Statistical Software 27, http://www.jstatsoft.com.
Koenker, R. (2012), Quantile Regression, R package version 4.81, http://cran.r-project.org/web/packages/quantreg/quantreg.pdf.
Koenker, R. and Bassett, G. (1978), ‘Regression quantiles’, Econometrica 46, 33–50.
Koul, H. , Susarla, V. and Van Ryzin, J. (1981), ‘Regression analysis with randomly right censored data’, Ann. Statist. 9, 1276–1288.
Kutner, N. G. , Clow, P. W. , Zhang, R. and Aviles, X. (2002), ‘Association of fish intake and survival in a cohort of incident dialysis patients’, American Journal of Kidney Diseases 39, 1018–1024.
Lange, T. and Hansen, J. (2011), ‘Direct and indirect effects in a survival context’, Epidemiology (Cambridge, Mass.) 22, 575–581.
Leng, C. and Ma, S. (2007), ‘Path consistent model selection in additive risk model via lasso’, Statist. Med. 26, 3753–3770. URL: http://dx.doi.org/10.1002/sim.2834
Li, R. and Peng, L. (2011), ‘Quantile regression for left-truncated semi-competing risks data’, Biometrics 67, 701–710.
Lin, D. and Geyer, C. (1992), ‘Computational methods for semi-parametric linear regression with censored data’, Journal of Computational and Graphical Statistics 1, 77–90.
Lin, D. Y. , Wei, L. J. and Ying, Z. (1993), ‘Checking the Cox model with cumulative sums of martingale-based residuals’, Biometrika 80, 557–572.
Lin, D. Y. , Wei, L. J. and Ying, Z. (1998), ‘Accelerated failure time models for counting processes’, Biometrika 85, 605–618.
Lin, D. Y. and Ying, Z. (1994), ‘Semiparametric analysis of the additive risk model’, Biometrika 81, 61–71.
Ma, S. , Kosorok, M. R. and Fine, J. P. (2006), ‘Additive risk models for survival data with high-dimensional covariates’, Biometrics 62, 202–210.
Martinussen, T. (2010), ‘Dynamic path analysis for event time data: large sample properties and inference’, Lifetime Data Anal. 16, 85–101.
Martinussen, T. , Aalen, O. O. and Scheike, T. H. (2008), ‘The Mizon-Richard encompassing test for the Cox and Aalen additive hazards models’, Biometrics 64, 164–171. URL: http://www.jstor.org/stable/25502033
Martinussen, T. and Scheike, T. H. (2002), ‘A flexible additive multiplicative hazard model’, Biometrika 89, 283–298. URL: http://www.jstor.org/stable/4140577
Martinussen, T. and Scheike, T. H. (2006), Dynamic Regression Models for Survival Data, Springer-Verlag, New York.
Martinussen, T. and Scheike, T. H. (2007), ‘Aalen additive hazards change-point model’, Biometrika 94, 861–872.
Martinussen, T. and Scheike, T. H. (2009a), ‘The additive hazards model with high-dimensional regressors’, Lifetime Data Anal. 15, 330–342.
Martinussen, T. and Scheike, T. H. (2009b), ‘Covariate selection for the semiparametric additive risk model’, Scand. J. Statist. 36, 602–619.
Martinussen, T. , Scheike, T. H. and Zucker, D. M. (2011), ‘The Aalen additive gamma frailty hazards model’, Biometrika 98, 831–843. URL: http://biomet.oxfordjournals.org/content/early/2011/10/21/biomet.asr049.abstract
Martinussen, T. and Vansteelandt, S. (2012), A note on collapsibility and confounding bias in Cox and Aalen regression models, Technical report, Department of Biostatistics, University of Copenhagen.
Martinussen, T. , Vansteelandt, S. , Gerster, M. and Hjelmborg, J. V. (2011), ‘Estimation of direct effects for survival data by using the Aalen additive hazards model’, J. Roy. Statist. Soc. Ser. B 73, 773–788. URL: http://dx.doi.org/10.1111/j.1467-9868.2011.00782.x
McKeague, I. W. and Sasieni, P. D. (1994), ‘A partly parametric additive risk model’, Biometrika 81, 501–514.
Miller, R. G. (1976), ‘Least squares regression with censored data’, Biometrika 63, 449–464.
Neocleous, T. , Vanden Branden, K. and Portnoy, S. (2006), ‘Correction to censored regression quantiles by S. Portnoy , 98 (2003), 1001–1012’, J. Amer. Statist. Assoc. 101, 860–861.
Parzen, M. I. , Wei, L. J. and Ying, Z. (1994), ‘A resampling method based on pivotal estimating functions’, Biometrika 81, 341–350.
Pearl, J. (2000), Causality: Models, Reasoning, and Inference, New York: Cambridge University Press.
Peng, L. (2012), ‘A note on self-consistent estimation of censored regression quantiles’, J. Multivariate Analysis 105, 368–379.
Peng, L. and Fine, J. (2009), ‘Competing risks quantile regression’, J. Amer. Statist. Assoc. 104, 1440–1453.
Peng, L. and Huang, Y. (2008), ‘Survival analysis with quantile regression models’, J. Amer. Statist. Assoc. 103, 637–649.
Pipper, C. B. and Martinussen, T. (2004), ‘An estimating equation for parametric shared frailty models with marginal additive hazards’, J. Roy. Statist. Soc. Ser. B 66, 207–220.
Portnoy, S. (2003), ‘Censored regression quantiles’, J. Amer. Statist. Assoc. 98, 1001–1012.
Portnoy, S. and Lin, G. (2010), ‘Asymptotics for censored regression quantiles.’, Journal of Nonparametric Statistics 22, 115–130.
Powell, J. (1984), ‘Least absolute deviations estimation for the censored regression model’, Journal of Econometrics 25, 303–325.
Powell, J. (1986), ‘Censored regression quantiles’, Journal of Econometrics 32, 143–155.
Reid, N. (1994), ‘A conversation with Sir David Cox’, Statistical Science 9, 439–455.
Ritov, Y. (1990), ‘Estimation in a linear regression model with censored data’, Ann. Statist. 18, 303–328.
Robins, J. (1986), ‘A new approach to causal inference in mortality studies with sustained exposure periods - application to control of the healthy worker survivor effect’, Mathematical Modeling 7, 1393–1512.
Robins, J. and Rotnitzky, A. (1992), Recovery of information and adjustment for dependent censoring using surrogate markers, in N. Jewell, K. Dietz and V. Farewell, eds., ‘AIDS Epidemiology: Methodological Issues’, Boston: Birkhäuser, 24–33.
Scheike, T. H. and Zhang, M.J. (2002), ‘An additive-multiplicative Cox-Aalen regression model’, Scand. J. Statist. 29, 75–88. URL: http://www.jstor.org/stable/4616700
Stablein, D. M. and Koutrouvelis, I. A. (1985), ‘A two-sample test sensitive to crossing hazards in uncensored and singly censored data’, Biometrics 41, 643–652. URL: http://www.jstor.org/stable/2531284
Struthers, C. A. and Kalbfleisch, J. D. (1986), ‘Misspecified proportional hazard models’, Biometrika 73, 363–369.
Tsiatis, A. A. (1990), ‘Estimating regression parameters using linear rank tests for censored data’, Ann. Statist. 18, 354–372.
Wang, H. and Wang, L. (2009), ‘Locally weighted censored quantile regression’, J. Amer. Statist. Assoc. 104, 1117–1128.
Yin, G. and Cai, J. (2004), ‘Additive hazards model with multivariate failure time data’, Biometrika 91, 801–818.
Ying, Z. (1993), ‘A large sample study of rank estimation for censored regression data’, Ann. Statist. 21.
Ying, Z. , Jung, S. H. and Wei, L. J. (1995), ‘Survival analysis with median regression models’, J. Amer. Statist. Assoc. 90, 178–184.
Zeng, D. and Lin, D. (2007a), ‘Efficient estimation for the accelerated failure time model’, J. Amer. Statist. Assoc. 102, 1387–1396.
Zeng, D. and Lin, D. (2007b), ‘Maximum likelihood estimation in semiparametric regression models with censored data’, J. Roy. Statist. Soc. Ser. B 69, 507–564.
Zheng, J. (2000), ‘A consistent test of conditional parametric distributions’, Econometric Theory 16, 667–691.