Importance and Sensitivity Analysis in Dynamic Reliability

In dynamic reliability, the evolution of a system is governed by a piecewise deterministic Markov process, which is characterized by different input data. Assuming such data to depend on some parameter p ∈ P, our aim is to compute the first-order derivative with respect to each p ∈ P of some functionals of the process, which may help to rank input data according to their relative importance, in view of sensitivity analysis. The functionals of interest are expected values of some function of the process, cumulated on some finite time interval [0, t], and their asymptotic values per unit time. Typical quantities of interest hence are cumulated (production) availability, or the mean number of failures on some finite time interval, and similar asymptotic quantities. The computation of the first-order derivative with respect to p ∈ P is made through a probabilistic counterpart of the adjoint state method from the numerical analysis field. Examples are provided, showing the good efficiency of this method, especially in the case of a large P.


Introduction
In dynamic reliability, the time-evolution of a system is described by a piecewise deterministic Markov process (PDMP) (I_t, X_t)_{t≥0}, introduced by Davis (1984). The first component I_t is discrete, with values in a finite state space E. Typically, it indicates the state (up/down) of each component of the system at time t. The second component X_t, with values in a Borel set V ⊂ R^d, stands for environmental conditions, such as temperature, pressure, and so on. Both components of the process interact with each other: the process jumps at countably many isolated random times; at a jump from (I_{t−}, X_{t−}) = (i, x) to (I_t, X_t) = (j, y) (with (i, x), (j, y) ∈ E × V), the transition rate between the discrete states i and j depends on the environmental condition x just before the jump, and is a function x ↦ a(i, j, x). Similarly, the environmental condition X_t just after the jump is distributed according to some distribution μ_{(i,j,x)}(dy), which depends on both components (i, x) just before the jump and on the discrete state j just after the jump. Between jumps, the discrete component I_t is constant, whereas the evolution of the environmental condition X_t is deterministic, the solution of a set of differential equations indexed by the fixed discrete state: given that I_t = i for all t ∈ [a, b], we have dX_t/dt = v(i, X_t) for all t ∈ [a, b], where v is a mapping from E × V to V. Contrary to the general model from Davis (1993), we assume here that the process (I_t, X_t)_{t≥0} does not jump when (if ever) it reaches the boundary of V. Now, let (I_t, X_t)_{t≥0} be a PDMP for which the jump rates a(i, j, x), the jump distributions μ_{(i,j,x)}(dy) and the velocity fields v(i, x) are assumed to depend on some family of parameters P ∈ R^k, where k ∈ N can be quite large.
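To make these dynamics concrete, the following sketch simulates a toy PDMP with two discrete states and one scalar environmental variable. The velocity fields and the jump rate below are purely illustrative choices (not taken from any model in the paper), and jumps are drawn by a first-order thinning approximation on a small time step.

```python
import random

def simulate_pdmp(t_end, dt=1e-3, seed=0):
    """Simulate a toy PDMP (I_t, X_t): I in {0, 1}, X in R.

    Between jumps, dX/dt = v(i, X); jumps 0 <-> 1 occur with a
    state-dependent rate a(i, 1 - i, x).  All rates and fields below
    are illustrative assumptions, not the paper's model."""
    rng = random.Random(seed)
    # velocity fields: relax towards 1 in state 0, towards 0 in state 1
    v = lambda i, x: 1.0 - x if i == 0 else -x
    # jump rate increasing with the environmental variable x
    a = lambda i, j, x: 0.5 + x * x
    i, x, t = 0, 0.0, 0.0
    path = [(t, i, x)]
    while t < t_end:
        # probability of a jump during [t, t + dt] (first-order approx.)
        if rng.random() < a(i, 1 - i, x) * dt:
            i = 1 - i   # discrete jump; here mu_{(i,j,x)} = delta_x,
                        # so X is continuous at jump times
        x += v(i, x) * dt   # deterministic flow between jumps
        t += dt
        path.append((t, i, x))
    return path

path = simulate_pdmp(5.0)
```

With these particular fields, the environmental variable remains trapped in [0, 1], which mimics the boundedness assumptions used later in the examples.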
Our aim here is to provide information about the sensitivity, with respect to the elements of P, of expressions of the following form:

R_ρ0(t) = E_ρ0 [ ∫_0^t h(I_s, X_s) ds ],

where ρ_0 is the initial distribution of the process and h is some bounded measurable function which may also depend on p ∈ P. Such expressions include e.g. the cumulative availability or production availability on some interval [0, t], the mean number of failures on [0, t], or the mean time spent by (X_s)_{0≤s≤t} between two given bounds. We are also interested in the sensitivity with respect to p ∈ P of the corresponding asymptotic quantities per unit time, namely in quantities of the form lim_{t→+∞} R_ρ0(t)/t.
This sensitivity analysis can be guided by the knowledge of the first-order derivatives of R_ρ0(t) with respect to p, for p ∈ P. More specifically, for t ≤ +∞ and p ∈ P, we consider the following normalized derivative:

IF_p(t) = (p / R_ρ0(t)) × ∂R_ρ0(t)/∂p,

which we call the importance factor of parameter p in R_ρ0(t). Such a dimensionless expression may help to rank the different parameters p ∈ P according to their relative importance in R_ρ0(t). This kind of sensitivity analysis was already studied by Gandini (1990) and by Cao and Chen (1997) for pure jump Markov processes with countable state space, and extended to PDMPs by Mercier and Roussignol (2008), with a more restrictive model than in the present paper, however.
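A minimal sketch of how such normalized importance factors can be estimated and used for ranking, here by central finite differences on a toy closed-form functional (the function R and its two failure rates below are illustrative stand-ins, not the paper's model):

```python
def importance_factor(R, p, params, t, eps=1e-6):
    """Normalized importance factor IF_p(t) = (p / R) * dR/dp,
    estimated by central finite differences.  R is any function
    R(params, t)."""
    up = dict(params); up[p] += eps
    dn = dict(params); dn[p] -= eps
    dR = (R(up, t) - R(dn, t)) / (2 * eps)
    return params[p] * dR / R(params, t)

# toy functional: mean number of failures on [0, t] for two
# independent exponential units with rates lam1 and lam2
R = lambda q, t: (q["lam1"] + q["lam2"]) * t
params = {"lam1": 0.1, "lam2": 0.3}
ranking = {p: importance_factor(R, p, params, 10.0) for p in params}
```

Since this toy R is homogeneous of degree one in the rates, the two importance factors sum to one, and the larger rate lam2 dominates the ranking.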
The starting point for this study is the Markov property of the process (I_t, X_t)_{t≥0}, which allows us to write the associated Chapman-Kolmogorov equations fulfilled by its marginal distributions; see Davis (1993) or Cocozza-Thivent et al. (2006b). Such equations appear as weak forms of linear first-order hyperbolic equations, see e.g. Eymard et al. (2008). The expressions for the derivatives of interest can then be obtained by solving the dual problem, as suggested by Lions (1968) for a wide class of partial differential equations. The Chapman-Kolmogorov equations and their dual problem are solved here using finite volume methods as provided by Cocozza-Thivent et al. (2006a) or Eymard et al. (2008), which prove to be well suited to this task.
An alternative way to compute the marginal distributions of the PDMP might be to start from the Markov renewal equations they fulfill, as proposed in Chiquet and Limnios (2008) for a specific case of PDMPs, or to use Monte Carlo simulations.
In order to study the asymptotic quantities, one must impose assumptions on the process to ensure its positive Harris recurrence. These assumptions may be proved to hold for Markov processes on a general state space using techniques like those in Down et al. (1995) and Meyn and Tweedie (1993a, b), as is done for both examples at the end of this paper. Alternatively, one may use specific results for piecewise deterministic Markov processes, as provided in Davis (1993) and Dufour (1999, 2003).
The paper is organized as follows: technical assumptions are set up in Section 2. An existence result for the derivatives of the transient quantities is provided in Section 3, and a computable expression is given for them in Section 4, using duality. Asymptotic quantities are studied in Section 5. Numerical procedures are presented in Section 6, using finite volume methods. Two numerical studies close the paper in Sections 7 and 8.

Assumptions
All the paper is written under the following general assumptions:

- for all i, j ∈ E, the function x ↦ a(i, j, x) is continuous and bounded on V;
- for all i, j ∈ E and all continuous and bounded functions f on V, the function x ↦ ∫_V f(y) μ_{(i,j,x)}(dy) is continuous and bounded on V;
- for all i ∈ E, the function x ↦ v(i, x) is Lipschitz continuous on V.

These assumptions guarantee the existence and uniqueness of the solution to the set of differential equations dx/dt = v(i, x(t)), for each fixed i ∈ E.

The jump rates a(i, j, x), the jump distribution μ_{(i,j,x)}, the velocity field v(i, x) and the function h are assumed to depend on some parameter p, where p belongs to an open set O ⊂ R or R^k. All the results are written in the case where O ⊂ R, but the extension to the case O ⊂ R^k is straightforward. We add the exponent (p) to each quantity depending on p, such as h^(p) or R^(p)_ρ0.

Under the above technical assumptions, (I_t, X_t)_{t≥0} is known to be a Markov process with general state space E × V, see Davis (1993) and Cocozza-Thivent et al. (2006b), with a strong Feller transition semi-group. We denote by ρ^(p)_t(j, dy) the distribution of the process at time t with initial distribution ρ_0 (independent of p), and by P^(p)_t the associated transition semi-group. We then have:

R^(p)_ρ0(t) = ∫_0^t ρ^(p)_s h^(p) ds.

In order to prove existence and to calculate derivatives of the functional R^(p)_ρ0, we must give a sense to the derivatives of the transition probability distributions. With that aim, we need the following additional assumptions, denoted by H_1 (resp. H_2): for each p ∈ O, there is some neighborhood N(p) ⊂ O of p such that:

- the function (x, p) ↦ a^(p)(i, j, x) is once (resp. twice) continuously differentiable on V × O, with all partial derivatives uniformly bounded on V × N(p);
- for all functions f^(p)(x) bounded and once (resp. twice) continuously differentiable on V × O, with all partial derivatives uniformly bounded on V × N(p), the function (x, p) ↦ ∫_V f^(p)(y) μ^(p)_{(i,j,x)}(dy) enjoys the same property;
- the function (x, p) ↦ v^(p)(i, x) is once (resp. twice) continuously differentiable on V × O, with all partial derivatives uniformly bounded on V × N(p).

The third point implies (see e.g. Cartan 1967) that, for all i ∈ E, the flow of the differential equation governed by v^(p)(i, ·) is once (resp. twice) continuously differentiable, with all partial derivatives uniformly bounded on V × N(p).
Throughout the paper, under assumptions H 1 or H 2 , for each p in O, we shall refer to a N( p) fulfilling the four points of the assumptions without any further notice.
We may now give a sense to the derivatives of the transition probability distributions, which is done in the next section.

Existence Result
We shall use the infinitesimal generators of both Markov processes (I_t, X_t)_{t≥0} and (I_t, X_t, t)_{t≥0}. For f ∈ D_{H_0}, we define

H_0 f(i, x) = Σ_{j∈E} a(i, j, x) [ ∫_V f(j, y) μ_{(i,j,x)}(dy) − f(i, x) ] + v(i, x) · ∇_x f(i, x),

where ∇_x stands for the gradient with respect to x. We then have, for all f ∈ D_{H_0}:

ρ_t f = ρ_0 f + ∫_0^t ρ_s (H_0 f) ds,

together with the analogous identity for all f ∈ D_H. These are Chapman-Kolmogorov equations. Thanks to these equations and to Theorem 4 in the Appendix, we get the following existence result.

Proof Let p_0 ∈ P and N(p_0) be a neighborhood of p_0. Setting ϕ(s, i, x, p) to be the quantity under study, expressed through the transition semi-group, and using a similar method as in Cocozza-Thivent et al. (2006b), we introduce the transported function φ̃, so that Equation 7 may be written in integral form. Using the notations of Theorem 4, the function φ̃ then satisfies the equation

φ̃(t, i, z) = φ̃_0(i, z) + ∫_0^t F(u, φ̃(u, ·, ·))(i, z) du.
We now check that the assumptions of Theorem 4 are fulfilled, and we set I to be an interval such that [0, T] ⊂ I.
Here F, ∇F and J(F) are as in the Appendix. With such notations, the function φ̃_0 belongs to C^1_b(E, W), and assumptions H_1 imply that, for all s ∈ I, F(s, ·) and ∇F(s, ·) are uniformly Lipschitz continuous; under assumptions H_2, for all s ∈ I, F(s, ·), ∇F(s, ·) and J(F)(s, ·) are uniformly Lipschitz continuous.
All required assumptions are then checked, which provides the result.
Remark 1 Using the explicit form of the transition probabilities (see Cocozza-Thivent et al. 2006b), the derivative of the marginal distribution may be seen as a linear functional acting on test functions (and possibly on their derivatives). Restricting this functional to the set of functions f such that x ↦ f(i, x) is infinitely differentiable with compact support for all i ∈ E, it may then be seen as a distribution. We actually define this functional on the larger space of bounded, continuously differentiable functions with uniformly bounded partial derivatives, and we use the following notation for it. The next corollary is straightforward.
We derive the following theorem, in which the quantity introduced below is continuously differentiable only for almost all (a.a.) x ∈ V, where a.a. means with respect to the Lebesgue measure.
Our purpose now is to compute this derivative. The marginal distribution ρ^(p)_u(j, dy) may be estimated by different methods, as indicated in the introduction. However, we do not know how to compute directly the derivative of the marginal distribution which appears in the above expression. We now transform it in order to make it easier to compute.

Results in the Transient Case
Let us first define the notion of importance functions.

Definition 2 We say that a function ϕ^(p)_t ∈ D_H is the importance function associated to the function h^(p) and to t if it satisfies H^(p) ϕ^(p)_t = −h^(p) on [0, t], with terminal condition ϕ^(p)_t(·, ·, t) = 0.

Existence and uniqueness of the importance function are obtained as follows.

Proof Let ϕ^(p) ∈ D_H (possibly depending on p), and let φ̃^(p) be its transported version along the flow. The function φ̃^(p) is bounded, continuously differentiable on V, with partial derivative with respect to time given by the transported equation. The functions ϕ^(p) and φ̃^(p) are in one-to-one correspondence. Using Eqs. 10-11, the problem now reduces to showing existence, uniqueness and regularity of φ̃^(p); Eq. 13 may be written in the integral form required by Theorem 4, which then provides the result in the same way as for Proposition 1.
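To make the role of the importance function concrete, the sketch below computes it by backward time stepping for a finite-state continuous-time Markov chain, which replaces the PDMP only to keep the example small; the generator Q, the rates and the reward function h are illustrative assumptions.

```python
import numpy as np

def importance_function(Q, h, t, n_steps=4000):
    """Backward-in-time sketch of the importance function for a
    finite-state chain with generator Q:
        phi(s) = E[ integral_s^t h(state_u) du | state at time s ],
    which solves d phi/ds + Q phi + h = 0 with phi(t) = 0."""
    dt = t / n_steps
    phi = np.zeros_like(h, dtype=float)   # terminal condition phi(t) = 0
    for _ in range(n_steps):              # explicit backward Euler steps
        phi = phi + dt * (Q @ phi + h)
    return phi

# two-state repairable unit: failure rate 0.2, repair rate 1.0;
# h = indicator of the up state, so phi is the expected up time
Q = np.array([[-0.2, 0.2], [1.0, -1.0]])
h = np.array([1.0, 0.0])
phi0 = importance_function(Q, h, t=10.0)  # expected up time on [0, 10]
```

Starting from the up state the expected up time on [0, 10] is slightly above the stationary value 10/1.2, and starting from the down state slightly below, as the backward sweep recovers.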
The next lemma provides a duality result based on the definition of ϕ^(p)_t, which transports the differentiation with respect to p from the marginal distribution ρ^(p)_s to the operator H^(p).

Lemma 1 Under assumptions H_2, the following duality identity holds, where we use the terminal condition ϕ^(p)_t(·, ·, t) = 0.

Proof By differentiating the defining relation of ϕ^(p)_t with respect to p, and applying the Chapman-Kolmogorov Eq. 6 to ∂ϕ^(p)_t/∂p, we derive the announced identity by substituting the latter outcome in Eq. 14.
We easily derive the following result, using the definition of R^(p)_ρ0, where ϕ^(p)_t is the importance function associated to (h^(p), t).
Remark 3 In the case where R_ρ0(t) depends on a family of parameters P, the computation of all the derivatives ∂R^(P)_ρ0/∂p(t) for p ∈ P consequently requires only one single computation of both ρ^(P)_s and ϕ^(P)_t, plus simple summations on [0, t] for each value of p. This is to be compared with the usual finite differences method, which requires one first evaluation of ρ^(P)_s and another evaluation of ρ^(P_{p,ε})_s for each value of p ∈ P and some ε > 0, where P_{p,ε} stands for the family P in which parameter p has been changed into p + ε. In the case of a large P, the present method, which only requires two effective computations, consequently appears cheaper than finite differences.
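The cost argument of this remark can be illustrated on a finite-state chain standing in for the PDMP: one forward sweep for the marginals and one backward sweep for the importance function deliver the derivatives with respect to all parameters at once, through a Gandini-type identity dR/dp = ∫_0^t ρ_u (∂Q/∂p) ϕ_u du. The two-state model and its rates below are illustrative assumptions.

```python
import numpy as np

def adjoint_derivatives(lam, mu, h, t, n=4000):
    """One forward solve (rho) and one backward solve (phi) yield
    dR/dp for every parameter p at once.  Toy 2-state repairable
    unit: failure rate lam, repair rate mu; R cumulates rho_s h."""
    dt = t / n
    Q = np.array([[-lam, lam], [mu, -mu]])
    dQ = {"lam": np.array([[-1.0, 1.0], [0.0, 0.0]]),
          "mu": np.array([[0.0, 0.0], [1.0, -1.0]])}
    # forward sweep: marginal distributions rho_u (row vectors)
    rho = np.zeros((n + 1, 2)); rho[0] = [1.0, 0.0]
    for k in range(n):
        rho[k + 1] = rho[k] + dt * rho[k] @ Q
    # backward sweep: importance function phi_u, with phi_t = 0
    phi = np.zeros((n + 1, 2))
    for k in range(n - 1, -1, -1):
        phi[k] = phi[k + 1] + dt * (Q @ phi[k + 1] + h)
    R = dt * float(sum(rho[k] @ h for k in range(n)))
    grads = {p: dt * float(sum(rho[k] @ dQ[p] @ phi[k] for k in range(n)))
             for p in dQ}
    return R, grads

R, grads = adjoint_derivatives(0.2, 1.0, np.array([1.0, 0.0]), 10.0)
```

Adding a parameter costs only one more entry in dQ and one more summation, whereas finite differences would require a whole new forward solve per parameter.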
In applications, the importance function will generally be computed numerically. An analytical form is however available, which is also useful for the asymptotic study: let ϕ^(p)_t be the function defined by Eq. 16. It is clear that ϕ^(p)_t is the importance function associated to (h^(p), t), due to the Chapman-Kolmogorov Eq. 3, which completes the proof.
We easily derive the following corollary from Theorem 2 and the previous lemma.

Corollary 2
Under assumptions H_2, we have the expression given by Eq. 17. Equation 17 is an extension of the results of Gandini (1990) for pure jump Markov processes with countable state space.

Asymptotic Results
We are now interested in asymptotic results, and we need to assume the process (I_t, X_t)_{t≥0} to be uniformly ergodic, according to the following assumptions H_3:

- the process (I_t, X_t)_{t≥0} is positive Harris recurrent, with π^(p) as its unique stationary distribution;
- for each p ∈ O, there exists a function f^(p) controlling the speed of convergence of the transition semi-group towards π^(p) (Eq. 18), uniformly for all (i, x) ∈ E × V and all u ≥ 0.
As already noted in the introduction, such assumptions may be proved to hold by using results for Markov processes on general state spaces, or specific results for piecewise deterministic Markov processes (see the introduction for references).
In order to avoid overly technical assumptions that would be difficult to check in practice, we restrict our asymptotic study to the special case where only the jump rates a^(p)(i, j, x) and the function h^(p)(i, x) depend on the parameter p. The quantities μ_{(i,j,x)} and v(i, x) are consequently assumed to be independent of p. Assumptions H_2 are then replaced by their weakened version, in which the conditions on μ_{(i,j,x)} and on v(i, x) are removed.
We now transform Eq. 17 in view of studying its asymptotic expression.

Lemma 3
Under assumptions H_2 and H_3, the decomposition below holds. Proof The first term is clear. Besides, setting 1 to be the constant function equal to 1, the second term may be transformed accordingly, whence the result.
We may now prove existence of, and provide an asymptotic expression for, lim_{t→+∞} (1/t) ∂R^(p)_ρ0/∂p(t).

Theorem 3 Let us assume that μ_{(i,j,x)} and v(i, x) are independent of p, and that H_2 and H_3 hold. Then the limit exists and is given by Eq. 21, where we set

Uϕ^(p)(i, x) = ∫_0^{+∞} (P_u ϕ^(p) − π^(p) ϕ^(p))(i, x) du

for each measurable and bounded ϕ^(p). The first term on the right-hand side of Eq. 20 consequently converges to the first term in Eq. 21. For the second term, the function (∂H^(p)/∂p) Uh^(p) is bounded and independent of time by assumption, and consequently the corresponding time average converges. It then remains to prove that the remainder term vanishes, which follows from the uniform ergodicity assumption H_3. This ends the proof.
The previous theorem provides an extension of the results given in Cao and Chen (1997) for pure jump Markov processes with countable state space.
The next proposition gives a tool to compute the function Uh^(p).
Proposition 3 Let us assume that μ_{(i,j,x)} and v(i, x) are independent of p, and that H_2 and H_3 hold. Then the function Uh^(p) is the unique solution of the following ordinary differential equation (Eq. 23):

H^(p)_0 Uh^(p) = π^(p) h^(p) − h^(p), with π^(p) Uh^(p) = 0.

Proof Under assumptions H_3, we clearly have π^(p) Uh^(p) = 0 by Fubini's theorem. We next check that Uh^(p) is a solution of Eq. 23: under H_3, the defining integral may be differentiated term by term, which shows that Uh^(p) solves Eq. 23. Now, let ϕ be the difference between two solutions ϕ_1, ϕ_2 of Eq. 23 such that π^(p) ϕ_1 = π^(p) ϕ_2 = 0. Then H^(p)_0 ϕ = 0, so that (see e.g. Davis 1993, and Eq. 3) P_t ϕ = ϕ for all t ≥ 0. Whence the result, letting t → +∞ and using π^(p) ϕ = π^(p) ϕ_1 − π^(p) ϕ_2 = 0.
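For a finite-state chain standing in for the PDMP, the equation of Proposition 3 is a Poisson equation that a direct linear solve handles; the two-state generator and reward below are illustrative assumptions.

```python
import numpy as np

def potential(Q, h):
    """Potential function U h of Proposition 3, for an ergodic
    finite-state generator Q: solve Q u = pi(h) 1 - h with pi u = 0."""
    n = Q.shape[0]
    # stationary distribution: pi Q = 0, sum(pi) = 1
    A = np.vstack([Q.T, np.ones(n)])
    pi = np.linalg.lstsq(A, np.r_[np.zeros(n), 1.0], rcond=None)[0]
    # Poisson equation with the centering constraint pi . u = 0
    B = np.vstack([Q, pi])
    rhs = np.r_[(pi @ h) * np.ones(n) - h, 0.0]
    u = np.linalg.lstsq(B, rhs, rcond=None)[0]
    return pi, u

# illustrative 2-state repairable unit and up-state reward
Q = np.array([[-0.2, 0.2], [1.0, -1.0]])
h = np.array([1.0, 0.0])
pi, u = potential(Q, h)
```

The least-squares solves are exact here because both augmented systems are consistent; for the chain above, pi = (5/6, 1/6) and u = (5/36, -25/36).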
Remark 4 Just as for the transient results, in the case where the data depend on some family of parameters P, the evaluation of all the limits lim_{t→+∞} (1/t) ∂R^(P)_ρ0/∂p(t) only requires one single computation of both π^(P) and Uh^(P), which is cheaper than finite differences methods in the case of a large P.

Numerical Procedure
In this section, we propose a method for the numerical evaluation of ∂R^(p)_ρ0/∂p(t) and of its asymptotic rate per unit time as t → +∞, based on the implicit finite volume scheme given in Eymard et al. (2008). These methods are obtained by translating the continuous procedure described above into the discrete setting of finite volume methods, and are provided without proof.

Principle of the Numerical Scheme in the Transient Case
Here, we consider the general case where any quantity (except ρ_0) may depend on p. We consider a mesh (or partition) M of V ⊂ R^d which satisfies some regularity hypotheses (details are given in Eymard et al. 2008), and some time step δt > 0. Discretizing the Chapman-Kolmogorov Eq. 4 and starting from ρ^(0) = ρ_0, the scheme amounts to computing the family of real values ρ^(n)_{i,K}, (i, K, n) ∈ E × M × N, where ρ^(n)_{i,K} approximates the quantity ∫_K ρ^(p)_{nδt}(i, dx). Introducing an infinite matrix A = (A_{(i,K)(j,L)})_{(i,K),(j,L)∈E×M}, the scheme can be written as a linear system linking ρ^(n+1) to ρ^(n) for all (i, K, n) ∈ E × M × N (see the quoted reference for the detailed coefficients), or equivalently in matrix form for all n ∈ N, setting ρ^(n) := (ρ^(n)_{i,K})_{(i,K)∈E×M}, where m(K) stands for the d-dimensional Lebesgue measure of K.
In order to mimic the procedure used to evaluate ∂/∂p R^(p)_ρ0(t) in Eq. 15, we define a discrete version H̄ of H, acting on all bounded families θ = (θ^(n)_{i,K})_{(i,K,n)∈E×M×N}. We next introduce a discrete approximation of the importance functions ϕ_t as ϕ̄ = (ϕ̄^(n)_{i,K})_{(i,K,n)∈E×M×N}, the single bounded family solution of Eq. 28. The mathematical study of the well-posedness of Eq. 27 and of the resolution of Eq. 28 might be carried out under classical hypotheses.
Following the same calculation steps as in the continuous case, we now get the discrete counterpart of Eq. 15, with weights defined for all (i, K, n) ∈ E × M × N. In the case where the data depend on a family P of parameters, the numerical computation of ∂/∂p R^(P)_M for all p ∈ P hence requires solving the two implicit finite volume schemes provided by Eqs. 24 and 28, which are independent of the choice of p in P (see Remark 3). Due to the definition of H̄ (see Eq. 27), such schemes appear as dual schemes, which clearly simplifies their implementation. Also, the different summations required for each p ∈ P for the estimation of Eq. 29 are made simultaneously while solving the schemes, which saves CPU time as well as memory.
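The time-stepping part of such an implicit scheme can be sketched on a finite-state chain replacing the mesh-indexed unknowns of the paper; each step solves a small linear system instead of doing an explicit update, which is what keeps implicit schemes stable for large time steps. The generator and reward below are illustrative assumptions.

```python
import numpy as np

def implicit_scheme(Q, rho0, h, t, n=200):
    """Implicit (backward) Euler stepping, the time analogue of the
    implicit finite volume scheme: each step solves
        rho_{k+1} (I - dt Q) = rho_k,
    and R cumulates rho_k h over [0, t]."""
    dt = t / n
    M = np.eye(Q.shape[0]) - dt * Q
    rho, R = rho0.astype(float), 0.0
    for _ in range(n):
        rho = np.linalg.solve(M.T, rho)   # rho_{k+1} = rho_k (I - dt Q)^{-1}
        R += dt * rho @ h                  # cumulate rho_s h on [0, t]
    return rho, R

# illustrative 2-state repairable unit, started in the up state
Q = np.array([[-0.2, 0.2], [1.0, -1.0]])
rho, R = implicit_scheme(Q, np.array([1.0, 0.0]), np.array([1.0, 0.0]), 10.0)
```

Because Q has zero row sums, the implicit step conserves total probability exactly, and the iterates converge to the stationary distribution as t grows.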

Asymptotic Numerical Procedure
Following the discretization technique introduced in the transient case, a discrete solution of the asymptotic problem is a family of real values π̄ = (π̄_{i,K})_{(i,K)∈E×M}, such that π̄_{i,K} is an approximation of the quantity ∫_K π^(p)(i, dx). Using again the infinite matrix A = (A_{(i,K)(j,L)})_{(i,K),(j,L)∈E×M}, the asymptotic scheme can then be written as a linear system for π̄. Equivalently, the scheme can be expressed using the constant family 1, with the required dimension and generic term equal to 1, and the dot product x · y = Σ_{(i,K)∈E×M} x_{i,K} y_{i,K} for all x = (x_{i,K})_{(i,K)∈E×M} and y = (y_{i,K})_{(i,K)∈E×M}. This scheme is assumed to have a unique solution, which yields an approximation R̄ of the asymptotic rate. In the same way as in the transient case, we mimic the continuous procedure and consider the discrete version H̄_0 of H_0. We next introduce a discrete version Ū of the potential function Uh^(p), solution to a linear system involving H̄_0, which is assumed to have a unique solution. We finally derive the discrete approximation of the asymptotic derivative. The same remarks as in the transient case remain valid here: in the case where the data depend on a family P of parameters, the numerical computation of all the derivatives ∂R̄^(p)_π/∂p for p ∈ P requires solving exactly two dual volume schemes, provided by Eqs. 32 and 33.

Presentation-Theoretical Results
A single component is considered, which is perfectly and instantaneously repaired at each failure. The distribution of the life duration T_1 of the component is absolutely continuous with respect to the Lebesgue measure, with E(T_1) > 0. The successive life durations form a renewal process. The time evolution of the component is described by the process (X_t)_{t≥0}, where X_t stands for the time elapsed at time t since the last instantaneous repair (the backward recurrence time). There is one single discrete state, so that the component I_t is unnecessary here. The failure rate of the component at time t is λ(X_t), where λ(·) is some nonnegative, continuous and bounded function. The process (X_t)_{t≥0} is "renewed" after each repair, so that μ_(x)(dy) = δ_0(dy), and the evolution of (X_t)_{t≥0} between renewals is given by g(x, t) = x + t.
We are interested in the rate of renewals on [0, t], namely in the quantity Q(t) = R(t)/t, where R(t) is the renewal function associated to the underlying renewal process. The function λ(x) is assumed to depend on some parameter p > 0 and to satisfy the requirements of H_2. (Here, both μ_(x)(dy) and v(x) are independent of p.)
Assuming E(T^(p)_1) < +∞, the process is known to have a unique stationary distribution π^(p), with the following probability density function (p.d.f.):

π^(p)(x) = F̄^(p)(x) / E(T^(p)_1), for x ≥ 0, where F̄^(p)(x) = P(T^(p)_1 > x).

Besides,

lim_{t→+∞} Q^(p)(t) = lim_{t→+∞} R^(p)(t)/t = 1/E(T^(p)_1),

which is a direct consequence of the key renewal theorem. We now provide conditions which ensure the process to be uniformly ergodic (H_3).

Proposition 4 Let us assume that E(e^{δT_1}) < +∞ for some 0 < δ < 1, and that T_1 is new better than used (NBU), namely such that for all x, t ≥ 0 we have F̄(x + t) ≤ F̄(x)F̄(t), where F̄ is the survival function F̄(t) = P(T_1 > t). Then there are some C < +∞ and 0 < ρ < 1 such that the convergence towards π^(p) is geometric, with rate ρ and constant C.

Proof We use the following result by Konstantopoulos and Last (1999): assume that E(e^{δT_1}) < +∞ for some δ > 0, and let η > 0 be such that η < δ or η ≤ δ ∧ 1. Then there are positive constants C_1 < +∞ and ρ < 1 such that the corresponding geometric bound holds. Now, under the assumption E(e^{δT_1}) < +∞ for some 0 < δ < 1, let us choose η = δ.
As T_1 is NBU, we know that F̄(x + t)/F̄(x) ≤ F̄(t) for all x, t ≥ 0. Setting f to be the p.d.f. of T_1, we derive the required bound using Fubini's theorem. From the quoted result, we now derive the existence of C_2 < +∞ and ρ < 1 such that the announced geometric bound holds, which easily provides the result since λ^(p) is bounded.
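The convergence Q^(p)(t) = R^(p)(t)/t → 1/E(T^(p)_1) invoked above can be checked numerically by discretizing the renewal equation R(t) = F(t) + ∫_0^t R(t − s) dF(s) on a grid; the Weibull life length below (shape 2, scale 1) is an illustrative choice, not the modified distribution used later in the paper.

```python
import math

def renewal_function(cdf, t_end, n=1000):
    """Grid discretization of the renewal equation
    R(t) = F(t) + int_0^t R(t - s) dF(s)."""
    dt = t_end / n
    F = [cdf(k * dt) for k in range(n + 1)]
    R = [0.0] * (n + 1)
    for k in range(1, n + 1):
        # dF mass over ((j-1) dt, j dt] weights the value R(t - j dt)
        conv = sum(R[k - j] * (F[j] - F[j - 1]) for j in range(1, k + 1))
        R[k] = F[k] + conv
    return R, dt

# Weibull life length with shape beta = 2, scale alpha = 1;
# E(T_1) = Gamma(1.5) = sqrt(pi)/2, so R(t)/t -> about 1.128
beta, alpha = 2.0, 1.0
cdf = lambda x: 1.0 - math.exp(-((x / alpha) ** beta))
R, dt = renewal_function(cdf, 10.0)
rate = R[-1] / 10.0   # approximates 1 / E(T_1) for large t
```

The renewal function is non-decreasing by construction, and the empirical rate already sits close to the key-renewal-theorem limit at t = 10.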

Numerical Results
We assume that T_1 is distributed according to some Weibull distribution, slightly modified to meet our assumptions: the failure rate is the usual Weibull hazard rate λ^(α,β)(x) for x ≥ x_0 (where x_0 > 0 is small), and equals P_{α,β,x_0}(x) on [0, x_0], where P_{α,β,x_0}(x) is some smoothing function which makes x ↦ λ^(α,β)(x) continuous and non-decreasing on R_+. For such a failure rate, it is easy to check that assumptions H_2 and H_3 are true, using Proposition 4.
For comparison purposes, we also compute such quantities by finite differences (FD), using

(Q^(p+ε)(t) − Q^(p)(t))/ε

for some small ε. For the transient FD quantities, we use the algorithm from Mercier (2007), which provides an estimate for the renewal function R^(p)(t) and hence for Q^(p)(t) = R^(p)(t)/t. The asymptotic results are gathered in Table 1.
The comparison between EMR and FD results for small ε clearly validates the method. The results for IF_β(∞) by FD are good and very stable when choosing different values of ε. The approximation of IF_α(∞) by FD, however, requires a smaller ε to provide correct results.
The transient results are next plotted in Figs. 1 and 2 for t ∈ [0, 50] and different values of ε. We observe that FD provides results similar to EMR for IF_β(t), but requires a smaller ε for IF_α(t) to reach similar results as EMR, just as in the asymptotic case. Finally, comparing IF_α(t) and IF_β(t) for t ≤ ∞, we may note that, for a Weibull distribution, the shape parameter β is much more influential on the rate of renewals than the scale parameter α.

Presentation-Theoretical Results
The following example is very similar to that from Boxma et al. (2005). The main difference is that we here assume X t to remain bounded (X t ∈ [0, R]) whereas X t takes its values in R + in the quoted paper.
A tank is considered, which may be filled in or emptied out using a pump. This pump may be in two different states: "in" (state 0) or "out" (state 1). The level of liquid in the tank goes from 0 up to R. The state of the system "pump-tank" at time t is (I_t, X_t), where I_t is the discrete state of the pump (I_t ∈ {0, 1}) and X_t is the continuous level in the tank (X_t ∈ [0, R]). The transition rate from state 0 (resp. 1) to state 1 (resp. 0) at time t is λ_0(X_t) (resp. λ_1(X_t)). The speed of variation of the liquid level in state 0 is v_0(x) = r_0(x), with r_0(x) > 0 for all x ∈ [0, R[ and r_0(R) = 0: the level increases in state 0 and tends towards R. Similarly, the speed in state 1 is v_1(x) = −r_1(x), with r_1(x) > 0 for all x ∈ ]0, R] and r_1(0) = 0: the level of liquid decreases in state 1 and tends towards 0. For i = 0, 1, the function λ_i (respectively r_i) is assumed to be continuous (respectively Lipschitz continuous), and consequently bounded on [0, R]. The level in the tank is continuous at jumps, so that μ_{(i,j,x)}(dy) = δ_x(dy). In order to ensure uniform ergodicity of the process (H_3), we make additional assumptions (36-38), where, for i ∈ E = {0, 1} and x, y ∈ [0, R], τ^(i)_{x,y} is the deterministic time needed to reach y following the curve (t, g(i, x, t)), for all x_0 ∈ [0, R[ and all y_0 ∈ ]0, R]. This ensures the first jump time T_1 to be finite almost surely. We get the following result:

Proposition 5 Under assumptions (36-38), the process (I_t, X_t)_{t≥0} is positive Harris recurrent, with single invariant distribution π given by Eqs. 39-40 for all x ∈ ]0, R[, where K_π > 0 is a normalization constant.
Remark 5 Though such results are very similar to a special case from Boxma et al. (2005), we prefer to give a quick proof here, due to a few differences in the results, such as possible point masses of π at the bounds of the interval in the quoted paper.
Proof Under our assumptions, one can first prove that the process (I_t, X_t)_{t≥0}, with values in F := {0, 1} × [0, R], is ϕ-irreducible (see Meyn and Tweedie 1993b), with maximal irreducibility measure ϕ = c_{0,1} × l, where c_{0,1} is the counting measure on {0, 1} and l is the Lebesgue measure on [0, R]. Besides, the process (I_t, X_t)_{t≥0} is non-evanescent, since it takes values in a compact set. One can also prove that it is a T-process, namely such that there are some probability measure a(dt) and some kernel T such that, for all i, j ∈ E, all x ∈ [0, R] and all Borel sets A ⊂ [0, R], the corresponding lower bound holds. Indeed, let t_0 > 0 be fixed. For any Borel set A ⊂ [0, R] with interior Å, the function x ↦ T((i, x), (j, A)) may be chosen lower semi-continuous because Å is an open set, so that (I_t, X_t)_{t≥0} is a T-process. Using Theorem 3.3 from Meyn and Tweedie (1993b), the process (I_t, X_t)_{t≥0} now is Harris recurrent, and admits a unique invariant measure π up to some multiplicative constant. Besides, π and ϕ = c_{0,1} × l are mutually absolutely continuous (see Down et al. 1995), and there are some positive measurable functions f_0, f_1 such that π(i, dx) = f_i(x) dx for i = 0, 1. Using the fact that π(·, dx) satisfies the stationarity equation associated with H^(p)_0, one easily derives a system of ordinary differential equations linking f_0 and f_1, for i = 0, 1. Solving this system of o.d.e. provides (f_0, f_1) of the form of Eqs. 39-40. Checking that, under our assumptions, ∫_0^R f_i(x) dx < ∞ for i = 0, 1, we derive that π is a finite measure, which can then be normalized in a single way into a probability measure. Consequently, (I_t, X_t)_{t≥0} is a positive Harris recurrent process, which ends the proof.
We now prove that the process (I_t, X_t)_{t≥0} is uniformly ergodic: under assumptions (36-38), assumption H_3 is true, namely there is some function f fulfilling Eq. 18.
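The tank dynamics described above can also be explored by direct simulation, which is a useful cross-check on the invariant density. The switching rates and level speeds below are illustrative choices satisfying r_0(R) = r_1(0) = 0, not the paper's numerical model.

```python
import random

def simulate_tank(t_end, R=1.0, dt=1e-3, seed=1):
    """Monte Carlo sketch of the pump-tank PDMP.  Illustrative data:
    filling speed r0(x) = R - x, emptying speed r1(x) = x,
    switching rates lambda0(x) = 1 + x and lambda1(x) = 2 - x."""
    rng = random.Random(seed)
    lam = (lambda x: 1.0 + x, lambda x: 2.0 - x)   # switching rates
    r = (lambda x: R - x, lambda x: -x)            # signed level speeds
    i, x, t, levels = 0, R / 2.0, 0.0, []
    while t < t_end:
        if rng.random() < lam[i](x) * dt:  # pump switches state
            i = 1 - i                       # level continuous at switches
        x = min(R, max(0.0, x + r[i](x) * dt))
        t += dt
        levels.append(x)
    return levels

levels = simulate_tank(50.0)
```

A histogram of the recorded levels then approximates the invariant density (f_0 + f_1 in the notation of Proposition 5), and the trajectory never leaves [0, R], as required.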

Numerical Example
We assume that the system is initially in the state (I_0, X_0) = (0, R/2). Besides, we take λ_i and r_i of parametric form for x ∈ [0, R], with α_i > 1 and ρ_i > 1. All conditions for irreducibility are achieved here. Our aim is to compute the importance factors with respect to p, for p ∈ {α_0, α_1, ρ_0, ρ_1, a, b}, both in Q_1(t) and Q_2(t), except for the parameters a and b, which intervene only in Q_1(t).
We take the numerical values given below. As for the first example, we test our results using finite differences (FD). For FD, the transient results are computed via the finite volume scheme from Eymard et al. (2008), and the asymptotic results via Eqs. 39-40. The results here are rather stable when choosing different values of ε; they are provided for ε = 10^{−2} in the case p ∈ {α_0, α_1, ρ_0, ρ_1} and for ε = 10^{−3} in the case p ∈ {a, b}. EMR results are computed using the finite volume methods from Section 6. The asymptotic results are given in Tables 2 and 3, and the transient ones are given in Tables 4 and 5 for t = 2.
The results by FD and EMR are very similar, both for the asymptotic and the transient quantities, which here again validates the method. Besides, we may observe that the asymptotic results coincide for both methods, even in the case where the velocity field v(i, x) depends on the parameter (here ρ_i), which does not fit with our technical assumptions from Section 5, however. Due to that (and to other examples where the same remark is valid), one may conjecture that the results from Section 5 are valid under less restrictive assumptions than those given in that section.
In the long-time run, the importance factors of α_0 and α_1 in Q_i(t) (i = 1, 2) are comparable, which conforms with intuition. The same remark is valid for ρ_0 and ρ_1, as well as for a and b.
Finally, the parameters ρ_0 and ρ_1 are more important than the parameters α_0 and α_1 in Q_1(t), contrary to what happens in Q_2(t). This seems consistent with the fact that the quantity Q_1(t) is linked to the level in the tank, and consequently to its evolution, which is controlled by ρ_0 and ρ_1, whereas the quantity Q_2(t) is linked to the transition rates, and consequently to α_0 and α_1.