Order Statistics Based Measure of Past Entropy


1 INTRODUCTION

Shannon entropy (1948) plays a central role in the field of information theory. For a non-negative continuous random variable X with distribution function F(·) and p.d.f. f(·), it is given by

H(X) = -\int_0^\infty f(x)\log f(x)\,dx. \quad (1)

Here H(X) measures the average uncertainty associated with the random variable X. If X is the lifetime of a system and the system has survived up to time t, then the measure (1) is not appropriate for measuring the uncertainty about its remaining lifetime. Ebrahimi (1996) proposed a measure of the uncertainty about the remaining lifetime of a system that is working at time t, given by

H(X; t) = -\int_t^\infty \frac{f(x)}{\bar F(t)}\log\frac{f(x)}{\bar F(t)}\,dx. \quad (2)

The measure (2) is called the measure of residual entropy. Obviously, when t = 0, it reduces to (1). Further, by writing \lambda_F(t) = f(t)/\bar F(t), (2) can be rewritten as

H(X; t) = 1 - \frac{1}{\bar F(t)}\int_t^\infty f(x)\log\lambda_F(x)\,dx, \quad (3)

where \lambda_F(t) is the hazard rate function and \bar F(t) = 1 - F(t) is the survival function of X.
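The residual entropy (2) is easy to approximate numerically. The sketch below is a minimal pure-Python illustration (the function name `residual_entropy` and the truncation point `upper` are our own choices, not from the paper); it uses a midpoint Riemann sum and checks the memorylessness of the exponential distribution, for which H(X; t) = 1 - log λ for every t.

```python
import math

def residual_entropy(f, sf, t, upper, steps=200_000):
    # H(X; t) = -∫_t^∞ (f(x)/S(t)) log(f(x)/S(t)) dx, with S the survival
    # function, approximated by a midpoint Riemann sum on [t, upper].
    st = sf(t)
    h = (upper - t) / steps
    total = 0.0
    for k in range(steps):
        x = t + (k + 0.5) * h
        g = f(x) / st
        if g > 0:
            total -= g * math.log(g) * h
    return total

lam = 2.0
f = lambda x: lam * math.exp(-lam * x)   # Exp(lambda) density
sf = lambda x: math.exp(-lam * x)        # survival function

# Memorylessness: the residual entropy of Exp(lambda) is 1 - log(lambda)
# regardless of t.
for t in (0.0, 0.5, 3.0):
    print(round(residual_entropy(f, sf, t, upper=t + 20.0), 4))
```

The truncation at t + 20 is harmless here because the exponential tail beyond that point is negligible.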
In many realistic situations uncertainty is not necessarily related to the future but can also refer to the past. For example, if a system which is monitored only at certain preassigned inspection times is found to be down at time t, then the uncertainty about the system depends on the past, that is, on the instant in (0, t) at which it failed. Di Crescenzo and Longobardi (2002) studied this case and introduced a measure of past entropy over (0, t) given by

\bar H(X; t) = -\int_0^t \frac{f(x)}{F(t)}\log\frac{f(x)}{F(t)}\,dx, \quad (4)

which tends to (1) when t → ∞. They also discussed properties of past entropy and its relationship with residual entropy. By writing \tau_F(t) = f(t)/F(t), (4) can be rewritten as

\bar H(X; t) = 1 - \frac{1}{F(t)}\int_0^t f(x)\log\tau_F(x)\,dx, \quad (5)

where \tau_F(t) is the reversed hazard rate of X.
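The equivalence of the defining form (4) and the reversed-hazard form of the past entropy can be verified numerically. The sketch below (helper names `past_entropy` and `past_entropy_tau` are illustrative, and the midpoint-rule resolution is arbitrary) evaluates both forms for an exponential random variable; it also shows the past entropy approaching the Shannon entropy 1 - log λ as t grows.

```python
import math

def past_entropy(f, cdf, t, steps=200_000):
    # Defining form: -∫_0^t (f(x)/F(t)) log(f(x)/F(t)) dx, midpoint rule.
    ft = cdf(t)
    h = t / steps
    total = 0.0
    for k in range(steps):
        x = (k + 0.5) * h
        g = f(x) / ft
        if g > 0:
            total -= g * math.log(g) * h
    return total

def past_entropy_tau(f, cdf, t, steps=200_000):
    # Equivalent form: 1 - (1/F(t)) ∫_0^t f(x) log(tau_F(x)) dx,
    # with reversed hazard rate tau_F(x) = f(x)/F(x).
    ft = cdf(t)
    h = t / steps
    s = 0.0
    for k in range(steps):
        x = (k + 0.5) * h
        s += f(x) * math.log(f(x) / cdf(x)) * h
    return 1.0 - s / ft

lam = 1.0
f = lambda x: lam * math.exp(-lam * x)
cdf = lambda x: 1.0 - math.exp(-lam * x)

print(round(past_entropy(f, cdf, 2.0), 4), round(past_entropy_tau(f, cdf, 2.0), 4))
print(round(past_entropy(f, cdf, 30.0), 4))   # approaches 1 - log(lam) = 1
```

The integrand of the second form has a mild logarithmic singularity at 0, which the midpoint rule handles without special treatment.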
Suppose that X_1, X_2, ..., X_n are independent and identically distributed observations from a cdf F(x) and pdf f(x). The order statistics of the sample are defined by the arrangement of X_1, X_2, ..., X_n from the smallest to the largest, denoted by X_{1:n} ≤ X_{2:n} ≤ ... ≤ X_{n:n}. Order statistics have been used in a wide range of problems such as detection of outliers, characterization of probability distributions, quality control and strength of materials; for more details see Arnold et al. (1992) and David and Nagaraja (2003). Several authors have studied the information theoretic properties of ordered data. Wong and Chen (1990) showed that the difference between the average entropy of order statistics and the entropy of the parent distribution is a constant. Park (1995) obtained some recurrence relations for the entropy of order statistics. Ebrahimi et al. (2004) explored some properties of the Shannon entropy of order statistics and showed that the Kullback-Leibler (1959) relative information measure involving order statistics is distribution free. Arghami and Abbasnejad (2011) studied Renyi entropy (1961) based on order statistics.
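Wong and Chen's (1990) observation can be illustrated numerically: the average entropy of the n order statistics minus the entropy of the parent distribution should not depend on the parent. The sketch below (all helper names are our own; each entropy is a midpoint Riemann sum, and the exponential is truncated at x = 40) compares a Uniform(0, 2) parent with a standard exponential parent for n = 3.

```python
import math

def entropy(pdf, a, b, steps=100_000):
    # -∫_a^b p(x) log p(x) dx via a midpoint Riemann sum.
    h = (b - a) / steps
    total = 0.0
    for k in range(steps):
        p = pdf(a + (k + 0.5) * h)
        if p > 0:
            total -= p * math.log(p) * h
    return total

def order_pdf(i, n, F, f):
    # pdf of the i-th order statistic: F^{i-1} (1-F)^{n-i} f / B(i, n-i+1).
    B = math.exp(math.lgamma(i) + math.lgamma(n - i + 1) - math.lgamma(n + 1))
    return lambda x: F(x) ** (i - 1) * (1.0 - F(x)) ** (n - i) * f(x) / B

def avg_order_entropy_minus_parent(F, f, a, b, n):
    avg = sum(entropy(order_pdf(i, n, F, f), a, b) for i in range(1, n + 1)) / n
    return avg - entropy(f, a, b)

n = 3
d_unif = avg_order_entropy_minus_parent(lambda x: x / 2, lambda x: 0.5, 0.0, 2.0, n)
d_exp = avg_order_entropy_minus_parent(lambda x: 1 - math.exp(-x),
                                       lambda x: math.exp(-x), 0.0, 40.0, n)
print(round(d_unif, 3), round(d_exp, 3))
```

Both differences come out (approximately) equal, consistent with the distribution-free constant.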
In this communication we extend the measure of past entropy (4) to order statistics. In Section 2, we propose the measure of past entropy for order statistics. Section 3 focuses on a characterization result showing that, under some conditions, the measure of past entropy of the i-th order statistic determines the distribution function uniquely; we also study an upper bound for this measure. Some concluding remarks are given in Section 4.

2 PAST ENTROPY FOR ORDER STATISTICS
Shannon's measure of uncertainty associated with the i-th order statistic X_{i:n} is given by

H(X_{i:n}) = -\int_0^\infty f_{i:n}(x)\log f_{i:n}(x)\,dx, \quad (6)

where

f_{i:n}(x) = \frac{1}{B(i, n-i+1)}\,F^{i-1}(x)\,[1-F(x)]^{n-i}\,f(x) \quad (7)

and B(a, b) = \Gamma(a)\Gamma(b)/\Gamma(a+b) is the beta function with parameters a and b; refer to Ebrahimi et al. (2004). Note that for n = 1, (6) reduces to (1).
Using the probability integral transformation U = F(X), where U follows the standard uniform distribution, W_i = F(X_{i:n}) is the i-th order statistic from the standard uniform distribution, whose p.d.f. is given by

g_i(u) = \frac{1}{B(i, n-i+1)}\,u^{i-1}(1-u)^{n-i}, \quad 0 < u < 1, \quad (8)

and (6) can be expressed as

H(X_{i:n}) = H(W_i) - E[\log f(F^{-1}(W_i))], \quad (9)

where H(W_i) denotes the entropy of the i-th order statistic from the standard uniform distribution,

H(W_i) = \log B(i, n-i+1) - (i-1)[\psi(i) - \psi(n+1)] - (n-i)[\psi(n-i+1) - \psi(n+1)], \quad (10)

\psi(z) = \frac{d}{dz}\log\Gamma(z) is the digamma function, and the expectation is

E[\log f(F^{-1}(W_i))] = \int_0^1 g_i(u)\log f(F^{-1}(u))\,du; \quad (11)

refer to Ebrahimi et al. (2004).
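The digamma-based closed form for H(W_i) above can be cross-checked against direct numerical integration of the beta density's entropy. The sketch below is a pure-Python illustration (the helper names are ours): for integer arguments the digamma function reduces to ψ(m) = -γ + Σ_{k=1}^{m-1} 1/k, which avoids any special-function library.

```python
import math

EULER_GAMMA = 0.5772156649015329

def digamma_int(m):
    # psi(m) for a positive integer m: psi(m) = -gamma + H_{m-1}.
    return -EULER_GAMMA + sum(1.0 / k for k in range(1, m))

def order_stat_entropy_uniform(i, n):
    # Closed form for H(W_i), W_i ~ Beta(i, n-i+1) (Ebrahimi et al., 2004).
    logB = math.lgamma(i) + math.lgamma(n - i + 1) - math.lgamma(n + 1)
    return (logB
            - (i - 1) * (digamma_int(i) - digamma_int(n + 1))
            - (n - i) * (digamma_int(n - i + 1) - digamma_int(n + 1)))

def order_stat_entropy_numeric(i, n, steps=200_000):
    # Direct midpoint-rule evaluation of -∫_0^1 g_i(u) log g_i(u) du.
    logB = math.lgamma(i) + math.lgamma(n - i + 1) - math.lgamma(n + 1)
    h = 1.0 / steps
    total = 0.0
    for k in range(steps):
        u = (k + 0.5) * h
        g = math.exp((i - 1) * math.log(u) + (n - i) * math.log(1 - u) - logB)
        if g > 0:
            total -= g * math.log(g) * h
    return total

print(round(order_stat_entropy_uniform(2, 5), 4))
print(round(order_stat_entropy_numeric(2, 5), 4))
```

The two values agree to the displayed precision; for i = n = 1 the closed form returns 0, the entropy of the standard uniform distribution.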
Analogous to (4), we propose the past entropy of the i-th order statistic X_{i:n} as

\bar H(X_{i:n}; t) = -\int_0^t \frac{f_{i:n}(x)}{F_{i:n}(t)}\log\frac{f_{i:n}(x)}{F_{i:n}(t)}\,dx. \quad (12)

As in the case of (5), (12) can be rewritten as

\bar H(X_{i:n}; t) = 1 - \frac{1}{F_{i:n}(t)}\int_0^t f_{i:n}(x)\log\tau_{F_{i:n}}(x)\,dx, \quad (13)

where \tau_{F_{i:n}}(x) is the reversed hazard rate of the i-th order statistic, given by

\tau_{F_{i:n}}(x) = \frac{f_{i:n}(x)}{F_{i:n}(x)}. \quad (14)

Note that, if we take t → ∞ in (12) and use the probability integral transformation U = F(X), then it reduces to (9). In reliability engineering, (n − k + 1)-out-of-n systems are a very important kind of structure. A (n − k + 1)-out-of-n system functions if and only if at least (n − k + 1) components out of n components function. If X_1, X_2, ..., X_n denote the independent lifetimes of the components of such a system, then the lifetime of the system equals the order statistic X_{k:n}. The special cases k = 1 and k = n, that is, the sample minimum and maximum, correspond to series and parallel systems respectively.
Next we discuss two specific distributions, the exponential and the Pareto, for the case i = n; the case i = 1 follows on similar lines but is a little more complicated. For i = n, from (7) we have f_{n:n}(x) = nF^{n-1}(x) f(x) and F_{n:n}(x) = F^{n}(x). Using these values and putting U = F(X) in (12), after some simplifications, we get

\bar H(X_{n:n}; t) = -\int_0^{F(t)} \frac{n u^{n-1}}{F^{n}(t)}\log\left(\frac{n u^{n-1} f(F^{-1}(u))}{F^{n}(t)}\right) du. \quad (15)

Example 2.1 Let X be an exponentially distributed random variable with pdf f(x) = \lambda e^{-\lambda x}, x ≥ 0, \lambda > 0, so that f(F^{-1}(u)) = \lambda(1-u). Hence, using (15), we have

\bar H(X_{n:n}; t) = \log F(t) - \log(n\lambda) + \frac{n-1}{n} - \frac{n}{F^{n}(t)}\int_0^{F(t)} u^{n-1}\log(1-u)\,du.

Taking the limit t → ∞, so that F(t) → 1, and using \int_0^1 u^{n-1}\log(1-u)\,du = -\frac{1}{n}\sum_{k=1}^{n}\frac{1}{k}, we obtain

\lim_{t\to\infty} \bar H(X_{n:n}; t) = \frac{n-1}{n} + \sum_{k=1}^{n}\frac{1}{k} - \log(n\lambda)

as the entropy of the n-th order statistic for the exponential distribution. A similar computation for the Pareto distribution yields, in the limit t → ∞, the entropy of the n-th order statistic of the Pareto distribution.
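The substitution leading to (15) can be checked numerically. The sketch below (helper names ours; midpoint Riemann sums) computes the past entropy of the sample maximum for the exponential distribution in two ways: directly from the definition with density nF^{n-1}f and cdf F^n, and from the u-integral form obtained by the transformation u = F(x). The two agree.

```python
import math

def past_entropy_max(f, cdf, n, t, steps=200_000):
    # Past entropy of X_{n:n} from the definition:
    # density n F^{n-1}(x) f(x), cdf F^n(t); midpoint rule on [0, t].
    p = cdf(t) ** n
    h = t / steps
    total = 0.0
    for k in range(steps):
        x = (k + 0.5) * h
        g = n * cdf(x) ** (n - 1) * f(x) / p
        if g > 0:
            total -= g * math.log(g) * h
    return total

def past_entropy_max_u(fq, n, t_cdf, steps=200_000):
    # Same quantity after the substitution u = F(x):
    # -∫_0^{F(t)} (n u^{n-1}/F(t)^n) log(n u^{n-1} fq(u) / F(t)^n) du,
    # where fq(u) = f(F^{-1}(u)).
    p = t_cdf
    h = p / steps
    total = 0.0
    for k in range(steps):
        u = (k + 0.5) * h
        g = n * u ** (n - 1) / p ** n
        val = g * fq(u)
        if val > 0:
            total -= g * math.log(val) * h
    return total

lam, n, t = 1.0, 3, 2.0
f = lambda x: lam * math.exp(-lam * x)
cdf = lambda x: 1.0 - math.exp(-lam * x)
fq = lambda u: lam * (1.0 - u)   # f(F^{-1}(u)) for the exponential

print(round(past_entropy_max(f, cdf, n, t), 4))
print(round(past_entropy_max_u(fq, n, cdf(t)), 4))
```

Evaluating the first function at a large t also recovers, numerically, the limiting entropy of the sample maximum.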

3 CHARACTERIZATION RESULT
Several authors have studied characterization results for information theoretic measures using order statistics. Baratpour et al. (2007, 2008) studied characterization results for Shannon and Renyi (1961) entropy based on order statistics using the Stone-Weierstrass theorem. Continuing with a similar kind of approach, in this section we prove a characterization theorem which ensures that, under certain conditions, the proposed measure of past entropy based on order statistics characterizes the distribution function uniquely. To prove this result, we require the following lemma; see Kamps (1998).
Lemma 3.1 For any increasing sequence of positive integers {n_j, j ≥ 1}, the sequence of polynomials {x^{n_j}} is complete in L(0, 1) if and only if \sum_{j=1}^{\infty}\frac{1}{n_j} = \infty. Here, L(0, 1) is the set of all Lebesgue integrable functions on the interval (0, 1).
Next, we state the following characterization theorem.

Theorem 3.1 Let X and Y be two non-negative continuous random variables with distribution functions F and G, density functions f and g, and a point t_0 with F(t_0) = G(t_0) > 0. If

\bar H(X_{n:n}; t_0) = \bar H(Y_{n:n}; t_0)

for every n in an increasing sequence of positive integers {n_j, j ≥ 1} with \sum_{j=1}^{\infty}\frac{1}{n_j} = \infty, then F(x) = G(x) for all x ≤ t_0.

Proof: With the notations having their usual meanings, the reversed hazard rate of the n-th order statistic is

\tau_{F_{n:n}}(t) = \frac{f_{n:n}(t)}{F_{n:n}(t)}. \quad (16)

Using (7), the above expression can be written as

\tau_{F_{n:n}}(t) = \frac{nF^{n-1}(t)f(t)}{F^{n}(t)} = n\,\tau_F(t), \quad (17)

where \tau_F(t) = f(t)/F(t). It can be seen easily that, using (16) and (17) in (13) with i = n,

\bar H(X_{n:n}; t_0) = 1 - \frac{1}{F^{n}(t_0)}\int_0^{t_0} nF^{n-1}(x)f(x)\log\big(n\,\tau_F(x)\big)\,dx. \quad (18)

Using the probability integral transformation U = F(X) in (18), we get

\bar H(X_{n:n}; t_0) = 1 - \frac{n}{\mu^{n}}\int_0^{\mu} u^{n-1}\log\left(\frac{n f(F^{-1}(u))}{u}\right) du, \quad \mu = F(t_0). \quad (19)

Writing the analogous expression for Y, equating the two for each n = n_j by hypothesis, and using the change of variable u → u\mu together with F(t_0) = G(t_0) in (19), we obtain

\int_0^1 u^{n_j - 1}\,h(u)\,du = 0 \quad \text{for all } j \ge 1, \qquad h(u) = \log\frac{f(F^{-1}(u\mu))}{g(G^{-1}(u\mu))}.

It is easy to see that h satisfies the conditions of Lemma 3.1; therefore h(u) = 0 almost everywhere, that is, f(F^{-1}(v)) = g(G^{-1}(v)) for almost all v ∈ (0, \mu). Since \frac{d}{dv}F^{-1}(v) = 1/f(F^{-1}(v)), this is possible only when F^{-1}(v) = G^{-1}(v) + d, where d is a constant. Using F(t_0) = G(t_0), so that F^{-1}(\mu) = G^{-1}(\mu) = t_0, we get d = 0, and hence F(x) = G(x) for all x ≤ t_0, the desired result.

An upper bound to the past entropy
We prove that, if f_{i:n}(x) ≤ 1 for all x, then

\bar H(X_{i:n}; t) \le \frac{H(X_{i:n})}{F_{i:n}(t)}. \quad (20)

We have

\bar H(X_{i:n}; t) = \log F_{i:n}(t) - \frac{1}{F_{i:n}(t)}\int_0^t f_{i:n}(x)\log f_{i:n}(x)\,dx

\le -\frac{1}{F_{i:n}(t)}\int_0^t f_{i:n}(x)\log f_{i:n}(x)\,dx \quad (\text{since } F_{i:n}(t) \le 1)

\le -\frac{1}{F_{i:n}(t)}\int_0^\infty f_{i:n}(x)\log f_{i:n}(x)\,dx \quad (\text{since } f_{i:n}(x) \le 1)

= \frac{H(X_{i:n})}{F_{i:n}(t)}.

The equality is obtained when t → ∞.
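The bound above can be verified numerically in a case where the condition f_{i:n} ≤ 1 holds, for instance a Uniform(0, 5) parent with i = 2 and n = 5, where the order-statistic density stays well below 1 (all helper names and the midpoint-rule integrator below are our own choices).

```python
import math

def order_pdf(i, n, F, f):
    # pdf of the i-th order statistic: F^{i-1} (1-F)^{n-i} f / B(i, n-i+1).
    B = math.exp(math.lgamma(i) + math.lgamma(n - i + 1) - math.lgamma(n + 1))
    return lambda x: F(x) ** (i - 1) * (1.0 - F(x)) ** (n - i) * f(x) / B

def integrate(fun, a, b, steps=100_000):
    # Midpoint Riemann sum on [a, b].
    h = (b - a) / steps
    return sum(fun(a + (k + 0.5) * h) for k in range(steps)) * h

i, n, c = 2, 5, 5.0                 # X ~ Uniform(0, c); c = 5 keeps f_{i:n} <= 1
F = lambda x: x / c
f = lambda x: 1.0 / c
fi = order_pdf(i, n, F, f)

t = 2.0
Fit = integrate(fi, 0.0, t)                                      # F_{i:n}(t)
H_full = integrate(lambda x: -fi(x) * math.log(fi(x)), 0.0, c)   # H(X_{i:n})
H_past = integrate(lambda x: -(fi(x) / Fit) * math.log(fi(x) / Fit), 0.0, t)

print(H_past <= H_full / Fit)
```

The inequality holds with a comfortable margin here, since the past entropy of a density supported on (0, 2) is at most log 2.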

4 CONCLUSION
Information measures are applied widely in different areas of physics, communication theory and economics. Residual and past measures of information have found applications in life testing and reliability theory.
The proposed measure of past entropy based on order statistics characterizes the underlying distribution uniquely. This can possibly be used further for statistical modeling and reliability analysis.