PROBABILITY DISTRIBUTIONS
(STATISTICS)
BSc. Mathematics Complementary Course
II Semester

UNIVERSITY OF CALICUT
SCHOOL OF DISTANCE EDUCATION
Calicut University P.O.
Malappuram, Kerala, India 673 635
UNIVERSITY OF CALICUT
SCHOOL OF DISTANCE EDUCATION
B.Sc Mathematics II Semester Complementary Course (STATISTICS)
PROBABILITY DISTRIBUTIONS
Prepared by: Sri. GIREESH BABU M., Department of Statistics, Government Arts & Science College, Calicut – 18
Scrutinised by: Sri. C.P. MOHAMMED (Rtd.), Poolakkandy House, Nanmanda P.O., Calicut District
Layout: Computer Section, SDE
© Reserved
Probability Distributions‐Semester II Page 2 School of Distance Education
CONTENTS

CHAPTER 1: BIVARIATE PROBABILITY DISTRIBUTIONS (Page 6)
CHAPTER 2: MATHEMATICAL EXPECTATION OF BIVARIATE RANDOM VARIABLES (Page 13)
CHAPTER 3: STANDARD DISTRIBUTIONS (Page 25)
CHAPTER 4: LAW OF LARGE NUMBERS (Page 52)
SYLLABUS
Module 1
Bivariate random variable: definition (discrete and continuous type), joint probability mass function and probability density function, marginal and conditional distributions, independence of random variables. (15 hours)
Module 2
Bivariate moments: definition of raw and central product moments, conditional mean and conditional variance, covariance, correlation and regression coefficients. Mean and variance of a random variable in terms of conditional mean and conditional variance. (15 hours)
Module 3
Standard Distributions: Discrete type – Bernoulli, Binomial, Poisson distributions (definition, properties and applications); Geometric and Discrete Uniform (definition, mean, variance and mgf only). Continuous type – Normal (definition, properties and applications); Rectangular, Exponential, Gamma, Beta (definition, mean, variance and mgf only); Lognormal, Pareto and Cauchy distributions (definition only). (30 hours)
Module 4
Law of Large Numbers: Chebychev's inequality, convergence in probability, Weak Law of Large Numbers for iid random variables, Bernoulli Law of Large Numbers, Central Limit Theorem for independent and identically distributed random variables (Lindeberg–Levy form). (12 hours)
Books for reference:
1. V.K. Rohatgi: An Introduction to Probability Theory and Mathematical Statistics, Wiley Eastern.
2. S.C. Gupta and V.K. Kapoor: Fundamentals of Mathematical Statistics, Sultan Chand and Sons.
3. Mood A.M., Graybill F.A. and Boes D.C.: Introduction to the Theory of Statistics, McGraw Hill.
4. John E. Freund: Mathematical Statistics (Sixth Edition), Pearson Education (India), New Delhi.
Chapter 1
BIVARIATE PROBABILITY DISTRIBUTIONS
1.1 BIVARIATE RANDOM VARIABLES
1.1.1 Definition:
Let S be the sample space associated with a random experiment E. Let X = X(s) and Y = Y(s) be two functions, each assigning a real number to each outcome s ∈ S. Then (X, Y) is called a bivariate random variable or two-dimensional random variable.
If the possible values of (X, Y) are finite or countably infinite, (X, Y) is called a
bivariate discrete RV. When (X, Y) is a bivariate discrete RV the possible values of (X,
Y) may be represented as (xi, yj), i = 1,2, ..., m, ...; j = 1,2, ..., n, .... If (X, Y) can assume all
values in a specified region R in the xy plane, (X, Y) is called a bivariate continuous
RV.
1.1.2 Joint Probability Mass Function
Let (X, Y) be a pair of discrete bivariate random variables assuming pairs of values (x1, y1), (x2, y2), ..., (xn, yn) from the real plane. Then the probability of the event {X = xi, Y = yj}, denoted f(xi, yj) or pij, is called the joint probability mass function of (X, Y):
i.e., f(xi, yj) = P(X = xi, Y = yj)
This function satisfies the properties
1. f(xi, yj) ≥ 0 for all (xi, yj)
2. ∑_i ∑_j f(xi, yj) = 1
1.1.3 Joint Probability Density Function
If (X, Y) is a two-dimensional continuous random variable, then f(x, y) is called the joint pdf of (X, Y) provided it satisfies the following conditions:
1. f(x, y) ≥ 0 for all (x, y) ∈ R, where R is the range space.
2. ∬_R f(x, y) dx dy = 1.
Moreover, if D is a subspace of the range space R, then P{(X, Y) ∈ D} is defined as
P{(X, Y) ∈ D} = ∬_D f(x, y) dx dy.
In particular, P{a ≤ X ≤ b, c ≤ Y ≤ d} = ∫_c^d ∫_a^b f(x, y) dx dy.
1.1.4 Cumulative Distribution Function
If (X, Y) is a bivariate random variable (discrete or continuous), then
F(x, y) = P{X ≤ x and Y ≤ y} is called the cdf of (X, Y).
In the discrete case, F(x, y) = ∑_{i: xi ≤ x} ∑_{j: yj ≤ y} pij
In the continuous case, F(x, y) = ∫_{−∞}^{y} ∫_{−∞}^{x} f(u, v) du dv

Properties of F(x, y)
(i). F(−∞, y) = 0 = F(x, −∞) and F(∞, ∞) = 1
(ii). P{a < X < b, Y ≤ y} = F(b, y) − F(a, y)
(iii). P{X ≤ x, c < Y < d} = F(x, d) − F(x, c)
(iv). P{a < X < b, c < Y < d} = F(b, d) − F(a, d) − F(b, c) + F(a, c)
(v). At points of continuity of f(x, y), ∂²F/∂x∂y = f(x, y)
1.1.5 Marginal Probability Distribution
P(X = xi) = P{(X = xi and Y = y1) or (X = xi and Y = y2) or etc.}
= pi1 + pi2 + ... = ∑_j pij
P(X = xi) = ∑_j pij is called the marginal probability function of X. It is defined for X = x1, x2, ... and denoted by pi*. The collection of pairs {(xi, pi*)}, i = 1, 2, ..., is called the marginal probability distribution of X. Similarly, the collection of pairs {(yj, p*j)}, j = 1, 2, ..., is called the marginal probability distribution of Y, where p*j = ∑_i pij = P(Y = yj).
In the continuous case,
P(x − ½dx ≤ X ≤ x + ½dx) = ∫_{x−½dx}^{x+½dx} fX(x) dx = fX(x) dx,
since fX(x) may be treated as a constant in (x − ½dx, x + ½dx).
fX(x) = ∫_{−∞}^{∞} f(x, y) dy is called the marginal density of X.
Similarly, fY(y) = ∫_{−∞}^{∞} f(x, y) dx is called the marginal density of Y.
Note:
P(a ≤ X ≤ b) = P(a ≤ X ≤ b, −∞ < Y < ∞) = ∫_a^b ∫_{−∞}^{∞} f(x, y) dy dx = ∫_a^b fX(x) dx
Similarly, P(c ≤ Y ≤ d) = ∫_c^d fY(y) dy
1.1.6 Conditional Probability Distribution
In the discrete case,
P(X = xi / Y = yj) = P(X = xi, Y = yj)/P(Y = yj) = pij/p*j
is called the conditional probability function of X, given Y = yj. The collection of pairs {(xi, pij/p*j)}, i = 1, 2, ..., is called the conditional probability distribution of X, given Y = yj.
Similarly, the collection of pairs {(yj, pij/pi*)}, j = 1, 2, ..., is called the conditional probability distribution of Y, given X = xi.
1.1.7 Independent Random Variables
Let (X, Y) be a bivariate discrete random variable such that P{X = xi / Y = yj} = P(X = xi),
i.e., pij/p*j = pi*,
i.e., pij = pi* × p*j for all i, j; then X and Y are said to be independent random variables.
Similarly, if (X, Y) is a bivariate continuous random variable such that f(x, y) = fX(x) × fY(y), then X and Y are said to be independent random variables.
1.2 SOLVED PROBLEMS
Problem 1
If X and Y are discrete rv's with joint probability function
f(x, y) = (x + 2y)/18, (x, y) = (1, 1), (1, 2), (2, 1), (2, 2)
= 0, elsewhere,
are the variables independent?
Solution:
Given f(x, y) = (x + 2y)/18, (x, y) = (1, 1), (1, 2), (2, 1), (2, 2).
The marginal pmf of X is
fX(x) = ∑_y f(x, y) = (x + 2)/18 + (x + 4)/18 = (2x + 6)/18 = (x + 3)/9, x = 1, 2.
The marginal pmf of Y is
fY(y) = ∑_x f(x, y) = (1 + 2y)/18 + (2 + 2y)/18 = (3 + 4y)/18, y = 1, 2.
Clearly f(x, y) ≠ fX(x)·fY(y); for example f(1, 1) = 3/18 while fX(1)·fY(1) = (4/9)(7/18) = 14/81.
Therefore X and Y are not independent.
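The marginal-and-product check in Problem 1 is easy to automate; a minimal sketch (assuming the joint pmf f(x, y) = (x + 2y)/18 used in this problem):

```python
from itertools import product

# Redo Problem 1's independence check numerically, assuming the
# joint pmf f(x, y) = (x + 2y)/18 on (x, y) in {1, 2} x {1, 2}.
f = {(x, y): (x + 2 * y) / 18 for x, y in product((1, 2), (1, 2))}
assert abs(sum(f.values()) - 1) < 1e-12            # valid pmf

fx = {x: f[(x, 1)] + f[(x, 2)] for x in (1, 2)}    # marginal of X
fy = {y: f[(1, y)] + f[(2, y)] for y in (1, 2)}    # marginal of Y

independent = all(abs(f[(x, y)] - fx[x] * fy[y]) < 1e-12
                  for x, y in product((1, 2), (1, 2)))
print(independent)   # False: the joint pmf does not factor
```

The same dictionary-of-probabilities pattern works for any small discrete joint distribution.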
Problem 2
Given f(x/y) = ax/y², 0 < x < y < 1, and fY(y) = by⁴, 0 < y < 1, obtain a and b, and also find the joint pdf.
Solution:
Since the conditional pdf f(x/y) is a pdf, we have ∫₀^y f(x/y) dx = 1,
i.e., ∫₀^y (ax/y²) dx = 1
i.e., (a/y²)[x²/2]₀^y = 1 ⇒ a/2 = 1 ⇒ a = 2
Similarly, ∫₀¹ by⁴ dy = 1 ⇒ b[y⁵/5]₀¹ = 1 ⇒ b = 5
The joint pdf is
f(x, y) = fY(y)·f(x/y) = 5y⁴ × 2x/y² = 10xy², 0 < x < y < 1
Problem 3
The joint probability density function of a two-dimensional random variable (X, Y) is given by
f(x, y) = 2, 0 < x < 1, 0 < y < x
= 0, elsewhere.
(i). Find the marginal density functions of X and Y,
(ii). Find the conditional density function of Y given X = x and the conditional density of X given Y = y, and
(iii). Check for independence of X and Y.
Solution
(i). The marginal pdfs of X and Y are given by
fX(x) = ∫₀^x f(x, y) dy = 2x, 0 < x < 1
= 0, elsewhere
fY(y) = ∫_y^1 f(x, y) dx = 2(1 − y), 0 < y < 1
= 0, elsewhere
(ii). The conditional density function of Y given X is
f(y/x) = f(x, y)/fX(x) = 2/(2x) = 1/x, 0 < y < x < 1
The conditional density function of X given Y is
f(x/y) = f(x, y)/fY(y) = 2/[2(1 − y)] = 1/(1 − y), 0 < y < x < 1
(iii). Since fX(x)·fY(y) = (2x)·2(1 − y) ≠ f(x, y), X and Y are not independent.
Problem 4
Two random variables X and Y have the joint pdf
f(x, y) = k(x² + y²), 0 < x < 2, 1 < y < 4
= 0, elsewhere.
Find k.
Solution:
Since f(x, y) is a pdf,
∫₁⁴ ∫₀² k(x² + y²) dx dy = 1
i.e., k ∫₁⁴ [x³/3 + xy²]₀² dy = 1
k ∫₁⁴ (8/3 + 2y²) dy = 1
k[8y/3 + 2y³/3]₁⁴ = 1
k(32/3 + 128/3 − 8/3 − 2/3) = 1
50k = 1
i.e., k = 1/50
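The normalizing constant in Problem 4 can be cross-checked with a crude numerical double integral; a minimal sketch:

```python
# Numerically verify the normalizing constant k for f(x, y) = k*(x**2 + y**2)
# on 0 < x < 2, 1 < y < 4: the integral of x**2 + y**2 there should be 50.
def double_integral(f, ax, bx, ay, by, n=400):
    """Midpoint-rule approximation of the double integral of f over a rectangle."""
    hx, hy = (bx - ax) / n, (by - ay) / n
    total = 0.0
    for i in range(n):
        x = ax + (i + 0.5) * hx
        for j in range(n):
            y = ay + (j + 0.5) * hy
            total += f(x, y)
    return total * hx * hy

mass = double_integral(lambda x, y: x**2 + y**2, 0, 2, 1, 4)
print(round(mass, 2))   # 50.0, so k = 1/50
k = 1 / mass
```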
Problem 5
If X and Y are two random variables having joint density function
f(x, y) = (1/8)(6 − x − y), 0 < x < 2, 2 < y < 4
= 0, otherwise.
Find (i). P(X < 1 ∩ Y < 3), (ii). P(X + Y < 3) and (iii). P(X < 1 / Y < 3).
Solution
(i). P(X < 1 ∩ Y < 3) = ∫₂³ ∫₀¹ (1/8)(6 − x − y) dx dy = 3/8
(ii). P(X + Y < 3) = ∫₀¹ ∫₂^{3−x} (1/8)(6 − x − y) dy dx = 5/24
(iii). P(X < 1 / Y < 3) = P(X < 1 ∩ Y < 3)/P(Y < 3) = (3/8)/(5/8) = 3/5
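The three probabilities in Problem 5 can be checked by brute-force Riemann sums; a sketch (assuming the density f(x, y) = (6 − x − y)/8, i.e. the constant that makes the stated density integrate to 1 over 0 < x < 2, 2 < y < 4):

```python
# Approximate P(X < 1, Y < 3), P(X + Y < 3) and P(Y < 3) on a fine grid.
n = 1000
hx, hy = 2.0 / n, 2.0 / n
p_joint = p_y3 = p_sum3 = 0.0
for i in range(n):
    x = (i + 0.5) * hx                # midpoint in (0, 2)
    for j in range(n):
        y = 2 + (j + 0.5) * hy        # midpoint in (2, 4)
        w = (6 - x - y) / 8 * hx * hy
        if x < 1 and y < 3:
            p_joint += w              # P(X < 1, Y < 3)
        if y < 3:
            p_y3 += w                 # P(Y < 3)
        if x + y < 3:
            p_sum3 += w               # P(X + Y < 3)

print(round(p_joint, 3), round(p_sum3, 3), round(p_joint / p_y3, 3))
# 0.375 0.208 0.6  (i.e. 3/8, ≈5/24, 3/5)
```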
Problem 6
The joint distribution of X and Y is given by
f(x, y) = 4xy e^(−(x² + y²)); x ≥ 0, y ≥ 0
Test whether X and Y are independent. For the above joint distribution, find the conditional density of X given Y = y.
Solution
Joint pdf of X and Y is
f(x, y) = 4xy e^(−(x² + y²)); x ≥ 0, y ≥ 0
Marginal density of X is given by
fX(x) = ∫₀^∞ 4xy e^(−(x² + y²)) dy = 4x e^(−x²) ∫₀^∞ y e^(−y²) dy
Putting y² = t, ∫₀^∞ y e^(−y²) dy = ½, so
fX(x) = 2x e^(−x²); x ≥ 0
Similarly, the marginal pdf of Y is given by
fY(y) = ∫₀^∞ f(x, y) dx = 2y e^(−y²); y ≥ 0
Since f(x, y) = fX(x)·fY(y), X and Y are independently distributed.
The conditional distribution of X for given Y is given by:
f(X = x / Y = y) = f(x, y)/fY(y) = 2x e^(−x²); x ≥ 0
Chapter 2
MATHEMATICAL EXPECTATION OF BIVARIATE RANDOM VARIABLES
2.1 Definition:
Let X1, X2, ..., Xn be n random variables with joint pdf f(x1, x2, ..., xn) and let g(x1, x2, ..., xn) be any function of these random variables. Then,
E{g(X1, X2, ..., Xn)} = ∑ ∑ ... ∑ g(x1, x2, ..., xn) f(x1, x2, ..., xn), if the rv's are discrete
= ∫ ∫ ... ∫ g(x1, x2, ..., xn) f(x1, x2, ..., xn) dx1 dx2 ... dxn, if the rv's are continuous,
provided the sum or integral on the RHS is absolutely convergent.
2.2 Properties of Expectation
2.2.1 Property
If X is any random variable and g(X) is any measurable function of X, then for two constants a and b, E(a·g(X) + b) = a·E(g(X)) + b.
Proof:
Let X be a discrete random variable with pmf f(x). Then
E(a·g(X) + b) = ∑_x [a·g(x) + b] f(x)
= a ∑_x g(x) f(x) + b ∑_x f(x)
= a·E(g(X)) + b, since ∑_x f(x) = 1
If X is continuous, instead of summation use integration.
Remarks:
When a = 0, E(b) = b; i.e., the expectation of a constant is the constant itself.
2.2.2 Addition Theorem:
If X and Y are two random variables, then E(X + Y) = E(X) + E(Y), provided all the expectations exist.
Proof:
Let X and Y be two discrete random variables with joint pmf f(x, y). Then by definition,
E(X + Y) = ∑_x ∑_y (x + y) f(x, y)
= ∑_x ∑_y x f(x, y) + ∑_x ∑_y y f(x, y)
= ∑_x x ∑_y f(x, y) + ∑_y y ∑_x f(x, y)
= ∑_x x f1(x) + ∑_y y f2(y)
= E(X) + E(Y)
In the case of two continuous random variables, instead of summation use integration.
Remarks:
If X1, X2, ..., Xn are any finite number of random variables, then
E(X1 + X2 + ... + Xn) = E(X1) + E(X2) + ... + E(Xn)
2.2.3 Multiplication Theorem
Let X and Y be two independent random variables; then E(XY) = E(X)·E(Y), provided all the expectations exist.
Proof
Let X and Y be any two independent discrete random variables with joint pmf f(x, y). Then,
E(XY) = ∑_x ∑_y xy f(x, y)
= ∑_x ∑_y xy f1(x)·f2(y), since X and Y are independent, f(x, y) = f1(x)·f2(y)
Hence,
E(XY) = ∑_x x f1(x) ∑_y y f2(y) = E(X)·E(Y)
In the case of continuous random variables use integration instead of summation.
Remarks:
The converse of the multiplication theorem need not be true. That is, for two random variables, E(XY) = E(X)·E(Y) need not imply that X and Y are independent.
Example 1:
Consider two random variables X and Y with joint probability function
f(x, y) = (1/5)(1 − |x|·|y|), x = −1, 0, 1; y = −1, 0, 1
= 0, elsewhere.
Here the marginals are
f1(x) = ∑_y f(x, y) = (1/5)(3 − 2|x|), x = −1, 0, 1
f2(y) = ∑_x f(x, y) = (1/5)(3 − 2|y|), y = −1, 0, 1
E(X) = ∑_x x f1(x) = (1/5)[−1(3 − 2) + 0(3) + 1(3 − 2)] = 0
Similarly, E(Y) = 0 and E(XY) = 0, since xy ≠ 0 only at the four corner points, where f(x, y) = 0.
Hence E(XY) = E(X)·E(Y).
But f(x, y) ≠ f1(x)·f2(y); for instance f(1, 1) = 0 while f1(1)·f2(1) = 1/25.
That is, X and Y are uncorrelated but not independent.
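The counterexample above can be verified by direct enumeration; a sketch (assuming the normalized pmf f(x, y) = (1 − |x||y|)/5):

```python
from itertools import product

# For f(x, y) = (1 - |x||y|)/5 on x, y in {-1, 0, 1}:
# E(XY) = E(X)E(Y) holds, yet the joint pmf does not factor into marginals.
support = [-1, 0, 1]
f = {(x, y): (1 - abs(x) * abs(y)) / 5 for x, y in product(support, support)}
assert abs(sum(f.values()) - 1) < 1e-12           # valid pmf

fx = {x: sum(f[(x, y)] for y in support) for x in support}
fy = {y: sum(f[(x, y)] for x in support) for y in support}

E_X  = sum(x * fx[x] for x in support)
E_Y  = sum(y * fy[y] for y in support)
E_XY = sum(x * y * f[(x, y)] for x, y in product(support, support))
print(E_X, E_Y, E_XY)                   # 0.0 0.0 0.0 -> covariance is 0
print(f[(1, 1)], round(fx[1] * fy[1], 2))   # 0.0 0.04 -> not independent
```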
2.2.4 Property:
If X ≥ 0, then E(X) ≥ 0.
2.2.5 Property:
If X and Y are two random variables, then [E(XY)]² ≤ E(X²)·E(Y²) (Cauchy–Schwarz inequality).
Proof:
Consider the real valued function E(X + tY)², where t is real. Since (X + tY)² ≥ 0, we have E(X + tY)² ≥ 0,
i.e., t²E(Y²) + 2tE(XY) + E(X²) ≥ 0
The LHS is a quadratic expression in t, and since it is always greater than or equal to zero, this quadratic cannot have more than one real root. Hence its discriminant must be less than or equal to zero.
The discriminant "b² − 4ac" for the above quadratic in t gives
[2E(XY)]² − 4E(X²)E(Y²) ≤ 0
i.e., 4[E(XY)]² ≤ 4E(X²)·E(Y²)
Hence, [E(XY)]² ≤ E(X²)E(Y²)
2.2.6 Property:
If X and Y are two random variables such that Y ≤ X, then E(Y) ≤ E(X).
Proof:
Consider Y ≤ X; then X − Y ≥ 0,
⇒ E(X − Y) ≥ 0
⇒ E(X − Y) = E(X + (−Y)) = E(X) − E(Y) ≥ 0
i.e., E(X) − E(Y) ≥ 0 ⇒ E(Y) ≤ E(X)
2.2.7 Property:
For a random variable X, |E(X)| ≤ E(|X|), provided the expectations exist.
Proof:
We have X ≤ |X| ⇒ E(X) ≤ E(|X|) .............. (1)
Again, −X ≤ |X| ⇒ E(−X) ≤ E(|X|), i.e., −E(X) ≤ E(|X|) .............. (2)
(1) and (2) ⇒ |E(X)| ≤ E(|X|)
2.2.8 Property:
If the possible values of a random variable X are 0, 1, 2, ..., then
E(X) = ∑_{x=1}^{∞} P(X ≥ x)
Proof:
∑_{x=1}^{∞} P(X ≥ x) = [P(X=1) + P(X=2) + P(X=3) + ...] + [P(X=2) + P(X=3) + ...] + [P(X=3) + P(X=4) + ...] + ...
= P(X=1) + 2P(X=2) + 3P(X=3) + 4P(X=4) + ...
= ∑_x x P(X = x) = E(X)
2.3 Raw and Central Moments
Let X be a random variable with pmf/pdf f(x), and let A be any constant and r any non-negative integer. Then
E(X − A)^r = ∑_x (x − A)^r f(x) or ∫ (x − A)^r f(x) dx,
according as X is discrete or continuous, is called the r-th moment of X about A, denoted by µ'_r(A), provided it exists.
When A = 0, E(X − A)^r = E(X^r), which is known as the r-th raw moment of X and is denoted by µ'_r.
E(X − µ)^r, where µ = E(X), is the r-th central moment of X and is denoted by µ_r.
We have µ'_1 = E(X) = ∑_i xi pi.
Now the variance,
µ2 = E(X − E(X))²
= E[X² + (E(X))² − 2X·E(X)]
= E(X²) + [E(X)]² − 2E(X)E(X)
= E(X²) − [E(X)]² = µ'2 − (µ'1)²
Relation between raw and central moments:
We have the first central moment, µ1 = E(X − E(X)) = E(X) − E(X) = 0.
The second central moment or variance,
µ2 = µ'2 − (µ'1)²
The third central moment,
µ3 = E(X − E(X))³
= E(X³ − 3X²E(X) + 3X[E(X)]² − [E(X)]³)
= E(X³) − 3E(X²)E(X) + 3E(X)[E(X)]² − [E(X)]³
= E(X³) − 3E(X²)E(X) + 3[E(X)]³ − [E(X)]³
= E(X³) − 3E(X²)E(X) + 2[E(X)]³
= µ'3 − 3µ'2µ'1 + 2(µ'1)³
The fourth central moment,
µ4 = E(X − E(X))⁴
= E(X⁴ − 4X³E(X) + 6X²[E(X)]² − 4X[E(X)]³ + [E(X)]⁴)
= E(X⁴) − 4E(X³)E(X) + 6E(X²)[E(X)]² − 4E(X)[E(X)]³ + [E(X)]⁴
= E(X⁴) − 4E(X³)E(X) + 6E(X²)[E(X)]² − 3[E(X)]⁴
= µ'4 − 4µ'3µ'1 + 6µ'2(µ'1)² − 3(µ'1)⁴
In general, the r-th central moment,
µr = E(X − E(X))^r
= E(X^r − rC1 X^(r−1)E(X) + rC2 X^(r−2)[E(X)]² − rC3 X^(r−3)[E(X)]³ + ... + (−1)^r [E(X)]^r)
2.4 Properties of variance and covariance
1). For a random variable X, Var(aX) = a²V(X).
Proof:
Var(aX) = E[aX − E(aX)]²
= E[a(X − E(X))]²
= a²E[X − E(X)]²
= a²V(X)
2). For two independent random variables X and Y,
V(aX + bY) = a²V(X) + b²V(Y)
Proof:
V(aX + bY) = E[aX + bY − E(aX + bY)]²
= E[a(X − E(X)) + b(Y − E(Y))]²
= a²E[X − E(X)]² + b²E[Y − E(Y)]² + 2abE[(X − E(X))(Y − E(Y))]
= a²V(X) + b²V(Y) + 2abCov(X, Y)
Since X and Y are independent, Cov(X, Y) = 0; hence,
V(aX + bY) = a²V(X) + b²V(Y)
3). For two random variables X and Y, Cov(X + a, Y + b) = Cov(X, Y).
Proof:
Cov(X + a, Y + b) = E[(X + a)(Y + b)] − E(X + a)E(Y + b)
= E[XY + bX + aY + ab] − [E(X)E(Y) + bE(X) + aE(Y) + ab]
= E(XY) − E(X)E(Y)
= Cov(X, Y)
4).For two random variables X and Y; Cov(aX, Y) = aCov(X, Y).
Proof:
Cov(aX, Y) = E[(aX)(Y)] - E(aX)E(Y)
= aE[XY] - a[E(X)E(Y)]
= a[E(XY) - E(X)E(Y)]
= aCov(X, Y)
2.5 Conditional Expectation and Variance:
Let (X, Y) be jointly distributed with pmf/pdf f(x, y). Then the conditional mean of X given Y = y is denoted by E(X/Y = y) and is defined as
E(X/Y = y) = ∑_x x f(x/y), if X and Y are discrete
= ∫ x f(x/y) dx, if X and Y are continuous.
The conditional mean of Y given X = x is denoted by E(Y/X = x) and is defined as
E(Y/X = x) = ∑_y y f(y/x), if X and Y are discrete
= ∫ y f(y/x) dy, if X and Y are continuous.
The conditional variance of X given Y = y is denoted by V(X/Y = y) and is defined as
V(X/Y = y) = E(X²/Y = y) − [E(X/Y = y)]²
The conditional variance of Y given X = x is denoted by V(Y/X = x) and is defined as
V(Y/X = x) = E(Y²/X = x) − [E(Y/X = x)]²
Theorem:
If X and Y are two independent r.v.'s, then M_{X+Y}(t) = MX(t)·MY(t).
Proof:
By definition,
M_{X+Y}(t) = E[e^(t(X+Y))] = E[e^(tX) e^(tY)]
= E[e^(tX)]E[e^(tY)] = MX(t)·MY(t), since X and Y are independent.
Theorem:
If X1, X2, ..., Xn are n independent r.v.'s, then
M_{∑Xi}(t) = ∏_{i=1}^{n} M_{Xi}(t)
Proof:
By definition,
M_{∑Xi}(t) = E[e^(t∑Xi)] = E[∏_i e^(tXi)]
= ∏_i E[e^(tXi)], since the Xi's are independent
= ∏_i M_{Xi}(t)
i.e., the m.g.f. of a sum of n independent r.v.'s is equal to the product of their m.g.f.'s.
Remarks 1
For a pair of r.v.'s (X, Y), the covariance between X and Y (or product moment between X and Y) is defined as
Cov(X, Y) = E[X − E(X)][Y − E(Y)]
Remarks 2
Cov(X, Y) = E(XY) − E(X)E(Y)
Remarks 3
If X and Y are independent r.v.'s, Cov(X, Y) = 0,
i.e., Cov(X, Y) = E(XY) − E(X)E(Y) = E(X)E(Y) − E(X)E(Y) = 0
Remarks 4
The correlation coefficient between the two random variables X and Y is defined as
ρXY = Cov(X, Y)/√(V(X)·V(Y)),
where
Cov(X, Y) = E(XY) − E(X)E(Y)
V(X) = E(X²) − [E(X)]²
V(Y) = E(Y²) − [E(Y)]²
2.6 SOLVED PROBLEMS
Problem 1
Let X and Y be two random variables with joint pmf
f(x, y) = (x + 2y)/18, x = 1, 2; y = 1, 2
Find E(X²Y).
Solution
E(X²Y) = ∑_x ∑_y x²y f(x, y)
= 1(3/18) + 2(5/18) + 4(4/18) + 8(6/18)
= (3 + 10 + 16 + 48)/18 = 77/18
Problem 2
For two random variables X and Y, the joint pdf is f(x, y) = x + y, 0 < x < 1, 0 < y < 1. Find E(3XY²).
Solution:
E(3XY²) = 3 ∫₀¹ ∫₀¹ xy² f(x, y) dy dx
= 3 ∫₀¹ ∫₀¹ xy²(x + y) dy dx
= 3 ∫₀¹ ∫₀¹ (x²y² + xy³) dy dx
= 3 ∫₀¹ (x²[y³/3]₀¹ + x[y⁴/4]₀¹) dx
= 3 ∫₀¹ (x²/3 + x/4) dx
= 3([x³/9]₀¹ + [x²/8]₀¹) = 3(1/9 + 1/8) = 17/24
Problem 3
Let X and Y be two random variables with joint pmf
f(x, y) = (x + 2y)/18, x = 1, 2; y = 1, 2
Find (i). the correlation between X and Y, (ii). V(X/Y = 1).
Solution:
(i). Correlation(X, Y) = Cov(X, Y)/√(V(X)·V(Y))
The marginal pmfs are
f1(x) = ∑_y f(x, y) = (x + 3)/9, x = 1, 2, so f1(1) = 4/9 and f1(2) = 5/9
f2(y) = ∑_x f(x, y) = (3 + 4y)/18, y = 1, 2, so f2(1) = 7/18 and f2(2) = 11/18
E(X) = 1 × 4/9 + 2 × 5/9 = 14/9
E(X²) = 1 × 4/9 + 4 × 5/9 = 24/9, so V(X) = 24/9 − (14/9)² = 20/81
E(Y) = 1 × 7/18 + 2 × 11/18 = 29/18
E(Y²) = 1 × 7/18 + 4 × 11/18 = 51/18, so V(Y) = 51/18 − (29/18)² = 77/324
E(XY) = ∑_x ∑_y xy f(x, y) = (1×3 + 2×5 + 2×4 + 4×6)/18 = 45/18 = 5/2
Cov(X, Y) = E(XY) − E(X)E(Y) = 5/2 − (14/9)(29/18) = −1/162
Correlation(X, Y) = (−1/162)/√((20/81)(77/324)) = −1/√1540 = −0.026 (approx.)
(ii). V(X/Y = 1) = E(X²/Y = 1) − [E(X/Y = 1)]²
The conditional pmf of X given Y = 1 is f(x/Y = 1) = f(x, 1)/f2(1), so
f(1/Y = 1) = 3/7 and f(2/Y = 1) = 4/7
E(X/Y = 1) = 1 × 3/7 + 2 × 4/7 = 11/7
E(X²/Y = 1) = 1 × 3/7 + 4 × 4/7 = 19/7
Therefore, V(X/Y = 1) = 19/7 − (11/7)² = 12/49
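Both parts of Problem 3 can be recomputed mechanically from the joint table; a minimal sketch (assuming the joint pmf f(x, y) = (x + 2y)/18 on {1, 2} × {1, 2}):

```python
from math import sqrt
from itertools import product

# Recompute Problem 3's correlation and conditional variance numerically.
f = {(x, y): (x + 2 * y) / 18 for x, y in product((1, 2), (1, 2))}

def E(g):
    """Expectation of g(x, y) under the joint pmf."""
    return sum(g(x, y) * p for (x, y), p in f.items())

cov = E(lambda x, y: x * y) - E(lambda x, y: x) * E(lambda x, y: y)
var_x = E(lambda x, y: x * x) - E(lambda x, y: x) ** 2
var_y = E(lambda x, y: y * y) - E(lambda x, y: y) ** 2
rho = cov / sqrt(var_x * var_y)
print(round(rho, 3))           # -0.025, i.e. -1/sqrt(1540) (text rounds to -0.026)

# Conditional distribution of X given Y = 1, then V(X | Y = 1)
py1 = f[(1, 1)] + f[(2, 1)]
cond = {x: f[(x, 1)] / py1 for x in (1, 2)}
m1 = sum(x * p for x, p in cond.items())
m2 = sum(x * x * p for x, p in cond.items())
print(round(m2 - m1 * m1, 4))  # 0.2449, i.e. 12/49
```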
Problem 4
The joint pdf of two random variables (X, Y) is given by
f(x, y) = 2, 0 < x < y < 1
= 0, elsewhere.
Find the conditional mean and variance of X given Y = y.
Solution:
The conditional mean of X given Y = y is
E(X/y) = ∫₀^y x f(x/y) dx, where f(x/y) = f(x, y)/fY(y)
Here, fY(y) = ∫₀^y 2 dx = 2y, 0 < y < 1
Therefore, f(x/y) = 2/(2y) = 1/y, 0 < x < y < 1
E(X/y) = ∫₀^y x(1/y) dx = (1/y)[x²/2]₀^y = y/2, 0 < y < 1
Also,
V(X/y) = E(X²/y) − [E(X/y)]²
where E(X²/y) = ∫₀^y x² f(x/y) dx = (1/y)[x³/3]₀^y = y²/3
Therefore,
V(X/y) = y²/3 − y²/4 = y²/12, 0 < y < 1
Problem 5
Two random variables X and Y have joint pdf
f(x, y) = 2 − x − y; 0 ≤ x ≤ 1, 0 ≤ y ≤ 1
= 0, otherwise.
Find (i). f1(x), f2(y), (ii). f(x/Y = y), f(y/X = x), (iii). Cov(X, Y).
Solution:
(i) f1(x) = ∫₀¹ (2 − x − y) dy = 3/2 − x, 0 ≤ x ≤ 1
f2(y) = ∫₀¹ (2 − x − y) dx = 3/2 − y, 0 ≤ y ≤ 1
(ii) f(x/Y = y) = f(x, y)/f2(y) = (2 − x − y)/(3/2 − y), 0 ≤ x ≤ 1
f(y/X = x) = f(x, y)/f1(x) = (2 − x − y)/(3/2 − x), 0 ≤ y ≤ 1
(iii) E(XY) = ∫₀¹ ∫₀¹ xy(2 − x − y) dy dx = ∫₀¹ x(1 − x/2 − 1/3) dx = 1/3 − 1/6 = 1/6
E(X) = ∫₀¹ x(3/2 − x) dx = 3/4 − 1/3 = 5/12
E(Y) = ∫₀¹ y(3/2 − y) dy = 5/12
Therefore,
Cov(X, Y) = E(XY) − E(X)E(Y) = 1/6 − 25/144 = −1/144
Problem 6
Show by an example that independence implies zero correlation, but the converse is not always true.
Solution:
If two random variables X and Y are independent, then Cov(X, Y) = 0, which implies ρXY = 0. To disprove the converse, consider a r.v. X having the pdf
f(x) = 1/2, −1 ≤ x ≤ 1
= 0, otherwise.
Let Y = X²; then X and Y are not independent.
Here,
E(X) = ∫₋₁¹ x(1/2) dx = 0
E(Y) = E(X²) = ∫₋₁¹ x²(1/2) dx = 1/3
E(XY) = E(X·X²) = E(X³) = ∫₋₁¹ x³(1/2) dx = 0
Therefore,
Cov(X, Y) = E(XY) − E(X)E(Y) = 0 − 0 × 1/3 = 0
i.e., ρXY = 0.
This shows that zero correlation need not imply independence.
Chapter 3
STANDARD DISTRIBUTIONS
3.1 DISCRETE DISTRIBUTIONS
3.1.1 Degenerate Distribution
If X is a random variable with pmf
P(X = x) = 1, when x = k
= 0, otherwise,
then the random variable X is said to follow the degenerate distribution at k. The distribution function of this random variable is
F(x) = 0, x < k
= 1, x ≥ k
Mean and Variance:
Mean of X, E(X) = ∑ x P(X = x) = k × 1 = k
E(X²) = ∑ x² P(X = x) = k² × 1 = k²
In general, E(X^r) = k^r
Then, V(X) = k² − [k]² = 0
Moment generating function:
MX(t) = E(e^(tX)) = ∑ e^(tx) P(X = x) = e^(tk) × 1 = e^(tk)
3.1.2 Discrete Uniform Distribution
A random variable X is said to follow the discrete uniform distribution if its pmf is
f(x) = 1/n, x = x1, x2, ..., xn
= 0, otherwise.
Eg: In an experiment of tossing an unbiased die, if X denotes the number shown by the die, then X follows the discrete uniform distribution with pmf
f(x) = P(X = x) = 1/6, x = 1, 2, ..., 6.
Mean and Variance:
Mean of X,
E(X) = ∑ x P(X = x) = (1/n)[x1 + x2 + ... + xn] = (1/n)∑ xi
E(X²) = ∑ x² P(X = x) = (1/n)[x1² + x2² + ... + xn²] = (1/n)∑ xi²
Then,
V(X) = (1/n)∑ xi² − [(1/n)∑ xi]²
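The die example works out directly from these formulas; a quick sketch:

```python
# Mean and variance of the discrete uniform rv on {1, ..., 6} (the fair die).
values = range(1, 7)
n = 6
mean = sum(values) / n                      # (1/n) * sum(x_i)
second = sum(x * x for x in values) / n     # (1/n) * sum(x_i**2)
var = second - mean ** 2
print(mean, var)   # 3.5 and 35/12 ≈ 2.9167
```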
3.1.3 Binomial Distribution
This distribution was discovered by James Bernoulli in 1700. Consider a random experiment with two possible outcomes, which we call success and failure. Let p be the probability of success and q = 1 − p the probability of failure; p is assumed to be fixed from trial to trial. Let X denote the number of successes in n independent trials. Then X is a random variable and may take the values 0, 1, 2, ..., n.
Then, P(X = x) = P(x successes and n − x failures in n repetitions of the experiment). There are nCx mutually exclusive ways, each with probability p^x q^(n−x), for the happening of x successes out of n repetitions of the experiment.
Hence,
P(X = x) = nCx p^x q^(n−x), x = 0, 1, 2, ..., n
Definition:
A random variable X is said to follow the binomial distribution with parameters n
and p if the pmf is,
f(x) = P(X = x) = nCx p^x q^(n−x), x = 0, 1, 2, ..., n; 0 < p < 1, and p + q = 1
= 0, elsewhere
Mean and variance:
Mean,
E(X) = ∑_x x·nCx p^x q^(n−x)
= ∑_{x=1}^{n} x·[n!/(x!(n−x)!)] p^x q^(n−x)
= np ∑_{x=1}^{n} [(n−1)!/((x−1)!(n−x)!)] p^(x−1) q^(n−x)
= np(p + q)^(n−1) = np
E(X²) = ∑_x x²·nCx p^x q^(n−x)
= ∑_x [x(x−1) + x]·nCx p^x q^(n−x)
= ∑_x x(x−1)·nCx p^x q^(n−x) + E(X)
= n(n−1)p² ∑_{x=2}^{n} [(n−2)!/((x−2)!(n−x)!)] p^(x−2) q^(n−x) + np
= n(n−1)p²(p + q)^(n−2) + np
= n(n−1)p² + np
Therefore the variance,
V(X) = E(X²) − [E(X)]²
= [n(n−1)p² + np] − [np]²
= n²p² − np² + np − n²p²
= np − np² = np(1 − p) = npq
E(X³) = ∑_x x³·nCx p^x q^(n−x)
= ∑_x [x(x−1)(x−2) + 3x² − 2x]·nCx p^x q^(n−x)
= ∑_x x(x−1)(x−2)·nCx p^x q^(n−x) + 3E(X²) − 2E(X)
= n(n−1)(n−2)p³ ∑_{x=3}^{n} [(n−3)!/((x−3)!(n−x)!)] p^(x−3) q^(n−x) + 3E(X²) − 2E(X)
= n(n−1)(n−2)p³(p + q)^(n−3) + 3[n(n−1)p² + np] − 2np
= n(n−1)(n−2)p³ + 3n(n−1)p² + np
Similarly,
E(X⁴) = n(n−1)(n−2)(n−3)p⁴ + 6n(n−1)(n−2)p³ + 7n(n−1)p² + np
Beta and Gamma coefficients:
β1 = µ3²/µ2³ = (q − p)²/(npq)
γ1 = √β1 = (q − p)/√(npq)
A binomial distribution is positively skewed, symmetric or negatively skewed according as γ1 >, =, < 0, which implies q >, =, < p.
β2 = µ4/µ2² = 3 + (1 − 6pq)/(npq)
A binomial distribution is leptokurtic, mesokurtic or platykurtic according as β2 >, =, < 3, which implies pq <, =, > 1/6.
Moment Generating Function:
The mgf,
MX(t) = E(e^(tX)) = ∑_x e^(tx) P(X = x)
= ∑_x e^(tx)·nCx p^x q^(n−x)
= ∑_x nCx (pe^t)^x q^(n−x)
= (q + pe^t)^n
Additive property of the binomial distribution
If X ~ B(n1, p) and Y ~ B(n2, p) and they are independent, then their sum X + Y also follows B(n1 + n2, p).
Proof:
Since X ~ B(n1, p), MX(t) = (q + pe^t)^(n1)
Y ~ B(n2, p), MY(t) = (q + pe^t)^(n2)
We have,
M_{X+Y}(t) = MX(t)·MY(t), since X and Y are independent,
= (q + pe^t)^(n1) × (q + pe^t)^(n2)
= (q + pe^t)^(n1+n2)
= mgf of B(n1 + n2, p)
Therefore, X + Y ~ B(n1 + n2, p)
If the second parameter (p) is not the same for X and Y, then X + Y will not be
binomial.
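The additive property can be checked exactly by convolving two binomial pmfs; a sketch with arbitrary illustrative parameters n1 = 3, n2 = 5, p = 0.4:

```python
from math import comb

# Convolving B(3, p) with B(5, p) should reproduce the B(8, p) pmf exactly.
def binom_pmf(n, p, x):
    return comb(n, x) * p**x * (1 - p)**(n - x)

n1, n2, p = 3, 5, 0.4
for s in range(n1 + n2 + 1):
    conv = sum(binom_pmf(n1, p, x) * binom_pmf(n2, p, s - x)
               for x in range(max(0, s - n2), min(n1, s) + 1))
    assert abs(conv - binom_pmf(n1 + n2, p, s)) < 1e-12
print("X + Y ~ B(n1 + n2, p) verified")
```

Repeating the check with two different p values makes the convolution and the B(n1 + n2, p) pmf disagree, matching the remark above.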
Recurrence relation for central moments
If X ~ B(n, p), then
µ_{r+1} = pq[nrµ_{r−1} + dµ_r/dp]
Proof: We have
µ_r = E[X − E(X)]^r = E(X − np)^r = ∑_x (x − np)^r nCx p^x q^(n−x)
Differentiating with respect to p (noting q = 1 − p),
dµ_r/dp = ∑_x r(x − np)^(r−1)(−n)·nCx p^x q^(n−x) + ∑_x (x − np)^r nCx [x p^(x−1) q^(n−x) − (n − x) p^x q^(n−x−1)]
= −nr µ_{r−1} + ∑_x (x − np)^r nCx p^x q^(n−x) [x/p − (n − x)/q]
= −nr µ_{r−1} + (1/pq) ∑_x (x − np)^r (x − np) f(x)
i.e., dµ_r/dp = −nr µ_{r−1} + (1/pq) µ_{r+1}
Therefore,
µ_{r+1} = pq[nr µ_{r−1} + dµ_r/dp]
Using the information µ0 = 1 and µ1 = 0, we can determine the values of µ2, µ3, µ4, etc. by this relation.
Recurrence relation for the Binomial Distribution
B(x + 1; n, p) = [(n − x)/(x + 1)]·(p/q)·B(x; n, p)
Proof:
We have,
B(x; n, p) = nCx p^x q^(n−x)
B(x + 1; n, p) = nC_{x+1} p^(x+1) q^(n−(x+1))
Therefore,
B(x + 1; n, p)/B(x; n, p) = [nC_{x+1}/nCx]·(p/q) = [(n − x)/(x + 1)]·(p/q)
There fore,
B(x + 1; n, p) = [(n − x)/(x + 1)]·(p/q)·B(x; n, p)
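This recurrence lets the whole binomial pmf be built from B(0) = q^n with no factorials; a sketch:

```python
from math import comb

# Build the B(8, 0.4) pmf from B(0) = q**n via the recurrence
# B(x+1) = ((n - x)/(x + 1)) * (p/q) * B(x), then cross-check against nCx.
n, p = 8, 0.4
q = 1 - p
pmf = [q ** n]
for x in range(n):
    pmf.append(pmf[-1] * (n - x) / (x + 1) * (p / q))

assert abs(sum(pmf) - 1) < 1e-12
assert all(abs(pmf[x] - comb(n, x) * p**x * q**(n - x)) < 1e-12
           for x in range(n + 1))
print([round(v, 4) for v in pmf])
```

This is also how the pmf is usually tabulated in practice, since each term reuses the previous one.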
3.1.4 Bernoulli Distribution
A random variable X is said to follow the Bernoulli distribution if its pmf is given by
f(x) = p^x (1 − p)^(1−x), x = 0, 1
= 0, otherwise.
Here X is a discrete random variable taking only the two values 0 and 1 with corresponding probabilities 1 − p and p respectively.
The r-th moment about the origin is µ'_r = E(X^r) = 0^r·q + 1^r·p = p, r = 1, 2, ...
µ'2 = E(X²) = p
Therefore, Variance = µ2 = p − p² = p(1 − p) = pq
3.1.5 Poisson Distribution
The Poisson distribution is a discrete probability distribution. It was developed by the French mathematician Simeon Denis Poisson in 1837 and is used to represent rare events. The Poisson distribution is a limiting case of the binomial distribution under certain conditions.
Definition: A discrete random variable X is defined to have a Poisson distribution if the probability mass function of X is given by
f(x) = e^(−λ) λ^x / x!, x = 0, 1, 2, ...; λ > 0
= 0, otherwise.
Poisson distribution as a limiting case of the Binomial:
The Poisson distribution is obtained as an approximation to the binomial distribution under the conditions,
(i). n is very large, i.e., n → ∞
(ii). p is very small, i.e., p → 0
(iii). np = λ, a finite quantity.
Proof:
Let X ~ B(n, p); then f(x) = nCx p^x q^(n−x), x = 0, 1, 2, ..., n; p + q = 1
= [n!/(x!(n−x)!)] p^x (1 − p)^(n−x)
= [n(n−1)(n−2)...(n−x+1)/x!] p^x (1 − p)^(n−x)
= [(np)^x/x!]·1(1 − 1/n)(1 − 2/n)...(1 − (x−1)/n)·(1 − p)^n/(1 − p)^x
Now,
lim_{n→∞} 1(1 − 1/n)(1 − 2/n)...(1 − (x−1)/n) = 1
Also np = λ ⇒ p = λ/n, so
lim_{n→∞} (1 − p)^x = lim_{n→∞} (1 − λ/n)^x = 1
lim_{n→∞} (1 − p)^n = lim_{n→∞} (1 − λ/n)^n = e^(−λ)
Applying the above limits, we get
f(x) = e^(−λ) λ^x / x!, x = 0, 1, 2, ...
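The limiting argument above can be seen numerically: holding np = λ fixed while n grows makes the binomial pmf collapse onto the Poisson pmf. A sketch with λ = 2:

```python
from math import comb, exp, factorial

# For large n and small p with np = 2 fixed, B(n, p) approaches Poisson(2).
lam = 2.0
gaps = []
for n in (10, 100, 10000):
    p = lam / n
    gap = max(abs(comb(n, x) * p**x * (1 - p)**(n - x)
                  - exp(-lam) * lam**x / factorial(x))
              for x in range(10))
    gaps.append(gap)
    print(n, f"{gap:.2e}")
# the maximum pointwise gap shrinks toward 0 as n grows
```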
Moments of the Poisson Distribution
Mean:
E(X) = ∑_x x f(x) = ∑_{x=1}^{∞} x e^(−λ) λ^x / x!
= λe^(−λ) ∑_{x=1}^{∞} λ^(x−1)/(x−1)!
= λe^(−λ)·e^λ = λ
Variance,
V(X) = E(X²) − [E(X)]²
where,
E(X²) = ∑_x x² f(x) = ∑_x [x(x−1) + x] f(x)
= ∑_x x(x−1) e^(−λ) λ^x / x! + E(X)
= λ²e^(−λ) ∑_{x=2}^{∞} λ^(x−2)/(x−2)! + λ
= λ²e^(−λ)·e^λ + λ
= λ² + λ
Therefore,
V(X) = λ² + λ − λ² = λ
Also, SD(X) = √λ
For a Poisson distribution, Mean = Variance.
µ3 = µ'3 − 3µ'2µ'1 + 2(µ'1)³
Here µ'1 = λ; µ'2 = λ² + λ.
Now,
µ'3 = E(X³) = ∑_x [x(x−1)(x−2) + 3x² − 2x] f(x)
= ∑_x x(x−1)(x−2) e^(−λ) λ^x / x! + 3E(X²) − 2E(X)
= λ³e^(−λ) ∑_{x=3}^{∞} λ^(x−3)/(x−3)! + 3(λ² + λ) − 2λ
= λ³e^(−λ)·e^λ + 3λ² + λ
= λ³ + 3λ² + λ
Therefore,
µ3 = λ³ + 3λ² + λ − 3(λ² + λ)λ + 2λ³
= λ
In a similar way we can find µ4 = 3λ² + λ.
Measures of Skewness and Kurtosis:
β1 = µ3²/µ2³ = λ²/λ³ = 1/λ
γ1 = √β1 = 1/√λ
Since λ > 0, the Poisson distribution is a positively skewed distribution.
Also,
β2 = µ4/µ2² = (3λ² + λ)/λ² = 3 + 1/λ
γ2 = β2 − 3 = 1/λ
Since λ > 0, the Poisson distribution is leptokurtic.
Moment Generating Function
MX(t) = E(e^(tX)) = ∑_x e^(tx) f(x) = ∑_x e^(tx) e^(−λ) λ^x / x!
= e^(−λ) ∑_x (λe^t)^x / x!
= e^(−λ)·e^(λe^t) = e^(λ(e^t − 1))
Additive property of the Poisson distribution:
Let X1 and X2 be two independent Poisson random variables with parameters λ1 and λ2 respectively. Then X = X1 + X2 follows a Poisson distribution with parameter λ1 + λ2.
Proof:
X1 ~ P(λ1) ⇒ M_{X1}(t) = e^(λ1(e^t − 1))
X2 ~ P(λ2) ⇒ M_{X2}(t) = e^(λ2(e^t − 1))
MX(t) = M_{X1+X2}(t) = M_{X1}(t)·M_{X2}(t), since X1 and X2 are independent.
⇒ MX(t) = e^(λ1(e^t − 1))·e^(λ2(e^t − 1)) = e^((λ1+λ2)(e^t − 1))
Thus,
X = X1 + X2 ~ P(λ1 + λ2)
Remarks:
In general, if Xi ~ P(λi) for i = 1, 2, ..., k and the Xi's are independent, then
X = X1 + X2 + ... + Xk ~ P(λ1 + λ2 + ... + λk)
3.1.6 Geometric Distribution
Definition:
A random variable X is defined to have a geometric distribution if the pmf of X is given by
f(x) = q^x p, for x = 0, 1, 2, ...; q = 1 − p
= 0, otherwise.
Moments:
Mean,
E(X) = ∑_x x f(x) = ∑_x x q^x p
= p[q + 2q² + 3q³ + ...]
= pq[1 + 2q + 3q² + ...]
= pq(1 − q)^(−2)
= pq/p² = q/p
Variance,
V(X) = E(X²) − [E(X)]²
E(X²) = ∑_x x² f(x) = ∑_x [x(x−1) + x] f(x)
= ∑_x x(x−1) q^x p + E(X)
= p[2·1q² + 3·2q³ + 4·3q⁴ + ...] + q/p
= 2pq²[1 + 3q + 6q² + ...] + q/p
= 2pq²(1 − q)^(−3) + q/p = 2q²/p² + q/p
V(X) = 2q²/p² + q/p − (q/p)²
= q²/p² + q/p = (q/p²)(q + p) = q/p²
Moment Generating Function:
MX(t) = E(e^(tX)) = ∑_x e^(tx) q^x p = p ∑_x (qe^t)^x
= p[1 + qe^t + (qe^t)² + ...]
= p(1 − qe^t)^(−1) = p/(1 − qe^t)
Lack of memory property:
If X has the geometric density with parameter p, then
P[X ≥ s + t / X ≥ s] = P(X ≥ t) for s, t = 0, 1, 2, ...
Proof:
P[X ≥ s + t / X ≥ s] = P(X ≥ s + t)/P(X ≥ s)
= (∑_{x=s+t}^{∞} q^x p)/(∑_{x=s}^{∞} q^x p)
= q^(s+t)/q^s = q^t = P(X ≥ t)
Thus the geometric distribution possesses the lack of memory property.
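The lack-of-memory property can be checked by summing the pmf directly; a sketch with an arbitrary illustrative parameter p = 0.3:

```python
# Check P(X >= s+t | X >= s) = P(X >= t) for the geometric pmf
# f(x) = q**x * p, x = 0, 1, 2, ...
p = 0.3
q = 1 - p

def tail(t):
    """P(X >= t), computed by summing the pmf (truncated deep in the tail)."""
    return sum(q**x * p for x in range(t, 400))

for s in range(6):
    for t in range(6):
        lhs = tail(s + t) / tail(s)    # P(X >= s+t | X >= s)
        assert abs(lhs - tail(t)) < 1e-9
print("P(X >= s+t | X >= s) = P(X >= t) holds")
```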
3.2 SOLVED PROBLEMS
Problem 1.
The mean and variance of a binomial random variable X are 12 and 6 respectively. Find (i). P(X = 0) and (ii). P(X > 1).
Solution:
Let X ~ B(n, p).
Given E(X) = np = 12 and V(X) = npq = 6
npq/np = 6/12 ⇒ q = 1/2
⇒ p = 1/2, since p + q = 1
Also, n = 12/p = 24.
(i). P(X = 0) = 24C0 (1/2)^24 = (1/2)^24
(ii). P(X > 1) = 1 − P(X ≤ 1) = 1 − [P(X = 0) + P(X = 1)]
= 1 − [(1/2)^24 + 24(1/2)^24] = 1 − 25(1/2)^24
Problem 2.
If X ~ B(n, p), show that Cov(X/n, (n − X)/n) = −pq/n.
Solution:
Cov(X, Y) = E(XY) − E(X)E(Y)
Therefore,
Cov(X/n, (n − X)/n) = E[(X/n)·((n − X)/n)] − E(X/n)·E((n − X)/n)
= (1/n²){E(nX − X²) − E(X)E(n − X)}
= (1/n²){[nE(X) − E(X²)] − E(X)[n − E(X)]}
= (1/n²){n·np − [n(n−1)p² + np] − np(n − np)}
= (1/n²){n²p − n²p² + np² − np − n²p + n²p²}
= (1/n²)(np² − np)
= −p(1 − p)/n = −pq/n
Problem 3.
If X ~ B(n, p), find the distribution of Y = n − X.
Solution:
The pmf of X is
p(x) = nCx p^x q^(n−x), x = 0, 1, 2, ..., n
Given Y = n − X, i.e., X = n − Y,
⇒ f(y) = nC_{n−y} p^(n−y) q^(n−(n−y)), y = n, n−1, n−2, ..., 0
= [n!/((n−y)! y!)] p^(n−y) q^y, y = 0, 1, 2, ..., n
= nCy q^y p^(n−y), y = 0, 1, 2, ..., n
⇒ Y ~ B(n, q)
Problem 4
If X and Y are independent Poisson variates such that P(X=1) = P(X=2) and P(Y=2) = P(Y=3), find the variance of X − 2Y.
Solution:
Let X ~ P(λ1) and Y ~ P(λ2).
Given, P(X=1) = P(X=2),
i.e., e^(−λ1) λ1/1! = e^(−λ1) λ1²/2!
i.e., 1 = λ1/2, since λ1 > 0
Therefore, λ1 = 2
Also, P(Y=2) = P(Y=3),
i.e., e^(−λ2) λ2²/2! = e^(−λ2) λ2³/3!
i.e., 1 = λ2/3, since λ2 > 0
Therefore, λ2 = 3
Therefore, V(X) = λ1 = 2 and V(Y) = λ2 = 3
Then,
V(X − 2Y) = V(X) + 4V(Y), since X and Y are independent,
= 2 + 4 × 3 = 14
Problem 5.
If X and Y are independent Poisson variates, show that the conditional distribution of X given X + Y is binomial.
Solution:
Given X ~ P(λ1) and Y ~ P(λ2).
Since X and Y are independent, X + Y ~ P(λ1 + λ2).
Therefore,
P(X = x / X + Y = n) = P(X = x, Y = n − x)/P(X + Y = n)
= [e^(−λ1) λ1^x / x!]·[e^(−λ2) λ2^(n−x) / (n−x)!] / [e^(−(λ1+λ2)) (λ1+λ2)^n / n!]
= [n!/(x!(n−x)!)]·λ1^x λ2^(n−x) / (λ1 + λ2)^n
= nCx [λ1/(λ1+λ2)]^x [λ2/(λ1+λ2)]^(n−x)
= nCx p^x q^(n−x),
where p = λ1/(λ1 + λ2) and q = 1 − p, which is a binomial distribution.
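Problem 5's conclusion can be checked numerically; a sketch with arbitrary illustrative values λ1 = 2, λ2 = 3 and n = 6:

```python
from math import comb, exp, factorial

# With X ~ P(2) and Y ~ P(3) independent, P(X = x | X + Y = n)
# should match the B(n, λ1/(λ1+λ2)) = B(6, 0.4) pmf.
l1, l2, n = 2.0, 3.0, 6
p = l1 / (l1 + l2)

def pois(lam, k):
    return exp(-lam) * lam**k / factorial(k)

p_sum = sum(pois(l1, x) * pois(l2, n - x) for x in range(n + 1))  # P(X+Y = n)
for x in range(n + 1):
    cond = pois(l1, x) * pois(l2, n - x) / p_sum
    assert abs(cond - comb(n, x) * p**x * (1 - p)**(n - x)) < 1e-12
print("conditional distribution is B(n, l1/(l1+l2))")
```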
Problem 6.
Let two independent random variables X and Y have the same geometric distribution. Show that the conditional distribution of X given X + Y = n is uniform.
Solution:
Given P(X = k) = P(Y = k) = q^k p, k = 0, 1, 2, ...
P(X = x / X + Y = n) = P(X = x, Y = n − x)/P(X + Y = n) = q^x p·q^(n−x) p / P(X + Y = n)
P(X + Y = n) = P(X = 0, Y = n) + P(X = 1, Y = n − 1) + P(X = 2, Y = n − 2) + ... + P(X = n, Y = 0)
= q⁰pq^n p + q¹pq^(n−1) p + q²pq^(n−2) p + ... + q^n pq⁰p, since X and Y are independent,
= q^n p² + q^n p² + q^n p² + ... + q^n p²
= (n + 1)q^n p²
Therefore,
P(X = x / X + Y = n) = q^n p²/[(n + 1)q^n p²] = 1/(n + 1), x = 0, 1, 2, ..., n,
which is a uniform distribution.
Problem 7.
For a random variable following the geometric distribution with parameter p, prove the recurrence formula P(x + 1) = q·P(x).
Solution:
The pmf of X is p(x) = q^x p, x = 0, 1, 2, ...
Then,
P(x + 1) = q^(x+1) p
P(x) = q^x p
⇒ P(x + 1)/P(x) = q
Hence,
P(x + 1) = q·P(x)
3.3 CONTINUOUS DISTRIBUTIONS
3.3.1 Normal Distribution
A continuous random variable X with pdf
f(x) = [1/(σ√(2π))] e^(-(x-µ)²/(2σ²)), -∞ < x < ∞, -∞ < µ < ∞, σ > 0,
is said to follow a normal distribution with parameters µ and σ, denoted by X ~ N(µ, σ).
Mean and Variance:
Let X ~ N(µ, σ). Then
E(X) = ∫ x [1/(σ√(2π))] e^(-(x-µ)²/(2σ²)) dx, the integral taken over (-∞, ∞).
Put u = (x - µ)/σ, so that x = µ + σu and dx = σ du. Then
E(X) = ∫ (µ + σu) [1/√(2π)] e^(-u²/2) du
= µ ∫ [1/√(2π)] e^(-u²/2) du + σ ∫ u [1/√(2π)] e^(-u²/2) du
= µ × 1 + σ × 0 = µ,
since u e^(-u²/2) is an odd function of u, so its integral over (-∞, ∞) is zero.
Therefore Mean = µ.
V(X) = E[X - E(X)]² = E(X - µ)²
= ∫ (x - µ)² [1/(σ√(2π))] e^(-(x-µ)²/(2σ²)) dx.
Put z = (x - µ)/σ:
= σ² ∫ z² [1/√(2π)] e^(-z²/2) dz
= σ² [2/√(2π)] ∫_0^∞ z² e^(-z²/2) dz, since the integrand is even.
Put z²/2 = u, so z = (2u)^(1/2) and dz = (2u)^(-1/2) du:
= σ² [2/√(2π)] ∫_0^∞ √2 u^(1/2) e^(-u) du
= σ² [2√2/√(2π)] Γ(3/2)
= σ² [2/√π] × (1/2)√π = σ².
i.e., V(X) = σ².
Therefore Standard Deviation = √V(X) = σ.
Odd order moments about the mean:
µ_(2r+1) = E(X - µ)^(2r+1) = ∫ (x - µ)^(2r+1) [1/(σ√(2π))] e^(-(x-µ)²/(2σ²)) dx.
By putting z = (x - µ)/σ,
= [σ^(2r+1)/√(2π)] ∫ z^(2r+1) e^(-z²/2) dz = [σ^(2r+1)/√(2π)] × 0 = 0,
since the integrand is an odd function of z.
i.e., µ_(2r+1) = 0, r = 0, 1, 2, ...
Even order central moments:
µ_(2r) = 1.3.5...(2r - 1)σ^(2r).
Proof:
µ_(2r) = E(X - µ)^(2r) = ∫ (x - µ)^(2r) [1/(σ√(2π))] e^(-(x-µ)²/(2σ²)) dx.
Put z = (x - µ)/σ:
= [σ^(2r)/√(2π)] ∫ z^(2r) e^(-z²/2) dz
= [2σ^(2r)/√(2π)] ∫_0^∞ z^(2r) e^(-z²/2) dz, since the integrand is even.
Put z²/2 = u, so z = (2u)^(1/2) and dz = (2u)^(-1/2) du:
= [2σ^(2r)/√(2π)] ∫_0^∞ 2^(r-1/2) u^(r-1/2) e^(-u) du
= [2^r σ^(2r)/√π] Γ(r + 1/2)
= [2^r σ^(2r)/√π] (r - 1/2)(r - 3/2)...(1/2) Γ(1/2)
= σ^(2r) (2r - 1)(2r - 3)...3.1
i.e., µ_(2r) = 1.3.5...(2r - 1)σ^(2r).
Recurrence relation for even order central moments:
We have
µ_(2r) = 1.3.5...(2r - 1)σ^(2r), so
µ_(2r+2) = 1.3.5...(2r - 1)(2r + 1)σ^(2r+2).
Therefore µ_(2r+2)/µ_(2r) = (2r + 1)σ², i.e.,
µ_(2r+2) = (2r + 1)σ² µ_(2r).
This is the recurrence relation for even order central moments of the normal distribution.
Using this relationship we can find the 2nd and 4th moments:
Put r = 0, then µ2 = σ²; r = 1 ⇒ µ4 = 3σ⁴.
Since µ3 = 0, β1 = 0 and γ1 = 0.
Also, β2 = µ4/µ2² = 3 and γ2 = β2 - 3 = 0.
Moment generating function:
M_X(t) = E(e^(tX)) = ∫ e^(tx) [1/(σ√(2π))] e^(-(x-µ)²/(2σ²)) dx.
Put z = (x - µ)/σ, so x = µ + σz:
= e^(µt) [1/√(2π)] ∫ e^(σtz) e^(-z²/2) dz
= e^(µt) [1/√(2π)] ∫ e^(σ²t²/2) e^(-(z-σt)²/2) dz, by completing the square,
= e^(µt + σ²t²/2) [1/√(2π)] ∫ e^(-(z-σt)²/2) dz.
Put u = z - σt:
= e^(µt + σ²t²/2) [1/√(2π)] ∫ e^(-u²/2) du = e^(µt + σ²t²/2) × 1.
Thus,
M_X(t) = e^(µt + σ²t²/2).
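The mgf formula can be checked by direct numerical integration of E(e^(tX)). This sketch uses a crude midpoint rule over a wide grid (the values of µ, σ and t are chosen here for the demonstration):

```python
import math

# Numerical check: the N(µ, σ) mgf equals exp(µt + σ²t²/2).
mu, sigma, t = 1.5, 2.0, 0.3

def normal_pdf(x):
    return math.exp(-(x - mu)**2 / (2 * sigma**2)) / (sigma * math.sqrt(2 * math.pi))

# midpoint-rule approximation of E[e^{tX}] over µ ± 12σ
lo, hi, n = mu - 12 * sigma, mu + 12 * sigma, 100_000
h = (hi - lo) / n
mgf = 0.0
for i in range(n):
    x = lo + (i + 0.5) * h
    mgf += math.exp(t * x) * normal_pdf(x)
mgf *= h

assert abs(mgf - math.exp(mu * t + sigma**2 * t**2 / 2)) < 1e-5
```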
Additive property:
Let X1 ~ N(µ1, σ1), X2 ~ N(µ2, σ2), and let X1 and X2 be independent. Then
X1 + X2 ~ N(µ1 + µ2, √(σ1² + σ2²)).
Proof:
The mgf's of X1 and X2 are, respectively,
M_X1(t) = e^(µ1 t + σ1²t²/2) and M_X2(t) = e^(µ2 t + σ2²t²/2).
Since X1 and X2 are independent,
M_(X1+X2)(t) = M_X1(t) × M_X2(t)
= e^((µ1+µ2)t + (σ1²+σ2²)t²/2),
which is the mgf of N(µ1 + µ2, √(σ1² + σ2²)).
i.e., X1 + X2 ~ N(µ1 + µ2, √(σ1² + σ2²)).
Remark 1
If X1, X2, ..., Xn are n independent normal variates with mean µi and variance σi², i = 1, 2, ..., n, respectively, then the variate Y = Σ Xi is normally distributed with mean Σ µi and variance Σ σi².
Remark 2
If X1, X2, ..., Xn are n independent normal variates with mean µi and variance σi², i = 1, 2, ..., n, respectively, then the variate Y = Σ ai Xi is normally distributed with mean Σ ai µi and variance Σ ai² σi², where the ai's are constants.
3.3.2 Standard Normal Distribution
A normal distribution with mean µ = 0 and standard deviation σ = 1 is called a
standard normal distribution. If Z is a standard normal variable then its pdf is,
f(z) = [1/√(2π)] e^(-z²/2), -∞ < z < ∞.
Moment generating function:
Putting µ = 0 and σ = 1 in M_X(t) = e^(µt + σ²t²/2),
M_Z(t) = e^(t²/2).
Normal distribution as a limiting form of binomial distribution
Binomial distribution tends to normal distribution under the following conditions
(i). n is very large (n →∞)
(ii). neither p nor q is very small.
Proof:
Let X ~ B(n, p). Then
f(x) = nCx p^x q^(n-x), x = 0, 1, 2, ..., n.
Also, E(X) = np, V(X) = npq and M_X(t) = (q + pe^t)^n.
Define
Z = (X - np)/√(npq).
Now
M_Z(t) = e^(-npt/√(npq)) M_X(t/√(npq))
= e^(-npt/√(npq)) (q + p e^(t/√(npq)))^n.
Then,
log M_Z(t) = -npt/√(npq) + n log(q + p e^(t/√(npq)))
= -npt/√(npq) + n log[q + p(1 + t/√(npq) + t²/(2! npq) + ...)]
= -npt/√(npq) + n log[1 + p(t/√(npq) + t²/(2! npq) + ...)], since q + p = 1.
Using log(1 + x) = x - x²/2 + ...,
= -npt/√(npq) + n[p(t/√(npq) + t²/(2npq) + ...) - (1/2)p²(t/√(npq) + ...)² + ...]
= -npt/√(npq) + npt/√(npq) + t²/(2q) - pt²/(2q) + O(1/√n)
= [t²/(2q)](1 - p) + O(1/√n)
= t²/2 + O(1/√n) → t²/2 as n → ∞.
Therefore
M_Z(t) → e^(t²/2) as n → ∞.
This is the mgf of a standard normal variate. So Z → N(0, 1), i.e.,
Z = (X - np)/√(npq) → N(0, 1) as n → ∞,
and X is approximately N(np, √(npq)) when n is very large.
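The limiting result can be illustrated numerically: for large n the binomial cdf, after standardising, is close to the standard normal cdf Φ. The values of n, p and the evaluation points below are chosen for the demonstration:

```python
import math

# Normal approximation to the binomial cdf, with continuity correction.
n, p = 1000, 0.4
mu, sd = n * p, math.sqrt(n * p * (1 - p))

def phi(z):
    # standard normal cdf via the error function
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def binom_cdf(k):
    return sum(math.comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

for k in (380, 400, 420):
    z = (k + 0.5 - mu) / sd        # continuity correction
    assert abs(binom_cdf(k) - phi(z)) < 0.01
```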
3.3.3 Uniform Distribution (Continuous)
A continuous random variable X is said to have a uniform distribution if its pdf is
given by,
f(x) = 1/(b - a), a ≤ x ≤ b
= 0, elsewhere.
Properties:
1. a and b (a < b) are the two parameters of the uniform distribution on (a, b).
2. This distribution is also known as the rectangular distribution, since the curve y = f(x) describes a rectangle over the x-axis between the ordinates at x = a and x = b.
3. The distribution function F(x) is given by
F(x) = 0, -∞ < x < a
= (x - a)/(b - a), a ≤ x < b
= 1, b ≤ x < ∞.
Moments:
Mean = E(X) = ∫_a^b x/(b - a) dx = (b² - a²)/(2(b - a)) = (a + b)/2.
Variance:
V(X) = E(X²) - [E(X)]².
E(X²) = ∫_a^b x²/(b - a) dx = (b³ - a³)/(3(b - a)) = (a² + ab + b²)/3.
Therefore,
V(X) = (a² + ab + b²)/3 - [(a + b)/2]² = (b - a)²/12.
Also, SD(X) = (b - a)/(2√3).
Moment generating function:
M_X(t) = E(e^(tX)) = ∫_a^b e^(tx)/(b - a) dx = (e^(bt) - e^(at))/((b - a)t), t ≠ 0, with M_X(0) = 1.
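A quick Monte Carlo sanity check of the mean and variance formulas (a sketch, with a and b chosen here for illustration):

```python
import random

# Simulated U(a, b) sample vs. the formulas (a+b)/2 and (b-a)²/12.
random.seed(0)
a, b = 2.0, 10.0
xs = [random.uniform(a, b) for _ in range(200_000)]
mean = sum(xs) / len(xs)
var = sum((x - mean)**2 for x in xs) / len(xs)
assert abs(mean - (a + b) / 2) < 0.05       # (a + b)/2 = 6
assert abs(var - (b - a)**2 / 12) < 0.1     # (b - a)²/12 ≈ 5.33
```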
3.3.4 Gamma Distribution
A continuous r.v. X is said to have a gamma distribution if its probability density
function is given by
f(x) = [m^p/Γ(p)] e^(-mx) x^(p-1), x > 0
= 0, otherwise,
where m > 0, p > 0 are called the parameters of the gamma distribution.
Moments:
Mean,
E(X) = ∫_0^∞ x [m^p/Γ(p)] e^(-mx) x^(p-1) dx = [m^p/Γ(p)] ∫_0^∞ x^p e^(-mx) dx
= [m^p/Γ(p)] × Γ(p + 1)/m^(p+1) = p/m.
Variance,
V(X) = E(X²) - [E(X)]².
E(X²) = [m^p/Γ(p)] ∫_0^∞ x^(p+1) e^(-mx) dx = [m^p/Γ(p)] × Γ(p + 2)/m^(p+2) = p(p + 1)/m².
Therefore,
V(X) = p(p + 1)/m² - (p/m)² = p/m².
Moment generating function:
M_X(t) = E(e^(tX)) = [m^p/Γ(p)] ∫_0^∞ e^(-(m-t)x) x^(p-1) dx
= [m^p/Γ(p)] × Γ(p)/(m - t)^p
= [m/(m - t)]^p = (1 - t/m)^(-p), for t < m.
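The gamma mean p/m and variance p/m² can be illustrated by simulation. Note that Python's `random.gammavariate(alpha, beta)` takes the shape α = p and the scale β = 1/m (parameter values below are chosen for the demo):

```python
import random

# Monte Carlo check of the gamma moments p/m and p/m².
random.seed(1)
p_shape, m_rate = 3.0, 2.0
xs = [random.gammavariate(p_shape, 1 / m_rate) for _ in range(200_000)]
mean = sum(xs) / len(xs)
var = sum((x - mean)**2 for x in xs) / len(xs)
assert abs(mean - p_shape / m_rate) < 0.02        # p/m = 1.5
assert abs(var - p_shape / m_rate**2) < 0.05      # p/m² = 0.75
```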
3.3.5 Exponential Distribution
Let X be a continuous r.v with pdf,
f(x) = λ e^(-λx), x > 0, λ > 0.
Then X is defined to have an exponential distribution.
Moments:
Mean,
E(X) = ∫_0^∞ x λ e^(-λx) dx = λ × Γ(2)/λ² = 1/λ.
Variance,
V(X) = E(X²) - [E(X)]².
E(X²) = ∫_0^∞ x² λ e^(-λx) dx = λ × Γ(3)/λ³ = 2/λ².
Therefore,
V(X) = 2/λ² - (1/λ)² = 1/λ².
Moment Generating Function:
M_X(t) = E(e^(tX)) = ∫_0^∞ e^(tx) λ e^(-λx) dx = λ ∫_0^∞ e^(-(λ-t)x) dx
= λ/(λ - t) = (1 - t/λ)^(-1), for t < λ.
3.3.6 Beta Distribution
Let X be a continuous r.v with pdf,
f(x) = [1/β(m, n)] x^(m-1) (1 - x)^(n-1), 0 < x < 1, m > 0, n > 0.
Then X is said to have a beta distribution of the first kind, denoted β1(m, n).
Moments:
Mean,
E(X) = [1/β(m, n)] ∫_0^1 x^m (1 - x)^(n-1) dx = β(m + 1, n)/β(m, n)
= [Γ(m + 1)Γ(n)/Γ(m + n + 1)] × [Γ(m + n)/(Γ(m)Γ(n))]
= m/(m + n).
Variance:
V(X) = E(X²) - [E(X)]².
E(X²) = [1/β(m, n)] ∫_0^1 x^(m+1) (1 - x)^(n-1) dx = β(m + 2, n)/β(m, n)
= [Γ(m + 2)Γ(n)/Γ(m + n + 2)] × [Γ(m + n)/(Γ(m)Γ(n))]
= m(m + 1)/[(m + n)(m + n + 1)].
Therefore,
V(X) = m(m + 1)/[(m + n)(m + n + 1)] - [m/(m + n)]²
= mn/[(m + n)²(m + n + 1)].
3.3.7 LogNormal Distribution
Let X be a positive random variable, and let Y = log_e X. If Y has a normal
distribution, then X is said to have a lognormal distribution. The pdf of the
lognormal distribution is given by
f(x) = [1/(xσ√(2π))] e^(-(log x - µ)²/(2σ²)), 0 < x < ∞, -∞ < µ < ∞, σ > 0.
Moments:
µ'_r = E(X^r) = E(e^(rY)) = M_Y(r) = e^(rµ + r²σ²/2), since Y ~ N(µ, σ).
In particular,
Mean = E(X) = e^(µ + σ²/2)
and
Variance = V(X) = E(X²) - [E(X)]² = e^(2µ + σ²)(e^(σ²) - 1).
3.3.8 Pareto Distribution
Let X be a continuous random variable. If the pdf of X is given by
f(x) = α x0^α / x^(α+1), x ≥ x0, α > 0, x0 > 0,
then X is said to follow a Pareto distribution.
Mean:
E(X) = α x0/(α - 1), for α > 1.
Variance:
V(X) = α x0²/[(α - 1)²(α - 2)], for α > 2.
The mgf of the Pareto distribution does not exist.
3.3.9 Cauchy Distribution
A continuous random variable X is said to follow Cauchy distribution if its pdf is given
by
f(x) = 1/{πβ[1 + ((x - µ)/β)²]}, -∞ < x < ∞, -∞ < µ < ∞, β > 0.
The two parameters of this distribution are µ and β.
If µ = 0 and β = 1, then the pdf of the Cauchy distribution will be
f(x) = 1/[π(1 + x²)], -∞ < x < ∞.
Properties
1. For a Cauchy distribution mean does not exist.
2. For a Cauchy distribution variance does not exist.
3. mgf of Cauchy distribution does not exist.
3.4 SOLVED PROBLEMS
Problem 1.
If X ~ N(12, 4), find
(i) P(X ≥ 20)
(ii) P(0 ≤ X ≤ 12)
(iii) a such that P(X > a) = 0.24.
Solution:
We have Z = (X - 12)/4 ~ N(0, 1).
(i) P(X ≥ 20) = P(Z ≥ (20 - 12)/4) = P(Z ≥ 2) = 0.5 - P(0 < Z < 2) = 0.5 - 0.4772 = 0.0228.
(ii) P(0 ≤ X ≤ 12) = P((0 - 12)/4 ≤ Z ≤ 0) = P(-3 ≤ Z ≤ 0) = P(0 ≤ Z ≤ 3) = 0.4987.
(iii) Given P(X > a) = 0.24 ⇒ P(Z > (a - 12)/4) = 0.24.
Hence P(0 < Z < (a - 12)/4) = 0.5 - 0.24 = 0.26.
From a standard normal table, (a - 12)/4 = 0.71 ⇒ a = 14.84.
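These table lookups can be reproduced with the standard normal cdf Φ computed via the error function (the table values above are rounded to four decimals):

```python
import math

# Φ via erf; check the three answers of Problem 1.
mu, sigma = 12, 4
phi = lambda z: 0.5 * (1 + math.erf(z / math.sqrt(2)))

assert abs((1 - phi(2)) - 0.0228) < 5e-4             # (i)  P(X ≥ 20)
assert abs((phi(0) - phi(-3)) - 0.4987) < 5e-4       # (ii) P(0 ≤ X ≤ 12)
a = 14.84
assert abs((1 - phi((a - mu) / sigma)) - 0.24) < 5e-3  # (iii) P(X > a) ≈ 0.24
```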
Problem 2.
Find k, if P(X ≤ k) = 2P(X > k), where X ~ N(µ, σ).
Solution:
Given that
P(X ≤ k) = 2P(X > k)
⇒ 1 - P(X > k) = 2P(X > k)
⇒ 3P(X > k) = 1
⇒ P(X > k) = 1/3 = 0.333.
i.e., P(Z > (k - µ)/σ) = 0.333.
From the standard normal table, (k - µ)/σ = 0.44 (approximately).
Then k = µ + 0.44σ.
Problem 3.
If X is a normal random variable with mean 6 and variance 49, and if
P(3X + 8 ≥ λ) = P(4X - 7 ≤ µ) and P(5X - 2 ≥ µ) = P(2X + 1 ≤ λ), find λ and µ.
Solution:
Given X ~ N(6, 7), so Z = (X - 6)/7 ~ N(0, 1).
P(3X + 8 ≥ λ) = P(4X - 7 ≤ µ) ⇒ P(X ≥ (λ - 8)/3) = P(X ≤ (µ + 7)/4) - - - - (1)
P(5X - 2 ≥ µ) = P(2X + 1 ≤ λ) ⇒ P(X ≥ (µ + 2)/5) = P(X ≤ (λ - 1)/2) - - - - (2)
From (1),
P(Z ≥ (λ - 26)/21) = P(Z ≤ (µ - 17)/28).
From the standard normal curve, if P(Z ≥ a) = P(Z ≤ b), then a = -b. That is,
(λ - 26)/21 = -(µ - 17)/28 ⇒ 4λ + 3µ - 155 = 0 - - - - (3)
From (2),
P(Z ≥ (µ - 28)/35) = P(Z ≤ (λ - 13)/14)
⇒ (µ - 28)/35 = -(λ - 13)/14 ⇒ 5λ + 2µ - 121 = 0 - - - - (4)
Solving (3) and (4) we get λ = 53/7 ≈ 7.571 and µ = 291/7 ≈ 41.571.
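The linear system (3)-(4) can be solved exactly by Cramer's rule, confirming the stated answers:

```python
from fractions import Fraction

# 4λ + 3µ = 155 and 5λ + 2µ = 121, solved exactly.
det = Fraction(4 * 2 - 3 * 5)                  # determinant = -7
lam = Fraction(155 * 2 - 3 * 121, 1) / det     # λ = 53/7
mu = Fraction(4 * 121 - 155 * 5, 1) / det      # µ = 291/7
assert lam == Fraction(53, 7) and mu == Fraction(291, 7)
assert abs(float(lam) - 7.571) < 1e-3 and abs(float(mu) - 41.571) < 1e-3
```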
Problem 4.
For a rectangular distribution with
f(x) = 1/(2a), -a < x < a,
show that µ_(2r) = a^(2r)/(2r + 1).
Solution:
We have E(X) = ∫ x f(x) dx = [1/(2a)] ∫_(-a)^a x dx = 0.
Therefore,
µ_(2r) = E[X - E(X)]^(2r) = E[X^(2r)]
= [1/(2a)] ∫_(-a)^a x^(2r) dx
= [1/(2a)] × [x^(2r+1)/(2r + 1)] from -a to a
= [1/(2a)] × 2a^(2r+1)/(2r + 1)
= a^(2r)/(2r + 1).
Problem 5
If X1, X2, ..., Xn are n independent random variables, each following an exponential
distribution with parameter λ, find the distribution of Y = Σ Xi.
Solution:
Given that each Xi is exponential with parameter λ. Therefore,
M_Xi(t) = (1 - t/λ)^(-1).
Then,
M_Y(t) = M_(ΣXi)(t) = Π M_Xi(t) = (1 - t/λ)^(-n),
since the Xi are independent.
This is the mgf of a gamma distribution with parameters λ and n. Therefore the pdf of
Y is given by,
f(y) = [λ^n/Γ(n)] e^(-λy) y^(n-1), y > 0
= 0, elsewhere.
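The gamma limit can be illustrated by simulation: a sum of n iid Exp(λ) variables should have mean n/λ and variance n/λ², the gamma moments (n and λ below are chosen for the demo):

```python
import random

# Monte Carlo check: sums of exponentials have the gamma moments n/λ and n/λ².
random.seed(2)
n_terms, lam = 5, 2.0
ys = [sum(random.expovariate(lam) for _ in range(n_terms)) for _ in range(100_000)]
mean = sum(ys) / len(ys)
var = sum((y - mean)**2 for y in ys) / len(ys)
assert abs(mean - n_terms / lam) < 0.03       # n/λ = 2.5
assert abs(var - n_terms / lam**2) < 0.05     # n/λ² = 1.25
```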
Chapter 4
LAW OF LARGE NUMBERS
4.1 Chebyshev’s Inequality:
Let X be a random variable for which the mean µ and variance σ² exist. Then for any
t > 0,
P(|X - µ| ≥ tσ) ≤ 1/t², or equivalently,
P(|X - µ| < tσ) ≥ 1 - 1/t².
Proof:
Let X be a continuous random variable with pdf f(x). Then
σ² = E[X - E(X)]² = E[X - µ]²
= ∫ (x - µ)² f(x) dx, taken over (-∞, ∞),
= ∫ from -∞ to µ-tσ + ∫ from µ-tσ to µ+tσ + ∫ from µ+tσ to ∞, of (x - µ)² f(x) dx.
Since f(x) is a pdf, it is non-negative, so (x - µ)² f(x) is always non-negative and the
middle integral is ≥ 0. Therefore,
σ² ≥ ∫_(-∞)^(µ-tσ) (x - µ)² f(x) dx + ∫_(µ+tσ)^∞ (x - µ)² f(x) dx.
In the region x ≤ µ - tσ we have x - µ ≤ -tσ, and in the region x ≥ µ + tσ we have
x - µ ≥ tσ; in both regions (x - µ)² ≥ t²σ². Therefore,
σ² ≥ t²σ² [∫_(-∞)^(µ-tσ) f(x) dx + ∫_(µ+tσ)^∞ f(x) dx]
= t²σ² [P(X ≤ µ - tσ) + P(X ≥ µ + tσ)]
= t²σ² P(|X - µ| ≥ tσ)
⇒ P(|X - µ| ≥ tσ) ≤ 1/t².
Also, since P(|X - µ| < tσ) = 1 - P(|X - µ| ≥ tσ),
P(|X - µ| < tσ) ≥ 1 - 1/t².
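The inequality holds for any distribution with finite variance. The following sketch checks it empirically on an Exp(1) sample (the distribution and t values are chosen here for illustration):

```python
import random

# Empirical check of P(|X - µ| ≥ tσ) ≤ 1/t² on Exp(1), where µ = σ = 1.
random.seed(3)
mu, sigma = 1.0, 1.0
xs = [random.expovariate(1.0) for _ in range(100_000)]
for t in (1.5, 2.0, 3.0):
    tail = sum(abs(x - mu) >= t * sigma for x in xs) / len(xs)
    assert tail <= 1 / t**2
```

For this distribution the bound is far from tight (the true tail is much smaller), which is typical of Chebyshev's inequality.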
4.2 Convergence in Probability:
Let X1, X2, ... be a sequence of random variables. The random variable Xn is said to
converge in probability to a constant θ if, for any ε > 0,
P(|Xn - θ| ≥ ε) → 0 as n → ∞.
This type of convergence is also referred to as stochastic convergence or statistical
convergence.
4.3 Bernoulli’s law of large numbers (BLLN):
Consider a random experiment with only two possible outcomes success and failure. Let
p be the probability of success, and suppose it remains unchanged from trial to trial. Let n
independent trials of the experiment be conducted, and let Xn be the number of successes
in these n trials. Bernoulli's law of large numbers (BLLN) states that for any ε > 0,
P(|Xn/n - p| < ε) → 1 as n → ∞.
Proof:
We have Xn ~ B(n, p). Hence E(Xn) = np and V(Xn) = npq.
⇒ E(Xn/n) = p and V(Xn/n) = pq/n.
Applying Chebyshev's inequality to the variable Xn/n,
P(|Xn/n - p| < tσ) ≥ 1 - 1/t², where σ = √(pq/n).
Put tσ = ε, i.e., t = ε√(n/pq). Then
P(|Xn/n - p| < ε) ≥ 1 - pq/(nε²).
As n → ∞, pq/(nε²) → 0, and hence
P(|Xn/n - p| < ε) → 1 as n → ∞.
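The law can be illustrated by simulation: the exceedance frequency of the proportion Xn/n stays below the Chebyshev bound pq/(nε²) and shrinks as n grows (p, ε and the sample sizes are chosen here for the demo; a small slack term absorbs Monte Carlo noise):

```python
import random

# BLLN sketch: frequency of |Xn/n - p| ≥ ε vs. the bound pq/(nε²).
random.seed(4)
p, eps, reps = 0.3, 0.05, 500
for n in (100, 500, 2000):
    exceed = sum(
        abs(sum(random.random() < p for _ in range(n)) / n - p) >= eps
        for _ in range(reps)
    )
    assert exceed / reps <= p * (1 - p) / (n * eps**2) + 0.05
```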
4.4 Weak law of large numbers(WLLN):
Let X1, X2, ..., Xn be a sequence of random variables with E(Xi) = µi, i = 1, 2, ..., n.
Let Sn = X1 + X2 + ... + Xn, Mn = µ1 + µ2 + ... + µn and Bn = V(X1 + X2 + ... + Xn).
Then for any ε > 0,
P(|Sn/n - Mn/n| ≥ ε) → 0 as n → ∞,
provided Bn/n² → 0 as n → ∞.
Proof:
Applying Chebyshev's inequality to the variable Sn/n,
P(|Sn/n - E(Sn/n)| ≥ tσ) ≤ 1/t², where σ² = V(Sn/n). . . . . (1)
Here
E(Sn/n) = (1/n)E(X1 + X2 + ... + Xn) = Mn/n
and
V(Sn/n) = (1/n²)V(X1 + X2 + ... + Xn) = Bn/n².
Put tσ = ε, i.e., t² = ε²n²/Bn. Then (1) gives
P(|Sn/n - Mn/n| ≥ ε) ≤ Bn/(n²ε²).
Hence, as n → ∞,
P(|Sn/n - Mn/n| ≥ ε) → 0, provided Bn/n² → 0 as n → ∞.
4.5 Central limit theorem (CLT)
CLT states that the sum of a very large number of random variables is
approximately normally distributed with mean equal to sum of means of the
variables and variance equal to sum of the variances of the variables provided the
random variables satisfy certain very general assumptions.
4.6 Lindeberg-Levy form of CLT:
Let X1, X2, ..., Xn be a sequence of independent and identically distributed random
variables with E(Xi) = µ and V(Xi) = σ², i = 1, 2, ..., n, where we assume that
0 < σ² < ∞. Letting Sn = X1 + X2 + ... + Xn, the normalised random variable
Z = (Sn - nµ)/(σ√n) → N(0, 1) as n → ∞.
Proof:
Given E(Xi) = µ, V(Xi) = σ², i = 1, 2, ..., n, and Sn = X1 + X2 + ... + Xn = Σ Xi.
Assume that M_Xi(t) exists for i = 1, 2, ..., n.
Now Z = (Sn - nµ)/(σ√n) = Σ (Xi - µ)/(σ√n), so
M_Z(t) = M_(Σ(Xi - µ))(t/(σ√n)) = Π M_(Xi - µ)(t/(σ√n)) = [M_(X1 - µ)(t/(σ√n))]^n,
since the Xi's are independent and identically distributed.
Since Xi - µ has mean 0 and variance σ²,
M_(Xi - µ)(t/(σ√n)) = 1 + t²/(2n) + O(1/n^(3/2)),
where O(1/n^(3/2)) denotes terms in 1/n^(3/2) and higher powers of 1/√n.
Therefore,
log M_Z(t) = n log[1 + t²/(2n) + O(1/n^(3/2))]
= n[t²/(2n) + O(1/n^(3/2)) - (1/2)(t²/(2n) + ...)² + ...]
= t²/2 + O(1/√n) → t²/2 as n → ∞.
Therefore,
M_Z(t) → e^(t²/2) as n → ∞.
This is the mgf of a standard normal variable.
i.e., Z → N(0, 1) as n → ∞.
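The theorem can be illustrated by simulation: standardised sums of iid U(0, 1) variables have an empirical cdf close to Φ (the summand distribution, n and the evaluation points are chosen here for the demo):

```python
import math
import random

# CLT sketch: standardised sums of n iid U(0,1) variables vs. Φ.
random.seed(5)
mu, sigma = 0.5, math.sqrt(1 / 12)      # mean and sd of U(0, 1)
n, reps = 30, 20_000
phi = lambda z: 0.5 * (1 + math.erf(z / math.sqrt(2)))

zs = [(sum(random.random() for _ in range(n)) - n * mu) / (sigma * math.sqrt(n))
      for _ in range(reps)]
for z0 in (-1.0, 0.0, 1.0):
    emp = sum(z <= z0 for z in zs) / reps   # empirical cdf at z0
    assert abs(emp - phi(z0)) < 0.02
```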
4.7 SOLVED PROBLEMS
Problem 1
For a geometric distribution with f(x) = 1/2^x, x = 1, 2, ..., use Chebyshev's
inequality to prove that P(|X - 2| ≥ 2) ≤ 1/2.
Solution:
We have,
E(X) = Σ x f(x) = 1 × (1/2) + 2 × (1/4) + 3 × (1/8) + ... = 2.
E(X²) = Σ x² f(x) = 1² × (1/2) + 2² × (1/4) + 3² × (1/8) + ... = 6.
Hence
V(X) = E(X²) - [E(X)]² = 6 - 4 = 2, so σ = √2.
By Chebyshev's Inequality,
P(|X - µ| ≥ tσ) ≤ 1/t².
Put tσ = 2, so t = 2/√2 = √2. Then
P(|X - 2| ≥ 2) ≤ 1/t² = 1/2.
2)] Problem 2
Find the least value of the probability P(1 ≤ X ≤ 7), where X is a random variable with
E(X) = 4 and V(X) = 4.
Solution:
By Chebyshev's Inequality,
P(|X - µ| < kσ) ≥ 1 - 1/k², i.e.,
P(|X - 4| < 2k) ≥ 1 - 1/k².
But we have to find the least value of P(1 ≤ X ≤ 7):
P(1 ≤ X ≤ 7) = P(1 - 4 ≤ X - 4 ≤ 7 - 4) = P(-3 ≤ X - 4 ≤ 3) = P(|X - 4| ≤ 3).
Put 2k = 3, then k = 3/2.
Therefore,
P(1 ≤ X ≤ 7) = P(|X - 4| ≤ 3) ≥ 1 - 1/(3/2)² = 1 - 4/9 = 5/9.
Thus the least value of the probability is 5/9.
Problem 3
If X ~ B(100, 0.5), using Chebyshev's Inequality obtain the lower bound for
P(|X - 50| < 7.5).
Solution:
Given X ~ B(100, 0.5). Then mean µ = np = 100 × 0.5 = 50 and
σ² = npq = 100 × 0.5 × 0.5 = 25, so σ = 5.
By Chebyshev's Inequality,
P(|X - µ| < tσ) ≥ 1 - 1/t², i.e.,
P(|X - 50| < 5t) ≥ 1 - 1/t².
Put 5t = 7.5, then t = 1.5, and we get
P(|X - 50| < 5 × 1.5) ≥ 1 - 1/1.5² = 1 - 0.444
⇒ P(|X - 50| < 7.5) ≥ 0.556.
i.e., the lower bound is approximately 0.56.
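For comparison, the exact binomial probability can be computed directly; it is far larger than the Chebyshev lower bound, which illustrates how conservative the bound is:

```python
import math

# Exact P(|X - 50| < 7.5) for X ~ B(100, 0.5) vs. the Chebyshev bound 0.56.
n, p = 100, 0.5
exact = sum(math.comb(n, k) * p**k * (1 - p)**(n - k)
            for k in range(43, 58))          # k with |k - 50| < 7.5
assert exact >= 0.56                         # Chebyshev bound holds
assert abs(exact - 0.866) < 0.01             # exact value ≈ 0.87
```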