# STATISTICS: Probability Distributions (University of Calicut)

STATISTICS: PROBABILITY DISTRIBUTIONS
Complementary Course for B.Sc. Mathematics, II Semester, CU-CBCSS

UNIVERSITY OF CALICUT
SCHOOL OF DISTANCE EDUCATION
STUDY MATERIAL
Calicut University P.O., Malappuram, Kerala, India 673 635

INDEX
Module I: Mathematical Expectation
Module II: Bivariate Probability Distributions
Module III
Module IV

Prepared and scrutinised by: Dr. K.X. Joseph, Director, University of Calicut
Layout & Settings: Computer Section, SDE
© Reserved
Module I
MATHEMATICAL EXPECTATION
We have studied earlier the important ideas on probability,
random variables and the probability distributions associated with
them. A probability distribution can often be summarized in terms
of a few of its characteristics known as the moments of the
distribution. For this purpose, here, we introduce the notion of
‘Mathematical Expectation’.
Mathematical Expectation
With every distribution of a random variable we can associate
certain numbers called the ‘parameters of the distribution’,
which play an important role in mathematical statistics. The
distribution function F(x) or the density function f(x) completely
characterizes the behaviour of a random variable X. Frequently,
however, we need a more concise description, such as a single
number or a few numbers, rather than an entire function. One
such number is the expectation or the mean of a random
variable X denoted by E(X).
Definition

If X is a discrete r.v. with probability function p(x) [or f(x)], the expected value of X is

$E(X) = \sum_x x\,p(x)$,

as long as the sum is absolutely convergent. If X is a continuous r.v. with probability density function f(x), the expected value of X is

$E(X) = \int_{-\infty}^{\infty} x\,f(x)\,dx$,

as long as the integral is absolutely convergent.
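As a quick numeric illustration of these two definitions (a sketch, not part of the original text): the die distribution and the density f(x) = 2x on (0, 1) used below are hypothetical examples chosen for checking, and the integral is approximated by a simple midpoint rule.

```python
# Illustrative sketch: E(X) by direct summation (discrete case) and by
# numerical integration (continuous case). Example distributions are
# hypothetical, not taken from the text.

def expect_discrete(values, probs):
    """E(X) = sum over x of x * p(x)."""
    return sum(x * p for x, p in zip(values, probs))

def expect_continuous(f, a, b, n=20_000):
    """E(X) ~ integral of x * f(x) dx, midpoint rule on [a, b]."""
    h = (b - a) / n
    return sum((a + (i + 0.5) * h) * f(a + (i + 0.5) * h)
               for i in range(n)) * h

# Fair die: E(X) = (1 + 2 + ... + 6)/6 = 3.5
die_mean = expect_discrete(range(1, 7), [1 / 6] * 6)

# f(x) = 2x on (0, 1): E(X) = integral of 2x^2 dx = 2/3
tri_mean = expect_continuous(lambda x: 2 * x, 0.0, 1.0)
```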
Note:

i. If X is a random variable assuming the values x1, x2, …, xn with the corresponding probabilities p1, p2, …, pn, then the expected value of X is defined as

$E(X) = x_1p_1 + x_2p_2 + \dots + x_np_n = \sum_{i=1}^{n} x_ip_i$, with $\sum_i p_i = 1$.

ii. E(X) denotes the 'average' of the values that the random variable takes on, where each value is weighted with the probability that the random variable is equal to that value.

iii. As regards terminology, expected value is also called expectation, mathematical expectation, mean or average.

Expectation of a function of a r.v.

Let g(X) be a function of a random variable X. The expected value of g(X) is

$E[g(X)] = \sum_x g(x)\,p(x)$, if X is discrete

$E[g(X)] = \int_{-\infty}^{\infty} g(x)\,f(x)\,dx$, if X is continuous.

Properties:

1. If X is a r.v., then E(c) = c, c being a constant.
2. E[c g(X)] = c E[g(X)], c a constant.
3. E(aX + b) = a E(X) + b, where a and b are constants.
4. E[g(X) + h(X)] = E[g(X)] + E[h(X)], as long as the expectations exist. We can easily verify this result.

Moments

In statistics, moments of the distribution of a random variable, or simply moments of a random variable, are of special importance.

Definition

Let X be a r.v. The r-th raw moment about a value, say $x_0$, is defined as

$\mu_r'(x_0) = E(X - x_0)^r$, r = 0, 1, 2, …

$= \sum_x (x - x_0)^r\,p(x)$, if X is discrete

$= \int_x (x - x_0)^r\,f(x)\,dx$, if X is continuous,

as long as the sum or integral is absolutely convergent.

In particular, when $x_0 = 0$, we get the raw moments about the origin. Thus the r-th raw moment about the origin is defined as

$\mu_r' = E(X^r) = \sum_x x^r\,p(x)$, if X is discrete; $= \int_x x^r\,f(x)\,dx$, if X is continuous.

Putting

r = 1, $\mu_1' = E(X)$. This is the mean of the random variable X and it is denoted by $\mu$.
r = 2, $\mu_2' = E(X^2)$,
r = 3, $\mu_3' = E(X^3)$ and
r = 4, $\mu_4' = E(X^4)$.

Definition

The r-th central moment, or moment about the expected value, is defined as

$\mu_r = E[X - E(X)]^r = E(X - \mu)^r$, r = 0, 1, 2, 3, …

$= \sum_x (x - \mu)^r\,p(x)$, if X is discrete

$= \int_x (x - \mu)^r\,f(x)\,dx$, if X is continuous.
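The raw and central moment definitions above can be sketched directly for a discrete distribution (illustrative Python, not part of the original text; the three-point distribution is a hypothetical example):

```python
# Sketch: r-th raw moment about the origin, mu'_r = E(X^r), and
# r-th central moment, mu_r = E[(X - mu)^r], for a discrete r.v.

def raw_moment(values, probs, r):
    return sum(x**r * p for x, p in zip(values, probs))

def central_moment(values, probs, r):
    mu = raw_moment(values, probs, 1)      # mu = E(X)
    return sum((x - mu)**r * p for x, p in zip(values, probs))

vals, probs = [0, 1, 2], [0.25, 0.5, 0.25]   # hypothetical distribution
m1 = raw_moment(vals, probs, 1)      # E(X) = 1
mu2 = central_moment(vals, probs, 2) # variance = 0.5
mu0 = central_moment(vals, probs, 0) # always 1
mu1 = central_moment(vals, probs, 1) # always 0
```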
Putting

r = 1, $\mu_1 = E[X - E(X)] = E(X) - E(X) = 0$.
r = 2, $\mu_2 = E[X - E(X)]^2$. This is called the variance of X and it is denoted by $\sigma^2$, Var(X) or V(X).
r = 3, $\mu_3 = E[X - E(X)]^3$.
r = 4, $\mu_4 = E[X - E(X)]^4$.

We can note that $\mu_0 = 1$ and $\mu_1 = 0$.

Results

1. $V(X) = E(X^2) - \{E(X)\}^2$

Proof

V(X) $= E[X - E(X)]^2$
$= E[X^2 - 2X E(X) + \{E(X)\}^2]$
$= E(X^2) - 2E(X)E(X) + \{E(X)\}^2$
$= E(X^2) - \{E(X)\}^2$

2. $V(aX \pm b) = a^2 V(X)$, where a and b are constants.

Proof

$V(aX \pm b) = E[aX \pm b - E(aX \pm b)]^2$
$= E[aX \pm b - (aE(X) \pm b)]^2$
$= E[aX - aE(X)]^2$
$= a^2 E[X - E(X)]^2$
$= a^2 V(X)$

Example: V(3X + 4) = 9 V(X)

Relation between raw moments and central moments

We can express the r-th central moment in terms of raw moments of order r or less. That is,

$\mu_r = \mu_r' - \binom{r}{1}\mu_{r-1}'\mu_1' + \binom{r}{2}\mu_{r-2}'(\mu_1')^2 - \dots + (-1)^r(\mu_1')^r$

Proof

For convenience, let us consider the raw moments about the origin.

$\mu_r = E[X - E(X)]^r = E[X - \mu_1']^r$
$= E\big[X^r - \tbinom{r}{1}X^{r-1}\mu_1' + \tbinom{r}{2}X^{r-2}(\mu_1')^2 - \dots + (-1)^r(\mu_1')^r\big]$
$= E(X^r) - \tbinom{r}{1}E(X^{r-1})\mu_1' + \tbinom{r}{2}E(X^{r-2})(\mu_1')^2 - \dots + (-1)^r(\mu_1')^r$

i.e., $\mu_r = \mu_r' - \binom{r}{1}\mu_{r-1}'\mu_1' + \binom{r}{2}\mu_{r-2}'(\mu_1')^2 - \dots + (-1)^r(\mu_1')^r$

Putting r = 1, 2, 3 and 4, we get the first four central moments as follows:

$\mu_1 = 0$
$\mu_2 = \mu_2' - (\mu_1')^2$
$\mu_3 = \mu_3' - 3\mu_2'\mu_1' + 2(\mu_1')^3$
$\mu_4 = \mu_4' - 4\mu_3'\mu_1' + 6\mu_2'(\mu_1')^2 - 3(\mu_1')^4$

Summary Measures

A. Measures of Central Tendency

1. Arithmetic Mean

AM $= E(X) = \sum_x x\,p(x)$ if X is discrete; $= \int_{-\infty}^{\infty} x\,f(x)\,dx$ if X is continuous.

i.e., AM $= \mu_1'(0) = E(X)$.

2. Median

The median is that value M of the r.v. which satisfies the equation

$\int_{-\infty}^{M} f(x)\,dx = \tfrac{1}{2}$ or $\int_{M}^{\infty} f(x)\,dx = \tfrac{1}{2}$.
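The two Results above can be checked numerically; the sketch below (illustrative Python, with a hypothetical four-point distribution) verifies $V(X) = E(X^2) - \{E(X)\}^2$ and $V(aX + b) = a^2 V(X)$:

```python
# Sketch checking V(X) = E(X^2) - {E(X)}^2 and V(aX + b) = a^2 V(X)
# on a hypothetical discrete distribution.

def ev(values, probs, g=lambda x: x):
    """E[g(X)] for a discrete distribution."""
    return sum(g(x) * p for x, p in zip(values, probs))

vals, probs = [1, 2, 3, 4], [0.1, 0.2, 0.3, 0.4]
mean = ev(vals, probs)                                   # E(X) = 3.0
var = ev(vals, probs, lambda x: (x - mean) ** 2)         # definition
var_alt = ev(vals, probs, lambda x: x ** 2) - mean ** 2  # Result 1

a, b = 3, 4
shifted = [a * x + b for x in vals]
var_affine = ev(shifted, probs, lambda y: (y - (a * mean + b)) ** 2)
```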
3. Mode

In the case of a discrete r.v., the mode is that value of the r.v. having the maximum probability. If x is the mode, f(x) will be maximum. In this case we get two conditions: (i) f(x) > f(x − 1) and (ii) f(x) > f(x + 1), and solving them we get the mode. In the case of a continuous r.v., the maximum of f(x) is given by the conditions (i) $f'(x) = 0$ and (ii) $f''(x) < 0$.

B. Measures of Dispersion

1. Quartile Deviation

QD $= \dfrac{Q_3 - Q_1}{2}$. Here $Q_1$ and $Q_3$ are given by the equations

$\int_{-\infty}^{Q_1} f(x)\,dx = \tfrac{1}{4}$ and $\int_{-\infty}^{Q_3} f(x)\,dx = \tfrac{3}{4}$.

2. Mean Deviation

MD $= E|X - E(X)| = E|X - \mu|$

$= \sum_x |x - \mu|\,p(x)$, if X is discrete

$= \int_x |x - \mu|\,f(x)\,dx$, if X is continuous.

3. Standard Deviation

SD is the square root of the variance. It is denoted by $\sigma$ (sigma), i.e., $\sigma = \sqrt{V(X)}$.

Here, $\sigma^2 = V(X) = E(X^2) - \{E(X)\}^2 = \mu_2' - (\mu_1')^2$,

where $E(X^2) = \sum_x x^2\,p(x)$ if X is discrete, $= \int x^2 f(x)\,dx$ if X is continuous, and $E(X) = \sum_x x\,p(x)$ or $\int x f(x)\,dx$ correspondingly.

C. Measures of Skewness

The moment measures of skewness are the following:

$\beta_1 = \dfrac{\mu_3^2}{\mu_2^3}$ and $\gamma_1 = \dfrac{\mu_3}{\sigma^3}$.

A distribution is said to be positively skewed, symmetric or negatively skewed according as $\gamma_1 >, =, < 0$, which demands $\mu_3 >, =, < 0$.

D. Measures of Kurtosis

$\beta_2 = \dfrac{\mu_4}{\mu_2^2}$ and $\gamma_2 = \beta_2 - 3$.

A distribution is said to be leptokurtic, mesokurtic or platykurtic according as $\beta_2 >, =, < 3$, or $\gamma_2 >, =, < 0$.
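The skewness and kurtosis measures above can be computed directly from central moments. A minimal sketch (Python, not part of the text) using a hypothetical symmetric distribution, which should give $\gamma_1 = 0$ and, here, a negative $\gamma_2$ (platykurtic):

```python
# Sketch: gamma1 = mu3 / sigma^3, beta2 = mu4 / mu2^2, gamma2 = beta2 - 3,
# computed from central moments of a hypothetical symmetric distribution.

def central(values, probs, r):
    mu = sum(x * p for x, p in zip(values, probs))
    return sum((x - mu) ** r * p for x, p in zip(values, probs))

vals, probs = [-1, 0, 1], [0.25, 0.5, 0.25]  # symmetric about 0
mu2 = central(vals, probs, 2)   # 0.5
mu3 = central(vals, probs, 3)   # 0 (symmetry)
mu4 = central(vals, probs, 4)   # 0.5

gamma1 = mu3 / mu2 ** 1.5       # 0 for a symmetric distribution
beta2 = mu4 / mu2 ** 2          # 2 here
gamma2 = beta2 - 3              # negative -> platykurtic
```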
Moment Generating Function (mgf)

Since the moments are very useful in representing the characteristic behaviour of a random variable, a function which generates the moments is of great importance in theoretical study.

The moments of a r.v. can be evaluated by expectations of powers, directly from the definition, as explained earlier. They can also be evaluated from the mgf of the probability law.

Definition

The moment generating function of a random variable X is

$M_X(t) = E[e^{tX}]$, $-\infty < t < \infty$, as long as the expectation exists

$= \sum_x e^{tx} f(x)$, if X is discrete

$= \int e^{tx} f(x)\,dx$, if X is continuous.

For many standard probability laws it is quite easy to evaluate $E(e^{tX})$ or $M_X(t)$. This function generates the raw moments about the origin. This is due to Laplace. We can see how this function of a real variable t can be used to evaluate the moments. For any fixed t, we can write

$M_X(t) = E[e^{tX}] = E\Big[1 + \frac{tX}{1!} + \frac{(tX)^2}{2!} + \frac{(tX)^3}{3!} + \dots\Big]$

$= 1 + \frac{t}{1!}E(X) + \frac{t^2}{2!}E(X^2) + \frac{t^3}{3!}E(X^3) + \dots$

$= 1 + \frac{t}{1!}\mu_1' + \frac{t^2}{2!}\mu_2' + \frac{t^3}{3!}\mu_3' + \dots \qquad (1)$

Here $\mu_1', \mu_2', \mu_3', \dots$ are the moments of X. So $\mu_r'$ is the coefficient of $\frac{t^r}{r!}$ in $M_X(t)$.

Differentiating (1) w.r.t. t,

$\frac{dM_X(t)}{dt} = \mu_1' + \frac{2t}{2!}\mu_2' + \frac{3t^2}{3!}\mu_3' + \dots$

so $\frac{dM_X(t)}{dt}$ when t = 0 gives $\mu_1'$.

Now

$\frac{d^2M_X(t)}{dt^2} = \mu_2' + \frac{6t}{3!}\mu_3' + \dots$

so $\frac{d^2M_X(t)}{dt^2}$ when t = 0 gives $\mu_2'$.

Proceeding like this we get

$\mu_r' = \frac{d^r M_X(t)}{dt^r}$ when t = 0.

Properties

1. $M_{cX}(t) = M_X(ct)$, c any constant.

Proof

$M_{cX}(t) = E\big[e^{t(cX)}\big] = E\big[e^{(ct)X}\big] = M_X(ct)$, ct being a single constant.

2. $M_{aX+b}(t) = e^{bt}M_X(at)$, a, b constants.

Proof

$M_{aX+b}(t) = E\big[e^{t(aX+b)}\big] = E\big[e^{atX}e^{bt}\big] = e^{bt}E\big[e^{atX}\big] = e^{bt}M_X(at)$
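The fact that derivatives of $M_X(t)$ at t = 0 give the raw moments can be checked by numerical differentiation. A sketch (Python, illustrative only; the two-point distribution is a hypothetical example):

```python
# Sketch: mu'_1 and mu'_2 from finite-difference derivatives of the mgf
# M_X(t) = E[e^{tX}] at t = 0, for a hypothetical two-point distribution.

import math

vals, probs = [0, 1], [0.5, 0.5]   # e.g. indicator of a fair coin

def mgf(t):
    return sum(p * math.exp(t * x) for x, p in zip(vals, probs))

h = 1e-4
m1 = (mgf(h) - mgf(-h)) / (2 * h)               # ~ E(X)   = 0.5
m2 = (mgf(h) - 2 * mgf(0.0) + mgf(-h)) / h**2   # ~ E(X^2) = 0.5
```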
Corollary: $M_{\frac{X-\mu}{\sigma}}(t) = e^{-\mu t/\sigma}\,M_X\!\big(\tfrac{t}{\sigma}\big)$.

3. $M_{X+Y}(t) = M_X(t)\,M_Y(t)$ if X and Y are independent (will be proved later).
4. $M_X(t) = 1$ at t = 0.
5. The mgf, if it exists, uniquely determines the probability distribution.
6. $\mu_r' = \dfrac{d^r M_X(t)}{dt^r}$ at t = 0, r = 1, 2, 3, …

Mgf for Central Moments

Let X be a random variable having the pdf f(x). Let E(X) = $\mu$. Then the mgf about the expected value, or mgf about the mean, is defined as

$M_{X-\mu}(t) = E\big[e^{t(X-\mu)}\big]$, where t is a real number

$= \sum_x e^{t(x-\mu)} f(x)$, if X is discrete

$= \int_x e^{t(x-\mu)} f(x)\,dx$, if X is continuous.

This function generates the central moments and so it is called the central mgf.

Characteristic Function

The characteristic function of a random variable X is defined by

$\phi_X(t) = E(e^{itX})$, where t is a real number and $i = \sqrt{-1}$.

If X is a r.v. with pdf f(x), then

$\phi_X(t) = \sum_x e^{itx} f(x)$, if X is discrete

$= \int_{-\infty}^{\infty} e^{itx} f(x)\,dx$, if X is continuous.

Properties

1) $\phi(t)$ is the expectation of a complex function of a r.v. X.
2) $\phi(0) = 1$
3) $|\phi(t)| \le 1$
4) $\phi(-t) = \overline{\phi(t)}$, where $\overline{\phi(t)}$ is the complex conjugate of $\phi(t)$
5) $\phi(t)$ is uniformly continuous on R
6) $\mu_r' = \dfrac{1}{i^r}\dfrac{d^r\phi_X(t)}{dt^r}\Big|_{t=0}$, provided $\mu_r'$ exists.

SOLVED PROBLEMS

Example 1

If X is a r.v. having the pdf $f(x) = \dfrac{x+1}{2}$, $-1 \le x < 1$, find E(X) and V(X).

Solution
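Characteristic-function properties 2) and 3) can be observed numerically. A sketch (Python, illustrative only; the three-point distribution is hypothetical), evaluating $\phi(t) = E[e^{itX}]$ directly:

```python
# Sketch: phi(t) = E[e^{itX}] for a hypothetical discrete r.v.,
# checking phi(0) = 1 and |phi(t)| <= 1.

import cmath

vals, probs = [1, 2, 3], [0.2, 0.5, 0.3]

def char_fn(t):
    return sum(p * cmath.exp(1j * t * x) for x, p in zip(vals, probs))

phi0 = char_fn(0.0)                                   # should equal 1
mags = [abs(char_fn(t)) for t in (-2.0, -0.5, 0.7, 3.1)]
```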
$E(X) = \int_{-\infty}^{\infty} x\,f(x)\,dx = \int_{-1}^{1} x\cdot\frac{x+1}{2}\,dx = \frac12\int_{-1}^{1}(x^2+x)\,dx$

$= \frac12\Big[\frac{x^3}{3}+\frac{x^2}{2}\Big]_{-1}^{1} = \frac12\Big[\Big(\frac13+\frac12\Big)-\Big(-\frac13+\frac12\Big)\Big] = \frac12\cdot\frac23 = \frac13$

$E(X^2) = \int_{-1}^{1} x^2\cdot\frac{x+1}{2}\,dx = \frac12\int_{-1}^{1}(x^3+x^2)\,dx = \frac12\Big[\frac{x^4}{4}+\frac{x^3}{3}\Big]_{-1}^{1}$

$= \frac12\Big[\Big(\frac14+\frac13\Big)-\Big(\frac14-\frac13\Big)\Big] = \frac12\cdot\frac23 = \frac13$

$V(X) = E(X^2) - \{E(X)\}^2 = \frac13 - \frac19 = \frac{9-3}{27} = \frac29$

Example 2

A coin is tossed until a head appears. What is the expectation of the number of tosses required?

Solution

Let X denote the number of tosses required to get the first head. Then X can materialize in the following ways:

Event: H, x = 1, p(x) = 1/2
Event: TH, x = 2, p(x) = (1/2)(1/2) = (1/2)^2
Event: TTH, x = 3, p(x) = (1/2)(1/2)(1/2) = (1/2)^3
and so on.

$E(X) = \sum_{x=1}^{\infty} x\,p(x) = 1\cdot\frac12 + 2\cdot\Big(\frac12\Big)^2 + 3\cdot\Big(\frac12\Big)^3 + \dots$

$= \frac12\Big[1 + 2\cdot\frac12 + 3\cdot\Big(\frac12\Big)^2 + \dots\Big] = \frac12\Big(1-\frac12\Big)^{-2} = \frac12\cdot 4 = 2$

Example 3

a. Find the expectation of the number on a die when thrown.
b. Two unbiased dice are thrown. Find the expected value of the sum of the numbers of points on them.

Solution

a. Let X be the random variable representing the number on a die when thrown. Then X can take one of the values 1, 2, 3, …, 6, each with equal probability 1/6. Hence

$E(X) = 1\cdot\frac16 + 2\cdot\frac16 + 3\cdot\frac16 + \dots + 6\cdot\frac16 = \frac16(1+2+3+\dots+6) = \frac16\cdot 21 = \frac72$

b. The probability distribution of the sum of the numbers X is

Values of X: 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12
Probability: 1/36, 2/36, 3/36, 4/36, 5/36, 6/36, 5/36, 4/36, 3/36, 2/36, 1/36

$E(X) = 2\cdot\frac{1}{36} + 3\cdot\frac{2}{36} + 4\cdot\frac{3}{36} + 5\cdot\frac{4}{36} + 6\cdot\frac{5}{36} + 7\cdot\frac{6}{36} + 8\cdot\frac{5}{36} + 9\cdot\frac{4}{36} + 10\cdot\frac{3}{36} + 11\cdot\frac{2}{36} + 12\cdot\frac{1}{36}$

$= \frac{1}{36}(2+6+12+20+30+42+40+36+30+22+12) = \frac{252}{36} = 7$

Example 4

Let X be a r.v. with the following distribution:

X: 0, 1, 2, 3, 4, 5, 6
Prob: 1/20, p1, 1/5, p2, p3, 1/10, 1/10

If E(X) = 3.1 and E(X²) = 12.1, find p1, p2, p3.

Solution

We know that $\sum_i p_i = 1$, i.e.,

$\frac{1}{20} + p_1 + \frac15 + p_2 + p_3 + \frac{1}{10} + \frac{1}{10} = 1$

i.e., p1 + p2 + p3 = 0.55 … (1)

Now E(X) = 3.1 gives

$0\cdot\frac{1}{20} + 1\cdot p_1 + 2\cdot\frac15 + 3p_2 + 4p_3 + 5\cdot\frac{1}{10} + 6\cdot\frac{1}{10} = 3.1$

i.e., $p_1 + 3p_2 + 4p_3 + \frac25 + \frac12 + \frac35 = 3.1$

i.e., p1 + 3p2 + 4p3 = 1.6 … (2)

Also E(X²) = 12.1 gives

$0\cdot\frac{1}{20} + 1\cdot p_1 + 4\cdot\frac15 + 9p_2 + 16p_3 + 25\cdot\frac{1}{10} + 36\cdot\frac{1}{10} = 12.1$

i.e., $p_1 + 9p_2 + 16p_3 + \frac45 + \frac52 + \frac{18}{5} = 12.1$

i.e., p1 + 9p2 + 16p3 = 5.2 … (3)

Solving:

(2) − (1) gives 2p2 + 3p3 = 1.05
(3) − (1) gives 8p2 + 15p3 = 4.65

Multiplying the first of these by 4: 8p2 + 12p3 = 4.2. Subtracting, 3p3 = 0.45, so p3 = 0.15.

Then 2p2 + 3(0.15) = 1.05, so 2p2 = 0.6 and p2 = 0.30.

From (1), p1 + 0.30 + 0.15 = 0.55, i.e., p1 + 0.45 = 0.55, so p1 = 0.1.

Thus p1 = 0.1, p2 = 0.30 and p3 = 0.15.

Example 5

Let X be a random variable with p.m.f.

X: 0, 1, 2, 3
f(x): 1/3, 1/2, 1/24, 1/8

Given Y = (X − 1)², find the expected value of Y.

Solution

Y = (X − 1)² = X² − 2X + 1, so E(Y) = E(X²) − 2E(X) + 1.

$E(X) = \sum_i x_i p_i = 0\cdot\frac13 + 1\cdot\frac12 + 2\cdot\frac{1}{24} + 3\cdot\frac18 = \frac12 + \frac{1}{12} + \frac38 = \frac{23}{24}$

$E(X^2) = \sum_i x_i^2 p_i = 0\cdot\frac13 + 1\cdot\frac12 + 4\cdot\frac{1}{24} + 9\cdot\frac18 = \frac12 + \frac16 + \frac98 = \frac{43}{24}$

$E(Y) = E(X^2) - 2E(X) + 1 = \frac{43}{24} - 2\cdot\frac{23}{24} + 1 = \frac{-3}{24} + 1 = \frac{21}{24} = \frac78$
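The answer to Example 4 can be verified by substituting p1 = 0.1, p2 = 0.30, p3 = 0.15 back into the three constraints. A sketch (Python, not part of the original text):

```python
# Sketch verifying Example 4: with p1 = 0.1, p2 = 0.30, p3 = 0.15 the
# probabilities sum to 1, E(X) = 3.1 and E(X^2) = 12.1 as required.

p1, p2, p3 = 0.1, 0.30, 0.15
xs = [0, 1, 2, 3, 4, 5, 6]
ps = [1 / 20, p1, 1 / 5, p2, p3, 1 / 10, 1 / 10]

total = sum(ps)
ex = sum(x * p for x, p in zip(xs, ps))
ex2 = sum(x * x * p for x, p in zip(xs, ps))
```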
EXERCISES

Multiple Choice Questions

1. If X is a r.v. with the mean $\mu$, the expression $E(X - \mu)^2$ represents
a) Variance of X b) Second central moment c) both a and b d) none of a and b

2. If X is a r.v., $E(e^{tX})$ is known as
a) Characteristic function b) moment generating function c) Probability generating function d) All the above

3. If X is a r.v. with the mean $\mu$, then $E(X - \mu)^r$ is called
a) Variance b) r-th raw moment c) r-th central moment d) none of the above

4. If X is a r.v. having the pdf f(x), then E(X) is called
a) arithmetic mean b) geometric mean c) harmonic mean d) first quartile

5. If a random variable X has mean 3 and standard deviation 4, the variance of the variable Y = 2X + 5 is
a) 16 b) 64 c) 32 d) 6

Fill in the blanks

1. If X is a r.v. having the pdf f(x), a ≤ x ≤ b, the mean $\mu_1'$ is equal to …
2. If X is a r.v. having mgf $M_X(t)$, $M_{cX}(t)$ = …
3. If X is a r.v., V(3X + 4) = …
4. For all r.v.s mathematical expectation … exist.
5. The characteristic function of a r.v. X is $\phi_X(t)$ = …

Very Short Questions

1. Define mathematical expectation.
2. Show that E(aX + b) = a E(X) + b.
3. Define the r-th central moment of a r.v.
4. Express the 4th central moment in terms of raw moments.
5. Define moment generating function.
6. State the properties of mgf.
7. Define characteristic function.

Short essay questions

1. Define mathematical expectation and state its properties.
2. Define moments. Establish the relationship between raw moments and central moments.
3. Define the moment generating function of a distribution. Show how it generates moments.
4. Show that Var(2X − 1) = 4 Var X.
5. Compute the mean and variance for $f(x) = \dfrac{1}{2\sqrt{x}}$, 0 < x < 1, and f(x) = 0 elsewhere.
6. Given f(x) = Cx, 0 < x < 2, determine C and $M_X(t)$.
7. The random variable X has the pdf $f(x) = xe^{-x}$, x ≥ 0, and 0 elsewhere. Determine the mgf of X.
8. Prove that $E(X^2) \ge \{E(X)\}^2$.
9. Let the mgf of a r.v. be $M(t) = e^{2(t+t^2)}$. Find the first, second and third raw moments.

Long essay questions

1. Suppose that the probabilities that sets consisting of 1, 2, 3, 4 and 5 persons pay a visit to an art gallery are 0.2, 0.5, 0.2, 0.07, 0.03 respectively. What is the expected number of persons per set?
2. A and B toss in turn an ordinary die for a prize of Rs. 44. The first to toss a six wins. If A has the first throw, what are their expectations?
3. A person draws cards one by one from a pack until he draws all the aces. How many cards may he be expected to draw?
4. Two dice are thrown. X represents the sum of the two numbers that come up. Determine E(X) and V(X).
5. If $f(x) = 30x^4(1-x)$, 0 ≤ x ≤ 1, find E(X) and V(X).
6. Urn A contains 5 cards numbered from 1 to 5 and urn B contains 4 cards numbered from 1 to 4. One card is drawn from each of these urns. Find the probability function of the numbers which appear on the cards drawn and its mathematical expectation.
7. A man draws 2 balls at random from an urn containing 2 white and 4 red balls. If he is to receive Rs. 20 for every white ball which he draws and to lose Rs. 40 for every red ball, find how much amount he is expected to gain or lose.
8. From a group of 5 men and 3 women, a committee of 3 is selected. If X represents the number of women in the committee, find E(X) and V(X).
9. Find E(X) if the distribution function is
F(x) = 0, x < 0; 1/2, 0 ≤ x < 2; 5/6, 2 ≤ x < 3; 1, x ≥ 3.
10. Find the mgf for $f(x) = \frac18(1+x)$, 2 < x < 4.
11. Given the following table:
X: −3, −2, −1, 0, 1, 2
P(X): 0.05, 0.10, 0.30, 0, 0.30, 0.15
Compute (i) E(X), (ii) E(2X + 3), (iii) E(4X + 5), (iv) E(X²), (v) V(X) and (vi) V(2X + 3).

ANSWERS

Multiple Choice questions
(1) c (2) b (3) c (4) a (5) b

Fill in the blanks
(1) $\int_a^b x f(x)\,dx$ (2) $M_X(ct)$ (3) 9V(X) (4) need not (5) $E(e^{itX})$

Short Essay questions
(5) 1/3, 4/45
(6) $C = \frac12$, $M_X(t) = \dfrac{e^{2t}(2t-1)+1}{2t^2}$, t ≠ 0, and M(0) = 1
(7) $(1-t)^{-2}$, t < 1
(8) 63/56, 0.502
(9) 2, 8, 32

Long Essay questions
(1) 2.23
(2) Rs. 24, Rs. 20
(11) (i) 0.25 (ii) 3.5 (iii) 6 (iv) 10.15 (v) 10.08 (vi) 40.32
Module II
BIVARIATE PROBABILITY DISTRIBUTIONS
We have already studied that a random variable is a real
valued function defined over a sample space with a probability
measure. The random variables considered thus far are called
one dimensional, because the observed value for a random
variable can be thought of as a single point on the real line. In
almost all applications random variables do not occur singly.
We will have need for the tools necessary to describe the
behaviour of two, three or ‘n’ random variables in general,
simultaneously. Hence we are forced to extend the definitions
of the distribution and density function of one random variable
to those of several random variables. The case of only two
random variables is considered here. Examples of two random
variables defined on the same sample space are given below.
1 . Consider the random experiment consisting of three tosses
of a coin. Let X = the number of heads in all three tosses and
Y = the number of tails in the last two tosses. Then the pair
(X,Y) represent a two random variable defined on the same
sample space. Here the pair of r.v.s. take the pairs of values
(3, 0), (2, 1), (2, 0), (1, 2), (1, 1) and (0, 2).
2 . Selecting a student at random from a college is a random
experiment. Let X be the age of the chosen student and Y be
the height of the chosen student. Then (X,Y) form a pair of
random variables defined on the same sample space.
If the random variables X and Y are both discrete as in
example 1 then the pair (X,Y) is called a discrete bivariate
random variable. If X and Y are both continuous as in example
2, we will call (X,Y) a continuous bivariate random variable.
For a discrete bivariate r.v. (X,Y) we introduce the concept of
joint probability distribution function (joint probability mass
function) and for a continuous bivariate r.v. the concept of joint
probability density function is introduced.
Joint probability mass function

Let (X,Y) be a pair of discrete bivariate random variables assuming pairs of values $(x_1, y_1), (x_2, y_2), \dots, (x_n, y_n)$ from the real plane. Then the probability of the event $\{X = x_i, Y = y_j\}$, denoted $f(x_i, y_j)$ or $p_{ij}$, is called the joint probability mass function of (X,Y).

i.e., $f(x_i, y_j) = P[X = x_i, Y = y_j]$

This function satisfies the properties

i. $f(x_i, y_j) \ge 0$ for all $(x_i, y_j)$

ii. $\sum_i \sum_j f(x_i, y_j) = 1$

Note:

We can describe the joint probability distribution function of (X,Y) by means of a two-way table. Sometimes a convenient formula will be available, say f(x,y), which determines all $f(x_i, y_j)$. Here we can write f(x,y) = P(X = x, Y = y).

Joint probability density function

Let X and Y be two continuous random variables with ranges of variation $R_x$ and $R_y$ respectively. If $P\{x < X \le x + dx,\ y < Y \le y + dy\} = f(x,y)\,dx\,dy$, where dx and dy are small positive real numbers, then f(x,y) is called the joint probability density function of (X,Y), and it has the following properties:

1. $f(x,y) \ge 0$ for all (x,y)

2. $\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} f(x,y)\,dx\,dy = 1$

If we choose the ranges of variation of X and Y as the rectangular region produced by the four lines X = a, X = b, Y = c and Y = d, we get

$P\{a \le X \le b,\ c \le Y \le d\} = \int_a^b\!\int_c^d f(x,y)\,dy\,dx$

We can note that

$P(a < X \le b,\ c < Y \le d) = P\{a \le X < b,\ c \le Y < d\} = P(a < X < b,\ c < Y < d) = P\{a \le X \le b,\ c \le Y \le d\}$,

since the probability that X or Y equals a specified number is zero.

Marginal probability functions

If X and Y are two discrete random variables with the joint probability distribution function f(x,y), then the marginal probability distribution of X is given by

$f_1(x) = \sum_y f(x,y)$

Similarly, the marginal probability distribution of Y is given by

$f_2(y) = \sum_x f(x,y)$

If X and Y are two continuous r.v.s having the joint pdf f(x,y), then the marginal probability density function of X is given by

$f_1(x) = \int_{-\infty}^{\infty} f(x,y)\,dy$

Similarly, the marginal probability density function of Y is given by

$f_2(y) = \int_{-\infty}^{\infty} f(x,y)\,dx$

The conditional probability functions

The conditional probability distribution of X given Y = y is given by

$f(x\mid y) = \dfrac{f(x,y)}{f_2(y)},\quad f_2(y) \ne 0$

Similarly, the conditional probability distribution of Y given X = x is given by

$f(y\mid x) = \dfrac{f(x,y)}{f_1(x)},\quad f_1(x) \ne 0$

Also,

$P(a \le X \le b \mid Y = y) = \int_a^b f(x\mid y)\,dx$ and $P(c \le Y \le d \mid X = x) = \int_c^d f(y\mid x)\,dy$
Independence of two random variables
Two random variables X and Y with joint pdf f(x,y) and
marginal pdfs f 1 (x) and f 2 (y) respectively are said to be
statistically independent if and only if
f(x,y)
= f1(x) f2(y)
The variables which are not statistically independent are said
to be statistically dependent.
Note:
1 . When conditional pdfs are known, the independence of r.v.s.
can be decided whenever
f(x|y) = f1(x)
and
f(y|x) = f2(y)
2. Two r.v.s X and Y with pdf f(x,y) are statistically independent if and
only if f(x,y) can be expressed as the product of a nonnegative function
of x alone and a nonnegative function of y alone,
i.e., iff f(x,y) = h(x) k(y), where h(·) ≥ 0 and k(·) ≥ 0.
3. If the table of joint probabilities for (X,Y) contains a zero entry, then
X and Y are dependent.
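The factorization criterion can be tested mechanically on a joint probability table by comparing each entry with the product of the marginals. A sketch (Python, not part of the original text): the dependent table below is the f(x,y) = (x + 2y)/18 distribution solved as an example in this module, and the independent table is a hypothetical uniform one.

```python
# Sketch: decide independence of a discrete pair (X, Y) by checking
# f(x, y) = f1(x) * f2(y) for every cell of the joint table.

def is_independent(joint, tol=1e-9):
    xs = sorted({x for x, _ in joint})
    ys = sorted({y for _, y in joint})
    fx = {x: sum(joint.get((x, y), 0.0) for y in ys) for x in xs}  # f1
    fy = {y: sum(joint.get((x, y), 0.0) for x in xs) for y in ys}  # f2
    return all(abs(joint.get((x, y), 0.0) - fx[x] * fy[y]) <= tol
               for x in xs for y in ys)

dep = {(x, y): (x + 2 * y) / 18 for x in (1, 2) for y in (1, 2)}
indep = {(x, y): 0.25 for x in (0, 1) for y in (0, 1)}  # hypothetical
```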
SOLVED PROBLEMS
The joint distribution function
Let (X,Y) be a pair of r.v.s. either discrete or continuous.The joint
distribution function of (X,Y) is denoted by the symbol F(x,y) and is defined
as
= P(X  x , Y  y)
F(x,y)
x
=
x
  f(x,y)if X and Y ar e discr ete
 
Two random variables X and Y have the joint pdf
f(x, y) =
k(x2 + Y2), 0 < X < 2, 1 < Y < 4
=
0, elsewhere
Find k.
Solution
We know that
x
y
 
=
y
Example l
y
 
f(x, y)dxdy if X and Y are continuous
f(x, y)dxdy =
 
2 4
Properties
1
 
ie.,
  k (x
1.
F(+  , +  ) =
1
0 1
2.
F(   ,   )
=
0
3.
F(   , y)
=
F(x,   ) = 0
4
4.
0 F(x, y)

1
5.
F(x,  )
=
F(x), the distribution function of X.
F(  , y)
=
F(y), the distribution function of Y.
ie.,
2
 y 2 )dxdy =
1
}
2
3
x

k 
 xy 2  dy
 3

0
1
=
1
=
1
=
1
 32 128 8 2 
k

  =
3
3 3
 3
1
If X and Y are
discrete
4
8

k    2 y 2  dy
3


1
6. Whenever F(x,y) is differentiable, then
f(x, y)
=
4
8
y3
k  y 2
3
3
1
 2 F (x , y )
x y
7.
When X and Y are independent
F(x,y) =
F(x) F(y)
8. For the real numbers a, b, c and d
P( a < X  b, c < Y  d) = F(b, d) + F( a , d)  F( a , c)  F(b , c)
30
Probability Distributions
4


1
ie.,
 150 
k
 =
 3 
1
ie.,
ie.,
50k
k
1
1/50
=
=
Bivariate Probability Distributions
31
School of Distance Education
Example 2

X and Y are r.v.s with the joint pdf

$f(x, y) = \dfrac{x + 2y}{18}$, where (x, y) = (1, 1), (1, 2), (2, 1), (2, 2)
= 0, elsewhere.

Are the variables independent?

Solution

Given $f(x, y) = \frac{x+2y}{18}$ at (1, 1), (1, 2), (2, 1), (2, 2).

The marginal pdf of X is

$f_1(x) = \sum_y f(x,y) = \frac{x+2}{18} + \frac{x+4}{18} = \frac{2x+6}{18}$, x = 1, 2.

The marginal pdf of Y is

$f_2(y) = \sum_x f(x,y) = \frac{1+2y}{18} + \frac{2+2y}{18} = \frac{3+4y}{18}$, y = 1, 2.

Clearly $f(x, y) \ne f_1(x)\,f_2(y)$, so X and Y are not independent.

Example 3

If X and Y have joint pdf f(x, y) = x + y, 0 < x, y < 1, find $P\big(0 < X < \tfrac12,\ \tfrac12 < Y < 1\big)$.

Solution

$P\Big(0 < X < \frac12,\ \frac12 < Y < 1\Big) = \int_0^{1/2}\!\!\int_{1/2}^{1}(x+y)\,dy\,dx$

$= \int_0^{1/2}\Big[xy + \frac{y^2}{2}\Big]_{1/2}^{1}\,dx$

$= \int_0^{1/2}\Big[\Big(x + \frac12\Big) - \Big(\frac{x}{2} + \frac18\Big)\Big]dx$

$= \int_0^{1/2}\Big(\frac{x}{2} + \frac38\Big)dx = \Big[\frac{x^2}{4} + \frac38 x\Big]_0^{1/2}$

$= \frac{1}{16} + \frac{3}{16} = \frac14$
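Example 3's answer of 1/4 can also be checked by direct numerical integration over the region 0 < x < 1/2, 1/2 < y < 1 (a sketch, not part of the text; midpoint rule on a grid):

```python
# Sketch verifying Example 3: P(0 < X < 1/2, 1/2 < Y < 1) = 1/4
# for f(x, y) = x + y on the unit square (midpoint rule).

def f(x, y):
    return x + y

n = 400
hx = 0.5 / n
hy = 0.5 / n
prob = sum(f((i + 0.5) * hx, 0.5 + (j + 0.5) * hy) * hx * hy
           for i in range(n) for j in range(n))
```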
Example 4

Given $f(x\mid y) = \dfrac{c_1 x}{y^2}$, 0 < x < y < 1, and $f_2(y) = c_2 y^4$, obtain $c_1$, $c_2$ and also get the joint p.d.f.

Solution

Since the conditional p.d.f. f(x|y) is a p.d.f., we have

$\int_0^y \frac{c_1 x}{y^2}\,dx = 1$, i.e., $c_1\Big[\frac{x^2}{2y^2}\Big]_0^y = \frac{c_1}{2} = 1$, i.e., $c_1 = 2$.

Similarly,

$\int_0^1 c_2 y^4\,dy = c_2\Big[\frac{y^5}{5}\Big]_0^1 = \frac{c_2}{5} = 1$, i.e., $c_2 = 5$.

The joint p.d.f. is

$f(x, y) = f_2(y)\,f(x\mid y) = c_2 y^4 \cdot \frac{c_1 x}{y^2} = 10xy^2$, 0 < x < y < 1.

EXERCISES

Multiple Choice Questions

1. The joint distribution function of (X, Y) is equivalent to the probability
a. P(X = x, Y = y) b. P(X ≤ x, Y ≤ y) c. P(X ≥ x, Y = y) d. P(X ≥ x, Y ≥ y)

2. The joint cumulative distribution function F(x, y) lies within the values
a. −1 and +1 b. −1 and 0 c. −∞ and 0 d. 0 and 1

3. If X and Y are two independent r.v.s, the cumulative distribution function F(x, y) is equal to
a. F1(x) F2(y) b. P(X ≤ x, Y ≤ y) c. both a and b d. neither a nor b

Fill in the blanks

1. Multivariate r.v.s should be defined over the same ..................
2. The cumulative distribution function F(x, y) lies between .................. and ..................
3. The joint cumulative distribution function F(x, y) = ..................
4. If X and Y are two independent r.v.s, then f(x, y) = ..................
5. If X and Y are two independent r.v.s, the conditional distribution of X given Y = y, i.e., f(x|y) = ..................

Very Short Questions

1. Define bivariate discrete random variable.
2. Define bivariate continuous random variable.
3. Describe the joint distribution function.
4. Define marginal probability density functions.
5. Explain conditional probability density functions.
Short Essay Questions

1. Define the joint probability distribution function and state its properties.
2. A two-dimensional random variable (X, Y) has the joint density
f(x,y) = 8xy, 0 < x < y < 1; = 0 otherwise.
Find the marginal and conditional distributions of X and Y.
3. The joint probability density function of a two-dimensional random variable (X, Y) is given by
f(x,y) = 2, 0 < x < 1, 0 < y < x; = 0 elsewhere.
Find the marginal density functions of X and Y.
4. If $p(x,y) = xye^{-(x+y)}$, x ≥ 0, y ≥ 0, find (1) P(X < 1), (2) P(Y < 2), (3) $P\big(X < \tfrac12,\ Y < \tfrac12\big)$.
5. The joint probability mass function of (X, Y) is given by

(x,y): (0,0), (0,1), (1,0), (1,1), (2,0), (0,2)
f(x,y): 1/12, 1/6, 1/3, 1/6, 1/6, 1/12

Find the marginal probability mass function of Y and the conditional probability mass function of Y given X = 1.
6. For
$f(x,y) = k(2x+3)e^{-y/2}$, 0 ≤ x < 2, y ≥ 0; = 0 elsewhere,
show that f(x,y) = f(x) f(y), k being a constant.

Long Essay Questions

1. Let (X, Y) have the joint density function
f(x,y) = 2/3 if 0 < x < 1, x < y < 2; = 0 otherwise.
Verify whether X and Y are independent, and find P[1 < Y < 1.5 | X = 0.5].
2. A number X is chosen at random from the integers 1, 2, 3, 4. Another, Y, is chosen from among those that are at least as large as X. Determine the joint probability distribution of (X,Y).
3. For the bivariate density function given by $f(x,y) = \dfrac{x+y}{21}$ for x = 1, 2, 3 and y = 1, 2, find the conditional distribution of X given Y = 2 and the conditional distribution of Y given X = 1.
4. If f(x,y) = 6x²y, 0 < x < 1, 0 < y < 1, find
(1) $P\big(0 < X < \tfrac34,\ \tfrac13 < Y < 1\big)$ (2) $P\big(0 < X < \tfrac34 \mid \tfrac13 < Y < 1\big)$

ANSWERS

Short Essay Questions
(2) $f_1(x) = 4x(1-x^2)$, 0 < x < 1; $f_2(y) = 4y^3$, 0 < y < 1; $f(x\mid y) = 2x/y^2$, 0 < x < y, 0 < y < 1; $f(y\mid x) = 2y/(1-x^2)$, x < y < 1, 0 < x < 1
(3) $f_1(x) = 2x$, 0 < x < 1; $f_2(y) = 2(1-y)$, 0 < y < 1
(4) (1) $1 - 2/e$ (2) $1 - 3/e^2$ (3) $\big(1 - \tfrac32 e^{-1/2}\big)^2$
(5) y: 0, 1, 2 (Total); f2(y): 7/12, 4/12, 1/12 (1); f(y|x=1): 2/3, 1/3, 0 (1)

Long Essay Questions
(1) Not independent; 1/3
(2) The joint probability distribution of the pair (X,Y):

X\Y: 1, 2, 3, 4
1: 1/16, 1/16, 1/16, 1/16
2: 0, 1/12, 1/12, 1/12
3: 0, 0, 1/8, 1/8
4: 0, 0, 0, 1/4

(3) $f(x\mid y=2) = \dfrac{x+2}{12}$, x = 1, 2, 3; $f(y\mid x=1) = \dfrac{1+y}{5}$, y = 1, 2
(4) 3/8, 27/64
Mathematical Expectation (Two random variables)

Let us now consider the expectation of a function of two random variables. Let g(X,Y) be a real function such that E|g(X,Y)| < ∞. In the case of discrete r.v.s X and Y, let pij = P(X = xi, Y = yj). Then the mathematical expectation of g(X,Y) is defined as

E[g(X,Y)] = Σi Σj g(xi, yj) pij = Σx Σy g(x,y) f(x,y)

In the case of continuous r.v.s (X,Y) with joint p.d.f. f(x,y), the mathematical expectation of g(X,Y) is defined as

E[g(X,Y)] = ∫∫ g(x,y) f(x,y) dx dy, the integrals running from −∞ to ∞.

Addition Theorem
E(X + Y) = E(X) + E(Y). Assume that E(X) < ∞ and E(Y) < ∞.

Proof
Let X and Y be two discrete r.v.s with the joint pdf f(x,y). Then by definition,
E(X + Y) = Σx Σy (x + y) f(x,y)
         = Σx Σy x f(x,y) + Σx Σy y f(x,y)
         = Σx x [Σy f(x,y)] + Σy y [Σx f(x,y)]
         = Σx x f1(x) + Σy y f2(y)
         = E(X) + E(Y)

If X and Y are two continuous r.v.s with the joint p.d.f. f(x,y), then by definition
E(X + Y) = ∫∫ (x + y) f(x,y) dx dy
         = ∫∫ x f(x,y) dx dy + ∫∫ y f(x,y) dx dy
         = ∫ x [∫ f(x,y) dy] dx + ∫ y [∫ f(x,y) dx] dy
         = ∫ x f1(x) dx + ∫ y f2(y) dy
         = E(X) + E(Y)
where f1(x) and f2(y) are the marginal densities of X and Y respectively.

Multiplication Theorem
If X and Y are two independent r.v.s such that E|XY| < ∞, then E(XY) = E(X) E(Y).

Proof
Let (X,Y) be a pair of discrete r.v.s with the joint probability distribution function f(x,y). Then,
E(XY) = Σx Σy xy f(x,y)
      = Σx Σy xy f1(x) f2(y), since X and Y are independent
      = [Σx x f1(x)] [Σy y f2(y)]
      = E(X) E(Y)

If X and Y are two continuous r.v.s with the joint p.d.f. f(x,y), then
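Both theorems are easy to check numerically on a small joint distribution. A minimal sketch (the marginal pmfs below are illustrative values, not from the text):

```python
# Numerical check of E(X+Y) = E(X) + E(Y) and, for an independent pair,
# E(XY) = E(X) E(Y).  The marginals f1, f2 are illustrative values.
f1 = {1: 0.3, 2: 0.7}
f2 = {0: 0.5, 1: 0.2, 2: 0.3}
# Independence: the joint pmf is the product of the marginals.
joint = {(x, y): f1[x] * f2[y] for x in f1 for y in f2}

def E(g):
    """Expectation of g(X, Y) under the joint pmf."""
    return sum(g(x, y) * p for (x, y), p in joint.items())

EX, EY = E(lambda x, y: x), E(lambda x, y: y)
assert abs(E(lambda x, y: x + y) - (EX + EY)) < 1e-12  # addition theorem
assert abs(E(lambda x, y: x * y) - EX * EY) < 1e-12    # multiplication theorem
```

Note that the addition theorem needs no independence, while the multiplication theorem relies on the product form of the joint pmf.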
E(XY) = ∫∫ xy f(x,y) dx dy
      = [∫ x f1(x) dx] [∫ y f2(y) dy], since X and Y are independent
      = E(X) E(Y)

Theorem:
If X and Y are two independent r.v.s, then Mx+y(t) = Mx(t) My(t).
Proof:
By definition,
Mx+y(t) = E[e^(t(X+Y))] = E[e^(tX) e^(tY)] = E[e^(tX)] E[e^(tY)], since X and Y are independent
        = Mx(t) My(t)

Result 1
For a pair of r.v.s (X,Y), the covariance between X and Y (or product moment between X and Y) is defined as
Cov(X,Y) = E{[X − E(X)][Y − E(Y)]}

Result 2
Cov(X,Y) = E(XY) − E(X) E(Y)

Result 3
The correlation coefficient between the two random variables X and Y is defined as
ρxy = Cov(X,Y)/√(V(X) V(Y)) = Cov(X,Y)/(σ(X) σ(Y))
where Cov(X,Y) = E(XY) − E(X) E(Y), V(X) = E(X²) − [E(X)]² and V(Y) = E(Y²) − [E(Y)]².

Conditional Expectation
Let (X,Y) be a pair of random variables. We know how to find the conditional distribution of X given Y = y. We can therefore compute the expectation of this distribution. For every particular value Y = y, the conditional expectation is a number E(X|Y = y).

Definition
Let (X,Y) be a pair of r.v.s with the joint p.d.f. f(x,y). Then the conditional expectation of X given Y = y is defined as
E(X|Y = y) = Σx x f(x|y), if X and Y are discrete
           = ∫ x f(x|y) dx, if X and Y are continuous
Similarly, the conditional expectation of Y given X = x is defined as
E(Y|X = x) = Σy y f(y|x), if X and Y are discrete
           = ∫ y f(y|x) dy, if X and Y are continuous

Note: E(X|Y = y) = E(X|y) is also called the conditional mean of X. This represents the regression of X on Y, which may or may not be a function of the given value y. E(Y|X = x) = E(Y|x) is also called the conditional mean of Y, which represents the regression function of Y on X; it may or may not be a function of the given value x.

Conditional Variance
The conditional variance of X given Y = y is given by
V(X|Y = y) = E(X²|Y = y) − {E(X|Y = y)}²
Similarly, the conditional variance of Y given X = x is given by
V(Y|X = x) = E(Y²|X = x) − {E(Y|X = x)}²
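The conditional mean and variance can be computed mechanically from any joint pmf; a minimal sketch (the joint table is an illustrative example, not from the text):

```python
# E(X|Y = y) and V(X|Y = y) from a joint pmf; the table is illustrative.
joint = {(1, 1): 0.10, (1, 2): 0.20, (2, 1): 0.30, (2, 2): 0.40}

def conditional_moments(joint, y):
    f2y = sum(p for (_, yi), p in joint.items() if yi == y)    # marginal f2(y)
    cond = {xi: p / f2y for (xi, yi), p in joint.items() if yi == y}  # f(x|y)
    mean = sum(x * p for x, p in cond.items())                 # E(X|Y=y)
    var = sum(x * x * p for x, p in cond.items()) - mean ** 2  # V(X|Y=y)
    return mean, var

m, v = conditional_moments(joint, 2)   # here f(x|y=2) is {1: 1/3, 2: 2/3}
```

For this table the conditional mean is 5/3 and the conditional variance 2/9, as can be checked by hand.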
SOLVED PROBLEMS

Example 1
If X and Y have the joint p.d.f. given by f(x,y) = (x + y)/21, x = 1, 2, 3; y = 1, 2, obtain (i) the correlation coefficient ρxy (ii) E(X|Y = 2) and V(X|Y = 2).

Solution
Given f(x,y) = (x + y)/21, x = 1, 2, 3; y = 1, 2

f1(x) = Σy f(x,y) = (x + 1)/21 + (x + 2)/21 = (2x + 3)/21, x = 1, 2, 3
f2(y) = Σx f(x,y) = (1 + y)/21 + (2 + y)/21 + (3 + y)/21 = (6 + 3y)/21, y = 1, 2

E(X) = Σ x f1(x) = Σx x(2x + 3)/21 = 5/21 + 2 × 7/21 + 3 × 9/21 = 46/21
E(X²) = Σ x² f1(x) = 5/21 + 4 × 7/21 + 9 × 9/21 = 114/21
E(Y) = Σ y f2(y) = Σy y(6 + 3y)/21 = 9/21 + 2 × 12/21 = 33/21
E(Y²) = Σ y² f2(y) = 9/21 + 4 × 12/21 = 57/21

V(X) = E(X²) − [E(X)]² = 114/21 − (46/21)² = 278/441
V(Y) = E(Y²) − [E(Y)]² = 57/21 − (33/21)² = 108/441

Now
E(XY) = Σx Σy xy f(x,y) = Σx Σy xy(x + y)/21
      = 1×1×2/21 + 1×2×3/21 + 2×1×3/21 + 2×2×4/21 + 3×1×4/21 + 3×2×5/21
      = 72/21

∴ Cov(X,Y) = E(XY) − E(X) E(Y) = 72/21 − (46/21)(33/21) = −6/441

ρxy = Cov(X,Y)/√(V(X) V(Y)) = (−6/441)/√((278/441)(108/441)) = −6/√(278 × 108) = −0.0346

(ii) E(X|Y = 2) = Σx x f(x|y = 2), where
f(x|y = 2) = f(x, y = 2)/f2(y = 2) = [(x + 2)/21]/(12/21) = (x + 2)/12
E(X|Y = 2) = Σ(x=1 to 3) x(x + 2)/12 = 3/12 + 2 × 4/12 + 3 × 5/12 = 26/12
E(X²|Y = 2) = Σ x² f(x|y = 2) = 3/12 + 4 × 4/12 + 9 × 5/12 = 64/12
∴ V(X|Y = 2) = E(X²|Y = 2) − [E(X|Y = 2)]² = 64/12 − (26/12)² = 64/12 − 676/144 = 92/144 = 23/36

Example 2
The probability density function of two random variables (X,Y) is given by
f(x,y) = 2, 0 < x < y < 1
       = 0, elsewhere
Find the conditional mean and variance of X given Y = y.

Solution
The conditional mean of X given Y = y is
E(X|y) = ∫ x f(x|y) dx
f2(y) = ∫(0 to y) 2 dx = [2x] from 0 to y
= 2y, 0 < y < 1

∴ f(x|y) = f(x,y)/f2(y) = 2/(2y) = 1/y, 0 < x < y < 1

E(X|y) = ∫(0 to y) x (1/y) dx = (1/y)[x²/2] from 0 to y = (y²/2)(1/y) = y/2, 0 < y < 1

Also V(X|y) = E(X²|y) − [E(X|y)]²
E(X²|y) = ∫(0 to y) x² (1/y) dx = (1/y)[x³/3] from 0 to y = (y³/3)(1/y) = y²/3
i.e., V(X|y) = y²/3 − (y/2)² = y²/3 − y²/4 = y²/12

EXERCISES

Multiple Choice Questions
1. The correlation coefficient ρ between two variables X1 and X2 for a bivariate population in terms of moments is
   a) μ11²/(μ20 μ02)  b) μ12/(μ11 μ22)  c) μ11/√(μ20 μ02)  d) μ11/(μ02 μ12)
2. The (1,1)th moment μ11 of a bivariate probability distribution is called
   a) Var(x,y)  b) Var(x) · Var(y)  c) Cov(x,y)  d) ρxy
3. E(Y|X = x) is called
   a) regression curve of x on y  b) regression curve of y on x  c) both a and b  d) neither a nor b

Fill in the blanks
4. E(y|x) is called the ................ of y on x.
5. If X and Y are two independent r.v.s, then E(xy) = ...........
6. If the covariance between x and y is zero it means that X and Y are ................
7. If x and y are independent r.v.s, then Cov(x,y) = ............
8. E(x|y = y) is called ............. of x on y.

9. What do you mean by conditional expectation?
10. What are the properties of conditional expectation?
11. Define joint raw moments.
12. Define joint central moments.
13. If x and y are two r.v.s, write the expression for their correlation coefficient.
Short Essay Questions
14. State and prove the addition theorem of expectation of a sum of stochastic variables.
15. Show that if X and Y are independent random variables, then E(XY) = E(X) E(Y).
16. If X and Y are two independent random variables, then show that Var(aX + bY) = a² Var(X) + b² Var(Y).
17. Let f(x,y) = 2, 0 < x < y < 1 be the density of (X,Y). Show that E(Y|x) = (1 + x)/2, 0 < x < 1 and E(X|y) = y/2, 0 < y < 1.
18. The joint p.d.f. of the random variables X and Y is f(x,y) = a² e^(−a(x+y)); x > 0, y > 0. Find the m.g.f. of X + Y.

Long Essay Questions
19. The joint probability function of a bivariate discrete random variable is given by
    f(x1, x2) = (2x1 + x2)/30,
    where (X1, X2) = (1, 1), (1, 2), (2, 1), (2, 2), (1, 3) and (2, 3). Find E(X1|X2 = 2).
20. The joint p.d.f. of X and Y is given by the following table.

        Y :    0      1      2
    X = 0 :  0.05   0.10   0.02
    X = 1 :  0.08   0.15   0.05
    X = 2 :  0.20   0.10   0.05
    X = 3 :  0.10   0.05   0.05

    Find E(X), E(Y), E(Y|X = 2) and V(Y|X = 3).
21. X and Y are independent random variables with means 10 and −5 and variances 4 and 6 respectively. Find a and b such that Z = aX + bY will have mean 0 and variance 28.
22. Let X and Y have joint probability density function
    f(x,y) = 1, 0 ≤ y ≤ 2x ≤ 2
           = 0, otherwise
    Find E(XY²).
23. If f(x,y) = k(x + y + xy), 0 ≤ x ≤ 1, 0 ≤ y ≤ 1
             = 0, otherwise
    determine k and E(X|y).

Answers: (18) (1 − t/a)^(−2)  (19) 8/5  (20) 1.58, 0.74, 4/7, 11/16  (21) a = 1, b = 2  (22) 4/15  (23) k = 4/5, E(X|y) = (5y + 2)/(3(3y + 1)), 0 ≤ y ≤ 1
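Several of the answers above can be reproduced mechanically; a sketch checking (19) and the first two values of (20):

```python
from fractions import Fraction as F

# (19) f(x1, x2) = (2 x1 + x2)/30 on the six listed points; E(X1 | X2 = 2).
pts = [(1, 1), (1, 2), (2, 1), (2, 2), (1, 3), (2, 3)]
f = {(a, b): F(2 * a + b, 30) for a, b in pts}
p2 = sum(p for (a, b), p in f.items() if b == 2)               # P(X2 = 2)
e19 = sum(a * p for (a, b), p in f.items() if b == 2) / p2
assert e19 == F(8, 5)

# (20) joint table: rows X = 0..3, columns Y = 0..2.
table = [[0.05, 0.10, 0.02],
         [0.08, 0.15, 0.05],
         [0.20, 0.10, 0.05],
         [0.10, 0.05, 0.05]]
EX = sum(x * p for x, row in enumerate(table) for p in row)
EY = sum(y * p for row in table for y, p in enumerate(row))
assert abs(EX - 1.58) < 1e-9 and abs(EY - 0.74) < 1e-9
```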
Module III
STANDARD DISTRIBUTIONS

PROBABILITY DISTRIBUTIONS (DISCRETE)

1. Binomial Distribution
Definition
A random variable X is defined to have a binomial distribution if the probability density function of X is given by
f(x) = nCx p^x q^(n−x), for x = 0, 1, 2, ..., n; p + q = 1, 0 ≤ p ≤ 1
     = 0, otherwise
Here n and p are called parameters of the binomial distribution. When X follows a B.D. we can write X ~ B(n, p). The symbol b(x; n, p) also represents the probability of x successes in n trials with probability of success p.

This is called the binomial distribution because the probabilities, as the random variable X takes the values 0, 1, 2, ..., n, are qⁿ, nC1 q^(n−1) p, nC2 q^(n−2) p², ..., pⁿ, which are the successive terms of the binomial expansion of (q + p)ⁿ. Any r.v. which follows a binomial distribution is called a binomial variate.

Moments of Binomial Distribution
(1) Mean
Mean = E(X) = Σ(x=0 to n) x f(x)
     = Σ x nCx p^x q^(n−x)
     = Σ x [n!/(x!(n − x)!)] p^x q^(n−x)
     = np Σ [(n − 1)!/((x − 1)!(n − x)!)] p^(x−1) q^(n−x)
     = np (q + p)^(n−1)
     = np

(2) Variance
V(X) = E(X²) − {E(X)}²
Now E(X²) = E[X(X − 1) + X] = E[X(X − 1)] + E(X)
     = Σ x(x − 1) f(x) + np
     = Σ x(x − 1) nCx p^x q^(n−x) + np
     = Σ x(x − 1) [n!/(x!(n − x)!)] p^x q^(n−x) + np
     = n(n − 1)p² Σ [(n − 2)!/((x − 2)!(n − x)!)] p^(x−2) q^(n−x) + np
     = n(n − 1)p² (q + p)^(n−2) + np
     = n(n − 1)p² + np
∴ V(X) = n(n − 1)p² + np − (np)²
       = n²p² − np² + np − n²p²
       = np(1 − p)
       = npq
The Standard Deviation (SD) of the BD is √(npq).
Note
For a BD, the mean is greater than the variance.
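The results mean = np and variance = npq can be confirmed by direct summation over the pmf; a minimal sketch:

```python
from math import comb

def binomial_moments(n, p):
    """Mean and variance of B(n, p) by direct summation of the pmf."""
    q = 1 - p
    pmf = [comb(n, x) * p ** x * q ** (n - x) for x in range(n + 1)]
    mean = sum(x * f for x, f in enumerate(pmf))
    var = sum(x * x * f for x, f in enumerate(pmf)) - mean ** 2
    return mean, var

n, p = 10, 0.3
mean, var = binomial_moments(n, p)
assert abs(mean - n * p) < 1e-9            # mean = np
assert abs(var - n * p * (1 - p)) < 1e-9   # variance = npq, less than the mean
```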
(3) Beta and Gamma coefficients
β1 = μ3²/μ2³ = n²p²q²(q − p)²/(n³p³q³) = (q − p)²/(npq)
γ1 = √β1 = (q − p)/√(npq)
A BD is positively skewed, symmetric, or negatively skewed according as γ1 > = < 0, which implies q > = < p.
β2 = μ4/μ2² = [3n²p²q² + npq(1 − 6pq)]/(n²p²q²) = 3 + (1 − 6pq)/(npq)
γ2 = β2 − 3 = (1 − 6pq)/(npq)
A BD is leptokurtic, mesokurtic or platykurtic according as β2 > = < 3, which implies pq < = > 1/6.

Moment generating function (mgf)
By definition,
Mx(t) = E(e^(tX)) = Σ(x=0 to n) e^(tx) f(x)
      = Σ e^(tx) nCx p^x q^(n−x)
      = Σ nCx (pe^t)^x q^(n−x)
      = (q + pe^t)^n

For determining the moments the mgf can be used. The mean and variance can be determined as follows.
μ1′ = [dMx(t)/dt] at t = 0 = [d(q + pe^t)^n/dt] at t = 0
    = n(q + pe^t)^(n−1) pe^t when t = 0
    = n(q + p)^(n−1) p = np
μ2′ = [d²Mx(t)/dt²] at t = 0
    = n(n − 1)(q + pe^t)^(n−2) pe^t · pe^t + n(q + pe^t)^(n−1) pe^t, when t = 0
    = n(n − 1)(q + p)^(n−2) p² + n(q + p)^(n−1) p
    = n(n − 1)p² + np
∴ μ2 = μ2′ − (μ1′)² = n(n − 1)p² + np − n²p² = np(1 − p) = npq
In this way we can determine the other moments using the mgf.

Additive property of the binomial distribution
If X is B(n1, p) and Y is B(n2, p) and they are independent, then their sum X + Y also follows B(n1 + n2, p).
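The additive property can be spot-checked numerically by convolving two binomial pmfs and comparing with the pmf of B(n1 + n2, p); a sketch (the parameter values are arbitrary):

```python
from math import comb

def bpmf(n, p):
    return [comb(n, x) * p ** x * (1 - p) ** (n - x) for x in range(n + 1)]

def convolve(a, b):
    """pmf of the sum of two independent r.v.s supported on 0, 1, 2, ..."""
    out = [0.0] * (len(a) + len(b) - 1)
    for i, pa in enumerate(a):
        for j, pb in enumerate(b):
            out[i + j] += pa * pb
    return out

n1, n2, p = 4, 6, 0.35
s = convolve(bpmf(n1, p), bpmf(n2, p))     # pmf of X + Y
assert all(abs(u - v) < 1e-12 for u, v in zip(s, bpmf(n1 + n2, p)))
```

With unequal success probabilities the convolution no longer matches any binomial pmf, in line with the remark that the second parameter must be the same.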
Proof:
When X ~ B(n1, p), Mx(t) = (q + pe^t)^n1; when Y ~ B(n2, p), My(t) = (q + pe^t)^n2.
We have
Mx+y(t) = Mx(t) · My(t), since X and Y are independent
        = (q + pe^t)^n1 (q + pe^t)^n2
        = (q + pe^t)^(n1 + n2)
        = mgf of a binomial with parameters (n1 + n2, p)
∴ X + Y ~ B(n1 + n2, p)
If the second parameter is not the same for X and Y, then X + Y will not be binomial. We can generalise this result to k independent r.v.s, keeping the second parameter the same.

Recurrence relation for central moments
When X ~ B(n, p),
μ(r+1) = pq [nr μ(r−1) + dμr/dp]
Proof:
μr = E[X − E(X)]^r = E(X − np)^r = Σ(x=0 to n) (x − np)^r f(x) = Σ (x − np)^r nCx p^x q^(n−x)
dμr/dp = Σ [nCx p^x q^(n−x) r(x − np)^(r−1) (−n) + nCx (x − np)^r {x p^(x−1) q^(n−x) − p^x (n − x) q^(n−x−1)}]
Since x p^(x−1) q^(n−x) − (n − x) p^x q^(n−x−1) = p^x q^(n−x) (x − np)/(pq), this gives
dμr/dp = −nr Σ (x − np)^(r−1) f(x) + (1/pq) Σ (x − np)^(r+1) f(x)
       = −nr μ(r−1) + μ(r+1)/(pq)
∴ μ(r+1) = pq [nr μ(r−1) + dμr/dp]
Using the information μ0 = 1 and μ1 = 0 we can determine μ2, μ3 and μ4 successively, using this relation.

Fitting of Binomial Distribution
By fitting a binomial distribution we mean determining the expected or theoretical binomial frequencies against the given observed frequencies. The theoretical frequencies are obtained by multiplying the binomial probabilities by the total frequency. If f(x) denotes the binomial frequency function, it is given by
f(x) = N b(x; n, p) = N nCx p^x q^(n−x), x = 0, 1, 2, ..., n; p + q = 1
To calculate the binomial probabilities we require n and p. n can be determined from the values of x in the data. If p is not available in the data, find the mean of the given frequency distribution; it may be interpreted as np, the mean of the binomial distribution.
Bernoulli Distribution
Consider a random experiment whose outcomes can be classified only into two: either as a success S or as a failure F, such that P(S) = p and P(F) = 1 − p = q. This is called a Bernoulli trial. Here we define a r.v. X such that it takes the value 1 when S occurs and 0 if F occurs. In this case we say that X follows a Bernoulli distribution.

Definition
Let X be a discrete r.v. taking only the two values 0 and 1. If the pmf of X is given by
P(X = x) = p(x) = p^x q^(1−x), x = 0, 1; 0 < p < 1 and q = 1 − p
         = 0, otherwise
then X is said to follow a Bernoulli distribution. This is also called the point binomial distribution. The Bernoulli distribution is actually B(1, p).

Mean:
E(X) = 0 × P(X = 0) + 1 × P(X = 1) = 0 × q + 1 × p = p
Variance:
E(X²) = 0² × P(X = 0) + 1² × P(X = 1) = 0 × q + 1 × p = p
V(X) = E(X²) − {E(X)}² = p − p² = p(1 − p) = pq
Moment generating function:
Mx(t) = E(e^(tX)) = e^(0t) P(X = 0) + e^(1t) P(X = 1) = q + e^t p = q + pe^t
SOLVED PROBLEMS

Example 1
The mean and variance of a binomial distribution are 2.5 and 1.875 respectively. Obtain the binomial probability distribution.
Solution
Given mean: np = 2.5 ...(1)
Variance: npq = 1.875 ...(2)
(2)/(1) gives q = 1.875/2.5 = 0.75
∴ p = 1 − 0.75 = 0.25
Since np = 2.5, n × 0.25 = 2.5, so n = 10
Therefore the binomial probability distribution is
f(x) = 10Cx (0.25)^x (0.75)^(10−x), x = 0, 1, 2, 3, ..., 10
where x is the number of successes.

Example 2
If the mean and variance of a binomial distribution are 4 and 2 respectively, find the probability of
(i) exactly two successes
(ii) less than two successes
(iii) more than six successes
(iv) at least two successes
Solution
Given np = 4 and npq = 2
q = 1/2, p = 1/2, and n/2 = 4, so n = 8
Then X ~ B(8, 1/2)
(i) P(exactly 2 successes) = P(X = 2) = 8C2 (1/2)² (1/2)⁶ = 28/256 = 7/64
(ii) P(less than 2 successes) = P(X < 2) = P(X = 0) + P(X = 1)
     = 8C0 (1/2)⁰ (1/2)⁸ + 8C1 (1/2)¹ (1/2)⁷ = (1 + 8)/256 = 9/256
(iii) P(more than 6 successes) = P(X > 6) = P(X = 7) + P(X = 8)
     = 8C7 (1/2)⁸ + 8C8 (1/2)⁸ = (8 + 1)/256 = 9/256
(iv) P(at least 2 successes) = P(X ≥ 2) = 1 − P(X < 2) = 1 − 9/256 = 247/256

Example 3
Given the mgf of a binomial variable, Mx(t) = (1/3)⁵ (2 + e^t)⁵, obtain the mean and variance.
Solution
Given Mx(t) = (1/3)⁵ (2 + e^t)⁵ = (2/3 + (1/3)e^t)⁵
Comparing with (q + pe^t)^n: p = 1/3, q = 2/3, n = 5
Mean = np = 5 × 1/3 = 1.667
Variance = npq = 5 × 1/3 × 2/3 = 1.111

Example 4
Comment on the statement "The mean of a binomial distribution is 3 and variance is 4".
Solution
Given np = 3, npq = 4
∴ q = 4/3, or p = −1/3, which is wrong.
For a binomial distribution the mean should be greater than the variance, so the statement is incorrect.
Example 5
In litters of 4 mice the number of litters which contained 0, 1, 2, 3, 4 females was noted. The figures are given in the table below.
No. of female mice : 0   1   2   3   4   Total
No. of litters     : 8   32  34  24  5   103
If the chance of obtaining a female in a single trial is assumed constant, estimate this constant (the unknown probability). Find also the expected frequencies.
Solution
Here x̄ = Σfi xi / Σfi = (0 + 32 + 68 + 72 + 20)/103 = 192/103 = 1.864
np = 1.864, n = 4
∴ p = 1.864/4 = 0.466, and q = 1 − p = 1 − 0.466 = 0.534
The binomial frequency function is given by
f(x) = N b(x; n, p) = 103 × 4Cx (0.466)^x (0.534)^(4−x), x = 0, 1, 2, 3, 4
∴ f(0) = 103 × 4C0 (0.466)⁰ (0.534)⁴ = 8.375 ≈ 9
f(1) = 103 × 4C1 (0.466)¹ (0.534)³ = 29.235 ≈ 29
f(2) = 103 × 4C2 (0.466)² (0.534)² = 38.268 ≈ 38
f(3) = 103 × 4C3 (0.466)³ (0.534)¹ = 22.263 ≈ 22
f(4) = 103 × 4C4 (0.466)⁴ (0.534)⁰ = 4.857 ≈ 5
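Example 5's fitting procedure can be reproduced in a few lines; a sketch (using the unrounded estimate of p, so the frequencies differ slightly from the worked values, which round p to 0.466):

```python
from math import comb

obs = {0: 8, 1: 32, 2: 34, 3: 24, 4: 5}        # females : litters
N = sum(obs.values())                          # 103
n = 4
mean = sum(x * f for x, f in obs.items()) / N  # 192/103 = 1.864...
p = mean / n                                   # estimate of p via np = mean
q = 1 - p
expected = [N * comb(n, x) * p ** x * q ** (n - x) for x in range(n + 1)]
```

The expected frequencies sum back to the total N, since the binomial probabilities sum to 1.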
2. Poisson Distribution
Definition
A discrete r.v. X is defined to have a Poisson distribution if the probability density function of X is given by
f(x) = e^(−λ) λ^x / x!, x = 0, 1, 2, ...; λ > 0
     = 0, otherwise
where λ is called the parameter of the Poisson distribution. In this case we can write X ~ P(λ).

The following are some of the physical situations or natural phenomena illustrating the Poisson distribution.
i. The number of total traffic accidents per week in a given state.
ii. The number of radioactive particle emissions per unit of time.
iii. The number of telephone calls per hour coming into the switchboard.
iv. The number of meteorites that collide with a test satellite during a single orbit.
v. The number of organisms per unit volume of some fluid.
vi. The number of defects per unit of some material.
vii. The number of printing mistakes per page of a book.
viii. The number of customers reaching a bank counter in 20 minutes' time.
ix. The number of defective screws coming off a production line in an hour.
In real life situations the Poisson distribution is used to represent rare events, that is, events having a very small probability of occurrence. One of the well known early applications of this distribution is to the number of deaths in a year in the Prussian cavalry from the kick of a horse during the period 1875 to 1894.

Poisson distribution as a limiting form of the binomial
The Poisson distribution is obtained as an approximation to the binomial distribution under the conditions
(i) n is very large (n → ∞)
(ii) p is very small (p → 0)
(iii) np = λ, a finite quantity
Proof:
For the binomial distribution,
f(x) = nCx p^x q^(n−x), x = 0, 1, 2, ..., n; p + q = 1
     = [n!/(x!(n − x)!)] p^x q^(n−x)
     = [n(n − 1)(n − 2)...(n − x + 1)/x!] p^x (1 − p)^(n−x)
     = [(1 − 1/n)(1 − 2/n)...(1 − (x − 1)/n)/x!] (np)^x (1 − p)^n / (1 − p)^x

Now Lt(n→∞) (1 − 1/n)(1 − 2/n)...(1 − (x − 1)/n) = 1
Also np = λ, so p = λ/n, and
Lt(n→∞) (1 − p)^x = Lt(n→∞) (1 − λ/n)^x = 1
Lt(n→∞) (1 − p)^n = Lt(n→∞) (1 − λ/n)^n = e^(−λ), by definition
Applying the above limits, we get
f(x) = e^(−λ) λ^x / x!, x = 0, 1, 2, ...
This is the pdf of the Poisson distribution, i.e., X ~ P(λ). This shows that the binomial distribution tends to the Poisson distribution.

Moments of Poisson Distribution
1. Mean
E(X) = Σ x f(x) = Σ(x=0 to ∞) x e^(−λ) λ^x / x!
     = λ e^(−λ) Σ(x=1 to ∞) λ^(x−1)/(x − 1)!
     = λ e^(−λ) e^λ = λ
2. Variance
V(X) = E(X²) − {E(X)}²
E(X²) = Σ x² f(x) = Σ [x(x − 1) + x] f(x)
      = Σ x(x − 1) e^(−λ) λ^x / x! + E(X)
      = λ² e^(−λ) Σ(x=2 to ∞) λ^(x−2)/(x − 2)! + λ
      = λ² e^(−λ) e^λ + λ
      = λ² + λ
∴ V(X) = λ² + λ − λ² = λ
SD = √λ
Note: For a PD, mean = variance.
3. Calculation of μ3 and μ4 (measures of skewness and kurtosis)
μ3 = λ and μ4 = 3λ² + λ
4. Beta and Gamma coefficients
β1 = μ3²/μ2³ = λ²/λ³ = 1/λ
γ1 = √β1 = 1/√λ
Since λ > 0, the Poisson distribution is a positively skewed distribution.
β2 = μ4/μ2² = (3λ² + λ)/λ² = 3 + 1/λ
γ2 = β2 − 3 = 1/λ
Since λ > 0, the Poisson distribution is leptokurtic.

Moment generating function
Mx(t) = E(e^(tX)) = Σ e^(tx) f(x) = Σ(x=0 to ∞) e^(tx) e^(−λ) λ^x / x!
      = e^(−λ) Σ (λe^t)^x / x!
      = e^(−λ) e^(λe^t)
      = e^(λ(e^t − 1))
Note: Differentiating the mgf successively and putting t = 0, we get the raw moments about the origin zero, as in the case of the binomial distribution. From these, the central moments can be easily determined.

Reproductive property
If X ~ P(λ1), Y ~ P(λ2) and if X and Y are independent, then X + Y ~ P(λ1 + λ2).
Given X ~ P(λ1), Mx(t) = e^(λ1(e^t − 1)); Y ~ P(λ2), My(t) = e^(λ2(e^t − 1)).
Now Mx+y(t) = Mx(t) My(t), since X and Y are independent
            = e^(λ1(e^t − 1)) e^(λ2(e^t − 1))
            = e^((λ1 + λ2)(e^t − 1))
            = mgf of a PD with parameter (λ1 + λ2)
∴ X + Y ~ P(λ1 + λ2). This result can be generalised for n independent Poisson random variables.

Recurrence relation for central moments
When X ~ P(λ),
μ(r+1) = λ [r μ(r−1) + dμr/dλ]
Proof:
μr = E[X − E(X)]^r = E(X − λ)^r = Σ(x=0 to ∞) (x − λ)^r f(x) = Σ (x − λ)^r e^(−λ) λ^x / x!
dμr/dλ = Σ [r(x − λ)^(r−1)(−1) e^(−λ) λ^x/x! + (x − λ)^r {−e^(−λ) λ^x/x! + e^(−λ) x λ^(x−1)/x!}]
Since −e^(−λ) λ^x/x! + e^(−λ) x λ^(x−1)/x! = f(x)(x − λ)/λ, this gives
dμr/dλ = −r Σ (x − λ)^(r−1) f(x) + (1/λ) Σ (x − λ)^(r+1) f(x)
       = −r μ(r−1) + μ(r+1)/λ
∴ μ(r+1) = λ [r μ(r−1) + dμr/dλ]
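The reproductive property can be verified numerically by convolving two truncated Poisson pmfs; a sketch (the parameter values are arbitrary):

```python
from math import exp, factorial

def ppmf(lam, m=60):
    return [exp(-lam) * lam ** x / factorial(x) for x in range(m)]

l1, l2 = 2.0, 3.0
a, b = ppmf(l1), ppmf(l2)
# Truncated convolution: pmf of X + Y for independent X, Y.
s = [sum(a[i] * b[k - i] for i in range(k + 1)) for k in range(60)]
assert all(abs(u - v) < 1e-9 for u, v in zip(s, ppmf(l1 + l2)))
```

Each convolution term for k < 60 only uses entries below the cutoff, so the comparison with P(λ1 + λ2) is exact apart from rounding.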
By definition, μ0 = 1 and μ1 = 0. Putting
r = 0: μ1 = λ [0 + dμ0/dλ] = 0
r = 1: μ2 = λ [1 · μ0 + dμ1/dλ] = λ[1 + 0] = λ
r = 2: μ3 = λ [2μ1 + dμ2/dλ] = λ[0 + 1] = λ
r = 3: μ4 = λ [3μ2 + dμ3/dλ] = λ[3λ + 1] = 3λ² + λ

Fitting of Poisson Distribution
By fitting a Poisson distribution we mean calculating the expected Poisson frequencies against the given observed frequencies. The theoretical frequencies are obtained by multiplying the Poisson probabilities by the total frequency. If f(x) denotes the Poisson frequency function, it is given by
f(x) = N P(X = x) = N e^(−λ) λ^x / x!, x = 0, 1, 2, ...; λ > 0
In the calculation of Poisson probabilities we require the parameter λ, which is estimated as x̄, the sample mean.

SOLVED PROBLEMS

Example 1
A random variable X follows a Poisson distribution with mean 1. Calculate the probability that
(a) X = 0 (b) X = 1 (c) X ≥ 2
Solution
Given X ~ P(1), so P(X = x) = e^(−1) 1^x / x!, x = 0, 1, 2, ...
(a) P(X = 0) = e^(−1)/0! = 0.36788
(b) P(X = 1) = e^(−1)/1! = 0.36788
(c) P(X ≥ 2) = P(2) + P(3) + ... = 1 − {P(X = 0) + P(X = 1)} = 1 − (0.36788 + 0.36788) = 0.26424
Example 2
If X and Y are independent Poisson variates such that P(X = 1) = P(X = 2) and P(Y = 2) = P(Y = 3), find the variance of X − 2Y.
Solution
Let X ~ P(λ1) and Y ~ P(λ2).
Given P(X = 1) = P(X = 2)
i.e., e^(−λ1) λ1/1! = e^(−λ1) λ1²/2!
1 = λ1/2!, since λ1 ≠ 0
∴ λ1 = 2
Also P(Y = 2) = P(Y = 3)
i.e., e^(−λ2) λ2²/2! = e^(−λ2) λ2³/3!
1/2 = λ2/6 (since λ2 ≠ 0)
∴ λ2 = 3
∴ V(X) = λ1 = 2 and V(Y) = λ2 = 3
∴ V(X − 2Y) = V(X) + 4V(Y), since X and Y are independent
            = 2 + 4 × 3 = 14

Example 8
Fit a PD to the following data.
No. of accidents : 0   1   2   3   4   5
No. of men       : 95  75  44  18  2   1
Solution
The Poisson frequency function is
f(x) = N P(X = x) = N e^(−λ) λ^x / x!, x = 0, 1, 2, ...
Here we have to estimate λ as
λ̂ = x̄ = Σfi xi / Σfi = (0×95 + 1×75 + 2×44 + 3×18 + 4×2 + 5×1)/235 = 230/235 = 0.98
The Poisson frequencies are given below.
f(0) = 235 × e^(−0.98) (0.98)⁰/0! = 235 × 0.3753 = 88.19 ≈ 88
f(1) = 235 × e^(−0.98) (0.98)¹/1! = 86.43 ≈ 86
f(2) = 235 × e^(−0.98) (0.98)²/2! = 42.35 ≈ 43
f(3) = 235 × e^(−0.98) (0.98)³/3! = 13.83 ≈ 14
f(4) = 235 × e^(−0.98) (0.98)⁴/4! = 3.38 ≈ 3
f(5) = 235 × e^(−0.98) (0.98)⁵/5! = 0.66 ≈ 1
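Example 8 can likewise be reproduced directly; a sketch (using λ = 230/235 unrounded, so the values differ slightly from the worked ones, which round λ to 0.98):

```python
from math import exp, factorial

obs = {0: 95, 1: 75, 2: 44, 3: 18, 4: 2, 5: 1}   # accidents : men
N = sum(obs.values())                            # 235
lam = sum(x * f for x, f in obs.items()) / N     # 230/235 = 0.9787...
fitted = [N * exp(-lam) * lam ** x / factorial(x) for x in range(6)]
```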
3. Uniform Distribution (Discrete)
This is the simplest type of discrete probability distribution. A discrete random variable X is said to have a uniform distribution if the probability density function is given by
f(x) = 1/n, for x = 1, 2, 3, ..., n
     = 0, elsewhere.
[Graph: the uniform distribution consists of n equal ordinates of height 1/n at x = 1, 2, 3, ..., n.]

Moments
Mean
E(X) = Σ(x=1 to n) x f(x) = (1/n) Σ x = (1/n)[1 + 2 + ... + n] = n(n + 1)/(2n) = (n + 1)/2
E(X²) = Σ x² f(x) = (1/n)[1² + 2² + ... + n²] = (1/n) · n(n + 1)(2n + 1)/6 = (n + 1)(2n + 1)/6
Variance
V(X) = E(X²) − [E(X)]² = (n + 1)(2n + 1)/6 − [(n + 1)/2]²
     = [(n + 1)/2] [(2n + 1)/3 − (n + 1)/2]
     = [(n + 1)/2] [(4n + 2 − 3n − 3)/6]
     = [(n + 1)/2] [(n − 1)/6]
     = (n² − 1)/12
SD = √((n² − 1)/12)

Moment Generating Function
Mx(t) = E(e^(tX)) = Σ e^(tx) f(x) = Σ e^(tx) (1/n)
      = (1/n)[e^t + e^(2t) + e^(3t) + ... + e^(nt)]
      = (1/n) e^t (e^(nt) − 1)/(e^t − 1)
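The closed forms (n + 1)/2 and (n² − 1)/12 can be checked by direct summation; a minimal sketch:

```python
def uniform_moments(n):
    """Mean and variance of the discrete uniform pmf f(x) = 1/n, x = 1..n."""
    mean = sum(range(1, n + 1)) / n
    var = sum(x * x for x in range(1, n + 1)) / n - mean ** 2
    return mean, var

for n in (1, 6, 100):
    mean, var = uniform_moments(n)
    assert abs(mean - (n + 1) / 2) < 1e-9
    assert abs(var - (n ** 2 - 1) / 12) < 1e-9
```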
4. Geometric Distribution
Definition
A random variable X is defined to have a geometric (or Pascal) distribution if the pdf of X is given by
f(x) = q^x p, for x = 0, 1, 2, ...; p + q = 1
     = 0, otherwise
The geometric distribution is well named since the values that the geometric density assumes are the terms of a geometric series. Clearly the mode of the geometric distribution is zero, since the probability is maximum at x = 0.

Moments
Mean
E(X) = Σ x f(x) = Σ(x=0 to ∞) x q^x p
     = p[q + 2q² + 3q³ + ...] = pq[1 + 2q + 3q² + ...]
     = pq(1 − q)^(−2) = pq/p² = q/p
Variance
V(X) = E(X²) − {E(X)}²
E(X²) = Σ x² f(x) = Σ [x(x − 1) + x] f(x) = Σ x(x − 1) q^x p + Σ x f(x)
      = p[2·1·q² + 3·2·q³ + 4·3·q⁴ + ...] + E(X)
      = 2pq²[1 + 3q + 6q² + ...] + q/p
      = 2pq²(1 − q)^(−3) + q/p
      = 2q²/p² + q/p
∴ V(X) = 2q²/p² + q/p − q²/p² = q²/p² + q/p = (q² + pq)/p² = q(q + p)/p² = q/p²

Moment generating function
Mx(t) = E(e^(tX)) = Σ(x=0 to ∞) e^(tx) q^x p = p Σ (qe^t)^x
      = p[1 + qe^t + (qe^t)² + ...]
      = p(1 − qe^t)^(−1)
∴ Mx(t) = p/(1 − qe^t)
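The mean q/p and variance q/p² can be checked with a truncated series; a minimal sketch (the cutoff 2000 is arbitrary; the neglected tail is negligible for moderate p):

```python
def geometric_moments(p, cutoff=2000):
    """Mean and variance of the geometric pmf f(x) = q^x p, x = 0, 1, 2, ..."""
    q = 1 - p
    pmf = [q ** x * p for x in range(cutoff)]
    mean = sum(x * f for x, f in enumerate(pmf))
    var = sum(x * x * f for x, f in enumerate(pmf)) - mean ** 2
    return mean, var

p = 0.4
mean, var = geometric_moments(p)
assert abs(mean - (1 - p) / p) < 1e-9        # q/p = 1.5
assert abs(var - (1 - p) / p ** 2) < 1e-9    # q/p^2 = 3.75
```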
Example 1
Let X be a discrete r.v. having the pdf f(x) = 2^(−x), x = 1, 2, 3, .... Obtain its mgf and hence find its mean and variance.
Solution
Given f(x) = 2^(−x), x = 1, 2, 3, ...
Mx(t) = E(e^(tX)) = Σ e^(tx) f(x) = Σ e^(tx) 2^(−x) = Σ (e^t/2)^x
      = (e^t/2)[1 + e^t/2 + (e^t/2)² + ...]
      = (e^t/2)/(1 − e^t/2)
      = e^t/(2 − e^t)
To obtain the mean and variance, we have
Mx′(t) = [(2 − e^t) e^t + e^t · e^t]/(2 − e^t)² = 2e^t/(2 − e^t)²
μ1′ = [Mx′(t)] at t = 0 = 2/(2 − 1)² = 2 = Mean
Mx″(t) = [(2 − e^t)² · 2e^t + 2e^t · 2(2 − e^t) e^t]/(2 − e^t)⁴
μ2′ = [Mx″(t)] at t = 0 = [(2 − 1)²(2) + 4]/(2 − 1)⁴ = 6
∴ Variance, μ2 = μ2′ − (μ1′)² = 6 − 4 = 2

for every t ≥ 0 and all k ≥ 0. In particular, taking k = 1, we get
p(t+1) = q1 pt = (p1 + p2 + ...) pt = (1 − p0) pt [from (*)]
∴ pt = (1 − p0) p(t−1) = (1 − p0)² p(t−2) = ... = (1 − p0)^t p0
Hence pt = P(X = t) = p0 (1 − p0)^t; t = 0, 1, 2, ...
∴ X has a geometric distribution. In this case we say that the geometric distribution possesses the lack of memory property.
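For the geometric pmf f(x) = q^x p, the tail is P(X ≥ t) = q^t, and the lack-of-memory property P(X ≥ t + k | X ≥ t) = P(X ≥ k) can be checked directly; a minimal sketch:

```python
def tail(p, t):
    """P(X >= t) = q^t for the geometric pmf f(x) = q^x p, x = 0, 1, 2, ..."""
    return (1 - p) ** t

p = 0.3
for t in range(5):
    for k in range(5):
        lhs = tail(p, t + k) / tail(p, t)   # P(X >= t+k | X >= t)
        assert abs(lhs - tail(p, k)) < 1e-12
```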
Negative Binomial Distribution
Definition:
Let X be a discrete r.v. assuming values 0, 1, 2, 3, .... If the pmf of X is given by
P(X = x) = p(x) = (x + k − 1)C(k − 1) p^k q^x, x = 0, 1, 2, ...; p + q = 1
         = 0, otherwise,
then X is said to follow a negative binomial distribution with parameters k and p.
Note:
In the definition of the negative binomial distribution, if we take k = 1, p(x) becomes
p(x) = q^x p, x = 0, 1, 2, ...
This is the probability function of the geometric distribution.
Properties:
1. Mean = E(X) = kq/p
2. Variance = V(X) = E(X²) − (kq/p)², where E(X²) = k(k + 1)q²/p² + kq/p, so that V(X) = kq/p²
3. The negative binomial distribution tends to the Poisson distribution under certain conditions to be stated.
4. The geometric distribution is a particular case of the negative binomial distribution.
5. The negative binomial distribution is also called the binomial waiting-time distribution.
Note:
For a negative binomial distribution the variance is greater than the mean.

EXERCISES
Multiple choice questions
1. In a BD with parameters n and p, the relationship between mean and variance is
   a. Mean = variance  b. Mean < variance  c. Mean > variance  d. None of these
2. The mgf of a BD is
   a. (q + pe^t)^n  b. (q + p)^n  c. (p + qe^t)^n  d. (q + e^t)^n
3. A family of parametric distributions in which the mean is equal to the variance is
   a. Binomial distribution  b. Uniform distribution  c. Poisson distribution  d. Geometric distribution
4. The distribution possessing the memoryless property is
   a. Gamma distribution  b. Geometric distribution  c. Hypergeometric distribution  d. All the above
5. For a negative binomial distribution
   a. Mean > Variance  b. Variance > Mean  c. Mean = Variance  d. Mean > Standard deviation
6. The negative binomial distribution is also known as
   a. Binomial waiting time distribution  b. Polya distribution  c. All the above

Fill in the blanks
1. The mean of a BD is .......................... variance.
2. For a symmetric binomial distribution the value of p is ..........................
3. The number of parameters involved in a PD is .........................
4. The mean and variance of the uniform distribution are ....................... and ..........................
5. The mean of the geometric distribution is ..........................

1. Define point binomial distribution.
2. What are the features of binomial distribution?
3. Define Poisson distribution.
4. Obtain the m.g.f. of the PD.
5. Obtain the mean of the uniform distribution.
6. Define geometric distribution.
7. Define negative binomial distribution.
8. Obtain the mean of the NBD.
Short essay questions
1.
2.
3.
Derive the binomial distribution in the usual notation.
Obtain the mean and variance of binomial distribution
Derive the recurrence relation for central moments of binomial
distribution.
4.
In a binomial distribution mean is 6 and standard deviation is 2 .
Write the binomial distribution.
5.  A coin is tossed 6 times. What is the probability of obtaining 4 or more heads?
6.  Define Poisson distribution and state the conditions under which the distribution is used.
8.  If X is a Poisson variate and P(X = 0) = P(X = 1) = k, find k.
9.  Find the probability that no defective fuse will be found in a box of 200 fuses, if past experience shows that 2% of such fuses are defective.
10. If X and Y are independent Poisson variates having means 1 and 3 respectively, find the variance of 3X + Y.
11. A car hire firm has two cars which it hires out day by day. The number of demands for a car on each day is distributed as a Poisson distribution with mean 1.5. Calculate the proportion of days on which (i) neither car is used (ii) some demand is refused.
12. Define uniform distribution. Obtain its mean and variance.
13. Define geometric distribution. Find its mean and variance.
14. A fair coin is flipped until a head appears. What is the probability that less than 3 flips are required?

Long essay questions

1.  The mean and variance of a binomial variate X with parameters n and p are 16 and 8. Find (i) P(X = 0) (ii) P(X = 1) (iii) P(X ≥ 2).
2.  The incidence of occupational disease in an industry is such that the workers have a 20% chance of suffering from it. What is the probability that out of 6 workmen 4 or more will contract the disease?
3.  The probability of a man hitting a target is 1/3. How many times must he fire so that the probability of hitting the target at least once is more than 90%?
4.  The probability of a man hitting a target is 1/4. (i) If he fires 7 times, what is the probability of his hitting the target at least twice? (ii) How many times must he fire so that the probability of his hitting the target at least once is greater than 2/3?
5.  A manufacturer of cotter pins knows that 5% of his product is defective. If he sells cotter pins in boxes of 100 and guarantees that not more than 10 pins will be defective, what is the probability that a box fails to meet the guaranteed quality?
6.  In a large fleet of delivery trucks the average number of inoperative trucks is 2 per day. Two standby trucks are available. What is the probability that on any one day
    (i) no standby truck is needed
    (ii) the standby trucks are not adequate?
7.  Data were collected over a period of 10 years showing the number of deaths from horse kicks in each of 20 army corps. From the 200 corps-years the distribution of deaths was as follows.

    No. of deaths :    0     1     2     3     4
    Frequency     :  122    60    15     2     1

    Graduate the data by a Poisson distribution and calculate the theoretical frequencies.
8.  Assuming that the chance of a traffic accident in a day in a street of Bombay is 0.001, on how many days out of a total of 1000 days can we expect (i) no accident (ii) more than 3 accidents, if there are 1000 such streets in the whole city?
9.  Derive the recurrence relation for the central moments of the Poisson distribution and hence obtain β₁ and β₂.
10. Define negative binomial distribution. Obtain its mean, variance and mgf. What are its applications?
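The Poisson questions above can be sanity-checked with a few lines of plain Python. As an illustrative sketch (not part of the original text), this works question 11, the car-hire firm with demand ~ Poisson(1.5):

```python
# Illustrative check of question 11: daily demand X ~ Poisson(lam = 1.5).
from math import exp, factorial

def poisson_pmf(k, lam):
    """P(X = k) for X ~ Poisson(lam)."""
    return exp(-lam) * lam ** k / factorial(k)

lam = 1.5
p_neither_used = poisson_pmf(0, lam)                          # no demand at all
p_refused = 1 - sum(poisson_pmf(k, lam) for k in (0, 1, 2))   # demand exceeds the 2 cars

print(round(p_neither_used, 4))  # 0.2231
print(round(p_refused, 4))       # 0.1912
```

The same `poisson_pmf` helper can be reused for the fuse and traffic-accident questions.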
PROBABILITY DISTRIBUTIONS (CONTINUOUS)

1. Normal Distribution

Definition
A random variable X is defined to be normally distributed if its probability density function is given by

f(x) = (1/(σ√(2π))) e^{−(x − μ)²/(2σ²)}, −∞ < X < ∞

[Figure: some normal N(μ, σ²) densities, drawn over −6 ≤ x ≤ 6 for the parameter pairs (μ = 0, σ = 2/3), (μ = 0, σ = 1), (μ = 0, σ = 2) and (μ = 4, σ = 2/3).]
Properties of Normal distribution
The graph of the normal distribution given by

f(x) = (1/(σ√(2π))) e^{−(x − μ)²/(2σ²)}, −∞ < X < ∞

is a bell-shaped, smooth, symmetrical curve known as the normal curve, where the parameters μ and σ satisfy −∞ < μ < ∞ and σ > 0. Any distribution defined by the density function given above is called a normal distribution. When X follows a normal distribution we write it symbolically as

X ~ N(μ, σ) or X ~ N(μ, σ²)

Once the values of μ and σ are known, the shape of the curve of the normal distribution is completely determined. The idea is made clear by the shapes of normal curves drawn for different values of μ and σ; the curve attains its maximum at the ordinate x = μ.
The following are some of the important properties of the normal curve.
1. The normal curve is symmetrical about the ordinate at x = μ, i.e., f(μ + c) = f(μ − c) for any c.
2. The mean, median and mode are identical.
3. The normal curve f(x) has a maximum at x = μ and the maximum value of the ordinate is 1/(σ√(2π)).
4. The normal curve extends from −∞ to +∞.
5. The curve touches the X axis only at ±∞, i.e., the X axis is an asymptote to the curve.
6. For a normal distribution, β₁ = 0 and β₂ = 3.
7. In a normal distribution, QD = (2/3) SD and MD = (4/5) SD.
8. All odd order central moments vanish, i.e., μ₂ᵣ₊₁ = 0, r = 0, 1, 2, ...
9. The even central moments are given by μ₂ᵣ = 1·3·5 ... (2r − 1) σ²ʳ.
10. The points of inflexion of the curve are x = μ ± σ.
11. The lower and upper quartiles are equidistant from the median.
12. The area under the normal curve is distributed as below:
(i) 68.27% of the area lies between μ − σ and μ + σ, i.e., P(μ − σ ≤ X ≤ μ + σ) = 0.6827
(ii) 95.45% of the area lies between μ − 2σ and μ + 2σ, i.e., P(μ − 2σ ≤ X ≤ μ + 2σ) = 0.9545
(iii) 99.73% of the area lies between μ − 3σ and μ + 3σ, i.e., P(μ − 3σ ≤ X ≤ μ + 3σ) = 0.9973
Moments of ND

Mean
E(X) = ∫_{−∞}^{∞} x f(x) dx = ∫_{−∞}^{∞} (x − μ + μ) f(x) dx
     = ∫_{−∞}^{∞} (x − μ) f(x) dx + μ ∫_{−∞}^{∞} f(x) dx
Now ∫ f(x) dx = 1, and, putting (x − μ)/σ = z (so that x = μ + σz and dx = σ dz),
∫_{−∞}^{∞} (x − μ) f(x) dx = (σ/√(2π)) ∫_{−∞}^{∞} z e^{−z²/2} dz = 0, the integrand being an odd function.
∴ E(X) = 0 + μ · 1 = μ.

Variance
V(X) = E[X − E(X)]² = E(X − μ)² = ∫_{−∞}^{∞} (x − μ)² (1/(σ√(2π))) e^{−(x − μ)²/(2σ²)} dx
Put (x − μ)/σ = z, x = μ + σz, dx = σ dz:
V(X) = (σ²/√(2π)) ∫_{−∞}^{∞} z² e^{−z²/2} dz = (2σ²/√(2π)) ∫₀^{∞} z² e^{−z²/2} dz
Put z²/2 = u, so that z = √(2u) and dz = du/√(2u); when z = 0, u = 0 and when z = ∞, u = ∞:
V(X) = (2σ²/√(2π)) ∫₀^{∞} 2u e^{−u} du/√(2u) = (2σ²/√π) ∫₀^{∞} u^{1/2} e^{−u} du
     = (2σ²/√π) Γ(3/2) = (2σ²/√π) (1/2) Γ(1/2) = (2σ²/√π) (1/2) √π
ie., V(X) = σ²
∴ SD of the normal distribution = σ.

Odd order central moments
μ₂ᵣ₊₁ = ∫_{−∞}^{∞} (x − μ)^{2r+1} (1/(σ√(2π))) e^{−(x − μ)²/(2σ²)} dx
      = (σ^{2r+1}/√(2π)) ∫_{−∞}^{∞} z^{2r+1} e^{−z²/2} dz, by putting (x − μ)/σ = z
      = 0, since the integrand is an odd function.
ie., μ₂ᵣ₊₁ = 0, r = 0, 1, 2, 3, ...

Even order central moments
μ₂ᵣ = ∫_{−∞}^{∞} (x − μ)^{2r} (1/(σ√(2π))) e^{−(x − μ)²/(2σ²)} dx
    = (σ^{2r}/√(2π)) ∫_{−∞}^{∞} z^{2r} e^{−z²/2} dz, putting (x − μ)/σ = z
    = (2σ^{2r}/√(2π)) ∫₀^{∞} z^{2r} e^{−z²/2} dz
Put z²/2 = u, z = √(2u), dz = du/√(2u); when z = 0, u = 0 and when z = ∞, u = ∞:
μ₂ᵣ = (2σ^{2r}/√(2π)) ∫₀^{∞} (2u)^r e^{−u} du/√(2u)
    = (2^r σ^{2r}/√π) ∫₀^{∞} u^{r − 1/2} e^{−u} du
    = (2^r σ^{2r}/√π) Γ(r + 1/2)
    = (2^r σ^{2r}/√π) (r − 1/2)(r − 3/2) ... (3/2)(1/2) Γ(1/2), using Γ(1/2) = √π
ie., μ₂ᵣ = 1·3·5 ... (2r − 1) σ^{2r}
In particular, μ₂ = σ² and μ₄ = 1·3·σ⁴ = 3σ⁴, so that
β₂ = μ₄/μ₂² = 3σ⁴/σ⁴ = 3
Since the normal curve is symmetric, obviously β₁ = 0, as μ₃ = 0.
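As a hedged numerical check of these moment formulas (illustrative values μ = 1, σ = 2; a simple midpoint rule stands in for exact integration):

```python
# Verify mu_2 = sigma^2, mu_3 = 0 and mu_4 = 3*sigma^4 by direct integration.
from math import exp, pi, sqrt

mu, sigma = 1.0, 2.0

def f(x):
    """Normal N(mu, sigma^2) density."""
    return exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (sigma * sqrt(2 * pi))

def central_moment(r, lo=-30.0, hi=30.0, n=100_000):
    """Midpoint-rule approximation of the r-th central moment."""
    h = (hi - lo) / n
    return sum((lo + (i + 0.5) * h - mu) ** r * f(lo + (i + 0.5) * h)
               for i in range(n)) * h

assert abs(central_moment(2) - sigma ** 2) < 1e-5
assert abs(central_moment(3)) < 1e-6
assert abs(central_moment(4) - 3 * sigma ** 4) < 1e-3
print("moment formulas confirmed")
```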
Recurrence relation for even order central moments
We have
μ₂ᵣ = 1·3·5 ... (2r − 1) σ^{2r}  and  μ₂ᵣ₊₂ = 1·3·5 ... (2r − 1)(2r + 1) σ^{2r+2}
∴ μ₂ᵣ₊₂ = (2r + 1) σ² μ₂ᵣ
This is the required relation. With the help of the above recurrence formula we calculate the 2nd and 4th moments:
Put r = 0: μ₂ = 1 · σ² = σ²
Put r = 1: μ₄ = (2 + 1) σ² μ₂ = 3σ⁴

Moment generating function
Mx(t) = E[e^{tX}] = ∫_{−∞}^{∞} e^{tx} f(x) dx = ∫_{−∞}^{∞} e^{tx} (1/(σ√(2π))) e^{−(x − μ)²/(2σ²)} dx
Put (x − μ)/σ = z, x = μ + σz, dx = σ dz:
Mx(t) = e^{μt} (1/√(2π)) ∫_{−∞}^{∞} e^{tσz} e^{−z²/2} dz
      = e^{μt} (1/√(2π)) ∫_{−∞}^{∞} e^{−(1/2)(z² − 2tσz + t²σ²) + (1/2)t²σ²} dz
      = e^{μt + (1/2)t²σ²} (1/√(2π)) ∫_{−∞}^{∞} e^{−(1/2)(z − tσ)²} dz
      = e^{μt + (1/2)t²σ²} (1/√(2π)) ∫_{−∞}^{∞} e^{−u²/2} du, putting z − tσ = u, dz = du
      = e^{μt + (1/2)t²σ²} · 1, the last integral being that of the N(0, 1) density.
Thus, Mx(t) = e^{μt + (1/2)t²σ²}

Central mgf
E[e^{t(X − μ)}] = E[e^{tX} · e^{−μt}] = e^{−μt} E[e^{tX}] = e^{−μt} Mx(t)
             = e^{−μt} e^{μt + (1/2)t²σ²} = e^{(1/2)t²σ²}

Standard Normal distribution
When X ~ N(μ, σ²), its pdf is given by
f(x) = (1/(σ√(2π))) e^{−(x − μ)²/(2σ²)}, −∞ < X < ∞
Define Z = (X − μ)/σ. By changing the variable, the probability density function of Z is given by
f(z) = (1/√(2π)) e^{−z²/2}, −∞ < Z < ∞
and the corresponding probability distribution is called the standard normal distribution. We can see that the mean of Z is 0 and the variance of Z is 1. That means a normal distribution with mean 0 and variance unity is called the standard normal distribution, and we write this as Z ~ N(0, 1). It is for this very reason, in fact, that Z is called a standard normal variable. When we speak of 'standardising' any variable, this is precisely what we mean: shifting it so that its mean is zero and rescaling it so that its standard deviation is 1.
A random variable is thus said to be a standard normal variate if its pdf is given by
f(z) = (1/√(2π)) e^{−z²/2}, −∞ < Z < ∞

Mgf of standard normal distribution
Mz(t) = E[e^{tZ}] = E[e^{t(X − μ)/σ}] = e^{−μt/σ} Mx(t/σ)
      = e^{−μt/σ} e^{μt/σ + (1/2)t²} = e^{(1/2)t²}

The standard normal distribution satisfies all the properties of the normal distribution with μ = 0 and σ = 1. Some of them are the following. The curve of f(z) is symmetrical about the ordinate at z = 0. The curve of f(z) has its maximum at z = 0, the maximum ordinate being 1/√(2π). The curve extends from −∞ to +∞. Mean = Median = Mode = 0. In a standard normal distribution 95.45% of observations lie between −2 and +2. Similarly, 99.73% of observations lie between −3 and +3.

Area under the normal probability curve:
In a normal distribution we are generally interested in finding the probability of the variate lying between two values, say, a and b. To determine this, we first of all express these values in terms of the standard normal variate, by the substitution Z = (X − μ)/σ.
The table of areas under the STANDARD NORMAL CURVE gives the probability of the variate lying between 0 and any positive value of z. This area can be read from the table of 'Areas under the standard normal curve' (provided at the end of this book). Therefore, to find any probability regarding X, the standard normal variate can be made use of.
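The tabulated areas P(0 < Z < z) can be reproduced from the error function, since P(0 < Z < z) = ½ erf(z/√2) — a small sketch:

```python
# Reproduce standard-normal table areas P(0 < Z < z).
from math import erf, sqrt

def area_0_to(z):
    """Area under the standard normal curve from 0 to z."""
    return erf(z / sqrt(2)) / 2

print(round(area_0_to(1.1), 4))  # 0.3643
print(round(area_0_to(2.1), 4))  # 0.4821
```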
For example,
P(1.1 ≤ Z ≤ 2.1) = ∫_{1.1}^{2.1} f(z) dz = ∫₀^{2.1} f(z) dz − ∫₀^{1.1} f(z) dz
= 0.4821 − 0.3643 = 0.1178
Normal distribution as a limiting form of binomial distribution
Binomial distribution tends to normal distribution under the following conditions:
(i) n is very large (n → ∞)
(ii) neither p nor q is very small.
Remark
The normal distribution can also be obtained as a limiting form of the Poisson distribution with the parameter λ → ∞.
The above result is applied for calculating binomial probabilities when n is very large. For example, suppose a fair coin is tossed 100 times, so that X ~ B(100, 1/2), and we want to calculate the probability of getting 56 to 60 heads. Under the binomial situation the probability is equal to the sum of the areas of the rectangles of the histogram with width 1 and with midpoints at 56, 57, ..., 60. But that area can be obtained using the normal approximation: we find the area from 55.5 to 60.5 under a normal curve with mean np = 50 and SD √(npq) = 5, in the usual manner.
ie., P(56 ≤ X ≤ 60) = P(56) + P(57) + P(58) + P(59) + P(60) ..... binomial
≈ P(55.5 ≤ X ≤ 60.5) ..... normal
= P((55.5 − 50)/5 ≤ Z ≤ (60.5 − 50)/5) = P(1.1 ≤ Z ≤ 2.1)
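The continuity-correction example can be checked against the exact binomial sum — a sketch, with `phi` the standard normal CDF:

```python
# Exact P(56 <= X <= 60) for X ~ B(100, 1/2) vs. the N(50, 5^2) approximation.
from math import comb, erf, sqrt

exact = sum(comb(100, k) * 0.5 ** 100 for k in range(56, 61))

def phi(z):
    return (1 + erf(z / sqrt(2))) / 2

# Continuity correction: P(55.5 <= X <= 60.5) under the normal curve.
approx = phi((60.5 - 50) / 5) - phi((55.5 - 50) / 5)

print(round(exact, 4), round(approx, 4))  # exact ~ 0.1180, approximation ~ 0.1178
```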
SOLVED PROBLEMS
Example 1
X is a normal variate with mean 42 and standard deviation 4. Find the probability that a value taken by X is (i) less than 50 (ii) greater than 50 (iii) less than 40 (iv) greater than 40 (v) between 43 and 46 (vi) between 40 and 44 (vii) between 37 and 41.
Solution
X is a normal variate with parameters μ = 42 and σ = 4. Therefore Z = (X − μ)/σ = (X − 42)/4 is a standard normal variate.
(i) P[X < 50] = P[(X − 42)/4 < (50 − 42)/4] = P[Z < 2]
= P(−∞ < Z < 0) + P(0 < Z < 2)
= [area from −∞ to 0] + [area from 0 to 2]
= 0.5 + 0.4772 = 0.9772
(ii) P[X > 50] = P[(X − 42)/4 > (50 − 42)/4] = P[Z > 2]
= P(0 < Z < ∞) − P(0 < Z < 2)
= [area from 0 to ∞] − [area from 0 to 2]
= 0.5 − 0.4772 = 0.0228
(iii) P[X < 40] = P[(X − 42)/4 < (40 − 42)/4] = P[Z < −0.5]
= P(−∞ < Z < 0) − P(−0.5 < Z < 0)
= [area from −∞ to 0] − [area from 0 to 0.5]
= 0.5 − 0.1915 = 0.3085
(iv) P[X > 40] = P[(X − 42)/4 > (40 − 42)/4] = P[Z > −0.5]
= P(−0.5 < Z < 0) + P(0 < Z < ∞)
= [area from 0 to 0.5] + [area from 0 to ∞]
= 0.1915 + 0.5 = 0.6915
(v) P[43 < X < 46] = P[(43 − 42)/4 < Z < (46 − 42)/4] = P[0.25 < Z < 1]
= P(0 < Z < 1) − P(0 < Z < 0.25)
= [area from 0 to 1] − [area from 0 to 0.25]
= 0.3413 − 0.0987 = 0.2426
(vi) P[40 < X < 44] = P[(40 − 42)/4 < Z < (44 − 42)/4] = P[−0.5 < Z < 0.5]
= P(−0.5 < Z < 0) + P(0 < Z < 0.5)
= [area from 0 to 0.5] + [area from 0 to 0.5]
= 0.1915 + 0.1915 = 0.3830
(vii) P[37 < X < 41] = P[(37 − 42)/4 < Z < (41 − 42)/4] = P[−1.25 < Z < −0.25]
= P(−1.25 < Z < 0) − P(−0.25 < Z < 0)
= [area from 0 to 1.25] − [area from 0 to 0.25]
= 0.3944 − 0.0987 = 0.2957
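The parts of Example 1 can be re-computed with the standard library's `statistics.NormalDist` (Python 3.8+); differences in the last digit against the four-figure table values are rounding effects:

```python
from statistics import NormalDist

X = NormalDist(mu=42, sigma=4)

print(round(X.cdf(50), 4))              # (i)   P(X < 50) -> 0.9772
print(round(1 - X.cdf(50), 4))          # (ii)  P(X > 50) -> 0.0228
print(round(X.cdf(40), 4))              # (iii) P(X < 40) -> 0.3085
print(round(1 - X.cdf(40), 4))          # (iv)  P(X > 40) -> 0.6915
print(round(X.cdf(46) - X.cdf(43), 4))  # (v)   P(43 < X < 46) -> 0.2426
print(round(X.cdf(44) - X.cdf(40), 4))  # (vi)  P(40 < X < 44) -> 0.3829 (table: 0.3830)
print(round(X.cdf(41) - X.cdf(37), 4))  # (vii) P(37 < X < 41) -> 0.2956 (table: 0.2957)
```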
Example 2
If X is normally distributed with mean 11 and standard deviation 1.5, find the number x₀ such that
(i) P(X > x₀) = 0.3 (ii) P(X < x₀) = 0.09
Solution
Given X ~ N(11, 1.5).
(i) We have to find x₀ such that P(X > x₀) = 0.3
ie., P[(X − 11)/1.5 > (x₀ − 11)/1.5] = 0.3
ie., P(Z > z₁) = 0.3, where z₁ = (x₀ − 11)/1.5
P(0 < Z < ∞) − P(0 < Z < z₁) = 0.3
0.5 − P(0 < Z < z₁) = 0.3
ie., P(0 < Z < z₁) = 0.2
From the table, z₁ = 0.524
ie., (x₀ − 11)/1.5 = 0.524
ie., x₀ = 11 + 1.5 × 0.524 = 11.786
(ii) Find x₀ such that P[X < x₀] = 0.09
ie., P[(X − 11)/1.5 < (x₀ − 11)/1.5] = 0.09
ie., P[Z < z₂] = 0.09, where z₂ = (x₀ − 11)/1.5, and it is negative
P(−∞ < Z < 0) − P(z₂ < Z < 0) = 0.09
0.5 − [area from 0 to −z₂] = 0.09
Area from 0 to −z₂ = 0.41
From the table, z₂ = −1.34
ie., (x₀ − 11)/1.5 = −1.34
ie., x₀ = 11 − 1.5 × 1.34 = 8.99
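The two "find x₀" questions above are one-liners with the inverse CDF — an illustrative sketch:

```python
# x0 with P(X > x0) = 0.3 and x0 with P(X < x0) = 0.09, for X ~ N(11, 1.5).
from statistics import NormalDist

X = NormalDist(mu=11, sigma=1.5)

x0_upper = X.inv_cdf(1 - 0.3)   # P(X > x0) = 0.3  <=>  P(X < x0) = 0.7
x0_lower = X.inv_cdf(0.09)      # P(X < x0) = 0.09

print(round(x0_upper, 2), round(x0_lower, 2))  # 11.79 8.99
```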
Example 3
In a normal distribution, 15% of items are below 35 and 10% of items are above 65. Find the mean and standard deviation.
Solution
Let X be the normal variate, with mean μ and standard deviation σ, so that Z = (X − μ)/σ is a standard normal variate.
It is given that 15% of items are below 35.
ie., P[X < 35] = 15/100 = 0.15
ie., P[(X − μ)/σ < (35 − μ)/σ] = 0.15
ie., P[Z < z₁] = 0.15, where z₁ = (35 − μ)/σ
Then [area from −∞ to z₁] = 0.15
ie., [area from z₁ to 0] = 0.35
ie., area from 0 to (−z₁) = 0.35
From the table, −z₁ = 1.04
ie., (35 − μ)/σ = −1.04
ie., 35 − μ = −1.04σ
ie., μ = 1.04σ + 35    ...(1)
It is also given that 10% of items are above 65.
ie., P[X > 65] = 10/100 = 0.1
ie., P[Z > z₂] = 0.1, where z₂ = (65 − μ)/σ
Then area from z₂ to ∞ = 0.1
ie., [area from 0 to z₂] = 0.4
From the table of areas, z₂ = 1.28
ie., (65 − μ)/σ = 1.28
ie., 65 − μ = 1.28σ    ...(2)
Substituting the value of μ from (1) in (2):
65 − 1.04σ − 35 = 1.28σ
ie., 30 = 2.32σ  ∴ σ = 30/2.32 = 12.93
Substituting σ = 12.93 in result (1):
μ = 1.04 × 12.93 + 35 = 48.45
Thus, mean = 48.45 and standard deviation = 12.93.
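Example 3 can be solved the same way without tables: read the two z-values from the inverse CDF and solve the two linear equations. (The results differ slightly from 48.45 and 12.93 only because the table values 1.04 and 1.28 are rounded.)

```python
# Recover mu and sigma from: 15% of items below 35, 10% above 65.
from statistics import NormalDist

z1 = NormalDist().inv_cdf(0.15)   # (35 - mu)/sigma, about -1.0364
z2 = NormalDist().inv_cdf(0.90)   # (65 - mu)/sigma, about  1.2816

# 35 = mu + z1*sigma and 65 = mu + z2*sigma  =>  sigma = 30/(z2 - z1)
sigma = (65 - 35) / (z2 - z1)
mu = 35 - z1 * sigma
print(round(mu, 2), round(sigma, 2))  # about 48.41 12.94
```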
2. Rectangular Distribution [Uniform Distribution (Continuous)]
A very simple distribution for a continuous random variable is the uniform distribution. It is particularly useful in theoretical statistics because it is convenient to deal with mathematically.
Definition
A continuous random variable X is said to have a rectangular distribution if its pdf is given by
f(x) = 1/(b − a), a ≤ x ≤ b
     = 0, elsewhere
Remarks:
1. a and b (a < b) are the two parameters of the uniform distribution on (a, b).
2. The distribution is known as the rectangular distribution, since the curve y = f(x) describes a rectangle over the x-axis and between the ordinates at x = a and x = b.
3. The d.f. F(x) is given by
F(x) = 0, −∞ < x < a
     = (x − a)/(b − a), a ≤ x < b
     = 1, b ≤ x < ∞
4. F(x) is continuous but not differentiable at x = a and x = b. Thus F′(x) = f(x) = 1/(b − a) ≠ 0 exists everywhere except at the points x = a and x = b, and consequently we get the p.d.f. f(x). The graphs of the uniform p.d.f. f(x) and the corresponding distribution function F(x) are as described above.
5. For a rectangular or uniform variate X in (−a, a), the p.d.f. is given by
f(x) = 1/(2a), −a < x < a
     = 0, otherwise
Moments
Mean
Mean = E(X) = ∫ₐᵇ x f(x) dx = ∫ₐᵇ x · 1/(b − a) dx
= [x²/2]ₐᵇ · 1/(b − a) = (b² − a²)/(2(b − a)) = (b + a)/2
Variance
V(X) = E(X²) − {E(X)}²
E(X²) = ∫ₐᵇ x² · 1/(b − a) dx = [x³/3]ₐᵇ · 1/(b − a) = (b³ − a³)/(3(b − a)) = (b² + ab + a²)/3
∴ V(X) = (b² + ab + a²)/3 − ((b + a)/2)²
= (4b² + 4ab + 4a² − 3b² − 6ab − 3a²)/12 = (b − a)²/12
SD = (b − a)/√12
Moment generating function
Mx(t) = E[e^{tX}] = ∫ₐᵇ e^{tx} f(x) dx = (1/(b − a)) ∫ₐᵇ e^{tx} dx
= (1/(b − a)) [e^{tx}/t]ₐᵇ = (e^{bt} − e^{at})/(t(b − a)), t ≠ 0

3. Gamma Distribution
A continuous r.v. X is said to have a gamma distribution if its probability density function is given by
f(x) = (mᵖ/Γp) e^{−mx} x^{p−1}, x ≥ 0
     = 0, otherwise
where m > 0, p > 0 are called the parameters of the gamma distribution (Γp is read as "Gamma p").
Note: Being a pdf, we know that
∫₀^∞ f(x) dx = 1
ie., ∫₀^∞ (mᵖ/Γp) e^{−mx} x^{p−1} dx = 1
or ∫₀^∞ e^{−mx} x^{p−1} dx = Γp/mᵖ
This result is very useful in evaluating certain integrals.
By putting m = 1,
∫₀^∞ e^{−x} x^{p−1} dx = Γp = (p − 1)! when p is a +ve integer.
If p = n, a positive integer, Γn = (n − 1)!
Using integration by parts, we can show that
Γ(n + 1) = n! = n(n − 1)! = nΓn
ie., Γ(n + 1) = nΓn
Putting m = 1 and p = 1/2 in the above, we get
Γ(1/2) = ∫₀^∞ e^{−x} x^{(1/2) − 1} dx = √π, which also proves useful in evaluating certain integrals.
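The gamma-integral identity ∫₀^∞ e^{−mx} x^{p−1} dx = Γp/mᵖ can be checked numerically — a hedged sketch for the illustrative values m = 2, p = 3.5:

```python
# Midpoint-rule check of the gamma integral against math.gamma.
from math import exp, gamma

m, p = 2.0, 3.5
lo, hi, n = 0.0, 60.0, 200_000   # e^{-2x} makes the tail beyond 60 negligible
h = (hi - lo) / n
integral = sum(exp(-m * (lo + (i + 0.5) * h)) * (lo + (i + 0.5) * h) ** (p - 1)
               for i in range(n)) * h

assert abs(integral - gamma(p) / m ** p) < 1e-5
print(round(integral, 5))
```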
Moments
Mean: E(X) = p/m
Variance: V(X) = E(X²) − {E(X)}² = (p² + p)/m² − p²/m² = p/m²
4. Exponential Distribution
Definition
If a continuous random variable X has a probability density function given by
f(x) = λe^{−λx}, x > 0, λ > 0
then X is defined to have an exponential distribution.
Note:
The exponential distribution is a particular case of the gamma distribution: in a gamma distribution with parameters m and p, if we put m = λ and p = 1, we get the exponential distribution.
Moments
Mean
E(X) = ∫₀^∞ x f(x) dx = ∫₀^∞ x λe^{−λx} dx = λ · Γ2/λ² = 1/λ
Variance
V(X) = E(X²) − {E(X)}²
E(X²) = ∫₀^∞ x² λe^{−λx} dx = λ ∫₀^∞ e^{−λx} x^{3−1} dx = λ · Γ3/λ³ = 2/λ²
∴ V(X) = 2/λ² − 1/λ² = 1/λ²
Moment generating function
Mx(t) = E(e^{tX}) = ∫₀^∞ e^{tx} λe^{−λx} dx = λ ∫₀^∞ e^{−(λ − t)x} dx
= λ [−e^{−(λ − t)x}/(λ − t)]₀^∞ = λ/(λ − t) = (1 − t/λ)^{−1}, for t < λ
5. Beta Distribution
Definition
If a random variable X has a probability density function given by
f(x) = (1/β(m, n)) x^{m−1} (1 − x)^{n−1}, 0 ≤ x ≤ 1
where m > 0 and n > 0, then X is said to have a beta distribution. This is called the beta distribution of the first kind and is denoted as β₁(m, n).
By definition we have
β(m, n) = ∫₀¹ x^{m−1} (1 − x)^{n−1} dx, which is called the beta function.
Note:
The beta distribution reduces to the uniform distribution over (0, 1) if m = n = 1.
Moments
Mean: E(X) = m/(m + n)
Variance: V(X) = mn/((m + n)²(m + n + 1))

6. Log Normal Distribution
Definition
Let X be a positive random variable, and let Y = logₑX. If Y has a normal distribution, then X is said to have a log normal distribution. The pdf of the log normal distribution is given by
f(x) = (1/(xσ√(2π))) e^{−(log x − μ)²/(2σ²)}, 0 < x < ∞
where −∞ < μ < ∞ and σ > 0. When X is distributed log normally with parameters μ and σ² we write X ~ Λ(μ, σ²).
Moments
Now μᵣ′ = E(Xʳ) = E(e^{rY}) (since log X = Y ⇒ X = e^{Y})
= e^{rμ + r²σ²/2}
Mean = e^{μ + σ²/2}
Variance = E(X²) − {E(X)}² = e^{2μ + 2σ²} − e^{2μ + σ²} = e^{2μ + σ²}(e^{σ²} − 1)
Coefficient of Variation = SD/AM = √(e^{σ²} − 1)

7. Pareto Distribution
Definition
Let X be a continuous random variable. If the pdf of X is given by
f(x) = (α/x₀)(x₀/x)^{α+1}, x₀ ≤ x < ∞, α > 0, x₀ > 0
then X is said to follow a Pareto distribution.
Mean = αx₀/(α − 1), for α > 1
Variance = αx₀²/((α − 2)(α − 1)²), for α > 2
The mgf of the Pareto distribution does not exist.
This distribution has found applications in modelling problems involving distributions of incomes, when incomes exceed a certain limit x₀.
Cauchy Distribution
Definition
A continuous r.v. X is said to follow a Cauchy distribution if its probability density function is
f(x) = 1/(πλ[1 + ((x − μ)/λ)²]), −∞ < x < ∞
where −∞ < μ < ∞ and λ > 0. It has two parameters, μ and λ; in this case we can write X ~ C(μ, λ).
If μ = 0 and λ = 1, then the pdf of the Cauchy distribution will be
f(x) = 1/(π(1 + x²)), −∞ < x < ∞
Here the r.v. X ~ C(0, 1).
Properties
1. For a Cauchy distribution the mean does not exist.
2. For a Cauchy distribution the variance does not exist.
3. The mgf of the Cauchy distribution does not exist.
4. The characteristic function of the Cauchy distribution is e^{iμt − λ|t|}.
5. The Cauchy distribution curve is unimodal and has its maximum at the point x = μ.

SOLVED PROBLEM
Example 1
On an average 20 customers arrive at a garment shop every hour. Find the probability that
a) the next customer will arrive within 5 minutes, and
b) no customer will arrive during the next 10 minutes.
Solution
Here the unit of measurement is one minute. Since 20 customers arrive per hour, the number of customers arriving per minute is 1/3; or, on an average, one customer arrives every 3 minutes. If the random variable X represents the time interval between the arrivals of successive customers, then X is distributed exponentially with parameter λ = 1/3 (mean 3 minutes). Hence the pdf will be
f(x) = (1/3) e^{−x/3}, x ≥ 0
a) P(the next customer arrives within 5 minutes) = P(X < 5) = ∫₀⁵ (1/3)e^{−x/3} dx
= e^{−0/3} − e^{−5/3} = 1 − e^{−5/3} = 1 − 0.1889 = 0.8111
b) P(no customer arrives during the next 10 minutes) = P(the time to the next arrival is greater than 10 minutes) = P(X > 10)
= ∫₁₀^∞ (1/3)e^{−x/3} dx = e^{−10/3} = 0.0357
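The garment-shop example can be verified directly (interarrival time X exponential with λ = 1/3) — a two-line sketch:

```python
# Check P(X < 5) and P(X > 10) for X ~ Exp(lam = 1/3).
from math import exp

lam = 1 / 3
p_within_5 = 1 - exp(-lam * 5)    # P(X < 5)
p_none_in_10 = exp(-lam * 10)     # P(X > 10)

print(round(p_within_5, 4))    # 0.8111
print(round(p_none_in_10, 4))  # 0.0357
```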
EXERCISES
Fill in the blanks
10. The normal distribution is symmetric about ..................
11. The mgf of the ND N(μ, σ²) is ..................
12. For a ND all odd order central moments μ₂ᵣ₊₁ = ..................
13. BD tends to ND when .................. and ..................
14. The mgf of the standard normal distribution is ..................
15. In a standard normal distribution .................. per cent of observations lie between −2.58 and +2.58.
16. For a standard normal distribution the quartile deviation is ..................
17. The mean of the rectangular distribution is ..................
18. The variance of the exponential distribution is ..................
19. The mean of the log normal distribution is ..................
Multiple choice questions
1. For a normal curve, the QD, MD and SD are in the ratio
a. 5 : 6 : 7  b. 10 : 12 : 15  c. 2 : 3 : 4  d. none of these
2. The area under the standard normal curve beyond the lines z = ±1.96 is
a. 95%  b. 90%  c. 5%  d. 10%
3. Normal distribution was discovered by
a. Laplace  b. De-Moivre  c. Gauss  d. all the above
4. The characteristic function of the normal distribution is
a. e^{μt + ½t²σ²}  b. e^{iμt − ½t²σ²}  c. both a and b  d. none of a and b
5. A normal distribution is
a. symmetric  b. continuous  c. mesokurtic  d. all the above
6. The distribution function of the rectangular distribution of a variable X lying in the interval (a, b) is
a. 1/(b − a)  b. (x − a)/(b − a)  c. (x − b)/(b − a)  d. (b − x)/(b − a)
7. The mean of the Pareto distribution f(x; x₀, α) is
a. x₀/(α − 1) for α > 1  b. αx₀/(α − 1) for α > 1  c. α/(x₀ − 1) for x₀ > 1  d. none of these
8. The probability distribution for which the mean and higher order moments do not exist is
a. Pareto distribution  b. Cauchy distribution  c. Log normal distribution  d. Gamma distribution
9. If X ~ Expo(4), the probability density function of X is
a. 4e^{−4x} for x > 0  b. e^{−4x} for x > 0  c. 4e^{−x} for x > 0  d. (1/5)e^{−5x} for x > 0
Very short answer questions
20. Define normal distribution.
21. What are the points of inflexion of the normal curve?
22. What is the area under the standard normal curve ranging from −3 to +3?
23. Define rectangular distribution.
24. Define gamma distribution.
25. Define exponential distribution.
26. Define beta distribution of the first kind.
27. Define log normal distribution.
28. Define Pareto distribution.
29. Define Cauchy distribution.
30. What are the main features of a normal distribution with mean μ and variance σ²?
31. If X is N(5, 3), find the distribution of Y = 2X + 5.
32. Write down the probability function of a normal variate (i) with mean 20 and standard deviation 4 (ii) with mean 0 and variance 64.
33. If X is normally distributed with mean 15 and variance 16, find P(12 < X < 20).
34. The heights of school children in a district are found to follow a normal distribution with μ = 56 and σ = 10. What percentage of children
(a) exceed 68 inches
(b) are less than 40 inches
(c) are between 50 and 60 inches?
Long essay questions
35. The scores in a test follow the normal law with mean 60 and s.d. 10. Find the % of students scoring (1) above 75 (2) between 65 and 75 (3) between 48 and 70 (4) below 40.
36. If X ~ N(30, 5), find (i) P(X > 40) and (ii) P(|X − 5| ≥ 20).
37. In a normal distribution, 20% of the observations are greater than 70 and 10% of the observations lie between 60 and 70. Find the mean and standard deviation.
38. If X has an exponential distribution with mean 2, find P(X < 1 | X < 2).
39. A random variable X is uniformly distributed over (a, b). If E(X) = 1/2 and V(X) = 3/4, find the values of a and b.
ANSWERS

Discrete Probability Distributions
Multiple choice questions: (1) c, (2) a, (3) c, (4) b, (5) b, (6) a
Fill in the blanks: (1) greater than, (2) 1/2, (3) one, (4) (n + 1)/2 and (n² − 1)/12, (5) q/p
Short essay questions: (2) 53/3125, (3) 6, (4) (1/3 + 2/3)⁹, (5) 11/32, (8) 1/e, (9) 0.0183, (10) 12, (11) (i) 0.2231 (ii) 0.1912, (14) 3/4
Long essay questions: (1) (i) (1/2)³² (ii) 32(1/2)³² (iii) 1 − 33(1/2)³², (4) (i) 362 (ii) 35, (5) 1 − Σ_{r=0}^{10} e⁻⁵ 5ʳ/r!, (6) (i) 0.1353 (ii) 0.3235, (7) 122, 61, 15, 2, 0

Continuous Probability Distributions
Multiple choice questions: (1) b, (2) c, (3) c, (4) b, (5) d, (6) b, (7) b, (8) b, (9) a
Fill in the blanks: (10) x = μ, (11) e^{μt + ½t²σ²}, (12) zero, (13) n → ∞ and neither p nor q very small, (14) e^{t²/2}, (15) 99%, (16) 2/3, (17) (b + a)/2, (18) 1/λ², (19) e^{μ + σ²/2}
Very short answer questions: (31) Y ~ N(15, 1), (32) (i) f(x) = (1/(4√(2π))) e^{−(x − 20)²/32}, −∞ < x < ∞ (ii) f(x) = (1/(8√(2π))) e^{−x²/128}, −∞ < x < ∞, (33) 0.6678, (34) (a) 21.19% (b) 4.58% (c) 38.11%
Long essay questions: (35) (i) 6.68% (ii) 24.17% (iii) 72.62% (iv) 2.28%, (36) (i) 0.0228 (ii) 0.1587, (37) 43.75, 31.25, (38) (1 − e^{−1/2})/(1 − e^{−1}), (39) a = −1, b = 2
Module IV
LAW OF LARGE NUMBERS

Chebyshev's Inequality
If X is a random variable with E(X) = μ and V(X) = σ², which exists, then for any k > 0,
P{|X − μ| ≥ kσ} ≤ 1/k²
and, hence,
P{|X − μ| < kσ} ≥ 1 − 1/k²
Proof:
By definition,
σ² = E[X − E(X)]² = E[X − μ]² = ∫_{−∞}^{∞} (x − μ)² f(x) dx
Then, dividing the range of integration into three parts, we get
σ² = ∫_{−∞}^{μ−kσ} (x − μ)² f(x) dx + ∫_{μ−kσ}^{μ+kσ} (x − μ)² f(x) dx + ∫_{μ+kσ}^{∞} (x − μ)² f(x) dx
Since the integrand (x − μ)² f(x) is non-negative, we can form the inequality
σ² ≥ ∫_{−∞}^{μ−kσ} (x − μ)² f(x) dx + ∫_{μ+kσ}^{∞} (x − μ)² f(x) dx
by deleting the second integral. Now, since (x − μ)² ≥ k²σ² for x ≤ μ − kσ or x ≥ μ + kσ, it follows that
σ² ≥ ∫_{−∞}^{μ−kσ} k²σ² f(x) dx + ∫_{μ+kσ}^{∞} k²σ² f(x) dx
ie., 1/k² ≥ ∫_{−∞}^{μ−kσ} f(x) dx + ∫_{μ+kσ}^{∞} f(x) dx
    = P{X ≤ μ − kσ} + P{X ≥ μ + kσ}
    = P{|X − μ| ≥ kσ}
ie., P{|X − μ| ≥ kσ} ≤ 1/k²
On taking the complement, P{|X − μ| < kσ} ≥ 1 − 1/k²
In a similar way, we can prove Chebyshev's inequality in the case of a discrete random variable, by replacing integration by summation.

Convergence in probability
There are various modes of convergence, and convergence in probability is a property that is connected with a sequence of random variables. The law of large numbers and the central limit theorem, which are going to be proved in this chapter, are of considerable importance in the study of probability and statistics.
Definition
A sequence of r.v.s X₁, X₂, ..., Xₙ, ... is said to converge in probability to a constant a if, for any ε > 0, however small,
P{|Xₙ − a| ≥ ε} → 0 as n → ∞
or, equivalently,
P{|Xₙ − a| < ε} → 1 as n → ∞,
and we write Xₙ →ᴾ a as n → ∞.
If there exists a random variable X such that, for any ε > 0, however small,
Lt_{n→∞} P{|Xₙ − X| ≥ ε} = 0,
then {Xₙ} is said to converge in probability to the r.v. X. This is also called stochastic convergence.

Weak law of large numbers
Theorem:
Let X₁, X₂, ..., Xₙ be a sequence of independent random variables with E(Xᵢ) = μ and V(Xᵢ) = σ², i = 1, 2, ..., n. Define X̄ₙ = (1/n) Σᵢ Xᵢ. Then for any ε > 0, however small,
P{|X̄ₙ − μ| ≥ ε} → 0 as n → ∞.
Proof
Given E(Xᵢ) = μ, V(Xᵢ) = σ², i = 1, 2, ..., n.
∴ E(X̄ₙ) = E[(1/n) Σ Xᵢ] = (1/n) Σ E(Xᵢ) = (1/n) nμ = μ
V(X̄ₙ) = V[(1/n) Σ Xᵢ] = (1/n²) Σ V(Xᵢ), since the Xᵢ's are independent
      = (1/n²) nσ² = σ²/n
From Chebyshev's inequality,
P{|X̄ₙ − μ| ≥ t σ/√n} ≤ 1/t²
Let us choose ε = t σ/√n; then 1/t² = σ²/(nε²), so that
P{|X̄ₙ − μ| ≥ ε} ≤ σ²/(nε²) → 0 as n → ∞
ie., P{|X̄ₙ − μ| ≥ ε} → 0 as n → ∞.

Lindeberg-Levy Theorem
Let X₁, X₂, X₃, ..., Xₙ be a sequence of independent and identically distributed random variables with E(Xᵢ) = μ and V(Xᵢ) = σ², i = 1, 2, ..., n, where we assume that 0 < σ² < ∞. Letting Sₙ = X₁ + X₂ + ... + Xₙ, the normalised random variable
Z = (Sₙ − nμ)/(σ√n) ~ N(0, 1) as n → ∞.
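The weak law can be illustrated (not proved) by simulation; Uniform(0, 1) with μ = 0.5 is an assumed example distribution here:

```python
# Estimate P(|Xbar_n - 0.5| >= eps) for growing n; it should shrink toward 0.
import random

random.seed(1)

def freq_outside(n, eps=0.05, trials=2000):
    """Fraction of trials where the sample mean misses mu = 0.5 by >= eps."""
    bad = 0
    for _ in range(trials):
        xbar = sum(random.random() for _ in range(n)) / n
        if abs(xbar - 0.5) >= eps:
            bad += 1
    return bad / trials

f10, f100, f1000 = freq_outside(10), freq_outside(100), freq_outside(1000)
print(f10, f100, f1000)   # decreasing sequence, last one essentially 0
```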
Assumptions of the CLT
1. The random variables are independent.
2. All the random variables have a common distribution.
3. The mean and variance both exist and are finite.
4. The means and variances of the random variables are equal.

SOLVED PROBLEMS

Example 1
Find the least value of the probability P(1 ≤ X ≤ 7), where X is a r.v. with E(X) = 4 and Var(X) = 4. (CU 93S)
Solution
Given E(X) = 4, V(X) = 4, so σ = 2. By Chebyshev's inequality,
P{|X − μ| ≤ kσ} ≥ 1 − 1/k²
ie., P{|X − 4| ≤ 2k} ≥ 1 − 1/k²
But we have to find the least value of
P(1 ≤ X ≤ 7) = P{1 − 4 ≤ X − 4 ≤ 7 − 4} = P{−3 ≤ X − 4 ≤ 3} = P{|X − 4| ≤ 3}
Put 2k = 3, k = 3/2:
P(1 ≤ X ≤ 7) = P{|X − 4| ≤ 3} ≥ 1 − 4/9 = 5/9
∴ Least value of the probability = 5/9.

Example 2
A random variable X has mean 50 and variance 100. Use Chebyshev's inequality to obtain appropriate bounds for (i) P{|X − 50| ≥ 15} and (ii) P{|X − 50| < 20}. (CU 92S)
Solution
Given E(X) = 50, V(X) = 100, so σ = 10.
(i) By Chebyshev's inequality,
P{|X − 50| ≥ 10k} ≤ 1/k²
We have to find a bound for P{|X − 50| ≥ 15}. Taking 10k = 15, k = 15/10 = 3/2:
P{|X − 50| ≥ 15} ≤ 1/k² = 4/9
ie., upper bound to the required probability = 4/9.
(ii) By Chebyshev's inequality,
P{|X − 50| < 10k} ≥ 1 − 1/k²
We have to find a bound for P{|X − 50| < 20}. Putting 10k = 20, k = 2:
P{|X − 50| < 20} ≥ 1 − 1/4 = 3/4
ie., lower bound to the required probability = 3/4.
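For a concrete feel for how conservative Chebyshev's bound is, compare part (ii) of Example 2 with the exact value for a normal r.v. with the same mean and variance (the bound itself holds for any distribution with that mean and variance):

```python
# Chebyshev lower bound vs. exact probability for X ~ N(50, 10^2).
from statistics import NormalDist

X = NormalDist(mu=50, sigma=10)

exact = X.cdf(70) - X.cdf(30)   # P(|X - 50| < 20), i.e. k = 2
bound = 1 - 1 / 2 ** 2          # Chebyshev: >= 3/4
print(round(exact, 4), bound)   # 0.9545 vs. 0.75
```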
Example 3
A distribution with unknown mean μ has variance equal to 1.5. Using the central limit theorem, find how large a sample should be taken in order that the probability will be at least 0.95 that the sample mean will not differ from the population mean by more than 0.5. (CU 82 A)
Solution
Let x₁, x₂, x₃, ..., xₙ be a sample of size n and let x̄ = (1/n) Σ xᵢ. We have to find n such that
P{|x̄ − μ| ≤ 0.5} ≥ 0.95    ...(1)
We can consider the random observations x₁, x₂, ..., xₙ as i.i.d. r.v.s with E(xᵢ) = μ and V(xᵢ) = 1.5, i = 1, 2, ..., n. Then by the CLT,
Z = (x̄ − μ)/√(1.5/n) ~ N(0, 1) as n → ∞
Here we know that P{|Z| ≤ 1.96} = 0.95
ie., P{|x̄ − μ| ≤ 1.96 √1.5/√n} = 0.95    ...(2)
Comparing (1) and (2), we get
1.96 √1.5/√n ≤ 0.5
ie., √n ≥ 1.96 √1.5 / 0.5
ie., n ≥ 23.05
ie., the sample size should be at least 24.

EXERCISES
Multiple choice questions
1. The abbreviation i.i.d. stands for
a. independent and identically distributed
b. identically and independently distributed
c. both a and b
d. none of a and b
2. In the CLT, the r.v.s X₁, X₂, ..., Xₙ are assumed to be
a. independent
b. identical
c. with the same mean and same variance
d. all the above
3. The propounder of the CLT for i.i.d. r.v.s is
a. De-Moivre  b. Laplace  c. Lindeberg-Levy  d. Chebyshev
Very short answer questions
1. State Chebyshev's inequality.
2. State the weak law of large numbers.
3. State the central limit theorem.
4. What are the assumptions of the CLT?
Short essay questions
1. State and prove Chebyshev's inequality.
2. State and prove the weak law of large numbers.
3. State the central limit theorem and give its assumptions.
Long essay questions
1. If E(X) = 3 and E(X²) = 13, use Chebyshev's inequality to find a lower bound for P(−2 < X < 8).
2. A die is rolled 200 times. Find a lower bound for the probability of getting 80 to 120 odd numbers.
3. For the distribution with pdf f(x) = e^{−x}, 0 ≤ x < ∞, obtain P[|X − E(X)| ≥ 2] and compare it with the value given by Chebyshev's inequality.
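The sample-size computation of Example 3 above reduces to n ≥ (zσ/d)² and can be checked in two lines:

```python
# n >= (z * sigma / d)^2 with z = 1.96, sigma^2 = 1.5, d = 0.5.
from math import ceil, sqrt

z, var, d = 1.96, 1.5, 0.5
n_min = (z * sqrt(var) / d) ** 2
print(round(n_min, 2), ceil(n_min))  # 23.05 -> 24
```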