MSIS 575: Introduction to Probability
Lecture 5
10/14/00
Scribe: Xin Wei
Topic One: Independent and Conditional Distribution
I. Independent distribution
Definition: The random variables X and Y are independent if their joint distribution factors as the product of the marginals:
f(x, y) = f_X(x) f_Y(y), i.e., \Pr[X = x, Y = y] = \Pr[X = x] \cdot \Pr[Y = y] for all x, y.
That is to say, X and Y are independent if, for all x and y, the events X = x and Y = y are independent.
If X1 and X2 are independent with marginals f_{X1}(i) = f_i and f_{X2}(j) = g_j, the joint distribution table is the product of the marginals: the cell in column X1 = i and row X2 = j holds f_i g_j.

  X2 \ X1 |  0    1    2   ...   i   ...   n   | g_{X2}
  --------+-------------------------------------+-------
     0    |                                     |  g_0
     1    |                                     |  g_1
     2    |        (cell (i, j) = f_i g_j)      |  g_2
    ...   |                                     |  ...
     j    |                                     |  g_j
    ...   |                                     |  ...
     n    |                                     |  g_n
  --------+-------------------------------------+-------
   f_{X1} | f_0  f_1  f_2  ...  f_i  ...  f_n   |
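As a quick numerical illustration (a minimal sketch, not part of the original notes; the marginal values are made up), the joint table of two independent variables is the outer product of the marginal vectors, and the product rule can then be checked cell by cell:

```python
# Minimal sketch: the joint table of two independent discrete variables is the
# outer product of their marginals (the marginal values below are illustrative).
f_X1 = [0.2, 0.5, 0.3]                      # Pr[X1 = i] for i = 0, 1, 2
g_X2 = [0.1, 0.6, 0.3]                      # Pr[X2 = j] for j = 0, 1, 2

joint = [[fi * gj for fi in f_X1] for gj in g_X2]   # joint[j][i] = f_i * g_j

# Recover the marginals from the joint table and check the product rule
# Pr[X1 = i, X2 = j] = Pr[X1 = i] * Pr[X2 = j] cell by cell.
col_margin = [sum(joint[j][i] for j in range(3)) for i in range(3)]
row_margin = [sum(joint[j]) for j in range(3)]
assert all(abs(joint[j][i] - col_margin[i] * row_margin[j]) < 1e-12
           for i in range(3) for j in range(3))
```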
II. Conditional distribution
Definition: Let X1 and X2 be two random variables. Then
f_{X_1|X_2}(x_1 \mid x_2) = \Pr[X_1 = x_1 \mid X_2 = x_2] = \frac{\Pr[X_1 = x_1, X_2 = x_2]}{\Pr[X_2 = x_2]} = \frac{f(x_1, x_2)}{f_{X_2}(x_2)}
Example: In the table below:

            x1 = 0   x1 = 1   x1 = 2   x1 = 3  |  f_{X2}
  x2 = 0     1/27     3/27     3/27     1/27   |   8/27
  x2 = 1     3/27     6/27     3/27      0     |  12/27
  x2 = 2     3/27     3/27      0        0     |   6/27
  x2 = 3     1/27      0        0        0     |   1/27
  ---------------------------------------------+--------
  f_{X1}     8/27    12/27     6/27     1/27   |
Conditioning on X2 = 2 (whose marginal probability is f_{X2}(2) = 6/27) gives the conditional distribution
  f_{X1|X2}(0 | 2) = (3/27) / (6/27) = 1/2
  f_{X1|X2}(1 | 2) = (3/27) / (6/27) = 1/2
  f_{X1|X2}(2 | 2) = 0
  f_{X1|X2}(3 | 2) = 0
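A minimal Python sketch of this computation, using exactly the joint table above: each conditional distribution is a row of the joint table divided by its row sum.

```python
from fractions import Fraction as F

# Joint table from the example above: joint[x2][x1] = Pr[X1 = x1, X2 = x2].
joint = [
    [F(1, 27), F(3, 27), F(3, 27), F(1, 27)],
    [F(3, 27), F(6, 27), F(3, 27), F(0)],
    [F(3, 27), F(3, 27), F(0),     F(0)],
    [F(1, 27), F(0),     F(0),     F(0)],
]

def conditional_given_x2(x2):
    """f_{X1|X2}(x1 | x2) = f(x1, x2) / f_{X2}(x2), with f_{X2}(x2) the row sum."""
    f_x2 = sum(joint[x2])
    return [p / f_x2 for p in joint[x2]]

print(conditional_given_x2(2))   # [1/2, 1/2, 0, 0], matching the example
```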
III. Expectation and Variance
1. Conditional Expectation
Definition: E(X_1 \mid X_2 = x_2) = \sum_{x_1} x_1 \Pr[X_1 = x_1 \mid X_2 = x_2] = \sum_{x_1} x_1 f_{X_1|X_2}(x_1 \mid x_2)
Example: For the table above,
E(X_1) = 0 \cdot \frac{8}{27} + 1 \cdot \frac{12}{27} + 2 \cdot \frac{6}{27} + 3 \cdot \frac{1}{27} = 1
E(X_1 \mid X_2 = 2) = 0 \cdot \frac{3/27}{6/27} + 1 \cdot \frac{3/27}{6/27} + 2 \cdot \frac{0}{6/27} + 3 \cdot \frac{0}{6/27} = 0.5
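The same two numbers can be checked mechanically (a minimal sketch; the joint table is the one from the example):

```python
from fractions import Fraction as F

# joint[x2][x1] = Pr[X1 = x1, X2 = x2], taken from the example table.
joint = [
    [F(1, 27), F(3, 27), F(3, 27), F(1, 27)],
    [F(3, 27), F(6, 27), F(3, 27), F(0)],
    [F(3, 27), F(3, 27), F(0),     F(0)],
    [F(1, 27), F(0),     F(0),     F(0)],
]

# Unconditional expectation from the column margins f_{X1}(x1).
f_X1 = [sum(joint[x2][x1] for x2 in range(4)) for x1 in range(4)]
E_X1 = sum(x1 * p for x1, p in enumerate(f_X1))

# Conditional expectation E(X1 | X2 = 2) from the normalized row x2 = 2.
f_X2_at_2 = sum(joint[2])
E_X1_given_2 = sum(x1 * p / f_X2_at_2 for x1, p in enumerate(joint[2]))

print(E_X1, E_X1_given_2)   # 1 and 1/2, as computed above
```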
2. Basic operations of expectations
1) Lemma: If X and Y are mutually independent random variables, then
E(XY) = E(X) E(Y)
Proof.
E(XY) = \sum_{i,j} x_i y_j f(x_i, y_j)
      = \sum_{i,j} x_i y_j f_X(x_i) f_Y(y_j)
      = \sum_{i,j} [(x_i f_X(x_i)) (y_j f_Y(y_j))]
      = (\sum_i x_i f_X(x_i)) (\sum_j y_j f_Y(y_j))
      = E(X) E(Y)
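A minimal numerical check of the lemma (the two marginals below are illustrative; the joint mass at (x, y) is formed as f_X(x) f_Y(y), so X and Y are independent by construction):

```python
from itertools import product

# Illustrative marginals for two independent random variables on {0, 1, 2}.
f_X = [0.2, 0.5, 0.3]
f_Y = [0.4, 0.4, 0.2]

E_X = sum(x * p for x, p in enumerate(f_X))
E_Y = sum(y * p for y, p in enumerate(f_Y))

# E(XY) computed against the independent joint mass f_X(x) * f_Y(y).
E_XY = sum(x * y * f_X[x] * f_Y[y] for x, y in product(range(3), range(3)))

assert abs(E_XY - E_X * E_Y) < 1e-12   # E(XY) = E(X) E(Y)
```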
2) Linearity of expectation
Lemma: E(X + Y) = E(X) + E(Y)
Proof. Let g(x, y) = x + y. Then
E(g(X, Y)) = \sum_{i,j} g(x_i, y_j) f(x_i, y_j)
           = \sum_{i,j} (x_i + y_j) f(x_i, y_j)
           = \sum_{i,j} [x_i f(x_i, y_j) + y_j f(x_i, y_j)]
           = \sum_i \sum_j x_i f(x_i, y_j) + \sum_j \sum_i y_j f(x_i, y_j)
           = \sum_i x_i \sum_j f(x_i, y_j) + \sum_j y_j \sum_i f(x_i, y_j)
           = \sum_i x_i f_X(x_i) + \sum_j y_j f_Y(y_j)
           = E(X) + E(Y)
Notes:
- In general E(X^2) \neq E(X) E(X): the product rule above requires independence, and X is not independent of itself.
- E(2X) = 2 E(X), by linearity.
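Both notes, and the linearity lemma itself, can be checked on a dependent pair (a minimal sketch; the joint distribution below is illustrative and deliberately not a product of marginals):

```python
# Illustrative joint distribution of a dependent pair (X, Y).
joint = {(0, 0): 0.3, (1, 1): 0.4, (2, 0): 0.2, (2, 2): 0.1}

def E(g):
    """Expectation of g(X, Y) under the joint distribution above."""
    return sum(g(x, y) * p for (x, y), p in joint.items())

# E(X + Y) = E(X) + E(Y) and E(2X) = 2 E(X) hold even without independence.
assert abs(E(lambda x, y: x + y) - (E(lambda x, y: x) + E(lambda x, y: y))) < 1e-12
assert abs(E(lambda x, y: 2 * x) - 2 * E(lambda x, y: x)) < 1e-12

# E(X^2) and E(X) * E(X) differ here: 1.6 versus 1.0.
print(E(lambda x, y: x ** 2), E(lambda x, y: x) ** 2)
```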
3. Moment and Deviation
Definition: If X is a random variable, then
  i)  E(X^r) is the rth moment of X;
  ii) E([X - E(X)]^r) is the rth central moment.
Deviation
For r = 2, E(X^2) is the second moment, and
Var(X) = \sigma_X^2 = E([X - E(X)]^2) is the variance of X.
Lemma: Var(X) = E(X^2) - [E(X)]^2.
That is to say, Var(X) can be calculated as the second moment minus the square of the mean.
Proof. Var(X) = E([X - E(X)]^2)
             = E(X^2 - 2 X E(X) + [E(X)]^2)
             = E(X^2) - 2 E(X E(X)) + E([E(X)]^2)
             = E(X^2) - 2 E(X) E(X) + [E(X)]^2
             = E(X^2) - [E(X)]^2
Lemma: Var(X + Y) = Var(X) + Var(Y) if X and Y are independent random variables.
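A minimal sketch checking both variance lemmas on small illustrative distributions; the distribution of X + Y for independent X and Y is computed as the convolution of the two mass functions:

```python
from itertools import product

# Illustrative distributions: Pr[X = n] for n = 0..3 and Pr[Y = n] for n = 0..2.
p_X = [0.1, 0.4, 0.3, 0.2]
p_Y = [0.5, 0.25, 0.25]

def var(p):
    """Var = E(X^2) - E(X)^2, computed from a mass function p[n] = Pr[X = n]."""
    mean = sum(n * q for n, q in enumerate(p))
    second = sum(n * n * q for n, q in enumerate(p))
    return second - mean ** 2

# Mass function of X + Y when X and Y are independent.
p_sum = [0.0] * (len(p_X) + len(p_Y) - 1)
for (x, px), (y, py) in product(enumerate(p_X), enumerate(p_Y)):
    p_sum[x + y] += px * py

assert abs(var(p_sum) - (var(p_X) + var(p_Y))) < 1e-12
```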
Topic Two: Ordinary Generating Function (OGF)
Among discrete random variables, those assuming only integer values are of special importance. A function a_0, a_1, a_2, ..., a_k, ... defined on the integers is called a sequence of numbers.
I. Generalities
Definition: Let {a_k}_{k \ge 0} be a sequence of real numbers. Then
A(z) = a_0 + a_1 z + a_2 z^2 + ... + a_n z^n + ...
is called an ordinary generating function (OGF).
Examples:
A number of sequences and their corresponding OGFs are listed here.
1) a_n : 1, 1, 1, ..., 1, ...   ↔   A(z) = 1 + z + z^2 + ... + z^n + ... = \frac{1}{1 - z}
2) b_n : 1, \alpha, \alpha^2, \alpha^3, ...   ↔   B(z) = 1 + \alpha z + \alpha^2 z^2 + ... = \frac{1}{1 - \alpha z}
3) c_n : 1, 2, 3, 4, ...   ↔   C(z) = 1 + 2z + 3z^2 + 4z^3 + ... + n z^{n-1} + ... = \frac{1}{(1 - z)^2}
4) d_n : 0, 1, \frac{1}{2}, \frac{1}{3}, ..., \frac{1}{n}, ...   ↔   D(z) = z + \frac{z^2}{2} + \frac{z^3}{3} + ... + \frac{z^n}{n} + ... = \int_0^z \frac{dz}{1 - z} = \ln \frac{1}{1 - z}
5) e_n : 1, \frac{1}{1!}, \frac{1}{2!}, ..., \frac{1}{n!}, ...   ↔   E(z) = e^z
6) f_n : \frac{F^{(n)}(0)}{n!}   ↔   F(z)
7) g_n : \binom{\alpha}{n} = \frac{\alpha (\alpha - 1) \cdots (\alpha - n + 1)}{n!}   ↔   G(z) = \sum_{n \ge 0} \binom{\alpha}{n} z^n = (1 + z)^{\alpha}, |z| < 1
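Two of these closed forms can be checked numerically (a minimal sketch; the evaluation point z and the value of \alpha are arbitrary choices for the test):

```python
import math

# Minimal sketch: evaluate two of the series above at a point and compare
# with the closed forms (z and alpha are arbitrary test values).
z, alpha, N = 0.3, 0.7, 60

# Example 2: sum of (alpha z)^n  ->  1 / (1 - alpha z)
geometric = sum((alpha * z) ** n for n in range(N))
assert abs(geometric - 1 / (1 - alpha * z)) < 1e-12

# Example 5: sum of z^n / n!  ->  e^z
exponential = sum(z ** n / math.factorial(n) for n in range(N))
assert abs(exponential - math.exp(z)) < 1e-12
```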
II. Basic operations on OGF
1. For two sequences:
If {a_n} ↔ A(z) and {b_n} ↔ B(z), then
{a_n + b_n} ↔ A(z) + B(z)
{\alpha a_n} ↔ \alpha A(z)
2. Shifting
Shift right: b_n : 0, 0, ..., 0, b_m = a_0, b_{m+1} = a_1, ..., b_{m+n} = a_n, ...   ↔   B(z) = z^m A(z)
Shift left: b_n : b_0 = a_m, b_1 = a_{m+1}, ..., b_n = a_{m+n}, ...   ↔   B(z) = \frac{A(z) - a_0 - a_1 z - a_2 z^2 - ... - a_{m-1} z^{m-1}}{z^m}
3. Convolutions
Definition: Let {a_k} and {b_k} be two numerical sequences with
a_n : A(z) = a_0 + a_1 z + ... + a_n z^n + ...
b_n : B(z) = b_0 + b_1 z + ... + b_n z^n + ...
Then {a_0 b_n + a_1 b_{n-1} + ... + a_n b_0}   ↔   A(z) \cdot B(z) = \sum_{n \ge 0} (a_0 b_n + a_1 b_{n-1} + ... + a_n b_0) z^n
The sequence {a_0 b_n + ... + a_n b_0} is called the convolution of {a_n} and {b_n}.
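A minimal sketch of this rule for finite sequences: multiplying generating functions is the same as convolving their coefficient lists (the example coefficients are illustrative).

```python
def convolve(a, b):
    """c_n = a_0*b_n + a_1*b_{n-1} + ... + a_n*b_0 for finite coefficient lists."""
    c = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            c[i + j] += ai * bj
    return c

# (1 + 2z + z^2) * (1 + z) = 1 + 3z + 3z^2 + z^3, i.e., (1 + z)^2 * (1 + z) = (1 + z)^3.
print(convolve([1, 2, 1], [1, 1]))   # [1, 3, 3, 1]
```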
Example:
\sum_{k=0}^{r} \binom{n}{k} \binom{m}{r-k} = \binom{m+n}{r}
Proof:
a_r : \binom{n}{r},   b_r : \binom{m}{r}
a_r * b_r = \binom{n}{0}\binom{m}{r} + \binom{n}{1}\binom{m}{r-1} + ... + \binom{n}{r}\binom{m}{0}
a_r : A(z) = \sum_{r \ge 0} \binom{n}{r} z^r = (1 + z)^n
b_r : B(z) = \sum_{r \ge 0} \binom{m}{r} z^r = (1 + z)^m
a_r * b_r : A(z) \cdot B(z) = (1 + z)^n (1 + z)^m = (1 + z)^{m+n} = \sum_{r \ge 0} \binom{m+n}{r} z^r
          = \sum_{r \ge 0} \left( \sum_{k=0}^{r} \binom{n}{k} \binom{m}{r-k} \right) z^r
Comparing the coefficients of z^r gives \sum_{k=0}^{r} \binom{n}{k} \binom{m}{r-k} = \binom{m+n}{r}.
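The identity is easy to verify exhaustively for small parameters (a minimal sketch using Python's math.comb):

```python
import math
from itertools import product

# Check sum_{k=0}^{r} C(n, k) * C(m, r - k) == C(m + n, r) for small n, m, r.
for n, m in product(range(8), repeat=2):
    for r in range(n + m + 1):
        lhs = sum(math.comb(n, k) * math.comb(m, r - k) for k in range(r + 1))
        assert lhs == math.comb(m + n, r)
```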
III. Expectation and Variance
X is a discrete random variable that can assume the values 0, 1, 2, ...
Let P_n = \Pr[X = n] be the distribution of X, and let
P(z) = P_0 + P_1 z + P_2 z^2 + ... + P_n z^n + ...
Then
E(X) = \sum_{i \ge 0} i P_i = P'(1)
E(X^2) = \sum_{i \ge 0} i^2 P_i = P''(1) + P'(1)
Var(X) = P''(1) + P'(1) - [P'(1)]^2
Proof:
P'(z) = P_1 + 2 P_2 z + ... + n P_n z^{n-1} + ...
P'(1) = P_1 + 2 P_2 + ... + n P_n + ... = E(X)
P''(z) = \sum_{n \ge 2} n(n-1) P_n z^{n-2}
       = \sum_{n \ge 2} (n^2 P_n - n P_n) z^{n-2}
       = \sum_{n \ge 2} n^2 P_n z^{n-2} - \sum_{n \ge 2} n P_n z^{n-2}
       = \sum_{k \ge 0} (k+2)^2 P_{k+2} z^k - \sum_{k \ge 0} (k+2) P_{k+2} z^k
P''(1) = (P_1 + 4 P_2 + 9 P_3 + ... + n^2 P_n + ...) - (P_1 + 2 P_2 + 3 P_3 + ... + n P_n + ...)
\Rightarrow E(X^2) = P''(1) + P'(1)
Var(X) = E(X^2) - [E(X)]^2 = P''(1) + P'(1) - [P'(1)]^2
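A minimal sketch checking the three formulas on a small made-up distribution, with P'(1) and P''(1) approximated by finite differences of P(z) at z = 1:

```python
# Illustrative distribution: P[n] = Pr[X = n] for n = 0..3.
P = [0.1, 0.4, 0.3, 0.2]

def pgf(z):
    """P(z) = P_0 + P_1 z + P_2 z^2 + ..."""
    return sum(p * z ** n for n, p in enumerate(P))

h = 1e-4
P1 = (pgf(1 + h) - pgf(1 - h)) / (2 * h)              # P'(1)
P2 = (pgf(1 + h) - 2 * pgf(1) + pgf(1 - h)) / h ** 2  # P''(1)

E_X  = sum(n * p for n, p in enumerate(P))
E_X2 = sum(n * n * p for n, p in enumerate(P))

assert abs(E_X - P1) < 1e-5                            # E(X) = P'(1)
assert abs(E_X2 - (P2 + P1)) < 1e-5                    # E(X^2) = P''(1) + P'(1)
assert abs((E_X2 - E_X ** 2) - (P2 + P1 - P1 ** 2)) < 1e-5
```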
Examples:
1) Bernoulli Trials
X takes the value 0 with probability q = 1 - p and the value 1 with probability p.
P_n : q, p, 0, 0, 0, ...
P(z) = q + p z
E(X) = P'(1) = p
Var(X) = P''(1) + P'(1) - [P'(1)]^2 = p - p^2 = pq
2) Poisson Distributions
\Pr[X = n] = \frac{e^{-\lambda} \lambda^n}{n!}
P(z) = \sum_{n \ge 0} \frac{e^{-\lambda} \lambda^n}{n!} z^n = e^{-\lambda} e^{\lambda z} = e^{\lambda (z - 1)}
E(X) = P'(1) = \lambda e^{\lambda (z - 1)} \big|_{z=1} = \lambda
Var(X) = P''(1) + P'(1) - [P'(1)]^2 = \lambda^2 e^{\lambda (z - 1)} \big|_{z=1} + \lambda - \lambda^2 = \lambda
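A minimal sketch confirming the Poisson result numerically; \lambda is an arbitrary test value and the infinite series is truncated where the remaining mass is negligible:

```python
import math

lam, N = 2.5, 60                                # test value of lambda; truncation point
P = [math.exp(-lam) * lam ** n / math.factorial(n) for n in range(N)]

E_X  = sum(n * p for n, p in enumerate(P))
E_X2 = sum(n * n * p for n, p in enumerate(P))

assert abs(E_X - lam) < 1e-9                    # E(X) = lambda
assert abs((E_X2 - E_X ** 2) - lam) < 1e-9      # Var(X) = lambda
```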