# Introduction to Probability – 2

## Uniform Probability Distribution

If Ω is finite and each outcome is equally likely, then
P(A) = |A| / |Ω|
where |A| denotes the number of elements in A. This is called the uniform probability distribution.

In general, the probability distribution for tossing a coin or rolling a die is taken to be uniform. An example of the uniform probability distribution is covered in detail in the previous post.
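As a minimal sketch of the |A| / |Ω| rule (the helper name `uniform_prob` is our own, not from the post), here is the fair-die case in Python:

```python
from fractions import Fraction

def uniform_prob(event, sample_space):
    # P(A) = |A| / |Omega| when every outcome is equally likely
    return Fraction(len(set(event) & set(sample_space)), len(sample_space))

die = {1, 2, 3, 4, 5, 6}
print(uniform_prob({2, 4, 6}, die))  # 1/2
```

Using `Fraction` keeps the probabilities exact instead of introducing floating-point rounding.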

## Independent events

Two events A and B are independent if
P(AB) = P(A) x P(B)

Two events are independent if the occurrence of one doesn't change the probability of occurrence of the other. This might seem intuitive, but independence is less obvious than it looks.

Example:
In rolling a die, let us take two events, A = {2, 4, 6} and B = {1, 2, 3, 4}, so that
A ∩ B = {2, 4}.
Can you tell whether the events are independent just by looking at the sample outcomes in each event? Probably not: independence is not as easy to spot as disjointness.

P(A) = 1/2, P(B) = 4/6 = 2/3, and
P(A ∩ B) = P({2, 4}) = 2/6 = 1/3.
Checking the product: P(A) * P(B) = 1/2 * 2/3 = 1/3 = P(A ∩ B).

Since P(A ∩ B) = P(A) * P(B), A and B are independent events.

This shows that we can't tell by inspection whether two events are independent; we have to verify the mathematical relationship. Note that independence is sometimes assumed rather than derived: in tossing a coin twice we assume the tosses are independent, which reflects the fact that the coin has no memory of the first toss.
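The verification above can be automated. This sketch (the function name `prob` is ours) compares P(A ∩ B) with P(A) * P(B) using exact fractions:

```python
from fractions import Fraction

def prob(event, omega):
    # uniform probability: favourable outcomes / total outcomes
    return Fraction(len(event & omega), len(omega))

omega = {1, 2, 3, 4, 5, 6}
A = {2, 4, 6}
B = {1, 2, 3, 4}

# A and B are independent iff P(A ∩ B) == P(A) * P(B)
is_independent = prob(A & B, omega) == prob(A, omega) * prob(B, omega)
print(is_independent)  # True
```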

Example:
Toss a fair coin 10 times. Let A = "at least one head", and let Tj be the event that a tail occurs on the jth toss. Then
P(A) = 1 − P(A^c)
= 1 − P(all tails)
= 1 − P(T1 ∩ T2 ∩ . . . ∩ T10)
= 1 − P(T1)P(T2) . . . P(T10)    (using independence)
= 1 − (1/2)^10 ≈ 0.999
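The final number can be checked directly:

```python
# P(at least one head in 10 fair tosses) = 1 - (1/2)**10
p_all_tails = (1 / 2) ** 10
p_at_least_one_head = 1 - p_all_tails
print(p_at_least_one_head)  # 0.9990234375
```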

Don't confuse independent events with disjoint events. Suppose A and B are disjoint events, each with positive probability. Can they be independent? No. Since both probabilities are positive, P(A)P(B) > 0; but disjointness means A ∩ B = ∅, so P(A ∩ B) = P(∅) = 0. Hence P(A ∩ B) ≠ P(A)P(B), and the events are not independent. Confusing, isn't it? Go through the definitions again to build intuition.
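A concrete check of the disjoint-versus-independent distinction, with die events of our own choosing:

```python
from fractions import Fraction

omega = {1, 2, 3, 4, 5, 6}
A = {1, 2}   # P(A) = 1/3
B = {5, 6}   # P(B) = 1/3, disjoint from A

p = lambda e: Fraction(len(e & omega), len(omega))
print(p(A & B))      # 0, since A ∩ B is empty
print(p(A) * p(B))   # 1/9, so P(A ∩ B) != P(A)P(B): not independent
```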

## Conditional Probability

If P(B) > 0, then the conditional probability of A given B is
P(A|B) = P(A ∩ B) / P(B).
Think of P(A|B) as the fraction of times A occurs among those in which B occurs. Another interpretation of independence is that knowing B doesn't change the probability of A.

Rearranging gives the commonly used forms of the formula: P(A ∩ B) = P(A|B)P(B) = P(B|A)P(A).

Example:
Say we have 2 blue and 3 red marbles in a bag.
Event A: Get a blue marble in the first draw
Event B: Get a blue marble in the second draw.
So we have to calculate P(B|A), i.e. the probability of getting a blue marble in the second draw given that the first draw was blue.
P(A) = 2/5 (2 out of 5 marbles are blue initially)
Once one blue marble is removed, 1 blue and 3 red marbles remain.
Therefore, P(B|A) = 1/4.
Now calculate P(A ∩ B), i.e. getting blue in both draws, as P(B|A) x P(A) (from the formula above): 1/4 x 2/5 = 1/10.
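The multiplication rule can be confirmed by brute-force enumeration of all ordered two-marble draws (the setup mirrors the example above; the variable names are ours):

```python
from fractions import Fraction
from itertools import permutations

marbles = ["blue", "blue", "red", "red", "red"]
# all ordered draws of two distinct marbles, each equally likely
draws = list(permutations(range(len(marbles)), 2))
both_blue = sum(1 for i, j in draws
                if marbles[i] == "blue" and marbles[j] == "blue")
print(Fraction(both_blue, len(draws)))  # 1/10
```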

If A and B are independent events then P(A|B) = P(A) (Try deriving this formula from the definitions).
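For the die example from earlier (A = {2, 4, 6}, B = {1, 2, 3, 4}), this identity can be checked numerically; a quick sketch:

```python
from fractions import Fraction

omega = {1, 2, 3, 4, 5, 6}
A = {2, 4, 6}
B = {1, 2, 3, 4}

# P(A|B) = P(A ∩ B) / P(B) = |A ∩ B| / |B| under the uniform distribution
p_A_given_B = Fraction(len(A & B), len(B))
p_A = Fraction(len(A), len(omega))
print(p_A_given_B == p_A)  # True: knowing B doesn't change P(A)
```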