Chapter 16
Probability, Statistics, and Stochastic Processes
This chapter briefly introduces the main contents of probability theory: random events and their probabilities, random variables and distribution functions, numerical characteristics of random variables, probability generating functions, moment generating functions and characteristic functions, the law of large numbers and the central limit theorem, and so on. Beyond these basic concepts, the normal distribution table and the use of probability paper are also introduced. The chapter then focuses on the commonly used methods of mathematical statistics, including samples and their frequency distributions, interval estimation of population parameters, statistical testing, analysis of variance, regression analysis, orthogonal experimental design, sampling inspection, and quality assessment (process control). Finally, the basic content of the theory of stochastic processes is briefly described, with emphasis on the commonly used Markov processes and stationary stochastic processes.
§ 1 Probability Theory
1. Events and Probability
1. Random events and their operational relationships
[ Random event · inevitable event · impossible event ] Under a given set of conditions, an experimental result that may or may not occur is called a random event, or an event for short, denoted by $A, B, C, \ldots$ There are two special cases of random events: the inevitable event (an event that must occur in every trial under the given conditions) and the impossible event (an event that cannot occur in any trial under the given conditions), denoted by $\Omega$ and $\Phi$, respectively.
[ Operational relationship of events ]
1° Containment  If whenever event B occurs, event A must also occur, then A is said to contain B (or B is contained in A), denoted $A \supset B$ or $B \subset A$.
2° Equivalence  If $A \supset B$ and $A \subset B$, that is, events A and B always occur together or fail to occur together, then A and B are said to be equivalent, denoted $A = B$.
3° Product  The event that A and B both occur is called the product of A and B, denoted $A \cap B$ (or $AB$).
4° Sum  The event that at least one of A and B occurs is called the sum of A and B, denoted $A \cup B$ (or $A + B$).
5° Difference  The event that A occurs and B does not is called the difference of A and B, denoted $A \setminus B$ (or $A\bar{B}$).
6° Mutual exclusion  If events A and B cannot occur at the same time, that is, $AB = \Phi$, then A and B are said to be mutually exclusive (or incompatible).
7° Opposition  If events A and B are mutually exclusive and one of A or B occurs in each trial, that is, $A \cap B = \Phi$ and $A \cup B = \Omega$, then B is called the opposite event of A, denoted $B = \bar{A}$.
8° Completeness  If at least one of the events $A_1, A_2, \cdots, A_n$ occurs in each trial, that is, $A_1 \cup A_2 \cup \cdots \cup A_n = \Omega$, then $\{A_1, A_2, \cdots, A_n\}$ is said to constitute a complete set of events. In particular, when $A_1, A_2, \cdots, A_n$ are also pairwise mutually exclusive, that is, $A_i A_j = \Phi$ ($i \neq j$; $i, j = 1, 2, \cdots, n$), $\{A_1, A_2, \cdots, A_n\}$ is called a complete set of mutually exclusive events.
2. Several definitions of probability
[ Frequency and Probability ] Whether or not a random event occurs in a single trial is an accidental phenomenon that cannot be determined in advance, but when the trial is repeated many times, a statistical regularity in its occurrence can be observed. Specifically, if n repeated trials are carried out under the same conditions and event A occurs $\nu$ times, then the frequency $\nu / n$ of event A in the n trials becomes stable as n increases without bound. This statistical regularity shows that the likelihood of event A occurring is an objective attribute inherent in the event itself, not subject to anyone's subjective will. The number around which this frequency stabilizes is called the probability of event A, denoted $P(A)$. When the number of trials n is large enough, the frequency of the event can be used as an approximation to its probability, that is,
$$P(A) \approx \frac{\nu}{n}$$
[ Classical definition of probability ] Suppose a random experiment (an experiment whose outcome cannot be predicted exactly in advance and which can be repeated under the same conditions) has only a finite number of different basic events $\omega_1, \omega_2, \ldots, \omega_n$ (a basic event is itself an event; a general event is always composed of several basic events), each basic event being equally likely*. The collection of all basic events is denoted $\Omega$ and is called the basic event space. If event A consists of $k$ ($k \leq n$) different basic events, then the probability $P(A)$ of A is defined as
$$P(A) = \frac{k}{n}$$
The probability of the inevitable event is then $P(\Omega) = 1$, and the probability of the impossible event is
$$P(\Phi) = 0$$
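The classical definition $P(A) = k/n$ can be illustrated by counting equally likely basic events. A minimal sketch (the die example is an assumption for illustration, not from the text):

```python
from fractions import Fraction

# Classical probability P(A) = k/n on a hypothetical example:
# one roll of a fair die, with A = "the outcome is even".
omega = {1, 2, 3, 4, 5, 6}            # basic event space, n = 6 equally likely outcomes
A = {w for w in omega if w % 2 == 0}  # A consists of k = 3 basic events
p_A = Fraction(len(A), len(omega))    # P(A) = k/n
print(p_A)  # 1/2
```

Exact rational arithmetic (`Fraction`) avoids floating-point noise in such counts.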
[ Axiomatic Definition of Probability ]
Definition 1  Let $\Omega$ be the basic event space and F a collection of subsets of $\Omega$. If F satisfies the following conditions:
(i) $\Omega \in F$;
(ii) if $A \in F$, then $\bar{A} \in F$;
(iii) for any $A_n \in F$ ($n = 1, 2, \ldots$), we have
$$\bigcup_{n=1}^{\infty} A_n \in F$$
then F is said to be a σ-algebra (event field) on $\Omega$.
Definition 2  Let $P(A)$ be a real-valued set function on a σ-algebra F. If it satisfies the conditions:
(i) for any $A \in F$, $0 \leq P(A) \leq 1$;
(ii) $P(\Omega) = 1$;
(iii) for any $A_n \in F$ ($n = 1, 2, \ldots$) with $A_i A_j = \Phi$ ($i \neq j$), we have
$$P\left(\bigcup_{n=1}^{\infty} A_n\right) = \sum_{n=1}^{\infty} P(A_n)$$
then $P(A)$ is called a probability measure on F, or probability for short. In this setting, $\omega$ is called a basic event, each $A$ ($\in F$) is called an event, F is the collection of all events, $P(A)$ is called the probability of the event A, and $\langle \Omega, F, P \rangle$ is called a probability space.
3. Basic Properties of Probability
1° $0 \leq P(A) \leq 1$
2° $P(\text{inevitable event}) = P(\Omega) = 1$
3° $P(\text{impossible event}) = P(\Phi) = 0$
4° $P(A \cup B) = P(A) + P(B) - P(A \cap B)$
If A and B are mutually exclusive, then $P(A \cup B) = P(A) + P(B)$.
If $A_1, A_2, \ldots, A_n$ are pairwise mutually exclusive, then
$$P\left(\bigcup_{i=1}^{n} A_i\right) = P(A_1) + P(A_2) + \cdots + P(A_n)$$
5° If $A \subset B$, then $P(A) \leq P(B)$
6° If $A \supset B$, then $P(A) - P(B) = P(A \setminus B)$
7° For any event A, $P(\bar{A}) = 1 - P(A)$
8° If $A_1, A_2, \ldots, A_n$ is a complete set of pairwise mutually exclusive events, then
$$P\left(\bigcup_{i=1}^{n} A_i\right) = P(A_1) + P(A_2) + \cdots + P(A_n) = 1$$
9° Let $A_n \in F$, $A_n \subset A_{n+1}$, $n = 1, 2, \ldots$, and let $A = \bigcup_{n=1}^{\infty} A_n$; then
$$P(A) = \lim_{n \to \infty} P(A_n) \qquad \text{(continuity theorem)}$$
4. Formulas for Calculating Probabilities
[ Conditional probability and multiplication formula ] The probability of event A occurring given that event B has occurred is called the conditional probability of A under the condition that B has occurred, denoted $P(A \mid B)$. When $P(B) > 0$, it is defined by
$$P(A \mid B) = \frac{P(AB)}{P(B)}$$
When $P(B) = 0$, one sets $P(A \mid B) = 0$. This leads to the multiplication formula:
$$P(AB) = P(B)\,P(A \mid B) = P(A)\,P(B \mid A)$$
$$P(A_1 A_2 \cdots A_n) = P(A_1)\,P(A_2 \mid A_1)\,P(A_3 \mid A_1 A_2) \cdots P(A_n \mid A_1 A_2 \cdots A_{n-1}) \qquad (P(A_1 A_2 \cdots A_{n-1}) > 0)$$
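The multiplication formula can be checked on a small concrete case. A sketch (the card-drawing example is an assumption for illustration):

```python
from fractions import Fraction

# Multiplication formula P(A1 A2) = P(A1) P(A2 | A1), on a hypothetical
# example: drawing two aces in a row, without replacement, from a
# standard 52-card deck.
p_a1 = Fraction(4, 52)           # P(first card is an ace)
p_a2_given_a1 = Fraction(3, 51)  # P(second is an ace | first was an ace)
p_both = p_a1 * p_a2_given_a1
print(p_both)  # 1/221
```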
[ Independence formula ] If events A and B satisfy $P(A \mid B) = P(A)$, then event A is said to be independent of event B. Independence is mutual: if A is independent of B, then B is necessarily independent of A, and we say that A and B are mutually independent.
A necessary and sufficient condition for A and B to be mutually independent is
$$P(AB) = P(A)\,P(B)$$
If every choice of $m$ ($2 \leq m \leq n$) events from $A_1, A_2, \ldots, A_n$ satisfies the relation
$$P(A_{i_1} A_{i_2} \cdots A_{i_m}) = P(A_{i_1})\,P(A_{i_2}) \cdots P(A_{i_m}) \qquad (1 \leq i_1 < i_2 < \cdots < i_m \leq n)$$
then $A_1, A_2, \cdots, A_n$ are said to be independent as a whole, abbreviated as mutually independent.
[ Full probability formula ] If the event group $B_1, B_2, \ldots$ satisfies
$$B_i B_j = \Phi \quad (i \neq j)$$
$$P\left(\bigcup_{i=1}^{\infty} B_i\right) = 1, \qquad P(B_i) > 0 \quad (i = 1, 2, \ldots)$$
then for any event A we have
$$P(A) = \sum_{i=1}^{\infty} P(B_i)\,P(A \mid B_i)$$
If there are only n events $B_i$, the formula still holds, with only n terms in the sum on the right.
[ Bayesian formula ] If the event group $B_1, B_2, \ldots$ satisfies
$$B_i B_j = \Phi \quad (i \neq j)$$
$$P\left(\bigcup_{i=1}^{\infty} B_i\right) = 1, \qquad P(B_i) > 0 \quad (i = 1, 2, \ldots)$$
then for any event A with $P(A) > 0$ we have
$$P(B_i \mid A) = \frac{P(B_i)\,P(A \mid B_i)}{\sum\limits_{j=1}^{\infty} P(B_j)\,P(A \mid B_j)}$$
If there are only n events $B_i$, the formula still holds, with only n terms in the denominator on the right.
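The full probability and Bayesian formulas together can be sketched numerically. The factory scenario below is a hypothetical example, not from the text:

```python
from fractions import Fraction

# Hypothetical example: three machines B1, B2, B3 produce 50%, 30%, 20%
# of output with defect rates 1%, 2%, 3%; A = "a random item is defective".
p_B = [Fraction(50, 100), Fraction(30, 100), Fraction(20, 100)]
p_A_given_B = [Fraction(1, 100), Fraction(2, 100), Fraction(3, 100)]

# Full probability formula: P(A) = sum_i P(B_i) P(A | B_i)
p_A = sum(pb * pa for pb, pa in zip(p_B, p_A_given_B))

# Bayesian formula: P(B_i | A) = P(B_i) P(A | B_i) / P(A)
p_B_given_A = [pb * pa / p_A for pb, pa in zip(p_B, p_A_given_B)]
print(p_A, p_B_given_A)  # 17/1000 and [5/17, 6/17, 6/17]
```

Note that the posterior probabilities $P(B_i \mid A)$ necessarily sum to 1.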
[ Bernoulli's formula ] If the probability that an event A occurs in one trial is p, then the probability $p_{n,k}$ that A occurs exactly k times in n repeated independent trials is
$$p_{n,k} = \binom{n}{k} p^k (1-p)^{n-k} \qquad (k = 0, 1, \ldots, n)$$
where $\binom{n}{k}$ is the binomial coefficient.
When both n and k are large, there is the approximate formula
$$p_{n,k} \approx \frac{1}{\sqrt{npq}}\,\varphi(x_k)$$
where $q = 1 - p$, $x_k = \dfrac{k - np}{\sqrt{npq}}$, and $\varphi(x) = \dfrac{1}{\sqrt{2\pi}}\,e^{-x^2/2}$.
[ Poisson formula ] When n is sufficiently large and p is small, there is the approximate formula
$$p_{n,k} \approx \frac{\lambda^k}{k!}\,e^{-\lambda}$$
where $\lambda = np$.
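The quality of the Poisson approximation for large n and small p can be checked directly against Bernoulli's formula (the parameter values below are arbitrary choices for illustration):

```python
from math import comb, exp, factorial

# Bernoulli's formula p_{n,k} = C(n,k) p^k (1-p)^(n-k) versus the Poisson
# approximation lambda^k e^{-lambda} / k! with lambda = n p.
n, p, k = 1000, 0.002, 3
lam = n * p  # = 2

exact = comb(n, k) * p**k * (1 - p)**(n - k)
poisson = lam**k * exp(-lam) / factorial(k)
print(exact, poisson)  # both close to 0.180
```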
2. Random Variables and Distribution Functions
[ Random variable and its probability distribution function ] The result of each trial can be represented by the value of a variable. The value of this variable varies with random factors but follows a definite probability distribution law; such a variable is called a random variable, denoted by $\xi, \eta, \ldots$ It is a numerical description of a random phenomenon.
Given a random variable $\xi$, the probability $P(\xi \leq x)$ of the event that its value does not exceed a real number x is a function of x, called the probability distribution function of $\xi$, or the distribution function for short, denoted $F(x)$; that is,
$$F(x) = P(\xi \leq x) \qquad (-\infty < x < \infty)$$
[ Basic properties of distribution functions ]
1° $F(-\infty) = \lim\limits_{x \to -\infty} F(x) = 0$, $\quad F(+\infty) = \lim\limits_{x \to +\infty} F(x) = 1$
2° If $x_1 < x_2$, then $F(x_1) \leq F(x_2)$  (monotonicity)
3° $F(x + 0) = F(x)$  (right continuity)
4° $P(a < \xi \leq b) = F(b) - F(a)$
5° $P(\xi = a) = F(a) - F(a - 0)$
[ Discrete distribution and the probability distribution sequence ] If a random variable $\xi$ can take only a finite or countable number of values $x_1, x_2, \ldots, x_n, \ldots$, it is called a discrete random variable. If $P(\xi = x_k) = p_k$ ($k = 1, 2, \ldots$), the probability distribution of the values of $\xi$ is completely determined by $\{p_k\}$, which is called the probability distribution sequence of $\xi$. $\{p_k\}$ has the following properties:
1° $p_k \geq 0$
2° $\sum\limits_{k} p_k = 1$
3° For any measurable set D on the real axis, $P(\xi \in D) = \sum\limits_{x_k \in D} p_k$
4° The distribution function
$$F(x) = \sum_{x_k \leq x} p_k$$
is a step function with jumps at the points $x_k$.
[ Continuous distribution and the distribution density function ] If the distribution function $F(x)$ of a random variable $\xi$ can be expressed as
$$F(x) = \int_{-\infty}^{x} p(t)\,dt \qquad (p(x) \text{ nonnegative})$$
then $\xi$ is called a continuous random variable, and $p(x)$ is called its distribution density function (or distribution density). The distribution density function has the following properties:
1° $p(x) = F'(x)$ at every continuity point of $p(x)$
2° $\int_{-\infty}^{\infty} p(x)\,dx = 1$
3° If $p(x)$ is the distribution density of a continuous random variable $\xi$, then for any measurable set D on the real axis we have
$$P(\xi \in D) = \int_{D} p(x)\,dx$$
[ Distribution of a function of a random variable ] Suppose the random variable $\eta$ is a function of the random variable $\xi$:
$$\eta = f(\xi)$$
Let the distribution function of $\xi$ be $F(x)$; then the distribution function $G(x)$ of $\eta$ is
$$G(x) = P(\eta \leq x) = P(f(\xi) \leq x)$$
In particular, when $\xi$ is a discrete random variable with possible values $x_1, x_2, \cdots$ and $P(\xi = x_k) = p_k$, then
$$G(x) = \sum_{f(x_k) \leq x} p_k$$
When $\xi$ is a continuous random variable with distribution density $p(t)$, then
$$G(x) = \int_{f(t) \leq x} p(t)\,dt$$
[ Joint distribution function and marginal distribution functions of a random vector ] If $\xi_1, \xi_2, \cdots, \xi_n$ are n random variables associated with the same set of conditions, then $(\xi_1, \xi_2, \cdots, \xi_n)$ is called an n-dimensional random variable or random vector.
If $(x_1, x_2, \cdots, x_n)$ is a point of the n-dimensional real space $R^n$, then the probability of the event "$\xi_1 \leq x_1, \xi_2 \leq x_2, \cdots, \xi_n \leq x_n$",
$$F(x_1, x_2, \cdots, x_n) = P(\xi_1 \leq x_1,\ \xi_2 \leq x_2,\ \cdots,\ \xi_n \leq x_n)$$
regarded as a function of $x_1, x_2, \cdots, x_n$, is called the joint distribution function of the random vector $(\xi_1, \xi_2, \cdots, \xi_n)$.
Suppose $(\xi_{i_1}, \xi_{i_2}, \cdots, \xi_{i_m})$ is an m-dimensional random variable composed of m ($m < n$) components taken arbitrarily from $(\xi_1, \xi_2, \cdots, \xi_n)$; then the joint distribution function of $(\xi_{i_1}, \cdots, \xi_{i_m})$ is called an m-dimensional marginal distribution function of $(\xi_1, \cdots, \xi_n)$.
If the distribution functions of $(\xi_1, \cdots, \xi_n)$ and $(\xi_{i_1}, \cdots, \xi_{i_m})$ are denoted $F(x_1, x_2, \cdots, x_n)$ and $F_{i_1 \cdots i_m}(x_{i_1}, \cdots, x_{i_m})$ respectively, then
$$F_{i_1 \cdots i_m}(x_{i_1}, \cdots, x_{i_m}) = F(\cdots, x_{i_1}, \cdots, x_{i_m}, \cdots)$$
where on the right-hand side the arguments in the positions $i_1, \cdots, i_m$ are $x_{i_1}, \cdots, x_{i_m}$, and all remaining arguments are set to $+\infty$.
[ Conditional distribution function and independence ] Suppose $\xi$ is a random variable and event B satisfies $P(B) > 0$; then
$$F(x \mid B) = P(\xi \leq x \mid B)$$
is called the conditional distribution function of $\xi$ under the condition that event B has occurred.
1° Let $(\xi, \eta)$ be a two-dimensional discrete random variable, the possible values of $\xi$ and $\eta$ being $x_i$ ($i = 1, 2, \cdots$) and $y_k$ ($k = 1, 2, \cdots$). The joint distribution of $(\xi, \eta)$ is
$$P(\xi = x_i,\ \eta = y_k) = p_{ik}$$
The two one-dimensional marginal distributions are
$$P(\xi = x_i) = p_{i\cdot} = \sum_k p_{ik} \qquad (i = 1, 2, \cdots)$$
$$P(\eta = y_k) = p_{\cdot k} = \sum_i p_{ik} \qquad (k = 1, 2, \cdots)$$
We call
$$P(\eta = y_k \mid \xi = x_i) = \frac{p_{ik}}{p_{i\cdot}} \qquad (p_{i\cdot} > 0,\ k = 1, 2, \ldots)$$
the conditional distribution of the discrete random variable $\eta$ under the condition $\xi = x_i$. Similarly,
$$P(\xi = x_i \mid \eta = y_k) = \frac{p_{ik}}{p_{\cdot k}} \qquad (p_{\cdot k} > 0,\ i = 1, 2, \ldots)$$
is the conditional distribution of the discrete random variable $\xi$ under the condition $\eta = y_k$.
2° Let $(\xi, \eta)$ be a two-dimensional continuous random variable with joint distribution density $f(x, y)$. If the marginal density $p_\eta(y) = \int_{-\infty}^{\infty} f(x, y)\,dx > 0$ at the point y, then
$$F(x \mid \eta = y) = \frac{\int_{-\infty}^{x} f(t, y)\,dt}{p_\eta(y)}$$
is called the conditional distribution function of $\xi$ under the condition $\eta = y$; if the marginal density $p_\xi(x) = \int_{-\infty}^{\infty} f(x, y)\,dy > 0$ at the point x, then
$$F(y \mid \xi = x) = \frac{\int_{-\infty}^{y} f(x, t)\,dt}{p_\xi(x)}$$
is called the conditional distribution function of $\eta$ under the condition $\xi = x$.
3° If the joint distribution function of $(\xi_1, \cdots, \xi_n)$ is equal to the product of its one-dimensional marginal distribution functions, i.e.
$$F(x_1, x_2, \cdots, x_n) = F_1(x_1)\,F_2(x_2) \cdots F_n(x_n)$$
(equivalently, $P(\xi_1 \leq x_1, \cdots, \xi_n \leq x_n) = \prod_{i=1}^{n} P(\xi_i \leq x_i)$), then $\xi_1, \cdots, \xi_n$ are said to be mutually independent.
3. Numerical Characteristics of Random Variables
[ Mathematical expectation (mean) and variance ] The mathematical expectation (or mean) of a random variable $\xi$ is denoted $E\xi$ (or $M\xi$); it describes the center of the values of the random variable. The mathematical expectation of $(\xi - E\xi)^2$ is called the variance of $\xi$, denoted $D\xi$ (or $\operatorname{Var}\xi$), and the square root $\sigma = \sqrt{D\xi}$ of the variance is called the mean square deviation (or standard deviation). They describe how densely the possible values of the random variable cluster about the mean.
1° If a continuous random variable $\xi$ has distribution density $p(x)$ and distribution function $F(x)$, then (when the integral converges absolutely)
$$E\xi = \int_{-\infty}^{\infty} x\,p(x)\,dx = \int_{-\infty}^{\infty} x\,dF(x)$$
$$D\xi = \int_{-\infty}^{\infty} (x - E\xi)^2\,p(x)\,dx$$
2° If $\xi$ is a discrete random variable with possible values $x_k$, $k = 1, 2, \ldots$, and $P(\xi = x_k) = p_k$, then (when the series converges absolutely)
$$E\xi = \sum_{k} x_k\,p_k$$
$$D\xi = \sum_{k} (x_k - E\xi)^2\,p_k$$
[ Several formulas for the mean and variance ]
1° $D\xi = E\xi^2 - (E\xi)^2$
2° $Ea = a$, $Da = 0$  (a a constant)
3° $E(c\xi) = c\,E\xi$, $D(c\xi) = c^2\,D\xi$  (c a constant)
4° $E(\xi + \eta) = E\xi + E\eta$
5° For any n random variables $\xi_1, \xi_2, \ldots, \xi_n$,
$$E(\xi_1 + \xi_2 + \cdots + \xi_n) = E\xi_1 + E\xi_2 + \cdots + E\xi_n$$
$$D(\xi_1 + \xi_2 + \cdots + \xi_n) = \sum_{k=1}^{n} D\xi_k + 2\sum_{i<j} E\,[(\xi_i - E\xi_i)(\xi_j - E\xi_j)]$$
6° If $\xi_1, \xi_2, \ldots, \xi_n$ are n mutually independent random variables, then
$$E(\xi_1 \xi_2 \cdots \xi_n) = (E\xi_1)(E\xi_2) \cdots (E\xi_n)$$
$$D(\xi_1 + \xi_2 + \cdots + \xi_n) = D\xi_1 + D\xi_2 + \cdots + D\xi_n$$
7° If $\xi_1, \xi_2, \ldots, \xi_n$ are mutually independent random variables with $E\xi_k = a$, $D\xi_k = \sigma^2$ ($k = 1, 2, \ldots, n$), then the mean and variance of the random variable $\bar{\xi} = \dfrac{1}{n}\sum_{k=1}^{n}\xi_k$ are
$$E\bar{\xi} = a, \qquad D\bar{\xi} = \frac{\sigma^2}{n}$$
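The identity $D\xi = E\xi^2 - (E\xi)^2$ can be verified numerically for a small discrete distribution (the values below are an arbitrary illustration):

```python
# Check D xi = E xi^2 - (E xi)^2 against the direct definition
# D xi = sum_k (x_k - E xi)^2 p_k for a small discrete distribution.
xs = [0, 1, 2]
ps = [0.25, 0.5, 0.25]

mean = sum(x * p for x, p in zip(xs, ps))                      # E xi
second_moment = sum(x**2 * p for x, p in zip(xs, ps))          # E xi^2
var_moment = second_moment - mean**2                           # via 1°
var_direct = sum((x - mean)**2 * p for x, p in zip(xs, ps))    # via definition
print(mean, var_moment, var_direct)  # 1.0 0.5 0.5
```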
[ Chebyshev's inequality ] For any given positive number $\varepsilon$, we have
$$P(|\xi - E\xi| \geq \varepsilon) \leq \frac{D\xi}{\varepsilon^2}$$
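A simulation sketch of Chebyshev's inequality, using a uniform(0,1) sample (which has $E\xi = 1/2$, $D\xi = 1/12$; the sample size, seed, and $\varepsilon$ are arbitrary choices):

```python
import random

# Empirically estimate P(|xi - E xi| >= eps) for xi ~ uniform(0,1)
# and compare with the Chebyshev bound D xi / eps^2.
random.seed(0)
mean, var, eps = 0.5, 1.0 / 12.0, 0.4
n = 100_000
hits = sum(abs(random.random() - mean) >= eps for _ in range(n))
freq = hits / n       # true value here is P(U <= 0.1) + P(U >= 0.9) = 0.2
bound = var / eps**2  # Chebyshev bound = (1/12)/0.16, about 0.52
print(freq, bound)
```

As is typical, the bound holds but is far from tight: the true probability is 0.2 against a bound of about 0.52.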
[ Conditional mathematical expectation and the full mathematical expectation formula ] Let $F(x \mid B)$ be the conditional distribution function of a random variable $\xi$ with respect to event B; then
$$E(\xi \mid B) = \int_{-\infty}^{\infty} x\,dF(x \mid B)$$
is called (when the integral converges absolutely) the conditional mathematical expectation of $\xi$ with respect to event B. If $\xi$ is a continuous random variable with conditional distribution density $p(x \mid B)$, then
$$E(\xi \mid B) = \int_{-\infty}^{\infty} x\,p(x \mid B)\,dx$$
If $\xi$ is a discrete random variable with possible values $x_1, x_2, \ldots$, then
$$E(\xi \mid B) = \sum_{k} x_k\,P(\xi = x_k \mid B)$$
If $B_1, B_2, \ldots, B_n$ is a complete set of mutually exclusive events, then there is the full mathematical expectation formula
$$E\xi = \sum_{i=1}^{n} P(B_i)\,E(\xi \mid B_i)$$
[ Relationship between median, mode and mean ] A number m satisfying
$$P(\xi \geq m) \geq \frac{1}{2}, \qquad P(\xi \leq m) \geq \frac{1}{2}$$
is called a median of the random variable $\xi$. In other words, m satisfies the two inequalities above simultaneously; for a continuous random variable, m satisfies
$$P(\xi \leq m) = P(\xi \geq m) = \frac{1}{2}$$
A value $\xi_{mo}$ at which the distribution density function attains its maximum, that is,
$$p(\xi_{mo}) = \max_{x} p(x)$$
is called a mode of the random variable $\xi$.
For a unimodal symmetric distribution function, $m = \xi_{mo} = E\xi$ (the mean).
For an asymmetric unimodal distribution function, m lies between $\xi_{mo}$ and $E\xi$.
[ Higher-order origin moments and central moments ] For $r > 0$, the mathematical expectations (assuming they exist) of $\xi^r$ and $(\xi - E\xi)^r$ are called, respectively, the r-th order origin moment and the r-th order central moment of the random variable $\xi$, denoted $m_r = E\xi^r$ and $\mu_r = E(\xi - E\xi)^r$. In particular, $m_1$ is the mean and $\mu_2$ is the variance.
1° If a continuous random variable $\xi$ has distribution density $p(x)$, then
$$m_r = \int_{-\infty}^{\infty} x^r\,p(x)\,dx$$
$$\mu_r = \int_{-\infty}^{\infty} (x - E\xi)^r\,p(x)\,dx$$
2° If $\xi$ is a discrete random variable with possible values $x_k$ ($k = 1, 2, \ldots$) and $P(\xi = x_k) = p_k$, then
$$m_r = \sum_{k} x_k^r\,p_k$$
$$\mu_r = \sum_{k} (x_k - E\xi)^r\,p_k$$
3° For $r > 0$, the mathematical expectations of $|\xi|^r$ and $|\xi - E\xi|^r$ (assuming they exist) are called, respectively, the r-th order absolute origin moment and the r-th order absolute central moment of $\xi$, and formulas analogous to those of 1° and 2° hold for them.
4° The origin moments and central moments satisfy the following relations (r a positive integer):
$$\mu_r = \sum_{k=0}^{r} \binom{r}{k} (-1)^k\,m_{r-k}\,m_1^k$$
$$m_r = \sum_{k=0}^{r} \binom{r}{k} \mu_{r-k}\,m_1^k$$
where $\binom{r}{k}$ is the binomial coefficient.
[ Covariance and correlation coefficient ] Assuming that the means and variances of the random variables $\xi$ and $\eta$ both exist, the covariance of $\xi$ and $\eta$, $\operatorname{Cov}(\xi, \eta)$, is
$$\operatorname{Cov}(\xi, \eta) = E\,[(\xi - E\xi)(\eta - E\eta)]$$
and the correlation coefficient of $\xi$ and $\eta$ is
$$\rho_{\xi\eta} = \frac{\operatorname{Cov}(\xi, \eta)}{\sqrt{D\xi}\,\sqrt{D\eta}}$$
4. Probability Generating Functions, Moment Generating Functions, and Characteristic Functions
[ Probability generating function of an integer-valued random variable ] If the random variable $\xi$ takes only nonnegative integer values, the mean of the random variable function $s^\xi$ is called the probability generating function of $\xi$. Writing $P(\xi = k) = p_k$ ($k = 0, 1, 2, \ldots$), the probability generating function is
$$P(s) = E\,s^\xi = \sum_{k=0}^{\infty} p_k\,s^k \qquad (|s| \leq 1)$$
Denoting by $P^{(r)}(s)$ the r-th derivative of $P(s)$, we have
$$P'(1) = E\xi$$
$$P''(1) = E[\xi(\xi - 1)]$$
$$\cdots\cdots$$
$$P^{(r)}(1) = E[\xi(\xi - 1) \cdots (\xi - r + 1)]$$
and in turn
$$E\xi = P'(1), \qquad D\xi = P''(1) + P'(1) - [P'(1)]^2$$
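The relations $E\xi = P'(1)$ and $D\xi = P''(1) + P'(1) - [P'(1)]^2$ can be checked for a binomial variable, whose generating function is $(q + ps)^n$ (the parameter values below are arbitrary):

```python
from math import comb

# Probability generating function of a binomial(n, p) variable,
# P(s) = sum_k p_k s^k.  P'(1) and P''(1) are computed term by term:
# P'(1) = sum_k k p_k,  P''(1) = sum_k k (k-1) p_k.
n, p = 5, 0.3
pk = [comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n + 1)]

P1 = sum(k * c for k, c in enumerate(pk))            # P'(1) = E xi
P2 = sum(k * (k - 1) * c for k, c in enumerate(pk))  # P''(1) = E[xi(xi-1)]
mean = P1                    # should equal n p = 1.5
var = P2 + P1 - P1**2        # should equal n p (1-p) = 1.05
print(mean, var)
```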
[ Moment generating function ] If $\xi$ is a random variable, the mean of the random variable function $e^{t\xi}$,
$$M(t) = E\,e^{t\xi}$$
is called the moment generating function of $\xi$. If $\xi$ has origin moments of every order $m_k = E\xi^k$ ($k = 1, 2, \ldots$), then
$$M(t) = \sum_{k=0}^{\infty} \frac{m_k}{k!}\,t^k$$
$$m_k = M^{(k)}(0) \qquad (k = 1, 2, \ldots)$$
1° If $\xi$ is a discrete random variable with possible values $x_1, x_2, \cdots$, then
$$M(t) = \sum_{k} e^{t x_k}\,p_k$$
2° If a continuous random variable $\xi$ has distribution density $p(x)$, then
$$M(t) = \int_{-\infty}^{\infty} e^{tx}\,p(x)\,dx$$
[ Characteristic function ] If $\xi$ is a random variable, the mean of the complex-valued random variable $e^{it\xi}$,
$$\varphi(t) = E\,e^{it\xi} \qquad (i = \sqrt{-1})$$
is called the characteristic function of $\xi$. If $\xi$ has origin moments of every order $m_k = E\xi^k$ ($k = 1, 2, \ldots$), then
$$\varphi(t) = \sum_{k=0}^{\infty} \frac{m_k}{k!}\,(it)^k$$
$$m_k = i^{-k}\,\varphi^{(k)}(0) \qquad (k = 1, 2, \ldots)$$
1° If $\xi$ is a discrete random variable with possible values $x_1, x_2, \ldots$ and $P(\xi = x_k) = p_k$, then
$$\varphi(t) = \sum_{k} e^{i t x_k}\,p_k$$
2° If a continuous random variable $\xi$ has distribution density $p(x)$, then
$$\varphi(t) = \int_{-\infty}^{\infty} e^{itx}\,p(x)\,dx$$
[ Relationship between the probability generating function, moment generating function and characteristic function ] For a nonnegative integer-valued random variable $\xi$ with probability generating function $P(s)$,
$$P(e^t) = M(t)$$
$$P(e^{it}) = \varphi(t)$$
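These substitution relations can be verified numerically for a binomial variable, whose generating function is $(q + ps)^n$ (the parameters and the point t below are arbitrary):

```python
import cmath
from math import comb, exp

# Check P(e^t) = M(t) and P(e^{it}) = phi(t) for a binomial(n, p) variable.
n, p, t = 4, 0.25, 0.7
q = 1 - p
pk = [comb(n, k) * p**k * q**(n - k) for k in range(n + 1)]

def P(s):
    # probability generating function evaluated at s (real or complex)
    return sum(c * s**k for k, c in enumerate(pk))

phi = sum(c * cmath.exp(1j * t * k) for k, c in enumerate(pk))  # E e^{it xi}
M = sum(c * exp(t * k) for k, c in enumerate(pk))               # E e^{t xi}
print(abs(phi - P(cmath.exp(1j * t))), abs(M - P(exp(t))))      # both ~ 0
```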
5. Commonly Used Distribution Functions
1. Commonly used discrete distributions
For each distribution, the table gives the probability distribution and its domain, the parameter conditions, the mean $E\xi$, the variance $D\xi$, and, where elementary, the probability generating function $P(s)$, moment generating function $M(t)$, and characteristic function $\varphi(t)$.

Binomial distribution B(n, p)
  $P(\xi = k) = \binom{n}{k} p^k q^{n-k}$, $k = 0, 1, \ldots, n$; $0 < p < 1$, $q = 1 - p$, n a positive integer
  mean $np$; variance $npq$; $P(s) = (q + ps)^n$; $M(t) = (q + pe^t)^n$; $\varphi(t) = (q + pe^{it})^n$

Poisson distribution P(λ)
  $P(\xi = k) = \dfrac{\lambda^k}{k!}\,e^{-\lambda}$, $k = 0, 1, 2, \ldots$; $\lambda > 0$
  mean $\lambda$; variance $\lambda$; $P(s) = e^{\lambda(s-1)}$; $M(t) = e^{\lambda(e^t - 1)}$; $\varphi(t) = e^{\lambda(e^{it} - 1)}$

Geometric distribution
  $P(\xi = k) = p\,q^{k-1}$, $k = 1, 2, \ldots$; $0 < p < 1$, $q = 1 - p$
  mean $\dfrac{1}{p}$; variance $\dfrac{q}{p^2}$; $P(s) = \dfrac{ps}{1 - qs}$; $M(t) = \dfrac{pe^t}{1 - qe^t}$; $\varphi(t) = \dfrac{pe^{it}}{1 - qe^{it}}$

Negative binomial distribution
  $P(\xi = k) = \binom{r + k - 1}{k} p^r q^k$, $k = 0, 1, 2, \ldots$; $0 < p < 1$, $q = 1 - p$, r a positive real number
  mean $\dfrac{rq}{p}$; variance $\dfrac{rq}{p^2}$; $P(s) = \left(\dfrac{p}{1 - qs}\right)^r$; $M(t) = \left(\dfrac{p}{1 - qe^t}\right)^r$; $\varphi(t) = \left(\dfrac{p}{1 - qe^{it}}\right)^r$

Single-point (degenerate) distribution
  $P(\xi = c) = 1$, c a constant
  mean $c$; variance $0$; $P(s) = s^c$ (for c a nonnegative integer); $M(t) = e^{ct}$; $\varphi(t) = e^{ict}$

Logarithmic distribution
  $P(\xi = k) = \dfrac{-1}{\ln(1-p)} \cdot \dfrac{p^k}{k}$, $k = 1, 2, \ldots$; $0 < p < 1$
  mean $\dfrac{-p}{(1-p)\ln(1-p)}$; variance $\dfrac{-p\,[\,p + \ln(1-p)\,]}{(1-p)^2\,[\ln(1-p)]^2}$; $P(s) = \dfrac{\ln(1 - ps)}{\ln(1 - p)}$

Hypergeometric distribution
  $P(\xi = k) = \dfrac{\binom{M}{k}\binom{N-M}{n-k}}{\binom{N}{n}}$, $\max(0, n - N + M) \leq k \leq \min(n, M)$; N, M, n positive integers, $M \leq N$, $n \leq N$
  mean $\dfrac{nM}{N}$; variance $\dfrac{nM}{N}\left(1 - \dfrac{M}{N}\right)\dfrac{N - n}{N - 1}$ (the generating functions are expressed through the hypergeometric function)


2. Commonly used continuous distributions
For each distribution, the table gives the distribution density and its domain, the parameter conditions, the mean $E\xi$, the variance $D\xi$, and, where elementary, the moment generating function $M(t)$ and characteristic function $\varphi(t)$.

Uniform distribution U(a, b)
  $p(x) = \dfrac{1}{b-a}$, $a \leq x \leq b$
  mean $\dfrac{a+b}{2}$; variance $\dfrac{(b-a)^2}{12}$; $M(t) = \dfrac{e^{bt} - e^{at}}{(b-a)t}$; $\varphi(t) = \dfrac{e^{ibt} - e^{iat}}{i(b-a)t}$

Standard normal distribution N(0, 1)
  $p(x) = \dfrac{1}{\sqrt{2\pi}}\,e^{-x^2/2}$, $-\infty < x < \infty$
  mean 0; variance 1; $M(t) = e^{t^2/2}$; $\varphi(t) = e^{-t^2/2}$

Normal distribution N(a, σ²)
  $p(x) = \dfrac{1}{\sqrt{2\pi}\,\sigma}\,e^{-(x-a)^2/(2\sigma^2)}$, $-\infty < x < \infty$; $\sigma > 0$
  mean $a$; variance $\sigma^2$; $M(t) = e^{at + \sigma^2 t^2/2}$; $\varphi(t) = e^{iat - \sigma^2 t^2/2}$

Rayleigh distribution
  $p(x) = \dfrac{x}{\sigma^2}\,e^{-x^2/(2\sigma^2)}$, $x \geq 0$; $\sigma > 0$
  mean $\sigma\sqrt{\dfrac{\pi}{2}}$; variance $\left(2 - \dfrac{\pi}{2}\right)\sigma^2$

Exponential distribution
  $p(x) = \lambda e^{-\lambda x}$, $x \geq 0$; $\lambda > 0$
  mean $\dfrac{1}{\lambda}$; variance $\dfrac{1}{\lambda^2}$; $M(t) = \dfrac{\lambda}{\lambda - t}$ ($t < \lambda$); $\varphi(t) = \dfrac{\lambda}{\lambda - it}$

Beta distribution B(p, q)
  $p(x) = \dfrac{1}{B(p, q)}\,x^{p-1}(1-x)^{q-1}$, $0 \leq x \leq 1$; $p > 0$, $q > 0$
  mean $\dfrac{p}{p+q}$; variance $\dfrac{pq}{(p+q)^2(p+q+1)}$ (the generating functions are expressed through the Kummer function)

Gamma distribution Γ(α, β)
  $p(x) = \dfrac{\beta^\alpha}{\Gamma(\alpha)}\,x^{\alpha-1} e^{-\beta x}$, $x \geq 0$; $\alpha > 0$, $\beta > 0$
  mean $\dfrac{\alpha}{\beta}$; variance $\dfrac{\alpha}{\beta^2}$; $M(t) = \left(1 - \dfrac{t}{\beta}\right)^{-\alpha}$ ($t < \beta$); $\varphi(t) = \left(1 - \dfrac{it}{\beta}\right)^{-\alpha}$

Lognormal distribution
  $p(x) = \dfrac{1}{\sqrt{2\pi}\,\sigma x}\,e^{-(\ln x - a)^2/(2\sigma^2)}$, $x > 0$; $\sigma > 0$
  mean $e^{a + \sigma^2/2}$; variance $e^{2a + \sigma^2}\left(e^{\sigma^2} - 1\right)$

χ² distribution (with n degrees of freedom) χ²(n)
  $p(x) = \dfrac{1}{2^{n/2}\,\Gamma(n/2)}\,x^{n/2 - 1} e^{-x/2}$, $x \geq 0$; n a positive integer
  mean $n$; variance $2n$; $M(t) = (1 - 2t)^{-n/2}$ ($t < 1/2$); $\varphi(t) = (1 - 2it)^{-n/2}$

t distribution (with n degrees of freedom) t(n)
  $p(x) = \dfrac{\Gamma\left(\frac{n+1}{2}\right)}{\sqrt{n\pi}\,\Gamma\left(\frac{n}{2}\right)}\left(1 + \dfrac{x^2}{n}\right)^{-\frac{n+1}{2}}$, $-\infty < x < \infty$; n a positive integer
  mean 0 ($n > 1$); variance $\dfrac{n}{n-2}$ ($n > 2$) (the characteristic function is expressed through Bessel functions)

F distribution (with degrees of freedom (m, n)) F(m, n)
  $p(x) = \dfrac{\Gamma\left(\frac{m+n}{2}\right)}{\Gamma\left(\frac{m}{2}\right)\Gamma\left(\frac{n}{2}\right)}\,m^{m/2} n^{n/2}\,\dfrac{x^{m/2 - 1}}{(n + mx)^{(m+n)/2}}$, $x \geq 0$; m, n positive integers
  mean $\dfrac{n}{n-2}$ ($n > 2$); variance $\dfrac{2n^2(m + n - 2)}{m(n-2)^2(n-4)}$ ($n > 4$) (the characteristic function is expressed through the Kummer function)

Weibull distribution
  $p(x) = \dfrac{m}{x_0}\,(x - \gamma)^{m-1}\,e^{-(x-\gamma)^m / x_0}$, $x \geq \gamma$; shape parameter $m > 0$, scale parameter $x_0 > 0$, location parameter $\gamma$
  mean $\gamma + x_0^{1/m}\,\Gamma\!\left(1 + \dfrac{1}{m}\right)$; variance $x_0^{2/m}\left[\Gamma\!\left(1 + \dfrac{2}{m}\right) - \Gamma^2\!\left(1 + \dfrac{1}{m}\right)\right]$

Cauchy distribution
  $p(x) = \dfrac{1}{\pi} \cdot \dfrac{\lambda}{\lambda^2 + (x - \mu)^2}$, $-\infty < x < \infty$; $\lambda > 0$
  mean and variance do not exist; $\varphi(t) = e^{i\mu t - \lambda |t|}$

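Entries of this kind can be sanity-checked by numerical integration. A sketch for the exponential distribution, whose mean and variance are $1/\lambda$ and $1/\lambda^2$ (the parameter value and step size are arbitrary choices):

```python
from math import exp

# Crude left-rectangle integration of x p(x) and x^2 p(x) for the
# exponential density p(x) = lam * exp(-lam * x), x >= 0.
lam = 2.0
dx = 1e-4
xs = [i * dx for i in range(int(50 / lam / dx))]  # integrate far into the tail
mean = sum(x * lam * exp(-lam * x) * dx for x in xs)
second = sum(x * x * lam * exp(-lam * x) * dx for x in xs)
var = second - mean**2
print(mean, var)  # close to 1/lam = 0.5 and 1/lam^2 = 0.25
```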
6. The Law of Large Numbers and the Central Limit Theorem
[ Law of Large Numbers ]
1° Bernoulli's theorem  The frequency $\nu/n$ of a random event A in n independent trials converges in probability to the probability p of the event A; that is, for any $\varepsilon > 0$,
$$\lim_{n \to \infty} P\left(\left|\frac{\nu}{n} - p\right| < \varepsilon\right) = 1$$
2° Suppose the random variables $\xi_1, \xi_2, \ldots$ are independent and either (i) they have means and variances, with $E\xi_k = a_k$ and $D\xi_k \leq C$ ($k = 1, 2, \ldots$) for some constant C, or (ii) they have the same distribution with finite mean $E\xi_k = a$. Then the arithmetic mean
$$\bar{\xi}_n = \frac{1}{n}\sum_{k=1}^{n} \xi_k$$
converges in probability to the mean of the random variables; that is, for any $\varepsilon > 0$,
$$\lim_{n \to \infty} P\left(\left|\frac{1}{n}\sum_{k=1}^{n} \xi_k - \frac{1}{n}\sum_{k=1}^{n} a_k\right| < \varepsilon\right) = 1$$
(in case (ii), $a_k = a$ for all k).
3° If the random variables $\xi_1, \xi_2, \ldots$ are mutually independent, have the same distribution, and have mean a and variance $\sigma^2$, then, writing
$$S_n^2 = \frac{1}{n}\sum_{k=1}^{n} \left(\xi_k - \bar{\xi}_n\right)^2, \qquad \bar{\xi}_n = \frac{1}{n}\sum_{k=1}^{n} \xi_k$$
$S_n^2$ converges in probability to the variance $\sigma^2$ of the random variables; that is, for any $\varepsilon > 0$,
$$\lim_{n \to \infty} P\left(\left|S_n^2 - \sigma^2\right| < \varepsilon\right) = 1$$
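The law of large numbers can be illustrated by simulation (a sketch; the distribution, seed, and sample sizes are arbitrary choices):

```python
import random

# The sample mean of i.i.d. uniform(0,1) variables converges in
# probability to E xi = 1/2 as n grows.
random.seed(1)
for n in (100, 10_000, 1_000_000):
    mean = sum(random.random() for _ in range(n)) / n
    print(n, mean)  # the deviation from 0.5 shrinks as n grows
```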
[ Central Limit Theorem ]
1° If the random variables $\xi_1, \xi_2, \ldots$ are mutually independent, have the same distribution, and have mean a and variance $\sigma^2$, then, writing $S_n = \sum_{k=1}^{n} \xi_k$, the random variable
$$\zeta_n = \frac{S_n - na}{\sigma\sqrt{n}}$$
asymptotically follows the standard normal distribution N(0, 1); that is,
$$\lim_{n \to \infty} P\left(\frac{S_n - na}{\sigma\sqrt{n}} \leq x\right) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{x} e^{-t^2/2}\,dt$$
2° Under the conditions of 1°, we have
$$P\left(x_1 < \frac{S_n - na}{\sigma\sqrt{n}} \leq x_2\right) \approx \frac{1}{\sqrt{2\pi}} \int_{x_1}^{x_2} e^{-t^2/2}\,dt$$
or
$$P\left(a_1 < S_n \leq a_2\right) \approx \frac{1}{\sqrt{2\pi}} \int_{(a_1 - na)/(\sigma\sqrt{n})}^{(a_2 - na)/(\sigma\sqrt{n})} e^{-t^2/2}\,dt$$
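The central limit theorem can likewise be illustrated by simulation. A sketch with uniform(0,1) summands ($a = 1/2$, $\sigma^2 = 1/12$; the seed, n, and trial count are arbitrary choices):

```python
import random
from math import sqrt

# The standardized sum (S_n - n a) / (sigma sqrt(n)) of i.i.d.
# uniform(0,1) variables is approximately N(0,1); here we check that
# P(standardized sum <= 0) is close to 1/2.
random.seed(2)
a, sigma, n, trials = 0.5, sqrt(1.0 / 12.0), 50, 20_000
count = 0
for _ in range(trials):
    s = sum(random.random() for _ in range(n))
    z = (s - n * a) / (sigma * sqrt(n))
    count += (z <= 0)
print(count / trials)  # close to 0.5
```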
7. The Use of the Normal Distribution Table
In practice, many random phenomena follow a normal distribution, or do so asymptotically after a suitable transformation. This handbook includes a table of values of the normal probability integral
$$\Phi(x) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{x} e^{-t^2/2}\,dt$$
and a correspondence table between the value K and the integral
$$\alpha = \frac{1}{\sqrt{2\pi}} \int_{-K}^{K} e^{-t^2/2}\,dt = 2\Phi(K) - 1$$
Using them, the following problems can be solved:
1° The probability that a random variable $\xi$ following the standard normal distribution N(0, 1) falls within the interval $(-K, K)$ is
$$P(|\xi| < K) = 2\Phi(K) - 1 = \alpha$$
The one-sided probabilities are
$$P(\xi < K) = \Phi(K)$$
$$P(\xi > K) = 1 - \Phi(K)$$
(α can also be read directly from the K-to-α correspondence table).
2° Given α, determine K in the integral
$$\frac{1}{\sqrt{2\pi}} \int_{-K}^{K} e^{-t^2/2}\,dt = \alpha$$
By symmetry,
$$\Phi(K) = \frac{1 + \alpha}{2}$$
so K can be found from the K-to-α correspondence table, or from the table of $\Phi$.
3° The probability that a random variable $\xi$ following the normal distribution N(a, σ²) falls within the interval $(a - K\sigma,\ a + K\sigma)$ is
$$P(|\xi - a| < K\sigma) = 2\Phi(K) - 1$$
The one-sided probabilities are
$$P(\xi < a + K\sigma) = \Phi(K)$$
$$P(\xi > a + K\sigma) = 1 - \Phi(K)$$
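The tabulated values of the normal probability integral can be reproduced from the error function, since $\Phi(x) = \frac{1}{2}\left[1 + \operatorname{erf}\!\left(x/\sqrt{2}\right)\right]$ (a sketch; the three values of K are the classic one-, two-, and three-sigma points):

```python
from math import erf, sqrt

def Phi(x):
    # Normal probability integral via the error function:
    # Phi(x) = (1 + erf(x / sqrt(2))) / 2
    return (1.0 + erf(x / sqrt(2.0))) / 2.0

# Two-sided probability P(|xi| < K) = 2 Phi(K) - 1 for xi ~ N(0, 1):
for K in (1.0, 2.0, 3.0):
    print(K, 2.0 * Phi(K) - 1.0)  # approximately 0.6827, 0.9545, 0.9973
```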
* (In applications, when there is no reason to regard one event as more likely to occur than another, the two events are taken to be equally likely.)