If the Cumulative Distribution Function Is Not Continuous, Does It Have a Probability Density Function?
Correlation as an Asset Class and the Smile
Robert L. Kosowski, Salih N. Neftci, in Principles of Financial Engineering (Third Edition), 2015
16.5 Application to Option Payoffs
The major advantage of Dirac delta functions, interpreted as the limits of distributions, is in differentiating functions at points where they cannot be differentiated in the usual sense. There are many such points in option trading. The payoff kink at the strike K is one example; knock-in and knock-out barriers are another. The Dirac delta will be useful for discussing derivatives at those points.
Before we proceed, for simplicity we will assume in this section that interest rates are equal to zero:
(16.7) r = 0
We also assume that the underlying S_t follows the risk-neutral SDE, which in this case will be given by
(16.8) dS_t = σ(S_t) S_t dW_t
Note that with interest rates being zero the drift is eliminated, and that the volatility is not of the Black–Scholes form: it depends on the random variable S_t. Let
(16.9) C(S_T) = max(S_T − K, 0)
be the vanilla call option payoff shown in Figure 16.4. The function is not differentiable at S_T = K, yet its first-order derivative is like a step function. More interestingly, the second-order derivative can be interpreted as a Dirac delta function. These derivatives are shown in Figures 16.4 and 16.5.
Now we write the equivalent of Ito's Lemma in a setting where functions have kinks, as in the option payoff case. This is called Tanaka's formula, and it essentially extends Ito's Lemma to functions that cannot be differentiated at all points. We can write
(16.10) dC(S_t) = 1_{S_t > K} dS_t + (1/2) δ(S_t − K) σ(S_t)² S_t² dt

where we define

(16.11) ∂C(S_t)/∂S_t = 1_{S_t > K}

(16.12) ∂²C(S_t)/∂S_t² = δ(S_t − K)
Taking integrals from t 0 to T we get:
(16.13) (S_T − K)^+ = (S_{t0} − K)^+ + ∫_{t0}^T 1_{S_t > K} dS_t + (1/2) ∫_{t0}^T δ(S_t − K) σ(S_t)² S_t² dt
where the first term on the right-hand side is the intrinsic value of the option at time t_0 and is known with certainty. We also know that, with zero interest rates, the option price will be given by
(16.14) C(S_{t0}, t_0) = E^P̃_{t0}[(S_T − K)^+]
Now, using the risk-adjusted probability P̃, (i) apply the expectation operator to both sides of Eq. (16.13), (ii) change the order of integration and expectation, and (iii) use the property of Dirac delta functions of eliminating the terms valued at points other than S_t = K. We obtain the characterization of the option price as:
(16.15) C(S_{t0}, t_0) = (S_{t0} − K)^+ + (1/2) σ(K)² K² ∫_{t0}^T f_t(K) dt
where f_t(K) is the continuous density function that corresponds to the risk-adjusted probability of S_t. This means that the time value of the option depends (i) on the intrinsic value of the option, (ii) on the time spent around K during the life of the option, and (iii) on the volatility at that strike, σ(K).
The main point for us is that this expression shows that the option price depends not on the overall volatility, but on the volatility of S_t around K. This is exactly what the notion of the volatility smile captures.
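A quick Monte Carlo experiment can illustrate the decomposition behind Eqs. (16.13)–(16.15). The sketch below is not from the text: it assumes a constant volatility σ (rather than the local volatility σ(S_t)), zero rates, and approximates the Dirac delta by the band indicator 1{|S − K| < ε}/(2ε); all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
S0, K, sigma, T = 100.0, 100.0, 0.2, 1.0   # illustrative parameters
n_steps, n_paths, eps = 200, 20000, 2.0
dt = T / n_steps

# Exact GBM steps under the risk-neutral measure with r = 0 (constant vol assumed).
z = rng.standard_normal((n_paths, n_steps))
log_inc = (-0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z
S = S0 * np.exp(np.cumsum(log_inc, axis=1))
S = np.concatenate([np.full((n_paths, 1), S0), S], axis=1)

# Direct risk-neutral price: E[(S_T - K)^+]
price_direct = np.maximum(S[:, -1] - K, 0.0).mean()

# Tanaka decomposition: intrinsic value + (1/2) E[integral of delta(S_t - K) sigma^2 S_t^2 dt],
# with the Dirac delta smoothed to the band indicator 1{|S - K| < eps} / (2 eps).
band = (np.abs(S[:, :-1] - K) < eps) / (2.0 * eps)
local_time_term = 0.5 * sigma**2 * (band * S[:, :-1] ** 2 * dt).sum(axis=1).mean()
price_tanaka = max(S0 - K, 0.0) + local_time_term
```

With these settings the direct price and the intrinsic-value-plus-local-time estimate should agree closely, which is the numerical content of the characterization above.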
16.5.1 An Interpretation of Dynamic Hedging
There are many dynamic strategies that replicate an option's final payoff. The best known is delta hedging. In delta hedging, the financial engineer will buy or sell delta (D_t) units of the underlying, borrow any necessary funds, and adjust D_t as the underlying S_t moves over time. As t → T, the expiration date, this duplicates the option's payoff, because as the time value goes to zero the option price merges with (S_T − K)^+.
However, there is an alternative dynamic hedging procedure that is similar to the approach adopted in the previous section. This dynamic hedging technique, called the stop-loss strategy, is as follows.
In order to replicate the payoff of the long call, hold one unit of S_t if K < S_t; otherwise hold no S_t. This strategy requires adjusting the position as soon as possible each time S_t crosses the level K: buy one unit of S_t as S_t crosses K from below, and sell it as S_t crosses K from above. The P/L of this position is given by the term
(16.16) ∫_{t0}^T 1_{S_t > K} dS_t
Clearly the switches at S_t = K cannot be made instantaneously at zero cost. The trader adjusts at discrete times Δ apart, while the underlying Wiener process moves at the faster rate √Δ. These adjustments are shown in Figures 16.6 and 16.7. The resulting hedging cost is the option's value.
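The stop-loss argument can be illustrated numerically. In the discrete sketch below (an illustration under an assumed constant-volatility geometric Brownian motion, not the book's own example), the position 1{S_t > K} is adjusted only at grid times; since each increment of S then has zero conditional mean, the P/L term (16.16) has zero expectation, and the average shortfall of the hedge relative to the payoff recovers the option's time value, i.e., the hedging cost.

```python
import numpy as np

rng = np.random.default_rng(1)
S0, K, sigma, T = 100.0, 100.0, 0.2, 1.0   # illustrative parameters
n_steps, n_paths = 200, 20000
dt = T / n_steps

z = rng.standard_normal((n_paths, n_steps))
S = S0 * np.exp(np.cumsum((-0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z, axis=1))
S = np.concatenate([np.full((n_paths, 1), S0), S], axis=1)

# Stop-loss rule: hold one unit of the underlying whenever S_t > K.
position = (S[:, :-1] > K).astype(float)
pnl = (position * np.diff(S, axis=1)).sum(axis=1)   # discrete version of (16.16)

payoff = np.maximum(S[:, -1] - K, 0.0)
intrinsic = max(S0 - K, 0.0)
shortfall = payoff - intrinsic - pnl   # replication gap = hedging cost
```

On a fixed seed, the mean stop-loss P/L is statistically indistinguishable from zero, while the mean shortfall is close to the at-the-money option value (about 8 with these parameters), showing that the "free" stop-loss hedge does not replicate the option.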
URL: https://www.sciencedirect.com/science/article/pii/B9780123869685000169
Physical Basis of Acoustics
J.P. Lefebvre, in Acoustics, 1999
1.1.1 Conservation equations
Conservation equations describe conservation of mass (continuity equation), momentum (motion equation) and energy (from the first law of thermodynamics).
Let us consider a portion of material volume Ω filled with a (piecewise) continuous medium (for greater generality we suppose the existence of a discontinuity surface ∑ – a shock wave or an interface – moving at velocity U); the equations of conservation of mass, momentum, and energy are as follows.
Mass conservation equation (or continuity equation). The hypothesis of continuous medium allows us to introduce the notion of a (piecewise continuous) density function ρ, so that the total mass M of the material volume Ω is M = ∫ Ω ρ dΩ; the mass conservation (or continuity) equation is written
(1.1) d/dt ∫_Ω ρ dΩ = 0

where d/dt is the material time derivative of the volume integral.
Momentum conservation equation (or motion equation). Let v be the local velocity; the momentum of the material volume Ω is defined as ∫_Ω ρv dΩ; and, if σ is the stress tensor and f the supply of body forces per unit volume (or volumic force source), the momentum balance equation for a volume Ω of boundary S with outward normal n is written

(1.2) d/dt ∫_Ω ρv dΩ = ∫_S σ⋅n dS + ∫_Ω f dΩ
Energy conservation equation (first law of thermodynamics). If e is the specific internal energy, the total energy of the material volume Ω is ∫_Ω ρ(e + v²/2) dΩ; and, if q is the heat flux vector and r the heat supply per unit volume and unit time (or volumic heat source), the energy balance equation for a volume Ω of boundary S with outer normal n is written

(1.3) d/dt ∫_Ω ρ(e + v²/2) dΩ = ∫_S (σ⋅v)⋅n dS − ∫_S q⋅n dS + ∫_Ω (f⋅v + r) dΩ
Using the lemma on derivatives of integrals over a material volume Ω crossed by a discontinuity ∑ of velocity U:

d/dt ∫_Ω Φ dΩ = ∫_Ω ∂Φ/∂t dΩ + ∫_S Φ v⋅n dS − ∫_∑ [Φ]_∑ U⋅n d∑

where [Φ]_∑ designates the jump Φ(2) − Φ(1) of the quantity Φ at the crossing of the discontinuity surface ∑; or

d/dt ∫_Ω Φ dΩ = ∫_Ω (dΦ/dt + Φ div v) dΩ + ∫_∑ [Φ(v − U)]_∑ ⋅ n d∑

with

dΦ/dt = ∂Φ/∂t + v⋅grad Φ

the material time derivative of the function Φ. Using the formula

∫_S Φ v⋅n dS = ∫_Ω div(Φv) dΩ + ∫_∑ [Φv]_∑ ⋅ n d∑

one obtains

d/dt ∫_Ω Φ dΩ = ∫_Ω (∂Φ/∂t + div(Φv)) dΩ + ∫_∑ [Φ(v − U)]_∑ ⋅ n d∑
The continuity hypothesis states that all equations are true for any material volume Ω, so one finds the local forms of the conservation equations:

∂ρ/∂t + div(ρv) = 0

ρ dv/dt = div σ + f

ρ d/dt (e + v²/2) = div(σ⋅v) − div q + f⋅v + r

At the discontinuities, one obtains

(1.4) [ρ(v − U)]_∑ ⋅ n = 0, [ρv ⊗ (v − U) − σ]_∑ ⋅ n = 0, [ρ(e + v²/2)(v − U) + q − σ⋅v]_∑ ⋅ n = 0
Away from the discontinuity surfaces, combining the first three local forms of the conservation equations above, one obtains the internal energy balance

(1.5) ρ de/dt = σ:D − div q + r

or, per unit mass,

(1.6) de/dt = (1/ρ)(σ:D − div q + r)

with D = (1/2)(grad v + grad^T v) the strain rate tensor.
URL: https://www.sciencedirect.com/science/article/pii/B9780122561900500026
COMPENDIUM OF THE FOUNDATIONS OF CLASSICAL STATISTICAL PHYSICS
Jos Uffink, in Philosophy of Physics, 2007
5.5 Comments
Gibbs' statistical mechanics has produced a formalism with clearly delineated concepts and methods, using only Hamiltonian mechanics and probability theory. It can be, and routinely is, used to calculate equilibrium properties of gases and other systems by introducing a specific form of the Hamiltonian. The main problems that Gibbs left open are, first, the motivation for the special choice of the equilibrium ensembles and, second, that the quantities serving as thermodynamic analogies are not uniquely defined. However, much careful work has been devoted to showing that, under certain assumptions about tempered interaction of molecules, unique thermodynamic state functions with the desired properties are obtained in the 'thermodynamic limit' (cf. §6.3).
1. Motivating the choice of ensemble. While Gibbs had not much more to offer in recommendation of these three ensembles than their simplicity as candidates for representing equilibrium, modern views often provide an additional story. First, the microcanonical ensemble is particularly singled out for describing an ensemble of systems in thermal isolation with a fixed energy E.
Arguments for this purpose come in different kinds. As argued by Boltzmann (1868), and shown more clearly by Einstein (1902), the microcanonical ensemble is the unique stationary density for an isolated ensemble of systems with fixed energy, if one assumes the ergodic hypothesis. Unfortunately for this argument, the ergodic hypothesis is false for any system that has a phase space of dimension 2 or higher (cf. paragraph 6.1).
A related but more promising argument relies on the theorem that the measure P_mc associated with the microcanonical ensemble via P_mc(A) = ∫_A ρ_mc(x)dx is the unique stationary measure among all measures that are absolutely continuous with respect to P_mc, if one assumes that the system is metrically transitive (again, see paragraph 6.1).
This argument is applicable for more general systems, but its conclusion is weaker. In particular, one would now have to argue that physically interesting systems are indeed metrically transitive, and why measures that are not absolutely continuous with respect to the microcanonical one are somehow to be disregarded. The first problem is still an open question, even for the hard-spheres model (as we shall see in paragraph 6.1). The second question can be answered in a variety of ways.
For example, [Penrose, 1979, p. 1941] adopts the principle that every ensemble should be representable by a (piecewise) continuous density function, in order to rule out "physically unreasonable cases". (This postulate implies absolute continuity of the ensemble measure with respect to the microcanonical measure by virtue of the Radon–Nikodym theorem.) See [Kurth, 1960, p. 78] for a similar postulate. Another argument, proposed by [Malament and Zabell, 1980], assumes that the measure P associated with a physically meaningful ensemble should have a property called 'translation continuity'. Roughly, this notion means that the probability assigned to any measurable set should be a continuous function under small displacements of that set within the energy hypersurface. Malament and Zabell show that this property is equivalent to absolute continuity of P with respect to μ_mc, and thus singles out the microcanonical measure uniquely if the system is metrically transitive (see [van Lith, 2001b] for a more extensive discussion).
A third approach, due to Tolman and Jaynes, more or less postulates the microcanonical density as an appropriate description of our knowledge about the microstate of a system with given energy (regardless of whether the system is metrically transitive or not).
Once the microcanonical ensemble is in place as a privileged description of an isolated system with a fixed energy, one can motivate the corresponding status for the other ensembles with relatively little effort. The canonical distribution is shown to provide the description of a small system S_1 in weak energetic contact with a larger system S_2 acting as a 'heat bath' (see [Gibbs, 1902, p. 180–183]). Here, it is assumed that the total system is isolated and described by a microcanonical ensemble, where the total system has a Hamiltonian H_tot = H_1 + H_2 + H_int with H_2 ≫ H_1 ≫ H_int. More elaborate versions of such an argument are given by Einstein (1902) and Martin-Löf (1979). Similarly, the grand-canonical ensemble can be derived for a small system that can exchange both energy and particles with a large system (see [van Kampen, 1984]).
2. The 'equivalence' of ensembles. It is often argued in physics textbooks that the choice between these different ensembles (say, the canonical and microcanonical) is deprived of practical relevance by the claim that they are all "equivalent". (See [Lorentz, 1916, p. 32] for perhaps the earliest version of this argument, or [Thompson, 1972, p. 72; Huang, 1987, p. 161–2] for more recent statements.) What is meant by this claim is that if the number of constituents increases, N → ∞, and the total Hamiltonian is proportional to N, the thermodynamic relations derived from each of them will coincide in this limit.
However, these arguments should not be mistaken as settling the empirical equivalence of the various ensembles, even in this limit. For example, it can be shown that the microcanonical ensemble admits the description of certain metastable thermodynamic states (e.g. with negative heat capacity) that are excluded in the canonical ensemble (see [Touchette, 2003; Touchette et al., 2004] and literature cited therein).
3. The coarse-grained entropy. The coarse-graining approach is reminiscent of Boltzmann's construction of cells in his (1877b) (cf. the discussion in paragraph 4.4). The main difference is that here one assumes a partition of phase space Γ, whereas Boltzmann adopted it in μ-space. Nevertheless, the same issues about the origin or status of a privileged partition can be debated (cf. p. 977). If one assumes that the partition is intended to represent what we know about the system, i.e. if one argues that all we know is whether its state falls in a particular cell ω_i, it can be argued that its status is subjective. If one argues that the partition is meant to represent limitations in the precision of human observational possibilities, perhaps enriched by instruments, i.e. that we cannot observe more about the system than that its state is in some cell ω_i, one might argue that its choice is objective, in the sense that there are objective facts about what a given epistemic community can observe or not. Of course, one can then still maintain that the status of the coarse-graining is anthropocentric (see also the discussion in §7.5). However, note that Gibbs himself did not argue for a preferential size of the cells in phase space, but for taking the limit in which their size goes to zero in a different order.
4. Statistical equilibrium. Finally, a remark about Gibbs' notion of equilibrium. This is fundamentally different from Boltzmann's 1877 notion of equilibrium as the macrostate corresponding to the region occupying the largest volume in phase space (cf. section 4.4). For Gibbs, statistical equilibrium can only apply to an ensemble. Since any given system can be regarded as belonging to an infinity of different ensembles, it makes no sense to ask whether an individual system is in statistical equilibrium or not. In contrast, in Boltzmann's case equilibrium can be attributed to a single system (namely if the microstate of that system is an element of the set Γ_eq ⊂ Γ), but the system is not guaranteed to remain in that set for all times.
Thus, one might say that in comparison with the orthodox thermodynamical notion of equilibrium (which is both stationary and a property of an individual system) Boltzmann (1877b) and Gibbs each made an opposite choice about which aspect to preserve and which aspect to sacrifice. See [Uffink, 1996b; Callender, 1999; Lavis, 2005] for further discussions.
URL: https://www.sciencedirect.com/science/article/pii/B9780444515605500129
Probability Distributions I
B.R. Martin, in Statistics for Physical Science, 2012
3.2 Single Variates
In this section we will examine the case of a single random variable. The ideas discussed here will be extended to the multivariate case in Section 3.3.
3.2.1 Probability Distributions
First we will need some definitions that extend those given in Chapter 2 in the discussion of the axioms of probability, starting with the case of a single discrete random variable. If x is a discrete random variable that can take the values x_1, x_2, …, x_n with probabilities p_1, p_2, …, p_n, then we can define a probability distribution f(x) by

(3.1a) f(x_j) = P[x = x_j] = p_j, j = 1, 2, …, n

Thus,

(3.1b) f(x) = 0 for x ≠ x_j
To distinguish between the cases of discrete and continuous variables, the probability distribution for the former is often called the probability mass function (or simply a mass function), sometimes abbreviated to pmf. A pmf f(x) satisfies the following two conditions:
1. f(x) is a single-valued non-negative real number for all real values of x, i.e., f(x) ≥ 0;
2. f(x) summed over all values of x is unity:

(3.1c) Σ_x f(x) = 1
We saw in Chapter 1 that we are also interested in the probability that x is less than or equal to a given value. This was called the cumulative distribution function (or simply the distribution function), sometimes abbreviated to cdf, and is given by

(3.2a) F(x) = Σ_{x_j ≤ x} f(x_j)
So, if x takes on the values x_1 < x_2 < ⋯ < x_n, the cumulative distribution function is

(3.2b) F(x) = 0 for x < x_1; F(x) = f(x_1) + ⋯ + f(x_k) for x_k ≤ x < x_{k+1}; F(x) = 1 for x ≥ x_n
F(x) is a nondecreasing function with limits 0 and 1 as x → −∞ and x → +∞, respectively. The quantile of order α, defined in Chapter 1, is thus the value x_α of x such that F(x_α) = α, with 0 ≤ α ≤ 1, and so x_α = F^{−1}(α), where F^{−1} is the inverse function of F. For example, the median is x_{1/2} = F^{−1}(1/2).
As sample sizes become larger, frequency plots tend to approximate smooth curves and, if the area of the histogram is normalized to unity, as in Fig. 1.5, the resulting function is a continuous probability density function (or simply a density function), abbreviated to pdf, introduced in Chapter 1. The definitions above may be extended to continuous random variables with the appropriate changes. Thus, for a continuous random variable x with a pdf f(x), (3.2a) becomes

(3.3) F(x) = ∫_{−∞}^{x} f(u) du
It follows from (3.3) that if a member of a population is chosen at random, that is, by a method that makes it equally likely that each member will be chosen, then F(x) is the probability that the member will have a value less than or equal to x. While all this is clearly consistent with earlier definitions, once again we should note the element of circularity in the concept of randomness defined in terms of probability. In mathematical statistics it is usual to start from the cumulative distribution function and define the density function as its derivative. For the mathematically well-behaved distributions usually met in physical science, the two approaches are equivalent.
The density function has the following properties analogous to those for discrete variables.
1. f(x) is a single-valued non-negative real number for all real values of x.
In the frequency interpretation of probability, f(x)dx is the probability of observing the quantity x in the range [x, x + dx]. Thus, the second condition is:
2. f(x) is normalized to unity:

∫_{−∞}^{+∞} f(x) dx = 1
It follows from property 2 that the probability of x lying between any two real values a and b for which a < b is given by

(3.4) P[a ≤ x ≤ b] = ∫_a^b f(x) dx = F(b) − F(a)
and so, unlike a discrete random variable, the probability of a continuous random variable assuming exactly any of its values is zero. This result may seem rather paradoxical at first until you consider that between any two values a and b there is an infinite number of other values and so the probability of selecting an exact value from this infinitude of possibilities must be zero. The density function cannot therefore be given in a tabular form like that of a discrete random variable.
EXAMPLE 3.1
A family has 5 children. Assuming that the birth of a girl or boy is equally likely, construct a frequency table of possible outcomes and plot the resulting probability mass function f(g) and the associated cumulative distribution function F(g).
The probability of a sequence of births containing g girls (and hence 5 − g boys) is (1/2)^g (1/2)^{5−g} = 1/32. However, there are 5!/[g!(5 − g)!] such sequences, and so the probability of having g girls is f(g) = 5!/[g!(5 − g)! × 32]. The probability mass function is thus as given in the table below.
g | 0 | 1 | 2 | 3 | 4 | 5
---|---|---|---|---|---|---
f(g) | 1/32 | 5/32 | 10/32 | 10/32 | 5/32 | 1/32
From this table we can find the cumulative distribution function F(g) using (3.2b); f(g) and F(g) are plotted in Figs 3.1(a) and (b), respectively, below.
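The table and the cdf above can be reproduced in a few lines (an illustrative sketch using Python's standard library):

```python
from math import comb

n = 5
# pmf: f(g) = C(5, g) / 2^5 for g girls out of 5 equally likely births.
pmf = {g: comb(n, g) / 2**n for g in range(n + 1)}

# cdf via (3.2b): running sum of the pmf.
cdf = {}
running = 0.0
for g in range(n + 1):
    running += pmf[g]
    cdf[g] = running
```

This reproduces the row 1/32, 5/32, 10/32, 10/32, 5/32, 1/32, with the cdf reaching 1 at g = 5.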
EXAMPLE 3.2
Find the value of N in the continuous density function:

f(x) = N x e^{−x} for 0 ≤ x < ∞, and f(x) = 0 otherwise,

and find its associated distribution function F(x). Plot f(x) and F(x).
Because f(x) has to be correctly normalized, to find N we evaluate the integral:

∫_0^∞ N x e^{−x} dx = 1

Integrating by parts gives

N [−x e^{−x} − e^{−x}]_0^∞ = N,

so that N = 1. The resulting density function is plotted in Fig. 3.2(a). The associated distribution function is

F(x) = ∫_0^x u e^{−u} du = 1 − (1 + x) e^{−x},

and is shown in Fig. 3.2(b).
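Since the symbols of Example 3.2 are not fully reproduced in this excerpt, the following sketch assumes the density has the textbook form f(x) = N x e^{−x} on [0, ∞); under that assumption, sympy confirms the normalization constant and the distribution function symbolically:

```python
import sympy as sp

x, u = sp.symbols('x u', positive=True)
N = sp.symbols('N', positive=True)

# Assumed form of the Example 3.2 density: f(u) = N * u * exp(-u) on [0, oo).
f = N * u * sp.exp(-u)

# Normalization over [0, oo) fixes N.
N_val = sp.solve(sp.integrate(f, (u, 0, sp.oo)) - 1, N)[0]

# Distribution function F(x) = integral of the normalized density from 0 to x.
F = sp.integrate(f.subs(N, N_val), (u, 0, x))
```

Under the stated assumption this gives N = 1 and F(x) = 1 − (1 + x)e^{−x}, matching the integration-by-parts result.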
Some of the earlier definitions of Chapter 1 may now be rewritten in terms of these formal definitions. Thus, the general moments about an arbitrary point λ are, for a continuous variate,

(3.5) μ_n′ = ∫_{−∞}^{∞} (x − λ)^n f(x) dx

so that the mean and variance, also with respect to the point λ, are

(3.6) μ = ∫_{−∞}^{∞} (x − λ) f(x) dx and σ² = ∫_{−∞}^{∞} (x − λ − μ)² f(x) dx
respectively. The integrals in (3.5) may not converge for all n, and some distributions possess only the trivial zero-order moment. For convenience, λ = 0 will usually be used in what follows.
3.2.2 Expectation Values
The expectation value, also called the expected value, of a random variable is obtained by finding the average value of the variate over all its possible values, weighted by the probability of their occurrence. Thus, if x is a discrete random variable with the possible values x_1, x_2, …, x_n, then the expectation value of x is defined as

(3.7) E[x] = Σ_j x_j P[x = x_j] = Σ_x x f(x),

where the second sum is over all relevant values of x and f(x) is their probability mass function. The analogous quantity for a continuous variate with density function f(x) is

(3.8a) E[x] = ∫_{−∞}^{∞} x f(x) dx
We can see from this definition that the nth moment of a distribution about any point λ is

(3.8b) μ_n′(λ) = E[(x − λ)^n] = ∫_{−∞}^{∞} (x − λ)^n f(x) dx

In particular, the nth central moment is

(3.8c) μ_n = E[(x − μ)^n]

and for λ = 0 the nth algebraic moment is

(3.8d) μ_n′ = E[x^n]
Thus, the mean is the first algebraic moment and the variance is the second central moment. It follows from (3.8) that if c is a constant, then

(3.9a) E[c] = c and E[cx] = cE[x]

and for a set of random variables A, B, C, etc.:

(3.9b) E[A + B + C + ⋯] = E[A] + E[B] + E[C] + ⋯

In addition, if the random variables A, B, C, etc. are independent, then

(3.9c) E[ABC⋯] = E[A]E[B]E[C]⋯
EXAMPLE 3.3
Three 'fair' dice are thrown and yield face values a, b, and c. What is the expectation value for the sum of their face values?
From (3.7),

E[a] = (1 + 2 + 3 + 4 + 5 + 6)/6 = 7/2,

and since E[a] = E[b] = E[c] = 7/2, from (3.9b) E[a + b + c] = E[a] + E[b] + E[c] = 21/2.
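The same answer follows from brute-force enumeration of all 6³ equally likely outcomes (an illustrative check):

```python
from itertools import product

# All 216 equally likely outcomes of throwing three fair dice.
outcomes = list(product(range(1, 7), repeat=3))

# Expectation of the sum: average of a + b + c over all outcomes.
expected_sum = sum(a + b + c for a, b, c in outcomes) / len(outcomes)
```

The enumeration gives 21/2 = 10.5, agreeing with the linearity argument via (3.9b).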
EXAMPLE 3.4
Find the mean of the continuous distribution of Example 3.2 .
Using (3.8d), the mean is

μ = E[x] = ∫_0^∞ x² e^{−x} dx

Integrating by parts twice gives μ = 2.
3.2.3 Moment Generating, and Characteristic Functions
The usefulness of moments partly stems from the fact that knowledge of them determines the form of the density function. Formally, if the moments μ_n′ of a random variable x exist and the series

(3.10) Σ_n μ_n′ t^n / n!

converges absolutely for some t > 0, then the set of moments uniquely determines the density function. There are exceptions to this statement, but fortunately it is true for all the distributions commonly met in physical science. In practice, knowledge of the first few moments essentially determines the general characteristics of the distribution, and so it is worthwhile to construct a function that gives a representation of all the moments. Such a function is called a moment generating function (mgf) and is defined by

(3.11) M_x(t) = E[e^{tx}]
For a discrete random variable x, this is
(3.12a) M_x(t) = Σ_x e^{tx} f(x)
and for a continuous variable,
(3.12b) M_x(t) = ∫_{−∞}^{∞} e^{tx} f(x) dx
The moments may be generated from (3.11) by first expanding the exponential,

M_x(t) = E[1 + tx + (tx)²/2! + ⋯] = 1 + μ_1′ t + μ_2′ t²/2! + ⋯,

then differentiating n times and setting t = 0, that is:

(3.13) μ_n′ = [d^n M_x(t)/dt^n]_{t=0}
For example, setting n = 1 and n = 2 gives μ_1′ = μ and μ_2′ = σ² + μ². Also, since the mgf about any point λ is

M_{x−λ}(t) = E[e^{t(x−λ)}] = e^{−λt} M_x(t),

then if λ = μ,

(3.14) M_{x−μ}(t) = e^{−μt} M_x(t)
An important use of the mgf is to compare two density functions f(x) and g(x). If two random variables possess mgfs that are equal for some interval symmetric about the origin, then f(x) and g(x) are identical density functions. It is also straightforward to show that the mgf of a sum of independent random variables is equal to the product of their individual mgfs.
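A symbolic sketch of how (3.13) produces moments, using the unit exponential density f(x) = e^{−x} as an illustrative choice (its mgf is 1/(1 − t) for t < 1, so the nth derivative at t = 0 is n!):

```python
import sympy as sp

t = sp.symbols('t')
x = sp.symbols('x', positive=True)

# mgf of the unit exponential density f(x) = exp(-x) on [0, oo):
# M(t) = integral of exp(t*x) * exp(-x), valid for t < 1 (convergence condition dropped).
M = sp.integrate(sp.exp(t * x) * sp.exp(-x), (x, 0, sp.oo), conds='none')

# First three moments via (3.13): n-th derivative of M at t = 0.
moments = [sp.diff(M, t, n).subs(t, 0) for n in range(1, 4)]
```

The derivatives give μ_1′ = 1, μ_2′ = 2, μ_3′ = 6, i.e., μ_n′ = n! for this density.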
It is sometimes convenient to consider, instead of the mgf, its logarithm. The Taylor expansion for this quantity is

ln M_x(t) = κ_1 t + κ_2 t²/2! + κ_3 t³/3! + ⋯,

where κ_n is the cumulant of order n, and

κ_n = [d^n ln M_x(t)/dt^n]_{t=0}

Cumulants are simply related to the central moments of the distribution, the first few relations being

κ_1 = μ, κ_2 = μ_2 = σ², κ_3 = μ_3, κ_4 = μ_4 − 3μ_2²
For some distributions the integral defining the mgf may not exist, and in these circumstances the Fourier transform of the density function, defined as

(3.15) φ_x(t) = E[e^{itx}] = ∫_{−∞}^{∞} e^{itx} f(x) dx,

may be used. In statistics, φ_x(t) is called the characteristic function (cf). The density function is then obtainable by the Fourier transform theorem (known in this context as the inversion theorem):

(3.16) f(x) = (1/2π) ∫_{−∞}^{∞} φ_x(t) e^{−itx} dt
The cf obeys theorems analogous to those obeyed by the mgf, that is: (a) if two random variables possess cfs that are equal for some interval symmetric about the origin then they have identical density functions; and (b) the cf of a sum of independent random variables is equal to the product of their individual cfs. The converse of (b) is however untrue.
EXAMPLE 3.5
Find the moment generating function of the density function used in Example 3.2 and calculate the three moments μ_1′, μ_2′, and μ_3′.
Using definition (3.12b),

M_x(t) = ∫_0^∞ e^{tx} x e^{−x} dx = ∫_0^∞ x e^{−(1−t)x} dx,

which on integrating by parts gives:

M_x(t) = 1/(1 − t)², valid for t < 1.

Then, using (3.13), the first three moments of the distribution are found to be μ_1′ = 2, μ_2′ = 6, and μ_3′ = 24.
EXAMPLE 3.6
(a) Find the characteristic function of the density function:

f(x) = (1/2) e^{−|x|}, −∞ < x < ∞,

and (b) find the density function corresponding to the characteristic function φ(t) = e^{−|t|}.
(a) From (3.15),

φ_x(t) = (1/2) ∫_{−∞}^{∞} e^{itx} e^{−|x|} dx = ∫_0^∞ e^{−x} cos(tx) dx.

Again, integration by parts gives

φ_x(t) = 1/(1 + t²)
(b) From the inversion theorem,

f(x) = (1/2π) ∫_{−∞}^{∞} e^{−itx} e^{−|t|} dt = (1/π) ∫_0^∞ e^{−t} cos(tx) dt,

where the symmetry of the circular functions has been used. The integral may be evaluated by parts to give

∫_0^∞ e^{−t} cos(tx) dt = 1/(1 + x²).

Thus,

f(x) = 1/[π(1 + x²)]
This is the density of the Cauchy distribution that we will meet again in Section 4.5.
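The inversion in part (b) can be checked numerically (a sketch; the truncation range and grid are arbitrary choices, and the real part is taken because the imaginary part integrates to zero by symmetry):

```python
import numpy as np

# Characteristic function of the standard Cauchy: phi(t) = exp(-|t|).
t = np.linspace(-50.0, 50.0, 200001)
dt = t[1] - t[0]
phi = np.exp(-np.abs(t))

def density_from_cf(x_val):
    # Inversion theorem (3.16): f(x) = (1/2 pi) * integral of exp(-i t x) phi(t) dt.
    # Plain Riemann sum; the integrand is ~exp(-50) at the truncation edges.
    integrand = np.exp(-1j * t * x_val) * phi
    return (integrand.sum() * dt).real / (2.0 * np.pi)

xs = np.array([0.0, 1.0, 2.5])
numeric = np.array([density_from_cf(v) for v in xs])
exact = 1.0 / (np.pi * (1.0 + xs**2))   # Cauchy density
```

The numerically inverted values match the Cauchy density 1/[π(1 + x²)] to several decimal places at each test point.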
URL: https://www.sciencedirect.com/science/article/pii/B9780123877604000032
k-Sample test based on the common area of kernel density estimators
P. Martínez-Camblor, ..., N. Corral, in Journal of Statistical Planning and Inference, 2008
In this section, a Cramér–Chernoff type theorem for the statistic defined in (2) will be proved. This large deviation result will enable us to investigate the Bahadur efficiency of the test (see Section 5). Similar results have been given by Louani (2000), for the distance between a kernel density estimator and its target, and by Martínez-Camblor et al. (2006), for the measure between two kernel density estimators based on independent samples. In the present case, we will need the following conditions:
- (C1)
-
The k samples , , are drawn from equal, absolutely continuous populations, with common density f;
- (C2)
-
for each where means for each ;
- (C3)
-
the kernel function K is a continuous density function symmetrical about zero;
- (C4)
-
, and for each .
There is a limitation imposed by condition (C2), which states that the sample sizes should grow at the same rate. In practice, the available sample sizes may differ considerably. In the simulations reported in Section 3, we have seen that this may affect the power of the proposed test statistic when the differences among sample sizes get larger.
Theorem 3
Under (C1)–(C4), we have as
Proof
We first consider the case . We will prove the inequality
(4)
Let be a family of open intervals such that if , and , . We have, and hence,
By the Cramér–Chernoff theorem (Van der Vaart, 1998, Proposition 14.23), the equality
follows, where . Now, use (C3) and the dominated convergence theorem to get for
where, for each Borel set A, the corresponding notation is used for the complement, the interior, and the closure of A, respectively. Since by definition , the variable follows a binomial distribution , where . Then,
and
Using, in a neighborhood of , a Taylor expansion and the fact that we get (4). The next step is to establish
(5)
in the case of equal sample sizes. Let be the set containing the possible combinations (repetitions allowed) of elements of . We have, where we use the notation . Then,
Now,
Use (C3) and the dominated convergence theorem to get (in the same notation as above) for
By using arguments similar to those needed for proving (4), we obtain
Using a Taylor expansion in a neighborhood of we get (5) as .
It remains to prove the result for arbitrary sample sizes. Introduce , and
It has been proven that
(6)
On the other hand, for each , . From this we obtain
Use (6), the continuity of the probability, and condition (C2) to conclude. □
URL: https://www.sciencedirect.com/science/article/pii/S0378375808000499
Source: https://www.sciencedirect.com/topics/mathematics/density-continuous-function