# Yule–Simon distribution

*(Figure: Yule–Simon PMF and CDF on a log–log scale. The function is defined only at integer values of k; connecting lines do not indicate continuity.)*

| Property | Value |
| --- | --- |
| Parameters | ${\displaystyle \rho >0}$ shape (real) |
| Support | ${\displaystyle k\in \{1,2,\dotsc \}}$ |
| PMF | ${\displaystyle \rho \operatorname {B} (k,\rho +1)}$ |
| CDF | ${\displaystyle 1-k\operatorname {B} (k,\rho +1)}$ |
| Mean | ${\displaystyle {\frac {\rho }{\rho -1}}}$ for ${\displaystyle \rho >1}$ |
| Mode | ${\displaystyle 1}$ |
| Variance | ${\displaystyle {\frac {\rho ^{2}}{(\rho -1)^{2}(\rho -2)}}}$ for ${\displaystyle \rho >2}$ |
| Skewness | ${\displaystyle {\frac {(\rho +1)^{2}{\sqrt {\rho -2}}}{(\rho -3)\rho }}}$ for ${\displaystyle \rho >3}$ |
| Excess kurtosis | ${\displaystyle \rho +3+{\frac {11\rho ^{3}-49\rho -22}{(\rho -4)(\rho -3)\rho }}}$ for ${\displaystyle \rho >4}$ |
| MGF | does not exist |
| CF | ${\displaystyle {\frac {\rho }{\rho +1}}{}_{2}F_{1}(1,1;\rho +2;e^{i\,t})e^{i\,t}}$ |

In probability and statistics, the Yule–Simon distribution is a discrete probability distribution named after Udny Yule and Herbert A. Simon. Simon originally called it the Yule distribution.[1]

The probability mass function (pmf) of the Yule–Simon (ρ) distribution is

${\displaystyle f(k;\rho )=\rho \operatorname {B} (k,\rho +1),}$

for integer ${\displaystyle k\geq 1}$ and real ${\displaystyle \rho >0}$, where ${\displaystyle \operatorname {B} }$ is the beta function. Equivalently the pmf can be written in terms of the rising factorial as

${\displaystyle f(k;\rho )={\frac {\rho \,\Gamma (\rho +1)}{k^{\overline {\rho +1}}}}={\frac {\rho \,\Gamma (\rho +1)\,\Gamma (k)}{\Gamma (k+\rho +1)}},}$

where ${\displaystyle \Gamma }$ is the gamma function and ${\displaystyle k^{\overline {\rho +1}}=k(k+1)\dotsm (k+\rho )=\Gamma (k+\rho +1)/\Gamma (k)}$ denotes the rising factorial. Thus, if ${\displaystyle \rho }$ is an integer,

${\displaystyle f(k;\rho )={\frac {\rho \,\rho !\,(k-1)!}{(k+\rho )!}}.}$
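As a numerical sanity check, the pmf can be evaluated through the log-gamma function (a minimal sketch; the helper name `yule_simon_pmf` is ours, and the log-space evaluation avoids overflow at large k):

```python
from math import exp, lgamma

def yule_simon_pmf(k: int, rho: float) -> float:
    """Yule-Simon pmf f(k; rho) = rho * B(k, rho + 1), evaluated in log
    space via lgamma for numerical stability at large k."""
    # log B(k, rho + 1) = lgamma(k) + lgamma(rho + 1) - lgamma(k + rho + 1)
    return rho * exp(lgamma(k) + lgamma(rho + 1) - lgamma(k + rho + 1))

# For rho = 2: f(1; 2) = 2 * B(1, 3) = 2 * (1/3) = 2/3
print(yule_simon_pmf(1, 2.0))
# The probabilities sum to 1 (truncating the infinite support at 10^5):
print(sum(yule_simon_pmf(k, 2.0) for k in range(1, 100_000)))
```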

The parameter ${\displaystyle \rho }$ can be estimated using a fixed point algorithm.[2]

The probability mass function f has the property that for sufficiently large k we have

${\displaystyle f(k;\rho )\approx {\frac {\rho \Gamma (\rho +1)}{k^{\rho +1}}}\propto {\frac {1}{k^{\rho +1}}}.}$
*(Figure: plot of the Yule–Simon(1) distribution (red) and its asymptotic Zipf's law (blue).)*

This means that the tail of the Yule–Simon distribution is a realization of Zipf's law: ${\displaystyle f(k;\rho )}$ can be used to model, for example, the relative frequency of the ${\displaystyle k}$th most frequent word in a large collection of text, which according to Zipf's law is inversely proportional to a (typically small) power of ${\displaystyle k}$.
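The quality of this power-law approximation can be checked numerically (a sketch; the pmf helper is our own log-gamma evaluation, and for ρ = 1 the exact pmf is 1/(k(k+1)) while the approximation is 1/k², so their ratio approaches 1 like 1 + 1/k):

```python
from math import exp, gamma, lgamma

def yule_simon_pmf(k: int, rho: float) -> float:
    """Yule-Simon pmf rho * B(k, rho + 1), evaluated in log space."""
    return rho * exp(lgamma(k) + lgamma(rho + 1) - lgamma(k + rho + 1))

rho = 1.0
for k in (10, 100, 1000):
    exact = yule_simon_pmf(k, rho)                  # equals 1 / (k * (k + 1)) here
    approx = rho * gamma(rho + 1) / k ** (rho + 1)  # power-law tail approximation
    print(k, exact, approx, approx / exact)         # ratio tends to 1 as k grows
```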

## Occurrence

The Yule–Simon distribution arose originally as the limiting distribution of a particular model studied by Udny Yule in 1925 to analyze the growth in the number of species per genus in some higher taxon of biotic organisms.[3] The Yule model makes use of two related Yule processes, where a Yule process is defined as a continuous-time birth process that starts with one or more individuals. Yule proved that, as time goes to infinity, the limit distribution of the number of species in a genus selected uniformly at random has a specific form and exhibits power-law behavior in its tail.

Thirty years later, the Nobel laureate Herbert A. Simon proposed a time-discrete preferential attachment model to describe the appearance of new words in a large piece of text. For a specific choice of the parameters, the limit distribution of the number of occurrences of each word, when the number of words diverges, coincides with that of the number of species belonging to the randomly chosen genus in the Yule model. This fact explains the designation "Yule–Simon distribution" commonly given to that limit distribution.

In the context of random graphs, the Barabási–Albert model also exhibits an asymptotic degree distribution that equals the Yule–Simon distribution for a specific choice of the parameters, and it retains power-law characteristics for more general parameter choices. The same holds for other preferential attachment random graph models.[4]

The preferential attachment process can also be studied as an urn process in which balls are added to a growing number of urns, each ball being allocated to an urn with probability linear in the number (of balls) the urn already contains.
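This urn process is straightforward to simulate (our own sketch, not Simon's original formulation): each arriving ball starts a fresh urn with probability `p_new` and otherwise joins an existing urn chosen with probability proportional to its size, implemented here by copying the urn of a uniformly chosen existing ball.

```python
import random

def simon_urn(n_balls: int, p_new: float, seed: int = 0) -> list[int]:
    """Simulate a preferential-attachment urn process and return urn sizes.

    Each arriving ball starts a new urn with probability p_new; otherwise it
    joins an existing urn chosen proportionally to its current size, which is
    equivalent to copying the urn of a uniformly random existing ball.
    """
    rng = random.Random(seed)
    owners = [0]   # urn index of each ball; start with one ball in urn 0
    sizes = [1]
    for _ in range(n_balls - 1):
        if rng.random() < p_new:
            sizes.append(1)
            owners.append(len(sizes) - 1)
        else:
            urn = owners[rng.randrange(len(owners))]  # size-biased choice
            sizes[urn] += 1
            owners.append(urn)
    return sizes

sizes = simon_urn(50_000, p_new=0.5)
# Heavy tail: a few urns capture many balls while most urns stay small.
print(len(sizes), max(sizes), sorted(sizes)[len(sizes) // 2])
```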

The distribution also arises as a compound distribution, in which the parameter of a geometric distribution is treated as a function of a random variable having an exponential distribution. Specifically, assume that ${\displaystyle W}$ follows an exponential distribution with scale ${\displaystyle 1/\rho }$ (i.e. rate ${\displaystyle \rho }$):

${\displaystyle W\sim \operatorname {Exponential} (\rho ),}$

with density

${\displaystyle h(w;\rho )=\rho \exp(-\rho w).}$

Then a Yule–Simon distributed variable K has the following geometric distribution conditional on W:

${\displaystyle K\sim \operatorname {Geometric} (\exp(-W)).}$

The pmf of a geometric distribution is

${\displaystyle g(k;p)=p(1-p)^{k-1}}$

for ${\displaystyle k\in \{1,2,\dotsc \}}$. The Yule–Simon pmf is then the following exponential-geometric compound distribution:

${\displaystyle f(k;\rho )=\int _{0}^{\infty }g(k;\exp(-w))h(w;\rho )\,dw.}$
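This mixture representation yields a simple exact sampler (a sketch assuming NumPy, whose `geometric` sampler uses the same support {1, 2, ...} as the pmf above):

```python
import numpy as np

def sample_yule_simon(rho: float, size: int, seed: int = 0) -> np.ndarray:
    """Draw Yule-Simon(rho) variates via the exponential-geometric mixture:
    W ~ Exponential(rate=rho), then K ~ Geometric(p = exp(-W))."""
    rng = np.random.default_rng(seed)
    w = rng.exponential(scale=1.0 / rho, size=size)  # rate rho -> scale 1/rho
    return rng.geometric(np.exp(-w))                 # support {1, 2, ...}

k = sample_yule_simon(3.0, 200_000)
# For rho = 3 the theoretical mean is rho / (rho - 1) = 1.5
print(k.mean())
```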

The maximum likelihood estimator of the parameter ${\displaystyle \rho }$, given the observations ${\displaystyle k_{1},k_{2},k_{3},\dots ,k_{N}}$, is the limit of the fixed-point iteration

${\displaystyle \rho ^{(t+1)}={\frac {N+a-1}{b+\sum _{i=1}^{N}\sum _{j=1}^{k_{i}}{\frac {1}{\rho ^{(t)}+j}}}},}$

where ${\displaystyle b=0,a=1}$ are the rate and shape parameters of the gamma distribution prior on ${\displaystyle \rho }$. This choice makes the prior flat, so the fixed point is the maximum likelihood estimate; other values of ${\displaystyle a}$ and ${\displaystyle b}$ yield a maximum a posteriori estimate.
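With a = 1 and b = 0 the update reduces to rho <- N / sum_i sum_j 1/(rho + j). A minimal sketch of the iteration (helper names are ours), tested on synthetic data drawn via the exponential–geometric mixture described above:

```python
import random
from math import exp, log

def sample_ys(rho: float, n: int, seed: int = 1) -> list[int]:
    """Draw Yule-Simon(rho) variates via W ~ Exp(rho), K ~ Geometric(exp(-W))."""
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        p = exp(-rng.expovariate(rho))  # geometric success probability in (0, 1]
        u = rng.random()
        # inverse-CDF draw from Geometric(p) on support {1, 2, ...}
        out.append(1 if p >= 1.0 else int(log(1.0 - u) / log(1.0 - p)) + 1)
    return out

def fit_yule_simon(ks: list[int], iters: int = 200, rho0: float = 1.0) -> float:
    """Fixed-point iteration for the Yule-Simon MLE (a = 1, b = 0 case):
    rho <- N / sum_i sum_{j=1}^{k_i} 1 / (rho + j)."""
    n, rho = len(ks), rho0
    for _ in range(iters):
        denom = sum(1.0 / (rho + j) for k in ks for j in range(1, k + 1))
        new = n / denom
        if abs(new - rho) < 1e-10:  # converged
            return new
        rho = new
    return rho

rho_hat = fit_yule_simon(sample_ys(2.0, 20_000))
print(rho_hat)  # close to the true value 2.0
```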

This algorithm was derived by Garcia[2] by directly optimizing the likelihood. Roberts and Roberts[5] generalize it to Bayesian settings using the compound geometric formulation described above, use the expectation–maximization (EM) framework to show convergence of the fixed-point algorithm, and derive the sub-linearity of its convergence rate. They also use the EM formulation to give two alternative derivations of the standard error of the estimator obtained from the fixed-point equation. The variance of the estimator ${\displaystyle {\hat {\rho }}}$ is

${\displaystyle \operatorname {Var} ({\hat {\rho }})={\frac {1}{{\frac {N}{{\hat {\rho }}^{2}}}-\sum _{i=1}^{N}\sum _{j=1}^{k_{i}}{\frac {1}{({\hat {\rho }}+j)^{2}}}}},}$

and the standard error is the square root of this variance.

## Generalizations

The two-parameter generalization of the original Yule distribution replaces the beta function with an incomplete beta function. The probability mass function of the generalized Yule–Simon(ρ, α) distribution is defined as

${\displaystyle f(k;\rho ,\alpha )={\frac {\rho }{1-\alpha ^{\rho }}}\;\mathrm {B} _{1-\alpha }(k,\rho +1),\,}$

with ${\displaystyle 0\leq \alpha <1}$. For ${\displaystyle \alpha =0}$ the ordinary Yule–Simon(ρ) distribution is obtained as a special case. The use of the incomplete beta function has the effect of introducing an exponential cutoff in the upper tail.
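Numerically, the incomplete beta function is available through SciPy, whose `scipy.special.betainc(a, b, x)` is the *regularized* form B_x(a, b) / B(a, b); a sketch of the generalized pmf (assuming SciPy; the helper name is ours):

```python
from scipy.special import beta, betainc

def gen_yule_simon_pmf(k: int, rho: float, alpha: float) -> float:
    """Generalized Yule-Simon(rho, alpha) pmf.
    betainc is the regularized incomplete beta function, so multiply back
    by the complete beta function B(k, rho + 1) to recover B_{1-alpha}."""
    incomplete = betainc(k, rho + 1, 1.0 - alpha) * beta(k, rho + 1)
    return rho / (1.0 - alpha ** rho) * incomplete

# alpha = 0 recovers the ordinary Yule-Simon(rho) pmf: f(1; 2) = 2/3
print(gen_yule_simon_pmf(1, 2.0, 0.0))
# With the exponential cutoff (alpha > 0) the probabilities still sum to 1,
# and the tail decays fast enough that truncating at k = 500 suffices:
print(sum(gen_yule_simon_pmf(k, 2.0, 0.1) for k in range(1, 500)))
```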