
Beta distribution

In probability theory and statistics, the beta distribution is a family of continuous probability distributions defined on the interval [0, 1] or (0, 1) in terms of two positive parameters, denoted by alpha (α) and beta (β), which appear as exponents of the variable and of its complement to 1, respectively, and control the shape of the distribution.

The beta distribution has been applied to model the behavior of random variables limited to intervals of finite length in a wide variety of disciplines. It is a suitable model for the random behavior of percentages and proportions.

In Bayesian inference, the beta distribution is the conjugate prior probability distribution for the Bernoulli, binomial, negative binomial and geometric distributions.

The formulation of the beta distribution discussed here is also known as the beta distribution of the first kind, whereas beta distribution of the second kind is an alternative name for the beta prime distribution. The generalization to multiple variables is called a Dirichlet distribution.

Definitions

Probability density function

An animation of the beta distribution for different values of its parameters.

The probability density function (PDF) of the beta distribution, for 0 ≤ x ≤ 1 or 0 < x < 1, and shape parameters α, β > 0, is a power function of the variable x and of its reflection (1 − x), as follows:

f(x; α, β) = x^(α−1) (1 − x)^(β−1) / B(α, β) = Γ(α + β) / (Γ(α) Γ(β)) · x^(α−1) (1 − x)^(β−1)

where Γ is the gamma function. The beta function, B(α, β), is a normalization constant that ensures the total probability is 1. In the above equations, x is a realization (an observed value that actually occurred) of a random variable X.

Several authors, including N. L. Johnson and S. Kotz,[1] use the symbols p and q (instead of α and β) for the shape parameters of the beta distribution, reminiscent of the symbols traditionally used for the parameters of the Bernoulli distribution, because the beta distribution approaches the Bernoulli distribution in the limit when both shape parameters α and β approach the value of zero.

In the following, a random variable X that is beta-distributed with parameters α and β will be denoted by:[2][3]

X ~ Beta(α, β)

Other notations for beta-distributed random variables used in the statistical literature are Be(α, β)[4] and β_(α, β)[5].
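
As an illustrative sketch (not part of the original article), the density defined above can be evaluated directly and checked against a library implementation; this assumes Python with NumPy and SciPy, and the parameter values are arbitrary examples.

    # Minimal sketch: evaluate the beta PDF from its definition and compare with SciPy.
    import numpy as np
    from scipy.special import beta as beta_fn, gamma
    from scipy.stats import beta as beta_dist

    def beta_pdf(x, a, b):
        # f(x; a, b) = x^(a-1) (1-x)^(b-1) / B(a, b)
        return x**(a - 1) * (1 - x)**(b - 1) / beta_fn(a, b)

    a, b = 2.0, 5.0                       # example shape parameters
    x = np.linspace(0.05, 0.95, 5)
    print(np.allclose(beta_pdf(x, a, b), beta_dist.pdf(x, a, b)))             # True
    print(np.isclose(beta_fn(a, b), gamma(a) * gamma(b) / gamma(a + b)))      # B(a, b) via Γ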

Cumulative distribution function

CDF for symmetric beta distribution vs. x, with α = β
CDF for skewed beta distribution vs. x, with β = 5α

The cumulative distribution function is

F(x; α, β) = B(x; α, β) / B(α, β) = I_x(α, β)

where B(x; α, β) is the incomplete beta function and I_x(α, β) is the regularized incomplete beta function.
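
As a brief sketch (assuming SciPy is available), the CDF stated above is exactly the regularized incomplete beta function, which SciPy exposes as scipy.special.betainc:

    # Sketch: the beta CDF equals the regularized incomplete beta function I_x(a, b).
    import numpy as np
    from scipy.special import betainc
    from scipy.stats import beta as beta_dist

    a, b = 2.0, 5.0                       # example shape parameters
    x = np.linspace(0.0, 1.0, 6)
    print(np.allclose(beta_dist.cdf(x, a, b), betainc(a, b, x)))   # True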

Alternative parametrizations

Two parameters

Mean and sample size

The beta distribution may also be reparametrized in terms of its mean μ (0 < μ < 1) and the sum of the two shape parameters ν = α + β > 0 ([3] p. 83). Denoting by αPosterior and βPosterior the shape parameters of the posterior beta distribution resulting from applying Bayes' theorem to a binomial likelihood function and a prior probability, the interpretation of the sum of both shape parameters as a sample size, ν = αPosterior + βPosterior, is only correct for the Haldane prior probability Beta(0,0). Specifically, for the Bayes (uniform) prior Beta(1,1) the correct interpretation would be sample size = αPosterior + βPosterior − 2, or ν = (sample size) + 2. For a sample size much larger than 2, the difference between these two priors becomes negligible. (See the section on Bayesian inference for further details.) ν = α + β is referred to as the "sample size" of a beta distribution, but one should remember that it is, strictly speaking, the "sample size" of a binomial likelihood function only when a Haldane Beta(0,0) prior is used in Bayes' theorem.

This parametrization may be useful in Bayesian parameter estimation. For example, one may administer a test to a number of individuals. If it is assumed that each person's score (0 ≤ θ ≤ 1) is drawn from a population-level beta distribution, then an important statistic is the mean of this population-level distribution. The mean and sample size parameters are related to the shape parameters α and β via[3]

α = μν, β = (1 − μ)ν

Under this parametrization, one may place an uninformative prior over the mean and a vague prior (such as an exponential or gamma distribution) over the positive reals for the sample size, if they are independent and the data and/or prior beliefs justify it.

Mode and concentration

Concave beta distributions, which have α, β > 1, can be parametrized in terms of mode and "concentration". The mode, ω = (α − 1)/(α + β − 2), and the concentration, κ = α + β, can be used to define the usual shape parameters as follows:[6]

α = ω(κ − 2) + 1, β = (1 − ω)(κ − 2) + 1

For the mode, 0 < ω < 1, to be well defined, we need α, β > 1, or equivalently κ > 2. If we instead define the concentration as c = κ − 2, the condition simplifies to c > 0, and the beta density at α = 1 + cω and β = 1 + c(1 − ω) can be written as:

f(x; ω, c) = x^(cω) (1 − x)^(c(1−ω)) / B(1 + cω, 1 + c(1 − ω))

where c directly scales the sufficient statistics, ln(x) and ln(1 − x). Note also that in the limit c → 0 the distribution becomes flat.

Mean and variance

Solving the system of (coupled) equations given in the above sections as the equations for the mean and the variance of the beta distribution in terms of the original parameters α and β, one can express the parameters α and β in terms of the mean (μ) and the variance (var):

ν = α + β = μ(1 − μ)/var − 1, where ν > 0, and therefore var < μ(1 − μ)
α = μν = μ (μ(1 − μ)/var − 1), if var < μ(1 − μ)
β = (1 − μ)ν = (1 − μ) (μ(1 − μ)/var − 1), if var < μ(1 − μ)

This parametrization of the beta distribution may lead to a more intuitive understanding than the one based on the original parameters α and β. For example, one can express the mode, skewness, excess kurtosis and differential entropy in terms of the mean and the variance:
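
As a small sketch of this reparametrization (assuming SciPy for the check, with illustrative values): given a mean μ and a variance var with var < μ(1 − μ), recover α and β and confirm the round trip.

    # Sketch: convert (mean, variance) to (alpha, beta) and verify with SciPy.
    from scipy.stats import beta as beta_dist

    def shape_from_mean_var(mu, var):
        # Valid only when var < mu * (1 - mu); nu = alpha + beta.
        assert 0 < mu < 1 and 0 < var < mu * (1 - mu)
        nu = mu * (1 - mu) / var - 1
        return mu * nu, (1 - mu) * nu

    a, b = shape_from_mean_var(0.3, 0.02)          # example values
    m, v = beta_dist.stats(a, b, moments='mv')     # round trip: should give 0.3 and 0.02
    print(a, b, float(m), float(v))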

Four parameters

A beta distribution with the two shape parameters α and β is supported on the range [0,1] or (0,1). It is possible to alter the location and scale of the distribution by introducing two further parameters representing the minimum, a, and maximum, c (c > a), values of the distribution,[1] by means of a linear transformation substituting the non-dimensional variable x in terms of the new variable y (with support [a, c] or (a, c)) and the parameters a and c:

x = (y − a)/(c − a)

The probability density function of the four-parameter beta distribution is equal to the two-parameter distribution, scaled by the range (c − a) (so that the total area under the density curve equals a probability of one), and with the "y" variable shifted and scaled as follows:

f(y; α, β, a, c) = f((y − a)/(c − a); α, β) / (c − a) = (y − a)^(α−1) (c − y)^(β−1) / ((c − a)^(α+β−1) B(α, β))

That a random variable Y is beta-distributed with four parameters α, β, a, and c will be denoted by:

Y ~ Beta(α, β, a, c)

Some measures of central location are scaled (by (c − a)) and shifted (by a), as follows:

Note: the geometric mean and the harmonic mean cannot be transformed by a linear transformation in the way that the mean, median and mode can.

The shape parameters of Y can be written in terms of its mean and variance as

The statistical dispersion measures are scaled (they do not need to be shifted because they are already centered on the mean) by the range (c − a), linearly for the mean deviation and nonlinearly for the variance:

Since the skewness and the excess kurtosis are non-dimensional quantities (being moments centered on the mean and normalized by the standard deviation), they are independent of the parameters a and c, and therefore equal to the expressions given above in terms of X (with support [0,1] or (0,1)):
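
A sketch of the four-parameter (location–scale) form using SciPy's generic loc/scale mechanism, with loc = a and scale = c − a; the particular parameter values below are arbitrary examples.

    # Sketch: four-parameter beta as a scaled/shifted two-parameter beta.
    import numpy as np
    from scipy.stats import beta as beta_dist

    alpha, bet, a, c = 2.0, 3.0, -1.0, 4.0         # shape parameters and support [a, c]
    y = np.linspace(a + 0.1, c - 0.1, 5)
    x = (y - a) / (c - a)                          # map back to [0, 1]
    lhs = beta_dist.pdf(y, alpha, bet, loc=a, scale=c - a)
    rhs = beta_dist.pdf(x, alpha, bet) / (c - a)   # density scaled by the range
    print(np.allclose(lhs, rhs))                   # True
    print(beta_dist.mean(alpha, bet, loc=a, scale=c - a))  # a + (c - a) * alpha / (alpha + bet)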

Properties

Measures of central tendency

Mode

The mode of a beta distributed random variable X with α, β > 1 is the most likely value of the distribution (corresponding to the peak in the PDF), and is given by the following expression:[1]

mode = (α − 1) / (α + β − 2)

When both parameters are less than one (α, β < 1), this is the anti-mode: the lowest point of the probability density curve.[7]

Letting α = β, the expression for the mode simplifies to 1/2, showing that for α = β > 1 the mode (or anti-mode when α, β < 1) is at the center of the distribution: it is symmetric in those cases. See the Shapes section of this article for a full list of mode cases, for arbitrary values of α and β. For several of these cases, the maximum value of the density function occurs at one or both ends. In some cases the (maximum) value of the density function occurring at the end is finite. For example, in the case of α = 2, β = 1 (or α = 1, β = 2), the density function becomes a right-triangle distribution which is finite at both ends. In several other cases there is a singularity at one end, where the value of the density function approaches infinity. For example, in the case α = β = 1/2, the beta distribution simplifies to become the arcsine distribution. There is debate among mathematicians about some of these cases and whether the ends (x = 0 and x = 1) can be called modes or not.[8][2]

Mode for the beta distribution for 1 ≤ α ≤ 5 and 1 ≤ β ≤ 5

Median

Median of the beta distribution for 0 ≤ α ≤ 5 and 0 ≤ β ≤ 5
(Mean − median) for the beta distribution versus alpha and beta from 0 to 2

The median of the beta distribution is the unique real number x for which the regularized incomplete beta function satisfies I_x(α, β) = 1/2. There is no general closed-form expression for the median of the beta distribution for arbitrary values of α and β. Closed-form expressions for particular values of the parameters α and β follow:[citation needed]

The following are the limits with one parameter finite (non-zero) and the other approaching these limits:[citation needed]

A reasonable approximation of the value of the median of the beta distribution, for both α and β greater than or equal to one, is given by the formula[9]

median ≈ (α − 1/3) / (α + β − 2/3), for α, β ≥ 1

When α, β ≥ 1, the relative error (the absolute error divided by the median) in this approximation is less than 4%, and for both α ≥ 2 and β ≥ 2 it is less than 1%. The absolute error divided by the difference between the mean and the mode is similarly small:

Abs[(Median − Approx.)/Median] for the beta distribution for 1 ≤ α ≤ 5 and 1 ≤ β ≤ 5
Abs[(Median − Approx.)/(Mean − Mode)] for the beta distribution for 1 ≤ α ≤ 5 and 1 ≤ β ≤ 5
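
A sketch comparing the exact median (computed numerically as the 0.5 quantile with SciPy) against the approximation quoted above, median ≈ (α − 1/3)/(α + β − 2/3), for a few example parameter pairs.

    # Sketch: exact median via the quantile function vs. the closed-form approximation.
    from scipy.stats import beta as beta_dist

    for a, b in [(1.5, 1.5), (2.0, 5.0), (4.0, 3.0)]:
        exact = beta_dist.ppf(0.5, a, b)            # numerical inverse of the CDF
        approx = (a - 1/3) / (a + b - 2/3)          # valid for a, b >= 1
        rel_err = abs(exact - approx) / exact
        print(f"a={a}, b={b}: exact={exact:.6f}, approx={approx:.6f}, rel. error={rel_err:.2%}")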

Mean

Mean of the beta distribution for 0 ≤ α ≤ 5 and 0 ≤ β ≤ 5

The expected value (mean) (μ) of a beta distribution random variable X with two parameters α and β is a function of only the ratio β/α of these parameters:[1]

μ = E[X] = α / (α + β) = 1 / (1 + β/α)

Letting α = β in the above expression one obtains μ = 1/2, showing that for α = β the mean is at the center of the distribution: it is symmetric. Also, the following limits can be obtained from the above expression:

Therefore, for β/α → 0, or for α/β → ∞, the mean is located at the right end, x = 1. For these limit ratios, the beta distribution becomes a one-point degenerate distribution with a Dirac delta function spike at the right end, x = 1, with probability 1 and zero probability everywhere else. There is 100% probability (absolute certainty) concentrated at the right end, x = 1.

Similarly, for β/α → ∞, or for α/β → 0, the mean is located at the left end, x = 0. The beta distribution becomes a one-point degenerate distribution with a Dirac delta function spike at the left end, x = 0, with probability 1 and zero probability everywhere else. There is 100% probability (absolute certainty) concentrated at the left end, x = 0. Following are the limits with one parameter finite (non-zero) and the other approaching these limits:

While for typical unimodal distributions (with centrally located modes, inflection points at both sides of the mode, and longer tails) (with Beta(α, β) such that α, β > 2) it is known that the sample mean (as an estimate of location) is not as robust as the sample median, the opposite is the case for uniform or "U-shaped" bimodal distributions (with Beta(α, β) such that α, β ≤ 1), with the modes located at the ends of the distribution. As Mosteller and Tukey remark ([10] p. 207), "the average of the two extreme observations uses all the sample information. This illustrates how, for short-tailed distributions, the extreme observations should get more weight." By contrast, the median of "U-shaped" bimodal distributions with modes at the edges of the distribution (with Beta(α, β) such that α, β ≤ 1) is not robust, as the sample median drops the extreme sample observations from consideration. A practical application of this occurs, for example, for random walks, since the probability for the time of the last visit to the origin in a random walk is distributed as the arcsine distribution Beta(1/2, 1/2):[5][11] the mean of a number of realizations of a random walk is a much more robust estimator than the median (which is an inappropriate sample measure estimate in this case).

Geometric mean

(Mean − geometric mean) for the beta distribution versus α and β from 0 to 2, showing the asymmetry between α and β for the geometric mean
Geometric means for the beta distribution: purple = G(X), yellow = G(1 − X), smaller values of α and β in front
Geometric means for the beta distribution: purple = G(X), yellow = G(1 − X), larger values of α and β in front

The logarithm of the geometric mean G_X of a distribution with random variable X is the arithmetic mean of ln(X), or, equivalently, its expected value:

ln G_X = E[ln X]

For a beta distribution, the expected value integral gives:

E[ln X] = ψ(α) − ψ(α + β)

where ψ is the digamma function.

Therefore, the geometric mean of a beta distribution with shape parameters α and β is the exponential of the digamma functions of α and β, as follows:

G_X = exp(E[ln X]) = exp(ψ(α) − ψ(α + β))

While for a beta distribution with equal shape parameters α = β it follows that skewness = 0 and mode = mean = median = 1/2, the geometric mean is less than 1/2: 0 < G_X < 1/2. The reason for this is that the logarithmic transformation strongly weights the values of X close to zero, since ln(X) tends strongly towards negative infinity as X approaches zero, while ln(X) flattens towards zero as X → 1.

Along a line α = β, the following limits apply:

Following are the limits with one parameter finite (non-zero) and the other approaching these limits:

The accompanying plot shows the difference between the mean and the geometric mean for shape parameters α and β from zero to 2. Besides the fact that the difference between them approaches zero as α and β approach infinity, and that the difference becomes large for values of α and β approaching zero, one can observe an evident asymmetry of the geometric mean with respect to the shape parameters α and β. The difference between the geometric mean and the mean is larger for small values of α in relation to β than when exchanging the magnitudes of β and α.

N. L. Johnson and S. Kotz[1] suggest the logarithmic approximation to the digamma function ψ(α) ≈ ln(α − 1/2), which results in the following approximation to the geometric mean:

G_X ≈ (α − 1/2) / (α + β − 1/2), if α, β > 1

Numerical values for the relative error in this approximation follow: [(α = β = 1): 9.39%]; [(α = β = 2): 1.29%]; [(α = 2, β = 3): 1.51%]; [(α = 3, β = 2): 0.44%]; [(α = β = 3): 0.51%]; [(α = β = 4): 0.26%]; [(α = 3, β = 4): 0.55%]; [(α = 4, β = 3): 0.24%].

Similarly, one can calculate the value of the shape parameters required for the geometric mean to equal 1/2. Given the value of the parameter β, what would be the value of the other parameter, α, required for the geometric mean to equal 1/2? The answer is that (for β > 1) the required value of α tends towards β + 1/2 as β → ∞. For example, all these couples have the same geometric mean of 1/2: [β = 1, α = 1.4427], [β = 2, α = 2.46958], [β = 3, α = 3.47943], [β = 4, α = 4.48449], [β = 5, α = 5.48756], [β = 10, α = 10.4938], [β = 100, α = 100.499].

The fundamental property of the geometric mean, which can be proven to be false for any other mean, is

G(X_i / Y_i) = G(X_i) / G(Y_i)

This makes the geometric mean the only correct mean when averaging normalized results, that is, results that are presented as ratios to reference values.[12] This is relevant because the beta distribution is a suitable model for the random behavior of percentages, and it is particularly suitable for the statistical modelling of proportions. The geometric mean plays a central role in maximum likelihood estimation; see the section "Parameter estimation, maximum likelihood". Actually, when performing maximum likelihood estimation, besides the geometric mean G_X based on the random variable X, another geometric mean also appears naturally: the geometric mean based on the linear transformation (1 − X), the mirror image of X, denoted by G_(1−X):

G_(1−X) = exp(E[ln(1 − X)]) = exp(ψ(β) − ψ(α + β))

Along a line α = β, the following limits apply:

Following are the limits with one parameter finite (non-zero) and the other approaching these limits:

It has the following approximate value:

G_(1−X) ≈ (β − 1/2) / (α + β − 1/2), if α, β > 1

Although both G_X and G_(1−X) are asymmetric, in the case that both shape parameters are equal, α = β, the geometric means are equal: G_X = G_(1−X). This equality follows from the following symmetry displayed between both geometric means:

G_X(Beta(α, β)) = G_(1−X)(Beta(β, α))
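
A sketch (assuming SciPy/NumPy, with illustrative parameter values) checking the digamma expression for the geometric mean, G_X = exp(ψ(α) − ψ(α + β)), against a Monte Carlo estimate and against the Johnson–Kotz approximation mentioned above.

    # Sketch: geometric mean of a beta variate via digamma vs. simulation.
    import numpy as np
    from scipy.special import digamma
    from scipy.stats import beta as beta_dist

    rng = np.random.default_rng(0)
    a, b = 3.0, 2.0
    g_exact = np.exp(digamma(a) - digamma(a + b))
    g_mc = np.exp(np.mean(np.log(beta_dist.rvs(a, b, size=200_000, random_state=rng))))
    g_approx = (a - 0.5) / (a + b - 0.5)            # Johnson & Kotz approximation (a, b > 1)
    print(g_exact, g_mc, g_approx)                  # the three values should be close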

Harmonic mean

Harmonic mean for the beta distribution for 0 < α < 5 and 0 < β < 5
Harmonic mean for the beta distribution versus α and β from 0 to 2
Harmonic means for the beta distribution: purple = H(X), yellow = H(1 − X), smaller values of α and β in front
Harmonic means for the beta distribution: purple = H(X), yellow = H(1 − X), larger values of α and β in front

The inverse of the harmonic mean (H_X) of a distribution with random variable X is the arithmetic mean of 1/X, or, equivalently, its expected value. Therefore, the harmonic mean (H_X) of a beta distribution with shape parameters α and β is:

H_X = (α − 1) / (α + β − 1), if α > 1 and β > 0

The harmonic mean (H_X) of a beta distribution with α < 1 is undefined, because its defining expression is not bounded in [0, 1] for a shape parameter α less than unity.

Letting α = β in the above expression one obtains

H_X = (α − 1) / (2α − 1)

showing that for α = β the harmonic mean ranges from 0, for α = β = 1, to 1/2, for α = β → ∞.

Following are the limits with one parameter finite (non-zero) and the other approaching these limits:

The harmonic mean plays a role in maximum likelihood estimation for the four-parameter case, in addition to the geometric mean. Actually, when performing maximum likelihood estimation for the four-parameter case, besides the harmonic mean H_X based on the random variable X, another harmonic mean also appears naturally: the harmonic mean based on the linear transformation (1 − X), the mirror image of X, denoted by H_(1−X):

H_(1−X) = (β − 1) / (α + β − 1), if β > 1 and α > 0

The harmonic mean (H_(1−X)) of a beta distribution with β < 1 is undefined, because its defining expression is not bounded in [0, 1] for a shape parameter β less than unity.

Letting α = β in the above expression one obtains

H_(1−X) = (β − 1) / (2β − 1)

showing that for α = β the harmonic mean ranges from 0, for α = β = 1, to 1/2, for α = β → ∞.

Following are the limits with one parameter finite (non-zero) and the other approaching these limits:

Although both H_X and H_(1−X) are asymmetric, in the case that both shape parameters are equal, α = β, the harmonic means are equal: H_X = H_(1−X). This equality follows from the following symmetry displayed between both harmonic means:

H_X(Beta(α, β)) = H_(1−X)(Beta(β, α)), if α, β > 1

Measures of statistical dispersion

Variance

The variance (the second moment centered on the mean) of a beta distribution random variable X with parameters α and β is:[1][13]

var(X) = E[(X − μ)²] = αβ / ((α + β)² (α + β + 1))

Letting α = β in the above expression one obtains

var(X) = 1 / (4(2β + 1))

showing that for α = β the variance decreases monotonically as α = β increases. Setting α = β = 0 in this expression, one finds the maximum variance var(X) = 1/4,[1] which only occurs approaching the limit, at α = β = 0.

The beta distribution may also be parametrized in terms of its mean μ (0 < μ < 1) and sample size ν = α + β (ν > 0) (see the subsection Mean and sample size):

α = μν, β = (1 − μ)ν

Using this parametrization, one can express the variance in terms of the mean μ and the sample size ν as follows:

var(X) = μ(1 − μ) / (1 + ν)

Since ν = α + β > 0, it follows that var(X) < μ(1 − μ).

For a symmetric distribution, the mean is at the middle of the distribution, μ = 1/2, and therefore:

var(X) = 1 / (4(1 + ν)), if μ = 1/2

Also, the following limits (with only the noted variable approaching the limit) can be obtained from the above expressions:
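
A sketch checking the variance formulas above, in both the (α, β) and the (μ, ν) parametrizations, against SciPy (example values only):

    # Sketch: var(X) = a*b / ((a+b)^2 (a+b+1)) = mu(1-mu)/(1+nu), with mu = a/(a+b), nu = a+b.
    import numpy as np
    from scipy.stats import beta as beta_dist

    a, b = 2.0, 7.0
    mu, nu = a / (a + b), a + b
    v1 = a * b / ((a + b) ** 2 * (a + b + 1))
    v2 = mu * (1 - mu) / (1 + nu)
    print(np.isclose(v1, v2), np.isclose(v1, beta_dist.var(a, b)))  # True True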

Geometric variance and covariance

Log geometric variances vs. α and β
Log geometric variances vs. α and β

The logarithm of the geometric variance, ln(var_GX), of a distribution with random variable X is the second moment of the logarithm of X centered on the geometric mean of X, ln(G_X):

ln var_GX = E[(ln X − ln G_X)²]

and therefore, the geometric variance is:

var_GX = exp(E[(ln X − ln G_X)²])

In the Fisher information matrix, and in the curvature of the log likelihood function, the logarithm of the geometric variance of the reflected variable 1 − X and the logarithm of the geometric covariance between X and 1 − X appear:

For a beta distribution, higher order logarithmic moments can be derived by using the representation of a beta distribution as a proportion of two gamma distributions and differentiating through the integral. They can be expressed in terms of higher order poly-gamma functions; see the section § Moments of logarithmically transformed random variables. The variance of the logarithmic variables and the covariance of ln X and ln(1 − X) are:

var[ln X] = ψ₁(α) − ψ₁(α + β)
var[ln(1 − X)] = ψ₁(β) − ψ₁(α + β)
cov[ln X, ln(1 − X)] = −ψ₁(α + β)

where the trigamma function, denoted ψ₁(α), is the second of the polygamma functions, and is defined as the derivative of the digamma function:

ψ₁(α) = dψ(α)/dα = d² ln Γ(α)/dα²

Therefore,

ln var_GX = ψ₁(α) − ψ₁(α + β)
ln var_G(1−X) = ψ₁(β) − ψ₁(α + β)
ln cov_G(X, 1−X) = −ψ₁(α + β)

The accompanying plots show the log geometric variances and the log geometric covariance versus the shape parameters α and β. The plots show that the log geometric variances and the log geometric covariance are close to zero for shape parameters α and β greater than 2, and that the log geometric variances rapidly rise in value for values of the shape parameters α and β less than unity. The log geometric variances are positive for all values of the shape parameters. The log geometric covariance is negative for all values of the shape parameters, and it reaches large negative values for α and β less than unity.

Following are the limits with one parameter finite (non-zero) and the other approaching these limits:

Limits with two parameters varying:

Although both ln(var_GX) and ln(var_G(1−X)) are asymmetric, when the shape parameters are equal, α = β, one has: ln(var_GX) = ln(var_G(1−X)). This equality follows from the following symmetry displayed between both log geometric variances:

ln var_GX(Beta(α, β)) = ln var_G(1−X)(Beta(β, α))

The log geometric covariance is symmetric:

ln cov_G(X, 1−X)(Beta(α, β)) = ln cov_G(X, 1−X)(Beta(β, α))
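
A sketch (assuming SciPy, with illustrative parameter values) checking the trigamma expressions for the log geometric variances and covariance against Monte Carlo estimates of var[ln X], var[ln(1 − X)] and cov[ln X, ln(1 − X)]:

    # Sketch: trigamma formulas for log variances/covariance vs. simulation.
    import numpy as np
    from scipy.special import polygamma
    from scipy.stats import beta as beta_dist

    rng = np.random.default_rng(1)
    a, b = 1.5, 4.0
    x = beta_dist.rvs(a, b, size=300_000, random_state=rng)
    lx, l1x = np.log(x), np.log1p(-x)
    trig = lambda z: polygamma(1, z)                 # trigamma = psi_1
    print(np.var(lx),  trig(a) - trig(a + b))        # var[ln X]
    print(np.var(l1x), trig(b) - trig(a + b))        # var[ln(1 - X)]
    print(np.cov(lx, l1x)[0, 1], -trig(a + b))       # cov[ln X, ln(1 - X)]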

Mean absolute deviation around the mean

Ratio of the standard deviation to the mean absolute deviation for the beta distribution with α and β ranging from 0 to 5
Ratio of the standard deviation to the mean absolute deviation for the beta distribution with mean 0 ≤ μ ≤ 1 and sample size 0 < ν ≤ 10

The mean absolute deviation around the mean for the beta distribution with shape parameters α and β is:[8]

E[|X − E[X]|] = 2 α^α β^β / (B(α, β) (α + β)^(α+β+1))

The mean absolute deviation around the mean is a more robust estimator of statistical dispersion than the standard deviation for beta distributions with tails and inflection points at each side of the mode, Beta(α, β) distributions with α, β > 2, as it depends on the linear (absolute) deviations rather than the square deviations from the mean. Therefore, the effect of very large deviations from the mean is not as heavily weighted.

Using Stirling's approximation to the gamma function, N. L. Johnson and S. Kotz[1] derived the following approximation for values of the shape parameters greater than unity (the relative error for this approximation is only −3.5% for α = β = 1, and it decreases to zero as α → ∞, β → ∞):

At the limit α → ∞, β → ∞, the ratio of the mean absolute deviation to the standard deviation (for the beta distribution) becomes equal to the ratio of the same measures for the normal distribution: √(2/π). For α = β = 1 this ratio equals √3/2, so that from α = β = 1 to α, β → ∞ the ratio decreases by 8.5%. For α = β = 0 the standard deviation is exactly equal to the mean absolute deviation around the mean. Therefore, this ratio decreases by 15% from α = β = 0 to α = β = 1, and by 25% from α = β = 0 to α, β → ∞. However, for skewed beta distributions such that α → 0 or β → 0, the ratio of the standard deviation to the mean absolute deviation approaches infinity (although each of them, individually, approaches zero) because the mean absolute deviation approaches zero faster than the standard deviation.

Using the parametrization in terms of mean μ and sample size ν = α + β > 0:

α = μν, β = (1 − μ)ν

one can express the mean absolute deviation around the mean in terms of the mean μ and the sample size ν as follows:

For a symmetric distribution, the mean is at the middle of the distribution, μ = 1/2, and therefore:

Also, the following limits (with only the noted variable approaching the limit) can be obtained from the above expressions:

Mean absolute difference

The mean absolute difference for the beta distribution is:

The Gini coefficient for the beta distribution is half of the relative mean absolute difference:

Skewness

Skewness for the beta distribution as a function of variance and mean

The skewness (the third moment centered on the mean, normalized by the 3/2 power of the variance) of the beta distribution is[1]

γ₁ = E[(X − μ)³] / var(X)^(3/2) = 2(β − α)√(α + β + 1) / ((α + β + 2)√(αβ))

Letting α = β in the above expression one obtains γ₁ = 0, showing once again that for α = β the distribution is symmetric and hence the skewness is zero. The skewness is positive (right-tailed) for α < β, and negative (left-tailed) for α > β.

Using the parametrization in terms of mean μ and sample size ν = α + β:

α = μν, β = (1 − μ)ν

the skewness can be expressed in terms of the mean μ and the sample size ν as follows:

The skewness can also be expressed just in terms of the variance var and the mean μ, as follows:

The accompanying plot of skewness as a function of variance and mean shows that maximum variance (1/4) is coupled with zero skewness and the symmetry condition (μ = 1/2), and that maximum skewness (positive or negative infinity) occurs when the mean is located at one end or the other, so that the "mass" of the probability distribution is concentrated at the ends (minimum variance).

The following expression for the square of the skewness, in terms of the sample size ν = α + β and the variance var, is useful for the method of moments estimation of four parameters:

This expression correctly gives a skewness of zero for α = β, since in that case (see § Variance) var = 1/(4(1 + ν)).

For the symmetric case (α = β), skewness = 0 over the whole range, and the following limits apply:

For the asymmetric cases (α ≠ β) the following limits (with only the noted variable approaching the limit) can be obtained from the above expressions:
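
A sketch checking the skewness expression above against SciPy's built-in value, and illustrating the sign convention (positive for α < β, negative for α > β); the parameter pairs are arbitrary examples.

    # Sketch: closed-form skewness vs. SciPy, for one right-tailed and one left-tailed case.
    import numpy as np
    from scipy.stats import beta as beta_dist

    def beta_skewness(a, b):
        return 2 * (b - a) * np.sqrt(a + b + 1) / ((a + b + 2) * np.sqrt(a * b))

    for a, b in [(2.0, 5.0), (5.0, 2.0)]:            # alpha < beta, then alpha > beta
        s_formula = beta_skewness(a, b)
        s_scipy = beta_dist.stats(a, b, moments='s')
        print(a, b, s_formula, float(s_scipy))       # positive, then negative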

Kurtosis

Excess kurtosis for the beta distribution as a function of variance and mean

The beta distribution has been applied in acoustic analysis to assess damage to gears, as the kurtosis of the beta distribution has been reported to be a good indicator of the condition of a gear.[14] Kurtosis has also been used to distinguish the seismic signal generated by a person's footsteps from other signals. As persons or other targets moving on the ground generate continuous signals in the form of seismic waves, one can separate different targets based on the seismic waves they generate. Kurtosis is sensitive to impulsive signals, so it is much more sensitive to the signal generated by human footsteps than to other signals generated by vehicles, winds, noise, etc.[15] Unfortunately, the notation for kurtosis has not been standardized. Kenney and Keeping[16] use the symbol γ₂ for the excess kurtosis, but Abramowitz and Stegun[17] use different terminology. To prevent confusion[18] between kurtosis (the fourth moment centered on the mean, normalized by the square of the variance) and excess kurtosis, when using symbols, they will be spelled out as follows:[8][19]

Letting α = β in the above expression one obtains

excess kurtosis = −6 / (3 + 2α), if α = β.

Therefore, for symmetric beta distributions, the excess kurtosis is negative, increasing from a minimum value of −2 at the limit as {α = β} → 0, and approaching a maximum value of zero as {α = β} → ∞. The value of −2 is the minimum value of excess kurtosis that any distribution (not just beta distributions, but any distribution of any possible kind) can ever achieve. This minimum value is reached when all the probability density is entirely concentrated at each end, x = 0 and x = 1, with nothing in between: a 2-point Bernoulli distribution with equal probability 1/2 at each end (a coin toss: see the section below "Kurtosis bounded by the square of the skewness" for further discussion). The description of kurtosis as a measure of the "potential outliers" (or "potential rare, extreme values") of the probability distribution is correct for all distributions, including the beta distribution. When rare, extreme values can occur in the beta distribution, the higher its kurtosis; otherwise, the kurtosis is lower. For α ≠ β, skewed beta distributions, the excess kurtosis can reach unlimited positive values (particularly for α → 0 for finite β, or for β → 0 for finite α), because the side away from the mode will produce occasional extreme values. Minimum kurtosis takes place when the mass density is concentrated equally at each end (and therefore the mean is at the center), and there is no probability mass density in between the ends.

Using the parametrization in terms of mean μ and sample size ν = α + β:

α = μν, β = (1 − μ)ν

one can express the excess kurtosis in terms of the mean μ and the sample size ν as follows:

The excess kurtosis can also be expressed in terms of just the following two parameters: the variance var and the sample size ν, as follows:

and, in terms of the variance var and the mean μ, as follows:

The plot of excess kurtosis as a function of the variance and the mean shows that the minimum value of the excess kurtosis (−2, which is the minimum possible value for excess kurtosis for any distribution) is intimately coupled with the maximum value of the variance (1/4) and the symmetry condition: the mean occurring at the midpoint (μ = 1/2). This occurs for the symmetric case of α = β = 0, with zero skewness. At the limit, this is the 2-point Bernoulli distribution with equal probability 1/2 at each Dirac delta function end, x = 0 and x = 1, and zero probability everywhere else. (A coin toss: one face of the coin being x = 0 and the other face being x = 1.) Variance is maximum because the distribution is bimodal with nothing in between the two modes (spikes) at each end. Excess kurtosis is minimum: the probability density "mass" is zero at the mean and it is concentrated at the two peaks at each end. Excess kurtosis reaches the minimum possible value (for any distribution) when the probability density function has two spikes at each end: it is bi-"peaky" with nothing in between them.

On the other hand, the plot shows that for extremely skewed cases, where the mean is located near one or the other end (μ = 0 or μ = 1), the variance is close to zero, and the excess kurtosis rapidly approaches infinity when the mean of the distribution approaches either end.

Alternatively, the excess kurtosis can also be expressed in terms of just the following two parameters: the square of the skewness, and the sample size ν, as follows:

From this last expression, one can obtain the same limits published over a century ago by Karl Pearson[20] for the beta distribution (see the section below titled "Kurtosis bounded by the square of the skewness"). Setting α + β = ν = 0 in the above expression, one obtains Pearson's lower boundary (values for the skewness and excess kurtosis below the boundary (excess kurtosis + 2 − skewness² = 0) cannot occur for any distribution, and hence Karl Pearson appropriately called the region below this boundary the "impossible region"). The limit of α + β = ν → ∞ determines Pearson's upper boundary.

therefore:

The values of ν = α + β such that ν ranges from zero to infinity, 0 < ν < ∞, span the whole region of the beta distribution in the plane of excess kurtosis versus squared skewness.

For the symmetric case (α = β), the following limits apply:

For the unsymmetric cases (α ≠ β) the following limits (with only the noted variable approaching the limit) can be obtained from the above expressions:
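
A sketch (assuming SciPy) that numerically checks, over a grid of shape parameters, that the beta distribution's excess kurtosis stays between the two Pearson boundaries discussed here, skewness² − 2 ≤ excess kurtosis ≤ (3/2) skewness²; the grid range is an arbitrary choice.

    # Sketch: verify Pearson's lower and upper boundaries for the beta distribution.
    import numpy as np
    from scipy.stats import beta as beta_dist

    ok = True
    for a in np.linspace(0.1, 10, 25):
        for b in np.linspace(0.1, 10, 25):
            skew, exkurt = (float(m) for m in beta_dist.stats(a, b, moments='sk'))
            ok &= (skew**2 - 2 <= exkurt + 1e-9) and (exkurt <= 1.5 * skew**2 + 1e-9)
    print(ok)   # True: all grid points lie between the two boundary lines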

Characteristic function

Re(characteristic function), symmetric case α = β ranging from 25 to 0
Re(characteristic function), symmetric case α = β ranging from 0 to 25
Re(characteristic function), β = α + 1/2; α ranging from 25 to 0
Re(characteristic function), α = β + 1/2; β ranging from 25 to 0
Re(characteristic function), α = β + 1/2; β ranging from 0 to 25

The characteristic function is the Fourier transform of the probability density function. The characteristic function of the beta distribution is Kummer's confluent hypergeometric function (of the first kind):[1][17][21]

φ_X(t) = E[e^(itX)] = ₁F₁(α; α + β; it)

where

(α)^(k) = α(α + 1)(α + 2) ⋯ (α + k − 1)

is the rising factorial, also called the "Pochhammer symbol". The value of the characteristic function for t = 0 is one:

φ_X(0) = ₁F₁(α; α + β; 0) = 1

Also, the real and imaginary parts of the characteristic function enjoy the following symmetries with respect to the origin of the variable t:

The symmetric case α = β simplifies the characteristic function of the beta distribution to a Bessel function, since in the special case α + β = 2α the confluent hypergeometric function (of the first kind) reduces to a Bessel function (the modified Bessel function of the first kind) using Kummer's second transformation as follows:

In the accompanying plots, the real part (Re) of the characteristic function of the beta distribution is displayed for symmetric (α = β) and skewed (α ≠ β) cases.
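
A sketch (assuming SciPy) that computes the characteristic function by direct numerical integration of E[e^(itX)] and checks the two properties stated above: φ_X(0) = 1 and the symmetry of the real and imaginary parts with respect to the origin of t; parameter values are illustrative.

    # Sketch: characteristic function by quadrature; check phi(0) = 1 and the Re/Im symmetries.
    import numpy as np
    from scipy.integrate import quad
    from scipy.stats import beta as beta_dist

    def beta_cf(t, a, b):
        re, _ = quad(lambda x: np.cos(t * x) * beta_dist.pdf(x, a, b), 0, 1)
        im, _ = quad(lambda x: np.sin(t * x) * beta_dist.pdf(x, a, b), 0, 1)
        return complex(re, im)

    a, b = 2.0, 3.0
    print(beta_cf(0.0, a, b))                                                  # (1+0j)
    print(np.isclose(beta_cf(2.0, a, b).real, beta_cf(-2.0, a, b).real))       # True (even)
    print(np.isclose(beta_cf(2.0, a, b).imag, -beta_cf(-2.0, a, b).imag))      # True (odd)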

Other moments

Moment generating function

It also follows[1][8] that the moment generating function is

M_X(α; β; t) = E[e^(tX)] = ₁F₁(α; α + β; t)

In particular, M_X(α; β; 0) = 1.

Higher moments

Using the moment generating function, the k-th raw moment is given by[1] the factor

(α)^(k) / (α + β)^(k)

multiplying the (exponential series) term t^k / k! in the series of the moment generating function:

E[X^k] = (α)^(k) / (α + β)^(k) = ∏_{r=0}^{k−1} (α + r) / (α + β + r)

where (x)^(k) is a Pochhammer symbol representing the rising factorial. It can also be written in a recursive form as

E[X^k] = ((α + k − 1) / (α + β + k − 1)) E[X^(k−1)]

Since the moment generating function has a positive radius of convergence, the beta distribution is determined by its moments.[22]
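
A sketch of the raw-moment recursion above (assuming SciPy), comparing the rising-factorial product with SciPy's numerically computed moments for example parameters:

    # Sketch: k-th raw moment E[X^k] = prod_{r=0}^{k-1} (a + r) / (a + b + r).
    import numpy as np
    from scipy.stats import beta as beta_dist

    a, b = 2.5, 4.0
    frozen = beta_dist(a, b)
    moment = 1.0                                     # E[X^0] = 1
    for k in range(1, 6):
        moment *= (a + k - 1) / (a + b + k - 1)      # recursive form of the k-th raw moment
        print(k, moment, frozen.moment(k), np.isclose(moment, frozen.moment(k)))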

Moments of transformed random variables

Moments of linearly transformed, product and inverted random variables

One can also show the following expectations for a transformed random variable,[1] where the random variable X has a beta distribution with parameters α and β: X ~ Beta(α, β). The expected value of the variable 1 − X is the mirror-symmetry of the expected value based on X:

E[1 − X] = β / (α + β)

Due to the mirror-symmetry of the probability density function of the beta distribution, the variances based on the variables X and 1 − X are identical, and the covariance of X and (1 − X) is the negative of the variance:

var[1 − X] = var[X], cov[X, 1 − X] = −var[X]

These are the expected values for the inverted variables (they are related to the harmonic means, see § Harmonic mean):

The following transformation, dividing the variable X by its mirror image, X/(1 − X), results in the expected value of the "inverted beta distribution" or beta prime distribution (also known as beta distribution of the second kind or Pearson's Type VI):[1]

E[X / (1 − X)] = α / (β − 1), if β > 1

Variances of these transformed variables can be obtained by integration, as the expected values of the second moments centered on the corresponding variables:

The following variance of the variable X divided by its mirror image, X/(1 − X), results in the variance of the "inverted beta distribution" or beta prime distribution (also known as beta distribution of the second kind or Pearson's Type VI):[1]

The covariances are:

These expectations and variances appear in the four-parameter Fisher information matrix (§ Fisher information).

Moments of logarithmically transformed random variables
Plot of logit(X) = ln(X/(1 − X)) (vertical axis) vs. X in the domain of 0 to 1 (horizontal axis). Logit transformations are interesting, as they usually transform various shapes (including J-shapes) into (usually skewed) bell-shaped densities over the logit variable, and they may remove the end singularities over the original variable.

Expected values for logarithmic transformations (useful for maximum likelihood estimates, see § Parameter estimation, Maximum likelihood) are discussed in this section. The following logarithmic linear transformations are related to the geometric means G_X and G_(1−X) (see § Geometric mean):

E[ln X] = ψ(α) − ψ(α + β), E[ln(1 − X)] = ψ(β) − ψ(α + β)

where the digamma function ψ(α) is defined as the logarithmic derivative of the gamma function:[17]

ψ(α) = d ln Γ(α) / dα

Logit transformations are interesting,[23] as they usually transform various shapes (including J-shapes) into (usually skewed) bell-shaped densities over the logit variable, and they may remove the end singularities over the original variable:

E[ln(X / (1 − X))] = ψ(α) − ψ(β), E[ln((1 − X) / X)] = ψ(β) − ψ(α)

Johnson[24] considered the distribution of the logit-transformed variable ln(X/(1 − X)), including its moment generating function and approximations for large values of the shape parameters. This transformation extends the finite support [0, 1] based on the original variable X to infinite support in both directions of the real line (−∞, +∞). The logit of a beta variate has the logistic-beta distribution.

Higher order logarithmic moments can be derived by using the representation of a beta distribution as a proportion of two gamma distributions and differentiating through the integral. They can be expressed in terms of higher order poly-gamma functions as follows:

Therefore, the variance of the logarithmic variables and the covariance of ln(X) and ln(1 − X) are:

var[ln X] = ψ₁(α) − ψ₁(α + β)
var[ln(1 − X)] = ψ₁(β) − ψ₁(α + β)
cov[ln X, ln(1 − X)] = −ψ₁(α + β)

where the trigamma function, denoted ψ₁(α), is the second of the polygamma functions, and is defined as the derivative of the digamma function:

ψ₁(α) = dψ(α) / dα = d² ln Γ(α) / dα²

The variances and covariances of the logarithmically transformed variables X and (1 − X) are different, in general, because the logarithmic transformation destroys the mirror-symmetry of the original variables X and (1 − X), as the logarithm approaches negative infinity for the variable approaching zero.

These logarithmic variances and covariance are the elements of the Fisher information matrix for the beta distribution. They are also a measure of the curvature of the log likelihood function (see the section on maximum likelihood estimation).

The variances of the log inverse variables are identical to the variances of the log variables:

var[ln(1/X)] = var[ln X], var[ln(1/(1 − X))] = var[ln(1 − X)]

It also follows that the variances of the logit transformed variables are

var[ln(X / (1 − X))] = var[ln X] + var[ln(1 − X)] − 2 cov[ln X, ln(1 − X)] = ψ₁(α) + ψ₁(β)

Quantities of information (entropy)

Given a beta-distributed random variable, X ~ Beta(α, β), the differential entropy of X (measured in nats)[25] is the expected value of the negative of the logarithm of the probability density function:

h(X) = E[−ln f(x; α, β)] = ln B(α, β) − (α − 1)ψ(α) − (β − 1)ψ(β) + (α + β − 2)ψ(α + β)

where f(x; α, β) is the probability density function of the beta distribution:

The digamma function ψ appears in the formula for the differential entropy as a consequence of Euler's integral formula for the harmonic numbers, which follows from the integral:

The differential entropy of the beta distribution is negative for all values of α and β greater than zero, except at α = β = 1 (for which values the beta distribution is the same as the uniform distribution), where the differential entropy reaches its maximum value of zero. It is to be expected that the maximum entropy should take place when the beta distribution becomes equal to the uniform distribution, since uncertainty is maximal when all possible events are equiprobable.

For α or β approaching zero, the differential entropy approaches its minimum value of negative infinity. For (either or both) α or β approaching zero, there is a maximum amount of order: all the probability density is concentrated at the ends, and there is zero probability density at points located between the ends. Similarly, for (either or both) α or β approaching infinity, the differential entropy approaches its minimum value of negative infinity, and a maximum amount of order. If either α or β approaches infinity (and the other is finite), all the probability density is concentrated at one end, and the probability density is zero everywhere else. If both shape parameters are equal (the symmetric case), α = β, and they approach infinity simultaneously, the probability density becomes a spike (Dirac delta function) concentrated at the middle, x = 1/2, and hence there is 100% probability at the middle, x = 1/2, and zero probability everywhere else.

The (continuous case) differential entropy was introduced by Shannon in his original paper (where he named it the "entropy of a continuous distribution"), as the concluding part of the same paper where he defined the discrete entropy.[26] It is known since then that the differential entropy may differ from the infinitesimal limit of the discrete entropy by an infinite offset; therefore, the differential entropy can be negative (as it is for the beta distribution). What really matters is the relative value of entropy.

Given two beta-distributed random variables, X₁ ~ Beta(α, β) and X₂ ~ Beta(α′, β′), the cross-entropy is (measured in nats)[27]

The cross-entropy has been used as an error metric to measure the distance between two hypotheses.[28][29] Its absolute value is minimum when the two distributions are identical. It is the information measure most closely related to the log maximum likelihood[27] (see the section on "Parameter estimation. Maximum likelihood estimation").

The relative entropy, or Kullback–Leibler divergence D_KL(X₁ || X₂), is a measure of the inefficiency of assuming that the distribution is X₂ ~ Beta(α′, β′) when the distribution is really X₁ ~ Beta(α, β). It is defined as follows (measured in nats).

The relative entropy, or Kullback–Leibler divergence, is always non-negative. A few numerical examples follow:

The Kullback–Leibler divergence is not symmetric, D_KL(X₁ || X₂) ≠ D_KL(X₂ || X₁), for the case in which the individual beta distributions Beta(1, 1) and Beta(3, 3) are symmetric but have different entropies h(X₁) ≠ h(X₂). The value of the Kullback divergence depends on the direction traveled: whether going from a higher (differential) entropy to a lower (differential) entropy or the other way around. In the numerical example above, the Kullback divergence measures the inefficiency of assuming that the distribution is the (bell-shaped) Beta(3, 3), rather than the (uniform) Beta(1, 1). The "h" entropy of Beta(1, 1) is higher than the "h" entropy of Beta(3, 3) because the uniform distribution Beta(1, 1) has the maximum amount of disorder. The Kullback divergence is more than two times higher (0.598803 instead of 0.267864) when measured in the direction of decreasing entropy: the direction that assumes that the (uniform) Beta(1, 1) distribution is the (bell-shaped) Beta(3, 3) rather than the other way around. In this restricted sense, the Kullback divergence is consistent with the second law of thermodynamics.

The Kullback–Leibler divergence is symmetric, D_KL(X₁ || X₂) = D_KL(X₂ || X₁), for the skewed cases Beta(3, 0.5) and Beta(0.5, 3), which have equal differential entropy h(X₁) = h(X₂).

The symmetry condition:

follows from the above definitions and the mirror-symmetry f(x; α, β) = f(1 − x; β, α) enjoyed by the beta distribution.
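
A sketch (assuming SciPy) that reproduces the numerical example above: the differential entropies of Beta(1, 1) and Beta(3, 3) and the two directed Kullback–Leibler divergences (≈ 0.267864 and ≈ 0.598803 nats), with the divergences computed by direct numerical integration rather than a closed form.

    # Sketch: differential entropy via SciPy and KL divergence by quadrature (in nats).
    import numpy as np
    from scipy.integrate import quad
    from scipy.stats import beta as beta_dist

    def kl_beta(a1, b1, a2, b2):
        # D_KL(Beta(a1,b1) || Beta(a2,b2)): truth is the first distribution.
        p = lambda x: beta_dist.pdf(x, a1, b1)
        q = lambda x: beta_dist.pdf(x, a2, b2)
        val, _ = quad(lambda x: p(x) * np.log(p(x) / q(x)), 0, 1)
        return val

    print(beta_dist.entropy(1, 1), beta_dist.entropy(3, 3))   # 0.0 and a negative value
    print(kl_beta(3, 3, 1, 1))     # ~0.267864: assuming Beta(1,1) when the truth is Beta(3,3)
    print(kl_beta(1, 1, 3, 3))     # ~0.598803: assuming Beta(3,3) when the truth is Beta(1,1)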

Relationships between statistical measures

Mean, mode and median relationship

If 1 < α < β then mode ≤ median ≤ mean.[9] Expressing the mode (only for α, β > 1) and the mean in terms of α and β:

mode = (α − 1) / (α + β − 2), mean = α / (α + β)

If 1 < β < α then the order of the inequalities is reversed. For α, β > 1 the absolute distance between the mean and the median is less than 5% of the distance between the maximum and minimum values of x. On the other hand, the absolute distance between the mean and the mode can reach 50% of the distance between the maximum and minimum values of x, for the (pathological) case of α = 1 and β = 1, for which values the beta distribution approaches the uniform distribution and the differential entropy approaches its maximum value, and hence maximum "disorder".

For example, for α = 1.0001 and β = 1.00000001:

where PDF stands for the value of the probability density function.

Mean, geometric mean and harmonic mean relationship

Mean, median, geometric mean and harmonic mean for the beta distribution with 0 < α = β < 5

It is known from the inequality of arithmetic and geometric means that the geometric mean is lower than the mean. Similarly, the harmonic mean is lower than the geometric mean. The accompanying plot shows that for α = β, both the mean and the median are exactly equal to 1/2, regardless of the value of α = β, and the mode is also equal to 1/2 for α = β > 1; however, the geometric and harmonic means are lower than 1/2, and they only approach this value asymptotically as α = β → ∞.

Kurtosis bounded by the square of the skewness

α and β parameters of the beta distribution vs. excess kurtosis and squared skewness

As remarked by Feller,[5] in the Pearson system the beta probability density appears as type I (any difference between the beta distribution and Pearson's type I distribution is only superficial and it makes no difference for the following discussion regarding the relationship between kurtosis and skewness). Karl Pearson showed, in Plate 1 of his paper[20] published in 1916, a graph with the kurtosis as the vertical axis (ordinate) and the square of the skewness as the horizontal axis (abscissa), in which a number of distributions were displayed.[30] The region occupied by the beta distribution is bounded by the following two lines in the (skewness², kurtosis) plane, or the (skewness², excess kurtosis) plane:

or, equivalently,

At a time when there were no powerful digital computers, Karl Pearson accurately computed further boundaries,[31][20] for example, separating the "U-shaped" from the "J-shaped" distributions. The lower boundary line (excess kurtosis + 2 − skewness² = 0) is produced by skewed "U-shaped" beta distributions with both values of shape parameters α and β close to zero. The upper boundary line (excess kurtosis − (3/2) skewness² = 0) is produced by extremely skewed distributions with very large values of one of the parameters and very small values of the other parameter. Karl Pearson showed[20] that this upper boundary line (excess kurtosis − (3/2) skewness² = 0) is also the intersection with Pearson's distribution III, which has unlimited support in one direction (towards positive infinity), and can be bell-shaped or J-shaped. His son, Egon Pearson, showed[30] that the region (in the kurtosis/squared-skewness plane) occupied by the beta distribution (equivalently, Pearson's distribution I) as it approaches this boundary (excess kurtosis − (3/2) skewness² = 0) is shared with the noncentral chi-squared distribution. Karl Pearson[32] (Pearson 1895, pp. 357, 360, 373–376) also showed that the gamma distribution is a Pearson type III distribution. Hence this boundary line for Pearson's type III distribution is known as the gamma line. (This can be shown from the fact that the excess kurtosis of the gamma distribution is 6/k and the square of the skewness is 4/k, hence (excess kurtosis − (3/2) skewness² = 0) is identically satisfied by the gamma distribution regardless of the value of the parameter "k"). Pearson later noted that the chi-squared distribution is a special case of Pearson's type III and also shares this boundary line (as it is apparent from the fact that for the chi-squared distribution the excess kurtosis is 12/k and the square of the skewness is 8/k, hence (excess kurtosis − (3/2) skewness² = 0) is identically satisfied regardless of the value of the parameter "k"). This is to be expected, since the chi-squared distribution X ~ χ²(k) is a special case of the gamma distribution, with parametrization X ~ Γ(k/2, 1/2) where k is a positive integer that specifies the "number of degrees of freedom" of the chi-squared distribution.

An example of a beta distribution near the upper boundary (excess kurtosis − (3/2) skewness² = 0) is given by α = 0.1, β = 1000, for which the ratio (excess kurtosis)/(skewness²) = 1.49835 approaches the upper limit of 1.5 from below. An example of a beta distribution near the lower boundary (excess kurtosis + 2 − skewness² = 0) is given by α = 0.0001, β = 0.1, for which values the expression (excess kurtosis + 2)/(skewness²) = 1.01621 approaches the lower limit of 1 from above. In the infinitesimal limit for both α and β approaching zero symmetrically, the excess kurtosis reaches its minimum value at −2. This minimum value occurs at the point at which the lower boundary line intersects the vertical axis (ordinate). (However, in Pearson's original chart, the ordinate is kurtosis, instead of excess kurtosis, and it increases downwards rather than upwards.)

Values for the skewness and excess kurtosis below the lower boundary (excess kurtosis + 2 − skewness² = 0) cannot occur for any distribution, and hence Karl Pearson appropriately called the region below this boundary the "impossible region". The boundary for this "impossible region" is determined by (symmetric or skewed) bimodal U-shaped distributions for which the parameters α and β approach zero and hence all the probability density is concentrated at the ends: x = 0, 1 with practically nothing in between them. Since for α ≈ β ≈ 0 the probability density is concentrated at the two ends x = 0 and x = 1, this "impossible boundary" is determined by a Bernoulli distribution, where the two only possible outcomes occur with respective probabilities p and q = 1 − p. For cases approaching this limit boundary with symmetry α = β, skewness ≈ 0, excess kurtosis ≈ −2 (this is the lowest excess kurtosis possible for any distribution), and the probabilities are p ≈ q ≈ 1/2. For cases approaching this limit boundary with skewness, excess kurtosis ≈ −2 + skewness², and the probability density is concentrated more at one end than the other end (with practically nothing in between), with probabilities at the left end x = 0 and at the right end x = 1.

Symmetry

All statements are conditional on α, β > 0:


Geometry of the probability density function

Inflection points

Inflection point location versus α and β showing regions with one inflection point
Inflection point location versus α and β showing region with two inflection points

For certain values of the shape parameters α and β, the probability density function has inflection points, at which the curvature changes sign. The position of these inflection points can be useful as a measure of the dispersion or spread of the distribution.

Defining the following quantity:

Points of inflection occur,[1][7][8][19] depending on the value of the shape parameters α and β, as follows:

There are no inflection points in the remaining (symmetric and skewed) regions: U-shaped (α, β < 1); upside-down-U-shaped (1 < α < 2, 1 < β < 2); reverse-J-shaped (α < 1, β > 2); or J-shaped (α > 2, β < 1).

The accompanying plots show the inflection point locations (shown vertically, ranging from 0 to 1) versus α and β (the horizontal axes ranging from 0 to 5). There are large cuts at surfaces intersecting the lines α = 1, β = 1, α = 2, and β = 2 because at these values the beta distribution changes from 2 modes, to 1 mode, to no mode.

Shapes

PDF for symmetric beta distribution vs. x and α = β from 0 to 30
PDF for symmetric beta distribution vs. x and α = β from 0 to 2
PDF for skewed beta distribution vs. x and β = 2.5α from 0 to 9
PDF for skewed beta distribution vs. x and β = 5.5α from 0 to 9
PDF for skewed beta distribution vs. x and β = 8α from 0 to 10

The beta density function can take a wide variety of different shapes depending on the values of the two parameters α and β. The ability of the beta distribution to take this great diversity of shapes (using only two parameters) is partly responsible for finding wide application for modeling actual measurements:

Symmetric (α = β)
Skewed (α ≠ β)

The density function is skewed. An interchange of parameter values yields the mirror image (the reverse) of the initial curve. Some more specific cases:

Related distributions

Transformations

Special and limiting cases

Example of eight realizations of a random walk in one dimension starting at 0: the probability for the time of the last visit to the origin is distributed as Beta(1/2, 1/2)
Beta(1/2, 1/2): The arcsine distribution probability density was proposed by Harold Jeffreys to represent uncertainty for a Bernoulli or a binomial distribution in Bayesian inference, and is now commonly referred to as Jeffreys prior: p^(−1/2)(1 − p)^(−1/2). This distribution also appears in several random walk fundamental theorems

Derived from other distributions

Combination with other distributions

Compounding with other distributions

Generalisations

Statistical inference

Parameter estimation

Method of moments

Two unknown parameters

Two unknown parameters (α and β, of a beta distribution supported in the [0,1] interval) can be estimated, using the method of moments, with the first two moments (sample mean and sample variance) as follows. Let x̄ be the sample mean estimate and v̄ be the sample variance estimate. The method-of-moments estimates of the parameters are

α̂ = x̄ (x̄(1 − x̄)/v̄ − 1), if v̄ < x̄(1 − x̄)

β̂ = (1 − x̄)(x̄(1 − x̄)/v̄ − 1), if v̄ < x̄(1 − x̄)

When the distribution is required over a known interval other than [0, 1] with random variable X, say [a, c] with random variable Y, then replace x̄ with (ȳ − a)/(c − a) and v̄ with v̄_Y/(c − a)² in the above pair of equations for the shape parameters (see the "Four unknown parameters" section below),[40] where ȳ and v̄_Y are the sample mean and sample variance of Y.
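
A sketch of this two-parameter method-of-moments recipe (assuming SciPy), applied to simulated data and including the rescaling step for a known support [a, c]; the true parameters and sample size below are illustrative.

    # Sketch: method-of-moments estimates of (alpha, beta) from data on [0, 1] or on [a, c].
    import numpy as np
    from scipy.stats import beta as beta_dist

    def beta_mom(samples, a=0.0, c=1.0):
        x = (np.asarray(samples) - a) / (c - a)      # rescale a known support [a, c] to [0, 1]
        xbar, vbar = x.mean(), x.var(ddof=1)
        if vbar >= xbar * (1 - xbar):
            raise ValueError("sample variance too large for a beta model")
        common = xbar * (1 - xbar) / vbar - 1
        return xbar * common, (1 - xbar) * common

    rng = np.random.default_rng(2)
    data = beta_dist.rvs(2.0, 5.0, size=50_000, random_state=rng)
    print(beta_mom(data))                            # close to the true values (2.0, 5.0)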

Four unknown parameters
Solutions for parameter estimates vs. (sample) excess kurtosis and (sample) squared skewness for the beta distribution

All four parameters ( of a beta distribution supported in the [a, c] interval, see section "Alternative parametrizations, Four parameters") can be estimated, using the method of moments developed by Karl Pearson, by equating sample and population values of the first four central moments (mean, variance, skewness and excess kurtosis).[1][41][42] The excess kurtosis was expressed in terms of the square of the skewness, and the sample size ν = α + β, (see previous section "Kurtosis") as follows:

One can use this equation to solve for the sample size ν = α + β in terms of the square of the skewness and the excess kurtosis as follows:[41]

This is the ratio (multiplied by a factor of 3) between the previously derived limit boundaries for the beta distribution in a space (as originally done by Karl Pearson[20]) defined with coordinates of the square of the skewness in one axis and the excess kurtosis in the other axis (see § Kurtosis bounded by the square of the skewness):

The case of zero skewness can be solved immediately, because for zero skewness α = β and hence ν = 2α = 2β, therefore α = β = ν/2.

(Excess kurtosis is negative for the beta distribution with zero skewness, ranging from −2 to 0, so that the estimate of ν (and therefore the estimated shape parameters) is positive, ranging from zero when the shape parameters approach zero and the excess kurtosis approaches −2, to infinity when the shape parameters approach infinity and the excess kurtosis approaches zero.)

For non-zero sample skewness one needs to solve a system of two coupled equations. Since the skewness and the excess kurtosis are independent of the location and scale parameters a and c, the shape parameters α and β can be uniquely determined from the sample skewness and the sample excess kurtosis, by solving the coupled equations with two known quantities (sample skewness and sample excess kurtosis) and two unknowns (the shape parameters):

resulting in the following solution:[41]

where one should take the solution with α > β for (negative) sample skewness < 0, and the solution with α < β for (positive) sample skewness > 0.

The accompanying plot shows these two solutions as surfaces in a space with horizontal axes of (sample excess kurtosis) and (sample squared skewness) and the shape parameters as the vertical axis. The surfaces are constrained by the condition that the sample excess kurtosis must be bounded by the sample squared skewness as stipulated in the above equation. The two surfaces meet at the right edge defined by zero skewness. Along this right edge, both parameters are equal and the distribution is symmetric: U-shaped for α = β < 1, uniform for α = β = 1, upside-down-U-shaped for 1 < α = β < 2 and bell-shaped for α = β > 2. The surfaces also meet at the front (lower) edge defined by the "impossible boundary" line (excess kurtosis + 2 − skewness² = 0). Along this front (lower) boundary both shape parameters approach zero, and the probability density is concentrated more at one end than the other end (with practically nothing in between), with probabilities at the left end x = 0 and at the right end x = 1. The two surfaces become further apart towards the rear edge. At this rear edge the surface parameters are quite different from each other.

As remarked, for example, by Bowman and Shenton,[43] sampling in the neighborhood of the line (sample excess kurtosis − (3/2)(sample skewness)² = 0) (the just-J-shaped portion of the rear edge where blue meets beige) "is dangerously near to chaos", because at that line the denominator of the expression above for the estimate ν = α + β becomes zero and hence ν approaches infinity as that line is approached. Bowman and Shenton[43] write that "the higher moment parameters (kurtosis and skewness) are extremely fragile (near that line). However, the mean and standard deviation are fairly reliable." Therefore, the problem arises for four-parameter estimation of very skewed distributions such that the excess kurtosis approaches (3/2) times the square of the skewness. This boundary line is produced by extremely skewed distributions with very large values of one of the parameters and very small values of the other parameter. See § Kurtosis bounded by the square of the skewness for a numerical example and further comments about this rear-edge boundary line (sample excess kurtosis − (3/2)(sample skewness)² = 0).

As remarked by Karl Pearson himself,[44] this issue may not be of much practical importance, as this trouble arises only for very skewed J-shaped (or mirror-image J-shaped) distributions with very different values of the shape parameters that are unlikely to occur much in practice. The usual skewed bell-shaped distributions that occur in practice do not have this parameter estimation problem.

The remaining two parameters can be determined using the sample mean and the sample variance using a variety of equations.[1][41] One alternative is to calculate the support interval range (c − a) based on the sample variance and the sample kurtosis. For this purpose one can solve, in terms of the range (c − a), the equation expressing the excess kurtosis in terms of the sample variance and the sample size ν (see § Kurtosis and § Alternative parametrizations, four parameters):

to obtain:

Another alternative is to calculate the support interval range (c − a) based on the sample variance and the sample skewness.[41] For this purpose one can solve, in terms of the range (c − a), the equation expressing the squared skewness in terms of the sample variance and the sample size ν (see the sections titled "Skewness" and "Alternative parametrizations, four parameters"):

to obtain:[41]

The remaining parameter can be determined from the sample mean and the previously obtained parameters:

and finally, the maximum c is obtained by adding the previously estimated range (c − a) to the estimate of a.

In the above formulas one may take, for example, as estimates of the sample moments:

The estimators G1 for sample skewness and G2 for sample kurtosis are used by DAP/SAS, PSPP/SPSS, and Excel. However, they are not used by BMDP and (according to [45]) they were not used by MINITAB in 1998. Actually, Joanes and Gill in their 1998 study[45] concluded that the skewness and kurtosis estimators used in BMDP and in MINITAB (at that time) had smaller variance and mean-squared error in normal samples, but the skewness and kurtosis estimators used in DAP/SAS, PSPP/SPSS, namely G1 and G2, had smaller mean-squared error in samples from a very skewed distribution. It is for this reason that we have spelled out "sample skewness", etc., in the above formulas, to make it explicit that the user should choose the best estimator according to the problem at hand, as the best estimator for skewness and kurtosis depends on the amount of skewness (as shown by Joanes and Gill[45]).

Maximum likelihood

Two unknown parameters
Max (joint log likelihood/N) for beta distribution maxima at α = β = 2
Max (joint log likelihood/N) for Beta distribution maxima at α = β ∈ {0.25,0.5,1,2,4,6,8}

As is also the case for maximum likelihood estimates for the gamma distribution, the maximum likelihood estimates for the beta distribution do not have a general closed form solution for arbitrary values of the shape parameters. If X1, ..., XN are independent random variables each having a beta distribution, the joint log likelihood function for N iid observations is:

Finding the maximum with respect to a shape parameter involves taking the partial derivative with respect to the shape parameter and setting the expression equal to zero yielding the maximum likelihood estimator of the shape parameters:

where:

since the digamma function denoted ψ(α) is defined as the logarithmic derivative of the gamma function:[17]

To ensure that the values with zero tangent slope are indeed a maximum (instead of a saddle-point or a minimum) one has to also satisfy the condition that the curvature is negative. This amounts to satisfying that the second partial derivative with respect to the shape parameters is negative

using the previous equations, this is equivalent to:

where the trigamma function, denoted ψ1(α), is the second of the polygamma functions, and is defined as the derivative of the digamma function:

These conditions are equivalent to stating that the variances of the logarithmically transformed variables are positive, since:

Therefore, the condition of negative curvature at a maximum is equivalent to the statements:

Alternatively, the condition of negative curvature at a maximum is also equivalent to stating that the following logarithmic derivatives of the geometric means GX and G(1−X) are positive, since:

While these slopes are indeed positive, the other slopes are negative:

The slopes of the mean and the median with respect to α and β display similar sign behavior.

From the condition that at a maximum, the partial derivative with respect to the shape parameter equals zero, we obtain the following system of coupled maximum likelihood estimate equations (for the average log-likelihoods) that needs to be inverted to obtain the (unknown) shape parameter estimates in terms of the (known) average of logarithms of the samples X1, ..., XN:[1]

where we recognize the first average as the logarithm of the sample geometric mean of X and the second as the logarithm of the sample geometric mean based on (1 − X), the mirror-image of X. For equal shape parameter estimates, the two sample geometric means are equal.

These coupled equations containing digamma functions of the shape parameter estimates must be solved by numerical methods, as done, for example, by Beckman et al.[46] Gnanadesikan et al. give numerical solutions for a few cases.[47] N. L. Johnson and S. Kotz[1] suggest that for "not too small" shape parameter estimates, the logarithmic approximation to the digamma function may be used to obtain initial values for an iterative solution, since the equations resulting from this approximation can be solved exactly:

which leads to the following solution for the initial values (of the estimate shape parameters in terms of the sample geometric means) for an iterative solution:

Alternatively, the estimates provided by the method of moments can instead be used as initial values for an iterative solution of the maximum likelihood coupled equations in terms of the digamma functions.
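As an illustration of the numerical solution described above, the following Python sketch (function names are illustrative; scipy is assumed available) solves the two coupled digamma equations with method-of-moments starting values:

```python
import numpy as np
from scipy.special import digamma
from scipy.optimize import fsolve

def beta_mle(x):
    """Maximum likelihood estimates of (alpha, beta) for data in (0, 1).

    Solves  psi(a) - psi(a + b) = mean(ln x)
            psi(b) - psi(a + b) = mean(ln(1 - x))
    starting from the method-of-moments estimates.
    """
    x = np.asarray(x, dtype=float)
    log_gx = np.log(x).mean()        # ln of sample geometric mean of X
    log_g1x = np.log1p(-x).mean()    # ln of sample geometric mean of 1 - X

    # Method-of-moments starting values.
    m, v = x.mean(), x.var()
    common = m * (1 - m) / v - 1
    start = np.array([m * common, (1 - m) * common])

    def equations(p):
        a, b = p
        return [digamma(a) - digamma(a + b) - log_gx,
                digamma(b) - digamma(a + b) - log_g1x]

    return fsolve(equations, start)

rng = np.random.default_rng(1)
print(beta_mle(rng.beta(0.5, 3.0, size=20_000)))   # roughly (0.5, 3.0)
```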

When the distribution is required over a known interval other than [0, 1] with random variable X, say [a, c] with random variable Y, then replace ln(Xi) in the first equation with ln((Yi − a)/(c − a)), and replace ln(1 − Xi) in the second equation with ln((c − Yi)/(c − a)) (see the "Alternative parametrizations, four parameters" section below).

If one of the shape parameters is known, the problem is considerably simplified. The following logit transformation can be used to solve for the unknown shape parameter (for skewed cases such that the two shape parameters differ; otherwise, if symmetric, both equal parameters are known when one is known):

This logit transformation is the logarithm of the transformation that divides the variable X by its mirror image, X/(1 − X), resulting in the "inverted beta distribution" or beta prime distribution (also known as beta distribution of the second kind or Pearson's Type VI) with support [0, +∞). As previously discussed in the section "Moments of logarithmically transformed random variables," the logit transformation, studied by Johnson,[24] extends the finite support [0, 1] based on the original variable X to infinite support in both directions of the real line (−∞, +∞).

If, for example, one of the two shape parameters is known, the unknown parameter can be obtained in terms of the inverse[48] digamma function of the right-hand side of this equation:

In particular, if one of the shape parameters has a value of unity, for example β = 1 (the power function distribution with bounded support [0,1]), using the identity ψ(x + 1) = ψ(x) + 1/x in the preceding equation, the maximum likelihood estimator for the unknown parameter is exactly:[1]

The beta distribution has support [0, 1], therefore the sample geometric mean is less than 1; hence its logarithm is negative, and the resulting maximum likelihood estimate is therefore positive.
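A quick numerical check of this special case (a sketch, assuming the usual closed-form estimator −N/Σ ln xi, i.e. −1/ln G_X, for the Beta(α, 1) power function case; the function name is illustrative):

```python
import numpy as np

def power_function_mle(x):
    """MLE of alpha for the power-function distribution Beta(alpha, 1) on [0, 1].

    Because the sample geometric mean G_X is below 1, ln(G_X) < 0 and the
    estimate -1 / ln(G_X) = -N / sum(ln x_i) is guaranteed to be positive.
    """
    log_gx = np.log(np.asarray(x, dtype=float)).mean()
    return -1.0 / log_gx

rng = np.random.default_rng(2)
print(power_function_mle(rng.beta(3.0, 1.0, size=50_000)))   # roughly 3.0
```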

In conclusion, the maximum likelihood estimates of the shape parameters of a beta distribution are (in general) a complicated function of the sample geometric mean, and of the sample geometric mean based on (1−X), the mirror-image of X. One may ask, if the variance (in addition to the mean) is necessary to estimate two shape parameters with the method of moments, why is the (logarithmic or geometric) variance not necessary to estimate two shape parameters with the maximum likelihood method, for which only the geometric means suffice? The answer is that the mean does not provide as much information as the geometric mean. For a beta distribution with equal shape parameters α = β, the mean is exactly 1/2, regardless of the value of the shape parameters, and therefore regardless of the value of the statistical dispersion (the variance). On the other hand, the geometric mean of a beta distribution with equal shape parameters α = β depends on the value of the shape parameters, and therefore it contains more information. Also, the geometric mean of a beta distribution does not satisfy the symmetry conditions satisfied by the mean; therefore, by employing both the geometric mean based on X and the geometric mean based on (1 − X), the maximum likelihood method is able to provide the best estimates for both parameters α = β, without need of employing the variance.

One can express the joint log likelihood per N iid observations in terms of the sufficient statistics (the sample geometric means) as follows:

We can plot the joint log likelihood per N observations for fixed values of the sample geometric means to see the behavior of the likelihood function as a function of the shape parameters α and β. In such a plot, the shape parameter estimators correspond to the maxima of the likelihood function. See the accompanying graph that shows that all the likelihood functions intersect at α = β = 1, which corresponds to the values of the shape parameters that give the maximum entropy (the maximum entropy occurs for shape parameters equal to unity: the uniform distribution). It is evident from the plot that the likelihood function gives sharp peaks for values of the shape parameter estimators close to zero, but that for values of the shape parameters estimators greater than one, the likelihood function becomes quite flat, with less defined peaks. Obviously, the maximum likelihood parameter estimation method for the beta distribution becomes less acceptable for larger values of the shape parameter estimators, as the uncertainty in the peak definition increases with the value of the shape parameter estimators. One can arrive at the same conclusion by noticing that the expression for the curvature of the likelihood function is in terms of the geometric variances

These variances (and therefore the curvatures) are much larger for small values of the shape parameters α and β. However, for shape parameter values α, β > 1, the variances (and therefore the curvatures) flatten out. Equivalently, this result follows from the Cramér–Rao bound, since the Fisher information matrix components for the beta distribution are these logarithmic variances. The Cramér–Rao bound states that the variance of any unbiased estimator of α is bounded by the reciprocal of the Fisher information:

so the variance of the estimators increases with increasing α and β, as the logarithmic variances decrease.

Also one can express the joint log likelihood per N iid observations in terms of the digamma function expressions for the logarithms of the sample geometric means as follows:

this expression is identical to the negative of the cross-entropy (see section on "Quantities of information (entropy)"). Therefore, finding the maximum of the joint log likelihood of the shape parameters, per N iid observations, is identical to finding the minimum of the cross-entropy for the beta distribution, as a function of the shape parameters.

with the cross-entropy defined as follows:

Four unknown parameters

The procedure is similar to the one followed in the two unknown parameter case. If Y1, ..., YN are independent random variables each having a beta distribution with four parameters, the joint log likelihood function for N iid observations is:

Finding the maximum with respect to a shape parameter involves taking the partial derivative with respect to the shape parameter and setting the expression equal to zero yielding the maximum likelihood estimator of the shape parameters:

these equations can be re-arranged as the following system of four coupled equations (the first two equations are geometric means and the second two equations are the harmonic means) in terms of the maximum likelihood estimates for the four parameters :

with sample geometric means:

The parameters are embedded inside the geometric mean expressions in a nonlinear way (to the power 1/N). This precludes, in general, a closed form solution, even for an initial value approximation for iteration purposes. One alternative is to use as initial values for iteration the values obtained from the method of moments solution for the four parameter case. Furthermore, the expressions for the harmonic means are well-defined only for α > 1 and β > 1, which precludes a maximum likelihood solution for shape parameters less than unity in the four-parameter case. Fisher's information matrix for the four parameter case is positive-definite only for α, β > 2 (for further discussion, see section on Fisher information matrix, four parameter case), for bell-shaped (symmetric or unsymmetric) beta distributions, with inflection points located to either side of the mode. The following Fisher information components (that represent the expectations of the curvature of the log likelihood function) have singularities at the following values:

(for further discussion see section on Fisher information matrix). Thus, it is not possible to strictly carry out the maximum likelihood estimation for some well known distributions belonging to the four-parameter beta distribution family, like the uniform distribution (Beta(1, 1, a, c)), and the arcsine distribution (Beta(1/2, 1/2, a, c)). N. L. Johnson and S. Kotz[1] ignore the equations for the harmonic means and instead suggest "If a and c are unknown, and maximum likelihood estimators of a, c, α and β are required, the above procedure (for the two unknown parameter case, with X transformed as X = (Y − a)/(c − a)) can be repeated using a succession of trial values of a and c, until the pair (a, c) for which maximum likelihood (given a and c) is as great as possible, is attained" (where, for the purpose of clarity, their notation for the parameters has been translated into the present notation).
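A crude numerical sketch of the Johnson and Kotz suggestion (the grid of trial values for a and c, the margins, and the use of scipy.stats.beta.fit with fixed location and scale are all choices made here for illustration, not part of their procedure):

```python
import numpy as np
from scipy import stats

def four_param_profile_fit(y, n_grid=15, margin=0.25):
    """Profile-likelihood search over trial support bounds (a, c).

    For each trial pair, the shape parameters are fitted by maximum
    likelihood on the data with location and scale held fixed, and the
    pair with the largest log likelihood is returned.
    """
    y = np.asarray(y, dtype=float)
    span = y.max() - y.min()
    a_grid = np.linspace(y.min() - margin * span, y.min() - 1e-6 * span, n_grid)
    c_grid = np.linspace(y.max() + 1e-6 * span, y.max() + margin * span, n_grid)

    best_ll, best_params = -np.inf, None
    for a in a_grid:
        for c in c_grid:
            alpha, beta_, loc, scale = stats.beta.fit(y, floc=a, fscale=c - a)
            ll = stats.beta.logpdf(y, alpha, beta_, loc=loc, scale=scale).sum()
            if ll > best_ll:
                best_ll, best_params = ll, (alpha, beta_, a, c)
    return best_params

rng = np.random.default_rng(3)
data = 2.0 + 3.0 * rng.beta(4.0, 3.0, size=1_000)   # Beta(4, 3) on [2, 5]
print(four_param_profile_fit(data))
```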

Fisher information matrix

Let a random variable X have a probability density f(x;α). The partial derivative with respect to the (unknown, and to be estimated) parameter α of the log likelihood function is called the score. The second moment of the score is called the Fisher information:

The expectation of the score is zero; therefore the Fisher information is also the second moment centered on the mean of the score: the variance of the score.

If the log likelihood function is twice differentiable with respect to the parameter α, and under certain regularity conditions,[49] then the Fisher information may also be written as follows (which is often a more convenient form for calculation purposes):

Thus, the Fisher information is the negative of the expectation of the second derivative with respect to the parameter α of the log likelihood function. Therefore, Fisher information is a measure of the curvature of the log likelihood function of α. A flatter log likelihood curve with low curvature (and therefore high radius of curvature) has low Fisher information, while a log likelihood curve with large curvature (and therefore low radius of curvature) has high Fisher information. When the Fisher information matrix is computed at the estimates of the parameters ("the observed Fisher information matrix") it is equivalent to the replacement of the true log likelihood surface by a Taylor series approximation, taken as far as the quadratic terms.[50] The word information, in the context of Fisher information, refers to information about the parameters: information relevant to estimation, sufficiency and the properties of variances of estimators. The Cramér–Rao bound states that the inverse of the Fisher information is a lower bound on the variance of any estimator of a parameter α:

The precision with which one can estimate a parameter α is limited by the Fisher information of the log likelihood function. The Fisher information is a measure of the minimum error involved in estimating a parameter of a distribution and it can be viewed as a measure of the resolving power of an experiment needed to discriminate between two alternative hypotheses about a parameter.[51]

When there are N parameters

then the Fisher information takes the form of an N×N positive semidefinite symmetric matrix, the Fisher information matrix, with typical element:

Under certain regularity conditions,[49] the Fisher Information Matrix may also be written in the following form, which is often more convenient for computation:

With X1, ..., XN iid random variables, an N-dimensional "box" can be constructed with sides X1, ..., XN. Costa and Cover[52] show that the (Shannon) differential entropy h(X) is related to the volume of the typical set (having the sample entropy close to the true entropy), while the Fisher information is related to the surface of this typical set.

Two parameters

For X1, ..., XN independent random variables each having a beta distribution parametrized with shape parameters α and β, the joint log likelihood function for N iid observations is:

therefore the joint log likelihood function per N iid observations is

For the two parameter case, the Fisher information has 4 components: 2 diagonal and 2 off-diagonal. Since the Fisher information matrix is symmetric, only one of these off-diagonal components is independent. Therefore, the Fisher information matrix has 3 independent components (2 diagonal and 1 off-diagonal).

Aryal and Nadarajah[53] calculated Fisher's information matrix for the four-parameter case, from which the two parameter case can be obtained as follows:

Since the Fisher information matrix is symmetric

The Fisher information components are equal to the log geometric variances and log geometric covariance. Therefore, they can be expressed as trigamma functions, denoted ψ1(α), the second of the polygamma functions, defined as the derivative of the digamma function:

These derivatives are also derived in § Two unknown parameters, and plots of the log likelihood function are also shown in that section. § Geometric variance and covariance contains plots and further discussion of the Fisher information matrix components: the log geometric variances and log geometric covariance as a function of the shape parameters α and β. § Moments of logarithmically transformed random variables contains formulas for moments of logarithmically transformed random variables. Images for the Fisher information components corresponding to the log geometric variances are shown in § Geometric variance.

The determinant of Fisher's information matrix is of interest (for example for the calculation of Jeffreys prior probability). From the expressions for the individual components of the Fisher information matrix, it follows that the determinant of Fisher's (symmetric) information matrix for the beta distribution is:

From Sylvester's criterion (checking whether all the leading principal minors are positive), it follows that the Fisher information matrix for the two parameter case is positive-definite (under the standard condition that the shape parameters are positive, α > 0 and β > 0).
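A short numerical sketch (assuming the trigamma expressions for the log geometric variances, var[ln X] = ψ1(α) − ψ1(α + β) and var[ln(1 − X)] = ψ1(β) − ψ1(α + β), and the log geometric covariance, cov[ln X, ln(1 − X)] = −ψ1(α + β)) that assembles the two-parameter Fisher information matrix and checks its determinant and positive-definiteness:

```python
import numpy as np
from scipy.special import polygamma

def beta_fisher_information(a, b):
    """Per-observation Fisher information matrix of a Beta(a, b) distribution,
    built from the trigamma expressions for the log geometric (co)variances."""
    psi1 = lambda z: polygamma(1, z)          # trigamma function
    var_ln_x   = psi1(a) - psi1(a + b)        # var[ln X]
    var_ln_1mx = psi1(b) - psi1(a + b)        # var[ln(1 - X)]
    cov_ln     = -psi1(a + b)                 # cov[ln X, ln(1 - X)]
    return np.array([[var_ln_x, cov_ln],
                     [cov_ln, var_ln_1mx]])

I = beta_fisher_information(2.0, 3.0)
print(np.linalg.det(I))                    # positive for any a, b > 0
print(np.all(np.linalg.eigvalsh(I) > 0))   # True: positive-definite
```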

Four parameters
Fisher Information I(a,a) for α = β vs range (c − a) and exponent α = β
Fisher Information I(α,a) for α = β, vs. range (c − a) and exponent α = β

If Y1, ..., YN are independent random variables each having a beta distribution with four parameters: the exponents α and β, and also a (the minimum of the distribution range), and c (the maximum of the distribution range) (section titled "Alternative parametrizations", "Four parameters"), with probability density function:

the joint log likelihood function per N iid observations is:

For the four parameter case, the Fisher information has 4 × 4 = 16 components, of which 12 are off-diagonal (16 total − 4 diagonal). Since the Fisher information matrix is symmetric, half of these off-diagonal components (12/2 = 6) are independent. Therefore, the Fisher information matrix has 6 independent off-diagonal + 4 diagonal = 10 independent components. Aryal and Nadarajah[53] calculated Fisher's information matrix for the four parameter case as follows:

In the above expressions, the use of X instead of Y in the expressions var[ln(X)] = ln(varGX) is not an error. The expressions in terms of the log geometric variances and log geometric covariance occur as functions of the two parameter X ~ Beta(α, β) parametrization because when taking the partial derivatives with respect to the exponents (α, β) in the four parameter case, one obtains the identical expressions as for the two parameter case: these terms of the four parameter Fisher information matrix are independent of the minimum a and maximum c of the distribution's range. The only non-zero term upon double differentiation of the log likelihood function with respect to the exponents α and β is the second derivative of the log of the beta function: ln(B(α, β)). This term is independent of the minimum a and maximum c of the distribution's range. Double differentiation of this term results in trigamma functions. The sections titled "Maximum likelihood", "Two unknown parameters" and "Four unknown parameters" also show this fact.

The Fisher information for N i.i.d. samples is N times the individual Fisher information (eq. 11.279, page 394 of Cover and Thomas[27]). (Aryal and Nadarajah[53] take a single observation, N = 1, to calculate the following components of the Fisher information, which leads to the same result as considering the derivatives of the log likelihood per N observations. Moreover, below, the erroneous expression for one of the components in Aryal and Nadarajah has been corrected.)

The lower two diagonal entries of the Fisher information matrix, with respect to the parameter a (the minimum of the distribution's range) and with respect to the parameter c (the maximum of the distribution's range), are only defined for exponents α > 2 and β > 2, respectively. The Fisher information matrix component for the minimum a approaches infinity for exponent α approaching 2 from above, and the Fisher information matrix component for the maximum c approaches infinity for exponent β approaching 2 from above.

The Fisher information matrix for the four parameter case does not depend on the individual values of the minimum a and the maximum c, but only on the total range (c − a). Moreover, the components of the Fisher information matrix that depend on the range (c − a), depend only through its inverse (or the square of the inverse), such that the Fisher information decreases for increasing range (c − a).

The accompanying images show the Fisher information components I(a, a) and I(α, a). Images for the Fisher information components corresponding to the log geometric variances are shown in § Geometric variance. All these Fisher information components look like a basin, with the "walls" of the basin located at low values of the parameters.

The following four-parameter beta distribution Fisher information components can be expressed in terms of the two-parameter X ~ Beta(α, β) expectations of the transformed ratio ((1 − X)/X) and of its mirror image (X/(1 − X)), scaled by the range (c − a), which may be helpful for interpretation:

These are also the expected values of the "inverted beta distribution" or beta prime distribution (also known as beta distribution of the second kind or Pearson's Type VI) [1] and its mirror image, scaled by the range (c − a).

Also, the following Fisher information components can be expressed in terms of the harmonic (1/X) variances or of variances based on the ratio transformed variables ((1-X)/X) as follows:

See section "Moments of linearly transformed, product and inverted random variables" for these expectations.

The determinant of Fisher's information matrix is of interest (for example for the calculation of Jeffreys prior probability). From the expressions for the individual components, it follows that the determinant of Fisher's (symmetric) information matrix for the beta distribution with four parameters is:

Using Sylvester's criterion (checking whether all the leading principal minors are positive), and since the diagonal components with respect to a and c have singularities at α = 2 and β = 2, it follows that the Fisher information matrix for the four parameter case is positive-definite for α > 2 and β > 2. Since for α > 2 and β > 2 the beta distribution is (symmetric or unsymmetric) bell-shaped, it follows that the Fisher information matrix is positive-definite only for bell-shaped (symmetric or unsymmetric) beta distributions, with inflection points located to either side of the mode. Thus, important well known distributions belonging to the four-parameter beta distribution family, like the parabolic distribution (Beta(2,2,a,c)) and the uniform distribution (Beta(1,1,a,c)), have Fisher information components (those with respect to the minimum a and the maximum c) that blow up (approach infinity) in the four-parameter case (although their Fisher information components are all defined for the two parameter case). The four-parameter Wigner semicircle distribution (Beta(3/2,3/2,a,c)) and arcsine distribution (Beta(1/2,1/2,a,c)) have negative Fisher information determinants for the four-parameter case.

Bayesian inference

Beta(1, 1): The uniform distribution probability density was proposed by Thomas Bayes to represent ignorance of prior probabilities in Bayesian inference.

The use of Beta distributions in Bayesian inference is due to the fact that they provide a family of conjugate prior probability distributions for binomial (including Bernoulli) and geometric distributions. The domain of the beta distribution can be viewed as a probability, and in fact the beta distribution is often used to describe the distribution of a probability value p:[23]

Examples of beta distributions used as prior probabilities to represent ignorance of prior parameter values in Bayesian inference are Beta(1,1), Beta(0,0) and Beta(1/2,1/2).

Rule of succession

A classic application of the beta distribution is the rule of succession, introduced in the 18th century by Pierre-Simon Laplace[54] in the course of treating the sunrise problem. It states that, given s successes in n conditionally independent Bernoulli trials with probability p, the estimate of the expected value in the next trial is (s + 1)/(n + 2). This estimate is the expected value of the posterior distribution over p, namely Beta(s + 1, n − s + 1), which is given by Bayes' rule if one assumes a uniform prior probability over p (i.e., Beta(1, 1)) and then observes that p generated s successes in n trials.

Laplace's rule of succession has been criticized by prominent scientists. R. T. Cox described Laplace's application of the rule of succession to the sunrise problem ([55] p. 89) as "a travesty of the proper use of the principle". Keynes remarks ([56] Ch. XXX, p. 382) "indeed this is so foolish a theorem that to entertain it is discreditable". Karl Pearson[57] showed that the probability that the next (n + 1) trials will be successes, after n successes in n trials, is only 50%, which has been considered too low by scientists like Jeffreys and unacceptable as a representation of the scientific process of experimentation to test a proposed scientific law. As pointed out by Jeffreys ([58] p. 128) (crediting C. D. Broad[59]), Laplace's rule of succession establishes a high probability of success ((n + 1)/(n + 2)) in the next trial, but only a moderate probability (50%) that a further sample (n + 1) comparable in size will be equally successful. As pointed out by Perks,[60] "The rule of succession itself is hard to accept. It assigns a probability to the next trial which implies the assumption that the actual run observed is an average run and that we are always at the end of an average run. It would, one would think, be more reasonable to assume that we were in the middle of an average run. Clearly a higher value for both probabilities is necessary if they are to accord with reasonable belief." These problems with Laplace's rule of succession motivated Haldane, Perks, Jeffreys and others to search for other forms of prior probability (see the next § Bayesian inference). According to Jaynes,[51] the main problem with the rule of succession is that it is not valid when s = 0 or s = n (see rule of succession, for an analysis of its validity).

Bayes–Laplace prior probability (Beta(1,1))

The beta distribution achieves maximum differential entropy for Beta(1,1): the uniform probability density, for which all values in the domain of the distribution have equal density. This uniform distribution Beta(1,1) was suggested ("with a great deal of doubt") by Thomas Bayes[61] as the prior probability distribution to express ignorance about the correct prior distribution. This prior distribution was adopted (apparently, from his writings, with little sign of doubt[54]) by Pierre-Simon Laplace, and hence it was also known as the "Bayes–Laplace rule" or the "Laplace rule" of "inverse probability" in publications of the first half of the 20th century. In the later part of the 19th century and early part of the 20th century, scientists realized that the assumption of uniform "equal" probability density depended on the actual functions (for example whether a linear or a logarithmic scale was most appropriate) and parametrizations used. In particular, the behavior near the ends of distributions with finite support (for example near x = 0, for a distribution with initial support at x = 0) required particular attention. Keynes ([56] Ch.XXX, p. 381) criticized the use of Bayes's uniform prior probability (Beta(1,1)) that all values between zero and one are equiprobable, as follows: "Thus experience, if it shows anything, shows that there is a very marked clustering of statistical ratios in the neighborhoods of zero and unity, of those for positive theories and for correlations between positive qualities in the neighborhood of zero, and of those for negative theories and for correlations between negative qualities in the neighborhood of unity. "

Haldane's prior probability (Beta(0,0))

Beta(0, 0): The Haldane prior probability expressing total ignorance about prior information, where we are not even sure whether it is physically possible for an experiment to yield either a success or a failure. As α, β → 0, the beta distribution approaches a two-point Bernoulli distribution with all probability density concentrated at each end, at 0 and 1, and nothing in between. A coin-toss: one face of the coin being at 0 and the other face being at 1.

The Beta(0,0) distribution was proposed by J. B. S. Haldane,[62] who suggested that the prior probability representing complete uncertainty should be proportional to p^(−1)(1 − p)^(−1). The function p^(−1)(1 − p)^(−1) can be viewed as the limit of the numerator of the beta distribution as both shape parameters approach zero: α, β → 0. The beta function (in the denominator of the beta distribution) approaches infinity as both parameters approach zero, α, β → 0. Therefore, p^(−1)(1 − p)^(−1) divided by the beta function approaches a 2-point Bernoulli distribution with equal probability 1/2 at each end, at 0 and 1, and nothing in between, as α, β → 0. A coin-toss: one face of the coin being at 0 and the other face being at 1. The Haldane prior probability distribution Beta(0,0) is an "improper prior" because its integral over [0, 1] diverges due to the singularities at each end. However, this is not an issue for computing posterior probabilities unless the sample size is very small. Furthermore, Zellner[63] points out that on the log-odds scale (the logit transformation ln(p/(1 − p))), the Haldane prior is the uniformly flat prior. The fact that a uniform prior probability on the logit-transformed variable ln(p/(1 − p)) (with domain (−∞, ∞)) is equivalent to the Haldane prior on the domain [0, 1] was pointed out by Harold Jeffreys in the first edition (1939) of his book Theory of Probability ([58] p. 123). Jeffreys writes "Certainly if we take the Bayes–Laplace rule right up to the extremes we are led to results that do not correspond to anybody's way of thinking. The (Haldane) rule dx/(x(1 − x)) goes too far the other way. It would lead to the conclusion that if a sample is of one type with respect to some property there is a probability 1 that the whole population is of that type." The fact that "uniform" depends on the parametrization led Jeffreys to seek a form of prior that would be invariant under different parametrizations.

Jeffreys' prior probability (Beta(1/2,1/2) for a Bernoulli or for a binomial distribution)

Jeffreys prior probability for the beta distribution: the square root of the determinant of Fisher's information matrix: is a function of the trigamma function ψ1 of shape parameters α, β
Posterior Beta densities with samples having success = "s", failure = "f" of s/(s + f) = 1/2, and s + f ∈ {3,10,50}, based on 3 different prior probability functions: Haldane (Beta(0,0)), Jeffreys (Beta(1/2,1/2)) and Bayes (Beta(1,1)). The image shows that there is little difference between the priors for the posterior with sample size of 50 (with more pronounced peak near p = 1/2). Significant differences appear for very small sample sizes (the flatter distribution for sample size of 3).
Posterior Beta densities with samples having success = "s", failure = "f" of s/(s + f) = 1/4, and s + f ∈ {3,10,50}, based on three different prior probability functions: Haldane (Beta(0,0)), Jeffreys (Beta(1/2,1/2)) and Bayes (Beta(1,1)). The image shows that there is little difference between the priors for the posterior with sample size of 50 (with more pronounced peak near p = 1/4). Significant differences appear for very small sample sizes (the very skewed distribution for the degenerate case of sample size = 3; in this degenerate and unlikely case the Haldane prior results in a reverse "J" shape with mode at p = 0 instead of p = 1/4). If there is sufficient sampling data, the three priors of Bayes (Beta(1,1)), Jeffreys (Beta(1/2,1/2)) and Haldane (Beta(0,0)) should yield similar posterior probability densities.
Posterior Beta densities with samples having success = s, failure = f of s/(s + f) = 1/4, and s + f ∈ {4,12,40}, based on three different prior probability functions: Haldane (Beta(0,0)), Jeffreys (Beta(1/2,1/2)) and Bayes (Beta(1,1)). The image shows that there is little difference between the priors for the posterior with sample size of 40 (with more pronounced peak near p = 1/4). Significant differences appear for very small sample sizes.

Harold Jeffreys[58][64] proposed to use an uninformative prior probability measure that should be invariant under reparameterization: proportional to the square root of the determinant of Fisher's information matrix. For the Bernoulli distribution, this can be shown as follows: for a coin that is "heads" with probability p ∈ [0, 1] and is "tails" with probability 1 − p, for a given (H,T) ∈ {(0,1), (1,0)} the probability is p^H(1 − p)^T. Since T = 1 − H, the Bernoulli distribution is p^H(1 − p)^(1 − H). Considering p as the only parameter, it follows that the log likelihood for the Bernoulli distribution is

The Fisher information matrix has only one component (it is a scalar, because there is only one parameter: p), therefore:

Similarly, for the Binomial distribution with n Bernoulli trials, it can be shown that

Thus, for the Bernoulli and Binomial distributions, Jeffreys prior is proportional to 1/√(p(1 − p)), which happens to be proportional to a beta distribution with domain variable x = p and shape parameters α = β = 1/2, the arcsine distribution:

It will be shown in the next section that the normalizing constant for Jeffreys prior is immaterial to the final result because the normalizing constant cancels out in Bayes theorem for the posterior probability. Hence Beta(1/2,1/2) is used as the Jeffreys prior for both Bernoulli and binomial distributions. As shown in the next section, when using this expression as a prior probability times the likelihood in Bayes theorem, the posterior probability turns out to be a beta distribution. It is important to realize, however, that Jeffreys prior is proportional to 1/√(p(1 − p)) for the Bernoulli and binomial distributions, but not for the beta distribution. Jeffreys prior for the beta distribution is given by the square root of the determinant of Fisher's information for the beta distribution, which, as shown in § Fisher information matrix, is a function of the trigamma function ψ1 of the shape parameters α and β as follows:

As previously discussed, Jeffreys prior for the Bernoulli and binomial distributions is proportional to the arcsine distribution Beta(1/2,1/2), a one-dimensional curve that looks like a basin as a function of the parameter p of the Bernoulli and binomial distributions. The walls of the basin are formed by p approaching the singularities at the ends p → 0 and p → 1, where Beta(1/2,1/2) approaches infinity. Jeffreys prior for the beta distribution is a 2-dimensional surface (embedded in a three-dimensional space) that looks like a basin with only two of its walls meeting at the corner α = β = 0 (and missing the other two walls) as a function of the shape parameters α and β of the beta distribution. The two adjoining walls of this 2-dimensional surface are formed by the shape parameters α and β approaching the singularities (of the trigamma function) at α, β → 0. It has no walls for α, β → ∞ because in this case the determinant of Fisher's information matrix for the beta distribution approaches zero.
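To make the Bernoulli case concrete, the following sketch (illustrative only) computes the Fisher information 1/(p(1 − p)) of a single Bernoulli trial and verifies numerically that its square root is the unnormalized Beta(1/2, 1/2) (arcsine) density:

```python
import numpy as np
from scipy import stats
from scipy.special import beta as beta_fn

p = np.linspace(0.01, 0.99, 99)

# Fisher information of a single Bernoulli trial with success probability p.
fisher_info = 1.0 / (p * (1.0 - p))

# Jeffreys prior: proportional to sqrt(Fisher information) = p^(-1/2) (1-p)^(-1/2),
# i.e. the Beta(1/2, 1/2) density up to its normalizing constant B(1/2, 1/2) = pi.
jeffreys_unnormalized = np.sqrt(fisher_info)
arcsine_pdf = stats.beta.pdf(p, 0.5, 0.5)

print(np.allclose(jeffreys_unnormalized, beta_fn(0.5, 0.5) * arcsine_pdf))  # True
```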

It will be shown in the next section that Jeffreys prior probability results in posterior probabilities (when multiplied by the binomial likelihood function) that are intermediate between the posterior probability results of the Haldane and Bayes prior probabilities.

Jeffreys prior may be difficult to obtain analytically, and for some cases it simply does not exist (even for simple distribution functions like the asymmetric triangular distribution). Berger, Bernardo and Sun, in a 2009 paper,[65] defined a reference prior probability distribution that (unlike Jeffreys prior) exists for the asymmetric triangular distribution. They cannot obtain a closed-form expression for their reference prior, but numerical calculations show it to be nearly perfectly fitted by the (proper) prior

where θ is the vertex variable for the asymmetric triangular distribution with support [0, 1] (corresponding to the following parameter values in Wikipedia's article on the triangular distribution: vertex c = θ, left end a = 0, and right end b = 1). Berger et al. also give a heuristic argument that Beta(1/2,1/2) could indeed be the exact Berger–Bernardo–Sun reference prior for the asymmetric triangular distribution. Therefore, Beta(1/2,1/2) not only is Jeffreys prior for the Bernoulli and binomial distributions, but also seems to be the Berger–Bernardo–Sun reference prior for the asymmetric triangular distribution (for which the Jeffreys prior does not exist), a distribution used in project management and PERT analysis to describe the cost and duration of project tasks.

Clarke and Barron[66] prove that, among continuous positive priors, Jeffreys prior (when it exists) asymptotically maximizes Shannon's mutual information between a sample of size n and the parameter, and therefore Jeffreys prior is the most uninformative prior (measuring information as Shannon information). The proof rests on an examination of the Kullback–Leibler divergence between probability density functions for iid random variables.

Effect of different prior probability choices on the posterior beta distribution

If samples are drawn from the population of a random variable X that result in s successes and f failures in n Bernoulli trials n = s + f, then the likelihood function for parameters s and f given x = p (the notation x = p in the expressions below will emphasize that the domain x stands for the value of the parameter p in the binomial distribution), is the following binomial distribution:

If beliefs about prior probability information are reasonably well approximated by a beta distribution with parameters α Prior and β Prior, then:

According to Bayes' theorem for a continuous event space, the posterior probability density is given by the product of the prior probability and the likelihood function (given the evidence s and f = n − s), normalized so that the area under the curve equals one, as follows:

The binomial coefficient

appears both in the numerator and the denominator of the posterior probability, and it does not depend on the integration variable x, hence it cancels out, and it is irrelevant to the final result. Similarly the normalizing factor for the prior probability, the beta function B(αPrior,βPrior) cancels out and it is immaterial to the final result. The same posterior probability result can be obtained if one uses an un-normalized prior

because the normalizing factors all cancel out. Several authors (including Jeffreys himself) thus use an un-normalized prior formula since the normalization constant cancels out. The numerator of the posterior probability ends up being just the (un-normalized) product of the prior probability and the likelihood function, and the denominator is its integral from zero to one. The beta function in the denominator, B(s + α Prior, n − s + β Prior), appears as a normalization constant to ensure that the total posterior probability integrates to unity.

The ratio s/n of the number of successes to the total number of trials is a sufficient statistic in the binomial case, which is relevant for the following results.

For the Bayes' prior probability (Beta(1,1)), the posterior probability is:

For the Jeffreys' prior probability (Beta(1/2,1/2)), the posterior probability is:

and for the Haldane prior probability (Beta(0,0)), the posterior probability is:

From the above expressions it follows that for s/n = 1/2 all three prior probabilities result in the identical location for the posterior probability mean = mode = 1/2. For s/n < 1/2, the means of the posterior probabilities, using these priors, are such that: mean for Bayes prior > mean for Jeffreys prior > mean for Haldane prior. For s/n > 1/2 the order of these inequalities is reversed, such that the Haldane prior probability results in the largest posterior mean. The Haldane prior probability Beta(0,0) results in a posterior probability density with mean (the expected value for the probability of success in the "next" trial) identical to the ratio s/n of the number of successes to the total number of trials. Therefore, the Haldane prior results in a posterior probability with expected value in the next trial equal to the maximum likelihood. The Bayes prior probability Beta(1,1) results in a posterior probability density with mode identical to the ratio s/n (the maximum likelihood).
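A small sketch (function and variable names are illustrative) that tabulates the mean of the conjugate posterior Beta(s + αPrior, n − s + βPrior) under the three priors, reproducing the ordering just described for s/n < 1/2:

```python
from fractions import Fraction

def posterior_mean(s, n, a_prior, b_prior):
    """Mean of the Beta(s + a_prior, n - s + b_prior) posterior."""
    return Fraction(s + a_prior) / Fraction(n + a_prior + b_prior)

priors = {"Haldane Beta(0,0)": (0, 0),
          "Jeffreys Beta(1/2,1/2)": (Fraction(1, 2), Fraction(1, 2)),
          "Bayes Beta(1,1)": (1, 1)}

s, n = 3, 10   # s/n < 1/2, so: Bayes mean > Jeffreys mean > Haldane mean
for name, (a, b) in priors.items():
    print(name, posterior_mean(s, n, a, b))
```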

In the case that 100% of the trials have been successful s = n, the Bayes prior probability Beta(1,1) results in a posterior expected value equal to the rule of succession (n + 1)/(n + 2), while the Haldane prior Beta(0,0) results in a posterior expected value of 1 (absolute certainty of success in the next trial). Jeffreys prior probability results in a posterior expected value equal to (n + 1/2)/(n + 1). Perks[60] (p. 303) points out: "This provides a new rule of succession and expresses a 'reasonable' position to take up, namely, that after an unbroken run of n successes we assume a probability for the next trial equivalent to the assumption that we are about half-way through an average run, i.e. that we expect a failure once in (2n + 2) trials. The Bayes–Laplace rule implies that we are about at the end of an average run or that we expect a failure once in (n + 2) trials. The comparison clearly favours the new result (what is now called Jeffreys prior) from the point of view of 'reasonableness'."

Conversely, in the case that 100% of the trials have resulted in failure (s = 0), the Bayes prior probability Beta(1,1) results in a posterior expected value for success in the next trial equal to 1/(n + 2), while the Haldane prior Beta(0,0) results in a posterior expected value of success in the next trial of 0 (absolute certainty of failure in the next trial). Jeffreys prior probability results in a posterior expected value for success in the next trial equal to (1/2)/(n + 1), which, as Perks[60] (p. 303) points out, "is a much more reasonably remote result than the Bayes–Laplace result 1/(n + 2)".

Jaynes[51] questions (for the uniform prior Beta(1,1)) the use of these formulas for the cases s = 0 or s = n because the integrals do not converge (Beta(1,1) is an improper prior for s = 0 or s = n). In practice, the conditions 0<s<n necessary for a mode to exist between both ends for the Bayes prior are usually met, and therefore the Bayes prior (as long as 0 < s < n) results in a posterior mode located between both ends of the domain.

As remarked in the section on the rule of succession, K. Pearson showed that after n successes in n trials the posterior probability (based on the Bayes Beta(1,1) distribution as the prior probability) that the next (n + 1) trials will all be successes is exactly 1/2, whatever the value of n. Based on the Haldane Beta(0,0) distribution as the prior probability, this posterior probability is 1 (absolute certainty that after n successes in n trials the next (n + 1) trials will all be successes). Perks[60] (p. 303) shows that, for what is now known as the Jeffreys prior, this probability is ((n + 1/2)/(n + 1))((n + 3/2)/(n + 2))...((2n + 1/2)/(2n + 1)), which for n = 1, 2, 3 gives 15/24, 315/480, 9009/13440; rapidly approaching a limiting value of 1/√2 ≈ 0.7071 as n tends to infinity. Perks remarks that what is now known as the Jeffreys prior: "is clearly more 'reasonable' than either the Bayes–Laplace result or the result on the (Haldane) alternative rule rejected by Jeffreys which gives certainty as the probability. It clearly provides a very much better correspondence with the process of induction. Whether it is 'absolutely' reasonable for the purpose, i.e. whether it is yet large enough, without the absurdity of reaching unity, is a matter for others to decide. But it must be realized that the result depends on the assumption of complete indifference and absence of knowledge prior to the sampling experiment."
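A quick numerical check of Perks's product (a sketch; only the product expression quoted above is used, and the function name is illustrative):

```python
from fractions import Fraction
import math

def perks_probability(n):
    """Probability, under the Jeffreys prior, that the next n + 1 trials all
    succeed after n successes in n trials: product over k = n..2n of
    (k + 1/2) / (k + 1)."""
    prob = Fraction(1)
    for k in range(n, 2 * n + 1):
        prob *= Fraction(2 * k + 1, 2 * (k + 1))
    return prob

for n in (1, 2, 3):
    print(n, perks_probability(n))   # 5/8, 21/32, 429/640 (= 15/24, 315/480, 9009/13440)

print(float(perks_probability(200)), 1 / math.sqrt(2))   # both close to 0.707
```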

Following are the variances of the posterior distribution obtained with these three prior probability distributions:

for the Bayes' prior probability (Beta(1,1)), the posterior variance is:

for the Jeffreys' prior probability (Beta(1/2,1/2)), the posterior variance is:

and for the Haldane prior probability (Beta(0,0)), the posterior variance is:

So, as remarked by Silvey,[49] for large n, the variance is small and hence the posterior distribution is highly concentrated, whereas the assumed prior distribution was very diffuse. This is in accord with what one would hope for, as vague prior knowledge is transformed (through Bayes theorem) into a more precise posterior knowledge by an informative experiment. For small n the Haldane Beta(0,0) prior results in the largest posterior variance while the Bayes Beta(1,1) prior results in the more concentrated posterior. Jeffreys prior Beta(1/2,1/2) results in a posterior variance in between the other two. As n increases, the variance rapidly decreases so that the posterior variance for all three priors converges to approximately the same value (approaching zero variance as n → ∞). Recalling the previous result that the Haldane prior probability Beta(0,0) results in a posterior probability density with mean (the expected value for the probability of success in the "next" trial) identical to the ratio s/n of the number of successes to the total number of trials, it follows from the above expression that also the Haldane prior Beta(0,0) results in a posterior with variance identical to the variance expressed in terms of the max. likelihood estimate s/n and sample size (in § Variance):

with the mean μ = s/n and the sample size ν = n.

In Bayesian inference, using a Beta(αPrior, βPrior) prior distribution with a binomial likelihood is equivalent to adding (αPrior − 1) pseudo-observations of "success" and (βPrior − 1) pseudo-observations of "failure" to the actual number of successes and failures observed, then estimating the parameter p of the binomial distribution by the proportion of successes over both real and pseudo-observations. A uniform prior Beta(1,1) does not add (or subtract) any pseudo-observations since for Beta(1,1) it follows that (αPrior − 1) = 0 and (βPrior − 1) = 0. The Haldane prior Beta(0,0) subtracts one pseudo-observation from each, and Jeffreys prior Beta(1/2,1/2) subtracts 1/2 pseudo-observation of success and an equal number of failures. This subtraction has the effect of smoothing out the posterior distribution. If the proportion of successes is not 50% (s/n ≠ 1/2), values of αPrior and βPrior less than 1 (and therefore negative (αPrior − 1) and (βPrior − 1)) favor sparsity, i.e. distributions where the parameter p is closer to either 0 or 1. In effect, values of αPrior and βPrior between 0 and 1, when operating together, function as a concentration parameter.

The accompanying plots show the posterior probability density functions for sample sizes n ∈ {3,10,50}, successes s ∈ {n/2,n/4} and Beta(αPrior,βPrior) ∈ {Beta(0,0),Beta(1/2,1/2),Beta(1,1)}. Also shown are the cases for n = {4,12,40}, success s = {n/4} and Beta(αPrior,βPrior) ∈ {Beta(0,0),Beta(1/2,1/2),Beta(1,1)}. The first plot shows the symmetric cases, for successes s ∈ {n/2}, with mean = mode = 1/2 and the second plot shows the skewed cases s ∈ {n/4}. The images show that there is little difference between the priors for the posterior with sample size of 50 (characterized by a more pronounced peak near p = 1/2). Significant differences appear for very small sample sizes (in particular for the flatter distribution for the degenerate case of sample size = 3). Therefore, the skewed cases, with successes s = {n/4}, show a larger effect from the choice of prior, at small sample size, than the symmetric cases. For symmetric distributions, the Bayes prior Beta(1,1) results in the most "peaky" and highest posterior distributions and the Haldane prior Beta(0,0) results in the flattest and lowest peak distribution. The Jeffreys prior Beta(1/2,1/2) lies in between them. For nearly symmetric, not too skewed distributions the effect of the priors is similar. For very small sample size (in this case for a sample size of 3) and skewed distribution (in this example for s ∈ {n/4}) the Haldane prior can result in a reverse-J-shaped distribution with a singularity at the left end. However, this happens only in degenerate cases (in this example n = 3 and hence s = 3/4 < 1, a degenerate value because s should be greater than unity in order for the posterior of the Haldane prior to have a mode located between the ends, and because s = 3/4 is not an integer number, hence it violates the initial assumption of a binomial distribution for the likelihood) and it is not an issue in generic cases of reasonable sample size (such that the condition 1 < s < n − 1, necessary for a mode to exist between both ends, is fulfilled).

In Chapter 12 (p. 385) of his book, Jaynes[51] asserts that the Haldane prior Beta(0,0) describes a prior state of knowledge of complete ignorance, where we are not even sure whether it is physically possible for an experiment to yield either a success or a failure, while the Bayes (uniform) prior Beta(1,1) applies if one knows that both binary outcomes are possible. Jaynes states: "interpret the Bayes–Laplace (Beta(1,1)) prior as describing not a state of complete ignorance, but the state of knowledge in which we have observed one success and one failure...once we have seen at least one success and one failure, then we know that the experiment is a true binary one, in the sense of physical possibility." Jaynes [51] does not specifically discuss Jeffreys prior Beta(1/2,1/2) (Jaynes discussion of "Jeffreys prior" on pp. 181, 423 and on chapter 12 of Jaynes book[51] refers instead to the improper, un-normalized, prior "1/p dp" introduced by Jeffreys in the 1939 edition of his book,[58] seven years before he introduced what is now known as Jeffreys' invariant prior: the square root of the determinant of Fisher's information matrix. "1/p" is Jeffreys' (1946) invariant prior for the exponential distribution, not for the Bernoulli or binomial distributions). However, it follows from the above discussion that Jeffreys Beta(1/2,1/2) prior represents a state of knowledge in between the Haldane Beta(0,0) and Bayes Beta (1,1) prior.

Similarly, Karl Pearson in his 1892 book The Grammar of Science[67][68] (p. 144 of 1900 edition) maintained that the Bayes (Beta(1,1)) uniform prior was not a complete ignorance prior, and that it should be used when prior information justified "distributing our ignorance equally". K. Pearson wrote: "Yet the only supposition that we appear to have made is this: that, knowing nothing of nature, routine and anomy (from the Greek ανομία, namely: a- "without", and nomos "law") are to be considered as equally likely to occur. Now we were not really justified in making even this assumption, for it involves a knowledge that we do not possess regarding nature. We use our experience of the constitution and action of coins in general to assert that heads and tails are equally probable, but we have no right to assert before experience that, as we know nothing of nature, routine and breach are equally probable. In our ignorance we ought to consider before experience that nature may consist of all routines, all anomies (normlessness), or a mixture of the two in any proportion whatever, and that all such are equally probable. Which of these constitutions after experience is the most probable must clearly depend on what that experience has been like."

If there is sufficient sampling data, and the posterior probability mode is not located at one of the extremes of the domain (x = 0 or x = 1), the three priors of Bayes (Beta(1,1)), Jeffreys (Beta(1/2,1/2)) and Haldane (Beta(0,0)) should yield similar posterior probability densities. Otherwise, as Gelman et al.[69] (p. 65) point out, "if so few data are available that the choice of noninformative prior distribution makes a difference, one should put relevant information into the prior distribution", or as Berger[4] (p. 125) notes, "when different reasonable priors yield substantially different answers, can it be right to state that there is a single answer? Would it not be better to admit that there is scientific uncertainty, with the conclusion depending on prior beliefs?"

Occurrence and applications

Order statistics

The beta distribution has an important application in the theory of order statistics. A basic result is that the distribution of the kth smallest of a sample of size n from a continuous uniform distribution has a beta distribution.[38] This result is summarized as:

U_(k) ~ Beta(k, n + 1 − k),

where U_(k) denotes the kth order statistic of a sample of n independent standard uniform variates.

From this, and application of the theory related to the probability integral transform, the distribution of any individual order statistic from any continuous distribution can be derived.[38]
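
As an illustration of this result (a minimal simulation sketch assuming numpy and scipy are available; it is not taken from the cited source), one can check empirically that the kth smallest of n uniform variates follows Beta(k, n + 1 − k):

```python
# Sketch: empirical check that the kth order statistic of n Uniform(0,1)
# samples follows Beta(k, n + 1 - k). Assumes numpy and scipy are available.
import numpy as np
from scipy.stats import beta, kstest

rng = np.random.default_rng(0)
n, k = 10, 3                                   # sample size and order-statistic index
u = np.sort(rng.uniform(size=(100_000, n)), axis=1)
kth_smallest = u[:, k - 1]                     # kth smallest value in each row

# Compare the empirical distribution with Beta(k, n + 1 - k)
stat, p_value = kstest(kth_smallest, beta(k, n + 1 - k).cdf)
print(f"KS statistic = {stat:.4f}, p-value = {p_value:.3f}")
```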

Subjective logic

In standard logic, propositions are considered to be either true or false. In contradistinction, subjective logic assumes that humans cannot determine with absolute certainty whether a proposition about the real world is absolutely true or false. In subjective logic, the a posteriori probability estimates of binary events can be represented by beta distributions.[70]

Wavelet analysis

A wavelet is a wave-like oscillation with an amplitude that starts out at zero, increases, and then decreases back to zero. It can typically be visualized as a "brief oscillation" that promptly decays. Wavelets can be used to extract information from many different kinds of data, including – but certainly not limited to – audio signals and images. Thus, wavelets are purposefully crafted to have specific properties that make them useful for signal processing. Wavelets are localized in both time and frequency whereas the standard Fourier transform is only localized in frequency. Therefore, standard Fourier Transforms are only applicable to stationary processes, while wavelets are applicable to non-stationary processes. Continuous wavelets can be constructed based on the beta distribution. Beta wavelets[71] can be viewed as a soft variety of Haar wavelets whose shape is fine-tuned by two shape parameters α and β.

Population genetics

The Balding–Nichols model is a two-parameter parametrization of the beta distribution used in population genetics.[72] It is a statistical description of the allele frequencies in the components of a sub-divided population:

α = μν,  β = (1 − μ)ν,

where ν = α + β = (1 − F)/F and μ is the mean allele frequency; here F is (Wright's) genetic distance between two populations.
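
A small helper (a sketch under the parametrization written above; the function name and the example values are illustrative) converts the Balding–Nichols parameters (F, μ) into the usual beta shape parameters:

```python
# Sketch: convert Balding-Nichols parameters (F, mu) to beta shape parameters,
# under the parametrization alpha = mu*(1 - F)/F, beta = (1 - mu)*(1 - F)/F.
def balding_nichols_to_beta(F: float, mu: float) -> tuple[float, float]:
    if not (0.0 < F < 1.0 and 0.0 < mu < 1.0):
        raise ValueError("F and mu must lie strictly between 0 and 1")
    nu = (1.0 - F) / F          # "sample size" nu = alpha + beta
    return mu * nu, (1.0 - mu) * nu

# Example: F = 0.05, mean allele frequency mu = 0.3
print(balding_nichols_to_beta(0.05, 0.3))   # approximately (5.7, 13.3)
```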

Project management: task cost and schedule modeling

The beta distribution can be used to model events which are constrained to take place within an interval defined by a minimum and maximum value. For this reason, the beta distribution — along with the triangular distribution — is used extensively in PERT, critical path method (CPM), Joint Cost Schedule Modeling (JCSM) and other project management/control systems to describe the time to completion and the cost of a task. In project management, shorthand computations are widely used to estimate the mean and standard deviation of the beta distribution:[37]

μ(X) ≈ (a + 4b + c)/6
σ(X) ≈ (c − a)/6

where a is the minimum, c is the maximum, and b is the most likely value (the mode for α > 1 and β > 1).

The above estimate for the mean is known as the PERT three-point estimation and it is exact for either of the following values of β (for arbitrary α within these ranges):

β = α > 1 (symmetric case) with standard deviation σ(X) = (c − a)/(2√(2α + 1)), skewness = 0, and excess kurtosis = −6/(3 + 2α)

or

β = 6 − α for 5 > α > 1 (skewed case) with standard deviation σ(X) = (c − a)√(α(6 − α))/(6√7), skewness = (3 − α)√7/(2√(α(6 − α))), and excess kurtosis = 21/(α(6 − α)) − 3

The above estimate for the standard deviation σ(X) = (ca)/6 is exact for either of the following values of α and β:

α = β = 4 (symmetric) with skewness = 0, and excess kurtosis = −6/11.
β = 6 − α and α = 3 − √2 (right-tailed, positive skew) with skewness = 1/√2, and excess kurtosis = 0
β = 6 − α and α = 3 + √2 (left-tailed, negative skew) with skewness = −1/√2, and excess kurtosis = 0

Otherwise, these can be poor approximations for beta distributions with other values of α and β, exhibiting average errors of 40% in the mean and 549% in the variance.[73][74][75]
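
The shorthand rules above are easy to check numerically. The sketch below (a rough illustration; the helper names are illustrative, and α = β = 4 is chosen because both rules are exact there) compares the PERT estimates with the exact mean and standard deviation of a beta distribution rescaled to [a, c]:

```python
# Sketch: compare the PERT shorthand estimates with the exact moments of a
# beta distribution rescaled to [a, c]. Helper names are illustrative.
import math

def pert_estimates(a: float, b: float, c: float) -> tuple[float, float]:
    """PERT three-point estimates of the mean and standard deviation."""
    return (a + 4 * b + c) / 6, (c - a) / 6

def exact_moments(alpha: float, beta_: float, a: float, c: float):
    """Exact mean and standard deviation of a beta distribution on [a, c]."""
    mean01 = alpha / (alpha + beta_)
    var01 = alpha * beta_ / ((alpha + beta_) ** 2 * (alpha + beta_ + 1))
    return a + (c - a) * mean01, (c - a) * math.sqrt(var01)

alpha, beta_, a, c = 4, 4, 10.0, 40.0           # alpha = beta = 4: both rules are exact
mode = a + (c - a) * (alpha - 1) / (alpha + beta_ - 2)
print(pert_estimates(a, mode, c))               # (25.0, 5.0)
print(exact_moments(alpha, beta_, a, c))        # (25.0, 5.0)
```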

Random variate generation

If X and Y are independent, with X ~ Γ(α, θ) and Y ~ Γ(β, θ), then X/(X + Y) ~ Beta(α, β).

So one algorithm for generating beta variates is to generate X/(X + Y), where X is a gamma variate with parameters (α, 1) and Y is an independent gamma variate with parameters (β, 1).[76] In fact, here X/(X + Y) and X + Y are independent, and X + Y ~ Γ(α + β, θ). If Z ~ Γ(γ, θ) is independent of X and Y, then (X + Y)/(X + Y + Z) ~ Beta(α + β, γ) and (X + Y)/(X + Y + Z) is independent of X/(X + Y). This shows that the product of independent Beta(α, β) and Beta(α + β, γ) random variables is a Beta(α, β + γ) random variable.
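
A minimal sketch of this gamma-ratio algorithm (assuming numpy is available; the helper name is illustrative):

```python
# Sketch: generate Beta(alpha, beta) variates as X/(X + Y) with independent
# gamma variates X ~ Gamma(alpha, 1) and Y ~ Gamma(beta, 1). Assumes numpy.
import numpy as np

def beta_from_gammas(alpha: float, beta: float, size: int, rng=None) -> np.ndarray:
    rng = rng or np.random.default_rng()
    x = rng.gamma(shape=alpha, scale=1.0, size=size)
    y = rng.gamma(shape=beta, scale=1.0, size=size)
    return x / (x + y)

samples = beta_from_gammas(2.0, 5.0, size=100_000, rng=np.random.default_rng(1))
print(samples.mean())   # should be close to 2/(2 + 5) = 0.2857...
```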

Also, the kth order statistic of n uniformly distributed variates is Beta(k, n + 1 − k), so an alternative if α and β are small integers is to generate α + β − 1 uniform variates and choose the α-th smallest.[38]

Another way to generate beta variates is via the Pólya urn model. In this scheme, one starts with an "urn" containing α "black" balls and β "white" balls and draws uniformly with replacement; after each draw, an additional ball of the same color as the ball just drawn is added to the urn. Asymptotically, the proportion of black balls in the urn is distributed according to Beta(α, β), and each repetition of the experiment produces a different value.
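
A small simulation sketch of this urn scheme (assuming integer α and β and using numpy only for the random draws; the function name is illustrative):

```python
# Sketch: Polya urn simulation. Start with alpha black and beta white balls;
# each draw returns the ball and adds one more of the same color. The final
# fraction of black balls approximates one Beta(alpha, beta) variate.
import numpy as np

def polya_urn_fraction(alpha: int, beta: int, draws: int, rng=None) -> float:
    rng = rng or np.random.default_rng()
    black, white = alpha, beta
    for _ in range(draws):
        if rng.random() < black / (black + white):   # drew a black ball
            black += 1                               # add another black ball
        else:
            white += 1                               # add another white ball
    return black / (black + white)

rng = np.random.default_rng(2)
values = [polya_urn_fraction(2, 5, draws=5_000, rng=rng) for _ in range(1_000)]
print(np.mean(values))   # should be close to 2/(2 + 5) = 0.2857...
```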

It is also possible to use inverse transform sampling.

Normal approximation to the Beta distribution

A beta distribution with α ~ β and α, β >> 1 is approximately normal with mean 1/2 and variance 1/(4(2α + 1)). If α ≥ β, the normal approximation can be improved by taking the cube root of the logarithm of the reciprocal of X.[77]
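
As a rough numerical check of the symmetric case (a sketch assuming numpy and scipy are available; the parameter value and test points are arbitrary choices):

```python
# Sketch: compare a symmetric Beta(alpha, alpha) with alpha large against the
# normal approximation N(1/2, 1/(4*(2*alpha + 1))). Assumes numpy and scipy.
import numpy as np
from scipy.stats import beta, norm

alpha = 50
approx = norm(loc=0.5, scale=np.sqrt(1.0 / (4 * (2 * alpha + 1))))
x = np.linspace(0.3, 0.7, 9)
max_abs_err = np.max(np.abs(beta.cdf(x, alpha, alpha) - approx.cdf(x)))
print(f"max |CDF difference| over [0.3, 0.7]: {max_abs_err:.4f}")  # small for large alpha
```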

History

Thomas Bayes, in a posthumous paper [61] published in 1763 by Richard Price, obtained a beta distribution as the density of the probability of success in Bernoulli trials (see § Applications, Bayesian inference), but the paper does not analyze any of the moments of the beta distribution or discuss any of its properties.

Karl Pearson analyzed the beta distribution as Type I of the Pearson distributions

The first systematic modern discussion of the beta distribution is probably due to Karl Pearson.[78][79] In Pearson's papers[20][32] the beta distribution is couched as a solution of a differential equation: Pearson's Type I distribution, to which it is essentially identical except for arbitrary shifting and re-scaling (the beta and Pearson Type I distributions can always be equalized by a proper choice of parameters). In fact, in several English books and journal articles in the few decades prior to World War II, it was common to refer to the beta distribution as Pearson's Type I distribution. William P. Elderton, in his 1906 monograph "Frequency curves and correlation",[41] further analyzes the beta distribution as Pearson's Type I distribution, including a full discussion of the method of moments for the four-parameter case, and diagrams of (what Elderton describes as) U-shaped, J-shaped, twisted J-shaped, "cocked-hat" shapes, horizontal and angled straight-line cases. Elderton wrote "I am chiefly indebted to Professor Pearson, but the indebtedness is of a kind for which it is impossible to offer formal thanks." Elderton's 1906 monograph[41] provides an impressive amount of information on the beta distribution, including equations for the origin of the distribution chosen to be the mode, as well as for other Pearson distributions: types I through VII. Elderton also included a number of appendixes, including one appendix ("II") on the beta and gamma functions. In later editions, Elderton added equations for the origin of the distribution chosen to be the mean, and analysis of Pearson distributions VIII through XII.

As remarked by Bowman and Shenton,[43] "Fisher and Pearson had a difference of opinion in the approach to (parameter) estimation, in particular relating to (Pearson's method of) moments and (Fisher's method of) maximum likelihood in the case of the Beta distribution." Also according to Bowman and Shenton, "the case of a Type I (beta distribution) model being the center of the controversy was pure serendipity. A more difficult model of 4 parameters would have been hard to find." The long-running public conflict of Fisher with Karl Pearson can be followed in a number of articles in prestigious journals. For example, concerning the estimation of the four parameters for the beta distribution, and Fisher's criticism of Pearson's method of moments as being arbitrary, see Pearson's article "Method of moments and method of maximum likelihood"[44] (published three years after his retirement from University College, London, where his position had been divided between Fisher and Pearson's son Egon), in which Pearson writes "I read (Koshai's paper in the Journal of the Royal Statistical Society, 1933) which as far as I am aware is the only case at present published of the application of Professor Fisher's method. To my astonishment that method depends on first working out the constants of the frequency curve by the (Pearson) Method of Moments and then superposing on it, by what Fisher terms "the Method of Maximum Likelihood" a further approximation to obtain, what he holds, he will thus get, 'more efficient values' of the curve constants".

David and Edwards's treatise on the history of statistics[80] credits the first modern treatment of the beta distribution, in 1911,[81] using the beta designation that has become standard, to Corrado Gini, an Italian statistician, demographer, and sociologist who developed the Gini coefficient. N. L. Johnson and S. Kotz, in their comprehensive and very informative monograph[82] on leading historical personalities in statistical sciences, credit Corrado Gini[83] as "an early Bayesian...who dealt with the problem of eliciting the parameters of an initial Beta distribution, by singling out techniques which anticipated the advent of the so-called empirical Bayes approach."

References

  1. ^ a b c d e f g h i j k l m n o p q r s t u v w x y Johnson, Norman L.; Kotz, Samuel; Balakrishnan, N. (1995). "Chapter 25: Beta Distributions". Continuous Univariate Distributions Vol. 2 (2nd ed.). Wiley. ISBN 978-0-471-58494-0.
  2. ^ a b Rose, Colin; Smith, Murray D. (2002). Mathematical Statistics with MATHEMATICA. Springer. ISBN 978-0387952345.
  3. ^ a b c Kruschke, John K. (2011). Doing Bayesian data analysis: A tutorial with R and BUGS. Academic Press / Elsevier. p. 83. ISBN 978-0123814852.
  4. ^ a b Berger, James O. (2010). Statistical Decision Theory and Bayesian Analysis (2nd ed.). Springer. ISBN 978-1441930743.
  5. ^ a b c d Feller, William (1971). An Introduction to Probability Theory and Its Applications, Vol. 2. Wiley. ISBN 978-0471257097.
  6. ^ Kruschke, John K. (2015). Doing Bayesian Data Analysis: A Tutorial with R, JAGS and Stan. Academic Press / Elsevier. ISBN 978-0-12-405888-0.
  7. ^ a b Wadsworth, George P. and Joseph Bryan (1960). Introduction to Probability and Random Variables. McGraw-Hill.
  8. ^ a b c d e f g Gupta, Arjun K., ed. (2004). Handbook of Beta Distribution and Its Applications. CRC Press. ISBN 978-0824753962.
  9. ^ a b Kerman, Jouni (2011). "A closed-form approximation for the median of the beta distribution". arXiv:1111.0433 [math.ST].
  10. ^ Mosteller, Frederick and John Tukey (1977). Data Analysis and Regression: A Second Course in Statistics. Addison-Wesley Pub. Co. Bibcode:1977dars.book.....M. ISBN 978-0201048544.
  11. ^ a b Feller, William (1968). An Introduction to Probability Theory and Its Applications. Vol. 1 (3rd ed.). ISBN 978-0471257080.
  12. ^ Philip J. Fleming and John J. Wallace. How not to lie with statistics: the correct way to summarize benchmark results. Communications of the ACM, 29(3):218–221, March 1986.
  13. ^ "NIST/SEMATECH e-Handbook of Statistical Methods 1.3.6.6.17. Beta Distribution". National Institute of Standards and Technology Information Technology Laboratory. April 2012. Retrieved May 31, 2016.
  14. ^ Oguamanam, D.C.D.; Martin, H. R.; Huissoon, J. P. (1995). "On the application of the beta distribution to gear damage analysis". Applied Acoustics. 45 (3): 247–261. doi:10.1016/0003-682X(95)00001-P.
  15. ^ Zhiqiang Liang; Jianming Wei; Junyu Zhao; Haitao Liu; Baoqing Li; Jie Shen; Chunlei Zheng (27 August 2008). "The Statistical Meaning of Kurtosis and Its New Application to Identification of Persons Based on Seismic Signals". Sensors. 8 (8): 5106–5119. Bibcode:2008Senso...8.5106L. doi:10.3390/s8085106. PMC 3705491. PMID 27873804.
  16. ^ Kenney, J. F., and E. S. Keeping (1951). Mathematics of Statistics Part Two, 2nd edition. D. Van Nostrand Company Inc.
  17. ^ a b c d Abramowitz, Milton and Irene A. Stegun (1965). Handbook Of Mathematical Functions With Formulas, Graphs, And Mathematical Tables. Dover. ISBN 978-0-486-61272-0.
  18. ^ Weisstein., Eric W. "Kurtosis". MathWorld--A Wolfram Web Resource. Retrieved 13 August 2012.
  19. ^ a b Panik, Michael J (2005). Advanced Statistics from an Elementary Point of View. Academic Press. ISBN 978-0120884940.
  20. ^ a b c d e f Pearson, Karl (1916). "Mathematical contributions to the theory of evolution, XIX: Second supplement to a memoir on skew variation". Philosophical Transactions of the Royal Society A. 216 (538–548): 429–457. Bibcode:1916RSPTA.216..429P. doi:10.1098/rsta.1916.0009. JSTOR 91092.
  21. ^ Gradshteyn, Izrail Solomonovich; Ryzhik, Iosif Moiseevich; Geronimus, Yuri Veniaminovich; Tseytlin, Michail Yulyevich; Jeffrey, Alan (2015) [October 2014]. Zwillinger, Daniel; Moll, Victor Hugo (eds.). Table of Integrals, Series, and Products. Translated by Scripta Technica, Inc. (8 ed.). Academic Press, Inc. ISBN 978-0-12-384933-5. LCCN 2014010276.
  22. ^ Billingsley, Patrick (1995). "30". Probability and measure (3rd ed.). Wiley-Interscience. ISBN 978-0-471-00710-4.
  23. ^ a b MacKay, David (2003). Information Theory, Inference and Learning Algorithms. Cambridge University Press; First Edition. Bibcode:2003itil.book.....M. ISBN 978-0521642989.
  24. ^ a b Johnson, N.L. (1949). "Systems of frequency curves generated by methods of translation" (PDF). Biometrika. 36 (1–2): 149–176. doi:10.1093/biomet/36.1-2.149. hdl:10338.dmlcz/135506. PMID 18132090.
  25. ^ Verdugo Lazo, A. C. G.; Rathie, P. N. (1978). "On the entropy of continuous probability distributions". IEEE Trans. Inf. Theory. 24 (1): 120–122. doi:10.1109/TIT.1978.1055832.
  26. ^ Shannon, Claude E. (1948). "A Mathematical Theory of Communication". Bell System Technical Journal. 27 (4): 623–656. doi:10.1002/j.1538-7305.1948.tb01338.x.
  27. ^ a b c Cover, Thomas M. and Joy A. Thomas (2006). Elements of Information Theory 2nd Edition (Wiley Series in Telecommunications and Signal Processing). Wiley-Interscience; 2 edition. ISBN 978-0471241959.
  28. ^ Plunkett, Kim, and Jeffrey Elman (1997). Exercises in Rethinking Innateness: A Handbook for Connectionist Simulations (Neural Network Modeling and Connectionism). A Bradford Book. p. 166. ISBN 978-0262661058.
  29. ^ Nallapati, Ramesh (2006). The smoothed dirichlet distribution: understanding cross-entropy ranking in information retrieval (Thesis). Computer Science Dept., University of Massachusetts Amherst.
  30. ^ a b Pearson, Egon S. (July 1969). "Some historical reflections traced through the development of the use of frequency curves". THEMIS Statistical Analysis Research Program, Technical Report 38. Office of Naval Research, Contract N000014-68-A-0515 (Project NR 042–260).
  31. ^ Hahn, Gerald J.; Shapiro, S. (1994). Statistical Models in Engineering (Wiley Classics Library). Wiley-Interscience. ISBN 978-0471040651.
  32. ^ a b Pearson, Karl (1895). "Contributions to the mathematical theory of evolution, II: Skew variation in homogeneous material". Philosophical Transactions of the Royal Society. 186: 343–414. Bibcode:1895RSPTA.186..343P. doi:10.1098/rsta.1895.0010. JSTOR 90649.
  33. ^ Buchanan, K.; Rockway, J.; Sternberg, O.; Mai, N. N. (May 2016). "Sum-difference beamforming for radar applications using circularly tapered random arrays". 2016 IEEE Radar Conference (RadarConf). pp. 1–5. doi:10.1109/RADAR.2016.7485289. ISBN 978-1-5090-0863-6. S2CID 32525626.
  34. ^ Buchanan, K.; Flores, C.; Wheeland, S.; Jensen, J.; Grayson, D.; Huff, G. (May 2017). "Transmit beamforming for radar applications using circularly tapered random arrays". 2017 IEEE Radar Conference (RadarConf). pp. 0112–0117. doi:10.1109/RADAR.2017.7944181. ISBN 978-1-4673-8823-8. S2CID 38429370.
  35. ^ Ryan, Buchanan, Kristopher (2014-05-29). "Theory and Applications of Aperiodic (Random) Phased Arrays".
  36. ^ Herrerías-Velasco, José Manuel and Herrerías-Pleguezuelo, Rafael and René van Dorp, Johan. (2011). Revisiting the PERT mean and Variance. European Journal of Operational Research (210), p. 448–451.
  37. ^ a b Malcolm, D. G.; Roseboom, J. H.; Clark, C. E.; Fazar, W. (September–October 1958). "Application of a Technique for Research and Development Program Evaluation". Operations Research. 7 (5): 646–669. doi:10.1287/opre.7.5.646. ISSN 0030-364X.
  38. ^ a b c d David, H. A., Nagaraja, H. N. (2003) Order Statistics (3rd Edition). Wiley, New Jersey pp 458. ISBN 0-471-38926-9
  39. ^ "Beta distribution". www.statlect.com.
  40. ^ "1.3.6.6.17. Beta Distribution". www.itl.nist.gov.
  41. ^ a b c d e f g h Elderton, William Palin (1906). Frequency-Curves and Correlation. Charles and Edwin Layton (London).
  42. ^ Elderton, William Palin and Norman Lloyd Johnson (2009). Systems of Frequency Curves. Cambridge University Press. ISBN 978-0521093361.
  43. ^ a b c Bowman, K. O.; Shenton, L. R. (2007). "The beta distribution, moment method, Karl Pearson and R.A. Fisher" (PDF). Far East J. Theo. Stat. 23 (2): 133–164.
  44. ^ a b Pearson, Karl (June 1936). "Method of moments and method of maximum likelihood". Biometrika. 28 (1/2): 34–59. doi:10.2307/2334123. JSTOR 2334123.
  45. ^ a b c Joanes, D. N.; C. A. Gill (1998). "Comparing measures of sample skewness and kurtosis". The Statistician. 47 (Part 1): 183–189. doi:10.1111/1467-9884.00122.
  46. ^ Beckman, R. J.; G. L. Tietjen (1978). "Maximum likelihood estimation for the beta distribution". Journal of Statistical Computation and Simulation. 7 (3–4): 253–258. doi:10.1080/00949657808810232.
  47. ^ Gnanadesikan, R., Pinkham and Hughes (1967). "Maximum likelihood estimation of the parameters of the beta distribution from smallest order statistics". Technometrics. 9 (4): 607–620. doi:10.2307/1266199. JSTOR 1266199.
  48. ^ Fackler, Paul. "Inverse Digamma Function (Matlab)". Harvard University School of Engineering and Applied Sciences. Retrieved 2012-08-18.
  49. ^ a b c Silvey, S.D. (1975). Statistical Inference. Chapman and Hal. p. 40. ISBN 978-0412138201.
  50. ^ Edwards, A. W. F. (1992). Likelihood. The Johns Hopkins University Press. ISBN 978-0801844430.
  51. ^ a b c d e f Jaynes, E.T. (2003). Probability theory, the logic of science. Cambridge University Press. ISBN 978-0521592710.
  52. ^ Costa, Max, and Cover, Thomas (September 1983). On the similarity of the entropy power inequality and the Brunn Minkowski inequality (PDF). Tech. Report 48, Dept. Statistics, Stanford University.
  53. ^ a b c Aryal, Gokarna; Saralees Nadarajah (2004). "Information matrix for beta distributions" (PDF). Serdica Mathematical Journal (Bulgarian Academy of Science). 30: 513–526.
  54. ^ a b Laplace, Pierre Simon, marquis de (1902). A philosophical essay on probabilities. New York: J. Wiley; London: Chapman & Hall. ISBN 978-1-60206-328-0.
  55. ^ Cox, Richard T. (1961). Algebra of Probable Inference. The Johns Hopkins University Press. ISBN 978-0801869822.
  56. ^ a b Keynes, John Maynard (2010) [1921]. A Treatise on Probability: The Connection Between Philosophy and the History of Science. Wildside Press. ISBN 978-1434406965.
  57. ^ Pearson, Karl (1907). "On the Influence of Past Experience on Future Expectation". Philosophical Magazine. 6 (13): 365–378.
  58. ^ a b c d Jeffreys, Harold (1998). Theory of Probability. Oxford University Press, 3rd edition. ISBN 978-0198503682.
  59. ^ Broad, C. D. (October 1918). "On the relation between induction and probability". MIND, A Quarterly Review of Psychology and Philosophy. 27 (New Series) (108): 389–404. doi:10.1093/mind/XXVII.4.389. JSTOR 2249035.
  60. ^ a b c d Perks, Wilfred (January 1947). "Some observations on inverse probability including a new indifference rule". Journal of the Institute of Actuaries. 73 (2): 285–334. doi:10.1017/S0020268100012270. Archived from the original on 2014-01-12. Retrieved 2012-09-19.
  61. ^ a b Bayes, Thomas; communicated by Richard Price (1763). "An Essay towards solving a Problem in the Doctrine of Chances". Philosophical Transactions of the Royal Society. 53: 370–418. doi:10.1098/rstl.1763.0053. JSTOR 105741.
  62. ^ Haldane, J.B.S. (1932). "A note on inverse probability". Mathematical Proceedings of the Cambridge Philosophical Society. 28 (1): 55–61. Bibcode:1932PCPS...28...55H. doi:10.1017/s0305004100010495. S2CID 122773707.
  63. ^ Zellner, Arnold (1971). An Introduction to Bayesian Inference in Econometrics. Wiley-Interscience. ISBN 978-0471169376.
  64. ^ Jeffreys, Harold (September 1946). "An Invariant Form for the Prior Probability in Estimation Problems". Proceedings of the Royal Society. A 24. 186 (1007): 453–461. Bibcode:1946RSPSA.186..453J. doi:10.1098/rspa.1946.0056. PMID 20998741.
  65. ^ Berger, James; Bernardo, Jose; Sun, Dongchu (2009). "The formal definition of reference priors". The Annals of Statistics. 37 (2): 905–938. arXiv:0904.0156. Bibcode:2009arXiv0904.0156B. doi:10.1214/07-AOS587. S2CID 3221355.
  66. ^ Clarke, Bertrand S.; Andrew R. Barron (1994). "Jeffreys' prior is asymptotically least favorable under entropy risk" (PDF). Journal of Statistical Planning and Inference. 41: 37–60. doi:10.1016/0378-3758(94)90153-8.
  67. ^ Pearson, Karl (1892). The Grammar of Science. Walter Scott, London.
  68. ^ Pearson, Karl (2009). The Grammar of Science. BiblioLife. ISBN 978-1110356119.
  69. ^ Gelman, A., Carlin, J. B., Stern, H. S., and Rubin, D. B. (2003). Bayesian Data Analysis. Chapman and Hall/CRC. ISBN 978-1584883883.
  70. ^ A. Jøsang. A Logic for Uncertain Probabilities. International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems. 9(3), pp. 279–311, June 2001.
  71. ^ H.M. de Oliveira and G.A.A. Araújo. Compactly Supported One-cyclic Wavelets Derived from Beta Distributions. Journal of Communication and Information Systems. vol. 20, n. 3, pp. 27–33, 2005.
  72. ^ Balding, David J.; Nichols, Richard A. (1995). "A method for quantifying differentiation between populations at multi-allelic loci and its implications for investigating identity and paternity". Genetica. 96 (1–2). Springer: 3–12. doi:10.1007/BF01441146. PMID 7607457. S2CID 30680826.
  73. ^ Keefer, Donald L. and Verdini, William A. (1993). Better Estimation of PERT Activity Time Parameters. Management Science 39(9), p. 1086–1091.
  74. ^ Keefer, Donald L. and Bodily, Samuel E. (1983). Three-point Approximations for Continuous Random variables. Management Science 29(5), p. 595–609.
  75. ^ "Defense Resource Management Institute - Naval Postgraduate School". www.nps.edu.
  76. ^ van der Waerden, B. L., "Mathematical Statistics", Springer, ISBN 978-3-540-04507-6.
  77. ^ On normalizing the incomplete beta-function for fitting to dose-response curves M.E. Wise Biometrika vol 47, No. 1/2, June 1960, pp. 173–175
  78. ^ Yule, G. U.; Filon, L. N. G. (1936). "Karl Pearson. 1857–1936". Obituary Notices of Fellows of the Royal Society. 2 (5): 72. doi:10.1098/rsbm.1936.0007. JSTOR 769130.
  79. ^ "Library and Archive catalogue". Sackler Digital Archive. Royal Society. Archived from the original on 2011-10-25. Retrieved 2011-07-01.
  80. ^ David, H. A. and A.W.F. Edwards (2001). Annotated Readings in the History of Statistics. Springer; 1 edition. ISBN 978-0387988443.
  81. ^ Gini, Corrado (1911). "Considerazioni Sulle Probabilità Posteriori e Applicazioni al Rapporto dei Sessi Nelle Nascite Umane". Studi Economico-Giuridici della Università de Cagliari. Anno III (reproduced in Metron 15, 133, 171, 1949): 5–41.
  82. ^ Johnson, Norman L. and Samuel Kotz, ed. (1997). Leading Personalities in Statistical Sciences: From the Seventeenth Century to the Present (Wiley Series in Probability and Statistics). Wiley. ISBN 978-0471163817.
  83. ^ Metron journal. "Biography of Corrado Gini". Metron Journal. Archived from the original on 2012-07-16. Retrieved 2012-08-18.

External links