- [[define priors for Bayes factors]], [[Bayesian model averaging]], [[inclusion Bayes factors]], [[Bayes theorem]], [[multiply Bayes factors to combine evidence]], [[approximate objective bayes factors from p-values and sample size]], [[approximate Bayes factor]]
- [[Jeffreys-Lindley paradox]]
# Idea
Bayes factors provide relative evidence for one "model" over another. They can be calculated in several ways.
In general, the more specific the two competing hypotheses ("alternative" vs "null") are, and the more distinct they are from one another, the larger the Bayes factor that can be obtained for one over the other (see the sketch under the first definition below). See [[Bayes factor interpretation]].
## Relative probability of the observed data under two models
It quantifies the relative probability of the observed data $D$ under two models, $M_1$ and $M_2$: the ratio of the (marginal) likelihood of the data under model 1 to that under model 2.
$BF_{12} = \frac{P(D \mid M_1)}{P(D \mid M_2)}$
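A minimal sketch of this definition (not from the packages in the references; the binomial data and the three priors are illustrative assumptions). For binomial data the marginal likelihoods are analytic, and comparing a vague alternative with a specific alternative near the observed rate also illustrates the point above about hypothesis specificity:

```python
# Hedged sketch: Bayes factors as ratios of marginal likelihoods for binomial data.
import numpy as np
from scipy.special import betaln, comb

k, n = 70, 100                         # hypothetical data: 70 successes in 100 trials
log_c = np.log(comb(n, k))             # binomial coefficient (cancels in the ratios)

# M0: point null, theta = 0.5
log_m0 = log_c + n * np.log(0.5)
# M1: vague alternative, theta ~ Beta(1, 1); marginal likelihood is Beta-Binomial
log_m1 = log_c + betaln(k + 1, n - k + 1) - betaln(1, 1)
# M2: specific alternative at the observed rate, theta = 0.7
log_m2 = log_c + k * np.log(0.7) + (n - k) * np.log(0.3)

print("BF_10 =", np.exp(log_m1 - log_m0))   # ≈ 427: vague alternative vs point null
print("BF_20 =", np.exp(log_m2 - log_m0))   # ≈ 3.7e3: the more specific alternative near the data wins by more
```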
## Degree of shift in prior beliefs
It quantifies how much the data shift prior beliefs about the relative credibility of two models: the factor by which the prior odds are updated into posterior odds.
$BF_{12} = \frac{\text{Posterior Odds}_{12}}{\text{Prior Odds}_{12}}$
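A quick numeric check of this identity, reusing the (rounded) log marginal likelihoods from the sketch above and assuming equal prior model probabilities: dividing the posterior odds by the prior odds recovers the same ratio-of-marginal-likelihoods Bayes factor.

```python
# Hedged sketch: the Bayes factor is the factor by which the data turn prior odds
# into posterior odds. Prior model probabilities are assumed; the log marginal
# likelihoods are the (rounded) values from the sketch above.
import numpy as np

prior_m1, prior_m0 = 0.5, 0.5                 # equal prior probabilities -> prior odds 1:1
log_ml_m1, log_ml_m0 = -63.26, -69.31         # log P(D | M1), log P(D | M0)

prior_odds = prior_m1 / prior_m0
bf_10 = np.exp(log_ml_m1 - log_ml_m0)         # ratio-of-marginal-likelihoods definition

# Bayes theorem on model probabilities: P(M_i | D) is proportional to P(D | M_i) * P(M_i)
unnorm = np.array([np.exp(log_ml_m1) * prior_m1, np.exp(log_ml_m0) * prior_m0])
post_m1, post_m0 = unnorm / unnorm.sum()
posterior_odds = post_m1 / post_m0

print(posterior_odds / prior_odds)            # identical to bf_10 (up to floating point)
```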
# References
- [GitHub - jomulder/BFpack: BFpack can be used for testing statistical hypotheses using the Bayes factor, a Bayesian criterion originally developed by sir Harold Jeffreys.](https://github.com/jomulder/BFpack)
- [Bayes Factors (BF) for a Single Parameter — bayesfactor\_parameters • bayestestR](https://easystats.github.io/bayestestR/reference/bayesfactor_parameters.html)
- [Bayes Factors • bayestestR](https://easystats.github.io/bayestestR/articles/bayes_factors.html)
- [[Wulff 2023 how and why alpha should depend on sample size]]
- [10 Model Comparison and Hierarchical Modeling | Doing Bayesian Data Analysis in brms and the tidyverse](https://bookdown.org/ajkurz/DBDA_recoded/model-comparison-and-hierarchical-modeling.html)