While solving a case study, a researcher often faces many candidate predictors and possible interactions, which makes choosing a model difficult. Model-selection criteria help resolve this by weighing how well a model fits against how complex it is.

AIC and BIC are two such criteria for evaluating a model: each combines the model's likelihood with a penalty on the number of parameters it uses. In 2002, Burnham and Anderson published an influential study comparing the two criteria.

## Key Takeaways

- AIC and BIC are both measures used for model selection in statistical analysis.
- AIC stands for Akaike Information Criterion, and BIC stands for Bayesian Information Criterion.
- AIC penalizes model complexity less than BIC, which means that AIC may be preferred for smaller sample sizes, while BIC may be preferred for larger sample sizes.

**AIC vs BIC**

AIC measures the relative quality of a statistical model for a given set of data. It is based on the likelihood function and the number of parameters in the model. BIC is a similar likelihood-based criterion derived from Bayesian principles, but it places a greater penalty on models with more parameters.

AIC tends to select more complex models and its choices are less predictable, whereas BIC assumes a finite-dimensional true model and selects more consistently. The former is often preferred when false negatives are the greater concern, the latter when false positives are.

**Comparison Table**

| Parameters of Comparison | AIC | BIC |
|---|---|---|
| Full form | Akaike Information Criterion | Bayesian Information Criterion |
| Definition | An estimate of the relative distance between the fitted model and the unknown true mechanism that generated the data. | An estimate, under a particular Bayesian framework, of how probable it is that the model is the true model. |
| Formula | AIC = 2k − 2ln(L̂) | BIC = k ln(n) − 2ln(L̂) |
| Model selection | Preferred when false negatives are the greater concern. | Preferred when false positives are the greater concern. |
| Dimension | Suited to a true model that is infinite-dimensional or very high-dimensional. | Assumes a true model that is finite-dimensional and of lower dimension than AIC's target. |
| Penalty term | Smaller: 2 per parameter. | Larger: ln(n) per parameter. |
| Probability | Its probability of selecting the true model stays below 1 even as the sample grows. | Its probability of selecting the true model approaches exactly 1 as the sample grows. |
| Results | Less predictable and more complex than BIC's. | More consistent and simpler than AIC's. |
| Assumptions | Under its assumptions, offers the most optimal predictive coverage. | Under the same assumptions, offers less optimal coverage than AIC. |
| Risks | Selection risk is minimized, provided n is much larger than k². | Selection risk can be higher, since the true dimension is assumed finite. |

**What is AIC?**

The criterion was first presented by the statistician Hirotugu Akaike in 1971. The first formal paper was published by Akaike in 1974 and has received more than 14,000 citations.

The Akaike Information Criterion (AIC) estimates the relative distance between a fitted model and the unknown true mechanism that generated the data.

It is computed from the model's maximized likelihood, so a lower AIC means the model is estimated to be closer to the truth. It is useful when false negatives are the greater concern.

Even as the sample grows, its probability of selecting the true model remains less than 1. AIC is suited to settings where the true model is infinite-dimensional or very high-dimensional, which is also why its selections are less predictable and more complex.

Under its assumptions it offers the most optimal predictive coverage, and its penalty term is small. Many researchers believe it minimizes selection risk, provided *n* is much larger than k^{2}.

The AIC calculation is done with the following formula:

**AIC = 2k − 2ln(L̂)**
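As a minimal sketch of this formula (the parameter count and log-likelihood below are invented purely for illustration), the calculation is direct:

```python
def aic(k: int, log_likelihood: float) -> float:
    """Akaike Information Criterion: AIC = 2k - 2 ln(L-hat)."""
    return 2 * k - 2 * log_likelihood

# Hypothetical fit: 3 parameters, maximized log-likelihood of -120.5
print(aic(3, -120.5))  # 247.0
```

Note that only differences in AIC between candidate models are meaningful; the absolute value carries no interpretation on its own.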

**What is BIC?**

The Bayesian Information Criterion (BIC) estimates, under a particular Bayesian framework, how probable it is that a model is the true model. A lower BIC means the model is considered more likely to be the true one.

The criterion was developed and published by Gideon E. Schwarz in 1978. It is also known as the Schwarz Information Criterion, abbreviated SIC, SBIC, or SBC. As the sample grows, its probability of selecting the true model approaches exactly 1. It is helpful when false positives are the greater concern.

Its penalty term is substantial. It assumes the true model is finite-dimensional, which gives consistent and simpler results, but researchers note that its predictive coverage under the same assumptions is less optimal than AIC's, which can translate into higher selection risk when that finite-dimension assumption fails.

The BIC calculation is done with the following formula:

**BIC = k ln(n) − 2ln(L̂)**
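The same kind of sketch works for BIC. Here the two candidate fits (their parameter counts and log-likelihoods) are made up purely to illustrate how BIC's ln(n) penalty can reverse an AIC-based ranking:

```python
import math

def aic(k: int, log_likelihood: float) -> float:
    """Akaike Information Criterion: AIC = 2k - 2 ln(L-hat)."""
    return 2 * k - 2 * log_likelihood

def bic(k: int, n: int, log_likelihood: float) -> float:
    """Bayesian Information Criterion: BIC = k ln(n) - 2 ln(L-hat)."""
    return k * math.log(n) - 2 * log_likelihood

n = 100  # sample size (hypothetical)
# Model A: 3 parameters, log-likelihood -120.5
# Model B: 5 parameters, log-likelihood -118.0
print(aic(3, -120.5), aic(5, -118.0))        # 247.0 246.0 -> AIC prefers B
print(bic(3, n, -120.5), bic(5, n, -118.0))  # ~254.82 ~259.03 -> BIC prefers A
```

With n = 100, each extra parameter costs ln(100) ≈ 4.6 under BIC versus 2 under AIC, so Model B's likelihood gain is enough to win under AIC but not under BIC.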

The Bridge Criterion (BC) was developed by Jie Ding, Vahid Tarokh, and Yuhong Yang and published on 20 June 2017 in IEEE Transactions on Information Theory. Its motive was to bridge the fundamental gap between the AIC and BIC approaches.

**Main Differences Between AIC and BIC**

- AIC is used in model selection when false negatives are the greater concern, whereas BIC is used when false positives are.
- The former suits a true model that is infinite-dimensional or very high-dimensional; the latter assumes a finite-dimensional one.
- The penalty term of the first is smaller, while that of the second is substantial.
- The Akaike Information Criterion gives more complicated and less predictable results; conversely, the Bayesian Information Criterion gives simpler, more consistent ones.
- AIC provides the most optimal coverage under its assumptions, while BIC's coverage is less optimal.
- Selection risk is minimized with AIC and can be higher with BIC.
- AIC's probability of selecting the true model stays below 1, while BIC's approaches exactly 1.

**References**

- https://psycnet.apa.org/record/2012-03019-001
- https://journals.sagepub.com/doi/abs/10.1177/0049124103262065
- https://journals.sagepub.com/doi/abs/10.1177/0049124104268644
- https://www.sciencedirect.com/science/article/pii/S0165783605002870

This article has been written by: Supriya Kandekar