When working through a case study, a researcher typically faces many candidate predictors, model forms, and interactions, which makes model selection difficult. Model selection criteria help resolve this problem by scoring how well each candidate model is expected to perform.

AIC and BIC are two such criteria for evaluating a model. Each combines a measure of fit with a penalty on the number of estimated parameters. Burnham and Anderson published an extensive study of both criteria in 2002.

**AIC vs BIC**

The difference between AIC and BIC lies in how they select a model. They are built on different assumptions and can give different results. AIC is suited to settings where the true model is high- or even infinite-dimensional, and it tends to favor more complex models. BIC assumes the true model is finite-dimensional and selects it consistently as the sample grows. The former is better at guarding against false negatives (underfitting), the latter against false positives (overfitting).

**Comparison Table Between AIC and BIC**

| Parameters of Comparison | AIC | BIC |
|---|---|---|
| Full form | Akaike Information Criterion | Bayesian Information Criterion |
| Definition | An estimate of the relative distance between a fitted model and the unknown true mechanism that generated the data. | An approximation, under a particular Bayesian framework, of how probable a model is given the data. |
| Formula | AIC = 2k – 2ln(L^) | BIC = k ln(n) – 2ln(L^) |
| Model selection | Preferred when false negatives (missing a relevant effect) are the greater concern. | Preferred when false positives (retaining a spurious effect) are the greater concern. |
| Dimension | Suited to settings where the true model is high- or infinite-dimensional. | Assumes the true model is finite-dimensional and among the candidates. |
| Penalty term | Smaller: 2k per the parameter count. | Larger: k ln(n), which exceeds 2k once n ≥ 8. |
| Probability of selecting the true model | Stays below 1 even as the sample grows (not consistent). | Approaches exactly 1 as n → ∞ (consistent). |
| Results | Tends toward larger, more complex, and less stable selections. | Tends toward smaller, more parsimonious, and more consistent selections. |
| Risk | Prediction risk is minimized when n is much larger than k². | Risks choosing too small a model when n is finite. |

**What is AIC?**

The criterion was first introduced by the statistician Hirotugu Akaike in 1971, and the first formal paper was published by Akaike in 1974; it has since received more than 14,000 citations.

The Akaike Information Criterion (AIC) estimates the relative distance between a fitted model and the unknown true mechanism that generated the data, based on the model's maximized likelihood. A lower AIC therefore means a model is estimated to be closer to the truth. It is useful where false-negative conclusions are the greater concern.

AIC is not consistent: even as the sample grows, its probability of selecting the true model stays below 1. It is suited to settings where the true model is high- or even infinite-dimensional, which is why it can return relatively complex and less predictable selections. Its penalty term, 2k, is comparatively small. Many researchers consider it to minimize prediction risk, provided *n* is much larger than k^{2}.

The AIC calculation is done with the following formula:

**AIC = 2k – 2ln(L^)**

where *k* is the number of estimated parameters and L^ is the maximized value of the model's likelihood function.
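As a concrete illustration, the sketch below (plain Python, no external libraries; the data values are invented for the example) computes AIC for a mean-only Gaussian model, whose maximized log-likelihood has a closed form:

```python
import math

def aic(log_likelihood: float, k: int) -> float:
    """AIC = 2k - 2 ln(L^), where k is the number of estimated parameters."""
    return 2 * k - 2 * log_likelihood

def gaussian_fit_loglik(data: list) -> float:
    """Maximized log-likelihood of a mean-only Gaussian model.

    Two parameters are estimated (mean and variance, so k = 2); at the
    MLE the log-likelihood simplifies to -n/2 * (ln(2*pi*sigma2) + 1).
    """
    n = len(data)
    mu = sum(data) / n
    sigma2 = sum((x - mu) ** 2 for x in data) / n  # MLE variance (divide by n)
    return -0.5 * n * (math.log(2 * math.pi * sigma2) + 1)

data = [2.1, 1.9, 2.4, 2.0, 2.2, 1.8, 2.3, 2.1]  # invented sample
print(aic(gaussian_fit_loglik(data), k=2))
```

Note how the trade-off works: adding one parameter raises AIC by 2, so it only pays off if it improves the maximized log-likelihood by more than 1.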

**What is BIC?**

The Bayesian Information Criterion (BIC) evaluates, under a particular Bayesian framework, an approximation to the probability that a model is the accurate one given the data. A lower BIC therefore means a model is judged more likely to be the true model.

The criterion was developed and published by Gideon E. Schwarz in 1978. It is also known as the Schwarz Information Criterion (SIC, SBIC, or SBC). It is consistent: its probability of selecting the true model approaches exactly 1 as the sample grows. It is helpful where false-positive outcomes are the greater concern.

Its penalty term, k ln(n), is substantial: it grows with the sample size. BIC assumes the true model is finite-dimensional, which yields consistent and more parsimonious selections. Researchers note that its predictive coverage can be less optimal than AIC's, since for any finite *n* it runs the risk of selecting too small a model.

The BIC calculation is done with the following formula:

**BIC = k ln(n) – 2ln(L^)**

where *n* is the number of observations, *k* is the number of estimated parameters, and L^ is the maximized value of the model's likelihood function.
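The penalty comparison can be checked directly: BIC's per-parameter penalty is ln(n) instead of AIC's flat 2, so it overtakes AIC's exactly when ln(n) > 2, i.e. from n = 8 onward. A minimal sketch:

```python
import math

def aic_penalty(k: int) -> float:
    """Complexity penalty in AIC: a flat 2 per parameter."""
    return 2 * k

def bic_penalty(k: int, n: int) -> float:
    """Complexity penalty in BIC: ln(n) per parameter."""
    return k * math.log(n)

# BIC's penalty overtakes AIC's once n reaches 8 (ln 8 ~ 2.08 > 2).
for n in (5, 7, 8, 100, 10_000):
    print(n, bic_penalty(1, n) > aic_penalty(1))
```

This is why BIC grows steadily more conservative than AIC as datasets get larger, while the two behave similarly on very small samples.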

The ‘Bridge Criterion’, or BC, was developed by Jie Ding, Vahid Tarokh, and Yuhong Yang, and published on 20 June 2017 in IEEE Transactions on Information Theory. Its aim was to bridge the fundamental gap between AIC and BIC.

**Main Differences Between AIC and BIC**

- AIC is used in model selection when false negatives are the greater concern, whereas BIC is used when false positives are.
- The former suits settings where the true model is high- or infinite-dimensional; the latter assumes a finite-dimensional true model.
- The penalty term of the first, 2k, is smaller; that of the second, k ln(n), is substantial.
- The Akaike information criterion can give complex and less predictable selections. Conversely, the Bayesian information criterion gives simpler, more consistent ones.
- Under its assumptions, AIC provides the most optimal predictive coverage, while BIC's coverage is less optimal.
- Prediction risk is minimized with AIC (when *n* is much larger than k^{2}); BIC risks underfitting for finite *n*.
- AIC's probability of selecting the true model stays below 1 even asymptotically, while BIC's probability approaches exactly 1.
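These differences show up in a small hypothetical comparison (the log-likelihood values below are invented for illustration): a larger model that gains 1.5 log-likelihood units from one extra parameter is worth it to AIC (penalty cost 2 < fit gain 3) but not to BIC at n = 100 (cost ln 100 ≈ 4.6 > gain 3):

```python
import math

def aic(ll: float, k: int) -> float:
    """AIC = 2k - 2 ln(L^)."""
    return 2 * k - 2 * ll

def bic(ll: float, k: int, n: int) -> float:
    """BIC = k ln(n) - 2 ln(L^)."""
    return k * math.log(n) - 2 * ll

n = 100
ll_small, k_small = -150.0, 1   # invented fit results for the smaller model
ll_large, k_large = -148.5, 2   # one more parameter, +1.5 log-likelihood

aic_choice = "small" if aic(ll_small, k_small) < aic(ll_large, k_large) else "large"
bic_choice = "small" if bic(ll_small, k_small, n) < bic(ll_large, k_large, n) else "large"
print(aic_choice, bic_choice)  # the two criteria disagree on the same data
```

Running both criteria side by side like this, rather than relying on one alone, is exactly the practice the conclusion below recommends.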

**Conclusion**

AIC and BIC are each approximately correct, but according to different objectives and under distinct sets of asymptotic assumptions, and both sets of assumptions have been criticized as unrealistic. The power of each comparison at a fixed alpha rises with *n*, so AIC typically retains some chance of preferring too large a model regardless of *n*. BIC has very little risk of selecting too large a model when *n* is adequate, although for any given *n* it has a greater chance than AIC of preferring too small a model.

The difference in their practical behavior is easiest to see in the simple case of comparing two nested models. The most reliable approach is to apply both criteria together during model selection: AIC is more useful for avoiding false-negative verdicts, and BIC is better for avoiding false positives. Recently, the ‘Bridge Criterion’ was developed to bridge this fundamental gap between AIC and BIC.

**References**

- https://psycnet.apa.org/record/2012-03019-001
- https://journals.sagepub.com/doi/abs/10.1177/0049124103262065
- https://journals.sagepub.com/doi/abs/10.1177/0049124104268644
- https://www.sciencedirect.com/science/article/pii/S0165783605002870

This article has been written by Supriya Kandekar.
