# Difference Between AIC and BIC

While solving a case study, a researcher comes across many candidate predictors, possible models, and interactions, which makes model selection intricate. Model-selection criteria help resolve this problem by estimating the relative quality of each candidate model.

## Summary

AIC and BIC are two such criteria for evaluating a model. Each combines a measure of model fit with a penalty on the number of parameters considered. In 2002, Burnham and Anderson published an influential study comparing the two criteria.

## AIC vs BIC

The difference between AIC and BIC lies in how each selects a model. They are designed for different purposes and can give different results. AIC allows the true model to be infinite-dimensional or of relatively high dimension, so it tends to favour more complex models; BIC assumes a finite-dimensional true model and selects consistently. The former is better at avoiding false-negative findings, the latter at avoiding false positives.

## What is AIC?

The criterion was first announced by the statistician Hirotugu Akaike in 1971. The first formal paper was published by Akaike in 1974 and has received more than 14,000 citations.

The Akaike Information Criterion (AIC) estimates the relative distance between the unknown true likelihood function that generated the data and the fitted likelihood function of a candidate model.

A lower AIC therefore indicates a model that is estimated to be closer to the truth. This makes the criterion useful for guarding against false-negative conclusions.

AIC is not consistent: even as the sample grows, its probability of selecting the true model remains below 1. Because it allows the true model to be infinite-dimensional or of very high dimension, its selections can be less stable and harder to interpret.

AIC aims at optimal predictive coverage rather than at recovering the true model. Its penalty term is smaller than BIC's, and many researchers regard it as minimizing prediction risk, on the assumption that the sample size n is large relative to k².

AIC is calculated with the following formula:

• AIC = 2k – 2ln(L̂)

where k is the number of estimated parameters and L̂ is the maximized value of the model's likelihood.

## What is BIC?

The Bayesian Information Criterion (BIC) evaluates, under a particular Bayesian framework, a function of the posterior probability that a model is the true one. A lower BIC therefore means that a model is judged more likely to be the true model.

The criterion was developed and published by Gideon E. Schwarz in 1978. It is also known as the Schwarz Information Criterion (SIC, SBIC, or SBC). BIC is consistent: its probability of selecting the true model approaches exactly 1 as the sample grows. It is helpful for guarding against false-positive outcomes.

Its penalty term is substantial. It assumes a finite-dimensional true model, which gives consistent and easier-to-interpret results. Its predictive coverage is less optimal than AIC's, which can translate into a higher prediction risk.

BIC is calculated with the following formula:

• BIC = k ln(n) – 2ln(L̂)

where n is the number of observations.
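Both formulas can be computed directly from a model's maximized log-likelihood. Below is a minimal sketch in Python; the helper name `aic_bic` and the toy numbers are illustrative assumptions, not part of any particular library. The Gaussian log-likelihood shortcut used here is the standard one for least-squares fits with normally distributed errors.

```python
import math

def aic_bic(log_likelihood, k, n):
    """Compute AIC and BIC from a model's maximized log-likelihood,
    its parameter count k, and the sample size n."""
    aic = 2 * k - 2 * log_likelihood
    bic = k * math.log(n) - 2 * log_likelihood
    return aic, bic

# Toy example: Gaussian log-likelihood recovered from a residual sum
# of squares (RSS) of a least-squares fit with k = 3 parameters.
n, k, rss = 100, 3, 42.0
sigma2 = rss / n                                      # ML estimate of error variance
log_l = -0.5 * n * (math.log(2 * math.pi * sigma2) + 1)
aic, bic = aic_bic(log_l, k, n)
# BIC exceeds AIC here because ln(100) ≈ 4.6 > 2.
print(aic, bic)
```

Note that the two criteria differ only in the penalty attached to each parameter: 2 per parameter for AIC versus ln(n) per parameter for BIC.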

The ‘Bridge Criterion’, or BC, was developed by Jie Ding, Vahid Tarokh, and Yuhong Yang and published on 20 June 2017 in IEEE Transactions on Information Theory. Its aim is to bridge the fundamental gap between AIC and BIC.

## Main Differences Between AIC and BIC

1. AIC is used in model selection when false-negative outcomes are the greater concern, whereas BIC is used when false positives are.
2. The former allows an infinite or relatively high-dimensional true model. On the contrary, the latter assumes a finite-dimensional one.
3. The penalty term of the first (2k) is smaller, whereas that of the second (k ln(n)) is substantial.
4. The Akaike information criterion can give complicated and less stable results. Conversely, the Bayesian information criterion gives consistent, easier-to-interpret results.
5. AIC aims at optimal predictive coverage, while BIC's predictive coverage is less optimal.
6. Prediction risk is minimized with AIC and can be higher with BIC.
7. The Akaike criterion's probability of selecting the true model remains below 1, while the Bayesian criterion's approaches exactly 1.
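The differing penalty terms in points 3 and 6 can be made concrete. In a nested comparison, adding one parameter pays off under a criterion only if it improves 2·ln(L̂) by more than that criterion's per-parameter penalty. The sketch below (a hypothetical helper, not a library API, with an assumed log-likelihood gain) shows AIC and BIC disagreeing on the same evidence:

```python
import math

def prefers_bigger_model(delta_ll, n, criterion):
    """Return True if the criterion favours a model with one extra
    parameter, given the gain delta_ll in maximized log-likelihood."""
    # AIC charges 2 per extra parameter; BIC charges ln(n).
    penalty = 2 if criterion == "aic" else math.log(n)
    return 2 * delta_ll > penalty

n = 1000
delta_ll = 2.5  # assumed gain in log-likelihood from the extra parameter

print(prefers_bigger_model(delta_ll, n, "aic"))  # gain 5.0 > 2, so AIC accepts
print(prefers_bigger_model(delta_ll, n, "bic"))  # gain 5.0 < ln(1000) ≈ 6.9, so BIC rejects
```

As n grows, BIC's penalty ln(n) keeps rising while AIC's stays at 2, which is why BIC picks the sparser model more often in large samples.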