Comparison and Assessment of Commonly Used Model Selection Criteria in Modeling of Experimental Data

Authors

  • Radoslav Mavrevski* Faculty of Engineering, South-West University "Neofit Rilski", Blagoevgrad, Bulgaria
  • Peter Milanov Faculty of Mathematics and Natural Sciences, South-West University "Neofit Rilski", Blagoevgrad, Bulgaria Institute of Mathematics and Informatics, Bulgarian Academy of Sciences, Sofia, Bulgaria
  • Borislav Yurukov Faculty of Mathematics and Natural Sciences, South-West University "Neofit Rilski", Blagoevgrad, Bulgaria

DOI:

https://doi.org/10.11145/cb.v3i1.647

Abstract

Model selection is the process of choosing, from a set of candidate models, the model that best balances goodness of fit to the data against model complexity [1]. Different criteria for evaluating competing mathematical models for data fitting are available [2], [3].

This research has several specific objectives: (1) to generate artificial experimental data from known models; (2) to fit the data with various models of increasing complexity; (3) to verify whether the model used to generate the data can be correctly identified by two commonly used criteria, the Akaike information criterion (AIC) and the Bayesian information criterion (BIC), and to empirically assess and compare their performance.

The generation of the artificial experimental data and the curve fitting are performed using the GraphPad Prism software. ...
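The study itself uses GraphPad Prism, but the underlying comparison can be illustrated independently. The following Python sketch (not the authors' code; the model, noise level, and candidate set are illustrative assumptions) generates data from a known quadratic model, fits polynomials of increasing degree, and scores each fit with the least-squares forms of AIC and BIC, AIC = n·ln(RSS/n) + 2k and BIC = n·ln(RSS/n) + k·ln(n):

```python
import numpy as np

def aic_bic(rss, n, k):
    """AIC and BIC for a least-squares fit with residual sum of
    squares rss, n observations, and k estimated parameters."""
    aic = n * np.log(rss / n) + 2 * k
    bic = n * np.log(rss / n) + k * np.log(n)
    return aic, bic

rng = np.random.default_rng(0)
x = np.linspace(0, 10, 50)
# True generating model (an illustrative choice): a quadratic plus Gaussian noise.
y = 1.0 + 2.0 * x - 0.3 * x**2 + rng.normal(0.0, 1.0, x.size)

# Fit candidate polynomials of increasing complexity and score each one.
scores = {}
for degree in range(1, 6):
    coeffs = np.polyfit(x, y, degree)
    rss = np.sum((y - np.polyval(coeffs, x)) ** 2)
    k = degree + 1  # number of fitted coefficients
    scores[degree] = aic_bic(rss, x.size, k)

best_aic = min(scores, key=lambda d: scores[d][0])
best_bic = min(scores, key=lambda d: scores[d][1])
print("AIC selects degree", best_aic, "| BIC selects degree", best_bic)
```

Both criteria reward a lower RSS but penalize extra parameters; BIC's penalty, k·ln(n), grows with the sample size and so tends to favor simpler models than AIC, which is the kind of behavioral difference the paper sets out to quantify.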

Published

2016-03-28

Section

Conference Contributions