Hyper-parameter tuning is a key step in finding the optimal configuration of a machine learning model. Determining the best hyper-parameters takes a great deal of time, especially when the objective function is costly to evaluate or a large number of parameters must be tuned. Compared with conventional machine learning algorithms, neural networks demand more hyper-parameter tuning because many parameters must be adjusted together, and depending on the fine-tuning, model accuracy can vary between 25% and 90%. Among the most effective techniques for tuning hyper-parameters in deep learning are grid search, random search, and Bayesian optimization. Each method has advantages and disadvantages over the others. For example, grid search has proven to be an effective tuning technique, but it suffers from drawbacks such as trying too many combinations and performing poorly when many parameters must be tuned at once. In our work, we determine, show, and analyze the efficiencies obtained on a real-world synthetic polymer dataset for different parameters and tuning methods.
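As a minimal sketch of the trade-off described above, the following compares exhaustive grid search against random search with scikit-learn. The dataset, model, and parameter ranges here are illustrative assumptions, not the paper's polymer dataset: grid search must fit every combination in the grid, while random search spends a fixed budget of draws, which scales better as the number of tuned parameters grows.

```python
# Illustrative comparison of grid search vs. random search
# (synthetic data standing in for the paper's polymer dataset).
from scipy.stats import randint
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import GridSearchCV, RandomizedSearchCV

X, y = make_regression(n_samples=200, n_features=10, noise=0.1, random_state=0)

# Grid search tries every combination: 3 x 3 = 9 candidates per CV fold.
grid = GridSearchCV(
    RandomForestRegressor(random_state=0),
    param_grid={"n_estimators": [50, 100, 200], "max_depth": [3, 5, None]},
    cv=3,
)
grid.fit(X, y)

# Random search samples a fixed budget (9 draws) from distributions;
# the budget stays constant even if more hyper-parameters are added.
rand = RandomizedSearchCV(
    RandomForestRegressor(random_state=0),
    param_distributions={"n_estimators": randint(50, 300),
                         "max_depth": randint(2, 10)},
    n_iter=9,
    cv=3,
    random_state=0,
)
rand.fit(X, y)

print("grid search best: ", grid.best_params_)
print("random search best:", rand.best_params_)
```

With many parameters the grid size grows multiplicatively (the "too many combinations" drawback noted above), whereas random search's `n_iter` budget is chosen by the user.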

How to Cite
HOSSAIN, Md. Riyad; TIMMER, Douglas. Machine Learning Model Optimization with Hyper Parameter Tuning Approach. Global Journal of Computer Science and Technology, [S.l.], sep. 2021. ISSN 0975-4172. Available at: <https://computerresearch.org/index.php/computer/article/view/2059>. Date accessed: 15 aug. 2022.