DSIR-426: BLOG POST 3

NE
1 min read · Jul 1, 2021

‘Chaos is a Ladder’

After only a fiscal quarter's worth of experience in fundamental base Python, greater challenges now await: applying newly minted concepts and techniques, in varying combinations, across a menu of model types. While model selection certainly depends on the unique nature of the data set under investigation, it is undoubtedly a luxury to have a full menu of models to choose from.

Although it is only the tip of the iceberg, below is a list of high-priority supervised machine learning models worth implementing in coding exercises:

  • Linear Regression
  • Logistic Regression
  • Ridge / Lasso / Elastic-Net
  • Bayesian Regression
  • Stochastic Gradient Descent (SGD)
  • Support Vector Machines (SVM)
  • K-Nearest Neighbors (KNN)
  • Naive Bayes (Gaussian / Multinomial / Bernoulli)
  • Decision Trees
  • Random Forest Regressor
  • Gradient Tree Boosting
  • AdaBoost Classifier / Regressor
  • Voting Classifier / Regressor

For further reference: https://scikit-learn.org/stable/supervised_learning.html
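One reason the full menu is manageable: every model on the list shares the same scikit-learn estimator API. A minimal sketch, assuming scikit-learn is installed; the synthetic data and hyperparameters here are illustrative choices, not from the post:

```python
# Every scikit-learn estimator is trained with .fit(X, y) and
# queried with .predict(X) / .score(X, y) -- swapping models is trivial.
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression, Ridge
from sklearn.neighbors import KNeighborsRegressor
from sklearn.tree import DecisionTreeRegressor

# Toy regression data (stand-in for a real data set under investigation)
X, y = make_regression(n_samples=200, n_features=5, noise=10.0, random_state=42)

models = {
    "linear": LinearRegression(),
    "ridge": Ridge(alpha=1.0),
    "knn": KNeighborsRegressor(n_neighbors=5),
    "tree": DecisionTreeRegressor(max_depth=3, random_state=42),
}

for name, model in models.items():
    model.fit(X, y)                       # identical call for every estimator
    print(name, round(model.score(X, y), 3))  # R^2 on the training data
```

Because the interface is uniform, learning one model's workflow carries over almost verbatim to the rest of the list.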

Experimental stress-testing of different model types can be useful in learning which models do (or definitely do not) work for different sets of data. Mastering dozens of model types and Python library imports may seem daunting or exhausting, though over time it becomes attainable and far less intimidating than it appears, given steady, consistent practice reps.
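One common way to run that kind of stress test is to score each candidate with cross-validation. A hedged sketch, assuming scikit-learn is installed; the `load_diabetes` data set and the hyperparameters are stand-ins for whatever data you are actually investigating:

```python
# Compare several candidate regressors via 5-fold cross-validation.
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import ElasticNet, Lasso
from sklearn.model_selection import cross_val_score

X, y = load_diabetes(return_X_y=True)

candidates = {
    "lasso": Lasso(alpha=0.1),
    "elastic_net": ElasticNet(alpha=0.1),
    "random_forest": RandomForestRegressor(n_estimators=100, random_state=42),
}

results = {}
for name, model in candidates.items():
    # cross_val_score refits the model on each fold and returns one R^2 per fold
    scores = cross_val_score(model, X, y, cv=5, scoring="r2")
    results[name] = scores.mean()
    print(f"{name}: mean R^2 = {scores.mean():.3f} (+/- {scores.std():.3f})")
```

The same loop body works for every model on the list above, which is exactly what makes this kind of experimentation cheap.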

While the list above hardly encapsulates the many models available through scikit-learn and other library imports, learning how to implement and interpret these essential models will certainly make picking up the others a breeze thereafter!
