Home

Welcome to Beginning with ML, a blog on machine learning for beginners. Most of these posts are short, so you can learn in bite-sized pieces. Below are the posts published so far. Some posts are also accompanied by scanned notes provided by Dr. Snehanshu Saha, Professor of Computer Science at PES University.

1 – Introduction and Prerequisites

2 – Linear Regression

2.1 – Feature Scaling

3 – Logistic Regression

3.1 – The Newton-Raphson Method

3.2 – Multi-class Classification

3.3 – So what is maximum likelihood estimation, anyway?

3.4 – Softmax Regression

3.5 – Regularization

3.6 – Completing Softmax Regression: Implementation and Regularization

3.7 – Putting it all together: MNIST digit recognition

3.8 – Where do the cost functions come from?

4 – Finding a good learning rate

5 – Evaluating Classification Models

6 – Locally Weighted Linear Regression

6.1 – Implementing Locally Weighted Regression

7 – Evaluating Regression Models

8 – Gaussian Discriminant Analysis

8.1 – Implementing Gaussian Discriminant Analysis – Iris data classification

8.2 – The Naive Bayes algorithm

8.3 – The Naive Bayes Multinomial Event Model

9 – Excursus: Principal Components Analysis

10 – Decision Trees

10.1 – Implementing the ID3 algorithm

10.2 – Improving the ID3 algorithm

11 – Support Vector Machines

11.1 – Convex Optimization

11.2 – Posing the dual problem

11.3 – Implementing a linear 2-class SVM

11.4 – SVMs with kernels

11.5 – Regularization and Soft Margin SVM

12 – Introduction to Cluster Analysis

12.1 – k-Means Clustering

12.2 – DBSCAN

12.3 – Hierarchical Clustering

12.4 – Gaussian Mixture Models

12.5 – Spectral Clustering

12.6 – Evaluating clusters

12.7 – Finding the number of clusters

13 – Excursus: k-Nearest Neighbors

14 – Markov Models

15 – Neural Networks

15.1 – Activation functions

15.2 – Improving performance

15.3 – Disadvantages of neural networks

15.4 – Next steps