
- Making predictions using the model
- Evaluating loss
- Remember RSS? \[\text{RSS} = \sum_{i=1}^{N}(y_i - [\color{blue}{w_0} + \color{blue}{w_1}x_i])^2 \]
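
As a quick illustration of the formula above, here is a minimal sketch (the data points and the parameter values \(w_0, w_1\) are made up for illustration) that evaluates RSS for a linear model with NumPy:

```python
import numpy as np

# Toy data (hypothetical values, for illustration only).
x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([1.1, 1.9, 3.2, 3.8])

# Assumed parameter values: w0 (intercept) and w1 (slope).
w0, w1 = 1.0, 1.0

# RSS = sum_i (y_i - [w0 + w1 * x_i])^2
residuals = y - (w0 + w1 * x)
rss = np.sum(residuals ** 2)
print(f"RSS = {rss:.3f}")
```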

Constant model: only the \(w_0\) parameter is used (\(y = w_0\)).

Linear model: \(y = w_0 + w_1x\)

Quadratic model: \(y = w_0 + w_1x + w_2x^2\)


Training error decreases as model complexity increases.
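
To see this concretely, here is a hedged sketch (synthetic data with a made-up noise level) that fits the constant, linear, and quadratic models by least squares and prints the training RSS of each. The training RSS can only stay the same or decrease as the degree grows, because each simpler model is a special case of the more complex one:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic training data (assumed for illustration): a noisy quadratic trend.
x = np.linspace(0, 5, 30)
y = 1.0 + 0.5 * x + 0.3 * x**2 + rng.normal(scale=1.0, size=x.shape)

# Fit constant (degree 0), linear (degree 1), and quadratic (degree 2) models
# by least squares and report the training RSS of each.
for degree, name in [(0, "constant"), (1, "linear"), (2, "quadratic")]:
    coeffs = np.polyfit(x, y, deg=degree)
    y_hat = np.polyval(coeffs, x)
    rss = np.sum((y - y_hat) ** 2)
    print(f"{name:9s} model: training RSS = {rss:.2f}")
```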

Is a complex model better?
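
Not necessarily. One way to probe this, sketched below under assumed synthetic data and an arbitrary holdout split, is to evaluate the fitted models on points they were not trained on: a high-degree polynomial drives the training RSS down further, yet its held-out RSS is typically worse than that of a simpler model that matches the true trend.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic data from a noisy linear trend (assumed for illustration).
x = np.linspace(0, 1, 40)
y = 1.0 + 0.8 * x + rng.normal(scale=0.3, size=x.shape)

# Simple holdout split: even indices for training, odd indices for testing.
x_train, y_train = x[::2], y[::2]
x_test, y_test = x[1::2], y[1::2]

def rss(coeffs, xs, ys):
    """Residual sum of squares of a fitted polynomial on (xs, ys)."""
    return np.sum((ys - np.polyval(coeffs, xs)) ** 2)

# A flexible degree-9 fit lowers the training RSS, but its held-out RSS
# is often higher than that of the simple linear fit.
for degree in (1, 9):
    coeffs = np.polyfit(x_train, y_train, deg=degree)
    print(f"degree {degree}: train RSS = {rss(coeffs, x_train, y_train):.3f}, "
          f"test RSS = {rss(coeffs, x_test, y_test):.3f}")
```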




The black line indicates the KNN decision boundary with K = 10. The optimal decision boundary is shown as a purple dashed line.
The black curves indicate the KNN decision boundaries with K = 1 and K = 100. The optimal decision boundary is shown as a purple dashed line. With K = 1, the decision boundary is overly flexible; with K = 100, it is not sufficiently flexible.
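
The same effect can be seen in code. Below is a hedged sketch (synthetic two-class data with arbitrary class means) using scikit-learn's KNeighborsClassifier with K = 1, 10, and 100: small K tracks individual training points and essentially memorizes the training set, while very large K averages over many neighbors and gives a much smoother, less flexible boundary.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)

# Synthetic two-class data (assumed for illustration): two overlapping Gaussian blobs.
n = 200
X = np.vstack([rng.normal(loc=[0.0, 0.0], scale=1.0, size=(n, 2)),
               rng.normal(loc=[1.5, 1.5], scale=1.0, size=(n, 2))])
y = np.array([0] * n + [1] * n)

# Compare training accuracy as K varies: K = 1 fits the training data perfectly
# (overly flexible boundary), while K = 100 smooths the boundary heavily.
for k in (1, 10, 100):
    knn = KNeighborsClassifier(n_neighbors=k).fit(X, y)
    acc = knn.score(X, y)
    print(f"K = {k:3d}: training accuracy = {acc:.3f}")
```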
