SVMs are used for classification and regression. They find the hyperplane that separates the classes with the widest possible margin (or, for regression, fits an ε-tube around the data). Poke the sliders, try Auto Maximize, and spin the 3D kernel magic!
The points nearest the line glow: these are the support vectors.
As C increases, the model tries harder to avoid errors and the margin tends to shrink (teaching sketch, not exact math).
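To make the C story concrete, here is a rough scikit-learn sketch (the two toy blobs are invented for illustration, not the demo's data): fit a linear SVC at a few values of C and print the margin width 2/‖w‖, which should tend to shrink as C grows.

```python
# Rough sketch: margin width 2/||w|| as C grows (toy blobs invented for illustration).
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = np.vstack([rng.normal([-2, -2], 1.0, (20, 2)),   # class 0 blob
               rng.normal([ 2,  2], 1.0, (20, 2))])  # class 1 blob
y = np.array([0] * 20 + [1] * 20)

for C in [0.01, 0.1, 1, 10, 100]:
    clf = SVC(kernel="linear", C=C).fit(X, y)
    w = clf.coef_[0]
    margin = 2 / np.linalg.norm(w)  # distance between the two margin lines
    print(f"C={C:>6}: margin ≈ {margin:.2f}, support vectors = {len(clf.support_vectors_)}")
```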
Features: height & weight. A linear SVM works when big dogs lie on one side of a straight line and small cats on the other. The closest cat and dog to that line are the support vectors.
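A minimal sketch of that cat-vs-dog setup (heights and weights invented for illustration): two features per animal, a linear SVC, and the glowing support vectors read back from the fitted model.

```python
# Cat-vs-dog sketch (heights/weights invented for illustration).
import numpy as np
from sklearn.svm import SVC

# features: [height_cm, weight_kg]; label 0 = cat, 1 = dog
X = np.array([[23, 4.0], [25, 4.5], [24, 3.8], [26, 5.0],      # cats
              [55, 20.0], [60, 25.0], [50, 18.0], [65, 30.0]])  # dogs
y = np.array([0, 0, 0, 0, 1, 1, 1, 1])

clf = SVC(kernel="linear", C=1.0).fit(X, y)

# The closest cat and dog to the boundary show up as support vectors.
print("support vectors:\n", clf.support_vectors_)
print("prediction for a 28 cm, 6 kg animal:", clf.predict([[28, 6.0]]))  # expected: 0 (cat)
```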
A straight line fails here. A kernel SVM lifts the points into a higher dimension where a flat plane separates them; mapped back down, that plane becomes a curvy boundary.
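One way to see the "lift" in code, as a sketch rather than the demo's exact setup: an RBF-kernel SVC on a blob-inside-a-ring dataset (points invented for illustration) that no straight line can split.

```python
# Sketch: RBF kernel on data a straight line can't split (points invented for illustration).
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
angles = rng.uniform(0, 2 * np.pi, 60)
inner = rng.normal(0, 0.3, (60, 2))                    # class 0: blob at the center
outer = np.c_[3 * np.cos(angles), 3 * np.sin(angles)]  # class 1: ring around it
X = np.vstack([inner, outer])
y = np.array([0] * 60 + [1] * 60)

linear = SVC(kernel="linear").fit(X, y)
rbf = SVC(kernel="rbf", gamma=0.5).fit(X, y)
print("linear accuracy:", linear.score(X, y))  # struggles: no separating line exists
print("rbf accuracy:   ", rbf.score(X, y))     # the kernel lift makes the classes separable
```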
Fit a line with an ε-tube. Points outside the tube (errors > ε) become the support vectors that adjust the prediction.
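A matching SVR sketch (noisy line invented for illustration): only points whose error exceeds ε end up as support vectors, and they are what the fitted line leans on.

```python
# Sketch: SVR with an epsilon-tube (noisy line invented for illustration).
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(2)
X = np.linspace(0, 10, 40).reshape(-1, 1)
y = 1.5 * X.ravel() + 2 + rng.normal(0, 1.0, 40)  # y ≈ 1.5x + 2 plus noise

reg = SVR(kernel="linear", C=1.0, epsilon=0.5).fit(X, y)

# Only points outside the epsilon-tube become support vectors.
print("points:", len(X), "-> support vectors:", len(reg.support_))
print("prediction at x=5:", reg.predict([[5.0]]))
```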