- [[linear classifiers]], [[dot product]], [[loss function]], [[Python]]
# Idea
The [[dot product]] is used to express how [[linear classifiers]] make predictions.
Step 1: $\text{prediction} = \text{coefficients} \cdot \text{features} + \text{intercept}$
Step 2: If $prediction \geq 0$, predict one class; if $prediction < 0$, predict the other class.
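The two steps can be sketched with plain NumPy (the coefficients, intercept, and features below are made-up values for illustration):

```python
import numpy as np

# hypothetical fitted parameters and one sample's features
coefficients = np.array([1.0, -2.0, 0.5])
intercept = -1.0
features = np.array([3.0, 1.0, 2.0])

# Step 1: raw model output via the dot product
prediction = coefficients @ features + intercept  # 1*3 - 2*1 + 0.5*2 - 1 = 1.0

# Step 2: threshold the raw output at zero to pick a class
predicted_class = 1 if prediction >= 0 else 0
print(prediction, predicted_class)  # 1.0 1
```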
These steps are identical for [[logistic regression]] and [[linear support vector machines]]: the two models are fitted differently (they minimize different loss functions), but they use the same prediction function.
Example logistic regression in [[scikit-learn]]:
```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# hypothetical toy dataset so the example is runnable
X, y = make_classification(n_samples=100, n_features=4, random_state=0)

lr = LogisticRegression()
lr.fit(X, y)

lr.predict(X)[0]                        # predicted class label for the first sample
lr.coef_ @ X[0] + lr.intercept_         # Step 1: raw model output
(lr.coef_ @ X[0] + lr.intercept_) >= 0  # Step 2: True iff the predicted label is the positive class
```
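To see that the two models share the prediction function, the same manual computation can be run against both `LogisticRegression` and `LinearSVC`; a minimal sketch, assuming a made-up `make_classification` dataset:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.svm import LinearSVC

# hypothetical toy dataset for illustration
X, y = make_classification(n_samples=100, n_features=4, random_state=0)

for model in (LogisticRegression(), LinearSVC()):
    model.fit(X, y)
    raw = X @ model.coef_.ravel() + model.intercept_          # Step 1 for every sample
    manual = np.where(raw >= 0, model.classes_[1], model.classes_[0])  # Step 2
    print(np.array_equal(manual, model.predict(X)))  # same predictions either way
```

The fitting differs (logistic vs. hinge loss), yet thresholding the raw dot-product output reproduces `predict` for both models.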
# References
- https://campus.datacamp.com/courses/linear-classifiers-in-python/loss-functions?ex=1
- https://campus.datacamp.com/courses/linear-classifiers-in-python/loss-functions?ex=3
# Figures
- https://campus.datacamp.com/courses/linear-classifiers-in-python/loss-functions?ex=1
![[Pasted image 44.png]]
![[Pasted image 45.png]]
![[Pasted image 46.png]]