Understanding the OLS method for Simple Linear Regression
Valentina Alto
Aug 17, 2019
Linear Regression is a family of algorithms employed in supervised machine learning tasks (to learn more about supervised learning, you can read my former article here). Since supervised ML tasks are normally divided into classification and regression, we can place Linear Regression algorithms in the latter category. It differs from classification because of the nature of the target variable: in classification, the target is a categorical value (‘yes/no’, ‘red/blue/green’, ‘spam/not spam’…); regression, on the other hand, involves a numerical, continuous target, hence the algorithm is asked to predict a continuous number rather than a class or category. For example, imagine you want to predict the price of a house based on some of its features: the output of your model will be the price, hence a continuous number.
Regression tasks can be divided into two main groups: those which use only one feature to predict the target, and those which use more than one feature for that purpose. To give you an example, consider the house task above: if you want to predict the price based only on the house’s square meters, you fall into the first situation (one feature); if you predict the price based on, say, its square meters, its position, and the liveability of the surrounding environment, you fall into the second situation (multiple features, in this case three).
In the first scenario, the algorithm you are likely to employ is Simple Linear Regression, which is the one we are going to talk about in this article. Whenever you have more than one feature available to explain the target variable, you are likely to employ a Multiple Linear Regression instead.
Simple Linear Regression is a statistical model, widely used in ML regression tasks, based on the idea that the relationship between two variables can be explained by the following formula:

yᵢ = α + βxᵢ + εᵢ
where εᵢ is the error term, and α and β are the true (but unobserved) parameters of the regression. The parameter β represents the variation of the dependent variable when the independent variable varies by one unit: if β is equal to 0.75, then when x increases by 1, the dependent variable increases by 0.75. The parameter α, on the other hand, represents the value of the dependent variable when the independent one is equal to zero (the intercept of the regression line).
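To tie this back to the house example (with made-up numbers purely for illustration): if α = 20,000 € and β = 1,500 € per square meter, the predicted price of a 100 m² house would be 20,000 + 1,500 · 100 = 170,000 €.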
Let’s visualize it graphically:
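Since the original figure may not render here, the following minimal Python sketch reproduces the picture on synthetic, made-up data (α = 2, β = 0.75 are arbitrary illustrative values): observations scattered around the line α + βx, with the vertical distances being the error terms εᵢ.

```python
import numpy as np
import matplotlib.pyplot as plt

# Made-up synthetic data: y = 2 + 0.75*x plus Gaussian noise
rng = np.random.default_rng(42)
x = np.linspace(0, 10, 50)
y = 2 + 0.75 * x + rng.normal(scale=1.0, size=x.size)

plt.scatter(x, y, label="observations yᵢ")
plt.plot(x, 2 + 0.75 * x, color="red", label="line α + βx (α=2, β=0.75)")
plt.xlabel("independent variable x")
plt.ylabel("dependent variable y")
plt.legend()
plt.show()
```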
Now, the idea of Simple Linear Regression is to find the parameters α and β for which the error term is minimized. To be more precise, the model minimizes the squared errors: we do not want positive errors to be compensated by negative ones, since both penalize our model equally.
This procedure is called Ordinary Least Squares (OLS).
Let’s work through this optimization problem step by step. First, we can write the sum of squared errors as a function of α and β:

SSE(α, β) = Σᵢ εᵢ² = Σᵢ (yᵢ − α − βxᵢ)²
We can then set our optimization problem as follows:

(α̂, β̂) = argmin over α, β of Σᵢ (yᵢ − α − βxᵢ)²
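Before solving this analytically, a quick sketch: the same minimum can be found numerically, for instance with scipy.optimize.minimize, applied to the same kind of made-up synthetic data as in the plotting snippet above. This is only a sanity check, not part of the derivation:

```python
import numpy as np
from scipy.optimize import minimize

# Same kind of made-up synthetic data as in the plotting snippet
rng = np.random.default_rng(0)
x = np.linspace(0, 10, 200)
y = 2 + 0.75 * x + rng.normal(scale=1.0, size=x.size)

def sse(params):
    """Sum of squared errors for a candidate (alpha, beta) pair."""
    alpha, beta = params
    residuals = y - alpha - beta * x
    return np.sum(residuals ** 2)

# Minimize numerically, starting from an arbitrary initial guess
result = minimize(sse, x0=[0.0, 0.0])
print("numerical (alpha, beta):", result.x)  # should be close to (2, 0.75)
```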
So let’s start with β. Setting the partial derivative of the squared error sum with respect to β equal to zero gives the first-order condition:

∂SSE/∂β = −2 Σᵢ xᵢ(yᵢ − α − βxᵢ) = 0

Doing the same for α:

∂SSE/∂α = −2 Σᵢ (yᵢ − α − βxᵢ) = 0 → α̂ = ȳ − β̂x̄

Substituting α̂ into the condition for β and rearranging yields the familiar closed-form estimator:

β̂ = Σᵢ (xᵢ − x̄)(yᵢ − ȳ) / Σᵢ (xᵢ − x̄)² = Cov(x, y) / Var(x)

In words: the slope is the covariance between x and y divided by the variance of x, and the intercept places the fitted line through the point of means (x̄, ȳ).
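As a final sanity check, here is a short sketch (again on made-up synthetic data) that computes α̂ and β̂ with these closed-form formulas and compares them with numpy’s built-in least-squares fit:

```python
import numpy as np

# Same made-up synthetic data as before
rng = np.random.default_rng(0)
x = np.linspace(0, 10, 200)
y = 2 + 0.75 * x + rng.normal(scale=1.0, size=x.size)

# Closed-form OLS estimates derived above
beta_hat = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
alpha_hat = y.mean() - beta_hat * x.mean()

# Cross-check against numpy's degree-1 polynomial least-squares fit
beta_np, alpha_np = np.polyfit(x, y, deg=1)

print(f"closed-form: alpha={alpha_hat:.3f}, beta={beta_hat:.3f}")
print(f"np.polyfit : alpha={alpha_np:.3f}, beta={beta_np:.3f}")
```

Both approaches agree up to numerical precision, which confirms the derivation.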