Why gradient descent when we can solve linear regression analytically?

What is the benefit of using gradient descent in the linear regression setting? It looks like we can solve the problem (finding the parameters theta_0, ..., theta_n that minimize the cost function) analytically, so why would we still use gradient descent to do the same thing? Thanks.

Castlereagh answered 12/8, 2013 at 16:18 Comment(1)
This is a great question. It's very common for lecturers to go directly into gradient descent to find the solution, which is confusing when a student remembers that the ordinary least squares solution doesn't require an optimization algorithm; confusion which could quickly be dispensed with by acknowledging what @Ionic has provided here.Merow

When you use the normal equations to solve the cost function analytically, you have to compute:

$\theta = (X^\top X)^{-1} X^\top y$

where X is your matrix of input observations and y your output vector. The problem with this operation is the time complexity of calculating the inverse of an n×n matrix, which is O(n^3); as n increases, it can take a very long time to finish.

When n is low (n < 1000 or n < 10000) you can think of the normal equations as the better option for calculating theta; for larger values, however, gradient descent is much faster, so the only reason is the time :)
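
As a concrete sketch of the two approaches (Python with NumPy; the toy data, step size, and iteration count below are illustrative choices of mine, not from the answer):

```python
import numpy as np

# Toy data (illustrative): 50 points, one feature plus an intercept column.
rng = np.random.default_rng(0)
X = np.column_stack([np.ones(50), rng.uniform(0, 10, 50)])
y = X @ np.array([2.0, 3.0]) + rng.normal(0, 0.1, 50)

# Normal equations: theta = (X^T X)^{-1} X^T y.
# np.linalg.solve factors X^T X instead of forming the explicit inverse,
# but the cost is still cubic in the number of features.
theta_closed = np.linalg.solve(X.T @ X, X.T @ y)

# Batch gradient descent on the same cost J(theta) = (1/2m)||X theta - y||^2.
theta_gd = np.zeros(2)
alpha, m = 0.01, len(y)
for _ in range(20000):
    theta_gd -= alpha * (X.T @ (X @ theta_gd - y)) / m
```

With only two parameters the closed form wins easily; the iterative loop only pays off once the feature count makes the cubic solve (or even forming X^T X) the bottleneck.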

Ionic answered 12/8, 2013 at 19:15 Comment(7)
Is n the number of samples or features?Madel
n is the number of features.Coincidence
This is not necessarily the bottleneck. To even use the normal equations, we typically make a non-singular assumption so that $n > p$ (here I'm using the notation that $n$ is number of data points and $p$ is number of features). This means the bottleneck is $O(np^2)$ to form $X^\top X$, not the $O(p^3)$ inversion.Trademark
Just a nitty-gritty detail... matrix inversion can be performed in less than O(n^3): en.wikipedia.org/wiki/…Maladjusted
I was excited for a second after reading @Maladjusted's comment, but the best matrix inversion is O(n^2.4)Merow
to be consistent with the nitty-grittiness, it is actually less than 2.4 :^)Maladjusted
As a side note - for very large matrices X, the way the inverse X^-1 is computed is using gradient descent (well, a more complex version of it - the preconditioned conjugate gradient). I am not talking only about this context; I mean any inverse, which is kind of cool. For example, MatLab has this implementation: mathworks.com/help/matlab/ref/pcg.htmlCartomancy

You should provide more details about your problem - what exactly are you asking about? Are we talking about linear regression in one or many dimensions? Simple or generalized?

In general, why do people use GD?

  • it is easy to implement
  • it is a very generic optimization technique - even if you change your model to a more general one, you can still use it

So what about analytical solutions? Well, we do use them; your claim is simply false here (if we are talking in general). For example, the OLS method is a closed-form, analytical solution which is widely used. If you can use an analytical solution and it is computationally affordable (as sometimes GD is simply cheaper or faster), then you can - and even should - use it.

Nevertheless, this is always a matter of pros and cons - analytical solutions are strongly tied to the model, so implementing them can be inefficient if you plan to generalize/change your models in the future. They are sometimes less efficient than their numerical approximations, and sometimes simply harder to implement. If none of the above is true, you should use the analytical solution - and people do, really.

To sum up, you would rather use GD over an analytical solution if:

  • you are considering changes in the model, generalizations, adding some more complex terms/regularization/modifications
  • you need a generic method because you do not know much about the future of the code and the model (you are only one of the developers)
  • the analytical solution is more expensive computationally, and you need efficiency
  • the analytical solution requires more memory than you have
  • the analytical solution is hard to implement and you need easy, simple code
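
The "generic method" point can be made concrete: the same descent loop serves different models if you only swap the gradient. A minimal sketch (the data, step size, and penalty weight are illustrative assumptions of mine):

```python
import numpy as np

rng = np.random.default_rng(1)
X = np.column_stack([np.ones(40), rng.normal(size=40)])
y = X @ np.array([1.0, -2.0]) + rng.normal(0, 0.05, 40)
m = len(y)

def gd(grad, theta, alpha=0.1, steps=5000):
    # Generic descent loop: the model enters only through grad().
    for _ in range(steps):
        theta = theta - alpha * grad(theta)
    return theta

def ols_grad(t):
    # Gradient of the plain least-squares cost (1/2m)||X t - y||^2.
    return X.T @ (X @ t - y) / m

def ridge_grad(t):
    # Same model plus an L2 penalty: one extra term, same loop.
    lam = 0.5
    return X.T @ (X @ t - y) / m + lam * t

theta_ols = gd(ols_grad, np.zeros(2))
theta_ridge = gd(ridge_grad, np.zeros(2))
```

Changing the model meant writing one new gradient function; a closed-form implementation would have to be re-derived and re-coded for each variant.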
Mediacy answered 12/8, 2013 at 18:44 Comment(0)

I saw a very good answer from https://stats.stackexchange.com/questions/23128/solving-for-regression-parameters-in-closed-form-vs-gradient-descent

Basically, the reasons are:

1. For most nonlinear regression problems there is no closed-form solution.

2. Even in linear regression (one of the few cases where a closed-form solution is available), it may be impractical to use the formula; the linked answer gives an example of one way in which this can happen.
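
The linked answer's own example is not reproduced here, but as one hedged illustration (my own construction, not necessarily theirs): with nearly collinear features, $X^\top X$ becomes so ill-conditioned that the textbook inverse formula is numerically useless, and practical code falls back on other methods (an SVD-based solver below; iterative methods such as gradient descent are another escape hatch).

```python
import numpy as np

# Two nearly collinear columns (illustrative construction): X^T X becomes
# so ill-conditioned that inverting it, as the closed form asks, loses
# essentially all precision.
rng = np.random.default_rng(3)
t = np.linspace(0.0, 1.0, 6)
X = np.column_stack([np.ones(6), t, t + 1e-9 * rng.normal(size=6)])
y = X @ np.array([1.0, 2.0, 3.0])  # exact, noise-free target

cond = np.linalg.cond(X.T @ X)  # astronomically large for this X

# An SVD-based least-squares solver still recovers a reliable fit.
theta_lstsq, *_ = np.linalg.lstsq(X, y, rcond=None)
```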

Taxis answered 20/8, 2014 at 22:15 Comment(0)

Another reason is that gradient descent is immediately useful when you generalize linear regression, especially if the problem doesn't have a closed-form solution - for example in Lasso (which adds a regularization term consisting of the sum of absolute values of the weight vector).
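
Lasso is typically fit with exactly such a gradient-style method. A minimal sketch of proximal gradient descent (ISTA) - the data, penalty weight, and step size below are illustrative assumptions of mine:

```python
import numpy as np

def soft_threshold(z, t):
    # Proximal operator of the L1 norm: shrink each entry toward zero by t.
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_ista(X, y, lam, alpha=0.01, steps=5000):
    # Minimizes (1/2m)||X theta - y||^2 + lam * ||theta||_1:
    # a gradient step on the smooth part, then the L1 shrinkage step.
    m, p = X.shape
    theta = np.zeros(p)
    for _ in range(steps):
        grad = X.T @ (X @ theta - y) / m
        theta = soft_threshold(theta - alpha * grad, alpha * lam)
    return theta

rng = np.random.default_rng(2)
X = rng.normal(size=(100, 5))
true_theta = np.array([3.0, 0.0, 0.0, -2.0, 0.0])  # sparse ground truth
y = X @ true_theta + rng.normal(0, 0.1, 100)

theta = lasso_ista(X, y, lam=0.2)
```

The shrinkage step has no closed-form counterpart at the level of the whole problem, which is why an iterative method is the standard choice here.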

Madi answered 7/9, 2017 at 8:23 Comment(0)
