Behavioral difference between Gradient Descent and Hill Climbing
I'm trying to understand the difference between these two algorithms and how they differ in solving a problem. I have looked at the algorithms and their internals. It would be good to hear from others who are already experienced with them. Especially, I would like to know how they would behave differently on the same problem.

Thank you.

Shostakovich answered 18/11, 2016 at 8:33 Comment(0)
Difference

The main difference between the two is the direction in which they move to reach the local minimum (or maximum).

  • In Hill Climbing we change only one element of the vector at a time, evaluate the function, and keep the change if the value improves. We keep changing one element at a time until no such move improves the position. In 3D space, each move can be visualised as a step along one of the axial directions, i.e. along the x, y, or z axis.
  • In Gradient Descent we take steps in the direction of the negative gradient at the current point to reach the minimum (the positive gradient in the case of a maximum). For example, in 3D space the direction need not be axial.
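The contrast above can be sketched in code. This is a minimal illustration, not a production optimiser: the quadratic test function, step sizes, and iteration counts are assumptions chosen for the example. Both routines seek the minimum of f(x, y) = (x − 1)² + (y + 2)², but hill climbing only ever moves along one axis per step, while gradient descent follows the (generally non-axial) negative gradient.

```python
def f(x, y):
    # Illustrative convex function with its minimum at (1, -2).
    return (x - 1) ** 2 + (y + 2) ** 2

def hill_climb(x, y, step=0.1, iters=1000):
    # Try changing one coordinate at a time; keep a move only if f improves.
    for _ in range(iters):
        best = (x, y)
        for dx, dy in [(step, 0), (-step, 0), (0, step), (0, -step)]:
            if f(x + dx, y + dy) < f(*best):
                best = (x + dx, y + dy)
        if best == (x, y):   # no axial move improves: stop at a local optimum
            break
        x, y = best
    return x, y

def gradient_descent(x, y, lr=0.1, iters=1000):
    # Step along the negative gradient; the direction need not be axial.
    for _ in range(iters):
        gx, gy = 2 * (x - 1), 2 * (y + 2)   # analytic gradient of f
        x, y = x - lr * gx, y - lr * gy
    return x, y

print(hill_climb(0.0, 0.0))        # approaches (1, -2) via axis-aligned moves
print(gradient_descent(0.0, 0.0))  # approaches (1, -2) along the gradient
```

Note that hill climbing here can only land within one step size of the optimum, whereas gradient descent converges smoothly because it uses derivative information.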
Mossbunker answered 18/11, 2016 at 9:9 Comment(1)
If the function is continuous and differentiable, should we expect a different result? Or would hill climbing be better suited to a search space whose arguments are discrete? – Throve
In addition to radbrawler's answer, they are nevertheless similar in the greedy approach that both use to find local minima/maxima. One may consider Gradient Descent the continuous version of the discrete Hill Climbing technique.

Italia answered 2/5, 2018 at 9:45 Comment(1)
No, gradient ascent/descent cannot be considered a continuous analog of discrete Hill Climbing. An expectation-maximization algorithm can, such as Baum-Welch for Hidden Markov Models. Hill Climbing will converge to the nearest local maximum, whereas gradient ascent will not necessarily. – Allcot