American Express Interview Question | GDM
Question
Do gradient descent methods always converge to the same point?
Answers (5)
No, gradient descent does not always converge to the same point, and it does not always reach the global minimum. It depends on where we initialize the weights. If we start near a local minimum, there is a high chance the algorithm gets trapped in that minimum and fails to reach the global minimum. Hence random weight initialization is often repeated multiple times to improve the chances of finding the global minimum.
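A minimal sketch of this point, assuming a toy one-dimensional objective f(x) = x^4 - 2x^2 + 0.3x (chosen here for illustration, not taken from the question) that has one global and one local minimum: the same update rule with the same learning rate ends up in different minima depending only on the starting point.

```python
def f(x):
    # toy non-convex objective: global minimum near x ~ -1.04, local minimum near x ~ 0.96
    return x**4 - 2 * x**2 + 0.3 * x

def grad_f(x):
    # derivative of f
    return 4 * x**3 - 4 * x + 0.3

def gradient_descent(x0, lr=0.01, steps=1000):
    # plain gradient descent: repeatedly step against the gradient
    x = x0
    for _ in range(steps):
        x -= lr * grad_f(x)
    return x

# Same algorithm, same learning rate, different initializations:
x_left  = gradient_descent(x0=-2.0)   # settles near x ~ -1.04 (the global minimum)
x_right = gradient_descent(x0=+2.0)   # settles near x ~  0.96 (a local minimum)

print(f"start -2.0 -> x = {x_left:.3f}, f(x) = {f(x_left):.3f}")
print(f"start +2.0 -> x = {x_right:.3f}, f(x) = {f(x_right):.3f}")
```

The two runs report different end points with different objective values, which is exactly why the answer above suggests trying several random initializations.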
It is not necessarily the case. It depends on the starting point on the curve and on the learning rate of the process.
Sometimes the gradient descent process can also get stuck in a local minimum.
No, they don't always. In some cases the method reaches only a local minimum or some other local optimum.
So you don't always reach the global optimum. It depends on the data, the starting conditions, and your learning rate.
No, they don't always, because in some cases the method reaches only a local minimum. It depends on the starting point on the curve and the learning rate of the process.
No, it depends on how we initialize it and on the learning rate (alpha). If alpha is too large, the updates may overshoot and fail to converge to the global minimum, whereas a very small alpha may converge very slowly or leave the process stuck at a local minimum.
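A short sketch of the learning-rate point, here on the simple quadratic f(x) = x^2 whose only minimum is at x = 0; the function and the alpha values are assumed purely for illustration.

```python
def grad(x):
    # derivative of f(x) = x^2
    return 2 * x

def run(alpha, x0=3.0, steps=50):
    # run a fixed number of gradient-descent steps with learning rate alpha
    x = x0
    for _ in range(steps):
        x -= alpha * grad(x)
    return x

print(run(alpha=1.10))    # too large: the iterates overshoot and diverge
print(run(alpha=0.10))    # reasonable: ends up very close to the minimum x = 0
print(run(alpha=0.001))   # too small: after 50 steps it is still far from 0
```

With the same start and the same number of steps, only the choice of alpha decides whether the run diverges, converges, or barely moves.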