# Week 2: Optimization algorithms

- [2.1 Mini-batch gradient descent](/neural-networks-and-deep-learning/di-er-men-ke-gai-shan-shen-ceng-shen-jing-wang-luo-chao-can-shu-tiao-shi-zheng-ze-hua-yi-ji-you-hua/improving-deep-neural-networks/optimization-algorithms/21-mini-batch-ti-du-xia-jiang-ff08-mini-batch-gradient-descent.md)
- [2.2 Understanding mini-batch gradient descent](/neural-networks-and-deep-learning/di-er-men-ke-gai-shan-shen-ceng-shen-jing-wang-luo-chao-can-shu-tiao-shi-zheng-ze-hua-yi-ji-you-hua/improving-deep-neural-networks/optimization-algorithms/22-li-jie-mini-batch-ti-du-xia-jiang-fa-ff08-understanding-mini-batch-gradient-descent.md)
- [2.3 Exponentially weighted averages](/neural-networks-and-deep-learning/di-er-men-ke-gai-shan-shen-ceng-shen-jing-wang-luo-chao-can-shu-tiao-shi-zheng-ze-hua-yi-ji-you-hua/improving-deep-neural-networks/optimization-algorithms/23-zhi-shu-jia-quan-ping-jun-shu-ff08-exponentially-weighted-averages.md)
- [2.4 Understanding exponentially weighted averages](/neural-networks-and-deep-learning/di-er-men-ke-gai-shan-shen-ceng-shen-jing-wang-luo-chao-can-shu-tiao-shi-zheng-ze-hua-yi-ji-you-hua/improving-deep-neural-networks/optimization-algorithms/24-li-jie-zhi-shu-jiaquanping-jun-shu-ff08-understandingexponentially-weighted-averages.md)
- [2.5 Bias correction in exponentially weighted averages](/neural-networks-and-deep-learning/di-er-men-ke-gai-shan-shen-ceng-shen-jing-wang-luo-chao-can-shu-tiao-shi-zheng-ze-hua-yi-ji-you-hua/improving-deep-neural-networks/optimization-algorithms/25-zhi-shu-jia-quan-ping-jun-de-pian-cha-xiu-zheng-bias-correction-in-exponentially-weighted-average.md)
- [2.6 Gradient descent with momentum](/neural-networks-and-deep-learning/di-er-men-ke-gai-shan-shen-ceng-shen-jing-wang-luo-chao-can-shu-tiao-shi-zheng-ze-hua-yi-ji-you-hua/improving-deep-neural-networks/optimization-algorithms/26-dong-liang-tidu-xia-jiang-fa-ff08-gradient-descent-with-momentum.md)
- [2.7 RMSprop (root mean square prop)](/neural-networks-and-deep-learning/di-er-men-ke-gai-shan-shen-ceng-shen-jing-wang-luo-chao-can-shu-tiao-shi-zheng-ze-hua-yi-ji-you-hua/improving-deep-neural-networks/optimization-algorithms/27-rmsprop.md)
- [2.8 Adam optimization algorithm](/neural-networks-and-deep-learning/di-er-men-ke-gai-shan-shen-ceng-shen-jing-wang-luo-chao-can-shu-tiao-shi-zheng-ze-hua-yi-ji-you-hua/improving-deep-neural-networks/optimization-algorithms/28-adam-you-hua-suan-6cd528-adam-optimization-algorithm.md)
- [2.9 Learning rate decay](/neural-networks-and-deep-learning/di-er-men-ke-gai-shan-shen-ceng-shen-jing-wang-luo-chao-can-shu-tiao-shi-zheng-ze-hua-yi-ji-you-hua/improving-deep-neural-networks/optimization-algorithms/29-xue-xi-lv-shuai-51cf28-learning-rate-decay.md)
- [2.10 The problem of local optima](/neural-networks-and-deep-learning/di-er-men-ke-gai-shan-shen-ceng-shen-jing-wang-luo-chao-can-shu-tiao-shi-zheng-ze-hua-yi-ji-you-hua/improving-deep-neural-networks/optimization-algorithms/210-ju-bu-zui-you-de-wen-989828-the-problem-of-local-optima.md)
- [Optimization](/neural-networks-and-deep-learning/di-er-men-ke-gai-shan-shen-ceng-shen-jing-wang-luo-chao-can-shu-tiao-shi-zheng-ze-hua-yi-ji-you-hua/improving-deep-neural-networks/optimization-algorithms/optimization-methods.md)
- [opt\_utils.py](/neural-networks-and-deep-learning/di-er-men-ke-gai-shan-shen-ceng-shen-jing-wang-luo-chao-can-shu-tiao-shi-zheng-ze-hua-yi-ji-you-hua/improving-deep-neural-networks/optimization-algorithms/optutils-py.md)
- [testCases.py](/neural-networks-and-deep-learning/di-er-men-ke-gai-shan-shen-ceng-shen-jing-wang-luo-chao-can-shu-tiao-shi-zheng-ze-hua-yi-ji-you-hua/improving-deep-neural-networks/optimization-algorithms/testcasespy.md)
