# Course 2: Improving Deep Neural Networks (Hyperparameter Tuning, Regularization and Optimization)

- [Week 1: Practical aspects of Deep Learning](/neural-networks-and-deep-learning/di-er-men-ke-gai-shan-shen-ceng-shen-jing-wang-luo-chao-can-shu-tiao-shi-zheng-ze-hua-yi-ji-you-hua/improving-deep-neural-networks/practical-aspects-of-deep-learning.md)
- [1.1 Train / Dev / Test sets](/neural-networks-and-deep-learning/di-er-men-ke-gai-shan-shen-ceng-shen-jing-wang-luo-chao-can-shu-tiao-shi-zheng-ze-hua-yi-ji-you-hua/improving-deep-neural-networks/practical-aspects-of-deep-learning/11-xun-lian-ff0c-yan-zheng-ff0c-ceshi-ji-ff08-train-dev-test-sets.md)
- [1.2 Bias / Variance](/neural-networks-and-deep-learning/di-er-men-ke-gai-shan-shen-ceng-shen-jing-wang-luo-chao-can-shu-tiao-shi-zheng-ze-hua-yi-ji-you-hua/improving-deep-neural-networks/practical-aspects-of-deep-learning/12-pian-cha-ff0c-fang-cha-ff08-bias-variance.md)
- [1.3 Basic Recipe for Machine Learning](/neural-networks-and-deep-learning/di-er-men-ke-gai-shan-shen-ceng-shen-jing-wang-luo-chao-can-shu-tiao-shi-zheng-ze-hua-yi-ji-you-hua/improving-deep-neural-networks/practical-aspects-of-deep-learning/13-ji-qi-xue-xi-ji-chu-ff08-basic-recipe-for-machine-learning.md)
- [1.4 Regularization](/neural-networks-and-deep-learning/di-er-men-ke-gai-shan-shen-ceng-shen-jing-wang-luo-chao-can-shu-tiao-shi-zheng-ze-hua-yi-ji-you-hua/improving-deep-neural-networks/practical-aspects-of-deep-learning/14-zheng-ze-hua-ff08-regularization.md)
- [1.5 Why regularization reduces overfitting?](/neural-networks-and-deep-learning/di-er-men-ke-gai-shan-shen-ceng-shen-jing-wang-luo-chao-can-shu-tiao-shi-zheng-ze-hua-yi-ji-you-hua/improving-deep-neural-networks/practical-aspects-of-deep-learning/15-wei-shi-yao-zheng-ze-hua-you-li-yu-yu-fang-guo-ni-he-ni-ff1f-ff08-why-regularization-reduces-over.md)
- [1.6 Dropout Regularization](/neural-networks-and-deep-learning/di-er-men-ke-gai-shan-shen-ceng-shen-jing-wang-luo-chao-can-shu-tiao-shi-zheng-ze-hua-yi-ji-you-hua/improving-deep-neural-networks/practical-aspects-of-deep-learning/16-dropout-zheng-ze-hua-ff08-dropout-regularization.md)
- [1.7 Understanding Dropout](/neural-networks-and-deep-learning/di-er-men-ke-gai-shan-shen-ceng-shen-jing-wang-luo-chao-can-shu-tiao-shi-zheng-ze-hua-yi-ji-you-hua/improving-deep-neural-networks/practical-aspects-of-deep-learning/17-li-jie-dropout-understanding-dropout.md)
- [1.8 Other regularization methods](/neural-networks-and-deep-learning/di-er-men-ke-gai-shan-shen-ceng-shen-jing-wang-luo-chao-can-shu-tiao-shi-zheng-ze-hua-yi-ji-you-hua/improving-deep-neural-networks/practical-aspects-of-deep-learning/18-qi-ta-zheng-ze-hua-fang-fa-ff08-other-regularization-methods.md)
- [1.9 Normalizing inputs](/neural-networks-and-deep-learning/di-er-men-ke-gai-shan-shen-ceng-shen-jing-wang-luo-chao-can-shu-tiao-shi-zheng-ze-hua-yi-ji-you-hua/improving-deep-neural-networks/practical-aspects-of-deep-learning/19-gui-yi-hua-shu-ru-ff08-normalizing-inputs.md)
- [1.10 Vanishing / Exploding gradients](/neural-networks-and-deep-learning/di-er-men-ke-gai-shan-shen-ceng-shen-jing-wang-luo-chao-can-shu-tiao-shi-zheng-ze-hua-yi-ji-you-hua/improving-deep-neural-networks/practical-aspects-of-deep-learning/110-ti-du-xiao-5931-ti-du-bao-zha-ff08-vanishing-exploding-gradients.md)
- [1.11 Weight Initialization for Deep Networks](/neural-networks-and-deep-learning/di-er-men-ke-gai-shan-shen-ceng-shen-jing-wang-luo-chao-can-shu-tiao-shi-zheng-ze-hua-yi-ji-you-hua/improving-deep-neural-networks/practical-aspects-of-deep-learning/111-shen-jing-wang-luo-de-quan-zhong-chu-shi-hua-ff08-weight-initialization-for-deep-networks.md)
- [1.12 Numerical approximation of gradients](/neural-networks-and-deep-learning/di-er-men-ke-gai-shan-shen-ceng-shen-jing-wang-luo-chao-can-shu-tiao-shi-zheng-ze-hua-yi-ji-you-hua/improving-deep-neural-networks/practical-aspects-of-deep-learning/112-ti-du-de-shu-zhi-bi-jin-ff08-numerical-approximation-of-gradients.md)
- [1.13 Gradient checking](/neural-networks-and-deep-learning/di-er-men-ke-gai-shan-shen-ceng-shen-jing-wang-luo-chao-can-shu-tiao-shi-zheng-ze-hua-yi-ji-you-hua/improving-deep-neural-networks/practical-aspects-of-deep-learning/113-ti-du-jian-yan-ff08-gradient-checking.md)
- [1.14 Gradient Checking Implementation Notes](/neural-networks-and-deep-learning/di-er-men-ke-gai-shan-shen-ceng-shen-jing-wang-luo-chao-can-shu-tiao-shi-zheng-ze-hua-yi-ji-you-hua/improving-deep-neural-networks/practical-aspects-of-deep-learning/114-ti-du-jian-yan-ying-yong-de-zhu-yi-shi-xiang-ff08-gradient-checking-implementation-notes.md)
- [Initialization](/neural-networks-and-deep-learning/di-er-men-ke-gai-shan-shen-ceng-shen-jing-wang-luo-chao-can-shu-tiao-shi-zheng-ze-hua-yi-ji-you-hua/improving-deep-neural-networks/practical-aspects-of-deep-learning/dai-ma.md)
- [Gradient Checking](/neural-networks-and-deep-learning/di-er-men-ke-gai-shan-shen-ceng-shen-jing-wang-luo-chao-can-shu-tiao-shi-zheng-ze-hua-yi-ji-you-hua/improving-deep-neural-networks/practical-aspects-of-deep-learning/gradient-checking.md)
- [Regularization](/neural-networks-and-deep-learning/di-er-men-ke-gai-shan-shen-ceng-shen-jing-wang-luo-chao-can-shu-tiao-shi-zheng-ze-hua-yi-ji-you-hua/improving-deep-neural-networks/practical-aspects-of-deep-learning/regularization.md)
- [reg\_utils.py](/neural-networks-and-deep-learning/di-er-men-ke-gai-shan-shen-ceng-shen-jing-wang-luo-chao-can-shu-tiao-shi-zheng-ze-hua-yi-ji-you-hua/improving-deep-neural-networks/practical-aspects-of-deep-learning/regutils-py.md)
- [testCases.py](/neural-networks-and-deep-learning/di-er-men-ke-gai-shan-shen-ceng-shen-jing-wang-luo-chao-can-shu-tiao-shi-zheng-ze-hua-yi-ji-you-hua/improving-deep-neural-networks/practical-aspects-of-deep-learning/testcasespy.md)
- [Week 2: Optimization algorithms](/neural-networks-and-deep-learning/di-er-men-ke-gai-shan-shen-ceng-shen-jing-wang-luo-chao-can-shu-tiao-shi-zheng-ze-hua-yi-ji-you-hua/improving-deep-neural-networks/optimization-algorithms.md)
- [2.1 Mini-batch gradient descent](/neural-networks-and-deep-learning/di-er-men-ke-gai-shan-shen-ceng-shen-jing-wang-luo-chao-can-shu-tiao-shi-zheng-ze-hua-yi-ji-you-hua/improving-deep-neural-networks/optimization-algorithms/21-mini-batch-ti-du-xia-jiang-ff08-mini-batch-gradient-descent.md)
- [2.2 Understanding mini-batch gradient descent](/neural-networks-and-deep-learning/di-er-men-ke-gai-shan-shen-ceng-shen-jing-wang-luo-chao-can-shu-tiao-shi-zheng-ze-hua-yi-ji-you-hua/improving-deep-neural-networks/optimization-algorithms/22-li-jie-mini-batch-ti-du-xia-jiang-fa-ff08-understanding-mini-batch-gradient-descent.md)
- [2.3 Exponentially weighted averages](/neural-networks-and-deep-learning/di-er-men-ke-gai-shan-shen-ceng-shen-jing-wang-luo-chao-can-shu-tiao-shi-zheng-ze-hua-yi-ji-you-hua/improving-deep-neural-networks/optimization-algorithms/23-zhi-shu-jia-quan-ping-jun-shu-ff08-exponentially-weighted-averages.md)
- [2.4 Understanding exponentially weighted averages](/neural-networks-and-deep-learning/di-er-men-ke-gai-shan-shen-ceng-shen-jing-wang-luo-chao-can-shu-tiao-shi-zheng-ze-hua-yi-ji-you-hua/improving-deep-neural-networks/optimization-algorithms/24-li-jie-zhi-shu-jiaquanping-jun-shu-ff08-understandingexponentially-weighted-averages.md)
- [2.5 Bias correction in exponentially weighted averages](/neural-networks-and-deep-learning/di-er-men-ke-gai-shan-shen-ceng-shen-jing-wang-luo-chao-can-shu-tiao-shi-zheng-ze-hua-yi-ji-you-hua/improving-deep-neural-networks/optimization-algorithms/25-zhi-shu-jia-quan-ping-jun-de-pian-cha-xiu-zheng-bias-correction-in-exponentially-weighted-average.md)
- [2.6 Gradient descent with Momentum](/neural-networks-and-deep-learning/di-er-men-ke-gai-shan-shen-ceng-shen-jing-wang-luo-chao-can-shu-tiao-shi-zheng-ze-hua-yi-ji-you-hua/improving-deep-neural-networks/optimization-algorithms/26-dong-liang-tidu-xia-jiang-fa-ff08-gradient-descent-with-momentum.md)
- [2.7 RMSprop (root mean square prop)](/neural-networks-and-deep-learning/di-er-men-ke-gai-shan-shen-ceng-shen-jing-wang-luo-chao-can-shu-tiao-shi-zheng-ze-hua-yi-ji-you-hua/improving-deep-neural-networks/optimization-algorithms/27-rmsprop.md)
- [2.8 Adam optimization algorithm](/neural-networks-and-deep-learning/di-er-men-ke-gai-shan-shen-ceng-shen-jing-wang-luo-chao-can-shu-tiao-shi-zheng-ze-hua-yi-ji-you-hua/improving-deep-neural-networks/optimization-algorithms/28-adam-you-hua-suan-6cd528-adam-optimization-algorithm.md)
- [2.9 Learning rate decay](/neural-networks-and-deep-learning/di-er-men-ke-gai-shan-shen-ceng-shen-jing-wang-luo-chao-can-shu-tiao-shi-zheng-ze-hua-yi-ji-you-hua/improving-deep-neural-networks/optimization-algorithms/29-xue-xi-lv-shuai-51cf28-learning-rate-decay.md)
- [2.10 The problem of local optima](/neural-networks-and-deep-learning/di-er-men-ke-gai-shan-shen-ceng-shen-jing-wang-luo-chao-can-shu-tiao-shi-zheng-ze-hua-yi-ji-you-hua/improving-deep-neural-networks/optimization-algorithms/210-ju-bu-zui-you-de-wen-989828-the-problem-of-local-optima.md)
- [Optimization](/neural-networks-and-deep-learning/di-er-men-ke-gai-shan-shen-ceng-shen-jing-wang-luo-chao-can-shu-tiao-shi-zheng-ze-hua-yi-ji-you-hua/improving-deep-neural-networks/optimization-algorithms/optimization-methods.md)
- [opt\_utils.py](/neural-networks-and-deep-learning/di-er-men-ke-gai-shan-shen-ceng-shen-jing-wang-luo-chao-can-shu-tiao-shi-zheng-ze-hua-yi-ji-you-hua/improving-deep-neural-networks/optimization-algorithms/optutils-py.md)
- [testCases.py](/neural-networks-and-deep-learning/di-er-men-ke-gai-shan-shen-ceng-shen-jing-wang-luo-chao-can-shu-tiao-shi-zheng-ze-hua-yi-ji-you-hua/improving-deep-neural-networks/optimization-algorithms/testcasespy.md)
- [Week 3: Hyperparameter tuning, Batch Normalization and Programming Frameworks](/neural-networks-and-deep-learning/di-er-men-ke-gai-shan-shen-ceng-shen-jing-wang-luo-chao-can-shu-tiao-shi-zheng-ze-hua-yi-ji-you-hua/improving-deep-neural-networks/hyperparameter-tuning.md)
- [3.1 Tuning process](/neural-networks-and-deep-learning/di-er-men-ke-gai-shan-shen-ceng-shen-jing-wang-luo-chao-can-shu-tiao-shi-zheng-ze-hua-yi-ji-you-hua/improving-deep-neural-networks/hyperparameter-tuning/31-diao-shi-chu-li-ff08-tuning-process.md)
- [3.2 Using an appropriate scale to pick hyperparameters](/neural-networks-and-deep-learning/di-er-men-ke-gai-shan-shen-ceng-shen-jing-wang-luo-chao-can-shu-tiao-shi-zheng-ze-hua-yi-ji-you-hua/improving-deep-neural-networks/hyperparameter-tuning/32-wei-chao-can-shu-xuan-ze-he-shi-de-fan-wei-ff08-using-an-appropriate-scale-to-pick-hyperparameter.md)
- [3.3 Hyperparameters tuning in practice: Pandas vs. Caviar](/neural-networks-and-deep-learning/di-er-men-ke-gai-shan-shen-ceng-shen-jing-wang-luo-chao-can-shu-tiao-shi-zheng-ze-hua-yi-ji-you-hua/improving-deep-neural-networks/hyperparameter-tuning/33-chao-can-shu-xun-lian-de-shi-jian-ff1a-pandas-vs-caviar-hyperparameterstuning-in-practice-pandas.md)
- [3.4 Normalizing activations in a network](/neural-networks-and-deep-learning/di-er-men-ke-gai-shan-shen-ceng-shen-jing-wang-luo-chao-can-shu-tiao-shi-zheng-ze-hua-yi-ji-you-hua/improving-deep-neural-networks/hyperparameter-tuning/34-gui-yi-hua-wang-luo-de-ji-huo-han-shu-ff08-normalizing-activations-in-a-network.md)
- [3.5 Fitting Batch Norm into a neural network](/neural-networks-and-deep-learning/di-er-men-ke-gai-shan-shen-ceng-shen-jing-wang-luo-chao-can-shu-tiao-shi-zheng-ze-hua-yi-ji-you-hua/improving-deep-neural-networks/hyperparameter-tuning/35-jiang-batch-norm-ni-he-jin-shen-jing-wang-luo-ff08-fitting-batch-norm-into-a-neural-network.md)
- [3.6 Why does Batch Norm work?](/neural-networks-and-deep-learning/di-er-men-ke-gai-shan-shen-ceng-shen-jing-wang-luo-chao-can-shu-tiao-shi-zheng-ze-hua-yi-ji-you-hua/improving-deep-neural-networks/hyperparameter-tuning/36-batch-norm-wei-shi-yao-zou-xiao-ff1f-ff08-why-does-batch-norm-work.md)
- [3.7 Batch Norm at test time](/neural-networks-and-deep-learning/di-er-men-ke-gai-shan-shen-ceng-shen-jing-wang-luo-chao-can-shu-tiao-shi-zheng-ze-hua-yi-ji-you-hua/improving-deep-neural-networks/hyperparameter-tuning/37-ce-shi-shi-de-batch-norm-batch-norm-at-test-time.md)
- [3.8 Softmax regression](/neural-networks-and-deep-learning/di-er-men-ke-gai-shan-shen-ceng-shen-jing-wang-luo-chao-can-shu-tiao-shi-zheng-ze-hua-yi-ji-you-hua/improving-deep-neural-networks/hyperparameter-tuning/38-softmax-hui-gui-ff08-softmax-regression.md)
- [3.9 Training a Softmax classifier](/neural-networks-and-deep-learning/di-er-men-ke-gai-shan-shen-ceng-shen-jing-wang-luo-chao-can-shu-tiao-shi-zheng-ze-hua-yi-ji-you-hua/improving-deep-neural-networks/hyperparameter-tuning/39-xun-lian-yi-ge-softmax-fen-lei-qi-ff08-training-a-softmax-classifier.md)
- [TensorFlow tutorial](/neural-networks-and-deep-learning/di-er-men-ke-gai-shan-shen-ceng-shen-jing-wang-luo-chao-can-shu-tiao-shi-zheng-ze-hua-yi-ji-you-hua/improving-deep-neural-networks/hyperparameter-tuning/tensorflow-tutorial.md)
- [improv\_utils.py](/neural-networks-and-deep-learning/di-er-men-ke-gai-shan-shen-ceng-shen-jing-wang-luo-chao-can-shu-tiao-shi-zheng-ze-hua-yi-ji-you-hua/improving-deep-neural-networks/hyperparameter-tuning/improvutils-py.md)
- [tf\_utils.py](/neural-networks-and-deep-learning/di-er-men-ke-gai-shan-shen-ceng-shen-jing-wang-luo-chao-can-shu-tiao-shi-zheng-ze-hua-yi-ji-you-hua/improving-deep-neural-networks/hyperparameter-tuning/tfutils-py.md)
