Week 1: Practical aspects of Deep Learning
1.1 Train / Dev / Test sets
1.2 Bias / Variance
1.3 Basic Recipe for Machine Learning
1.4 Regularization
1.5 Why regularization reduces overfitting?
1.6 Dropout Regularization
1.7 Understanding Dropout
1.8 Other regularization methods
1.9 Normalizing inputs
1.10 Vanishing / Exploding gradients
1.11 Weight Initialization for Deep Networks
1.12 Numerical approximation of gradients
1.13 Gradient checking
1.14 Gradient Checking Implementation Notes

Programming assignments: Initialization, Gradient Checking, Regularization (reg_utils.py, testCases.py)