DeepLearning.ai Deep Learning Course Notes
  • Introduction
  • Course 1: Neural Networks and Deep Learning
    • Week 1: Introduction to Deep Learning
      • 1.1 Supervised Learning with Neural Networks
      • 1.2 Why is Deep Learning taking off?
    • Week 2: Basics of Neural Network Programming
      • 2.1 Binary Classification
      • 2.2 Logistic Regression
      • 2.3 Logistic Regression Cost Function
      • 2.4 Logistic Regression Gradient Descent
      • 2.5 Gradient Descent on m Examples
      • 2.6 Vectorizing Logistic Regression's Gradient Output
      • 2.7 Explanation of the Logistic Regression Cost Function (Optional)
      • Logistic Regression with a Neural Network mindset (code)
      • lr_utils.py
    • Week 3: Shallow Neural Networks
      • 3.1 Neural Network Overview
      • 3.2 Neural Network Representation
      • 3.3 Computing a Neural Network's Output
      • 3.4 Vectorizing Across Multiple Examples
      • 3.5 Activation Functions
      • 3.6 Why Do You Need Non-Linear Activation Functions?
      • 3.7 Derivatives of Activation Functions
      • 3.8 Gradient Descent for Neural Networks
      • 3.9 Backpropagation Intuition (Optional)
      • 3.10 Random Initialization
      • Planar data classification with one hidden layer
      • planar_utils.py
      • testCases.py
    • Week 4: Deep Neural Networks
      • 4.1 Deep L-layer Neural Networks
      • 4.2 Forward and Backward Propagation
      • 4.3 Forward Propagation in a Deep Network
      • 4.4 Why Deep Representations?
      • 4.5 Building Blocks of Deep Neural Networks
      • 4.6 Parameters vs. Hyperparameters
      • Building your Deep Neural Network Step by Step
      • dnn_utils.py
      • testCases.py
      • Deep Neural Network Application
      • dnn_app_utils.py
  • Course 2: Improving Deep Neural Networks: Hyperparameter Tuning, Regularization and Optimization
    • Week 1: Practical Aspects of Deep Learning
      • 1.1 Train / Dev / Test Sets
      • 1.2 Bias / Variance
      • 1.3 Basic Recipe for Machine Learning
      • 1.4 Regularization
      • 1.5 Why Regularization Reduces Overfitting
      • 1.6 Dropout Regularization
      • 1.7 Understanding Dropout
      • 1.8 Other Regularization Methods
      • 1.9 Normalizing Inputs
      • 1.10 Vanishing / Exploding Gradients
      • 1.11 Weight Initialization for Deep Networks
      • 1.12 Numerical Approximation of Gradients
      • 1.13 Gradient Checking
      • 1.14 Gradient Checking Implementation Notes
      • Initialization
      • Gradient Checking
      • Regularization
      • reg_utils.py
      • testCases.py
    • Week 2: Optimization Algorithms
      • 2.1 Mini-batch Gradient Descent
      • 2.2 Understanding Mini-batch Gradient Descent
      • 2.3 Exponentially Weighted Averages
      • 2.4 Understanding Exponentially Weighted Averages
      • 2.5 Bias Correction in Exponentially Weighted Averages
      • 2.6 Gradient Descent with Momentum
      • 2.7 RMSprop (Root Mean Square Prop)
      • 2.8 Adam Optimization Algorithm
      • 2.9 Learning Rate Decay
      • 2.10 The Problem of Local Optima
      • Optimization
      • opt_utils.py
      • testCases.py
    • Week 3: Hyperparameter Tuning, Batch Normalization and Programming Frameworks
      • 3.1 Tuning Process
      • 3.2 Using an Appropriate Scale to Pick Hyperparameters
      • 3.3 Hyperparameter Tuning in Practice: Pandas vs. Caviar
      • 3.4 Normalizing Activations in a Network
      • 3.5 Fitting Batch Norm into a Neural Network
      • 3.6 Why Does Batch Norm Work?
      • 3.7 Batch Norm at Test Time
      • 3.8 Softmax Regression
      • 3.9 Training a Softmax Classifier
      • TensorFlow tutorial
      • improv_utils.py
      • tf_utils.py
  • Course 3: Structuring Machine Learning Projects
    • Week 1: ML Strategy (1)
      • 1.1 Why ML Strategy?
      • 1.2 Orthogonalization
      • 1.3 Single Number Evaluation Metric
      • 1.4 Satisficing and Optimizing Metrics
      • 1.5 Train/Dev/Test Distributions
      • 1.6 Size of Dev and Test Sets
      • 1.7 When to Change Dev/Test Sets and Metrics
      • 1.8 Why Human-Level Performance?
      • 1.9 Avoidable Bias
      • 1.10 Understanding Human-Level Performance
      • 1.11 Surpassing Human-Level Performance
      • 1.12 Improving Your Model Performance
    • Week 2: ML Strategy (2)
      • 2.1 Carrying Out Error Analysis
      • 2.2 Cleaning Up Incorrectly Labeled Data
      • 2.3 Build Your First System Quickly, Then Iterate
      • 2.4 Training and Testing on Different Distributions
      • 2.5 Bias and Variance with Mismatched Data Distributions
      • 2.6 Addressing Data Mismatch
      • 2.7 Transfer Learning
      • 2.8 Multi-task Learning
      • 2.9 What Is End-to-End Deep Learning?
      • 2.10 Whether to Use End-to-End Deep Learning
  • Course 4: Convolutional Neural Networks
    • Week 1: Foundations of Convolutional Neural Networks
      • 1.1 Computer Vision
      • 1.2 Edge Detection Example
      • 1.3 More Edge Detection
      • 1.4 Padding
      • 1.5 Strided Convolutions
      • 1.6 Convolutions over Volumes
      • 1.7 One Layer of a Convolutional Network
      • 1.8 A Simple Convolutional Network Example
      • 1.9 Pooling Layers
      • 1.10 Convolutional Neural Network Example
      • 1.11 Why Convolutions?
      • Convolution model Step by Step
      • Convolutional Neural Networks: Application
      • cnn_utils
    • Week 2: Deep Convolutional Models: Case Studies
      • 2.1 Classic Networks
      • 2.2 Residual Networks (ResNets)
      • 2.3 Why ResNets Work
      • 2.4 Network in Network and 1×1 Convolutions
      • 2.5 Inception Network Motivation
      • 2.6 Inception Network
      • 2.7 Transfer Learning
      • 2.8 Data Augmentation
      • 2.9 The State of Computer Vision
      • Residual Networks
      • Keras tutorial - the Happy House
      • kt_utils.py
    • Week 3: Object Detection
      • 3.1 Object Localization
      • 3.2 Landmark Detection
      • 3.3 Object Detection
      • 3.4 Convolutional Implementation of Sliding Windows
      • 3.5 Bounding Box Predictions
      • 3.6 Intersection over Union
      • 3.7 Non-max Suppression
      • 3.8 Anchor Boxes
      • 3.9 Putting It Together: The YOLO Algorithm
      • 3.10 Region Proposals (Optional)
      • Autonomous driving application - Car detection
      • yolo_utils.py
    • Week 4: Special Applications: Face Recognition & Neural Style Transfer
      • 4.1 What Is Face Recognition?
      • 4.2 One-Shot Learning
      • 4.3 Siamese Network
      • 4.4 Triplet Loss
      • 4.5 Face Verification and Binary Classification
      • 4.6 What Are Deep ConvNets Learning?
      • 4.7 Cost Function
      • 4.8 Content Cost Function
      • 4.9 Style Cost Function
      • 4.10 1D and 3D Generalizations of Models
      • Art Generation with Neural Style Transfer
      • nst_utils.py
      • Face Recognition for the Happy House
      • fr_utils.py
      • inception_blocks.py
  • Course 5: Sequence Models
    • Week 1: Recurrent Neural Networks
      • 1.1 Why Sequence Models?
      • 1.2 Notation
      • 1.3 Recurrent Neural Network Model
      • 1.4 Backpropagation Through Time
      • 1.5 Different Types of RNNs
      • 1.6 Language Model and Sequence Generation
      • 1.7 Sampling Novel Sequences
      • 1.8 Vanishing Gradients with RNNs
      • 1.9 Gated Recurrent Unit (GRU)
      • 1.10 Long Short-Term Memory (LSTM)
      • 1.11 Bidirectional RNNs
      • 1.12 Deep RNNs
      • Building your Recurrent Neural Network
      • rnn_utils.py
      • Dinosaurus Island -- Character level language model final
      • utils.py
      • shakespeare_utils.py
      • Improvise a Jazz Solo with an LSTM Network
    • Week 2: Natural Language Processing and Word Embeddings
      • 2.1 Word Representation
      • 2.2 Using Word Embeddings
      • 2.3 Properties of Word Embeddings
      • 2.4 Embedding Matrix
      • 2.5 Learning Word Embeddings
      • 2.6 Word2Vec
      • 2.7 Negative Sampling
      • 2.8 GloVe Word Vectors
      • 2.9 Sentiment Classification
      • 2.10 Debiasing Word Embeddings
      • Operations on word vectors
      • w2v_utils.py
      • Emojify
      • emo_utils.py
    • Week 3: Sequence Models & Attention Mechanism
      • 3.1 Basic Models
      • 3.2 Picking the Most Likely Sentence
      • 3.3 Beam Search
      • 3.4 Refinements to Beam Search
      • 3.5 Error Analysis in Beam Search
      • 3.6 BLEU Score (Optional)
      • 3.7 Attention Model Intuition
      • 3.8 Attention Model
      • 3.9 Speech Recognition
      • 3.10 Trigger Word Detection
      • Neural machine translation with attention
      • nmt_utils.py
      • Trigger word detection
      • td_utils.py

testCases.py (Course 1, Week 3: Shallow Neural Networks)

Helper file for the Week 3 assignment "Planar data classification with one hidden layer". Each function returns fixed, reproducible inputs (most seed NumPy's RNG with np.random.seed(1)) for checking one step of the implementation.

import numpy as np

# Inputs of shape (5, 3) and labels of shape (2, 3) for checking layer_sizes().
def layer_sizes_test_case():
    np.random.seed(1)
    X_assess = np.random.randn(5, 3)
    Y_assess = np.random.randn(2, 3)
    return X_assess, Y_assess

# Fixed layer sizes (input, hidden, output) for checking initialize_parameters().
def initialize_parameters_test_case():
    n_x, n_h, n_y = 2, 4, 1
    return n_x, n_h, n_y


# Fixed inputs and small parameter values for checking forward_propagation().
def forward_propagation_test_case():
    np.random.seed(1)
    X_assess = np.random.randn(2, 3)
    b1 = np.random.randn(4,1)
    b2 = np.array([[ -1.3]])

    parameters = {'W1': np.array([[-0.00416758, -0.00056267],
        [-0.02136196,  0.01640271],
        [-0.01793436, -0.00841747],
        [ 0.00502881, -0.01245288]]),
     'W2': np.array([[-0.01057952, -0.00909008,  0.00551454,  0.02292208]]),
     'b1': b1,
     'b2': b2}

    return X_assess, parameters

# Fixed output activations, labels, and parameters for checking compute_cost().
def compute_cost_test_case():
    np.random.seed(1)
    Y_assess = (np.random.randn(1, 3) > 0)
    parameters = {'W1': np.array([[-0.00416758, -0.00056267],
        [-0.02136196,  0.01640271],
        [-0.01793436, -0.00841747],
        [ 0.00502881, -0.01245288]]),
     'W2': np.array([[-0.01057952, -0.00909008,  0.00551454,  0.02292208]]),
     'b1': np.array([[ 0.],
        [ 0.],
        [ 0.],
        [ 0.]]),
     'b2': np.array([[ 0.]])}

    a2 = (np.array([[ 0.5002307 ,  0.49985831,  0.50023963]]))

    return a2, Y_assess, parameters

# Fixed parameters, forward-pass cache, and data for checking backward_propagation().
def backward_propagation_test_case():
    np.random.seed(1)
    X_assess = np.random.randn(2, 3)
    Y_assess = (np.random.randn(1, 3) > 0)
    parameters = {'W1': np.array([[-0.00416758, -0.00056267],
        [-0.02136196,  0.01640271],
        [-0.01793436, -0.00841747],
        [ 0.00502881, -0.01245288]]),
     'W2': np.array([[-0.01057952, -0.00909008,  0.00551454,  0.02292208]]),
     'b1': np.array([[ 0.],
        [ 0.],
        [ 0.],
        [ 0.]]),
     'b2': np.array([[ 0.]])}

    cache = {'A1': np.array([[-0.00616578,  0.0020626 ,  0.00349619],
         [-0.05225116,  0.02725659, -0.02646251],
         [-0.02009721,  0.0036869 ,  0.02883756],
         [ 0.02152675, -0.01385234,  0.02599885]]),
  'A2': np.array([[ 0.5002307 ,  0.49985831,  0.50023963]]),
  'Z1': np.array([[-0.00616586,  0.0020626 ,  0.0034962 ],
         [-0.05229879,  0.02726335, -0.02646869],
         [-0.02009991,  0.00368692,  0.02884556],
         [ 0.02153007, -0.01385322,  0.02600471]]),
  'Z2': np.array([[ 0.00092281, -0.00056678,  0.00095853]])}
    return parameters, cache, X_assess, Y_assess

# Fixed parameters and gradients for checking update_parameters().
def update_parameters_test_case():
    parameters = {'W1': np.array([[-0.00615039,  0.0169021 ],
        [-0.02311792,  0.03137121],
        [-0.0169217 , -0.01752545],
        [ 0.00935436, -0.05018221]]),
 'W2': np.array([[-0.0104319 , -0.04019007,  0.01607211,  0.04440255]]),
 'b1': np.array([[ -8.97523455e-07],
        [  8.15562092e-06],
        [  6.04810633e-07],
        [ -2.54560700e-06]]),
 'b2': np.array([[  9.14954378e-05]])}

    grads = {'dW1': np.array([[ 0.00023322, -0.00205423],
        [ 0.00082222, -0.00700776],
        [-0.00031831,  0.0028636 ],
        [-0.00092857,  0.00809933]]),
 'dW2': np.array([[ -1.75740039e-05,   3.70231337e-03,  -1.25683095e-03,
          -2.55715317e-03]]),
 'db1': np.array([[  1.05570087e-07],
        [ -3.81814487e-06],
        [ -1.90155145e-07],
        [  5.46467802e-07]]),
 'db2': np.array([[ -1.08923140e-05]])}
    return parameters, grads

# Fixed inputs and labels for checking the full nn_model() training routine.
def nn_model_test_case():
    np.random.seed(1)
    X_assess = np.random.randn(2, 3)
    Y_assess = (np.random.randn(1, 3) > 0)
    return X_assess, Y_assess

# Fixed inputs and parameters for checking predict().
def predict_test_case():
    np.random.seed(1)
    X_assess = np.random.randn(2, 3)
    parameters = {'W1': np.array([[-0.00615039,  0.0169021 ],
        [-0.02311792,  0.03137121],
        [-0.0169217 , -0.01752545],
        [ 0.00935436, -0.05018221]]),
     'W2': np.array([[-0.0104319 , -0.04019007,  0.01607211,  0.04440255]]),
     'b1': np.array([[ -8.97523455e-07],
        [  8.15562092e-06],
        [  6.04810633e-07],
        [ -2.54560700e-06]]),
     'b2': np.array([[  9.14954378e-05]])}
    return parameters, X_assess
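
As a quick illustration, here is a minimal sketch of how a notebook might consume one of these helpers. The layer_sizes implementation below is a hypothetical stand-in for the function the assignment has you write; it is not part of this file.

from testCases import layer_sizes_test_case

def layer_sizes(X, Y):
    # Hypothetical assignment solution: input size from X's rows,
    # a fixed 4-unit hidden layer, output size from Y's rows.
    n_x = X.shape[0]
    n_h = 4
    n_y = Y.shape[0]
    return n_x, n_h, n_y

X_assess, Y_assess = layer_sizes_test_case()
print(layer_sizes(X_assess, Y_assess))  # (5, 4, 2) for the seeded shapes above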