Art Generation with Neural Style Transfer

Welcome to the second assignment of this week. In this assignment, you will learn about Neural Style Transfer. This algorithm was created by Gatys et al. (2015).

In this assignment, you will:

  • Implement the neural style transfer algorithm

  • Generate novel artistic images using your algorithm

Most of the algorithms you've studied optimize a cost function to get a set of parameter values. In Neural Style Transfer, you'll optimize a cost function to get pixel values!

import os
import sys
import scipy.io
import scipy.misc
import matplotlib.pyplot as plt
from matplotlib.pyplot import imshow
from PIL import Image
from nst_utils import *
import numpy as np
import tensorflow as tf


%matplotlib inline

1 - Problem Statement

Neural Style Transfer (NST) is one of the most fun techniques in deep learning. As seen below, it merges two images, namely, a "content" image (C) and a "style" image (S), to create a "generated" image (G). The generated image G combines the "content" of the image C with the "style" of image S.

In this example, you are going to generate an image of the Louvre museum in Paris (content image C), mixed with a painting by Claude Monet, a leader of the impressionist movement (style image S).

2 - Transfer Learning

Neural Style Transfer (NST) uses a previously trained convolutional network, and builds on top of that. The idea of using a network trained on a different task and applying it to a new task is called transfer learning.

Following the original NST paper, we will use the VGG network. Specifically, we'll use VGG-19, a 19-layer version of the VGG network. This model has already been trained on the very large ImageNet database, and thus has learned to recognize a variety of low-level features (at the earlier layers) and high-level features (at the deeper layers).

Run the following code to load parameters from the VGG model. This may take a few seconds.

model = load_vgg_model("pretrained-model/imagenet-vgg-verydeep-19.mat")
print(model)


{'input': <tf.Variable 'Variable:0' shape=(1, 300, 400, 3) dtype=float32_ref>, 
'conv1_1': <tf.Tensor 'Relu:0' shape=(1, 300, 400, 64) dtype=float32>, 
'conv1_2': <tf.Tensor 'Relu_1:0' shape=(1, 300, 400, 64) dtype=float32>, 
'avgpool1': <tf.Tensor 'AvgPool:0' shape=(1, 150, 200, 64) dtype=float32>, 
'conv2_1': <tf.Tensor 'Relu_2:0' shape=(1, 150, 200, 128) dtype=float32>, 
'conv2_2': <tf.Tensor 'Relu_3:0' shape=(1, 150, 200, 128) dtype=float32>, 
'avgpool2': <tf.Tensor 'AvgPool_1:0' shape=(1, 75, 100, 128) dtype=float32>, 
'conv3_1': <tf.Tensor 'Relu_4:0' shape=(1, 75, 100, 256) dtype=float32>, 
'conv3_2': <tf.Tensor 'Relu_5:0' shape=(1, 75, 100, 256) dtype=float32>, 
'conv3_3': <tf.Tensor 'Relu_6:0' shape=(1, 75, 100, 256) dtype=float32>, 
'conv3_4': <tf.Tensor 'Relu_7:0' shape=(1, 75, 100, 256) dtype=float32>, 
'avgpool3': <tf.Tensor 'AvgPool_2:0' shape=(1, 38, 50, 256) dtype=float32>, 
'conv4_1': <tf.Tensor 'Relu_8:0' shape=(1, 38, 50, 512) dtype=float32>, 
'conv4_2': <tf.Tensor 'Relu_9:0' shape=(1, 38, 50, 512) dtype=float32>, 
'conv4_3': <tf.Tensor 'Relu_10:0' shape=(1, 38, 50, 512) dtype=float32>, 
'conv4_4': <tf.Tensor 'Relu_11:0' shape=(1, 38, 50, 512) dtype=float32>, 
'avgpool4': <tf.Tensor 'AvgPool_3:0' shape=(1, 19, 25, 512) dtype=float32>, 
'conv5_1': <tf.Tensor 'Relu_12:0' shape=(1, 19, 25, 512) dtype=float32>, 
'conv5_2': <tf.Tensor 'Relu_13:0' shape=(1, 19, 25, 512) dtype=float32>, 
'conv5_3': <tf.Tensor 'Relu_14:0' shape=(1, 19, 25, 512) dtype=float32>, 
'conv5_4': <tf.Tensor 'Relu_15:0' shape=(1, 19, 25, 512) dtype=float32>, 
'avgpool5': <tf.Tensor 'AvgPool_4:0' shape=(1, 10, 13, 512) dtype=float32>}

The model is stored in a python dictionary where each variable name is the key and the corresponding value is a tensor containing that variable's value. To run an image through this network, you just have to feed the image to the model. In TensorFlow, you can do so using the tf.assign function. In particular, you will use the assign function like this:

model["input"].assign(image)

This assigns the image as an input to the model. After this, if you want to access the activations of a particular layer, say conv4_2, when the network is run on this image, you would run a TensorFlow session on the correct tensor conv4_2, as follows:

sess.run(model["conv4_2"])

3 - Neural Style Transfer

We will build the NST algorithm in three steps:

  • Build the content cost function $J_{content}(C,G)$

  • Build the style cost function $J_{style}(S,G)$

  • Put it together to get $J(G) = \alpha J_{content}(C,G) + \beta J_{style}(S,G)$.

3.1 - Computing the content cost

In our running example, the content image C will be the picture of the Louvre Museum in Paris. Run the code below to see a picture of the Louvre.

content_image = scipy.misc.imread("images/louvre.jpg")
imshow(content_image)

<matplotlib.image.AxesImage at 0x7f7390d19da0>

The content image (C) shows the Louvre museum's pyramid surrounded by old Paris buildings, against a sunny sky with a few clouds.

3.1.1 - How do you ensure the generated image G matches the content of the image C?

As we saw in lecture, the earlier (shallower) layers of a ConvNet tend to detect lower-level features such as edges and simple textures, and the later (deeper) layers tend to detect higher-level features such as more complex textures as well as object classes.

We would like the "generated" image G to have similar content as the input image C. Suppose you have chosen some layer's activations to represent the content of an image. In practice, you'll get the most visually pleasing results if you choose a layer in the middle of the network--neither too shallow nor too deep. (After you have finished this exercise, feel free to come back and experiment with using different layers, to see how the results vary.)

So, suppose you have picked one particular hidden layer to use. Now, set the image C as the input to the pretrained VGG network, and run forward propagation. Let $a^{(C)}$ be the hidden layer activations in the layer you had chosen. (In lecture, we had written this as $a^{[l](C)}$, but here we'll drop the superscript $[l]$ to simplify the notation.) This will be an $n_H \times n_W \times n_C$ tensor. Repeat this process with the image G: set G as the input, and run forward propagation. Let $a^{(G)}$ be the corresponding hidden layer activation. We will define the content cost function as:

$$J_{content}(C,G) = \frac{1}{4 \times n_H \times n_W \times n_C} \sum_{\text{all entries}} \left(a^{(C)} - a^{(G)}\right)^2 \tag{1}$$

Here, $n_H, n_W$ and $n_C$ are the height, width, and number of channels of the hidden layer you have chosen, and they appear in a normalization term in the cost. For clarity, note that $a^{(C)}$ and $a^{(G)}$ are the volumes corresponding to a hidden layer's activations. In order to compute the cost $J_{content}(C,G)$, it might also be convenient to unroll these 3D volumes into a 2D matrix, as shown below. (Technically this unrolling step isn't needed to compute $J_{content}$, but it will be good practice for when you do need to carry out a similar operation later for computing the style cost $J_{style}$.)
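To make the unrolling concrete, here is a minimal NumPy sketch (illustrative only, not the graded function; the array names and shapes are assumptions) that evaluates equation (1) on small random volumes:

import numpy as np

np.random.seed(1)
n_H, n_W, n_C = 4, 4, 3
a_C = np.random.randn(1, n_H, n_W, n_C)   # stand-in for the content activations
a_G = np.random.randn(1, n_H, n_W, n_C)   # stand-in for the generated-image activations

# Unroll each (1, n_H, n_W, n_C) volume into an (n_C, n_H*n_W) matrix
a_C_unrolled = a_C.reshape(n_H * n_W, n_C).T
a_G_unrolled = a_G.reshape(n_H * n_W, n_C).T

# Equation (1): normalized sum of squared differences over all entries
J_content = np.sum((a_C_unrolled - a_G_unrolled) ** 2) / (4 * n_H * n_W * n_C)
print(J_content)

Since the sum runs over all entries, unrolling doesn't change the value of J_content; it only matters later, when you compute Gram matrices for the style cost.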

Exercise: Compute the "content cost" using TensorFlow.

Instructions: The 3 steps to implement this function are:

  1. Retrieve dimensions from a_G:

    • To retrieve dimensions from a tensor X, use: X.get_shape().as_list()

  2. Unroll a_C and a_G as explained in the picture above.

  3. Compute the content cost:

# GRADED FUNCTION: compute_content_cost

def compute_content_cost(a_C, a_G):
    """
    Computes the content cost

    Arguments:
    a_C -- tensor of dimension (1, n_H, n_W, n_C), hidden layer activations representing content of the image C 
    a_G -- tensor of dimension (1, n_H, n_W, n_C), hidden layer activations representing content of the image G

    Returns: 
    J_content -- scalar that you compute using equation 1 above.
    """

    ### START CODE HERE ###
    # Retrieve dimensions from a_G (≈1 line)
    m, n_H, n_W, n_C = a_G.get_shape().as_list()

    # Reshape a_C and a_G (≈2 lines)
    a_C_unrolled = tf.transpose(tf.reshape(a_C, [n_W*n_H, n_C]), perm = [1, 0])
    a_G_unrolled = tf.transpose(tf.reshape(a_G, [n_W*n_H, n_C]), perm = [1, 0])

    # compute the cost with tensorflow (≈1 line)
    J_content = tf.reduce_sum(tf.square(tf.subtract(a_C_unrolled, a_G_unrolled))/(4*n_H*n_W*n_C))
    ### END CODE HERE ###

    return J_content

tf.reset_default_graph()

with tf.Session() as test:
    tf.set_random_seed(1)
    a_C = tf.random_normal([1, 4, 4, 3], mean=1, stddev=4)
    a_G = tf.random_normal([1, 4, 4, 3], mean=1, stddev=4)
    J_content = compute_content_cost(a_C, a_G)
    print("J_content = " + str(J_content.eval()))


"""
J_content = 6.76559
"""

What you should remember:

  • The content cost takes a hidden layer activation of the neural network, and measures how different $a^{(C)}$ and $a^{(G)}$ are.

  • When we minimize the content cost later, this will help make sure $G$ has similar content as $C$.

3.2 - Computing the style cost

For our running example, we will use the following style image:

style_image = scipy.misc.imread("images/monet_800600.jpg")
imshow(style_image)


<matplotlib.image.AxesImage at 0x7f738c42e470>

This painting was painted in the style of impressionism.

Let's see how you can now define a "style" cost function $J_{style}(S,G)$.

3.2.1 - Style matrix

The style matrix is also called a "Gram matrix." In linear algebra, the Gram matrix G of a set of vectors $(v_1, \cdots, v_n)$ is the matrix of dot products, whose entries are $G_{ij} = v_i^T v_j = \text{np.dot}(v_i, v_j)$. In other words, $G_{ij}$ compares how similar $v_i$ is to $v_j$: if they are highly similar, you would expect them to have a large dot product, and thus for $G_{ij}$ to be large.

Note that there is an unfortunate collision in the variable names used here. We are following common terminology used in the literature, but $G$ is used to denote the Style matrix (or Gram matrix) as well as to denote the generated image $G$. We will try to make sure which $G$ we are referring to is always clear from the context.

In NST, you can compute the Style matrix by multiplying the "unrolled" filter matrix with its transpose:

The result is a matrix of dimension $(n_C, n_C)$, where $n_C$ is the number of filters. The value $G_{ij}$ measures how similar the activations of filter $i$ are to the activations of filter $j$.

One important property of the Gram matrix is that the diagonal elements such as $G_{ii}$ also measure how active filter $i$ is. For example, suppose filter $i$ is detecting vertical textures in the image. Then $G_{ii}$ measures how common vertical textures are in the image as a whole: if $G_{ii}$ is large, this means that the image has a lot of vertical texture.

By capturing the prevalence of different types of features ($G_{ii}$), as well as how much different features occur together ($G_{ij}$), the Style matrix $G$ measures the style of an image.

Exercise: Using TensorFlow, implement a function that computes the Gram matrix of a matrix A. The formula is: the Gram matrix of A is $G_A = AA^T$.

# GRADED FUNCTION: gram_matrix

def gram_matrix(A):
    """
    Argument:
    A -- matrix of shape (n_C, n_H*n_W)

    Returns:
    GA -- Gram matrix of A, of shape (n_C, n_C)
    """

    ### START CODE HERE ### (≈1 line)
    GA = tf.matmul(A, tf.transpose(A, perm = [1, 0]))
    ### END CODE HERE ###

    return GA

tf.reset_default_graph()

with tf.Session() as test:
    tf.set_random_seed(1)
    A = tf.random_normal([3, 2*1], mean=1, stddev=4)
    GA = gram_matrix(A)

    print("GA = " + str(GA.eval()))
GA = [[ 6.42230511 -4.42912197 -2.09668207]
      [ -4.42912197 19.46583748 19.56387138]
      [ -2.09668207 19.56387138 20.6864624 ]]
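As a quick sanity check on the "prevalence" interpretation of the diagonal entries $G_{ii}$ described above, here is a NumPy sketch (the names and values are illustrative assumptions) showing that a strongly active filter produces a large diagonal entry in the Gram matrix:

import numpy as np

np.random.seed(0)
n_C, n_pixels = 3, 8
A = np.random.randn(n_C, n_pixels)   # each row: one filter's unrolled activations
A[0] *= 10.0                          # make filter 0 strongly active

G = A @ A.T                           # Gram matrix, shape (n_C, n_C)
print(np.diag(G))                     # G[0, 0] dominates: filter 0 is very prevalent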

3.2.2 - Style cost

After generating the Style matrix (Gram matrix), your goal will be to minimize the distance between the Gram matrix of the "style" image S and that of the "generated" image G. For now, we are using only a single hidden layer $a^{[l]}$, and the corresponding style cost for this layer is defined as:

$$J_{style}^{[l]}(S,G) = \frac{1}{4 \times n_C^2 \times (n_H \times n_W)^2} \sum_{i=1}^{n_C} \sum_{j=1}^{n_C} \left(G_{ij}^{(S)} - G_{ij}^{(G)}\right)^2 \tag{2}$$

where $G^{(S)}$ and $G^{(G)}$ are respectively the Gram matrices of the "style" image and the "generated" image, computed using the hidden layer activations for a particular hidden layer in the network.

Exercise: Compute the style cost for a single layer.

Instructions: The 4 steps to implement this function are:

  1. Retrieve dimensions from the hidden layer activations a_G:

    • To retrieve dimensions from a tensor X, use: X.get_shape().as_list()

  2. Unroll the hidden layer activations a_S and a_G into 2D matrices, as explained in the picture above.

  3. Compute the Style matrix of the images S and G. (Use the function you had previously written.)

  4. Compute the Style cost:

# GRADED FUNCTION: compute_layer_style_cost

def compute_layer_style_cost(a_S, a_G):
    """
    Arguments:
    a_S -- tensor of dimension (1, n_H, n_W, n_C), hidden layer activations representing style of the image S 
    a_G -- tensor of dimension (1, n_H, n_W, n_C), hidden layer activations representing style of the image G

    Returns: 
    J_style_layer -- tensor representing a scalar value, style cost defined above by equation (2)
    """

    ### START CODE HERE ###
    # Retrieve dimensions from a_G (≈1 line)
    m, n_H, n_W, n_C = a_G.get_shape().as_list()

    # Reshape the images to have them of shape (n_C, n_H*n_W) (≈2 lines)
    a_S = tf.transpose(tf.reshape(a_S, [n_H*n_W, n_C]), perm = [1, 0])
    a_G = tf.transpose(tf.reshape(a_G, [n_H*n_W, n_C]), perm = [1, 0])

    # Computing gram_matrices for both images S and G (≈2 lines)
    GS = gram_matrix(a_S)
    GG = gram_matrix(a_G)

    # Computing the loss (≈1 line)
    J_style_layer = tf.reduce_sum(tf.square(tf.subtract(GS, GG))/(4*n_C**2*(n_H*n_W)**2))

    ### END CODE HERE ###

    return J_style_layer

tf.reset_default_graph()

with tf.Session() as test:
    tf.set_random_seed(1)
    a_S = tf.random_normal([1, 4, 4, 3], mean=1, stddev=4)
    a_G = tf.random_normal([1, 4, 4, 3], mean=1, stddev=4)
    J_style_layer = compute_layer_style_cost(a_S, a_G)

    print("J_style_layer = " + str(J_style_layer.eval()))

"""
J_style_layer = 9.19028
"""

3.2.3 - Style Weights

So far you have captured the style from only one layer. We'll get better results if we "merge" style costs from several different layers. After completing this exercise, feel free to come back and experiment with different weights to see how it changes the generated image $G$. But for now, this is a pretty reasonable default:

STYLE_LAYERS = [
    ('conv1_1', 0.2),
    ('conv2_1', 0.2),
    ('conv3_1', 0.2),
    ('conv4_1', 0.2),
    ('conv5_1', 0.2)]

You can combine the style costs for different layers as follows:

$$J_{style}(S,G) = \sum_{l} \lambda^{[l]} J_{style}^{[l]}(S,G)$$

where the values for $\lambda^{[l]}$ are given in STYLE_LAYERS.

We've implemented a compute_style_cost(...) function. It simply calls your compute_layer_style_cost(...) several times, and weights their results using the values in STYLE_LAYERS. Read over it to make sure you understand what it's doing.

  1. Loop over (layer_name, coeff) from STYLE_LAYERS:

    a. Select the output tensor of the current layer. As an example, to call the tensor from the "conv1_1" layer you would do: out = model["conv1_1"]

    b. Get the style of the style image from the current layer by running the session on the tensor "out"

    c. Get a tensor representing the style of the generated image from the current layer. It is just "out".

    d. Now that you have both styles, use the function you've implemented above to compute the style_cost for the current layer.

    e. Add (style_cost x coeff) of the current layer to overall style cost (J_style)

  2. Return J_style, which should now be the sum of the (style_cost x coeff) for each layer.

def compute_style_cost(model, STYLE_LAYERS):
    """
    Computes the overall style cost from several chosen layers

    Arguments:
    model -- our tensorflow model
    STYLE_LAYERS -- A python list containing:
                        - the names of the layers we would like to extract style from
                        - a coefficient for each of them

    Returns: 
    J_style -- tensor representing a scalar value, style cost defined above by equation (2)
    """

    # initialize the overall style cost
    J_style = 0

    for layer_name, coeff in STYLE_LAYERS:

        # Select the output tensor of the currently selected layer
        out = model[layer_name]

        # Set a_S to be the hidden layer activation from the layer we have selected, by running the session on out
        a_S = sess.run(out)

        # Set a_G to be the hidden layer activation from same layer. Here, a_G references model[layer_name] 
        # and isn't evaluated yet. Later in the code, we'll assign the image G as the model input, so that
        # when we run the session, this will be the activations drawn from the appropriate layer, with G as input.
        a_G = out

        # Compute style_cost for the current layer
        J_style_layer = compute_layer_style_cost(a_S, a_G)

        # Add coeff * J_style_layer of this layer to overall style cost
        J_style += coeff * J_style_layer

    return J_style

Note: In the body of the for-loop above, a_G is a tensor and hasn't been evaluated yet. It will be evaluated and updated at each iteration when we run the TensorFlow graph in model_nn() below.

How do you choose the coefficients for each layer? The deeper layers capture higher-level concepts, and the features in the deeper layers are less localized in the image relative to each other. So if you want the generated image to softly follow the style image, try choosing larger weights for deeper layers and smaller weights for the first layers. In contrast, if you want the generated image to strongly follow the style image, try choosing smaller weights for deeper layers and larger weights for the first layers.
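For instance, here are two alternative weightings you might experiment with (a sketch; the coefficient values are illustrative assumptions, not part of the assignment):

# Emphasize shallow layers: the generated image follows the style more strongly
STYLE_LAYERS_STRONG_STYLE = [
    ('conv1_1', 0.4),
    ('conv2_1', 0.3),
    ('conv3_1', 0.2),
    ('conv4_1', 0.08),
    ('conv5_1', 0.02)]

# Emphasize deep layers: the generated image follows the style more softly
STYLE_LAYERS_SOFT_STYLE = [
    ('conv1_1', 0.02),
    ('conv2_1', 0.08),
    ('conv3_1', 0.2),
    ('conv4_1', 0.3),
    ('conv5_1', 0.4)]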

What you should remember:

  • The style of an image can be represented using the Gram matrix of a hidden layer's activations. However, we get even better results combining this representation from multiple different layers. This is in contrast to the content representation, where usually using just a single hidden layer is sufficient.

  • Minimizing the style cost will cause the image $G$ to follow the style of the image $S$.

3.3 - Defining the total cost to optimize

Finally, let's create a cost function that minimizes both the style and the content cost. The formula is:

$$J(G) = \alpha J_{content}(C,G) + \beta J_{style}(S,G)$$

Exercise: Implement the total cost function which includes both the content cost and the style cost.

# GRADED FUNCTION: total_cost

def total_cost(J_content, J_style, alpha = 10, beta = 40):
    """
    Computes the total cost function

    Arguments:
    J_content -- content cost coded above
    J_style -- style cost coded above
    alpha -- hyperparameter weighting the importance of the content cost
    beta -- hyperparameter weighting the importance of the style cost

    Returns:
    J -- total cost as defined by the formula above.
    """

    ### START CODE HERE ### (≈1 line)
    J = alpha*J_content + beta*J_style
    ### END CODE HERE ###

    return J

tf.reset_default_graph()

with tf.Session() as test:
    np.random.seed(3)
    J_content = np.random.randn()    
    J_style = np.random.randn()
    J = total_cost(J_content, J_style)
    print("J = " + str(J))

"""
J = 35.34667875478276
"""

What you should remember:

  • The total cost is a linear combination of the content cost $J_{content}(C,G)$ and the style cost $J_{style}(S,G)$.

  • $\alpha$ and $\beta$ are hyperparameters that control the relative weighting between content and style.

4 - Solving the optimization problem

Finally, let's put everything together to implement Neural Style Transfer!

Here's what the program will have to do:

  1. Create an Interactive Session

  2. Load the content image

  3. Load the style image

  4. Randomly initialize the image to be generated

  5. Load the VGG-19 model

  6. Build the TensorFlow graph:

    • Run the content image through the VGG-19 model and compute the content cost

    • Run the style image through the VGG-19 model and compute the style cost

    • Compute the total cost

    • Define the optimizer and the learning rate

  7. Initialize the TensorFlow graph and run it for a large number of iterations, updating the generated image at every step.

Let's go through the individual steps in detail.

You've previously implemented the overall cost $J(G)$. We'll now set up TensorFlow to optimize this with respect to $G$. To do so, your program has to reset the graph and use an "Interactive Session". Unlike a regular session, the Interactive Session installs itself as the default session to build a graph. This allows you to run variables without constantly needing to refer to the session object, which simplifies the code.

Let's start the interactive session.

# Reset the graph
tf.reset_default_graph()


# Start interactive session
sess = tf.InteractiveSession()

Let's load, reshape, and normalize our "content" image (the Louvre museum picture):

content_image = scipy.misc.imread("images/louvre_small.jpg")
content_image = reshape_and_normalize_image(content_image)

Let's load, reshape and normalize our "style" image (Claude Monet's painting):

style_image = scipy.misc.imread("images/monet.jpg")
style_image = reshape_and_normalize_image(style_image)
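(If you're curious, a minimal sketch of what such a preprocessing helper might do is below. This is a hypothetical re-implementation for illustration only, and the channel means are the commonly cited VGG ImageNet means, which is an assumption here; see nst_utils.py for the actual code.)

import numpy as np

VGG_MEANS = np.array([123.68, 116.779, 103.939]).reshape((1, 1, 1, 3))  # assumed values

def reshape_and_normalize_image_sketch(image):
    # Hypothetical re-implementation for illustration only
    image = np.reshape(image, ((1,) + image.shape))  # add batch dimension -> (1, h, w, 3)
    return image - VGG_MEANS                          # center channels the way VGG expects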

Now, we initialize the "generated" image as a noisy image created from the content_image. By initializing the pixels of the generated image to be mostly noise but still slightly correlated with the content image, this will help the content of the "generated" image more rapidly match the content of the "content" image. (Feel free to look in nst_utils.py to see the details of generate_noise_image(...); to do so, click "File-->Open..." at the upper-left corner of this Jupyter notebook.)

generated_image = generate_noise_image(content_image)
imshow(generated_image[0])

<matplotlib.image.AxesImage at 0x7f737170e550>

Next, as explained in part (2), let's load the VGG-19 model.

model = load_vgg_model("pretrained-model/imagenet-vgg-verydeep-19.mat")

To get the program to compute the content cost, we will now assign a_C and a_G to be the appropriate hidden layer activations. We will use layer conv4_2 to compute the content cost. The code below does the following:

  1. Assign the content image to be the input to the VGG model.

  2. Set a_C to be the tensor giving the hidden layer activation for layer "conv4_2".

  3. Set a_G to be the tensor giving the hidden layer activation for the same layer.

  4. Compute the content cost using a_C and a_G.

# Assign the content image to be the input of the VGG model.
sess.run(model['input'].assign(content_image))


# Select the output tensor of layer conv4_2
out = model['conv4_2']


# Set a_C to be the hidden layer activation from the layer we have selected
a_C = sess.run(out)


# Set a_G to be the hidden layer activation from same layer. Here, a_G references model['conv4_2']
# and isn't evaluated yet. Later in the code, we'll assign the image G as the model input, so that
# when we run the session, this will be the activations drawn from the appropriate layer, with G as input.
a_G = out


# Compute the content cost
J_content = compute_content_cost(a_C, a_G)

Note: At this point, a_G is a tensor and hasn't been evaluated. It will be evaluated and updated at each iteration when we run the TensorFlow graph in model_nn() below.

# Assign the input of the model to be the "style" image
sess.run(model['input'].assign(style_image))


# Compute the style cost
J_style = compute_style_cost(model, STYLE_LAYERS)

Exercise: Now that you have J_content and J_style, compute the total cost J by calling total_cost(). Use alpha = 10 and beta = 40.

### START CODE HERE ### (1 line)
J = total_cost(J_content, J_style, alpha = 10, beta = 40)
### END CODE HERE ###

You'd previously learned how to set up the Adam optimizer in TensorFlow. Let's do that here, using a learning rate of 2.0.

# define optimizer (1 line)
optimizer = tf.train.AdamOptimizer(2.0)


# define train_step (1 line)
train_step = optimizer.minimize(J)

Exercise: Implement the model_nn() function, which initializes the variables of the TensorFlow graph, assigns the input image (initial generated image) as the input of the VGG-19 model, and runs the train_step for a large number of steps.

def model_nn(sess, input_image, num_iterations = 200):

    # Initialize global variables (you need to run the session on the initializer)
    ### START CODE HERE ### (1 line)
    sess.run(tf.global_variables_initializer())
    ### END CODE HERE ###

    # Run the noisy input image (initial generated image) through the model. Use assign().
    ### START CODE HERE ### (1 line)
    sess.run(model['input'].assign(input_image))
    ### END CODE HERE ###

    for i in range(num_iterations):

        # Run the session on the train_step to minimize the total cost
        ### START CODE HERE ### (1 line)
        sess.run(train_step)
        ### END CODE HERE ###

        # Compute the generated image by running the session on the current model['input']
        ### START CODE HERE ### (1 line)
        generated_image = sess.run(model['input'])
        ### END CODE HERE ###

        # Print every 20 iterations.
        if i%20 == 0:
            Jt, Jc, Js = sess.run([J, J_content, J_style])
            print("Iteration " + str(i) + " :")
            print("total cost = " + str(Jt))
            print("content cost = " + str(Jc))
            print("style cost = " + str(Js))

            # save current generated image in the "/output" directory
            save_image("output/" + str(i) + ".png", generated_image)

    # save last generated image
    save_image('output/generated_image.jpg', generated_image)

    return generated_image

Run the following cell to generate an artistic image. It should take about 3 minutes on CPU for every 20 iterations, but you should start observing attractive results after about 140 iterations. Neural Style Transfer is generally trained using GPUs.

model_nn(sess, generated_image)

Iteration 0 :
total cost = 5.05035e+09
content cost = 7877.67
style cost = 1.26257e+08
Iteration 20 :
total cost = 9.43276e+08
content cost = 15186.9
style cost = 2.35781e+07
Iteration 40 :
total cost = 4.84898e+08
content cost = 16785.0
style cost = 1.21183e+07
Iteration 60 :
total cost = 3.12574e+08
content cost = 17465.8
style cost = 7.80998e+06
Iteration 80 :
total cost = 2.28137e+08
content cost = 17714.9
style cost = 5.699e+06
Iteration 100 :
total cost = 1.80694e+08
content cost = 17895.5
style cost = 4.51288e+06
Iteration 120 :
total cost = 1.49996e+08
content cost = 18034.3
style cost = 3.74539e+06
Iteration 140 :
total cost = 1.27698e+08
content cost = 18186.8
style cost = 3.18791e+06
Iteration 160 :
total cost = 1.10698e+08
content cost = 18354.2
style cost = 2.76287e+06
Iteration 180 :
total cost = 9.73407e+07
content cost = 18500.9
style cost = 2.42889e+06


array([[[[ -47.57871246, -61.58271027, 48.85700226],
    [ -26.18411827, -40.68395233, 26.88046837],
    [ -41.80831909, -29.10873032, 11.2266655 ],
    ...,
    [ -26.84453201, -9.60996437, 14.25239944],
    [ -30.21216583, -2.98944783, 24.00558853],
    [ -42.19843674, -4.09546566, 49.32817459]],
    [[ -61.05371094, -51.83226395, 25.20266914],
    [ -33.09142303, -31.14856339, -1.57329547],
    [ -27.17674255, -30.73937416, 15.13497353],
    ...,
    [ -26.78427505, -5.31921244, 25.96679497],
    [ -21.39604187, -17.04573441, 13.87594414],
    [ -40.57749176, -6.21540785, 9.49932194]],
    [[ -52.21483612, -51.62369156, 13.53674698],
    [ -37.20645905, -41.47898865, -6.33634949],
    [ -34.18973541, -25.27192116, 7.34102583],
    ...,
    [ -10.66579914, -37.37947845, 12.56488609],
    [ -12.27160835, -21.02189064, 17.14487648],
    [ -22.4864006 , -18.57838058, 14.23981094]],
    ...,
    [[ -50.03207779, -55.32716751, -38.2339325 ],
    [ -98.72381592, -77.81947327, -269.81747437],
    [ -76.47058868, -72.78637695, -143.97808838],
    ...,
    [ -69.79379272, -69.85897827, -29.52082825],
    [ -79.29332733, -87.44576263, -22.59799385],
    [ 1.55612087, -38.67827988, 23.88478661]],
    [[ -1.42958021, -75.45212555, 14.26480198],
    [-173.65603638, -102.86749268, -30.47251129],
    [ 6.72373009, -71.2935257 , -20.26651955],
    ...,
    [ -95.88648987, -83.97492981, -47.743927 ],
    [-102.67683411, -102.53542328, -59.64353943],
    [ -66.14694214, -95.50547791, 1.37127745]],
    [[ 50.24031448, -20.98336029, 53.0496521 ],
    [ 32.08109665, -83.5646286 , 26.98817825],
    [ 30.48530388, -39.76636887, 17.64559555],
    ...,
    [ -99.54864502, -107.69010925, -17.25076103],
    [-117.77135468, -144.40174866, -27.9162426 ],
    [ -25.51356697, -105.12289429, 20.31861687]]]], dtype=float32)

You're done! After running this, in the upper bar of the notebook click on "File" and then "Open". Go to the "/output" directory to see all the saved images. Open "generated_image" to see the generated image! :)

We didn't want you to wait too long to see an initial result, and so had set the hyperparameters accordingly. To get the best looking results, running the optimization algorithm longer (and perhaps with a smaller learning rate) might work better. After completing and submitting this assignment, we encourage you to come back and play more with this notebook, and see if you can generate even better looking images.

Here are a few other examples:

  • The beautiful ruins of the ancient city of Persepolis (Iran) with the style of Van Gogh (The Starry Night)

  • The tomb of Cyrus the Great in Pasargadae with the style of a ceramic Kashi from Ispahan

  • A scientific study of a turbulent fluid with the style of an abstract blue fluid painting

5 - Test with your own image (Optional/Ungraded)

Finally, you can also rerun the algorithm on your own images!

To do so, go back to part 4 and change the content image and style image with your own pictures. In detail, here's what you should do:

  1. Click on "File -> Open" in the upper tab of the notebook

  2. Go to "/images" and upload your images (requirement: (WIDTH = 300, HEIGHT = 225)), rename them "my_content.png" and "my_style.png" for example.

  3. Change the code in part (3.4) from:

    content_image = scipy.misc.imread("images/louvre.jpg")
    style_image = scipy.misc.imread("images/claude-monet.jpg")

    to:

    content_image = scipy.misc.imread("images/my_content.jpg")
    style_image = scipy.misc.imread("images/my_style.jpg")
  4. Rerun the cells (you may need to restart the Kernel in the upper tab of the notebook).

You can also tune your hyperparameters (a short example follows this list):

  • Which layers are responsible for representing the style? STYLE_LAYERS

  • How many iterations do you want to run the algorithm? num_iterations

  • What is the relative weighting between content and style? alpha/beta
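As a concrete example, here is a sketch of re-running the final optimization with a heavier style weighting and more iterations (the specific values are illustrative assumptions, not recommendations from the assignment):

# Rebuild the total cost with a stronger style term, then re-optimize
J = total_cost(J_content, J_style, alpha = 10, beta = 100)
train_step = tf.train.AdamOptimizer(2.0).minimize(J)
generated_image = model_nn(sess, generate_noise_image(content_image), num_iterations = 400)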

6 - Conclusion

Great job on completing this assignment! You are now able to use Neural Style Transfer to generate artistic images. This is also your first time building a model in which the optimization algorithm updates the pixel values rather than the neural network's parameters. Deep learning has many different types of models and this is only one of them!

What you should remember:

  • Neural Style Transfer is an algorithm that, given a content image C and a style image S, can generate an artistic image.

  • It uses representations (hidden layer activations) based on a pretrained ConvNet.

  • The content cost function is computed using one hidden layer's activations.

  • The style cost function for one layer is computed using the Gram matrix of that layer's activations. The overall style cost function is obtained using several hidden layers.

  • Optimizing the total cost function results in synthesizing new images.

This was the final programming exercise of this course. Congratulations--you've finished all the programming exercises of this course on Convolutional Networks! We hope to also see you in Course 5, on Sequence models!

References:

The Neural Style Transfer algorithm was due to Gatys et al. (2015). Harish Narayanan and GitHub user "log0" also have highly readable write-ups from which we drew inspiration. The pre-trained network used in this implementation is a VGG network, which is due to Simonyan and Zisserman (2015). Pre-trained weights were from the work of the MatConvNet team.

  • Leon A. Gatys, Alexander S. Ecker, Matthias Bethge (2015). A Neural Algorithm of Artistic Style.

  • Harish Narayanan. Convolutional neural networks for artistic style transfer.

  • Log0. TensorFlow Implementation of "A Neural Algorithm of Artistic Style".

  • Karen Simonyan and Andrew Zisserman (2015). Very deep convolutional networks for large-scale image recognition.

  • MatConvNet.
