Welcome to Course 5's first assignment! In this assignment, you will implement your first Recurrent Neural Network in numpy.
Notation:
We assume that you are already familiar with numpy and/or have completed the previous courses of the specialization. Let's get started!
Let's first import all the packages that you will need during this assignment.
import numpy as np
from rnn_utils import *
1 - Forward propagation for the basic Recurrent Neural Network
Figure 1: Basic RNN model
Here's how you can implement an RNN: first implement the calculations needed for a single time-step (the RNN cell), then loop over T_x time-steps to process the whole input sequence, one time-step at a time.
Let's go!
1.1 - RNN cell
A recurrent neural network can be seen as the repetition of a single cell. You are first going to implement the computations for a single time-step. The following figure describes the operations for a single time-step of an RNN cell.
Exercise: Implement the RNN-cell described in Figure (2).
# GRADED FUNCTION: rnn_cell_forward

def rnn_cell_forward(xt, a_prev, parameters):
    """
    Implements a single forward step of the RNN-cell as described in Figure (2)

    Arguments:
    xt -- your input data at timestep "t", numpy array of shape (n_x, m).
    a_prev -- Hidden state at timestep "t-1", numpy array of shape (n_a, m)
    parameters -- python dictionary containing:
                        Wax -- Weight matrix multiplying the input, numpy array of shape (n_a, n_x)
                        Waa -- Weight matrix multiplying the hidden state, numpy array of shape (n_a, n_a)
                        Wya -- Weight matrix relating the hidden-state to the output, numpy array of shape (n_y, n_a)
                        ba -- Bias, numpy array of shape (n_a, 1)
                        by -- Bias relating the hidden-state to the output, numpy array of shape (n_y, 1)

    Returns:
    a_next -- next hidden state, of shape (n_a, m)
    yt_pred -- prediction at timestep "t", numpy array of shape (n_y, m)
    cache -- tuple of values needed for the backward pass, contains (a_next, a_prev, xt, parameters)
    """

    # Retrieve parameters from "parameters"
    Wax = parameters["Wax"]
    Waa = parameters["Waa"]
    Wya = parameters["Wya"]
    ba = parameters["ba"]
    by = parameters["by"]

    ### START CODE HERE ### (≈2 lines)
    # compute next activation state using the formula given above
    a_next = np.tanh(np.dot(Waa, a_prev) + np.dot(Wax, xt) + ba)
    # compute output of the current cell using the formula given above
    yt_pred = softmax(np.dot(Wya, a_next) + by)
    ### END CODE HERE ###

    # store values you need for backward propagation in cache
    cache = (a_next, a_prev, xt, parameters)

    return a_next, yt_pred, cache
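As a quick, ungraded sanity check (not part of the graded code), you can call rnn_cell_forward on random inputs and confirm the output shapes. The dimension sizes below (n_x = 3, n_a = 5, n_y = 2, m = 10) and the seed are arbitrary choices for illustration, and the snippet assumes the imports at the top of the notebook (numpy and rnn_utils) have been run.

np.random.seed(1)
xt = np.random.randn(3, 10)                  # input at one time-step, shape (n_x, m)
a_prev = np.random.randn(5, 10)              # previous hidden state, shape (n_a, m)
parameters = {"Waa": np.random.randn(5, 5),
              "Wax": np.random.randn(5, 3),
              "Wya": np.random.randn(2, 5),
              "ba": np.random.randn(5, 1),
              "by": np.random.randn(2, 1)}

a_next, yt_pred, cache = rnn_cell_forward(xt, a_prev, parameters)
print(a_next.shape)    # expected: (5, 10), i.e. (n_a, m)
print(yt_pred.shape)   # expected: (2, 10), i.e. (n_y, m)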
Exercise: Code the forward propagation of the RNN described in Figure (3).
Update the "next" hidden state and the cache by running rnn_cell_forward
Store the prediction in y
Add the cache to the list of caches
# GRADED FUNCTION: rnn_forward

def rnn_forward(x, a0, parameters):
    """
    Implement the forward propagation of the recurrent neural network described in Figure (3).

    Arguments:
    x -- Input data for every time-step, of shape (n_x, m, T_x).
    a0 -- Initial hidden state, of shape (n_a, m)
    parameters -- python dictionary containing:
                        Waa -- Weight matrix multiplying the hidden state, numpy array of shape (n_a, n_a)
                        Wax -- Weight matrix multiplying the input, numpy array of shape (n_a, n_x)
                        Wya -- Weight matrix relating the hidden-state to the output, numpy array of shape (n_y, n_a)
                        ba -- Bias, numpy array of shape (n_a, 1)
                        by -- Bias relating the hidden-state to the output, numpy array of shape (n_y, 1)

    Returns:
    a -- Hidden states for every time-step, numpy array of shape (n_a, m, T_x)
    y_pred -- Predictions for every time-step, numpy array of shape (n_y, m, T_x)
    caches -- tuple of values needed for the backward pass, contains (list of caches, x)
    """

    # Initialize "caches" which will contain the list of all caches
    caches = []

    # Retrieve dimensions from shapes of x and parameters["Wya"]
    n_x, m, T_x = x.shape
    n_y, n_a = parameters["Wya"].shape

    ### START CODE HERE ###
    # initialize "a" and "y" with zeros (≈2 lines)
    a = np.zeros((n_a, m, T_x))
    y_pred = np.zeros((n_y, m, T_x))

    # Initialize a_next (≈1 line)
    a_next = a0

    # loop over all time-steps
    for t in range(T_x):
        # Update next hidden state, compute the prediction, get the cache (≈1 line)
        a_next, yt_pred, cache = rnn_cell_forward(x[:, :, t], a_next, parameters)
        # Save the value of the new "next" hidden state in a (≈1 line)
        a[:, :, t] = a_next
        # Save the value of the prediction in y (≈1 line)
        y_pred[:, :, t] = yt_pred
        # Append "cache" to "caches" (≈1 line)
        caches.append(cache)
    ### END CODE HERE ###

    # store values needed for backward propagation in cache
    caches = (caches, x)

    return a, y_pred, caches
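The same kind of ungraded check works for rnn_forward; here T_x = 4 time-steps of random data are used, with the same arbitrary toy dimensions as above.

np.random.seed(1)
x = np.random.randn(3, 10, 4)                # inputs for all time-steps, shape (n_x, m, T_x)
a0 = np.random.randn(5, 10)                  # initial hidden state, shape (n_a, m)
parameters = {"Waa": np.random.randn(5, 5),
              "Wax": np.random.randn(5, 3),
              "Wya": np.random.randn(2, 5),
              "ba": np.random.randn(5, 1),
              "by": np.random.randn(2, 1)}

a, y_pred, caches = rnn_forward(x, a0, parameters)
print(a.shape)         # expected: (5, 10, 4)
print(y_pred.shape)    # expected: (2, 10, 4)
print(len(caches[0]))  # expected: 4, one cache per time-step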
In the next part, you will build a more complex LSTM model, which is better at addressing vanishing gradients. The LSTM will be better able to remember a piece of information and keep it saved for many timesteps.
2 - Long Short-Term Memory (LSTM) network
The following figure shows the operations of an LSTM-cell.
About the gates
- Forget gate
For the sake of this illustration, let's assume we are reading words in a piece of text and want to use an LSTM to keep track of grammatical structures, such as whether the subject is singular or plural. If the subject changes from a singular word to a plural word, we need a way to get rid of the previously stored memory value of the singular/plural state. In an LSTM, the forget gate lets us do this:

$$\Gamma_f^{\langle t \rangle} = \sigma(W_f[a^{\langle t-1 \rangle}, x^{\langle t \rangle}] + b_f)$$

Because of the sigmoid, $\Gamma_f^{\langle t \rangle}$ is a vector with entries between 0 and 1; it is multiplied element-wise with the previous cell state $c^{\langle t-1 \rangle}$, so an entry close to 0 erases the corresponding piece of memory and an entry close to 1 keeps it.
- Update gate
Once we forget that the subject being discussed is singular, we need a way to update the memory to reflect that the new subject is now plural. Here is the formula for the update gate:

$$\Gamma_i^{\langle t \rangle} = \sigma(W_i[a^{\langle t-1 \rangle}, x^{\langle t \rangle}] + b_i)$$

Like the forget gate, $\Gamma_i^{\langle t \rangle}$ has entries between 0 and 1; it decides how much of the candidate value (computed next) is written into the cell state.
- Updating the cell
To update the subject, we need to create a new vector of numbers that we can add to our previous cell state. The equation for this candidate value is:

$$\tilde{c}^{\langle t \rangle} = \tanh(W_c[a^{\langle t-1 \rangle}, x^{\langle t \rangle}] + b_c)$$
Finally, the new cell state combines the old memory and the candidate, weighted element-wise by the forget and update gates (a small numeric illustration follows the output gate below):

$$c^{\langle t \rangle} = \Gamma_f^{\langle t \rangle} * c^{\langle t-1 \rangle} + \Gamma_i^{\langle t \rangle} * \tilde{c}^{\langle t \rangle}$$
- Output gate
To decide which outputs we will use, we compute the output gate and then the new hidden state:

$$\Gamma_o^{\langle t \rangle} = \sigma(W_o[a^{\langle t-1 \rangle}, x^{\langle t \rangle}] + b_o)$$

$$a^{\langle t \rangle} = \Gamma_o^{\langle t \rangle} * \tanh(c^{\langle t \rangle})$$
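To make the role of the gates concrete, here is a tiny, hypothetical numpy illustration (all values are made up): an entry of the forget gate close to 0 erases the corresponding component of the old cell state, an entry close to 1 keeps it, and the update gate controls how much of the candidate value is written in.

import numpy as np

c_prev  = np.array([1.0, -2.0, 0.5])    # old memory, e.g. "subject is singular"
c_tilde = np.array([0.9,  0.1, -0.3])   # candidate new memory
gamma_f = np.array([0.0,  1.0, 0.5])    # forget gate: 0 = erase, 1 = keep
gamma_i = np.array([1.0,  0.0, 0.5])    # update gate: 1 = write candidate, 0 = ignore it

c_next = gamma_f * c_prev + gamma_i * c_tilde
print(c_next)   # approximately [0.9, -2.0, 0.1]: first entry overwritten, second kept, third blended

In the actual LSTM cell the gate values are not hand-picked like this; they are produced by the sigmoid expressions above, so the network learns when to forget and when to update.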
2.1 - LSTM cell
Exercise: Implement the LSTM cell described in Figure (4).
# GRADED FUNCTION: lstm_cell_forward

def lstm_cell_forward(xt, a_prev, c_prev, parameters):
    """
    Implement a single forward step of the LSTM-cell as described in Figure (4)

    Arguments:
    xt -- your input data at timestep "t", numpy array of shape (n_x, m).
    a_prev -- Hidden state at timestep "t-1", numpy array of shape (n_a, m)
    c_prev -- Memory state at timestep "t-1", numpy array of shape (n_a, m)
    parameters -- python dictionary containing:
                        Wf -- Weight matrix of the forget gate, numpy array of shape (n_a, n_a + n_x)
                        bf -- Bias of the forget gate, numpy array of shape (n_a, 1)
                        Wi -- Weight matrix of the update gate, numpy array of shape (n_a, n_a + n_x)
                        bi -- Bias of the update gate, numpy array of shape (n_a, 1)
                        Wc -- Weight matrix of the first "tanh", numpy array of shape (n_a, n_a + n_x)
                        bc -- Bias of the first "tanh", numpy array of shape (n_a, 1)
                        Wo -- Weight matrix of the output gate, numpy array of shape (n_a, n_a + n_x)
                        bo -- Bias of the output gate, numpy array of shape (n_a, 1)
                        Wy -- Weight matrix relating the hidden-state to the output, numpy array of shape (n_y, n_a)
                        by -- Bias relating the hidden-state to the output, numpy array of shape (n_y, 1)

    Returns:
    a_next -- next hidden state, of shape (n_a, m)
    c_next -- next memory state, of shape (n_a, m)
    yt_pred -- prediction at timestep "t", numpy array of shape (n_y, m)
    cache -- tuple of values needed for the backward pass, contains (a_next, c_next, a_prev, c_prev, xt, parameters)

    Note: ft/it/ot stand for the forget/update/output gates, cct stands for the candidate value (c tilde),
          c stands for the memory value
    """

    # Retrieve parameters from "parameters"
    Wf = parameters["Wf"]
    bf = parameters["bf"]
    Wi = parameters["Wi"]
    bi = parameters["bi"]
    Wc = parameters["Wc"]
    bc = parameters["bc"]
    Wo = parameters["Wo"]
    bo = parameters["bo"]
    Wy = parameters["Wy"]
    by = parameters["by"]

    # Retrieve dimensions from shapes of xt and Wy
    n_x, m = xt.shape
    n_y, n_a = Wy.shape

    ### START CODE HERE ###
    # Concatenate a_prev and xt (≈3 lines)
    concat = np.zeros((n_a + n_x, m))
    concat[:n_a, :] = a_prev
    concat[n_a:, :] = xt

    # Compute values for ft, it, cct, c_next, ot, a_next using the formulas given figure (4) (≈6 lines)
    ft = sigmoid(np.dot(Wf, concat) + bf)
    it = sigmoid(np.dot(Wi, concat) + bi)
    cct = np.tanh(np.dot(Wc, concat) + bc)
    c_next = np.multiply(ft, c_prev) + np.multiply(it, cct)
    ot = sigmoid(np.dot(Wo, concat) + bo)
    a_next = np.multiply(ot, np.tanh(c_next))

    # Compute prediction of the LSTM cell (≈1 line)
    yt_pred = softmax(np.dot(Wy, a_next) + by)
    ### END CODE HERE ###

    # store values needed for backward propagation in cache
    cache = (a_next, c_next, a_prev, c_prev, ft, it, cct, ot, xt, parameters)

    return a_next, c_next, yt_pred, cache
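Again as an ungraded sanity check, you can run lstm_cell_forward on random inputs. With the arbitrary toy sizes n_x = 3, n_a = 5, n_y = 2 and m = 10, each gate weight matrix has shape (n_a, n_a + n_x) = (5, 8).

np.random.seed(1)
xt = np.random.randn(3, 10)
a_prev = np.random.randn(5, 10)
c_prev = np.random.randn(5, 10)
parameters = {"Wf": np.random.randn(5, 8), "bf": np.random.randn(5, 1),
              "Wi": np.random.randn(5, 8), "bi": np.random.randn(5, 1),
              "Wc": np.random.randn(5, 8), "bc": np.random.randn(5, 1),
              "Wo": np.random.randn(5, 8), "bo": np.random.randn(5, 1),
              "Wy": np.random.randn(2, 5), "by": np.random.randn(2, 1)}

a_next, c_next, yt_pred, cache = lstm_cell_forward(xt, a_prev, c_prev, parameters)
print(a_next.shape, c_next.shape, yt_pred.shape)   # expected: (5, 10) (5, 10) (2, 10)

Now that you have the LSTM cell, the next function iterates it over T_x time-steps, just as rnn_forward iterated rnn_cell_forward, to process a whole input sequence.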
# GRADED FUNCTION: lstm_forward

def lstm_forward(x, a0, parameters):
    """
    Implement the forward propagation of the recurrent neural network using an LSTM-cell described in Figure (3).

    Arguments:
    x -- Input data for every time-step, of shape (n_x, m, T_x).
    a0 -- Initial hidden state, of shape (n_a, m)
    parameters -- python dictionary containing:
                        Wf -- Weight matrix of the forget gate, numpy array of shape (n_a, n_a + n_x)
                        bf -- Bias of the forget gate, numpy array of shape (n_a, 1)
                        Wi -- Weight matrix of the update gate, numpy array of shape (n_a, n_a + n_x)
                        bi -- Bias of the update gate, numpy array of shape (n_a, 1)
                        Wc -- Weight matrix of the first "tanh", numpy array of shape (n_a, n_a + n_x)
                        bc -- Bias of the first "tanh", numpy array of shape (n_a, 1)
                        Wo -- Weight matrix of the output gate, numpy array of shape (n_a, n_a + n_x)
                        bo -- Bias of the output gate, numpy array of shape (n_a, 1)
                        Wy -- Weight matrix relating the hidden-state to the output, numpy array of shape (n_y, n_a)
                        by -- Bias relating the hidden-state to the output, numpy array of shape (n_y, 1)

    Returns:
    a -- Hidden states for every time-step, numpy array of shape (n_a, m, T_x)
    y -- Predictions for every time-step, numpy array of shape (n_y, m, T_x)
    caches -- tuple of values needed for the backward pass, contains (list of all the caches, x)
    """

    # Initialize "caches", which will track the list of all the caches
    caches = []

    ### START CODE HERE ###
    # Retrieve dimensions from shapes of x and parameters['Wy'] (≈2 lines)
    n_x, m, T_x = x.shape
    n_y, n_a = parameters['Wy'].shape

    # initialize "a", "c" and "y" with zeros (≈3 lines)
    a = np.zeros((n_a, m, T_x))
    c = np.zeros((n_a, m, T_x))
    y = np.zeros((n_y, m, T_x))

    # Initialize a_next and c_next (≈2 lines)
    a_next = a0
    c_next = c[:, :, 0]

    # loop over all time-steps
    for t in range(T_x):
        # Update next hidden state, next memory state, compute the prediction, get the cache (≈1 line)
        a_next, c_next, yt, cache = lstm_cell_forward(x[:, :, t], a_next, c_next, parameters)
        # Save the value of the new "next" hidden state in a (≈1 line)
        a[:, :, t] = a_next
        # Save the value of the prediction in y (≈1 line)
        y[:, :, t] = yt
        # Save the value of the next cell state (≈1 line)
        c[:, :, t] = c_next
        # Append the cache into caches (≈1 line)
        caches.append(cache)
    ### END CODE HERE ###

    # store values needed for backward propagation in cache
    caches = (caches, x)

    return a, y, c, caches
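The corresponding ungraded check for lstm_forward, using T_x = 7 random time-steps and the same arbitrary toy dimensions:

np.random.seed(1)
x = np.random.randn(3, 10, 7)
a0 = np.random.randn(5, 10)
parameters = {"Wf": np.random.randn(5, 8), "bf": np.random.randn(5, 1),
              "Wi": np.random.randn(5, 8), "bi": np.random.randn(5, 1),
              "Wc": np.random.randn(5, 8), "bc": np.random.randn(5, 1),
              "Wo": np.random.randn(5, 8), "bo": np.random.randn(5, 1),
              "Wy": np.random.randn(2, 5), "by": np.random.randn(2, 1)}

a, y, c, caches = lstm_forward(x, a0, parameters)
print(a.shape, y.shape, c.shape)   # expected: (5, 10, 7) (2, 10, 7) (5, 10, 7)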
Congratulations! You have now implemented the forward passes for the basic RNN and the LSTM. When using a deep learning framework, implementing the forward pass is sufficient to build systems that achieve great performance.
The rest of this notebook is optional, and will not be graded.
3 - Backpropagation in recurrent neural networks (OPTIONAL / UNGRADED)
In modern deep learning frameworks, you only have to implement the forward pass; the framework takes care of the backward pass, so most deep learning engineers do not need to bother with its details. If, however, you are an expert in calculus and want to see the details of backprop in RNNs, you can work through this optional portion of the notebook.
In an earlier course, when you implemented a simple (fully connected) neural network, you used backpropagation to compute the derivatives of the cost with respect to the parameters in order to update them. Similarly, in recurrent neural networks you calculate the derivatives of the cost in order to update the parameters. The backprop equations are quite complicated and we did not derive them in lecture; however, we briefly present them below.
3.1 - Basic RNN backward pass
We will start by computing the backward pass for the basic RNN-cell.
Deriving the one step backward functions:
To implement rnn_cell_backward you need the following equations, which follow from $a^{\langle t \rangle} = \tanh(W_{aa} a^{\langle t-1 \rangle} + W_{ax} x^{\langle t \rangle} + b_a)$ and the fact that the derivative of $\tanh(u)$ is $1 - \tanh(u)^2$ (it is a good exercise to derive them by hand). Writing $da_{next}$ for the upstream gradient:

$$dtanh = (1 - a_{next}^2) * da_{next}$$

$$dx^{\langle t \rangle} = W_{ax}^T \cdot dtanh, \qquad dW_{ax} = dtanh \cdot (x^{\langle t \rangle})^T$$

$$da_{prev} = W_{aa}^T \cdot dtanh, \qquad dW_{aa} = dtanh \cdot (a^{\langle t-1 \rangle})^T$$

$$db_a = \textstyle\sum_{\text{batch}} dtanh$$
def rnn_cell_backward(da_next, cache):
    """
    Implements the backward pass for the RNN-cell (single time-step).

    Arguments:
    da_next -- Gradient of loss with respect to next hidden state
    cache -- python dictionary containing useful values (output of rnn_cell_forward())

    Returns:
    gradients -- python dictionary containing:
                        dx -- Gradients of input data, of shape (n_x, m)
                        da_prev -- Gradients of previous hidden state, of shape (n_a, m)
                        dWax -- Gradients of input-to-hidden weights, of shape (n_a, n_x)
                        dWaa -- Gradients of hidden-to-hidden weights, of shape (n_a, n_a)
                        dba -- Gradients of bias vector, of shape (n_a, 1)
    """

    # Retrieve values from cache
    (a_next, a_prev, xt, parameters) = cache

    # Retrieve values from parameters
    Wax = parameters["Wax"]
    Waa = parameters["Waa"]
    Wya = parameters["Wya"]
    ba = parameters["ba"]
    by = parameters["by"]

    ### START CODE HERE ###
    # compute the gradient of tanh with respect to a_next (≈1 line)
    dtanh = (1 - np.square(a_next)) * da_next

    # compute the gradient of the loss with respect to Wax (≈2 lines)
    dxt = np.dot(Wax.T, dtanh)
    dWax = np.dot(dtanh, xt.T)

    # compute the gradient with respect to Waa (≈2 lines)
    da_prev = np.dot(Waa.T, dtanh)
    dWaa = np.dot(dtanh, a_prev.T)

    # compute the gradient with respect to b (≈1 line)
    dba = np.sum(dtanh, axis=1, keepdims=True)
    ### END CODE HERE ###

    # Store the gradients in a python dictionary
    gradients = {"dxt": dxt, "da_prev": da_prev, "dWax": dWax, "dWaa": dWaa, "dba": dba}

    return gradients
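If you want to convince yourself that these gradients are correct, one optional trick (not part of the assignment) is a finite-difference check: fix a random upstream gradient da_next, define the scalar L = sum(a_next * da_next), and compare the analytic dxt returned by rnn_cell_backward with a numerical estimate of dL/dxt for a single entry. The sketch below checks entry (0, 0); the sizes, seed, and epsilon are arbitrary choices.

np.random.seed(1)
xt = np.random.randn(3, 10)
a_prev = np.random.randn(5, 10)
parameters = {"Wax": np.random.randn(5, 3), "Waa": np.random.randn(5, 5),
              "Wya": np.random.randn(2, 5), "ba": np.random.randn(5, 1),
              "by": np.random.randn(2, 1)}
da_next = np.random.randn(5, 10)             # fixed upstream gradient

a_next, _, cache = rnn_cell_forward(xt, a_prev, parameters)
dxt = rnn_cell_backward(da_next, cache)["dxt"]

# Central finite-difference estimate of dL/dxt[0, 0], where L = sum(a_next * da_next)
eps = 1e-5
xt_plus, xt_minus = xt.copy(), xt.copy()
xt_plus[0, 0] += eps
xt_minus[0, 0] -= eps
L_plus = np.sum(rnn_cell_forward(xt_plus, a_prev, parameters)[0] * da_next)
L_minus = np.sum(rnn_cell_forward(xt_minus, a_prev, parameters)[0] * da_next)
numerical = (L_plus - L_minus) / (2 * eps)

print(dxt[0, 0], numerical)   # the two values should agree closely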
Implement the rnn_backward function. Initialize the return variables with zeros first, then loop through all the time steps in reverse order, calling rnn_cell_backward at each time-step and updating the other variables accordingly.
def rnn_backward(da, caches):
    """
    Implement the backward pass for a RNN over an entire sequence of input data.

    Arguments:
    da -- Upstream gradients of all hidden states, of shape (n_a, m, T_x)
    caches -- tuple containing information from the forward pass (rnn_forward)

    Returns:
    gradients -- python dictionary containing:
                        dx -- Gradient w.r.t. the input data, numpy-array of shape (n_x, m, T_x)
                        da0 -- Gradient w.r.t the initial hidden state, numpy-array of shape (n_a, m)
                        dWax -- Gradient w.r.t the input's weight matrix, numpy-array of shape (n_a, n_x)
                        dWaa -- Gradient w.r.t the hidden state's weight matrix, numpy-array of shape (n_a, n_a)
                        dba -- Gradient w.r.t the bias, of shape (n_a, 1)
    """

    ### START CODE HERE ###
    # Retrieve values from the first cache (t=1) of caches (≈2 lines)
    (caches, x) = caches
    (a1, a0, x1, parameters) = caches[0]

    # Retrieve dimensions from da's and x1's shapes (≈2 lines)
    n_a, m, T_x = da.shape
    n_x, m = x1.shape

    # initialize the gradients with the right sizes (≈6 lines)
    dx = np.zeros((n_x, m, T_x))
    dWax = np.zeros((n_a, n_x))
    dWaa = np.zeros((n_a, n_a))
    dba = np.zeros((n_a, 1))
    da0 = np.zeros((n_a, m))
    da_prevt = np.zeros((n_a, m))

    # Loop through all the time steps
    for t in reversed(range(T_x)):
        # Compute gradients at time step t. Choose wisely the "da_next" and the "cache" to use in the backward propagation step. (≈1 line)
        gradients = rnn_cell_backward(da[:, :, t] + da_prevt, caches[t])
        # Retrieve derivatives from gradients (≈1 line)
        dxt, da_prevt, dWaxt, dWaat, dbat = gradients["dxt"], gradients["da_prev"], gradients["dWax"], gradients["dWaa"], gradients["dba"]
        # Increment global derivatives w.r.t parameters by adding their derivative at time-step t (≈4 lines)
        dx[:, :, t] = dxt
        dWax += dWaxt
        dWaa += dWaat
        dba += dbat

    # Set da0 to the gradient of a which has been backpropagated through all time-steps (≈1 line)
    da0 = da_prevt
    ### END CODE HERE ###

    # Store the gradients in a python dictionary
    gradients = {"dx": dx, "da0": da0, "dWax": dWax, "dWaa": dWaa, "dba": dba}

    return gradients
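As an ungraded end-to-end check that rnn_backward plugs into rnn_forward, you can backpropagate a random upstream gradient da (standing in for dJ/da from some downstream loss) and inspect the shapes of the returned gradients. The toy dimensions match the earlier examples and are arbitrary.

np.random.seed(1)
x = np.random.randn(3, 10, 4)
a0 = np.random.randn(5, 10)
parameters = {"Wax": np.random.randn(5, 3), "Waa": np.random.randn(5, 5),
              "Wya": np.random.randn(2, 5), "ba": np.random.randn(5, 1),
              "by": np.random.randn(2, 1)}

a, y_pred, caches = rnn_forward(x, a0, parameters)
da = np.random.randn(5, 10, 4)               # random stand-in for the upstream gradient
gradients = rnn_backward(da, caches)
for name in ["dx", "da0", "dWax", "dWaa", "dba"]:
    print(name, gradients[name].shape)
# expected: dx (3, 10, 4), da0 (5, 10), dWax (5, 3), dWaa (5, 5), dba (5, 1)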
The LSTM backward pass is slightly more complicated than the forward pass. We have provided you with all the equations for the LSTM backward pass below. (If you enjoy calculus exercises, feel free to try deriving these from scratch yourself.)