#9. Gradient Checking (2019. 10. 7.)
* How does gradient checking work?
- Backpropagation is very hard to implement correctly, so when a bug creeps in it is not obvious where to start fixing it. That is when gradient checking is used.
- Backpropagation computes ∂J/∂θ, the gradient of the cost J with respect to the parameters θ. Gradient checking compares it against the two-sided numerical approximation ∂J/∂θ ≈ (J(θ + ε) − J(θ − ε)) / (2ε) for a tiny ε (formula (1) in the code below).
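A quick numerical sanity check of this approximation (not from the post; J(θ) = θ² is just an illustrative choice):

def J(t):
    # Toy cost function used only for this illustration
    return t ** 2

epsilon = 1e-7
theta = 3.0
gradapprox = (J(theta + epsilon) - J(theta - epsilon)) / (2 * epsilon)
grad = 2 * theta         # analytic derivative of t**2 at theta
print(grad, gradapprox)  # both are approximately 6.0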
* 1-Dimensional gradient checking
- Assume a 1-dimensional linear function J(θ) = θx.
- Differentiating with respect to θ gives ∂J/∂θ = x.
⭐️ np.linalg.norm(...): computes and returns the norm of a vector; it is used for the relative difference ‖grad − gradapprox‖₂ / (‖grad‖₂ + ‖gradapprox‖₂) (formula (2) in the code below).
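The gradient_check function below calls two helpers that the notebook provides separately. A minimal sketch of what they look like for this 1-D example (they simply compute J(θ) = θx and ∂J/∂θ = x):

def forward_propagation(x, theta):
    # Cost of the 1-D linear model: J(theta) = theta * x
    J = theta * x
    return J

def backward_propagation(x, theta):
    # Analytic derivative of J with respect to theta: dJ/dtheta = x
    dtheta = x
    return dtheta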
import numpy as np

def gradient_check(x, theta, epsilon=1e-7):
    """
    Implement 1-dimensional gradient checking for J(theta) = theta * x.

    Arguments:
    x -- a real-valued input
    theta -- our parameter, a real number as well
    epsilon -- tiny shift to the input to compute the approximated gradient with formula (1)

    Returns:
    difference -- difference (2) between the approximated gradient and the backward propagation gradient
    """
    # Compute gradapprox using the left side of formula (1).
    # epsilon is small enough that we don't need to worry about the limit.
    thetaplus = theta + epsilon                                 # Step 1
    thetaminus = theta - epsilon                                # Step 2
    J_plus = forward_propagation(x, thetaplus)                  # Step 3
    J_minus = forward_propagation(x, thetaminus)                # Step 4
    gradapprox = (J_plus - J_minus) / (2 * epsilon)             # Step 5

    # Check whether gradapprox is close enough to the output of backward_propagation()
    grad = backward_propagation(x, theta)

    numerator = np.linalg.norm(grad - gradapprox)                    # Step 1'
    denominator = np.linalg.norm(grad) + np.linalg.norm(gradapprox)  # Step 2'
    difference = numerator / denominator                             # Step 3'

    if difference < 1e-7:
        print("The gradient is correct!")
    else:
        print("The gradient is wrong!")

    return difference
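Example call, with illustrative values x = 2 and theta = 4; the printed difference should be far below the 1e-7 threshold:

x, theta = 2, 4
difference = gradient_check(x, theta)   # prints "The gradient is correct!"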
- If the difference is very small, we can conclude the gradient was computed correctly. Now let's generalize this and apply it to an entire neural network.
* N-dimensional gradient checking
def gradient_check_n(parameters, gradients, X, Y, epsilon=1e-7):
    """
    Checks whether backward_propagation_n correctly computes the gradient of the cost
    output by forward_propagation_n.

    Arguments:
    parameters -- python dictionary containing your parameters "W1", "b1", "W2", "b2", "W3", "b3"
    gradients -- output of backward_propagation_n, contains gradients of the cost with respect to the parameters
    X -- input datapoint, of shape (input size, 1)
    Y -- true "label"
    epsilon -- tiny shift to the input to compute the approximated gradient with formula (1)

    Returns:
    difference -- difference (2) between the approximated gradient and the backward propagation gradient
    """
    # Set-up variables
    parameters_values, _ = dictionary_to_vector(parameters)
    grad = gradients_to_vector(gradients)
    num_parameters = parameters_values.shape[0]
    J_plus = np.zeros((num_parameters, 1))
    J_minus = np.zeros((num_parameters, 1))
    gradapprox = np.zeros((num_parameters, 1))

    # Compute gradapprox
    for i in range(num_parameters):
        # Compute J_plus[i]. Inputs: "parameters_values, epsilon". Output = "J_plus[i]".
        # "_" is used because forward_propagation_n returns two values, but we only need the cost.
        thetaplus = np.copy(parameters_values)                                        # Step 1
        thetaplus[i][0] = thetaplus[i][0] + epsilon                                   # Step 2
        J_plus[i], _ = forward_propagation_n(X, Y, vector_to_dictionary(thetaplus))   # Step 3

        # Compute J_minus[i]. Inputs: "parameters_values, epsilon". Output = "J_minus[i]".
        thetaminus = np.copy(parameters_values)                                       # Step 1
        thetaminus[i][0] = thetaminus[i][0] - epsilon                                 # Step 2
        J_minus[i], _ = forward_propagation_n(X, Y, vector_to_dictionary(thetaminus)) # Step 3

        # Compute gradapprox[i]
        gradapprox[i] = (J_plus[i] - J_minus[i]) / (2 * epsilon)

    # Compare gradapprox to the backward propagation gradients by computing the difference.
    numerator = np.linalg.norm(grad - gradapprox)                    # Step 1'
    denominator = np.linalg.norm(grad) + np.linalg.norm(gradapprox)  # Step 2'
    difference = numerator / denominator                             # Step 3'

    if difference > 2e-7:
        print("\033[93m" + "There is a mistake in the backward propagation! difference = " + str(difference) + "\033[0m")
    else:
        print("\033[92m" + "Your backward propagation works perfectly fine! difference = " + str(difference) + "\033[0m")

    return difference
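gradient_check_n also relies on three conversion helpers that flatten the parameter and gradient dictionaries into column vectors (and back), so each scalar parameter can be nudged by epsilon individually. The notebook provides its own versions; the sketch below is an assumed implementation, with the layer shapes listed only as placeholders:

import numpy as np

# Placeholder shapes -- replace with the actual shapes of the network being checked.
PARAMETER_SHAPES = {"W1": (5, 4), "b1": (5, 1),
                    "W2": (3, 5), "b2": (3, 1),
                    "W3": (1, 3), "b3": (1, 1)}
KEYS = list(PARAMETER_SHAPES)

def dictionary_to_vector(parameters):
    # Stack every parameter matrix into one (num_parameters, 1) column vector.
    theta = np.concatenate([parameters[k].reshape(-1, 1) for k in KEYS], axis=0)
    return theta, KEYS

def vector_to_dictionary(theta):
    # Rebuild the parameter dictionary from the flat vector using the known shapes.
    parameters, start = {}, 0
    for k in KEYS:
        size = int(np.prod(PARAMETER_SHAPES[k]))
        parameters[k] = theta[start:start + size].reshape(PARAMETER_SHAPES[k])
        start += size
    return parameters

def gradients_to_vector(gradients):
    # Stack dW1, db1, ... in the same order as the parameters.
    return np.concatenate([gradients["d" + k].reshape(-1, 1) for k in KEYS], axis=0)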
- Gradient checking is very slow: approximating the gradient re-runs forward propagation twice for every single parameter, so it is not run at every training iteration.
- Gradient checking does not work with dropout, because dropout randomly drops units and makes the cost J non-deterministic; turn dropout off while checking the gradients and turn it back on afterwards.
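In practice both cautions boil down to the same pattern: run the check once (or occasionally) on a fixed batch, with dropout disabled, and keep it out of the normal training loop. A rough sketch, assuming forward_propagation_n returns (cost, cache) as above, backward_propagation_n takes (X, Y, cache), and update_parameters is a hypothetical helper not defined in the post:

def train(X, Y, parameters, num_iterations=10000, learning_rate=0.01):
    for i in range(num_iterations):
        # Forward/backward pass; dropout should be turned OFF while gradients are being checked.
        cost, cache = forward_propagation_n(X, Y, parameters)
        gradients = backward_propagation_n(X, Y, cache)

        if i == 0:
            # Run the (slow) check a single time to validate the backprop implementation,
            # then keep it out of the remaining iterations.
            gradient_check_n(parameters, gradients, X, Y)

        parameters = update_parameters(parameters, gradients, learning_rate)  # hypothetical helper
    return parameters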