  #5. Classification with one hidden layer
    연구실 2019. 10. 2. 16:46

    * Dataset

    - a numpy array (matrix) X that contains your features (x1, x2)

    - a numpy array (vector) Y that contains your labels (red: 0, blue: 1)
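
    A quick shape check helps before training. A minimal sketch, assuming the data comes from the assignment's load_planar_dataset() helper (the shapes shown are those of this planar "flower" dataset):

    import numpy as np
    from planar_utils import load_planar_dataset  # assignment helper (assumed available)

    X, Y = load_planar_dataset()
    print(X.shape)  # (2, 400): 2 features per example, stored column-wise
    print(Y.shape)  # (1, 400): one 0/1 label per example
    m = X.shape[1]  # number of training examples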

     

     

    * Simple Logistic Regression

    - You can train a logistic regression model using sklearn's built-in functions.

    # Train the logistic regression classifier
    import sklearn.linear_model

    clf = sklearn.linear_model.LogisticRegressionCV()
    clf.fit(X.T, Y.T)

    <Training the logistic regression model>

     

    # Plot the decision boundary for logistic regression
    plot_decision_boundary(lambda x: clf.predict(x), X, Y)
    plt.title("Logistic Regression")
    
    # Print accuracy
    LR_predictions = clf.predict(X.T)
    print ('Accuracy of logistic regression: %d ' % float((np.dot(Y,LR_predictions) + np.dot(1-Y,1-LR_predictions))/float(Y.size)*100) +
           '% ' + "(percentage of correctly labelled datapoints)")

    <Plotting the trained model's decision boundary and checking its accuracy>

     

    result: 

    - Since the dataset is not linearly separable, logistic regression performs poorly here.

     

    * Neural Network model

     

    - Goal: train a NN with a single hidden layer.

    - The process, step by step:

        1) Define the NN structure (number of input units, number of hidden units, etc.).

        2) Initialize the model's parameters.

        3) Loop:

            - forward propagation

            - compute the loss

            - backward propagation to get the gradients

            - update the parameters (gradient descent)

     

    1) Defining the neural network structure

     

    - n_x: the size of the input layer (X.shape[0])

    - n_h: the size of the hidden layer (4 in this example)

    - n_y: the size of the output layer (Y.shape[0])
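
    The notebook wraps this in a layer_sizes() helper; a minimal sketch (the hard-coded n_h = 4 is this example's choice):

    def layer_sizes(X, Y):
        n_x = X.shape[0] # size of the input layer
        n_h = 4          # size of the hidden layer (fixed for this example)
        n_y = Y.shape[0] # size of the output layer
        return (n_x, n_h, n_y)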

     

     

    2) Initialize the model's parameters

     

    - Initialize the weights to small random values (np.random.randn(a, b) * 0.01): random values break the symmetry between hidden units, and the 0.01 factor keeps tanh out of its flat, saturated regions.

    - Initialize the biases to zero (np.zeros((a, b))).
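
    A sketch of the initialization under these rules, mirroring the notebook's initialize_parameters() (assumes numpy is imported as np):

    def initialize_parameters(n_x, n_h, n_y):
        W1 = np.random.randn(n_h, n_x) * 0.01 # small random values break symmetry
        b1 = np.zeros((n_h, 1))               # zeros are fine for biases
        W2 = np.random.randn(n_y, n_h) * 0.01
        b2 = np.zeros((n_y, 1))

        parameters = {"W1": W1,
                      "b1": b1,
                      "W2": W2,
                      "b2": b2}

        return parameters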

     

    3) The Loop

     

    3.1 Forward Propagation
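
    The snippets below assume numpy is imported as np and that a sigmoid() helper exists (the assignment ships one in its utility file); a minimal standalone version:

    import numpy as np

    def sigmoid(z):
        # logistic function: maps any real value into (0, 1)
        return 1 / (1 + np.exp(-z))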

     

    def forward_propagation(X, parameters):
        """
        Argument:
        X -- input data of size (n_x, m)
        parameters -- python dictionary containing your parameters (output of initialization function)
        
        Returns:
        A2 -- The sigmoid output of the second activation
        cache -- a dictionary containing "Z1", "A1", "Z2" and "A2"
        """
        # Retrieve each parameter from the dictionary "parameters"
        W1 = parameters["W1"]
        b1 = parameters["b1"]
        W2 = parameters["W2"]
        b2 = parameters["b2"]
        
        # Implement Forward Propagation to calculate A2 (probabilities)
        Z1 = np.dot(W1, X) + b1 #(1)
        A1 = np.tanh(Z1) #(2)
        Z2 = np.dot(W2, A1) + b2 #(3)
        A2 = sigmoid(Z2) #(4)
        
        assert(A2.shape == (1, X.shape[1]))
        
        cache = {"Z1": Z1,
                 "A1": A1,
                 "Z2": Z2,
                 "A2": A2}
        
        return A2, cache
        
    X_assess, parameters = forward_propagation_test_case()
    A2, cache = forward_propagation(X_assess, parameters)
    
    # Note: we use the mean here just to make sure that your output matches ours. 
    print(np.mean(cache['Z1']) ,np.mean(cache['A1']),np.mean(cache['Z2']),np.mean(cache['A2']))

    result: 0.262818640198 0.091999045227 -1.30766601287 0.212877681719

     

     

    3.2 Cost Function

     

    - cross-entropy cost (equation (13) in the notebook):
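
    J = −(1/m) * Σᵢ [ y⁽ⁱ⁾·log(a2⁽ⁱ⁾) + (1 − y⁽ⁱ⁾)·log(1 − a2⁽ⁱ⁾) ]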

    def compute_cost(A2, Y, parameters):
        """
        Computes the cross-entropy cost given in equation (13)
        
        Arguments:
        A2 -- The sigmoid output of the second activation, of shape (1, number of examples)
        Y -- "true" labels vector of shape (1, number of examples)
        parameters -- python dictionary containing your parameters W1, b1, W2 and b2
        [Note that the parameters argument is not used in this function, 
        but the auto-grader currently expects this parameter.
        Future version of this notebook will fix both the notebook 
        and the auto-grader so that `parameters` is not needed.
        For now, please include `parameters` in the function signature,
        and also when invoking this function.]
        
        Returns:
        cost -- cross-entropy cost given equation (13)
        
        """
        
        m = Y.shape[1] # number of examples
    
        # Compute the cross-entropy cost
        logprobs = np.multiply(np.log(A2),Y)
        cost = - (1/m) * np.sum(logprobs + (1 - Y)*np.log(1-A2))
        
        cost = float(np.squeeze(cost))  # makes sure cost is the dimension we expect. 
                                        # E.g., turns [[17]] into 17 
        assert(isinstance(cost, float))
        
        return cost
        
    A2, Y_assess, parameters = compute_cost_test_case()
    print("cost = " + str(compute_cost(A2, Y_assess, parameters)))

    result: cost = 0.6930587610394646

     

     

    3.3 Backward Propagation

     

    - backpropagation computes the gradients (dW1, db1, dW2, db2) of the cost; the gradient descent update in 3.4 consumes them

     

    def backward_propagation(parameters, cache, X, Y):
        """
        Implement the backward propagation using the instructions above.
        
        Arguments:
        parameters -- python dictionary containing our parameters 
        cache -- a dictionary containing "Z1", "A1", "Z2" and "A2".
        X -- input data of shape (2, number of examples)
        Y -- "true" labels vector of shape (1, number of examples)
        
        Returns:
        grads -- python dictionary containing your gradients with respect to different parameters
        """
        m = X.shape[1]
        
        # First, retrieve W1 and W2 from the dictionary "parameters".
        W1 = parameters["W1"]
        W2 = parameters["W2"]
            
        # Retrieve also A1 and A2 from dictionary "cache".
        A1 = cache["A1"]
        A2 = cache["A2"]
        
        # Backward propagation: calculate dW1, db1, dW2, db2.
        dZ2 = A2 - Y
        dW2 = (1/m) * np.dot(dZ2, A1.T)
        db2 = (1/m) * np.sum(dZ2, axis = 1, keepdims = True)
        dZ1 = np.multiply(np.dot(W2.T, dZ2), 1 - np.power(A1, 2)) # (1 - A1^2) is tanh'(Z1)
        dW1 = (1/m) * np.dot(dZ1, X.T)
        db1 = (1/m) * np.sum(dZ1, axis = 1, keepdims = True)
        
        grads = {"dW1": dW1,
                 "db1": db1,
                 "dW2": dW2,
                 "db2": db2}
        
        return grads

     

     

    3.4 Update Parameters

     

    - Gradient Descent Rule: θ = θ − α*(∂J/∂θ) (α: learning rate, θ: parameter)

    - With a good learning rate gradient descent converges; with a bad (too large) one it diverges.

    def update_parameters(parameters, grads, learning_rate = 1.2):
        """
        Updates parameters using the gradient descent update rule given above
        
        Arguments:
        parameters -- python dictionary containing your parameters 
        grads -- python dictionary containing your gradients 
        
        Returns:
        parameters -- python dictionary containing your updated parameters 
        """
        # Retrieve each parameter from the dictionary "parameters"
        W1 = parameters["W1"]
        b1 = parameters["b1"]
        W2 = parameters["W2"]
        b2 = parameters["b2"]
        
        # Retrieve each gradient from the dictionary "grads"
        dW1 = grads["dW1"]
        db1 = grads["db1"]
        dW2 = grads["dW2"]
        db2 = grads["db2"]
        
        # Update rule for each parameter
        W1 = W1 - learning_rate * dW1
        b1 = b1 - learning_rate * db1
        W2 = W2 - learning_rate * dW2
        b2 = b2 - learning_rate * db2
        
        parameters = {"W1": W1,
                      "b1": b1,
                      "W2": W2,
                      "b2": b2}
        
        return parameters
        
    parameters, grads = update_parameters_test_case()
    parameters = update_parameters(parameters, grads)
    
    print("W1 = " + str(parameters["W1"]))
    print("b1 = " + str(parameters["b1"]))
    print("W2 = " + str(parameters["W2"]))
    print("b2 = " + str(parameters["b2"]))

    result:
    W1 = [[-0.00643025  0.01936718]
     [-0.02410458  0.03978052]
     [-0.01653973 -0.02096177]
     [ 0.01046864 -0.05990141]]
    b1 = [[ -1.02420756e-06]
     [  1.27373948e-05]
     [  8.32996807e-07]
     [ -3.20136836e-06]]
    W2 = [[-0.01041081 -0.04463285  0.01758031  0.04747113]]
    b2 = [[ 0.00010457]]

     

    4) Integrate parts 1), 2) and 3) in nn_model()
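
    A sketch of how the pieces combine, following the notebook's nn_model() signature (the fixed seed and default iteration count are the assignment's choices):

    def nn_model(X, Y, n_h, num_iterations = 10000, print_cost = False):
        np.random.seed(3) # fixed seed for reproducible runs

        # 1) define the structure
        n_x = layer_sizes(X, Y)[0]
        n_y = layer_sizes(X, Y)[2]

        # 2) initialize parameters
        parameters = initialize_parameters(n_x, n_h, n_y)

        # 3) gradient descent loop
        for i in range(num_iterations):
            A2, cache = forward_propagation(X, parameters)        # forward propagation
            cost = compute_cost(A2, Y, parameters)                # loss
            grads = backward_propagation(parameters, cache, X, Y) # gradients
            parameters = update_parameters(parameters, grads)     # update

            if print_cost and i % 1000 == 0:
                print("Cost after iteration %i: %f" % (i, cost))

        return parameters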

     

     

    5) Predictions

     

    - use forward propagation with the trained parameters to predict labels (see the sketch after this list)

    - as the number of hidden units grows, the model fits the training set better, but past a point it starts to overfit

    - regularization can mitigate the overfitting problem to some degree
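
    A minimal prediction sketch: the notebook's predict() runs forward propagation with the trained parameters and thresholds the sigmoid output at 0.5.

    def predict(parameters, X):
        A2, cache = forward_propagation(X, parameters)
        predictions = (A2 > 0.5) # blue (1) if A2 > 0.5, red (0) otherwise
        return predictions

    # e.g. train with 4 hidden units, then predict on the training set
    parameters = nn_model(X, Y, n_h = 4, num_iterations = 10000)
    predictions = predict(parameters, X)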
