1. Introduction
2. History and Overview of Artificial Neural Networks
3. Single neural network
- 3.1 Perceptron
- 3.1.1 The Unit Step function
- 3.1.2 The Perceptron rules
- 3.1.3 The bias term
- 3.1.4 Implement Perceptron in Python
- 3.2 Adaptive Linear Neurons
- 3.2.1 Gradient Descent rule (Delta rule)
- 3.2.2 Learning rate in Gradient Descent
- 3.2.3 Implement Adaline in Python to classify Iris data
- 3.2.4 Learning via types of Gradient Descent
- 3.3 Problems with Perceptron (AI Winter)
4. Multi-layer Neural Network
- 4.1 Overview of the Multi-layer Neural Network
- 4.2 Forward Propagation
- 4.3 Cost function
- 4.4 Backpropagation
- 4.5 Implement simple Multi-layer Neural Network to solve the problem of Perceptron
- 4.6 Some optional techniques for Multi-layer Neural Network Optimization
- 4.7 Multi-layer Neural Network for binary/multi classification
5. Multi-layer Neural Network for MNIST
- 5.1 Overview of the MNIST data
- 5.2 Implement Multi-layer Neural Network
- 5.3 Debugging Neural Network with Gradient Descent Checking
7. References
Implement Adaline in Python
We've now gone through two types of single neural network and learned a lot along the way; I hope it was fun and made sense for you. In this exercise, we're going to implement Adaline to classify the Iris flower data set. It's a simple data set consisting of 150 samples from 3 species of Iris (Setosa, Virginica, and Versicolor), each with 4 features: sepal length, sepal width, petal length, and petal width. For simplicity, we'll use only the 100 samples of two labels (Setosa = 0 and Versicolor = 1) and just the last two features, petal length and petal width. Okay, let's go. The first thing we usually do is import the necessary libraries. We also need to load the Iris data using the datasets module in scikit-learn and visualize it.
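Here's a minimal sketch of that setup (assuming NumPy, Matplotlib, and scikit-learn are installed; the variable names are my own):

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn import datasets

# Load the full Iris data set (150 samples, 4 features, 3 classes)
iris = datasets.load_iris()

# Keep the first 100 samples (Setosa = 0, Versicolor = 1)
# and only the last two features: petal length and petal width
X = iris.data[:100, [2, 3]]
y = iris.target[:100]

# Visualize the two classes in the petal length/width plane
plt.scatter(X[y == 0, 0], X[y == 0, 1], color='red', marker='o', label='Setosa')
plt.scatter(X[y == 1, 0], X[y == 1, 1], color='blue', marker='x', label='Versicolor')
plt.xlabel('petal length [cm]')
plt.ylabel('petal width [cm]')
plt.legend(loc='upper left')
plt.show()
```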
We're done preparing the data set; now we need to implement Adaline to classify it. Since we've already learned how important the learning rate \(\eta\) is, let's experiment with it.
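Below is a minimal sketch of Adaline trained with batch gradient descent, following the Delta rule from section 3.2.1. The class name `AdalineGD` and the 0.5 prediction threshold (since our labels are 0 and 1) are my own choices; the two training runs reproduce the learning-rate experiment discussed next:

```python
class AdalineGD:
    """ADAptive LInear NEuron trained with batch gradient descent."""

    def __init__(self, eta=0.01, n_iter=20):
        self.eta = eta        # learning rate
        self.n_iter = n_iter  # number of epochs

    def fit(self, X, y):
        self.w_ = np.zeros(1 + X.shape[1])  # weights; w_[0] is the bias term
        self.cost_ = []                     # sum-of-squared-errors per epoch
        for _ in range(self.n_iter):
            output = self.net_input(X)      # linear (identity) activation
            errors = y - output
            # Delta rule: w := w + eta * X^T (y - output)
            self.w_[1:] += self.eta * X.T.dot(errors)
            self.w_[0] += self.eta * errors.sum()
            self.cost_.append((errors ** 2).sum() / 2.0)
        return self

    def net_input(self, X):
        return np.dot(X, self.w_[1:]) + self.w_[0]

    def predict(self, X):
        # Threshold the linear output at 0.5 because our labels are 0 and 1
        return np.where(self.net_input(X) >= 0.5, 1, 0)

# Train with two different learning rates and compare the error curves
fig, ax = plt.subplots(1, 2, figsize=(10, 4))

ada1 = AdalineGD(eta=0.001, n_iter=20).fit(X, y)
ax[0].plot(range(1, len(ada1.cost_) + 1), ada1.cost_, marker='o')
ax[0].set_xlabel('Epochs')
ax[0].set_ylabel('Sum-squared-error')
ax[0].set_title('Adaline (eta = 0.001)')

ada2 = AdalineGD(eta=0.01, n_iter=20).fit(X, y)
ax[1].plot(range(1, len(ada2.cost_) + 1), np.log10(ada2.cost_), marker='o')
ax[1].set_xlabel('Epochs')
ax[1].set_ylabel('log10(Sum-squared-error)')
ax[1].set_title('Adaline (eta = 0.01)')

plt.show()
```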
As we can see, when we choose a "nice" \(\eta\), e.g. \(\eta = 0.001\), our model works properly and the error decreases every epoch. Conversely, if we choose an \(\eta\) that is too large, e.g. \(\eta = 0.01\), it overshoots the global minimum and the error grows without bound.
Now let's plot the decision boundary to see how our model works.
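One common way is a small helper that runs the trained model over a dense grid of points and colors each region by its predicted class. The `plot_decision_regions` helper below is a hypothetical name of my own; it reuses `X`, `y`, and the converged `ada1` model from above:

```python
from matplotlib.colors import ListedColormap

def plot_decision_regions(X, y, classifier, resolution=0.02):
    # Evaluate the classifier on a dense grid covering the feature space
    x1_min, x1_max = X[:, 0].min() - 1, X[:, 0].max() + 1
    x2_min, x2_max = X[:, 1].min() - 1, X[:, 1].max() + 1
    xx1, xx2 = np.meshgrid(np.arange(x1_min, x1_max, resolution),
                           np.arange(x2_min, x2_max, resolution))
    Z = classifier.predict(np.array([xx1.ravel(), xx2.ravel()]).T)
    Z = Z.reshape(xx1.shape)

    # Color each region by its predicted class
    plt.contourf(xx1, xx2, Z, alpha=0.3, cmap=ListedColormap(('red', 'blue')))

    # Overlay the training samples
    plt.scatter(X[y == 0, 0], X[y == 0, 1], color='red', marker='o', label='Setosa')
    plt.scatter(X[y == 1, 0], X[y == 1, 1], color='blue', marker='x', label='Versicolor')

plot_decision_regions(X, y, classifier=ada1)
plt.xlabel('petal length [cm]')
plt.ylabel('petal width [cm]')
plt.legend(loc='upper left')
plt.show()
```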
Alright, looking at the decision boundary, it seems our model works very well. Now play with the code yourself and tune some parameters to see what happens; it's really fun.