1. Introduction
2. History and Overview of Artificial Neural Networks
3. Single neural network
- 3.1 Perceptron
- 3.1.1 The Unit Step function
- 3.1.2 The Perceptron rules
- 3.1.3 The bias term
- 3.1.4 Implement Perceptron in Python
- 3.2 Adaptive Linear Neurons
- 3.2.1 Gradient Descent rule (Delta rule)
- 3.2.2 Learning rate in Gradient Descent
- 3.2.3 Implement Adaline in Python to classify Iris data
- 3.2.4 Learning with different types of Gradient Descent
- 3.3 Problems with Perceptron (AI Winter)
- 4.1 Overview about Multi-layer Neural Network
- 4.2 Forward Propagation
- 4.3 Cost function
- 4.4 Backpropagation
- 4.5 Implement simple Multi-layer Neural Network to solve the problem of Perceptron
- 4.6 Some optional techniques for Multi-layer Neural Network Optimization
- 4.7 Multi-layer Neural Network for binary/multi-class classification
- 5.1 Overview about MNIST data
- 5.2 Implement Multi-layer Neural Network
- 5.3 Debugging Neural Network with Gradient Descent Checking
7. References
Adaptive Linear Neurons
According to Wikipedia, Adaline was proposed in 1960, shortly after the Perceptron, by Professor Bernard Widrow and his graduate student Ted Hoff at Stanford University. Like the Perceptron, Adaline is a Single Neural Network, but it differs from the Perceptron in two ways:
- First, the cost function used in Adaline is the Sum of Squared Errors (SSE) over all training samples, instead of the per-sample error used by the Perceptron.
- Second, Adaline minimizes this cost function with Gradient Descent (the Delta rule), whereas the Perceptron updates its weights with the Perceptron rule based on the output of a unit step function. The advantage is that Adaline computes its error from real, continuous values rather than binary outputs. Why Adaline can do that, we'll discuss in more detail later.
In this topic, we'll also discuss the learning rate \(\eta\) in detail, along with the different types of Gradient Descent: Stochastic Gradient Descent (online learning), Batch Gradient Descent, and Mini-batch Gradient Descent.
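To make the two differences above concrete, here is a minimal sketch of Adaline in NumPy using batch Gradient Descent on the SSE cost. This is an illustrative implementation, not the exact code developed later in this series; the names (`Adaline`, `eta`, `n_iter`) and the \(\{-1, 1\}\) label convention are assumptions for this example.

```python
import numpy as np

class Adaline:
    """Illustrative Adaline: batch Gradient Descent on the SSE cost.

    Note: names and hyperparameter defaults here are assumptions,
    not taken from the original article.
    """

    def __init__(self, eta=0.1, n_iter=100):
        self.eta = eta        # learning rate
        self.n_iter = n_iter  # passes over the training set

    def net_input(self, X):
        # Continuous, real-valued linear activation: X @ w + b.
        # Unlike the Perceptron, the error is computed from this value,
        # not from a thresholded binary output.
        return X @ self.w + self.b

    def fit(self, X, y):
        self.w = np.zeros(X.shape[1])
        self.b = 0.0
        self.costs = []
        for _ in range(self.n_iter):
            output = self.net_input(X)   # linear activation, no step function
            errors = y - output          # real-valued errors
            # Delta rule: step along the negative gradient of
            # J(w) = 1/2 * sum(errors**2), the SSE cost.
            self.w += self.eta * X.T @ errors
            self.b += self.eta * errors.sum()
            self.costs.append(0.5 * (errors ** 2).sum())
        return self

    def predict(self, X):
        # The step function is applied only at prediction time.
        return np.where(self.net_input(X) >= 0.0, 1, -1)
```

Notice that the unit step function appears only in `predict`; during training, the weight update uses the continuous output of `net_input`, which is exactly what makes the cost differentiable and Gradient Descent applicable.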