TrisZaska's Machine Learning Blog

Overview of Adaptive Linear Neurons

1. Introduction
2. History and Overview of Artificial Neural Networks
3. Single neural network
4. Multi-layer neural network
5. Installing and using a Multi-layer Neural Network to classify MNIST data
6. Summary
7. References

Adaptive Linear Neurons

According to Wikipedia, Adaline was proposed in 1960, shortly after the Perceptron, by Professor Bernard Widrow and his graduate student Ted Hoff at Stanford University. Like the Perceptron, Adaline belongs to the family of single neural networks, but it differs from the Perceptron in two ways:
  • Firstly, the cost function used in Adaline is the Sum of Squared Errors (SSE), instead of the per-sample error used in the Perceptron.
  • Secondly, Adaline tries to minimize that cost function with Gradient Descent (the Delta rule), whereas the Perceptron uses the unit step function (the Perceptron rule). The advantage for Adaline is that the error is computed from real continuous values rather than binary values. Why Adaline can do that is something we'll discuss in more detail later; a small sketch of these two ideas follows this list.
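To make those two differences concrete, here is a minimal sketch of Adaline trained with batch gradient descent on the SSE cost \(J(\mathbf{w}) = \frac{1}{2}\sum_{i}\big(y^{(i)} - \phi(z^{(i)})\big)^2\), where \(\phi(z)\) is the identity (linear) activation. The class name AdalineGD and its parameters are illustrative assumptions, not the final implementation developed later in this topic.

import numpy as np

class AdalineGD(object):
    """Minimal Adaline sketch: linear activation, SSE cost, batch gradient descent."""

    def __init__(self, eta=0.01, n_iter=50):
        self.eta = eta          # learning rate
        self.n_iter = n_iter    # number of passes over the training set

    def fit(self, X, y):
        self.w_ = np.zeros(1 + X.shape[1])   # weights; w_[0] is the bias unit
        self.cost_ = []
        for _ in range(self.n_iter):
            output = self.net_input(X)       # continuous values, not binary labels
            errors = y - output              # real-valued errors
            # Delta rule: move the weights along the negative gradient of the SSE cost
            self.w_[1:] += self.eta * X.T.dot(errors)
            self.w_[0] += self.eta * errors.sum()
            self.cost_.append((errors ** 2).sum() / 2.0)   # SSE cost J(w) per epoch
        return self

    def net_input(self, X):
        return np.dot(X, self.w_[1:]) + self.w_[0]

    def predict(self, X):
        # The threshold is applied only to produce the final class label, not for learning
        return np.where(self.net_input(X) >= 0.0, 1, -1)

Notice that errors holds the continuous output of the linear activation, which is what gives Adaline a smooth, differentiable cost to descend; the step function only appears in predict, after learning is done.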
In this topic, we'll also discuss the learning rate \(\eta\) in detail, as well as the different types of Gradient Descent, such as Stochastic Gradient Descent (online learning), Batch Gradient Descent, and Mini-batch Gradient Descent.
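As a rough preview of those variants, the sketch below shows how each one feeds the training data into the same delta-rule update; the helper names (batch_update, stochastic_update, minibatch_update) and the default batch size are assumptions for illustration only.

import numpy as np

def batch_update(w, X, y, eta):
    """Batch GD: one update per epoch, computed from the whole training set."""
    errors = y - (X.dot(w[1:]) + w[0])
    w[1:] += eta * X.T.dot(errors)
    w[0] += eta * errors.sum()
    return w

def stochastic_update(w, X, y, eta):
    """Stochastic GD (online learning): one update per training sample."""
    for xi, target in zip(X, y):
        error = target - (xi.dot(w[1:]) + w[0])
        w[1:] += eta * xi * error
        w[0] += eta * error
    return w

def minibatch_update(w, X, y, eta, batch_size=32):
    """Mini-batch GD: one update per small batch, a compromise between the two."""
    for start in range(0, X.shape[0], batch_size):
        Xb, yb = X[start:start + batch_size], y[start:start + batch_size]
        errors = yb - (Xb.dot(w[1:]) + w[0])
        w[1:] += eta * Xb.T.dot(errors)
        w[0] += eta * errors.sum()
    return w

Batch gradient descent makes one precise update per epoch from all samples, stochastic gradient descent updates after every single sample (hence "online learning"), and mini-batch gradient descent trades off between the two; we'll look at each of them in detail later in this topic.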
