1. Introduction
2. History and Overview of Artificial Neural Networks
- 2.1 A brief history of Artificial Neural Networks
- 2.2 What a Neural Network is and how it works
- 3.1 Perceptron
- 3.1.1 The Unit Step function
- 3.1.2 The Perceptron rule
- 3.1.3 The bias term
- 3.1.4 Implementing the Perceptron in Python
- 3.2 Adaptive Linear Neurons (Adaline)
- 3.2.1 The Gradient Descent rule (Delta rule)
- 3.2.2 The learning rate in Gradient Descent
- 3.2.3 Implementing Adaline in Python to classify the Iris data
- 3.2.4 Learning with different types of Gradient Descent
- 3.3 Problems with the Perceptron (AI Winter)
- 4.1 Overview of Multi-layer Neural Networks
- 4.2 Forward Propagation
- 4.3 The cost function
- 4.4 Backpropagation
- 4.5 Implementing a simple Multi-layer Neural Network to solve the Perceptron's problem
- 4.6 Optional techniques for Multi-layer Neural Network optimization
- 4.7 Multi-layer Neural Networks for binary/multi-class classification
- 5.1 Overview of the MNIST data
- 5.2 Implementing the Multi-layer Neural Network
- 5.3 Debugging Neural Networks with Gradient Checking
7. References
Intuition about Neural Networks
Fundamentally, the learning algorithms inside computers are just combinations of variables and mathematical equations linked together to perform a specific function. The same is true for the Neural Network model: the idea behind Neural Networks is to simulate the human brain, interconnecting thousands or even billions of simple units to do pattern recognition, learning, and decision-making the way humans do. We call each of these units a Neuron, and each Neuron in a Neural Network operates like a biological neuron in the human brain. To understand how it works, first look at the picture above, which describes the workflow inside a brain neuron. A biological neuron receives input signals (electrical and chemical) from the output signals of other neurons connected to it through its dendrites. These signals are processed in the cell body, and when they reach a threshold, the neuron fires a signal to other neurons through its axon and adjusts the connections at its axon terminals; this is how the brain learns. Because billions upon billions of connected neurons in the brain process signals in parallel, human brains can do amazing things.
How it works
In general, a Neural Network has 3 layers with different tasks:
- The input layer receives information, the so-called features or variables, from the data; the number of neurons in the input layer corresponds to the number of features you want to feed in.
- The hidden layer processes the information coming from the input layer; it can consist of more than one layer, with many neurons in each (such networks are called Deep Neural Networks, which are out of the scope of this paper).
- The output layer returns the outputs; similarly, the number of neurons in the output layer depends on what outputs you want. For example, if your Neural Network is used for binary classification, the output layer has just two neurons, representing 0 and 1.
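To make this three-layer structure concrete, here is a minimal sketch of a forward pass through such a network in NumPy. The layer sizes (4 input features, 5 hidden neurons, 2 output neurons), the random weights, and the sigmoid activation are illustrative assumptions, not values from the text.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed sizes: 4 features in, 5 hidden neurons, 2 output neurons.
n_features, n_hidden, n_outputs = 4, 5, 2
W_hidden = rng.normal(size=(n_features, n_hidden))  # input -> hidden weights
W_output = rng.normal(size=(n_hidden, n_outputs))   # hidden -> output weights

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x):
    """Pass one sample through input -> hidden -> output."""
    hidden = sigmoid(x @ W_hidden)       # hidden layer activations
    output = sigmoid(hidden @ W_output)  # output layer activations
    return output

x = np.array([0.5, -1.2, 3.3, 0.0])  # one sample with 4 features
print(forward(x).shape)  # (2,) -- two output neurons, e.g. for binary classification
```

Each neuron here simply takes a weighted sum of the previous layer's outputs and squashes it through an activation function, mirroring the threshold behavior of the biological neuron described above.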
So, what causes the network to learn?
When the input is fixed, the Neural Network learns new things by changing its weights, just as the human brain changes the connections between its neurons. Curiously, how can it do that? The heart of the Neural Network is the Backpropagation algorithm, which we will discuss later; it helps the Neural Network learn the pattern between arbitrary inputs and outputs, and it is also the reason Neural Networks belong to Supervised Learning. The idea is that the algorithm tries to optimize all of the weights in the Neural Network based on the feedback from the output layer. Easy, right? What that feedback is, we will also discuss later. Let's go through an example to make sure you fully understand.
Example
- Feedforward
- Backpropagation
- Update the weight
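The three steps above can be sketched end to end for a tiny network. This is only an illustrative sketch, assuming a 2-3-1 architecture, sigmoid activations, a squared-error cost, a single training sample, and a learning rate of 0.5; none of these choices come from the text.

```python
import numpy as np

rng = np.random.default_rng(1)
W1 = rng.normal(scale=0.5, size=(2, 3))  # input -> hidden weights
W2 = rng.normal(scale=0.5, size=(3, 1))  # hidden -> output weights

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

x = np.array([[1.0, 0.0]])  # one training sample (assumed)
y = np.array([[1.0]])       # its target output (assumed)
eta = 0.5                   # learning rate (assumed)

for step in range(500):
    # 1. Feedforward: compute activations layer by layer.
    h = sigmoid(x @ W1)
    out = sigmoid(h @ W2)

    # 2. Backpropagation: push the output error back through the network.
    #    With squared error E = 0.5*(out - y)^2 and sigmoid activations,
    #    each delta is the error term times the sigmoid derivative a*(1-a).
    delta_out = (out - y) * out * (1 - out)
    delta_h = (delta_out @ W2.T) * h * (1 - h)

    # 3. Update the weights: step against the gradient, i.e. use the
    #    feedback from the output layer to adjust every connection.
    W2 -= eta * h.T @ delta_out
    W1 -= eta * x.T @ delta_h

print(out.item())  # after training, the output is close to the target 1.0
```

Repeating feedforward, backpropagation, and the weight update drives the network's output toward the target, which is exactly the learning loop the example describes.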