Notes on Activation Functions
AI
Activation Functions
A note on different activation functions.
Activation functions are used throughout neural networks. An activation function is the function that computes the output of a node. Why is it called an activation function? Because it decides whether the neuron should be activated or not: the node computes the weighted sum of its inputs, adds a bias, and the activation function is applied to that result. Activation functions are also known as transfer functions, because they transform the summed weighted input of the node into the output value that is transferred to the next hidden layer or emitted as the network's output.
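As a minimal sketch of that computation in Python with NumPy (the `neuron` helper and the example weights are made up for illustration, not any particular library's API):

```python
import numpy as np

# Hypothetical single-node helper: weighted sum of the inputs plus a bias,
# passed through an activation function.
def neuron(inputs, weights, bias, activation):
    z = np.dot(weights, inputs) + bias   # summed weighted input of the node
    return activation(z)                 # value transferred to the next layer

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Made-up example values
x = np.array([0.5, -1.2, 3.0])
w = np.array([0.4, 0.7, -0.2])
print(neuron(x, w, bias=0.1, activation=sigmoid))
```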
Background
Why activation functions?
The purpose of an activation function is to add non-linearity to the neural network. Without one, a stack of layers collapses into a single linear transformation, so the network can only learn linear relationships no matter how many layers it has.
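A small sketch of why this matters, assuming two randomly initialised weight matrices and using ReLU as the example non-linearity: without an activation, the two layers collapse into one linear map; with ReLU, they do not.

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 3))   # first "layer" weights
W2 = rng.normal(size=(2, 4))   # second "layer" weights
x = rng.normal(size=3)

# Two linear layers with no activation collapse into a single linear map:
two_linear = W2 @ (W1 @ x)
collapsed = (W2 @ W1) @ x
print(np.allclose(two_linear, collapsed))   # True

# Inserting a non-linearity (ReLU here) breaks that equivalence:
relu = lambda z: np.maximum(0.0, z)
with_relu = W2 @ relu(W1 @ x)
print(np.allclose(with_relu, collapsed))    # False in general
```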
Types of Activation Functions
Activation functions are broadly divided into two types (an example of each is sketched after the list below):
- Linear Activation Function
- Non-Linear Activation Function
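A quick sketch of one function from each family; the non-linear examples shown (sigmoid, tanh, ReLU) are common choices, not an exhaustive list.

```python
import numpy as np

# Linear activation: output is proportional to the input
def linear(z, a=1.0):
    return a * z

# A few common non-linear activations (illustrative, not exhaustive)
def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def tanh(z):
    return np.tanh(z)

def relu(z):
    return np.maximum(0.0, z)

z = np.linspace(-3.0, 3.0, 7)
for f in (linear, sigmoid, tanh, relu):
    print(f.__name__, np.round(f(z), 3))
```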
Thoughts:
Citation
BibTeX citation:
@misc{kumar2024,
  author = {Chandan Kumar},
  title = {Notes on {Activation} {Functions}},
  date = {2024-01-01},
  langid = {en-GB}
}
For attribution, please cite this work as:
Chandan Kumar. 2024. “Notes on Activation Functions.”