How neural networks operate


A neural network learns in five stages: data preparation, propagation, loss calculation, backpropagation, and iterative learning.

Data Preparation

This critical step transforms raw data into a clean, structured, and usable format that can be fed into a deep learning model for training. Data is:

  • Collected
  • Cleaned
  • Transformed
  • Augmented
  • Split
  • Balanced
  • Batched
  • Encoded
  • Scaled

In facial recognition, this means collecting a large set of face images and labelling each one.
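
A minimal sketch of some of these steps, assuming NumPy and a synthetic dataset (the arrays, the 80/20 split, and the batch size of 32 are all illustrative choices, not requirements):

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical raw dataset: 1,000 samples, 20 features, binary labels.
X = rng.normal(loc=5.0, scale=2.0, size=(1000, 20))
y = rng.integers(0, 2, size=1000)

# Clean: drop any rows that contain missing values.
mask = ~np.isnan(X).any(axis=1)
X, y = X[mask], y[mask]

# Scale: standardise each feature to zero mean and unit variance.
X = (X - X.mean(axis=0)) / X.std(axis=0)

# Split: shuffle, then hold out 20% of the samples for evaluation.
indices = rng.permutation(len(X))
cut = int(0.8 * len(X))
X_train, y_train = X[indices[:cut]], y[indices[:cut]]
X_test, y_test = X[indices[cut:]], y[indices[cut:]]

# Batch: yield mini-batches of the training data.
def batches(X, y, batch_size=32):
    for start in range(0, len(X), batch_size):
        yield X[start:start + batch_size], y[start:start + batch_size]

print(X_train.shape, X_test.shape)  # (800, 20) (200, 20)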

Propagation

A forward pass, or propagation, is when input data moves through the network, layer by layer, until it reaches the output layer. This process involves:

  • Input data: the input layer receives data
  • Weighted sum: each neuron in the layer computes a weighted sum of its inputs
  • Activation function: the neuron applies an activation function to introduce non-linearity, improving the flexibility and complexity of pattern recognition
  • Propagation: the output is passed to the next layer (a repeat of the process above at the next layer of neurons)
  • Output: the processed information reaches an output layer, where a prediction or classification is made

Initial outputs are not usually very accurate, so a loss calculation stage is needed for refinement.
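
As a rough illustration, a forward pass through a single hidden layer takes only a few lines of NumPy; the layer sizes and random starting weights here are hypothetical:

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: 20 inputs, 16 hidden neurons, 1 output neuron.
W1, b1 = 0.1 * rng.normal(size=(20, 16)), np.zeros(16)
W2, b2 = 0.1 * rng.normal(size=(16, 1)), np.zeros(1)

def relu(z):
    return np.maximum(0.0, z)        # hidden-layer non-linearity

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))  # squashes the output into (0, 1)

def forward(X):
    z1 = X @ W1 + b1    # weighted sum at the hidden layer
    a1 = relu(z1)       # activation function introduces non-linearity
    z2 = a1 @ W2 + b2   # weighted sum at the output layer
    return sigmoid(z2)  # prediction, e.g. a class probability

X = rng.normal(size=(4, 20))  # a batch of four input samples
print(forward(X))             # four (untrained) predictions in (0, 1)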

Loss Function

Once the neural network produces an output, it compares this to the actual target using a loss function, which quantifies the difference between the predicted output and the expected result. Common loss functions include:

  • Mean Squared Error (MSE): used for regression tasks, this measures the squared difference between the predicted and actual values
  • Cross-Entropy Loss: used for classification tasks, this measures how well the predicted probabilities match the actual class labels

In facial recognition, the loss function measures the difference between the correct output and the predicted output for face features. Models are trained iteratively until the loss is close to zero, to minimise error.
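
Both losses are straightforward to compute; in this NumPy sketch the labels and predictions are made up purely for illustration:

import numpy as np

def mse(y_true, y_pred):
    # Mean squared error, for regression tasks.
    return np.mean((y_true - y_pred) ** 2)

def binary_cross_entropy(y_true, y_pred, eps=1e-12):
    # Cross-entropy for binary classification; eps guards against log(0).
    y_pred = np.clip(y_pred, eps, 1.0 - eps)
    return -np.mean(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred))

y_true = np.array([1.0, 0.0, 1.0, 1.0])     # made-up target labels
y_pred = np.array([0.9, 0.2, 0.7, 0.6])     # made-up model outputs
print(mse(y_true, y_pred))                  # 0.075
print(binary_cross_entropy(y_true, y_pred)) # ~0.299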

Backpropagation

For the network to learn from its mistakes and adjust neuron weights to minimise error, it must go through a process called backpropagation. This means it must:

  • Compute the gradient: using the chain rule from calculus, backpropagation computes the gradient of the loss function with respect to each weight, telling the network how much each weight contributed to the error
  • Update the weights: with the gradient calculated, the network uses an optimisation algorithm to adjust the weights; the goal is to move the weights in the direction that reduces the loss, making the network's predictions more accurate

A key hyperparameter is the network's learning rate, which controls how large the updates to the weights should be. Too large and the network may not converge; too small and training will take too long.
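
A minimal sketch of this compute-then-update loop, using plain gradient descent on a single linear neuron with an MSE loss (a deliberately tiny stand-in for a full network):

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy problem: one linear neuron trained with MSE.
X = rng.normal(size=(100, 3))
true_w = np.array([2.0, -1.0, 0.5])   # weights the network should recover
y = X @ true_w

w = np.zeros(3)       # start with zero weights
learning_rate = 0.1   # the key hyperparameter discussed above

for step in range(200):
    y_pred = X @ w                       # forward pass
    error = y_pred - y
    grad = 2.0 / len(X) * (X.T @ error)  # chain rule: d(MSE)/dw
    w -= learning_rate * grad            # step against the gradient

print(w)  # converges towards [2.0, -1.0, 0.5]

Changing learning_rate in this sketch demonstrates the trade-off above: much larger values make w oscillate or diverge, much smaller ones make convergence very slow.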

Iterative Learning

Training a neural network usually involves multiple passes over the entire dataset; each full pass is called an epoch. During each epoch, the network makes predictions, calculates the loss, and updates the weights using backpropagation. With each successive epoch, the model ideally becomes better at making predictions.
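
Putting the stages together, an epoch loop for a minimal logistic-regression "network" might look like this; the data and hyperparameters are again synthetic:

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: 200 points with linearly separable binary labels.
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

w, b = np.zeros(2), 0.0
learning_rate = 0.5

for epoch in range(20):                    # one epoch = one full pass
    for start in range(0, len(X), 32):     # mini-batches within the epoch
        xb, yb = X[start:start + 32], y[start:start + 32]
        p = 1.0 / (1.0 + np.exp(-(xb @ w + b)))  # forward pass
        grad_w = xb.T @ (p - yb) / len(xb)       # gradient of cross-entropy
        grad_b = np.mean(p - yb)
        w -= learning_rate * grad_w              # backpropagation update
        b -= learning_rate * grad_b
    p_all = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    loss = -np.mean(y * np.log(p_all + 1e-12)
                    + (1 - y) * np.log(1 - p_all + 1e-12))
    print(f"epoch {epoch}: loss {loss:.3f}")  # loss shrinks epoch by epoch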