
NN Ops

Blended Learning Service

Created on October 22, 2024


How neural networks operate

Loss Function

Once the neural network produces an output, it compares this to the actual target using a loss function. This quantifies the difference between the predicted output and the actual result. Common loss functions include:

  • Mean Squared Error (MSE): Used for regression tasks, this measures the squared difference between the predicted and actual values
  • Cross-Entropy Loss: Used for classification tasks, this measures how well the predicted probabilities match the actual class labels

In facial recognition, a loss function is used to find the difference between the correct output and the predicted output of face features. Models are trained iteratively until the loss is low or zero, minimising error.
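To make these concrete, here is a minimal sketch of both loss functions in plain NumPy; the example values are hypothetical, chosen only to illustrate the calculations:

```python
import numpy as np

def mse(y_true, y_pred):
    # Mean Squared Error: average of the squared differences (regression)
    return np.mean((y_true - y_pred) ** 2)

def cross_entropy(y_true, y_pred, eps=1e-12):
    # Cross-Entropy: penalises predicted probabilities that miss the true class
    y_pred = np.clip(y_pred, eps, 1.0 - eps)  # avoid log(0)
    return -np.mean(np.sum(y_true * np.log(y_pred), axis=1))

# Regression: predicted values vs actual values
print(mse(np.array([2.0, 3.0]), np.array([2.5, 2.8])))   # 0.145

# Classification: one-hot actual labels vs predicted probabilities
y_true = np.array([[1.0, 0.0, 0.0]])   # the actual class is class 0
y_pred = np.array([[0.7, 0.2, 0.1]])   # the network's predicted probabilities
print(cross_entropy(y_true, y_pred))   # ≈ 0.357 (lower is better)
```

In both cases a perfect prediction gives a loss of zero, which is what iterative training drives towards.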

Iterative Learning

Training a neural network usually involves multiple iterations over the entire dataset. Each pass through the entire dataset is called an epoch. During each epoch, the network makes predictions, calculates the loss, and updates the weights using backpropagation. With each successive epoch, the model ideally becomes better at making predictions.

A key hyperparameter is the network's learning rate, which controls how large the updates to the weights should be. Too large and the network may not converge; too small and training will take too long.
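As an illustrative sketch of this loop, the following trains a single weight with plain gradient descent; the data, model, epoch count, and learning rate are all hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=100)                       # hypothetical input data
y = 3.0 * X + rng.normal(scale=0.1, size=100)  # target relationship: y ≈ 3x

w = 0.0               # single weight to learn
learning_rate = 0.1   # the key hyperparameter discussed above

for epoch in range(20):                    # each pass over the data is an epoch
    y_pred = w * X                         # make predictions (forward pass)
    loss = np.mean((y - y_pred) ** 2)      # calculate the loss (MSE)
    grad = -2 * np.mean((y - y_pred) * X)  # gradient of the loss w.r.t. w
    w -= learning_rate * grad              # update the weight
    print(f"epoch {epoch}: loss={loss:.4f}, w={w:.3f}")
```

Raising learning_rate well above 1 makes the loss diverge here, while a very small value needs many more epochs: the trade-off described above.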

Backpropagation

For the network to learn from its mistakes and adjust neuron weightings to minimise error, it must go through a process called backpropagation. This means it must:

  • Compute the Gradient: Using the chain rule from calculus, backpropagation computes the gradient of the loss function with respect to each weight, telling the network how much each weight contributed to the error
  • Update the Weights: With the gradient calculated, the network uses an optimisation algorithm to adjust the weights. The goal is to move the weights in the direction that reduces the loss, making the network's predictions more accurate (a worked sketch follows this list).
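Here is a hedged sketch of those two steps for a tiny one-hidden-layer network; the input, target, and weights are hypothetical, sigmoid activations are assumed, and plain gradient descent stands in for the optimisation algorithm:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(1)
x = rng.normal(size=(1, 2))    # hypothetical single input
t = np.array([[1.0]])          # hypothetical target output

W1 = rng.normal(size=(2, 3))   # input -> hidden weights
W2 = rng.normal(size=(3, 1))   # hidden -> output weights

# Forward pass
h = sigmoid(x @ W1)            # hidden activations
y = sigmoid(h @ W2)            # network output
loss = 0.5 * np.sum((y - t) ** 2)

# Compute the gradient: chain rule applied layer by layer, output backwards
dy = (y - t) * y * (1 - y)     # error at the output's pre-activation
dW2 = h.T @ dy                 # how much each hidden->output weight contributed
dh = (dy @ W2.T) * h * (1 - h) # error propagated back through the hidden layer
dW1 = x.T @ dh                 # how much each input->hidden weight contributed

# Update the weights: step in the direction that reduces the loss
lr = 0.5
W2 -= lr * dW2
W1 -= lr * dW1
```

Repeating the forward pass after this update gives a slightly lower loss, which is exactly the refinement that iterative learning accumulates over many epochs.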

Propagation

A forward pass, or propagation, is when input data moves through the network, layer by layer, until it reaches the output layer. This process involves:

  • Input data: the input layer receives data
  • Weighted sum: each neuron in the layer computes a weighted sum of its inputs
  • Activation function: the neuron applies an activation function to introduce non-linearity, improving the flexibility and complexity of pattern recognition
  • Propagation: the output is passed to the next layer, repeating the process above at the next layer of neurons
  • Output: the processed information reaches an output layer, where a prediction or classification is made (illustrated in the sketch below)

Initial outputs are not usually very accurate, and the loss calculation stage is needed for refinement.
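The following is a minimal forward-pass sketch under assumed choices: ReLU activation in the hidden layer, softmax at the output, and randomly initialised weights standing in for a trained network:

```python
import numpy as np

def relu(z):
    # Activation function: introduces non-linearity
    return np.maximum(0.0, z)

rng = np.random.default_rng(2)
x = rng.normal(size=(1, 4))    # input data: the input layer receives data

# Hypothetical weights and biases for two layers
W1, b1 = rng.normal(size=(4, 5)), np.zeros(5)
W2, b2 = rng.normal(size=(5, 3)), np.zeros(3)

# Hidden layer: weighted sum, then activation, then propagate onwards
h = relu(x @ W1 + b1)

# Output layer: weighted sum, then softmax to turn scores into probabilities
logits = h @ W2 + b2
probs = np.exp(logits - logits.max()) / np.exp(logits - logits.max()).sum()

print("predicted class:", probs.argmax())  # the classification made at the end
```

With untrained weights the prediction is essentially random, which is why the loss calculation and backpropagation stages above are needed for refinement.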

Data Preparation

This critical step transforms raw data into a clean, structured, and usable format that can be fed into a deep learning model for training. Data is:

  • Collected
  • Cleaned
  • Transformed
  • Augmented
  • Split
  • Balanced
  • Batched
  • Encoded
  • Scaled

In facial recognition, this means collecting a large number of face images and adding labels to them.
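A few of these steps can be sketched in plain NumPy; the arrays, labels, split ratio, and batch size below are all hypothetical:

```python
import numpy as np

rng = np.random.default_rng(3)

# Collected: hypothetical raw features and string labels
X = rng.normal(loc=5.0, scale=2.0, size=(10, 3))
labels = np.array(["cat", "dog", "cat", "dog", "cat",
                   "dog", "cat", "dog", "cat", "dog"])

# Encoded: map class names to integer ids
classes, y = np.unique(labels, return_inverse=True)

# Scaled: standardise features to zero mean and unit variance
X = (X - X.mean(axis=0)) / X.std(axis=0)

# Split: hold out 20% of the examples for testing
idx = rng.permutation(len(X))
split = int(0.8 * len(X))
X_train, y_train = X[idx[:split]], y[idx[:split]]
X_test, y_test = X[idx[split:]], y[idx[split:]]

# Batched: group training examples into mini-batches
batch_size = 4
batches = [(X_train[i:i + batch_size], y_train[i:i + batch_size])
           for i in range(0, len(X_train), batch_size)]
print(len(batches), "batches; classes:", classes)
```

Cleaning, augmentation, and balancing depend heavily on the data type (for face images: removing corrupt files, flipping or cropping images, and evening out class counts), so they are omitted from this sketch.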