Understanding and Identifying Visual Problematic Information on the Facebook Platform
A guide for Facebook users to identify, evaluate and interpret visual misinformation
Goals of the Project
Give Facebook users a tool to examine visual problematic information by considering four factors:
1. Consider the context
2. Consider any audiovisual or photographic inconsistencies
3. Consider suspicious information within the post
4. Consider the source of the post
Help Facebook users identify visual problematic information in the form of:
- deepfakes
- manipulated content
- fabricated content
Help Facebook users understand how Facebook organizes content
Theme 1: Understanding the role of Facebook
- You are one of 2.56 billion users on the Facebook platform
- Facebook collects your data from the people, accounts, groups and pages you interact with
- Four billion pieces of content are shared by users every day, including 250 million photo uploads.
Your Facebook Feed: content is determined by three elements
- Signals: who posted the content
- Predictions: how likely you are to engage with the content
- Score: how interested people will be in the content
- Content from sources you interact with, including friends and companies, is more likely to appear in your feed
- A relevancy score is calculated, estimating how many people will be interested in the content; factors include the likelihood that you will spend time on the post, comment on it, or share it
- The algorithm uses signals to predict how likely you are to engage with a post and whether you will find it meaningful
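The ranking flow described above (signals feed predictions, which are combined into a relevancy score) can be sketched in code. This is a minimal illustration only: the signal names, probability formulas, and weights below are invented assumptions, not Facebook's actual model.

```python
# Illustrative sketch of a feed-ranking flow: signals -> predictions -> score.
# All signal names, formulas, and weights are hypothetical assumptions.

def predict_engagement(signals: dict) -> dict:
    """Turn raw signals about a post into engagement predictions (0-1)."""
    base = 0.5 if signals["posted_by_friend"] else 0.1
    return {
        "p_like": min(1.0, base + 0.2 * signals["past_interactions"] / 10),
        "p_comment": min(1.0, base * 0.5),
        "p_share": min(1.0, base * 0.3),
    }

def relevancy_score(predictions: dict) -> float:
    """Combine predictions into a single relevancy score (weights assumed)."""
    weights = {"p_like": 1.0, "p_comment": 2.0, "p_share": 3.0}
    return sum(weights[k] * v for k, v in predictions.items())

# Rank two hypothetical posts: one from a friend, one from an unknown page.
posts = {
    "friend_photo": {"posted_by_friend": True, "past_interactions": 8},
    "page_ad": {"posted_by_friend": False, "past_interactions": 0},
}
ranked = sorted(
    posts,
    key=lambda p: relevancy_score(predict_engagement(posts[p])),
    reverse=True,
)
print(ranked)  # the friend's post ranks first under these assumed weights
```

Under these assumptions, content from sources you already interact with produces stronger predictions and therefore a higher score, matching the behaviour the guide describes.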
Challenge 1: Identifying Deepfake Content
Facebook user: Tim Black
Correct answer: True. The edited photo was an example of a deepfake.
Deepfakes FAQ
What are deepfakes?
Digitally generated images or videos that use artificial intelligence to depict people in events, actions or statements that never happened.
How are deepfakes monitored on Facebook?
Facebook proposed two regulations for deepfakes: “It has been edited or synthesised … in ways that aren’t apparent to an average person and would likely mislead someone into thinking that a subject of the video said words that they did not actually say.” “It is the product of artificial intelligence or machine learning that merges, replaces or superimposes content onto a video, making it appear to be authentic.”
Why are deepfakes still appearing despite these regulations?
The policy does not extend to posts classified as satire or parody, so producers can present their deepfakes in that format. In addition, Facebook's warning labels do not advise users to look for: 1. context, 2. audiovisual imperfections, 3. suspicious content, 4. the source of the content.
How You Can Identify Deepfakes in Photos or Videos
1. Context
Deepfakes often contain or reference wider contextual information. In this example, the deepfake depicting Donald Trump being arrested was posted days after Trump's indictment.
2. Audiovisual imperfections
Features to look out for include unusual or unnatural lip/mouth movements and glitched audio. In this example, the deepfake lacks focus in the subject's facial features, and the figures in the background appear pixelated and unnatural.
3. Suspicious content
Content that violates expected behaviour, speech or attitudes. In this example, although Trump's indictment was heavily reported by news organizations, these images were not, and such a scenario is unlikely to occur in a public setting with a major political figure.
4. Source of the content
These images were not produced or published by any credited or reputable news organization.
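The four-factor checklist above can be written down as a simple self-assessment aid. The factor names follow this guide; the yes/no questions and the "likely problematic" threshold are illustrative assumptions, not an authoritative detector.

```python
# A simple helper for the guide's four-factor checklist.
# The questions and the threshold of 2 red flags are assumed for illustration.

CHECKLIST = [
    ("context", "Does the post piggyback on a wider event it may be exploiting?"),
    ("audiovisual", "Are there blurred faces, pixelated figures, or glitched audio?"),
    ("suspicious_content", "Does the content violate expected behaviour or speech?"),
    ("source", "Is the post missing a credited or reputable source?"),
]

def assess_post(answers: dict) -> tuple:
    """Count red flags the user identified (True = red flag) and give a verdict."""
    flags = sum(1 for factor, _question in CHECKLIST if answers.get(factor, False))
    verdict = "likely problematic" if flags >= 2 else "no strong indicators"
    return flags, verdict

# Example: the Trump-arrest deepfake discussed above raises all four flags.
flags, verdict = assess_post({
    "context": True,
    "audiovisual": True,
    "suspicious_content": True,
    "source": True,
})
print(flags, verdict)  # 4 likely problematic
```

The point of the sketch is the structure, not the threshold: each factor is checked independently, and no single factor is treated as conclusive on its own.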
Manipulated Content
Changes to genuine content intended to deceive or create a false context.
It contains information that does not correlate with the facts and is consciously directed towards disinforming the public.
An illustration of how Trump administration representatives have used misleadingly edited or out-of-context images to criticise Biden.
Using the 4-Factor Checklist to Identify Manipulated Content
By exploring the context, audiovisual or technical inconsistencies and suspicious content can be identified within the image.
Context
Inconsistencies
Suspicious Content
Source of Content
Fabricated Content
A fabricated photo which circulated in 2018, depicting Donald Trump assisting flood victims during Hurricane Florence by handing out a 'MAGA' hat.
Understanding This Content
Context
The use of emotive language to invite users to share the post and engage in Republican emotive discourse is a hallmark of fabricated content.
Suspicious Content
The intent of fabricated content is to mislead, often for political or financial gain, and to steer users in a certain political or ideological direction (Dominican University).
Inconsistencies
Visual Problematic Information Guide
Rachel Orla Obray
Created on May 1, 2023