MIT Museum Research Deck
Luiz Bernardes & Ben Tyler
START
Introduction/Site Features
01 · Research (Luiz)
02 · Research (Ben)
03 · Multimedia (Luiz)
04 · Multimedia (Ben)
05 · Reflection Page
Deepfakes
What are they and how can you identify them?
Use
Safety
Identification
Definition
Deepfakes are used in ways that have both positive and negative impacts, including art, entertainment, blackmail, and fraud.
Different organizations and government agencies have begun to develop software that blocks deepfakes. One practical step anyone can take is to use detection tools from companies such as Adobe, Microsoft, and Sentinel, which highlight imperfections in suspected deepfakes.
The easiest way to identify a deepfake is to closely observe the people or objects in the video or image. Common flaws include inconsistent lighting, unsynced audio, misspellings, inaccurate facial positioning and movement, and long periods without blinking.
A deepfake is an audio recording, video, or image generated with artificial intelligence to try and convince people of something that never happened.
Luiz Bernardes
MASKED BIAS
How much of an issue does it pose?
Bias in AI as a whole
What we've done
Exhibit
AI has many biases, usually caused by biased data, flawed design, and incomplete testing. These flaws lead to inequity; for a time, they led to people of color being falsely incarcerated, until Dr. Buolamwini's findings brought the problem to light.
Along with making it illegal to use AI as an incarceration tool, we have implemented improved data practices and utilize bias detection tools to reduce inequity in AI.
The exhibit shows a white mask, which Dr. Joy Buolamwini used to test whether AI would recognize it as human. She found that the AI recognized the mask better than it recognized people of color.
Deepfake Examples
Bias Media
Aha moment: The chip that resembled a human brain was amazing to see. Personal connection: ScratchJr and the idea of AI itself. Future exploration: I'm interested in continuing to learn about AI and pursuing a potential major or minor in AI innovation.
Aha moment: I enjoyed seeing MIT's long history of innovation and progress. Personal connection: Seeing how AI can negatively impact people's everyday lives. Future exploration: I am excited to continue to grow and learn with AI.
Research team
Bentley University Students
Luiz
Ben
MIT Museum Pitch Deck
Luiz Gustavo Bernardes
Created on November 26, 2024