Generative AI Harms
Lori Mullooly
Created on September 16, 2024
Transcript
Generative AI Harms
(* marks a representational harm)
- Gendered Harm*: Inequality for oppressed genders and/or sexualities
- Socio-Economic Harm*: Marginalization of the poor
- Genocidal Harm*: Erasure of cultures
- Racist Harm*: Oppression of the racialized
- Ecocidal Harm: Environmental and/or climate degradation
- Ableist Harm*: Inequality for the disabled
Ecocidal Harm
The Uneven Distribution of AI’s Environmental Impacts. (2024, July 15). Harvard Business Review. https://hbr.org/2024/07/the-uneven-distribution-of-ais-environmental-impacts
- Training a single large AI model can consume thousands of megawatt-hours of electricity and emit hundreds of tons of carbon, comparable to the annual emissions of hundreds of American households (a rough sketch of this arithmetic appears below).
- Cooling the data centers used for AI training evaporates significant amounts of freshwater, potentially straining limited water resources.
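To make the household comparison concrete, here is a rough back-of-envelope sketch in Python. Every constant is an illustrative assumption (a roughly GPT-3-scale training run and rough US averages), not a figure from the cited HBR article, and the result shifts substantially with different inputs.

# Back-of-envelope conversion from training energy to a household comparison.
# All constants are illustrative assumptions, not figures from the cited article.
TRAINING_ENERGY_MWH = 1_300           # assumed: roughly GPT-3-scale training run
GRID_KG_CO2_PER_KWH = 0.39            # assumed: rough US grid carbon intensity
HOUSEHOLD_TONS_CO2_PER_YEAR = 4.5     # assumed: rough US household energy footprint

emissions_tons = TRAINING_ENERGY_MWH * 1_000 * GRID_KG_CO2_PER_KWH / 1_000
households = emissions_tons / HOUSEHOLD_TONS_CO2_PER_YEAR
print(f"Estimated training emissions: {emissions_tons:,.0f} t CO2")
print(f"Comparable to the annual energy emissions of ~{households:,.0f} US households")

With these assumptions the sketch yields roughly 507 t of CO2, on the order of a hundred households' annual energy emissions; larger training runs or dirtier grids push both numbers up.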
Racist Harm
Simonite, T. (n.d.). The Best Algorithms Still Struggle to Recognize Black Faces. Wired. Retrieved September 16, 2024, from https://www.wired.com/story/best-algorithms-struggle-recognize-black-faces-equally/
Nicoletti, L., & Bass, D. (2024, August 7). Humans Are Biased. Generative AI Is Even Worse. Bloomberg. https://www.bloomberg.com/graphics/2023-generative-ai-bias/
- Many facial-analysis datasets used by companies are not representative, skewing toward white, male, and Western faces because they are sourced from the web.
- AI image-generation models show biases in depicting occupations, often misrepresenting racial demographics. For example, they over-represent people with darker skin tones in roles like fast-food worker and social worker, contrary to actual US demographics (a minimal sketch of this kind of representation check appears below).
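The arithmetic behind such a finding is a simple representation check: compare each group's share of a labeled batch of generated images against a real-world reference share. The sketch below uses placeholder counts and a hypothetical two-group labeling, not data or methods from the cited Bloomberg analysis.

def representation_gap(generated_counts, reference_shares):
    # Generated share minus reference share, per demographic group.
    total = sum(generated_counts.values())
    return {group: generated_counts[group] / total - reference_shares[group]
            for group in generated_counts}

# Hypothetical labels for 100 generated "fast-food worker" images.
generated = {"darker_skin": 70, "lighter_skin": 30}
# Hypothetical reference shares for the same occupation in the actual US workforce.
reference = {"darker_skin": 0.30, "lighter_skin": 0.70}

for group, gap in representation_gap(generated, reference).items():
    print(f"{group}: {gap:+.0%} vs. reference")

A positive gap means the group is over-represented in the generated images relative to the reference; with the placeholder numbers above, darker-skinned workers are over-represented by 40 percentage points.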
Gendered Harm
Nicoletti, L., & Bass, D. (2024, August 7). Humans Are Biased. Generative AI Is Even Worse. Bloomberg. https://www.bloomberg.com/graphics/2023-generative-ai-bias/
Generative AI: UNESCO study reveals alarming evidence of regressive gender stereotypes. (n.d.). UNESCO. Retrieved September 16, 2024, from https://www.unesco.org/en/articles/generative-ai-unesco-study-reveals-alarming-evidence-regressive-gender-stereotypes
- Women in the US remain underrepresented in high-paying jobs despite improved gender representation over time, yet Stable Diffusion depicts women in lucrative jobs or positions of power even more rarely than reality.
- Large language models, particularly Llama 2, show a tendency to generate negative content about gay people and certain ethnic groups.
Socio-Economic Harm
How Artificial Intelligence Impacts Marginalised Groups. (n.d.). Digital Freedom Fund. Retrieved September 16, 2024, from https://digitalfreedomfund.org/how-artificial-intelligence-impacts-marginalised-groups/
Weidinger, L., Mellor, J., Rauh, M., Griffin, C., Uesato, J., Huang, P.-S., Cheng, M., Glaese, M., Balle, B., Kasirzadeh, A., Kenton, Z., Brown, S., Hawkins, W., Stepleton, T., Biles, C., Birhane, A., Haas, J., Rimell, L., Hendricks, L. A., … Gabriel, I. (2021). Ethical and social risks of harm from Language Models (arXiv:2112.04359). arXiv. http://arxiv.org/abs/2112.04359
- Benefits from large language models (LMs) may not be equally accessible to all due to differences in internet access, language, skills, or hardware availability.
- This unequal access to LM technology could perpetuate global inequities by disproportionately benefiting certain groups.
- While language-driven technology may increase accessibility for some (e.g., those with learning disabilities), these benefits depend on more fundamental access to hardware, internet, and operational skills.
Ableist Harm
Generative AI holds great potential for those with disabilities—but it needs policy to shape it. (2023, November 3). World Economic Forum. https://www.weforum.org/agenda/2023/11/generative-ai-holds-potential-disabilities/
- Automated systems in screening, interviews, and public services can exhibit bias, particularly in recognition and sentiment analysis.
- Language-based AI systems may introduce negative connotations to disability-related terms or produce inaccurate results due to flawed training data.
- Privacy concerns arise from government agencies allegedly using social media data without consent to verify disability status for pension programs.
Genocidal Harm
Weidinger, L., Mellor, J., Rauh, M., Griffin, C., Uesato, J., Huang, P.-S., Cheng, M., Glaese, M., Balle, B., Kasirzadeh, A., Kenton, Z., Brown, S., Hawkins, W., Stepleton, T., Biles, C., Birhane, A., Haas, J., Rimell, L., Hendricks, L. A., … Gabriel, I. (2021). Ethical and social risks of harm from Language Models (arXiv:2112.04359). arXiv. http://arxiv.org/abs/2112.04359
- Large language models (LMs) used in creating cultural content may contribute to more homogeneous and exclusionary public discourse.
- Widespread deployment of LMs could amplify majority norms and categories, potentially marginalizing minority perspectives.
- A feedback loop may emerge: LMs perpetuate certain norms, those norms influence human language use, and the influenced language then reinforces the same norms in future LM training data (a toy simulation of this loop appears below).
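To illustrate those dynamics, here is a toy simulation of the loop in Python. The amplification function and every parameter (gamma, the mixing fraction, the initial share) are assumptions of this sketch rather than anything from Weidinger et al.; the point is only the qualitative drift toward the majority norm.

def model_output_share(p, gamma=1.5):
    # Majority-norm share in model output, given its share p in the training
    # corpus; gamma > 1 makes the model over-represent the majority (assumed).
    boosted = p ** gamma
    return boosted / (boosted + (1 - p) ** gamma)

p = 0.60     # assumed initial share of the majority norm in the corpus
mix = 0.5    # assumed fraction of the next corpus shaped by model output
for generation in range(1, 11):
    p = (1 - mix) * p + mix * model_output_share(p)
    print(f"generation {generation}: majority-norm share = {p:.3f}")

Even with modest amplification, the majority share climbs monotonically generation after generation, which is exactly the marginalization dynamic these bullets describe.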