Generative AI Harms

Lori Mullooly

Created on September 16, 2024

Transcript

Generative AI Harms

  • Genocidal Harm*: Erasure of cultures
  • Ableist Harm*: Inequality for the disabled
  • Gendered Harm*: Inequality for oppressed genders and/or sexualities
  • Ecocidal Harm: Environmental and/or climate degradation
  • Socio-Economic Harm*: Marginalization of the poor
  • Racist Harm*: Oppression of the racialized

*Representational Harms

Ecocidal Harm

  • Training a single large AI model can consume thousands of megawatt hours of electricity and emit hundreds of tons of carbon, comparable to the annual emissions of hundreds of American households (a rough conversion is sketched after this list).
  • AI model training can lead to significant freshwater evaporation for data center cooling, potentially straining limited water resources.
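As a rough illustration of how such figures fit together, here is a minimal back-of-envelope sketch in Python. Every constant in it is an assumption chosen for illustration (a GPT-3-scale training run, an approximate US grid carbon intensity, and average household electricity emissions), not a value taken from the cited article.

    # Back-of-envelope estimate: convert an assumed training-energy figure
    # into CO2 emissions and express the result in household-years.
    # All constants are illustrative assumptions, not measured values.

    TRAINING_ENERGY_MWH = 1_300     # assumed: on the order of a GPT-3-scale run
    GRID_KG_CO2_PER_KWH = 0.4       # assumed: rough US grid carbon intensity
    HOUSEHOLD_TONS_PER_YEAR = 5.0   # assumed: a US household's annual electricity-related CO2

    emissions_tons = TRAINING_ENERGY_MWH * 1_000 * GRID_KG_CO2_PER_KWH / 1_000
    households = emissions_tons / HOUSEHOLD_TONS_PER_YEAR

    print(f"Estimated training emissions: {emissions_tons:,.0f} t CO2")
    print(f"Comparable to ~{households:,.0f} household-years of electricity emissions")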

The Uneven Distribution of AI’s Environmental Impacts. (2024, July 15). Harvard Business Review. https://hbr.org/2024/07/the-uneven-distribution-of-ais-environmental-impacts

Racist Harm

  • Many facial analysis datasets used by companies are not representative, often skewing towards white, male, and Western faces due to web-sourced content.
  • AI image generation models show biases in depicting occupations, often misrepresenting racial demographics. For example, they over-represent people with darker skin tones in roles like fast-food workers and social workers, contrary to actual US demographics.

Simonite, T. (n.d.). The Best Algorithms Still Struggle to Recognize Black Faces. Wired. Retrieved September 16, 2024, from https://www.wired.com/story/best-algorithms-struggle-recognize-black-faces-equally/
Nicoletti, L., & Bass, D. (2024, August 7). Humans Are Biased. Generative AI Is Even Worse. Bloomberg.com. https://www.bloomberg.com/graphics/2023-generative-ai-bias/

Gendered Harm

  • Women in the US remain underrepresented in high-paying jobs, even though gender representation has improved over time. Stable Diffusion goes further, depicting women in lucrative jobs or positions of power even more rarely than real-world figures would suggest.
  • Large language models, particularly Llama 2, show a tendency to generate negative content about gay people and certain ethnic groups.

Nicoletti, L., & Bass, D. (2024, August 7). Humans Are Biased. Generative AI Is Even Worse. Bloomberg.com. https://www.bloomberg.com/graphics/2023-generative-ai-bias/
Generative AI: UNESCO study reveals alarming evidence of regressive gender stereotypes. (n.d.). UNESCO. Retrieved September 16, 2024, from https://www.unesco.org/en/articles/generative-ai-unesco-study-reveals-alarming-evidence-regressive-gender-stereotypes

Socio-Economic Harm

  • Benefits from large language models (LMs) may not be equally accessible to all due to differences in internet access, language, skills, or hardware availability.
  • This unequal access to LM technology could perpetuate global inequities by disproportionately benefiting certain groups.
  • While language-driven technology may increase accessibility for some (e.g., those with learning disabilities), these benefits depend on more fundamental access to hardware, internet, and operational skills.

How Artificial Intelligence Impacts Marginalised Groups. (n.d.). Digital Freedom Fund. Retrieved September 16, 2024, from https://digitalfreedomfund.org/how-artificial-intelligence-impacts-marginalised-groups/
Weidinger, L., Mellor, J., Rauh, M., Griffin, C., Uesato, J., Huang, P.-S., Cheng, M., Glaese, M., Balle, B., Kasirzadeh, A., Kenton, Z., Brown, S., Hawkins, W., Stepleton, T., Biles, C., Birhane, A., Haas, J., Rimell, L., Hendricks, L. A., … Gabriel, I. (2021). Ethical and social risks of harm from Language Models (arXiv:2112.04359). arXiv. http://arxiv.org/abs/2112.04359

Ableist Harm

  • Automated systems in screening, interviews, and public services can exhibit bias, particularly in recognition and sentiment analysis.
  • Language-based AI systems may introduce negative connotations to disability-related terms or produce inaccurate results due to flawed training data.
  • Privacy concerns arise from government agencies allegedly using social media data without consent to verify disability status for pension programs.

Generative AI holds great potential for those with disabilities—But it needs policy to shape it. (2023, November 3). World Economic Forum. https://www.weforum.org/agenda/2023/11/generative-ai-holds-potential-disabilities/

Genocidal Harm

  • Large language models (LMs) used in creating cultural content may contribute to more homogeneous and exclusionary public discourse.
  • Widespread deployment of LMs could amplify majority norms and categories, potentially marginalizing minority perspectives.
  • A feedback loop may emerge where LMs perpetuate certain norms, influencing human language use, which then reinforces these norms in future LM training data (a toy simulation is sketched after this list).
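To make that feedback-loop mechanism concrete, the toy simulation below shows how a small per-generation bias toward the majority norm compounds once model output re-enters the training data. This is a hypothetical sketch: the amplification factor, mixing fraction, and initial shares are invented for illustration, not taken from the cited paper.

    # Toy simulation of the norm-reinforcement feedback loop described above.
    # A model slightly over-represents the majority norm; its output is mixed
    # back into the next generation's training data, shrinking the minority share.
    # All parameters are illustrative assumptions.

    majority_share = 0.70   # assumed initial share of majority-norm text in the data
    AMPLIFICATION = 1.10    # assumed: the model over-represents the majority by 10%
    MIX_IN = 0.50           # assumed: half of the next generation's data is model output

    for generation in range(1, 11):
        model_output = min(1.0, majority_share * AMPLIFICATION)
        majority_share = (1 - MIX_IN) * majority_share + MIX_IN * model_output
        print(f"gen {generation:2d}: majority norm share = {majority_share:.3f}")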

Weidinger, L., Mellor, J., Rauh, M., Griffin, C., Uesato, J., Huang, P.-S., Cheng, M., Glaese, M., Balle, B., Kasirzadeh, A., Kenton, Z., Brown, S., Hawkins, W., Stepleton, T., Biles, C., Birhane, A., Haas, J., Rimell, L., Hendricks, L. A., … Gabriel, I. (2021). Ethical and social risks of harm from Language Models (arXiv:2112.04359). arXiv. http://arxiv.org/abs/2112.04359