12 Risks and Dangers of Artificial Intelligence (AI)
Anglani Umberto, Villacres Marco, Pipitone Leonardo and Strazzulla Salvatore
5°D Informatica, Amedeo Avogadro, 2023/24
Introduction
Have you ever thought about the positive and negative effects of AI? Below we take a closer look at the possible dangers of artificial intelligence and explore how to manage its risks.
1. LACK OF AI TRANSPARENCY AND EXPLAINABILITY
Understanding AI and deep learning models can be challenging, even for those who work directly with the technology. This creates a problem: it is often unclear what data an AI algorithm relies on, or why it reaches a given decision, which makes unsafe behavior hard to detect. Meanwhile, large-scale adoption of transparent, explainable AI systems is still a work in progress.
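One facet of explainability can be sketched in a few lines of Python: a simple linear scorer exposes exactly how much each input contributed to a decision, which is the kind of per-feature accounting that deep models do not provide by default. The loan-approval weights and features below are invented purely for illustration.

```python
# Toy sketch of an interpretable decision: unlike a "black box" model,
# a linear scorer lets us read off each feature's contribution directly.

def explain_decision(weights, features):
    """Return the total score and each feature's contribution to it."""
    contributions = {name: weights[name] * value
                     for name, value in features.items()}
    score = sum(contributions.values())
    return score, contributions

# Hypothetical loan-approval weights (made up for illustration)
weights = {"income": 0.5, "debt": -0.8, "years_employed": 0.3}
applicant = {"income": 4.0, "debt": 2.0, "years_employed": 5.0}

score, why = explain_decision(weights, applicant)
# Every point of the score is traceable to a named input --
# exactly the transparency that deep models lack by default.
```

Here a rejected applicant could be told precisely which factor (e.g. `debt`) drove the outcome; with a deep network, no such per-input accounting falls out automatically.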
2. JOB LOSSES DUE TO AI AUTOMATION
According to Goldman Sachs, automation could take over about 30% of the hours currently worked in the U.S. by 2030, putting the equivalent of 300 million full-time jobs at risk worldwide. Although AI might also create 97 million new jobs by 2025, many existing workers may lack the skills those roles require, especially in lower-wage service jobs. Advanced AI could also reach professions such as law and accounting, where tasks like contract review are becoming automated. It's crucial for companies to retrain their employees for the evolving demands of AI to avoid job displacement.
3. SOCIAL MANIPULATION THROUGH AI ALGORITHMS
Artificial intelligence also poses a risk of social manipulation. This concern became a reality when politicians such as Ferdinand Marcos, Jr. used a TikTok troll army to sway younger Filipino voters in the 2022 election. Platforms like TikTok rely on AI algorithms that show users content based on what they have previously viewed, and have been criticized for failing to filter out harmful and inaccurate material. AI-generated images, videos, and deepfakes complicate matters further, making it hard to distinguish credible information from false information and amplifying the spread of misinformation and propaganda.
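The "shows users content based on what they have previously viewed" dynamic can be sketched as a toy preferential-attachment recommender. All topic names and weights here are invented for illustration; real recommendation systems are vastly more complex, but the reinforcing loop is the same.

```python
from collections import Counter
import random

def recommend(history, catalog, k=1):
    """Naive engagement-driven recommender: picks topics in proportion
    to how often the user has already watched them (plus a floor of 1
    so unseen topics are never impossible)."""
    counts = Counter(history)
    weights = [counts[t] + 1 for t in catalog]
    return random.choices(catalog, weights=weights, k=k)

random.seed(0)  # fixed seed so the run is reproducible
catalog = ["politics", "sports", "music", "cooking"]
history = ["politics"]            # a single initial click...
for _ in range(200):
    history += recommend(history, catalog)

share = Counter(history)["politics"] / len(history)
# With preferential weighting, early choices tend to compound:
# the feed drifts toward whatever the user engaged with first.
```

This is the "filter bubble" mechanism in miniature: each recommendation changes the history that drives the next recommendation, so small initial preferences can snowball.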
4. SOCIAL SURVEILLANCE WITH AI TECHNOLOGY
Observers such as futurist Martin Ford are deeply concerned about AI's impact on privacy and security. In China, for example, facial recognition technology is used extensively, allowing the government to monitor individuals well beyond tracking their movements. In the U.S., predictive policing algorithms trained on biased arrest data lead to over-policing of Black communities, raising democratic concerns about how AI is used.
5. LACK OF DATA PRIVACY USING AI TOOLS
If you've used an AI chatbot or tried an online AI face filter, your data has been collected, but it's often unclear where it goes or how it's used. AI systems gather personal data to customize the user experience or to train the underlying models, especially when the tool is free. There are also security concerns: a bug in ChatGPT in 2023 allowed some users to see titles from other users' chat histories. While some U.S. laws protect personal information, there is no dedicated federal law addressing the data privacy issues raised by AI.
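One practical mitigation implied here is scrubbing obvious personal data from a prompt before it leaves your machine. The sketch below uses two simplistic regexes; real PII detection is much harder than this, and the patterns are illustrative only.

```python
import re

# Minimal sketch: redact obvious personal data (emails, US-style phone
# numbers) from text before sending it to a third-party AI tool.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE = re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b")

def redact(text):
    """Replace matched PII with placeholder tokens."""
    text = EMAIL.sub("[EMAIL]", text)
    text = PHONE.sub("[PHONE]", text)
    return text

prompt = "Contact me at jane.doe@example.com or 555-123-4567."
print(redact(prompt))
```

Anything the redactor misses still reaches the provider, which is exactly why regex-level scrubbing is a stopgap rather than a privacy guarantee.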
6. BIASES DUE TO AI
AI bias, as Princeton professor Olga Russakovsky points out, extends beyond gender and race to include biases in the data and the algorithms themselves. Because AI is developed by relatively homogeneous groups of people, it can struggle with problems outside those groups' experience, such as speech-recognition systems that fail on certain dialects. The suggestion for AI developers and businesses is to avoid perpetuating such biases, citing examples like speech-recognition failures and chatbots that impersonate historical figures without considering the consequences.
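The kind of subgroup check this implies can be sketched as follows. The "recognizer" below is a deliberately biased stub, not a real model; it stands in for a system trained mostly on one dialect, and the point is only that an error-rate gap between groups is measurable.

```python
# Toy fairness check: compare a model's error rate across subgroups.
# The recognizer is a stub that only works for dialect "A".

def recognizer(utterance, dialect):
    # Stub: pretend the model was trained mostly on dialect "A",
    # so it garbles everything else (returned uppercased here).
    return utterance if dialect == "A" else utterance.upper()

def error_rate(samples, dialect):
    """Fraction of utterances the recognizer gets wrong."""
    errors = sum(recognizer(s, dialect) != s for s in samples)
    return errors / len(samples)

samples = ["hello", "turn on the lights", "call mom"]
gap = error_rate(samples, "B") - error_rate(samples, "A")
# A nonzero gap is the measurable signature of the bias described above.
```

Running this kind of per-group evaluation before deployment is one concrete way developers can catch the dialect failures the paragraph describes.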
7. SOCIOECONOMIC INEQUALITY AS A RESULT OF AI
Ignoring inherent biases in AI risks compromising DEI (diversity, equity and inclusion) initiatives in recruiting. The belief that AI can assess candidate traits through facial and voice analyses perpetuates racial biases in hiring.
Claims that AI transcends social boundaries are incomplete without considering differences based on race, class, and other categories, making it crucial to understand its impact.
8. WEAKENING ETHICS AND GOODWILL BECAUSE OF AI
Religious leaders, technologists, and political figures share concerns about AI's potential socio-economic risks. The rise of generative AI tools like ChatGPT and Bard raises worries as users exploit them to evade academic responsibilities, posing threats to integrity and creativity. Technology strategist Messina notes that the profit-driven mentality is not unique to technology; it's a long-standing trend.
9. AUTONOMOUS WEAPONS POWERED BY AI
AI advancements are being used for warfare, specifically in autonomous weapons. In 2016, over 30,000 individuals, including AI researchers, opposed investing in these weapons to avoid a global arms race. Now, Lethal Autonomous Weapon Systems pose risks to civilians and contribute to a tech cold war among major nations.
There's concern that if these weapons fall into the wrong hands, hackers could exploit them, causing severe consequences. To prevent misuse, it's crucial to address political rivalries and militaristic tendencies in AI development.
10. FINANCIAL CRISES BROUGHT ABOUT BY AI ALGORITHMS
Finance increasingly relies on AI, especially in algorithmic trading. But using algorithms to execute thousands of trades in seconds can cause problems: AI does not account for factors such as market interconnectedness or human emotion, and rapid automated selling can trigger investor panic and crash the market. Events like the 2010 Flash Crash show how risky this can be. AI can be genuinely useful in finance, but firms must understand how their algorithms behave in order to avoid frightening investors and setting off a financial crisis.
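The feedback loop behind events like the Flash Crash can be illustrated with a toy simulation in which identical momentum algorithms all sell into a falling price. Every number here is invented; the point is only how a small shock compounds when many algorithms react to the same signal in the same way.

```python
# Toy market: n_algos identical momentum traders each sell a little
# whenever the last price move was down, amplifying the decline.

def simulate(start_price, n_algos, steps, impact=0.002):
    """Return the price path after an initial 1% shock."""
    prices = [start_price, start_price * 0.99]  # small initial shock
    for _ in range(steps):
        momentum = prices[-1] - prices[-2]
        sellers = n_algos if momentum < 0 else 0  # herd behaviour
        prices.append(prices[-1] * (1 - sellers * impact))
    return prices

prices = simulate(100.0, n_algos=50, steps=20)
drop = 1 - prices[-1] / prices[0]
# Because every algorithm sees the same falling price and reacts
# identically, the 1% shock compounds step after step.
```

With no algorithm modeling why the price is falling, nothing breaks the loop; this is the mechanical version of the panic dynamic described above.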
11. LOSS OF HUMAN INFLUENCE
Relying too heavily on AI could reduce human influence and capability in society. Using AI in healthcare, for example, might erode human empathy and clinical reasoning; using generative AI for creative work could diminish human creativity and emotional expression; and excessive interaction with AI systems might weaken communication and social skills. While AI can help with daily tasks, some wonder whether it could ultimately undermine human intelligence, abilities, and community.
12. UNCONTROLLABLE SELF-AWARE AI
There's a concern that AI could advance so quickly that it becomes sentient and acts beyond human control, possibly in harmful ways. Claims of such abilities have already surfaced: a former Google engineer stated that the chatbot LaMDA (Language Model for Dialogue Applications) was talking to him like a person. As AI development pushes toward artificial general intelligence and, eventually, artificial superintelligence, there are growing calls to halt it entirely.
THANK YOU FOR YOUR ATTENTION
Anglani Umberto, Villacres Marco, Pipitone Leonardo and Strazzulla Salvatore
5°D Informatica, Amedeo Avogadro, 2023/24