The Dangers of Artificial Intelligence
Karl Niño C. Dacuycuy, 11-Guyabano
Created on November 13, 2023
Source: https://www.tableau.com/data-insights/ai/risks#:~:text=There%20are%20a%20myriad%20of,humans%2C%20and%20unclear%20legal%20regulation.
Can AI be dangerous? As with most questions about AI, the answer is complicated. There are risks associated with AI, some practical and some ethical. Leading experts debate how dangerous AI could become, but there is no real consensus yet. However, there are a few dangers that experts agree on. Many of these are hypothetical situations that could arise in the future without proper precautions, while others are real concerns we deal with today.
Privacy
One of the biggest concerns experts cite involves consumer data privacy and security. Americans have a right to privacy, established in 1992 with the ratification of the International Covenant on Civil and Political Rights. But many companies already skirt data privacy rules with their collection and use practices, and experts worry this may worsen as we start utilizing more AI. Another major concern is that there are currently few regulations on AI, in general or around data privacy, at the national or international level. The EU proposed the “AI Act” in April 2021 to regulate AI systems considered high-risk; however, the act has not yet passed.
Real-life AI risks
There are myriad risks associated with AI that we already deal with in our daily lives. Not every AI risk is as big and worrisome as killer robots or sentient AI. Some of the biggest risks today include consumer privacy, biased programming, danger to humans, and unclear legal regulation.
What are the risks of artificial intelligence?
We talked briefly about real-life and hypothetical AI risks above. The surrounding sections outline each in detail. Real-life risks include consumer privacy, legal issues, AI bias, and more, while the hypothetical future issues include things like AI programmed for harm.
AI bias
It’s a common myth that because AI is a computer system, it is inherently unbiased. This is untrue: AI is only as unbiased as the data and the people training it. If the data is flawed, incomplete, or biased in any way, the resulting AI will be biased as well. The two main types of bias in AI are “data bias” and “societal bias.” Data bias occurs when the data used to develop and train an AI is incomplete, skewed, or invalid; this can happen because the data is incorrect, excludes certain groups, or was collected in bad faith. Societal bias, on the other hand, occurs when the assumptions and biases present in everyday society make their way into AI through the blind spots and expectations of the programmers who created it.
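To make the “biased data in, biased model out” point concrete, here is a minimal Python sketch. Everything in it is an assumption made for illustration: the group names, the approval counts, and the frequency-based “model” are invented, not taken from any real system or dataset. A toy model that simply learns approval rates from skewed historical records ends up reproducing the skew.

# Hypothetical sketch of data bias: a toy "hiring model" that only learns
# historical approval rates. Groups, counts, and the model are invented.
from collections import defaultdict

# Historical decisions (deliberately skewed: group_b approved far less often).
history = (
    [("group_a", 1)] * 80 + [("group_a", 0)] * 20 +   # group_a: 80% approved
    [("group_b", 1)] * 30 + [("group_b", 0)] * 70     # group_b: 30% approved
)

# "Training": count decisions and approvals per group.
totals, approvals = defaultdict(int), defaultdict(int)
for group, label in history:
    totals[group] += 1
    approvals[group] += label

def predict_approval_rate(group: str) -> float:
    # The model's score is just the historical approval rate for the group.
    return approvals[group] / totals[group]

for g in ("group_a", "group_b"):
    print(f"{g}: predicted approval rate = {predict_approval_rate(g):.2f}")

Running this prints 0.80 for group_a and 0.30 for group_b: the model faithfully reproduces the gap in its training data, which is what “data bias” means in practice.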
Legal responsibility
Legal responsibility ties into almost all of the other risks discussed here. When something goes wrong, who is responsible? The AI itself? The programmer who developed it? The company that deployed it? Or, if a human was involved, is it the human operator’s fault? Consider the case of a self-driving car that killed a pedestrian, where the backup driver was found at fault. Does that set a precedent for every case involving AI? Probably not; the question is complex and ever-evolving, and different uses of AI will carry different legal liabilities when something goes wrong.
AI programmed for harm
Another risk experts cite is the possibility that an AI system will be programmed to do something devastating. The best-known example is the idea of “autonomous weapons,” which can be programmed to kill humans in war. Many countries have already moved to ban autonomous weapons in war, but there are other ways AI could be programmed to harm humans. Experts worry that as AI evolves, it may be used for nefarious purposes that harm humanity.
Hypothetical AI risks
Now that we’ve covered the everyday risks of AI, let’s turn to some of the hypothetical ones. These may not be as extreme as what you see in science fiction movies, but they are still a concern and something leading AI experts are working to prevent and regulate right now.