DIGIT PRESENTATION
Giovanni Francesco Mosca
Created on May 25, 2024
DIGITAL CITIZENSHIP AND LAW, 23/24
Criminal AI?
Can AI be held accountable for the damage it causes? Who should be responsible for it? Current legislation in the European Union between (economic and legal) doubts and progress
GIOVANNI F. MOSCA
Defining AI
An impossible task?
AI Act definition
Computer science
High-Level Expert Group on Artificial Intelligence
US National Artificial Intelligence Act
Proposals
The "Brussels effect"
2017: EP resolution on civil law rules on robotics
2018: Expert Group on Liability and New Technologies
2020: EP resolution on a civil liability regime for AI systems
2022: Draft directive on an AI liability system (AILD)
2022: Revised Product Liability Directive (PLD)
01
Legal personhood
2017 European Parliament Report on Civil law rules on Robotics
- Legal fragmentation across Member States
- Compensation issues
- Scapegoat for developers
- Moravec's paradox
02
Current liability framework in the EU
Product liability directive (PLD) 85/374/EEC + national liability rules
3 avenues for liability:
- fault-based liability claim
- strict liability claim
- claim against the producer of a defective product (PLD)
03
Why new rules?
Business legal uncertainty
Legal fragmentation
Compensation gap
Introduction of intangible elements
New risks
04
AI LIABILITY DIRECTIVE (AILD)
Main provisions
Scope
Principles
- uniform requirements for non-contractual civil liability involving AI systems
- prevent the uncertainty arising from legal fragmentation on AI systems
- promote trustworthy AI
- harmonise non-contractual liability rules for damage caused by any AI system through a minimum-standardisation approach
- strict liability for high-risk AI systems
- fault-based liability for non-high-risk AI systems
- presumption of causality
- disclosure of evidence
AILD main provisions
Presumption of causality
- allows claimants to alleviate the burden of proof, improving their chances of a successful claim
- applies by default for high-risk AI systems
- for non-high-risk AI systems, the claimant must demonstrate that proving the causal link is excessively difficult
- differentiated regime for damage caused in the course of a professional or non-professional activity
- it must be 'reasonably likely' that the defendant's negligent conduct caused the damage in the particular circumstances
- no reversal of the burden of proof, but an alleviation in targeted circumstances
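The conditions above combine into a simple decision rule, which can be sketched as a small function. This is a purely illustrative simplification of the AILD proposal, not a statement of the law: the function name and the boolean parameters are my own labels for the bullets above.

```python
def causality_presumed(high_risk: bool,
                       fault_shown: bool,
                       link_reasonably_likely: bool,
                       proof_excessively_difficult: bool) -> bool:
    """Illustrative sketch: may a court presume the causal link between
    the defendant's fault and the AI system's output? (Hypothetical
    simplification of the AILD proposal, not legal advice.)"""
    if not fault_shown:
        # The claimant must first establish the defendant's fault,
        # e.g. non-compliance with a relevant duty of care.
        return False
    if not link_reasonably_likely:
        # It must be reasonably likely that the fault influenced
        # the output that caused the damage.
        return False
    if high_risk:
        # For high-risk AI systems the presumption applies by default.
        return True
    # For non-high-risk systems, the claimant must additionally show
    # that proving causation would be excessively difficult.
    return proof_excessively_difficult
```

The sketch makes visible why the regime is an alleviation rather than a reversal: the claimant still has to establish fault and plausibility before any presumption operates.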
AILD main provisions
Disclosure of evidence
- national courts can order the disclosure of evidence about high-risk AI systems
- requests must be necessary and proportionate, so the court must take into account the legitimate interests of all parties
- if the defendant refuses to disclose evidence, the national court may presume non-compliance with a relevant duty of care; the defendant retains the right to rebut that presumption
05
New Product Liability Directive
Alleviation of the burden of proof
Product defects
Scope
Main provision
Presumption of causality and disclosure of evidence
Expanding the notion of damage to include psychological harm and material losses from the corruption of data
Under specific conditions, products can be deemed defective even after being placed on the market
Widening the definition of product to include software, digital manufacturing files, and digital services
06
Considerations from stakeholders and scholars
AILD
Pros
Scholars
Big tech
Consumer Associations
- generally welcome but warn against blind spots
- alignment with AI act (but shall be better developed)
- targeted approach to liability
- alignment with AI act
Cons
- huge disincentives for investors
- risk of disclosing confidential information
- chilling effect on innovation
- excessive legal burden
- still difficult to obtain compensation
- need to clarify better which operator is liable
- lack of clarity on some notions
- national judges' interpretations can lead to further fragmentation
NPLD
Pros
Scholars
Big tech
Consumer Associations
- generally agree with the product defectiveness rules, but some terms should be clarified (e.g. 'technical complexity')
- modernisation of liability rules was necessary
- favouring the inclusion of software
- some argue for expanding the concept of software even further
Cons
- the PLD has functioned well and few lawsuits on the topic are pending
- the alleviation of the burden of proof must be narrowed
- it does not make sense to distinguish between tangible and intangible products
- case law has already covered intangible products (e.g. electricity)
- placing the burden of proof on victims should be avoided
07
The law and economics perspective
The ultimate aim of this perspective is to determine the optimal allocation of liability for AI-related harm. The primary goal of liability rules is therefore deterrence. Can this really be applied in practice?
Can liability be a catalyst for innovation?
What do you think?
Thanks for the attention!
Giovanni Francesco Mosca
Bibliography and references
- A Europe Fit for the Digital Age: AI liability directive (April 2024).
- Ada Lovelace Institute (2023). AI assurance? Assessing and mitigating risks across the AI lifecycle. https://www.adalovelaceinstitute.org/report/risks-ai-systems/
- Artificial intelligence liability directive. (n.d.).
- Bertolini, A., & Episcopo, F. (2022). Robots and AI as Legal Subjects? Disentangling the Ontological and Functional Perspective. Frontiers in Robotics and AI, 9, 842213. https://doi.org/10.3389/frobt.2022.842213
- Borges, G. (n.d.). Liability for AI Systems Under Current and Future Law.
- Buiten, M., De Streel, A., & Peitz, M. (2023). The law and economics of AI liability. Computer Law & Security Review, 48, 105794. https://doi.org/10.1016/j.clsr.2023.105794
- Čerka, P., Grigienė, J., & Sirbikytė, G. (2015). Liability for damages caused by artificial intelligence. Computer Law & Security Review, 31(3), 376–389. https://doi.org/10.1016/j.clsr.2015.03.008
- Faure, M., & Li, S. (2022). Artificial Intelligence and (Compulsory) Insurance. Journal of European Tort Law, 13(1), 1–24. https://doi.org/10.1515/jetl-2022-0001
- Gordon, J.-S. (2021). Artificial moral and legal personhood. AI & SOCIETY, 36(2), 457–471. https://doi.org/10.1007/s00146-020-01063-2
- Hacker, P. (2023). The European AI liability directives – Critique of a half-hearted approach and lessons for the future. Computer Law & Security Review, 51, 105871. https://doi.org/10.1016/j.clsr.2023.105871
- Księżak, P., & Wojtczak, S. (2023). Toward a Conceptual Network for the Private Law of Artificial Intelligence (Vol. 51). Springer International Publishing. https://doi.org/10.1007/978-3-031-19447-4
- Li, S., Faure, M., & Havu, K. (2022). Liability Rules for AI-Related Harm: Law and Economics Lessons for a European Approach. European Journal of Risk Regulation, 13(4), 618–634. https://doi.org/10.1017/err.2022.26
- Mökander, J., Juneja, P., Watson, D. S., & Floridi, L. (2022). The US Algorithmic Accountability Act of 2022 vs. The EU Artificial Intelligence Act: What can they learn from each other? Minds and Machines, 32(4), 751–758. https://doi.org/10.1007/s11023-022-09612-y
- New Product Liability Directive. (n.d.).
- Nilsson, N. J. (2009). The Quest for Artificial Intelligence (1st ed.). Cambridge University Press. https://doi.org/10.1017/CBO9780511819346
- Novelli, C., Casolari, F., Hacker, P., Spedicato, G., & Floridi, L. (n.d.). Generative AI in EU Law: Liability, Privacy, Intellectual Property, and Cybersecurity.
- Radley-Gardner, O., Beale, H., & Zimmermann, R. (Eds.). (2016). Fundamental Texts On European Private Law. Hart Publishing. https://doi.org/10.5040/9781782258674
- Ryan, M. (2020). In AI We Trust: Ethics, Artificial Intelligence, and Reliability. Science and Engineering Ethics, 26(5), 2749–2767. https://doi.org/10.1007/s11948-020-00228-y
- Schmidpeter, R., & Altenburger, R. (Eds.). (2023). Responsible Artificial Intelligence: Challenges for Sustainable Management. Springer International Publishing. https://doi.org/10.1007/978-3-031-09245-9
- Sheikh, H., Prins, C., & Schrijvers, E. (2023). Mission AI: The New System Technology. Springer International Publishing. https://doi.org/10.1007/978-3-031-21448-6
- The Council adopts its negotiating mandate for a new EU law on liability for defective products. (n.d.).
- Wendehorst, C. (2022). Liability for Artificial Intelligence: The Need to Address Both Safety Risks and Fundamental Rights Risks. In S. Voeneky, P. Kellmeyer, O. Mueller, & W. Burgard (Eds.), The Cambridge Handbook of Responsible Artificial Intelligence (1st ed., pp. 187–209). Cambridge University Press. https://doi.org/10.1017/9781009207898.016
- Wendehorst, C. (2022). AI liability in Europe: Anticipating the AI Liability Directive. Ada Lovelace Institute. Available at: https://www.adalovelaceinstitute.org/report/ai-liability-in-europe/
Yes, if policymakers understand that liability rules will impact AI adoption and innovation, and only if each party is allocated the correct share of the burden. They should strike the right balance between reacting to and anticipating innovation; to do so, it is important to observe carefully how the technology is evolving.
"It is presumed when 1) the damage is typically consistent with the defect in question, or 2) technical or scientific complexity causes excessive difficulty in proving liability (e.g. a black-box system)"
Disclosure of information
&
Causal link
"The manufacturer must disclose information when the claimant presents facts and evidence that support the 'plausibility of the claim for compensation'"
Software updates under the manufacturer's control
Failure to address cybersecurity vulnerabilities
Machine learning
Useful insights
- Concerning the AILD, mandatory insurance for start-ups developing high-risk AI systems can be a valid option for the future, but only if clear standards are set
- In the case of the NPLD, it is important to acknowledge the risk trade-offs of AI systems. It is crucial to set defectiveness standards that avoid putting an excessive burden on systems which are clearly more beneficial than human decision-making. For this reason, a sectoral approach (e.g. in healthcare) would make more sense
It is important to note that all these assessments were made before the adoption of the AI Act; this especially affects the AILD.