
The positioning of international law. By Avraham Ibrahim Bessat, lawyer.


Introduction.

Rapid technological evolution has led to the emergence of artificial intelligence (AI) as a powerful tool in the military field. Autonomous systems, such as armed drones and combat robots, are increasingly integrated into military operations, raising important legal and ethical questions. While the use of AI in the military context promises to improve operational efficiency, it also poses challenges in terms of responsibility, discrimination, and compliance with international humanitarian law (IHL). This article examines the international legal framework governing the use of AI in military operations, explores the applicable agreements, and addresses the gaps that currently exist in the legal system.

I. AI in the context of warfare.

A. Emergence of military AI.

The development of AI in the military sector has accelerated in recent decades. Military applications of AI include:

1. Autonomous drones: many countries, including the United States, China, and Russia, are investing in drones capable of carrying out surveillance and attack missions with little or no human intervention. Drones such as the MQ-9 Reaper use AI algorithms to analyze data in real time and assist decision-making.

2. Intelligent defense systems: systems like the “Phalanx CIWS” are designed to identify and neutralize threats in real time, using AI technologies to make rapid and precise decisions.

3. Predictive analysis: AI is used to analyze massive volumes of data in order to predict enemy movements or plan missions, which can improve the accuracy of military operations.

B. Advantages and disadvantages.

The use of AI in the military has advantages, but also notable disadvantages:

1. Advantages:

  • Increased efficiency: AI can process information quickly, allowing faster decision-making on the battlefield.
  • Reduction of human losses: automating certain tasks can reduce the risk to military personnel.

2. Disadvantages:

  • Ethical risks: lethal decisions made by machines raise significant moral issues.
  • Escalation of conflicts: AI systems can react more quickly than humans, increasing the risk of unintended military engagement.
  • Algorithmic discrimination: algorithms can reproduce biases, resulting in inequitable decisions (a simple illustration follows this list).
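To make the last point concrete, here is a minimal sketch in Python of how a single decision threshold, applied to groups observed with unequal sensor or data quality, produces unequal error rates; every name and number below is invented for the illustration and drawn from no real system.

  import random

  random.seed(42)

  def simulate_group(n, noise):
      """Generate (is_threat, score) pairs; noisier sensing blurs the score."""
      samples = []
      for _ in range(n):
          is_threat = random.random() < 0.1    # 10% of contacts are genuine threats
          base = 0.8 if is_threat else 0.2     # idealized separation of scores
          samples.append((is_threat, base + random.gauss(0, noise)))
      return samples

  def false_positive_rate(samples, threshold):
      """Share of non-threats wrongly flagged as threats at this threshold."""
      negatives = [score for threat, score in samples if not threat]
      return sum(1 for score in negatives if score >= threshold) / len(negatives)

  # Hypothetical: group B is observed through noisier sensors, or was
  # under-represented in the training data; group A is observed cleanly.
  group_a = simulate_group(10_000, noise=0.10)
  group_b = simulate_group(10_000, noise=0.30)

  THRESHOLD = 0.5  # one rule applied uniformly to both groups
  print(f"False-positive rate, group A: {false_positive_rate(group_a, THRESHOLD):.1%}")
  print(f"False-positive rate, group B: {false_positive_rate(group_b, THRESHOLD):.1%}")
  # Group B's innocents are flagged far more often: a uniform rule can
  # still yield the inequitable decisions described above.

The disparity arises even though the rule itself contains no group variable, which is why audits of such systems must examine outcomes and not only source code.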

II. Applicable international legal framework.

A. International humanitarian law (IHL).

IHL regulates armed conflicts and aims to protect persons who do not take part in hostilities. It rests on several fundamental principles, which military AI systems must also respect:

1. Principle of distinction: The parties to a conflict must distinguish between combatants and civilians. AI systems must be capable of making this distinction in order to minimize civilian casualties.

Example: facial recognition algorithms must be reliable enough to avoid fatal misidentifications.
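As a purely illustrative sketch of what encoding this principle could look like, the following hypothetical gate refuses autonomous engagement outright and presumes civilian status in case of doubt; the Classification type, its labels, and the confidence floor are assumptions made for this example, not features of any deployed system.

  from dataclasses import dataclass

  @dataclass
  class Classification:
      label: str         # "combatant", "civilian", or "unknown"
      confidence: float  # model confidence in [0, 1]

  CONFIDENCE_FLOOR = 0.99  # illustrative; a real floor would need validation

  def engagement_decision(c: Classification) -> str:
      """Distinction as a conservative decision rule: anything short of a
      high-confidence 'combatant' call is treated as civilian, and even a
      confident call is only referred to a human, never engaged autonomously."""
      if c.label == "civilian":
          return "engagement prohibited"
      if c.label == "combatant" and c.confidence >= CONFIDENCE_FLOOR:
          return "refer to human operator for authorization"
      return "abort: identity uncertain, presume civilian"

  # Usage: even a 97%-confident identification fails the gate.
  print(engagement_decision(Classification("combatant", 0.97)))
  print(engagement_decision(Classification("unknown", 0.80)))

The final branch mirrors the presumption of civilian status that IHL already demands of human soldiers in case of doubt.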

2. Principle of proportionality: Military attacks must not cause civilian losses that are excessive in relation to the anticipated military advantage.

Example: an autonomous drone must be programmed to assess the consequences of an attack before launching it, in order to respect this principle.
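The following minimal sketch rests on the strong, and contested, assumption that expected civilian harm and anticipated military advantage can be reduced to comparable numeric scores; the function, the EXCESSIVE_RATIO constant, and the sample values are invented for illustration only.

  EXCESSIVE_RATIO = 0.5  # purely illustrative policy constant

  def strike_decision(expected_civilian_harm: float,
                      anticipated_military_advantage: float) -> str:
      """Proportionality as a conservative pre-strike gate."""
      if anticipated_military_advantage <= 0:
          return "abort: no concrete military advantage identified"
      if expected_civilian_harm == 0:
          return "proceed, subject to every other rule of IHL"
      ratio = expected_civilian_harm / anticipated_military_advantage
      if ratio > EXCESSIVE_RATIO:
          return "abort: anticipated incidental harm is excessive"
      return "defer to human commander for the final proportionality judgement"

  # Usage: borderline cases are never decided autonomously.
  print(strike_decision(expected_civilian_harm=0.8, anticipated_military_advantage=1.0))
  print(strike_decision(expected_civilian_harm=0.2, anticipated_military_advantage=1.0))

That the hard case ends with a human judgement rather than a computed verdict reflects the widely held view that proportionality cannot be fully delegated to a machine.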

3. Responsibility: The question of responsibility in the event of IHL violations is complex. If an AI system makes an error, it is essential to determine who is responsible: the state, the designer, or the user.

Example: in the event of civilian casualties caused by an autonomous strike, it can be difficult to identify the responsible party.

B. International conventions.

Several international conventions are relevant to military AI, although few deal specifically with the subject:

1. Geneva Conventions (1949): Although they do not deal directly with AI, they establish standards for the protection of persons in wartime and could be interpreted to cover emerging technologies.

2. Additional Protocol I of 1977 to the Geneva Conventions: this protocol underlines the need to respect the principles of IHL, but provides no explicit guidance on the use of AI.

3. The Convention on Certain Conventional Weapons (CCW): it aims to prohibit or restrict the use of certain weapons, but does not regulate autonomous AI systems.

4. The Arms Trade Treaty (ATT): Although this treaty regulates the trade in conventional arms, it does not cover AI systems, which raises concerns about the regulation of new military technologies.

C. International initiatives.

Discussions are under way at the international level to examine the impact of AI on security and peace:

1. UN discussions: working groups have been established to study the implications of military AI and recommend regulatory measures.

2. NGO reports: organizations such as Human Rights Watch and Amnesty International are calling for moratoriums on the development of autonomous military AI, citing ethical concerns.

3. International conferences: Forums like the Geneva Forum discuss the regulation of autonomous weapons systems and the role of AI in armed conflicts.

III. Gaps in the legal framework.

A. Lack of specific regulations.

The current international legal framework presents gaps concerning military AI:

1. Inadequate regulations: existing law, including IHL, is not adapted to the challenges posed by autonomous AI systems, leaving states and businesses considerable room for maneuver.

2. Difficulties of application: IHL and other international treaties must be adapted to address questions specific to AI, in particular the responsibility and transparency of systems.

3. Complexity of algorithms: the complex and opaque nature of AI algorithms makes effective regulation difficult. Companies can easily circumvent standards by using non-transparent machine learning techniques.

B. Ethics and responsibility challenges.

Companies that develop military AI systems are facing ethical and responsibility challenges:

1. Ethics: Decisions made by autonomous systems raise questions of morality. Companies must question the legitimacy of producing technologies that could cause loss of human life.

2. Legal responsibility: In the event of IHL violations, responsibility becomes unclear. Companies must establish who is accountable, which can lead to costly litigation.

3. Transparency: Companies must guarantee that their systems are transparent and auditable, in order to comply with ethical and legal standards (a sketch of one possible audit mechanism follows).
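As a sketch of what "auditable" might mean in technical terms, the hypothetical append-only log below hash-chains every automated recommendation so that after-the-fact tampering is detectable; the AuditLog class and its record fields are assumptions made for this example, not an existing standard.

  import hashlib
  import json
  import time

  class AuditLog:
      """Tamper-evident record of automated recommendations."""

      def __init__(self):
          self.entries = []
          self._last_hash = "0" * 64  # genesis value for the hash chain

      def record(self, system_id: str, inputs: dict, decision: str) -> dict:
          """Append one decision record, chained to the previous entry's hash."""
          entry = {
              "timestamp": time.time(),
              "system_id": system_id,
              "inputs": inputs,
              "decision": decision,
              "prev_hash": self._last_hash,
          }
          entry["hash"] = hashlib.sha256(
              json.dumps(entry, sort_keys=True).encode()).hexdigest()
          self._last_hash = entry["hash"]
          self.entries.append(entry)
          return entry

      def verify(self) -> bool:
          """Recompute the chain; any edited entry breaks every later hash."""
          prev = "0" * 64
          for e in self.entries:
              body = {k: v for k, v in e.items() if k != "hash"}
              if body["prev_hash"] != prev or e["hash"] != hashlib.sha256(
                      json.dumps(body, sort_keys=True).encode()).hexdigest():
                  return False
              prev = e["hash"]
          return True

  log = AuditLog()
  log.record("targeting-assist-v1", {"sensor": "eo", "confidence": 0.97},
             "referred to human operator")
  print("chain intact:", log.verify())

An external auditor who holds only the final hash can detect any deletion or rewriting of earlier records, which is the property regulators would need in order to reconstruct how a contested decision was made.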

IV. Towards proactive regulations.

A. Development of international standards.

To meet the challenges posed by the use of AI in the military field, it is imperative to develop a specific legal framework that responds to the issues identified:

1. Regulation of AI systems: establish standards for the transparency and explainability of the algorithms used in military decisions, in order to guarantee that citizens understand how and why decisions are made.

2. Impact assessments: impose mandatory assessments of the social and ethical impacts of AI systems before their deployment.

3. International cooperation: Encourage states to work together to establish harmonized international standards in the field of military AI.

B. Strengthening control mechanisms.

A control and regulation framework for companies developing military AI systems should be put in place. This would involve regular audits and a requirement to obtain certification before deploying AI technologies.

1. Creation of regulatory organizations: set up independent agencies or commissions responsible for supervising the use of AI in public administrations and ensuring compliance with ethical and legal standards.

2. Training and awareness: educate military actors and companies about the ethical and legal issues related to AI, in order to promote responsible and ethical use of these technologies.

V. Case studies.

A. Case study 1: The Predator drone program.

The US Air Force's Predator drone program is a relevant example of the challenges posed by the use of AI in the military field. Predator drones are used for surveillance missions and air strikes, often with partial autonomy capabilities.

1. Civilian discrimination: reports have documented strikes that caused civilian casualties, raising concerns about the capacity of AI-assisted systems to respect the principle of distinction. For example, a BBC investigation revealed that drone strikes had caused dozens of civilian deaths in Afghanistan.

2. Liability: In case of error, it is often difficult to determine who is responsible. The pilots are human operators, but their decisions can be influenced by autonomous systems, blurring the chain of responsibility.

B. Case study 2: AI development by China's National Defense Technology (CNDTE).

China's National Defense Technology (CNDTE) is deeply engaged in the development of AI technologies to strengthen the country's military capabilities. This case raises concerns about geopolitical competition and the absence of clear ethical standards.

1. Technological arms race: competition between the United States and China to develop military AI systems can cause international tensions.

2. Ethical standards: The absence of clear regulations could allow the CNDTE to develop AI systems without taking into account ethical or legal principles, which could compromise global security.

Conclusion.

The legal oversight of companies specializing in military AI is a major challenge for international law. As AI continues to transform the military landscape, it is essential that the international community commit to establishing standards and regulations that guarantee these technologies are used in a responsible and ethical manner. The development of a robust legal framework can help prevent abuses, protect civilians, and maintain international peace and security.

