The Challenges of Regulatory Frameworks in Addressing AI Risks

Artificial intelligence (AI) systems have advanced significantly since Microsoft launched the chatbot “Tay” in 2016, only to discontinue it after it posted racist tweets. In 2022, OpenAI introduced ChatGPT, marking a new era of generative AI capable of processing and producing information across media such as text, images, and video. Despite the optimism surrounding the technology’s vast potential, Tay’s experience had already highlighted the need for safeguards that prevent algorithms from dynamically and interactively learning harmful behavior from users. Moreover, as AI has evolved, its associated harms have become less visible and more profound, particularly with respect to intellectual property rights, making AI governance a task that demands a comprehensive approach to the technology’s complex social, economic, and legal implications.

Against this backdrop, the significance of the June 2024 report by the Observer Research Foundation (ORF) lies in its identification of the challenges facing AI regulation and development. The report proposes a flexible regulatory framework to address risks in this domain, balancing innovation against social, economic, and environmental needs. It also calls for stronger international cooperation on AI regulation to ensure globally unified and coordinated standards, while addressing ethical concerns such as privacy, transparency, and fairness.

Contextual Challenges:

AI regulation operates within fundamental and shifting contexts: technological advancement, economic and social transformation, legal change, and international politics. These contexts present several challenges to AI regulation, including:

Exploitation of data without respecting ownership rights: While data is not a limited resource, the computational infrastructure and proprietary algorithms that large tech companies use to collect it can undermine personal data ownership rights. Collected data is converted into analytical outputs over which individuals lose their rights, resulting in the economic exploitation of data without compensation or any redistribution of value to its original owners.

Concerns about unfair competition driven by major corporations: AI development relies heavily on massive amounts of data and computational infrastructure, which strengthens the dominance of large companies such as Microsoft, Facebook, Amazon, and Google. These corporations control vast stores of data and use them to train advanced AI models, raising concerns, particularly in the European Union, about unfair competition. Their market dominance prompts questions about how collaboration and competition in AI development and distribution should be managed.

Gaps between developed and developing countries: The United States and China dominate AI innovation, while the European Union leads in regulation through its AI Act. This gap between innovation and regulation has allowed the developing world to be exploited as a data provider without realizing significant economic benefits in return. The dynamic deepens the divide between developed and developing nations: major companies benefit from both innovation and its regulation, while the impact of these policies on the rest of the world is largely ignored.

Market Distortions:

Growing concerns about AI market distortions arise from weak regulatory frameworks in this field. The ORF report highlights the defense sector to illustrate the negative externalities generated by AI systems and the deep inroads they have made thus far. Demand for AI-driven military capabilities is rising, with the U.S. and China investing heavily in developing military AI applications. The defense AI market was valued at $9.23 billion in 2023, and it is expected to witness significant growth with advancements in anti-drone systems and autonomous smart weapons.

Questions have been raised about accountability for AI in defense, particularly as AI-equipped munitions, systems, and weapons are developed. There are also concerns about distinguishing civilian from military uses of dual-use technologies, and about the civil and criminal liability that may follow.

The report also highlights the concentration of AI innovation and its impact on legal and ethical responsibilities. Large tech companies have come to dominate the AI market and model development because they own vast datasets and the necessary computational capacity. This concentration creates an uneven playing field in which new entrants struggle to compete without relying on the infrastructure of these major companies, leading to further market consolidation.

Moreover, large corporations are often able to avoid responsibility for the damage their AI systems cause. They protect themselves through policies such as disclaiming ownership of the content their AI systems generate, which complicates liability when harm occurs. Under these circumstances, questions about fairness and accountability persist, as the current AI market is structured to preserve major companies’ dominance over innovation and resources. These market distortions highlight the significant challenges of regulating AI, whether for military or civilian use, and underscore the need for legal frameworks that account for the social and ethical risks of these advanced technologies.

On another front, there is the issue of using ethics to sidestep international standards and laws: ethical concerns are sometimes spotlighted to avoid discussing international legal obligations, or to influence international decisions in ways that justify illegal or unethical behavior. Much of AI governance today is framed in ethical terms, and while ethical principles for AI are important, they remain non-binding and insufficient to address the practical, real-world challenges in this field.

For example, the AI treaty adopted by the Council of Europe in May 2024 lacks clear details on the commitments of the parties and addresses some issues only indirectly, weakening its effectiveness in addressing the harms caused by AI systems. Similarly, the United Nations General Assembly’s resolution on AI avoids addressing the necessary amendments to international law.

Added to all these challenges are the transnational nature of AI systems, the dominance of English in the data on which AI systems are trained, and global disparities in AI capabilities between developing and developed countries. National legal frameworks, moreover, struggle to keep pace with rapid developments in AI, requiring a comprehensive review of the legal approach to managing this field.

Proposed Framework:

AI regulation requires addressing multiple interconnected challenges, such as market concentration, the unequal distribution of resources, and underrepresentation in data sets and the developer community. These conditions impact various stakeholders in the AI ecosystem, creating a domino effect that makes it difficult to pinpoint the exact sources of risk.

Managing AI’s dynamic risks becomes important as algorithms continue to learn from user interactions, which can lead to problems such as biased or discriminatory outputs. The ORF report suggests that AI regulation should address these risks through dynamic frameworks based on risk assessment and appropriate interventions. It proposes a framework that balances innovation with maintaining institutional integrity, taking into account shifts in the current landscape. The framework includes several key components:

Dynamic governance capabilities and strategic alignment: Policymakers and regulatory bodies must be able to detect rapid changes in the AI field and plan their responses accordingly. This includes identifying potential problem areas, gauging the pace of their impact, and determining the scope of required regulation.

Risk mapping, impacts, and responsibilities: This involves mapping and classifying risks, their causes, and their effects, which helps identify and distribute responsibilities among stakeholders (a minimal illustration in code follows this list).

Developing compliance frameworks and support: Once risks and responsibilities are identified, frameworks, standards, and guidelines must be developed to help companies mitigate risks. These standards should be periodically reviewed based on stakeholder feedback and technological advancements.

Establishing methods and processes for networked escalation: Given the cross-border nature of large corporations and the global disparities in AI resource distribution, methods for escalating issues when necessary must be developed, starting with self-regulation and extending to government intervention if needed. This requires building institutional capacity that combines traditional regulatory expertise with technical knowledge in AI.
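To make the risk-mapping component concrete, below is a minimal sketch of how a risk register linking risks to causes, impacts, responsible stakeholders, and an escalation level might be represented in code. It is purely illustrative: the class names, fields, and example entry are hypothetical assumptions, not structures taken from the ORF report.

```python
from dataclasses import dataclass
from enum import Enum


class Escalation(Enum):
    """The escalation ladder described above: self-regulation first,
    government intervention only if needed."""
    SELF_REGULATION = 1
    INDUSTRY_STANDARDS = 2
    REGULATOR_REVIEW = 3
    GOVERNMENT_INTERVENTION = 4


@dataclass
class RiskEntry:
    """One row of a hypothetical risk register: a risk, its causes
    and impacts, and the stakeholders responsible for mitigation."""
    risk: str
    causes: list[str]
    impacts: list[str]
    responsible: list[str]
    escalation: Escalation = Escalation.SELF_REGULATION


# Hypothetical example: the dynamic risk of a model learning harmful
# behavior from user interactions, discussed earlier in this section.
register = [
    RiskEntry(
        risk="Discriminatory outputs from interactive learning",
        causes=["Unfiltered user feedback", "Skewed training data"],
        impacts=["Harm to underrepresented groups", "Loss of trust"],
        responsible=["Model developer", "Deploying platform"],
        escalation=Escalation.INDUSTRY_STANDARDS,
    ),
]

for entry in register:
    print(f"{entry.risk} -> escalate to: {entry.escalation.name}")
```

Even so simple a structure shows how mapped risks could be tied to the escalation ladder in the fourth component, moving from self-regulation toward government intervention as the severity of a risk grows.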

There is a regulatory gap between the Global North and South regarding the development and use of AI. While advanced countries and institutions agree on common principles for AI governance, the focus remains on innovation in the United States and China, and on regulation in the European Union. Developing countries such as India, Brazil, and Argentina, by contrast, are striving to build national capacities despite limited resources.

In this context, the report warns of an overreliance by developing countries on the AI regulatory principles of advanced nations, even though those principles may not be relevant to local contexts. To overcome these challenges, it is essential to develop multilateral regulatory strategies that consider sovereign needs and national competencies, helping to achieve a balance between innovation and risk mitigation.

For the proposed framework in this report to function as an effective, risk-based regulatory approach aimed at managing the emerging challenges and multidimensional risks of AI, several considerations must be integrated:

Regulation of consequences: AI regulation should stimulate innovation while deterring risks by embedding responsibility and accountability into AI risk management and governance.

Pandemic model: This model involves testing innovations in controlled environments before their widespread use, similar to the approach with COVID-19 vaccines. It aims to prevent unintended consequences from innovations that may prioritize profit over safety.

Algorithmic accountability: AI systems require frameworks and auditing standards that ensure accountability and transparency, along with regular documentation and evaluation processes to identify and manage emerging risks (see the sketch after this list).
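As a purely illustrative sketch of what “regular documentation and evaluation processes” could look like in practice, the snippet below appends timestamped, machine-readable audit records to an append-only log. The function name, file path, and the metric in the example are hypothetical assumptions, not prescriptions from the ORF report.

```python
import json
from datetime import datetime, timezone


def log_audit_event(system: str, event: str, details: dict) -> dict:
    """Append one timestamped record of a model evaluation or
    documentation update to an append-only audit trail. The file
    name 'audit_trail.jsonl' is a hypothetical choice."""
    record = {
        "system": system,
        "event": event,
        "details": details,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    with open("audit_trail.jsonl", "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return record


# Hypothetical example: documenting a periodic bias evaluation.
log_audit_event(
    system="loan-scoring-v2",
    event="bias_evaluation",
    details={"metric": "demographic_parity_gap", "value": 0.07},
)
```

An append-only, machine-readable trail of this kind is one way auditors could later verify that evaluations actually took place and trace when an emerging risk was first documented.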

In conclusion, the report emphasizes the importance of international cooperation to establish harmonized global standards and frameworks for AI governance, while also stressing that developing nations need to build institutional capacities to keep pace with global advancements in this field.

Source: Saran, S., Nandi, A., & Patil, S. (2024, June). ‘Moving Horizons’: A responsive and risk-based regulatory framework for A.I. Observer Research Foundation. https://www.orfonline.org/research/moving-horizons-a-responsive-and-risk-based-regulatory-framework-for-a-i/

SAKHRI Mohamed