Publication date
Sunday, 26 October 2025

Publication type
Blog post

Author(s)
Dirk Brand
Technology, Media & Telecommunications

Tags
#ai #regulation #softlaw

The role of soft law in AI governance – a regional approach

Regulating artificial intelligence ("AI") is a complex and time-consuming exercise. This is evident from the development of the first comprehensive AI legislation in the world, the EU AI Act (2024). Some countries, such as Brazil, South Korea and China, have since adopted specific AI legislation, while many others are still working through their legislative processes.

The rapid development of new forms of AI, such as new generative AI models and AI agents, continues to far outpace legislative processes. The need for appropriate regulation remains, and it is in fact becoming more pressing given the growing risks associated with the use of AI, generative AI in particular.

The use of AI transcends national boundaries, which means that it is not only a question of developing country-specific legislation, but also of international cooperation and alignment. It is therefore no surprise that the G7 countries adopted the Hiroshima Protocol in Japan in 2023 to promote safe, secure and trustworthy AI worldwide. It is a soft law approach that guides organisations developing advanced AI systems to do so in a responsible and ethical manner. The Hiroshima Protocol consists of 11 actions:

  • Risk management throughout the AI lifecycle
  • Post-deployment monitoring
  • Transparency and public reporting
  • Responsible information sharing and reporting of incidents
  • Development and adoption of AI governance policies
  • Investment in security controls
  • Development and deployment of reliable content authentication
  • Safety research investment
  • Focus on responding to global challenges by using AI
  • Setting of international standards
  • Implementing appropriate data protection measures

This set of actions is grounded in the rule of law, human rights, due process, diversity, fairness and non-discrimination, democracy, and human-centricity. The Hiroshima Protocol led to the adoption of the Hiroshima AI Process (HAIP) Reporting Framework, a voluntary reporting framework managed by the OECD to encourage transparency and accountability in AI. Organisations in the public and private sectors and in academia should respect these principles in the design, development and deployment of advanced AI systems.

The first round of published reports came from a diverse group of 19 organisations, including multinational technology companies such as Google, Microsoft and NTT, as well as NGOs and academic institutions. The publication of these reports contributes to knowledge sharing and to building trust in the development of AI systems in accordance with ethical AI principles. The HAIP Reporting Framework is not perfect, but it is a useful initiative to promote responsible AI practices globally. Because the framework is voluntary, it imposes no binding legal requirements and carries no sanctions for non-adherence; those can only be provided by appropriate AI legislation. The limited response in the first round also indicates that more must be done to promote responsible AI.

In most countries it might take a few years before dedicated AI laws are adopted, while the use of AI across sectors of the economy continues to grow and new AI applications are developed continuously. So, in the absence of AI legislation, is it not useful to explore an approach similar to the Hiroshima Protocol to guide the responsible development and deployment of AI?

South Africa does not yet have an AI law, but the national AI Policy is being finalised in the second half of 2025. The Western Cape Government recently adopted an AI strategy. The province is the technology hub of the country and home to AI-focused technology companies such as SnapScan, Stone Three, Praelexis and Octoco, to name but a few. Multinational companies for which technology, and AI in particular, is a central focus, such as Naspers, Mediclinic and Amazon, also have a strong presence in the Western Cape. Responsible AI is not only understood but also practised by these companies. The Western Cape Government takes responsible and ethical AI seriously in the way it approaches the development of new AI-based solutions in areas such as healthcare, road traffic management and agriculture, which complements the AI strategy. Academic institutions in the Western Cape promote innovation in technology and policy development through various initiatives, for example the Policy Innovation Lab at the School of Data Science and Computational Thinking at Stellenbosch University, which recently hosted an International Dialogue on AI and Human Rights.

It is therefore argued that the time is right for a collaborative initiative between government, the technology sector and academic institutions to promote the responsible development and deployment of AI. Such an initiative could produce a voluntary code of conduct or charter for responsible AI based on key ethical AI principles such as fairness, transparency, accountability and human oversight, affirming a shared commitment to the responsible development and deployment of AI. It is a soft law approach that could provide a good basis for the eventual development of AI legislation in South Africa. The Western Cape is well positioned to foster such an innovative approach, which could eventually extend to the whole country.

