AI Regulatory Sandboxes: Global Developments and Lessons for the MENA Region 

Executive Summary

Artificial intelligence (AI) is evolving so rapidly that traditional regulatory frameworks cannot keep pace. Governments worldwide are exploring new governance models to manage this fast-moving transition, and regulatory sandboxes are one of them. Regulatory sandboxes offer an adaptive approach to AI governance by providing a platform for real-world testing and cross-sectoral coordination.

This brief explores global developments in AI sandboxes and, using these as a starting point, reflects on the relevance of AI sandboxes for the MENA region. It concludes with a set of strategic recommendations for an adaptive, inclusive AI governance framework.

Regulatory Sandboxes

Regulatory sandboxes provide a controlled environment for companies to test innovations (products, services or business models) under regulatory oversight. This allows the regulator to monitor, learn and refine policy in real time, and to anticipate the regulatory implications of an innovation before full-scale deployment. Regulators can then frame adaptive, evidence-based policies using insights drawn from these emerging technologies. Sandboxes facilitate ex-ante experimentation rather than ex-post regulation, in effect acting as a bridge between fast-paced innovation and the regulatory landscape.

The concept of the regulatory sandbox originated in the UK's fintech sector to support financial innovation while safeguarding consumer protection and financial stability. Following the UK's lead, several countries developed models suited to their own regulatory landscapes. Financial services were the initial focus, but sandboxes have since expanded to AI, blockchain, health, climate, governance and more.

According to the Datasphere Initiative's report "Sandboxes for AI, tools for a new frontier", as of January 2025 there were 66 sandboxes globally focused on AI, data and emerging technologies; of these, 59 are national and 7 are sub-national or regional. AI sandboxes exist in over 23 countries, including the UK, Canada, Singapore, Brazil, Japan and Germany.


The Global Trend


Countries including the UK, Norway, and Singapore have created regulatory sandboxes for testing AI applications since 2015. The UK introduced an AI-dedicated regulatory sandbox, AI Airlock, led by the Medicines and Healthcare products Regulatory Agency (MHRA) to address regulatory uncertainty in AI-enabled medical devices. It has enabled real-time testing and produced compliance insights and actionable recommendations to inform future AI governance in the medical field.

Similarly, the Norwegian Data Protection Authority has established a regulatory sandbox under the National Strategy for Artificial Intelligence. The sandbox's primary goal is to stimulate privacy-friendly innovation in AI, given the technology's vast potential to transform the public and commercial sectors. The Norwegian government established the sandbox as a proactive measure to address the significant challenges around AI's use of personal data, providing a controlled environment in which to develop compliant and ethical AI solutions. As a responsible practice, the sandbox ensured protection of pre-existing intellectual property, allowing participants to retain ownership of any IP they brought into the sandbox collaboration.

In contrast, France's Data Protection Authority (CNIL) launched its personal data sandbox to provide focused support to innovative projects that prioritize data privacy from their inception. CNIL offers direct engagement through its legal and technical teams to clarify regulatory requirements, provide practical advice, and audit developed solutions. The sandbox initially focused on innovations in digital health and edtech, and later on AI in public services. However, its broader systemic impact remains limited due to a narrow participant base.

Singapore’s Generative AI Evaluation Sandbox, spearheaded by the Infocomm Media Development Authority (IMDA) and established in partnership with the AI Verify Foundation, illustrates a unique approach to sandbox implementation that diverges from traditional frameworks. This initiative brings together major multinational companies to evaluate trusted AI products. It is tailored to pinpoint and address gaps in GenAI assessments and to develop benchmarks for model performance.

Together, these examples reflect a growing recognition that effective AI governance requires adaptive regulatory tools offering real-time testing, regulatory guidance and a focus on sectoral risks, alongside broad ecosystem engagement and the ability to scale.

UAE’s Sandbox Landscape

The UAE is at the forefront of the global race to shape AI governance. With an eye towards harmonizing innovation with regulatory protections, the nation has created progressive frameworks for experimenting with new technologies in sandbox settings. The UAE's approach is distinctive for its multi-tiered design spanning national, sectoral and city-level initiatives. This forward-looking model positions the UAE as a regulatory innovator and a global testbed for responsible AI-based innovation.

In January 2019, the UAE government launched the Regulations Lab (RegLab), established under federal legislation. RegLab is a controlled space in which emerging technologies, including AI, can be safely and responsibly tested. Under the framework, businesses receive temporary experimental licenses to test and evaluate innovations without bearing the full weight of compliance while testing is underway. RegLab's main goals are to forecast future regulatory requirements by monitoring the real-world performance of emerging technologies, and to inform and co-design legislation based on experimental outcomes, thereby reducing regulatory lag.

In addition to the national-level Regulations Lab, the UAE is developing its sandbox ecosystem through sectoral and city-level initiatives specializing in emerging digital technologies such as AI, Internet of Things (IoT), and cloud computing. These developments further crystallize the UAE’s vision to be a world leader in regulatory innovation.

In 2024, the Telecommunications and Digital Government Regulatory Authority (TDRA) introduced the ICT Regulatory Sandbox, a framework that facilitates experimentation in new ICT areas. The sandbox promotes innovation in the Internet of Things (IoT), digital twins and cloud services.

The TDRA sandbox prioritizes interoperability, data governance, and security-by-design. It provides shortlisted participants with a controlled setting to experiment with new digital infrastructure solutions under guided supervision, supporting the UAE’s wider digital transformation agenda. While not AI-specific, numerous AI applications are integrated into these ICT ecosystems, particularly in autonomous systems, predictive maintenance and smart city infrastructure.

As a component of the D33 Economic Agenda, the Dubai Future Foundation has introduced Sandbox Dubai, an initiative aimed at growing deep-tech and AI startups through regulatory innovation and international partnerships. The program supports AI startups operating in regulated sectors (e.g., health, finance, mobility), firms seeking market entry via regulatory testing arrangements, and international partnerships with innovation hubs, accelerators, and think tanks. Sandbox Dubai is unique in its alignment with long-term economic objectives and its integration with the Dubai Future Accelerators. It establishes the city as a global testing ground for emerging technologies, offering regulatory agility as a competitive advantage.

The UAE’s sandbox ecosystem is aligned with its broader digital and economic transformation agendas, and the country treats these sandboxes as policy tools for addressing existing gaps. To maximize their effectiveness, however, the UAE needs to ensure ecosystem-wide learning by creating feedback loops and cross-sectoral knowledge sharing.

Why AI Needs a Sandbox Governance Model

AI is complex and multifaceted. Numerous cases have shown how it raises ethical concerns: it is prone to risks such as bias and privacy infringement, it operates across multiple sectors, and it is trans-boundary. This nature creates unique challenges for rigid regulatory frameworks and points to the insufficiency of conventional regulatory approaches for managing rapidly evolving AI.

Regulatory sandboxes address these challenges by offering an environment where models can be tested against existing legislation without exposing users or the market to undue risk. They also create a collaborative space where regulators across different ministries and agencies can come together and work towards a harmonized approach.

Strategic Recommendations

Strengthening AI sandbox governance in the MENA region requires cooperation among policymakers and regulators, innovation enablers, and startups. The following recommendations are offered for each group of stakeholders.

1. Policymakers and Regulators

Cross-sectoral governance within sandboxes is needed to ensure coordination across the sectors AI touches. Governments should align sandbox frameworks with global ethical standards and with local priorities in high-risk sectors.

2. Innovation enablers

Regulator-innovator dialogues and co-design sessions should be used to reduce compliance uncertainty, and innovation enablers can contribute best practices and technological know-how where governments lack in-house expertise.

3. Startups

Startups should adhere to compliance standards and integrate legal, ethical and privacy considerations from the product development stage onwards. They can also use sandbox participation as a credibility signal when entering the market.

Conclusion

Governments must reimagine traditional regulatory approaches, as AI is a rapidly evolving sector. Regulatory sandboxes are powerful tools that can help governments navigate this complexity by enabling real-world testing and inter-sectoral coordination, helping to balance innovation with oversight. The UAE has already made notable progress, and its current initiatives bring it one step closer to becoming a world leader in AI. To remain effective, however, its sandboxes should align with long-term national strategies.

To achieve this, sandbox models should be supported with ecosystem-wide learning, policy feedback loops and greater public-private collaboration. This is where innovation advisory partners can step in. As AI systems become both more powerful and more embedded in everyday life, sandbox models need to evolve from isolated experiments into interoperable, learning-focused and scalable components of a wider governance network.
