Ever since the breakthrough launch of OpenAI’s ChatGPT in November 2022, every major economy has incorporated Artificial Intelligence (AI) into its national strategy. Apart from the United States (US) and China, the usual suspects in tech development and innovation, emerging players like the United Arab Emirates (UAE) and India are placing heavy bets on AI. The global AI market was estimated at US$ 196.63 billion in 2023 and is projected to grow at a compound annual growth rate (CAGR) of 36.6 percent until 2030. Reports suggest that revenue from AI products and services in the UAE will increase from US$ 5.22 billion to US$ 46.33 billion by 2030, a CAGR of 43.9 percent.

The potential of an AI-driven gold rush has concurrently spurred a race among jurisdictions to establish regulatory and governance frameworks, given the unpredictable and often opaque nature of AI development. Countries like the US, China, India, and the UAE, as well as international bodies like the United Nations and the OECD, have released their own guiding principles, best practices, and ethical guidelines for AI development. After over three years of deliberation, the European Union Artificial Intelligence Act (EU AIA), which entered into force in August 2024, became the world’s first legally binding governance framework for AI. During those years of deliberation, the world saw the arrival of multimodal generative AI, while speculative ideas like Artificial General Intelligence and Superintelligence began to look like foreseeable technologies. This timescale illustrates the truism in tech discourse that innovation far outpaces policy. Recognising this speed differential, regulators are expanding their AI-specific regulatory toolkit. Regulatory sandboxes are emerging as legal frameworks that can help governments strike a balance between innovation and safety. Countries like Spain and the United Kingdom (UK) began piloting their versions of AI sandboxes in 2024. AI sandboxes can be useful tools for emerging players like the UAE that want to compete in the global AI market and capitalise on their geopolitical and economic positioning.
AI sandboxes: Pros and cons
Sandboxes are controlled environments designed to test novel technologies under regulatory supervision. An AI-specific sandbox provides an environment where developers can trial AI models, algorithms, and APIs (Application Programming Interfaces) using limited, regulator-approved datasets. Regulators may grant temporary legal exemptions so that products can be tested in conditions that approximate the real world. Developers may be asked to produce documentation demonstrating how their AI products meet transparency and explainability standards. AI models can be evaluated on factors like bias mitigation during training, risk management protocols, and compliance with relevant data protection laws prior to market rollout. One benefit of sandboxes is the speed with which they can be implemented and tuned to jurisdictional requirements and use cases. By establishing a controlled, collaborative environment for developers and regulators, sandboxes can attract venture capital investment, speed up market entry for small and medium enterprises (SMEs), and help create an adaptive regulatory environment for fast-moving technologies.
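For illustration, the snippet below sketches how an admission checklist of this kind might be encoded. It is a minimal sketch under assumed criteria: the class, field names, and the bias-score threshold (borrowed from the “four-fifths rule” common in fairness audits) are hypothetical and not drawn from any published sandbox framework.

```python
from dataclasses import dataclass

@dataclass
class SandboxApplication:
    """A hypothetical application to an AI regulatory sandbox."""
    model_name: str
    uses_approved_dataset: bool        # limited, regulator-approved data only
    transparency_docs_submitted: bool  # explainability documentation on file
    bias_audit_score: float            # e.g. a disparate-impact ratio in [0, 1]
    has_risk_management_plan: bool
    data_protection_compliant: bool    # compliance with applicable data law

# Illustrative threshold: the "four-fifths rule" often used in bias audits.
BIAS_SCORE_THRESHOLD = 0.8

def admit_to_sandbox(app: SandboxApplication) -> tuple[bool, list[str]]:
    """Return (admitted, unmet criteria) for a hypothetical regulator review."""
    failures: list[str] = []
    if not app.uses_approved_dataset:
        failures.append("dataset not on the regulator-approved list")
    if not app.transparency_docs_submitted:
        failures.append("missing transparency/explainability documentation")
    if app.bias_audit_score < BIAS_SCORE_THRESHOLD:
        failures.append(f"bias audit score {app.bias_audit_score:.2f} below threshold")
    if not app.has_risk_management_plan:
        failures.append("no risk management protocol on file")
    if not app.data_protection_compliant:
        failures.append("not compliant with applicable data protection law")
    return (not failures, failures)

admitted, issues = admit_to_sandbox(SandboxApplication(
    model_name="demo-model",
    uses_approved_dataset=True,
    transparency_docs_submitted=True,
    bias_audit_score=0.85,
    has_risk_management_plan=True,
    data_protection_compliant=True,
))
print("admitted" if admitted else f"rejected: {issues}")
```

The point of such an encoding is not automation for its own sake: making the entry criteria explicit and machine-checkable is one way a regulator could keep cohort selection consistent across applicants.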
However, sandboxes are not without drawbacks. For instance, sandboxes are generally designed on a small scale with a limited number of participants per cohort: the UK’s 2024 AI Airlock framework admitted five participants, while the 2021 ‘SANDBOX’ programme by Dubai Silicon Oasis admitted only twelve out of several hundred applicants. In highly dynamic fields with short innovation cycles, like the tech sector, reliance on sandboxes can lead to the ballooning of sandbox programmes as developers identify competitive benefits. This could lead to inefficient and ineffective implementation of the frameworks by regulators and cause unpredictable effects on regulation, consumer protection, and competition. A second drawback stems from the desire of all major economies to attract talent and private investment in the AI sector. High global demand can allow enterprises to engage in regulatory arbitrage, or “forum shopping”, as they seek out the most permissive and pro-business sandboxes. This can trigger a downward spiral in governance standards as regulators across jurisdictions compete to attract participants under pressure to produce sandbox successes.
Sandboxes and their relevance for the UAE
The UAE is emerging as a key player in the global AI race, placing fifth for AI “vibrancy” on Stanford University’s global AI index in 2024. Driven by the need to diversify its economy amidst a global deceleration in oil demand growth, the UAE has been intensifying efforts to future-proof its position. In 2017, it became the first government in the world to appoint a minister for AI, and in 2018 it established the UAE Council for Artificial Intelligence and Blockchain. Signalling its growing talent pool and digital infrastructure, the UAE has developed its own AI foundation model, Falcon, whose latest iteration, Falcon 3, outperforms many leading models from Silicon Valley while running on lightweight infrastructure. However, the expansion of economic priorities has not been without obstacles for the Emirates. Microsoft’s US$ 1.5 billion strategic investment for a minority stake in the Emirati company G42, alongside G42’s deployment of Nvidia’s H100 and H200 chips, has pulled the UAE deeper into the geopolitical tensions between the US and China. Given its internal drive for economic diversification and external pressure to avoid geopolitical sinkholes, the UAE will need tools to craft an AI-specific governance framework swiftly and effectively.
In this context, Core42, a subsidiary of G42, has stated that it has developed a specialised sandbox framework for the deployment of Nvidia’s H100 and H200 chips. The sandbox, referred to as a “regulated technology environment” (RTE), is designed to provide a secure space for deploying and testing emerging technologies. Although little further information has been released about the framework, Core42 has specified its priorities: aligning with US export restrictions while maintaining “technological sovereignty.” These priorities track the blueprint published by the UAE Council for Artificial Intelligence and Blockchain, which identifies effective governance and regulation as a foundational step towards transforming the UAE into the globally preferred AI destination.
Since governance priorities currently revolve around respecting US export controls, localising data generated in the UAE, and cultivating a local ecosystem for AI development, the government may allow a broad range of foundation-model and API developers to access AI sandboxes to encourage local capacity-building. The UK’s ‘AI Airlock’ framework takes a contrasting, sector-specific approach focused on medical diagnostics and patient care. Such a targeted approach creates a clear risk profile, as success or failure can be assessed relatively efficiently while humans remain in the diagnostic loop to minimise harm. Although the overlap between the regulatory environments of the UK and the UAE is limited, AI Airlock builds the case for balancing broad frameworks with narrow ones when tackling unpredictable technologies like AI.
Another challenge in crafting AI-specific sandboxes is that AI models and platforms, once developed, can be exported to other jurisdictions. While effective in reducing undue regulatory burden, the UAE’s current approach of issuing a “patchwork of decrees” will not suffice to address such concerns. Countries differ in their data and consumer protection laws, bias mitigation standards, availability of high-quality datasets, and enabling policies. In this respect, the jurisdictional specificity of sandboxes raises concerns about international regulatory consistency. Sandboxes should therefore be considered interim measures rather than replacements for legislative frameworks. Legislation offers developers a more durable and consistent ruleset, as well as opportunities to align national regulation with internationally agreed principles, thus facilitating cross-border partnerships. Over-reliance on sandboxes can produce fragmented regulatory regimes, and global competition for private AI investment could push some jurisdictions to lower barriers to market entry in ways that jeopardise responsible innovation.
Going forward
Although questions remain about how the AI landscape will evolve in the UAE, its regulatory approach will need to balance the imperative to attract investment with cross-border alignment and the development of a legislative framework. The UAE’s strategic position between the US, the EU, and Asia offers opportunities to align local regulatory sandboxes with international frameworks by leveraging its trade and geopolitical relationships. Investment in UAE-based data centres and cloud infrastructure can address data localisation while leaving room to shape transparency, liability, and compliance standards that minimise regulatory conflict across borders. The risks associated with sandboxes can be further reduced through a phased model that treats AI sandboxes as one stage in a process geared towards permanent legislative frameworks. Finally, regulators can avoid regulatory stagnation by attaching sunset clauses to products that graduate from the sandbox, requiring reauthorisation after a set period, as sketched below. These recommendations can help close regulatory gaps while maintaining a pro-business approach that balances innovation with safety.
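To make the sunset-clause recommendation concrete, the following minimal sketch models a hypothetical authorisation record with a fixed validity window; the class, field names, and two-year period are illustrative assumptions, not an existing UAE mechanism.

```python
from datetime import date, timedelta

# Illustrative sunset period; an actual regulator would set this by rule.
SUNSET_PERIOD = timedelta(days=365 * 2)  # e.g. two years

class SandboxAuthorisation:
    """Hypothetical market authorisation granted after a sandbox exit."""
    def __init__(self, product: str, granted_on: date):
        self.product = product
        self.granted_on = granted_on
        self.expires_on = granted_on + SUNSET_PERIOD  # the sunset clause

    def requires_reauthorisation(self, today: date) -> bool:
        """True once the sunset date has passed and a fresh review is due."""
        return today >= self.expires_on

auth = SandboxAuthorisation("demo-ai-product", granted_on=date(2025, 1, 1))
print(auth.requires_reauthorisation(date(2026, 6, 1)))  # False: still valid
print(auth.requires_reauthorisation(date(2027, 1, 1)))  # True: review due
```

The design choice the sketch illustrates is simply that approval is time-bound by default: a product that has changed, or a market that has changed around it, must come back for review rather than retain indefinite clearance.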
Siddharth Yadav is a PhD scholar with a background in history, literature and cultural studies.