The Business & Technology Network
Helping Business Interpret and Use Technology
Exclusive: Insights on global AI governance, ethics, and regulation from UN and EU leaders

DATE POSTED: October 8, 2024

The rapid progress of artificial intelligence (AI) technology and its growing influence across many areas of life have sparked significant global discussions on governance, ethics, and regulatory frameworks. At the forefront of these discussions is the EU AI Act—a pioneering regulatory framework that aims to set the standard for these topics across Europe. But this isn’t just another regulatory effort; it represents a broader vision for shaping the future of AI in a way that ensures fairness, inclusivity, and respect for human rights. As AI technologies and their impact continue to accelerate, it’s becoming increasingly clear that engaging with these regulations is crucial—not just for AI developers but for policymakers, businesses, and society at large.

Dataconomy had the opportunity to speak with key EU and UN leaders to explore the global impact of AI governance in greater detail. These interviews revealed how AI regulation and ethics are unfolding on a global scale, with the EU AI Act playing a critical role. During the Digital Enterprise Show (DES) 2024 in Malaga, Wendy Hall, a UN AI Advisory Board member and prominent UK AI strategist; Carme Artigas, Co-Chair of the UN AI Advisory Body on AI Governance and Inclusion; and Dan Nechita, Head of Cabinet for MEP Dragos Tudorache and lead technical negotiator for the EU AI Act on behalf of the European Parliament, shared their exclusive insights with us on how AI governance, ethics, and regulation are being shaped in real-time.

Bridging the global AI divide

Wendy Hall, a UN AI Advisory Board member and prominent UK AI strategist, strongly advocates for a globally collaborative approach to AI policy. During our discussion, Hall emphasized that while AI presents vast opportunities, the strategies employed by different nations vary widely. For instance, the UK has taken a more comprehensive, policy-driven approach to AI development. Beginning in 2017, the UK government recognized AI’s potential for economic growth and job creation, positioning the country as a leader in AI governance. At a time when Brexit consumed political focus, the UK still managed to work on AI policy. Hall notes that the UK’s early engagement helped establish its prominence, but she’s quick to point out that other countries like the US and China have followed distinctly different paths.

In the US, the focus has largely been on empowering tech companies like Google and OpenAI to push AI boundaries, leaving governance in the hands of the private sector. Conversely, China has taken a centralized, state-driven approach, with the government maintaining control over AI’s strategic direction. These divergent strategies, Hall explains, highlight the complexity of global AI governance and the need for more cohesive international policies.

Yet, Hall’s primary concern isn’t the divergence between these leading nations but rather the unequal access to AI technologies across the globe. She emphasizes the need for equitable AI development, particularly for countries outside the wealthy West. Regions like the Global South, which often lack the infrastructure and resources to keep pace with AI advancements, risk being left behind. Hall states this divide could deepen existing global inequalities unless capacity-building initiatives are implemented.

“These regions need more than just access to AI technologies—they need the infrastructure, talent, and data to develop AI systems suited to their own needs,” Hall stresses. This could include providing countries in the Global South with access to high-performance computing systems, datasets, and the technical expertise needed to build AI models locally. Hall advocates for global initiatives offering the tools and resources necessary for these countries to participate actively in the AI revolution rather than remain passive consumers of technology developed elsewhere.

“There’s a risk that AI could deepen global inequalities if we don’t ensure equitable access to the necessary infrastructure and talent”

Elena Poughia with Wendy Hall at Digital Enterprise Show 2024

A particular concern for Hall is the rapid and unchecked development of generative AI models, such as OpenAI’s GPT-4. While these models offer groundbreaking possibilities, they also pose significant risks in the form of misinformation, disinformation, and ethical misuse. Hall is cautious about the unintended consequences of such powerful technologies, noting that generative AI can produce convincing but entirely false content if not carefully regulated.

She draws attention to the broader implications, explaining that while earlier AI technologies like automation primarily focused on improving efficiency, generative AI directly impacts knowledge creation and dissemination. “We’ve seen this with misinformation online—if the data going in is flawed, the output could be damaging, and at a scale that we’ve never dealt with before,” Hall warns. The stakes are high, particularly when AI technologies influence decisions in critical sectors like healthcare, law, and finance.

For Hall, the solution lies in global partnerships aimed at creating robust ethical standards and governance frameworks. She advocates for establishing international agreements to ensure that AI technologies are developed and deployed responsibly without contributing to societal harm. Hall points to the importance of involving diverse stakeholders, including governments, private companies, and civil society organizations, to establish regulations that balance innovation with public safety.

Hall’s perspective underscores a critical point: AI could exacerbate existing global inequities and introduce new ethical dilemmas without collaboration and shared governance. Hall’s call for capacity building and ethical oversight isn’t just a recommendation—it’s a necessary step to ensure AI is developed to benefit humanity as a whole, not just a select few.

Ensuring inclusive AI governance

Carme Artigas, Co-Chair of the UN AI Advisory Body on AI Governance and Inclusion, brings a critical perspective to the conversation about AI’s global development—one focused on the glaring disparities in how different nations are included in discussions about AI governance. Artigas stresses that the current frameworks governing AI, including initiatives led by the G7, UNESCO, and the OECD, are largely dominated by wealthier, more technologically advanced nations, leaving out key voices from the Global South. “Many countries in the Global South are not even invited to the table,” Artigas points out, referring to the global discussions that shape AI’s future. In her view, this exclusion is a major governance deficit and risks creating a new form of digital colonialism. As AI technologies advance, those countries that lack the resources or influence to participate in international AI policymaking could be left even further behind. For Artigas, this isn’t just a matter of fairness—it’s a fundamental risk to global stability and equality.

Artigas highlights the need for a governance model that goes beyond the traditional frameworks of regulatory bodies. Rather than creating a single new international agency to oversee AI governance, she advocates for leveraging existing institutions. “We don’t need more agencies; we need better coordination between the ones that already exist,” she explains. Organizations such as the ITU (International Telecommunication Union), UNICEF, and WIPO (World Intellectual Property Organization) are already deeply involved in AI-related issues, each within their own sectors. What’s missing is a coordinated approach that brings together these specialized agencies under a unified global governance structure.

“True governance must go beyond mere guidelines and include mechanisms for accountability”

Elena Poughia with Carme Artigas at DES 2024

Artigas’s vision is one where AI is governed in a way that respects international law and human rights and ensures that all countries—regardless of their technological standing—have equal access to the benefits AI can bring. This includes providing the necessary tools and resources for countries currently excluded from AI advancements to catch up. She notes that the private sector and academia also have a role in helping democratize access to AI technologies.

However, Artigas points out that ethical guidelines alone are not enough. While many companies have developed their internal ethical frameworks, she argues that these are often voluntary and unenforceable. True governance, she asserts, must go beyond mere guidelines and include mechanisms for accountability. Without clear consequences for unethical AI development or deployment, the risks of misuse and harm—particularly for vulnerable populations—remain high.

One of the key issues Artigas raises is the role of AI in exacerbating the digital divide. If not properly regulated, AI could further entrench existing inequalities, with wealthier nations gaining more economic and technological power while poorer nations fall further behind. For her, the goal of AI governance must be to close this divide, not widen it. “AI has the potential to be a great equalizer, but only if we ensure that its benefits are shared equally,” she emphasizes.

Artigas’s focus on inclusivity and coordination in AI governance reflects the growing recognition that AI is a global issue requiring global solutions. Her call for a unified approach—where existing agencies work together to govern AI—underscores the need for a more inclusive, ethical, and accountable system that benefits all of humanity, not just a select few.

Balancing innovation and regulation

Dan Nechita, Head of Cabinet for MEP Dragos Tudorache and the lead technical negotiator for the EU AI Act, brings a pragmatic yet forward-thinking perspective to the discussion of AI governance. As one of the key figures behind the EU AI Act, Nechita emphasizes the importance of balancing innovation with the need for robust regulation to ensure AI technologies are developed and used safely.

According to Nechita, the EU AI Act is designed to set clear rules for AI systems, particularly those considered high-risk, such as AI used in healthcare, education, law enforcement, and other critical sectors. “This isn’t just about regulating the technology itself,” Nechita explains. “It’s about protecting fundamental rights and ensuring that AI doesn’t exacerbate existing societal problems, like discrimination or privacy violations.”

One of the standout features of the EU AI Act is its emphasis on risk management. Nechita explains that AI systems are classified based on the level of risk they pose, with the highest-risk systems subject to the strictest regulations. This tiered approach allows for flexibility, enabling Europe to maintain its leadership in AI innovation while ensuring that the most sensitive applications are thoroughly regulated. For Nechita, this balance between innovation and regulation is crucial to maintaining Europe’s competitiveness in the global AI landscape.

Yet, Nechita acknowledges that implementing the EU AI Act is a complex and ongoing process. One of the challenges is ensuring that all 27 EU member states, each with their own national priorities and strategies, adhere to a unified regulatory framework. The EU AI Act requires cooperation between governments, industry leaders, and regulatory bodies to ensure its success. “We’re fostering a continuous feedback loop between companies and regulators, ensuring AI systems evolve safely while remaining compliant as new technologies emerge,” Nechita explains. “We’re not just handing companies a set of rules and walking away. We’re asking them to work with us continuously, to test their systems, report issues, and ensure compliance.”

“AI will transform the world, and we must guide it in a direction that benefits everyone”

Dan Nechita on stage explaining the EU AI Act’s implications for European enterprises

Nechita also points out that the EU AI Act is not just about creating static regulations. The Act includes provisions for continuous updates and revisions as AI technologies evolve. He argues that this dynamic approach is essential because AI is a fast-moving field, and regulations must keep pace with new developments. This is why the EU AI Act encourages ongoing dialogue between AI developers and regulators, fostering a relationship where both innovation and safety can coexist.

However, Nechita is also mindful of the broader global context. While the EU has taken a proactive stance on AI regulation, other regions, particularly the US and China, have different approaches. In the US, AI regulation is more fragmented, with companies largely self-regulating, while China’s state-controlled AI development prioritizes national interests over individual rights. Nechita acknowledges that achieving global consensus on AI governance will be difficult, but he sees potential for collaboration in areas like AI safety, sustainability, and ethical standards.

Nechita envisions an AI governance model that balances innovation with public safety. He believes the EU AI Act, focusing on risk management, transparency, and continuous collaboration, offers a model for how other regions might approach AI regulation. At the same time, he stresses the need for global cooperation, particularly in addressing AI’s ethical and societal implications.

As the EU AI Act continues to take shape, Nechita remains optimistic about its potential to set a global standard for AI governance: “AI is going to change the world, and we need to make sure it changes for the better,” he concludes. His approach reflects a nuanced understanding of the challenges ahead and a strong belief in the power of regulation to guide AI development in a direction that benefits society.

Dan Nechita is scheduled to speak at the Data Natives 2024 event in Berlin on October 22-23; the event’s theme is “2050: The ‘Good’ AI Symposium.”

A unified vision for the future of AI

Wendy Hall, Carme Artigas, and Dan Nechita’s insights reflect a crucial turning point in AI governance as we watch AI evolve at an unprecedented pace. Their perspectives converge on one undeniable truth: AI isn’t just a technological breakthrough; it’s a force that has to be firmly steered away from benefiting the few at the cost of the many.

Wendy Hall calls urgently for global capacity building and ethical oversight of AI, asking us to bridge the growing capability gap between developed and developing nations. Carme Artigas’s focus on inclusivity and accountability reminds us that governance without enforcement is incomplete. And the EU AI Act stands as a worthy example of balancing innovation with safety—and, thus, of how other regions might approach AI governance.

Together, these voices paint a holistic picture of what’s needed to shape AI’s future: a focus on collaboration, human rights protection, and a strong framework that encourages innovation while protecting public interests. It’s an incredibly tough road ahead but also one with tremendous potential. AI’s future is being decided now, and it’s up to us to get it right.