4 things to watch for with the EU AI Act’s implementation, according to Kai Zenner

August 1, 2024 · 2 min read


As Transcend’s Field Chief Privacy Officer, I’m passionate about connecting with leaders across privacy, technology, and policy disciplines as we all navigate the evolving landscape of AI governance, and the implications for privacy. This passion inspired the launch of Transcend Field Trips: A CPO Listening Tour. With each episode, I aim to engage in insightful dialogues with key figures in these fields, shedding light on the unique challenges and opportunities we face.

In our latest episode, I had the honor of sitting down with Kai Zenner, Head of Office and Digital Policy Advisor for MEP Axel Voss. Kai was a pivotal voice in the negotiations of the EU AI Act, bringing a wealth of knowledge and enthusiasm for digital policy. Our conversation, set against the backdrop of Brussels, delved into the intricacies of EU lawmaking and Kai's innovative ideas for enhancing the AI Office's effectiveness and ensuring the Act's overall success.

Below, you’ll find four key takeaways from our discussion. I encourage you to watch the full conversation at the end of this post for a deeper understanding of Kai’s perspectives and insights.

A call for industry engagement in the AI Act

In our conversation, Kai brought up the multiple mechanisms the EU AI Act offers for various stakeholders to contribute technical expertise to the newly formed Artificial Intelligence Office. These mechanisms include the Advisory Forum, joining the AI Pact, and participating in AI Regulatory Sandboxes (more on this below).

Kai emphasized that this engagement was critical to the Act’s success.

“Without this input, the AI Act will not work. It's too vague right now. It's really necessary that this secondary legislation is filled up. And my call on all those events is always, ‘look, industry, you wanted more rights to participate, to get involved, now you have it in theory, please use it.’"

AI regulatory sandboxes: a controlled environment for innovation

Regulatory sandboxes under the AI Act allow companies to experiment with new and innovative products under regulatory supervision. This setup provides incentives for businesses to test their AI systems in a controlled environment, helping regulators better understand the technology before it hits the market.

As outlined in the Act, these sandboxes are intended not only to provide a testing environment for new AI applications, but also to enable reporting that can guide potential revisions to the AI Act.

Operationalizing the AI Office: breaking bureaucratic barriers

Kai views the operationalization of the AI Office as an opportunity to explore new enforcement and governance structures beyond existing bureaucratic limitations. He highlighted the potential for innovation, saying, "The AI Office is a big chance to have some experimental space to try out new things and really come up with a new, more modern enforcement or governance structure."

"Here in Europe, I guess at least to a certain extent, it's the same in the United States. Our administration is still living in another century. The processes are really too slow... We need to modernize administration and so on and so on."

The contentious nature of AI, differing perspectives, and time pressure to finalize negotiations led to some parts of the AI Act remaining vague. Reflecting on the negotiation challenges, Kai noted, "Technical advances, highly disputed discussions in the European Parliament, and time pressure... It's really, I would say even almost a toxic combination. It led to a situation that very often we didn't really have time to further discuss or improve things."

He continued, "The AI Act was proposed in 2021 and already in '22, '23, partially it was outdated and therefore we were always running behind, especially with foundation models and all of that."

You can watch the full conversation with Kai Zenner as part of Transcend Field Trips below.


