The rules are in place. The infrastructure is taking shape. But who monitors their application? With its new AI Office, the EU wants to give direction to generative AI. At the same time, China is showing that AI is not only a technology but also a geopolitical instrument. In that context, enforcement power is not a luxury but a necessity.
With the AI Act, Europe was the first to establish a legal framework for the use of AI. In the Netherlands, the effects are already tangible: companies and governments are increasingly required to demonstrate how their AI works and where their data comes from. Projects also increasingly have to run within Europe itself, not only because of regulations, but because customers and governments demand security and control from European territory.
In part 1, we explained how these rules are based on risk. In part 2, we looked at infrastructure: the underlying computing power that makes AI possible in the first place. But legislation and data centers alone do not constitute policy. That requires interpretation, enforcement, and choices about how to deal with AI. That task falls to the new European AI Office. It must ensure compliance with the AI Act, particularly for general-purpose AI models such as those behind ChatGPT and Gemini. It must help establish standards, draw up codes of practice, and monitor systemic risks. On paper, this is a substantial role. In practice, however, the structure is still in its infancy while expectations are sky-high. The AI Office has been headed by Lucilla Sioli since the summer of 2025, but it still has limited capacity and is actively seeking cooperation with national regulators.
While Europe is still working out how to implement its rules, China is showing a different side of AI. Internal documents from the company GoLaxy, uncovered by researchers at Vanderbilt University and recently published by The New York Times, show how AI is already being used extensively in information operations. This is not science fiction: the company is developing a "Smart Propaganda System" that monitors social media at scale, tracks sentiment, and automatically generates content that resembles real, human expression.
The technology is being used to weaken opposition in Hong Kong, influence elections in Taiwan, and even profile Western politicians. According to the documents, GoLaxy collected data on thousands of American public figures. Although not everything can be verified, US intelligence agencies confirm the close ties between the company and the Chinese government.
It is precisely these kinds of applications that show why enforcement matters. Rules without oversight remain non-binding. The EU presents itself as the standard-setter for 'trustworthy AI', but that requires more than good intentions. And that is where the problem lies. The development of the first codes of practice, guidelines for providers of generative AI, is in full swing. But according to Euractiv, this is precisely where the AI Office risks losing its authority. The first Code of Practice for general-purpose AI appeared in July 2025; companies can apply it voluntarily to demonstrate their compliance. Yet instead of taking the lead itself, the Commission is considering engaging consulting firms such as the Big Four, possibly even in collaboration with the very companies that will soon be under its supervision. That creates the risk of self-regulation through the back door.
Critics call it a "false start." Without clear direction and transparency, the AI Office risks becoming a mere spectator to the game it is supposed to referee. At the same time, there are also proponents of a pragmatic approach. They point out that speed is of the essence: the codes must be ready in the course of 2025. Moreover, bringing in outside experts, including commercial ones, can make the rules more workable and better aligned with everyday practice.
For companies and institutions, this means paying attention now. The coming months will determine how the AI rules work in practice. Those who get involved now, through consultations, sector forums, or direct contact with the Commission, can help shape the frameworks they will soon have to comply with. Importantly, the AI Office will also have the power to request information, carry out model evaluations, and even order models withdrawn from the market in the event of serious risks. It will be no paper tiger, provided it fulfills its role.
China is demonstrating the potential of AI as a strategic tool. Europe wants to counter this with a different model. But that requires more than ideals: it requires an AI Office that is authoritative, independent, and capable. The coming months will show whether Brussels can deliver on that promise.
Would you like to know what this means for your organization, or how you can contribute to this process? Please contact Roel (roel@castro.lu).