By Kai Peters
The question of how lawmakers should respond to emerging digital technologies such as artificial intelligence has concerned the European Union for a number of years. Now, Europe might get its first answer from the European Commission within the next few weeks. "In my first 100 days in office, I will put forward legislation for a coordinated European approach on the human and ethical implications of artificial intelligence," promised Ursula von der Leyen, the next President of the European Commission, in her political guidelines for the upcoming term. What exactly to expect, however, is still a matter of speculation in Brussels.
In her paper entitled "My agenda for Europe," von der Leyen acknowledged that digital technologies such as artificial intelligence (AI) were transforming the world at an unprecedented rate and promised to boost European investment in AI. However, the German remained rather vague about the ethical side of the technology. "We will jointly define standards for this new generation of technologies that will become the global norm," von der Leyen wrote in her guidelines.
For the mechanical engineering industry, how these AI standards are set will be highly relevant. AI technologies are already in use in industrial applications such as the process optimization of complex machines, predictive maintenance and surface inspection, and they will become even more important in the future. Yet few of these use cases pose any risk of violating ethical norms. Depending on the scope and nature of European regulation, however, mechanical engineering companies, too, might face new rules for the development and operation of AI. This would be justified where automated decisions pose a risk to human rights; but if the EU does not get the scope right, companies will have to deal with compliance questions even for harmless technical applications of AI.
The political art of striking a balance is needed
Because of this potential impact, VDMA has mixed feelings about von der Leyen’s promise to deliver a new approach within 100 days. On the one hand, it is clear that - if a regulation is considered necessary - it must be on a European or even global level, avoiding a patchwork of national solutions. Only in a harmonized market with cross-border application can the necessary scaling effects be achieved and reliable framework conditions for investments be created.
On the other hand, striking the right balance between limiting the risks of AI and promoting its opportunities is a difficult task, made even more complex by the fast pace at which the technologies are evolving. It is clear that companies and citizens need legal certainty for the use of AI, but they also need room to innovate, experiment and experience the benefits of AI. This cannot be achieved with a one-size-fits-all solution, and regulation should not be drafted hastily because of self-imposed political pressure. The worst case would be if that pressure led to a horizontal regulation attempting to regulate the technology itself in detail. This would place a huge burden on the use of AI and raise many detailed questions about scope and relevance.
Even without time pressure, attempts to regulate AI can go in the wrong direction. In Germany, for example, the Federal Government's Data Ethics Commission recently published a paper proposing regulatory approaches for AI, which the VDMA criticized as unbalanced and excessive because it focuses too much on potential risks for consumers. For instance, the German proposal does not distinguish sufficiently between AI that directly collects and processes the sensitive data of citizens and purely industrial applications. If this approach becomes the blueprint for regulation, technological innovation and dynamic market development will be blocked.
Getting the important things right
Instead, VDMA calls on lawmakers to limit regulation to high-risk applications of automated decision-making where human rights are at stake - and to avoid stipulating risk assessments for all applications of artificial intelligence. AI is not a dangerous phenomenon that needs to be regulated as such. A need for regulation arises only in certain applications, for example when human rights and dignity are affected.
A second point that is highly important in the view of VDMA is to prioritize self-regulation and voluntary commitments by companies where possible, e.g. through codes of conduct or model agreements. Soft approaches like this leave room for the economic actors to decide. It should not be underestimated that, given the right level of transparency, citizens' preferences and markets can be powerful forces that steer AI in the right direction. Therefore, one of the first steps should be to raise awareness and AI skills everywhere - after all, informed societies don't need technological paternalism.
Finally, one must not forget that AI is already subject to a wide range of regulations at the European and national level. This covers, for instance, the whole range of products in which AI is embedded in classic industrial sectors such as manufacturing. Here, goods (with or without AI) must already comply with existing requirements and standards concerning product safety, machine safety and workplace conditions.
Hope for a starting point rather than a decision for the future
Indeed, there are expectations that von der Leyen will propose some legislation for AI within her 100-day deadline, but that it will be a starting point of a process rather than a conclusive package. While little is known about the initiative, rumors in Brussels have it that the legislation will be rather light. The proposal is also expected to be based on the preparatory work which started a while ago, for example in the European High-Level Expert Group which published 33 recommendations in June.
But even though it won't be possible to hammer out a sophisticated European framework for AI within 14 weeks, von der Leyen's promise indicates that artificial intelligence will be very high on the Commission's agenda over the next five years. "It may be too late to replicate hyperscalers, but it is not too late to achieve technological sovereignty in some critical technology areas," von der Leyen wrote in her guidelines. For industry, this would be the right mission to pursue - rather than just the political promise of delivering some legislation in only 100 days.