


Artificial intelligence (AI) is fundamentally changing the collaboration between humans and machines. Where are the ethical boundaries in mechanical engineering? We asked an expert from the Fraunhofer-Gesellschaft. Simone Kaiser is the Deputy Head of the Research Unit for Responsible Research and Innovation at the Institute for Organisational Development and Work Design.

By Anke Henrich

Ms. Kaiser, AI and ethics have been at the center of global discussions ever since the first case of a self-driving car killing a pedestrian and the sale of Facebook data to manipulate election campaigns made headlines. When it comes to the increased use of AI in mechanical engineering, however, there is no such discussion. Is this an advantage for mechanical engineering companies?

From my point of view, a discussion on ethics is a valuable one. For all those involved, it is important to accept a shift in perspective from the standpoint that digital ethics limits innovation to the idea that it is an opportunity for the future of businesses. Making products responsibly will allow manufacturers to gain a competitive advantage.


In practical terms, this amounts to forward-looking risk management: it reduces uncertainty, secures the acceptance and marketability of new products, and thereby increases customer satisfaction.

Is that not wishful thinking? Some companies are worried that the differing levels of restrictions in different countries, such as when it comes to data security, will distort global competition.

It is precisely the trust placed in data security that will become a key competitive factor. When it comes to AI, companies will need not only ever larger amounts of data but also data of higher quality, whether they are active in the B2B sector or the consumer sector. This applies to supply chains and to production contexts, for example. Companies that can guarantee the anonymization of third-party data and store it in a secure cloud will be at an advantage.
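To make the anonymization point concrete, here is a minimal sketch of one common technique, pseudonymization via salted hashing, in which direct identifiers are replaced with opaque tokens before data is shared or stored. This is an illustrative assumption on our part, not a method described by Kaiser or Fraunhofer; the field names are invented for the example.

```python
import hashlib
import os

def pseudonymize(record: dict, id_field: str, salt: bytes) -> dict:
    """Replace a direct identifier with a truncated salted hash so that
    records can still be linked to each other without revealing who
    they belong to. The salt must be kept secret and stored separately."""
    out = dict(record)
    digest = hashlib.sha256(salt + record[id_field].encode("utf-8")).hexdigest()
    out[id_field] = digest[:16]  # opaque token stands in for the identity
    return out

salt = os.urandom(16)  # secret, held apart from the data store
reading = {"operator_id": "employee-4711", "torque_nm": 87.3}  # hypothetical sensor record
safe = pseudonymize(reading, "operator_id", salt)
```

Note that salted hashing is pseudonymization rather than full anonymization: whoever holds the salt can re-link the tokens, so in practice the salt itself becomes data that must be protected.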

Who should decide what artificial intelligence can and cannot do in the mechanical engineering industry in the future? Companies themselves, or politicians?

In mechanical engineering, as in other industries, all socially relevant participants should be included in the discussion at an early stage. This can take the form of dialogs in which companies, employees, politicians and experts shape the future human-machine interface together.

Do you have a practical example?

Take robots as assembly assistants. They support employees by guiding them through their tasks and taking on difficult or even dangerous work. For the robot to be fed all the necessary data, the person at its side may have to be monitored on video or wear sensors that record physiological values, all of which constitutes sensitive employee data. At this point an important ethical discussion about costs and benefits is needed, and our society does not yet have a blueprint for it. In these times of Industrie 4.0 and industrial intelligence, such questions will arise ever more frequently.

Would it not be sufficient if companies simply committed to this themselves? Surely the pressure is on for competitive reasons?

Three pillars will develop. In addition to legal regulations, many companies will formulate their own voluntary commitments, and this is already happening in part. Engineers will probably begin discussing the ethics of their profession, as computer scientists are already doing. For instance, they are discussing how to practice their profession in a way that takes human diversity into account and ensures that AI applications do not discriminate by reproducing stereotypes. Research is becoming more sensitive to these issues.

And what exactly do all those involved in mechanical engineering have to agree upon?

Who has access to the controls of AI? Where and under which conditions is it okay to use it and where not? What data are we collecting and in what quality? Who has the final say when humans and machines come to different conclusions? How can we shape the interaction between humans and machines? And which deployment in companies also provides added value to society?

The last question in particular could yet cause considerable tension in society. Successfully using machine learning means a loss of knowledge for people and potentially even job cuts, as AI takes on increasingly sophisticated tasks. Will that be a problem?

Studies on whether digital technologies will have a positive or negative effect on the number of jobs have come to differing conclusions. One thing is certain, however: We will need different vocational profiles and training in the future. A hugely important question in this context will be the level of significance held by practical knowledge in the future.

Is the discussion on machine ethics a typically German one?

These discussions certainly have a strong cultural component. In Germany and Europe, we are interested in responsible innovations and a responsible business model for our companies. We know that technology is not something that simply happens, but is actively shaped by people.