
04.06.2019

AUTOMATION ANXIETY AND THE PROMISE OF AI

Will robots and Artificial Intelligence (AI) take over, lead to massive job displacement and soon send our societies into turmoil? Or are they simply great tools we can use to create a world with increased prosperity, resource efficiency and new cures for terrible diseases? A reality check shows that it is time to leave techno-panics behind.

By Patrick Schwarzkopf

One of the greatest constants in the history of automation anxiety has been the assertion that "this time is different". Although previous waves of automation did not lead to mass unemployment, this narrative suggests that the latest technological progress is unique, incomparable to anything that came before, and will therefore force masses of people out of their jobs, stripping them of their livelihoods. It has been steadily reiterated since steam engines replaced muscle power, machines mechanized manual labour, computers and robots were introduced, and the internet transformed the world. The current AI breakthrough, the sceptics warn, is a complete game changer, sparing virtually nobody from the seemingly endless capabilities of these new smart machines. While it is true that AI technologies are making rapid progress and are transforming even the legal and medical professions, this time is NOT different. "AI systems are no different from shovels or tractors: They are tools in the service of humans, and we can use them to make our lives vastly better", concludes Robert Atkinson of the Information Technology & Innovation Foundation (ITIF). So, what is that old mechanism all about? It is simple: Technology boosts productivity, purchasing power goes up, and more is produced and bought while total employment remains constant (or even grows) - a virtuous cycle that makes us more prosperous.

Powerful, yet overrated

As fascinating as the progress in AI (especially deep learning) has been in recent years, a little reality check is in order. This progress is limited to systems built for single, relatively isolated tasks such as speech recognition, self-driving cars and automated translation. By contrast, little or no progress has been made towards any kind of "General AI" capable of applying concepts acquired in one specific context to multiple other contexts. This is exactly what people do all the time. Humans are conscious beings, equipped with purpose and capable of combining factual knowledge, common sense, emotion and empathy. Machines possess none of these abilities. AI systems do not exercise judgment as humans do. Instead, they are "prediction machines" making increasingly educated guesses. That makes them very valuable tools - tools that need to be combined with human intelligence and judgment for optimum results. If we manage these powerful tools well, we can achieve vastly better outcomes in the service of humanity.

Facts, not fiction

  • The German automotive industry increased its industrial robot base from 79,300 units in 2010 to 97,700 in 2017 (+23%). Over the same period, employment in the sector grew from 720,000 to 841,000 (+17%) - the arithmetic behind these growth rates is shown in the sketch after this list.
  • Countries with an especially high robot density - such as Japan, South Korea or Germany - have low levels of unemployment.
  • Researchers Gregory, Salomons, and Zierahn looked at the impact of automation on jobs in Europe and found that while technology-based automation displaces jobs, “It has simultaneously created new jobs through increased product demand, outweighing displacement effects and resulting in net employment growth.”
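The growth figures in the first bullet can be reproduced with a few lines of arithmetic. The following minimal sketch (Python, purely illustrative) uses only the numbers quoted above and rounds to whole percentage points:

    def growth_pct(start, end):
        # Percentage growth from start to end, rounded to whole points
        return round((end - start) / start * 100)

    # Figures quoted above for the German automotive industry, 2010 vs. 2017
    robots_2010, robots_2017 = 79_300, 97_700
    jobs_2010, jobs_2017 = 720_000, 841_000

    print(f"Robot base: +{growth_pct(robots_2010, robots_2017)}%")   # prints +23%
    print(f"Employment: +{growth_pct(jobs_2010, jobs_2017)}%")       # prints +17%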

Wanted: Faster productivity growth

As we have seen, the proponents of the "this time is different" narrative suggest that the current technologies - mostly robotics and AI - are so powerful that their impact on the labour market will be unprecedented. If this were true, we should already see a huge boost in labour productivity (the output produced per working hour). Instead, we see the exact opposite. While productivity grew fast in the past (e.g. in the 1950s and 1960s, driven by electromechanical innovations, and in the 1990s, driven by computing and the internet), it has been anaemic in the developed economies for many years. If anything is different this time, it is that the impact of the promising new AI technologies is lower - and slower to arrive - than expected. We therefore need to promote smart and responsible technology uptake in order to reap the benefits these technologies can bring while mitigating the associated risks.
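For readers who want to see the definition of labour productivity in action, here is a minimal illustrative calculation (the numbers are invented for illustration only, not taken from any statistic):

    # Labour productivity = output produced per working hour
    output_value = 1_000_000.0   # value added, e.g. in euros (illustrative)
    hours_worked = 20_000.0      # total working hours (illustrative)
    productivity = output_value / hours_worked           # 50.0 per hour

    # Year-on-year productivity growth
    productivity_previous_year = 48.0                    # illustrative
    growth = (productivity - productivity_previous_year) / productivity_previous_year
    print(f"Productivity: {productivity:.1f}/hour, growth: {growth:.1%}")   # 4.2%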

One size fits all?

Deep learning - a powerful subdiscipline of AI - has an extremely wide range of use cases. It can be used to identify people in photos, to evaluate creditworthiness, to detect cancer, to make vehicles drive by themselves or to predict machine failures before they occur. As with most powerful tools, its use can be as beneficial as it can be harmful. In her best-selling book "Weapons of Math Destruction", data scientist Cathy O’Neil shows examples of badly set up (or in some cases even ill-intended) algorithms denying young families a mortgage for their first home, getting good teachers fired for alleged incompetence and undermining democracy with an avalanche of Twitter bots spreading fake news. We are well-advised to take these warnings seriously. Wherever people's lives and life decisions are directly affected by AI systems, we need to ensure the necessary accountability and fairness in their use. So it comes as no surprise that ethics boards are being set up around the globe to draw up much-needed guidelines for a safe and unbiased use of AI. But all too often, these groups recommend a uniform set of general principles for evaluating AI no matter where and how it is used. This is not a good idea. Application fields abound where deep learning can create great benefit without posing any risk to people's lives. This is especially true in machine building and manufacturing, where these smart algorithms should indeed be allowed to work their magic without being subjected to stringent "algorithmic bias" and "explainability" regulations that add no benefit where such risks simply do not exist. So rather than indiscriminately applying one comprehensive set of rules to all AI use cases, smart ethics guidelines and regulations are needed that differentiate between the widely varying risk levels of AI applications. One size does not fit all.

Further Information

VDMA Artificial Intelligence   |   VDMA Robotics + Automation   |   International Federation of Robotics (IFR) 

Contact
Patrick Schwarzkopf, VDMA Robotics + Automation.