Automation and The Erosion of Expertise
Published on January 16, 2020
General artificial intelligence, autonomous drones and trucks, big data and algorithms: these buzzwords defined the past decade, but as we enter what many are calling the Fourth Industrial Revolution, people are questioning what the future of the workforce looks like. What jobs will we be doing when machines think for us? Who will be the decision makers? What skills will we subordinate humans bring to the table?
In this blog post we will discuss the point where the rubber meets the road: the place where human beings interface with our runaway technological development, and where human expertise matters more than data.
The Benefit of Automation
Technological progress is causing disruption across all industries. The industries we operate in are no different. Mining has implemented driverless trucking fleets, Oil & Gas is experimenting with autonomous robots and the Defence sector is no stranger to autonomous drone technologies.
The long-term benefits of automation are undeniable. Reducing human exposure to injury and danger is, despite the labour force disruption, a net positive. If a robot can do something dangerous, why should a human?
On the journey to the Fourth Industrial Revolution there will be inevitable failures that stymie progress. Understanding where the risks exist, how to insure against them, and why expertise still matters is crucial to the journey to autonomy.
The Perilous Journey
Already we are witnessing significant failures on the path to technological nirvana.
Two Boeing 737 MAX aircraft have crashed in the last 12 months, with flawed flight-control software implicated in both accidents. The aircraft's reliance on a single piece of software means that, a year later, Boeing is halting all production as undelivered planes pile up at its factory near Seattle. The CEO was recently removed.
In another embarrassment for Boeing, its Starliner spacecraft, developed in partnership with NASA, failed to rendezvous with the International Space Station after entering the wrong orbit because of a fault in its mission clock.
In 2018 an autonomous mining train operating in the Pilbara region of Western Australia unexpectedly took off while the driver was inspecting a wagon. The ghost train travelled 90 kilometres, reaching speeds of up to 110 km/h, before it was deliberately derailed. The incident is expected to cost the operator close to US$300 million. Human error was ultimately to blame, but that nuance didn't make the headlines. If the incident had resulted in fatalities, how would the Australian Government have reacted?
Despite all the advances Tesla is making on the road to autonomous vehicles, it is the failures that capture the public imagination. The expectation of all things robotic and autonomous is not that they will be slightly better than human performance; they need to be orders of magnitude better.
Behind all of these technological failures, human decision making still rules the day. Even when the software is to blame, that software was built on optimised decision-making models derived from decades of experience, success, failure and innovation by thousands of people working in a given industry.
What all of these failures have in common, however, is the human surrendering agency once the "optimal model" has been designed and built. The human's capability to adjust in real time still exceeds the designers' capability to model every eventuality and produce a result that is optimal in all conditions.
The Automation-Human Handshake
There is no denying that technological progress is accelerating, and the role of operators is increasingly to manage systems and processes that are largely automated. Understanding the tension between human oversight and machine autonomy is crucial to designing safe relationships between humans and technology.
Additionally, there is an inherent difference between understanding and doing. Nobody would argue that an autonomous vehicle understands why it is operating the way it does. This is both an advantage and a disadvantage: a system that does not understand cannot misunderstand, but without understanding it also struggles to make the small, sensible adjustments that come naturally to a human.
Increasingly we need to be able to test for these weaknesses by simulating the interaction points between technology and humans. Where are the frictions? How do they reveal themselves? We don’t have the answers to these questions, but we need to find them.
Reflecting on the failed Boeing/NASA mission to rendezvous with the International Space Station, NASA Administrator Jim Bridenstine said, "The anomaly has to do with automation," adding that if the astronauts had been on board, "we very well may have been docking with the International Space Station tomorrow".
Understanding the time, it seems, is still a uniquely human capability.