Human & Machine: Fighting or Teaming?

AOS has secured good visibility in the Human Machine Teaming (HMT) domain.

from www.dw.com

The six-minute fight between man and machine.

Immediately after take-off, the captain asked three times to pull the nose of the Boeing 737 Max 8 back up, but the first officer was unable to do it alone. Their joint attempts to intervene were also futile: the on-board computer had the upper hand.

The two pilots of Ethiopian Airlines flight ET302 fought against the on-board computer of the Boeing 737 Max 8 for almost the entire six minutes the aircraft was airborne, before it crashed almost vertically into the ground near Addis Ababa, killing 157 people. This is what emerges from the preliminary investigation report of the Ethiopian civil aviation authorities. MCAS, the anti-stall computer system installed on the -8 and -9 versions of the Boeing 737 Max, thus comes under accusation again after the crash of an aircraft of the same type into the sea off Jakarta, Indonesia, in October 2018. The captain and first officer, the preliminary report says, tried in every way to bring the nose back up, even following the procedures established by Boeing after the October accident, but without success. Their joint attempts to intervene were also futile: the on-board computer had the upper hand.

The document describes the various phases in detail:

  • 8:37:34, 10 March – the control tower authorizes the Boeing 737 Max 8 for take-off.
  • 8:38:00 – the twin-engine aircraft takes off.
  • 8:38:44 – seconds after leaving the runway, “the left and right angle-of-attack sensors begin to register discordant values”, triggering the anti-stall system.
  • 8:40:05 – the first officer asks the control tower to return to Addis Ababa airport because it is impossible to keep the Boeing 737 Max at a steady altitude.
  • 8:40:23 – 8:40:31 – the on-board computer sounds three alarms warning of the risk of crashing into the ground.
  • 8:43:43 – the Boeing 737 Max 8 crashes into the ground at a speed of 926 kilometers per hour.

Effective HMT requires intelligent machines that are capable of interacting with their teammates, whether machines or humans. AOS has led an international study that addresses the integration of intelligent machines as members of collaborative teams. Integrating the latest technologies, notably Artificial Intelligence (AI) and Machine Learning (ML), and using intelligent machines in the context of HMT remains a challenge to be overcome.

The advent of HMT was predicted decades ago. Norbert Wiener (MIT Professor of Mathematics) stated in 1964 in his book God & Golem, Inc.: “One of the great future problems which we must face is that of the relation between man and machine”. In this context, the relation between man and machine involves much more than just human-machine interfaces. In fact, it means real and effective teaming, including collaboration, interdependency and even empathy between humans and machines.

Humans must know what to expect of intelligent machines, and the machines must have good ‘mental models’ of human behaviours. Otherwise, HMT becomes ineffective due to a lack of trust on the part of the humans and a lack of appropriate (correct) responses on the part of the machines.

The core objectives of the study were:

  • Analysis of the characteristics of effective HMT
  • Definition and recommendation of criteria for evaluating the fitness of human-machine teams
  • Identification of the necessary skills and training of the workforce for effective HMT

An analysis was conducted and ‘lessons learnt’ were derived from a number of past real-life incidents relevant to HMT. Mitigation strategies were then put forward to minimise the negative consequences of such incidents. A theoretical framework was developed to understand what effective HMT means: first the desirable features of HMT were defined, then the individual human and machine characteristics relevant for HMT, and the characteristics of the human-machine team as a whole, were identified.

Team fitness for HMT is defined in much the same way as for a sports team. The skills and training of the workforce needed for effective HMT were also investigated; in this context, sixteen skills and the corresponding training requirements were identified. Training needs to be monitored by a third party, most probably another intelligent machine acting as an ‘observer’ that evaluates the reliability and proper responses of the machines and the performance of the humans.

The main conclusions of the study were:

The effectiveness of the team is affected not only by the performance of individual team members, but also by the quality of interaction, communication and mutual intelligibility between interdependent team members.

Choosing technologies and engineering approaches that provide enough transparency and intelligibility to the humans is a challenge. The use of data-driven Artificial Intelligence technologies could lead to results that are not easily comprehensible to their human counterparts. This affects transparency and, ultimately, human trust in the machines.
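
One common way to give humans at least a partial window into a data-driven model, sketched below purely as an illustration (the model, data and function names are invented and not taken from the study), is to report how much each input feature contributes to the model's output, for example with a simple permutation-importance check.

```python
# Illustrative only: a permutation-importance check that reports how much
# each input feature contributes to a data-driven model's output, as one
# way to give human teammates an inspectable view of the model's behaviour.
# The model and data below are synthetic placeholders.
import numpy as np

def permutation_importance(model, X, y, metric):
    """Increase in error when each feature is shuffled (higher = more important)."""
    baseline = metric(y, model(X))
    rng = np.random.default_rng(0)
    importances = []
    for j in range(X.shape[1]):
        X_perm = X.copy()
        X_perm[:, j] = rng.permutation(X_perm[:, j])  # destroy feature j's information
        importances.append(metric(y, model(X_perm)) - baseline)
    return importances

# Synthetic model that truly depends only on the first feature.
X = np.random.default_rng(1).normal(size=(500, 3))
model = lambda X: 2.0 * X[:, 0]
y = model(X)
mse = lambda y_true, y_pred: float(np.mean((y_true - y_pred) ** 2))
print(permutation_importance(model, X, y, mse))  # large value for feature 0 only
```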

Intelligent machines have a propensity to develop biases, especially in the case of AI and Machine Learning. A third party (observer) is deemed necessary to help detect bias and other malfunctions by monitoring the input data and performing a neutral evaluation of the reliability and responses of the machines, as well as of the performance of the humans.
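
As a rough picture of what such an observer could do in practice, the following minimal sketch (class name, thresholds and data are all hypothetical) watches the statistics of the data flowing into a machine and raises an alert when they drift away from the data the machine was validated on, a typical precursor of bias.

```python
# Hypothetical sketch of a third-party 'observer' that monitors the input
# data stream of an intelligent machine and flags statistical drift that
# could indicate emerging bias. Names and thresholds are illustrative.
import numpy as np

class InputDriftObserver:
    def __init__(self, reference_sample, z_threshold=3.0):
        # Statistics of the data the machine was validated on.
        self.ref_mean = np.mean(reference_sample, axis=0)
        self.ref_std = np.std(reference_sample, axis=0) + 1e-9
        self.z_threshold = z_threshold

    def check(self, live_batch):
        """Return the indices of features whose live mean drifts beyond the threshold."""
        live_mean = np.mean(live_batch, axis=0)
        z_scores = np.abs(live_mean - self.ref_mean) / self.ref_std
        return np.where(z_scores > self.z_threshold)[0]

# Usage: raise an alert (and lower trust in the machine's outputs)
# whenever the observer reports drifting features.
reference = np.random.default_rng(0).normal(0.0, 1.0, size=(1000, 4))
observer = InputDriftObserver(reference)
live = np.random.default_rng(1).normal(0.0, 1.0, size=(200, 4))
live[:, 2] += 5.0  # simulate a biased sensor feeding feature 2
drifting = observer.check(live)
if drifting.size > 0:
    print(f"Observer alert: possible bias in features {drifting.tolist()}")
```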

Humans have a tendency towards over-reliance or under-reliance on intelligent machines. Both may have unwanted consequences for the performance of the human-machine team. Techniques that help humans retain high levels of skill and situation awareness, and decide when and how much control should be kept or handed over to the machines, are important in the context of HMT.
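
One simple way to keep reliance calibrated, sketched below with assumed thresholds and invented function names, is to let the machine act autonomously only when its self-reported confidence is high and the human is overloaded, and to keep or return control to the human otherwise.

```python
# Illustrative sketch of calibrated reliance: the machine acts autonomously
# only when its confidence is high and the human is overloaded; otherwise
# control stays with the human. Thresholds and names are hypothetical.

def decide_control(machine_confidence: float,
                   human_workload: float,
                   act_threshold: float = 0.9,
                   advise_threshold: float = 0.6) -> str:
    """Return who should hold control for the next action."""
    if machine_confidence >= act_threshold and human_workload > 0.8:
        return "machine_acts"       # human overloaded, machine confident
    if machine_confidence >= advise_threshold:
        return "machine_advises"    # machine proposes, human decides
    return "human_acts"             # low confidence: keep the human in control

print(decide_control(machine_confidence=0.95, human_workload=0.9))  # machine_acts
print(decide_control(machine_confidence=0.70, human_workload=0.3))  # machine_advises
print(decide_control(machine_confidence=0.40, human_workload=0.3))  # human_acts
```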

Training humans to operate under quickly changing levels of machine autonomy is necessary. Besides the human ability to recognize changes in a machine's level of autonomy and to adapt seamlessly, the technical ability to switch between levels of autonomy is also relevant. A slow ramping up of automation, supported by appropriate Verification & Validation (V&V) processes, is desirable, with progressive moves to the next higher level of autonomy as well as the possibility to fall back to a lower one. Detecting malfunctions and unusual situations, and making correct decisions in such circumstances, is challenging for humans.
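
The slow ramping up could be made operational, for example, as a small state machine in which the team may only step one autonomy level up once the V&V check for that level passes, while stepping back down is always allowed. The sketch below is a hedged illustration of this idea; the level names and checks are assumptions, not part of the study.

```python
# Hypothetical sketch of gradual autonomy ramp-up with V&V gating.
# A team may only move one level up after passing the V&V check for that
# level, and can always fall back to a lower level (e.g. on a malfunction).

AUTONOMY_LEVELS = ["manual", "advisory", "supervised", "fully_autonomous"]

class AutonomyManager:
    def __init__(self, vv_checks):
        # vv_checks maps a target level to a callable returning True if
        # Verification & Validation for that level has passed.
        self.level_index = 0
        self.vv_checks = vv_checks

    @property
    def level(self):
        return AUTONOMY_LEVELS[self.level_index]

    def ramp_up(self):
        """Move one level up, but only if V&V for the next level passes."""
        if self.level_index + 1 < len(AUTONOMY_LEVELS):
            target = AUTONOMY_LEVELS[self.level_index + 1]
            if self.vv_checks.get(target, lambda: False)():
                self.level_index += 1
        return self.level

    def fall_back(self):
        """Always allowed: drop to the next lower level of autonomy."""
        self.level_index = max(0, self.level_index - 1)
        return self.level

# Usage: V&V has passed for 'advisory' only, so the team cannot jump further.
manager = AutonomyManager({"advisory": lambda: True, "supervised": lambda: False})
print(manager.ramp_up())    # advisory
print(manager.ramp_up())    # still advisory (supervised V&V not passed)
print(manager.fall_back())  # manual
```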

Interdependency between team members is a pivotal characteristic of HMT. Hence, the quality of communication between humans and machines, the development of shared situation awareness and shared mental models are essential. Current efforts to improve human-machine communication and joint decision-making within the team include the concept of a ‘debater’ or that of a ‘scenario planning advisor’ that helps to identify extreme, yet possible, risks and opportunities not usually considered in normal operations. The introduction of such a ‘debater’ could have avoided the recent tragic Boeing 737 Max 8 accidents.
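
Such a ‘debater’ could be as simple as an independent second opinion that must agree with the primary system before an automatic corrective action is executed. The sketch below illustrates the pattern with invented names and tolerances; it is not a description of MCAS or of any real system.

```python
# Illustrative 'debater' pattern: an independent second estimate must agree
# with the primary system before an automatic corrective action is taken.
# The tolerance, names and escalation message are hypothetical.

def debated_intervention(primary_estimate: float,
                         independent_estimate: float,
                         tolerance: float = 5.0):
    """Return the action to take, or escalate the decision to the human team members."""
    if abs(primary_estimate - independent_estimate) <= tolerance:
        return {"action": "apply_correction", "basis": primary_estimate}
    # Disagreement: do not act automatically, hand the decision to the humans.
    return {"action": "escalate_to_crew",
            "reason": f"estimates disagree: {primary_estimate} vs {independent_estimate}"}

# Agreement leads to an automatic correction; disagreement leads to escalation.
print(debated_intervention(12.0, 13.5))
print(debated_intervention(74.5, 15.3))
```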

The study highlighted the existence of a relationship between human values, culture and ethics and the initial design of the machines. The adoption of an ethical framework throughout the design process of intelligent machines and raising awareness and responsibility of human AI designers were recommended.

The distribution of roles and tasks between the HMT components (machines and humans), as well as the extent to which each team member should have control in a given circumstance or scenario, is still a problem to be solved. It is equally important to ensure that the roles of all team members are clear in all situations. The concept of ‘management by exception’ should be introduced in the context of HMT.
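
‘Management by exception’ can be pictured as a loop in which the machine resolves routine cases on its own and refers only the exceptional ones to the human; the following minimal sketch (the exception criterion and all names are hypothetical) illustrates that division of roles.

```python
# Minimal, hypothetical sketch of 'management by exception' in an HMT:
# the machine resolves routine cases itself and refers only exceptional
# cases to the human team member.

def is_exception(case) -> bool:
    # Illustrative criterion: anything outside the machine's validated
    # operating envelope counts as an exception.
    return not (0.0 <= case["value"] <= 100.0)

def handle_cases(cases, machine_handler, human_handler):
    results = []
    for case in cases:
        if is_exception(case):
            results.append(human_handler(case))    # human decides
        else:
            results.append(machine_handler(case))  # machine decides
    return results

results = handle_cases(
    [{"value": 42.0}, {"value": 250.0}],
    machine_handler=lambda c: f"machine handled {c['value']}",
    human_handler=lambda c: f"referred to human: {c['value']}",
)
print(results)
```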

Recurrent incidents point towards the importance of further investigation of HMT in order to bridge gaps between design, testing, training and operationalisation of human-machine teams. Further work should focus on the independent verification and validation of human machine teams from the perspective of team design, team evaluation and team training.

by Francisco Medeiros (AOS)

 
