The Complex Relationship of AI Ethics and Trust in Human–AI Teaming: Insights from Advanced Real‑World Subject Matter Experts
This paper describes a study in which 14 semi-structured interviews were conducted with US Air Force pilots on the topics of autonomous teammates, trust, and ethics. A thematic analysis revealed that the pilots see themselves as serving a parental role alongside a developing machine teammate. As parents, the pilots would feel responsible for their machine teammate's behavior, and the teammate's unethical actions may not lead to a loss of trust. However, once the pilots feel their teammate has matured, its unethical actions would likely lower trust. To repair that trust, the pilots would want to understand their teammate's reasoning, yet they are concerned about their ability to comprehend a machine's processing. Additionally, the pilots would expect their teammate to indicate that it is improving or plans to improve. The findings from this study highlight the nuanced relationship between trust and ethics, as well as a duality between infantilized teammates that cannot bear moral weight and advanced machines whose decision-making processes may be incomprehensibly complex. Future investigations should further explore this parent–child paradigm and its relation to trust development and maintenance in human–autonomy teams.