The proliferation of hybrid service agents—combinations of artificial intelligence (AI) and human employees behind a single interface—further blurs the line between humans and technology in online service encounters. While much of the current debate focuses on disclosing the nonhuman identity of AI-based technologies (e.g., chatbots), the question of whether to also disclose the involvement of human employees working behind the scenes has received little attention. We address this gap by examining how such a disclosure affects customer interactions with a hybrid service agent consisting of an AI-based chatbot and human employees. Results from a randomized field experiment and a controlled online experiment show that disclosing human involvement before or during an interaction with the hybrid service agent leads customers to adopt a more human-oriented communication style. This effect is driven by impression management concerns that are activated when customers become aware of humans working in tandem with the chatbot. The more human-oriented communication style ultimately increases employee workload because fewer customer requests can be handled automatically by the chatbot and more must be delegated to a human. These findings provide novel insights into how and why disclosing human involvement affects customer communication behavior, shed light on its negative consequences for employees working in tandem with a chatbot, and help managers understand the potential costs and benefits of providing transparency in customer–hybrid service agent interactions.

History: Karthik Kannan, Senior Editor; Jason Chan, Associate Editor.

Supplemental Material: The online appendices are available with the published article. [ABSTRACT FROM AUTHOR]
Copyright of Information Systems Research is the property of INFORMS: Institute for Operations Research & the Management Sciences and its content may not be copied or emailed to multiple sites or posted to a listserv without the copyright holder's express written permission. However, users may print, download, or email articles for individual use. This abstract may be abridged. No warranty is given about the accuracy of the copy. Users should refer to the original published version of the material for the full abstract. (Copyright applies to all Abstracts.)