Abstract
Introduction
Over the past decade, artificial intelligence (AI) has seen growing adoption across hospitality and tourism (Dogru et al., 2024). Within hotel revenue management (RM), AI is embedded in revenue management systems (RMSs), where machine learning (ML) algorithms analyse large datasets to forecast demand and make economic decisions—pricing and revenue optimisation (Ampountolas and Legg, 2025; Nam et al., 2021). As RMSs become AI-driven, they are increasingly viewed as black boxes—opaque systems whose logic is difficult to interpret, even for experts, owing to information asymmetry—leading to mistrust and sub-optimal decisions (Rai, 2020; Yeoman, 2019). This has created widespread recognition of the need for explainable systems (Colaner, 2022). At the 2019 DuettoX RMS user conference, Duetto’s VP of Product Management echoed this view, stating: “We need to make sure that you trust the system, that you are empowered to understand what the system is doing and why it’s doing what it is” (Tams, 2019: para. 4).
A lack of such empowerment has resulted in information asymmetry between users and RMSs, leading to costly overrides and suboptimal decision-making (Mohammed and Denizci Guillet, 2025a). Consequently, there is increasing demand for transparency and interest in explainable AI (XAI)—a class of interactive AI systems designed to clarify their underlying logic, reasoning process, inputs and outputs (Arrieta et al., 2020; Gerlings et al., 2022; Van Lent et al., 2004). Although real-world applications of XAI remain limited, its potential has been demonstrated across sectors such as healthcare, finance, and autonomous vehicles (Weber et al., 2024; Gerlings et al., 2022), with calls to tailor its design to sector-specific user needs (Kim et al., 2024). In hospitality and tourism, XAI research is still nascent, with emerging studies focusing on medical tourism (Kose and Kose, 2024), destination selection (Lin and Chen, 2022), Airbnb pricing (Sharma et al., 2021), and festival satisfaction (Oh and Lee, 2021).
Despite hotel RMSs’ reliance on ML-based black-box systems, their suitability for XAI integration remains unexamined, presenting the following key research agenda. First, whether XAI can transform opaque RMSs into transparent, trusted systems. Second, what kinds of explanations RMS users need to understand system logic. Third, how these explanations should be delivered to enhance understanding. Fourth, what factors shape RM professionals’ and hotels’ readiness to adopt XAI. To fulfil this agenda, this study aims to: (1) Explore hotel RM professionals’ perceptions of how XAI can improve the transparency and usability of RMSs, and the potential economic impact on pricing accuracy and revenue performance. (2) Identify the types of explanations RMS users need to make informed pricing decisions, build trust in system outputs, and reduce economically costly system overrides. (3) Determine explanation delivery techniques (e.g. formats and styles) that improve comprehension and decision speed, with a focus on enhancing operational efficiency and labour productivity. (4) Examine the technological, organizational, and environmental factors affecting hotel firms’ readiness to adopt XAI-enabled RMSs, and the implications for technology return on investment (ROI) and competitive advantage.
To achieve the study objectives, an exploratory design was used, involving in-depth interviews and guided by the Task-Technology Fit (TTF) and the Technology-Organisation-Environment (TOE) frameworks. The TTF analysis found that XAI’s ability to provide explanations aligned with users’ needs for “what,” “how,” “why,” “why not,” “what if,” “how-to,” and “what else” questions. However, users’ awareness of these capabilities was low. Readiness factors were mapped to TOE components and organised into a holistic framework showing what can, should, and must be explained, offering practical guidance for fostering XAI trust, transparency, and adoption in hotel RMS.
Literature review
Conceptual underpinnings
Black-box revenue management systems
Since RM was adapted from airlines to hotels in the 1970s and 1980s, technology has played a central role in supporting economic decision-making—market analysis, demand forecasting, price-inventory optimisation, and performance evaluation (Denizci Guillet and Mohammed, 2015). Over time, this led to the development of RMSs—initially simple software—now transformed into advanced, data-driven tools powered by algorithms, machine learning, and predictive analytics (Nam et al., 2021). As complexity increased, RMSs began operating as “black boxes,” generating outputs that lack transparency and clarity. This has led to mistrust (Yeoman, 2019), frequent overrides (Mohammed and Denizci Guillet, 2025b), and algorithmic system aversion (Mohseni et al., 2021). In response, industry leaders like Richard Ratliff of Sabre Labs have advocated for more transparent systems, urging the shift from black boxes to “glass boxes” (Walson, 2022: para 20).
Glass boxing RMSs and XAI
A glass (or white or grey) box is a term used to describe a transparent system, or a system that is not a black box (Rai, 2020), and the process of transforming a black box into a glass box is commonly called glass boxing (Mohseni et al., 2021). Generally, a system is deemed a black box when its construction, internal functions, logic, and parameters are opaque, making it difficult to understand and explain (Zhang et al., 2025). This opacity has led to interest in XAI as a possible solution to enhance transparency and explainability (Mohseni et al., 2021). Although the notion of explainable expert systems dates to the 1970s (Miller, 2019), the term explainable AI was coined in 2004 by Van Lent et al. to describe AI systems that can explain their rationale to a human user, characterise their strengths and weaknesses, and convey an understanding of how they will behave in the future (Gunning and Aha, 2019).
Explanations
In the XAI literature, Arrieta et al. (2020) state that, “given a certain audience, an explanation refers to the details and reasons a model gives to make its functioning clear or easy to understand” (p. 85). Based on the interpretation scale, Mohseni et al. (2021) distinguish two types of explanations: global versus local. Global (or model) explanations thoroughly describe how a model works, while local (or instance) explanations elucidate the reasoning behind a particular prediction (Mohseni et al., 2021; Doshi-Velez et al., 2019). Miller (2019) categorises explanations based on the type of questions they answer: “what” (associative reasoning), “how” (interventionist reasoning), and “why” (counterfactual reasoning). Expanding on Miller’s (2019) categorisation for XAI, Mohseni et al. (2021) generalise explanations into six question-driven types: how, why, why-not, what if, how-to, and what else, which are utilised in this study.
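To make the global/local distinction concrete, the following sketch (ours, not drawn from the cited works; the linear room-rate model, its weights, and the inputs are all invented for illustration) contrasts a model-level explanation with an instance-level one:

```python
# Illustrative sketch: global vs. local explanations for a hypothetical
# linear room-rate model. All coefficients and inputs are invented.

BASELINE_RATE = 120.0  # hypothetical base rate (USD)

# Hypothetical learned weights: rate = baseline + sum(weight * feature)
WEIGHTS = {
    "occupancy_pct": 0.9,     # each point of forecast occupancy adds $0.90
    "days_to_arrival": -0.5,  # closer arrival dates lower the rate here
    "event_nearby": 25.0,     # a city-wide event adds a flat premium
}

def global_explanation():
    """Model-level ('global') view: how the model weighs each input overall."""
    return dict(WEIGHTS)

def local_explanation(features):
    """Instance-level ('local') view: why THIS rate was recommended,
    expressed as per-feature contributions to the output."""
    contributions = {name: WEIGHTS[name] * value for name, value in features.items()}
    rate = BASELINE_RATE + sum(contributions.values())
    return rate, contributions

rate, contrib = local_explanation(
    {"occupancy_pct": 80, "days_to_arrival": 10, "event_nearby": 1}
)
print(rate)     # 120 + 72 - 5 + 25 = 212.0
print(contrib)  # {'occupancy_pct': 72.0, 'days_to_arrival': -5.0, 'event_nearby': 25.0}
```

A real RMS model would be far more complex, but the same split applies: the weight table answers "how does the model work in general?", while the per-instance contributions answer "why this price for this day?".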
Theoretical frameworks
This study is grounded in information asymmetry theory, positing that XAI can minimize disparities in knowledge between AI systems and hotel RM professionals and maximize economic benefits, efficiency, and performance (Narangoda et al., 2025). To address this, the Task-Technology Fit (TTF) framework is employed to evaluate XAI’s potential to meet users’ explanation needs and reduce asymmetry, while the Technology-Organisation-Environment (TOE) framework is applied to organise the readiness factors for XAI integration.
The TTF framework asserts that the adoption and effective use of technology are contingent upon the alignment between technological capabilities, task requirements, and user abilities (Goodhue, 1995; Goodhue and Thompson, 1995). It comprises five core constructs: task characteristics, technology characteristics, task-technology fit, technology utilisation, and performance impact. Central to the framework is the notion that perceived congruence between task and technology characteristics (i.e., task-technology fit) drives utilisation and performance outcomes. Widely applied across various domains (Marikyan and Papagiannidis, 2023), we propose that the capacity of XAI-enabled RMSs to meet stakeholders’ explanation needs (tasks) will promote their adoption and usage intentions.
Technology readiness (TR) reflects the predisposition of individuals and organisations to embrace new technologies. At the individual level, Parasuraman (2000) identifies four key dimensions: optimism, innovativeness, discomfort, and insecurity, each capturing varying attitudes toward technological adoption. At the organisational level, TR is commonly examined through the TOE framework, which considers technological context, organisational characteristics, and environmental influences. Recent applications of TOE span domains such as robotic process automation (Bagheri and Van de Wetering, 2024), blockchain in hospitality (Van Huy et al., 2024), and XAI in business settings (Darvish et al., 2024). This study integrates both TR and TOE frameworks to provide a comprehensive perspective, recognising that individuals operate within organisational contexts.
Empirical studies
Empirical XAI research is growing, with a focus on model development and refinement, prioritising system developers’ needs over end-users (Giudici and Raffinetti, 2021). Addressing this, Aslam (2024) and Kaplan et al. (2024) introduced user-centric frameworks to enhance trust and interpretability. While application-based studies remain limited, emerging research in sectors like healthcare (Gerlings et al., 2022), finance (Weber et al., 2024) and autonomous driving (Alatabani and Saeed, 2025) highlight XAI’s potential to enhance transparency and trust, calling for further exploration of adoption drivers in other sectors (Darvish et al., 2024).
Despite growing interest in XAI, its application in hospitality and tourism remains limited. Existing studies focus on areas like medical tourism (Kose and Kose, 2024), destination choice (Lin and Chen, 2022), festival satisfaction (Oh and Lee, 2021), and Airbnb pricing (Panahandeh et al., 2025; Sharma et al., 2021). Meanwhile, hotel RM, a promising domain, has yet to be explored. This study focuses on hotel RM to address four key gaps: the alignment of XAI capabilities with RM users’ needs, the types of explanations required by RM professionals, suitable techniques for delivering understandable explanations, and factors influencing their readiness to adopt XAI.
Methodology
This study employed a qualitative approach to explore the integration of XAI into hotel RMSs. Given the emerging nature of this research area, a qualitative method is appropriate because it allows for in-depth insights from experienced RM professionals.
The study design and participants
Semi-structured interviews were conducted with a diverse group of experts, including revenue executives across property, cluster, and corporate levels, who shared operational challenges with RMSs; hotel technology experts, who provided insights into technical issues and XAI’s potential; and RMS providers, who offered strategic views on XAI integration. This approach enabled a rich, multi-stakeholder understanding of XAI’s relevance and applicability in the field. The semi-structured format facilitated guided yet flexible conversations, enabling the exploration of emergent themes and insights. Interview questions were informed by existing literature and supplemented with items related to participants’ backgrounds and organisational structures to support contextual analysis. However, due to limited representation of certain sub-samples and the fluidity of contextual factors, such as cross-geographical experience and exposure to various AI technologies, analytical distinctions were primarily drawn along binary, non-overlapping categories such as end-user versus developer.
Participant profiles.
Data Collection protocol and analysis
The semi-structured interviews were conducted via Microsoft Teams, each lasting 30–45 minutes. Sessions were recorded and automatically transcribed to ensure accuracy. Before the interviews, participants received standardized definitions of key concepts to ensure consistent understanding. Smart Systems referred to rule-based tools (e.g., pricing algorithms using occupancy thresholds). AI referred to systems capable of learning and improving without reprogramming (e.g., neural networks). GenAI referred to AI that produces content (e.g., ChatGPT). XAI referred to AI that clarifies its logic and decision-making processes (e.g., IBM watsonx.governance, Google DeepMind).
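The distinction between a rule-based "smart" tool and a learning system can be illustrated with a minimal sketch (a hypothetical example of the kind of occupancy-threshold pricing mentioned above; the thresholds and rates are invented). Unlike an ML model, every rule here is fixed and fully inspectable, which is why such tools were never "black boxes":

```python
# Hypothetical rule-based pricing tool using fixed occupancy thresholds.
# No learning occurs: behaviour changes only if a developer edits the rules.

def rule_based_rate(base_rate, occupancy_pct):
    """Adjust the base rate by hand-written occupancy thresholds."""
    if occupancy_pct >= 90:
        return base_rate * 1.25   # near sell-out: 25% premium
    if occupancy_pct >= 70:
        return base_rate * 1.10   # strong demand: 10% premium
    if occupancy_pct < 40:
        return base_rate * 0.90   # weak demand: 10% discount
    return base_rate              # otherwise leave the rate unchanged

print(rule_based_rate(200, 95))  # 250.0
print(rule_based_rate(200, 30))  # 180.0
```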
The interviews began with general questions about AI use in the workplace, followed by RMS and XAI integration. Questions were tailored: revenue executives discussed awareness, explanation needs, and adoption readiness; technology experts addressed design and integration challenges. Participants assessed whether current RMSs are smart or intelligent, identified potential enhancements through XAI, and suggested features to improve transparency and usability. Ethical concerns, trust, and reliability were also discussed to gauge readiness for adoption. The interview guide is presented in the appendix as supplemental material.
Both researchers conducted the interviews using a consistent protocol. After 21 interviews, thematic saturation was reached. Transcripts were analysed using thematic content analysis (McKibben et al., 2020). Initial coding was performed by one author (Saldaña, 2013), with themes organized around research questions and refined collaboratively. Participant validation supported interpretation. Trustworthiness was ensured through confirmability, transferability, and reflexivity (Guba, 1990). Diverse perspectives were examined to reduce bias. Participant checking and triangulation (Krefting, 1991) confirmed the interpretations. Thick descriptions enhanced contextual relevance and applicability.
Findings
In line with the research questions, the thematic analysis revealed the following key findings: (a) RM professionals’ desires to understand black-box RMSs and XAI’s capabilities; (b) question-driven types of explanation needs of hotel RMS users; (c) explanation techniques and strategies for XAI-enabled hotel RMSs; and (d) factors influencing RM professionals’ and hotels’ readiness to integrate XAI into RMSs.
RM professionals’ desires to understand black-box RMSs and XAI’s capabilities
In this study, RM professionals refer to two primary stakeholders of RMSs: end-users (RM executives who use RMSs for their work) and system providers or consultants (tech-savvy data or computer scientists offering RMS solutions or consultancies). Using the TTF, we assessed XAI-enabled RMSs’ capabilities to meet the explanation needs of the two stakeholders. The findings, addressing the first research question, are presented in these two sub-themes: (1) RM professionals’ desires to understand black-box RMSs, and (2) XAI’s capabilities and RM professionals’ beliefs and doubts.
RM professionals’ desires to understand black-box RMSs – what should be explained
RM professionals expressed varying levels of desire for explanations about black-box RMSs, with clear differences between end-users and system providers. Most end-users wanted specific, situational explanations, such as demand forecasts or price recommendations for a certain day or period, especially when system outputs deviated from expectations. One participant said, “I want to know why the system increases the room night rates for specific months and not the entire period.” (P18). Another added, “I don’t want to know the algorithm or the calculation behind the whole system. Instead, what I want is very simple explanations of the algorithm’s results and input data.” (P12).
Some end-users also wanted a broader understanding of how the system functions overall. As one explained, “I want to understand more about how it [the system] generates its recommendations and the steps it takes to generate them.” (P14). Another echoed, “I want to understand how the system works and what it generates.” (P2). In contrast, system providers preferred general explanations to identify exceptions, validate outputs, and debug. One noted, “I need explanations for exceptional patterns so that I can investigate them to see if the system is making the right decision or not.” (P27). Another added, “I wish the system can spot outliers and alert me where there are data concerns for interventions.” (P7).
Despite these differing expectations, there was agreement on the need for improved understanding. However, a tension exists between user demands for clarity and provider concerns about revealing proprietary information. One provider commented, “We need to understand the amount of detail that people want, need, and should get because those three things aren’t the same.” (P19). As a result, end-users often felt their questions were met with insufficient answers, despite acknowledging, “Nobody will be able to give you 100% of the information because of proprietary reasons.” (P1).
Beyond proprietary issues, competition in the hotel industry also discouraged transparency. One end-user observed, “The competitive nature of the hotel industry makes full transparency a double-edged sword; it can promote deeper understanding and collective learning, just as it can compromise competition and encourage price fixing.” (P2). System providers justified limited transparency as necessary for operational protection. One explained, “Sometimes, we do see that the lack of transparency is a protection for the hotels’ operations.” (P6). These statements suggest that the lack of transparency in RMS could be deliberate.
XAI’s capabilities and professionals’ beliefs
XAI is known to offer both global (model-level) and local (output-specific) explanations (Mohseni et al., 2021), as well as static or interactive formats (Arya et al., 2019). These capabilities suggest that XAI can meet RM professionals’ task requirements. However, whether this alignment will support adoption remains uncertain. Two insights emerged. First, many users lacked awareness of XAI’s capabilities and questioned its effectiveness. One asked, “If the human agent cannot offer satisfactory explanations, how can we have a technology capable of doing it right?” (P1).
Second, system providers doubted whether end-users could understand the complexity of XAI’s explanations. One stated, “Imagine if you have a prediction with thousands of variables, how can XAI explain that to a user to grasp fully?” (P7). Another added, “I don’t think revenue management systems’ users are going to understand how a machine-learning system comes up with its recommendations even if the system could explain itself.” (P20). An end-user agreed: “Being able to explain is one thing, and understanding is another.” (P26).
Question-driven types of explanation needs by RMS users
To address the second research question, the analysis aimed to identify the types of explanations RM executives require from their current RMSs to inform the design of XAI-enabled RMSs tailored to the hotel industry. Respondents were asked to reflect on their experience with RMSs and give examples of the typical questions they wished to have explained. We then applied the question types to deductively classify the examples into seven types of question-driven explanation needs, aligned with the XAI question-driven explanation types proposed by Mohseni et al. (2021): (1) what, (2) how, (3) why, (4) why-not, (5) what-if, (6) how-to, and (7) what-else.
Examples of each question-driven type of explanation are outlined in Figure 1. Notably, explanations answering what, how, why, and what-if questions are the most needed. Among other interpretations, this could imply that current RMSs, or the key account managers of these systems, are unable to offer satisfactory answers to these questions. The range of questions also indicates that users’ explanation needs cover both RMS inputs and outputs.
Figure 1. Question-driven types of explanations required by RMS users.
Explanation techniques and strategies for XAI-enabled hotel RMSs
The third research question examined users’ preferred formats for explanations, identifying techniques to tailor XAI-enabled RMSs for the hotel industry. Participants mentioned textual (words and numbers), visual (charts and graphs), and verbal (audio or speech) formats, highlighting the value of multimodal interfaces (Arya et al., 2019). Their preferences revealed three key strategies for effective XAI explanations: simplification and brevity, contextualisation and personalisation, and interactive conversation and gamification.
Simplification and brevity
Participants stressed the need for concise, easily digestible explanations. One said, “I prefer it in a bullet-point summary of no more than 250 to 300 words” (P3). Another quizzed, “If I am a third grader and it has all these numbers and variables, will I be able to understand it without being overwhelmed?” (P20).
Contextualisation and personalisation
Participants found current RMS explanations too generic and static. They called for context-specific and user-adapted outputs. As one put it, “If I want to drill down further, I am unable to do so because the answers are standard and not tailored to my needs” (P12). Another added, “We get generic explanations from them. XAI’s explanations should be contextualised, providing a little bit more specifics on how it works” (P23).
Interactive conversation and gamification
Participants favoured conversational, human-like explanations delivered in natural language. One explained, “If the XAI can explain to me in a natural language using a human-like voice, that will be great” (P13). Another supported a “more natural language way” (P8). Gamified elements were also suggested to boost engagement. One participant proposed, “One way to achieve an engaging explanation is to gamify it so that a little endorphin is kicked in when people interact with the system” (P17).
Factors influencing RM professionals’ and hotels’ readiness to integrate XAI into RMSs
Readiness factors of XAI integration into hotel RMS.
Technology
The identified technological factors were grouped into four thematic domains: compatibility with existing systems and adaptability, availability and access to quality data, vulnerability and risks, and dependability of the new system. Each domain had two sub-themes, with mentions ranging from 2 to 11.
Organisation
Participants identified four organisational factors affecting XAI adoption: size and resources (8), culture of adoption and change (4), ownership structure and management (9), and RM executives’ readiness (7).
At the operational and individual level, views ranged from fear to optimism. One respondent reflected, “Whenever I hear sentiments like that, they sound a bit like fear mongering… a bit like, people are nervous about their jobs being taken away… However, as hotel revenue managers, we can be XAI enhanced, not replaced.” (P17). Generational differences also shaped adoption views. As one participant put it, “Different generations have different viewpoints on whether technology is good or bad. So, these perceptions will influence the adoption.” (P15).
Environment
The analysis revealed three thematic domains of environmental factors: regulatory lapses and deficiencies, competitive pressures, and green concerns and the planet. Regulatory lapses and deficiencies, with three sub-themes (ethics, regulatory silos, and responsibility), was mentioned most frequently. On the sub-theme of ethics, one respondent noted, “The unregulated use of IP [intellectual property] to train the machine and generate content is a serious concern.” (P17). Another lamented the lack of global Internet regulations and mechanisms for seeking protection, stating, “… there is no global Internet regulation. The same goes for AI, which is of great concern.” (P21). These perspectives highlight the need for stricter regulations, with governments taking the lead.
Potential benefits of integrating XAI into RMS
Given the nascent state of XAI, which is evolving from research-focused to real-world applications, we explored participants’ perceptions of the potential benefits of integrating it into RMSs to provide insights into the predictions of TTF theory. Most participants believed that, if tailored to the RMS context, XAI can address the black-box syndrome. They highlighted six thematic areas of potential benefits: transparency and understanding, human-system interaction, stakeholder trust, user confidence, decision quality and efficiency, and adoption and use. These potential benefits are discussed as follows.
Enhancing transparency and understanding
The participants agreed that integrating XAI into RMSs can enhance transparency and contribute to a better understanding of the systems’ outputs and recommendations. One participant expressed this benefit as follows: “The enhanced transparency will allow RM executives to understand why certain prices are recommended, enabling them to effectively communicate these prices to customers and other stakeholders” (P9).
Promoting human-system interaction
Given the capabilities of XAI to explain its outputs and how it works, the participants believed that when integrated into RMS, it could promote human-system interaction and engagement to reduce overrides and biases. In support of this, one participant remarked, “It will make users feel more involved and less intimidated by the technology.” (P17).
Building stakeholder trust
The participants argued that having an interactive XAI in RMS can foster stakeholder trust since the interaction will align users’ expectations with systems’ output, which is crucial for adoption and use. This point was made by two participants as follows: “Knowing how it works will make me trust the system” (P8) and “In order to trust the system, I need to know how it works” (P16).
Increasing user confidence
Considering the human-centredness of XAI, the participants articulated that integrating it into RMS can increase user confidence. By explaining questions like why-not, users can obtain counterfactual evidence to boost their confidence in the system’s recommendations.
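The "why-not" idea can be illustrated with a toy counterfactual search (the pricing rule and all numbers are entirely hypothetical, not drawn from any RMS): when a user disagrees with a recommendation, the system reports what input change would have produced the price the user expected:

```python
# Hypothetical sketch of a "why-not" (counterfactual) explanation: search
# for the occupancy level at which the system would have recommended the
# user's expected rate. The recommendation rule is invented for illustration.

def recommended_rate(occupancy_pct, base_rate=150.0):
    """Toy recommendation: the rate scales with forecast occupancy."""
    return base_rate * (1 + occupancy_pct / 200)

def why_not(expected_rate, occupancy_pct, step=1):
    """Find the nearest occupancy value whose recommendation reaches the
    user's expected rate, answering 'why not this price?'."""
    for delta in range(0, 101, step):
        for candidate in (occupancy_pct + delta, occupancy_pct - delta):
            if 0 <= candidate <= 100 and recommended_rate(candidate) >= expected_rate:
                return candidate
    return None  # no occupancy level would justify the expected rate

# The system recommends $195 at 60% occupancy; the user expected $225.
print(recommended_rate(60))  # 195.0
print(why_not(225.0, 60))    # 100 -> "only at 100% forecast occupancy"
```

The counterfactual answer ("your price would require a sell-out forecast") gives the user concrete evidence for why the lower recommendation stands, which is the confidence-building mechanism participants described.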
Improving decision quality and efficiency
The participants noted that since XAI can provide insights to help developers fine-tune its results or debug any operational failures, the systems’ output and decision quality can improve. This conviction was expressed by a participant, stating, “It takes a lot of time to figure out small issues or problems, sometimes, an hour to find an issue with a rate. So, if I have a tool that could help me to understand where the issue is, then I think I will be more efficient.” (P13).
Growing adoption and use
The participants argued that when the integration of XAI improves decision quality and performance, large hotel companies and RMS providers are bound to increase their R&D and investment in XAI-enabled RMSs. This could lead to widespread adoption and use of these technologies. One respondent noted, “People need to see the wins before they decide. A series of small wins can make a huge difference.” (P26).
Discussion
Developing and adopting XAI across organisations requires a deeper understanding of contextual factors and stakeholder perspectives. Yet, few studies have examined how XAI can be integrated into business processes (Bagheri and Van de Wetering, 2024). This study focuses on hotel RM, identifying themes from interviews that clarify XAI’s applicability in this setting. These themes are aligned with the TTF theory (Goodhue, 1995; Goodhue and Thompson, 1995) and TOE framework, capturing stakeholder views on XAI capabilities, beliefs, desires to understand black-box RMSs, explanation needs, and preferred techniques. Stakeholders widely agreed on the need for transparency and consistently expressed a desire for explanations, echoing calls in the literature to transform RMSs into glass boxes (Tams, 2019; Walson, 2022). These benefits are well documented (Colaner, 2022; Gunning and Aha, 2019).
In terms of explanation needs, perspectives varied. Tech-savvy providers preferred global (model-level) explanations, while end-users prioritised local (instance-specific) ones to justify outputs. This aligns with Doshi-Velez et al.’s (2019) view that global explanations support scientific understanding and bias detection, while local explanations are better for output justification. Additionally, concerns were raised about users’ ability to grasp complex insights—even when XAI can explain them. These differences underscore the need to tailor XAI to specific use contexts and stakeholder abilities.
While earlier studies on explanation needs focused mostly on developers, recent work has shifted toward user-centric models, expanding the XAI question bank to enhance user experience (Aslam, 2024; Sipos et al., 2023). Following this trend, our study applied a question-driven approach to extend the XAI question bank to hotel RMSs. Our findings also suggest user-focused explanation techniques and highlight the importance of multimodal interfaces. Lastly, the readiness factors identified reinforce insights from TR and TOE literature, pointing to technical and non-technical adoption barriers. Addressing these barriers, beyond improving model accuracy and explanation quality, is critical for successful XAI adoption.
Economic implications
Given the findings of this study, several economic implications arise. Notably, integrating XAI into RMSs can boost hotels’ revenues and profitability by enhancing demand forecasting accuracy, pricing precision and consistency, and inventory optimisation (Gómez-Talal et al., 2025). With XAI-enabled RMSs, input-output relationships can be better understood, information asymmetry can be minimized, and the frequency of costly overrides due to misjudgements or errors can be reduced (Mohammed and Denizci Guillet, 2025a). The integration can also increase trust and acceptance, reduce manual analysis, support labour efficiency, and create competitive advantage and market differentiation, which may attract more guests and increase market share.
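As a back-of-envelope illustration of these revenue mechanics (all figures are invented and the override-recovery assumption is ours, not an empirical estimate), RevPAR = ADR × occupancy, and avoided overrides can be modelled as recovering a small slice of ADR per affected room night:

```python
# Purely illustrative calculation: RevPAR and a hypothetical revenue
# recovery from overrides avoided through better explanations.

ROOMS = 200
ADR = 180.0        # average daily rate (USD)
OCCUPANCY = 0.75   # 75% occupancy

revpar = ADR * OCCUPANCY            # revenue per available room
daily_room_revenue = revpar * ROOMS

# Assumed scenario: explanations prevent bad overrides on 5% of sold room
# nights, each override having cost 8% of ADR in mispricing.
recovered = ROOMS * OCCUPANCY * 0.05 * (ADR * 0.08)

print(revpar)               # 135.0
print(daily_room_revenue)   # 27000.0
print(round(recovered, 2))  # 108.0 per day in this scenario
```

Even under these modest assumed parameters, the recovered amount compounds over a year of operating days, which is the intuition behind the override-reduction benefit discussed above.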
On the cost side, integrating XAI can involve significant financial costs, including initial investment in technology upgrade, ongoing maintenance costs, and staff training to ensure effective implementation and use. Additionally, the integration may expose existing systems to cybersecurity risks, such as data breaches or system vulnerabilities, which could lead to reputational damage and financial losses (Arrieta et al., 2020). It may also erode market differentiation if competitors implement similar solutions. Therefore, to realize the full potential benefits of XAI integration, hotels must conduct thorough investment analysis, carefully weigh the costs and risks, and develop contingency plans to mitigate the risks. This approach will ensure long-term goal achievement and sustainable advantages.
Beyond the micro-level implications for firms and customers, XAI integration has significant macroeconomic ramifications for other market stakeholders like government and investors, regarding regulation, compliance and monitoring to reduce systemic risks and ethical violations in AI-assisted decision-making (Moorthy et al., 2025). Additionally, the integration will necessitate changes in labour market dynamics and employment relations to accommodate XAI and related explainable systems as co-workers or co-pilots. By extension, the future of work and workforce, and the traditional notion of agency, responsibility and accountability will evolve to realign with a hybrid system of human-AI interaction (Leonardi, 2025).
Contributions
Figure 2 summarises this study’s contributions by presenting a unified framework that outlines key considerations for applying XAI to transform black-box hotel RMSs into glass or grey boxes. The framework provides insights into XAI’s capabilities in hotel RMSs, the question-driven explanations stakeholders need, the techniques for delivering them, and the factors shaping readiness to adopt.
Figure 2. Unified framework for integrating XAI into RMS.
Theoretical implications include the novel combination of TTF and TOE frameworks to explore XAI integration in hotel RMS, addressing a gap in the literature. The unified framework advances theoretical understanding by aligning XAI capabilities with stakeholder explanation needs and identifying adoption factors within the TOE structure. Practically, the findings can inform the design and implementation of user-friendly XAI-enabled RMSs. The identified question-driven explanation types can guide model training to meet stakeholder needs. Preferred explanation techniques, such as simplification, contextualisation, and interactivity, can be embedded to enhance usability and understanding. Finally, the technical, organisational, and environmental readiness factors offer actionable insights to support adoption and integration.
Conclusion and suggestion for future research
Explainable AI (XAI) offers a promising solution to the black-box nature of RMSs, though its adoption and applicability across contexts remain underexplored. This study focused on the hotel RMS context, interviewing 27 RM professionals to examine what can be explained, what should be explained, how to explain, and the factors influencing XAI adoption readiness. Using the TTF and TOE frameworks as a guide, findings reveal that while XAI awareness and readiness remain low, its ability to deliver question-driven explanations—such as what, how, why, why not, what if, how-to, and what-else—aligns well with user needs. The study suggests that integrating XAI into RMS can strengthen human-algorithm collaboration and improve transparency, trust, confidence, efficiency and effectiveness as RMSs grow more complex with AI.
As with all research, this study has limitations. First, it focused solely on the hotel sector, which tends to lag others, like airlines, in technology adoption, limiting generalisability. Second, while we aimed for diverse representation, access to RMS providers was limited, skewing the sample toward end-users from the U.S. and Hong Kong, which may have shaped the discourse toward the perspectives of liberal and neoliberal economies. Future studies could recruit a more balanced sample to reflect diverse regulatory perspectives and ideological environments. Comparative studies of different economies (e.g., developed versus developing) with adequate samples would also be valuable extensions. Lastly, since XAI is not yet implemented in hotel RMSs, we could not assess its economic or performance impacts (e.g., ADR, occupancy, RevPAR) using quantitative methodologies. These should be explored quantitatively once XAI becomes operational in RMS environments.
Supplemental Material
Supplemental Material for “Stakeholder perspectives on integrating explainable artificial intelligence into hotel revenue management system” by Ibrahim Mohammed and Basak Denizci Guillet, Tourism Economics.