Introduction
In recent decades, technological advancement has accelerated at an unprecedented pace, compelling the scientific community to continuously update itself and critically examine its implications. This reality has created an ever-widening gap between the rapid evolution of technology and the ability of regulatory frameworks and scientific research to keep pace and develop a comprehensive understanding of its short- and long-term outcomes. Frequently, technologies become widely accessible to the general public before their impacts have been systematically studied, resulting in their adoption outpacing the capacity for effective oversight or meaningful interpretation of their consequences.
This phenomenon is not new. Indeed, as early as the 20th century, scholars warned of the growing disparity between the rate of technological advancement and the ethical, social, and evolutionary maturation of humanity (e.g., McLuhan & Fiore, 1967; Toffler, 1970; Wiener, 1948/2019). They expressed concern over the development of complex systems that exceed human capacities for understanding, control, and regulation, relying instead on outdated conceptual frameworks that fail to address present challenges.
In the same vein, contemporary academics have also highlighted the dangerous gap between the accelerated rate of technological progress, particularly in the fields of artificial intelligence and biotechnology, and the ability of political, ethical, and social systems to adapt accordingly. This discrepancy may lead to unforeseen consequences and unprecedented threats unless appropriate ethical and regulatory infrastructures are developed in parallel (Harari, 2018).
Against this backdrop, generative artificial intelligence (GenAI) represents a dramatic turning point in human history and in the domain of mental health in general, and thanatology (the study of loss and bereavement) in particular.¹ In this field, the integration of this technology carries especially sensitive implications that demand careful and thorough scrutiny. The rapid pace at which GenAI is developing, along with its growing adoption by the general public, underscores the urgent need for an ongoing and comprehensive scientific engagement. Accordingly, the aim of this article is to offer an up-to-date overview of the integration of GenAI into the assessment and treatment of grief-related responses, and to propose both theoretical and practical recommendations for navigating this inevitable development in an informed and responsible manner.
Generative Artificial Intelligence and Mental Health
In recent years, there has been a marked acceleration in the integration of generative artificial intelligence (GenAI) technologies into the fields of mental health assessment and treatment (e.g., Bhatt, 2025; Cruz-Gonzalez et al., 2025; Olawade et al., 2024). In its early stages, AI was primarily employed to automate routine tasks and streamline administrative and research processes (Bickman, 2020). With advances in machine learning algorithms and natural language processing, new applications have emerged that offer personalized psychological support, simulating aspects of human therapeutic interaction (Prakashan et al., 2024). This development is driven in part by the global shortage of mental health professionals, particularly in regions with limited access to psychological services (Kuhail et al., 2024).
The use of such technologies in mental health care offers several advantages, including increased accessibility of interventions for broader populations — especially those who avoid or struggle to attend in-person therapy sessions. Additionally, these technologies may reduce dropout rates from treatment programs, ensure standardization of intervention procedures, and provide greater flexibility and convenience, all while significantly lowering financial costs. Furthermore, the use of AI in this context may support the preservation of therapeutic information, enable ongoing monitoring of progress, and offer personalized professional feedback (Manevich, 2025).
These technologies include, among others, interactive digital applications and avatars (“chatbots”) designed to offer emotional support, conversational companionship, and at times, therapeutic elements (Laestadius et al., 2024). For instance, applications such as
A preliminary study investigated the effectiveness of a tailored version of the
As another example, a qualitative study by Skjuve et al. (2021) examined the development of user–chatbot relationships within the context of
Alongside the growing use of chatbots for emotional support in general mental health contexts, there is an emerging trend of employing GenAI technologies to assist individuals coping with grief and bereavement (Seckman, 2025), a development that will be explored in detail in the following sections.²
Generative Artificial Intelligence and Thanatology
Deathbots
Among the most intriguing and controversial developments at the intersection of GenAI and thanatology is the emergence of so-called “deathbots” — “chatbots based on the digital footprint of the deceased […] that offer mourners the possibility to ‘talk’ to their loved ones after their death” (Jiménez-Alonso & Brescó de Luna, 2023, p. 104). Unlike physical memorabilia or browsing through photo albums, deathbots offer a dialogical and personalized experience — a responsive interface that asks questions, engages, and provides a sense of ongoing connection with the deceased (Jiménez-Alonso & Brescó de Luna, 2023; Krueger & Osler, 2022). The underlying technology typically involves processing personal correspondence, voice recordings, posts, emails, or other biographical materials, which are used to create a digital entity that mimics the speech style, humor, and syntax of the deceased loved one (Hurtado Hurtado, 2023).
While the term deathbots is often used colloquially to describe systems based on GenAI, it is important to note that these systems are not solely reliant on GenAI models. Rather, they represent a complex integration of multiple technologies, including large language models (LLMs), natural language processing (NLP) techniques, and machine learning algorithms trained on personal data. In this paper, we focus on a specific manifestation of this phenomenon: dialogical AI interfaces designed to simulate communication with a deceased person. Although related grief technologies (such as AR/VR memorials or general-purpose LLM-based support) fall outside the immediate scope of this paper, we believe that the framework presented later may nonetheless offer conceptual and practical relevance to these adjacent domains in future research.
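The general pipeline described above — conditioning a large language model on a corpus of personal texts so that it imitates an individual's style — can be sketched in a deliberately simplified form. The function name, prompt wording, and samples below are hypothetical illustrations, not a description of any actual product; real systems combine fine-tuning, retrieval, and safety layers far beyond this sketch.

```python
# Illustrative sketch (not any real system): assembling a style-imitation
# prompt for an LLM from a small corpus of personal texts. All names and
# prompt wording are invented for demonstration purposes.

def build_persona_prompt(name, style_samples, max_samples=5):
    """Compose a system prompt asking an LLM to imitate a writing style."""
    excerpts = "\n".join(f"- {s}" for s in style_samples[:max_samples])
    return (
        f"You are simulating the conversational style of {name}. "
        "Imitate the tone, humor, and phrasing of these excerpts:\n"
        f"{excerpts}\n"
        "Always remind the user, when asked, that you are a simulation."
    )

samples = [
    "Don't forget your umbrella, it always rains when you don't.",
    "Call me when you land, even if it's late.",
]
prompt = build_persona_prompt("A.", samples)
```

Even this toy version makes one design question visible: the final instruction line is a deliberate transparency safeguard, and whether commercial systems include anything like it is precisely one of the ethical issues discussed below.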
Currently, deathbots are being used in an increasing variety of ways, reflecting the rapid evolution of AI technologies. These emerging applications challenge both traditional conceptual frameworks in thanatology and the personal ways in which individuals experience and cope with grief. One of the most common applications involves the development of a digital representation of the deceased after the person’s death, followed by holding personal conversations with this simulation, usually through daily interactions conducted via text or voice (Fabry & Alfano, 2024).
In other cases, the use is premeditated, with the deceased person actively collaborating during their lifetime to build the AI-generated representation intended to allow their family members to continue engaging with them after death (Pizzoli et al., 2024). For example, applications such as
However, despite the increasing use of this technology in grief and bereavement contexts, scientific research on its impact on mourners remains in its early stages and primarily focuses on ethical discussions surrounding the risks and potentials inherent in this technology (Fabry & Alfano, 2024). The following section will address this issue in greater detail.
Ethical Issues
As noted, the academic literature on the use of artificial intelligence in coping with loss and bereavement places significant emphasis on the ethical issues emerging with the advent of deathbots. Beyond the general ethical dilemmas associated with AI integration in mental health — such as privacy and data security concerns (Blease & Rodman, 2025) — the field of thanatology presents unique challenges. One central dilemma concerns the question of informed consent: specifically, whether an individual can meaningfully consent in advance to the use of their digital profile after death. Scholars highlight the inherent difficulty of this issue, particularly given the unpredictable development of technological capabilities (e.g., Degni, 2025; Fabry & Alfano, 2024; Hollanek & Nowaczyk-Basińska, 2024).
For instance, Degni (2025) emphasizes that such consent raises profound ethical questions, since it is granted within a specific technological context but enacted in an uncertain future environment where capabilities may differ drastically. This creates ambiguity regarding the boundaries of individual control over their digital presence post-mortem and strengthens calls for the implementation of more stringent ethical standards concerning such consent. This position aligns with findings by Kawashima et al. (2025), who examined Attitudes toward Digital Immortality (ADI) among 296 older adults in Japan. In their study, only 8% of participants expressed a desire to be “digitally resurrected” on a virtual platform after death, while the majority expressed reluctance toward this concept.
Moreover, GenAI systems may infer traits or attitudes that the individual never explicitly expressed, thus generating representations based on automated data processing rather than direct statements or conscious intentions (Degni, 2025). This concern is particularly salient when the digital simulation is constructed from fragmented information without truly embodying the complexity of the individual’s identity (Krueger & Osler, 2022). Additionally, ethical questions arise regarding the privacy of family members, friends, and others whose communications with the deceased might be incorporated into the bot without their consent (Degni, 2025; Fabry & Alfano, 2024).
Another fundamental ethical dilemma involves commercial interests. Many researchers warn against the development of services driven by profit motives, often lacking sensitivity to potential negative impacts on users (Fabry & Alfano, 2024; Hollanek & Nowaczyk-Basińska, 2024; Krueger & Osler, 2022). Financial incentives may lead to the design of systems that encourage prolonged use and even emotional dependence, rather than promoting healing and adaptation. When profit, rather than mental well-being, is prioritized, there is a risk that the user’s relationship with the digital simulation of the deceased may hinder the natural grieving process (Krueger & Osler, 2022).
Together, these concerns reinforce the growing call for clear and binding ethical regulation to ensure the protection of privacy, human dignity, and transparency, alongside access restrictions to such services (Degni, 2025). Following the ethical dilemmas raised by this issue, questions also emerge regarding the documented efficacy of these technologies on bereaved individuals. This issue will be addressed in the next section.
Psychosocial Effects
Alongside the ethical discussion, the literature also presents a multifaceted understanding of the psychological effects that the use of deathbots may have on the bereaved. Research indicates that responses to this technology are neither unequivocal nor uniform but may vary depending on cultural context, personality traits, and the stage of grief an individual is experiencing (Brescó de Luna & Jiménez-Alonso, 2024). For example, Krueger and Osler (2022) suggest that deathbots can enable the bereaved to maintain “habits of intimacy” with the deceased, such as daily conversations, emotional regulation, and shared time. The bot’s responses generate a sense of familiarity that may simulate the emotional support experienced during the person’s life, thereby contributing to a feeling of continuity in the relationship.
However, other researchers caution that, under certain circumstances, the use of deathbots might impede the natural progression of grief. Fabry and Alfano (2024) describe cases in which frequent use of the bot, especially in the early stages of grief, may lead to emotional dependence and gradual loss of autonomy. The bereaved individual might rely on the bot to such an extent that a mediated digital relationship replaces the direct connection they once had with the deceased (Fabry & Alfano, 2024). Additionally, research suggests that intensive searching for reminders of the deceased could indicate symptoms of Prolonged Grief Disorder (PGD) (Pizzoli et al., 2024).
PGD is a psychiatric diagnosis recently added to the DSM-5-TR and ICD-11, describing an abnormal grief reaction characterized by intense yearning or persistent preoccupation with the loss, alongside emotional pain, identity disruption, loss of meaning, impaired functioning, and more. This response persists beyond socially normative timeframes and has gained broad empirical support as a distinct clinical diagnosis (Killikelly et al., 2025). The risk for PGD may be elevated among individuals with insecure attachment styles, who might use the bot as a means of denial and avoidance of deep emotional processing of the loss (Sekowski & Prigerson, 2022).
Alongside the theoretical literature on deathbots, initial empirical studies have gradually begun to emerge in recent years, aiming to understand how people actually use this technology and its implications for them. For example, a qualitative study by Xygkou et al. (2023) explored the various ways bereaved individuals use chatbots to cope with loss and the meanings they attribute to these interactions. Participants described chatbots as means for emotional processing, both when the chatbot simulated the representations of the deceased and when it acted as a general, nonjudgmental conversational partner. Frequently, interactions with the bot were perceived as a safe space for emotional expression and a temporary means of coping with loneliness — sometimes even more comfortable and open than conversations with formal therapists or close relationships.
According to Xygkou et al. (2023), users did not perceive bots as substitutes for human relationships, but rather as complementary, temporary means of supporting internal emotional processing. Among users who chose to simulate the deceased’s representations, experiences of emotional continuity were described, such as the expression of unspoken thoughts, a sense of presence, or “conversations” with the person who is no longer alive. Some even reported experiences akin to therapeutic processes. However, most participants emphasized their awareness that the bot was not a real person and expressed understanding of the technology’s limitations, particularly in instances where the bot’s responses were superficial or inconsistent. The researchers concluded that chatbots may have a supportive role in coping with loss, provided their use is conscious, controlled, and embedded within a broader social or therapeutic context.
Additionally, a quantitative study by Kawashima et al. (2023) examined bereaved individuals’ attitudes toward the possibility of maintaining digital relationships with deceased loved ones and factors associated with this desire. While most participants did not seek to preserve digital contact, about 20% expressed a wish to meet the deceased in a virtual space. This finding highlights a complex and ambivalent attitude toward the option of renewing digital relationships: whereas fewer than 10% wished to “revive” the deceased in a digital form, more than twice as many preferred the possibility of a symbolic reunion. The desire for such a connection was significantly associated with several factors, including the young age of the deceased, a shorter elapsed time since the loss, and participant gender — with women showing greater interest than men. Another key finding was that the desire to maintain a digital connection significantly predicted higher levels of maladaptive grief five months after the initial measurement, even after controlling for demographic and psychological variables. Conversely, no correlation was found between digital bonds and post-traumatic growth.
These findings may be better understood through the lens of the “Continuing Bonds” paradigm (Klass et al., 1996; Klass & Steffen, 2018), which constitutes one of the foundational pillars of the field of thanatology. According to this paradigm, the relationship with the deceased does not end with their death, but rather continues to evolve and transform within the mourner’s inner world throughout the course of life. Such bonds may manifest in a wide range of experiences and behaviors, including dreaming about the deceased, speaking to or consulting with them, sensing their presence or guidance, preserving personal belongings, looking at photographs, telling stories about them, visiting their grave, frequent reminiscing, acts of memorialization, and carrying forward their legacy. Alongside this core assumption regarding the ongoing relationship with the deceased, both clinical practice and research aim to identify which types of bonds and their specific characteristics facilitate the mourner’s adaptation to loss, and which may impede it (Hewson et al., 2024).
In light of the above, a concern arises that digital connections with the deceased may develop into unresolved “external bonds” (Scholtes & Browne, 2014), potentially hindering adaptive processes, especially when not embedded within a supportive social-therapeutic framework. Kawashima et al. (2023) proposed that the intensity of the desire for digital contact might indicate a risk for poor adaptation following loss. However, they also noted that their study assessed the desire for digital contact rather than actual use of such technologies and acknowledged possible biases in technology perception and the lack of long-term follow-up. In conclusion, the researchers call for further research grounded in systematic examination of the psychological, social, and ethical impacts of digital connections with the deceased in the era of advanced technologies.
In parallel to studies focusing on attitudes and usage patterns of deathbots, other researchers have examined how this technology might be integrated into early assessment of grief complications. For example, She et al. (2022) investigated the potential use of artificial intelligence as a screening tool to identify individuals at risk of developing PGD. The study included 611 adults who completed questionnaires via a specially developed online system named
The results identified several key predictors of elevated risk: feelings of emptiness, difficulty adapting to a new routine after loss, traumatic loss (e.g., sudden death or the death of a child), and insecure attachment patterns. A notable aspect of the study was the use of explainable AI — that is, technology enabling understanding of which variables contributed to the diagnosis for each individual, not just the final outcome. According to the researchers, this feature may enhance the trust of professionals and users in the technology. Nevertheless, the study was preliminary: it lacked direct clinical diagnosis and had methodological limitations such as targeted sampling within the United States and the absence of long-term follow-up. However, it demonstrates the potential for AI systems to serve in the future as complementary tools for early detection of at-risk individuals in grief, especially when such systems operate transparently, offer personalization, and remain broadly accessible (She et al., 2022).
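The logic of per-individual explainability in such screening tools can be illustrated with a deliberately simplified scoring sketch. The predictors below echo the factors reported in the study, but the weights, threshold, and code are invented for demonstration; they are not the actual model or coefficients used by She et al. (2022).

```python
# Simplified, hypothetical illustration of explainable risk scoring for
# grief screening. Weights and threshold are invented for demonstration;
# a real system would learn them from data and require clinical validation.

WEIGHTS = {
    "emptiness": 0.9,
    "adaptation_difficulty": 0.7,
    "traumatic_loss": 1.2,
    "insecure_attachment": 0.6,
}
THRESHOLD = 1.5

def screen(responses):
    """Return (flagged, contributions): a risk flag plus a per-feature
    breakdown showing *why* the score was assigned, not just the outcome."""
    contributions = {k: WEIGHTS[k] * responses.get(k, 0.0) for k in WEIGHTS}
    total = sum(contributions.values())
    return total >= THRESHOLD, contributions

flagged, why = screen({"emptiness": 1.0, "traumatic_loss": 1.0})
# Each entry in `why` shows how much a single answer moved the score,
# mirroring the idea of per-individual explanations described above.
```

The point of the sketch is the second return value: exposing each feature's contribution alongside the flag is what distinguishes explainable screening from a black-box classifier, and it is this transparency that the researchers suggest may build trust among clinicians and users.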
Discussion
The integration of GenAI technologies in the mental health field in general, and clinical thanatology specifically, is occurring at an accelerated pace, raising profound questions about the capacity of the scientific, clinical, and regulatory sectors to oversee and guide the implementation and application of these technologies. These developments demand renewed preparedness based on continuous learning, ongoing scientific updates, and openness to broad multidisciplinary thinking.
In light of this, the present article seeks to offer a comprehensive mapping of current trends in the use of GenAI technologies among bereaved populations, presenting a balanced view of their potential benefits alongside the attendant risks. It is emphasized that the purpose of this discussion is not to take a definitive stance for or against the technology, but rather to acknowledge it as an existing, active, and present reality — one likely to expand and become further entrenched in the future. From this perspective, the article aims to provide a deep understanding of beneficial aspects and ethical complexities, while proposing avenues for responsible and balanced engagement with these evolving processes.
Among the central advantages of GenAI technologies in mental health are increased accessibility to psychological services for remote, vulnerable, or underserved populations; immediate and continuous availability (24/7); alleviation of burdens on healthcare systems; cost reduction; personalized interventions tailored to individual client needs; automated monitoring and feedback of therapeutic processes; early identification of symptoms and potential mental health issues; and the establishment of a certain degree of standardization in therapeutic practices. These benefits may contribute to significant efficiencies in service provision and expand mental health care beyond traditional models.
Nonetheless, alongside these advantages, the use of GenAI in mental health is not without significant risks and challenges. Among the foremost concerns are issues of privacy and security of sensitive data, especially regarding psychological and personal information; the lack of adequate and up-to-date regulation in the face of rapid technological advances; algorithmic errors or biases that may lead to inaccurate or even harmful assessments and interventions; absence of human contact and context-sensitivity; the potential for overreliance on technology at the expense of developing internal coping capacities and meaningful interpersonal connections; and the infiltration of commercial and economic interests that may dictate the manner of technology use. Furthermore, some scholars have begun to highlight the possibility of new psychopathological phenomena emerging as a result of the psychological and social impact of these technologies — phenomena not yet identified or defined within existing diagnostic frameworks (Gilat, 2023).
When examining GenAI use in the context of loss and bereavement, even more complex and unique issues arise. For example, the question of informed consent regarding the use of information after a person’s death presents complicated ethical and legal dilemmas, particularly when such data includes other individuals connected to the deceased and bereaved, whose privacy may also be compromised. Additionally, concerns are raised that AI-generated digital representations may fail to faithfully reflect the multiplicity and nuances of the deceased’s identity, potentially producing a distorted version of the person they were.
Moreover, one of the most significant risks in this context is the potential formation of a dependent relationship with the digital simulation of the deceased, a bond that may hinder the grieving process and impair the bereaved’s ability to adapt. This condition may lead to what can be conceptualized as an
From a research perspective, the literature on the integration of GenAI in bereavement contexts remains in its infancy. Although interest in the field is growing, the current number of studies directly addressing the subject is limited, predominantly conceptual in nature. Empirical studies examining the effects of deathbots on bereaved individuals are scarce, mostly relying on qualitative interviews, case studies, or small samples, with a near absence of rigorous quantitative and controlled research. This paucity hampers the development of evidence-based conclusions and responsible therapeutic practices.
Furthermore, it is important to acknowledge that much of the existing literature and empirical research has been conducted predominantly in Western and developed countries, which may limit generalizability across diverse societies. Given that grief processes and mourning practices are deeply embedded within cultural norms and beliefs (Hilberdink et al., 2023), variations in these factors, as well as in technology adoption, may significantly influence the impact of AI-based grief interventions on different populations. Therefore, future research should prioritize incorporating a broad range of cultural perspectives to advance a more comprehensive and sensitive understanding of grief and its technological mediation worldwide.
Given these gaps, it is crucial to advance research in several key directions: understanding the prevalence and nature of AI technology use among the bereaved across different cultures; exploring clinicians’ attitudes and practices regarding the integration of AI in therapeutic settings; investigating the long-term psychological, social, and functional impacts of AI-based digital simulations on bereaved individuals; and broadening the scope of inquiry to include non-death losses, such as experiences of family members caregiving for individuals living with dementia, traumatic brain injuries, or prolonged disorders of consciousness (Manevich et al., 2023). Building on the above, the following section presents a preliminary framework to guide and support the responsible integration of GenAI in contexts of loss and bereavement, highlighting key considerations and emerging challenges in the field.
A Framework for Responsible Integration of GenAI in Thanatology
The framework outlined below draws on a broad review of current literature, incorporating selected key studies and relevant insights from GenAI, mental health, and thanatology research. Thus, it brings together empirical findings and conceptual and ethical considerations, to identify existing gaps and propose a preliminary framework for the informed and responsible use of GenAI in grief-related contexts. It should be emphasized that the framework is intended as an initial foundation to inform future research and clinical practice, remaining open to further refinement and empirical validation. The proposed framework is based on an integrative system comprising four key components: regulation and ethics, theory, empirical research, and clinical applications (see Figure 1).

Figure 1. Proposed Framework for Responsible GenAI Integration in Thanatology
As can be seen in Figure 1, the four key components operate in a reciprocal cycle and support one another. Regulation and ethics establish the foundational framework, defining the boundaries and moral principles that guide the development of theory, empirical research, and clinical implementation. Theory provides a conceptual understanding of the psychological and technological processes within the fields of grief and GenAI, enabling the formulation of guiding hypotheses for research and clinical models. Empirical research functions as a mechanism for validating and testing theoretical assumptions in “real-world” settings by collecting data and evaluating effectiveness and impacts, thereby feeding back new knowledge and recommendations for improvement to regulation, theory, and practice. Clinical applications, which integrate technology with direct human contact, generate essential feedback on practical challenges and opportunities, contributing to the ongoing refinement and precision of all other model components. This creates a synergistic system that facilitates structured, evidence-based, and sensitive development aligned with the needs of bereaved individuals using advanced technologies.
To translate this framework into practice, each of the model’s components must be addressed systematically, with careful consideration of the unique challenges and opportunities it entails. Accordingly, the first priority is to establish robust regulatory mechanisms that ensure oversight, control, and ethical governance over how these technologies are employed, while safeguarding privacy, respecting human dignity, and minimizing psychological harm. This includes the formation of interdisciplinary ethics review boards composed of clinicians, ethicists, technologists, and bereaved users, tasked with evaluating the design and deployment of AI applications related to grief and bereavement. In parallel, clear ethical guidelines and standardized protocols must be developed to govern the use of such technologies in sensitive contexts, with particular emphasis on interpersonal loss. These guidelines should address criteria for informed consent (including how and when it is obtained), delineate acceptable use cases (e.g., therapeutic vs. commercial), and set boundaries concerning the duration and content of interactions with AI systems. Within this regulatory framework, it is also essential to invest in advanced encryption technologies and enforce data minimization practices, in order to prevent breaches, misuse, or unwarranted intrusion into users’ intimate emotional experiences. Furthermore, developers should be required to conduct privacy impact assessments (PIAs) prior to deployment, and users must be provided with transparent information regarding the use of their data, and where applicable, the data of deceased individuals.
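To make the consent and data-governance requirements above more concrete, a consent decision could be represented as an explicit data structure with scoped uses, data categories, and a freshness rule. The field names and the two-year review rule below are illustrative assumptions, not a regulatory schema or an existing standard.

```python
# Hypothetical sketch of a record for pre-mortem consent to posthumous data
# use. Field names and the review rule are invented, illustrating how
# informed consent might be recorded, scoped, and periodically re-validated.

from dataclasses import dataclass, field
from datetime import date

@dataclass
class ConsentRecord:
    subject_id: str
    granted_on: date
    permitted_uses: set = field(default_factory=set)   # e.g. {"therapeutic"}
    data_categories: set = field(default_factory=set)  # e.g. {"text_messages"}
    review_after_years: int = 2  # stale consent must be re-confirmed

    def allows(self, use, category, today):
        """Valid only for listed uses and categories, and only while fresh."""
        fresh = (today - self.granted_on).days <= self.review_after_years * 365
        return (fresh and use in self.permitted_uses
                and category in self.data_categories)

rec = ConsentRecord("p-001", date(2024, 1, 1),
                    {"therapeutic"}, {"text_messages"})
```

Under this sketch, a request tagged "commercial" would be refused even if the data category matched, operationalizing the distinction between therapeutic and commercial use cases noted above, and consent older than the review window would fail until re-confirmed, reflecting the concern that consent granted in one technological context may not extend to a drastically different future one.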
On a theoretical level, there is an urgent need for comprehensive conceptualization that provides a framework for understanding the interaction between loss, grief, and artificial intelligence technologies. Theoretical frameworks such as Attachment Theory (Bowlby, 1969-1980; Mikulincer & Shaver, 2022), the Continuing Bonds Paradigm (Klass et al., 1996; Klass & Steffen, 2018), the Dual Process Model of Coping with Bereavement (Fiore, 2021; Stroebe & Schut, 1999), and the Two-Track Model of Loss and Bereavement (Rubin, 1981; Rubin et al., 2020) can serve as foundational bases for evaluating, developing, and guiding interventions. To further advance both theoretical clarity and practical relevance, these conceptual frameworks should be expanded and adapted to address the specific complexities introduced by grief-related AI technologies. This requires extending key concepts to better capture the nuances of human-AI interactions in bereavement contexts. Concurrently, operationalizing these refined concepts into testable hypotheses (such as studying how individuals with varying attachment styles engage with AI grief tools, or examining the influence of AI-mediated continuing bonds on the oscillation between loss- and restoration-oriented coping) will provide essential empirical validation.
From a research perspective, an immediate expansion of the empirical knowledge base is essential to examine the clinical, psychological, and social impacts of these technologies on bereaved populations. There is a notable scarcity of quantitative, controlled, and longitudinal studies, with most existing knowledge derived from qualitative studies. Therefore, comprehensive, systematic, theory-driven, and multi-layered empirical research is required to explore the complexity of the issue and enable the construction of evidence-based therapeutic practices. To facilitate practical implementation, research efforts should include well-designed longitudinal studies that track grief trajectories over time, randomized controlled trials (RCTs) assessing specific AI-based grief interventions, and mixed-methods approaches that integrate qualitative insights with quantitative data. Additionally, developing validated measurement tools tailored to grief-related AI interactions will enhance data quality and comparability. Targeted recruitment strategies should ensure diverse and representative samples, including various cultural backgrounds, ages, and types of loss. Findings from such research must then be translated into clear clinical guidelines and training protocols for practitioners, thereby bridging the gap between empirical evidence and ethical, effective application in “real-world” settings.
Finally, in the domain of clinical application, the development of hybrid models integrating artificial intelligence with direct human contact should be encouraged. This approach ensures that the technology serves as an
It is important to emphasize that the framework presented in this article is a proposed model based on an integrative review of existing literature and theoretical insights, rather than an empirically validated tool. As such, it serves as a preliminary foundation intended to guide research and clinical practice in the responsible use of generative AI in grief-related contexts. The limitations inherent to untested frameworks must be acknowledged, including potential gaps or oversights not yet addressed. Therefore, rigorous empirical testing and validation are necessary to refine and adapt the framework. Future research should aim to evaluate its effectiveness, feasibility, and cultural sensitivity, employing mixed-methods approaches, longitudinal studies, and expert consensus processes to ensure robust and applicable guidelines. Ultimately, while the proposed framework is still in early stages, it underscores a fundamental imperative: the need to proactively and ethically engage with emerging technologies in grief contexts.
In conclusion, humanity is undergoing a profound revolution which, despite the legitimate concerns it raises, has become an inseparable part of the present and foreseeable future. In other words, avoidance or denial of these processes is neither feasible nor advisable. Accordingly, the professional community must prepare itself to integrate innovative technologies responsibly, while rigorously upholding professional ethics and the highest standards of care.
