Background
Artificial intelligence (AI)—part of the “fourth” digital industrial revolution—is playing an increasingly crucial role in modern healthcare.1 AI can analyze and interpret vast volumes of data and make more accurate decisions than humans, making it a likely future gold standard for health services.2,4 AI takes several forms, such as expert systems, machine learning (ML), deep learning (DL), and artificial neural networks (ANNs). Advances in affordable storage, faster networks, and increased computing power have enabled the integration of AI into different healthcare services. Today, AI is sufficiently advanced for use in triage, disease pattern prediction, and image analysis across medical disciplines including radiology, gynecology, neurology, cardiology, pathology, pharmacology, and robotic surgery.5 Integration of AI into healthcare therefore presents unique opportunities to enhance healthcare services.
Regardless of geography, the healthcare sector tends to be overworked and underfunded and faces many challenges, including the need for cost reductions, shortages and burnout of qualified personnel, ageing populations, and unexpected threats such as the COVID-19 crisis. Integrating AI into healthcare can help tackle these challenges by assisting with or automating certain tasks, reducing workloads, and enhancing the quality and speed of the tasks performed.6 Different AI vendors currently cater to the demand for AI in healthcare, offering applications, products, cloud storage, and other services that enable healthcare organizations to integrate AI into their workflows.
Healthcare workers generally agree that AI could significantly improve their work environment and patient care but, as it is a new technology, many have had little exposure to it.7 This has resulted in several legitimate concerns. First, developing effective AI algorithms requires comprehensive, clear, and complete data, which in turn requires data mining and curation by suitably skilled professionals. Other perceived obstacles are a lack of human oversight, disruption of the patient–physician relationship, and fears about machine errors impacting patient health.8 In addition, there are concerns over job security for healthcare workers.9 Since AI is a relatively new concept, regulations about its use have yet to be fully defined, highlighting the ethical and legal issues that may arise when applying AI to healthcare.10
There have been considerable efforts over recent years by both academia and industry to digitize healthcare.11 AI in healthcare can be separated into the “virtual,” taking the form of electronic health record (EHR) systems or neural network-dependent guidance in treatment, and the “physical,” such as robots aiding surgery, intelligent prostheses for disabled individuals, and machines for geriatric care.12
AI is already being applied to healthcare in various contexts, for example in online appointment scheduling and check-in systems, automated reminders about follow-up appointments or immunization dates, algorithms calculating drug dosages, and warnings about adverse effects from polypharmacy. A significant potential benefit of AI in healthcare is its ability to provide comprehensive health services management support to healthcare workers through continuous, real-time information and updates sourced from journals, textbooks, and best practice guidelines.13 This integration of AI into health services, commonly referred to as telemedicine, has become a popular application of AI in healthcare that has increased the quality of patient care and provided physicians with new approaches.14,15 A particular need for telemedicine and healthcare digitization was highlighted by the COVID-19 pandemic, when efficient systems and faster knowledge-exchange mechanisms were required to bridge the gap created by legal constraints restricting access to hospitals and treatment centers. Despite progress, Ekeland et al.16 highlighted a need for further evidence demonstrating the utility and effectiveness of telemedicine and e-health. According to Guo and Li,17 one of the key contributions that healthcare digitization and the integration of AI can offer is timely knowledge exchange and support in low-income countries, thereby making a social impact. AI in healthcare has also been shown to make a difference by optimizing logistics, providing remote personnel training, ensuring timely supply using predictive algorithms, and bridging the gap between rural and urban healthcare.17 AI can also be beneficial in health services management by studying data obtained from EHRs to look for outliers, perform clinical tests, unify patient data representation, and improve predictive models for diagnostic and analytic purposes.18,19
There have been several studies of students’ perspectives on AI's role in science and technology as well as in the creative arts.20,21 Moreover, several studies have evaluated medical students’ perspectives on AI-integrated healthcare.22–25 However, studies of students from different healthcare sectors are scarce. Most existing studies were conducted in European countries, where technological advances were usually adopted before their implementation in Asian and African countries. Acceptance of the integration of AI into healthcare also appears to vary by geography: high-income countries such as the United Kingdom, the United States, and France and some middle-income countries show a clear understanding of the integration of telemedicine,26 whereas this has not been the case in other, low-income countries. For example, Jha et al.27 discussed how the lack of exposure to and education about AI in Nepalese medical schools has led to a lack of awareness among medical students about AI's potential, and a similar study in India revealed that a significant proportion of participants lacked knowledge about the applications and limitations of AI in healthcare.28
Regardless, AI is rapidly being integrated into healthcare and will have a significant impact on services. Future healthcare workers, i.e., current healthcare students, are important stakeholders who will play a key role in the development, implementation, and utilization of AI solutions in healthcare. Evaluating current healthcare students’ understanding of the scope of AI in healthcare and their receptivity to it would provide valuable data with which to navigate its efficient integration into healthcare systems in the future. Few studies have discussed healthcare students’ perspectives on the integration of AI in the field,22,27–33 and such studies are lacking in the Middle East. Given the speed of technological developments in the Middle East, including in Qatar, understanding the perspectives of Qatar's future healthcare workforce on AI is essential. Our hypothesis was that, since AI is a new technology, future healthcare workers might be reluctant to accept change and that this reluctance might stem from a lack of proper understanding of AI and its potential impact on their lives and careers. Therefore, this study aimed to investigate the perspectives of Qatar University (QU) students at the QU-Health Cluster, which includes all health colleges (Medicine, Dentistry, Physiotherapy, Pharmacy, Human Nutrition, Public Health, and Biomedical Sciences), on the integration of AI into healthcare. We sought to identify their understanding of the utility of AI systems, their acceptance of the technology, and their fears surrounding it. Filling this gap in knowledge will help to guide implementation processes and better address existing concerns about AI. The outcomes of this study could inform a roadmap for successful AI integration in the country and beyond.
Methods
Study design and participants
This was a cross-sectional web-based study of students currently enrolled at QU in any of the QU-Health Cluster study programs, 18 years and older, and living in Qatar. The population included undergraduate and postgraduate students of Medicine, Dentistry, Physiotherapy, Pharmacy, Human Nutrition, Public Health, and Biomedical Sciences. Ethical approval was obtained from the QU Institutional Review Board (QU-IRB 1570-E/21) before starting the study. Study participation was voluntary, and participant consent was obtained electronically. Data were collected anonymously, and the confidentiality of information was guaranteed. The survey was built using www.kobotoolbox.org, an online open-source resource developed by the Harvard Humanitarian Initiative that offers different data collection and visualization tools. A link to the online survey was distributed through the QU email announcements system to QU-Health students. The questionnaire was distributed over a 3-week period in November 2021.
Instrument
A previously validated and published survey was adapted for this study.34,35 We modified or omitted purely medical questions. The questionnaire was administered in English, as this is the language spoken on campus. The survey questions were divided into six sections and were in both Likert scale and multiple-choice formats (Supplemental Table S1). All Likert scale questions had five options: disagree, somewhat disagree, neither agree nor disagree, somewhat agree, and agree. However, due to the sample size, in some analyses we grouped the 5-point scale into a 3-point scale: disagree, neither agree nor disagree, and agree. Face-to-face pilot testing was conducted on ten volunteers to assess questionnaire clarity; this pilot cohort was not included in the final analysis. It was not mandatory to answer every question to complete the survey.
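As an illustrative sketch of the grouping step, the 5-point responses can be collapsed into the 3-point scale programmatically. The mapping below is our reading of the grouping described in the text (the study itself performed this in SPSS, and the label strings are assumptions):

```python
from collections import Counter

# Assumed mapping from the 5-point Likert scale to the 3-point scale
# described in the text; the study's exact recoding is not specified.
FIVE_TO_THREE = {
    "disagree": "disagree",
    "somewhat disagree": "disagree",
    "neither agree nor disagree": "neither agree nor disagree",
    "somewhat agree": "agree",
    "agree": "agree",
}

def collapse_likert(responses):
    """Map each 5-point response to its 3-point category and tally the counts."""
    return Counter(FIVE_TO_THREE[r] for r in responses)
```

For example, `collapse_likert(["agree", "somewhat agree", "somewhat disagree"])` tallies two "agree" and one "disagree" response.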
The first section collected demographic data about the participants, including age, gender, nationality, qualification being pursued, academic year, and study program. Questions in the second section were designed to evaluate participants’ attitudes towards the integration of AI into healthcare; participants were asked how they felt about AI's usefulness, reliability, and diagnostic ability. The third section assessed the applicability of AI to different healthcare sectors, reviewing what participants thought the effects of AI integration would be, the areas where it could be applied best, and the advantages of AI integration. In the fourth section, participants were asked about their main concerns about AI integration into healthcare; this section covered medical errors, the possibility of unethical use, and liability. Finally, the fifth section asked questions to understand how knowledgeable participants were about AI in healthcare and the different tools associated with AI. For the question on the advantages of AI, participants were given five options and were allowed multiple selections.
Statistical analysis and sample size calculation
The survey results were analyzed using SPSS v28 (IBM Corporation, New York, NY, USA). All data were analyzed using descriptive statistics. Chi-squared tests were used to assess differences in categorical variables such as the risks and knowledge associated with AI integration. For variables on ordinal scales, the gamma measure (range −1 to 1) was used to test associations instead of the chi-squared test. Factors affecting AI acceptance in healthcare were analyzed by the chi-squared test with odds ratios (OR) and 95% confidence intervals (95% CI).
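The analyses were run in SPSS, but the 2×2 chi-squared test with an odds ratio and 95% CI can be sketched in a few lines of pure Python. This is an illustration only; the function name, cell labels a–d, and the Woolf (log-scale) CI method are our assumptions, not details from the paper:

```python
import math

def chi2_odds_ratio(a, b, c, d):
    """Chi-squared statistic, odds ratio, and Woolf 95% CI for a 2x2 table:
    rows = exposed/unexposed, columns = outcome yes/no (cells a, b / c, d)."""
    n = a + b + c + d
    row1, row2 = a + b, c + d
    col1, col2 = a + c, b + d
    # Expected counts under independence
    expected = [row1 * col1 / n, row1 * col2 / n, row2 * col1 / n, row2 * col2 / n]
    observed = [a, b, c, d]
    chi2 = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
    # Odds ratio with a Woolf 95% CI on the log scale
    odds_ratio = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(odds_ratio) - 1.96 * se)
    hi = math.exp(math.log(odds_ratio) + 1.96 * se)
    return chi2, odds_ratio, (lo, hi)
```

For example, `chi2_odds_ratio(30, 10, 20, 40)` yields an odds ratio of 6.0 with a chi-squared statistic of about 16.67; note this simple formula assumes no zero cells.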
Results
Sociodemographic characteristics
Of 684 QU-Health students at QU, 193 responded to our survey, representing 28.2% of the total. One hundred and seventeen (63.6%) were non-Qatari students and 67 (36.4%) were Qatari students. More female (153, 79.3%) than male (40, 20.7%) students responded. Most participants were bachelor's students (146, 75.6%), followed by master's students (39, 20.2%) and eight PhD students (4.1%). The largest group of respondents was from Biomedical Sciences (72, 41.1%), while the lowest number was from Dentistry (one student, 0.6%). The demographic data are detailed in Table 1.
Demographic details of the study participants.
Students’ attitudes and beliefs
Figure 1 shows participants’ attitudes towards the integration of AI into healthcare. The majority of participants perceived that AI has useful applications in healthcare (115, 62.2% agreed; 58, 31.4% somewhat agreed); only 4 (2.2%) disagreed and 8 (4.3%) had a neutral opinion. Participants felt that the diagnostic ability of AI is superior to humans (89, 48.1% agreed and somewhat agreed), 44 (23.8%) neither agreed nor disagreed, and 52 (28.1%) participants believed that humans are superior to AI. Regarding job security, participants were split almost equally between those who agreed and somewhat agreed (72, 38.9%) and those who somewhat disagreed and disagreed (80, 43.3%); 33 (17.8%) participants had a neutral opinion. Respondents considered AI in healthcare to be reliable (153, 73.0%) and felt that it could help to relieve the stress faced by healthcare workers (158, 85.4% agreed and somewhat agreed). In total, 81 (43.8%) said they would always use AI in decision-making, 53 (28.6%) were neutral about it, and 51 (27.6%) disagreed with doing so.

Participants’ attitudes about the integration of artificial intelligence (AI) into healthcare.
We compared the attitudes of undergraduate and postgraduate students towards the application of AI in healthcare. Postgraduate students believed that AI would cause fewer medical errors than traditional healthcare (χ² = 5.9).
There was an association between participants’ nationality and their responses about the role that AI could play in healthcare, specifically whether or not their jobs could be replaced by AI.
Associations between nationality and other variables.
*The 5-point Likert scale was grouped into disagree, neither agree nor disagree, and agree.
Applicability of AI
Most respondents (152, 82.2%) said that the results obtained by AI must be verified by humans. When asked whether AI can provide sympathetic care to patients, most respondents disagreed (107, 57.9% somewhat disagreed and disagreed). Only 42 (22.7%) participants agreed that AI would be able to provide sympathetic care, while 36 (19.5%) neither agreed nor disagreed (Figure 2). Participants were also asked which areas of healthcare would benefit most from AI integration and were allowed to choose multiple answers. Diagnostic laboratories were the most commonly selected area for AI integration (113 selections), while making treatment decisions was deemed the area of healthcare where AI would be least applicable (Supplemental Figure S1). Most study participants believed that diagnostic laboratories would be the first to integrate AI in the healthcare sector, followed by university hospitals, the pharmaceutical industry, specialized clinics, and finally primary care centers (Supplemental Figure S2).

Participants’ perceptions on how applicable artificial intelligence (AI) is to healthcare.
Advantages of AI
Most participants selected “speeding up the process in healthcare” as the main advantage of AI (129 selections), followed by “reduce medical error” (115 selections) (Figure 3).

Participant responses about the possible advantages of using artificial intelligence (AI) in healthcare.
Risks of using AI in healthcare
We evaluated participants’ perceptions of the risks involved in integrating AI into healthcare, with participants given a list of five options and allowed multiple selections (Figure 4 and Supplemental Figure S3). The main concern about applying AI to healthcare was the low ability of AI to sympathize with and consider the emotional well-being of the patient (125 selections). This was followed by concerns about not being able to use AI for opinions in new situations due to inadequate information (91 selections).

Major concerns surrounding the application of artificial intelligence (AI) to healthcare.
A high proportion of students expressed concerns about the possibility of unethical use of AI data, with 118 (65.2%) selecting “high and very high” compared with traditional medical practice. Only 21 (11.5%) participants believed the possibility of unethical data use would be low. Another major concern was liability in case of legal issues. Responses were similar for both “the company that created the AI” and “the healthcare institute in charge of the AI” as the main liable parties, with 92 responses and 87 responses, respectively. Only 14 responses regarded the patient as most liable (Supplemental Figure S4).
Gamma coefficients and their respective p-values for associations between job security and other ordinal variables are reported in Table 3. There was a significant association between job security and both “AI is superior to the clinical experience” and “AI is reliable”.
Association between artificial intelligence (AI)'s role in job security and different AI indicators using the gamma coefficient.
*Significant results are in bold.
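Goodman and Kruskal's gamma compares concordant and discordant pairs of ordinal observations, ignoring ties. As a minimal pure-Python sketch (SPSS additionally reports a significance test, which is omitted here; the function name is illustrative):

```python
def goodman_kruskal_gamma(pairs):
    """Gamma = (C - D) / (C + D) for a list of (x, y) ordinal observations,
    where C counts concordant pairs and D counts discordant pairs.
    Tied pairs are excluded, as in the standard definition."""
    concordant = discordant = 0
    for i in range(len(pairs)):
        for j in range(i + 1, len(pairs)):
            dx = pairs[i][0] - pairs[j][0]
            dy = pairs[i][1] - pairs[j][1]
            if dx * dy > 0:
                concordant += 1
            elif dx * dy < 0:
                discordant += 1
    return (concordant - discordant) / (concordant + discordant)
```

Perfectly concordant data such as `[(1, 1), (2, 2), (3, 3)]` give a gamma of 1.0, and fully discordant data give −1.0, matching the stated range of −1 to 1.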
Figure 5 shows the average responses regarding the superiority of AI diagnostic ability (AI ability), the effect of AI on job replacement (job security), the use of AI in decision-making (decision-making), and the use of AI for personalized testing and medication plans (test-med plan) with respect to age, gender, qualification, and academic program. The higher the average score, the more the students disagreed with a particular statement about AI. With increasing age, students agreed more with the different AI indicators except for job security. There was greater agreement with the statements by female students than by male students. Interestingly, higher education levels were associated with greater disagreement with respect to AI diagnostic ability, decision-making, and the use of AI for personalized testing and medication plans. Participants from Biomedical Sciences disagreed more strongly regarding diagnostic ability and decision-making by AI.

Average response comparison of different artificial intelligence (AI) indicators with respect to age, gender, qualification, and academic program. We grouped the 5-point Likert scale into three categories: “agree/somewhat agree,” “neither agree nor disagree,” and “somewhat disagree/disagree.” The higher the average score, the more students disagreed with particular statements about AI. Four questions were analyzed: (1) do you agree that the diagnostic ability of AI is superior to the clinical experience of a human doctor (represented by “AI ability”); (2) do you agree that AI could replace your job (represented by “job security”); (3) do you agree that you would always use AI when making decisions in your field (represented by “decision-making”); and (4) do you agree that AI will be able to formulate personalized testing and medication plans for patients (represented by “test-med plan”)?
Student knowledge
The questionnaire evaluated participants’ knowledge about AI in healthcare. Participants were asked whether they had read about AI and its role in healthcare: 116 (63.4%) answered yes and 67 (36.6%) answered no. Most participants (135, 73.0%) said that they had not received any training or attended any courses about AI in healthcare; only 50 (27.0%) said that they had participated in such training sessions. Although most participants had not received training, 99 (53.8%) said that they had relevant knowledge that could help them understand AI. Students were asked about their knowledge of the difference between DL and ML: 40.1% said they knew nothing about this, and only 18.1% were familiar with both terms and the difference between them. Students were also asked where they had acquired their AI skills, with 30% saying they were self-taught. Only 19% claimed to have acquired their skills via university courses, while 14% obtained them through external workshops (Supplemental Figure S5). The most common barrier faced by students in learning about AI was the lack of mentorship from experts in the field (26%), followed by a lack of dedicated courses and learning materials about AI in healthcare (17%) and a lack of funding (16%). The next most common barriers were a lack of time (13%) and a lack of interest among students (8%) (Figure 6).

Barriers reported by students to learning about AI.
Table 4 reports the chi-squared assessments of associations between AI training, knowledge, skills, and its role in healthcare with the demographic variables of age, gender, qualification, and study program. All variables were significantly associated with gender. The role of AI in healthcare was also associated with student qualification, whereas AI training was associated with program of study at the 10% significance level.
Chi-squared analysis of associations between artificial intelligence (AI) indicators and age, gender, qualification, and program of study.
*Significant results are in bold.
Discussion
We performed this study to understand healthcare students’ perceptions, level of understanding, concerns, and opinions about using AI in healthcare services. Overall, students at QU-Health colleges recognized the usefulness of using AI in healthcare, but there were also some concerns about the implementation of AI into healthcare. Although the questionnaire was distributed to both male and female students via the university email announcements, more female students responded (79.3% and 20.7%, respectively), reflecting the overall distribution of male and female students at the university. When students’ attitudes towards AI integration were assessed, most participants responded that AI integration can be useful (93.6%), reliable (73.0%), and that it can relieve healthcare workers’ stress (85.4%). Almost 50% of participants felt that AI has superior diagnostic ability to humans.
Nearly 40% of participants were concerned about AI impacting job security, while 43.3% disagreed that AI could replace their job. There was a significant association between belief in AI's superior diagnostic ability and belief in its ability to replace their job. These findings are consistent with a study from Saudi Arabia, where half of the medical school students believed that their jobs would be in danger due to developments in AI.36 Indeed, German medical students perceived a higher threat to their jobs than qualified medical professionals did,37 and half of medical students from 19 different medical schools in the UK were less likely to consider a career in radiology due to the development of AI in this sector.38 The high percentage of students in Qatar and other countries worried about future job security cannot be ignored, since these persistent attitudes could translate into barriers and resistance to introducing AI in the workplace in the future. Policymakers must consider these findings to either challenge erroneous beliefs or re-engineer jobs instead of replacing or displacing employees from their positions.
Our findings highlight that human intervention is still needed for patient care. Most students (57.8%) felt that AI would not provide sympathetic care to patients, and only 22.7% agreed that this is a possibility. Specific health conditions require specific attention from physicians, and this seems to be a uniquely human trait.39,40 There was a significant association between participants’ understanding of the limitations of AI and whether they saw AI's diagnostic ability as superior to that of humans.
This study also evaluated the perceived likelihood of risks occurring with AI use compared with traditional medicine. Most participants felt that there was a lower chance of medical errors with AI than with traditional medicine. However, in the event of a medical error, respondents felt that liability should lie first with the company that created the AI, followed by the healthcare institution. Other studies found similar attitudes, with the vendor and the healthcare institution viewed as sharing legal responsibility.42 These contrasting opinions indicate a need to resolve medico-legal issues before integrating AI into practice. Another risk covered by our survey was the likelihood of unethical use of patient data: compared with traditional practice, more participants felt that AI integration paved the way for unethical use by commercial companies. There is a clear need to increase AI security to ensure data privacy and for regulatory bodies to introduce strict guidelines on the proper use of data and the legal consequences of misuse.43–45
Assessing knowledge about AI in QU-Health students provides insights into areas where improvements may be needed. Most respondents said that they had read about AI and its role in healthcare through self-learning, while only 19% of participants obtained their information from university courses. Similarly, students from a Canadian medical school had very few occasions to experiment with AI technologies during their degree.46 Moreover, our students cited a lack of mentorship from experts as the main barrier to obtaining knowledge about AI, followed by a lack of dedicated courses and proper funding, highlighting the need to incorporate AI into core curricula in biomedical sciences and other medical programs.47 There was a strong association between gender and knowledge, with male students having read more about AI and having more knowledge. This gender skew towards males being more knowledgeable about AI in healthcare has been reported previously.24,25 There is clearly a gap in structured education about AI at QU-Health and a need to incorporate the topic into curricula so that students can significantly enhance their understanding and expertise in the field. A collaborative effort between the Ministry of Public Health, Ministry of Education, and QU to make these subjects more accessible to students could pave the way to an AI-integrated, efficiently run national healthcare system.
A limitation of our study was that a majority of our respondents were from the Biomedical Sciences department and only 15% were from the College of Medicine. Sub-analyses without this small group did not change the outcomes. In addition, Qatar has a large population of expats; however, we could not analyze the perspectives of students from low- and middle-income countries compared with students from high-income countries as we did not collect these data. Although our sample represented 28% of students at QU-Health Cluster, our cohort is small and our results cannot be generalized to other institutions and countries. This study could be expanded by including healthcare students from other universities in the country to obtain a more comprehensive and accurate reflection. Another limitation was the accuracy of student perceptions, i.e., whether students who said they knew the difference between ML and DL really did. Finally, as our method was a web-based survey, self-selection and other biases such as social desirability might influence our results. 48
Conclusions
In conclusion, this study contributes to our understanding of the perception of and knowledge about AI in future healthcare workers. We report differences in perception about AI integration into healthcare among students. More resources are required for students to develop a better, more thorough understanding about AI with suitable expert mentorship. Finally, attention should be paid to how best to integrate AI teaching into university curricula.
Supplemental Material
sj-docx-1-dhj-10.1177_20552076231174095 - Supplemental material for Student perspectives on the integration of artificial intelligence into healthcare services by Muna N Ahmad, Saja A Abdallah, Saddam A Abbasi and Atiyeh M Abdallah in DIGITAL HEALTH
sj-docx-2-dhj-10.1177_20552076231174095 - Supplemental material for the same article