
AI in Research Learning Hub

A set of curated resources that provide an overview of genAI and discussion of key opportunities and challenges.

Additional resources will be added as they become available. Feel free to contact the AI in Research Working Group with suggestions!

Celi, L.A., Cellini, J., Charpignon, M.-L., Dee, E.C., Dernoncourt, F., Eber, R., Mitchell, W.G., Moukheiber, L., Schirmer, J., Situ, J., Paguio, J., Park, J., Wawira, J.G. & Yao, S. (2022). Sources of bias in artificial intelligence that perpetuate healthcare disparities — A global review. PLOS Digital Health, 1(3): e0000022.   

Literature review of the data used to train clinical AI systems and the demographics of authors reporting on the use of AI in clinical medicine. The review found a disproportionate overrepresentation of US and Chinese datasets and authors in the literature, with 40% of the studies focused on radiology. The authors caution that using narrow, data-rich populations to train clinical AI systems could further perpetuate health disparities in data-poor populations.

Nazer, L.H., Zatarah, R., Waldrip, S., Ke, J.X.C., Moukheiber, M., Khanna, A.K., et al. (2023). Bias in artificial intelligence algorithms and recommendations for mitigation. PLOS Digital Health, 2(6): e0000278.

The authors outline sources of potential bias in the development and implementation of AI algorithms for healthcare and discuss strategies to mitigate them. Sources of bias are identified across the entire spectrum of AI development and implementation starting with defining the problem, data collection and processing, model development and validation, and final implementation. Many examples are provided to illustrate each type of known bias. A checklist is provided to help researchers understand potential biases and address them throughout the AI research and implementation process. 

Aiken, C., Flann, S., Longstaff, H., Manusha, S., Pavlovich, S., Scott, J. & Wright, J. (2021). A guidance for novel ethics of privacy issues associated with artificial intelligence in the public sector research domain.

Highlights key themes from a literature review on the use of AI in the public sector and related ethics of privacy issues. Key themes and associated recommendations include data quality and assessment, perceptions and norms, access, financial consideration, education, research participant safety and care, intellectual property, and governance.   

McCradden, M.D., Anderson, J.A., Stephenson, E.A., Drysdale, E., Erdman, L., Goldenberg, A., & Zlotnik Shaul, R. (2022). A research ethics framework for the clinical translation of healthcare machine learning. American Journal of Bioethics, 22(5), 8-22. doi: 10.1080/15265161.2021.2013977.

Discusses using adapted research ethics guidelines and privacy protections to develop a framework for evaluating machine learning models and translating them into clinical care. The research ethics framework includes three phases: (1) exploratory machine learning research; (2) silent evaluation; and (3) prospective clinical evaluation.

World Health Organization. (2021). Ethics and governance of artificial intelligence for health.    

Based on the expertise and work from 20 leading experts, the World Health Organization identified six core principles to promote the ethical use of AI in health: (1) protect autonomy; (2) promote human well-being, human safety and the public interest; (3) ensure transparency, explainability and intelligibility; (4) foster responsibility and accountability; (5) ensure inclusiveness and equity; and (6) promote AI that is responsible and sustainable. In addition, the guidance discusses several key considerations regarding the use of AI in health, namely law and policies; key ethical principles; ethical challenges; how to build an ethical approach; liability regimes; and governance framework elements.  

Bandi, A., Adapa, P.V.S.R., & Kuchi, Y.E.V.P.K. (2023). The power of generative AI: A review of requirements, models, input–output formats, evaluation metrics, and challenges. Future Internet, 15(8), 260.

Based on a literature review, this article provides an overview and analysis of genAI requirements, models, generative types, input–output classification, and evaluation metrics, and discusses challenges and implementation issues related to genAI. The article includes discussion of the common phases of using genAI: problem definition, data collection and preprocessing, model selection, model training, model evaluation, model fine-tuning, deployment, and monitoring and maintenance.

Shah, C. (2024). From prompt engineering to prompt science with human in the loop. [Unpublished paper].     

Drawing on the methods of qualitative data coding, this paper proposes an analogous approach to address the unexplainable, unverifiable, and poorly generalizable outcomes that arise from differences in prompt engineering when large language models are used in research. The author proposes an iterative approach to prompt development that simultaneously refines prompts while training researchers to objectively, consistently, and independently evaluate large language model responses.

A 20-minute course exploring the different types of machine learning, the key concepts of supervised machine learning, and how problem-solving with machine learning differs from traditional approaches.
Dwivedi, Y.K., Kshetri, N., Hughes, L., Slade, E.L., Jeyaraj, A., Kar, A.K., Baabdullah, A.M., Koohang, A., Raghavan, V., Ahuja, M., Albanna, H., Albashrawi, M.A., Al-Busaidi, A.S., Balakrishnan, J., Barlette, Y., Basu, S., Bose, I., Brooks, L., Buhalis, D., Carter, L., et al. (2023). Opinion paper: “So what if ChatGPT wrote it?” Multidisciplinary perspectives on opportunities, challenges and implications of generative conversational AI for research, practice and policy. International Journal of Information Management, 71, 102642.

This paper brings together opinions on the opportunities and challenges of transformative AI tools from 43 international experts across the fields of computer science, marketing, information systems, education, policy, hospitality and tourism, management, publishing and nursing. Key themes across the contributions include ChatGPT as a productivity-enhancing tool, academia as likely to experience some of the most disruptive effects, concerns about job losses, the potential misuse and abuse of AI, major limitations of genAI tools, the lack of regulatory templates, and future research directions. Based on these themes, the authors propose 10 key research areas for transformative AI tools.

Fui-Hoon Nah, F., Zheng, R., Cai, J., Siau, K. & Chen, L. (2023). Generative AI and ChatGPT: Applications, challenges, and AI-human collaboration. Journal of Information Technology Case and Application Research, 25(3), 277-304.    

Discusses key applications of generative AI in business, education, health care and content generation. The authors further discuss ethical, technological, regulatory and policy challenges with generative AI and the value of human-centered AI collaboration to guide the design and application of generative AI.  

Government of Canada. (2024). Guide on the use of generative AI.    

Provides an overview of genAI, discusses challenges and opportunities of genAI use, and offers best practices for genAI users in federal institutions related to protection of information, bias, quality, public servant autonomy, legal risks, distinguishing humans from machines, and environmental impacts. The guidance recommends assessing the risks of using genAI for different applications and using genAI only in situations where the risks can be mitigated.

Varghese, J. & Chapiro, J. (2023). ChatGPT: The transformative influence of generative AI on science and healthcare. Journal of Hepatology, [in press, corrected proof].    

Provides a general overview of AI and its subtypes. Using ChatGPT as an example, the authors discuss opportunities for genAI use in text generation and advanced coding tasks; challenges related to bias, transparency, explainability, and data fabrication; and practical opportunities and regulatory challenges for the use of genAI in health care.

University of British Columbia. (n.d.). Generative AI: UBC guidance.

Provides a compilation of tools and resources to support faculty, staff, students and researchers in the responsible use of genAI.   

University of Toronto, School of Graduate Studies. (2023). Guidance on the appropriate use of generative artificial intelligence in graduate theses.    

Provides guidance on the use of genAI tools (e.g., ChatGPT) in graduate student research and thesis writing. Includes a set of frequently asked questions and additional resources related to the use of genAI in research, academic writing, and thesis development and editing.  

Western Canadian Deans of Graduate Studies. (2023). Generative AI and graduate and postdoctoral research and supervision.    

Provides recommendations on the use of genAI in graduate research and writing, and on developing AI literacy among graduate supervisors and students.


Copyright © Provincial Health Services Authority. All Rights Reserved.