Researchers across many disciplines are increasingly using artificial intelligence (AI), including large language models (LLMs) such as ChatGPT, to support empirical research and data analysis, academic writing, peer review, and the development of new tools. The broad reach of AI in research raises pressing ethical questions about scientific integrity, authorship, data privacy, bias, and equity. Related issues include how trainees and students should be instructed to use AI tools in their research and to acknowledge that use. Ethical guidance from research institutions, professional organizations, journals, and governmental oversight authorities is only beginning to emerge, and ethical oversight of AI in research likewise remains in flux.

This conference will bring together leading experts from a range of disciplines, from the biomedical sciences to the humanities, to confront the challenge of using AI ethically in research. National leaders will discuss how AI is being used in research, the challenges it poses to research ethics and integrity, and current guidance on using AI in research and publication, including how to address concerns that training sets for LLMs may not be sufficiently representative, leading to biased models. Speakers will also debate how LLMs should be used in academic writing and peer review, and how students should use these tools. The conference will consider when and how researchers should seek informed consent for the use of AI in research protocols, and how IRBs can effectively provide oversight for research involving AI tools. The conference will offer recommendations for researchers, students, administrators, and IRB professionals on how to ensure the ethical use of AI in research.