
ChatGPT Blog Post Series Part 4 – ChatGPT: A Revolutionary Tool for Scientific Writing or a Threat?


ChatGPT is a fun and useful tool for generating creative content, but there is considerable controversy around what contribution, if any, ChatGPT should make to an academic paper and whether it can be credited as a co-author of scientific articles. I set out to review the current scientific evidence on using ChatGPT to write academic papers, and to survey its current applications and the views on its use within the scientific community.


Since my chat with ChatGPT in my previous blog post yielded limited information on this topic, I went ahead and ran my own literature search on ChatGPT in PubMed and Google Scholar.


PubMed returned 514 papers and Google Scholar listed 2,630 articles. Here are brief take-home points from a few that piqued my interest. For those of you who would like to delve in and explore for yourselves, I have provided the bibliography at the end.





Based on the wide range of fields and topics represented in the literature, it is clear that ChatGPT has attracted considerable attention and interest from researchers, especially in medicine and the health sciences. Some researchers have explored the potential applications of ChatGPT in scientific, medical, and clinical research, such as literature review, data analysis, hypothesis generation, and manuscript writing and editing. They argue that ChatGPT can enhance the quality and efficiency of research by providing novel insights and reducing human bias and errors. Some even claim that ChatGPT can produce papers good enough for publication in peer-reviewed journals with minimal human editing.


However, ChatGPT also poses some ethical challenges and risks for academic writing and publishing. Some researchers have examined the implications of using ChatGPT in scientific writing, such as plagiarism, deception, manipulation, and misinformation. They warn that ChatGPT can produce false or distorted information that contradicts reality or scientific evidence. They also question the criteria and responsibility of authorship when using ChatGPT and suggest some guidelines and recommendations for ensuring academic integrity and responsible use of ChatGPT. For example, some experts suggest that authors should disclose the use of ChatGPT and provide the prompts and parameters they used to generate their papers. Others propose that journals should adopt tools and policies to detect and prevent the misuse of ChatGPT.





ChatGPT has also sparked controversy over authorship within the scientific community. Some researchers have listed ChatGPT as an author on their research papers published in various journals and fields, claiming that they want to acknowledge ChatGPT's contribution, test its capabilities, and raise awareness about its implications. However, many other scientists disapprove of this practice and argue that ChatGPT does not meet the standards of authorship because it lacks originality, accountability, and responsibility for its outputs. In addition, some academic journal publishers have restricted their authors’ use of ChatGPT, citing concerns about the quality, accuracy, and originality of the generated papers.


The impact of ChatGPT on academic writing and research is still uncertain and evolving. Some researchers see it as a useful tool that can enhance creativity, productivity, and accessibility in science. Others warn that it may undermine the credibility, integrity, and quality of academic literature. For now, it looks like the future of academic writing will depend on how the scientific community adapts to and regulates the use of ChatGPT.


This concludes the blog posts in this series that focus directly on ChatGPT. In the next and final post of the series, I will look into some other AI tools that can be used for scientific writing.




References:


Alkaissi H, McFarlane SI. Artificial hallucinations in ChatGPT: implications in scientific writing. Cureus. 2023 Feb 19;15(2).


Altmäe S, Sola-Leyva A, Salumets A. Artificial intelligence in scientific writing: a friend or a foe? Reprod Biomed Online. 2023 Apr 20:S1472-6483(23)00219-5.


Currie GM. Academic integrity and artificial intelligence: is ChatGPT hype, hero or heresy? Semin Nucl Med. 2023 May 22:S0001-2998(23)00036-3.


Dergaa I, Chamari K, Zmijewski P, Ben Saad H. From human writing to artificial intelligence generated text: examining the prospects and potential threats of ChatGPT in academic writing. Biol Sport. 2023 Apr;40(2):615-622.


Else H. Abstracts written by ChatGPT fool scientists. Nature. 2023 Jan;613(7944):423.


Fatani B. ChatGPT for Future Medical and Dental Research. Cureus. 2023 Apr 8;15(4):e37285.


Lee JY. Can an artificial intelligence chatbot be the author of a scholarly article? J Educ Eval Health Prof. 2023;20:6.


Liebrenz M, Schleifer R, Buadze A, Bhugra D, Smith A. Generating scholarly content with ChatGPT: ethical challenges for medical publishing. Lancet Digit Health. 2023;5(3):e105-e106.


Stokel-Walker C. ChatGPT listed as author on research papers: many scientists disapprove. Nature. 2023 Jan;613(7945):620-621.


Thorp HH. ChatGPT is fun, but not an author. Science. 2023;379(6630):313.




 
 
 
