The Impact of Generative Artificial Intelligence Technologies on Chinese Librarians' Information Behavior and Ethical Discussion: An Empirical Study Based on a Small Sample

Authors

  • Yifei Chen Institute of Medical Information, Chinese Academy of Medical Sciences https://orcid.org/0009-0002-9393-6299
  • Yongjie Li Institute of Medical Information, Chinese Academy of Medical Sciences
  • Dechao Wang Institute of Medical Information, Chinese Academy of Medical Sciences
  • Yinan Sun Institute of Medical Information, Chinese Academy of Medical Sciences
  • Tingyu Lv Institute of Medical Information, Chinese Academy of Medical Sciences
  • Xiaoli Tang Institute of Medical Information, Chinese Academy of Medical Sciences

DOI:

https://doi.org/10.21900/j.alise.2024.1648

Keywords:

Generative Artificial Intelligence, Information Behavior, Information Ethics

Abstract

This study used a combination of questionnaires and interviews to survey 68 librarians in mainland China. The questionnaire was divided into three parts: (1) collecting descriptive statistics on the surveyed librarians; (2) exploring the impact of generative technologies on librarians' information behavior across work scenarios; (3) investigating librarians' ethical concerns and their strategies for coping with ethical challenges. The results show that generative AI technologies had a substantial impact on information seeking, information encountering, and information using behaviors, and an insignificant impact on information sharing behaviors. In addition, 67.65% of librarians showed a very high level of concern about privacy and security, and 66.18% believed that content generated by these tools needed further validation. The study also offers six recommendations, from the perspectives of libraries and librarians, for addressing ethical challenges such as the spread of disinformation and bias.

References

Al-Aamri, J.H., & Osman, N.E. (2022). The role of artificial intelligence abilities in library services. International Arab Journal of Information Technology, 19, 566–573.

Case, D.O. (2006). Information behavior. Annual Review of Information Science and Technology, 40(1), 293–327.

Guleria, A., Krishan, K., Sharma, V., & Kanchan, T. (2023). ChatGPT: ethical concerns and challenges in academics and research. Journal of infection in developing countries, 17(9), 1292–1299. https://doi.org/10.3855/jidc.18738

Guleria, A., Krishan, K., Sharma, V., & Kanchan, T. (2024). ChatGPT: Forensic, legal, and ethical issues. Medicine, science, and the law, 64(2), 150–156. https://doi.org/10.1177/00258024231191829

Hutchinson, B., Prabhakaran, V., Denton, E.L., Webster, K., Zhong, Y., & Denuyl, S. (2020). Social Biases in NLP Models as Barriers for Persons with Disabilities. ArXiv, abs/2005.00813. https://doi.org/10.48550/arXiv.2005.00813

Likert, R. (1932). A technique for the measurement of attitudes. Archives of Psychology, 22(140), 55.

Liu, X., Zheng, Y., Du, Z., Ding, M., Qian, Y., Yang, Z., & Tang, J. (2021). GPT Understands, Too. ArXiv, abs/2103.10385. https://doi.org/10.48550/arXiv.2103.10385

Mason, R.O. (1986). Four ethical issues of the information age. Management Information Systems Quarterly, 10, 5–12.

McDonald, E., Rosenfield, M., Furlow, T., Kron, T., & Lopatovska, I. (2015). Book or NOOK? Information behavior of academic librarians. Aslib Journal of Information Management, 67, 374–391.

Wang, F., Miao, Q., Li, X., Wang, X., & Lin, Y. (2023). What Does ChatGPT Say: The DAO from Algorithmic Intelligence to Linguistic Intelligence. IEEE/CAA Journal of Automatica Sinica, 10, 575–579. https://doi.org/10.1109/JAS.2023.123486

Published

2024-10-16

Section

Juried Papers