Should we be wary of using artificial intelligence-based big data management in social research?

DOI: https://doi.org/10.3846/jbem.2025.24792

Abstract

This study examines the future role of artificial intelligence (AI) in transforming research processes within the social sciences, focusing on how AI may redefine researchers' responsibilities and potentially replace human participants in certain types of studies. Employing the Delphi method, the study collects expert opinions to evaluate both facilitating factors and barriers to the integration of AI into scientific research. Key findings indicate that while technological advancements – such as open-access data and the integration of AI with existing research tools – support the growing role of AI, significant challenges remain. These include the difficulty of verifying AI-generated information and concerns regarding authenticity in AI-driven research. Social factors, particularly the risk of excessive reliance on AI leading to diminished originality, emerged as critical barriers. In contrast, economic considerations, such as declining development costs, were viewed as less influential. The study’s practical implications include the need for robust ethical guidelines and enhanced AI training for researchers. By offering original insights into the evolving intersection of AI and social science research, this study highlights both the transformative potential of AI and the urgent need for its responsible integration to preserve research integrity and reliability.
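The study's methodology is the Delphi technique, in which a panel of experts rates candidate factors over successive rounds until opinions stabilise. As a purely illustrative sketch of how such panel ratings are often summarised, and not the procedure reported in this article, the snippet below scores two hypothetical factors on a 1-7 Likert scale and flags consensus when the interquartile range (IQR) of the panel's ratings is small; the factor names, the scores, and the IQR threshold are all assumptions made for the example.

# Illustrative only: summarising one Delphi round of expert ratings.
# Factor names, ratings, and the IQR <= 1 consensus rule are hypothetical.
from statistics import median, quantiles

ratings = {
    "open-access data availability": [6, 7, 6, 5, 7, 6, 6],
    "declining AI development costs": [3, 5, 2, 4, 3, 6, 3],
}

for factor, scores in ratings.items():
    q1, _, q3 = quantiles(scores, n=4)   # quartiles of the panel's scores
    iqr = q3 - q1                        # spread, i.e. level of disagreement
    verdict = "consensus" if iqr <= 1.0 else "no consensus (revisit next round)"
    print(f"{factor}: median={median(scores)}, IQR={iqr:.1f} -> {verdict}")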

Keywords:

artificial intelligence, research process, social science, Delphi method, social factors, economic factors

How to Cite

Ejdys, J., Garwolińska, M., Lăzăroiu, G., Nica, E., di Pietro, F., Poskrobko, T., & Szpilko, D. (2025). Should we be wary of using artificial intelligence-based big data management in social research? Journal of Business Economics and Management, 26(5), 1071–1089. https://doi.org/10.3846/jbem.2025.24792

Published in Issue
October 29, 2025
