Other Titles
PechaKucha Presentation
Abstract
Purpose: Researchers can use social media to recruit diverse populations for research studies through targeted posts in groups and paid marketing across platforms. However, Meta estimates that 3% of the 3.07 billion active Facebook accounts are fake. Fake accounts that use bots and artificial intelligence (AI) threaten the integrity of research findings. No guidelines currently exist to help researchers identify bad actors, that is, individuals who do not engage with a survey in good faith and instead respond via bots or AI assistance. We discuss our strategies to detect and remove bad actors.
Methods: We recruited participants from a Facebook pelvic congestion syndrome support group to participate in a survey on pain. Participants were compensated with a gift card. Our goal was to recruit 300 participants, but 961 completed the survey, 930 of them in less than 24 hours. To ensure the integrity of our data, we halted collection and used multiple techniques to evaluate responses for bad actors.
Results: Good-faith participants required an average of 13.82 minutes to complete the survey, were 34 years old, 5.4 feet tall, weighed 145 pounds, and slept 7.9 hours per night. Bad actors required 4.13 minutes, were 25 years old, 5.4 feet tall, weighed 103.94 pounds, and slept 13.4 hours per night. On average, good-faith participants reported 2.15 pregnancies, with 1.70 term births, 0.29 pre-term births, and 0.53 miscarriages or abortions. In comparison, bad actors reported an average of 8.4 pregnancies, with 12.36 term births, 12.39 pre-term births, and 12.41 miscarriages or abortions.
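These figures suggest a simple internal-consistency screen: reported birth outcomes cannot exceed total pregnancies, and some values (such as 13.4 hours of nightly sleep) are biologically implausible. A minimal sketch of such a check, using hypothetical field names and illustrative thresholds rather than the study's actual criteria:

```python
def plausibility_flags(resp):
    """Return reasons a survey response looks implausible.

    `resp` is a dict with hypothetical field names; the sleep threshold
    is illustrative, not taken from the original study.
    """
    flags = []
    outcomes = (resp["term_births"] + resp["preterm_births"]
                + resp["miscarriages"])
    # Birth outcomes summing to more than total pregnancies is
    # internally inconsistent (e.g., 12 term births on 8 pregnancies).
    if outcomes > resp["pregnancies"]:
        flags.append("birth outcomes exceed reported pregnancies")
    # Flag biologically implausible values.
    if resp["sleep_hours"] > 12:
        flags.append("implausible nightly sleep")
    return flags
```

A response resembling the bad-actor averages above would trigger both flags, while one resembling the good-faith averages would pass cleanly.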
Concerningly, 595 respondents joined from a single IP address. Of 748 completers, 70% (n=524) required less than 5 minutes for a survey estimated to take 15 minutes, and 92% of those 524 finished at the exact same time. In open-ended responses, good-faith respondents provided anticipated answers about their diagnosis, while bad actors provided empathetic (“I’m so sorry to hear you’re feeling this pain”), question-asking (“Have you received a diagnosis for this pain?”), and informative (“The diagnosis is fibromyalgia, which is a chronic condition…”) AI responses.
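The timing and IP patterns described above lend themselves to automated screening. A minimal sketch, assuming hypothetical record keys (`id`, `ip`, `minutes`) and illustrative thresholds rather than the study's exact cutoffs:

```python
from collections import Counter

def screen_responses(responses, min_minutes=5.0, max_per_ip=1):
    """Flag responses that finished suspiciously fast or share an IP.

    `responses` is a list of dicts with hypothetical keys
    'id', 'ip', and 'minutes'; thresholds are illustrative.
    """
    ip_counts = Counter(r["ip"] for r in responses)
    flagged = []
    for r in responses:
        reasons = []
        if r["minutes"] < min_minutes:
            reasons.append("completed faster than plausible")
        if ip_counts[r["ip"]] > max_per_ip:
            reasons.append("shares IP with other responses")
        if reasons:
            flagged.append((r["id"], reasons))
    return flagged
```

Flagged records would then be reviewed by hand rather than discarded automatically, since shared IPs can occasionally be legitimate (e.g., household members).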
Conclusions: Strategies are needed to identify and remove bad actors in research. We recommend: removing compensation language from recruitment posts; embedding security features that block responses from duplicate IP addresses or bots; generating completion emails to verify respondent details; and adding questions that are difficult for bots or AI assistants to answer.
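As one illustration of screening open-ended answers, the AI response styles observed in the Results (empathetic boilerplate, questions back to the surveyor, encyclopedic explanations) can be caught with a crude phrase-based filter. The patterns below are illustrative examples drawn from the quoted responses, not a validated detection list:

```python
import re

# Hypothetical phrase patterns typical of AI-generated replies.
AI_PATTERNS = [
    r"\bsorry to hear\b",                    # empathetic boilerplate
    r"\bhave you (received|seen|tried)\b",   # question back to the surveyor
    r"\bis a chronic condition\b",           # encyclopedic/informative phrasing
]

def looks_ai_generated(text):
    """Heuristic: True if an open-ended answer matches AI-style phrasing."""
    lowered = text.lower()
    return any(re.search(pattern, lowered) for pattern in AI_PATTERNS)
```

Such a filter only surfaces candidates for human review; it cannot substitute for the other safeguards (IP checks, timing checks, verification emails) recommended above.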
Notes
References: Ayers, J. W., Zhu, Z., Poliak, A., Leas, E. C., Dredze, M., Hogarth, M., & Smith, D. M. (2023). Evaluating artificial intelligence responses to public health questions. JAMA Network Open, 6(6), e2317517. https://doi.org/10.1001/jamanetworkopen.2023.17517
Christensen, T., Riis, A. H., Hatch, E. E., Wise, L. A., Nielsen, M. G., Rothman, K. J., Sørensen, H. T., Mikkelsen, E. M., & Toft Sørensen, H. (2017). Costs and efficiency of online and offline recruitment methods: A web-based cohort study. Journal of Medical Internet Research, 19(3), 1-1. https://doi.org/10.2196/jmir.6716
Darko, E. M., Kleib, M., & Olson, J. (2022). Social media use for research participant recruitment: Integrative literature review. Journal of Medical Internet Research, 24(8), e38015. https://doi.org/10.2196/38015
Gelinas, L., Pierce, R., Winkler, S., Cohen, I. G., Lynch, H. F., & Bierer, B. E. (2017). Using social media as a research recruitment tool: Ethical issues and recommendations. American Journal of Bioethics, 17(3), 3-14. https://doi.org/10.1080/15265161.2016.1276644
Meta. (n.d.). Fake accounts. Retrieved September 12 from https://transparency.meta.com/reports/community-standards-enforcement/fake-accounts/facebook/#content-actioned
Patel, S. E., & Chesnut, S. R. (2024). Relationships among pelvic congestion syndrome pain, daily activities, and quality of life. Journal of Obstetric, Gynecologic & Neonatal Nursing, 53(4), 416-426. https://doi.org/10.1016/j.jogn.2024.03.002
Sigma Membership
Lambda Phi
Type
Presentation
Format Type
Text-based Document
Study Design/Type
Other
Research Approach
Other
Keywords
Ethics, Social Media, Fake Accounts, Research
Recommended Citation
Patel, Sarah E. and Chesnut, Steven R., "Detecting Bad Actors in Social Media Research Recruitment: AI and Bots, Oh My!" (2025). International Nursing Research Congress (INRC). 76.
https://www.sigmarepository.org/inrc/2025/presentations_2025/76
Conference Name
36th International Nursing Research Congress
Conference Host
Sigma Theta Tau International
Conference Location
Seattle, Washington, USA
Conference Year
2025
Rights Holder
All rights reserved by the author(s) and/or publisher(s) listed in this item record unless relinquished in whole or part by a rights notation or a Creative Commons License present in this item record.
Review Type
Abstract Review Only: Reviewed by Event Host
Acquisition
Proxy-submission
Detecting Bad Actors in Social Media Research Recruitment: AI and Bots, Oh My!
Description
Social media provides researchers a platform to recruit diverse, hard-to-reach populations for research participation. However, fake accounts using bots and artificial intelligence (AI) threaten the integrity of research findings, and no guidelines exist to detect participants using bots or AI assistance. This presentation describes the techniques and strategies we used to screen for bad actors in a research study that used social media for recruitment.