A recent study by a team of political science and computer science professors and graduate students at BYU examined the potential of using artificial intelligence (AI) as a substitute for human respondents in survey-style research. The team tested how accurately a GPT-3 language model could imitate the complex relationships among the ideas, attitudes, and sociocultural contexts of various human subpopulations.
Artificial Personas and Voting Patterns
In one experiment, the researchers created artificial personas by assigning specific characteristics to the AI, such as race, age, ideology, and religiosity. They then tested whether these artificial personas would vote the same way as humans did in the 2012, 2016, and 2020 U.S. presidential elections. By using the American National Election Studies (ANES) as their comparative human database, they discovered a high correspondence between AI and human voting patterns.
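The persona-conditioning step described above can be sketched as prompt construction: demographic attributes are woven into a first-person backstory that conditions the language model before it is asked how the persona voted. The sketch below is illustrative only — the attribute fields and wording are hypothetical, not the study's actual prompt templates:

```python
def build_persona_prompt(persona: dict, election_year: int) -> str:
    """Compose a first-person backstory from demographic attributes,
    then prompt the conditioned model for the persona's vote.

    Attribute names and phrasing are invented for illustration;
    they do not reproduce the study's prompts.
    """
    backstory = (
        f"Racially, I am {persona['race']}. I am {persona['age']} years old. "
        f"Ideologically, I am {persona['ideology']}. "
        f"Religion is {persona['religiosity']} in my life."
    )
    question = f"In the {election_year} U.S. presidential election, I voted for"
    return backstory + " " + question


prompt = build_persona_prompt(
    {"race": "white", "age": 54, "ideology": "conservative",
     "religiosity": "very important"},
    2016,
)
print(prompt)
```

The completion the model produces for such a prompt can then be compared against how real ANES respondents with the same characteristics actually voted.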
David Wingate, a BYU computer science professor and co-author of the study, expressed his surprise at the results:
“It’s especially interesting because the model wasn’t trained to do political science — it was just trained on a hundred billion words of text downloaded from the internet. But the consistent information we got back was so connected to how people really voted.”
Interview-Style Surveys and Future Applications
In another experiment, the researchers conditioned artificial personas to offer responses from a list of options in an interview-style survey, again using the ANES as their human sample. They found a high similarity between the nuanced patterns in human and AI responses.
The study’s findings offer exciting prospects for researchers, marketers, and pollsters. AI could be used to craft better survey questions, refine them to be more accessible and representative, and even simulate populations that are difficult to reach. It could also be used to test surveys, slogans, and taglines before conducting focus groups.
BYU political science professor Ethan Busby commented:
“It’s not replacing humans, but it is helping us more effectively study people. It’s about augmenting our ability rather than replacing it. It can help us be more efficient in our work with people by allowing us to pre-test our surveys and our messaging.”
Ethical Questions and Future Research
As large language models continue to advance, numerous questions arise regarding their applications and implications. Which populations will benefit from this technology, and which will be negatively impacted? How can we protect ourselves from scammers and fraudsters who may manipulate AI to create more sophisticated phishing scams?
While many of these questions remain unanswered, the study provides a set of criteria that future researchers can use to determine the accuracy of AI models for various subject areas.
Wingate acknowledges the potential positive and negative consequences of AI development:
“We’re going to see positive benefits because it’s going to unlock new capabilities. We’re also going to see negative things happen because sometimes computer models are inaccurate and sometimes they’re biased. It will continue to churn society.”
Busby emphasizes that surveying artificial personas should not replace the need to survey real people, and calls for academics and experts to collaborate in defining the ethical boundaries of AI surveying in social science research.