AI Simulations of Audience Attitudes and Policy Preferences: “Silicon Sampling” Guidance for Communications Practitioners
By John Wihbey and Samantha D’Alonzo
AIMES Lab, Northeastern University.
This working paper reviews emerging research on “silicon sampling”—the use of large language models (LLMs) to simulate public opinion—and examines its potential role in communications and media practice. Drawing on roughly 30 academic studies, the authors analyze where in the survey workflow LLMs can provide value, such as drafting and refining questions or conducting exploratory pilot tests, and where they fall short, particularly as substitutes for human respondents in policy and opinion research. The paper highlights the risks of bias, stereotyping, and oversimplification in LLM-generated outputs, while also outlining opportunities to use AI to improve efficiency and early-stage design. To support practitioners, the authors propose a decision framework that emphasizes hybrid approaches, transparency, and validation against human benchmarks.
