Abstract
Regulating AI is a key societal challenge, but effective methods remain unclear. This study evaluates geographic restrictions on AI services, focusing on ChatGPT, which OpenAI blocks in several countries, including China and Russia. If restrictions were effective, ChatGPT usage in these countries should be minimal. We measured usage with a classifier trained to detect distinctive word choices (e.g., “delve”) typical of early ChatGPT outputs. The classifier, trained on pre- and post-ChatGPT “polished” abstracts, outperformed GPTZero and ZeroGPT on validation sets, including papers with self-reported AI use. Applying our classifier to preprints from arXiv, bioRxiv, and medRxiv revealed ChatGPT use in approximately 12.6% of preprints by August 2023, with usage 7.7% higher in restricted countries. This gap emerged before China’s first major domestic LLM became widely available. To address whether high demand could have driven even greater use without restrictions, we compared Asian countries with high expected demand (where English is not an official language) and found higher usage in countries with restrictions. ChatGPT use correlated with increased views and downloads but not with citations or journal placement. Overall, geographic restrictions on ChatGPT appear ineffective in science and potentially other domains, likely due to widespread workarounds.
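To illustrate the kind of detector the abstract describes, the sketch below trains a simple classifier on the frequency of marker words such as “delve” in labeled pre- and post-ChatGPT abstracts. This is a minimal illustration, not the authors’ pipeline: only “delve” is named in the abstract, so the remaining marker words, the toy training data, and the choice of logistic regression over bag-of-words counts are assumptions for demonstration.

```python
# Minimal sketch (not the authors' exact method): score text by the
# frequency of assumed ChatGPT "marker" words using logistic regression.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical marker vocabulary; the paper's abstract cites only "delve".
MARKER_WORDS = ["delve", "intricate", "showcasing", "underscore", "pivotal"]

# Toy labeled data: 0 = pre-ChatGPT abstract, 1 = post-ChatGPT "polished" abstract.
abstracts = [
    "We study protein folding with a new fluorescence assay.",             # pre
    "We delve into the intricate dynamics, showcasing pivotal results.",   # post
    "Results underscore a pivotal role for this intricate pathway.",       # post
    "A survey of 200 patients was conducted over two years.",              # pre
]
labels = [0, 1, 1, 0]

# Count occurrences of the marker words, then fit a logistic-regression classifier.
model = make_pipeline(
    CountVectorizer(vocabulary=MARKER_WORDS, lowercase=True),
    LogisticRegression(),
)
model.fit(abstracts, labels)

# Probability that a new abstract reads like ChatGPT-polished text.
print(model.predict_proba(["These findings delve into a pivotal mechanism."])[0, 1])
```

In practice, such a detector would be trained on large corpora of abstracts from before and after ChatGPT’s release and validated against independent signals such as self-reported AI use, as the abstract indicates.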
Author notes
These authors contributed equally.
Handling Editor: Vincent Larivière